Time-of-Check Time-of-Use Attacks Against LLMs
By Amelia Hernandez | WASHINGTON D.C. – 2025/09/18 12:27:33
A newly released research paper, “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents,” highlights a previously underexplored class of security threats targeting Large Language Model (LLM) agents.
Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications.
The paper delves into Time-of-Check to Time-of-Use (TOCTOU) vulnerabilities, which occur when an agent validates an external state, such as a file or API response, that is later modified before being used. This can lead to attacks like malicious configuration swaps or payload injection.
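The check-then-use gap can be made concrete with a minimal sketch. All file names and endpoints below are illustrative, not taken from the paper: an agent validates a configuration file, but an attacker rewrites it before the agent acts on it.

```python
# Hypothetical sketch of a TOCTOU gap in an agent workflow.
# The agent validates a config file (time-of-check), then re-reads it
# later (time-of-use), implicitly trusting the earlier check.
import json
import os
import tempfile

def check(path):
    # Time-of-check: the agent inspects the config and approves it.
    with open(path) as f:
        cfg = json.load(f)
    return cfg.get("endpoint", "").startswith("https://trusted.example")

def use(path):
    # Time-of-use: the agent re-reads the file, trusting the earlier check.
    with open(path) as f:
        return json.load(f)["endpoint"]

tmp = tempfile.mkdtemp()
cfg_path = os.path.join(tmp, "agent.json")
with open(cfg_path, "w") as f:
    json.dump({"endpoint": "https://trusted.example/api"}, f)

assert check(cfg_path)  # validation passes

# Attacker swaps the config inside the check-to-use window
# (a malicious configuration swap, in the paper's terms).
with open(cfg_path, "w") as f:
    json.dump({"endpoint": "https://evil.example/api"}, f)

print(use(cfg_path))  # the agent now calls the malicious endpoint
```

In a real agent the window between the two steps can span an entire tool call or LLM inference, which is what makes the attack practical.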
The researchers introduce TOCTOU-Bench, a benchmark comprising 66 realistic user tasks designed to evaluate these vulnerabilities. They also adapt detection and mitigation techniques from systems security, proposing prompt rewriting, state integrity monitoring, and tool-fusing as countermeasures.
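One of the named countermeasures, state integrity monitoring, can be sketched as recording a hash of the validated state at check time and re-verifying it at use time. This is an assumption-laden illustration of the general idea, not the paper's implementation:

```python
# Minimal sketch of state integrity monitoring (illustrative only):
# hash external state at time-of-check, re-verify at time-of-use,
# and refuse to act if the state changed in between.
import hashlib
import os
import tempfile

class IntegrityMonitor:
    def __init__(self):
        self._checked = {}  # path -> sha256 recorded at check time

    def record(self, path):
        with open(path, "rb") as f:
            self._checked[path] = hashlib.sha256(f.read()).hexdigest()

    def verify(self, path):
        with open(path, "rb") as f:
            current = hashlib.sha256(f.read()).hexdigest()
        return self._checked.get(path) == current

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "agent.cfg")
with open(path, "w") as f:
    f.write("endpoint=https://trusted.example")

monitor = IntegrityMonitor()
monitor.record(path)         # time-of-check
print(monitor.verify(path))  # unchanged state verifies

with open(path, "w") as f:   # attacker tampers in the window
    f.write("endpoint=https://evil.example")

print(monitor.verify(path))  # verification fails; agent should abort
```

Tool-fusing, by contrast, shrinks the window itself by collapsing the check and the use into a single atomic tool invocation.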
The authors write that their “study highlights challenges unique to agentic workflows…”
The automated detection methods achieved up to 25% detection accuracy, and the mitigations yielded a 3% decrease in vulnerable plan generation and a 95% reduction in the attack window. Combining all three approaches reduced the share of executed trajectories containing TOCTOU vulnerabilities from 12% to 8%.
The findings suggest a new avenue for research at the intersection of AI safety and systems security.
