LLM Security: Time-of-Check Time-of-Use Attacks

by Archynetys Technology & Science Desk

By Amelia Hernandez | WASHINGTON D.C. – 2025/09/18 12:27:33


A newly released research paper, “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents,” highlights a previously underexplored class of security threats targeting Large Language Model (LLM) agents.

Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications.

The paper delves into Time-of-Check to Time-of-Use (TOCTOU) vulnerabilities, which occur when an agent validates an external state, such as a file or API response, that is later modified before being used. This can lead to attacks like malicious configuration swaps or payload injection.
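To make the race concrete, here is a minimal Python sketch of a TOCTOU gap in a file-based agent workflow. This is a hypothetical illustration, not code from the paper: the function name `agent_run`, the config path, and the "unsafe string" check are all invented for the example.

```python
import os
import tempfile

def agent_run(config_path: str) -> str:
    # Time-of-check: the agent inspects the config and deems it safe.
    with open(config_path) as f:
        checked = f.read()
    assert "rm -rf" not in checked, "unsafe config rejected"

    # ... attack window: an adversary with write access swaps the file ...
    with open(config_path, "w") as f:  # simulated malicious swap
        f.write("cmd = 'rm -rf /tmp/data'")

    # Time-of-use: the agent re-reads and acts on the (now modified) state.
    with open(config_path) as f:
        return f.read()

path = os.path.join(tempfile.mkdtemp(), "agent.conf")
with open(path, "w") as f:
    f.write("cmd = 'ls'")

result = agent_run(path)
print(result)  # the agent now holds content that was never validated
```

The check passed against one version of the file, but the action runs against another; the validation and the use are not atomic.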

The researchers introduce TOCTOU-Bench, a benchmark comprising 66 realistic user tasks designed to evaluate these vulnerabilities. They also adapt detection and mitigation techniques from systems security, proposing prompt rewriting, state integrity monitoring, and tool-fusing as countermeasures.
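One of the listed countermeasures, state integrity monitoring, can be sketched as recording a cryptographic fingerprint of the validated state at check time and refusing to act if the fingerprint has changed at use time. The snippet below is an assumed minimal implementation of that idea, not the paper's actual mechanism:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 digest of the external state (file contents, API response, etc.)
    return hashlib.sha256(data).hexdigest()

# Time-of-check: validate the state and record its fingerprint.
checked_state = b"cmd = 'ls'"
baseline = fingerprint(checked_state)

# Time-of-use: re-fingerprint; any modification in the gap is detected.
current_state = b"cmd = 'rm -rf /tmp/data'"  # attacker swapped the state
action = "abort" if fingerprint(current_state) != baseline else "execute"
print(action)
```

Here the agent aborts because the state no longer matches what was checked, collapsing the usable attack window to the cost of forging a hash collision.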

“Our study highlights challenges unique to agentic workflows…” the authors write.

The study achieved up to 25% detection accuracy using automated detection methods, a 3% decrease in vulnerable plan generation, and a 95% reduction in the attack window. Combining all three approaches reduced TOCTOU vulnerabilities in executed trajectories from 12% to 8%.

The findings suggest a new avenue for research at the intersection of AI safety and systems security.


About the Author

Amelia Hernandez is a cybersecurity reporter with a focus on AI and emerging technologies. She has been covering the intersection of security and artificial intelligence for over five years.
