A purpose-built AI security agent detected vulnerabilities in 92% of exploited DeFi smart contracts in a new open-source benchmark.
The study, released Thursday by AI security firm Cecuro, evaluated 90 real-world smart contracts exploited between October 2024 and early 2026, representing $228 million in verified losses. The specialized system flagged vulnerabilities tied to $96.8 million in exploit value, compared with just 34% detection and $7.5 million in coverage from a baseline GPT-5.1-based coding agent.
Both systems ran on the same frontier model. The difference, according to the report, was the application layer: domain-specific methodology, structured analysis phases and DeFi-focused security heuristics layered on top of the model.
The findings arrive amid growing concern that AI is accelerating crypto crime. Separate research from Anthropic and OpenAI has shown that AI agents can now execute end-to-end exploits on most known vulnerable smart contracts, with exploit capability reportedly doubling roughly every 1.3 months. The average cost of an AI-powered exploit attempt is about $1.22 per contract, sharply lowering the barrier to large-scale scanning.
Earlier CoinDesk coverage outlined how bad actors such as North Korea have begun using AI to scale hacking operations and automate parts of the exploit process, underscoring the widening gap between offensive and defensive capabilities.
Cecuro argues that many teams rely on general-purpose AI tools or one-off audits for security, an approach the benchmark suggests may miss high-value, complex vulnerabilities. Several contracts in the dataset had previously undergone professional audits before being exploited.
The benchmark dataset, evaluation framework and baseline agent have been open-sourced on GitHub. The company said it has not released its full security agent over concerns that similar tooling could be repurposed for offensive use.