Mythos, the new AI model from Anthropic that has sparked concern and confusion in traditional tech and finance, is also driving a major shift in how the crypto industry thinks about security.
For years, decentralized finance has centered its defenses on smart contracts. Code is audited, vulnerabilities are cataloged, and many common exploits are well understood. But Mythos, a model designed to identify and chain together weaknesses across systems, is pushing attention beyond code and into the infrastructure that supports it.
“The bigger risks sit in infrastructure,” said Paul Vijender, head of security at Gauntlet, a risk management firm. “When I think about AI-driven threats, I’m less concerned about smart contract exploits and more focused on AI-assisted attacks against the human and infrastructure layers.”
That includes key management systems, signing services, bridges, oracle networks, and the cryptographic layers that connect them. These components are less visible than smart contracts and often fall outside traditional audit scope.
In fact, this month, web infrastructure provider Vercel, which many crypto companies use, disclosed a security breach that may have exposed customer API keys, prompting crypto projects to rotate credentials and review their code. Vercel traced the intrusion to a compromised Google Workspace connection via the third-party AI tool Context.ai, which an employee used.
Mythos belongs to a new class of AI systems built to simulate adversaries. Instead of scanning for known bugs, it explores how protocols interact, testing how small weaknesses can be combined into real-world exploits. That approach has drawn attention beyond crypto. Banks like JPMorgan are increasingly treating AI-driven cyber risk as systemic and are exploring tools like Mythos for stress testing. Earlier this month, Coinbase and Binance both reportedly approached Anthropic to test Mythos.
Early findings from models like Mythos have identified weaknesses in the behind-the-scenes systems that keep crypto platforms secure, including the technology that protects keys and handles communication between systems.
“I think there are two areas where AI models are especially valuable,” Vijender said. “First, multi-step exploit chains that historically only get discovered after money is lost. Second, infrastructure-layer vulnerabilities that traditional audits never touch.”
That shift matters in a system built on composability, where DeFi protocols can connect to and build on one another’s services.
DeFi protocols are designed to interconnect. They share liquidity, rely on common oracles, and interact through layers of integrations that are difficult to map in full. That interconnectedness has driven growth, but it also creates pathways for risk to spread, as seen in recent bridge exploits like the Hyperbridge attack, in which an attacker minted $1 billion worth of bridged Polkadot tokens on Ethereum by exploiting a flaw in how cross-chain messages were verified.
“Composability is what makes DeFi capital efficient and innovative,” Vijender said. “But it also means a minor vulnerability in one protocol can become a critical exploit vector with contagion potential across the ecosystem.”
Without AI, these dependencies are hard to trace. With AI, they can be mapped and exploited at scale. The result is a shift from isolated exploits to systemic failures that cascade across protocols.
Evolution of AI attacks
Still, some industry leaders see Mythos as an acceleration rather than a turning point.
At Aave Labs, founder Stani Kulechov said AI reflects the dynamics already at play in DeFi’s adversarial environment.
“Web3 is no stranger to well-funded and motivated adversaries,” he told CoinDesk. “AI models represent an evolution in the tools used to achieve exploits.”
From that perspective, DeFi is already built for machine-speed attacks. Smart contracts execute automatically, and defenses such as liquidation mechanisms and risk parameters operate without human intervention.
“DeFi operates at compute speed, so AI doesn’t introduce a new dynamic,” Kulechov said. “It intensifies an environment that has always required constant vigilance.”
Even so, Aave is seeing AI surface new categories of vulnerabilities, including issues that human auditors may have previously deprioritized.
“The Mythos paper shows that AI can uncover past bugs that were previously deprioritized,” he said.
That breadth still matters in a system where even smaller vulnerabilities can undermine trust or be combined into larger exploits.
If attackers can move faster, the question becomes whether defenses can keep pace.
For both Gauntlet and Aave, the answer lies in changing the security model itself. Audits before deployment and monitoring after were designed for human-paced threats. AI compresses that timeline.
“To defend against offensive AI, we will need to take an AI-centric approach where speed and continuous adaptation are essential,” Vijender of Gauntlet said. That includes continuous auditing, real-time simulation, and systems built with the assumption that breaches will happen.
A ‘better approach’
Aave has already integrated AI into its workflows, using it for simulations and code analysis alongside human auditors. “We take an AI-first approach where it adds clear value,” Kulechov of Aave Labs said. “But it complements, rather than replaces, human-led auditing.”
In that sense, AI equips both attackers and defenders.
For builders, the long-term effect may be less disruption than divergence.
“We haven’t tested Mythos yet, but we’re genuinely interested in what it and tools like it can do for protocol security,” said Hayden Adams, founder and CEO of Uniswap Labs. “AI gives builders better ways to stress test and harden systems.”
Over time, Adams expects the gap between secure and insecure protocols to widen.
“Projects that prioritize security will have greater capacity to test and harden systems before launching,” he said. “Projects that don’t will be most at risk.”
That may be the real shift. Security is no longer about eliminating vulnerabilities. It’s about continuously adapting to a system in which those vulnerabilities are constantly rediscovered and recombined.
Read more: Move over bitcoin and quantum risks. Anthropic’s Mythos AI could have major implications for DeFi