Anthropic is testing what may be the most powerful AI model it has ever built, and the world wasn't supposed to know yet.
A data leak reported by Fortune on Thursday revealed that the AI lab behind Claude has trained a new model called "Mythos," which it internally describes as "by far the most powerful AI model we have ever developed."
The model was discovered in a draft blog post left in an unsecured, publicly searchable data cache, alongside nearly 3,000 other unpublished assets, according to cybersecurity researchers who reviewed the material.
Anthropic confirmed the model's existence after Fortune's inquiry, calling it "a step change" in AI performance and "the most capable we have built to date." The company said it is being trialed by "early access customers" and acknowledged that a "human error" in its content management system caused the leak.
The draft blog post introduced a new model tier called "Capybara," described as larger and more capable than Anthropic's existing Opus models, which were previously its strongest.
"Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others," the draft said.
It is the cybersecurity dimension that matters most for the crypto industry. The draft blog post said the model "poses unprecedented cybersecurity risks," a framing with direct implications for blockchain security, smart contract auditing, and the escalating arms race between attackers and defenders in DeFi.
This week alone, Ripple announced an AI-driven security overhaul for the XRP Ledger after an AI-assisted red team uncovered more than 10 vulnerabilities in its 13-year-old codebase. Ethereum launched a dedicated post-quantum security hub backed by eight years of research.
And the Resolv stablecoin lost its peg after an attacker exploited a minting contract with no oracle checks and single-key access control, the kind of infrastructure failure that more capable AI tools could potentially identify before an attacker does, or exploit faster than defenders can respond.
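The failure pattern described here is simple enough to sketch. The following is a hypothetical illustration in Python, not Resolv's actual contract code: a mint function gated by nothing but a single admin key, with no oracle check that new supply is backed at current prices. Whoever holds (or steals) that one key can mint unbacked tokens at will.

```python
# Hypothetical sketch of the vulnerability pattern described above:
# single-key access control plus an unconstrained mint. All names
# here are illustrative, not taken from any real contract.

class VulnerableStablecoin:
    def __init__(self, admin_key: str):
        self.admin_key = admin_key          # single-key access control
        self.balances: dict[str, int] = {}
        self.total_supply = 0

    def mint(self, caller_key: str, to: str, amount: int) -> None:
        # The ONLY guard: does the caller present the one admin key?
        # There is no oracle check that the mint is collateralized at
        # current prices, no supply cap, and no multi-sig approval.
        if caller_key != self.admin_key:
            raise PermissionError("not admin")
        self.balances[to] = self.balances.get(to, 0) + amount
        self.total_supply += amount

# If the key leaks, an attacker mints unbacked supply and breaks the peg:
coin = VulnerableStablecoin(admin_key="0xSECRET")
coin.mint("0xSECRET", "attacker", 10**9)    # unlimited, unchecked mint
print(coin.total_supply)
```

Standard mitigations for this class of bug, such as multi-signature control and oracle-gated minting, are exactly the kind of missing safeguard an automated auditor would be expected to flag.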
For the AI token market, the leak raises a different question. Bittensor's decentralized network recently launched Covenant-72B, a model that competes with Meta's Llama 2 70B, triggering a 90% rally in TAO and driving subnet tokens to a combined market cap of $1.47 billion.
A "step change" from a centralized lab like Anthropic resets the benchmark that decentralized AI projects have to match. The competitive distance between what a well-funded corporate lab can build and what a permissionless network can produce just got wider.
Anthropic said it is "being deliberate" about the model's release given its capabilities. The draft blog noted the model is expensive to run and not yet ready for general availability. The company removed public access to the data cache after Fortune contacted it.
The leak itself is its own cautionary tale. A company building what it describes as an AI model with unprecedented cybersecurity capabilities left the announcement of that model in an unsecured, publicly searchable data store because of human error. The irony needs no elaboration.

