BridgeMind AI claimed Anthropic’s Claude Opus 4.6 was secretly degraded after a hallucination benchmark retest. The viral post has since drawn sharp criticism for flawed methodology.
The claim triggered widespread debate over whether AI companies are quietly downgrading paid models to reduce costs.
BridgeMind Claims a 98% Surge in Hallucinations
BridgeMind, the team behind the BridgeBench coding benchmark, posted that Claude Opus 4.6 had fallen from second to tenth place on its hallucination leaderboard. Accuracy reportedly dropped from 83.3% to 68.3%.
“CLAUDE OPUS 4.6 IS NERFED. BridgeBench just proved it. Last week Claude Opus 4.6 ranked #2 on the Hallucination benchmark with an accuracy of 83.3%. Today Claude Opus 4.6 was retested and it fell to #10 on the leaderboard with an accuracy of only 68.3%,” they wrote.
The post framed this as proof of “reduced reasoning levels.” However, a closer look at the underlying data tells a different story.
Critics Say the Comparison Is Fundamentally Flawed
According to computer scientist Paul Calcraft, the claim is “incredibly bad science,” highlighting a critical problem with the methodology.
“Incredibly bad science. You tested Opus on 30 tasks today, earlier score was on just *6* tasks. Results for 6 tasks in common: 85.4% score today vs. 87.6% prevly. Swing is mostly from a *single* fabrication without repeats – just statistical noise,” commented Calcraft.
The original high score came from just six benchmark tasks. The new retest expanded the benchmark to 30 tasks.
On the six overlapping tasks, performance was nearly identical, dropping only from 87.6% to 85.4%.
That small swing came mostly from a single extra fabrication in one task. With no repeated runs, this falls well within normal statistical variance for AI models.
Large language models aren’t deterministic, and one bad output on a small sample can shift results significantly.
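To make the noise argument concrete, consider the standard error of an accuracy estimate built from only a handful of tasks. Here is a minimal Python sketch, assuming simple pass/fail scoring per task (BridgeBench’s actual scoring may be fractional):

```python
import math

def accuracy_standard_error(accuracy: float, n_tasks: int) -> float:
    """Standard error of an accuracy estimate from n independent pass/fail tasks."""
    return math.sqrt(accuracy * (1 - accuracy) / n_tasks)

# One flipped task moves the headline number by 1/n on its own.
print(f"single-task swing at n=6:  {1 / 6:.1%}")   # ~16.7 points
print(f"single-task swing at n=30: {1 / 30:.1%}")  # ~3.3 points

# Sampling noise around a true accuracy of 85%:
for n in (6, 30):
    se = accuracy_standard_error(0.85, n)
    print(f"n={n:>2}: 85.0% +/- {se:.1%} (one standard error)")
```

At six tasks, one standard error is roughly 15 percentage points, far larger than the 2.2-point drop Calcraft measured on the overlapping set.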
Broader Frustrations Fuel the Narrative
Still, the post struck a nerve. Since its February 2026 launch, Claude Opus 4.6 has faced persistent complaints about perceived quality decline.
Developers report shorter responses, weaker instruction-following, and reduced reasoning depth during peak hours.
Some of this traces to deliberate product changes. Anthropic introduced adaptive thinking controls that let the model self-adjust its reasoning budget. The default effort level was later set to medium, prioritizing efficiency over maximum depth.
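Developers who would rather pin reasoning depth than rely on the adaptive default have, on earlier Claude models, been able to set an explicit thinking budget through the Anthropic Python SDK. A minimal sketch; the model ID and whether Opus 4.6 honors the same parameter are assumptions here:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Pin an explicit thinking budget rather than relying on adaptive defaults.
# "claude-opus-4-6" is a hypothetical model ID, and the availability of this
# parameter on Opus 4.6 is an assumption -- check the current API docs.
response = client.messages.create(
    model="claude-opus-4-6",
    max_tokens=16384,  # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 8192},
    messages=[{"role": "user", "content": "Review this function for bugs: ..."}],
)

print(response.content)
```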
An independent analysis of over 6,800 Claude Code sessions found reasoning depth dropped roughly 67% by late February.
The model’s file-read ratio before editing code fell from 6.6 to 2.0. That suggests it attempted fixes on code it had barely reviewed.
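A metric like this is straightforward to reproduce from tool-use logs. A minimal sketch; the event names and record shape below are hypothetical stand-ins, not Claude Code’s actual log format:

```python
from collections import Counter

def file_read_ratio(events: list[dict]) -> float:
    """Ratio of file-read tool calls to file-edit tool calls in one session.

    `events` is a list of tool-use records; the 'tool' field values used
    here are hypothetical stand-ins for whatever the real logs record.
    """
    counts = Counter(e["tool"] for e in events)
    reads = counts.get("read_file", 0)
    edits = counts.get("edit_file", 0)
    return reads / edits if edits else float("inf")

session = [
    {"tool": "read_file", "path": "src/app.py"},
    {"tool": "read_file", "path": "src/utils.py"},
    {"tool": "edit_file", "path": "src/app.py"},
]
print(file_read_ratio(session))  # 2.0 -- the post-February figure cited above
```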
What This Means for AI Users
This reflects a growing tension in the AI industry. Companies optimize models for cost and scale after launch, while heavy users expect consistent peak performance. The gap between these priorities erodes trust.
Based on the available evidence, the BridgeBench data doesn’t prove a deliberate downgrade. The benchmark comparison was apples-to-oranges, and the overlapping results were nearly identical.
However, the underlying frustration is not entirely baseless. Adaptive compute controls and service-level optimizations have changed how Claude Opus 4.6 behaves in practice. For developers relying on consistent output, these changes matter.
Anthropic has not issued a public statement on the specific BridgeBench claims as of April 13.