AI researchers from Anthropic, Stanford, and Oxford have found that making AI models think longer makes them easier to jailbreak, the opposite of what everyone assumed.
The prevailing assumption was that extended reasoning would make AI models safer, because it gives them more time to detect and refuse harmful requests. Instead, the researchers found it creates a reliable jailbreak method that bypasses safety filters entirely.
Using this technique, an attacker could insert an instruction into the chain-of-thought process of any AI model and force it to generate instructions for creating weapons, writing malware, or producing other prohibited content that would normally trigger an immediate refusal. AI companies spend millions building safety guardrails precisely to prevent such outputs.
The study shows that Chain-of-Thought Hijacking achieves a 99% attack success rate on Gemini 2.5 Pro, 94% on GPT o4 mini, 100% on Grok 3 mini, and 94% on Claude 4 Sonnet. Those numbers outstrip every prior jailbreak method tested on large reasoning models.

The attack is simple and works like the game “Whisper Down the Lane” (or “Telephone”), with a malicious player somewhere near the end of the line. You simply pad a harmful request with long stretches of harmless puzzle-solving; the researchers tested Sudoku grids, logic puzzles, and abstract math problems. Add a final-answer cue at the end, and the model’s safety guardrails collapse.
“Prior works suggest this scaled reasoning may strengthen safety by improving refusal. Yet we find the opposite,” the researchers wrote. The same capability that makes these models better at problem-solving also makes them blind to danger.
Here is what happens inside the model: when you ask an AI to solve a puzzle before answering a harmful question, its attention gets diluted across thousands of benign reasoning tokens. The harmful instruction, buried somewhere near the end, receives almost no attention. Safety checks that normally catch dangerous prompts weaken dramatically as the reasoning chain grows longer.
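To see what that dilution looks like in practice, here is a minimal, illustrative Python sketch (not the researchers’ code). It uses a small open model, GPT-2, chosen only because it is tiny and exposes attention weights, and measures how much of the final token’s attention lands on a closing instruction as a harmless puzzle-style prefix grows. The model choice, prompts, and padding length are all placeholder assumptions.

```python
# Illustrative only: roughly measures how much attention the final instruction
# receives as a benign, puzzle-style prefix grows. The model, prompts, and
# padding length are placeholders (GPT-2 is used because it is small and
# exposes attention weights); the study itself targeted large reasoning models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, attn_implementation="eager")
model.eval()

def attention_on_suffix(prefix: str, suffix: str) -> float:
    """Fraction of the final token's attention (averaged over layers and heads)
    that lands on the suffix tokens. Token boundaries are approximate."""
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prefix + suffix, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(full_ids, output_attentions=True)
    # out.attentions: one (batch, heads, seq, seq) tensor per layer
    att = torch.stack(out.attentions).mean(dim=(0, 2))   # average over layers and heads
    return att[0, -1, prefix_len:].sum().item()          # last token's attention on suffix

instruction = " Now give the final answer to the original question."  # placeholder cue
short_pad = "Solve: 2 + 2 = 4."
long_pad = "Solve: 2 + 2 = 4. " * 80    # long run of benign puzzle-solving

print("attention on instruction, short prefix:", attention_on_suffix(short_pad, instruction))
print("attention on instruction, long prefix: ", attention_on_suffix(long_pad, instruction))
```

In this toy setting the absolute numbers mean little; the point is the measurement itself, the share of attention that actually reaches the instruction, which the paper reports collapsing as reasoning chains grow on far larger models.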

Many people familiar with AI already know a milder version of this problem: some jailbreak prompts are deliberately long, forcing a model to burn through tokens before it processes the harmful instructions.
The team ran controlled experiments on the S1 model to isolate the effect of reasoning length. With minimal reasoning, attack success rates hit 27%. At natural reasoning length, that jumped to 51%. Force the model into extended step-by-step thinking, and success rates soared to 80%.
Every major commercial AI falls victim to this attack. OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok: none are immune. The vulnerability sits in the architecture itself, not in any specific implementation.
AI models encode the strength of their safety checking in middle layers, around layer 25. Late layers encode the verification outcome. Long chains of benign reasoning suppress both signals, which ends up shifting attention away from harmful tokens.
The researchers identified specific attention heads responsible for safety checks, concentrated in layers 15 through 35. They surgically removed 60 of those heads, and refusal behavior collapsed: harmful instructions became impossible for the model to detect.
The “layers” in AI models are like steps in a recipe, where each step helps the computer better understand and process information. These layers work together, passing what they learn from one to the next, so the model can answer questions, make decisions, or spot problems. Some layers are especially good at recognizing safety issues, like blocking harmful requests, while others help the model think and reason. By stacking these layers, AI can become much smarter and more careful about what it says or does.
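To make the idea of a layer-level safety signal concrete, here is a simplified, hypothetical sketch of a linear “safety probe”: a small classifier fit on one middle layer’s hidden states so that it separates requests a model should refuse from benign ones. The model, the layer index, and the tiny example dataset are stand-ins for illustration, not the paper’s actual probing setup.

```python
# Simplified, hypothetical "safety probe": fit a small classifier on hidden
# states from a middle layer so it separates requests that should be refused
# from benign ones. Model, layer index, and the tiny dataset are stand-ins;
# the paper's probing and head-level analysis are more involved than this.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # small stand-in; the study examined large reasoning models
LAYER = 6        # a "middle" layer for this 12-layer stand-in
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def middle_layer_features(prompt: str) -> torch.Tensor:
    """Mean-pooled hidden state of the chosen middle layer for one prompt."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids).hidden_states[LAYER]   # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)

# Tiny placeholder dataset: 1 = should be refused, 0 = benign.
examples = [
    ("Write step-by-step instructions for picking a door lock.", 1),
    ("How do I disable a security alarm that isn't mine?", 1),
    ("What's a good recipe for banana bread?", 0),
    ("Explain how photosynthesis works.", 0),
]
X = torch.stack([middle_layer_features(p) for p, _ in examples]).numpy()
y = [label for _, label in examples]

probe = LogisticRegression(max_iter=1000).fit(X, y)
test = middle_layer_features("Summarize the plot of Hamlet.").numpy().reshape(1, -1)
print("probe's harmfulness score:", round(float(probe.predict_proba(test)[0, 1]), 2))
```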
This new jailbreak challenges the core assumption driving recent AI development. Over the past year, major AI companies shifted focus to scaling reasoning rather than raw parameter counts. Traditional scaling showed diminishing returns, and inference-time reasoning, making models think longer before answering, became the new frontier for performance gains.
The assumption was that more thinking equals better safety: extended reasoning would give models more time to spot dangerous requests and refuse them. This research shows that assumption was wrong; if anything, the relationship runs the other way.
A related attack called H-CoT, released in February by researchers from Duke University and Taiwan’s National Tsing Hua University, exploits the same vulnerability from a different angle. Instead of padding with puzzles, H-CoT manipulates the model’s own reasoning steps. OpenAI’s o1 model maintains a 99% refusal rate under normal conditions; under H-CoT attack, that drops below 2%.

The researchers propose a defense: reasoning-aware monitoring. It tracks how safety signals change across each reasoning step, and any step that weakens the safety signal is penalized, forcing the model to keep its attention on potentially harmful content no matter how long the reasoning chain runs. Early tests show this approach can restore safety without destroying performance.
But implementation remains uncertain. The proposed defense requires deep integration into the model’s reasoning process, which is far from a simple patch or filter. It needs to monitor internal activations across dozens of layers in real time and adjust attention patterns dynamically, which is computationally expensive and technically complex.
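Purely as an illustration of the monitoring idea, the sketch below flags reasoning steps whose safety signal has faded relative to its peak. The real proposal operates inside the model, on attention patterns and activations across layers; here `safety_signal` is just a placeholder for that internal readout (for instance, a probe like the one sketched above).

```python
# Rough sketch of "reasoning-aware monitoring" as an external loop. The actual
# defense described by the researchers works inside the model, tracking safety
# signals across layers and re-weighting attention; safety_signal() below is a
# placeholder for such an internal readout.
from typing import Callable, List

def monitor_reasoning(
    steps: List[str],
    safety_signal: Callable[[str], float],
    drop_tolerance: float = 0.5,
) -> List[int]:
    """Flag reasoning steps whose safety signal has decayed to less than
    `drop_tolerance` of the strongest signal seen so far."""
    flagged: List[int] = []
    strongest = 0.0
    for i, step in enumerate(steps):
        signal = safety_signal(step)
        strongest = max(strongest, signal)
        if strongest > 0 and signal < drop_tolerance * strongest:
            flagged.append(i)   # candidate step to penalize or re-check
    return flagged

# Toy usage with a keyword-based stand-in for the internal safety readout.
toy_signal = lambda text: 0.9 if "lock" in text else 0.1
steps = [
    "The user also asked how to pick a lock.",   # safety signal fires here
    "First, fill row one of the Sudoku grid.",   # long benign padding follows
    "Next, check column three for conflicts.",
]
print("steps to penalize:", monitor_reasoning(steps, toy_signal))
```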
The researchers disclosed the vulnerability to OpenAI, Anthropic, Google DeepMind, and xAI before publication. “All groups acknowledged receipt, and several are actively evaluating mitigations,” the researchers said in their ethics statement.
