Felix Pinkston
Feb 18, 2026 20:03
New Anthropic research shows Claude Code autonomy nearly doubled in 3 months, with experienced users granting more independence while maintaining oversight.
AI agents are working independently for significantly longer periods as users develop trust in their capabilities, according to new research from Anthropic published February 18, 2026. The study, which analyzed hundreds of thousands of human-agent interactions, found that the longest-running Claude Code sessions nearly doubled from under 25 minutes to over 45 minutes between October 2025 and January 2026.
The findings arrive as Anthropic rides a wave of investor confidence, having just closed a $30 billion Series G round that valued the company at $380 billion. That valuation reflects growing enterprise appetite for AI agents, and this research offers the first large-scale empirical look at how humans actually work with them.
Trust Builds Gradually, Not Through Capability Jumps
Perhaps the most striking finding: the increase in autonomous operation time was smooth across model releases. If autonomy were purely about capability improvements, you'd expect sharp jumps when new models dropped. Instead, the steady climb suggests users are gradually extending trust as they gain experience.
The data backs this up. Among new Claude Code users, roughly 20% of sessions use full auto-approve mode. By the time users hit 750 sessions, that number exceeds 40%. But here is the counterintuitive part: experienced users also interrupt Claude more frequently, not less. New users interrupt in about 5% of turns; veterans interrupt in roughly 9%.
What's happening? Users aren't abandoning oversight. They're shifting strategy. Rather than approving every action upfront, experienced users let Claude run and step in when something needs correction. It's the difference between micromanaging and monitoring.
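To make the study's metrics concrete, here is a minimal Python sketch of how such figures could be computed from session logs. The `Session` fields, the log shape, and the metric names are assumptions invented for this illustration, not Anthropic's actual schema or methodology:

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Session:
    """One Claude Code session in a hypothetical log export."""
    user_id: str
    duration_minutes: float
    auto_approve: bool      # session ran in full auto-approve mode
    turns: int              # total agent turns
    human_interrupts: int   # turns where the human stepped in

def oversight_metrics(sessions: list[Session]) -> dict[str, float]:
    """Aggregate signals in the spirit of those the article describes."""
    total_turns = sum(s.turns for s in sessions)
    total_interrupts = sum(s.human_interrupts for s in sessions)
    durations = sorted(s.duration_minutes for s in sessions)
    return {
        # share of sessions run fully hands-off
        "auto_approve_share": sum(s.auto_approve for s in sessions) / len(sessions),
        # how often a human steps in, per agent turn
        "interrupt_rate_per_turn": total_interrupts / total_turns,
        # long tail of autonomous runs (99th percentile, in minutes)
        "p99_session_minutes": quantiles(durations, n=100)[98],
    }

# Tiny fake log, just to show the shapes involved.
log = [
    Session("u1", 12.0, False, 40, 2),
    Session("u1", 48.5, True, 120, 5),
    Session("u2", 6.3, False, 15, 1),
]
print(oversight_metrics(log))
```

Tracking the interrupt rate separately from the auto-approve share is what surfaces the pattern above: both can rise at once when users stop pre-approving and start monitoring.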
Claude Knows When to Ask
The research revealed something unexpected about Claude's own behavior. On complex tasks, the AI stops to ask clarifying questions more than twice as often as humans interrupt it. Claude-initiated pauses actually exceed human-initiated interruptions on the most difficult work.
Common reasons Claude stops itself include presenting users with choices between approaches (35% of pauses), gathering diagnostic information (21%), and clarifying vague requests (13%). Meanwhile, humans typically interrupt to provide missing technical context (32%) or because Claude was running slow or excessive (17%).
This suggests Anthropic's training for uncertainty recognition is working. Claude appears calibrated to its own limitations, though the researchers caution it may not always stop at the right moments.
Software Dominates, But Riskier Domains Emerge
Software engineering accounts for nearly 50% of all agentic tool calls on Anthropic's public API. That concentration makes sense: code is testable, reviewable, and relatively low-stakes if something breaks.
But the researchers found growing usage in healthcare, finance, and cybersecurity. Most actions remain low-risk and reversible; only 0.8% of observed actions appeared irreversible, like sending customer emails. Still, the highest-risk clusters involved sensitive security operations, financial transactions, and medical records.
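For teams deploying agents in those higher-risk domains, a common pattern is to gate irreversible actions behind human review. The sketch below is a rough illustration of that idea; the tool names, the category set, and the `requires_review` policy are invented assumptions, not Anthropic's taxonomy or any product API:

```python
# Hypothetical pre-execution check: pause for approval on actions
# that cannot be undone once they leave the system.
IRREVERSIBLE_HINTS = {
    "send_email",       # leaves the system: cannot be unsent
    "transfer_funds",   # financial transaction
    "delete_record",    # destructive write to sensitive data
}

def requires_review(tool_name: str, args: dict) -> bool:
    """Return True if the action should pause for human approval."""
    if tool_name in IRREVERSIBLE_HINTS:
        return True
    # Heuristic: actions with external side effects are higher risk.
    return bool(args.get("external_recipient"))

print(requires_review("send_email", {"external_recipient": "customer@example.com"}))  # True
print(requires_review("run_tests", {}))  # False
```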
The team acknowledges limitations: many high-risk actions could be red-team evaluations rather than production deployments. They can't always tell the difference from their vantage point.
What This Means for the Industry
Anthropic's researchers argue against mandating specific oversight patterns, like requiring human approval for every action. Their data suggests such requirements would create friction without safety benefits; experienced users naturally develop more efficient monitoring strategies.
Instead, they're calling for better post-deployment monitoring infrastructure across the industry. Pre-deployment testing can't capture how humans actually interact with agents in practice. The patterns they observed (trust building over time, shifting oversight strategies, agents limiting their own autonomy) only emerge in real-world usage.
For enterprises evaluating AI agent deployments, the research offers a concrete benchmark: even power users at the high end of the distribution are running Claude autonomously for under an hour at a stretch. The gap between what models can theoretically handle (METR estimates 5 hours for comparable tasks) and what users actually allow suggests significant headroom remains, and that trust, not capability, may be the binding constraint on adoption.
Image source: Shutterstock

