Zach Anderson
Mar 12, 2026 17:24
Harvey AI addresses information barrier enforcement for autonomous legal agents, partnering with Intapp to prevent confidential information from leaking across client matters.
Legal AI startup Harvey is sounding the alarm on what it calls “the most important unsolved problem” in its industry: preventing autonomous AI agents from accidentally breaching the information barriers that keep law firms out of malpractice court.
The company published a detailed technical framework on March 12, 2026, outlining how traditional ethical wall enforcement breaks down when AI agents, rather than human attorneys, begin autonomously accessing firm document systems.
Why Chatbots Were Easy, Agents Are Hard
The shift from simple legal chatbots to what Harvey calls “long horizon agents” creates three fundamental problems that existing compliance systems weren’t built to handle.
First, agents access documents directly. When an AI autonomously pulls 50 documents from a firm’s document management system to review an acquisition agreement, it is making retrieval decisions without human oversight. If one of those documents sits behind an ethical wall? The breach happens before anyone knows to stop it.
Second, agents remember things. Unlike stateless chatbot sessions, long horizon agents maintain context across weeks of work on complex deals. If an agent picks up confidential information while working on Matter A, then gets assigned to Matter B on the other side of a conflict, that prior context contaminates the new work. Existing ethics rules are clear: that’s a violation.
Third, agents work too fast to monitor manually. A junior associate reviews maybe 50 documents a day. An agent processes hundreds in minutes. The supervising partner sees outputs, not the thousands of intermediate steps that produced them.
The Stakes for Law Firms
Harvey doesn’t mince words about the consequences. Courts can disqualify entire firms from matters over ethical wall failures. Clients bring malpractice claims. State bars impose disciplinary sanctions. The reputational damage alone can torch a firm’s most valuable client relationships.
Most Am Law 200 firms currently manage walls through Intapp’s conflicts checking system, iManage or NetDocuments access controls, and old-school measures like separate floors and restricted email groups. These work because the boundaries are clear: documents live in folders, people have access lists, and firms can restrict access at every point.
Autonomous agents obliterate that clarity.
Harvey’s Technical Fix
The company announced a partnership with Intapp to integrate ethical wall enforcement directly into its AI platform. The approach centers on three principles.
Every agent operation gets scoped to a specific client matter as a hard security boundary, not just a metadata tag. When Intapp flags a wall between Matter A and Matter B, Harvey’s system enforces that wall at the document retrieval layer, the context layer, and the output layer simultaneously.
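Harvey hasn’t published implementation details, but the principle is easy to sketch. In the minimal Python below, `WallMap` and `MatterScope` are hypothetical names invented for illustration; the point is that a single wall lookup, synced from the conflicts system, gates retrieval, remembered context, and output alike rather than living in any one layer.

```python
from dataclasses import dataclass, field

@dataclass
class WallMap:
    """Hypothetical stand-in for walls synced from a conflicts system like Intapp."""
    walls: set[frozenset[str]] = field(default_factory=set)  # pairs of walled matter IDs

    def walled(self, matter_a: str, matter_b: str) -> bool:
        return frozenset((matter_a, matter_b)) in self.walls

@dataclass
class MatterScope:
    """Scopes one agent session to a single client matter as a hard boundary."""
    matter_id: str
    wall_map: WallMap

    def allows(self, other_matter: str) -> bool:
        # One check gates all three layers: a retrieved document, a remembered
        # fact, or a drafted output is rejected if its matter is walled off
        # from the matter this session is scoped to.
        return not self.wall_map.walled(self.matter_id, other_matter)

walls = WallMap(walls={frozenset(("matter-A", "matter-B"))})
scope = MatterScope(matter_id="matter-B", wall_map=walls)
print(scope.allows("matter-A"))  # False: blocked at retrieval, context, and output
print(scope.allows("matter-C"))  # True: no wall between Matter B and Matter C
```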
Critically, the system “fails closed” rather than open. If an agent can’t confirm a document falls within its authorized boundary, it skips that document and flags the uncertainty. Work product might be less complete, but the ethical wall stays intact.
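Fail-closed behavior inverts the usual default. A rough sketch of that logic, with `resolve_matter` and `allows` as stand-ins for whatever metadata lookup and wall check the real system performs:

```python
def fail_closed_filter(doc_ids, resolve_matter, allows):
    """Split doc_ids into (approved, flagged); anything unconfirmed fails closed.

    resolve_matter(doc_id) -> matter ID or None; allows(matter_id) -> bool.
    """
    approved, flagged = [], []
    for doc_id in doc_ids:
        try:
            matter = resolve_matter(doc_id)
        except Exception:
            matter = None  # a failed lookup counts as unconfirmed
        if matter is not None and allows(matter):
            approved.append(doc_id)
        else:
            # Fail closed: unknown provenance is handled like a wall violation.
            flagged.append(doc_id)
    return approved, flagged

# Toy usage: doc-2's matter is unknown, so it is flagged rather than passed through.
docs = {"doc-1": "matter-B", "doc-2": None, "doc-3": "matter-A"}
approved, flagged = fail_closed_filter(
    docs, docs.get, lambda m: m != "matter-A"  # wall between this session and Matter A
)
print(approved)  # ['doc-1']
print(flagged)   # ['doc-2', 'doc-3']
```

Work product built only from the approved list may be thinner, but nothing of unknown provenance crosses the boundary.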
Every document access, context window, and agent session gets logged at a level of detail sufficient to prove in court, after the fact, that no walled information was accessed.
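Proving a negative after the fact, that walled material was never touched, means recording every decision, including denials. A minimal illustration follows; the schema and field names are invented for this sketch, not Harvey’s.

```python
import json
import time

def log_access_decision(log_path, session_id, matter_id, doc_id, decision, reason):
    """Append one timestamped, structured record per access decision."""
    record = {
        "ts": time.time(),
        "session": session_id,
        "matter": matter_id,
        "doc": doc_id,
        "decision": decision,  # "allowed" or "denied"
        "reason": reason,      # e.g. "no wall" or "walled from matter-A"
    }
    # Append-only JSON lines: denials are logged too, since proving that no
    # walled information was accessed requires a complete trail.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_access_decision("audit.jsonl", "sess-42", "matter-B", "doc-3",
                    "denied", "walled from matter-A")
```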
The Competitive Landscape Heats Up
Harvey isn’t alone in pushing legal AI toward autonomous agents. LegalOn launched five specialized AI agents for in-house legal teams on February 10, 2026, targeting tasks like playbook creation and contract translation. The broader industry is racing toward what researchers call agents that can “own more of the end-to-end lifecycle” of legal work.
But Harvey’s framing suggests the firms moving fastest with AI face the greatest exposure. Any law firm piloting agents without auditable ethical wall enforcement is “creating discoverable evidence of inadequate screening procedures,” the company warns. The productivity gains become worthless if they come with malpractice exposure attached.
For firms evaluating AI vendors, Harvey’s message is blunt: if your platform can’t precisely explain how wall enforcement works at both the application and data layers, it isn’t ready for sensitive work.
Image source: Shutterstock

