Iris Coleman
Apr 11, 2026 15:21
LangChain argues that closed AI agent harnesses create harmful vendor lock-in through proprietary memory systems, pushing developers toward open-source alternatives.

LangChain is sounding the alarm about a growing problem in AI development: companies building agents on closed platforms risk losing control of their most valuable asset, user memory data.
The blockchain and AI infrastructure company published a detailed analysis on April 11, 2026, arguing that "agent harnesses" (the scaffolding systems that manage how AI agents interact with tools and data) are becoming inseparable from memory storage. When developers choose proprietary harnesses, they are effectively handing over their users' interaction history to third parties.
Why This Matters for Developers
Agent harnesses have become the standard architecture for building AI systems. Claude Code alone reportedly contains 512,000 lines of harness code, according to leaked documentation referenced by LangChain. Even the model providers with the most advanced AI are investing heavily in these orchestration layers.
The problem? Memory is not a plugin you can swap out. As Letta CTO Sarah Wooders put it in a post cited by LangChain: "Asking to plug memory into an agent harness is like asking to plug driving into a car."
Short-term memory (conversation history, tool outputs) and long-term memory (cross-session preferences, learned behaviors) both flow through the harness. If that harness sits behind a proprietary API, the data stays locked in.
The Lock-In Spectrum
LangChain outlined three levels of risk:
Subtle: Using stateful APIs like OpenAI's Responses API or Anthropic's server-side compaction stores state on their servers. Want to switch models mid-conversation? Tough luck.
Bad: Closed harnesses like the Claude Agent SDK interact with memory in undocumented ways. Even when artifacts exist client-side, their format remains proprietary and non-transferable.
Worst: Full harness-as-a-service offerings like Anthropic's Claude Managed Agents put everything, including long-term memory, behind an API. Zero visibility, zero ownership.
OpenAI's Codex generates encrypted compaction summaries that are unusable outside its ecosystem, the analysis noted. Model providers are incentivized to move more functionality behind APIs precisely because memory creates stickiness that raw model access does not.
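The stateless alternative implied by the analysis can be sketched in a provider-agnostic way: the application, not the vendor, owns the conversation history and replays it on every call, so switching models mid-conversation changes only which endpoint is invoked. This is a minimal illustration of the pattern; `call_model` is a hypothetical placeholder, not a real SDK function.

```python
# Sketch: client-side conversation state, so short-term memory stays portable.
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """The app, not the model vendor, owns the message history."""
    messages: list[dict] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})


def call_model(provider: str, messages: list[dict]) -> str:
    # Hypothetical placeholder: a real app would call the provider's
    # stateless chat endpoint, passing the full message history each time.
    return f"[{provider} reply to {len(messages)} messages]"


convo = Conversation()
convo.add("user", "Summarize my last three support tickets.")
convo.add("assistant", call_model("vendor-a", convo.messages))

# Because history lives client-side, switching vendors mid-conversation
# is just a different argument; there is no server-side state to migrate.
convo.add("user", "Now draft a reply to the most urgent one.")
convo.add("assistant", call_model("vendor-b", convo.messages))

print(len(convo.messages))  # prints 4
```

Stateful APIs invert this: the provider holds the history server-side and the client keeps only an opaque identifier, which is exactly what makes mid-conversation switching impossible.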
The Stickiness Factor
LangChain's Harrison Chase shared a personal example: an internal email assistant built on their Fleet platform accumulated months of learned preferences. When it was accidentally deleted, recreating it from the same template produced a noticeably worse experience. All of those learned behaviors (tone, preferences, patterns) were gone.
"Without memory, your agents are just replicable by anyone who has access to the same tools," the post stated. Memory transforms a generic AI into a personalized system that improves over time.
The Open Alternative
LangChain is positioning its Deep Agents framework as the solution: open source, model-agnostic, with plugins for MongoDB, Postgres, and Redis for memory storage. The framework uses open standards like agents.md and supports deployment via LangSmith or standard hosting.
Whether the industry follows remains uncertain. Model providers have strong incentives to capture users through proprietary memory systems, and many developers prioritize getting agents working before worrying about data portability.
But for teams building production AI systems, the question deserves attention now: Who actually owns the data your agent learns from users? The answer may determine whether you can ever switch providers, or whether your AI's accumulated intelligence belongs to someone else entirely.
