AI agents dominated ETHDenver 2026, from autonomous finance to on-chain robotics. But as enthusiasm around "agentic economies" builds, a harder question is emerging: can institutions prove what their AI systems were trained on?
Among the startups targeting that problem is Perle Labs, which argues that AI systems need a verifiable chain of custody for their training data, particularly in regulated and high-risk environments. Focused on building auditable, credentialed data infrastructure for institutions, Perle has raised $17.5 million to date, with its latest funding round led by Framework Ventures. Other investors include CoinFund, Protagonist, HashKey, and Peer VC. The company reports more than a million annotators contributing over a billion scored data points on its platform.
BeInCrypto spoke with Ahmed Rashad, CEO of Perle Labs, on the sidelines of ETHDenver 2026. Rashad previously held an operational leadership role at Scale AI during its hypergrowth phase. In the conversation, he discussed data provenance, model collapse, adversarial risks, and why he believes sovereign intelligence will become a prerequisite for deploying AI in critical systems.
BeInCrypto: You describe Perle Labs as the "sovereign intelligence layer for AI." For readers who aren't inside the data infrastructure debate, what does that actually mean in practical terms?
Ahmed Rashad: "The word sovereign is deliberate, and it carries several layers.
The most literal meaning is control. If you're a government, a hospital, a defense contractor, or a large enterprise deploying AI in a high-stakes setting, you need to own the intelligence behind that system, not outsource it to a black box you can't inspect or audit. Sovereign means you know what your AI was trained on, who validated it, and you can prove it. Most of the industry today cannot say that.
The second meaning is independence: acting without outside interference. That is exactly what institutions like the DoD or a large enterprise require when they deploy AI in sensitive environments. You cannot have your critical AI infrastructure depend on data pipelines you don't control, can't verify, and can't defend against tampering. That's not a theoretical risk. The NSA and CISA have both issued operational guidance on data supply chain vulnerabilities as a national security concern.
The third meaning is accountability. When AI moves from generating content into making decisions in medicine, finance, or the military, someone has to be able to answer: where did the intelligence come from? Who verified it? Is that record permanent? On Perle, our goal is to have every contribution from every expert annotator recorded on-chain. It can't be rewritten. That immutability is what makes the word sovereign accurate rather than just aspirational.
In practical terms, we're building a verification and credentialing layer. If a hospital deploys an AI diagnostic system, it should be able to trace every data point in the training set back to a credentialed expert who validated it. That's sovereign intelligence. That's what we mean."
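To make the per-data-point chain of custody concrete, here is a minimal sketch of what such a provenance record might look like. The field names, credential IDs, and verification flow below are editorial assumptions for illustration, not Perle's published schema: each training example carries a content hash, the credential of the expert who validated it, and a reference to the on-chain transaction that anchors the record, so an auditor can later re-check the trace.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ProvenanceRecord:
    """One auditable link in a training-data chain of custody (illustrative only)."""
    data_point_hash: str       # SHA-256 of the raw example, so tampering is detectable
    annotator_credential: str  # ID of the credentialed expert who validated the example
    quality_tier: str          # quality band assigned by the platform's review process
    anchor_tx: str             # reference to the on-chain transaction anchoring this record

def make_record(raw_example: bytes, credential: str, tier: str, tx_ref: str) -> ProvenanceRecord:
    return ProvenanceRecord(
        data_point_hash=hashlib.sha256(raw_example).hexdigest(),
        annotator_credential=credential,
        quality_tier=tier,
        anchor_tx=tx_ref,
    )

def verify(record: ProvenanceRecord, raw_example: bytes) -> bool:
    """An auditor re-hashes the example and confirms it matches the anchored record."""
    return hashlib.sha256(raw_example).hexdigest() == record.data_point_hash

# Example: a hospital auditing one training example behind a diagnostic model
example = b"chest x-ray annotation: cardiomegaly present"
record = make_record(example, credential="rad-0042", tier="expert-consensus", tx_ref="0xabc...")
print(json.dumps(asdict(record), indent=2))
print("trace verified:", verify(record, example))
```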
BeInCrypto: You were part of Scale AI during its hypergrowth phase, including major defense contracts and the Meta investment. What did that experience teach you about where traditional AI data pipelines break?
Ahmed Rashad: "Scale was an incredible company. I was there during the period when it went from $90M to what is now $29B; all of that was taking shape, and I had a front-row seat to where the cracks form.
The fundamental problem is that data quality and scale pull in opposite directions. When you're growing 100x, the pressure is always to move fast: more data, faster annotation, lower cost per label. And the casualties are precision and accountability. You end up with opaque pipelines: you know roughly what went in, you have some quality metrics on what came out, but the middle is a black box. Who validated this? Were they actually qualified? Was the annotation consistent? Those questions become almost impossible to answer at scale with traditional models.
The second thing I learned is that the human element is almost always treated as a cost to be minimized rather than a capability to be developed. The transactional model, pay per task and optimize for throughput, simply degrades quality over time. It burns through the best contributors. The people who can give you genuinely high-quality, expert-level annotations are not the same people who will sit through a gamified micro-task system for pennies. You have to build differently if you want that caliber of input.
That realization is what Perle is built on. The data problem isn't solved by throwing more labor at it. It's solved by treating contributors as professionals, building verifiable credentialing into the system, and making the entire process auditable end to end."
BeInCrypto: You've reached a million annotators and scored over a billion data points. Most data labeling platforms rely on anonymous crowd labor. What's structurally different about your reputation model?
Ahmed Rashad: "The core difference is that on Perle, your work history is yours, and it's permanent. When you complete a task, the record of that contribution, the quality tier it hit, how it compared to expert consensus, is written on-chain. It can't be edited, can't be deleted, can't be reassigned. Over time, that becomes an expert credential that compounds.
Compare that to anonymous crowd labor, where a person is essentially fungible. They have no stake in quality because their reputation doesn't exist; each task is disconnected from the last. The incentive structure produces exactly what you'd expect: minimum viable effort.
Our model inverts that. Contributors build verifiable track records. The platform recognizes domain expertise. For example, a radiologist who consistently produces high-quality medical image annotations builds a profile that reflects that. That reputation drives access to higher-value tasks, better compensation, and more meaningful work. It's a flywheel: quality compounds because the incentives reward it.
We've crossed a billion points scored across our annotator network. That's not just a number; it's a billion traceable, attributed data contributions from verified people. That's the foundation of trustworthy AI training data, and it's structurally impossible to replicate with anonymous crowd labor."
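To illustrate how a reputation could compound from attributed contributions, here is a short sketch. The scoring formula, thresholds, and tier names are hypothetical (Perle has not published its scoring logic): an annotator's history is weighted by agreement with expert consensus, and the resulting score gates access to higher-value task tiers.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    task_id: str
    agreed_with_consensus: bool  # did the label match the expert-consensus answer?

def reputation_score(history: list[Contribution], prior: float = 0.5, weight: int = 10) -> float:
    """Simple Bayesian-style average: starts from a neutral prior, compounds with track record."""
    agreements = sum(c.agreed_with_consensus for c in history)
    return (prior * weight + agreements) / (weight + len(history))

def eligible_tier(score: float) -> str:
    """Higher reputation unlocks higher-value task tiers (illustrative thresholds)."""
    if score >= 0.9:
        return "specialist"   # e.g., medical imaging review
    if score >= 0.7:
        return "standard"
    return "probationary"

# A contributor who agrees with consensus on 9 out of every 10 tasks
history = [Contribution(f"t{i}", agreed_with_consensus=(i % 10 != 0)) for i in range(200)]
score = reputation_score(history)
print(f"score={score:.3f}, tier={eligible_tier(score)}")
```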
BeInCrypto: Model collapse gets discussed a lot in research circles but rarely makes it into mainstream AI conversations. Why do you think that is, and should more people be worried?
Ahmed Rashad: "It doesn't make mainstream conversations because it's a slow-moving crisis, not a dramatic one. Model collapse, where AI systems trained increasingly on AI-generated data start to degrade, lose nuance, and compress toward the mean, doesn't produce a headline event. It produces a gradual erosion of quality that's easy to miss until it's severe.
The mechanism is simple: the internet is filling up with AI-generated content. Models trained on that content are learning from their own outputs rather than genuine human knowledge and experience. Each generation of training amplifies the distortions of the last. It's a feedback loop with no natural correction.
Should more people be worried? Yes, particularly in high-stakes domains. When model collapse affects a content recommendation algorithm, you get worse recommendations. When it affects a medical diagnostic model, a legal reasoning system, or a defense intelligence tool, the consequences are categorically different. The margin for degradation disappears.
This is why the human-verified data layer isn't optional as AI moves into critical infrastructure. You need a continuous source of genuine, diverse human intelligence to train against, not AI outputs laundered through another model. We have over a million annotators representing genuine domain expertise across dozens of fields. That diversity is the antidote to model collapse. You can't fix it with synthetic data or more compute."
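The compression-toward-the-mean dynamic Rashad describes can be shown with a toy simulation. This is a deliberately simplified sketch, not a claim about any production model: each "generation" fits a Gaussian to a small sample of the previous generation's outputs, and the fitted spread tends to shrink as the tails of the original distribution are lost.

```python
import random
import statistics

def next_generation(mean: float, stdev: float, sample_size: int = 10) -> tuple[float, float]:
    """Fit a new 'model' (a Gaussian) to a small sample drawn from the previous generation.
    Finite sampling keeps losing the tails, so the fitted spread tends to shrink over generations."""
    sample = [random.gauss(mean, stdev) for _ in range(sample_size)]
    return statistics.mean(sample), statistics.stdev(sample)

random.seed(42)
mean, stdev = 0.0, 1.0  # generation 0 stands in for diverse, human-generated data
for gen in range(1, 101):
    mean, stdev = next_generation(mean, stdev)
    if gen % 20 == 0:
        print(f"generation {gen:3d}: fitted stdev = {stdev:.4f}")
# Over many generations the spread collapses toward zero: later generations reproduce
# an ever-narrower slice of the original distribution, the toy analogue of model collapse.
```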
BeInCrypto: When AI expands from digital environments into physical systems, what fundamentally changes about risk, responsibility, and the standards applied to its development?
Ahmed Rashad: "The irreversibility changes. That's the core of it. A language model that hallucinates produces a wrong answer. You can correct it, flag it, move on. A robotic surgical system operating on a wrong inference, an autonomous vehicle making a bad classification, a drone acting on a misidentified target: those errors don't have undo buttons. The cost of failure shifts from embarrassing to catastrophic.
That changes everything about what standards should apply. In digital environments, AI development has largely been allowed to move fast and self-correct. In physical systems, that model is untenable. You need the training data behind those systems to be verified before deployment, not audited after an incident.
It also changes accountability. In a digital context, it's relatively easy to diffuse responsibility: was it the model? The data? The deployment? In physical systems, particularly where people are harmed, regulators and courts will demand clear answers. Who trained this? On what data? Who validated that data, and under what standards? The companies and governments that can answer those questions will be the ones allowed to operate. Those that can't will face liability they didn't anticipate.
We built Perle for exactly this transition. Human-verified, expert-sourced, on-chain auditable. When AI starts operating in warehouses, operating rooms, and on the battlefield, the intelligence layer beneath it needs to meet a different standard. That standard is what we're building toward."
BeInCrypto: How real is the threat of data poisoning or adversarial manipulation in AI systems today, particularly at the national level?
Ahmed Rashad: "It's real, it's documented, and it's already being treated as a national security priority by people who have access to classified information about it.
DARPA's GARD program (Guaranteeing AI Robustness Against Deception) spent years specifically developing defenses against adversarial attacks on AI systems, including data poisoning. The NSA and CISA issued joint guidance in 2025 explicitly warning that data supply chain vulnerabilities and maliciously modified training data represent credible threats to AI system integrity. These aren't theoretical white papers. They're operational guidance from agencies that don't publish warnings about hypothetical risks.
The attack surface is significant. If you can compromise the training data of an AI system used for threat detection, medical diagnosis, or logistics optimization, you don't need to hack the system itself. You've already shaped how it sees the world. That's a far more elegant and harder-to-detect attack vector than traditional cybersecurity intrusions.
The $300 million contract Scale AI holds with the Department of Defense's CDAO, to deploy AI on classified networks, exists partly because the government understands it cannot use AI trained on unverified public data in sensitive environments. The data provenance question is not academic at that level. It's operational.
What's missing from the mainstream conversation is that this isn't only a government problem. Any enterprise deploying AI in a competitive setting, financial services, pharmaceuticals, critical infrastructure, has an adversarial data exposure it has probably not fully mapped. The threat is real. The defenses are still being built."
BeInCrypto: Why can't a government or a large enterprise just build this verification layer themselves? What's the real answer when someone pushes back on that?
Ahmed Rashad: "Some try. And the ones that try learn quickly what the actual problem is.
Building the technology is the easy part. The hard part is the network. Verified, credentialed domain experts, radiologists, linguists, legal specialists, engineers, scientists, don't just appear because you built a platform for them. You have to recruit them, credential them, build the incentive structures that keep them engaged, and develop the quality consensus mechanisms that make their contributions meaningful at scale. That takes years, and it requires expertise that most government agencies and enterprises simply don't have in-house.
The second problem is diversity. A government agency building its own verification layer will, by definition, draw from a limited and relatively homogeneous pool. The value of a global expert network isn't just credentialing; it's the range of perspective, language, cultural context, and domain specialization that you can only get by operating at real scale across real geographies. We have over a million annotators. That's not something you replicate internally.
The third problem is incentive design. Keeping high-quality contributors engaged over time requires transparent, fair, programmable compensation. Blockchain infrastructure makes that possible in a way that internal systems typically can't replicate: immutable contribution records, direct attribution, and verifiable payment. A government procurement system is not built to do that efficiently.
The honest answer to the pushback is: you're not just buying a tool. You're accessing a network and a credentialing system that took years to build. The alternative isn't 'build it yourself'; it's 'use what already exists or accept the data quality risk that comes with not having it.'"
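As a rough illustration of what "programmable compensation" from attributed records could mean in practice (again a hypothetical sketch; the record fields, tiers, and rates below are invented, and Perle's actual payout logic is not public), payouts can be computed directly from the immutable contribution log, so every payment traces back to specific, attributed work.

```python
from collections import defaultdict

# Each entry mirrors an on-chain contribution record: attributed and immutable.
# Field names and rates are hypothetical, for illustration only.
contribution_log = [
    {"annotator": "rad-0042", "task": "ct-scan-review", "tier": "specialist", "approved": True},
    {"annotator": "ling-0177", "task": "legal-translation", "tier": "standard", "approved": True},
    {"annotator": "rad-0042", "task": "ct-scan-review", "tier": "specialist", "approved": False},
]

RATE_BY_TIER = {"specialist": 4.00, "standard": 1.50, "probationary": 0.50}  # per approved task

def compute_payouts(log: list[dict]) -> dict[str, float]:
    """Pay flows only from approved, attributed records, so every payment is traceable to specific work."""
    payouts: dict[str, float] = defaultdict(float)
    for record in log:
        if record["approved"]:
            payouts[record["annotator"]] += RATE_BY_TIER[record["tier"]]
    return dict(payouts)

print(compute_payouts(contribution_log))  # {'rad-0042': 4.0, 'ling-0177': 1.5}
```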
BeInCrypto: If AI becomes core national infrastructure, where does a sovereign intelligence layer sit in that stack five years from now?
Ahmed Rashad: "Five years from now, I think it looks like what the financial audit function looks like today: a non-negotiable layer of verification that sits between data and deployment, with regulatory backing and professional standards attached to it.
Right now, AI development operates without anything equivalent to financial auditing. Companies self-report on their training data. There's no independent verification, no professional credentialing of the process, no third-party attestation that the intelligence behind a model meets a defined standard. We're in the early equivalent of pre-Sarbanes-Oxley finance, operating largely on trust and self-certification.
As AI becomes critical infrastructure, running power grids, healthcare systems, financial markets, and defense networks, that model becomes untenable. Governments will mandate auditability. Procurement processes will require verified data provenance as a condition of contract. Liability frameworks will attach penalties to failures that could have been prevented by proper verification.
Where Perle sits in that stack is as the verification and credentialing layer, the entity that can produce an immutable, auditable record of what a model was trained on, by whom, and under what standards. That's not a feature of AI development five years from now. It's a prerequisite.
The broader point is that sovereign intelligence isn't a niche concern for defense contractors. It's the foundation that makes AI deployable in any context where failure has real consequences. And as AI expands into more of those contexts, that foundation becomes the most valuable part of the stack."