Felix Pinkston
Mar 25, 2026 17:33
OpenAI expands its safety efforts with a brand new Security Bug Bounty program centered on agentic dangers, immediate injection assaults, and knowledge exfiltration in AI merchandise.

OpenAI has launched a public Security Bug Bounty program geared toward figuring out AI abuse and security dangers throughout its product suite, marking a big growth of the corporate’s method to securing more and more autonomous AI methods. This system, introduced March 25, 2026, particularly targets vulnerabilities in agentic AI merchandise that might result in real-world hurt.
The brand new initiative enhances OpenAI’s current Safety Bug Bounty by accepting submissions that pose significant abuse and security dangers even after they do not qualify as conventional safety vulnerabilities. Researchers who determine points could have their submissions triaged by each Security and Safety groups, with experiences routed between applications primarily based on scope.
Agentic Dangers Take Middle Stage
This system’s scope reveals OpenAI’s rising concern about AI brokers working with growing autonomy. Key focus areas embrace third-party immediate injection assaults the place malicious textual content can hijack a consumer’s agent—together with Browser, ChatGPT Agent, and related merchandise—to carry out dangerous actions or leak delicate data. To qualify for rewards, such assaults should be reproducible at the very least 50% of the time.
Different in-scope vulnerabilities embrace agentic merchandise performing disallowed actions on OpenAI’s web site at scale, publicity of proprietary data associated to mannequin reasoning, and bypasses of anti-automation controls or account belief indicators.
What’s Out of Scope
Normal jailbreaks will not qualify for this program. OpenAI explicitly excludes basic content-policy bypasses with out demonstrable security affect—getting a mannequin to make use of impolite language or return simply searchable data would not depend. Nevertheless, the corporate runs periodic non-public campaigns centered on particular hurt sorts, together with latest applications focusing on biorisk content material in ChatGPT Agent and GPT-5.
The corporate will think about edge instances on a case-by-case foundation if researchers determine flaws that create direct paths to consumer hurt with actionable remediation steps.
Trade Implications
This launch indicators that main AI builders are taking agentic security severely as these methods acquire capabilities to browse the online, execute code, and work together with exterior providers. The Mannequin Context Protocol (MCP) dangers talked about in this system scope recommend OpenAI is especially centered on how brokers work together with third-party instruments and knowledge sources.
For the broader AI ecosystem, this program establishes a framework that different corporations might comply with as autonomous brokers change into extra prevalent. Researchers interested by taking part can apply by way of OpenAI’s Bugcrowd portal, with the corporate emphasizing its dedication to working alongside moral hackers to safe AI methods earlier than vulnerabilities will be exploited at scale.
Picture supply: Shutterstock
