Caroline Bishop
Apr 08, 2026 17:45
OpenAI announces a new fellowship program for external researchers focused on AI safety and alignment, running September 2026 through February 2027.

OpenAI is opening its doors to external researchers with a new Safety Fellowship program aimed at advancing independent work on AI alignment and safety challenges. Applications are now open, with a May 3 deadline.
The five-month program runs from September 14, 2026 through February 5, 2027, targeting researchers, engineers, and practitioners who want to tackle safety questions affecting both current and future AI systems. OpenAI has partnered with Constellation to provide workspace in Berkeley, though remote participation is an option.
What OpenAI Wants
The company outlined priority research areas including safety evaluations, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. They're specifically looking for work that is "empirically grounded, technically strong, and relevant to the broader research community."
Fellows won't get internal system access, a notable limitation, but will receive API credits, compute support, a monthly stipend, and mentorship from OpenAI staff. The expectation is clear: produce something tangible by the program's end, whether that's a research paper, benchmark, or dataset.
Who Should Apply
OpenAI is casting a wide net on backgrounds. Computer science is the obvious one, but they're also welcoming candidates from social science, cybersecurity, privacy, and human-computer interaction fields. The company explicitly stated they "prioritize research ability, technical judgment, and execution over specific credentials."
Letters of reference are required. Successful candidates will be notified by July 25.
The Bigger Picture
This fellowship arrives as AI safety concerns have moved from academic debate to mainstream regulatory discussion. OpenAI has faced criticism over the years for allegedly deprioritizing safety research in favor of capability development, a tension that led to high-profile departures from its safety team.
The program represents an attempt to cultivate external safety research talent while potentially deflecting some of that criticism. Whether it signals a genuine shift in priorities or serves primarily as an optics play remains to be seen.
For researchers interested in AI safety work with access to OpenAI resources and mentorship, applications close May 3 on the program's official page. Questions can be directed to [email protected].
Image source: Shutterstock
