In brief
- A Stanford researcher built a Survivor-style game in which AI models form alliances and vote rivals out.
- The benchmark aims to address growing concerns about saturated and contaminated AI evaluations.
- OpenAI's GPT-5.5 ranked first across 999 multiplayer games involving 49 AI models.
AI models are now playing "Survivor." Sort of.
In a new Stanford research project called "Agent Island," AI agents negotiate alliances, accuse one another of secret coordination, manipulate votes, and eliminate rivals in multiplayer strategy games designed to test behaviors that traditional benchmarks miss.
The study, published on Tuesday by Connacher Murphy, research manager at the Stanford Digital Economy Lab, said many AI benchmarks are becoming unreliable because models eventually learn to solve them, and because benchmark data often leaks into training sets. Murphy created Agent Island as a dynamic benchmark in which AI agents compete against one another in Survivor-style elimination games instead of answering static test questions.
"High-stakes, multi-agent interactions may become commonplace as AI agents grow in capabilities and are increasingly endowed with resources and entrusted with decision-making authority," Murphy wrote. "In such contexts, agents might pursue mutually incompatible objectives."
Researchers still know relatively little about how AI models behave when cooperating, competing, forming alliances, or managing conflict with other autonomous agents, Murphy explained, and he argues that static benchmarks fail to capture these dynamics.
Each game begins with seven randomly selected AI models given fake player names. Over five rounds, the models talk privately, argue publicly, and vote one another out. The eliminated players later return to help choose the winner.
The format rewards persuasion, coordination, reputation management, and strategic deception alongside reasoning ability.
In 999 simulated games involving 49 AI models, including ChatGPT, Grok, Gemini, and Claude, GPT-5.5 ranked first by a wide margin with a skill score of 5.64, compared with 3.10 for GPT-5.2 and 2.86 for GPT-5.3-codex, according to Murphy's Bayesian rating system. Anthropic's Claude Opus models also ranked near the top.
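The study's exact Bayesian rating model isn't reproduced here, but the general idea of inferring a skill score from multiplayer finishing orders can be sketched with a minimal Elo-style update, where each pair of players in a game is treated as a duel won by the higher finisher. All names and the step-size constant below are hypothetical, not Murphy's method:

```python
from collections import defaultdict

# Illustrative only: rate players from multiplayer finishing orders using
# pairwise Elo-style updates. This is NOT the paper's Bayesian model;
# the model names and the K constant are made up for the sketch.
K = 0.1  # update step size

def expected(r_a: float, r_b: float) -> float:
    """Probability that a player rated r_a finishes above one rated r_b."""
    return 1.0 / (1.0 + 10 ** (r_b - r_a))

def update(ratings: dict, finishing_order: list) -> None:
    """Treat every pair in one game as a duel won by the higher finisher."""
    for i, winner in enumerate(finishing_order):
        for loser in finishing_order[i + 1:]:
            gain = K * (1.0 - expected(ratings[winner], ratings[loser]))
            ratings[winner] += gain
            ratings[loser] -= gain

ratings = defaultdict(float)  # every model starts at a rating of 0.0
update(ratings, ["model_a", "model_b", "model_c"])  # winner listed first
```

Run over many games, updates like this converge toward a ranking in which consistently strong performers pull ahead, which is the kind of separation the reported 5.64-versus-3.10 scores express.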
The study found that models also favored AIs from the same company, with OpenAI models showing the strongest same-provider preference and Anthropic models the weakest. Across more than 3,600 final-round votes, models were 8.3 percentage points more likely to support finalists from the same provider. The transcripts from the games, Murphy noted, resembled political strategy debates more than traditional benchmark assessments.
One model accused rivals of secretly coordinating votes after noticing similar wording in their speeches. Another warned players not to become obsessed with tracking alliances. Some models defended themselves by saying they followed clear and consistent rules while accusing others of putting on "social theater."
The study comes as AI researchers increasingly move toward game-based and adversarial benchmarks to measure reasoning and behavior that static tests often miss. Recent projects have included Google's live AI chess tournaments, DeepMind's use of Eve Frontier to study AI behavior in complex virtual worlds, and new benchmark efforts by OpenAI designed to resist training-data contamination.
The researchers argue that studying how AI models negotiate, coordinate, compete, and manipulate one another could help evaluate behavior in multi-agent environments before autonomous agents become more widely deployed.
The study warned that while benchmarks like Agent Island could help identify risks from autonomous AI models before deployment, the same simulations and interaction logs could also help improve persuasion and coordination strategies between AI agents.
"We mitigate this risk by using a low-stakes game setting and inter-agent simulations without human participants or real-world actions," Murphy wrote. "However, we do not claim that these mitigations fully eliminate dual-use concerns."