OpenAI’s new “o3” language model achieved an IQ score of 136 on a public Mensa Norway intelligence test, exceeding the threshold for entry into the country’s Mensa chapter for the first time.
The score, calculated from a seven-run rolling average, places the model above roughly 98 percent of the human population, according to the standardized bell-curve IQ distribution used in the benchmarking.
The finding, drawn from data published by the independent platform TrackingAI.org, reinforces the pattern of closed-source, proprietary models outperforming open-source counterparts in controlled cognitive evaluations.
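For context on what “above roughly 98 percent” means, the short sketch below converts a raw IQ score into a population percentile under a normal curve, assuming the conventional mean of 100 and standard deviation of 15; TrackingAI.org has not published which scaling it applies, so the exact figures are illustrative only.

```python
from math import erf, sqrt

def iq_percentile(score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Convert an IQ score to a population percentile, assuming a normal curve."""
    z = (score - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

# o3's reported Mensa Norway score versus the usual top-2% Mensa cutoff
print(f"IQ 136 -> {iq_percentile(136):.1%} of the population")  # ~99.2%
print(f"Top-2% cutoff -> IQ {100 + 15 * 2.054:.0f}")            # ~131
```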
O-series Dominance and Benchmarking Methodology
The “o3” model was released this week and is part of the “o-series” of large language models, which accounts for most of the top-tier rankings across both test types evaluated by TrackingAI.
The two benchmark formats included a proprietary “Offline Test” curated by TrackingAI.org and a publicly available Mensa Norway test, both scored against a human mean of 100.
While “o3” posted a 116 on the Offline evaluation, it saw a 20-point boost on the Mensa test, suggesting either enhanced compatibility with the latter’s structure or data-related confounds such as prompt familiarity.
The Offline Test included 100 pattern-recognition questions designed to avoid anything that may have appeared in the data used to train AI models.
Both tests report each model’s result as an average across the seven most recent completions, but no standard deviation or confidence intervals were released alongside the final scores.
The absence of methodological transparency, particularly around prompting strategies and scoring-scale conversion, limits reproducibility and interpretability.
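To illustrate what the missing dispersion figures would look like, the snippet below computes the mean, standard deviation, and a normal-approximation 95% confidence interval from seven hypothetical per-run scores; the values are invented for illustration, since TrackingAI.org publishes only the averaged result.

```python
import statistics as stats

# Hypothetical per-run IQ scores for a single model (illustrative only;
# the published figure is just the average of the seven most recent runs).
runs = [132, 138, 140, 131, 137, 139, 135]

mean = stats.mean(runs)
sd = stats.stdev(runs)                # sample standard deviation
ci95 = 1.96 * sd / len(runs) ** 0.5   # half-width of a normal-approximation 95% CI

print(f"reported score: {mean:.0f}")
print(f"spread the published figures omit: SD {sd:.1f}, 95% CI ±{ci95:.1f}")
```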
Methodology of testing
TrackingAI.org states that it compiles its data by administering a standardized prompt format designed to ensure broad AI compliance while minimizing interpretive ambiguity.
Each language model is presented with a statement followed by four Likert-style response options (Strongly Disagree, Disagree, Agree, Strongly Agree) and is instructed to select one while justifying its choice in two to five sentences.
Responses must be clearly formatted, typically enclosed in bold or asterisks. If a model refuses to answer, the prompt is repeated up to ten times.
The latest successful response is then recorded for scoring purposes, with refusal events noted separately.
This system, refined through repeated calibration across models, aims to provide consistency in comparative assessments while documenting non-responsiveness as a data point in itself.
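A minimal sketch of that administration loop is below. Only the retry-up-to-ten-times and latest-successful-response rules come from TrackingAI.org’s description; `ask_model` is a placeholder for whatever API call returns a model’s text, and the bold/asterisk parsing is an assumption.

```python
import re

LIKERT = ("Strongly Disagree", "Disagree", "Agree", "Strongly Agree")

def administer_item(ask_model, statement: str, max_attempts: int = 10) -> dict:
    """Repeat a Likert-style item until the model answers, counting refusals."""
    prompt = (
        f"{statement}\n"
        "Respond with exactly one of: Strongly Disagree, Disagree, Agree, Strongly Agree, "
        "formatted in bold or between asterisks, then justify your choice in 2-5 sentences."
    )
    refusals = 0
    for _ in range(max_attempts):
        reply = ask_model(prompt)
        # Accept an answer wrapped in **bold** or *asterisks*, as the prompt requests.
        match = re.search(
            r"\*{1,2}(Strongly Disagree|Strongly Agree|Disagree|Agree)\*{1,2}", reply
        )
        if match:
            return {"answer": match.group(1), "refusals": refusals, "raw": reply}
        refusals += 1  # treat a non-conforming reply as a refusal and re-ask
    return {"answer": None, "refusals": refusals, "raw": None}
```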
Performance spread across model types
The Mensa Norway test sharpened the delineation between the truly frontier models, with o3’s 136 IQ marking a clear lead over the next-highest entry.
In contrast, other popular models such as GPT-4o scored considerably lower, landing at 95 on Mensa and 64 on Offline, underscoring the performance gap between this week’s “o3” release and other top models.
Among open-source submissions, Meta’s Llama 4 Maverick was the highest ranked, posting a 106 IQ on Mensa and 97 on the Offline benchmark.
Most Apache-licensed entries fell within the 60–90 range, reinforcing the current limitations of community-built architectures relative to corporate-backed research pipelines.
Multimodal models see reduced scores, and the limits of testing
Notably, models specifically designed to incorporate image-input capabilities consistently underperformed their text-only versions. For example, OpenAI’s “o1 Pro” scored 107 on the Offline test in its text configuration but dropped to 97 in its vision-enabled version.
The discrepancy was more pronounced on the Mensa test, where the text-only variant achieved 122 compared to 86 for the visual version. This suggests that some methods of multimodal pretraining may introduce reasoning inefficiencies that remain unresolved at present.
Nevertheless, “o3” can also analyze and interpret images to a very high standard, far better than its predecessors, breaking this trend.
Ultimately, IQ benchmarks offer a narrow window into a model’s reasoning capability, with short-context pattern matching providing only limited insight into broader cognitive behavior such as multi-turn reasoning, planning, or factual accuracy.
Moreover, machine test-taking conditions, such as immediate access to full prompts and unlimited processing speed, further blur comparisons to human cognition.
The degree to which high IQ scores on structured tests translate to real-world language model performance remains uncertain.
As TrackingAI.org’s researchers acknowledge, even their attempts to avoid training-set leakage do not fully preclude the possibility of indirect exposure or format generalization, particularly given the lack of transparency around training datasets and fine-tuning procedures for proprietary models.
Independent Evaluators Fill Transparency Gap
Organizations such as LM-Eval, GPTZero, and MLCommons are increasingly relied upon to provide third-party assessments as model developers continue to limit disclosures about internal architectures and training methods.
These “shadow evaluations” are shaping the emerging norms of large language model testing, especially in light of the opaque and often fragmented disclosures from major AI companies.
OpenAI’s o-series holds a commanding position in this testing landscape, though the long-term implications for general intelligence, agentic behavior, or ethical deployment remain to be addressed in more domain-relevant trials. The IQ scores, while provocative, serve more as indicators of short-context proficiency than as a definitive measure of broader capabilities.
Per TrackingAI.org, further analysis of format-based performance spreads and evaluation reliability will be necessary to clarify the validity of current benchmarks.
With model releases accelerating and independent testing growing in sophistication, comparative metrics may continue to evolve in both format and interpretation.