Iris Coleman
Apr 01, 2026 15:38
NVIDIA’s Blackwell Ultra GPUs set new MLPerf Inference records with 2.7x faster DeepSeek-R1 processing, hitting 2.5 million tokens per second across 288 GPUs.

NVIDIA’s Blackwell Ultra GPUs have delivered record-breaking performance in the latest MLPerf Inference v6.0 benchmarks, achieving up to 2.7x faster token throughput compared to submissions just six months ago. The results, published April 1, 2026, push NVIDIA’s cumulative MLPerf wins to 291, nine times more than all other submitters combined since 2018.
The standout figure: four GB300 NVL72 systems running 288 Blackwell Ultra GPUs processed 2.49 million tokens per second on DeepSeek-R1 in offline mode. That is the largest GPU configuration ever submitted to any MLPerf Inference benchmark.
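As a sanity check on the aggregate number, the implied per-GPU offline throughput follows from simple division. The figures below are taken from the result above; the script is purely illustrative arithmetic, not an MLPerf tool:

```python
# Implied per-GPU offline throughput from the aggregate DeepSeek-R1 result.
total_tokens_per_sec = 2_490_000  # four GB300 NVL72 systems, offline mode
num_gpus = 288                    # 72 Blackwell Ultra GPUs per rack x 4 racks

per_gpu = total_tokens_per_sec / num_gpus
print(f"{per_gpu:,.0f} tokens/s per GPU")  # ~8,646 tokens/s per GPU offline
```

That per-GPU offline rate sits slightly above the 8,064 tokens per second server-scenario figure discussed below, which is expected: offline mode has no latency constraints.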
Software Optimization Drives Major Gains
What’s particularly striking isn’t just raw hardware muscle; it’s how much performance NVIDIA extracted from the same silicon through software improvements. The GB300 NVL72 delivered 8,064 tokens per second per GPU in DeepSeek-R1’s server scenario, up from 2,907 tokens six months prior. Same chips, 2.77x more output.
The performance jump came from several TensorRT-LLM improvements: faster fused kernels, optimized attention data parallelism, and better load balancing across ranks. For the new DeepSeek-R1 Interactive scenario, which demands 5x faster minimum token rates than standard server deployments, NVIDIA deployed disaggregated serving, Wide Expert Parallel sharding, and multi-token prediction to hit 250,634 tokens per second.
Partner Nebius achieved the 2.7x speedup, demonstrating how NVIDIA’s open software stack enables ecosystem optimization. The practical implication? Token production costs dropped by over 60% on existing infrastructure.
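The speedup and cost figures are consistent with each other. A quick back-of-the-envelope check, using the per-GPU throughput numbers reported above and assuming hardware cost per GPU-second stays fixed:

```python
# Per-GPU server-scenario throughput on DeepSeek-R1, six months apart.
old_tps = 2_907  # tokens/s per GPU, prior submission round
new_tps = 8_064  # tokens/s per GPU, MLPerf Inference v6.0

speedup = new_tps / old_tps
# With unchanged hardware cost per GPU-second, cost per token scales as 1/speedup.
cost_reduction = 1 - old_tps / new_tps

print(f"speedup: {speedup:.2f}x")                    # 2.77x
print(f"cost per token down {cost_reduction:.0%}")   # 64%, i.e. "over 60%"
```

In other words, the "over 60%" cost claim is just the throughput gain restated per token, not a separate measurement.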
First and Only Across New Benchmarks
MLPerf v6.0 introduced several demanding new tests, and NVIDIA was the sole platform to submit results across all of them:
- Qwen3-VL-235B-A22B: The first multimodal vision-language model in MLPerf, hitting 79 samples/sec offline
- GPT-OSS-120B: OpenAI’s 120B-parameter MoE reasoning model, achieving 1.05 million tokens/sec offline
- WAN-2.2-T2V-A14B: Text-to-video generation at 21 seconds latency in single-stream mode
- DLRMv3: Transformer-based recommendation benchmark at 104,637 samples/sec
The multimodal Qwen3-VL submission used the open-source vLLM framework, while video generation ran on TensorRT-LLM VisualGen, both indicating how quickly the open-source ecosystem is building optimized pipelines for next-generation workloads.
Partner Ecosystem Shows Depth
Fourteen partners submitted results on the NVIDIA platform this round, the largest partner participation for any single platform in MLPerf history. ASUS, Cisco, CoreWeave, Dell, Google Cloud, HPE, Lenovo, and Supermicro all delivered competitive performance numbers, suggesting the Blackwell architecture has matured enough for broad enterprise deployment.
This breadth matters for AI infrastructure buyers evaluating vendor lock-in risk. The results arrived the same week NVIDIA announced a $2 billion strategic investment in Marvell Technology to expand AI infrastructure offerings, signaling the company’s push to position itself as the foundational layer for AI computing rather than a single-vendor solution.
What Comes Next
NVIDIA is leading development of MLPerf Endpoints, a new benchmark designed to measure real-world API performance under production traffic conditions. Current chip-level benchmarks can’t capture latency spikes, queuing behavior, or throughput degradation under sustained load, the metrics that actually determine AI service economics.
For data center operators running inference at scale, the message from these results is clear: software optimization on existing Blackwell hardware may deliver more cost reduction than waiting for next-generation silicon. A 60% reduction in per-token costs changes the economics of deploying reasoning models like DeepSeek-R1 in production.
Image source: Shutterstock
