Caroline Bishop
Feb 27, 2026 17:35
NVIDIA benchmarks show its Run:ai platform doubles GPU utilization while cutting latency by up to 61x for enterprise AI deployments running NIM inference microservices.
NVIDIA has released comprehensive benchmarking data showing its Run:ai orchestration platform can double GPU utilization for enterprises running AI inference workloads, while simultaneously slashing first-request latency by up to 61x compared with traditional cold-start deployments.
The findings come as organizations struggle with a fundamental tension in LLM deployment: small embedding models might consume only a few gigabytes of GPU memory, while 70B+ parameter models demand multiple GPUs. Without intelligent orchestration, teams face an ugly choice between overprovisioning (burning money) and underprovisioning (degrading user experience).
The Numbers That Matter
NVIDIA tested three NIM microservices (a 7B LLM, a 12B vision-language model, and a 30B mixture-of-experts model) on H100 GPUs. The results challenge conventional deployment wisdom.
Using GPU fractions with bin packing, three models that previously required three dedicated H100s were consolidated onto roughly 1.5 H100s. Each NIM retained 91-100% of its single-GPU throughput. Mistral-7B fully matched its dedicated-GPU performance at 834 tokens per second with long-context input.
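The bin-packing idea behind this consolidation can be sketched in a few lines. The snippet below uses first-fit-decreasing packing over hypothetical per-model memory footprints on 80 GB H100s; the model names and sizes are illustrative assumptions, not NVIDIA's measured figures, and real schedulers weigh far more than memory.

```python
def pack_models(models, gpu_capacity_gb=80.0):
    """First-fit-decreasing bin packing of model memory footprints onto GPUs.

    models: dict of model name -> GPU memory required (GB).
    Returns a list of GPUs, each a list of (name, gb) placements.
    """
    gpus = []  # each entry: [remaining_gb, [(name, gb), ...]]
    for name, gb in sorted(models.items(), key=lambda kv: -kv[1]):
        for gpu in gpus:
            if gpu[0] >= gb:          # fits in an already-open GPU
                gpu[0] -= gb
                gpu[1].append((name, gb))
                break
        else:                         # no room anywhere: open a new GPU
            gpus.append([gpu_capacity_gb - gb, [(name, gb)]])
    return [placements for _, placements in gpus]

# Hypothetical footprints (illustrative only):
demo = {"llm-7b": 18.0, "vlm-12b": 28.0, "moe-30b": 62.0}
placement = pack_models(demo)
print(len(placement), "GPUs used")  # three models share two 80 GB GPUs
```

Without fractional sharing, each of these three models would occupy its own dedicated GPU; packing by memory footprint is what lets several small models ride alongside a large one.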
Dynamic GPU fractions pushed performance further under heavy load. Nemotron-3-Nano-30B sustained 1,025 tokens per second at 256 concurrent requests, compared with a static-fraction ceiling of just 721 tokens per second at four concurrent requests before instability. That is a 1.4x throughput improvement when traffic spikes hit.
Cold Start Problem Solved
The most dramatic gains came from GPU memory swap, which keeps models in CPU memory and dynamically moves weights to the GPU as requests arrive. Scale-from-zero cold starts took 75-93 seconds for first-token generation at 128-token input. GPU memory swap cut that to 1.23-1.61 seconds, a 55-61x improvement.
For longer 2,048-token prompts, cold-start times of 158-180 seconds dropped to under four seconds with swap enabled.
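The swap mechanism described above can be illustrated with a toy cache policy: weights stay resident in host memory, and a request either hits warm weights already on the GPU or triggers a host-to-GPU copy, evicting the least-recently-used model when the pool is full. This is a conceptual sketch under assumed semantics, not Run:ai's actual implementation, and the model names and sizes are made up.

```python
from collections import OrderedDict

class GpuMemorySwap:
    """Toy LRU sketch of GPU memory swap: weights always resident in
    host (CPU) memory; requested models are copied into a fixed-size
    GPU pool, evicting the least-recently-used model when space runs
    out. Illustrative only."""

    def __init__(self, gpu_capacity_gb):
        self.capacity = gpu_capacity_gb
        self.cpu_store = {}            # name -> size_gb (always resident)
        self.gpu_pool = OrderedDict()  # name -> size_gb, in LRU order

    def register(self, name, size_gb):
        self.cpu_store[name] = size_gb

    def request(self, name):
        size = self.cpu_store[name]
        if name in self.gpu_pool:              # warm hit: no copy needed
            self.gpu_pool.move_to_end(name)
            return "hit"
        while sum(self.gpu_pool.values()) + size > self.capacity:
            self.gpu_pool.popitem(last=False)  # evict LRU back to host
        self.gpu_pool[name] = size             # "copy" weights host -> GPU
        return "swapped-in"

pool = GpuMemorySwap(gpu_capacity_gb=80)
for name, gb in [("embed-small", 8), ("llm-7b", 30), ("moe-30b", 62)]:
    pool.register(name, gb)
print(pool.request("moe-30b"))  # first request pays the host->GPU copy
print(pool.request("moe-30b"))  # later requests hit warm weights
```

The benchmark's point is that the host-to-GPU copy (seconds) is far cheaper than a scale-from-zero cold start (minutes), so sporadically used models can stay "parked" in CPU memory at near-zero GPU cost.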
Market Context
NVIDIA stock trades at $181.24, down 2.42% over the past 24 hours, with a market cap of $4.49 trillion. The company has been aggressively expanding its AI infrastructure partnerships. Red Hat and NVIDIA launched a co-engineered AI Factory platform on February 25, while VAST Data announced a platform tie-up on February 26.
Run:ai's fractional GPU capabilities have shown production-ready results in cloud provider benchmarks. Testing with Nebius demonstrated support for 2x more concurrent users on existing hardware.
What This Means for Enterprise AI
The practical implication: organizations can deploy more models on fewer GPUs without sacrificing latency SLAs. Static fractions work well for predictable, low-concurrency workloads. Dynamic fractions handle variable traffic and high concurrency, where KV-cache growth creates memory pressure.
GPU memory swap eliminates the penalty for keeping rarely-accessed models available, which is critical for organizations running diverse model portfolios where some endpoints see only sporadic traffic.
NVIDIA has published deployment guides for running NIM as native inference workloads on Run:ai. The platform supports single-GPU, multi-GPU, and fractional deployments with Kubernetes-native traffic balancing and autoscaling.
Image source: Shutterstock

