Rongchai Wang
Mar 10, 2026 17:41
NVIDIA’s Jetson platform allows enterprise-grade AI to run locally on industrial equipment, from Caterpillar excavators to dual-arm robots, without cloud dependency.
NVIDIA’s push to move AI processing from data centers to physical machines is gaining serious traction. The company’s Jetson platform now runs generative AI models locally on everything from eight-ton excavators to dual-arm robots, eliminating the cloud latency and ongoing compute costs that have plagued industrial AI deployments.
At CES earlier this year, Caterpillar demonstrated its Cat AI Assistant running on Jetson Thor inside a 306 CR mini-excavator, a machine small enough to fit in a shipping container yet complex enough to require extensive operator training. The system uses Qwen3 4B for natural language processing and NVIDIA Nemotron speech models, all executing locally with no internet connection required.
Why Edge Matters for Industrial AI
The shift addresses a fundamental tension in industrial AI. Cloud deployments work fine for chatbots, but physical systems need something different: sub-millisecond response times, consistent behavior regardless of network conditions, and the ability to operate in environments where connectivity isn't guaranteed.
Memory shortages across the semiconductor industry have complicated matters further, driving up costs for discrete-component approaches. Jetson's system-on-module design bundles compute and memory together, simplifying hardware sourcing for manufacturers.
NVIDIA stock traded at $178.03 on March 10, down 1.7% on the day, with the company's market cap holding at $4.57 trillion. The Jetson business represents a smaller but strategically important piece of NVIDIA's broader AI infrastructure play.
Real-World Deployments Accelerating
The developer ecosystem around Jetson has expanded rapidly. Franka Robotics ran the NVIDIA GR00T N1.6 vision-language-action model entirely onboard its FR3 Duo dual-arm system at CES: perception to motion, no task scripting required.
NVIDIA's own GEAR Lab trained a humanoid controller on 100 million frames of motion-capture data, then deployed it on a physical robot where the kinematic planner runs on Jetson Orin at roughly 12 milliseconds per pass. The policy loop executes at 50 Hz, all onboard.
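The arithmetic here is what makes onboard control feasible: a 50 Hz loop leaves a 20 ms budget per cycle, and a planner pass of roughly 12 ms fits inside it. The fixed-rate loop can be sketched generically as below; this is an illustration of the timing pattern, not NVIDIA's code, and `plan_step` is a hypothetical stand-in for the kinematic planner.

```python
import time

CONTROL_HZ = 50              # policy loop rate cited in the article
PERIOD_S = 1.0 / CONTROL_HZ  # 20 ms budget per cycle

def plan_step(state):
    """Hypothetical stand-in for the kinematic planner (~12 ms per pass)."""
    time.sleep(0.012)  # simulate planner compute time
    return state + 1   # placeholder action

def run_loop(cycles):
    """Run a fixed-rate control loop, counting any missed 20 ms deadlines."""
    state = 0
    deadline_misses = 0
    for _ in range(cycles):
        start = time.monotonic()
        state = plan_step(state)
        elapsed = time.monotonic() - start
        if elapsed > PERIOD_S:
            deadline_misses += 1        # planner overran the cycle budget
        else:
            time.sleep(PERIOD_S - elapsed)  # sleep off the slack to hold 50 Hz
    return state, deadline_misses
```

The key design point is that the sleep absorbs only the slack left after compute, so the loop rate stays fixed as long as each planner pass stays under the 20 ms budget.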
A UIUC robotics team built a matcha-making robot on Jetson Thor that won first place at an NVIDIA embodied AI hackathon. NYU's Center for Robotics recently ran its YOR robot on the platform, showing improved generalization on pick-and-place tasks.
Model Performance Numbers
Jetson Thor delivers 52 tokens per second for Mistral 3 models at single concurrency, scaling to 273 tokens per second with eight concurrent requests. The Qwen 3.5-35B-A3B model reasons at 35 tokens per second. Physical Intelligence's PI 0.5 model generates 120 action tokens per second for robotics applications.
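The Mistral 3 figures illustrate the usual batching trade-off: aggregate throughput rises with concurrency while each individual request slows down. A back-of-envelope sketch using the quoted numbers (the helper function is ours, not a benchmark tool, and assumes the aggregate splits evenly):

```python
def per_request_rate(aggregate_tps, concurrency):
    """Tokens per second each request sees if aggregate throughput splits evenly."""
    return aggregate_tps / concurrency

single = per_request_rate(52, 1)    # one request gets the full 52 tok/s
batched = per_request_rate(273, 8)  # ~34.1 tok/s per request at 8 concurrent
speedup = 273 / 52                  # ~5.25x aggregate throughput from batching
```

In other words, batching eight requests yields over five times the total token output at the cost of roughly a one-third slowdown per request, which matters when a robot's control stack is the single consumer.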
ABB Robotics announced a partnership with NVIDIA on March 9 focused on industrial-grade physical AI deployment. Texas Instruments followed on March 5 with its own collaboration targeting next-generation physical AI systems.
NVIDIA plans to showcase these capabilities at GTC 2026 next month, including a panel on industrial autonomy. For developers already building on the platform, the message is clear: the models are ready, the hardware exists, and the question has shifted from whether edge AI works to how fast it can scale.
Image source: Shutterstock