NVIDIA has launched the H200 NVL PCIe GPU, a new addition to its Hopper architecture aimed at improving AI and high-performance computing (HPC) applications on enterprise servers. Unveiled at the Supercomputing 2024 conference, the H200 NVL offers a lower-power, air-cooled design that is well suited to data centers with flexible configurations, according to NVIDIA.
Advantages of the H200 NVL GPU
The H200 NVL GPU is designed to meet the needs of data centers with enterprise racks of 20kW and below, which predominantly use air cooling. This makes it a practical option for fine-grained node deployment, allowing organizations to allocate their computing power efficiently. The GPU offers a 1.5x increase in memory and a 1.2x increase in bandwidth over its predecessor, the NVIDIA H100 NVL, enabling faster AI model fine-tuning and inference.
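For readers who want to see how installed capacity is reported on a given server, the minimal sketch below queries device memory through the nvidia-ml-py (pynvml) bindings. The bindings and the script are assumptions of this article, not part of NVIDIA's announcement, and the figures printed depend entirely on the hardware present.

```python
# Illustrative only: enumerate GPUs and report total memory via NVML.
# Assumes the nvidia-ml-py package (imported as pynvml) is installed.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes, newer return str
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.1f} GiB total memory")
finally:
    pynvml.nvmlShutdown()
```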
Technological Enhancements
Complementing the hardware capabilities of the H200 NVL is NVIDIA's NVLink technology, which offers GPU-to-GPU communication speeds seven times faster than fifth-generation PCIe. This is particularly useful for demanding tasks such as large language model inference and fine-tuning.
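To check whether GPUs in a particular server are actually bridged over NVLink rather than communicating only over PCIe, a short query against the same pynvml bindings can report per-link state. This is a hedged sketch under the assumptions noted above, not H200 NVL-specific code; GPUs without NVLink simply report no active links.

```python
# Illustrative only: count active NVLink links on the first GPU in the system.
# Assumes nvidia-ml-py (pynvml); adjust the device index as needed.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    active = 0
    for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
        try:
            if pynvml.nvmlDeviceGetNvLinkState(handle, link) == pynvml.NVML_FEATURE_ENABLED:
                active += 1
        except pynvml.NVMLError:
            break  # link index not present on this device
    print(f"Active NVLink links on GPU 0: {active}")
finally:
    pynvml.nvmlShutdown()
```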
Industry Adoption and Use Cases
Enterprises across various sectors are already leveraging the H200 NVL for a range of applications. Dropbox uses NVIDIA's accelerated computing to enhance its AI and machine learning capabilities, while the University of New Mexico applies it in research areas such as genomics and climate modeling. These use cases underscore the GPU's potential to drive efficiency and innovation in AI and HPC workloads.
Availability and Ecosystem Support
Major technology companies, including Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro, are expected to support the H200 NVL in a variety of configurations. NVIDIA's global systems partners will begin offering platforms featuring the H200 NVL in December. Additionally, NVIDIA is developing an Enterprise Reference Architecture to help partners and customers deploy high-performance AI infrastructure at scale.
For further details, visit the official NVIDIA blog.