
Supermicro Adds Portfolio for Next Wave of AI with NVIDIA Blackwell Ultra Solutions, Featuring NVIDIA HGX™ B300 NVL16 and GB300 NVL72

Air- and Liquid-Cooled Optimized Solutions with Enhanced AI FLOPs and HBM3e Capacity, with up to 800 Gb/s Direct-to-GPU Networking Performance

San Jose, Calif., GTC 2025 Conference – March 18, 2025 – Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is announcing new systems and rack solutions powered by the NVIDIA Blackwell Ultra platform, featuring the NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72. These new Supermicro and NVIDIA AI solutions strengthen leadership in AI by delivering breakthrough performance for the most compute-intensive AI workloads, including AI reasoning, agentic AI, and video inference applications.

"At Supermicro, we are excited to continue our long-standing partnership with NVIDIA to bring the latest AI technology to market with the NVIDIA Blackwell Ultra Platforms," said Charles Liang, president and CEO, Supermicro. "Our Data Center Building Block Solutions® approach has streamlined the development of new air and liquid-cooled systems, optimized to the thermals and internal topology of the NVIDIA HGX B300 NVL16 and GB300 NVL72. Our advanced liquid-cooling solution delivers exceptional thermal efficiency, operating with 40℃ warm water in our 8-node rack configuration, or 35℃ warm water in double-density 16-node rack configuration, leveraging our latest CDUs. This innovative solution reduces power consumption by up to 40% while conserving water resources, providing both environmental and operational cost benefits for enterprise data centers."

For more information, please visit https://www.supermicro.com/en/accelerators/nvidia

NVIDIA's Blackwell Ultra platform is built to conquer the most demanding cluster-scale AI applications by overcoming the performance bottlenecks caused by limited GPU memory capacity and network bandwidth. NVIDIA Blackwell Ultra delivers an unprecedented 288GB of HBM3e memory per GPU, along with dramatically higher AI FLOPS for training and inference of the largest AI models. Integration with NVIDIA Quantum-X800 InfiniBand and Spectrum-X™ Ethernet doubles the compute fabric bandwidth, to up to 800 Gb/s.

Supermicro integrates NVIDIA Blackwell Ultra into two types of solutions: Supermicro NVIDIA HGX B300 NVL16 systems, designed for every data center, and the NVIDIA GB300 NVL72, built on NVIDIA's next-generation Grace Blackwell architecture.

Supermicro NVIDIA HGX B300 NVL16 system

Supermicro NVIDIA HGX systems are the industry-standard building blocks for AI training clusters, with an 8-GPU NVIDIA NVLink™ domain and a 1:1 GPU-to-NIC ratio for high-performance clusters. Supermicro's new NVIDIA HGX B300 NVL16 system builds upon this proven architecture with thermal design advancements in both liquid-cooled and air-cooled versions.

For the B300 NVL16, Supermicro introduces a brand-new 8U platform to maximize the output of the NVIDIA HGX B300 NVL16 board. Each GPU is connected in a 1.8TB/s, 16-GPU NVLink domain, providing a massive 2.3TB of HBM3e per system. The Supermicro NVIDIA HGX B300 NVL16 also improves networking performance by integrating 8 NVIDIA ConnectX®-8 NICs directly into the baseboard, supporting 800 Gb/s node-to-node speeds via NVIDIA Quantum-X800 InfiniBand or Spectrum-X™ Ethernet.


Supermicro NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs in a single rack with exascale computing capacity, featuring upgraded HBM3e capacity of over 20TB interconnected in a 1.8TB/s, 72-GPU NVLink domain. The NVIDIA ConnectX®-8 SuperNIC provides 800Gb/s speeds for both GPU-to-NIC and NIC-to-network communication, drastically improving cluster-level performance of the AI compute fabric.
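As a rough sanity check (a back-of-the-envelope sketch, not from Supermicro's announcement), the "over 20TB" figure follows from the per-GPU capacity quoted earlier: 288GB of HBM3e per Blackwell Ultra GPU across 72 GPUs in the rack.

```python
# Back-of-the-envelope check of the GB300 NVL72 aggregate memory figure.
# Assumes 288 GB of HBM3e per Blackwell Ultra GPU (per the platform description above)
# and 72 GPUs per rack; uses 1 TB = 1000 GB for a vendor-style round number.

HBM3E_PER_GPU_GB = 288  # HBM3e capacity per Blackwell Ultra GPU
GPUS_PER_RACK = 72      # GPUs in one GB300 NVL72 rack

total_gb = HBM3E_PER_GPU_GB * GPUS_PER_RACK
print(f"Aggregate HBM3e: {total_gb} GB (~{total_gb / 1000:.1f} TB)")
# Prints: Aggregate HBM3e: 20736 GB (~20.7 TB), consistent with "over 20TB" per rack
```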

Liquid-Cooled AI Data Center Building Block Solutions

Expertise in liquid cooling, data center deployment, and a building-block approach positions Supermicro to deliver NVIDIA Blackwell Ultra with industry-leading time-to-deployment. Supermicro offers a complete liquid-cooling portfolio, including newly developed direct-to-chip cold plates, a 250kW in-rack CDU, and cooling towers.

Supermicro's on-site rack deployment service helps enterprises build data centers from the ground up, including the planning, design, power-up, validation, testing, installation, and configuration of racks, servers, switches, and other networking equipment to meet the organization's specific needs.

8U Supermicro NVIDIA HGX B300 NVL16 system – Designed for every data center, with a streamlined, thermally optimized chassis and 2.3TB of HBM3e memory per system.

NVIDIA GB300 NVL72 – An exascale AI supercomputer in a single rack, with essentially double the HBM3e memory capacity and networking speed of its predecessor.

Supermicro at GTC 2025

GTC visitors can find Supermicro in San Jose, CA, from March 17-21, 2025. Visit us at booth #1115 to see the X14/H14 B200, B300, and GB300 systems on display, along with our rack-scale liquid-cooled solutions.