
Supermicro's Revolutionary Data Center Building Block Solutions® (DCBBS) Simplify and Shorten Global-Scale Buildouts of AI/IT Liquid-Cooled Data Centers

  • Easy-to-design, easy-to-build, easy-to-deploy, and easy-to-operate solution for all critical computing and cooling infrastructure
  • Quick time-to-deployment and quick time-to-online with everything required to fully outfit AI/IT data centers
  • Saves cost with a modularized building-block architecture spanning system, rack, and data center scale

Supermicro's DLC-2, the Next-Generation Direct Liquid-Cooling Solution, Aims to Reduce Data Center Power, Water, Noise, and Space, Saving up to 40% on Electricity Cost and Lowering TCO by up to 20%

  • Up to 40% data center power savings
  • Faster time-to-deployment and reduced time-to-online with an end-to-end liquid-cooling solution
  • Up to 40% lower water consumption, with warm-water cooling now available at inlet temperatures of up to 45°C, reducing the need for chillers

Supermicro Delivers Best-In-Class Cost and Density Per Server Instance with the New MicroCloud, a Multi-Node Solution for Lightweight Entry Class Workloads Powered by AMD EPYC™ 4005 Series Processors

  • Delivers 3.3x more density than traditional 1U servers, maximizing savings on data center rack space and power and lowering overall TCO
  • Offers up to 10 physically separated server nodes per chassis, 2,080 cores per 42U rack, and a TDP as low as 65W per server
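The density figures above can be reconciled with a quick back-of-the-envelope check. Note the 3U chassis height and 16-core EPYC 4005 SKU used below are assumptions made for this sketch, not figures stated in the release:

```python
# Hedged sanity check of the MicroCloud density figures; chassis height and
# per-node core count are assumptions, not values stated in the release.
rack_u = 42             # standard rack height (from the release's "42U rack")
chassis_u = 3           # assumed MicroCloud chassis height (3U)
nodes_per_chassis = 10  # from the release
cores_per_node = 16     # assumed top-end AMD EPYC 4005 core count

# 10 nodes in 3U vs. 1 node per U for a traditional 1U server:
density_vs_1u = nodes_per_chassis / chassis_u
print(round(density_vs_1u, 1))  # -> 3.3

# 13 such chassis fill 39U of a 42U rack, leaving 3U for networking (assumption):
chassis_per_rack = 13
cores_per_rack = chassis_per_rack * nodes_per_chassis * cores_per_node
print(cores_per_rack)  # -> 2080
```

Under these assumptions, both the 3.3x density claim and the 2,080-cores-per-rack figure fall out of the same chassis layout.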

San Jose, Calif. -- May 13, 2025 -- Supermicro, Inc.


Supermicro Expands Enterprise AI Portfolio of over 100 GPU-Optimized Systems Supporting the Upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA H200 NVL Platform

With a Broad Range of Form Factors, Supermicro's Expanded Portfolio of PCIe GPU Systems Scales to the Most Demanding Data Center Requirements, from Systems with up to 10 Double-Width GPUs to Low-Power Intelligent Edge Systems, Providing Maximum Flexibility and Optimization for Enterprise AI LLM-Inference Workloads