The Next Leap in AI Infrastructure Is Here
Today’s advanced AI models are rapidly changing our lives, and accelerated compute infrastructure is evolving at unprecedented speed across every market segment. Flexible, robust, and massively scalable infrastructure built on next-generation GPUs is enabling a new chapter of AI.
In close partnership with NVIDIA, Supermicro offers the broadest selection of NVIDIA-Certified Systems, delivering leading performance and efficiency for everything from small enterprise deployments to massive, unified AI training clusters powered by the new NVIDIA H100 Tensor Core GPU.
Together, we achieve up to nine times the training performance of the previous generation on some of the most challenging AI models, cutting a week of training time down to just 20 hours. Supermicro systems with the new H100 PCIe and HGX H100 GPUs, as well as the newly announced L40 GPU, bring PCIe Gen 5 connectivity, fourth-generation NVLink, and NVLink Network for scale-out. The new CNX cards add GPUDirect RDMA and GPUDirect Storage, supported by NVIDIA Magnum IO and NVIDIA AI Enterprise software.
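As a rough illustration of what that headline figure means, the sketch below checks the arithmetic behind the claim. It uses only the numbers quoted above (a one-week baseline and an "up to 9x" speedup); the variable names are illustrative, and the result is a back-of-envelope estimate, not a benchmark.

```python
# Back-of-envelope check of the quoted training speedup (illustrative only;
# the 9x figure and the "one week to ~20 hours" claim come from the announcement).
baseline_hours = 7 * 24          # one week of training on the previous generation
speedup = 9                      # "up to nine times the training performance"
h100_hours = baseline_hours / speedup
print(f"{baseline_hours} h / {speedup}x = {h100_hours:.1f} h")  # ~18.7 h, i.e. roughly 20 hours
```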