GPU Server Systems
Unrivaled GPU Systems: Deep Learning-Optimized Servers for the Modern Data Center
Universal GPU Systems
Modular Building Block Design, Future-Proof, Open-Standards-Based Platform in 4U, 5U, 8U, or 10U for Large-Scale AI Training and HPC Applications
- GPU: NVIDIA HGX H100/H200/B200 4-GPU/8-GPU, AMD Instinct MI325X/MI300X/MI250 OAM Accelerator, Intel Data Center GPU Max Series
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 9TB
- Drives: Up to 24 Hot-swap E1.S, U.2 or 2.5" NVMe/SATA drives
Liquid-Cooled Universal GPU Systems
Direct-to-chip liquid-cooled systems for high-density AI infrastructure at scale.
- GPU: NVIDIA HGX H100/H200/B200 4-GPU/8-GPU
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB
- Drives: Up to 24 Hot-swap U.2 or 2.5" NVMe/SATA drives
4U/5U GPU Lines with PCIe 5.0
Maximum Acceleration and Flexibility for AI/Deep Learning and HPC Applications
- GPU: Up to 10 NVIDIA H100 PCIe GPUs, or up to 10 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 9TB or 12TB
- Drives: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe
NVIDIA MGX™ Systems
Modular Building Block Platform Supporting Today's and Future GPUs, CPUs, and DPUs
- GPU: Up to 4 NVIDIA PCIe GPUs including H100, H100 NVL, and L40S
- CPU: NVIDIA GH200 Grace Hopper™ Superchip, Grace™ CPU Superchip, or Intel® Xeon®
- Memory: Up to 960GB integrated LPDDR5X memory (Grace Hopper or Grace CPU Superchip) or 16 DIMMs, 4TB DRAM (Intel)
- Drives: Up to 8 E1.S + 4 M.2 drives
AMD APU Systems
Multi-Processor System Combining CPU and GPU, Designed for the Convergence of AI and HPC
- GPU: 4 AMD Instinct™ MI300A Accelerated Processing Units (APUs)
- CPU: AMD Instinct™ MI300A Accelerated Processing Unit (APU)
- Memory: Up to 512GB integrated HBM3 memory (4x 128GB)
- Drives: Up to 8 2.5" NVMe or Optional 24 2.5" SATA/SAS via storage add-on card + 2 M.2 drives
4U GPU Lines with PCIe 4.0
Flexible Design for AI and Graphically Intensive Workloads, Supporting Up to 10 GPUs
- GPU: NVIDIA HGX A100 8-GPU with NVLink, or up to 10 double-width PCIe GPUs
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 32 DIMMs, 8TB DRAM or 12TB DRAM + PMem
- Drives: Up to 24 Hot-swap 2.5" SATA/SAS/NVMe
2U 2-Node Multi-GPU with PCIe 4.0
Dense and Resource-saving Multi-GPU Architecture for Cloud-Scale Data Center Applications
- GPU: Up to 3 double-width PCIe GPUs per node
- CPU: Intel® Xeon® or AMD EPYC™
- Memory: Up to 8 DIMMs, 2TB per node
- Drives: Up to 2 front hot-swap 2.5" U.2 per node
GPU Workstation
Flexible Solution for AI/Deep Learning Practitioners and High-end Graphics Professionals
- GPU: Up to 4 double-width PCIe GPUs
- CPU: Intel® Xeon®
- Memory: Up to 16 DIMMs, 6TB
- Drives: Up to 8 hot-swap 2.5" SATA/NVMe
Supermicro with GAUDI 3 AI Accelerators Delivers Scalable Performance for AI Requirements
A Range of Optimized Solutions for Data Centers of Any Size and Workload, Enabling New Services and Increased Customer Satisfaction
Supermicro and Intel GAUDI 3 Systems Advance Enterprise AI Infrastructure
High-Bandwidth AI System Using Intel Xeon 6 Processors for Efficient LLM and GenAI Training and Inference Across Enterprise Scales
Lamini Chooses Supermicro GPU Servers For LLM Tuning Offering
Using Supermicro GPU Servers with the AMD Instinct™ MI300X Accelerators, Lamini is Able to Offer LLM Tuning at High Speed
Supermicro Grace System Delivers 4X Performance For ANSYS® LS-DYNA®
Supermicro Grace CPU Superchip Systems Deliver 4X Performance at Same Power Consumption
Applied Digital Builds Massive AI Cloud With Supermicro GPU Servers
Applied Digital Offers Users the Latest In Scalable AI and HPC Infrastructure For AI Training and HPC Simulations with Supermicro High Performance Servers
Supermicro 4U AMD EPYC GPU Servers Offer AI Flexibility (AS-4125GS-TNRT)
Supermicro has long offered GPU servers in more shapes and sizes than we have time to discuss in this review. Today, we’re looking at their relatively new 4U air-cooled GPU server that supports two AMD EPYC 9004 Series CPUs, PCIe Gen5, and a choice of eight double-width or 12 single-width add-in GPU cards.
Supermicro And Nvidia Create Solutions To Accelerate CFD Simulations For Automotive And Aerospace Industries
Supermicro Servers with NVIDIA Data Center GPUs Deliver Significant Speedups for CFD Simulations, Reducing Time to Market for Manufacturing Enterprises
A Look at the Liquid Cooled Supermicro SYS-821GE-TNHR 8x NVIDIA H100 AI Server
Today we wanted to take a look at the liquid cooled Supermicro SYS-821GE-TNHR server. This is Supermicro’s 8x NVIDIA H100 system with a twist: it is liquid cooled for lower cooling costs and power consumption. Since we had the photos, we figured we would put this into a piece.
Options For Accessing PCIe GPUs in a High Performance Server Architecture
Understanding the Configuration Options for Supermicro GPU Servers to Deliver Maximum Performance for Workloads
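As a quick, illustrative companion (not taken from the white paper), the Python sketch below assumes a Linux host with an NVIDIA driver and PyTorch with CUDA installed; it lists the GPUs the system exposes and reports which pairs support direct peer-to-peer access, one practical way to see how a given PCIe/NVLink topology choice surfaces to software.

```python
import torch

# Minimal sketch: enumerate visible GPUs and check which pairs can reach each
# other directly (peer-to-peer over PCIe or NVLink) instead of staging data
# through host memory. Assumes PyTorch built with CUDA and an NVIDIA driver.
if torch.cuda.is_available():
    n = torch.cuda.device_count()
    for i in range(n):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    for i in range(n):
        for j in range(n):
            if i != j:
                p2p = torch.cuda.can_device_access_peer(i, j)
                print(f"GPU {i} -> GPU {j}: peer access = {p2p}")
else:
    print("No CUDA devices visible")
```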
Petrobras Acquires Supermicro Servers Integrated by Atos To Reduce Costs and Increase Exploration Accuracy
Supermicro Systems Power Petrobras to the #33 Position in the Top500, November 2022 Rankings
Supermicro And ProphetStor Maximize GPU Efficiency For Multitenant LLM Training
In the dynamic world of AI and machine learning, efficient management of GPU resources in multi-tenant environments is paramount, particularly for Large Language Model (LLM) training.
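The sketch below is a minimal illustration of the kind of telemetry such scheduling relies on; it is not ProphetStor's software. Assuming the nvidia-ml-py (pynvml) package and an NVIDIA driver are installed, it polls per-GPU utilization and memory, the signals a multi-tenant scheduler could consult before placing another tenant's training job.

```python
import pynvml

# Illustrative only: report per-GPU utilization and memory via NVML.
pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # % GPU / memory activity
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # bytes: total / used / free
        print(f"GPU {i}: util={util.gpu}% "
              f"mem={mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB")
finally:
    pynvml.nvmlShutdown()
```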
Supermicro Servers Increase GPU Offerings For SEEWEB, Giving Demanding Customers Faster Results For AI and HPC Workloads
Seeweb Selects Supermicro GPU Servers to Meet Customer Demands of HPC and AI Workloads
H12 Universal GPU Server
Open Standards-Based Server Design for Architectural Flexibility
3000W AMD Epyc Server Tear-Down, ft. Wendell of Level1Techs
We are looking at a server optimized for AI and machine learning. Supermicro has done a lot of work to cram as much as possible into the 2114GT-DNR (2U2N), a density-optimized server. This is a really cool construction: there are two systems in this 2U chassis. The two redundant power supplies are 2,600W each, and we'll see why we need so much power. It hosts six AMD Instinct MI210 GPUs and dual EPYC processors. See the level of engineering Supermicro put into the design of this server.
H12 2U 2-Node Multi-GPU
Multi-Node Design for Compute and GPU-Acceleration Density
NEC Advances AI Research With Advanced GPU Systems From Supermicro
NEC Uses Supermicro GPU Servers with NVIDIA® A100 GPUs to Build a Supercomputer for AI Research (In Japanese)
Supermicro TECHTalk: High-Density AI Training/Deep Learning Server
Our newest data center system packs the highest density of advanced NVIDIA Ampere GPUs with fast GPU-GPU interconnect and 3rd Gen Intel® Xeon® Scalable processors. In this TECHTalk, we will show how we enable unparalleled AI performance in a 4U rack height package.
Hybrid 2U2N GPU Workstation-Server Platform Supermicro SYS-210GP-DNR Hands-on
Today we are finishing our latest series by taking a look at the Supermicro SYS-210GP-DNR, a 2U, 2-node 6 GPU system that Patrick recently got some hands-on time with at Supermicro headquarters.
Supermicro SYS-220GQ-TNAR+ a NVIDIA Redstone 2U Server
Today we are looking at the Supermicro SYS-220GQ-TNAR+ that Patrick recently got some hands-on time with at Supermicro headquarters.
Unveiling GPU System Design Leap - Supermicro SC21 TECHTalk with IDC
Presented by Josh Grossman, Principal Product Manager, Supermicro and Peter Rutten, Research Director, Infrastructure Systems, IDC
Mission Critical Server Solutions
Maximizing AI Development & Delivery with Virtualized NVIDIA A100 GPUs
Supermicro systems with NVIDIA HGX A100 offer a flexible set of solutions to support NVIDIA Virtual Compute Server (vCS) and NVIDIA A100 GPUs, enabling AI development and delivery for both small and large AI models.
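As a rough illustration of what this looks like from inside a guest VM (our assumption, not part of the vCS material referenced here), the short Python sketch below shells out to nvidia-smi to confirm that the virtual A100 assigned to the VM is visible, along with its memory size and driver version.

```python
import subprocess

# Minimal sketch: run inside a VM that has been assigned a vCS virtual GPU,
# with the NVIDIA guest driver installed, to sanity-check what the guest sees.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. one line per visible (virtual) GPU
```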
Supermicro SuperMinute: 2U 2-Node Server
Supermicro's breakthrough multi-node GPU/CPU platform is unlike any existing product in the market. With our advanced Building Block Solutions® design and resource-saving architecture, this system leverages the most advanced CPU and GPU engines along with advanced high-density storage in a space-saving form factor, delivering unrivaled energy-efficiency and flexibility.
SuperMinute: 4U System with HGX A100 8-GPU
For the most demanding AI workloads, Supermicro builds the highest-performance, fastest-to-market servers based on NVIDIA A100™ Tensor Core GPUs. With the newest version of NVIDIA® NVLink™ and NVIDIA NVSwitch™ technologies, these servers can deliver up to 5 PetaFLOPS of AI performance in a single 4U system.
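As a back-of-the-envelope check on that figure (our arithmetic, assuming the commonly cited A100 FP16/BF16 Tensor Core peak of roughly 624 TFLOPS per GPU with structured sparsity enabled):

```python
# Rough arithmetic behind the "up to 5 PetaFLOPS of AI performance" claim.
gpus = 8
tflops_per_gpu = 624  # assumed A100 FP16/BF16 Tensor Core peak with sparsity
print(f"{gpus * tflops_per_gpu / 1000:.2f} PFLOPS")  # ~4.99, rounded to 5 PetaFLOPS
```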
SuperMinute: 2U System with HGX A100 4-GPU
The new AS-2124GQ-NART server features the power of NVIDIA A100 Tensor Core GPUs and the HGX A100 4-GPU baseboard. The system supports PCIe Gen 4 for fast CPU-GPU connection and high-speed networking expansion cards.
High Performance GPU Accelerated Virtual Desktop Infrastructure Solutions with Supermicro Ultra SuperServers
1U 4 GPU Server White Paper