Inference-optimized systems extend Supermicro’s leading portfolio of GPU Servers to offer customers an unparalleled selection of AI solutions for Inference, Training, and Deep Learning, including Single-Root, Dual-Root, Scale-Up, and Scale-Out designs
Designed for the Next Generation of AI, the New HGX-2 System with 16 Tesla V100 GPUs and NVSwitch leverages over 80,000 CUDA Cores to deliver unmatched performance for deep learning and compute workloads
1U Server with Over a Half Petabyte of Flash Storage delivers over ten million IOPS, making it an ideal solution for IOPS-intensive and low-latency applications, data processing, and hyper-converged infrastructure (HCI)
Boost Performance up to 10x and Deliver 9x Better Cost per Virtual Machine with Certified Hyper-Converged BigTwin Servers featuring maximum CPU, memory, NVMe storage, and expansion card support
The Film Follows the Mission of Supermicro, Intel, and NASA to Minimize the Environmental Impact of Today’s Data Centers through Energy Efficiency and E-Waste Reduction
On Display at the Flash Memory Summit, the new family of Latency-, Bandwidth-, Density-, and Thermal-Optimized solutions supporting the new EDSFF form factor allows rack-level pooling of NVMe resources to reduce costs and improve efficiency and utilization
Resource-Saving Architecture optimizes power, cooling, shared resources, and refresh-cycle costs to Save Millions in TCO and Reduce Total Cost to the Environment (TCE)
At RSA Singapore 2018, Supermicro will showcase compact, dense, low-power, and low-latency systems designed for Software-Defined Networking (SDN), Network Functions Virtualization (NFV), SD-WAN, and vCPE applications