Supermicro Unveils NVIDIA GPU Server Test Drive Program with Leading Channel Partners to Deliver Workload Qualification on Remote Supermicro Servers
San Jose, Calif., January 20, 2021 -- Super Micro Computer, Inc. (Nasdaq: SMCI), a global leader in enterprise computing, storage, networking solutions, and green computing technology, released details on a new GPU test-drive program. Called STEP (Supermicro Test drive Engagement with Partners), the program allows customers to remotely test-drive either Supermicro's 2U HGX A100 4-GPU or 4U HGX A100 8-GPU system with NVIDIA 3rd Generation NVLink technology.
"Supermicro's collaboration with NVIDIA in the GPU test drive program delivers, through channel partners, a unique opportunity to test workloads on remote Supermicro servers leveraging NVIDIA's HGX A100 platforms," said Don Clegg, senior vice president, Worldwide Sales, Supermicro. "This program will showcase the power of these servers to support unique applications and accelerate time-to-market solutions."
Customers can access the program through the Supermicro STEP landing page, which links to participating partners where customers can register. Afterward, customers can connect directly to remote Supermicro NVIDIA HGX A100 platforms to test and qualify their advanced workloads.
“The NVIDIA HGX AI supercomputing platform is purpose-built for the highest performance on simulation, data analytics and AI applications,” said Paresh Kharya, senior director of Product Management and Marketing at NVIDIA. “Supermicro’s decision to build their STEP program on the foundation of NVIDIA’s HGX technology will give customers access to the leading platform that can tackle the most complex problems and transform the global research community.”
Supermicro's advanced high-density 2U and 4U servers feature NVIDIA HGX A100 4-GPU and 8-GPU baseboards. Supermicro's Advanced I/O Module (AIOM) form factor further enhances networking communication with high flexibility. The AIOM can be coupled with the latest high-speed, low-latency PCI-E 4.0 storage and networking devices that support NVIDIA GPUDirect® RDMA and GPUDirect Storage with NVMe over Fabrics (NVMe-oF) on NVIDIA Mellanox® InfiniBand, feeding the scalable multi-GPU system with a continuous stream of data without bottlenecks.
Participating Partners
EMEA:
Boston
"Boston is pleased to offer our European partners and customers the opportunity to test one of Supermicro's highest-performance and first-to-market servers based on NVIDIA A100™ GPUs," said Manoj Nayee, Managing Director, Boston. "The Supermicro AS-2124GQ-NART system is an ideal building block for AI cluster requirements, offered in a compact 2U form factor that provides an impressively dense configuration supporting four NVIDIA A100 GPUs. The team at Boston Labs and the Boston Training Academy looks forward to supporting customers for testing, training, and implementation of this next generation of accelerated computing."
Broadberry
"Artificial intelligence (AI) has become an area of strategic importance and a key driver of economic development for Europe, according to the European Commission," said Colin Broadberry, president of Broadberry Data Systems. "That's why the Broadberry team is so excited to collaborate with Supermicro to offer clients remote access to servers based on NVIDIA A100 graphics processing units (GPUs): the Broadberry CyberServe A+ Server 2124GQ-NART and CyberServe EPYC EP2-4124GO-NART GPU Server."
North/Latin America:
Colfax
"Colfax International is excited to collaborate with Supermicro and NVIDIA in launching this new test drive program that gives users across the industry access to the Supermicro AS-2124GQ-NART, the latest and most powerful 2U GPU server platform highlighting NVIDIA's new HGX A100 4-GPU," said Gautam Shah, CEO. "This solution brings new speed to longstanding challenges, accelerating HPC and AI workloads for faster time to solution. With Colfax's expertise in the HPC and AI space, we'll be able to help users select the best GPU solution for their organization."
Exxact
"Exxact Corporation is excited to be collaborating with Supermicro to provide remote access to a powerful system, the TensorEX TS4-195183185, containing 8x A100 40GB SXM4 GPUs with 600GB/s GPU-to-GPU bandwidth, which will provide an excellent opportunity to prove its capabilities in accelerating a wide variety of applications and codes," said Andrew Nelson, VP of Technology.
Microway
"Microway's expert team recommends the most flexible, performant, and cost-effective solutions for turn-key deployments. Our ability to rapidly deliver these latest HPC and AI systems is thanks to our close collaboration with Supermicro," said Eliot Eshelman, VP Strategic Accounts at Microway. "We're proud to be offering the first HGX A100 8-GPU solution to market, the Supermicro AS-4124GO-NART, as our Navion 4U GPU with NVLink Server—and encourage potential customers to try it before they buy with our new remote testing resources."
Thinkmate
“Many people are familiar with the power of NVIDIA® A100 graphics processing units (GPUs) for accelerating artificial intelligence (AI) workloads, but some have never seen how well they can work with AMD-based central processing units (CPUs),” said Brian Corn, vice president of products for Thinkmate. “Thanks to our collaboration with Supermicro, Thinkmate is able to give customers the opportunity to try out Supermicro’s powerful new AMD and NVIDIA-based servers – the Thinkmate GPX QN4-22E2-4NV and GPX QN6-42E2-8NV – and experience the power of this combination for themselves.”
AMAX
"AMAX is delighted to collaborate with Supermicro and NVIDIA to enable customers to test drive an NVIDIA A100-based server, the DGS-428AS, and to help them accelerate their most demanding analytics, high-performance computing (HPC), inference, and training workloads," said Dr. Rene Meyer, VP of Technology at AMAX. "AMAX's AceleMax GPU solutions, powered by the latest NVIDIA A100 Tensor Core GPUs, deliver unprecedented performance and massive scalability on all accelerated workloads for HPC, data analytics, training and inference."