Supermicro accelerates AI and Deep Learning

Supermicro has introduced what it says is the industry’s broadest portfolio of validated NGC-Ready systems, optimised to accelerate AI and deep learning applications. Supermicro is highlighting many of these systems at the Supermicro GPU Live Forum, held in conjunction with NVIDIA GTC Digital.

  • Wednesday, 25th March 2020, by Phil Alsop

Supermicro NGC-Ready systems allow customers to train AI models using NVIDIA V100 Tensor Core GPUs and to perform inference using NVIDIA T4 Tensor Core GPUs. NGC hosts GPU-optimised software containers for deep learning, machine learning and HPC applications, along with pre-trained models and SDKs. These can run anywhere Supermicro NGC-Ready systems are deployed: in data centres, in the cloud, in edge micro-datacentres, or in distributed remote locations as environment-resilient, secure NVIDIA-Ready for Edge servers powered by the NVIDIA EGX intelligent edge platform.
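As a rough illustration of the NGC workflow described above, pulling and running a GPU-optimised container from NVIDIA's NGC registry looks like the sketch below. It assumes a host with Docker and the NVIDIA Container Toolkit installed; the image tag is an example, not a version named in this article.

```shell
# Pull a GPU-optimised deep learning container from the NGC registry
# (nvcr.io). The tag below is illustrative -- check the NGC catalogue
# for current releases.
docker pull nvcr.io/nvidia/tensorflow:20.03-tf2-py3

# Run it interactively with access to all GPUs on the host. The
# --gpus flag requires the NVIDIA Container Toolkit so Docker can
# expose the GPUs inside the container.
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:20.03-tf2-py3
```

The same container image can be deployed unchanged across the environments the article lists (data centre, cloud, or edge), which is the portability the NGC-Ready validation is meant to guarantee.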


“With over 26 years of experience delivering state-of-the-art computing solutions, Supermicro systems are the most power-efficient, the highest performing, and the best value,” said Charles Liang, CEO and president of Supermicro. “With support for fast networking and storage, as well as NVIDIA GPUs, our Supermicro NGC-Ready systems are the most scalable and reliable servers to support AI. Customers can run their AI infrastructure with the highest ROI.”

Supermicro currently leads the industry with the broadest portfolio of NGC-Ready Servers optimised for data centre and cloud deployments and is continuing to expand its portfolio. In addition, the company offers five validated NGC-Ready for Edge servers (EGX) optimised for edge inferencing applications.


“NVIDIA’s container registry, NGC, enables superior performance for deep learning frameworks and pre-trained AI models with state-of-the-art accuracy,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “The NGC-Ready systems from Supermicro can deliver users the performance they need to train larger models and provide low latency inference to make critical, real-time business decisions.”