Supermicro, Inc., a leading innovator in IT solutions for artificial intelligence (AI), cloud computing, storage, and 5G/edge infrastructure, has expanded its NVIDIA Blackwell systems portfolio. The company recently launched the new 4U DLC-2 liquid-cooled NVIDIA HGX B200 system, now available for volume shipment, alongside an air-cooled 8U front input/output (I/O) system. These cutting-edge systems are specifically engineered to handle the most demanding large-scale AI training and cloud inference workloads. By simplifying the deployment, management, and maintenance of both air- and liquid-cooled AI infrastructure, Supermicro significantly enhances operational efficiency.
Moreover, the new systems support the upcoming NVIDIA HGX B300 platform and feature easy front I/O access, which streamlines cabling, improves cooling efficiency, increases compute density, and lowers operating expenses (OPEX).
Charles Liang, CEO and Chairman of Supermicro, emphasizes, “Our DLC-2-enabled NVIDIA HGX B200 systems lead the market by delivering superior power savings and accelerating time-to-market for AI factory deployments. Thanks to our Building Block architecture, we rapidly provide tailored solutions that meet diverse customer needs. Our portfolio now includes precisely optimized NVIDIA Blackwell solutions for various AI infrastructure setups, whether air-cooled or liquid-cooled.”
Supermicro’s DLC-2 technology introduces a next-generation direct liquid cooling system designed for AI-optimized data centers. This new cooling design provides significant benefits in both performance and cost, especially for high-density computing setups. For example, warm-water cooling with inlet temperatures as high as 45°C lets data centers cut electricity consumption by up to 40%, while also reducing water usage by up to 40% and lowering the risk of system downtime. Additionally, the system captures up to 98% of the heat generated by CPUs, GPUs, memory modules, PCIe switches, voltage regulators, and power supplies, all while maintaining noise levels as low as 50 decibels for quiet data center operation.
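To put the percentages above in perspective, here is a rough back-of-envelope sketch. All inputs are illustrative assumptions, not Supermicro specifications: a hypothetical 1 MW IT load, an assumed air-cooled baseline PUE of 1.5, and the article's up-to-40% electricity saving and 98% heat-capture figures taken at face value.

```python
# Illustrative arithmetic only; the IT load and baseline PUE are assumptions.
it_load_kw = 1000.0                     # assumed IT load (1 MW)
baseline_pue = 1.5                      # assumed air-cooled facility PUE

facility_kw_air = it_load_kw * baseline_pue      # 1500 kW air-cooled baseline
facility_kw_dlc = facility_kw_air * (1 - 0.40)   # 900 kW after the quoted 40% saving

heat_capture = 0.98                     # share of heat removed by liquid (article figure)
heat_to_liquid_kw = it_load_kw * heat_capture    # 980 kW carried away by warm water
heat_to_air_kw = it_load_kw - heat_to_liquid_kw  # 20 kW left for room-air cooling

print(f"Baseline facility power: {facility_kw_air:.0f} kW")
print(f"DLC-2 facility power:    {facility_kw_dlc:.0f} kW")
print(f"Heat captured by liquid: {heat_to_liquid_kw:.0f} kW")
print(f"Residual heat to air:    {heat_to_air_kw:.0f} kW")
```

Under these assumed numbers, only about 2% of the rack heat still has to be handled by room air conditioning, which is what makes high-temperature warm-water loops viable.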
Supermicro now offers one of the most diverse NVIDIA HGX B200 solution portfolios on the market, with six rear I/O and two new front I/O systems. The 4U and 8U front I/O NVIDIA HGX B200 models build upon proven architectures to address key deployment challenges like networking, cabling, and cooling in large-scale AI training and inference setups.
Kaustubh Sanghani, NVIDIA’s vice president of GPU product management, says, “Advanced infrastructure is driving the AI revolution across industries. Supermicro’s new front I/O B200 system, leveraging the NVIDIA Blackwell architecture, enables enterprises to deploy and scale AI faster than ever before, unlocking breakthrough innovations and operational efficiency.”
These modern AI data centers require high scalability, demanding many node-to-node connections. To meet this need, Supermicro relocated eight high-performance 400G NVIDIA ConnectX-7 network interface controllers (NICs) and two NVIDIA BlueField-3 data processing units (DPUs) to the front of the system, so that network cabling, drive bay access, and management can all be performed from the cold aisle. The system fully supports the NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet platforms to maximize computing performance.
Supermicro also upgraded its system components to better handle AI workloads, improving both efficiency and cost-effectiveness. The system supports up to 32 DIMM memory slots, allowing flexible, large-capacity configurations that complement the NVIDIA HGX B200’s HBM3e GPU memory. This combination eliminates CPU and GPU bottlenecks, accelerates data preprocessing, and improves multitasking in virtualized environments.
Each GPU in Supermicro’s systems is linked using fifth-generation NVLink at 1.8TB/s, providing a total of 1.4TB of HBM3e GPU memory per system. NVIDIA’s Blackwell platform delivers up to 15 times the real-time inference performance and three times the training speed for large language models compared to the previous Hopper generation.
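The per-system memory figure quoted above follows directly from the per-GPU specification. A quick arithmetic check, using only numbers stated in the article (8 GPUs per HGX B200 system, 180GB of HBM3e per GPU, 1.8TB/s fifth-generation NVLink per GPU):

```python
# Sanity-check the aggregate HBM3e figure from the article's own numbers.
num_gpus = 8                 # GPUs per HGX B200 system
hbm3e_per_gpu_gb = 180       # HBM3e per GPU, as stated
nvlink_bw_tb_s = 1.8         # fifth-generation NVLink bandwidth per GPU

total_hbm_tb = num_gpus * hbm3e_per_gpu_gb / 1000  # 1.44 TB, i.e. the ~1.4 TB quoted

print(f"Aggregate HBM3e per system: {total_hbm_tb:.2f} TB")
```

So 8 × 180GB = 1,440GB, which rounds to the "1.4TB per system" in the article.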
The newly released 4U front I/O liquid-cooled system features dual Intel Xeon 6700 series processors with TDPs of up to 350W each and an NVIDIA HGX B200 8-GPU configuration with 180GB of HBM3e memory per GPU. It supports up to 8TB of DDR5 RAM across 32 DIMMs, alongside eight hot-swappable E1.S NVMe storage bays and two M.2 NVMe boot drives. Network options include eight single-port NVIDIA ConnectX-7 NICs or BlueField-3 SuperNICs, plus two dual-port BlueField-3 DPUs.
Designed as a building block for dense AI data centers, this liquid cooling system enables clusters with thousands of nodes while saving up to 40% in electricity compared to traditional air cooling.
For customers without liquid cooling infrastructure, Supermicro also introduced an 8U front I/O air-cooled system. This compact 8U form factor maintains a full GPU tray height for efficient air cooling while reducing the CPU tray height to optimize airflow and space.