Super Micro Computer, Inc., a total IT solution provider for AI, cloud, storage, and 5G/edge, announced the expansion of its NVIDIA Blackwell architecture portfolio with the launch and availability of new 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems. These latest models are key components of Supermicro’s Data Center Building Block Solutions (DCBBS) suite, delivering exceptional GPU density and energy efficiency for hyperscale data center and AI factory deployments.
Supermicro President and CEO Charles Liang stated, “The global demand for AI infrastructure is rapidly increasing, and our new liquid-cooled NVIDIA HGX B300 system delivers the performance density and energy efficiency required for today’s hyperscale computing facilities and AI factories. We have launched the industry’s most compact NVIDIA HGX B300 solution, supporting up to 144 GPUs in a single rack and reducing power consumption and thermal costs through our certified direct liquid cooling technology. This is Supermicro’s model for helping customers deploy AI at scale through DCBBS technology: faster time-to-market, maximized performance per watt, and end-to-end integration from design to deployment.”
The new 2-OU (OCP) liquid-cooled NVIDIA HGX B300 system is built on the 21-inch OCP Open Rack V3 (ORV3) form factor, enabling hyperscale data centers and cloud service providers to support up to 144 GPUs in a single rack, maximizing GPU density while saving data center space and simplifying maintenance. This rack-scale design features blind-mate coolant manifolds, a modular GPU/CPU tray architecture, and advanced component-level liquid cooling. Each node runs AI workloads on eight NVIDIA Blackwell Ultra GPUs (each with a maximum TDP of 1,100W) while significantly reducing rack footprint and power consumption. A single ORV3 rack can support up to 18 nodes and 144 GPUs, and can be seamlessly expanded using NVIDIA Quantum-X800 InfiniBand switches and Supermicro 1.8MW in-row coolant distribution units (CDUs). Eight NVIDIA HGX B300 compute racks, three NVIDIA Quantum-X800 InfiniBand networking racks, and two Supermicro in-row CDUs can be integrated into a scalable supercluster unit with 1,152 GPUs.
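As a quick cross-check of the figures above, the sketch below shows how the quoted node, rack, and scalable-unit counts compose, along with a GPU-only power estimate. It is purely illustrative arithmetic based on the numbers in this announcement, not a Supermicro sizing tool, and the power figure ignores CPUs, NICs, switches, and other rack components.

```python
# Illustrative arithmetic only, based on the figures quoted above
# (8 GPUs per HGX B300 node, 18 nodes per ORV3 rack, 8 compute racks
# per scalable unit); not an official Supermicro sizing tool.

GPUS_PER_NODE = 8           # NVIDIA Blackwell Ultra GPUs per HGX B300 node
NODES_PER_RACK = 18         # nodes per 21-inch ORV3 rack
COMPUTE_RACKS_PER_UNIT = 8  # HGX B300 compute racks per scalable unit
GPU_TDP_W = 1_100           # maximum TDP per GPU, as quoted above

gpus_per_rack = GPUS_PER_NODE * NODES_PER_RACK           # 144 GPUs
gpus_per_unit = gpus_per_rack * COMPUTE_RACKS_PER_UNIT   # 1,152 GPUs

# Rough upper bound on GPU power per rack (GPUs only, excluding CPUs,
# NICs, and other components) as a first-pass check against CDU capacity.
gpu_power_per_rack_kw = gpus_per_rack * GPU_TDP_W / 1_000  # 158.4 kW

print(f"GPUs per rack: {gpus_per_rack}")
print(f"GPUs per scalable unit: {gpus_per_unit}")
print(f"GPU-only power per rack: {gpu_power_per_rack_kw:.1f} kW")
```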
The 4U front-I/O HGX B300 liquid-cooled system is the counterpart to the 2-OU (OCP) system, providing the same compute performance within a traditional 19-inch EIA rack architecture and making it well suited to large-scale AI factory deployments. This 4U system uses Supermicro DLC-2 technology, which removes up to 98% of the system’s heat through liquid cooling, delivering better energy efficiency, easier serviceability, and lower noise levels for intensive training and inference clusters.
The Supermicro NVIDIA HGX B300 system significantly enhances performance with 2.1TB of HBM3e GPU memory per system, enabling larger models to be run at the system level. Both the 2-OU (OCP) and 4U platforms deliver cluster-level performance gains and can double compute-fabric network throughput (up to 800Gb/s) by integrating NVIDIA ConnectX-8 SuperNICs paired with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet. These advantages accelerate large-scale AI workloads such as agentic AI applications, foundation model training, and large-scale multimodal inference within AI factories.
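As a rough illustration of what 2.1TB of HBM3e per system implies for model sizing, the sketch below checks whether models of a few hypothetical parameter counts would fit in one system’s GPU memory at different weight precisions. The model sizes, precision choices, and overhead factor are assumptions for illustration only, not Supermicro or NVIDIA figures.

```python
# Back-of-the-envelope model-sizing check against the 2.1TB of HBM3e
# per HGX B300 system quoted above. Parameter counts, precisions, and
# the KV-cache/activation overhead factor are illustrative assumptions,
# not vendor figures.

SYSTEM_HBM_TB = 2.1          # HBM3e per HGX B300 system, as quoted above
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}
OVERHEAD = 1.3               # assumed headroom for KV cache and activations

def fits_in_system(params_billions: float, precision: str) -> bool:
    """Return True if the weights (plus assumed overhead) fit in one system."""
    weight_tb = params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1e12
    return weight_tb * OVERHEAD <= SYSTEM_HBM_TB

for model_b in (70, 405, 1000):            # hypothetical model sizes (billions)
    for prec in ("FP16", "FP8", "FP4"):
        print(f"{model_b}B @ {prec}: fits = {fits_in_system(model_b, prec)}")
```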
Supermicro developed these platforms around customers’ core requirements for total cost of ownership (TCO), serviceability, and efficiency. The DLC-2 technology stack can reduce data center power consumption by up to 40%, lower water usage through 45°C warm-water operation, and eliminate the need for chilled water and compressors. Supermicro DCBBS enables full rack-scale L11 and L12 validation and testing of these new systems before shipment, helping hyperscale, enterprise, and federal government customers accelerate deployment.
These new systems expand Supermicro’s comprehensive NVIDIA Blackwell platform portfolio, which includes the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition systems. Supermicro’s NVIDIA-Certified Systems are fully tested to validate optimal performance across diverse AI applications and scenarios, and integrate NVIDIA networking technologies and NVIDIA AI software, including NVIDIA AI Enterprise and NVIDIA Run:ai. This portfolio gives customers the flexibility to scale seamlessly from a single node to a complete AI factory.
Source – PR Newswire