Supermicro, Inc., a leading Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, has officially launched its NVIDIA Blackwell Ultra solutions worldwide. The company now offers Plug-and-Play (PnP)-ready NVIDIA HGX B300 systems and GB300 NVL72 racks to customers across the globe. These solutions are purpose-built and pre-validated at the system, rack, and data center levels, enabling organizations to deploy the highest-performance AI infrastructure quickly. They support large-scale AI training, real-time AI reasoning, multimodal AI inference, agentic AI applications, and physical AI deployments.
“Supermicro has the best track record of fast and successful deployments of new NVIDIA technologies,” said Charles Liang, president and CEO of Supermicro. “Through Supermicro Data Center Building Block Solutions with our expertise in on-site deployment, we enable turn-key delivery of the highest-performance AI platform critical for customers seeking to invest in cutting-edge technology. Data center customers face many AI infrastructure challenges: complex network topology and cabling, power delivery, and thermal management. Supermicro delivers pre-validated, plug-and-play solutions at system, rack, and data center scale, enabling AI factories to deploy rapidly and helping our customers lead in AI.”
Supermicro integrates NVIDIA Blackwell Ultra advancements with system- and rack-level engineering to maximize efficiency and performance. At the system level, these solutions feature advanced air- and liquid-cooling designs optimized for the higher power envelope of Blackwell Ultra GPUs. Drawing up to 1400W per GPU, Blackwell Ultra GB300 and B300 systems deliver 50% higher inference performance and 50% more HBM3e memory capacity than the previous NVIDIA Blackwell generation, allowing AI models to run faster and handle more complex computations.
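As a rough back-of-the-envelope check of those uplift claims, the sketch below applies the 50% memory increase and the 1400W figure quoted above to a commonly cited prior-generation Blackwell (B200) baseline of 192 GB HBM3e and roughly 1000W per GPU; those baseline numbers are assumptions, not figures from this announcement.

```python
# Illustrative arithmetic only: the 192 GB / ~1000 W baseline figures for the
# prior-generation Blackwell (B200) GPU are assumptions, not from this article.
B200_HBM3E_GB = 192          # commonly cited HBM3e capacity per B200 GPU (assumed)
B200_MAX_POWER_W = 1000      # commonly cited configurable TDP per B200 GPU (assumed)

MEMORY_UPLIFT = 1.5          # "50% more HBM3e memory capacity" (from the article)
B300_MAX_POWER_W = 1400      # "up to 1400W per GPU" (from the article)

b300_hbm3e_gb = B200_HBM3E_GB * MEMORY_UPLIFT
power_increase = (B300_MAX_POWER_W - B200_MAX_POWER_W) / B200_MAX_POWER_W

print(f"Implied HBM3e per Blackwell Ultra GPU: {b300_hbm3e_gb:.0f} GB")
print(f"Per-GPU power envelope increase: {power_increase:.0%}")
```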
Additionally, Supermicro leverages direct liquid cooling (DLC) technology, advanced air cooling, and optimized I/O designs to deliver the industry’s broadest portfolio of NVIDIA Blackwell systems at scale. The GB300 NVL72 rack-scale system achieves 1.1 exaFLOPS of dense FP4 compute, while NVIDIA HGX B300 systems provide up to 7.5x performance gains over NVIDIA Hopper-based platforms, with 144 petaFLOPS of FP4 compute per system and 270 GB of HBM3e memory per GPU.
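To put those rack- and system-level figures side by side, the sketch below aggregates the numbers quoted above. It assumes an 8-GPU HGX B300 baseboard and the same per-GPU HBM3e capacity in both form factors; those assumptions, and the derived per-GPU and per-rack totals, are illustrative rather than vendor specifications.

```python
# Back-of-the-envelope aggregation of the figures quoted in this article; purely illustrative.
GB300_NVL72_GPUS = 72                 # GPU count in a GB300 NVL72 rack
RACK_DENSE_FP4_EXAFLOPS = 1.1         # from the article
HGX_B300_GPUS = 8                     # GPUs per HGX B300 system (assumed 8-GPU baseboard)
SYSTEM_FP4_PETAFLOPS = 144            # from the article
HBM3E_PER_GPU_GB = 270                # from the article

per_gpu_dense_fp4_pflops = RACK_DENSE_FP4_EXAFLOPS * 1000 / GB300_NVL72_GPUS
rack_hbm3e_tb = GB300_NVL72_GPUS * HBM3E_PER_GPU_GB / 1000
system_hbm3e_tb = HGX_B300_GPUS * HBM3E_PER_GPU_GB / 1000

print(f"Implied dense FP4 per GPU (NVL72): ~{per_gpu_dense_fp4_pflops:.1f} petaFLOPS")
print(f"Implied HBM3e per NVL72 rack:      ~{rack_hbm3e_tb:.1f} TB")
print(f"Implied HBM3e per HGX B300 system: ~{system_hbm3e_tb:.1f} TB")
```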
To further streamline AI deployment, Supermicro offers fully integrated solutions with NVIDIA AI Enterprise, NVIDIA Blueprints, and NVIDIA NIM software. Its Data Center Building Block Solutions (DCBBS) include on-site deployment for cabling, power, and thermal infrastructure, enabling rapid time-to-market. Combined with DLC-2 cooling technology, DCBBS reduces power consumption by 40%, data center footprint by 60%, water usage by 40%, and overall TCO by 20%. These innovations equip AI factories to handle the growing computational demands of modern AI workloads efficiently.
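To make the quoted DCBBS and DLC-2 reductions concrete, the short sketch below applies those percentages to an entirely hypothetical baseline facility; the baseline power, footprint, water, and TCO values are invented for illustration and do not come from Supermicro.

```python
# Hypothetical baseline values chosen only to illustrate the percentage
# reductions quoted for DCBBS with DLC-2; none of these baselines are from the article.
baseline = {
    "power_mw": 10.0,          # hypothetical facility power draw, MW
    "footprint_sqft": 50_000,  # hypothetical data center floor space
    "water_m3_per_day": 500,   # hypothetical daily water usage
    "tco_musd": 100.0,         # hypothetical total cost of ownership, $M
}
reductions = {                 # percentage reductions quoted in the article
    "power_mw": 0.40,
    "footprint_sqft": 0.60,
    "water_m3_per_day": 0.40,
    "tco_musd": 0.20,
}

for metric, value in baseline.items():
    after = value * (1 - reductions[metric])
    print(f"{metric}: {value:,.0f} -> {after:,.0f} ({reductions[metric]:.0%} reduction)")
```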



