SuperX AI Technology Limited unveiled its latest innovation, the SuperX GB300 NVL72, a rack-mounted AI supercomputing system designed to redefine the boundaries of large-scale AI model training and inference. Leveraging the extraordinary capabilities of the NVIDIA GB300 Grace Blackwell Ultra superchip, the platform targets models exceeding a trillion parameters while delivering unmatched performance density and energy efficiency. Its liquid-cooled design sets a new benchmark for modern data center infrastructure.

This next-generation system signals a transformative shift in AI computing. By offering up to 1.8 exaflops of AI performance within a single liquid-cooled rack, the GB300 NVL72 achieves compute densities that traditional air-cooled designs and standard AC power solutions cannot support. Consequently, organizations relying on conventional infrastructure may face significant limitations when attempting to deploy these high-powered workloads.


SuperX emphasizes that advanced power solutions, such as 800-volt direct current (800VDC), are now integral to AI system stability and performance, not merely to energy efficiency. “Directly supplying large amounts of power ensures system reliability and operational feasibility,” the company stated. The GB300 NVL72 serves as the core engine of the full-stack SuperX Modular AI Factory solution, combining liquid cooling, high-voltage DC power, and cutting-edge hardware into a unified, deployment-ready platform.
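The case for high-voltage DC comes down to basic electrical arithmetic: for a fixed power draw, current scales inversely with voltage (I = P / V), and resistive losses in the distribution path scale with the square of the current. The sketch below illustrates this with a hypothetical 120 kW rack figure, a round number chosen for illustration rather than a specification from the article.

```python
# Why 800VDC: current (and hence conductor size and I^2*R loss)
# drops as supply voltage rises for the same delivered power.
# The 120 kW rack draw below is a HYPOTHETICAL illustrative figure,
# not a spec quoted for the GB300 NVL72.
POWER_W = 120_000

for volts in (48, 415, 800):
    amps = POWER_W / volts
    print(f"{volts:>4} V -> {amps:8.1f} A per feed")
```

At 800VDC the same rack draws less than a fifth of the current it would at a legacy 48V busbar, which is why the article frames high-voltage DC as an enabler of density rather than just an efficiency tweak.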

At the heart of the system is the NVIDIA GB300 superchip, each of which pairs two Blackwell Ultra GPUs with one Grace CPU; 36 superchips per rack yield 72 GPUs and 36 CPUs, a 2:1 GPU-to-CPU ratio. NVLink-C2C delivers 900GB/s of chip-to-chip bandwidth, linking high-bandwidth GPU memory with Grace CPU memory seamlessly. The combination of roughly 21TB of HBM3E memory and 17TB of LPDDR5X memory creates a unified memory pool, reducing I/O bottlenecks for massive AI models. The complementary strengths of Grace CPUs and Blackwell Ultra GPUs optimize performance per watt, balancing power efficiency with compute-intensive performance.
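The memory totals follow directly from the per-device capacities. As a sanity check, the sketch below uses NVIDIA's published per-component figures for Blackwell Ultra (288GB HBM3E per GPU) and Grace (up to 480GB LPDDR5X per CPU); these per-device numbers are assumptions drawn from NVIDIA's public specifications, not from this article.

```python
# Back-of-envelope check of the GB300 NVL72 unified memory pool.
# Per-device capacities are assumptions from NVIDIA's public specs
# (Blackwell Ultra: 288 GB HBM3E; Grace: up to 480 GB LPDDR5X).
HBM3E_PER_GPU_GB = 288
LPDDR5X_PER_CPU_GB = 480
GPUS, CPUS = 72, 36

hbm_total_gb = GPUS * HBM3E_PER_GPU_GB       # 20,736 GB ~ 21 TB
lpddr_total_gb = CPUS * LPDDR5X_PER_CPU_GB   # 17,280 GB ~ 17 TB
pool_tb = (hbm_total_gb + lpddr_total_gb) / 1000

print(f"HBM3E: {hbm_total_gb} GB, LPDDR5X: {lpddr_total_gb} GB, "
      f"unified pool ~ {pool_tb:.1f} TB")
```

The per-rack totals land on roughly 21TB of GPU memory and 17TB of CPU memory, consistent with the figures quoted in the system specifications.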


The GB300 NVL72 scales impressively, connecting 72 GPUs in a single rack to operate as a unified GPU cluster. With 800Gb/s InfiniBand XDR connectivity and advanced liquid cooling, the system delivers continuous, high-density operation, supporting demanding AI workloads in hyperscale cloud computing, national AI infrastructures, scientific research, and industrial digital twins.

Key specifications include 36 Grace CPUs (144 cores), 72 Blackwell Ultra GPUs, approximately 21TB of HBM3E GPU memory (288GB per GPU), 17TB of LPDDR5X CPU memory, and 1.8 exaflops of AI performance. The 48U NVIDIA MGX rack measures 2296mm x 600mm x 1200mm, making it a compact foundation for next-generation AI infrastructure.
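The density claim can be checked with simple arithmetic on the figures quoted above: dividing the rack's stated AI throughput by its physical volume gives compute per cubic meter.

```python
# Compute density implied by the quoted specs: 1.8 exaflops of AI
# performance in a 2296 x 600 x 1200 mm rack footprint.
rack_h_m, rack_w_m, rack_d_m = 2.296, 0.600, 1.200

volume_m3 = rack_h_m * rack_w_m * rack_d_m   # ~1.65 m^3
ai_exaflops = 1.8
density = ai_exaflops / volume_m3

print(f"Rack volume ~ {volume_m3:.2f} m^3, "
      f"~ {density:.2f} exaflops of AI compute per m^3")
```

Roughly an exaflop per cubic meter is the kind of density that, as the article notes, pushes past what air cooling and conventional AC power distribution can practically support.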

With the GB300 NVL72, SuperX positions itself at the forefront of exascale AI computing, empowering enterprises, governments, and research institutions to tackle the most complex AI challenges with unprecedented speed and efficiency.

