Super Micro Computer, Inc., a leading IT solutions provider focused on AI, HPC, cloud, storage, 5G/edge, and green computing, showcased its advanced AI infrastructure solutions at NVIDIA GTC in Washington, D.C., presenting systems tailored to meet the stringent requirements of federal agencies. Supermicro also announced plans to deliver its next-generation NVIDIA AI platforms in 2026, including the NVIDIA Vera Rubin NVL144 and NVIDIA Vera Rubin NVL144 CPX. Additionally, Supermicro unveiled Trade Agreements Act (TAA)-compliant systems, including the high-density 2OU NVIDIA HGX B300 8-GPU system with up to 144 GPUs per rack, the new rack-scale NVIDIA GB200 NVL4 HPC solution, and the expanding NVIDIA GB300-based Super AI Station portfolio.
“Our expanded collaboration with NVIDIA and our focus on US-based manufacturing position Supermicro as a trusted partner for the adoption of AI within federal agencies,” said Charles Liang, president and CEO of Supermicro. “With our headquarters, manufacturing, and R&D facilities located in San Jose, California, in the heart of Silicon Valley, we are uniquely positioned to deliver solutions that are developed, built, and validated specifically for our US federal agency customers. Supermicro’s long-standing collaboration with NVIDIA, a close partner also based in Silicon Valley, has solidified our position as a pioneer in the development of AI infrastructure in the US.”
Supermicro has expanded its latest solutions with the NVIDIA HGX B300 and B200, NVIDIA GB300 and GB200, and NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPUs, delivering unprecedented computing performance, efficiency, and scalability for key federal government workloads including cybersecurity and risk detection, engineering and design, healthcare and life sciences, data analytics and fusion platforms, modeling and simulation, and secure virtualized infrastructure.
Supermicro’s commitment to US-based manufacturing is a cornerstone of the company’s business strategy. All government-optimized systems are developed, built, and rigorously tested at its global headquarters in San Jose, California, ensuring full TAA compliance and Buy American eligibility. This US manufacturing capability enhances supply chain security and meets federal requirements for highly reliable, high-quality technology solutions.
Through its partnership with NVIDIA, Supermicro plans to launch the NVIDIA Vera Rubin NVL144 and NVIDIA Vera Rubin NVL144 CPX platforms in 2026. These platforms will deliver superior AI training and inference performance compared to their predecessors, enabling organizations to handle complex AI workloads with greater efficiency.
Supermicro also announced the 2OU NVIDIA HGX B300 8-GPU server, its highest-density system yet. The server features an OCP-compliant rack-scale design and is compatible with Supermicro Data Center Building Block Solutions. This architecture allows for up to 144 GPUs in a single rack, delivering exceptional performance and scalability for large-scale AI and HPC deployments in government data centers.
Supermicro is expanding its government portfolio with optimizations for the NVIDIA AI Factory for Government reference design, a full-stack, end-to-end blueprint that provides guidance for deploying and managing multiple AI workloads on premises and in hybrid cloud while meeting the compliance needs of high-trust organizations.
The portfolio includes the NVIDIA GB300-based Super AI Station and the rack-scale NVIDIA GB200 NVL4 HPC solution, both optimized for federal environments with enhanced security, reliability and scalability to meet rigorous government standards.
New high-speed network support
Further highlighting its track record of bringing the latest NVIDIA technologies to market first, Supermicro announced support for the new NVIDIA BlueField-4 DPU and NVIDIA ConnectX-9 SuperNIC for gigascale AI factories. These accelerated infrastructure technologies will be rapidly integrated into new Supermicro AI systems as they become available, providing faster cluster-scale AI networking, storage access, and data-processing offload for next-generation NVIDIA AI infrastructure. Supermicro’s modular hardware design enables new technologies such as NVIDIA BlueField-4 and NVIDIA ConnectX-9 to be quickly incorporated into existing system designs with minimal rework, accelerating time to market and reducing development costs.
New Super AI Station brings AI server power to your desktop
Continuing its history of being first to market with NVIDIA’s latest technologies, Supermicro has announced the new liquid-cooled Super AI Station ARS-511GD-NB-LCC. Featuring a server-grade NVIDIA GB300 Superchip in a deskside form factor, this platform delivers over 5x the AI petaflops of compute compared to traditional PCIe-based GPU workstations. The new Super AI Station is a comprehensive solution for AI model training, fine-tuning, and application and algorithm prototyping and development. Deployed on-premises, it offers minimal latency and complete data security, and can support models with up to 1 trillion parameters. This self-contained, liquid-cooled AI platform is ideal for government agencies, startups, deep tech companies, and research institutions that cannot utilize standard server infrastructure for AI development, or that face challenges in leveraging cluster-scale and cloud AI services due to availability, cost, privacy, and latency constraints.
The Super AI Station can be used in a desktop or rackmount environment and comes as a fully integrated all-in-one solution that includes:
- NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip
- Up to 784GB of coherent memory
- Integrated NVIDIA ConnectX-8 SuperNIC
- Closed-loop direct-to-chip liquid cooling for CPU, GPU, ConnectX-8, and memory
- Up to 20 petaflops of AI performance
- Bundled with NVIDIA AI software stack
- Option to add a PCIe GPU for rendering and graphics acceleration
- 5U desktop tower form factor with rackmount option
- 1600W power supply compatible with standard power outlets
Rack-scale GB200 NVL4 GPU-accelerated HPC and AI solution
Supermicro also announced the general availability of the ARS-121GL-NB2B-LCC NVL4 rack-scale platform, ideal for GPU-accelerated HPC and AI science workloads such as molecular simulation, weather modeling, fluid dynamics, and genomics. It delivers breakthrough performance with four NVIDIA NVLink-connected Blackwell GPUs integrated with two NVIDIA Grace CPUs via NVLink-C2C, and up to 32 nodes per rack interconnected over NVIDIA ConnectX-8 networking, delivering up to 800Gbps per GPU. The solution provides flexible system- and rack-level scalability based on workload requirements, with a choice of liquid cooling via in-rack or in-row coolant distribution units (CDUs).
- Four B200 GPUs and two Grace superchips per node with direct-to-chip liquid cooling
- 4-port 800G NVIDIA Quantum InfiniBand network per node, with dedicated 800Gbps to each B200 GPU (alternative NIC options available)
- Up to 128 GPUs in a 48U NVIDIA MGX rack for unparalleled data center rack density
- Powered via OCP busbar for seamless scaling
These Supermicro systems are ideal for AI development and deployment using NVIDIA AI Enterprise software and NVIDIA Nemotron open AI models.
Source – PR Newswire