SambaNova Systems has introduced its latest AI semiconductor, the SN50, marking a major step forward in high-performance AI infrastructure. The company claims the new chip delivers up to five times the performance of competing solutions while significantly reducing enterprise costs. Alongside the launch, SambaNova announced a multi-year collaboration with Intel and revealed that it has secured more than $350 million in new funding from both existing and new investors.

The SN50 chip has been designed specifically to meet the growing demand for agent-based AI systems. According to the company, the chip reduces total cost of ownership for enterprises by nearly one-third while enabling organizations to scale high-speed AI inference, giving businesses a robust foundation for deploying autonomous AI agents across large environments. SambaNova expects to begin shipping the SN50 to customers in the second half of 2026.


To accelerate the rollout of the new chip, SambaNova has partnered with Intel and raised strategic Series E funding. This capital will help the company expand its manufacturing capabilities while also strengthening its AI cloud infrastructure.

“We are thrilled to partner with Intel to bring this exciting technology to market,” said Rodrigo Liang, co-founder and CEO of SambaNova.

“AI is no longer a race to build the biggest model. With SN50 and our deep collaboration with Intel, the real competition is now about who can run entire data centers of AI agents at high speed and with high reliability, at a cost structure that will transform AI from a lab experiment into the most profitable engine in the cloud.”

Intel executives also highlighted the importance of offering enterprises alternative AI computing solutions.

“Intel’s Data Center portfolio is a game changer,” said Kevork Kechichian, executive vice president and general manager of the Data Center Division at Intel.

“Customers are looking for more choice and more efficient ways to scale AI. Combining Intel’s leadership in compute, networking and memory with SambaNova’s full-stack AI system and inference cloud platform will provide enterprises with a compelling alternative to GPUs for deploying advanced AI at scale.”

In technical terms, the SN50 significantly improves on its predecessor, delivering five times the compute power and four times the network bandwidth per accelerator. The system can scale to 256 accelerators connected through a multi-terabyte-per-second interconnect, allowing enterprises to generate faster responses, process larger batches of data, and run more complex AI models without sacrificing throughput.

Randan Downs, co-founder and managing partner of Cambium Capital, emphasized the growing infrastructure focus in the AI industry.


“AI is no longer software-centric; it is becoming an infrastructure-centric battlefield. SN50 is designed to meet the realistic latency requirements and economics that are critical to successfully deploying agent-based AI at scale.”

Industry analysts also believe the chip could reshape large-scale AI inference economics.

“These technologies are driving a rapid transformation in the way we manage our data,” said Peter Wratten, research vice president for performance-intensive computing at research firm IDC.

“The new SambaNova SN50 RDU is a game changer for the tokenomics of large-scale AI inference. SambaNova is disrupting the industry by delivering high performance and throughput on a utility-scale, air-cooled system.”

Meanwhile, SoftBank Corp. will become the first organization to deploy the SN50 chip. The company plans to integrate the technology into its next-generation AI data center in Japan. This deployment will allow SoftBank to deliver low-latency AI inference services to domestic enterprises while supporting both open-source and proprietary AI models.

Hirotora Tamba, Executive Officer and General Manager of the Next Generation Technology Development Division at SoftBank Corp., said: “With SN50, we will build an AI inference platform for Japan that offers the speed, robustness, and sovereignty that SoftBank’s customers and partners demand. By standardizing on SN50, we will be able to deliver world-class AI services on our own terms, with performance comparable to the world’s best GPU clusters, while ensuring greater economics and operational control.”

Furthermore, SambaNova and Intel plan to expand their collaboration across multiple areas, including scaling AI cloud infrastructure, building integrated AI systems, and strengthening joint go-to-market initiatives. The partnership will combine SambaNova’s AI systems with Intel Xeon processors, GPUs, networking, and storage technologies to create scalable AI inference platforms.

In addition to the technology partnership, SambaNova’s Series E funding round attracted strong investor interest and became oversubscribed. The round was led by Vista Equity Partners and Cambium Capital, with participation from Intel Capital and several other investors. The new funding will support the company’s efforts to scale AI infrastructure globally and capture a growing share of the rapidly expanding AI inference market.

