SambaNova has introduced its next-generation SN50 AI chip, delivering a major leap in AI inference performance. The company claims the SN50 runs up to five times faster than competing chips, positioning it as a powerful solution for enterprises scaling agentic AI. At the same time, SambaNova announced a planned multi-year collaboration with Intel and secured more than $350 million in Series E funding from new and existing investors.

With enterprises rapidly moving AI agents from pilot programs into production, SambaNova designed the SN50 to cut total cost of ownership by a factor of three. As a result, organizations can deploy autonomous AI agents at scale without sacrificing performance or efficiency. The company plans to begin shipping the SN50 to customers later this year.


To accelerate distribution, SambaNova is collaborating closely with Intel while expanding its manufacturing and AI cloud capacity using the newly raised capital.

“AI is no longer a contest to build the biggest model,” said Rodrigo Liang, co-founder and CEO of SambaNova. “With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”

Likewise, Intel emphasized the growing demand for alternatives to GPU-centric systems.

“Customers are asking for more choice and more efficient ways to scale AI,” said Kevork Kechichian, executive vice president and general manager of Intel's Data Center Group. “By combining Intel’s leadership in compute, networking, and memory with SambaNova’s full-stack AI systems and inference cloud platform, we are delivering a compelling option for organizations looking for GPU alternatives to deploy advanced AI at scale.”

Technically, the SN50 delivers five times the compute per accelerator and four times the network bandwidth of its predecessor. It connects up to 256 accelerators via a multi-terabyte-per-second interconnect, significantly reducing time-to-first-token while supporting larger batch sizes. Consequently, enterprises can run bigger models exceeding 10 trillion parameters, with context windows of up to 10 million tokens, while maintaining lower latency and improved cost efficiency.

“AI is moving from a software story to an infrastructure story,” said Landon Downs, co-founder and managing partner at Cambium Capital. “SN50 is engineered for the real-world latency and economic requirements that will determine who successfully deploys agentic AI at scale.”

Peter Rutten, Research Vice President of Performance Intensive Computing at analyst firm IDC, added: “The new SambaNova SN50 RDU changes the tokenomics of AI inference at scale. By delivering both high performance and high throughput with a chip that uses existing power and is air-cooled, SambaNova is changing the game.”

Meanwhile, SoftBank Corp. will become the first customer to deploy the SN50 inside its next-generation AI data centers in Japan. The deployment will support low-latency inference services for sovereign and enterprise clients across Asia-Pacific.


“With SN50, we are building an AI inference fabric for Japan that can serve our customers and partners with the speed, resiliency and sovereignty they expect from SoftBank,” said Hironobu Tamba, Vice President and Head of the Data Platform Strategy Division of the Technology Unit at SoftBank Corp. “By standardizing on SN50, we gain the ability to deliver world-class AI services on our own terms with the performance of the best GPU clusters, but with far better economics and control.”

In addition, SambaNova and Intel plan to expand their partnership across AI cloud infrastructure, integrated AI systems, and global go-to-market initiatives. Intel also plans to make a strategic investment to accelerate an Intel-powered AI cloud rollout.

The oversubscribed Series E round was led by Vista Equity Partners and Cambium Capital, with participation from Intel Capital and several global investors. Mr. Sharaf Al Hariri, Chairman of First Data, said that SambaNova plays a strategic role in the company’s plan to introduce cutting-edge AI technologies across Saudi Arabia and the broader Middle East. He explained that First Data is backing SambaNova’s platforms to deliver high-performance, low-latency, and sovereign AI capabilities, designed to run efficiently with lower power consumption and within existing air-cooled data center infrastructures. He added that SambaNova’s advanced inference capabilities and scalable AI services enhance First Data’s capacity to provide enterprise-grade AI infrastructure and solutions. According to Al Hariri, the investment also underscores First Data’s long-term vision to diversify its technology portfolio, strengthen innovation resilience, and remain agile amid shifting global technology trends while creating sustainable value across the region.

“We’re proud to be investing in SambaNova at such a pivotal time in the company’s growth,” said Monti Saroya, Partner at Vista Equity Partners. “SN50 is engineered for agentic AI systems that orchestrate multiple models and process requests in near real-time more efficiently than traditional GPU-centric systems.”

As agentic workloads continue to expand globally, SambaNova is positioning the SN50 as the infrastructure backbone for the next era of high-performance, cost-efficient AI inference.

