Credo Technology Group Holding Ltd, a leading innovator in secure, high-speed connectivity solutions, has unveiled Weaver, a memory fanout gearbox designed to dramatically enhance memory bandwidth and density. With this launch, Credo aims to improve computing efficiency for AI accelerators and xPUs, tackling one of the biggest barriers to AI system performance: memory bottlenecks.
As the first product in Credo’s new OmniConnect family, Weaver introduces a transformative approach to scaling AI infrastructures. The OmniConnect portfolio is built to address both scale-up and scale-out challenges, empowering data centers to meet the growing demands of AI buildouts with unmatched speed and efficiency.
Today, AI inference workloads increasingly face performance limitations due to memory constraints rather than compute power. Traditional memory architectures such as LPDDR5X and GDDR solutions often fall short in bandwidth, density, and power efficiency. Even High Bandwidth Memory (HBM), despite its advantages, struggles with high costs and limited scalability. To bridge this gap, Weaver employs advanced 112G Very Short Reach (VSR) SerDes technology along with Credo’s proprietary design to boost I/O density by up to 10 times. This innovation enables up to 6.4TB of memory capacity and 16TB/s of bandwidth using LPDDR5X, far exceeding the performance of conventional systems.
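To put the headline figure in perspective, a rough back-of-envelope calculation shows how many LPDDR5X channels an aggregate 16TB/s would imply. The per-pin data rate and channel width below are generic LPDDR5X figures (8533 MT/s, 32-bit channels), not Credo's disclosed architecture, so treat this as an illustrative sketch rather than a description of Weaver's actual design.

```python
import math

# Assumed generic LPDDR5X parameters (not Credo-specific):
LPDDR5X_RATE_GBPS_PER_PIN = 8.533   # max LPDDR5X data rate, 8533 MT/s
CHANNEL_WIDTH_BITS = 32             # a common LPDDR5X channel width

def channel_bandwidth_gbs(rate_gbps=LPDDR5X_RATE_GBPS_PER_PIN,
                          width_bits=CHANNEL_WIDTH_BITS):
    """Peak bandwidth of one LPDDR5X channel in GB/s."""
    return rate_gbps * width_bits / 8   # bits -> bytes

def channels_for_target(target_tbs):
    """Channels needed to reach a target aggregate bandwidth in TB/s."""
    per_channel = channel_bandwidth_gbs()      # ~34.1 GB/s per channel
    return math.ceil(target_tbs * 1000 / per_channel)

print(channels_for_target(16.0))   # hundreds of channels
```

Reaching 16TB/s under these assumptions requires on the order of hundreds of memory channels, which illustrates why a 10x improvement in I/O density is the enabling factor rather than raw memory speed alone.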
Don Barnetson, Senior Vice President of Product at Credo, emphasized the product’s scalability: “Weaver is designed to deliver the flexibility and scalability required for future AI inference systems. This innovation empowers our partners to optimize memory provisioning, reduce costs, and accelerate deployment of advanced AI workloads.”
Mitesh Agrawal, CEO of Positron, also highlighted Weaver’s industry impact, stating, “The future of AI acceleration requires efficiency at all levels and innovative technology to process extremely large workloads. Credo’s Weaver is instrumental in helping us solve our toughest memory challenges, enabling us to deliver the high-performance compute power for our next generation of AI inference servers.”
In addition to its performance benefits, Weaver offers flexible DRAM packaging and late-binding capabilities, allowing system integrators to tailor memory configurations to evolving AI model requirements. It also supports migration to next-generation memory protocols, ensuring long-term compatibility and value. Built-in telemetry and diagnostics further enhance reliability and system uptime, critical factors in enterprise-scale AI operations.
By introducing Weaver, Credo positions itself at the forefront of AI infrastructure innovation, paving the way for faster, more efficient, and scalable AI systems that can meet the ever-growing computational demands of the modern digital world.