Cerebras Systems, together with G42’s Inception and MBZUAI’s Institute for Foundation Models (IFM), has officially launched Jais 2, the most advanced open-source Arabic large language model to date. This release marks a historic achievement, as Jais 2 becomes the first frontier-level language model to be both trained and deployed for inference entirely on Cerebras hardware. By combining deep AI expertise with Cerebras’ wafer-scale compute clusters, the teams achieved state-of-the-art performance while using only a fraction of the compute typically required for models of this size.
In a significant milestone for AI infrastructure, Natalia Vassilieva, VP and Field CTO at Cerebras, highlighted, “This marks the first time a frontier-grade LLM has been trained end-to-end and deployed in production for inference on Cerebras hardware, demonstrating a new, efficient blueprint for sovereign AI development.” She emphasized that training and serving a large-scale Arabic model on a unified compute architecture not only streamlined operations but also reduced overall costs and accelerated deployment timelines.
The Jais 2 family, which now includes new 8B and 70B parameter models, was trained exclusively on Cerebras wafer-scale clusters, giving the machine learning workflows access to exceptional compute, memory, and bandwidth. As a result, the Jais 2 chat application now runs inference directly on Cerebras systems, reaching speeds of up to 2,000 tokens per second and positioning it among the fastest LLMs globally.
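As a rough illustration of what a throughput figure like that means in practice, the sketch below queries a Cerebras-hosted chat model through the Cerebras Cloud SDK and estimates client-side token throughput. Whether Jais 2 is exposed through the public Cerebras inference API, and under what model name, is an assumption here; the `jais-2-70b` slug is a placeholder.

```python
# Minimal sketch: call a Cerebras-hosted chat model and estimate tokens/sec.
# Assumption: Jais 2 is reachable via the public Cerebras inference API under
# a slug like "jais-2-70b" (placeholder, not confirmed by the announcement).
import os
import time

from cerebras.cloud.sdk import Cerebras  # pip install cerebras-cloud-sdk

client = Cerebras(api_key=os.environ["CEREBRAS_API_KEY"])

start = time.perf_counter()
response = client.chat.completions.create(
    model="jais-2-70b",  # placeholder model name
    messages=[{"role": "user", "content": "اكتب بيتاً من الشعر عن الصحراء."}],
)
elapsed = time.perf_counter() - start

# Rough client-side estimate: generated tokens divided by wall-clock time,
# which includes network latency, so it will undershoot the server-side rate.
completion_tokens = response.usage.completion_tokens
print(response.choices[0].message.content)
print(f"~{completion_tokens / elapsed:.0f} tokens/sec (client-side estimate)")
```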
Designed for the Arab World, Beyond Western-Centric Models
Although more than 400 million people speak Arabic, most frontier AI models remain heavily optimized for English and Western cultural contexts. Arabic’s rich linguistic structure and complex cultural diversity have historically been underrepresented in AI training data. Jais 2 directly addresses this long-standing gap by delivering a frontier-grade model deeply aligned with Arabic culture, dialects, and context.
Professor Preslav Nakov of MBZUAI underscored this achievement, stating, “Arabic has long been underserved in AI development due to limited high-quality data for training large language models… By dramatically expanding the quality and diversity of Arabic data, we created a foundation that reflects the richness of the Arabic language.”
Large Western models often struggle with culturally nuanced tasks such as dialect variation, honorific norms, religious reasoning, and region-specific humor, while earlier Arabic models have lacked the scale needed for advanced reasoning. Jais 2 bridges both challenges, combining frontier-level intelligence with native cultural grounding. The model performs exceptionally well in domains central to Arab culture, including poetry, cuisine, religion, and even dream interpretation.
A Major Leap Forward from the Original Jais Series
Jais 2 builds on the foundation laid by the original bilingual Jais models. The latest release delivers:
- New 8B and 70B parameter models
- A redesigned training architecture
- A significantly expanded Arabic-first dataset
- A more rigorous fine-tuning and alignment process
- Industry-leading performance on the AraGen Arabic leaderboard
Notably, Jais 2 70B sets a new standard for accuracy among Arabic LLMs while maintaining strong multilingual reasoning capabilities.
Jais 2 is now publicly available on Hugging Face, and the enhanced Jais Chat app is live across web, iOS, and Android, making advanced Arabic AI more accessible than ever.
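For readers who want to experiment with the open weights, the sketch below shows one way to load a Jais 2 checkpoint with the Hugging Face transformers library. The repository id `inceptionai/jais-2-8b` is a placeholder assumption, as is the `trust_remote_code` flag (which earlier Jais releases required); check the official model card for the exact name and loading instructions.

```python
# Minimal sketch: load a Jais 2 checkpoint from Hugging Face and generate text.
# Assumption: the repo id "inceptionai/jais-2-8b" is a placeholder; substitute
# the actual model card published by the Jais team.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inceptionai/jais-2-8b"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit the 8B model on one GPU
    device_map="auto",            # requires the accelerate package
    trust_remote_code=True,       # earlier Jais releases needed this; may not apply here
)

prompt = "ما هي عاصمة دولة الإمارات العربية المتحدة؟"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```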


