The Cloud Security Alliance (CSA), in collaboration with Noma Security, Harmonic Security, and Haize Labs, announced the general availability of RiskRubric.ai, the first AI model risk leaderboard. This innovative platform provides security assessments for hundreds of large language models (LLMs) based on six critical pillars: transparency, reliability, security, privacy, safety, and reputation.
RiskRubric.ai now serves as a free resource for AI builders and users who must innovate rapidly with AI while maintaining robust security. Engineering teams often encounter weeks-long approval bottlenecks, while security teams lack specialized tools to evaluate AI-specific risks. RiskRubric.ai addresses this gap by offering instant, actionable risk grades for the most commonly deployed enterprise models, taking the guesswork out of AI model risk assessment.
Addressing the AI Trust Crisis at Scale
The platform rigorously evaluates hundreds of leading AI models using more than 1,000 reliability prompts, over 200 adversarial security tests, automated code scans, and detailed documentation reviews. Each model receives objective scores from 0-100 across the six pillars, which then convert into A-F letter grades. This structure enables rapid risk assessment without requiring deep AI expertise.
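To make the scoring structure concrete, the minimal sketch below shows how a 0-100 pillar score could be converted into an A-F letter grade across the six pillars. The pillar names come from the announcement; the grade cutoffs, the averaged "overall" grade, and the example scores are assumptions made purely for illustration, since RiskRubric.ai has not published its exact boundaries or aggregation rules.

```python
# Illustrative sketch of RiskRubric.ai-style scoring: each pillar gets a
# 0-100 score that is converted into an A-F letter grade. The cutoffs and
# aggregation below are assumptions for demonstration, not the platform's
# published methodology.

# Pillar names as listed in the announcement.
PILLARS = ["transparency", "reliability", "security", "privacy", "safety", "reputation"]

# Hypothetical grade boundaries (assumed, not confirmed by RiskRubric.ai).
GRADE_CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]


def score_to_grade(score: float) -> str:
    """Map a 0-100 score to a letter grade using the assumed cutoffs."""
    for cutoff, grade in GRADE_CUTOFFS:
        if score >= cutoff:
            return grade
    return "F"


def grade_model(pillar_scores: dict[str, float]) -> dict[str, str]:
    """Return per-pillar letter grades plus an (assumed) overall grade
    computed from the average of the six pillar scores."""
    grades = {pillar: score_to_grade(pillar_scores[pillar]) for pillar in PILLARS}
    grades["overall"] = score_to_grade(sum(pillar_scores[p] for p in PILLARS) / len(PILLARS))
    return grades


if __name__ == "__main__":
    # Fictional scores for a made-up model, for illustration only.
    example = {"transparency": 88, "reliability": 93, "security": 76,
               "privacy": 81, "safety": 90, "reputation": 85}
    print(grade_model(example))
```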
“Every AI-forward organization faces two critical challenges: how to embed meaningful security into model selection, and how to confidently communicate AI risks to stakeholders. Without standardized risk assessments, teams are essentially flying blind,” said Niv Braun, CEO and Co-Founder of Noma Security. “RiskRubric.ai is an excellent starting point on the path to more mature and secure AI for both enterprise cybersecurity teams and AI innovators. Contextualized, evidence-based LLM risk intelligence will direct model selection so CISOs can more confidently speak to AI risk with concrete metrics, and engineering teams can accelerate AI innovation. This collaborative effort with CSA and our industry partners represents a watershed moment as we make AI model security a reality through accessibility and transparency.”
The launch comes as AI agents gain autonomy and access to critical business systems, a shift that is outpacing traditional security frameworks. Currently, RiskRubric.ai covers 150+ popular AI models, including GPT-4, Claude, Llama, Gemini, and specialized enterprise models, with continuous updates planned.
“The rapid adoption and evolution of AI has created an urgent need for a standardized model risk framework that the entire industry can trust,” said Caleb Sima, Chair of the CSA AI Safety Initiative. “RiskRubric.ai embodies CSA’s mission to deliver AI security best practices, tools and education to the cybersecurity industry at large. By providing transparent, vendor-neutral assessments free to the community, we’re ensuring that organizations of all sizes can make informed decisions about AI development and deployment. This isn’t only about identifying model risk, it’s about enabling responsible AI innovation at scale.”
Industry Collaboration Drives Comprehensive AI Model Assessment
Noma Security leads the technical development of RiskRubric.ai, leveraging its experience securing millions of AI interactions monthly. “Our platform doesn’t just identify risks; it provides the actionable intelligence teams need to mitigate AI risk through posture management and runtime protection, all in real time,” said Gal Moyal, Noma Security CTO.
Michael Machado, RiskRubric.ai Product Lead, added, “We’ve developed an assessment framework that scales from evaluating a single model in minutes to continuously monitoring hundreds of models as they evolve. What excites me most is seeing security teams go from spending weeks on manual model reviews to getting comprehensive risk intelligence instantly. This isn’t just a leaderboard, it’s an AI ops transformation that aligns AI governance with AI innovation.”
Haize Labs contributed advanced red-teaming methods. “The black-box nature of modern AI systems demands sophisticated testing approaches that go beyond traditional security assessments,” said Leonard Tang, CEO of Haize Labs.
Meanwhile, Harmonic Security guided privacy assessments. “RiskRubric.ai’s privacy pillar leverages our expertise in detecting sensitive data exposure risks, helping organizations understand not just whether a model is secure, but whether it can be trusted with their most sensitive information,” said Alastair Paterson, CEO of Harmonic Security.
By uniting these industry leaders, RiskRubric.ai sets a new standard for transparent, actionable AI security, empowering enterprises to innovate responsibly and securely.
