SANS Institute spearheads a coordinated effort to tackle the growing security challenges of AI. The initiative includes new guidelines, a cybersecurity hackathon, and an AI summit.
Organizations that integrate Artificial Intelligence into their workforce and offerings are accelerating innovation, but many are unprepared for the security challenges that come with it. As they rush to deploy more efficient and cost-effective models, they often overlook the risks of model manipulation and adversarial attacks, threats that traditional defenses are not equipped to detect or stop. At the same time, many leaders are still grappling with how to safely and securely operationalize AI across their environments. As AI becomes deeply embedded in both business operations and critical infrastructure, the risks are expanding rapidly and at a global scale.
To help organizations navigate these risks and assist them in taking back control, the SANS Institute is launching a major initiative. SANS announced the upcoming release of its Critical AI Security Guidelines v1.0, a practical, operations-driven framework built for defenders and leaders who need to secure AI systems now. The guidelines will debut at the SANS AI Summit 2025 and focus on six critical areas: Access Controls, Data Protection, Deployment Strategies, Inference Security, Monitoring, and Governance, Risk and Compliance. They are designed to provide security teams and leadership with clear, practical direction for defending AI systems in real-world environments. Each section provides actionable recommendations to help organizations identify, mitigate, and manage the risks associated with modern AI technologies. Once released, the guidelines will be open to community feedback, allowing practitioners, researchers, and industry leaders to contribute insights and updates as threats evolve and new best practices emerge.
“We’re seeing organizations deploy large language models, retrieval-augmented generation, and autonomous agents faster than they can secure them,” said Rob T. Lee, Chief of Research and Co-Chair of the SANS AI Summit. “These guidelines are built for where the field is now. They’re not theoretical; they’re written for analysts and leaders in the trenches, who need to protect these systems starting today.”
As AI technologies become central to every aspect of business operations, the need for open-source tools to augment security teams and new capabilities to help secure AI has never been greater. To address this, the SANS AI Cybersecurity Hackathon invited the cybersecurity community to design open-source tools directly aligned with the new security guidelines. This unique event challenged participants to develop innovative solutions for protecting AI models, monitoring inference processes, defending against adversarial attacks, and addressing other vulnerabilities unique to AI systems. The tools produced during the hackathon will be showcased at the AI Summit, providing tangible, real-world solutions for organizations.
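To make the hackathon's task areas concrete, here is a minimal, hypothetical sketch of what an inference-monitoring tool in this vein might look like: a filter that logs every request and flags common prompt-injection phrases before they reach a model. The patterns and function names are illustrative assumptions for this article, not part of the SANS guidelines or any hackathon entry.

```python
import re

# Illustrative prompt-injection indicators; a real tool would use a
# maintained, much larger pattern set and model-based classifiers.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"reveal your (system )?prompt",
]

def flag_prompt(prompt: str) -> list[str]:
    """Return the injection patterns matched by this prompt."""
    lowered = prompt.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

def monitor_inference(prompt: str, audit_log: list[dict]) -> bool:
    """Log every request; return False (block) if any pattern matches."""
    hits = flag_prompt(prompt)
    audit_log.append({"prompt": prompt, "flags": hits})
    return not hits
```

Even a toy like this illustrates two of the guideline areas at once: Inference Security (screening inputs before they reach the model) and Monitoring (keeping an audit trail of every request and why it was allowed or blocked).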
“We need more people who understand how AI works under the hood and how to defend it,” said Kate Marshall, SANS AI Hackathon Director and Co-Chair of the SANS AI Summit. “The hackathon is already making a difference. It’s not just creating tools; it’s showcasing talent, and that’s exactly what we need to secure AI systems for the future.”
The hackathon is a powerful step in addressing the growing AI skills gap, providing participants with hands-on experience and direct mentorship from top AI security experts. With the growing demand for AI security professionals, initiatives like this are critical in ensuring that the talent pipeline is ready to meet the needs of the industry. Winning tools will not only receive visibility and support but also become integral to helping organizations implement security guidelines effectively.
These collective efforts will culminate at the SANS AI Summit 2025 on March 31st, where leaders from cybersecurity, AI development, and policy will gather to launch the guidelines and explore how to secure AI systems in real-world applications. The summit will feature in-depth discussions of the guidelines' implementation, live demonstrations of the winning hackathon tools, and sessions on AI security challenges in sectors such as government, healthcare, and critical infrastructure. It's here that these efforts come together, with the guidelines, hackathon projects, and summit conversations forming a comprehensive, actionable roadmap for securing AI.
“We’ve reached a point where this kind of work isn’t optional,” said Rob T. Lee. “The industry needed something central, someplace trusted, to rally around AI security. We need real controls, real tools, and a way to grow the skills that will protect the world. That’s what this is. It’s not about SANS. It’s about coming together as a community to get this right.”
By combining the release of the Critical AI Security Guidelines, the momentum of the AI Cybersecurity Hackathon, and the collaborative education at the AI Summit, SANS is marking a pivotal moment for the industry and creating a space where AI professionals can unite, innovate, and build the future of secure AI together.
Source – GlobeNewswire