AI arrived at machine speed, embedded across thousands of applications, stitched into workflows that security teams never explicitly approved. At the same time, AI adoption is no longer evenly distributed across the business.
According to Zscaler’s ThreatLabz 2026 AI Security Report, that acceleration is now outpacing the most basic forms of enterprise oversight. The report analyzed nearly one trillion AI and machine learning transactions across the Zscaler Zero Trust Exchange between January and December 2025.
What it reveals is not simply rapid adoption, but structural exposure. AI usage grew 91 percent across more than 3,400 applications, yet many organizations still cannot answer a foundational question: “Where exactly is AI running inside the enterprise?”
That gap between deployment and visibility is no longer theoretical. It is shaping a new class of machine-speed risk that traditional security models were never designed to absorb.
Zscaler ThreatLabz 2026 AI Security Report
This article is based on findings from the Zscaler ThreatLabz 2026 AI Security Report, which analyzed 989.3 billion AI and machine learning transactions observed across the Zscaler Zero Trust Exchange™ between January and December 2025.
The report details enterprise AI adoption trends, data exposure risks, agentic AI threats, compromise timelines, and the growing need for Zero Trust architecture and continuous AI governance in modern enterprises.
Rapid AI Scale Fuels Governance Crisis
Zscaler’s data shows engineering teams alone account for 48.9 percent of all AI usage, followed by IT at 31.8 percent and marketing at 6.9 percent. Finance and insurance together generate 23 percent of all AI and ML traffic, while technology and education recorded explosive year-over-year growth of 202 percent and 184 percent, respectively.
This matters because AI is not arriving as a single platform decision. It is proliferating through embedded features, third-party services, copilots, and agentic components that rarely trigger formal risk review.
Despite more than 200 percent AI usage growth in key sectors, many organizations still lack a basic inventory of AI models and embedded AI capabilities. That absence alone elevates AI governance to a board-level concern. You cannot secure, audit, or constrain what you cannot see.
Zscaler’s findings show the number of applications driving AI and ML transactions quadrupled year over year. Centralized visibility declined accordingly.
Machine-Speed AI Meets Human-Speed Defense
The most sobering data point in the report is not adoption volume. It is time to compromise.
Zscaler found that most enterprise AI systems could be compromised in as little as 16 minutes, with critical flaws uncovered in 100 percent of systems analyzed. That figure reframes AI risk from a long-term architectural concern into an operational one.
Attackers are no longer constrained by staffing costs or fatigue. Automation is driving their marginal cost toward zero, while traditional, human-dependent defenses struggle to respond in real time.
As Ram Varadarajan, CEO at Acalvio, puts it:
“Security teams can no longer depend on humans doing everything by hand. The model has to change to allow humans to direct AI-driven workflows, just as hackers do. It’s destined to be a bot-on-bot duel forevermore. Teams should start small. Pick a few high-impact workflows where AI provides scale and speed, and humans supply judgment and oversight. Assume a machine-speed AI-augmented attacker or autonomous AI attack, and defend with machine-speed AI that leverages the adversarial AI’s own vulnerabilities.
“AI-driven expansion is now outpacing the ability of traditional, human-dependent defenses to respond in real-time. Defenders are expending finite resources against adversaries whose AI automation is driving attack costs toward zero, a gap that’s not going to be closed by adding more disconnected defensive security tools. Let’s face it, clouds are going to continue to sprawl – that’s a reality. To be able to scale with the attackers, AI-first cloud security has to shift from reactive blocking to AI-driven preemptive defense. The key to scaling defense on the cloud will be to use an AI-driven, real-time deception fabric to target the known cognitive and computational limits of attacker AI, imposing asymmetric conditions of compounding uncertainty and computational exhaustion.”
AI Is Now a High-Volume Data Conduit
AI systems are not just compute endpoints. They are data sinks. Zscaler reports that data transfers to AI and ML applications surged 93 percent in 2025, totaling more than 18,000 terabytes. That volume alone paints AI platforms as prime targets for cybercriminals seeking sensitive enterprise data.
This changes the threat calculus. AI is no longer a peripheral risk layered on top of existing infrastructure. It has become a primary conduit for regulated, proprietary, and customer information.
Diana Kelley, Chief Information Security Officer at Noma Security, frames the shift:
“AI risks have rapidly moved from a watch list item to a front-line security concern, especially when it comes to data security and misuse. To manage this emerging threat landscape, security teams need a mature, continuous security approach, which includes blue team programs, starting with a full inventory of all AI systems, including agentic components as a baseline for governance and risk management.
“Securing AI in 2026 and beyond is not just about protecting models. It requires addressing stack sprawl and moving toward a platform-driven approach that delivers defense in depth through unified, AI-aware identity, configuration, and data visibility. Organizations that simplify their cloud and AI security stack and enable effective automation will be far better positioned to safely scale AI as threats continue to evolve.”
The emphasis on inventory may sound mundane. It is not. In an environment where AI capabilities are embedded across thousands of applications, inventory becomes the prerequisite for every other control.
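As a sketch of why inventory becomes the prerequisite control (all field names and application names here are hypothetical, not drawn from the report), even a minimal record per AI capability lets a team flag sensitive-data exposure that never passed risk review:

```python
from dataclasses import dataclass

# Illustrative inventory record for an embedded AI capability.
# Field names are invented for this example, not from the Zscaler report.
@dataclass
class AIAsset:
    app_name: str
    owner_team: str
    model_provider: str
    handles_sensitive_data: bool
    risk_reviewed: bool = False

def unreviewed_exposure(inventory):
    """Return assets that touch sensitive data but never passed risk review."""
    return [a for a in inventory
            if a.handles_sensitive_data and not a.risk_reviewed]

inventory = [
    AIAsset("crm-copilot", "sales", "vendor-x", handles_sensitive_data=True),
    AIAsset("code-assistant", "engineering", "vendor-y",
            handles_sensitive_data=False, risk_reviewed=True),
]
print([a.app_name for a in unreviewed_exposure(inventory)])  # ['crm-copilot']
```

A real program would populate this inventory from discovery tooling rather than by hand, but the point stands: without the record, the query is impossible.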
Agentic AI Raises the Stakes Again
If current AI deployments are difficult to govern, agentic AI compounds the challenge.
Agentic systems leverage reasoning-capable large language models to execute autonomous workflows across domains. They act, chain tasks, and adapt.
Kelley warns this is where the next wave of enterprise risk will emerge:
“I think the next wave of risk will stem from the broad adoption of agentic AI, systems that leverage the “reasoning” capabilities of LLMs to drive autonomous workflows. To prepare, organizations should implement agentic risk management, starting with established policies and standard operating procedures and supported by technical controls like cryptographic identity attestation and continuous policy enforcement for AI agents. This will allow enterprises to monitor and constrain agent autonomy to gain the benefits of agentic AI without putting the organization at unnecessary risk.”
Without identity, agent autonomy becomes unbounded. Without continuous enforcement, policy becomes advisory.
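As a rough sketch of what identity attestation plus continuous policy enforcement can mean for agents (the agent names, policy table, and HMAC-based token are all invented for illustration; production systems would use real cryptographic attestation rather than a shared demo key), a deny-by-default gate might look like:

```python
import hmac, hashlib

SECRET = b"demo-key"  # stand-in for a real cryptographic attestation scheme

def attest(agent_id: str) -> str:
    """Issue a token binding an agent to an identity (illustrative HMAC)."""
    return hmac.new(SECRET, agent_id.encode(), hashlib.sha256).hexdigest()

# Deny-by-default policy: each agent may perform only the listed actions.
POLICY = {
    "billing-agent": {"read_invoice"},
    "support-agent": {"read_ticket", "reply_ticket"},
}

def authorize(agent_id: str, token: str, action: str) -> bool:
    """Verify identity first, then check the action against policy."""
    if not hmac.compare_digest(token, attest(agent_id)):
        return False  # no attested identity, no autonomy at all
    return action in POLICY.get(agent_id, set())

tok = attest("billing-agent")
print(authorize("billing-agent", tok, "read_invoice"))    # True
print(authorize("billing-agent", tok, "delete_invoice"))  # False
```

The design choice mirrors the article's point: the identity check comes before the policy check, and an unknown agent or unlisted action falls through to denial rather than permission.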
Randolph Barr, CISO at Cequence Security, sees the same fault line forming earlier in the lifecycle:
“In the haste to bring AI to market quickly, engineering and product teams often cut corners to meet aggressive launch timelines. When that happens, basic security controls get skipped, and those shortcuts make their way into production. So, while organizations are absolutely starting to think about model protections, prompt injection, data leakage, and anomaly detection, those efforts mean little if you haven’t locked down identity, access, and configuration at a foundational level. Security needs to be part of the development lifecycle from day one, not just an add-on at the time of launch.”
Agentic AI magnifies the cost of those shortcuts.
The SOC Is Re-Architecting Around AI
Security operations centers are not waiting for perfect governance frameworks. They are adapting out of necessity.
According to the State of AI in SOC Report, cited by Kamal Shah, CEO at Prophet Security, security leaders expect AI to handle roughly 60 percent of SOC workloads within the next three years.

Shah continues: “AI speeds up the work, teams chain skills, and incentives push toward scale. Security teams should shorten time to answer with outcomes that clearly state scope, impact, affected assets, and next actions, backed by evidence the business can trust. Treat coordinated disclosure as core infrastructure with a clear VDP or bug bounty program, simple reporting, defined SLAs, safe harbor language, and consistent communication, then keep tight feedback loops with researchers because responsiveness improves report quality and reduces time to fix.”
By studying how attackers use AI to automate repetitive tasks, SOC teams can understand the tempo and velocity of modern attacks.
AI SOC tools are giving security analysts similar capabilities in handling repetitive tasks such as alert triage and investigation, freeing their time to focus on higher-priority security tasks.
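To illustrate the kind of repetitive triage work being automated, here is a toy rule-based prioritizer (the severity weights, fields, and thresholds are invented for illustration; real AI SOC tools draw on far richer signals and learned models):

```python
# Toy rule-based triage scorer. Weights and fields are hypothetical,
# meant only to show the shape of automated alert prioritization.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert: dict) -> int:
    score = SEVERITY_WEIGHT.get(alert.get("severity", "low"), 1)
    if alert.get("asset_is_crown_jewel"):
        score += 5   # high-value asset raises priority
    if alert.get("repeat_count", 0) > 3:
        score += 2   # repeated firing suggests a persistent condition
    return score

def prioritize(alerts):
    """Sort alerts so analysts see the highest-risk work first."""
    return sorted(alerts, key=triage_score, reverse=True)

alerts = [
    {"id": "a1", "severity": "low", "repeat_count": 5},
    {"id": "a2", "severity": "critical", "asset_is_crown_jewel": True},
]
print([a["id"] for a in prioritize(alerts)])  # ['a2', 'a1']
```

The value of even this crude version is the freed-up attention: analysts start from a ranked queue instead of a raw firehose.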
By integrating the reports from ethical hackers with new AI defenses, SOC teams can create a practical training ground for junior analysts, helping them transition into high-level operators who proactively hunt for threats, rather than performing manual triage.
What Zscaler’s Data Ultimately Signals
Zscaler’s 2026 findings do not argue against AI adoption. They expose the cost of ungoverned acceleration. AI is expanding faster than enterprise oversight. Attackers are automating faster than defenders. Data is flowing into AI systems at volumes security teams did not plan for.
The next phase of AI security will not be won by better prompts or tighter model tuning. It will hinge on visibility, identity, and machine-speed defense strategies that accept a simple truth.
AI has already crossed the line from experimental technology to core infrastructure. Security strategies that still treat it as optional are already behind.
AI Tech Insights Analysis
Zscaler’s data exposes an organizational mismatch. Enterprises are still structured around the idea that technology adoption happens in phases. Pilot. Review. Scale. Secure. AI does not follow that sequence. It arrives embedded, federated, and continuously updated across vendors, teams, and workflows. By the time security teams detect it, it is already operational.
That is why inventory keeps surfacing as the first failure point. Not because enterprises lack security maturity, but because ownership of AI has become diffuse by design. Engineering integrates AI for velocity.
Product teams ship AI to stay competitive. Business units adopt AI features automatically bundled into SaaS platforms. No single group experiences the full risk surface, yet the organization absorbs it collectively.
This is also why model-centric security narratives fall short. The most consequential risks highlighted in Zscaler’s data (compromise in minutes, exploding data transfers, agentic autonomy) do not originate from model weights or prompt manipulation alone. They originate from identity gaps, configuration drift, and invisible execution paths. In that sense, AI security is converging with cloud security’s hardest lesson.
FAQs
1. Why is AI adoption creating new security risks for enterprises in 2026?
AI adoption is expanding faster than enterprise oversight. AI is now embedded across thousands of applications and workflows, often without centralized visibility, which increases the attack surface and accelerates breach timelines beyond what human-driven security teams can manage.
2. What does Zscaler’s AI security data reveal about enterprise readiness?
Zscaler’s data shows that many enterprises lack a basic inventory of AI systems despite rapid adoption. This visibility gap prevents effective governance, weakens identity and access controls, and leaves AI environments vulnerable to machine-speed attacks.
3. Why are traditional security tools struggling to protect AI systems?
Traditional tools rely on human-paced investigation and reactive blocking. AI-driven attacks operate autonomously and at machine speed, making manual workflows and disconnected point solutions ineffective against automated adversaries.
4. How does agentic AI change the enterprise risk profile?
Agentic AI introduces autonomous decision-making and workflow execution across systems. Without strong identity, policy enforcement, and continuous monitoring, agentic AI can act beyond intended boundaries, amplifying operational and security risk.
5. What should boards and executives prioritize to secure enterprise AI?
Executives should prioritize AI visibility, identity-centric controls, and platform-based security architectures. AI governance must be treated as a core enterprise risk issue, not a technical afterthought, with accountability extending beyond individual teams.
Discover the future of AI, one insight at a time – stay informed, stay ahead with AI Tech Insights.
To share your insights, please write to us at info@intentamplify.com