Zscaler’s 2026 AI Security Report: 989.3B AI/ML Transactions, 91% Growth—And 100% of Enterprise AI at Risk
If AI is the new electricity, enterprises just plugged the grid into a lightning storm. Zscaler’s ThreatLabz has released its 2026 AI Security Report, and the headline is as astonishing as it is alarming: nearly a trillion AI/ML transactions surged through enterprises in 2025, growing 91% year over year—while 100% of analyzed enterprise AI systems contained critical vulnerabilities. Many could be compromised in under 16 minutes.
Let that sink in. AI adoption is exploding across every industry. Data is flowing into AI tools at historic levels. And the guardrails? Often still under construction.
In this deep dive, we unpack the key findings from the Zscaler report, what they mean for your AI program, how zero trust principles now apply directly to prompts, responses, and agent actions, and why Zscaler’s new AI Security Suite could become a cornerstone for safe AI adoption. If you’re a CISO, CIO, data leader, or board member, consider this your field guide to navigating AI at machine speed—safely.
For context, the 2026 AI Security Report is analyzed from the Zscaler Zero Trust Exchange, offering an unparalleled vantage point into enterprise traffic patterns. You can explore Zscaler’s research hub at ThreatLabz and read coverage of the findings via 247wallst.com.
The View From the Edge: What Zscaler Analyzed and Why It Matters
Zscaler’s Zero Trust Exchange processes massive volumes of enterprise traffic daily, which gives ThreatLabz a unique lens into AI/ML usage across organizations worldwide. The 2026 report covers January through December 2025, capturing AI/ML transactions crossing Zscaler’s cloud security fabric. That vantage point isn’t theoretical—it’s inline and real-time, where prompts, responses, and data flows actually occur.
Why this matters: – Ground truth beats survey data. This isn’t a perception study. It’s a behavioral analysis of how enterprises are using AI—and how attackers are exploiting it. – AI traffic now touches everything. From developer copilots and customer care bots to finance forecasting and manufacturing QA, AI has become the connective tissue of modern workflows. – Traditional perimeters can’t cope. With the rise of agents and tool integrations, the number of AI endpoints and actions has multiplied past what network-centric defenses can realistically govern.
Key Findings by the Numbers
Here are the standout data points from the Zscaler 2026 AI Security Report:
- 989.3 billion AI/ML transactions in 2025 across the Zscaler Zero Trust Exchange.
- 91% year-over-year growth in enterprise AI activity.
- 100% of analyzed enterprise AI systems contained critical vulnerabilities.
- Many systems could be compromised in under 16 minutes from initial contact.
- 18,033 terabytes of data transferred to AI/ML applications (a 93% YoY increase).
- ChatGPT alone accounted for 410 million data loss prevention (DLP) policy violations.
- Finance & Insurance led all sectors in AI/ML traffic at 23.3%, with Manufacturing close behind at 19.5%.
- Enterprises blocked 39% of AI/ML transactions due to security concerns.
- The report warns that AI has become a primary vector for autonomous, machine-speed conflict, with adoption outpacing oversight.
Together, these findings draw a clear line: AI is now mission-critical—and so are the risks when visibility and control don’t keep up.
Why AI Usage Is Surging—and Why Risk Is, Too
AI adoption has crossed a threshold: – Ubiquity of tools: From coding copilots to generative design, AI is embedded in workflows. – Accessibility: Browser-based UIs and APIs let anyone pilot AI, often without IT sign-off. – Value velocity: Teams get tangible wins in hours, not months—productivity, insights, automation.
But with acceleration comes exposure: – Shadow AI: Unapproved or unknown tools siphon sensitive data (customer lists, source code, financial models) outside defenses. – Agent sprawl: AI agents that can browse, execute tools, or access internal systems expand the blast radius of a single risky action. – Data gravity: AI centralizes data for context, but that aggregation becomes a high-value target.
Result: Attackers pivot to AI because it’s where the data—and the speed—now live.
Industries Leading AI/ML Traffic: Finance & Insurance and Manufacturing
The report shows two sectors topping AI/ML traffic: – Finance & Insurance (23.3%): Heavy use of predictive analytics, fraud detection, customer service automation, and risk modeling. These data-rich environments make strong AI performance—and strong security—non-negotiable. – Manufacturing (19.5%): AI-powered quality control, predictive maintenance, digital twins, and supply chain optimization require constant data movement between plant floors, cloud services, and third-party vendors.
In both sectors, the combination of regulated data, complex supply chains, and the introduction of autonomous agents heightens the stakes. A single misrouted prompt or compromised plugin can cascade into costly downtime, compliance violations, or investor scrutiny.
Machine-Speed Threats: From Prompt Injection to Data Exfiltration
ThreatLabz frames AI as a primary vector for “autonomous, machine-speed conflict.” That’s not hyperbole. Consider the attack surfaces unique to AI:
- Prompt injection and jailbreaks: Malicious content coaxes models or agents to reveal secrets, bypass safety filters, or execute unintended actions.
- Data exfiltration via responses: Sensitive data pasted into prompts can echo in logs, training sets, or outputs to unauthorized destinations.
- Tool and plugin abuse: Over-permissioned tools or connectors let an AI agent retrieve, write, or exfiltrate data across SaaS, code repos, and internal systems.
- Model supply chain risks: Unvetted models, datasets, and third-party APIs can introduce hidden backdoors or poisoning.
- Identity gaps: Many AI tools trust the browser session or API key rather than strong, continuous user and device identity.
- Lateral movement via agents: Once an agent is coerced, it can pivot across integrations—fast.
Now layer in the report’s finding that many enterprise AI systems can be compromised in under 16 minutes. That shrinks your detection and response window to “real time or bust.”
For foundational guidance on AI risk management, see the NIST AI Risk Management Framework and the OWASP Top 10 for LLM Applications.
39% of AI/ML Transactions Were Blocked—What That Signals
Enterprises blocked 39% of AI/ML transactions due to security concerns. That’s both reassuring and revealing: – Reassuring: Security and compliance controls are firing. Organizations are attempting to put boundaries around AI usage. – Revealing: Current AI traffic includes substantial volumes of risky or non-compliant activity—unapproved tools, sensitive data flows, or anomalous agent behaviors.
The takeaway is not to “block more.” It’s to be precise—apply identity-driven, context-aware policies that allow beneficial AI use while preventing data exfiltration and unauthorized actions.
ChatGPT Triggered 410 Million DLP Violations
One of the report’s most eye-opening stats: ChatGPT alone accounted for 410 million DLP policy violations. Why? – Convenience: Employees paste “just a snippet” of sensitive data to get faster answers. – Lack of controls: Without inline classification and redaction, PII, source code, contracts, and keys can slip into prompts. – Misunderstood risk: Users may not realize how prompts, logs, or model memory can propagate sensitive information.
Practical steps: – Classify and tag sensitive data at the source. Don’t rely solely on downstream detection. – Enforce inline DLP on AI egress—redact or block sensitive fields before they leave your environment. – Offer safe alternatives (e.g., approved enterprise AI endpoints with guardrails). – Educate continuously—and verify via analytics and coaching, not just policy PDFs.
For baseline acceptable-use guidance, reference vendor policies like OpenAI’s usage policies alongside your internal governance.
Zero Trust for AI: What It Actually Looks Like
Zero trust isn’t a buzzword here—it’s table stakes for safe AI adoption. Applied to AI, zero trust means:
- Identity first: Verify the user, device posture, and workload identity for every AI interaction—not just at login.
- Least privilege for prompts and tools: Scope what a user or agent can do, with explicit permissions for tool use, data access, and action execution.
- Inline inspection of prompts, responses, and actions: Classify data, detect sensitive content, and prevent bad egress while preserving productivity.
- Microsegmented egress: Route AI traffic through brokered, policy-enforced paths—no direct-to-internet data dumps.
- Continuous risk evaluation: Adapt policies in real time based on sensitivity, behavior anomalies, and threat intel.
- Visibility end-to-end: Track who asked what, where data flowed, what the model returned, and which tools were invoked.
This is where a platform designed around zero trust shines—identity, context, and inline controls working together, at speed.
To learn more about the model, see Zscaler’s overview of Zero Trust.
Inside Zscaler’s AI Security Suite
In response to the report’s findings, Zscaler unveiled its AI Security Suite to secure enterprise AI adoption by applying zero-trust principles directly to AI interactions. According to Zscaler, the suite is designed to:
- Provide visibility into prompts, responses, and data flows for approved AI/ML applications.
- Prevent unauthorized actions and data exfiltration in real time.
- Apply identity-driven controls that operate at machine speed.
- Address the explosion of AI endpoints and agent actions that traditional perimeter defenses can’t handle.
- Block lateral threats and outbound data leaks by enforcing least privilege and continuous verification.
In effect, it aims to give organizations the same level of granular control they expect for traditional apps—now extended to the unique behaviors of AI models and agents. If your strategy is to embrace AI without accepting uncontrolled risk, these are the types of controls you’ll need wired into every interaction.
You can follow Zscaler’s ongoing research and product updates at ThreatLabz.
A 90-Day Roadmap to Safer AI Adoption
You don’t need to boil the ocean. Here’s a pragmatic sequence to reduce risk fast while keeping innovation alive.
Phase 1: Inventory and Contain (Weeks 1–4) – Discover AI/ML traffic: Identify all AI endpoints, apps, and agents in use (sanctioned or shadow). – Classify sensitivity: Map which business units are sending what data types to which AI tools. – Ringfence egress: Route AI traffic through a controlled broker; block direct-to-internet access for unknown AI domains. – Quick wins: Block unsanctioned tools with high risk; enable approved alternatives with clear usage guidelines.
Phase 2: Guardrails and Governance (Weeks 5–8) – Inline DLP: Inspect prompts and responses; redact or block PII, secrets, source code, or regulated data. – Identity and device posture: Enforce strong auth, device health checks, and user risk scoring for AI access. – Policy-based controls: Set allow/deny rules by user group, data classification, and AI app reputation. – Prompt and action logging: Capture who asked what, what was returned, and which tools were invoked for auditability.
Phase 3: Optimize and Scale (Weeks 9–12) – Fine-tune policies: Review violation data; reduce false positives without opening risky paths. – Developer guardrails: For internal LLM apps, enforce input/output filters, model confinement, and safe tool invocation patterns. – Vendor and model governance: Vet third-party models and plugins; standardize contracts around data handling and retention. – Train and reinforce: Role-based education for high-risk functions (engineering, finance, legal, HR), with just-in-time coaching.
Metrics That Matter for AI Security
Track these to quantify progress: – DLP violation rate by app (e.g., ChatGPT, code copilots) and by data class. – Percentage of AI traffic sanctioned vs. unsanctioned—and the reduction trend. – Time to detect risky AI interactions (goal: seconds) and time to block (goal: real-time). – Volume of sensitive data redacted vs. blocked (optimize for enablement with safety). – Policy coverage: Users, devices, and locations under AI access governance. – Approved agent actions: Percent executed with least privilege and explicit authorization.
These KPIs bridge security and business outcomes—fewer incidents, faster adoption, and better compliance posture.
What Boards and Executives Should Ask
- Where is our AI traffic going today, and who is using it?
- What sensitive data is leaving our environment via prompts or agent actions?
- Which controls do we have inline at the prompt/response layer?
- How quickly can we detect and stop a compromised AI interaction?
- Are we applying identity-driven zero trust to all AI endpoints, including agents and plugins?
- Do we have a clear path to enabling safe AI for every business unit—not just blocking?
The Investment Lens: Why Platforms That Secure AI May Win
Coverage of the report by 247wallst.com highlights the market implication: as AI adoption outpaces oversight, vendors that can secure AI at scale stand to benefit. While this article focuses on security strategy (not investment advice), the macro is straightforward—AI is now a core IT and risk budget line, and boards are seeking platforms with proven visibility and control.
Frequently Asked Questions
What is an “AI/ML transaction” in this context? – It generally refers to a discrete interaction with an AI/ML service—such as a prompt, model inference call, agent action invocation, or related API request traversing the security platform.
What does “100% of analyzed enterprise AI systems contained critical vulnerabilities” actually mean? – According to Zscaler’s ThreatLabz report, every analyzed enterprise AI system exhibited at least one critical vulnerability. The specifics can vary (configuration, access control, plugin/tool permissions, data handling), but the consistent presence of critical issues underscores systemic risk as adoption scales.
Is blocking 39% of AI/ML traffic a sign we should slow down AI? – Not necessarily. It indicates that many AI interactions are risky under current policies. The goal is to refine controls—use identity, data classification, and context to enable safe, compliant usage while preventing exfiltration and unauthorized actions.
Why are DLP violations so high with ChatGPT? – Convenience and lack of guardrails. Employees often paste sensitive snippets to accelerate work, and without inline inspection/redaction, those snippets trip DLP rules. Education plus real-time controls is the remedy.
Does “enterprise ChatGPT” or any single vendor solution solve data leakage? – Enterprise offerings can improve controls and retention policies, but they’re not a silver bullet. You still need identity-centric access, inline DLP, tool permissioning, and visibility across all AI endpoints and agents.
How do zero trust principles apply to AI differently than to traditional apps? – AI involves free-form prompts, dynamic tool invocation, and emergent agent behaviors. Zero trust for AI therefore must inspect and govern prompts, responses, and actions—not just network connections—while continuously verifying identity and data sensitivity.
What’s the right balance between blocking and enabling? – Start by enabling sanctioned AI tools with strong guardrails: brokered access, inline DLP, least-privilege tool use, and full logging. Block only what you can’t secure or justify. Over time, tune policies using telemetry to maximize safe productivity.
How should we approach third-party models and plugins? – Treat the AI supply chain like software supply chain security: vet providers, review data handling and retention, pin versions, limit scopes, and monitor behavior continuously. Block unknown or high-risk plugins by default.
What frameworks can help us structure AI risk? – The NIST AI Risk Management Framework and the OWASP Top 10 for LLM Applications provide actionable guidance for governance, technical controls, and secure design patterns.
Where can I learn more about Zscaler’s research? – Visit Zscaler ThreatLabz for research, advisories, and report updates.
The Clear Takeaway
AI isn’t coming—it’s here, at staggering scale. In 2025 alone, enterprises pushed 989.3 billion AI/ML transactions through their systems, with data flows up 93% and usage up 91% year over year. Yet the security gap is undeniable: every analyzed enterprise AI system had critical vulnerabilities, many exploitable in under 16 minutes. ChatGPT triggered 410 million DLP violations. Nearly four in ten AI transactions were blocked for security reasons.
This isn’t a call to slam on the brakes. It’s a mandate to steer with precision. Apply zero trust to AI itself—prompts, responses, actions—using identity-driven, real-time controls that inspect and enforce at the point of interaction. That’s the promise of Zscaler’s AI Security Suite and the trajectory forward for any organization determined to harness AI safely.
Move fast, but don’t break trust. With the right guardrails, you can scale AI, protect your data, and keep pace with innovation—at machine speed and with enterprise-grade assurance.
Discover more at InnoVirtuoso.com
I would love some feedback on my writing so if you have any, please don’t hesitate to leave a comment around here or in any platforms that is convenient for you.
For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring!
Stay updated with the latest news—subscribe to our newsletter today!
Thank you all—wishing you an amazing day ahead!
Read more related Articles at InnoVirtuoso
- How to Completely Turn Off Google AI on Your Android Phone
- The Best AI Jokes of the Month: February Edition
- Introducing SpoofDPI: Bypassing Deep Packet Inspection
- Getting Started with shadps4: Your Guide to the PlayStation 4 Emulator
- Sophos Pricing in 2025: A Guide to Intercept X Endpoint Protection
- The Essential Requirements for Augmented Reality: A Comprehensive Guide
- Harvard: A Legacy of Achievements and a Path Towards the Future
- Unlocking the Secrets of Prompt Engineering: 5 Must-Read Books That Will Revolutionize You
