Agentic AI vs. Machine‑Speed Cyber Attacks: Why Torq’s John White Says It’s Time to Re-Architect Security
What happens when attackers move faster than your best analysts can type? When a phish turns into a session hijack in 90 seconds, when an OAuth token is abused before the SIEM ingests the alert, and when privilege escalation completes while your playbook is still routing a ticket? That’s the unsettling reality security leaders are facing—and it’s why Torq’s John White is arguing that CISOs must reorganize around agentic AI now, not later.
In a recent perspective covered by Help Net Security, White makes a stark case: to restore parity against machine-speed adversaries, security needs agentic AI—systems that pursue outcomes autonomously, orchestrate across tools, and only pull humans in for high-impact judgment calls. It’s not just a tooling change; it’s an organizational design choice that rewards early adopters and leaves laggards more exposed as attackers continue to scale and adapt.
If that sounds like a leap, it is. But it’s also a pragmatic one. Below, we unpack what “agentic AI” really means in a security operations context, why traditional approaches are falling short, and how CISOs can implement this shift—safely, measurably, and fast.
For context, see the original coverage: Torq’s John White on agentic AI and machine-speed defense.
The New Normal: Machine-Speed Offense
Attackers are no longer pacing themselves around human workflows.
- Automated initial access: Phishing kits, MFA fatigue tooling, and token theft are packaged, commoditized, and fast.
- Adaptive execution: Scripting and living-off-the-land tactics chain quickly and pivot based on host responses and identity posture.
- Supply chain leverage: One compromise upstream can fan out to dozens of victims before manual investigations even begin.
- LLM-enabled social engineering: Convincing, contextual lures now scale across broad target lists, raising click-through and reply rates.
Meanwhile, defenders are constrained by queues, shift changes, and approvals. Even mature programs saw strain in 2025 from high-tempo incidents attributed to social-engineering-centric groups and supply-chain vectors. When the attacker’s loop takes seconds and your defensive loop takes minutes or hours, the scoreboard is predictable.
To close that gap, you need a system that can sense, decide, and act at machine speed—without losing the governance, context, and accountability enterprises require. That’s the promise of agentic AI.
What Agentic AI Means for Security (and What It Doesn’t)
Agentic AI refers to AI-driven systems that pursue defined goals, plan multi-step actions, and execute them across tools—while observing guardrails, policies, and human oversight thresholds. In security operations, think of agentic AI as:
- Outcome-oriented: “Contain malicious email blast” or “remove suspect OAuth grant” rather than “run these three scripts.”
- Context-aware: Correlates identity, device, cloud, network, and ticket history to qualify confidence and decide next steps.
- Autonomously orchestrated: Invokes EDR, IAM, email, CASB, cloud, and collaboration tools without a human stitching steps.
- Governed and observable: Every action logged, explainable, reversible, and constrained by policy.
What it’s not:
- Not “set-and-forget” autonomy. Guardrails, kill switches, and approval thresholds are table stakes.
- Not a black box. Explainability, testability, and versioned automation code separate enterprise-grade agents from risky experiments.
- Not just SOAR with a new name. Traditional SOAR runs static playbooks. Agentic AI plans and adapts in real time to achieve an outcome, with policy-as-code and risk-aware decisioning at its core.
If you’re familiar with MITRE ATT&CK, the shift is from “if detection X, run response Y” toward “given tactics observed and confidence Z, achieve containment of technique T across impacted identities, devices, and tenants”—continuously, until the outcome is verified.
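The contrast can be sketched in a few lines of Python. In this toy model (all names, such as IncidentState and plan_next_action, are hypothetical), the agent picks its next step from the current state and keeps going until an outcome predicate verifies containment, rather than replaying a fixed sequence:

```python
from dataclasses import dataclass

# Toy environment state for a token-abuse containment scenario (hypothetical).
@dataclass
class IncidentState:
    sessions_active: bool = True
    oauth_grant_present: bool = True
    mailbox_rules_clean: bool = False

def revoke_sessions(s: IncidentState) -> None:
    s.sessions_active = False

def revoke_oauth_grant(s: IncidentState) -> None:
    s.oauth_grant_present = False

def remove_mailbox_rules(s: IncidentState) -> None:
    s.mailbox_rules_clean = True

def contained(s: IncidentState) -> bool:
    """Outcome predicate: verified containment, not 'playbook finished'."""
    return (not s.sessions_active
            and not s.oauth_grant_present
            and s.mailbox_rules_clean)

def plan_next_action(s: IncidentState):
    """Choose the next step from current state rather than a fixed sequence."""
    if s.sessions_active:
        return revoke_sessions
    if s.oauth_grant_present:
        return revoke_oauth_grant
    if not s.mailbox_rules_clean:
        return remove_mailbox_rules
    return None

def achieve_outcome(s: IncidentState, max_steps: int = 10) -> bool:
    """Loop: plan, act, re-check, until the outcome verifies or budget runs out."""
    for _ in range(max_steps):
        if contained(s):
            return True
        action = plan_next_action(s)
        if action is None:
            break
        action(s)
    return contained(s)

state = IncidentState()
print(achieve_outcome(state))  # True once the outcome predicate verifies
```

The point of the sketch is the loop shape: a static playbook would run three steps and stop; the outcome loop keeps planning until containment is actually verified.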
John White’s Thesis, In Brief
White’s argument is simple and urgent:
- Offense has already automated the fast parts.
- Defense still centers people for the fast parts and uses automation as an assistant, not an agent.
- To restore parity, CISOs must reorganize around agentic AI: automate execution, keep humans for high-impact judgment, and build preemptive accountability.
- This is an organizational design choice. Early adopters set the tempo and reap decisive advantages; laggards will fight uphill as AI-driven threats compound.
Read the gist here: Help Net Security coverage of Torq’s call for agentic AI.
Why Traditional Security Operations Are Losing Ground
- Human-in-the-loop bottlenecks: Approvals, handoffs, and ticketing delays turn easy blocks into hard recoveries.
- Static playbooks: They don’t adapt to environmental drift (different EDRs, identity stacks, hybrid cloud) or novel chaining.
- Alert overload: Volume and noise erode attention, creating blind spots where speed matters most.
- Context fragmentation: Without unified identity, device, and cloud context, it’s risky to automate, so teams don’t—and fall behind.
The result: elongated MTTR, inconsistent containment, and larger blast radii. The more your adversary builds automation around your people, the more your people feel underwater.
The Four Mindset Leaps Security Leaders Need to Make
White frames the shift as a set of mindset changes. Here’s how to translate them into practice.
1) Prioritize Outcomes Over Activities
- Define outcome SLAs (contain, evict, verify) not just activity SLAs (triage, escalate, ticket).
- Example outcomes:
- BEC: Remove malicious emails, revoke OAuth grants, reset passwords, confirm no further abuse within 24 hours.
- Ransomware early stage: Isolate hosts, disable suspected accounts, block C2, verify no lateral movement indicators within 60 minutes.
- Tie incentives to outcomes: MTTA/MTTR, auto-resolution rate, time-to-containment, prevented blast radius.
2) Automate Execution, Not Just Detection
- Move from “assistive” runbooks to “agentic” automations that can plan, branch, and verify.
- Encapsulate actions as reusable capabilities: “quarantine endpoint,” “revoke refresh token,” “disable mailbox rules,” “lock service principal.”
- Let agents choose actions based on risk thresholds and context, not fixed sequences.
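One way to encode that idea is a small capability catalog with per-action confidence floors. A minimal sketch follows (the names and thresholds are illustrative, not any vendor's API):

```python
# Hypothetical capability catalog: each entry names a reusable action and the
# minimum confidence required to run it without human approval.
CAPABILITIES = {
    "quarantine_endpoint":   {"min_confidence": 0.90, "reversible": True},
    "revoke_refresh_token":  {"min_confidence": 0.80, "reversible": True},
    "disable_mailbox_rules": {"min_confidence": 0.70, "reversible": True},
    "disable_account":       {"min_confidence": 0.95, "reversible": False},
}

def allowed_actions(confidence: float, allow_irreversible: bool = False) -> list[str]:
    """Return capabilities the agent may invoke autonomously at this confidence."""
    out = []
    for name, spec in CAPABILITIES.items():
        if confidence < spec["min_confidence"]:
            continue
        if not spec["reversible"] and not allow_irreversible:
            continue  # irreversible moves need an explicit human approval
        out.append(name)
    return out

print(allowed_actions(0.85))
# ['revoke_refresh_token', 'disable_mailbox_rules']
```

Because the catalog is data rather than a hard-coded sequence, the agent's options widen or narrow with confidence, and irreversible actions stay gated behind approval regardless of score.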
3) Target Human Judgment Where It Matters
- Pull humans into high-ambiguity, high-impact decision points:
- Approving irreversible or high-blast-radius actions.
- Handling true positives with business exceptions.
- Adjudicating identity-risk edge cases for executives and privileged roles.
- Keep humans out of:
- Bulk, repetitive containment tasks.
- Routine enrichment, correlation, suppression.
- Policy-conforming remediations with low residual risk.
4) Build Preemptive Accountability
- Policy-as-code: Explicit guardrails define when and how agents can act.
- Full provenance: Every decision and action is logged with inputs, confidence, and rationale.
- Auditable approvals: Tiered, just-in-time human sign-offs for higher-risk moves.
- Continuous testing: Simulate attacks and rehearse agent responses before enabling autonomy.
Resources to anchor accountability:
- NIST AI Risk Management Framework
- OWASP Top 10 for LLM Applications
- NIST SP 800-207: Zero Trust Architecture
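As a rough illustration of policy-as-code, the guardrail below maps action impact and asset criticality to an approval tier. The tiers and thresholds are assumptions to be tuned per program, not a prescribed scheme:

```python
# Illustrative approval tiers, ordered from most to least autonomous.
APPROVAL_TIERS = ["auto", "analyst_approval", "manager_approval"]

def approval_tier(impact: str, criticality: str) -> str:
    """Return 'auto' when the agent may act alone; otherwise the human
    sign-off level required. Inputs are 'low' | 'medium' | 'high'."""
    score = {"low": 0, "medium": 1, "high": 2}
    level = max(score[impact], score[criticality])  # worst case governs
    return APPROVAL_TIERS[level]

print(approval_tier("low", "high"))  # manager_approval: crown-jewel asset
```

Keeping this mapping in versioned code (rather than in an analyst's head) is what makes the accountability preemptive: the rules exist, and are reviewable, before the incident.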
A Reference Architecture for Agentic Defense
Think in layers:
- Sensing and context
- Telemetry from EDR/EPP, identity, email, SaaS, cloud, network.
- Identity and asset risk scoring, business context (VIPs, crown jewels).
- Threat intel and detections mapped to MITRE ATT&CK.
- Decision and planning
- Policy engine that maps confidence, impact, and environment to allowable actions.
- Agents that plan multi-step responses to achieve defined outcomes.
- Risk-aware branching with fallbacks and retries.
- Action and orchestration
- Connectors to EDR, IAM, mail, CASB/SWG, firewall, cloud providers, ticketing/chat.
- Idempotent actions with verification (did quarantine succeed? did token revocation propagate?).
- Guardrails and oversight
- Approval thresholds by risk and role.
- Kill switches and rollback.
- Sandboxing and “shadow mode” before full autonomy.
- Observability and learning
- End-to-end timelines of agent decisions and actions.
- Post-incident reviews informed by agent logs.
- Safe, testable iteration of policies and capabilities.
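The verification point in the action layer matters because connectors are often eventually consistent: the action succeeds, but the read-back lags. Below is a hedged sketch of an idempotent act-and-verify loop; FlakyEDR is a stand-in that simulates the lag, not a real connector:

```python
class FlakyEDR:
    """Simulates a connector where the quarantine call succeeds but the
    verification read lags one cycle behind (eventual consistency)."""
    def __init__(self) -> None:
        self._quarantined = False
        self._reads = 0

    def quarantine(self, host_id: str) -> None:
        self._quarantined = True  # idempotent: repeating is harmless

    def is_quarantined(self, host_id: str) -> bool:
        self._reads += 1
        return self._quarantined and self._reads >= 2  # lags one read

def act_and_verify(edr: FlakyEDR, host_id: str, retries: int = 3) -> bool:
    """Execute, then confirm the effect propagated; retry until verified."""
    for _ in range(retries):
        edr.quarantine(host_id)
        if edr.is_quarantined(host_id):
            return True
    return False

edr = FlakyEDR()
print(act_and_verify(edr, "host-42"))  # True on the second verification pass
```

Because the action is idempotent, retrying is safe, and the loop only reports success when the environment, not just the API call, confirms the change.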
Standards and practices to align with:
- CISA Secure by Design
- MITRE ATLAS (Adversarial Threat Landscape for AI Systems)
- Google Cloud Secure AI Framework (SAIF)
Choosing a Platform: What Matters
When evaluating platforms that promise agentic operations (including Torq and others), weigh:
- Breadth and depth of integrations: Native, well-supported connectors across your stack—EDR, SIEM, IAM, M365/Google Workspace, Okta/Azure AD, AWS/GCP/Azure, firewalls, ticketing (Jira/ServiceNow), and chat (Slack/Teams).
- Policy-as-code and guardrails: Can you define risk thresholds, approval paths, and blast-radius constraints as versioned code?
- Explainability and auditability: Clear logs of inputs, decisions, and actions; evidence packages for compliance.
- Testing and simulation: Runbooks/agents can be validated in sandbox, “shadow mode,” and staged rollouts.
- Reliability and performance: SLAs for action execution, queueing behavior under surge, retries, and eventual consistency management.
- Data handling and privacy: Where models run, what data they see, tenant isolation, and secrets management.
- Cost and licensing fit: Transparent pricing that aligns with event volume and automation coverage, not surprise overages.
- Vendor ecosystem: Support, documentation, and community patterns for common incident types.
A 90-Day Roadmap to Agentic Operations
You don’t have to rip and replace. You do have to move with intent.
Weeks 1–2: Baseline and priorities
- Establish current MTTA/MTTR, auto-resolution rate, and top incident classes by volume and impact.
- Identify high-confidence, low-blast-radius actions already taken repeatedly.
- Select outcome targets (e.g., phishing containment under 10 minutes; token abuse containment under 15 minutes).
Weeks 3–4: Instrumentation and context
- Normalize identity and asset context (VIP tags, privileged roles, critical apps).
- Map detections to ATT&CK techniques and define confidence scoring.
- Stand up observability: end-to-end logs, dashboards, and alerting around automation.
Weeks 5–6: Low-risk autonomy
- Enable autonomous actions for safe remediations:
  - Quarantine suspicious emails from known bad senders/domains.
  - Revoke tokens for known-compromised accounts with MFA off.
  - Isolate endpoints exhibiting high-confidence ransomware indicators.
- Run in shadow mode for a week, then turn on autonomy with rollback.
Weeks 7–8: Human-in-the-loop for edge cases
- Define approval thresholds for moves that can disrupt VIPs or critical services.
- Integrate approvals into chat (Slack/Teams) with tight SLAs and clear context.
- Start “tactical sprints” to refine policies via real incidents.
Weeks 9–10: Expand to identity and cloud
- Automate conditional account disables when multiple high-risk signals concur (impossible travel + token theft + new MFA device).
- Auto-remediate cloud misconfigurations (public S3/Blob storage, overly permissive IAM roles) with verification.
Weeks 11–12: Measure, learn, and iterate
- Compare outcome metrics to baseline:
  - Auto-resolution rate from X% to Y%.
  - MTTA from minutes/hours to seconds/minutes for specific incident classes.
  - Time-to-containment reduced by Z%.
- Conduct post-incident reviews using agent logs; update guardrails and capabilities.
Tip: Keep a visible “autonomy scoreboard” for leadership to track progress, confidence, and safety metrics.
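The scoreboard's core metrics are simple arithmetic over incident records. A minimal sketch with made-up data (field names are assumptions):

```python
# Hypothetical incident records from before and after enabling autonomy.
baseline = [{"auto": False, "mtta_s": 900}, {"auto": False, "mtta_s": 1500},
            {"auto": True,  "mtta_s": 30}]
current  = [{"auto": True,  "mtta_s": 20}, {"auto": True,  "mtta_s": 45},
            {"auto": False, "mtta_s": 600}, {"auto": True,  "mtta_s": 15}]

def auto_resolution_rate(incidents: list[dict]) -> float:
    """Fraction of incidents closed without human intervention."""
    return sum(i["auto"] for i in incidents) / len(incidents)

def mean_mtta(incidents: list[dict]) -> float:
    """Mean time to acknowledge, in seconds."""
    return sum(i["mtta_s"] for i in incidents) / len(incidents)

print(round(auto_resolution_rate(baseline), 2),
      round(auto_resolution_rate(current), 2))   # 0.33 0.75
print(round(mean_mtta(baseline)), round(mean_mtta(current)))  # 810 170
```

What matters is less the arithmetic than the habit: compute the same numbers every sprint, from the same records the agents already log, so the trend is not arguable.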
Governance, Risk, and Accountability—Before You Scale
Agentic AI without governance is bravado. With governance, it’s leverage. Bake in:
- Policies that bind autonomy
- Define triggers, confidence thresholds, and blast-radius limits.
- Set approval tiers by action class and asset criticality.
- Provenance and evidence
- Immutable logs including raw inputs, model outputs (if applicable), context, actions, and verification.
- Evidence bundles for audits and compliance (SOX, ISO 27001, SOC 2).
- Evaluation and red teaming
- Test agents against simulated attacks and adversarial inputs.
- Validate model prompts, tool-use sequences, and fallback paths.
- Privacy and data minimization
- Limit PII exposure and define data residency.
- Encrypt secrets and sign actions where feasible.
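Provenance of the kind described above can be approximated with a hash-chained log, where each record commits to its predecessor so later tampering is detectable. A lightweight sketch follows (an illustration of the idea, not a substitute for a real immutable store):

```python
import hashlib
import json

def append_record(log: list, record: dict) -> None:
    """Append an action record chained to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"action": "revoke_token", "confidence": 0.92,
                    "rationale": "token theft signals"})
append_record(log, {"action": "notify_user", "confidence": 1.0,
                    "rationale": "policy"})
print(verify_chain(log))            # True
log[0]["record"]["action"] = "noop"  # simulated tampering
print(verify_chain(log))            # False
```

Each entry carries the inputs, confidence, and rationale the bullets above call for, and the chain makes after-the-fact edits evident to an auditor.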
Helpful frameworks and references:
- NIST AI Risk Management Framework
- OWASP Top 10 for LLM Applications
Real-World Use Cases That Pay Off Fast
- Business Email Compromise (BEC) containment
- Detect suspicious mailbox rules, anomalous forwarding, and OAuth grants.
- Auto-remove malicious messages, revoke grants, invalidate sessions, and notify impacted users.
- Verify outcome: no new rules, no external forwards, no outbound anomalies for 24 hours.
- Ransomware early containment
- Correlate rapid file modifications, shadow copy deletion, and C2 beacons.
- Auto-isolate host, block hash/signature across fleet, disable lateral movement tools, snapshot critical VMs.
- Verify: no fresh encryption behaviors, restored integrity on affected paths.
- Identity abuse and session hijacking
- Combine unusual sign-in patterns, device posture drift, and risky token use.
- Auto-revoke sessions, require step-up MFA, remove newly added MFA methods, lock SPNs with suspicious behavior.
- Cloud drift and misconfiguration
- Auto-remove public access from storage buckets, enforce encryption at rest, rotate access keys on anomalies.
- Map changes to ATT&CK for Cloud techniques for clear traceability.
- Vendor access control
- Detect anomalous third-party activity, auto-restrict access, and initiate a structured re-approval process.
- Record every step for vendor-management audits.
- Vulnerability-driven remediation
- Prioritize patches based on exploitability, asset criticality, and exposure.
- Auto-open tickets with owner/context and auto-close when telemetry confirms patch application.
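The identity-abuse case in particular benefits from multi-signal scoring before an account disable, so that no single noisy signal can disrupt a user. A minimal sketch with illustrative weights and threshold (both are assumptions to tune per environment):

```python
# Illustrative signal weights; real values would be tuned per environment.
WEIGHTS = {
    "impossible_travel":  0.4,
    "token_theft":        0.4,
    "new_mfa_device":     0.2,
    "unusual_user_agent": 0.1,
}
DISABLE_THRESHOLD = 0.7  # act only when independent signals concur

def identity_risk(signals: set[str]) -> float:
    """Sum weighted signals, capped at 1.0; unknown signals score zero."""
    return min(1.0, sum(WEIGHTS.get(s, 0.0) for s in signals))

def should_disable(signals: set[str]) -> bool:
    return identity_risk(signals) >= DISABLE_THRESHOLD

print(should_disable({"impossible_travel"}))                 # False: one signal
print(should_disable({"impossible_travel", "token_theft"}))  # True: signals concur
```

A single anomaly enriches and monitors; only concurring signals cross the threshold that justifies an automated disable.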
Pitfalls to Avoid
- Over-automation without context: Automating “disable account” across the board can break business. Tie actions to risk, role, and asset criticality.
- Brittle playbooks masquerading as agents: If it can’t adapt or verify outcomes, it’s not agentic—and it won’t keep up.
- Blind spots in data: Poor identity hygiene or asset discovery undermines automation quality. Fix the foundations.
- Unreviewed prompts or models: If you use LLMs, guard against hallucinations and injection; validate tool calls and responses.
- Lack of rollback: Always design for safe failure—undo, isolate, learn, and retry.
Measuring Success: From Metrics to Business Value
Translate security speed into business impact:
- Operational metrics
- MTTA/MTTR per incident class
- Auto-resolution rate
- Time-to-containment and blast-radius reduction
- False positive rate and intervention burden
- Resilience metrics
- Variance across shifts/regions (consistency)
- Mean time between significant incidents
- Recovery time objective (RTO) adherence
- Business outcomes
- Fewer customer-impacting incidents
- Lower downtime and recovery costs
- Audit readiness and reduced compliance toil
Benchmark where you are, set quarterly targets, and tie team incentives to outcomes, not ticket throughput. Reports like the Verizon Data Breach Investigations Report can help contextualize your progress against industry trends.
The Strategic Edge: Security as Organizational Design
White’s most provocative point may be this: adopting agentic AI isn’t primarily a tooling decision—it’s an organizational design choice. It changes:
- What your people do: from step executors to decision-makers and system designers.
- How you manage risk: via policy-as-code, provable guardrails, and measurable autonomy.
- Where you invest: in outcome design, context quality, and continuous testing.
Done right, this shift compounds. Each automated outcome frees human capacity to raise the quality bar elsewhere. Your tempo improves, and your resilience against offensive automation increases. Early adopters bank these gains; late adopters chase them.
Getting Started: A Short, Practical Checklist
- Pick three outcome targets with clear ROI (e.g., BEC, token abuse, cloud public exposure).
- Inventory actions you already take repeatedly and safely—wrap them as reusable capabilities.
- Establish guardrails and approval thresholds before turning on autonomy.
- Pilot in shadow mode, then enable autonomy for low-risk paths with rollback.
- Publish your autonomy scoreboard to leadership; iterate every sprint.
- Align with frameworks: NIST AI RMF, MITRE ATT&CK, CISA Secure by Design.
FAQs
Q: How is agentic AI different from SOAR?
A: SOAR runs mostly static playbooks triggered by detections. Agentic AI plans and adapts in real time to achieve an outcome, using policy-as-code, risk thresholds, and multi-step orchestration—plus verification that the outcome was achieved.

Q: Is it safe to let an AI act autonomously in my environment?
A: Yes—if you set guardrails. Define approval thresholds, blast-radius limits, and rollback paths. Start with shadow mode and low-risk actions, then expand. Use full logging and evidence trails for oversight.

Q: Do I need large language models to do this?
A: Not necessarily. Many agentic capabilities rely on deterministic logic, policy engines, and orchestration. LLMs can help with pattern generalization and reasoning, but must be wrapped with strong validation and tool-use constraints.

Q: Where should a small team start?
A: Start with high-volume, low-risk incidents like phishing containment. Automate removal, token revocation, and session resets with verification. Measure gains, then reinvest the time saved into higher-impact workflows.

Q: How do we avoid false positives causing business disruption?
A: Combine multi-signal confidence scoring, asset criticality, and role-based policies. Require human approval for high-impact actions. Audit and tune thresholds continuously based on post-incident reviews.

Q: What about compliance and audits?
A: Agentic systems can improve compliance by producing consistent, auditable evidence of decisions and actions. Maintain immutable logs, approval trails, and versioned policies to satisfy auditors.

Q: Can attackers use agentic AI against us?
A: They already are automating many stages of the kill chain. That’s precisely why defenders need agentic systems—to match speed, shrink dwell time, and contain threats before they escalate.

Q: Which frameworks should we align to?
A: Use NIST AI RMF for governance, MITRE ATT&CK for threat mapping, OWASP LLM Top 10 if you use LLMs, and NIST SP 800-207 for zero trust alignment.
The Takeaway
Attackers have already shifted to machine-speed operations. Defenders can’t keep pace by adding more people to human-speed workflows. The path forward is agentic AI: outcome-focused, risk-aware, governed automation that executes fast and calls humans only when judgment truly matters.
This is more than an upgrade—it’s a re-architecture of how your security program works. Start with clear outcomes, bake in guardrails, automate the execution, and measure your gains relentlessly. As Torq’s John White argues, early adopters will set the tempo and seize the advantage. The sooner you make the mindset leap, the sooner you can turn speed from your greatest liability into your most decisive asset.
Discover more at InnoVirtuoso.com
I would love some feedback on my writing, so if you have any, please don’t hesitate to leave a comment here or on any platform that is convenient for you.
For more on tech and other topics, explore InnoVirtuoso.com anytime. Subscribe to my newsletter and join our growing community—we’ll create something magical together. I promise, it’ll never be boring!
Stay updated with the latest news—subscribe to our newsletter today!
Thank you all—wishing you an amazing day ahead!
Read more related Articles at InnoVirtuoso
- How to Completely Turn Off Google AI on Your Android Phone
- The Best AI Jokes of the Month: February Edition
- Introducing SpoofDPI: Bypassing Deep Packet Inspection
- Getting Started with shadps4: Your Guide to the PlayStation 4 Emulator
- Sophos Pricing in 2025: A Guide to Intercept X Endpoint Protection
- The Essential Requirements for Augmented Reality: A Comprehensive Guide
- Harvard: A Legacy of Achievements and a Path Towards the Future
- Unlocking the Secrets of Prompt Engineering: 5 Must-Read Books That Will Revolutionize You
