ServiceNow Is the Last Place You Want Your Agentic Shift: And the Market Just Figured That Out


By Jeff Highman

I’ve been watching two stories collide in slow motion, and the wreckage is now visible from orbit. 

Story one: In October 2025, AppOmni’s security research team discovered that anyone with a target’s email address could achieve full platform takeover of ServiceNow through its AI Virtual Agent. Aaron Costello, their chief of security research, called it “the most severe AI-driven vulnerability uncovered to date.” ServiceNow patched it quietly. The disclosure came in January 2026. 

  Story two: In the first week of February 2026, the S&P 500 Software & Services Index dropped 20% year-to-date. Jefferies coined a term for it: the “SaaSpocalypse.” Bloomberg called it the biggest AI-driven stock selloff markets have ever seen. Salesforce, Thomson Reuters, LegalZoom — all hammered. Jason Lemkin, the godfather of SaaS, called it a crash. 

These aren’t two stories. They’re the same story. 

Key Takeaways: 

  • ServiceNow’s AI Virtual Agent was found vulnerable to full platform takeover via a single email address (disclosed January 2026).
  • Simultaneously, the S&P 500 Software & Services Index dropped 20% YTD in the “SaaSpocalypse.”
  • Both events point to the same conclusion: deploying agentic AI inside broad-access SaaS platforms creates unacceptable security risk, while AI now makes those platforms replaceable.

The Skeleton Key Problem

Let me explain why ServiceNow is the most dangerous place in your enterprise to deploy an AI agent. 

ServiceNow is a system of record. More than that: in most enterprises, it's the system of record. IT service management. HR cases. Security incidents. Procurement workflows. Customer operations. Change management. Asset tracking. It touches everything, because that's its value proposition: one platform to rule them all. 

Now imagine giving an AI agent access to that platform. 

Not hypothetically. This is happening right now, in thousands of enterprises. ServiceNow’s own Virtual Agent product. Microsoft Copilot integrations. Third-party AI tools bolted onto the ServiceNow API. Every one of them needs broad access to do its job — because the whole point of ServiceNow is that everything’s connected. 

Broad access is the feature. Broad access is also the attack surface. 

When AppOmni found that vulnerability, what they actually found was that ServiceNow’s AI agent infrastructure had no meaningful concept of bounded authority. The agent could access whatever ServiceNow could access. And ServiceNow can access everything. 

 An attacker didn’t need to hack the agent. They needed to be the agent. One email address, and they had the keys to the kingdom. 

What is an AI “skeleton key” vulnerability? A skeleton key vulnerability occurs when an AI agent inherits unbounded platform access, meaning a single exploit grants an attacker access to every system the platform touches — IT, HR, security, procurement, and customer data.

Why Is AI Agent Lateral Movement an Architectural Risk, Not Just a Bug?

Here’s what most CISOs haven’t internalized yet: lateral movement through AI agents isn’t a vulnerability to be patched. It’s a consequence of how we’re deploying them. 

 Jonathan Wall, CEO of Runloop, put it plainly: “If, through that first agent, a malicious actor is able to connect to another agent with a better set of privileges to that resource, then he will have escalated privileges through lateral movement.” 

 Microsoft’s Copilot Studio has the same problem. Zenity Labs found that the “Connected Agents” feature — which lets AI agents call other AI agents — enables lateral movement between agents. It’s enabled by default. 

 Read that again. Lateral movement between AI agents. Enabled by default. 

 We spent twenty years learning that implicit trust is the root of all security evil. Zero trust architectures. Least privilege access. Network segmentation. Then we deployed AI agents with blanket platform access and connected them to each other with no authorization boundaries. 

 It’s early-2000s security posture with 2026 attack surfaces.
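
The transitive-access problem can be sketched in a few lines. The model below is hypothetical, not Copilot Studio's or ServiceNow's actual permission system; the agent names, permissions, and connections are invented. The point it illustrates: when agents can call each other with no per-call authorization check, the blast radius of one compromised agent is the union of everything any reachable agent can touch.

```python
# Hypothetical sketch: blast radius of one compromised agent when
# agent-to-agent calls carry no authorization boundary.
# Agent names, permissions, and connections are invented for illustration.

AGENT_PERMS = {
    "it_helpdesk":  {"it_tickets"},
    "hr_assistant": {"hr_cases", "employee_records"},
    "sec_triage":   {"security_incidents"},
}

# "Connected agents": who may call whom (broad connectivity by default).
CONNECTIONS = {
    "it_helpdesk":  ["hr_assistant", "sec_triage"],
    "hr_assistant": ["sec_triage"],
    "sec_triage":   [],
}

def blast_radius(compromised: str) -> set[str]:
    """Everything reachable from one compromised agent via unchecked calls."""
    seen, stack, resources = set(), [compromised], set()
    while stack:
        agent = stack.pop()
        if agent in seen:
            continue
        seen.add(agent)
        resources |= AGENT_PERMS[agent]
        stack.extend(CONNECTIONS[agent])  # no per-call authorization check
    return resources

print(sorted(blast_radius("it_helpdesk")))
```

The zero-trust fix is an authorization check on every edge of that graph. Without one, phishing the lowest-privilege agent yields the scope of the whole graph.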

The Other Shoe: The SaaSpocalypse

Now here’s where it gets interesting for anyone paying attention to both the security story and the market story. 

The SaaSpocalypse isn’t about AI being bad for software companies. It’s about AI being good enough that companies can build their own. 

 Anthropic released Claude “Cowork” — AI agent tools designed to handle complex professional workflows. Legal research. CRM. Analytics. The functions that SaaS companies sell as core products. The market immediately repriced every SaaS stock, because investors realized something that practitioners have known for a while: 

The moat was complexity. AI just filled in the moat.

What is the SaaSpocalypse? The SaaSpocalypse refers to the February 2026 selloff of SaaS stocks — a 20% year-to-date decline in the S&P 500 Software & Services Index — driven by investor recognition that AI agents can now replicate the core functions of enterprise SaaS platforms like ServiceNow, Salesforce, and Thomson Reuters.

ServiceNow’s value proposition was: “This is too complex for you to build yourself. Pay us $2 million a year and we’ll manage it.” That proposition held as long as building custom software was expensive and slow. But when an AI agent can scaffold a workflow application in hours instead of months, the complexity moat evaporates. 

And here’s the irony that nobody’s talking about: 

 Companies are simultaneously being told to deploy AI agents inside ServiceNow (the platform play) and to use AI agents to replace ServiceNow (the bespoke play). ServiceNow is getting squeezed from both directions. 

Deploy agents inside ServiceNow? You get the skeleton key problem. Unbounded access, lateral movement, one vulnerability away from full platform takeover. 

 Build agents to replace ServiceNow? You get a custom application that does exactly what you need, with exactly the permissions it needs, with none of the attack surface of a general-purpose platform that touches everything. 

The market chose. Down 20%. 

Why ServiceNow Specifically

I want to be fair here. This isn’t a ServiceNow hit piece. Every major SaaS platform faces the same structural tension. But ServiceNow is the canonical example because it sits at the intersection of three trends: 

Why does ServiceNow have the broadest AI agent attack surface?

ServiceNow’s value is that it connects everything. That’s also why a compromised agent inside ServiceNow has the highest blast radius of any enterprise platform. An agent with ServiceNow access doesn’t just see IT tickets — it sees HR data, security incidents, financial workflows, and customer records. The breadth is the risk.

Why are enterprises deploying AI agents in ServiceNow first?

Because it’s the system of record, it’s the obvious place to add AI capabilities. “Let’s put an AI agent on our IT service desk” is the most common first step in enterprise agentic deployment. It’s also the most dangerous first step, for exactly the reasons above. 

Why is ServiceNow the most replaceable by bespoke AI?

The workflows that ServiceNow orchestrates — incident management, change requests, approval chains, knowledge base search — are exactly the workflows that AI agents can now build from scratch. You don’t need a $2M/year platform to route IT tickets when a Claude agent can build you a custom workflow in an afternoon. 
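
To make "build it in an afternoon" concrete, here is a hypothetical sliver of such a workflow: a keyword-based ticket router. The categories, keywords, and queue names are invented, and real deployments would use an LLM call or classifier rather than keywords. The point is that the core of an ITSM routing rule is a small function over your own data model, with exactly one permission: write to these queues.

```python
# Hypothetical sketch of a bespoke IT-ticket router.
# Queues and keywords are invented for illustration.

ROUTES = {
    "network":  ["vpn", "wifi", "dns", "outage"],
    "access":   ["password", "login", "mfa", "locked"],
    "hardware": ["laptop", "monitor", "keyboard", "battery"],
}

def route_ticket(subject: str) -> str:
    """Return the queue for a ticket, defaulting to a human triage queue."""
    text = subject.lower()
    for queue, keywords in ROUTES.items():
        if any(word in text for word in keywords):
            return queue
    return "triage"  # bounded fallback: escalate to a person

print(route_ticket("VPN outage in the Denver office"))  # → network
```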

The Sentinel Lens

The fundamental problem with AI agents deployed this way, in ServiceNow or any broad-access platform, is that they violate every principle of bounded authority.

What are the four principles of bounded authority that AI agents violate?

No bounded domain. The agent can access everything the platform can access. There’s no concept of “this agent only handles IT incidents” at the authorization level. The scope is the entire platform. 

No accountability. When an agent takes an action, the audit trail shows “the agent did it.” Not which user’s request triggered it. Not what reasoning led to the action. Not what upstream data the agent relied on. The attribution is opaque. 

No transparency of method. The agent’s decision-making process is a black box. It accessed some data, applied some model, took some action. The “how” is invisible to everyone downstream. 

No explicit authority. The agent’s permissions come from a service account with broad access. There’s no chain of delegation that says, “this agent is authorized to access HR data for the purpose of IT incident resolution and nothing else.” The authority is implicit and unlimited. 
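
None of these four properties requires exotic technology. Here is a minimal sketch of what explicit, bounded, auditable authority could look like. Every name in it is hypothetical, not any vendor's API; it exists only to show that the fix is an architectural pattern, not a patch.

```python
# Hypothetical sketch of bounded authority for an agent action.
# Grant, agent, and resource names are invented for illustration.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Grant:
    """Explicit delegation: who authorized what, for which purpose."""
    agent: str
    resources: frozenset   # bounded domain
    purpose: str           # explicit authority
    granted_by: str        # chain of delegation

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, grant: Grant, user: str, resource: str, allowed: bool):
        # Accountability: which user's request, under which grant and purpose.
        self.entries.append((user, grant.agent, resource, grant.purpose, allowed))

def authorize(grant: Grant, user: str, resource: str, log: AuditLog) -> bool:
    """Deny by default: the agent acts only inside its granted domain."""
    allowed = resource in grant.resources
    log.record(grant, user, resource, allowed)
    return allowed

log = AuditLog()
grant = Grant(
    agent="it-incident-agent",
    resources=frozenset({"it_tickets"}),
    purpose="IT incident resolution",
    granted_by="ciso@example.com",
)
authorize(grant, "alice@example.com", "it_tickets", log)  # True
authorize(grant, "alice@example.com", "hr_records", log)  # False: out of domain
```

The design choice that matters is deny-by-default plus a recorded delegation chain: every action is attributable to a user, a grant, and a purpose, and anything outside the bounded domain fails closed.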

This is what happens when you deploy powerful actors without a trust model. It’s not a technology problem. It’s a governance problem. And the fix isn’t a security patch — it’s architectural.

What the Smart Money Is Doing

The companies that will win the agentic shift aren’t the ones deploying AI agents inside their existing SaaS platforms. They’re the ones building purpose-built applications with AI at the core. 

|  | Big-SaaS AI agent | Bespoke AI application |
| --- | --- | --- |
| Permissions | Inherits full platform access | Exactly scoped; least privilege by design |
| Blast radius | Entire ServiceNow instance | Bounded to a single workflow |
| Data model | Platform’s rigid schema | Your business’s actual data model |
| Cost | Platform license + AI surcharge | Development time (dropping to near zero) |
| Security posture | Skeleton key risk; lateral movement | Isolated; no cross-platform attack surface |

Here’s the difference:

A Big-SaaS AI agent inherits the platform’s permissions. A bespoke AI application gets exactly the permissions it needs — no more, no less. The blast radius of a compromised bespoke application is bounded by design. The blast radius of a compromised ServiceNow agent is… ServiceNow. 

 A Big-SaaS AI agent operates within the platform’s data model. A bespoke application operates within your data model — the one that reflects how your business actually works, not how ServiceNow’s schema committee decided you should work. 

 A Big-SaaS AI agent costs you a platform license plus an AI surcharge. A bespoke application costs you development time — which, thanks to AI, is dropping to near zero for the workflows that matter. 

The smart money isn’t adding AI to Big-SaaS. The smart money is using AI to leave ServiceNow. 

The SaaSpocalypse isn’t panic. It’s the market pricing in the obvious. 

The Uncomfortable Truth

I’ll say the thing that’s uncomfortable for a lot of technology executives right now: 

Your Big-SaaS instance is simultaneously your biggest security liability and your most replaceable asset. The AI agent you’re deploying inside it is making the first problem worse, while the AI agent you could build to replace it is making the second problem obvious.

Every CISO I know is excited about agentic AI in ServiceNow. That excitement is the threat model. 

Every CFO I know is looking at that $2M platform license and wondering what they’re actually getting. The SaaSpocalypse is the answer: they’re getting an attack surface they don’t need. 

The agentic shift isn’t about making your existing platforms smarter. It’s about making your existing platforms unnecessary. And the platforms know it. That’s why the stock is down 20%. 

Frequently Asked Questions

What is the ServiceNow AI Virtual Agent vulnerability?

In October 2025, AppOmni discovered that anyone with a target’s email address could achieve full platform takeover of ServiceNow through its AI Virtual Agent. ServiceNow patched it quietly; public disclosure came in January 2026.

What is the SaaSpocalypse?

The SaaSpocalypse is the term coined by Jefferies for the 20% year-to-date decline in SaaS stocks in February 2026, driven by AI’s ability to replace enterprise software platforms.

Why is ServiceNow the riskiest platform for AI agent deployment?

ServiceNow connects IT, HR, security, procurement, and customer operations in one platform. An AI agent with ServiceNow access inherits that entire scope, creating the largest blast radius of any enterprise platform.

What is lateral movement in AI agent security?

Lateral movement occurs when a compromised AI agent uses its connections to other agents or systems to escalate privileges and access resources beyond its intended scope.

What is the alternative to deploying AI agents in Big-SaaS?

Purpose-built bespoke AI applications with scoped permissions, bounded domains, and isolated architectures eliminate the skeleton key risk of broad-platform deployment.
