Back to blog
AI Security2026-03-319 min read

MCP Security Crisis — The Protocol Powering Your AI Agents Has a Critical Exposure Problem

100% of enterprises planning agentic AI deployments. Only 29% feel ready to secure them. The gap is MCP servers — the open protocol connecting AI agents to enterprise tools and data — and the gap is already being exploited.


The MCP Adoption Race Nobody Is Talking About

The Model Context Protocol (MCP) went from Anthropic open-source project to Linux Foundation stewardship to near-universal enterprise adoption in under two years. The download counts are now measured in the tens of millions. Google and Microsoft have integrated it into their AI platforms. Enterprise developers are building MCP server integrations as a default architectural pattern. The protocol that lets AI agents connect to tools, data sources, and enterprise systems has become critical infrastructure faster than most security teams could evaluate it.

The adoption curve has outpaced the security evaluation curve by a significant margin. Cisco's State of AI Security 2026 research documented the gap: every organization surveyed that was planning agentic AI deployments — 100% — said MCP integration was part of their strategy. But only 29% of those same organizations felt they had the security controls in place to protect MCP deployments from exploitation. That 71-point gap between deployment velocity and security preparedness is the attack surface that threat actors are beginning to probe.

The problem is structural. MCP servers are now embedded in enterprise workflows at a pace that bypassed the traditional security review process. Developers deploy MCP servers to connect AI agents to Slack, Notion, Box, Jira, and internal data sources because it makes the agents more capable — not because anyone completed a security assessment of what an MCP server could access if it were compromised. The capability expansion happened first. The security evaluation is catching up under pressure from documented incidents.

Amy Chang, Cisco's AI Threat Intelligence lead, has framed the problem as "multi-turn resilience" — the accumulation of attack surface that occurs when AI agents operate over extended sessions with MCP tool access. A single-prompt security review doesn't capture the risk of an agent that accumulates context, invokes multiple MCP tools, and chains actions across sessions. The longer an MCP-connected agent operates, the more data it has accessed and the larger the potential exfiltration surface.


The Five MCP Attack Vectors Already Being Exploited

The threat landscape for MCP deployments is not theoretical. Cisco Live 2026 research, documented in the AITech-12.1 threat taxonomy, and reporting from esentire and eSecurity Planet have identified five attack vectors that are already being exploited in enterprise environments.

1. Prompt Injection via MCP Tool Descriptions

MCP servers expose their capabilities through tool descriptions — structured metadata that tells an AI agent what a tool does and how to invoke it. An attacker who can control or manipulate a tool description can inject hidden instructions that the agent interprets as part of its own task.

Cisco's documented case: a GitHub MCP server was found to have been compromised with injected instructions that, when an AI agent used the server, caused the agent to exfiltrate data from private repositories to an external endpoint. The agent wasn't told to do this explicitly. The instructions were embedded in the tool description and the agent followed them as part of its normal operation. This is the fundamental trust problem in MCP security: agents trust tool descriptions as factual operational instructions, not as potential attack vectors.

2. Tool Poisoning

Tool poisoning targets the metadata and behavior specification of MCP tools themselves. An attacker who can publish or modify an MCP tool's behavioral description — in the MCP registry, in a third-party server, or in an internal tool that has been compromised — can alter what the tool does when invoked. An agent calling a poisoned tool may be performing actions entirely different from what the tool description claims.

The AITech-12.1.2 subcategory in Cisco's taxonomy specifically addresses this: tools that appear to perform one function but perform another, with the behavioral delta invisible to the agent and the user until the damage is discovered.

3. Remote Code Execution Through Chained MCP Servers

MCP servers can be chained — one server calling another, or a local server calling remote servers across network boundaries. This chaining creates attack paths that don't exist in a single-server deployment.

Chaining a local MCP server with inadequate sandboxing to a remote MCP server that an attacker controls enables remote code execution on the enterprise endpoint running the local server. The architecture that makes MCP powerful — the ability to compose tool capabilities across systems — is the same architecture that creates code execution paths that traditional endpoint security doesn't monitor.

4. Overprivileged MCP Access

The OAuth flows that MCP servers use to connect to enterprise SaaS platforms — Slack, Notion, Box, Atlassian, Google Workspace — are frequently granted more permissions than the agent's task requires. An MCP server that needs to read documents from Notion is often granted full Notion workspace access rather than scoped access to the specific workspace or page the agent needs.

The result is an agent that, if its MCP server is compromised or the connection is intercepted, can exfiltrate data from the entire Notion workspace rather than the specific documents it was meant to access. The principle of least privilege is frequently absent from MCP server OAuth configurations because the default OAuth consent flows grant broad access and the developers deploying these servers accept the defaults.

5. Supply Chain Compromise

The MCP server ecosystem — the registry of third-party servers developers use to add capabilities to their agents — has the same supply chain risks as any software registry. Unvetted third-party MCP servers, version pinning failures that allow malicious updates to be automatically pulled, and unverified server signatures mean that the servers enterprises are connecting to their AI agents may not be what they claim to be.

Version pinning failures are particularly problematic in the MCP ecosystem because the protocol was designed for developer convenience — servers update automatically unless explicitly pinned. An enterprise that deployed a pinned MCP server six months ago may have been silently updated to a version that includes modified behavior. Without active version management and signature verification, there's no way to know.


Why Traditional Security Tools Don't Catch MCP Attacks

The security stack that enterprises rely on for their conventional infrastructure was not built to observe, let alone prevent, MCP-specific attack vectors.

Endpoint detection and response (EDR) tools generate signals related to AI agent processes — new executables, network connections to AI service providers, unusual data access patterns. But EDR tools don't understand MCP protocol semantics. They see a process making an API call. They don't see that the API call was the result of a poisoned tool description instructing an agent to exfiltrate data to an external endpoint.

Secure Access Service Edge (SASE) infrastructure can identify that corporate devices are connecting to AI service provider endpoints. It can block specific domains. It cannot determine whether the data being sent to those endpoints is the result of a legitimate user request or a prompt injection that has co-opted the agent's tool invocation logic.

Web application firewalls and API gateways similarly understand HTTP traffic patterns, not MCP-specific attack semantics. A prompt injection embedded in a tool description produces HTTP traffic that looks identical to a legitimate tool invocation.

The authentication gap is equally problematic. MCP servers initiate OAuth flows to connect to enterprise SaaS platforms. Those flows bypass the API governance that organizations have built for sanctioned AI tools, because the MCP server is making the OAuth call, not a managed client application. The identity and access management infrastructure sees an OAuth grant from an enterprise user to a SaaS platform — it doesn't see that the grant was initiated by an MCP server acting as an autonomous agent on the user's behalf.

The kill-switch problem compounds all of this. Most enterprises have no mechanism to immediately suspend MCP server operations during a security incident. When a compromised MCP server is identified, the response time to revoke its access and terminate its operations is measured in the time it takes to identify which servers are running, which API credentials they use, and how to revoke them — a process that can take hours during which the compromised server continues to operate.


The MCP Security Framework Enterprises Need Now

The recommendations from Cisco Live 2026, Amy Chang's threat intelligence team, and the broader security research community converge on a six-component framework that enterprises deploying MCP need to implement now.

1. Audit Your MCP Inventory

You cannot secure what you cannot see. Every MCP server, model connection, API key, and tool integration currently operating in your enterprise needs to be documented. This includes servers deployed by IT and the Shadow AI MCP servers that employees have deployed without IT involvement — which connects directly to the Shadow AI governance problem.

The audit needs to cover not just which servers exist, but what access each server has been granted, which OAuth tokens are active, and what data each server can reach. Many enterprises will be surprised by how many MCP servers are running and how much access they have.

2. Pre-Integration Scanning

Before any MCP server is deployed in a production environment, it should be scanned for risks in its tool descriptions, prompt templates, and resource access specifications. Cisco's MCP Scanner and comparable tools can identify potentially malicious or overreaching tool descriptions that would inject unauthorized instructions, request excessive permissions, or access resources beyond the server's stated function. Treat MCP server integration reviews the same way you treat third-party library imports in your code.

3. Least-Privilege Tool Rules

MCP servers should be deployed with the minimum permissions required for their function. OAuth grants should be scoped to specific workspaces, documents, or data objects rather than granted at the platform level. A Notion MCP integration should have access to the specific Notion workspace and pages it needs, not full workspace admin access. This limits the blast radius if a server is compromised.

4. Runtime Allowlists and Telemetry

Static configuration is not sufficient for MCP security. Enterprises need continuous monitoring of which MCP servers are running, which tools they are invoking, what data they are accessing, and what outputs they are producing. Allowlist-based control — only approved MCP servers can operate — with behavioral telemetry feeding into a security monitoring system, enables detection of anomalous MCP activity that static configuration would miss.

5. Automated Kill-Switch Capability

When an MCP-related security incident is identified, the response needs to be immediate. Enterprises need the technical capability to revoke MCP server credentials, terminate running MCP processes, and isolate affected endpoints within minutes, not hours.

This requires pre-built runbooks, automated credential revocation, and tested isolation procedures. The kill-switch capability needs to exist before an incident, not designed during one.

6. Supply Chain Governance

MCP server versions should be pinned in production environments. Server signatures should be verified before deployment. Third-party MCP servers should go through a procurement-equivalent review process before being integrated into production agent workflows. The supply chain risks in the MCP ecosystem are real and documented — treating MCP servers as trusted infrastructure without verification is the vulnerability.


The Governance Imperative

MCP security is an enterprise governance problem, not exclusively a security team problem. The organizations that deployed MCP servers fastest were development and AI platform teams — not security teams. The governance framework needs to meet those teams where they are.

The most practical immediate action is to connect MCP governance to the AI Acceptable Use Policy and the Shadow AI remediation program. Employees who have deployed MCP servers outside IT governance need a safe mechanism to disclose them, and they need support transitioning to governed MCP configurations.

The cost of inaction is measurable. Cisco's documented GitHub MCP server exfiltration event demonstrates that the attack vectors are not theoretical. Gartner's 40% AI compliance incidents by 2030 projection, partially driven by data leakage at the MCP and agent level, is the forward-looking consequence of a protocol that was deployed faster than it was secured.

If you cannot name every MCP server your AI agents are connected to right now, you already have a problem. The question is whether you find out about it from a security team that detected it, or from a threat actor who exploited it first.


MCP Attack Taxonomy (Based on Cisco Live 2026 — AITech-12.1)

| Attack Vector | Category | Risk | |---|---|---| | Prompt injection via tool description | AITech-12.1.1 | Data exfiltration, session hijacking | | Tool poisoning | AITech-12.1.2 | Behavioral manipulation, unauthorized actions | | Remote code execution (chained servers) | AITech-12.1.3 | Endpoint compromise | | Overprivileged OAuth access | AITech-12.1.4 | Lateral movement, data exposure | | Supply chain compromise | AITech-12.1.5 | Trusted server → malicious behavior |


Research synthesis by Agencie. Sources: Cisco State of AI Security 2026 (100% planning / 29% prepared), Cisco Live 2026 (AITech-12.1 threat taxonomy), Amy Chang — Cisco AI Threat Intelligence, esentire threat research, eSecurity Planet. All cited sources are 2025-2026 publications.

Ready to let AI handle your busywork?

Book a free 20-minute assessment. We'll review your workflows, identify automation opportunities, and show you exactly how your AI corps would work.

From $199/month ongoing, cancel anytime. Initial setup is quoted based on your requirements.