The Top-Five MCP Security Blindspots Putting Your Organization at Risk

Published: Nov 05, 2025 · 9 min. read

The Model Context Protocol (MCP) is rapidly transforming enterprise AI capabilities, enabling agents to interact seamlessly with databases, APIs, and business systems. But this rapid adoption comes with a dangerous blind spot: organizations are unknowingly expanding their attack surface in ways that traditional security tools can’t detect. Unlike traditional software vulnerabilities that affect individual applications, MCP security failures cascade across entire digital ecosystems, amplifying risk as AI agents interact with interconnected systems.

Working with our customers as they manage and secure MCP deployments across the enterprise, Noma Security has identified five critical MCP security blindspots creating immediate risk. These aren’t theoretical vulnerabilities; they are active threats and issues we’ve observed affecting production MCP deployments, creating pathways for data breaches and operational disruption.

Here are the five critical MCP security blindspots that could put your organization at risk today:

1. Typosquatting: When Trust Becomes a Weapon

Attackers are exploiting human error by publishing malicious MCP servers with names nearly identical to trusted packages. This isn’t just sophisticated phishing; it’s supply chain warfare targeting AI agent infrastructure.

The most striking example involves Microsoft’s Playwright MCP (@playwright/mcp), one of the most popular MCP servers for web automation. We’ve discovered that multiple customers inadvertently deployed an unofficial package named playwright-mcp, a subtle but critical difference in name. While this specific untrusted package may not be malicious, it reached 17,000 downloads in a single week, demonstrating both the scale of the mistake and the sophistication of typosquatting as an attack vector.

What makes this typosquatting threat particularly dangerous is MCP servers’ direct access to AI agents and connected systems. When organizations unknowingly deploy untrusted or malicious MCP servers, they’re handing attackers keys to their agentic AI infrastructure. Malicious actors can leverage typosquatted MCPs to harvest credentials, inject malicious prompts, and establish backdoors that survive system updates.
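To make this concrete, here is a minimal sketch of a pre-install check against an organization-maintained allowlist of approved MCP server packages. The allowlist contents, the name-normalization heuristic, and the reliance on npm registry metadata are our own assumptions; adapt them to your environment.

```python
# Minimal sketch: flag MCP server packages that are not on an approved list
# and warn on names that look like typosquats of approved packages.
import json
import urllib.parse
import urllib.request

# Packages your organization has explicitly reviewed and approved (example values).
APPROVED_MCP_PACKAGES = {
    "@playwright/mcp",                          # official Playwright MCP server
    "@modelcontextprotocol/server-filesystem",  # official reference server
}

def check_package(name: str) -> None:
    if name in APPROVED_MCP_PACKAGES:
        print(f"OK: {name} is on the approved list")
        return

    # Flag near-misses such as "playwright-mcp" vs "@playwright/mcp".
    normalized = name.lstrip("@").replace("/", "-")
    for approved in APPROVED_MCP_PACKAGES:
        if normalized == approved.lstrip("@").replace("/", "-"):
            print(f"WARNING: {name} looks like a typosquat of {approved}")
            break
    else:
        print(f"BLOCKED: {name} is not an approved MCP server package")

    # Pull registry metadata so a reviewer can inspect the publisher
    # (scoped names are URL-encoded for the registry lookup).
    url = "https://registry.npmjs.org/" + urllib.parse.quote(name, safe="")
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    maintainers = [m.get("name") for m in meta.get("maintainers", [])]
    print(f"  registry maintainers: {maintainers}")

check_package("playwright-mcp")   # unofficial lookalike
check_package("@playwright/mcp")  # official package
```

A check like this can run in CI or as a pre-commit hook so that an unapproved or lookalike package never reaches a production agent in the first place.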

2. Excessive Capabilities: The “Default Danger” Problem

The most widespread vulnerability we encounter is organizations running MCP servers with excessive permissions. Most MCP servers enable all tools by default, including those with destructive capabilities and access to sensitive data. Our research shows more than 90% of organizations maintain these dangerous default configurations.

Each MCP server contains tools with varying risk levels. Some read configuration files; others can delete database tables or access PII. The principle of least privilege demands enabling only necessary tools, but we regularly encounter deployments where customer service agents have database deletion permissions simply because these tools were enabled by default.

We’ve documented cases where AI agents caused significant business disruption because they had access to destructive capabilities they should never have possessed. In one incident, an agent tasked with cleaning duplicate customer records deleted thousands of legitimate accounts due to unrestricted permissions.

The solution requires shifting from default-permissive to default-restrictive configurations, implementing rigorous capability assessments, and conducting regular permission audits.
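As an illustration of what default-restrictive can look like, the sketch below places a deny-by-default gate in front of tool calls. The agent names, tool names, and dispatch function are hypothetical placeholders, not part of any particular MCP SDK.

```python
# Minimal sketch of a default-restrictive tool gate: nothing runs unless
# it is explicitly allowed for that agent.

# Per-agent allowlists; anything not listed here is denied by default.
TOOL_ALLOWLIST = {
    "customer-service-agent": {"read_ticket", "search_kb", "update_ticket_status"},
    # Note: no delete_* or drop_* tools for this agent.
}

def is_tool_call_allowed(agent_name: str, tool_name: str) -> bool:
    """Deny by default; allow only tools explicitly approved for this agent."""
    return tool_name in TOOL_ALLOWLIST.get(agent_name, set())

def dispatch_tool_call(agent_name: str, tool_name: str, arguments: dict) -> None:
    if not is_tool_call_allowed(agent_name, tool_name):
        # Surface the denial instead of silently executing the call.
        raise PermissionError(f"{agent_name} is not authorized to call {tool_name}")
    # ... forward the call to the MCP server here ...
    print(f"forwarding {tool_name}({arguments}) for {agent_name}")

dispatch_tool_call("customer-service-agent", "read_ticket", {"id": 42})  # allowed
try:
    dispatch_tool_call("customer-service-agent", "delete_table", {"name": "users"})
except PermissionError as err:
    print(f"denied: {err}")
```

The key design choice is that the allowlist is empty unless someone deliberately populates it, which is the inverse of the default-permissive behavior most MCP servers ship with.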

3. Data Exfiltration and Malicious Code Execution

MCP servers create pathways for data exfiltration and system compromise through untrusted code execution. This manifests differently for local versus remote deployments, but both carry significant risks.

Local MCP Servers require executing third-party code directly in your infrastructure. While their open-source nature theoretically allows code inspection, most organizations skip this crucial step. We’ve identified malicious MCP servers with hidden data collection capabilities, system reconnaissance functions, and persistent access mechanisms that blend seamlessly with legitimate operations.

Remote MCP Servers present the “black box” problem: zero visibility into server-side execution. When AI agents communicate with remote MCPs, they send potentially sensitive context and data to external systems under unknown security controls. We’re already seeing unofficial remote MCP services targeting cost-conscious organizations, often lacking basic security protections.

Both scenarios create substantial data exfiltration risks because AI agents operate with rich contextual information including business documents, customer communications, and strategic data.

4. Plain Text Secrets in MCP Configuration Files

Some MCP servers require API tokens and other sensitive information to operate as part of the agentic identity provided to the MCP. A top issue is hardcoding plain text secrets in MCP configuration files. Even though these configuration files are not supposed to leave users’ local machines, there is still a risk of unintentionally sharing the files with others, or of a malicious actor gaining access to them.

While some platforms, such as VS Code, have already identified this security risk and tried to mitigate it by enabling use of environment variables and input prompts, not all platforms support these protections. This creates a gap where users on less secure platforms may have no choice but to hardcode secrets if they want to use certain MCP servers.
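As a simple illustration of the pattern, the sketch below has an MCP server read its API token from the process environment and refuse to start if it is missing. The variable name is a hypothetical example; the client configuration would reference the variable rather than containing the raw token.

```python
# Minimal sketch: load a secret from the environment instead of hardcoding
# it in an MCP configuration file. MY_SERVICE_API_TOKEN is a placeholder name.
import os
import sys

def load_api_token() -> str:
    token = os.environ.get("MY_SERVICE_API_TOKEN")
    if not token:
        # Fail loudly rather than falling back to a value baked into a config file.
        sys.exit("MY_SERVICE_API_TOKEN is not set; refusing to start")
    return token

API_TOKEN = load_api_token()
# ... pass API_TOKEN to the upstream service client here; the token never
# appears in the MCP configuration file itself, only in the process environment ...
```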

The best solution is to choose MCP servers that eliminate the need for stored secrets altogether by supporting OAuth authentication or integration with a key vault. With OAuth, each user authenticates individually through their organization’s identity provider (like Google, Microsoft, or Okta), and the MCP server handles token management automatically. This approach provides identity-level control, short-lived tokens, and centralized credential governance.

Finally, if your current MCP client platform doesn’t support environment variables, consider switching to a platform, such as VS Code, that provides these protections by default.

5. Inadequate Observability and Auditability

The final critical vulnerability involves the almost complete lack of specialized observability and auditability capabilities for MCP-related security events. Traditional security tools weren’t designed for the dynamic, interconnected nature of AI agent ecosystems, leaving organizations blind to threats targeting their agentic AI infrastructure.

Effective MCP security requires continuous visibility into agent-to-server communications, behavioral analysis to detect anomalous patterns, and the ability to quickly assess the potential blast radius of security incidents. This includes understanding how compromised MCP servers might affect connected systems, what data might be at risk, and how to contain incidents before they cascade across multiple business functions.

Organizations also need to develop auditing and logging capabilities to ensure every AI agent tool call is traceable. This requires implementing logging systems that capture records of all agent-to-server communications, including what tools were invoked, what parameters were passed, what data was accessed, and what results were returned. These audit capabilities enable organizations to detect anomalous patterns that might indicate security threats, conduct post-incident analysis to understand what happened and prevent future occurrences, and map the blast radius and impact chain of agent actions across systems when issues arise.
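As a rough illustration, the sketch below wraps a hypothetical tool dispatcher so that every call produces a structured audit record with its tool name, parameters, and result. The function and field names are our own placeholders, not part of any specific MCP SDK.

```python
# Minimal sketch of an audit log for agent tool calls: every invocation is
# recorded before anything is returned to the agent.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("mcp.audit")

def audited_tool_call(agent: str, server: str, tool: str, params: dict, call_tool):
    record = {
        "call_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent,
        "server": server,
        "tool": tool,
        "params": params,
    }
    try:
        result = call_tool(tool, params)
        record["status"] = "ok"
        record["result_summary"] = str(result)[:200]  # truncate large payloads
        return result
    except Exception as err:
        record["status"] = "error"
        record["error"] = str(err)
        raise
    finally:
        # One structured line per tool call, suitable for shipping to a SIEM.
        audit_log.info(json.dumps(record))

# Example usage with a dummy dispatcher:
audited_tool_call(
    agent="support-agent",
    server="crm-mcp",
    tool="search_customers",
    params={"query": "acme"},
    call_tool=lambda tool, params: {"matches": 3},
)
```

Because each record carries the agent, server, tool, and parameters together, the same log stream supports anomaly detection, post-incident forensics, and blast-radius mapping.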

The Broader Threat Landscape: Why MCP Security Matters

These five vulnerability categories represent more than isolated security concerns; they reflect a fundamental shift in enterprise attack surfaces. As organizations adopt agentic AI systems at scale, they’re creating interconnected digital ecosystems where traditional security boundaries no longer apply.

The concept of an always-expanding agentic AI blast radius becomes critical in this context. In traditional IT environments, security incidents typically affect specific systems or applications with defined boundaries. However, AI agents operate across multiple systems, databases, applications, and external services. A single compromised MCP server can potentially impact dozens of connected systems, affecting business operations across multiple departments and functions.

This interconnectedness also means that MCP security failures can have cascading effects. An attack that begins with a typosquatted MCP server might lead to data exfiltration, which could enable social engineering attacks against employees, which could result in additional system compromises, ultimately culminating in significant business disruption or data breaches.

The regulatory implications are equally concerning. As AI systems become subject to increasing regulatory scrutiny, organizations that can’t demonstrate adequate security controls over their agentic AI infrastructure may face compliance violations, regulatory sanctions, and legal liability. The interconnected nature of MCP-enabled systems means that security failures can potentially impact regulatory requirements across multiple domains, from data protection to financial services compliance.

Securing Your MCP Implementation: A Strategic Approach

Addressing these MCP security challenges requires both immediate tactical responses and long-term strategic investments in specialized security capabilities.

Immediate Risk Mitigation Actions

Start with a comprehensive audit of your current MCP deployments. Identify all MCP servers currently in use, verify their sources and authenticity, and check for potential typosquatted packages. Pay particular attention to packages that were installed based on community recommendations or that have naming conventions similar to official packages but from different publishers.

Implement immediate permission reviews for all MCP servers. Disable any tools or capabilities that aren’t directly necessary for your agents’ core functions, especially those with destructive capabilities or access to sensitive data. Establish a formal approval process for enabling additional MCP server capabilities, requiring security team review and business justification.

Strengthen your MCP server code review processes. For local MCP servers, implement mandatory security code reviews before deployment, focusing on identifying potential malware, data exfiltration capabilities, or hidden functionality. For remote MCP servers, conduct thorough due diligence on service providers, including security assessments, data handling practices, and compliance certifications. In addition, run MCP servers only in containers (such as Docker) to isolate them from the host and enforce least privilege by default. Containers restrict filesystem and network access, help contain compromise, and make MCP servers easier to audit, control, and roll back.
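To illustrate the containerization point, here is a minimal sketch that launches a local MCP server inside a locked-down Docker container. The image name and mount path are hypothetical; the flags shown (read-only filesystem, no network, dropped capabilities, memory limit) are standard Docker hardening options.

```python
# Minimal sketch: run a local MCP server in an isolated container rather than
# directly on the host, with least-privilege defaults.
import subprocess

def run_mcp_server_in_container(image: str, workspace: str) -> subprocess.Popen:
    cmd = [
        "docker", "run", "--rm", "-i",
        "--read-only",            # no writes to the container filesystem
        "--network", "none",      # no network access unless explicitly needed
        "--cap-drop", "ALL",      # drop all Linux capabilities
        "--memory", "512m",       # basic resource limit
        "--mount", f"type=bind,source={workspace},target=/workspace,readonly",
        image,
    ]
    # MCP servers using the stdio transport talk over stdin/stdout, which "-i" preserves.
    return subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

proc = run_mcp_server_in_container("example/filesystem-mcp:latest", "/srv/agent-workspace")
```

Servers that legitimately need outbound access can be granted a dedicated network with an egress allowlist instead of the default bridge, keeping the deny-by-default posture intact.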

Long-term Strategic Security Investments

The fundamental challenge with MCP security is that traditional security tools and processes weren’t designed for the unique characteristics of agentic AI systems. Organizations need specialized observability and monitoring capabilities that understand the interconnected, dynamic nature of AI agent ecosystems.

As described in the observability section above, this means continuous visibility into agent-to-server communications, behavioral analysis to detect anomalous patterns, and the ability to quickly assess the potential blast radius of security incidents. It also means audit trails that record every tool call, the parameters passed, the data accessed, and the results returned, so that teams can detect threats, conduct post-incident analysis, and map the impact chain of agent actions across connected systems before issues cascade across multiple business functions.

Noma Security: Purpose-Built Protection for Agentic AI

At Noma Security, we’ve built runtime protection specifically to address the unique security challenges of agentic AI systems, including comprehensive MCP security capabilities.

The Noma Agentic Risk Map provides organizations with real-time visibility into their agentic AI infrastructure, enabling security teams to understand the interconnections between AI agents, MCP servers, connected systems and data flows. This visibility is crucial for assessing the potential impact of security incidents and implementing appropriate risk controls.

Our platform continuously monitors MCP communications for anomalous patterns, unauthorized data access, and potential indicators of compromise. Unlike traditional security tools that focus on network traffic or endpoint behavior, our Noma Runtime Protection understands the context and intent of agent-to-server interactions, enabling more accurate threat detection with fewer false positives.

When security incidents do occur, the Noma Security AI security platform provides specialized investigation and response capabilities designed for the unique challenges of agentic AI security. This includes automated assessment of blast radius impact, guidance for containing incidents across interconnected systems, and detailed forensic analysis of agent behavior and MCP server interactions.

Conclusion: Don’t Let Innovation Create Security Blindspots

The adoption of MCP and agentic AI systems represents a transformative opportunity for organizations to achieve new levels of automation, efficiency, and intelligence. However, this transformation must be balanced with appropriate security investments and risk management practices.

The five MCP security blindspots we’ve identified (typosquatting, excessive permissions, data exfiltration and malicious code execution, plain text secrets, and inadequate observability) are avoidable risks in MCP adoption. They represent security gaps that can be addressed through proper planning, implementation, and ongoing management.

Organizations that proactively address these challenges will be positioned to realize the full benefits of agentic AI while maintaining strong security postures. Those that ignore these risks may find that their AI transformation also becomes their biggest vulnerability.

The window for establishing strong MCP security practices is narrowing as adoption accelerates and attackers develop more sophisticated techniques for exploiting agentic AI systems. Organizations that act now to implement comprehensive MCP security controls will have significant advantages over those that wait for security incidents to drive their security investments.

Ready to secure your agentic AI infrastructure? Contact Noma Security to learn how our purpose-built AI security platform can provide the visibility, monitoring, control and protection your organization needs to safely adopt MCP and other agentic AI technologies.