Noma Security’s Research Team announced the identification of a critical security flaw (CVSS 9.2) in Cursor, the world’s most popular AI-powered code editor. This vulnerability allows an attacker to bypass terminal execution restrictions, including “Command Allowlists” and “Ask Every Time” prompts, to execute arbitrary commands on a user’s machine. By leveraging a specific markdown obfuscation technique, an attacker can trigger automatic command execution without any user interaction or approval.
The Vulnerability: Triple Backtrick
Cursor includes a security feature designed to prevent its AI agent from running terminal commands without permission. Users can configure a “Command Allowlist” to authorize specific tools or set the editor to “Ask Every Time” before running any terminal call.
However, Noma discovered that the system’s filtering logic could be bypassed via a direct Prompt Injection attack and was easily adapted to an Indirect Prompt Injection attack. Wrapping the restricted command in triple backticks (` ` `), the command is processed in a way that skips the user’s allow-list verification entirely.
Technical Insight: The vulnerability originated from a regression during code refactoring. The allow-list logic failed to account for how the parser handles markdown formatting. Because triple backticks typically designate pre-formatted blocks, the system may have bypassed its standard sanitization protocols and incorrectly flagged the encapsulated commands for automatic execution.
Real World Risks and Impact
If exploited in the wild, this vulnerability presents significant risks to individual developers and organizations alike. Because the bypass can be triggered via a prompt injection strategy, in which malicious instructions are hidden in external files, documentation, or even code comments that the AI reads, a developer could compromise their own machine simply by opening a malicious repository.
- Data Exfiltration: Attackers could execute commands to read and upload sensitive environment variables (.env files), SSH keys, or cloud provider credentials.
- Supply Chain Compromise: Unauthorized terminal access allows for the silent modification of source code or build scripts, potentially injecting backdoors into software before it is ever committed or deployed.
- Lateral Movement: In a corporate environment, a compromised developer workstation serves as a beachhead for scanning internal networks or accessing protected company resources.
Vulnerability Proof of Concept (POC)
To demonstrate the severity, we showed that an attacker could execute a script to modify internal configuration files even if the allow-list was empty.
- Workflow: A script named “a.sh” is created with the intent to modify “.cursor.mcp.json”.
- The Bypass: Instead of requesting “bash a.sh” (which would trigger a prompt), the attacker requests the command wrapped in backticks: ` ` `bash a.sh` ` `.
- Result: Cursor executes the command automatically, modifying the sensitive file without ever alerting the user.
Defending AI Systems Against Indirect Prompt Injection
When your AI cannot distinguish between legitimate commands and maliciously injected context, existing security solutions can fail. Noma’s unified platform addresses these vulnerabilities by moving beyond basic filtering toward a zero-trust model for the entire AI lifecycle.
- Contextual Isolation: Noma’s AI-DR Runtime Protection prevents prompt injection by identifying malicious instructions within the context window. Through deterministic interception, the platform enforces a “data-only” status for untrusted inputs, stopping the model from interpreting injected context as legitimate system commands.
- Eliminating Excessive Agency: Through the Agentic Risk Map (ARM), Noma identifies and restricts dangerous combinations of toolsets and permissions. This ensures high-risk operations, such as filesystem writes, remain behind a “hard-coded” approval gate that is structurally isolated from the model’s output.
- Proactive Visibility: Noma’s Runtime Protection continuously monitors agent reasoning and tool calls. Shifting from keyword matching to behavioral validation allows for the immediate detection of anomalies, such as unauthorized shell scripts or data exfiltration attempts.
Disclosure Timeline
- August 7, 2025: Initial report submitted to the Cursor security team.
- August 8, 2025: Cursor team acknowledges the report and begins investigation.
- December 3, 2025: Cursor requests additional settings info after confirming the bypass works even with empty allow-lists.
- December 9, 2025: Cursor confirms a regression was found during a recent refactor and begins a fix.
- December 18, 2025: Fix merged and confirmed for the next release.
- March 2026: Disclosure permitted according to responsible vulnerability disclosure policies
Conclusions and Recommendations
This flaw highlights the types of “mismatches” that can occur between AI-generated markdown and system-level execution, as well as the potential risks of unprotected AI-based development. We recommend that all Cursor users update their editor to the latest version immediately to ensure that terminal guardrails function as intended and ensure your organization has a robust, comprehensive, and proactive indirect prompt injection solution in place before moving your AI development from testing to production environments.
We would like to thank the Cursor team for their transparency and for working with the Noma Security Research Team to help secure and protect the AI-augmented development ecosystem.
***
Many thanks to Avi Yeger for his collaboration on this vulnerability research and blog.


