ForcedLeak vulnerability discovered by Noma Labs in Salesforce Agentforce

AI Agent risk exposed in Salesforce Agentforce

The first zero-click vulnerability capable of running destructive actions

Noma Labs discovered the ForcedLeak AI agent vulnerability chain which allows external attackers to exfiltrate sensitive CRM data through an indirect prompt injection attack, using a whitelisted domain that cost $5. This agentic vulnerability extends far beyond simple data theft. Attackers can manipulate CRM records, establish persistent access, and target any organization using AI-integrated business tools. 

The ForcedLeak attack path

                  Get the Report

This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response AI systems. Learn more about how the attack worked and how to protect your organziation from these destructive capabilities. 

Watch

Straight from Noma Labs

Learn how the Noma Labs research team was able to exfiltrate sensitive data from Salesforce Agentforce.

Frequently Asked Questions

What is this vulnerability and how serious is it?

A security vulnerability was identified in Salesforce Agentforce, which potentially enabled external attackers to exfiltrate CRM data through a sophisticated indirect prompt injection attack. Malicious data containing concealed instructions could be submitted by attackers, which could then be executed when employees subsequently interact with said data via AI agents, potentially leading to the exposure of sensitive information. This attack vector posed a significant threat due to its delayed nature, allowing it to remain dormant until activated by routine employee interactions. Upon activation, the injected payload would execute within the context of the running user: for Agent Service Agent, this would typically be the user under which the agent operates; for employee agents, it would be the organizational user interacting with the agent. This could have led to the disclosure of potentially sensitive information, contingent on the data accessibility of the executing user and the configured actions of the payload.

On September 8, 2025, Salesforce began enforcement of Trusted URL allow lists for Agentforce and Einstein Generative AI agents as a part of an ongoing effort to strengthen customer environments and provide a crucial defense-in-depth control against sensitive data escaping customer systems via external requests after a successful prompt injection. As a result of this mitigation customers are not affected by this indirect prompt injection attack. More information can be found here.

The attack works through a multi-step process where an attacker first submits a Web-to-Lead form with malicious instructions hidden in the description field that appear to be normal lead inquiries. When an internal employee later queries the AI about that lead using standard business processes, the AI executes both the employee’s legitimate request and the attacker’s hidden commands. The system then retrieves potentially sensitive Salesforce CRM information and transmits it to attacker-controlled servers through seemingly innocent image requests that bypass security controls. This time-delayed execution makes the attack particularly insidious because it exploits the trust boundary between employee instructions and external user data.

With trusted URLs now enforced, agents will no longer return images or other URLs containing sensitive data.

Salesforce commenced the enforcement of Trusted URL allowlists for Agentforce and Einstein Generative AI agents on September 8, 2025. This update is part of an ongoing effort to strengthen customer environments and follows the “principle of least privilege” security model.

Customers can audit all existing lead data from recent months for suspicious submissions containing unusual instructions, technical language, or references to data processing that don’t match typical lead profiles. Customers should also review AI agent interactions from the past 30-60 days for anomalous behavior and implement additional monitoring for unusual data access patterns while notifying your marketing team about the temporary suspension of lead capture functionality. Here are some links to tools that can help perform these reviews:

Long-term protection requires implementing AI-specific security controls including real-time prompt injection detection, establishing strict input validation for all user-controlled data fields that will be processed by AI systems, and creating comprehensive AI agent inventories to maintain visibility into your AI attack surface. You should develop AI security governance frameworks that treat AI agents as production components requiring rigorous security validation, threat modeling, and isolation for high-risk agents processing external data sources. Contact Noma Security to learn more about how we can help your teams protect against emerging AI threats like ForcedLeak.

More AI vulnerability research

Noma Labs is a team of elite AI security researchers dedicated to uncovering enterprise AI vulnerabilities before attackers do. We’re using our deep AI vulnerability research to provide organizations the knowledge and tools they need to enable AI innovation securely.

Uncrew: Understanding the Risk Behind a Leaked Internal GitHub Token at CrewAI

The Noma Labs team discovered a critical vulnerability in the CrewAI platform, granting full access to CrewAI’s private GitHub repositories.
Learn more >

How an AI Agent Vulnerability in LangSmith Could Lead to Stolen API Keys and Hijacked LLM Responses

Noma Security research team uncovers CVSS 8.8 “AgentSmith” vulnerability, a potentially malicious proxy configuration affecting AI agents and prompts
Learn more >

Noma Research discovers RCE vulnerability in AI-development platform, Lightning AI

Uncover how a hidden URL flaw in AI tools enabled RCE attacks with root privileges, potentially compromising client data
Learn more >

Podcast

Lorem Ipsum Dolor Sit Amet, Consectetur Adipiscing

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do 
eiusmod tempor incididunt ut labore et dolore magna aliqua

Recent Episodes

Ready to navigate 
AI securely?

Ready to navigate AI securely?