Noma Labs discovered the ForcedLeak AI agent vulnerability chain which allows external attackers to exfiltrate sensitive CRM data through an indirect prompt injection attack, using a whitelisted domain that cost $5. This agentic vulnerability extends far beyond simple data theft. Attackers can manipulate CRM records, establish persistent access, and target any organization using AI-integrated business tools.
This vulnerability demonstrates how AI agents present a fundamentally different and expanded attack surface compared to traditional prompt-response AI systems. Learn more about how the attack worked and how to protect your organziation from these destructive capabilities.
Learn how the Noma Labs research team was able to exfiltrate sensitive data from Salesforce Agentforce.
A security vulnerability was identified in Salesforce Agentforce, which potentially enabled external attackers to exfiltrate CRM data through a sophisticated indirect prompt injection attack. Malicious data containing concealed instructions could be submitted by attackers, which could then be executed when employees subsequently interact with said data via AI agents, potentially leading to the exposure of sensitive information. This attack vector posed a significant threat due to its delayed nature, allowing it to remain dormant until activated by routine employee interactions. Upon activation, the injected payload would execute within the context of the running user: for Agent Service Agent, this would typically be the user under which the agent operates; for employee agents, it would be the organizational user interacting with the agent. This could have led to the disclosure of potentially sensitive information, contingent on the data accessibility of the executing user and the configured actions of the payload.
On September 8, 2025, Salesforce began enforcement of Trusted URL allow lists for Agentforce and Einstein Generative AI agents as a part of an ongoing effort to strengthen customer environments and provide a crucial defense-in-depth control against sensitive data escaping customer systems via external requests after a successful prompt injection. As a result of this mitigation customers are not affected by this indirect prompt injection attack. More information can be found here.
The attack works through a multi-step process where an attacker first submits a Web-to-Lead form with malicious instructions hidden in the description field that appear to be normal lead inquiries. When an internal employee later queries the AI about that lead using standard business processes, the AI executes both the employee’s legitimate request and the attacker’s hidden commands. The system then retrieves potentially sensitive Salesforce CRM information and transmits it to attacker-controlled servers through seemingly innocent image requests that bypass security controls. This time-delayed execution makes the attack particularly insidious because it exploits the trust boundary between employee instructions and external user data.
With trusted URLs now enforced, agents will no longer return images or other URLs containing sensitive data.
Salesforce commenced the enforcement of Trusted URL allowlists for Agentforce and Einstein Generative AI agents on September 8, 2025. This update is part of an ongoing effort to strengthen customer environments and follows the “principle of least privilege” security model.
Customers can audit all existing lead data from recent months for suspicious submissions containing unusual instructions, technical language, or references to data processing that don’t match typical lead profiles. Customers should also review AI agent interactions from the past 30-60 days for anomalous behavior and implement additional monitoring for unusual data access patterns while notifying your marketing team about the temporary suspension of lead capture functionality. Here are some links to tools that can help perform these reviews:
Long-term protection requires implementing AI-specific security controls including real-time prompt injection detection, establishing strict input validation for all user-controlled data fields that will be processed by AI systems, and creating comprehensive AI agent inventories to maintain visibility into your AI attack surface. You should develop AI security governance frameworks that treat AI agents as production components requiring rigorous security validation, threat modeling, and isolation for high-risk agents processing external data sources. Contact Noma Security to learn more about how we can help your teams protect against emerging AI threats like ForcedLeak.
Noma Labs is a team of elite AI security researchers dedicated to uncovering enterprise AI vulnerabilities before attackers do. We’re using our deep AI vulnerability research to provide organizations the knowledge and tools they need to enable AI innovation securely.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua