AI Agent risk exposed in Salesforce Agentforce
Overview

Noma Labs uncovered GeminiJack, a critical vulnerability in Google Gemini Enterprise and Vertex AI Search that collapses the trust boundary between enterprise data and AI instructions.

Overview

Noma Labs uncovered GeminiJack, a critical vulnerability in Google Gemini Enterprise and Vertex AI Search that collapses the trust boundary between enterprise data and AI instructions.

Overview
Noma Labs discovered the ForcedLeak AI agent vulnerability chain which allows external attackers to exfiltrate sensitive CRM data through an indirect prompt injection attack, using a whitelisted domain that cost $5. 
Impact
This agentic vulnerability extends far beyond simple data theft. Attackers can manipulate CRM records, establish persistent access, and target any organization using AI-integrated business tools. 

The first zero-click vulnerability capable of
running destructive actions

ForcedLeak attack path

Watch: Straight from Noma Labs

Learn how the Noma Labs research team was able to exfiltrate sensitive data from Salesforce Agentforce.

Frequently Asked Questions

What is this vulnerability and how serious is it?
A security vulnerability was identified in Salesforce Agentforce, which potentially enabled external attackers to exfiltrate CRM data through a sophisticated indirect prompt injection attack. Malicious data containing concealed instructions could be submitted by attackers, which could then be executed when employees subsequently interact with said data via AI agents, potentially leading to the exposure of sensitive information. This attack vector posed a significant threat due to its delayed nature, allowing it to remain dormant until activated by routine employee interactions. Upon activation, the injected payload would execute within the context of the running user: for Agent Service Agent, this would typically be the user under which the agent operates; for employee agents, it would be the organizational user interacting with the agent. This could have led to the disclosure of potentially sensitive information, contingent on the data accessibility of the executing user and the configured actions of the payload.
Is my organization affected?
On September 8, 2025, Salesforce began enforcement of Trusted URL allow lists for Agentforce and Einstein Generative AI agents as a part of an ongoing effort to strengthen customer environments and provide a crucial defense-in-depth control against sensitive data escaping customer systems via external requests after a successful prompt injection. As a result of this mitigation customers are not affected by this indirect prompt injection attack. More information can be found here.
How does the attack work?
The attack works through a multi-step process where an attacker first submits a Web-to-Lead form with malicious instructions hidden in the description field that appear to be normal lead inquiries. When an internal employee later queries the AI about that lead using standard business processes, the AI executes both the employee’s legitimate request and the attacker’s hidden commands. The system then retrieves potentially sensitive Salesforce CRM information and transmits it to attacker-controlled servers through seemingly innocent image requests that bypass security controls. This time-delayed execution makes the attack particularly insidious because it exploits the trust boundary between employee instructions and external user data. With trusted URLs now enforced, agents will no longer return images or other URLs containing sensitive data.
What should I do immediately?
Salesforce commenced the enforcement of Trusted URL allowlists for Agentforce and Einstein Generative AI agents on September 8, 2025. This update is part of an ongoing effort to strengthen customer environments and follows the “principle of least privilege” security model. Customers can audit all existing lead data from recent months for suspicious submissions containing unusual instructions, technical language, or references to data processing that don’t match typical lead profiles. Customers should also review AI agent interactions from the past 30-60 days for anomalous behavior and implement additional monitoring for unusual data access patterns while notifying your marketing team about the temporary suspension of lead capture functionality. Here are some links to tools that can help perform these reviews: Release Notes: Agentforce Analytics (Beta) Release Notes: Agentforce Optimization (Beta) Salesforce Help: Agentforce Analytics (Beta) Salesforce Help: Agentforce Optimization (Beta)
How can I protect my organization long-term?
Long-term protection requires implementing AI-specific security controls including real-time prompt injection detection, establishing strict input validation for all user-controlled data fields that will be processed by AI systems, and creating comprehensive AI agent inventories to maintain visibility into your AI attack surface. You should develop AI security governance frameworks that treat AI agents as production components requiring rigorous security validation, threat modeling, and isolation for high-risk agents processing external data sources. Contact Noma Security to learn more about how we can help your teams protect against emerging AI threats like ForcedLeak.

Noma Labs uncovers enterprise AI vulnerabilities so organizations can adopt AI securely.