The Agentic AI Security Academy

Welcome

Noma Labs discovered ForcedLeak, an AI agent vulnerability chain that allows external attackers to exfiltrate sensitive CRM data through an indirect prompt injection attack, using a whitelisted domain purchased for $5. The vulnerability extends far beyond simple data theft: attackers can manipulate CRM records, establish persistent access, and target any organization using AI-integrated business tools.

What is Agentic AI Security?

Get Started with Agentic AI Security

Part 1 - An Introduction to Agents

Read More


Ready to navigate AI securely?

Additional Resources

Noma Labs is a team of elite AI security researchers dedicated to uncovering enterprise AI vulnerabilities before attackers do. We’re using our deep AI vulnerability research to provide organizations the knowledge and tools they need to enable AI innovation securely.

Uncrew: Understanding the Risk Behind a Leaked Internal GitHub Token at CrewAI

The Noma Labs team discovered a critical vulnerability in the CrewAI platform that granted full access to CrewAI’s private GitHub repositories.
Learn more >

How an AI Agent Vulnerability in LangSmith Could Lead to Stolen API Keys and Hijacked LLM Responses

The Noma Security research team uncovered “AgentSmith,” a CVSS 8.8 vulnerability involving a potentially malicious proxy configuration affecting AI agents and prompts.
Learn more >

Noma Research Discovers RCE Vulnerability in AI Development Platform Lightning AI

Uncover how a hidden URL flaw in AI tools enabled RCE attacks with root privileges, potentially compromising client data.
Learn more >