Executive Summary
The Noma Labs team discovered a critical (CVSS 9.2) vulnerability in the CrewAI platform that exposed an internal GitHub token granting full access to CrewAI’s private GitHub repositories.
Background: AI Platform Security Research
Our ongoing research into emerging AI threats continues to uncover significant vulnerabilities not only in AI platforms themselves, but also in the tools and technologies woven into today’s complex, interconnected AI systems.
As organizations rapidly integrate AI agents and platforms into their critical operations (often ahead of robust security controls), our team has prioritized deep investigations into leading AI platforms. Following our recent disclosures of ForcedLeak in Salesforce Agentforce, critical flaws in Lightning AI, and the AgentSmith vulnerability in LangSmith, we also identified a critical-severity vulnerability in the CrewAI platform that could expose sensitive internal credentials to unauthorized users.
The CrewAI vulnerability lies in the fabric of the platform itself: an internal GitHub access token exposed to users.
Uncrew Vulnerability Discovery
The Noma Labs team identified a vulnerability within the CrewAI platform where a high-privilege GitHub access token was inadvertently exposed through improper exception handling. CrewAI, a popular platform for building and deploying AI agent crews, is widely used by organizations to orchestrate complex multi-agent workflows for business automation and decision-making processes.
This newly identified vulnerability exploited a fundamental flaw in how the platform handled error conditions, allowing users to view an internal GitHub token under specific circumstances. The vulnerability, named Uncrew by the Noma Labs team, has been assigned a CVSS score of 9.2, reflecting its high potential impact on the confidentiality and integrity of CrewAI’s internal systems.
Risk Assessment
The exposure of high-privilege GitHub tokens poses severe risks to organizations relying on CrewAI for their AI operations. Beyond the immediate threat of unauthorized repository access, malicious actors could potentially gain persistent access to internal codebases, proprietary algorithms, and sensitive configuration data. This could result in intellectual property theft, supply chain attacks, and broader compromise of the platform’s infrastructure, amplifying both the scale and complexity of potential security incidents.
Resolution for Uncrew, the CrewAI Vulnerability
In line with responsible disclosure practices, Noma Labs promptly reported the vulnerability to CrewAI’s security team. CrewAI responded swiftly and responsibly, releasing an effective security patch that mitigated the issue and closed the exploitation pathway, demonstrating their strong commitment to user safety and rapid incident resolution.
Full Impact Analysis
The exposed GitHub token wasn’t a limited-scope credential: it was confirmed to have full administrative privileges across CrewAI’s entire GitHub infrastructure. Once obtained, this token could facilitate:
Repository Compromise: Clone or access all private GitHub repositories within CrewAI’s organization, providing complete visibility into proprietary source code, development history, and architectural decisions that took years to develop.
Code Injection and Tampering: Push malicious code directly to repositories or tamper with automation workflows, potentially introducing backdoors or data exfiltration mechanisms that would be distributed to all platform users through normal software updates.
Secondary Secret Harvesting: Exfiltrate other embedded secrets, API keys, database credentials, and pipeline configuration files stored within the repositories, creating pathways for broader infrastructure compromise.
Supply Chain Attack Execution: Escalate privileges through poisoned updates or supply-chain attack vectors, potentially compromising every organization and user relying on CrewAI for their AI operations.
Root Cause
When the CrewAI platform encountered a failure during the provisioning of machines, it did not securely handle exceptions. As a result, the exception details, including a CrewAI-owned internal GitHub token, were unintentionally exposed to users via error messages. This occurred due to a lack of proper redaction or sanitization of sensitive data in the failure response.
Provisioning Flow Overview
During the provisioning process, we observed that CrewAI sends a GET request to check the status of the deployment. If the status is successful, the system continues polling; however, if a failure occurs, the backend returns an error containing the full JSON payload, including an internal GitHub token.
Request Endpoint:

GET /crewai_plus/deployments/[deployment_id]/check_provision_status
Observed Error Response (sanitized):
JSON
{
  "id": [ProvisionID],
  "repo_clone_url": "https://x-access-token:ghu_Ahd...."
}

An error was received due to an internal failure, and the response included an internal GitHub token within the repo_clone_url field.
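The flawed pattern can be illustrated with a minimal sketch. This is hypothetical code, not CrewAI’s actual implementation; the handler name and field names are assumptions for illustration. The bug is the same in shape: on failure, the raw internal record is serialized into the error response, credential included.

```python
import json

def check_provision_status(deployment: dict) -> dict:
    """Hypothetical status-check handler illustrating the flaw.

    `deployment` stands in for the backend's internal provisioning
    record; its field names are assumed for this example.
    """
    if deployment.get("status") != "failed":
        return {"status": deployment.get("status")}
    # BUG: the full internal record is echoed back to the caller,
    # leaking the access token embedded in repo_clone_url.
    return {
        "error": "provisioning failed",
        "details": json.dumps(deployment),
    }

record = {
    "status": "failed",
    "repo_clone_url": "https://x-access-token:ghu_EXAMPLETOKEN@github.com/org/repo.git",
}
response = check_provision_status(record)
print("ghu_" in response["details"])  # → True: the token reaches the client
```

The fix is structural: failure responses should carry a generic error message and an internal correlation ID, never the raw exception payload.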
Immediate Impact of Losing All Private Repositories on CrewAI
The loss of CrewAI’s private repositories would result in:
- Total exposure of proprietary source code, including critical platform logic and intellectual property.
- Immediate disruption to development operations, with engineers unable to access or recover active codebases.
- Loss of internal documentation and embedded credentials, compromising infrastructure and integrations.
- Unrestricted access for malicious actors, enabling code tampering, data theft, or sabotage.
- High risk of software supply chain attacks, potentially affecting downstream users and partners.
- Severe reputational and operational damage, undermining trust with customers, partners, and stakeholders.
Mitigating Actions
Improve Exception Handling: Improve the CrewAI platform’s exception handling by ensuring that all sensitive information, especially credentials and tokens, is automatically redacted or hidden from logs, responses, and any user-facing messages.
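One way to implement such redaction is a filter applied to every outgoing error message and log line. The sketch below is our illustration of the technique, not CrewAI’s actual fix; the regex covers common GitHub token prefixes on a best-effort basis.

```python
import re

# GitHub token prefixes: ghp_ (personal), gho_ (OAuth), ghu_/ghs_
# (app user/server), ghr_ (refresh). Best-effort illustrative pattern.
_SECRET_RE = re.compile(r"gh[pousr]_[A-Za-z0-9]{8,}")

def redact_secrets(text: str) -> str:
    """Replace anything that looks like a GitHub token with a marker."""
    return _SECRET_RE.sub("[REDACTED]", text)

leaked = '{"repo_clone_url": "https://x-access-token:ghu_EXAMPLETOKEN123@github.com/org/repo.git"}'
print(redact_secrets(leaked))
# {"repo_clone_url": "https://x-access-token:[REDACTED]@github.com/org/repo.git"}
```

Pattern-based redaction is a safety net, not a substitute for keeping credentials out of error payloads in the first place; both layers should be in place.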
Token Revocation and Rotation: Immediately revoke the compromised GitHub token. Issue a new token with minimal required privileges. Establish automated token rotation policies (e.g., every 30 or 60 days) to reduce future exposure risk.
Audit and Monitoring: Conduct a comprehensive audit of GitHub repository access logs for suspicious activity. Monitor for any unauthorized usage of the previous token. Enable alerts and anomaly detection for token usage patterns.
Access Control Review: Review all service accounts and token permissions to ensure adherence to the principle of least privilege.
Security Postmortem and Process Hardening: Perform a full root cause analysis and document lessons learned. Update development processes to include security-focused code reviews, especially for areas dealing with external integrations and exception logging.
Uncrew Responsible Disclosure Timeline
March 5, 2025
- The Noma Labs team discovered a vulnerability involving an exposed internal GitHub token belonging to CrewAI. The issue was immediately reported to CrewAI the same day.
- CrewAI promptly acknowledged the report and initiated an investigation. Impressively, within five hours of receiving the disclosure, the CrewAI team had deployed a security fix. The Noma Labs team independently verified that the vulnerability was resolved.
Why this is important for the AI security industry
The CrewAI token exposure vulnerability demonstrates how traditional security oversights can have amplified consequences in AI environments. As AI platforms become central to enterprise operations, a single vulnerability can potentially compromise entire ecosystems through supply chain attacks and cascading failures.
This discovery reinforces the critical need for proactive security research and AI-specific security frameworks that address the unique risks of autonomous agents and interconnected AI systems. Organizations can no longer treat AI security as an afterthought; the stakes are simply too high.
At Noma Security, our ongoing research into AI platform vulnerabilities ensures that the AI transformation occurs safely and securely, helping organizations build trustworthy AI deployments that can handle their most critical operations. If you would like to learn more about how Noma Security can help you embrace AI with confidence, please reach out to us.