Almost daily, we learn of a new “Claw” fork or platform; just yesterday, The Verge leaked Microsoft’s intention to add a Claw framework to Copilot. Over the last few weeks, the software development lifecycle market has undergone another significant transition following the leak of Anthropic’s internal “Claude Code” source code. Since that leak and the quick emergence of Claw Code, a community-driven, model-agnostic version of Claude Code, Noma has seen uncontrolled, rapid adoption of agentic workflows in Git repositories, accelerating at a remarkable pace.
The swift shift from browser-based LLM interactions to autonomous local agents has left organizations that are slower to adapt both unprepared and unprotected. Unlike standard AI assistants, these tools operate as Non-Human Identities (NHIs) and are granted direct execution privileges on developer endpoints. That combination of agency and access creates an entirely new attack surface for both local AI and supply-chain risk.
Field Observations: The Emergence of Shadow Agents
Noma’s recent threat research across enterprise environments, combined with ongoing market discovery, reveals several surprising but specific trends in the rapid deployment of Claw-based tools within the enterprise:
- Agent-Led Repository Maintenance: Repositories are moving unexpectedly quickly toward autonomous agent operations, such as refactoring code, managing dependencies, and opening pull requests. Even in enterprise environments, this automation is often allowed to entirely bypass or significantly shorten the standard human review cycles intended to catch security regressions and logic flaws.
- Endpoint Visibility Gaps: Because these agents execute on the endpoint, they evade SaaS-level logging. EDRs may capture the agent’s activity, but because the agent typically runs under the same user context, that activity is difficult to distinguish from the developer’s own actions. This visibility gap leaves even the best security teams without a clear audit trail of AI-driven changes.
- Unofficial Supply Chain: Enterprise developers frequently opt for “Claw” forks over official vendor tools to gain model flexibility and speed. These community forks lack formal security backing and established vulnerability disclosure programs, introducing unvetted code into the heart of the development environment.
Technical Analysis: The Risks of Local Autonomy
A core concern with the Claw ecosystem and architecture is the agent’s ability to inherit the developer’s local permissions, a concept we’ve defined as Privilege Inheritance.
- Active Session Exploitation: Claw agents inherit the shell context, including the permissions of the user running the CLI. If that developer has active auth tokens for AWS, Kubernetes, or internal databases, the agent can programmatically interact with those services without any further authentication.
- “Auto-Approval” Risk: To maximize speed, developers frequently enable “auto-approve” settings. This configuration hands LLM-driven processes an unattended shell, allowing them to execute Bash commands and modify system files without a human in the loop.
- Any Model, Any Time: Unlike Claude Code, the newer Claws can be used with any model, including uncensored ones.
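The privilege-inheritance risk described above can be illustrated with a short audit sketch. This is a minimal, illustrative example, not an exhaustive scanner: the file paths and environment variable names below are common defaults, and `inherited_credentials` is a hypothetical helper, not part of any Claw tool.

```python
import os
from pathlib import Path

# Common credential sources an agent inherits from the developer's shell
# context. These are typical defaults, not an exhaustive list.
CREDENTIAL_FILES = ["~/.aws/credentials", "~/.kube/config", "~/.ssh/id_rsa"]
CREDENTIAL_ENV_VARS = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "KUBECONFIG",
    "DATABASE_URL",
]

def inherited_credentials(env=os.environ):
    """Return credential sources readable from the current user context."""
    found = [p for p in CREDENTIAL_FILES if Path(p).expanduser().exists()]
    found += [v for v in CREDENTIAL_ENV_VARS if env.get(v)]
    return found
```

Anything this function returns is reachable by an agent running under the same user, with no additional authentication step.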
Rapid Response: Security Risks and Misconfigurations
Our research into Claw platforms highlights several dangerous defaults we strongly recommend enterprises address:
Web-to-Bash Vector
OpenClaw allows a model to browse the web and execute bash commands in the same session. This shared session creates a path for Indirect Prompt Injection: a malicious website or a poisoned documentation file can feed instructions to the agent that trigger unauthorized local commands, such as curl | bash. For more information about the destructive risks of Indirect Prompt Injection, see our vulnerability blogs on GeminiJack, DockerDash, and GrafanaGhost.
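One partial mitigation is to screen agent-proposed commands before any auto-approval fires. The sketch below is a naive, assumed pre-execution guard: the regex patterns are illustrative, a determined injection can evade string matching, and sandboxing plus egress control are still required.

```python
import re

# Naive deny-list of shell patterns an injected prompt commonly produces.
# Illustrative only; regex filtering is bypassable and must be layered
# with sandboxing and egress controls.
DANGEROUS_PATTERNS = [
    r"curl\s+[^|]*\|\s*(ba)?sh",   # pipe remote content straight into a shell
    r"wget\s+[^|]*\|\s*(ba)?sh",
    r"rm\s+-rf\s+/",
    r"base64\s+(-d|--decode)",     # common obfuscation step
]

def requires_human_review(command: str) -> bool:
    """Return True if an agent-proposed command should not be auto-approved."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

A guard like this turns the riskiest commands back into human-in-the-loop decisions without blocking routine agent activity.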
Directory Agency and Scoping
Default configurations often don’t “jail” the agent to a specific project folder. An agent can be easily manipulated into reading sensitive files outside the repository, such as ~/.ssh/id_rsa or .env files, and exfiltrating that data via outbound web requests.
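Scoping the agent to its project directory is straightforward to enforce at the file-access layer. The following is a minimal sketch of such a check; `is_within_project` is a hypothetical helper name, not an existing Claw setting.

```python
from pathlib import Path

def is_within_project(path: str, project_root: str) -> bool:
    """Reject any file access that resolves outside the project directory.

    resolve() canonicalizes '..' segments and follows symlinks, so a
    symlink pointing at ~/.ssh is caught as well.
    """
    root = Path(project_root).resolve()
    target = Path(path).resolve()
    return target == root or root in target.parents
```

Every read and write the agent attempts would pass through a check like this before touching disk.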
Typical “Claw” Technical Capability Risk Map
| Capability | Function | Security Risk |
|---|---|---|
| Bash Access | Execute terminal commands | Arbitrary code execution; unauthorized software installation; lateral movement. |
| File Access | Read/Write to local disk | Injection of backdoors; theft of local credentials. |
| Web Access | Fetch external data; post data to external sources | Import of untrusted content into the agent’s context; data exfiltration via POST requests to external listeners. |
CISO Considerations
Addressing the risks of agentic tools requires moving focus from Content (what the AI generates) to Conduct (what the agent executes).
- Identity Governance: Treat every local agent installation as a Non-Human Identity. Organizations require visibility into which agents are active and what internal resources they are authorized to touch.
- Environment Isolation: Agentic CLI tools should ideally run within isolated containers (e.g., Docker). This prevents the agent from accessing the host OS, local SSH keys, or the broader corporate network.
- Egress Control: Restrict the network access of local agents. As a policy, local agents should only be able to reach an allowlist of approved documentation sites and internal package registries. This directly mitigates the risk of silent data exfiltration.
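The egress policy above can be reduced to a simple check at whatever proxy or hook mediates the agent’s outbound traffic. This is a sketch under assumptions: the hostnames are placeholders for an organization’s approved sites, and `egress_allowed` is a hypothetical function, not a feature of any Claw tool.

```python
from urllib.parse import urlparse

# Placeholder allowlist; substitute your organization's approved
# documentation sites and internal package registries.
ALLOWED_HOSTS = {
    "docs.python.org",
    "registry.internal.example.com",
}

def egress_allowed(url: str) -> bool:
    """Permit outbound requests only to allowlisted hosts over HTTPS."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

Denied requests should be logged, since repeated attempts to reach unlisted hosts are themselves a signal of prompt injection or exfiltration.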
Conclusion: Managing Agentic Adoption Speed
Claws offer significant productivity gains, but their decentralized nature, lack of security compliance, and local execution model introduce risks that traditional AI policies cannot cover. When deciding what to allow into your enterprise and how to protect your core business, the priority must be a governed framework in which developers can use agentic tools without granting persistent, unobserved LLM access to the enterprise’s core intellectual property.


