One of the more interesting questions I keep coming back to with other CISOs and security friends is how we should govern AI agent identities. Do we treat them like service accounts? Like users? Or something else entirely?

Where I’ve landed, at least for now, is: both, and neither.

Agents are a true hybrid. They share meaningful characteristics with human identities and with service accounts, and we can borrow governance patterns from both. But there’s also a layer that doesn’t fit either model cleanly, and that’s where the real challenges start to show up.

If we want something that scales, we need to be explicit about all three: what transfers from human identity governance, what transfers from service account governance, and what’s genuinely new.

What Agents Share with Human Identities

Like a person, an agent has an owner. There’s a team or individual responsible for what it does, who approves its deployment, and who is accountable for retiring it. That’s not consistently true in service account models, where credentials can persist long after anyone remembers why they were created.

Agents also need the same kind of periodic access review we apply to human identities. Not just “does this credential still exist,” but “is this agent still doing what we intended, and does it still need the access it has?” That’s a human governance concept, and it maps directly.
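As a sketch of what that review question could look like in code, here is a toy check that flags granted-but-unused access and overdue reviews. The names, the 90-day window, and the report shape are all illustrative assumptions, not a real IAM API:

```python
from datetime import datetime, timedelta

def review_agent_access(granted: set, used_last_90d: set,
                        last_reviewed: datetime) -> dict:
    """The same question we ask of humans, applied to an agent:
    does it still need what it has? Flags scopes that were granted
    but never exercised, and reviews that are overdue."""
    return {
        "unused_scopes": granted - used_last_90d,
        "review_overdue": datetime.now() - last_reviewed > timedelta(days=90),
    }

# An agent granted three scopes but observed using only one.
report = review_agent_access(
    granted={"crm:read", "crm:write", "billing:read"},
    used_last_90d={"crm:read"},
    last_reviewed=datetime.now() - timedelta(days=120),
)
```

The point of the sketch is that the review compares granted access against observed need, not merely whether the credential still exists.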

There’s also the question of changing access. Humans gain and lose permissions over time. Agents will too. In many cases, they have legitimate reasons to request elevated access to complete a task. That doesn’t mean standing privilege. It means controlled, auditable elevation.

And finally, delegation. Agents can act on behalf of a person. The same governance questions apply: is that delegation appropriate, is it bounded, and can it be revoked?

What Agents Share with Service Accounts

At the same time, agents are clearly non-human systems.

They should have dedicated, non-interactive machine identities. They shouldn’t borrow human credentials. They should authenticate without passwords wherever possible, and their secrets should live in a vault, not in code or configuration. Those are well-understood service account fundamentals.

Least privilege still applies. An agent should only have access to the systems and data it genuinely needs. Where things start to diverge is predictability.

A traditional service account backs a deterministic workload: you can define its access requirements up front. Agents are different. Some portion of their behavior, and therefore their access needs, emerges at runtime.

A pattern that fits is just-in-time access. Credentials are issued per session, scoped to a specific task, and expire when that task is complete. This is service account governance adapted for non-deterministic systems.
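A minimal sketch of that pattern, assuming a hypothetical in-house credential broker rather than any particular vault or STS product (every name below is invented for illustration):

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class SessionCredential:
    """A per-session credential scoped to one task."""
    token: str
    agent_id: str
    task_id: str
    scopes: frozenset     # resources this session may touch
    expires_at: float     # epoch seconds

    def allows(self, scope: str) -> bool:
        # Valid only within the TTL and only for the minted scopes.
        return time.time() < self.expires_at and scope in self.scopes

def issue_session_credential(agent_id: str, task_id: str,
                             requested_scopes: set,
                             ttl_seconds: int = 900) -> SessionCredential:
    """Mint a short-lived credential for a single task. Nothing is
    standing: when the TTL lapses, the credential is simply invalid."""
    return SessionCredential(
        token=secrets.token_urlsafe(32),
        agent_id=agent_id,
        task_id=task_id,
        scopes=frozenset(requested_scopes),
        expires_at=time.time() + ttl_seconds,
    )

# The credential works only for the scopes it was minted with.
cred = issue_session_credential("summarizer-01", "task-123",
                                {"docs:read"}, ttl_seconds=300)
```

The design choice that matters is that expiry and scope are properties of the credential itself, so there is nothing to remember to revoke when the task ends.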

What’s Actually New

There are three areas where neither framework is sufficient on its own.

The first is delegation chains. Agents don’t just act. They orchestrate. A planning agent calls a research agent, which invokes a drafting agent, which triggers a downstream tool. Now you have a chain of identities operating on behalf of a human.

What identity does each step use? The human’s? The parent agent’s? A newly scoped identity?

If sub-agents inherit the full human credential, your attack surface expands across the entire chain. If each sub-agent gets its own identity, you need a way to authorize and constrain that delegation so it doesn’t become effectively ungoverned.
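One way to make that delegation governable is scope attenuation: each hop in the chain gets its own identity and can only narrow, never expand, what it inherited, while the chain itself stays auditable. A toy sketch under those assumptions (identities and scope names are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegatedIdentity:
    """One link in a delegation chain: its own principal, its
    attenuated scopes, and a record of every identity above it."""
    principal: str       # e.g. "user:alice" or "agent:planner"
    scopes: frozenset
    chain: tuple         # principals this identity descends from

    def delegate(self, child: str, requested: set) -> "DelegatedIdentity":
        # Intersection means a child can never hold more than its parent.
        granted = self.scopes & frozenset(requested)
        return DelegatedIdentity(
            principal=child,
            scopes=granted,
            chain=self.chain + (self.principal,),
        )

# The human's authority narrows at every hop.
alice = DelegatedIdentity(
    "user:alice",
    frozenset({"docs:read", "docs:write", "mail:send"}),
    chain=(),
)
planner = alice.delegate("agent:planner", {"docs:read", "docs:write"})
drafter = planner.delegate("agent:drafter", {"docs:write", "mail:send"})
```

Here the drafting agent asks for mail access, but since the planner was never granted it, the request is silently attenuated away rather than escalating the chain.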

That’s not a solved problem today, and it doesn’t map cleanly to existing IAM models.

The second is the confused deputy problem, now at scale.

Consider a concrete failure: an agent operating under delegated human authority is manipulated through a poisoned document in its retrieval pipeline. It takes an action the human never intended. The audit log shows the human identity. The blast radius reflects the human's permissions, not the agent's intended task.

The pattern itself isn’t new. The difference is the attack surface. The manipulation vector is now data, and data is ambient. It’s not confined to a single input channel. It lives in documents, emails, knowledge bases, tickets, anywhere the agent retrieves context. Every item in a retrieval pipeline becomes a potential injection point.

The third is runtime behavior.

Even a well-provisioned agent can take a harmful action if the session is manipulated. That means governance has to move closer to execution. You need controls that apply during runtime, not just at provisioning and review.
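As an illustration of what a runtime-stage control means (the policy shape here is invented for the sketch): even if the session's credential could technically perform an action, the gate blocks anything outside the declared intent of the current task.

```python
def runtime_gate(session_task: str, action: str, resource: str,
                 policy: dict) -> bool:
    """Check each action at execution time against the task the
    session was opened for. `policy` maps task -> allowed
    (action, resource) pairs. Purely illustrative."""
    return (action, resource) in policy.get(session_task, set())

# What this session is allowed to do, regardless of what its
# underlying credential might permit.
policy = {
    "summarize-report": {("read", "report.docx")},
}

in_scope = runtime_gate("summarize-report", "read", "report.docx", policy)
out_of_scope = runtime_gate("summarize-report", "send", "wire-transfer", policy)
```

The provisioning-time decision answers "what could this agent ever do"; the runtime gate answers "is this action consistent with what it is doing right now."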

What a Hybrid Approach Looks Like

No identity platform has fully solved this yet, although many are moving in the right direction.

In the meantime, the practical approach is compositional. Take what works from both models and deliberately fill the gaps.

From human identity governance, carry forward ownership and accountability, periodic access reviews tied to task scope, and delegation that is explicitly bounded and revocable.

From service account governance, use dedicated machine identities, non-interactive authentication, centralized secrets management, and least privilege enforced through session scoping rather than standing access.

Then address what’s new.

Issue just-in-time, session-bound credentials that expire at task completion. Enforce down-scoped delegation so an agent never operates with more privilege than the human intended. Track delegation chains across multi-agent workflows. And most importantly, enforce behavioral policy at runtime, not just at provisioning.
