# The Agent Identity Problem
We're entering an era where AI agents act on behalf of humans — booking flights, negotiating contracts, managing infrastructure, even writing code. These agents interact with other agents, APIs, and human systems.
Every one of these interactions raises a fundamental question: who is this agent, and why should I trust it?
Today, most agents are authenticated with API keys — static secrets shared between the agent's operator and the service it accesses. This works for simple cases, but breaks down as agents become more autonomous:
- Who is responsible when an agent makes a bad decision?
- How does one agent verify another agent's claims?
- How do you revoke an agent's access without revoking its operator's access?
- How do you audit what an agent did, and on whose behalf?
## Why Not Just API Keys?
API keys solve authentication — proving you have a valid credential. They don't solve identity — proving who you are and what you're authorized to do.
| | API Keys | DID-based Identity |
|---|---|---|
| Credentials | Static, long-lived secrets | Cryptographic key pairs, rotatable |
| Identity | No intrinsic identity information | Rich, verifiable identity claims |
| Access | Binary: valid or invalid | Granular capabilities and permissions |
| Authority | Centrally issued and revoked | Self-sovereign, no central authority |
| Delegation | No delegation model | Native delegation and revocation |
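The contrast in the table shows up directly in code: with an API key, the secret itself travels to the service, so anyone holding the string is indistinguishable from the legitimate caller. With a DID, the agent proves control of a private key by signing a fresh challenge, and the key never leaves the agent. A minimal sketch using Node's built-in ed25519 support (the key names and registry are illustrative assumptions, not a real protocol):

```typescript
import { generateKeyPairSync, sign, verify, randomBytes } from "node:crypto";

// API-key model: possession of the shared secret IS the proof.
const apiKeys = new Set(["sk-live-abc123"]);
const apiKeyOk = apiKeys.has("sk-live-abc123"); // true for anyone who has the string

// DID model: the service stores only a public key; the agent proves
// control of the matching private key by signing a one-time challenge.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const challenge = randomBytes(32);                   // issued fresh by the service
const signature = sign(null, challenge, privateKey); // produced by the agent
const didOk = verify(null, challenge, publicKey, signature);

console.log(apiKeyOk, didOk); // true true
```

Because the private key never leaves the agent, a captured transcript of this exchange cannot be replayed against a new challenge, unlike a leaked API key.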
## DID as the Answer
Decentralized Identifiers (DIDs) were designed by the W3C as a universal identity layer. While originally conceived for humans, the architecture is a perfect fit for AI agents:
### Self-Sovereign
An agent creates its own DID — no registration authority needed. The agent controls its own keys and can prove its identity to anyone.
### Verifiable Credentials
An agent's capabilities can be expressed as verifiable credentials: "This agent is authorized by Company X to make purchases up to $1,000." Anyone can verify this claim without contacting Company X.
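A credential like that is, at its core, a claim signed by the issuer. The field names below are simplified stand-ins for the W3C Verifiable Credentials data model, and the DIDs are hypothetical; what matters is that verification needs only the issuer's public key, not a round trip to Company X:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

const issuer = generateKeyPairSync("ed25519"); // e.g. Company X's key pair

// Simplified claim; hypothetical DIDs and fields, not the full W3C VC model.
const claim = {
  issuer: "did:abt:company-x",
  subject: "did:abt:agent-x",
  capability: "purchase",
  limitUSD: 1000,
};
const payload = Buffer.from(JSON.stringify(claim));
const proof = sign(null, payload, issuer.privateKey);

// Any verifier holding Company X's public key can check the claim offline,
// without ever contacting Company X.
const valid = verify(null, payload, issuer.publicKey, proof);
console.log(valid); // true
```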
### Delegation Chain
Human → Agent → Sub-agent delegation is native to the DID model. Every action can be traced back through the chain of authority.
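One way to picture the chain: each delegation is a statement signed by the delegator, and tracing authority means verifying every signature and checking that the links connect end to end. A sketch with hypothetical DIDs and an assumed message format:

```typescript
import { generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

// Hypothetical parties; keys generated locally for the sketch.
const alice = generateKeyPairSync("ed25519");
const agentX = generateKeyPairSync("ed25519");

const pubkeys: Record<string, KeyObject> = {
  "did:abt:alice": alice.publicKey,
  "did:abt:agent-x": agentX.publicKey,
};

interface Link { from: string; to: string; scope: string; sig: Buffer }

// Each link is the delegator's signature over "from->to:scope" (assumed format).
function delegate(from: string, to: string, scope: string, privateKey: KeyObject): Link {
  return { from, to, scope, sig: sign(null, Buffer.from(`${from}->${to}:${scope}`), privateKey) };
}

const chain: Link[] = [
  delegate("did:abt:alice", "did:abt:agent-x", "book-flights", alice.privateKey),
  delegate("did:abt:agent-x", "did:abt:sub-agent", "search-fares", agentX.privateKey),
];

// Verify every signature and that consecutive links actually connect.
function rootAuthority(links: Link[]): string | null {
  for (let i = 0; i < links.length; i++) {
    const l = links[i];
    const key = pubkeys[l.from];
    if (!key || !verify(null, Buffer.from(`${l.from}->${l.to}:${l.scope}`), key, l.sig)) return null;
    if (i > 0 && links[i - 1].to !== l.from) return null; // chain is broken
  }
  return links[0].from; // the human at the root
}

console.log(rootAuthority(chain)); // did:abt:alice
```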
## Practical Architecture
Here's what DID-based agent identity looks like in practice:
```
Human (DID: did:abt:alice)
  │
  ├── Issues credential: "Agent X may book flights on my behalf"
  │
  ▼
Agent X (DID: did:abt:agent-x)
  │
  ├── Presents credential to Flight API
  ├── Flight API verifies: credential → Alice's DID → trusted issuer
  │
  ▼
Flight booked. Audit trail: Agent X → authorized by Alice → flight #123
```
The key insight: every interaction is auditable, every capability is scoped, and every delegation is revocable — without any central authority managing keys or permissions.
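Revocability in this picture can be as simple as the verifier consulting a revocation list keyed by credential ID, so Alice can cut off Agent X without rotating any of her own keys. The in-memory set and credential IDs below are stand-ins chosen for the sketch; real DID methods define their own revocation registries:

```typescript
// The Set stands in for a revocation registry; credential IDs are hypothetical.
const revoked = new Set<string>();

const credentialActive = (id: string) => !revoked.has(id);

// Alice revokes only the grant she issued to Agent X...
revoked.add("cred:alice->agent-x:book-flights");

console.log(credentialActive("cred:alice->agent-x:book-flights")); // false
// ...while her own identity and her other grants are untouched.
console.log(credentialActive("cred:alice->travel-app:read-calendar")); // true
```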
## What We're Building
At ArcBlock, we're integrating DID into the AIGNE framework at the foundational level:
- Every AIGNE agent gets a DID at creation time. No opt-in required.
- Agent-to-agent communication uses DID-based mutual authentication.
- Verifiable credentials define what each agent can do — and who authorized it.
- Audit logs link every action to a DID, creating a complete chain of accountability.
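As a shape (not the actual AIGNE log schema; the field names here are assumptions), an audit entry that links an action back through its delegation chain might look like:

```typescript
// Hypothetical audit-record shape, for illustration only.
interface AuditRecord {
  action: string;
  actor: string;          // DID of the agent that acted
  authorizedBy: string[]; // delegation chain, root (the human) first
  timestamp: string;
}

const record: AuditRecord = {
  action: "flight.book",
  actor: "did:abt:agent-x",
  authorizedBy: ["did:abt:alice"],
  timestamp: new Date().toISOString(),
};

// "On whose behalf?" is answered by the root of the chain.
console.log(record.authorizedBy[0]); // did:abt:alice
```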
This isn't theoretical — it's shipping in AIGNE 1.0. If you're building AI agents that interact with the real world, identity isn't optional. It's the foundation.
## Recommended Reading
- Introducing AIGNE: Building AI-Native Software — The open source framework with DID identity built into every agent.
- AFS: Rethinking System Abstraction for the AI Era — How the Agentic File System integrates identity into every file operation.
- From Blocklet to Chamber: How Our Architecture Evolved for AI — The evolution of constraint architecture from human engineers to AI agents.