The Constraint Insight

Every significant software architecture is, at its core, a system of constraints. Containers constrain processes. Sandboxes constrain code. Permission models constrain users. The value of these systems is not in what they allow — it is in what they prevent.

When we designed the Blocklet architecture, we started from this insight. The problem we were solving was this: how do you let untrusted actors build and deploy software components on shared infrastructure without letting them break things?

In 2018, those untrusted actors were human engineers.

In 2026, those untrusted actors are AI agents.

The actors changed. The architecture did not have to start over — it had to evolve.

What Blocklet Already Got Right

A Blocklet is a self-contained software component that runs inside Blocklet Server. From the beginning, the architecture enforced:

  • Isolation — each Blocklet runs in its own namespace with defined resource boundaries
  • Declared capabilities — a Blocklet explicitly states what it needs (network access, storage, identity services) and the platform grants or denies those capabilities
  • Lifecycle management — install, start, stop, update, and rollback are platform-managed operations, not ad-hoc scripts
  • Identity integration — every Blocklet has a decentralized identity and every user interaction is authenticated through DID

These were not accidental design choices. They were deliberate constraints designed to make it safe for organizations to run third-party software components on their own infrastructure.

The key word is safe. Not convenient. Not fast. Safe. Because when you are running someone else's code on your servers, safety is the first requirement.
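
To make the declared-capabilities constraint concrete, here is a minimal sketch of what such a declaration could look like, written as a TypeScript type. The field names are illustrative assumptions, not the actual Blocklet manifest schema.

```ts
// Illustrative sketch only: field names are hypothetical and do not
// reflect the real Blocklet manifest schema.
interface CapabilityDeclaration {
  name: string;                    // component identifier
  capabilities: {
    network: boolean;              // may the component open outbound connections?
    storage: { maxBytes: number }; // bounded persistent storage
    identity: boolean;             // access to platform DID / auth services
  };
  resources: {
    cpuMillicores: number;         // hard ceiling enforced by the platform
    memoryMb: number;
  };
}

// The platform, not the component, decides whether each request is granted.
const example: CapabilityDeclaration = {
  name: "example-blocklet",
  capabilities: {
    network: true,
    storage: { maxBytes: 512 * 1024 * 1024 },
    identity: true,
  },
  resources: { cpuMillicores: 500, memoryMb: 256 },
};
```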

The Scaffold Pattern

Blocklet Server was the first implementation of what we now call the Scaffold pattern: a platform that provides structure, constraints, and services to components that run inside it.

A scaffold does not tell the component what to do. It tells the component what it cannot do, and provides the services it needs to do everything else. The component is free within its boundaries, but the boundaries are non-negotiable.
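
One way to picture the pattern is as two interfaces: the services the scaffold hands down, and the narrow surface a hosted component may expose back. This is a hypothetical sketch of the idea, not the Blocklet Server API.

```ts
// Hypothetical sketch of the Scaffold pattern; not the Blocklet Server API.

// Services the scaffold provides to every component it hosts.
interface ScaffoldServices {
  storage: {
    read(key: string): Promise<Uint8Array | null>;
    write(key: string, data: Uint8Array): Promise<void>;
  };
  identity: { whoAmI(): Promise<string> }; // resolves to a DID
  log(event: string, detail?: unknown): void;
}

// The only surface a hosted component may expose back to the scaffold.
interface HostedComponent {
  start(services: ScaffoldServices): Promise<void>;
  stop(): Promise<void>;
}

// The scaffold decides when start and stop are called; the component never
// manages its own lifecycle or reaches outside the services it is handed.
```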

This pattern maps directly to how we think about AI agents:

| Human Era (Blocklet) | AI Era (Chamber) |
| --- | --- |
| Engineer writes code | AI agent generates and executes code |
| Code runs in isolated Blocklet | Agent runs in isolated Chamber |
| Platform enforces resource limits | Platform enforces resource limits |
| DID authenticates the developer | DID authenticates the agent |
| Platform manages lifecycle | Platform manages lifecycle |

The structure is identical. The actor changed.

Enter Chamber

A Chamber is a Blocklet upgraded for AI. It inherits every constraint and capability from the Blocklet architecture, and adds what AI agents specifically need:

Model access control — a Chamber declares which AI models it needs access to, and the platform mediates that access. The agent cannot reach arbitrary external services. It works through the platform.

Prompt and tool boundaries — the Chamber defines what tools and prompts the agent can use. This is the AI equivalent of a Blocklet's capability declaration. An agent that is authorized to read files but not write them cannot escalate its own permissions.
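
As a rough illustration of the two declarations above, here is what a Chamber's model and tool boundaries might look like in TypeScript. The shape and field names are assumptions made for this sketch, not an actual Chamber manifest.

```ts
// Hypothetical Chamber declaration; field names are illustrative,
// not an actual Chamber manifest schema.
interface ChamberDeclaration {
  // Models the agent may call; anything not listed is unreachable.
  models: { provider: string; model: string; maxTokensPerCall: number }[];
  // Tools the agent may invoke, with the most restrictive mode that still works.
  tools: { name: string; mode: "read" | "write" | "execute" }[];
}

const chamber: ChamberDeclaration = {
  models: [
    { provider: "example-provider", model: "example-model", maxTokensPerCall: 4096 },
  ],
  tools: [
    { name: "fs", mode: "read" },      // may read files, never write them
    { name: "http", mode: "execute" }, // outbound calls, mediated by the platform
  ],
};
```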

Observation and audit — every action an AI agent takes inside a Chamber is logged with full identity context. Not just what happened, but who authorized it, which model generated the decision, and what context was provided.

Resource metering — AI workloads have different resource profiles than traditional software. Token consumption, model API calls, and computation time are tracked and constrained at the Chamber level.
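
To illustrate the audit and metering data described above, here is one hypothetical shape for both records; none of these field names come from the actual platform.

```ts
// Hypothetical shapes for Chamber audit and metering records;
// not the actual Blocklet Server data model.
interface AuditRecord {
  action: string;          // what happened, e.g. "tool:fs.read"
  actorDid: string;        // which agent performed it
  authorizedByDid: string; // who granted the permission
  model: string;           // which model produced the decision
  contextDigest: string;   // hash of the context supplied to the model
  timestamp: string;       // ISO 8601
}

interface MeteringWindow {
  tokensConsumed: number;
  modelApiCalls: number;
  computeMs: number;
  limits: { maxTokens: number; maxCalls: number; maxComputeMs: number };
}

// In this sketch, the Chamber rejects further model calls once any value
// in `limits` is exceeded for the current window.
```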

The AIGNE framework builds on top of Chambers. When you deploy an AIGNE agent, it runs inside a Chamber on Blocklet Server. The framework handles agent logic; the Chamber handles constraint enforcement.

AINE Is Blocklet's Completion

There is a narrative that AI-Native Engineering (AINE) represents a departure from ArcBlock's Blocklet heritage. This is incorrect.

AINE is not rejecting Blocklet. AINE is Blocklet's completion.

The Blocklet architecture was always about one thing: making it safe to run components from actors you cannot fully trust. In 2018, we could not have predicted that the untrusted actor would be an AI model. But the architectural principles we established — isolation, declared capabilities, identity-based access, platform-managed lifecycle — are exactly what AI agents need.

We did not design Blocklet for AI. We designed Blocklet for the general problem of constraining unreliable actors. AI agents are simply the most important instance of that problem today.

The Path Forward

The evolution from Blocklet to Chamber is not a migration. Existing Blocklets continue to work exactly as they do today. Chamber is an extension — a new mode that activates when the component inside the scaffold is an AI agent rather than a traditional application.

This backward compatibility is intentional. Organizations that have deployed Blocklets on Blocklet Server do not need to rebuild anything. They gain the ability to deploy AI agents alongside their existing applications, using the same infrastructure, the same identity system, and the same operational model.

The Agentic File System provides the shared abstraction layer that connects human-era Blocklets and AI-era Chambers. Both read and write through AFS. Both are identified through DID. Both are managed by Blocklet Server.
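
To show the shared-abstraction idea, here is a hypothetical interface in the spirit of AFS; the method names are assumptions for illustration, not the real AFS API.

```ts
// Hypothetical sketch of the shared-abstraction idea; not the real AFS API.
// Both a traditional Blocklet and an AI-era Chamber would go through the
// same interface, and every call carries the caller's DID.
interface AgenticFileSystem {
  read(callerDid: string, path: string): Promise<Uint8Array | null>;
  write(callerDid: string, path: string, data: Uint8Array): Promise<void>;
}

async function saveReport(afs: AgenticFileSystem, callerDid: string) {
  // Same call whether the caller is a human-era Blocklet or an AI agent.
  await afs.write(callerDid, "/reports/latest.txt", new TextEncoder().encode("ok"));
}
```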

The platform evolved. The principles held.

Build on Blocklet Server

Deploy traditional Blocklets and AI-native Chambers on the same infrastructure.


