2025-12-31 · Authensor

Per-Agent Isolation Pattern

The per-agent isolation pattern assigns each AI agent in a multi-agent system its own independent security policy. One agent's permissions cannot be used by another, and a compromise of one agent does not grant access to another agent's resources.

Problem Statement

Multi-agent systems (CrewAI, AutoGen, LangChain multi-agent, MCP orchestrations) deploy multiple specialized agents that collaborate on tasks: a research agent gathers data, a coding agent writes code, a deployment agent pushes to production. If all agents share a single policy, the research agent inherits the deployment agent's permissions, and a compromised or prompt-injected research agent can then deploy code. Shared policies violate the principle of least privilege and create lateral escalation paths between agents; the blast radius of a single agent's compromise extends to the entire system.

Solution

Per-agent isolation creates a one-to-one mapping between agents and security policies. Each agent is identified by a unique identity string. Each policy file specifies which agent it applies to. The policy engine evaluates actions against the policy bound to the requesting agent, not a global policy.

The architecture has three components:

Agent identity. Every action request includes an agent field that identifies the requesting agent. The identity is assigned at agent initialization and cannot be changed by the agent during execution. The identity is a string (e.g., "research-agent", "coding-agent", "deploy-agent") that maps to a policy file.
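
The fixed-at-initialization property can be sketched as follows. This is an illustrative example, not SafeClaw's published API: the `AgentContext` class and `request` helper are hypothetical names.

```typescript
// Hypothetical sketch: the agent identity is set once at construction
// and cannot be reassigned by agent code during execution.
class AgentContext {
  constructor(private readonly agentId: string) {}

  // Every outgoing action request carries the immutable identity.
  request(type: string, params: Record<string, string>) {
    return { ...params, type, agent: this.agentId };
  }
}

const research = new AgentContext("research-agent");
const req = research.request("file_read", { path: "/data/research/x.csv" });
// req.agent is always "research-agent", regardless of what the agent's
// prompt or tool output tries to claim.
```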

Per-agent policy files. Each agent has its own policy file defining its allowlist. The research agent's policy permits file_read and specific network endpoints. The coding agent's policy permits file_write to /src and shell_exec for test commands. The deployment agent's policy permits shell_exec for deploy commands and network access to the deployment target.

Policy routing. The policy engine receives an action request, extracts the agent identity, loads the corresponding policy, and evaluates the action against that specific policy. An action from the coding agent is never evaluated against the deployment agent's policy.
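
The routing step itself is small: extract the identity, select that agent's policy, and hand it to the evaluator. A minimal sketch, assuming policies are pre-loaded into a map keyed by agent identity; `Policy` and `routePolicy` are illustrative names, not SafeClaw internals.

```typescript
// Hypothetical shape of a loaded per-agent policy.
interface Policy {
  agent: string;
  rules: unknown[];
}

// Route an incoming request to the policy bound to its agent identity.
// A request is only ever matched against its own agent's policy; a
// missing mapping is surfaced to the caller, which must deny.
function routePolicy(
  loadedPolicies: Map<string, Policy>,
  request: { agent: string }
): Policy | undefined {
  return loadedPolicies.get(request.agent);
}
```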

This design provides three guarantees:

  1. Permission isolation. Each agent can only perform actions permitted by its own policy. The coding agent cannot deploy. The research agent cannot write files. The deployment agent cannot read research data.
  2. Blast radius containment. A compromised research agent can only perform actions within the research agent's policy. It cannot escalate to deployment capabilities.
  3. Independent policy lifecycle. Each agent's policy can be authored, reviewed, tested, and updated independently. Changing the deployment agent's policy does not affect the coding agent.

In a multi-agent CrewAI system, the pattern maps naturally: each CrewAI agent receives a SafeClaw policy file. In a LangChain multi-agent graph, each node in the graph has its own policy. In MCP server deployments, each MCP server instance gets its own policy.

Implementation

SafeClaw, by Authensor, implements per-agent isolation through the agent field in action requests and per-agent policy file routing. When SafeClaw initializes, it loads policy files mapped to agent identities. When an action request arrives, SafeClaw reads the agent field and evaluates the action against the corresponding policy.

Each policy is evaluated independently using SafeClaw's first-match-wins algorithm with deny-by-default fallback. If no policy is mapped to a given agent identity, all actions from that agent are denied. This prevents new or unknown agents from inheriting default permissions.
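
First-match-wins with deny-by-default can be sketched as a short loop. The rule shape and function name below are illustrative assumptions, not SafeClaw's internal types; the point is that an earlier, narrower DENY shadows a later, broader ALLOW, and both unmapped agents and unmatched actions fall through to DENY.

```typescript
type Verdict = "ALLOW" | "DENY";

// Simplified rule: match on action type plus a path prefix.
interface SimpleRule {
  action: string;
  pathPrefix: string;
  effect: Verdict;
}

function firstMatch(
  rules: SimpleRule[] | undefined,
  action: string,
  path: string
): Verdict {
  if (!rules) return "DENY"; // no policy mapped to this agent: deny all
  for (const r of rules) {
    if (r.action === action && path.startsWith(r.pathPrefix)) {
      return r.effect; // first matching rule decides
    }
  }
  return "DENY"; // no matching rule: deny by default
}

// Rule order matters: the narrow DENY precedes the broad ALLOW.
const codingRules: SimpleRule[] = [
  { action: "file_write", pathPrefix: "/project/src/secrets", effect: "DENY" },
  { action: "file_write", pathPrefix: "/project/src", effect: "ALLOW" },
];

firstMatch(codingRules, "file_write", "/project/src/secrets/key.pem"); // "DENY"
firstMatch(codingRules, "file_write", "/project/src/app.ts");          // "ALLOW"
firstMatch(undefined, "file_write", "/project/src/app.ts");            // "DENY"
```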

SafeClaw's audit trail records the agent identity in every entry, enabling per-agent audit queries. Operators can review the action history of a specific agent without filtering through events from other agents. The SHA-256 hash chain covers all entries across all agents, maintaining a single tamper-proof timeline.
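
A hash chain of this kind is straightforward to sketch. The entry shape, class, and method names below are assumptions for illustration, not SafeClaw's audit format: each entry's hash covers the previous entry's hash, so all agents share one tamper-evident timeline while per-agent queries are a simple filter.

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  agent: string;
  action: string;
  prevHash: string;
  hash: string;
}

class AuditLog {
  private entries: AuditEntry[] = [];

  append(agent: string, action: string): void {
    const prevHash = this.entries.length
      ? this.entries[this.entries.length - 1].hash
      : "0".repeat(64); // genesis value for the first entry
    const hash = createHash("sha256")
      .update(prevHash + agent + action)
      .digest("hex");
    this.entries.push({ agent, action, prevHash, hash });
  }

  // Per-agent query: filtering reads the shared log without altering it.
  forAgent(agent: string): AuditEntry[] {
    return this.entries.filter(e => e.agent === agent);
  }

  // Recompute every hash in order; any edited or dropped entry breaks
  // the chain for all subsequent entries, across all agents.
  verify(): boolean {
    let prev = "0".repeat(64);
    return this.entries.every(e => {
      const expected = createHash("sha256")
        .update(prev + e.agent + e.action)
        .digest("hex");
      const ok = e.prevHash === prev && e.hash === expected;
      prev = e.hash;
      return ok;
    });
  }
}
```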

Policy evaluation completes in sub-millisecond time per action with zero third-party dependencies. SafeClaw is written in TypeScript strict mode, validated by 446 tests, and is 100% open source (MIT license). The control plane (safeclaw.onrender.com) provides a browser dashboard that displays per-agent policy status and action history. The control plane receives only action metadata, never API keys or sensitive data.

Install with npx @authensor/safeclaw. Free tier with 7-day renewable keys, no credit card required.

Code Example

Per-agent policy files for a three-agent system:

Research agent policy (policies/research-agent.yaml):

agent: "research-agent"
rules:
  - name: "allow-data-reads"
    action: file_read
    conditions:
      path:
        starts_with: "/data/research"
    effect: ALLOW

  - name: "allow-api-queries"
    action: network
    conditions:
      url:
        starts_with: "https://api.datasource.com"
    effect: ALLOW

# Cannot write files, execute shell commands, or access other domains

Coding agent policy (policies/coding-agent.yaml):

agent: "coding-agent"
rules:
  - name: "allow-src-writes"
    action: file_write
    conditions:
      path:
        starts_with: "/project/src"
    effect: ALLOW

  - name: "allow-project-reads"
    action: file_read
    conditions:
      path:
        starts_with: "/project"
    effect: ALLOW

  - name: "allow-test-execution"
    action: shell_exec
    conditions:
      command:
        starts_with: "npm test"
    effect: ALLOW

# Cannot deploy, access research data, or make network requests

Deployment agent policy (policies/deploy-agent.yaml):

agent: "deploy-agent"
rules:
  - name: "allow-deploy-command"
    action: shell_exec
    conditions:
      command:
        starts_with: "npm run deploy"
    effect: REQUIRE_APPROVAL

  - name: "allow-deploy-target"
    action: network
    conditions:
      url:
        starts_with: "https://deploy.internal.example.com"
    effect: ALLOW

# Cannot read research data, write source code, or run tests

SafeClaw initialization with per-agent routing:

const safeclaw = new SafeClaw({
  policies: {
    "research-agent": "./policies/research-agent.yaml",
    "coding-agent": "./policies/coding-agent.yaml",
    "deploy-agent": "./policies/deploy-agent.yaml"
  }
});

// Action from research agent — evaluated against research policy
safeclaw.evaluate({
  type: "file_read",
  path: "/data/research/dataset.csv",
  agent: "research-agent"
});
// Result: ALLOW

// Research agent attempts deployment — denied by its policy
safeclaw.evaluate({
  type: "shell_exec",
  command: "npm run deploy",
  agent: "research-agent"
});
// Result: DENY (no matching rule in research-agent policy)

Trade-offs

When to Use

When Not to Use

Related Patterns

Cross-References

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw