Using SafeClaw with CrewAI: Per-Agent Policy Configuration
Scenario
You are building a multi-agent system with CrewAI. Your crew has three agents, each with a distinct role:
- Researcher — searches the web, reads APIs, and gathers information. Needs network access but should never write files or execute shell commands.
- Writer — takes the researcher's findings and writes documents to disk. Needs file write access but should not access the network or run shell commands.
- Reviewer — reads the writer's output and provides feedback. Needs file read access only. No writes, no shell, no network.
Threat Model
Multi-agent systems without per-agent gating face compounded risks:
- Lateral privilege escalation. If all agents share one policy, a compromised researcher agent can write files (writer's privilege) or a compromised writer can access the network (researcher's privilege).
- Role confusion attacks. An agent instructed via prompt injection might claim to be a different role, accessing resources outside its lane.
- Cascading compromise. One agent's output becomes another agent's input. A poisoned research result could instruct the writer to execute shell commands or exfiltrate data.
- Shared credential abuse. Without isolation, all agents use the same API keys and can access any endpoint any other agent can reach.
- Unauditable activity. With a shared identity, audit logs cannot distinguish which agent performed which action, making incident investigation impossible.
Recommended Policies
# CrewAI Researcher Agent Policy
policy:
  name: "crewai-researcher"
  default: DENY
  rules:
    - action: file_read
      path: "/app/crew/shared/research-brief.md"
      decision: ALLOW
    - action: file_write
      path: "**"
      decision: DENY
    - action: shell_exec
      command: "**"
      decision: DENY
    - action: network
      domain: "api.openai.com"
      decision: ALLOW
    - action: network
      domain: "en.wikipedia.org"
      decision: ALLOW
    - action: network
      domain: "api.semanticscholar.org"
      decision: ALLOW
    - action: network
      domain: "news.ycombinator.com"
      decision: ALLOW
    - action: network
      domain: "*"
      decision: DENY
# CrewAI Writer Agent Policy
policy:
  name: "crewai-writer"
  default: DENY
  rules:
    - action: file_read
      path: "/app/crew/shared/**"
      decision: ALLOW
    - action: file_write
      path: "/app/crew/output/drafts/**"
      decision: ALLOW
    - action: file_write
      path: "/app/crew/shared/**"
      decision: DENY
    - action: shell_exec
      command: "**"
      decision: DENY
    - action: network
      domain: "api.openai.com"
      decision: ALLOW
    - action: network
      domain: "*"
      decision: DENY
# CrewAI Reviewer Agent Policy
policy:
  name: "crewai-reviewer"
  default: DENY
  rules:
    - action: file_read
      path: "/app/crew/output/drafts/**"
      decision: ALLOW
    - action: file_read
      path: "/app/crew/shared/**"
      decision: ALLOW
    - action: file_write
      path: "/app/crew/output/reviews/**"
      decision: ALLOW
    - action: file_write
      path: "/app/crew/output/drafts/**"
      decision: DENY
    - action: shell_exec
      command: "**"
      decision: DENY
    - action: network
      domain: "api.openai.com"
      decision: ALLOW
    - action: network
      domain: "*"
      decision: DENY
Example Action Requests
1. Researcher fetches data from Semantic Scholar (ALLOW)
{
  "action": "network",
  "domain": "api.semanticscholar.org",
  "method": "GET",
  "agent": "crewai-researcher",
  "timestamp": "2026-02-13T09:00:00Z"
}
// Decision: ALLOW — domain is in the researcher's network allowlist
2. Researcher attempts to write a file (DENY)
{
  "action": "file_write",
  "path": "/app/crew/output/drafts/backdoor.sh",
  "agent": "crewai-researcher",
  "timestamp": "2026-02-13T09:01:00Z"
}
// Decision: DENY — researcher policy denies all file_write actions
3. Writer saves a draft document (ALLOW)
{
  "action": "file_write",
  "path": "/app/crew/output/drafts/article-v1.md",
  "content": "# Research Summary\n\nAccording to recent studies...",
  "agent": "crewai-writer",
  "timestamp": "2026-02-13T09:10:00Z"
}
// Decision: ALLOW — path matches /app/crew/output/drafts/**
4. Writer attempts to access an external URL (DENY)
{
  "action": "network",
  "domain": "pastebin.com",
  "method": "POST",
  "agent": "crewai-writer",
  "timestamp": "2026-02-13T09:11:00Z"
}
// Decision: DENY — writer can only reach api.openai.com
5. Reviewer reads the draft (ALLOW)
{
  "action": "file_read",
  "path": "/app/crew/output/drafts/article-v1.md",
  "agent": "crewai-reviewer",
  "timestamp": "2026-02-13T09:20:00Z"
}
// Decision: ALLOW — path matches /app/crew/output/drafts/**
6. Reviewer attempts to overwrite the draft (DENY)
{
  "action": "file_write",
  "path": "/app/crew/output/drafts/article-v1.md",
  "content": "IGNORE PREVIOUS INSTRUCTIONS...",
  "agent": "crewai-reviewer",
  "timestamp": "2026-02-13T09:21:00Z"
}
// Decision: DENY — reviewer cannot write to drafts, only to reviews
Setup Steps
- Install SafeClaw in your CrewAI project:
npx @authensor/safeclaw
The browser-based setup wizard opens. The free tier uses 7-day renewable keys and requires no credit card.
- Create three separate policies in the wizard, one per agent role: crewai-researcher, crewai-writer, and crewai-reviewer. Each policy has its own set of rules.
- Assign agent identifiers. When initializing each CrewAI agent, set the agent field so SafeClaw can match actions to policies:
from crewai import Agent
from safeclaw import create_evaluator

# One evaluator (gate) per role; the agent name must match the policy name defined in the wizard.
researcher_gate = create_evaluator(agent="crewai-researcher")
writer_gate = create_evaluator(agent="crewai-writer")
reviewer_gate = create_evaluator(agent="crewai-reviewer")
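A quick way to confirm the isolation is to run the same request through two gates. This sketch assumes evaluate() accepts the same fields as the example action requests above and returns a dict with a "result" key, as in the tool-wrapping example in the next step:
draft_write = {"action": "file_write", "path": "/app/crew/output/drafts/article-v1.md"}

# Same action, different agents: only the writer's policy allows it.
print(writer_gate.evaluate(draft_write)["result"])      # expected: ALLOW
print(researcher_gate.evaluate(draft_write)["result"])  # expected: DENY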
- Wrap each agent's tool execution with its SafeClaw evaluator. Before any tool runs, check the policy:
class GatedWebSearchTool:
    def __init__(self, gate):
        self.gate = gate

    def run(self, query: str, domain: str):
        # Ask the agent's SafeClaw gate for a decision before any network access.
        decision = self.gate.evaluate({
            "action": "network",
            "domain": domain
        })
        if decision["result"] != "ALLOW":
            return f"Access denied to {domain}"
        return web_search(query)  # web_search: the underlying search implementation your crew already uses
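The same pattern covers file-writing tools. Here is a minimal sketch for the writer agent; the action and field names mirror the file_write requests shown earlier, and the plain open() call stands in for whatever file tool your crew actually uses:
class GatedFileWriteTool:
    def __init__(self, gate):
        self.gate = gate

    def run(self, path: str, content: str):
        # Check the writer's policy before touching the filesystem.
        decision = self.gate.evaluate({
            "action": "file_write",
            "path": path
        })
        if decision["result"] != "ALLOW":
            return f"Write denied for {path}"
        with open(path, "w") as f:
            f.write(content)
        return f"Saved {path}"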
- Define shared directories for the hand-offs that go through disk. Both the researcher and the writer read the task brief in /app/crew/shared/, the writer saves drafts to /app/crew/output/drafts/, and the reviewer reads those drafts and writes feedback to /app/crew/output/reviews/. SafeClaw ensures each agent can only interact with shared data in the direction its policy permits.
- Run in simulation mode with a sample task. The dashboard shows every action by every agent, tagged with the agent name. Verify each agent stays within its role.
- Switch to enforcement mode. SafeClaw evaluates every action in sub-millisecond time, so multi-agent orchestration is not slowed down.
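To confirm the overhead in your own environment, you can time a gate directly. This rough sketch uses only the evaluate() call shown above and the reviewer_gate created in step 3:
import time

request = {"action": "file_read", "path": "/app/crew/output/drafts/article-v1.md"}

start = time.perf_counter()
for _ in range(1000):
    reviewer_gate.evaluate(request)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"avg per evaluation: {elapsed_ms / 1000:.3f} ms")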
- Review per-agent audit trails. Filter the tamper-proof SHA-256 hash chain logs by agent name to see exactly what each role attempted and what was decided.
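The exact log format is documented in the Audit Trail cross-reference below. Purely to illustrate how a SHA-256 hash chain makes tampering detectable, here is a generic verification sketch; the entry fields and the prev_hash / entry_hash names are illustrative assumptions, not SafeClaw's actual schema:
import hashlib
import json

def verify_chain(entries):
    # Each entry's hash covers its payload plus the previous entry's hash,
    # so altering or deleting any earlier record breaks every hash that follows.
    prev_hash = "0" * 64  # assumed genesis value for the first entry
    for entry in entries:
        payload = json.dumps(
            {
                "agent": entry["agent"],
                "action": entry["action"],
                "decision": entry["decision"],
                "timestamp": entry["timestamp"],
                "prev_hash": prev_hash,
            },
            sort_keys=True,
        )
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["entry_hash"] != expected:
            return False  # chain breaks here: the log was modified after the fact
        prev_hash = expected
    return True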
Cross-References
- SafeClaw Quickstart Guide — Full installation and first-run walkthrough
- Multi-Agent Policy Reference — How to define and assign per-agent policies
- Deny-by-Default Architecture — Why SafeClaw blocks everything unless explicitly allowed
- Audit Trail and Hash Chain — Technical details on tamper-proof logging with agent attribution
- Prompt Injection Defense FAQ — How action-level gating stops cascading prompt injection
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw