Setting Up Per-Agent SafeClaw Policies in CrewAI
SafeClaw, built by Authensor, provides action-level gating for AI agents. CrewAI orchestrates multiple agents with distinct roles. This guide covers assigning a separate SafeClaw policy to each CrewAI agent, so that a researcher agent, a coder agent, and a reviewer agent each operate under their own permission boundary.
Prerequisites
- Python 3.10 or later
- crewai package installed (pip install crewai)
- Node.js 18+ for the SafeClaw runtime
- A SafeClaw account at safeclaw.onrender.com (free tier, 7-day renewable keys, no credit card)
- SafeClaw API key from the browser dashboard
Step-by-Step Instructions
Step 1: Install SafeClaw
```bash
npx @authensor/safeclaw
```
Complete the setup wizard in the browser dashboard. SafeClaw has zero third-party dependencies, 446 tests, and runs in TypeScript strict mode.
Step 2: Define Per-Agent Policies
Create a policy file for each agent role. Each file's agent field must exactly match the agent name used in CrewAI (the same string you pass as agent_id when creating the gate in Step 3).
policies/researcher.yaml:
```yaml
version: "1.0"
agent: researcher
defaultAction: deny
rules:
  - id: allow-read-all
    action: file_read
    target: "./**"
    decision: allow
  - id: allow-web-search
    action: network
    target: "https://api.search.com/**"
    decision: allow
  - id: deny-write
    action: file_write
    target: "*"
    decision: deny
  - id: deny-shell
    action: shell_exec
    target: "*"
    decision: deny
```
policies/coder.yaml:
```yaml
version: "1.0"
agent: coder
defaultAction: deny
rules:
  - id: allow-read-src
    action: file_read
    target: "./src/**"
    decision: allow
  - id: allow-write-src
    action: file_write
    target: "./src/**"
    decision: allow
  - id: allow-write-tests
    action: file_write
    target: "./tests/**"
    decision: allow
  - id: allow-npm
    action: shell_exec
    target: "npm *"
    decision: allow
  - id: gate-other-shell
    action: shell_exec
    target: "*"
    decision: require_approval
  - id: deny-network
    action: network
    target: "*"
    decision: deny
```
policies/reviewer.yaml:
```yaml
version: "1.0"
agent: reviewer
defaultAction: deny
rules:
  - id: allow-read-all
    action: file_read
    target: "./**"
    decision: allow
  - id: allow-write-reviews
    action: file_write
    target: "./reviews/**"
    decision: allow
  - id: deny-shell
    action: shell_exec
    target: "*"
    decision: deny
  - id: deny-network
    action: network
    target: "*"
    decision: deny
```
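Before wiring these files into code, it can help to confirm they parse and that each agent field is what you expect. A minimal sketch, assuming PyYAML is installed (pip install pyyaml) and the file layout shown above:

```python
# Sanity-check sketch: load each policy and print a per-agent summary.
import yaml

for name in ("researcher", "coder", "reviewer"):
    with open(f"policies/{name}.yaml") as f:
        policy = yaml.safe_load(f)
    rules = ", ".join(r["id"] for r in policy["rules"])
    print(f"{policy['agent']}: default={policy['defaultAction']} rules=[{rules}]")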
Step 3: Create a SafeClaw Gate Factory
```python
from safeclaw import SafeClawClient

def create_gate(agent_name: str, policy_path: str):
    """Build a gate function bound to one agent's SafeClaw policy."""
    client = SafeClawClient(
        api_key="your-safeclaw-api-key",
        agent_id=agent_name,
        policy_file=policy_path,
        mode="enforce",
    )

    def gate(action_type: str, target: str) -> dict:
        return client.evaluate(
            action_type=action_type,
            target=target,
            metadata={"agent": agent_name},
        )

    return gate

researcher_gate = create_gate("researcher", "policies/researcher.yaml")
coder_gate = create_gate("coder", "policies/coder.yaml")
reviewer_gate = create_gate("reviewer", "policies/reviewer.yaml")
```
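A quick smoke test of a gate, assuming evaluate() returns a dict shaped like the action requests shown later in this guide (a decision key plus an optional reason):

```python
# The researcher policy should deny writes (deny-write rule).
result = researcher_gate("file_write", "./src/index.ts")
print(result["decision"])        # expected: "DENY"
print(result.get("reason", ""))  # optional human-readable explanation
```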
Step 4: Create Gated CrewAI Tools
```python
from crewai import Tool

def make_gated_read_tool(gate):
    def read_file(path: str) -> str:
        result = gate("file_read", path)
        if result["decision"] != "ALLOW":
            return f"BLOCKED: {result['decision']} — {result.get('reason', '')}"
        with open(path, "r") as f:
            return f.read()
    return Tool(name="read_file", func=read_file, description="Read a file")

def make_gated_write_tool(gate):
    def write_file(path_and_content: str) -> str:
        # Expects input in the form "path|content".
        if "|" not in path_and_content:
            return "ERROR: expected input in the form 'path|content'"
        path, content = path_and_content.split("|", 1)
        result = gate("file_write", path.strip())
        if result["decision"] != "ALLOW":
            return f"BLOCKED: {result['decision']} — {result.get('reason', '')}"
        with open(path.strip(), "w") as f:
            f.write(content)
        return f"Written to {path.strip()}"
    return Tool(name="write_file", func=write_file, description="Write a file. Input: 'path|content'")
```
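The coder policy also gates shell_exec, so the coder agent needs a shell tool built the same way. A sketch following the same pattern; the subprocess handling here is an illustration, not part of SafeClaw:

```python
import subprocess

def make_gated_shell_tool(gate):
    def run_command(command: str) -> str:
        result = gate("shell_exec", command)
        if result["decision"] != "ALLOW":
            # REQUIRE_APPROVAL and DENY both surface here, so the agent sees why.
            return f"BLOCKED: {result['decision']} — {result.get('reason', '')}"
        completed = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=120
        )
        return completed.stdout or completed.stderr
    return Tool(name="run_command", func=run_command, description="Run a shell command")
```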
Step 5: Assign Tools to Agents
```python
from crewai import Agent, Task, Crew

researcher = Agent(
    role="researcher",
    goal="Gather information from files and the web",
    backstory="You research source material for the team.",
    tools=[make_gated_read_tool(researcher_gate)],
    llm=llm,  # your configured LLM
)

coder = Agent(
    role="coder",
    goal="Write and test code",
    backstory="You implement and test code changes.",
    tools=[
        make_gated_read_tool(coder_gate),
        make_gated_write_tool(coder_gate),
    ],
    llm=llm,
)

reviewer = Agent(
    role="reviewer",
    goal="Review code and write feedback",
    backstory="You review code and document feedback.",
    tools=[
        make_gated_read_tool(reviewer_gate),
        make_gated_write_tool(reviewer_gate),
    ],
    llm=llm,
)

crew = Crew(agents=[researcher, coder, reviewer], tasks=tasks)  # tasks: see the sketch below
crew.kickoff()
```
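The llm and tasks variables above are placeholders for your own setup. A minimal tasks sketch; the descriptions are illustrative assumptions, not part of SafeClaw or CrewAI:

```python
research_task = Task(
    description="Summarize the notes in ./data into key findings.",
    expected_output="A bullet-point summary of findings.",
    agent=researcher,
)
code_task = Task(
    description="Implement the utility described in the findings in ./src.",
    expected_output="Working code in ./src with tests in ./tests.",
    agent=coder,
)
review_task = Task(
    description="Review the new code and write feedback to ./reviews.",
    expected_output="A review file in ./reviews.",
    agent=reviewer,
)
tasks = [research_task, code_task, review_task]
```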
Step 6: Validate with Simulation Mode
Set mode="simulate" in each SafeClawClient. Run the crew. All evaluations are logged to the tamper-proof audit trail (SHA-256 hash chain). Review per-agent results in the dashboard at safeclaw.onrender.com. Switch to mode="enforce" after confirming expected behavior.
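One way to flip between the two modes without editing code is an environment variable. A sketch; SAFECLAW_MODE is a convention of this example, not a SafeClaw built-in:

```python
import os
from safeclaw import SafeClawClient

def create_gate(agent_name: str, policy_path: str):
    # Identical to the Step 3 factory, except mode comes from the environment,
    # so SAFECLAW_MODE=enforce flips all agents at once after validation.
    client = SafeClawClient(
        api_key="your-safeclaw-api-key",
        agent_id=agent_name,
        policy_file=policy_path,
        mode=os.environ.get("SAFECLAW_MODE", "simulate"),  # simulate until validated
    )

    def gate(action_type: str, target: str) -> dict:
        return client.evaluate(
            action_type=action_type, target=target, metadata={"agent": agent_name}
        )

    return gate
```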
Example Action Requests
1. ALLOW — Researcher reads a data file:
```json
{
  "actionType": "file_read",
  "target": "./data/research-notes.md",
  "agentId": "researcher",
  "decision": "ALLOW",
  "rule": "allow-read-all",
  "evaluationTime": "0.3ms"
}
```
2. DENY — Researcher attempts to write a file:
```json
{
  "actionType": "file_write",
  "target": "./src/index.ts",
  "agentId": "researcher",
  "decision": "DENY",
  "rule": "deny-write",
  "evaluationTime": "0.2ms"
}
```
3. ALLOW — Coder writes to src directory:
```json
{
  "actionType": "file_write",
  "target": "./src/utils.ts",
  "agentId": "coder",
  "decision": "ALLOW",
  "rule": "allow-write-src",
  "evaluationTime": "0.3ms"
}
```
4. REQUIRE_APPROVAL — Coder runs non-npm shell command:
```json
{
  "actionType": "shell_exec",
  "target": "git push origin main",
  "agentId": "coder",
  "decision": "REQUIRE_APPROVAL",
  "rule": "gate-other-shell",
  "evaluationTime": "0.4ms"
}
```
5. DENY — Reviewer attempts shell execution:
```json
{
  "actionType": "shell_exec",
  "target": "rm -rf ./src",
  "agentId": "reviewer",
  "decision": "DENY",
  "rule": "deny-shell",
  "evaluationTime": "0.2ms"
}
```
Troubleshooting
Issue 1: Wrong policy applied to agent
Symptom: Coder agent gets researcher permissions.
Fix: Verify the agent_id parameter in create_gate() matches the agent field in the YAML policy file exactly. Each agent must have a unique agent_id and a corresponding policy file.
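A startup check catches the mismatch before any action is evaluated. A sketch, assuming PyYAML:

```python
import yaml

def assert_policy_matches(agent_id: str, policy_path: str) -> None:
    with open(policy_path) as f:
        policy = yaml.safe_load(f)
    if policy.get("agent") != agent_id:
        raise ValueError(
            f"Policy {policy_path} is for agent {policy.get('agent')!r}, "
            f"but was assigned to {agent_id!r}"
        )

assert_policy_matches("coder", "policies/coder.yaml")
```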
Issue 2: CrewAI tool returns string error instead of raising exception
Symptom: Agent receives "BLOCKED" string and tries to parse it as data.
Fix: This is expected behavior. Returning denial messages as strings allows the LLM to understand the restriction and adjust its approach. If you prefer exceptions, raise a ToolException instead of returning a string.
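If your tool stack is LangChain-based, ToolException from langchain_core.tools is one option. A sketch of the exception-raising variant of the read tool:

```python
from langchain_core.tools import ToolException  # assumes a LangChain-based stack

def read_file(path: str) -> str:
    result = gate("file_read", path)
    if result["decision"] != "ALLOW":
        raise ToolException(f"{result['decision']}: {result.get('reason', '')}")
    with open(path, "r") as f:
        return f.read()
```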
Issue 3: Policy files not loading from relative paths
Symptom: FileNotFoundError on policy YAML files.
Fix: Use absolute paths or ensure the Python process working directory contains the policies/ folder. Run os.getcwd() to verify the current directory.
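Anchoring the path to the module file avoids the working-directory dependence entirely:

```python
from pathlib import Path

# Resolve policies/ relative to this source file, not the process cwd.
POLICY_DIR = Path(__file__).resolve().parent / "policies"

researcher_gate = create_gate("researcher", str(POLICY_DIR / "researcher.yaml"))
```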
Cross-References
- SafeClaw Policy Configuration Reference
- Glossary: Per-Agent Policy Isolation
- FAQ: Can Different Agents Have Different Policies?
- SafeClaw vs CrewAI Built-in Guardrails
- Use Case: Multi-Agent Workflow with SafeClaw
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw