How to Secure CrewAI Multi-Agent Systems
SafeClaw by Authensor enforces deny-by-default policies on every tool call across all agents in a CrewAI crew, ensuring that no agent — regardless of its role or task assignment — can execute unauthorized actions. CrewAI orchestrates multiple agents with different tools and responsibilities; SafeClaw gates each tool invocation before execution, applying role-specific policies defined in YAML.
How CrewAI Tool Execution Works
CrewAI defines agents with roles, goals, and tools. When a crew executes, each agent works on its assigned tasks, calling tools as needed. The execution flow is: agent receives task → LLM decides on a tool call → tool executes → result feeds back to the agent. In multi-agent workflows, agents can delegate to each other, creating chains of tool calls across different agent roles. The security challenge is that each agent has its own tool set, and without policy enforcement, any agent can use any of its assigned tools without constraint.
CrewAI Agent → Tool Decision → [SafeClaw Policy Check] → tool.run() or Deny
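For context, here is a minimal sketch of that unguarded flow: a custom CrewAI tool whose _run executes as soon as the LLM selects it. The tool and agent here are illustrative examples, not part of SafeClaw.

from crewai import Agent
from crewai.tools import BaseTool

class WriteFileTool(BaseTool):
    name: str = "write_file"
    description: str = "Write content to a file on disk."

    def _run(self, path: str, content: str) -> str:
        # Runs the moment the LLM picks it: nothing checks which
        # paths this agent is actually allowed to touch.
        with open(path, "w") as f:
            f.write(content)
        return f"Wrote {len(content)} bytes to {path}"

researcher = Agent(
    role="researcher",
    goal="Find relevant information",
    backstory="Careful researcher who cites sources.",
    tools=[WriteFileTool()],  # every assigned tool is fully usable, unconstrained
)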
Quick Start
npx @authensor/safeclaw
Creates a safeclaw.yaml with support for role-based policies that map to CrewAI's agent structure.
Step 1: Define Role-Based Policies
CrewAI agents have distinct roles. SafeClaw lets you define policies per agent role:
# safeclaw.yaml
version: 1
default: deny

policies:
  - name: "researcher-agent"
    description: "Policies for the research agent"
    role: "researcher"
    actions:
      - tool: "search_internet"
        effect: allow
      - tool: "read_document"
        effect: allow
      - tool: "scrape_website"
        effect: allow
        constraints:
          url_pattern: "https://*.wikipedia.org/|https://arxiv.org/"
      - tool: "write_file"
        effect: deny

  - name: "writer-agent"
    description: "Policies for the content writer agent"
    role: "writer"
    actions:
      - tool: "write_file"
        effect: allow
        constraints:
          path_pattern: "output/drafts/**"
          max_size_bytes: 50000
      - tool: "read_file"
        effect: allow
        constraints:
          path_pattern: "output/|templates/"
      - tool: "search_internet"
        effect: deny

  - name: "reviewer-agent"
    description: "Policies for the review agent"
    role: "reviewer"
    actions:
      - tool: "read_file"
        effect: allow
      - tool: "write_file"
        effect: allow
        constraints:
          path_pattern: "output/reviews/**"
      - tool: "execute_code"
        effect: deny
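Before wiring anything into CrewAI, you can sanity-check these policies directly. This sketch uses the evaluate() API exactly as it appears in Step 2, and assumes constraints are matched against the tool's keyword arguments (for example, a path argument against path_pattern):

from safeclaw import SafeClaw

safeclaw = SafeClaw("./safeclaw.yaml")

# Researcher may search, but never writes files.
assert safeclaw.evaluate("search_internet", {"query": "RAG papers"},
                         context={"role": "researcher"}).allowed
assert not safeclaw.evaluate("write_file", {"path": "notes.md"},
                             context={"role": "researcher"}).allowed

# Writer may write only under output/drafts/; deny-by-default covers the rest.
decision = safeclaw.evaluate("write_file", {"path": "/etc/passwd"},
                             context={"role": "writer"})
print(decision.allowed, decision.reason)  # False, with the matched policy's reason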
Step 2: Wrap CrewAI Tool Execution
Integrate SafeClaw into CrewAI's tool layer:
from crewai import Agent, Task, Crew, Process
from crewai.tools import BaseTool
from safeclaw import SafeClaw

safeclaw = SafeClaw("./safeclaw.yaml")

def gate_tool(tool: BaseTool, agent_role: str) -> BaseTool:
    """Wrap a CrewAI tool with SafeClaw policy enforcement."""
    original_run = tool._run

    def safe_run(**kwargs):
        decision = safeclaw.evaluate(
            tool.name,
            kwargs,
            context={"role": agent_role}
        )
        if not decision.allowed:
            return f"Action denied by SafeClaw policy: {decision.reason}"
        return original_run(**kwargs)

    tool._run = safe_run
    return tool
# Apply policies per agent
researcher = Agent(
    role="researcher",
    goal="Find relevant information",
    backstory="Careful researcher who cites sources.",
    tools=[gate_tool(search_tool, "researcher"), gate_tool(read_tool, "researcher")],
)

writer = Agent(
    role="writer",
    goal="Write polished content",
    backstory="Writer who turns research into clear prose.",
    tools=[gate_tool(write_tool, "writer"), gate_tool(read_tool, "writer")],
)
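If an agent carries many tools, wrapping each one by hand gets noisy. A small convenience helper, gate_all, shown here as a sketch rather than a SafeClaw API, gates a whole tool list under one role:

def gate_all(tools: list[BaseTool], agent_role: str) -> list[BaseTool]:
    """Gate every tool in a list under a single agent role."""
    return [gate_tool(t, agent_role) for t in tools]

reviewer = Agent(
    role="reviewer",
    goal="Review drafts for accuracy and tone",
    backstory="Meticulous editor.",
    tools=gate_all([read_tool, write_tool], "reviewer"),
)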
Step 3: Handle Agent Delegation Safely
CrewAI supports agent delegation — one agent asking another to perform a task. SafeClaw ensures the delegated agent still operates under its own policy:
researcher = Agent(
    role="researcher",
    goal="Find relevant information",
    backstory="Careful researcher who cites sources.",
    allow_delegation=True,
    tools=[gate_tool(search_tool, "researcher")],
)

# When researcher delegates to writer, the writer's SafeClaw
# policies apply, not the researcher's
writer = Agent(
    role="writer",
    goal="Write polished content",
    backstory="Writer who turns research into clear prose.",
    allow_delegation=False,
    tools=[gate_tool(write_tool, "writer")],
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
)
Because SafeClaw evaluates at the tool execution level with role context, delegation doesn't bypass policies.
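You can see this concretely with a direct evaluation. Using the same evaluate() call the wrapper makes, a delegated request runs under the delegate's role, so the writer's denial of search_internet holds even when the researcher asked for it:

decision = safeclaw.evaluate(
    "search_internet",
    {"query": "background sources"},
    context={"role": "writer"},  # the writer's policy denies search_internet
)
print(decision.allowed)  # False: delegation does not inherit the researcher's permissions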
Step 4: Hierarchical Crew Safety
For hierarchical crews with a manager agent:
policies:
  - name: "manager-agent"
    role: "manager"
    actions:
      - tool: "delegate_task"
        effect: allow
      - tool: "execute_code"
        effect: deny
      - tool: "write_file"
        effect: deny
      - tool: "shell"
        effect: deny
The manager can delegate but cannot execute tools directly; every action must flow through a subordinate agent, each constrained by its own scoped policy, as in the sketch below.
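Here is a sketch of the crew wiring under those policies, assuming a recent CrewAI version where Process.hierarchical accepts a manager_llm (a model name or LLM instance). The worker agents are the gated ones from the earlier steps, and the task names are illustrative:

crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research_task, writing_task, review_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # the manager plans and delegates; it holds no gated tools itself
)
result = crew.kickoff()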
Step 5: Audit Multi-Agent Workflows
The audit log tracks which agent made each tool call:
npx @authensor/safeclaw audit --last 100 --filter role=researcher
npx @authensor/safeclaw audit --last 100 --filter role=writer
Each entry includes the agent role, tool name, arguments, policy matched, and decision — giving you a per-agent trace of the entire crew execution.
Why SafeClaw
- 446 tests covering policy evaluation, edge cases, and audit integrity
- Deny-by-default — each agent is constrained to its explicitly allowed tools
- Sub-millisecond evaluation — no delay in CrewAI's agent execution loop
- Hash-chained audit log — per-agent tamper-evident records across the full crew run
- Works with Claude AND OpenAI — supports any LLM backend CrewAI connects to
Related Pages
- How to Add Safety Gating to LangChain Agents
- How to Add Safety Controls to AutoGen Agents
- How to Secure Devin and Autonomous Coding Agents
- How to Add Safety Policies to Mastra AI Agents
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw