Using SafeClaw with CrewAI: Action-Level Gating for Agent Tool Calls
CrewAI agents execute tool calls based on task instructions and LLM reasoning. Without action-level gating, an agent can call any tool it has access to, regardless of context, user intent, or safety constraints. SafeClaw enforces deny-by-default action gating at the tool call layer, letting you define exactly which tools agents can invoke in which scenarios.
Why CrewAI Agents Need Action-Level Gating
CrewAI agents operate in a loop: receive task, reason about steps, call tools, process results, iterate. Each tool call is a potential security boundary. Consider these scenarios:
- A research agent with database access receives a prompt injection asking it to export customer records
- A code review agent gains access to a deployment tool and an attacker manipulates it into triggering production deploys
- A customer support agent can send emails but receives instructions to spam users
Integration Pattern: SafeClaw as Tool Call Middleware
You wrap CrewAI tool execution with SafeClaw evaluation. The pattern is:
- Agent decides to call a tool
- Tool call is intercepted before execution
- SafeClaw evaluates the call against your policy
- If allowed, tool executes normally
- If denied, SafeClaw returns a rejection message to the agent
- If require-approval, call is queued for human review
```typescript
import fs from "fs";
import { SafeClaw } from "@authensor/safeclaw";

const safeClaw = new SafeClaw({
  apiKey: process.env.SAFECLAW_API_KEY,
  policyYaml: fs.readFileSync("./policy.yaml", "utf-8"),
});

// Wrap tool execution
async function executeToolWithGating(
  toolName: string,
  toolInput: Record<string, unknown>,
  agentContext: {
    agentName: string;
    taskDescription: string;
    userId: string;
  }
): Promise<unknown> {
  const decision = await safeClaw.evaluate({
    action: toolName,
    context: {
      agent: agentContext.agentName,
      task: agentContext.taskDescription,
      user: agentContext.userId,
      input: JSON.stringify(toolInput),
    },
  });

  if (decision.result === "deny") {
    throw new Error(
      `Tool call denied by policy: ${decision.reason || "No reason provided"}`
    );
  }

  if (decision.result === "require-approval") {
    // Queue for human review, return pending message to agent
    console.log(`Tool call queued for approval: ${toolName}`);
    return {
      status: "pending_approval",
      message: `Your request to use ${toolName} requires approval and is being reviewed.`,
    };
  }

  // Call executes normally
  return await tools[toolName](toolInput);
}
```
For CrewAI specifically, you override the tool execution in your agent definition:
```typescript
import { Agent, Task, Crew } from "crewai";

const researchAgent = new Agent({
  role: "Research Analyst",
  goal: "Find relevant information",
  tools: [searchTool, databaseTool],
  // Override tool execution
  executeToolCall: async (toolName, toolInput) => {
    return executeToolWithGating(toolName, toolInput, {
      agentName: "Research Analyst",
      taskDescription: "Analyze market trends",
      userId: "user-123",
    });
  },
});
```
Policy YAML for Common CrewAI Use Cases
SafeClaw policies are YAML files defining allow, deny, and require-approval rules. Rules match on action name and context fields.
Example 1: Research Agent with Database Access Control
```yaml
version: "1.0"
default: deny
rules:
  # Allow public search tools
  - action: "web_search"
    effect: allow
    description: "Research agents can search public web"

  # Allow database reads, deny writes
  - action: "database_query"
    effect: allow
    conditions:
      - field: "input"
        operator: "contains"
        value: "SELECT"
    description: "Only SELECT queries allowed"

  - action: "database_query"
    effect: deny
    conditions:
      - field: "input"
        operator: "contains"
        value: "DELETE"
    description: "DELETE operations blocked"

  - action: "database_query"
    effect: deny
    conditions:
      - field: "input"
        operator: "contains"
        value: "DROP"
    description: "DROP operations blocked"

  # Require approval for large exports
  - action: "export_data"
    effect: require-approval
    conditions:
      - field: "input"
        operator: "regex"
        value: "limit.*[5-9][0-9]{3}|limit.*[0-9]{5,}"
    description: "Exports of 5000 or more rows need approval"

  # Block all other actions
  - action: "*"
    effect: deny
    description: "Default deny all other tools"
```
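Regex conditions are easy to get subtly wrong, so it helps to check them against sample inputs before deploying the policy. A quick sketch of the export threshold pattern; the `/i` flag is an assumption, since the "LIMIT 10000" example later in this guide suggests SafeClaw matches case-insensitively:

```typescript
// Check the export-approval regex from the policy above against sample inputs.
// The /i flag is an assumption about SafeClaw's regex operator.
const exportApproval = /limit.*([5-9][0-9]{3}|[0-9]{5,})/i;

console.log(exportApproval.test("SELECT * FROM logs LIMIT 10000")); // true
console.log(exportApproval.test("SELECT * FROM logs LIMIT 100"));   // false
```

The alternation covers 5000–9999 (`[5-9][0-9]{3}`) and 10000 and up (`[0-9]{5,}`).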
Example 2: Customer Support Agent with Email and Ticket Access
```yaml
version: "1.0"
default: deny
rules:
  # Allow ticket operations
  - action: "create_ticket"
    effect: allow
    conditions:
      - field: "agent"
        operator: "equals"
        value: "support_agent"
    description: "Support agents can create tickets"

  - action: "update_ticket"
    effect: allow
    conditions:
      - field: "agent"
        operator: "equals"
        value: "support_agent"
    description: "Support agents can update tickets"

  # Allow single-recipient emails only
  - action: "send_email"
    effect: allow
    conditions:
      - field: "input"
        operator: "regex"
        value: '"to":\s*"[^,]+"'
    description: "Only single-recipient emails allowed"

  # Deny bulk email operations
  - action: "send_bulk_email"
    effect: deny
    description: "Bulk email disabled for agents"

  # Allow knowledge base search
  - action: "search_knowledge_base"
    effect: allow
    description: "Support agents can search KB"

  # Block customer data export
  - action: "export_customer_data"
    effect: deny
    description: "Customer data export blocked"
```
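The single-recipient rule depends on `[^,]+` failing whenever the `to` value contains a comma-separated list. A quick check of that regex against sample `send_email` inputs:

```typescript
// Verify the single-recipient regex from the rule above: it matches only when
// the "to" field's value contains no comma.
const singleRecipient = /"to":\s*"[^,]+"/;

console.log(singleRecipient.test('{"to": "alice@example.com"}'));           // true
console.log(singleRecipient.test('{"to": "a@example.com,b@example.com"}')); // false
```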
Example 3: Code Review Agent with Limited Deployment Access
```yaml
version: "1.0"
default: deny
rules:
  # Allow code analysis
  - action: "analyze_code"
    effect: allow
    description: "Code review agents can analyze code"

  - action: "run_tests"
    effect: allow
    description: "Code review agents can run tests"

  # Allow staging deployment only
  - action: "deploy"
    effect: allow
    conditions:
      - field: "input"
        operator: "contains"
        value: "staging"
    description: "Deployment allowed to staging only"

  # Require approval for production
  - action: "deploy"
    effect: require-approval
    conditions:
      - field: "input"
        operator: "contains"
        value: "production"
    description: "Production deployments need approval"

  # Require approval for rollbacks
  - action: "rollback"
    effect: require-approval
    description: "Rollbacks require approval"

  # Block direct database access
  - action: "database_access"
    effect: deny
    description: "Direct database access blocked"
```
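Assuming rules are evaluated top to bottom with the first match winning (an assumption of this sketch, not a documented SafeClaw guarantee), the two `deploy` rules above resolve like this hypothetical evaluator:

```typescript
// Hypothetical evaluator mirroring the two "contains" deploy rules above,
// under a first-match-wins assumption. Not part of the SafeClaw SDK.
function evaluateDeploy(input: string): "allow" | "require-approval" | "deny" {
  if (input.includes("staging")) return "allow"; // first rule: contains "staging"
  if (input.includes("production")) return "require-approval"; // second rule
  return "deny"; // deny-by-default for anything else
}

console.log(evaluateDeploy('{"target":"staging"}'));    // allow
console.log(evaluateDeploy('{"target":"production"}')); // require-approval
```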
What Gets Blocked vs Allowed
SafeClaw evaluates each tool call and returns one of three decisions:
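Based on the examples in this section, the decision object can be modeled as a small discriminated union; fields beyond `result`, `reason`, and `approvalId` are assumptions, not a guaranteed SDK contract:

```typescript
// Sketch of the decision shape inferred from the examples in this section.
type SafeClawDecision =
  | { result: "allow" }
  | { result: "deny"; reason?: string }
  | { result: "require-approval"; approvalId: string; reason?: string };

const denied: SafeClawDecision = {
  result: "deny",
  reason: "DELETE operations blocked",
};
```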
Allow (Tool Executes)
The call matches an allow rule. The tool runs normally and returns its result to the agent.
Example from policy above:
```yaml
- action: "web_search"
  effect: allow
```
When the agent calls web_search, SafeClaw evaluates it, finds the allow rule, and the search executes.
Deny (Tool Does Not Execute)
The call matches a deny rule or no allow rule exists (deny-by-default). SafeClaw returns an error to the agent instead of executing the tool.
Example:
```yaml
- action: "database_query"
  effect: deny
  conditions:
    - field: "input"
      operator: "contains"
      value: "DELETE"
```
If the agent tries to call database_query with a DELETE statement, SafeClaw blocks it:
```typescript
const decision = await safeClaw.evaluate({
  action: "database_query",
  context: {
    input: "DELETE FROM users WHERE id = 1",
  },
});
// decision.result === "deny"
// decision.reason === "DELETE operations blocked"
```
The agent receives an error message instead of executing the query. The agent can then adjust its approach (for example, asking the user for permission or using a different strategy).
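One way to surface a denial as a recoverable observation rather than a crash is to convert it into a tool result the agent can reason about. A hypothetical sketch; `evaluateStub` stands in for `safeClaw.evaluate`, and its allow/deny rule is invented for the example:

```typescript
// Hypothetical agent-side handling of a policy denial. evaluateStub is a
// stand-in for safeClaw.evaluate, not the real SDK.
type Decision = { result: "allow" | "deny"; reason?: string };

async function evaluateStub(action: string): Promise<Decision> {
  // Invented rule for this sketch: only the read-only query tool is allowed.
  return action === "database_query_readonly"
    ? { result: "allow" }
    : { result: "deny", reason: "Write operations blocked" };
}

async function callToolSafely(action: string): Promise<string> {
  const decision = await evaluateStub(action);
  if (decision.result === "deny") {
    // Return the denial as an observation the agent can reason about,
    // instead of throwing and ending the run.
    return `Denied: ${decision.reason ?? "no reason given"}. Try another approach.`;
  }
  return "tool result"; // placeholder for the real tool call
}
```

Returning the denial as a tool result keeps the agent loop alive, so it can adjust its approach as described above.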
Require-Approval (Tool Call Queued)
The call matches a require-approval rule. SafeClaw queues the call for human review and returns a pending status to the agent.
Example:
```yaml
- action: "export_data"
  effect: require-approval
  conditions:
    - field: "input"
      operator: "regex"
      value: "limit.*[5-9][0-9]{3}"
```
If the agent tries to export 5000+ rows, SafeClaw queues it:
```typescript
const decision = await safeClaw.evaluate({
  action: "export_data",
  context: {
    input: "SELECT * FROM logs LIMIT 10000",
  },
});
// decision.result === "require-approval"
// decision.approvalId === "appr_abc123"
```
The agent receives a message that the request is pending review. You can check approval status later:
```typescript
const status = await safeClaw.getApprovalStatus("appr_abc123");
// Returns: { status: "pending" | "approved" | "rejected", ... }
```
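If you want to block until a reviewer acts, a small polling helper can wrap the status check. In this sketch the status function is injected (e.g. `(id) => safeClaw.getApprovalStatus(id)`) so the helper does not hard-code assumptions about the SDK's surface:

```typescript
// Hypothetical polling helper around an injected approval-status check.
type ApprovalStatus = "pending" | "approved" | "rejected";

async function waitForApproval(
  getStatus: (id: string) => Promise<{ status: ApprovalStatus }>,
  approvalId: string,
  { intervalMs = 5000, maxAttempts = 12 } = {}
): Promise<ApprovalStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await getStatus(approvalId);
    if (status !== "pending") return status; // reviewer has decided
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return "pending"; // still unresolved after the polling window
}
```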
Complete Integration Example
Here is a working example integrating SafeClaw with a CrewAI-like agent structure:
```typescript
import fs from "fs";
import { SafeClaw } from "@authensor/safeclaw";

// Initialize SafeClaw
const safeClaw = new SafeClaw({
  apiKey: process.env.SAFECLAW_API_KEY || "",
  policyYaml: fs.readFileSync("./policy.yaml", "utf-8"),
});

// Mock tools
const tools: Record<string, (input: unknown) => Promise<string>> = {
  web_search: async (input: unknown) => {
    return `Search results for: ${JSON.stringify(input)}`;
  },
  database_query: async (input: unknown) => {
    return `Query executed: ${JSON.stringify(input)}`;
  },
  send_email: async (input: unknown) => {
    return `Email sent: ${JSON.stringify(input)}`;
  },
  export_data: async (input: unknown) => {
    return `Data exported: ${JSON.stringify(input)}`;
  },
};
// Tool execution with SafeClaw gating
async function executeToolWithGating(
  toolName: string,
  toolInput: Record<string, unknown>,
  agentContext: {
    agentName: string;
    taskDescription: string;
    userId: string;
  }
): Promise<unknown> {
  console.log(`Agent: ${agentContext.agentName}`);
  console.log(`Tool: ${toolName}`);
  console.log(`Input: ${JSON.stringify(toolInput)}`);

  const decision = await safeClaw.evaluate({
    action: toolName,
    context: {
      agent: agentContext.agentName,
      task: agentContext.taskDescription,
      user: agentContext.userId,
      input: JSON.stringify(toolInput),
    },
  });

  console.log(`Decision: ${decision.result}`);

  if (decision.result === "deny") {
    throw new Error(
      `Tool call denied by policy: ${decision.reason || "No reason provided"}`
    );
  }

  if (decision.result === "require-approval") {
    return {
      status: "pending_approval",
      message: `Your request to use ${toolName} requires approval and is being reviewed.`,
    };
  }

  return await tools[toolName](toolInput);
}
```
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw