How to Add Safety Policies to Mastra AI Agents
SafeClaw by Authensor enforces deny-by-default policies on every tool call in Mastra's agent framework, gating each invocation before execution. Mastra agents use tools defined with schemas and execute functions — SafeClaw intercepts at the tool execution boundary, evaluating each call against your YAML policy in sub-millisecond time.
How Mastra Tool Execution Works
Mastra is a TypeScript-first AI agent framework that defines tools with createTool(), specifying an input schema (Zod), description, and execute function. Agents are created with new Agent() and given a set of tools. When the agent's LLM decides to call a tool, Mastra validates the input against the schema and invokes the execute function. Mastra also supports workflows — multi-step pipelines where agents and tools are orchestrated together. The security gap is that tool execution is unconditional once the LLM decides to call it.
Mastra Agent → Tool Decision → [SafeClaw Policy Check] → tool.execute() or Deny
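The gate in the diagram above can be sketched as a deny-by-default lookup. This is an illustrative stand-in for the concept, not SafeClaw's actual implementation; the `policy` map and `evaluate` function here are hypothetical:

```typescript
// Illustrative deny-by-default gate (not SafeClaw's internals):
// a tool call proceeds only if its id is listed with effect "allow".
type Effect = "allow" | "deny";

const policy: Record<string, Effect> = {
  fetchCustomer: "allow",
  deleteCustomer: "deny",
};

function evaluate(toolId: string): { allowed: boolean; reason: string } {
  const effect = policy[toolId];
  if (effect === "allow") {
    return { allowed: true, reason: "explicitly allowed" };
  }
  // Unknown tools fall through to deny -- the deny-by-default posture.
  return {
    allowed: false,
    reason: effect === "deny" ? "explicitly denied" : "not listed in policy",
  };
}

console.log(evaluate("fetchCustomer").allowed); // true
console.log(evaluate("deleteCustomer").allowed); // false
console.log(evaluate("dropDatabase").allowed); // false: deny by default
```

The key property is the final branch: a tool the policy has never heard of is treated exactly like a denied one.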
Quick Start
npx @authensor/safeclaw
Creates a safeclaw.yaml in your project. SafeClaw integrates with Mastra's tool definition pattern.
Step 1: Define Mastra Tool Policies
# safeclaw.yaml
version: 1
default: deny
policies:
  - name: "mastra-data-tools"
    description: "Control data access tools"
    actions:
      - tool: "fetchCustomer"
        effect: allow
      - tool: "searchProducts"
        effect: allow
      - tool: "queryAnalytics"
        effect: allow
        constraints:
          date_range_max_days: 90
  - name: "mastra-action-tools"
    description: "Control state-changing operations"
    actions:
      - tool: "createInvoice"
        effect: allow
        constraints:
          max_amount: 10000
      - tool: "updateCustomer"
        effect: allow
        constraints:
          fields: "name|email|phone|address"
      - tool: "deleteCustomer"
        effect: deny
      - tool: "sendNotification"
        effect: allow
        constraints:
          channel: "email|slack"
  - name: "mastra-integration-tools"
    description: "Control third-party integrations"
    actions:
      - tool: "slackPostMessage"
        effect: allow
        constraints:
          channel_pattern: "#ai-|#support-"
      - tool: "githubCreateIssue"
        effect: allow
      - tool: "githubMergePR"
        effect: deny
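To make the constraint entries concrete, here is a sketch of how a numeric constraint like `max_amount` could reject a call before execution. The semantics are assumed for illustration (check SafeClaw's documentation for the real constraint rules), and `checkCreateInvoice` is a hypothetical helper, not part of the SDK:

```typescript
// Illustrative constraint check (assumed semantics, not SafeClaw's code):
// mirrors the max_amount: 10000 constraint on createInvoice above.
interface Decision {
  allowed: boolean;
  reason: string;
}

function checkCreateInvoice(args: { amount: number }): Decision {
  const MAX_AMOUNT = 10000; // from the policy's max_amount constraint
  if (args.amount > MAX_AMOUNT) {
    return {
      allowed: false,
      reason: `amount ${args.amount} exceeds max_amount ${MAX_AMOUNT}`,
    };
  }
  return { allowed: true, reason: "within constraints" };
}

console.log(checkCreateInvoice({ amount: 500 }).allowed); // true
console.log(checkCreateInvoice({ amount: 25000 }).allowed); // false
```

Under this reading, an agent asked to invoice $25,000 is stopped at the policy layer even though the tool itself would happily accept the amount.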
Step 2: Wrap Mastra Tool Definitions
import { Agent, createTool } from "@mastra/core";
import { z } from "zod";
import { SafeClaw } from "@authensor/safeclaw";

const safeclaw = new SafeClaw("./safeclaw.yaml");

// Helper to create SafeClaw-gated Mastra tools
function safeCreateTool<T extends z.ZodType>(config: {
  id: string;
  description: string;
  inputSchema: T;
  execute: (args: { context: z.infer<T> }) => Promise<any>;
}) {
  return createTool({
    id: config.id,
    description: config.description,
    inputSchema: config.inputSchema,
    execute: async ({ context }) => {
      // Evaluate the policy before the tool body ever runs
      const decision = safeclaw.evaluate(config.id, context);
      if (!decision.allowed) {
        return { error: `Denied by SafeClaw: ${decision.reason}` };
      }
      return config.execute({ context });
    },
  });
}
const fetchCustomer = safeCreateTool({
  id: "fetchCustomer",
  description: "Fetch customer details by ID",
  inputSchema: z.object({ customerId: z.string() }),
  execute: async ({ context }) => {
    return await db.customers.findById(context.customerId);
  },
});

const createInvoice = safeCreateTool({
  id: "createInvoice",
  description: "Create a new invoice",
  inputSchema: z.object({
    customerId: z.string(),
    amount: z.number(),
    items: z.array(z.object({ name: z.string(), price: z.number() })),
  }),
  execute: async ({ context }) => {
    return await billing.createInvoice(context);
  },
});
Step 3: Apply to Mastra Agents
const supportAgent = new Agent({
  name: "Support Agent",
  instructions: "Help customers with their account and billing questions.",
  model: { provider: "OPEN_AI", name: "gpt-4o" },
  tools: {
    fetchCustomer,
    createInvoice,
    // These tools are gated by SafeClaw
  },
});

const response = await supportAgent.generate(
  "Create an invoice for customer C-123 for $500"
);
Step 4: Gate Mastra Workflow Steps
Mastra workflows chain multiple steps. SafeClaw can gate tool calls within workflow steps:
import { Workflow, Step } from "@mastra/core";

const invoiceWorkflow = new Workflow({
  name: "invoice-workflow",
  steps: [
    new Step({
      id: "fetch-customer",
      execute: async ({ context }) => {
        const decision = safeclaw.evaluate("fetchCustomer", context);
        if (!decision.allowed) throw new Error(`Denied: ${decision.reason}`);
        return await db.customers.findById(context.customerId);
      },
    }),
    new Step({
      id: "create-invoice",
      execute: async ({ context }) => {
        const decision = safeclaw.evaluate("createInvoice", context);
        if (!decision.allowed) throw new Error(`Denied: ${decision.reason}`);
        return await billing.createInvoice(context);
      },
    }),
    new Step({
      id: "send-notification",
      execute: async ({ context }) => {
        const decision = safeclaw.evaluate("sendNotification", context);
        if (!decision.allowed) throw new Error(`Denied: ${decision.reason}`);
        return await notify.send(context);
      },
    }),
  ],
});
Step 5: Audit Agent and Workflow Activity
npx @authensor/safeclaw audit --last 50
The hash-chained log tracks every tool call from both agents and workflows — providing a unified audit trail of all SafeClaw-gated operations.
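The idea behind a hash-chained log can be sketched in a few lines. This is a conceptual illustration only (SafeClaw's on-disk log format is not shown here): each entry's hash covers its payload plus the previous entry's hash, so altering any earlier entry invalidates every hash after it.

```typescript
import { createHash } from "node:crypto";

// Illustrative hash chain (not SafeClaw's actual log format).
interface Entry {
  payload: string;
  prevHash: string;
  hash: string;
}

function append(log: Entry[], payload: string): Entry[] {
  const prevHash = log.length ? log[log.length - 1].hash : "genesis";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...log, { payload, prevHash, hash }];
}

function verify(log: Entry[]): boolean {
  return log.every((e, i) => {
    const prev = i === 0 ? "genesis" : log[i - 1].hash;
    return (
      e.prevHash === prev &&
      e.hash === createHash("sha256").update(prev + e.payload).digest("hex")
    );
  });
}

let log: Entry[] = [];
log = append(log, 'allow fetchCustomer {"customerId":"C-123"}');
log = append(log, 'deny deleteCustomer {"customerId":"C-123"}');
console.log(verify(log)); // true

log[0].payload = "tampered"; // rewrite history...
console.log(verify(log)); // false -- the chain exposes the edit
```

This is what "tamper-evident" means in practice: an attacker who edits one audit entry must recompute every subsequent hash, which any verifier holding the expected head hash will detect.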
Why SafeClaw
- 446 tests covering policy evaluation, edge cases, and audit integrity
- Deny-by-default — unlisted tools in your Mastra agent are blocked
- Sub-millisecond evaluation — no latency impact on Mastra's agent or workflow execution
- Hash-chained audit log — tamper-evident record of every tool invocation
- Works with Claude AND OpenAI — supports any LLM provider Mastra connects to
Related Pages
- How to Secure Vercel AI SDK Tool Calls
- How to Add Safety Gating to LangChain Agents
- How to Secure CrewAI Multi-Agent Systems
- How to Secure MCP Servers
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw