How to Secure Vercel AI SDK Tool Calls
SafeClaw by Authensor gates every tool invocation in the Vercel AI SDK before execution, enforcing deny-by-default policies on generateText, streamText, and useChat workflows. The AI SDK's tools parameter defines callable functions — SafeClaw intercepts each tool call and evaluates it against your YAML policy in sub-millisecond time.
How Vercel AI SDK Tool Calling Works
The Vercel AI SDK uses a tools object where each key is a tool name and the value includes a description, Zod parameters schema, and an execute function. When the model decides to call a tool, the SDK invokes the execute function with the validated parameters. For generateText, this happens server-side; for streamText with useChat, tool calls can flow between client and server. The security gap is that the execute function runs unconditionally when the model decides to call it.
Model Decision → tool name + args → [SafeClaw Policy Check] → execute() or Deny
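The flow above can be sketched as a pure function: a deny-by-default lookup that sits between the model's tool-call decision and execute(). This is an illustrative sketch, not the SafeClaw evaluator — the gate function and Decision type are hypothetical names.

```typescript
// Illustrative sketch of deny-by-default gating; not the SafeClaw API.
type Decision = { allowed: boolean; reason?: string };

// policy maps tool names to an allow flag; anything unlisted is denied
function gate(policy: Record<string, boolean>, toolName: string): Decision {
  if (policy[toolName] === true) {
    return { allowed: true };
  }
  return { allowed: false, reason: `tool "${toolName}" is not allowed by policy` };
}

const policy = { getWeather: true, searchProducts: true };
console.log(gate(policy, "getWeather").allowed);    // true
console.log(gate(policy, "deleteAccount").allowed); // false — unlisted, so denied by default
```

The important property is that the default branch denies: a tool the policy author never thought about cannot run just because the model decided to call it.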
Quick Start
npx @authensor/safeclaw
Creates a safeclaw.yaml in your project. SafeClaw wraps the AI SDK's execute functions with policy enforcement.
Step 1: Define Policies for AI SDK Tools
# safeclaw.yaml
version: 1
default: deny
policies:
  - name: "ai-sdk-data-tools"
    description: "Control data access tools"
    actions:
      - tool: "getWeather"
        effect: allow
      - tool: "searchProducts"
        effect: allow
      - tool: "getUserProfile"
        effect: allow
        constraints:
          fields: "name|email|preferences"
  - name: "ai-sdk-action-tools"
    description: "Control state-changing tools"
    actions:
      - tool: "createOrder"
        effect: allow
        constraints:
          max_amount: 1000
      - tool: "updateProfile"
        effect: allow
        constraints:
          fields: "name|preferences"
      - tool: "deleteAccount"
        effect: deny
      - tool: "sendEmail"
        effect: deny
  - name: "ai-sdk-file-tools"
    description: "Restrict file operations"
    actions:
      - tool: "readFile"
        effect: allow
        constraints:
          path_pattern: "public/|data/"
      - tool: "writeFile"
        effect: deny
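Constraints narrow an allow decision to arguments that satisfy a predicate. How SafeClaw evaluates them internally is not shown here; the sketch below illustrates the idea with hypothetical checkMaxAmount and checkFields helpers matching the max_amount and fields constraints above.

```typescript
// Hypothetical constraint checks mirroring the policy above; not SafeClaw's evaluator.
type Args = Record<string, unknown>;

// max_amount: the numeric "amount" argument must not exceed the limit
function checkMaxAmount(args: Args, max: number): boolean {
  return typeof args.amount === "number" && args.amount <= max;
}

// fields: "name|preferences" — only the listed fields may appear in the arguments
function checkFields(args: Args, allowed: string): boolean {
  const allowedSet = new Set(allowed.split("|"));
  return Object.keys(args).every((key) => allowedSet.has(key));
}

console.log(checkMaxAmount({ product: "widget", amount: 500 }, 1000));  // true
console.log(checkMaxAmount({ product: "widget", amount: 5000 }, 1000)); // false
console.log(checkFields({ name: "Ada" }, "name|preferences"));          // true
console.log(checkFields({ email: "a@b.c" }, "name|preferences"));       // false
```

A constrained allow therefore behaves like a conditional: the tool runs only when every predicate on its arguments holds; otherwise the call is denied like any other.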
Step 2: Wrap Tool Execute Functions
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { SafeClaw } from "@authensor/safeclaw";

const safeclaw = new SafeClaw("./safeclaw.yaml");

function safeTool<T extends z.ZodType>(
  name: string,
  config: { description: string; parameters: T; execute: (args: z.infer<T>) => Promise<any> }
) {
  return tool({
    description: config.description,
    parameters: config.parameters,
    execute: async (args) => {
      const decision = safeclaw.evaluate(name, args);
      if (!decision.allowed) {
        return { error: `Denied by SafeClaw: ${decision.reason}` };
      }
      return config.execute(args);
    },
  });
}

const result = await generateText({
  model: openai("gpt-4o"),
  tools: {
    getWeather: safeTool("getWeather", {
      description: "Get current weather for a location",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => fetchWeather(city),
    }),
    createOrder: safeTool("createOrder", {
      description: "Create a new order",
      parameters: z.object({
        product: z.string(),
        quantity: z.number(),
        amount: z.number(),
      }),
      execute: async (args) => createOrder(args),
    }),
  },
  prompt: "What's the weather in London and order 5 widgets?",
});
Step 3: Integrate with streamText and useChat
For streaming scenarios with useChat:
// app/api/chat/route.ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { safeTool } from "@/lib/safe-tool"; // the safeTool wrapper from Step 2

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    tools: {
      searchProducts: safeTool("searchProducts", {
        description: "Search the product catalog",
        parameters: z.object({ query: z.string(), category: z.string().optional() }),
        execute: async (args) => searchProducts(args),
      }),
    },
    maxSteps: 5, // SafeClaw evaluates each step's tool calls
  });
  return result.toDataStreamResponse();
}
The maxSteps parameter enables multi-step tool calling. SafeClaw evaluates each step independently.
Step 4: Handle Multi-Step Tool Loops
The AI SDK's maxSteps feature allows the model to call tools multiple times. SafeClaw gates every step:
const result = await generateText({
  model: openai("gpt-4o"),
  tools: {
    analyze: safeTool("analyze", { /* ... */ }),
    summarize: safeTool("summarize", { /* ... */ }),
    writeReport: safeTool("writeReport", { /* ... */ }),
  },
  maxSteps: 10,
  prompt: "Analyze the data and write a summary report",
});

// Each of the up to 10 steps has its tool calls gated by SafeClaw.
// Note: result.toolCalls holds only the final step's calls, so count across steps.
console.log(`Steps taken: ${result.steps.length}`);
console.log(`Tool calls evaluated: ${result.steps.flatMap((s) => s.toolCalls).length}`);
Step 5: Audit Tool Call History
npx @authensor/safeclaw audit --last 50
Every tool invocation — including the step number, tool name, arguments, and decision — is recorded in the hash-chained audit log.
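Hash-chaining means each audit entry's hash covers both its own content and the previous entry's hash, so editing any past record invalidates every hash after it. The sketch below illustrates that property with Node's crypto module; the AuditEntry shape and helper names are illustrative, not SafeClaw's actual log format.

```typescript
import { createHash } from "node:crypto";

// Illustrative audit-entry shape; not SafeClaw's on-disk format.
type AuditEntry = { step: number; tool: string; decision: string; prevHash: string; hash: string };

function appendEntry(log: AuditEntry[], step: number, tool: string, decision: string): AuditEntry[] {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(`${prevHash}:${step}:${tool}:${decision}`)
    .digest("hex");
  return [...log, { step, tool, decision, prevHash, hash }];
}

// verify recomputes every hash; any edited entry breaks the chain from that point on
function verify(log: AuditEntry[]): boolean {
  return log.every((entry, i) => {
    const prevHash = i > 0 ? log[i - 1].hash : "genesis";
    const expected = createHash("sha256")
      .update(`${prevHash}:${entry.step}:${entry.tool}:${entry.decision}`)
      .digest("hex");
    return entry.prevHash === prevHash && entry.hash === expected;
  });
}

let log: AuditEntry[] = [];
log = appendEntry(log, 1, "getWeather", "allow");
log = appendEntry(log, 2, "deleteAccount", "deny");
console.log(verify(log)); // true
log[0].decision = "deny"; // tamper with history
console.log(verify(log)); // false
```

This is why the log is tamper-evident rather than tamper-proof: a modified record is not prevented, but it is always detectable on the next verification pass.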
Why SafeClaw
- 446 tests covering policy evaluation, edge cases, and audit integrity
- Deny-by-default — unlisted tools in your AI SDK config are blocked
- Sub-millisecond evaluation — no perceptible latency in streaming workflows
- Hash-chained audit log — tamper-evident record of every tool execution
- Works with Claude AND OpenAI — the AI SDK supports both, and so does SafeClaw
Related Pages
- How to Secure Your OpenAI GPT Agent
- How to Secure Your Claude Agent with SafeClaw
- How to Add Safety Policies to Mastra AI Agents
- How to Secure MCP Servers
Try SafeClaw
Action-level gating for AI agents. Set it up in your terminal in 60 seconds.
$ npx @authensor/safeclaw