2025-12-19 · Authensor

How to Add AI Agent Safety to Next.js Applications

SafeClaw by Authensor integrates into Next.js applications to provide deny-by-default gating for AI agent actions in API routes, server actions, and server components. Every tool call, file operation, and external API request your agent makes is checked against a YAML policy before execution. Install with npx @authensor/safeclaw alongside your Next.js project.

Why Next.js Agents Need Gating

Next.js is increasingly used to build AI-powered applications with the Vercel AI SDK. These apps run server-side code that can access the filesystem, execute processes, and make network requests. A prompt injection in a chat endpoint can escalate into arbitrary code execution on your server.

SafeClaw checks every agent action against your policy before it runs, backed by 446 validated tests and a hash-chained audit trail.

Installation

npx @authensor/safeclaw          # run the SafeClaw gate service
npm install @authensor/safeclaw  # add the library to your project

Policy

version: 1
defaultAction: deny

rules:
  - action: file.read
    path:
      glob: "./content/**"
    decision: allow

  - action: file.read
    path:
      glob: "./public/**"
    decision: allow

  - action: file.write
    path:
      glob: "./output/**"
    decision: allow

  - action: network.request
    host:
      in: ["api.openai.com", "api.anthropic.com"]
    decision: allow

  - action: file.read
    path:
      glob: "**/.env"
    decision: deny

  - action: process.exec
    decision: deny  # no shell execution from Next.js
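Deny-by-default means the gate walks the rules in order and falls back to defaultAction: deny when nothing matches. A minimal sketch of that evaluation order, for illustration only: prefix matching stands in for SafeClaw's glob matching, and the Rule and Decision types here are hypothetical, not SafeClaw's internals.

```typescript
// Illustrative deny-by-default evaluator (not SafeClaw's implementation).
type Decision = { allowed: boolean; reason: string };

interface Rule {
  action: string;
  pathPrefix?: string;          // stands in for the YAML glob matcher
  decision: 'allow' | 'deny';
}

// Mirrors a subset of the policy above.
const rules: Rule[] = [
  { action: 'file.read', pathPrefix: './content/', decision: 'allow' },
  { action: 'file.write', pathPrefix: './output/', decision: 'allow' },
  { action: 'process.exec', decision: 'deny' },
];

function check(action: string, path?: string): Decision {
  for (const rule of rules) {
    if (rule.action !== action) continue;
    if (rule.pathPrefix && (!path || !path.startsWith(rule.pathPrefix))) continue;
    return { allowed: rule.decision === 'allow', reason: `matched rule for ${rule.action}` };
  }
  // No rule matched: defaultAction: deny applies.
  return { allowed: false, reason: 'no rule matched; default deny' };
}

console.log(check('file.read', './content/post.md').allowed);  // true
console.log(check('file.read', './secrets/key.pem').allowed);  // false
console.log(check('process.exec').allowed);                    // false
```

The key property: an action that no rule mentions is denied, so forgetting a rule fails closed rather than open.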

API Route Integration

Gate agent tool calls in your Next.js API routes:

// app/api/agent/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { Gate } from '@authensor/safeclaw';
import { readFile } from 'fs/promises';

const gate = new Gate();

export async function POST(req: NextRequest) {
  const { action, params } = await req.json();

  if (action === 'readFile') {
    const decision = await gate.check({
      action: 'file.read',
      path: params.path
    });

    if (!decision.allowed) {
      return NextResponse.json(
        { error: `Denied: ${decision.reason}` },
        { status: 403 }
      );
    }

    const content = await readFile(params.path, 'utf-8');
    return NextResponse.json({ content });
  }

  // Default deny for unknown actions
  return NextResponse.json(
    { error: 'Action not recognized' },
    { status: 400 }
  );
}

Server Action Integration

// app/actions/agent-actions.ts
'use server';

import { Gate } from '@authensor/safeclaw';
import { readFile, writeFile } from 'fs/promises';

const gate = new Gate();

export async function readDocument(path: string): Promise<string> {
  const decision = await gate.check({
    action: 'file.read',
    path
  });
  if (!decision.allowed) {
    throw new Error(`SafeClaw denied: ${decision.reason}`);
  }
  return readFile(path, 'utf-8');
}

export async function saveOutput(path: string, content: string): Promise<void> {
  const decision = await gate.check({
    action: 'file.write',
    path
  });
  if (!decision.allowed) {
    throw new Error(`SafeClaw denied: ${decision.reason}`);
  }
  await writeFile(path, content, 'utf-8');
}

export async function callExternalApi(url: string): Promise<Response> {
  const host = new URL(url).hostname;
  const decision = await gate.check({
    action: 'network.request',
    host,
    url,
    method: 'POST'
  });
  if (!decision.allowed) {
    throw new Error(`SafeClaw denied: ${decision.reason}`);
  }
  return fetch(url, { method: 'POST' });
}
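Deriving the host with new URL(url).hostname, as callExternalApi does, matters: WHATWG URL parsing strips ports, paths, and credentials, so lookalike URLs resolve to their real hostname before the allowlist is consulted. A standalone sketch (the allowlist mirrors the network.request rule in the policy above; the hostAllowed helper is illustrative, not part of SafeClaw's API):

```typescript
// Illustrative helper showing the hostname a gate check would see.
const allowedHosts = new Set(['api.openai.com', 'api.anthropic.com']);

function hostAllowed(url: string): boolean {
  // URL parsing normalizes the input, so a lookalike such as
  // "https://api.openai.com.evil.com/x" yields its true hostname.
  const host = new URL(url).hostname;
  return allowedHosts.has(host);
}

console.log(hostAllowed('https://api.openai.com/v1/chat/completions')); // true
console.log(hostAllowed('https://api.openai.com.evil.com/v1'));         // false
```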

Vercel AI SDK Integration

Wrap tool definitions with SafeClaw gating:

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { Gate } from '@authensor/safeclaw';
import { z } from 'zod';
import { readFile } from 'fs/promises';

const gate = new Gate();

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4'),
    messages,
    tools: {
      readFile: tool({
        description: 'Read a file from disk',
        parameters: z.object({ path: z.string() }),
        execute: async ({ path }) => {
          const decision = await gate.check({
            action: 'file.read',
            path
          });
          if (!decision.allowed) {
            return `Access denied: ${decision.reason}`;
          }
          return readFile(path, 'utf-8');
        }
      })
    }
  });

  return result.toDataStreamResponse();
}
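With many tools, the per-tool gate.check boilerplate can be factored into a wrapper around each execute function. A sketch under stated assumptions: the GateLike interface and gatedExecute helper below are hypothetical conveniences written against the gate.check shape used in this article, not part of SafeClaw's published API.

```typescript
// Illustrative wrapper: run a tool's execute function only if the gate allows it.
type Decision = { allowed: boolean; reason: string };

interface GateLike {
  check(req: { action: string; [k: string]: unknown }): Promise<Decision>;
}

function gatedExecute<P, R>(
  gate: GateLike,
  action: string,
  toRequest: (params: P) => Record<string, unknown>,
  execute: (params: P) => Promise<R>
): (params: P) => Promise<R | string> {
  return async (params: P) => {
    const decision = await gate.check({ action, ...toRequest(params) });
    if (!decision.allowed) {
      // Return the denial as a string so the model sees why the tool failed.
      return `Access denied: ${decision.reason}`;
    }
    return execute(params);
  };
}

// Usage with a stub gate that only allows reads under ./content:
const stubGate: GateLike = {
  async check(req) {
    const ok = typeof req.path === 'string' && req.path.startsWith('./content/');
    return { allowed: ok, reason: ok ? 'allowed' : 'path outside ./content' };
  },
};

const readGated = gatedExecute(
  stubGate,
  'file.read',
  ({ path }: { path: string }) => ({ path }),
  async ({ path }) => `contents of ${path}`
);
```

The resulting function drops into a tool({ ... execute: readGated }) definition, keeping policy enforcement in one place.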

Deployment

Start SafeClaw alongside your Next.js app:

{
  "scripts": {
    "dev": "concurrently \"npx @authensor/safeclaw\" \"next dev\"",
    "start": "npx @authensor/safeclaw & next start"
  }
}

Every gate decision is logged to the hash-chained audit trail. MIT licensed, works with Claude and OpenAI.

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw