AI Agent Safety for JavaScript and Node.js Developers
SafeClaw by Authensor provides deny-by-default action gating for JavaScript and Node.js AI agents. It intercepts child_process.exec(), fs.writeFile(), fetch(), and any other dangerous operation, checking each against your YAML policy before allowing execution. Install with npx @authensor/safeclaw — zero dependencies, works with Claude and OpenAI agents alike.
The Node.js Agent Risk Surface
Node.js agents have unrestricted access to child_process, fs, net, and http. A single compromised tool call can:
- Execute
rm -rf /viachild_process.exec() - Read
.envfiles containing API keys viafs.readFileSync() - Exfiltrate data via
fetch()oraxios.post()to an attacker-controlled endpoint - Install malicious packages via
child_process.exec("npm install trojan-package")
Installation
npx @authensor/safeclaw
This starts SafeClaw as a local sidecar. Your Node.js agent communicates with it before executing any action.
Policy Configuration
Create safeclaw.policy.yaml:
version: 1
defaultAction: deny
rules:
- action: file.read
path:
glob: "./src/**"
decision: allow
- action: file.write
path:
glob: "./output/**"
decision: allow
- action: file.read
path:
glob: "*/.env"
decision: deny # never let agents read env files
- action: process.exec
command:
startsWith: "node"
decision: allow
- action: process.exec
command:
startsWith: "npm install"
decision: prompt
- action: network.request
host:
in: ["api.openai.com", "api.anthropic.com"]
decision: allow
JavaScript Integration
import { Gate } from '@authensor/safeclaw';
import { execSync } from 'child_process';
import fs from 'fs/promises';
const gate = new Gate();
async function safeExec(command) {
const decision = await gate.check({
action: 'process.exec',
command
});
if (!decision.allowed) {
throw new Error(SafeClaw denied: ${decision.reason});
}
return execSync(command, { encoding: 'utf-8' });
}
async function safeReadFile(filepath) {
const decision = await gate.check({
action: 'file.read',
path: filepath
});
if (!decision.allowed) {
throw new Error(SafeClaw denied: ${decision.reason});
}
return fs.readFile(filepath, 'utf-8');
}
async function safeFetch(url, options = {}) {
const host = new URL(url).hostname;
const decision = await gate.check({
action: 'network.request',
host,
url,
method: options.method || 'GET'
});
if (!decision.allowed) {
throw new Error(SafeClaw denied: ${decision.reason});
}
return fetch(url, options);
}
Wrapping Axios
If your agent uses axios for HTTP:
import axios from 'axios';
async function safeAxios(config) {
const url = typeof config === 'string' ? config : config.url;
const host = new URL(url).hostname;
const decision = await gate.check({
action: 'network.request',
host,
url,
method: config.method || 'GET'
});
if (!decision.allowed) {
throw new Error(SafeClaw denied: ${decision.reason});
}
return axios(config);
}
Gating npm install
async function safeNpmInstall(packageName) {
const decision = await gate.check({
action: 'process.exec',
command: npm install ${packageName}
});
if (!decision.allowed) {
throw new Error(SafeClaw denied npm install ${packageName}: ${decision.reason});
}
return execSync(npm install ${packageName}, { encoding: 'utf-8' });
}
With the prompt decision in your policy, this pauses and asks a human before allowing the install.
Audit Trail
Every decision is hash-chained:
{
"timestamp": "2026-02-13T10:30:00Z",
"action": "file.read",
"path": "./.env",
"decision": "deny",
"reason": "matched deny rule for .env files",
"hash": "b4e9f2...",
"prev_hash": "a3f8c1..."
}
MIT licensed. Works with Claude Agent SDK, OpenAI Assistants, Vercel AI SDK, and any custom agent framework.
Cross-References
- Next.js Integration
- Express.js Integration
- TypeScript Integration
- Deny-by-Default Explained
- Prevent Agent .env File Access
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw