How to Safely Run LangChain Agents
To safely run LangChain agents, add SafeClaw action-level gating. Install with npx @authensor/safeclaw and define a deny-by-default policy that controls which tools the agent can invoke and what parameters those tools receive. LangChain agents use a ReAct loop — they reason, select a tool, execute it, observe the result, and repeat. Each tool call is an action on your system, and LangChain's built-in toolkits include filesystem access, shell execution, database queries, and HTTP requests.
What LangChain Agents Can Do (And Why That's Risky)
LangChain provides dozens of built-in tools and toolkits. Agents using these tools can:
- Read and write files — the
FileManagementToolkitincludesReadFileTool,WriteFileTool,ListDirectoryTool, andCopyFileTool. By default, these have no path restrictions beyond what the OS allows. - Execute shell commands — the
ShellToolruns arbitrary bash commands with the permissions of the Python process. There are no built-in command allowlists. - Make HTTP requests — the
RequestsToolkitincludes GET, POST, PUT, PATCH, DELETE tools. The agent chooses the URL and payload. - Query databases —
SQLDatabaseToolkitgenerates and executes SQL queries. A prompt injection can turn a SELECT into a DROP TABLE. - Search and retrieve — various retriever tools access vector stores, APIs, and web search. Retrieved content can contain prompt injections that redirect agent behavior.
- Chain tools across steps — a LangChain agent may read a file, extract a URL from it, make an HTTP request to that URL, parse the response, and write a new file — all in a single agent run without user intervention.
AgentExecutor, create_react_agent) do not include per-tool-call policy evaluation. Once a tool is in the agent's toolkit, it can be called with any arguments the model generates.
Step-by-Step Setup
Step 1: Install SafeClaw
npx @authensor/safeclaw
Select SDK Wrapper as the integration type.
Step 2: Get Your API Key
Visit safeclaw.onrender.com for a free-tier key. Renewable every 7 days, no credit card required.
Step 3: Wrap LangChain Tools with SafeClaw
SafeClaw provides a tool wrapper that intercepts every call before execution:
import { SafeClaw } from "@authensor/safeclaw";
import { ShellTool } from "langchain/tools";
import { WriteFileTool, ReadFileTool } from "langchain/tools/file";
const safeclaw = new SafeClaw({
apiKey: process.env.SAFECLAW_API_KEY,
policy: "./safeclaw.policy.yaml",
});
// Wrap each tool with SafeClaw gating
const shellTool = safeclaw.wrapTool(new ShellTool(), "shell_exec");
const writeFile = safeclaw.wrapTool(new WriteFileTool(), "file_write");
const readFile = safeclaw.wrapTool(new ReadFileTool(), "file_read");
// Use wrapped tools in your agent
const agent = await createReactAgent({
llm: model,
tools: [shellTool, writeFile, readFile],
});
Alternatively, wrap at the execution level:
const executor = new AgentExecutor({
agent,
tools: [shellTool, writeFile, readFile],
callbacks: [safeclaw.langchainCallback()],
});
Step 4: Define Your Policy
version: 1
default: deny
rules:
- action: file_read
path: "${PROJECT_DIR}/**"
effect: allow
- action: file_read
path: "*/.env"
effect: deny
- action: file_read
path: "*/credentials"
effect: deny
- action: file_write
path: "${PROJECT_DIR}/output/**"
effect: allow
- action: file_write
path: "${PROJECT_DIR}/temp/**"
effect: allow
- action: shell_exec
command: "python*"
effect: allow
- action: shell_exec
command: "pip*"
effect: deny
- action: shell_exec
command: "rm*"
effect: deny
- action: shell_exec
command: "curl*"
effect: deny
- action: network
host: "api.openai.com"
effect: allow
- action: network
host: "*.langchain.com"
effect: allow
- action: network
host: "*"
effect: deny
Step 5: Test with Simulation Mode
npx @authensor/safeclaw simulate --policy safeclaw.policy.yaml
Run your agent against test inputs. Review the action log. Adjust rules. Then enforce.
Recommended Policy
This policy fits a LangChain agent doing data processing: reading project files, writing results to output/, running Python scripts, and blocking everything else. The deny-by-default base ensures that any new tool added to the agent is blocked until you explicitly allow its action type and parameters.
What Gets Blocked, What Gets Through
ALLOWED — Agent reads a data file:
{ "action": "file_read", "path": "/project/data/input.csv", "verdict": "ALLOW" }
DENIED — Agent reads credential file:
{ "action": "file_read", "path": "/project/config/credentials.json", "verdict": "DENY", "reason": "path matches */credentials deny rule" }
ALLOWED — Agent writes analysis output:
{ "action": "file_write", "path": "/project/output/summary.json", "verdict": "ALLOW" }
DENIED — Agent tries to install a package:
{ "action": "shell_exec", "command": "pip install requests", "verdict": "DENY", "reason": "pip* matches deny rule" }
DENIED — Agent's RequestsTool hits an unknown API:
{ "action": "network", "host": "evil-api.example.com", "verdict": "DENY", "reason": "host not in allowlist, default deny" }
Without SafeClaw vs With SafeClaw
| Scenario | Without SafeClaw | With SafeClaw |
|---|---|---|
| Agent's ShellTool runs rm -rf /tmp/ | Command executed with process permissions | Blocked — rm matches deny rule |
| Agent reads credentials.json for API context | Credentials loaded into model context | Blocked — */credentials matched deny rule |
| Agent's RequestsTool POSTs data to unknown URL | HTTP request sent with extracted data | Blocked — host not in network allowlist |
| Agent writes results to output/report.csv | File written normally | Allowed — output/** is in write allowlist |
| Agent runs python analyze.py | Script executes | Allowed — python* matches allow rule |
SafeClaw evaluates each policy rule in sub-millisecond time with zero third-party dependencies. Every verdict is appended to a tamper-proof audit trail using SHA-256 hash chaining. The client is 100% open source (MIT license), built with TypeScript strict mode, and validated by 446 tests. The control plane sees only action metadata — never your LangChain API keys or tool arguments.
Cross-References
- What is SafeClaw? — How deny-by-default action gating works
- How to Safely Run CrewAI Agents — CrewAI uses LangChain tools under the hood
- How to Safely Use OpenAI Agents — Similar SDK wrapper pattern for OpenAI
- How to Safely Run AutoGen Agents — AutoGen code executors need similar controls
- SafeClaw Policy Reference — Full policy syntax and matching rules
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw