How to Wrap LangChain Tools with SafeClaw Policy Checks
SafeClaw is action-level gating for AI agents, built by Authensor. This guide shows how to wrap any LangChain tool with SafeClaw policy enforcement so that every tool invocation is evaluated before it executes. SafeClaw uses a deny-by-default architecture: any action that is not explicitly permitted is blocked.
Prerequisites
- Python 3.10 or later
- langchain and langchain-core packages installed
- Node.js 18+ (SafeClaw client runtime)
- A SafeClaw account at safeclaw.onrender.com (free tier, 7-day renewable keys, no credit card)
- SafeClaw API key from the browser dashboard
Step-by-Step Instructions
Step 1: Install SafeClaw
npx @authensor/safeclaw
Complete the browser-based setup wizard. SafeClaw has zero third-party dependencies and is validated by 446 tests in TypeScript strict mode.
Step 2: Install the SafeClaw Python Wrapper
pip install safeclaw
Alternatively, call the fully open-source, MIT-licensed TypeScript client directly through a subprocess bridge.
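If you take the subprocess route, the bridge is only a few lines. The sketch below is illustrative rather than the official interface: it assumes a hypothetical evaluate subcommand that reads a JSON action request on stdin and prints a JSON decision on stdout, so check the SafeClaw CLI documentation for the actual commands and flags.
import json
import subprocess

def evaluate_via_cli(action_type: str, target: str) -> dict:
    """Call the TypeScript SafeClaw client from Python.
    Assumes a hypothetical 'evaluate' subcommand that accepts a JSON action
    request on stdin and writes a JSON decision to stdout; adjust to match
    the real CLI.
    """
    request = {"actionType": action_type, "target": target, "agentId": "langchain-agent"}
    proc = subprocess.run(
        ["npx", "@authensor/safeclaw", "evaluate"],
        input=json.dumps(request),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)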
Step 3: Create a SafeClaw Tool Wrapper
from safeclaw import SafeClawClient
from langchain_core.tools import BaseTool, ToolException
from typing import Any
safeclaw = SafeClawClient(
api_key="your-safeclaw-api-key",
agent_id="langchain-agent",
mode="enforce", # "simulate" for testing
)
def safeclaw_wrap(tool: BaseTool, action_type: str) -> BaseTool:
"""Wrap a LangChain tool with SafeClaw policy evaluation."""
original_run = tool._run
    def gated_run(*args: Any, **kwargs: Any) -> str:
        # A tool may be invoked with a single positional string or with named
        # arguments (for example file_path= for ReadFileTool), so normalize
        # both forms into one target string for policy evaluation.
        target = str(args[0]) if args else " ".join(str(v) for v in kwargs.values())
        result = safeclaw.evaluate(
            action_type=action_type,
            target=target,
            metadata={
                "agent": "langchain-agent",
                "tool": tool.name,
            },
        )
        if result["decision"] == "DENY":
            raise ToolException(
                f"SafeClaw DENY: {result.get('reason', 'Policy violation')}"
            )
        if result["decision"] == "REQUIRE_APPROVAL":
            raise ToolException(
                f"SafeClaw REQUIRE_APPROVAL: Human approval needed for {target}"
            )
        return original_run(*args, **kwargs)
tool._run = gated_run
return tool
Step 4: Apply the Wrapper to Your Tools
from langchain_community.tools import ShellTool, ReadFileTool, WriteFileTool
shell_tool = safeclaw_wrap(ShellTool(), action_type="shell_exec")
read_tool = safeclaw_wrap(ReadFileTool(), action_type="file_read")
write_tool = safeclaw_wrap(WriteFileTool(), action_type="file_write")
tools = [shell_tool, read_tool, write_tool]
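Before handing the tools to an agent, you can call a wrapped tool directly to confirm the gate fires. A quick check, assuming the example policy later in this guide is loaded (the .env read should be blocked by the deny-read-secrets rule):
from langchain_core.tools import ToolException

try:
    # Matches the deny-read-secrets rule in the example policy.
    read_tool.run({"file_path": ".env.local"})
except ToolException as exc:
    print(f"Blocked as expected: {exc}")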
Step 5: Initialize the Agent with Wrapped Tools
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o", temperature=0)
# Any OpenAI-tools agent prompt works; it must include an agent_scratchpad placeholder.
prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful assistant."), ("human", "{input}"), MessagesPlaceholder(variable_name="agent_scratchpad")]
)
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)
result = executor.invoke({"input": "Read the config file and summarize it"})
Step 6: Test with Simulation Mode
Set mode="simulate" in the SafeClawClient. All evaluations are logged to the tamper-proof audit trail (SHA-256 hash chain) without blocking. Review outcomes in the browser dashboard before switching to enforce mode.
Example Policy
# safeclaw.config.yaml
version: "1.0"
agent: langchain-agent
defaultAction: deny
rules:
- id: allow-read-data
action: file_read
target: "./data/**"
decision: allow
description: "Allow reading from data directory"
- id: allow-write-results
action: file_write
target: "./results/**"
decision: allow
description: "Allow writing to results directory"
- id: deny-read-secrets
action: file_read
target: ".env*"
decision: deny
description: "Block reading environment secrets"
- id: deny-read-credentials
action: file_read
target: "*/credentials"
decision: deny
- id: gate-shell
action: shell_exec
target: "*"
decision: require_approval
- id: allow-python-scripts
action: shell_exec
target: "python3 ./scripts/**"
decision: allow
description: "Allow running project scripts"
- id: deny-network
action: network
target: "*"
decision: deny
Example Action Requests
1. ALLOW — Reading a data file:
{
"actionType": "file_read",
"target": "./data/dataset.csv",
"agentId": "langchain-agent",
"tool": "ReadFileTool",
"decision": "ALLOW",
"rule": "allow-read-data",
"evaluationTime": "0.3ms"
}
2. DENY — Reading .env file:
{
"actionType": "file_read",
"target": ".env.local",
"agentId": "langchain-agent",
"tool": "ReadFileTool",
"decision": "DENY",
"rule": "deny-read-secrets",
"evaluationTime": "0.2ms"
}
3. ALLOW — Running an approved Python script:
{
"actionType": "shell_exec",
"target": "python3 ./scripts/transform.py",
"agentId": "langchain-agent",
"tool": "ShellTool",
"decision": "ALLOW",
"rule": "allow-python-scripts",
"evaluationTime": "0.4ms"
}
4. REQUIRE_APPROVAL — Arbitrary shell command:
{
"actionType": "shell_exec",
"target": "curl https://example.com",
"agentId": "langchain-agent",
"tool": "ShellTool",
"decision": "REQUIRE_APPROVAL",
"rule": "gate-shell",
"evaluationTime": "0.3ms"
}
5. DENY — Writing outside allowed directory:
{
"actionType": "file_write",
"target": "/etc/hosts",
"agentId": "langchain-agent",
"tool": "WriteFileTool",
"decision": "DENY",
"rule": "default-deny",
"evaluationTime": "0.2ms"
}
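If you want to pin these decisions down, you can replay them against the client in simulate mode using the evaluate call from Step 3. This sketch assumes the metadata argument can be omitted; pass the same metadata dict as in Step 3 if your setup requires it.
# Expected decisions for the example policy above.
cases = [
    ("file_read", "./data/dataset.csv", "ALLOW"),
    ("file_read", ".env.local", "DENY"),
    ("shell_exec", "python3 ./scripts/transform.py", "ALLOW"),
    ("shell_exec", "curl https://example.com", "REQUIRE_APPROVAL"),
    ("file_write", "/etc/hosts", "DENY"),
]
for action_type, target, expected in cases:
    result = safeclaw.evaluate(action_type=action_type, target=target)
    assert result["decision"] == expected, f"{action_type} {target}: got {result['decision']}"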
Troubleshooting
Issue 1: ToolException not handled by agent
Symptom: Agent crashes when SafeClaw denies an action.
Fix: Set handle_tool_error=True on each wrapped tool. The ToolException message is then returned to the agent as the tool's observation instead of propagating as an unhandled exception, so the agent can attempt an alternative approach. (handle_parsing_errors=True in AgentExecutor only covers output-parsing errors, not tool errors.)
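For example, after building the tools list in Step 4 (handle_tool_error is a standard field on LangChain's BaseTool):
# Return ToolException messages to the agent as tool output instead of raising.
for t in tools:
    t.handle_tool_error = True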
Issue 2: Wrapper not intercepting async tool calls
Symptom: Async tool invocations bypass SafeClaw checks.
Fix: Override _arun in addition to _run. The safeclaw_wrap function must wrap both synchronous and asynchronous execution paths. Use safeclaw.evaluate_async() for the async path.
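A minimal sketch of the async path, to be added inside safeclaw_wrap next to gated_run. It assumes evaluate_async is awaitable and returns the same decision dict as evaluate; adjust to the actual client API.
    original_arun = tool._arun

    async def gated_arun(*args: Any, **kwargs: Any) -> str:
        target = str(args[0]) if args else " ".join(str(v) for v in kwargs.values())
        result = await safeclaw.evaluate_async(  # assumed awaitable; check the SafeClaw Python docs
            action_type=action_type,
            target=target,
            metadata={"agent": "langchain-agent", "tool": tool.name},
        )
        if result["decision"] != "ALLOW":
            raise ToolException(f"SafeClaw {result['decision']}: {target}")
        return await original_arun(*args, **kwargs)

    tool._arun = gated_arun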
Issue 3: Policy file not found
Symptom: SafeClaw returns ConfigError: safeclaw.config.yaml not found.
Fix: Run npx @authensor/safeclaw init in the project root. Ensure the working directory of your Python process matches the directory containing safeclaw.config.yaml.
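A quick preflight check at the top of your Python entry point makes this failure obvious; it uses only the standard library:
from pathlib import Path

config = Path.cwd() / "safeclaw.config.yaml"
if not config.exists():
    raise SystemExit(
        f"safeclaw.config.yaml not found in {Path.cwd()}; "
        "run 'npx @authensor/safeclaw init' here or start Python from the project root"
    )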
Cross-References
- SafeClaw Policy Configuration Reference
- Glossary: Action-Level Gating
- FAQ: Does SafeClaw Support Python Natively?
- SafeClaw vs LangChain Permissions: Comparison
- Use Case: LangChain RAG Pipeline with SafeClaw
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw