How to Add AI Agent Safety to Sublime Text
SafeClaw by Authensor provides deny-by-default AI agent safety for Sublime Text users. If you use AI-powered plugins like Sublime Copilot or custom LLM integrations, SafeClaw gates every action through your policy before execution. It logs all decisions to a hash-chained audit trail, supports both Claude and OpenAI, and is backed by 446 tests.
Prerequisites
- Sublime Text 4 (Build 4143 or later)
- Node.js 18+ (see the version check below)
- Package Control installed
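Before installing, confirm the Node.js requirement from any terminal:

node --version   # should print v18.0.0 or later
npx --version    # npx ships with npm/Node.js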
Step 1: Install SafeClaw
Sublime Text's built-in console (Ctrl+`) is a Python interpreter, not a shell, so open your system terminal (or a terminal plugin such as Terminus) in your project root and run:
npx @authensor/safeclaw
This creates the .safeclaw/ directory in your project root with a default deny-all policy and empty audit log.
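The exact contents may differ between SafeClaw versions; aside from policy.yaml, treat this layout as illustrative:

.safeclaw/
├── policy.yaml   # starter deny-all policy, replaced in Step 2
└── audit.log     # hash-chained audit trail (name assumed; empty until the first decision)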
Step 2: Define Your Policy
Create .safeclaw/policy.yaml. Rules are matched per action and path; anything that matches no rule falls back to the deny default:
version: 1
default: deny
rules:
  - action: file.read
    paths:
      - "src/**"
      - "lib/**"
      - "tests/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
      - "lib/**"
    decision: prompt
  - action: shell.execute
    decision: deny
  - action: network.request
    domains:
      - "api.openai.com"
      - "api.anthropic.com"
    decision: allow
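Before going further, confirm the file parses and the rules are well-formed. This is the same validator the build system in Step 3 exposes:

npx @authensor/safeclaw policy --validate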
Step 3: Create a Sublime Build System
Sublime Text build systems let you run SafeClaw commands with a keyboard shortcut. Create a file at ~/.config/sublime-text/Packages/User/SafeClaw.sublime-build (Linux) or ~/Library/Application Support/Sublime Text/Packages/User/SafeClaw.sublime-build (macOS):
{
    "shell_cmd": "npx @authensor/safeclaw audit --tail 10",
    "working_dir": "$project_path",
    "selector": "",
    "variants": [
        {
            "name": "Verify Chain",
            "shell_cmd": "npx @authensor/safeclaw audit --verify"
        },
        {
            "name": "Status",
            "shell_cmd": "npx @authensor/safeclaw status"
        },
        {
            "name": "Validate Policy",
            "shell_cmd": "npx @authensor/safeclaw policy --validate"
        }
    ]
}
Press Ctrl+B to run the default audit command, or Ctrl+Shift+B (Cmd+Shift+B on macOS) to choose one of the variants.
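If you run one variant constantly, you can bind it straight to a key. Sublime's build command accepts a variant argument, so an entry like this in Preferences > Key Bindings works (the ctrl+alt+v chord is just an example):

[
    { "keys": ["ctrl+alt+v"], "command": "build", "args": { "variant": "Verify Chain" } }
]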
Step 4: Add a Sublime Project Configuration
In your .sublime-project file, add SafeClaw settings:
{
    "folders": [
        {
            "path": "."
        }
    ],
    "settings": {
        "safeclaw.enabled": true,
        "safeclaw.policy_path": ".safeclaw/policy.yaml"
    },
    "build_systems": [
        {
            "name": "SafeClaw Audit",
            "shell_cmd": "npx @authensor/safeclaw audit --tail 10",
            "working_dir": "$project_path"
        }
    ]
}
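With the project open, a quick sanity check is the status command from Step 3; the exact output depends on the SafeClaw version, but it should reflect the policy at .safeclaw/policy.yaml:

npx @authensor/safeclaw status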
Step 5: Wrap AI Plugins with SafeClaw
If your AI agent plugin runs as a subprocess, wrap it:
npx @authensor/safeclaw wrap -- python3 ai_plugin_server.py
For Sublime LSP-based AI integrations, configure the server command in your LSP settings to run through SafeClaw:
{
    "clients": {
        "my-ai-agent": {
            "command": ["npx", "@authensor/safeclaw", "wrap", "--", "node", "ai-agent-lsp.js"],
            "selector": "source.python | source.js"
        }
    }
}
Step 6: Test the Integration
Trigger an AI agent action that attempts to write a file. SafeClaw should intercept it, apply the prompt decision from your file.write rule, and log the result. Then inspect the audit trail:
npx @authensor/safeclaw audit --tail 5
Each entry shows the action type, file path, decision, timestamp, and hash chain link.
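To confirm the chain itself has not been tampered with, run the verification command wired up in Step 3:

npx @authensor/safeclaw audit --verify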
Summary
SafeClaw adds a safety layer to Sublime Text through build systems, project configuration, and plugin wrapping. The deny-by-default model ensures AI agents cannot act outside your policy. Hash-chained audit logs provide tamper-evident evidence of every action. SafeClaw is MIT licensed and open source.
Related Guides
- How to Add AI Agent Safety to VS Code
- How to Add AI Agent Safety to Zed Editor
- How to Run AI Agents Safely from the Terminal
- How to Build an AI Agent Safety Dashboard in Grafana
Try SafeClaw
Action-level gating for AI agents. Set it up in your terminal in 60 seconds.
$ npx @authensor/safeclaw