2026-01-20 · Authensor

How to Add AI Agent Safety to VS Code

SafeClaw by Authensor brings deny-by-default action gating to any AI agent running inside Visual Studio Code. With 446 tests, hash-chained audit logs, and support for both Claude and OpenAI, SafeClaw ensures that no AI agent action executes without explicit approval. This guide walks you through installing and configuring SafeClaw in your VS Code environment.

Prerequisites

Before you begin, you will need:

- Visual Studio Code
- Node.js with npm, so the npx command used below is available

Step 1: Install SafeClaw via the Integrated Terminal

Open the VS Code integrated terminal with Ctrl+` (backtick) and run:

npx @authensor/safeclaw

This downloads and initializes SafeClaw in your project root. A .safeclaw/ directory will be created containing the default policy file and audit log store.

Step 2: Create a Workspace Policy File

SafeClaw uses a YAML policy file to define which actions are permitted. Create .safeclaw/policy.yaml in your workspace root:

version: 1
default: deny

rules:
  - action: file.read
    paths:
      - "src/**"
      - "docs/**"
    decision: allow

  - action: file.write
    paths:
      - "src/**"
    decision: prompt

  - action: shell.execute
    decision: deny

  - action: network.request
    domains:
      - "api.openai.com"
      - "api.anthropic.com"
    decision: allow

This policy denies all actions by default, allows file reads in src/ and docs/, prompts for confirmation on writes to src/, blocks shell execution entirely, and permits network requests only to OpenAI and Anthropic APIs.
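To make the evaluation model concrete, here is an illustrative Python sketch of deny-by-default, first-match rule evaluation. It mirrors the policy above (the network rule is omitted for brevity), but the matching semantics (glob patterns via fnmatch) and the rule structure are assumptions for explanation, not SafeClaw's actual engine.

```python
from fnmatch import fnmatch

# Illustrative policy mirroring the YAML above; a sketch, not
# SafeClaw's real implementation.
POLICY = {
    "default": "deny",
    "rules": [
        {"action": "file.read", "paths": ["src/**", "docs/**"], "decision": "allow"},
        {"action": "file.write", "paths": ["src/**"], "decision": "prompt"},
        {"action": "shell.execute", "decision": "deny"},
    ],
}

def evaluate(action: str, path: str = "") -> str:
    """Return the first matching rule's decision, else the default."""
    for rule in POLICY["rules"]:
        if rule["action"] != action:
            continue
        patterns = rule.get("paths")
        # A rule with no path constraint matches on the action alone.
        if patterns is None or any(fnmatch(path, p) for p in patterns):
            return rule["decision"]
    return POLICY["default"]  # deny-by-default: unmatched actions are refused
```

Note that an action with no matching rule at all, such as a newly introduced capability, falls through to the deny default, which is the core of the safety model.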

Step 3: Configure VS Code Workspace Settings

Add SafeClaw to your .vscode/settings.json to enable automatic policy enforcement when VS Code starts:

{
  "safeclaw.enabled": true,
  "safeclaw.policyPath": ".safeclaw/policy.yaml",
  "safeclaw.auditLog": true,
  "safeclaw.hashChain": true,
  "safeclaw.notifyOnDeny": true
}

Setting notifyOnDeny to true triggers a VS Code notification each time SafeClaw blocks an agent action, giving you immediate visibility without disrupting your workflow.

Step 4: Add a VS Code Task for Audit Review

Create a task in .vscode/tasks.json so you can review the hash-chained audit log directly from the command palette:

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "SafeClaw: Review Audit Log",
      "type": "shell",
      "command": "npx @authensor/safeclaw audit --verify",
      "problemMatcher": []
    }
  ]
}

Run this task with Ctrl+Shift+P then "Tasks: Run Task" to verify the integrity of every logged action. The hash chain ensures that no log entry has been tampered with after the fact.
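Conceptually, a hash chain works by hashing each entry together with the hash of the entry before it, so altering any record invalidates every hash after it. The sketch below illustrates the idea in Python; the field names, genesis value, and SHA-256 scheme are assumptions for explanation, not SafeClaw's actual log format.

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed starting value for the chain

def entry_hash(prev_hash: str, entry: dict) -> str:
    """Bind an entry to its predecessor by hashing both together."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, entry: dict) -> None:
    """Append an entry, linking it to the previous record's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"entry": entry, "hash": entry_hash(prev, entry)})

def verify(log: list) -> bool:
    """Recompute every link; tampering breaks all downstream hashes."""
    prev = GENESIS
    for record in log:
        if record["hash"] != entry_hash(prev, record["entry"]):
            return False
        prev = record["hash"]
    return True
```

Because each hash covers the previous one, an attacker who edits an old entry would have to recompute every later hash as well, which is exactly what the `--verify` check detects.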

Step 5: Test the Integration

Open any AI agent extension in VS Code and issue a command that would write a file. SafeClaw should intercept the action, check it against your policy, and either allow, prompt, or deny it. Check the audit log to confirm:

npx @authensor/safeclaw audit --tail 5

You should see entries with action type, decision, timestamp, and the hash chain linking each entry to the previous one.
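To make those fields concrete, here is a hypothetical Python sketch that parses the last few entries of a JSON Lines audit log. The file format and field names (action, decision, timestamp, hash) are assumptions inferred from the output described above, not SafeClaw's documented schema.

```python
import json

def tail(log_text: str, n: int = 5) -> list[dict]:
    """Parse the last n records from a JSON Lines audit log."""
    lines = [ln for ln in log_text.splitlines() if ln.strip()]
    return [json.loads(ln) for ln in lines[-n:]]

# Hypothetical records; the field names and hash values are assumptions.
sample = "\n".join([
    json.dumps({"action": "file.read", "decision": "allow",
                "timestamp": "2026-01-20T10:00:00Z", "hash": "aa11"}),
    json.dumps({"action": "shell.execute", "decision": "deny",
                "timestamp": "2026-01-20T10:00:05Z", "hash": "bb22"}),
])
```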


Summary

SafeClaw integrates cleanly into VS Code through the integrated terminal, workspace settings, and task runner. The deny-by-default model means your AI agents start with zero permissions and gain only what you explicitly grant. With hash-chained audit logging, every decision is recorded immutably. SafeClaw is MIT licensed and open source.


Try SafeClaw

Action-level gating for AI agents. Set it up in your terminal in 60 seconds.

$ npx @authensor/safeclaw