How to Use Pre-Commit Hooks for AI Agent Safety
Pre-commit hooks catch unsafe AI agent policy changes before they enter your git history, providing the fastest possible feedback loop for developers. SafeClaw by Authensor integrates with git hooks to validate policy syntax, run simulation tests, and detect dangerous permission escalations at commit time. This means a developer cannot accidentally commit a policy that grants an AI agent unrestricted file access or shell execution — the commit is blocked before it happens.
Quick Start
npx @authensor/safeclaw
This scaffolds a .safeclaw/ directory containing your configuration, policies, and tests. Then set up the pre-commit hook as described below.
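The scaffold typically looks like this (illustrative layout inferred from the files referenced later in this guide; exact contents may vary by version):

```
.safeclaw/
├── config.yaml      # global SafeClaw settings
├── policies/        # agent policy files (e.g. coding-assistant.yaml)
└── tests/           # simulation tests (*.test.yaml)
```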
Step 1: Install the Pre-Commit Hook
SafeClaw includes a hook installer that adds validation to your git workflow:
npx @authensor/safeclaw hooks install
This creates a .git/hooks/pre-commit script that runs automatically before every commit. Alternatively, integrate with the pre-commit framework:
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: safeclaw-validate
        name: SafeClaw Policy Validation
        entry: npx @authensor/safeclaw validate
        language: system
        files: '\.safeclaw/'
        pass_filenames: false
      - id: safeclaw-test
        name: SafeClaw Simulation Tests
        entry: npx @authensor/safeclaw test
        language: system
        files: '\.safeclaw/'
        pass_filenames: false
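For reference, the script that `hooks install` places in .git/hooks/pre-commit behaves roughly like the sketch below (hypothetical contents, not the shipped script), running each check and aborting the commit on the first failure:

```shell
#!/bin/sh
# Illustrative sketch of a SafeClaw pre-commit hook (assumption: the real
# installer's script may differ). A non-zero exit from any check makes
# git abort the commit.
set -e
npx @authensor/safeclaw validate                 # 1. policy syntax
npx @authensor/safeclaw test                     # 2. simulation tests
npx @authensor/safeclaw diff --check-escalation  # 3. escalation check
```

If you use the pre-commit framework route instead, run `pre-commit install` once after adding the config above so the framework manages the hook for you.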
Step 2: What the Hook Validates
The pre-commit hook runs three checks:
1. Policy Syntax Validation — ensures all YAML policy files are well-formed and use valid rule syntax:
$ npx @authensor/safeclaw validate
✓ policies/coding-assistant.yaml — valid (12 rules)
✓ policies/devops-agent.yaml — valid (8 rules)
✓ config.yaml — valid
2. Simulation Tests — runs your .safeclaw/tests/*.test.yaml files to ensure policies produce expected results:
$ npx @authensor/safeclaw test
✓ file-access.test.yaml — 6/6 passed
✓ shell-commands.test.yaml — 4/4 passed
✓ network-access.test.yaml — 3/3 passed
3. Permission Escalation Detection — compares the staged policy changes against the current HEAD and flags any permission escalations:
$ npx @authensor/safeclaw diff --check-escalation
⚠ ESCALATION DETECTED in policies/coding-assistant.yaml:
Rule "allow-src-writes" now matches "**/*.ts" (was "src/**/*.ts")
This broadens file write access beyond the src directory.
Commit blocked. Review the escalation and re-commit with --allow-escalation if intentional.
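To see why this is flagged, compare the rule before and after the edit. The snippet below uses a hypothetical rule syntax (the field names are assumptions for illustration, not SafeClaw's documented schema) purely to show the broadened glob:

```yaml
# Before: writes confined to the src/ tree
- id: allow-src-writes
  action: file.write
  match: "src/**/*.ts"

# After: matches .ts files anywhere in the repo -- flagged as an escalation
- id: allow-src-writes
  action: file.write
  match: "**/*.ts"
```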
Step 3: Write Targeted Simulation Tests
Write tests that cover your most critical safety boundaries:
# .safeclaw/tests/critical-boundaries.test.yaml
tests:
  - name: "Cannot delete production database"
    action: shell.execute
    input:
      command: "dropdb production"
    expect:
      effect: deny

  - name: "Cannot read SSH keys"
    action: file.read
    input:
      path: "~/.ssh/id_rsa"
    expect:
      effect: deny

  - name: "Cannot push to main"
    action: shell.execute
    input:
      command: "git push origin main"
    expect:
      effect: deny

  - name: "Cannot access .env files"
    action: file.read
    input:
      path: ".env.production"
    expect:
      effect: deny
Step 4: Handle Hook Failures
When the pre-commit hook blocks a commit, the developer sees exactly what failed and why:
$ git commit -m "Update agent policies"
SafeClaw Pre-Commit Check
━━━━━━━━━━━━━━━━━━━━━━━━
✗ Policy Validation: FAILED
Error in policies/new-agent.yaml:14 — Unknown action type "file.execute"
Fix the error and try again.
The error message is actionable — it tells you the file, line number, and what is wrong. No guessing.
Bypassing the Hook (Emergency Only)
In genuine emergencies, developers can bypass the hook:
git commit --no-verify -m "Emergency: hotfix for production outage"
SafeClaw logs bypass events so they can be reviewed. Configure your CI pipeline as a second safety net for commits that bypass pre-commit hooks.
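As a sketch, that second safety net might rerun the same checks on every push (GitHub Actions shown here as one illustrative option; the workflow file name is an assumption, and the two `run` commands are the same ones the hook uses):

```yaml
# .github/workflows/safeclaw.yml (illustrative)
name: SafeClaw checks
on: [push, pull_request]
jobs:
  safeclaw:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      # Rerun the same checks the pre-commit hook performs, so a
      # --no-verify bypass is still caught before merge.
      - run: npx @authensor/safeclaw validate
      - run: npx @authensor/safeclaw test
```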
Why SafeClaw
- 446 tests ensuring the validation engine itself is reliable
- Deny-by-default — new actions are blocked until policies explicitly allow them
- Sub-millisecond evaluation — pre-commit hooks run in milliseconds, not seconds
- Hash-chained audit trail — even bypassed commits are logged for review
- Works with Claude AND OpenAI — one hook validates policies for any LLM backend
Cross-References
- How to Add AI Agent Safety to Your CI/CD Pipeline
- How to Test AI Agent Safety Policies
- Policy-as-Code Pattern
- Simulation Mode Explained
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw