How to Add AI Agent Safety to Ansible Playbooks
SafeClaw by Authensor provides deny-by-default action gating for AI agents that generate or execute Ansible playbooks. When an AI agent writes playbook YAML, runs ansible-playbook, or invokes specific Ansible modules, SafeClaw intercepts the action and validates it against your policy before execution. Install with npx @authensor/safeclaw and prevent AI agents from making uncontrolled configuration changes to your infrastructure.
Prerequisites
- Ansible installed in your CI/CD or operations environment
- Node.js 18+
- An AI agent that generates or triggers Ansible playbooks
- SafeClaw initialized in your project
Step 1 — Install SafeClaw
npx @authensor/safeclaw
Zero dependencies, MIT-licensed, 446 tests. SafeClaw wraps the agent runtime that orchestrates Ansible, not Ansible itself.
Step 2 — Define an Ansible-Aware Policy
version: 1
defaultAction: deny
rules:
  # Agent can read existing playbooks and roles
  - action: "file:read"
    path: "/workspace/ansible/**"
    effect: allow
  # Agent can write playbooks to a review directory
  - action: "file:write"
    path: "/workspace/ansible/proposed/**"
    effect: allow
  # Block writing directly to production playbooks
  - action: "file:write"
    path: "/workspace/ansible/playbooks/**"
    effect: deny
    reason: "Agent must write to proposed/ for review"
  # Allow ansible-playbook in check mode only
  - action: "shell:execute"
    command: "ansible-playbook --check *"
    effect: allow
  - action: "shell:execute"
    command: "ansible-playbook --diff --check *"
    effect: allow
  # Block actual ansible-playbook execution
  - action: "shell:execute"
    command: "ansible-playbook *"
    effect: deny
    reason: "Playbook execution requires human approval"
  # Block ansible ad-hoc commands entirely
  - action: "shell:execute"
    command: "ansible -m shell *"
    effect: deny
    reason: "Ad-hoc shell module execution blocked"
  - action: "shell:execute"
    command: "ansible -m command *"
    effect: deny
    reason: "Ad-hoc command module execution blocked"
  - action: "shell:execute"
    command: "ansible-vault *"
    effect: deny
    reason: "Vault operations blocked"
  - action: "env:read"
    key: "ANSIBLE_VAULT_PASSWORD"
    effect: deny
  - action: "env:read"
    key: "SSH_*"
    effect: deny
This policy allows the agent to read playbooks, write proposed changes for review, and run check mode (dry run) — but blocks actual execution, ad-hoc commands, and vault access.
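For example, a change the agent drops into the review directory might look like the following playbook (a hypothetical sketch; the file name, host group, and template path are illustrative and not part of SafeClaw):
# /workspace/ansible/proposed/update_nginx.yml (hypothetical example)
- name: Update nginx configuration (proposed by agent)
  hosts: webservers
  become: true
  tasks:
    - name: Deploy nginx config from template
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Reload nginx
  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
A human reviews the proposal, runs the check-mode dry run shown in Step 3, and only then promotes the file out of proposed/ for real execution.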
Step 3 — CI Pipeline Integration
GitHub Actions Example
name: AI Ansible Safety Check
on:
  pull_request:
    paths:
      - "ansible/**"
jobs:
  safety-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies
        run: npm ci
      - name: Validate SafeClaw policy
        run: npx @authensor/safeclaw validate
      - name: Run simulation
        run: npx @authensor/safeclaw test --simulation
      - name: Lint proposed playbooks
        run: ansible-lint ansible/proposed/
      - name: Check mode dry run
        run: ansible-playbook ansible/proposed/main.yml --check --diff
GitLab CI Example
ai-ansible-safety:
  stage: safety
  image: node:20-slim
  before_script:
    - apt-get update && apt-get install -y ansible ansible-lint
    - npm ci
  script:
    - npx @authensor/safeclaw validate
    - npx @authensor/safeclaw test --simulation
    - ansible-lint ansible/proposed/
  rules:
    - changes:
        - ansible/**
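Actual application of a reviewed playbook stays behind a manual gate. One way to express that in GitHub Actions is a separate job bound to a protected environment with required reviewers, appended under jobs: in the workflow above or in a dedicated deploy workflow (a sketch; the environment name ansible-prod and the playbook path are assumptions to adapt to your setup):
  apply-reviewed-playbook:
    needs: safety-check
    runs-on: ubuntu-latest
    # Pauses until a configured reviewer approves the ansible-prod environment
    environment: ansible-prod
    steps:
      - uses: actions/checkout@v4
      - name: Apply reviewed playbook
        run: ansible-playbook ansible/proposed/main.yml --diff
With this layout, the agent-facing policy never allows real execution, and the only path to ansible-playbook without --check runs through a human approval in CI.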
Step 4 — Block Destructive Modules
Certain Ansible modules are inherently dangerous when triggered by an AI agent. Enforce application-level blocks:
rules:
  # Block file deletion modules
  - action: "shell:execute"
    command: "-m file state=absent*"
    effect: deny
    reason: "File deletion module blocked"
  # Block service management
  - action: "shell:execute"
    command: "-m systemd state=stopped*"
    effect: deny
    reason: "Service stop blocked"
  # Block user management
  - action: "shell:execute"
    command: "-m user state=absent*"
    effect: deny
    reason: "User deletion blocked"
Step 5 — AWX / Ansible Tower Integration
If using AWX or Ansible Automation Platform, the AI agent should trigger job templates but never modify them:
rules:
  - action: "network:request"
    host: "awx.internal.example.com"
    method: "POST"
    path: "/api/v2/job_templates/*/launch/"
    effect: allow
  - action: "network:request"
    host: "awx.internal.example.com"
    method: "PATCH"
    effect: deny
    reason: "Agent cannot modify AWX job templates"
  - action: "network:request"
    host: "awx.internal.example.com"
    method: "DELETE"
    effect: deny
    reason: "Agent cannot delete AWX resources"
Step 6 — Audit Every Configuration Action
SafeClaw's hash-chained audit trail records every action the agent attempted:
npx @authensor/safeclaw audit verify --last 100
The audit log shows which playbooks the agent tried to write, which commands it attempted to execute, and whether each was allowed or denied. This is essential for change management compliance.
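You can also run the verification on a schedule so tampering with the chain is caught early. A minimal GitHub Actions sketch, assuming the audit log is available to the job (for example, committed to the repository or restored as an artifact):
name: SafeClaw Audit Verification
on:
  schedule:
    - cron: "0 6 * * *"  # daily at 06:00 UTC
jobs:
  audit-verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm ci
      - run: npx @authensor/safeclaw audit verify --last 100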
Why This Matters
Ansible playbooks can modify every server in your inventory with a single command. An AI agent with unrestricted ansible-playbook access can deploy broken configurations, stop services, or delete users across your entire fleet. SafeClaw ensures agents can propose and validate configurations but never apply them without human review.
Related Pages
- Prevent Agent System Config Changes
- Human-in-the-Loop Pattern
- Shell Execution Safety
- Policy-as-Code Pattern
Try SafeClaw
Action-level gating for AI agents. Set it up in your terminal in 60 seconds.
$ npx @authensor/safeclaw