2026-01-26 · Authensor

How to Add AI Agent Safety to Terraform Workflows

SafeClaw by Authensor adds deny-by-default action gating to AI agents that execute or generate Terraform configurations. When an AI agent creates, modifies, or applies Terraform plans, SafeClaw intercepts every action — file writes to .tf files, shell execution of terraform apply, and network requests to cloud APIs — and validates them against your YAML policy. Install with npx @authensor/safeclaw and prevent AI agents from making uncontrolled infrastructure changes.

Prerequisites

You'll need Node.js (the CI example below uses Node 20), the Terraform CLI, and an AI agent that generates or executes Terraform configurations.

Step 1 — Install SafeClaw

npx @authensor/safeclaw

Zero dependencies, MIT-licensed, 446 tests. SafeClaw integrates into the agent runtime that orchestrates Terraform.

Step 2 — Define an Infrastructure-Aware Policy

version: 1
defaultAction: deny

rules:
  # Agent can read existing Terraform files
  - action: "file:read"
    path: "/workspace/terraform/**"
    effect: allow

  # Agent can write to a staging directory only
  - action: "file:write"
    path: "/workspace/terraform/proposed/**"
    effect: allow

  # Agent CANNOT write directly to the live terraform directory
  - action: "file:write"
    path: "/workspace/terraform/*.tf"
    effect: deny
    reason: "Agent must write to proposed/ for human review"

  # Agent can run terraform plan but NOT apply
  - action: "shell:execute"
    command: "terraform plan *"
    effect: allow

  - action: "shell:execute"
    command: "terraform validate *"
    effect: allow

  - action: "shell:execute"
    command: "terraform apply *"
    effect: deny
    reason: "terraform apply requires human approval"

  - action: "shell:execute"
    command: "terraform destroy *"
    effect: deny
    reason: "terraform destroy is never agent-executable"

  # Block state manipulation
  - action: "shell:execute"
    command: "terraform state *"
    effect: deny
    reason: "State manipulation blocked"

  - action: "env:read"
    key: "AWS_SECRET_ACCESS_KEY"
    effect: deny

  - action: "env:read"
    key: "ARM_CLIENT_SECRET"
    effect: deny

  - action: "env:read"
    key: "GOOGLE_CREDENTIALS"
    effect: deny

This policy lets the agent read existing infrastructure, propose new configurations in a staging directory, and run terraform plan for validation — but blocks terraform apply, terraform destroy, and direct state manipulation.
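
To make the evaluation model concrete, here is a toy sketch of deny-by-default, first-match rule evaluation in Python. The rule list, the fnmatch-style patterns, and the evaluate function are illustrative assumptions for this article, not SafeClaw's actual matching engine:

```python
from fnmatch import fnmatch

# Hypothetical model: rules are checked in order, the first match wins,
# and anything unmatched falls through to the default deny.
RULES = [
    {"action": "file:write", "pattern": "/workspace/terraform/proposed/*", "effect": "allow"},
    {"action": "file:write", "pattern": "/workspace/terraform/*.tf", "effect": "deny"},
    {"action": "shell:execute", "pattern": "terraform plan*", "effect": "allow"},
    {"action": "shell:execute", "pattern": "terraform apply*", "effect": "deny"},
]

def evaluate(action: str, target: str) -> str:
    """Return 'allow' or 'deny' for an attempted action."""
    for rule in RULES:
        if rule["action"] == action and fnmatch(target, rule["pattern"]):
            return rule["effect"]
    return "deny"  # defaultAction: deny — unmatched actions are blocked
```

Note the ordering: the allow rule for proposed/ precedes the deny rule for *.tf, so a write inside the staging directory is permitted while a write to the live directory is not, and an action type with no rule at all (say, a network request) is denied by default.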

Step 3 — CI Pipeline with Safety Gates

GitHub Actions Example

name: Terraform with AI Safety

on:
  pull_request:
    paths:
      - "terraform/**"

jobs:
  ai-terraform-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - uses: hashicorp/setup-terraform@v3

      - name: Install dependencies
        run: npm ci

      - name: Validate SafeClaw policy
        run: npx @authensor/safeclaw validate

      - name: AI agent proposes changes
        run: npx @authensor/safeclaw test --simulation

      - name: Terraform validate
        run: terraform -chdir=terraform/proposed validate

      - name: Terraform plan
        run: terraform -chdir=terraform/proposed plan -out=tfplan

      - name: Store plan artifact
        uses: actions/upload-artifact@v4
        with:
          name: terraform-plan
          path: terraform/proposed/tfplan

The AI agent proposes Terraform changes in a PR. SafeClaw ensures it can only write to the proposed/ directory and run terraform plan. A human reviews the plan and manually runs terraform apply.
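
One way to wire up that human approval is a separate, manually dispatched apply workflow. The sketch below is hypothetical: with actions/download-artifact@v4, fetching an artifact produced by a different workflow run additionally requires the run-id and github-token inputs, so adapt the plan handoff to your setup.

```yaml
# Hypothetical companion workflow: apply runs only when a human dispatches it.
name: Terraform Apply (manual)

on:
  workflow_dispatch:

jobs:
  apply:
    runs-on: ubuntu-latest
    environment: production  # optionally gate with required reviewers
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - uses: actions/download-artifact@v4
        with:
          name: terraform-plan
          path: terraform/proposed
      - run: terraform -chdir=terraform/proposed init
      - run: terraform -chdir=terraform/proposed apply tfplan
```

Because the saved plan is applied verbatim, the reviewer approves exactly the changes they saw in the PR, not whatever the agent might generate on a re-run.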

Step 4 — Terraform Cloud / Terraform Enterprise

If you use Terraform Cloud, the AI agent should only be able to trigger plan runs, never applies:

rules:
  - action: "network:request"
    host: "app.terraform.io"
    method: "POST"
    path: "/api/v2/runs"
    effect: allow

  - action: "network:request"
    host: "app.terraform.io"
    method: "POST"
    path: "/api/v2/runs/*/actions/apply"
    effect: deny
    reason: "Apply must be triggered manually in Terraform Cloud"

Step 5 — Audit Infrastructure Actions

SafeClaw's hash-chained audit log captures every action the AI agent attempted against your Terraform codebase:

npx @authensor/safeclaw audit verify --last 100

The audit trail shows exactly which .tf files the agent tried to modify, which terraform commands it attempted, and whether each was allowed or denied. This is critical for compliance when AI agents participate in infrastructure management.
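
Hash chaining itself is easy to illustrate. The sketch below is a generic toy model of a hash-chained log and assumes nothing about SafeClaw's actual on-disk format: each entry commits to the hash of the previous one, so any retroactive edit invalidates every hash that follows it.

```python
import hashlib
import json

def entry_hash(body: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(log: list, record: dict) -> None:
    """Append a record, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"record": record, "prev": prev}
    log.append({**body, "hash": entry_hash(body)})

def verify(log: list) -> bool:
    """Recompute the chain; any tampered or reordered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        if entry_hash({"record": entry["record"], "prev": entry["prev"]}) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Editing any earlier record changes its hash, so verification fails from that entry onward; an attacker would have to rewrite the entire suffix of the log to hide a change.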

Step 6 — Sentinel Policy Integration

If you use HashiCorp Sentinel for policy-as-code on the Terraform side, SafeClaw complements it: Sentinel evaluates what a proposed plan would change in your infrastructure, while SafeClaw gates which actions the agent process itself may take. Together they provide two layers of policy enforcement: one for the infrastructure, one for the agent.
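
For example, a minimal Sentinel policy (a sketch against the tfplan/v2 import; adjust to your own plan data and enforcement levels) can reject any run that would delete a resource, backstopping SafeClaw's agent-side deny on terraform destroy:

```sentinel
import "tfplan/v2" as tfplan

# Reject any plan that would delete a resource
main = rule {
    all tfplan.resource_changes as _, rc {
        not (rc.change.actions contains "delete")
    }
}
```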

Why This Matters

An AI agent with unrestricted terraform apply access can destroy production infrastructure in seconds. SafeClaw ensures agents can propose and validate changes but never execute destructive operations without human approval.


Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw