How to Secure AI Agents Using Pulumi
SafeClaw by Authensor adds deny-by-default action gating to AI agents that generate or execute Pulumi infrastructure-as-code programs. When an AI agent writes Pulumi code, runs pulumi preview, or attempts pulumi up, SafeClaw intercepts every action and validates it against your YAML policy before execution. Install with npx @authensor/safeclaw and prevent AI agents from making uncontrolled infrastructure changes through Pulumi.
Prerequisites
- Pulumi CLI installed
- Node.js 18+ (Pulumi TypeScript/JavaScript projects)
- An AI agent that generates or runs Pulumi programs
- SafeClaw initialized in your project
Step 1 — Install SafeClaw
npx @authensor/safeclaw
Zero dependencies, MIT-licensed, 446 tests. SafeClaw wraps the agent runtime that orchestrates Pulumi, not Pulumi itself.
Step 2 — Define a Pulumi-Aware Policy
version: 1
defaultAction: deny
rules:
  # Agent can read existing Pulumi programs
  - action: "file:read"
    path: "/workspace/infra/**"
    effect: allow
  # Agent can write to a staging directory
  - action: "file:write"
    path: "/workspace/infra/proposed/**"
    effect: allow
  # Block writing directly to the live Pulumi program
  - action: "file:write"
    path: "/workspace/infra/index.ts"
    effect: deny
    reason: "Agent must write to proposed/ for review"
  # Allow pulumi preview (dry run)
  - action: "shell:execute"
    command: "pulumi preview *"
    effect: allow
  # Block pulumi up (actual deployment)
  - action: "shell:execute"
    command: "pulumi up *"
    effect: deny
    reason: "pulumi up requires human approval"
  # Block destructive operations
  - action: "shell:execute"
    command: "pulumi destroy *"
    effect: deny
    reason: "pulumi destroy is never agent-executable"
  # Block state manipulation
  - action: "shell:execute"
    command: "pulumi state *"
    effect: deny
    reason: "State manipulation blocked"
  # Block stack deletion
  - action: "shell:execute"
    command: "pulumi stack rm *"
    effect: deny
    reason: "Stack deletion blocked"
  # Block secret access
  - action: "shell:execute"
    command: "pulumi config --show-secrets *"
    effect: deny
    reason: "Secret exposure blocked"
  # Block access to Pulumi and cloud credentials
  - action: "env:read"
    key: "PULUMI_ACCESS_TOKEN"
    effect: deny
  - action: "env:read"
    key: "AWS_SECRET_ACCESS_KEY"
    effect: deny
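Under this policy the agent can read the live program but only write into the staging directory. A proposal is an ordinary Pulumi program; as an illustrative sketch (the bucket and tag names are hypothetical, not part of SafeClaw), an agent-written file in infra/proposed/ might look like:

import * as aws from "@pulumi/aws";

// Written by the agent into /workspace/infra/proposed/. A human reviews it,
// promotes it to the live program, and runs pulumi up manually.
const artifacts = new aws.s3.Bucket("artifacts", {
    acl: "private",
    tags: { managedBy: "pulumi", proposedBy: "ai-agent" },
});

export const bucketName = artifacts.id;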
Step 3 — CI Pipeline Integration
GitHub Actions Example
name: Pulumi AI Safety Check
on:
  pull_request:
    paths:
      - "infra/**"
jobs:
  safety-and-preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - name: Install dependencies
        run: npm ci
      - name: Validate SafeClaw policy
        run: npx @authensor/safeclaw validate
      - name: Run simulation tests
        run: npx @authensor/safeclaw test --simulation
      - name: Install Pulumi
        uses: pulumi/actions@v5
      - name: Pulumi preview
        run: pulumi preview --stack dev --cwd infra/proposed/
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
The flow: the agent proposes Pulumi code in infra/proposed/, SafeClaw validates the policy and runs its simulation tests, pulumi preview shows what would change, and a human reviews the PR and runs pulumi up manually.
GitLab CI Example
ai-pulumi-safety:
  stage: safety
  image: node:20  # the slim variant lacks curl, which the Pulumi installer needs
  script:
    - npm ci
    - npx @authensor/safeclaw validate
    - npx @authensor/safeclaw test --simulation
    - curl -fsSL https://get.pulumi.com | sh
    - export PATH=$PATH:$HOME/.pulumi/bin
    - pulumi preview --stack dev --cwd infra/proposed/
  rules:
    - changes:
        - infra/**
Step 4 — Pulumi Policy Packs + SafeClaw
Pulumi has its own Policy-as-Code system called Policy Packs (CrossGuard). SafeClaw complements it:
- Pulumi Policy Packs validate the infrastructure resources themselves (e.g., "no public S3 buckets", "all EC2 instances must have tags"); see the sketch after this list
- SafeClaw validates the AI agent's actions (e.g., "agent cannot run pulumi up", "agent cannot read secrets")
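For the resource side, a CrossGuard Policy Pack is itself a small TypeScript program built on @pulumi/policy. The sketch below, with an illustrative pack name and message, enforces the "no public S3 buckets" example:

import * as aws from "@pulumi/aws";
import { PolicyPack, validateResourceOfType } from "@pulumi/policy";

new PolicyPack("aws-safety", {
    policies: [{
        name: "no-public-s3",
        description: "S3 buckets may not be publicly readable.",
        enforcementLevel: "mandatory",
        // Checked for every aws.s3.Bucket during preview and update
        validateResource: validateResourceOfType(aws.s3.Bucket, (bucket, args, reportViolation) => {
            if (bucket.acl === "public-read" || bucket.acl === "public-read-write") {
                reportViolation("S3 buckets may not grant public read access.");
            }
        }),
    }],
});

Run it with pulumi preview --policy-pack <dir>: the Policy Pack gates what the infrastructure may contain, while the SafeClaw policy gates what the agent may do.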
Step 5 — Pulumi Cloud Integration
If you use Pulumi Cloud (formerly the Pulumi Service), the agent can trigger preview deployments but not stack updates:
rules:
  - action: "network:request"
    host: "api.pulumi.com"
    method: "GET"
    effect: allow
  - action: "network:request"
    host: "api.pulumi.com"
    method: "POST"
    path: "/api/stacks/*/preview"
    effect: allow
  - action: "network:request"
    host: "api.pulumi.com"
    method: "POST"
    path: "/api/stacks/*/update"
    effect: deny
    reason: "Agent cannot trigger stack updates"
Step 6 — Audit Infrastructure Actions
npx @authensor/safeclaw audit verify --last 100
The hash-chained audit log captures every Pulumi command the agent attempted, every file it tried to write, and whether each action was allowed or denied. This provides a compliance-grade record of AI-driven infrastructure changes.
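SafeClaw's on-disk log format isn't reproduced here, but the hash-chaining principle is straightforward: each entry commits to the hash of the entry before it, so editing or deleting any record invalidates every hash after it. A conceptual TypeScript sketch (not SafeClaw's implementation):

import { createHash } from "node:crypto";

interface AuditEntry {
    action: string;                 // e.g. "shell:execute"
    detail: string;                 // e.g. "pulumi up --stack dev"
    verdict: "allow" | "deny";
    prevHash: string;               // hash of the previous entry
    hash: string;                   // hash over this entry plus prevHash
}

function appendEntry(log: AuditEntry[], action: string, detail: string, verdict: "allow" | "deny"): AuditEntry {
    const prevHash = log.length ? log[log.length - 1].hash : "genesis";
    const hash = createHash("sha256")
        .update(`${action}|${detail}|${verdict}|${prevHash}`)
        .digest("hex");
    const entry = { action, detail, verdict, prevHash, hash };
    log.push(entry);
    return entry;
}

// Verification replays the chain; any tampering shows up as a hash mismatch.
function verifyChain(log: AuditEntry[]): boolean {
    let prevHash = "genesis";
    for (const e of log) {
        const expected = createHash("sha256")
            .update(`${e.action}|${e.detail}|${e.verdict}|${prevHash}`)
            .digest("hex");
        if (e.prevHash !== prevHash || e.hash !== expected) return false;
        prevHash = e.hash;
    }
    return true;
}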
Why This Matters
Pulumi programs use real programming languages (TypeScript, Python, Go), giving AI agents significant creative freedom. An agent could write a Pulumi program that provisions expensive resources, opens security groups, or deletes stacks. SafeClaw constrains the agent to proposing changes and running previews, never applying destructive operations.
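For instance, the hypothetical snippet below is valid TypeScript and valid Pulumi, yet it opens every port to the entire internet. Under the policy above the agent can propose and preview it, but the change waits for human review:

import * as aws from "@pulumi/aws";

// All protocols, all ports, from any address: exactly the kind of change
// that should stop at pulumi preview rather than reach pulumi up.
const wideOpen = new aws.ec2.SecurityGroup("wide-open", {
    ingress: [{
        protocol: "-1",
        fromPort: 0,
        toPort: 0,
        cidrBlocks: ["0.0.0.0/0"],
    }],
});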
Related Pages
- Prevent Agent Cloud Cost Runaway
- Human-in-the-Loop Pattern
- Shell Execution Safety
- Pre-Deploy AI Safety Checks
- Policy-as-Code Pattern
Try SafeClaw
Action-level gating for AI agents. Set it up in 60 seconds.
$ npx @authensor/safeclaw