2025-12-15 · Authensor

How to Use SafeClaw Simulation Mode

Writing security policies for AI agents is iterative: you do not know whether your rules are right until you see them in action. But enforcing untested rules means your agent gets blocked mid-task when a rule is too narrow -- or, worse, a rule that is too broad lets something through that it should not.

SafeClaw's simulation mode solves this. It evaluates every agent action against your policy but does not actually block anything. Instead, it logs what it would allow and what it would deny. You review the log, adjust your rules, and repeat until the policy matches your intent. Then you switch to enforcement with confidence.

What Simulation Mode Does

In normal (enforcement) mode, SafeClaw intercepts every action your AI agent attempts -- file writes, shell commands, network requests -- and either allows or blocks it based on your policy rules.

In simulation mode, the same evaluation happens. Every action is checked against your rules. The first-match-wins logic runs. The deny-by-default fallback applies. But the final step is different: instead of blocking denied actions, SafeClaw lets them through and logs the decision it would have made.
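
Conceptually, the only difference between the two modes is that last step. Here is a minimal sketch of the evaluation loop in TypeScript -- illustrative only, with names and types that are assumptions rather than SafeClaw's actual internals:

// A minimal, illustrative sketch of first-match-wins evaluation with a
// simulation flag. Names and types here are assumptions for explanation,
// not SafeClaw's real API.

type Effect = "allow" | "deny";

interface Rule {
  name: string;            // e.g. "allow-src-writes" (hypothetical)
  action: string;          // "file_write", "shell_exec", or "network"
  effect: Effect;
  matches: (target: string) => boolean;
}

interface Decision {
  effect: Effect;
  matchedRule: string;     // rule name, or "deny-by-default"
}

function evaluate(rules: Rule[], action: string, target: string): Decision {
  // First-match-wins: the first rule for this action type that matches decides.
  for (const rule of rules) {
    if (rule.action === action && rule.matches(target)) {
      return { effect: rule.effect, matchedRule: rule.name };
    }
  }
  // Deny-by-default: nothing matched, so the action would be denied.
  return { effect: "deny", matchedRule: "deny-by-default" };
}

function handle(rules: Rule[], action: string, target: string, simulate: boolean): boolean {
  const decision = evaluate(rules, action, target);
  if (simulate) {
    // Simulation mode: record the would-be decision, then let the action through.
    console.log(`${action} ${target} would_${decision.effect} (${decision.matchedRule})`);
    return true;
  }
  // Enforcement mode: denied actions are actually blocked.
  return decision.effect === "allow";
}

In enforcement mode the return value gates the action; in simulation mode it is always true and only the log changes.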

The result is a complete log showing every action your agent attempted, the decision SafeClaw would have made for each one, and the rule -- or deny-by-default fallback -- behind that decision.

Your agent runs uninterrupted. Your log fills up with data. You review the data and fix your rules.

Setting Up Simulation Mode

Step 1: Install SafeClaw

If you have not already:

npx @authensor/safeclaw

The browser dashboard opens automatically.

Step 2: Create or Select a Policy

You need at least one policy to simulate against. If you do not have one yet, create a starter policy in the dashboard. Here is a minimal example:

{
  "name": "dev-agent-draft",
  "rules": [
    {
      "action": "file_write",
      "effect": "deny",
      "pathPattern": "*/.env"
    },
    {
      "action": "file_write",
      "effect": "deny",
      "pathPattern": "/.ssh/"
    },
    {
      "action": "file_write",
      "effect": "allow",
      "pathPattern": "/home/user/project/src/**"
    },
    {
      "action": "shell_exec",
      "effect": "allow",
      "command": "npm test"
    },
    {
      "action": "shell_exec",
      "effect": "allow",
      "command": "npm run build"
    },
    {
      "action": "network",
      "effect": "allow",
      "destination": "registry.npmjs.org"
    }
  ]
}

This is intentionally incomplete. You are going to discover what is missing through simulation.

Step 3: Enable Simulation Mode

In the SafeClaw dashboard, find the Simulation Mode toggle for your policy. Turn it on.

When simulation mode is active, the dashboard shows a clear indicator, so you will not mistakenly think you are protected while you are still only simulating.

Step 4: Run Your Agent

Start your AI agent -- Claude, OpenAI, LangChain, whatever you use -- and let it work through a real task. Do not use a toy example. Run a realistic workflow: implementing a feature, installing dependencies, writing and running tests, building the project.

The more realistic the workflow, the more useful your simulation data will be.

Reading the Simulation Log

After your agent completes its task (or a meaningful portion of it), open the Simulation Log in the SafeClaw dashboard.

Each entry shows:

| Field | Description |
|---|---|
| timestamp | When the action was attempted |
| actionType | file_write, shell_exec, or network |
| target | The specific file, command, or destination |
| decision | would_allow or would_deny |
| matchedRule | Which rule triggered (or "deny-by-default" if no rule matched) |
| agentId | Which agent attempted the action |
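
If you prefer to think of an entry as structured data, it maps naturally onto an interface like the following -- an illustrative TypeScript shape built from the fields above, not an official schema:

// Illustrative shape of one simulation log entry, derived from the field
// table above. This is an assumption for explanation, not an official schema.
interface SimulationLogEntry {
  timestamp: string;                                   // when the action was attempted
  actionType: "file_write" | "shell_exec" | "network";
  target: string;                                      // file path, command, or destination
  decision: "would_allow" | "would_deny";
  matchedRule: string;                                 // rule name, or "deny-by-default"
  agentId: string;                                     // which agent attempted the action
}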

Example Simulation Log

14:22:01.003  file_write   /home/user/project/src/index.ts      would_allow  rule: allow-src-writes
14:22:01.247  file_write   /home/user/project/src/utils.ts      would_allow  rule: allow-src-writes
14:22:02.891  shell_exec   npm install lodash                    would_deny   deny-by-default
14:22:03.114  shell_exec   npm test                              would_allow  rule: allow-npm-test
14:22:15.502  file_write   /home/user/project/package.json       would_deny   deny-by-default
14:22:15.819  file_write   /home/user/project/tsconfig.json      would_deny   deny-by-default
14:22:16.003  network      registry.npmjs.org                    would_allow  rule: allow-npm-registry
14:22:16.447  shell_exec   npx tsc --noEmit                      would_deny   deny-by-default
14:22:17.112  file_write   /home/user/project/tests/index.test.ts  would_deny   deny-by-default

Analyzing the Results

Go through each "would deny" entry and ask: should this have been allowed?

From the example log above:

| Action | Should it be allowed? | Fix |
|---|---|---|
| npm install lodash | Yes, the agent needs to install packages | Add allow rule for npm install |
| Write package.json | Yes, npm install modifies it | Add allow rule for package.json |
| Write tsconfig.json | Yes, if the agent is configuring TypeScript | Add allow rule for tsconfig.json |
| npx tsc --noEmit | Yes, type checking is legitimate | Add allow rule for npx tsc --noEmit |
| Write test files | Yes, the agent should write tests | Add allow rule for tests/** |

Every "would deny" entry on a legitimate action is a missing rule. Every "would allow" entry on a risky action is a rule that is too broad.

Iterating on Your Policy

Based on the simulation analysis, update your policy:

{
  "name": "dev-agent-v2",
  "rules": [
    {
      "action": "file_write",
      "effect": "deny",
      "pathPattern": "*/.env"
    },
    {
      "action": "file_write",
      "effect": "deny",
      "pathPattern": "/.ssh/"
    },
    {
      "action": "file_write",
      "effect": "deny",
      "pathPattern": "/.aws/"
    },
    {
      "action": "file_write",
      "effect": "allow",
      "pathPattern": "/home/user/project/src/**"
    },
    {
      "action": "file_write",
      "effect": "allow",
      "pathPattern": "/home/user/project/tests/**"
    },
    {
      "action": "file_write",
      "effect": "allow",
      "pathPattern": "/home/user/project/package.json"
    },
    {
      "action": "file_write",
      "effect": "allow",
      "pathPattern": "/home/user/project/tsconfig.json"
    },
    {
      "action": "shell_exec",
      "effect": "allow",
      "command": "npm install"
    },
    {
      "action": "shell_exec",
      "effect": "allow",
      "command": "npm test"
    },
    {
      "action": "shell_exec",
      "effect": "allow",
      "command": "npm run build"
    },
    {
      "action": "shell_exec",
      "effect": "allow",
      "command": "npx tsc --noEmit"
    },
    {
      "action": "network",
      "effect": "allow",
      "destination": "registry.npmjs.org"
    }
  ]
}

Now run another simulation with the updated policy. Repeat until:

  1. Every legitimate action shows "would allow."
  2. No risky action shows "would allow."
  3. The deny-by-default entries are for actions you genuinely want blocked.
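
If you export the simulation log, a few lines of scripting speed up each review pass. A sketch, assuming a JSON Lines export with the fields from the table above (the file name and export format are assumptions, not a documented SafeClaw feature):

// Sketch: group would_deny entries from an exported simulation log so each
// missing rule only needs to be reviewed once. Assumes a JSON Lines file
// (one entry per line) with the fields shown earlier; the export format and
// file name are assumptions, not a documented SafeClaw feature.
import { readFileSync } from "node:fs";

interface Entry {
  actionType: string;
  target: string;
  decision: string;
}

const entries: Entry[] = readFileSync("simulation-log.jsonl", "utf8")
  .split("\n")
  .filter((line) => line.trim().length > 0)
  .map((line) => JSON.parse(line));

// Count each denied (actionType, target) pair.
const denied = new Map<string, number>();
for (const e of entries) {
  if (e.decision === "would_deny") {
    const key = `${e.actionType}  ${e.target}`;
    denied.set(key, (denied.get(key) ?? 0) + 1);
  }
}

// Every line printed here is either a missing allow rule or an action you
// genuinely want blocked.
for (const [key, count] of denied) {
  console.log(`${count}x  ${key}`);
}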

The Iteration Cycle

The typical simulation workflow looks like this:

Draft policy
    -> Enable simulation mode
    -> Run agent through real tasks
    -> Review simulation log
    -> Identify missing allow rules
    -> Identify overly broad allow rules
    -> Update policy
    -> Run another simulation
    -> Review again
    -> Repeat until clean
    -> Switch to enforcement

Most teams go through 2-4 iterations before their policy is solid. The first draft usually misses 30-50% of the actions the agent actually needs. That is expected. Simulation mode is designed for this.

When to Switch to Enforcement

Switch to enforcement when a full simulation run comes back clean: every legitimate action shows "would allow," no risky action does, and the remaining deny-by-default entries are all actions you genuinely want blocked.

In the dashboard, turn off the simulation mode toggle. Your policy is now live. Denied actions are actually blocked.

Simulation Mode After Enforcement

Simulation mode is not just for initial setup. Re-enable it whenever you change an existing policy, add a new agent, or point your agent at a new kind of task -- any time you want to see how a rule change behaves before it can block real work.

Simulation Mode and the Audit Trail

Simulation mode entries are recorded in the tamper-proof audit trail, just like enforcement entries. Each simulation log entry is SHA-256 hashed and chained to the previous entry. The decision field shows simulate_allow or simulate_deny instead of allow or deny.
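
Hash chaining means each entry commits to the one before it, so editing any entry breaks every hash that follows. A minimal sketch of how such a chain can be verified, assuming each record stores its own hash and the previous entry's hash (field names and layout are assumptions, not SafeClaw's storage format):

// Sketch of verifying a SHA-256 hash chain. Field names and the hashing
// layout are assumptions for illustration, not SafeClaw's storage format.
import { createHash } from "node:crypto";

interface ChainedEntry {
  payload: string;   // the serialized log entry
  prevHash: string;  // hash of the previous entry ("0".repeat(64) for the first)
  hash: string;      // SHA-256 of prevHash + payload
}

function verifyChain(entries: ChainedEntry[]): boolean {
  let prev = "0".repeat(64);
  for (const entry of entries) {
    const expected = createHash("sha256")
      .update(entry.prevHash + entry.payload)
      .digest("hex");
    // Tampering with an earlier entry changes its hash, which breaks the
    // prevHash link of every entry after it.
    if (entry.prevHash !== prev || entry.hash !== expected) return false;
    prev = entry.hash;
  }
  return true;
}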

This means you have a complete, verifiable record of your policy testing process. If an auditor asks how you validated your security policy, the simulation log is your evidence.

Common Simulation Mode Mistakes

Running too short a simulation. A 5-minute test will not catch edge cases. Let your agent work through a full task.

Using artificial tasks. "Write hello world to a file" does not test your policy. Use real project work.

Ignoring "would allow" entries. Most people focus on fixing denied actions. Equally important: review what was allowed and ask whether it should have been.

Not iterating. One round of simulation is rarely enough. Plan for 2-4 iterations.

Forgetting to disable simulation mode. Simulation mode provides zero protection. It is a testing tool, not a security measure. When you are done testing, switch to enforcement.

Getting Started

npx @authensor/safeclaw

Free tier. No credit card. 7-day renewable keys. SafeClaw is built on the Authensor framework -- 446 tests, TypeScript strict mode, zero dependencies. Works with Claude, OpenAI, and LangChain. 100% open source client.

Write your draft policy. Enable simulation mode. Let your agent run. Review the log. Iterate. Enforce. That is the workflow. Visit safeclaw.onrender.com or authensor.com for full documentation.

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw