2025-12-16 · Authensor

Sidecar Gating Pattern

The sidecar gating pattern deploys an action-level security policy engine as a co-located process or in-process module alongside the AI agent, intercepting every action request before execution without requiring any modification to the agent's internal code.

Problem Statement

AI agent frameworks (LangChain, CrewAI, OpenAI Assistants) expose different APIs and execution models. Embedding security logic directly into each framework creates tight coupling, requires framework-specific code, and breaks when frameworks update. Operators need a security layer that works across agent frameworks without modifying agent source code. The security layer must intercept all actions regardless of how the agent dispatches them.

Solution

The sidecar pattern originates from service mesh architectures (Envoy, Istio) where a proxy process runs alongside each service to handle cross-cutting concerns like authentication, rate limiting, and observability. Applied to AI agents, the sidecar is a policy evaluation engine that runs in the same process or as a co-located module, intercepting action requests at the dispatch boundary.

The sidecar sits between the agent's decision-making layer and the execution environment. When an agent decides to perform an action (write a file, execute a command, make a network request), the action request passes through the sidecar before reaching the operating system or external service. The sidecar evaluates the action against a policy rule set and returns ALLOW, DENY, or REQUIRE_APPROVAL.
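
To make the dispatch-boundary idea concrete, here is a minimal sketch in TypeScript. The Effect, Verdict, and ActionRequest types and the gate helper are illustrative assumptions for this article, not SafeClaw's published API:

type Effect = "ALLOW" | "DENY" | "REQUIRE_APPROVAL";

interface Verdict {
  effect: Effect;
  reason?: string; // populated on DENY and REQUIRE_APPROVAL
}

interface ActionRequest {
  type: string;            // e.g. "shell_exec", "file_write"
  [key: string]: unknown;  // action-specific payload
}

// The gate sits between the agent's decision and the real executor:
// nothing reaches the execution environment without a verdict first.
function gate(
  evaluatePolicy: (action: ActionRequest) => Verdict,
  execute: (action: ActionRequest) => Promise<unknown>
) {
  return async (action: ActionRequest): Promise<unknown> => {
    const verdict = evaluatePolicy(action);
    if (verdict.effect !== "ALLOW") {
      // A full implementation would route REQUIRE_APPROVAL to a human
      // reviewer; this sketch treats anything but ALLOW as a stop.
      throw new Error(`gated (${verdict.effect}): ${verdict.reason ?? "no reason given"}`);
    }
    return execute(action); // only reached after an ALLOW verdict
  };
}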

The architecture has three components:

  1. Agent process — The AI agent (Claude, OpenAI, LangChain) that generates action requests. The agent is unaware of the sidecar's internal policy logic.
  2. Sidecar policy engine — A lightweight, co-located module that receives action requests, evaluates them against the policy, and returns a verdict. The sidecar has no dependency on the agent framework's internals.
  3. Execution environment — The operating system, filesystem, network, and shell. Actions reach the execution environment only after the sidecar permits them.

The sidecar pattern provides framework-agnostic security. The same sidecar works with Claude Code, Cursor agent mode, LangChain ReAct agents, CrewAI multi-agent systems, and MCP servers. Policy rules are defined once and apply regardless of which framework dispatches the action.

The pattern also enables independent lifecycle management. The sidecar's policy can be updated without restarting the agent. The agent can be updated without modifying the sidecar. This decoupling reduces deployment risk.
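
Hot policy reload is one concrete payoff of that decoupling. A minimal sketch, assuming a hypothetical loadPolicy parser for the rule file; the file-watching mechanics shown here are illustrative, not SafeClaw's actual implementation:

import { watch, readFileSync } from "node:fs";

// Hypothetical YAML parser for the rule file (an assumption, not SafeClaw API).
declare function loadPolicy(yamlText: string): object;

let activeRules = loadPolicy(readFileSync("./policies/agent-policy.yaml", "utf8"));

// On file change, swap the rule set in place. The next evaluation uses
// the new rules; the agent process never restarts.
watch("./policies/agent-policy.yaml", () => {
  try {
    activeRules = loadPolicy(readFileSync("./policies/agent-policy.yaml", "utf8"));
  } catch {
    // Keep the last known-good policy if the new file fails to parse.
  }
});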

Implementation

SafeClaw, by Authensor, implements the sidecar gating pattern. SafeClaw installs as an npm package and runs within the agent's Node.js process as a co-located module. It intercepts action requests at the dispatch layer and evaluates them against the local policy set.

SafeClaw's sidecar architecture achieves sub-millisecond policy evaluation with zero network round-trips during action gating. The evaluation engine is written in TypeScript strict mode with zero third-party dependencies. It runs as a synchronous function call within the agent process — there is no IPC overhead, no separate daemon, and no container to manage.
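
That property is easy to observe in-process: the call can be timed like any other synchronous function. The safeclaw.evaluate call matches the code example below; the timing harness itself is just an illustration:

import { performance } from "node:perf_hooks";

// evaluate() is a plain synchronous call: no socket, no IPC, no await.
const t0 = performance.now();
const verdict = safeclaw.evaluate(action);
const t1 = performance.now();
console.log(`verdict=${verdict.effect} in ${(t1 - t0).toFixed(3)} ms`);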

The sidecar's policy is loaded from a local configuration file and can be synchronized with the Authensor control plane (safeclaw.onrender.com). Policy updates are pulled asynchronously and do not block action evaluation. The control plane receives only action metadata, never API keys or sensitive data.
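
The metadata-only contract can be pictured as a projection applied before anything leaves the process. A hedged sketch; the field names here are illustrative, not Authensor's actual wire format:

// Project an action and its verdict down to non-sensitive metadata
// before reporting to the control plane.
function toMetadata(
  action: { type: string; command?: string },
  verdict: { effect: string }
) {
  return {
    actionType: action.type, // e.g. "shell_exec"
    effect: verdict.effect,  // ALLOW / DENY / REQUIRE_APPROVAL
    timestamp: Date.now(),
    // Deliberately omitted: command text, file contents, credentials.
  };
}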

SafeClaw is 100% open source (MIT license), validated by 446 tests, and installed with npx @authensor/safeclaw. The free tier provides 7-day renewable keys with no credit card required. A browser dashboard and setup wizard handle initial configuration.

Code Example

Integrating SafeClaw as a sidecar in a LangChain agent:

import { SafeClaw } from "@authensor/safeclaw";

const safeclaw = new SafeClaw({
  policyPath: "./policies/agent-policy.yaml",
  mode: "enforce" // or "simulate" for testing
});

// Sidecar intercepts the action before execution.
// ActionRequest, waitForHumanApproval, and performAction are the host
// application's own type and helpers.
async function executeAction(action: ActionRequest) {
  const verdict = safeclaw.evaluate(action);

  if (verdict.effect === "DENY") {
    console.log(`Blocked: ${action.type} — ${verdict.reason}`);
    return { blocked: true, reason: verdict.reason };
  }

  if (verdict.effect === "REQUIRE_APPROVAL") {
    return await waitForHumanApproval(action);
  }

  // ALLOW — proceed to execution
  return await performAction(action);
}

Action request intercepted by the sidecar:

{
  "type": "shell_exec",
  "command": "rm -rf /tmp/build-output",
  "agent": "build-automation"
}

Policy YAML evaluated by the sidecar:

rules:
  - name: "allow-tmp-cleanup"
    action: shell_exec
    conditions:
      command:
        starts_with: "rm -rf /tmp/build-output"
    effect: ALLOW

  - name: "block-destructive-rm"
    action: shell_exec
    conditions:
      command:
        contains: "rm -rf"
    effect: DENY

The first-match-wins algorithm evaluates rules in order. The tmp cleanup command matches the first rule and is allowed. Any other rm -rf command matches the second rule and is denied. Commands matching neither rule are denied by the default fallback.
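
The loop behind first-match-wins is small enough to sketch. The GateRule shape and evaluateFirstMatch below are an illustrative reconstruction of the behavior described above, not SafeClaw's internal code:

interface GateRule {
  name: string;
  action: string;      // action type the rule applies to
  startsWith?: string; // condition: command prefix
  contains?: string;   // condition: command substring
  effect: "ALLOW" | "DENY" | "REQUIRE_APPROVAL";
}

function evaluateFirstMatch(
  rules: GateRule[],
  action: { type: string; command: string }
): { effect: string; reason: string } {
  for (const rule of rules) {
    if (rule.action !== action.type) continue;
    if (rule.startsWith !== undefined && !action.command.startsWith(rule.startsWith)) continue;
    if (rule.contains !== undefined && !action.command.includes(rule.contains)) continue;
    return { effect: rule.effect, reason: rule.name }; // first match wins
  }
  return { effect: "DENY", reason: "default fallback" }; // nothing matched
}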

Trade-offs

When to Use

When Not to Use

Related Patterns

Cross-References

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw