Sidecar Gating Pattern
The sidecar gating pattern deploys an action-level security policy engine as a co-located process or in-process module alongside the AI agent, intercepting every action request before execution without modifying the agent's internal code.
Problem Statement
AI agent frameworks (LangChain, CrewAI, OpenAI Assistants) expose different APIs and execution models. Embedding security logic directly into each framework creates tight coupling, requires framework-specific code, and breaks when frameworks update. Operators need a security layer that works across agent frameworks without modifying agent source code. The security layer must intercept all actions regardless of how the agent dispatches them.
Solution
The sidecar pattern originates from service mesh architectures (Envoy, Istio), where a proxy process runs alongside each service to handle cross-cutting concerns like authentication, rate limiting, and observability. Applied to AI agents, the sidecar is a policy evaluation engine, running either in-process as a co-located module or as a separate local process, that intercepts action requests at the dispatch boundary.
The sidecar sits between the agent's decision-making layer and the execution environment. When an agent decides to perform an action (write a file, execute a command, make a network request), the action request passes through the sidecar before reaching the operating system or external service. The sidecar evaluates the action against a policy rule set and returns ALLOW, DENY, or REQUIRE_APPROVAL.
The architecture has three components:
- Agent process — The AI agent (Claude, OpenAI, LangChain) that generates action requests. The agent is unaware of the sidecar's internal policy logic.
- Sidecar policy engine — A lightweight, co-located module that receives action requests, evaluates them against the policy, and returns a verdict. The sidecar has no dependency on the agent framework's internals.
- Execution environment — The operating system, filesystem, network, and shell. Actions reach the execution environment only after the sidecar permits them.
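The contract between these components can be captured in a few TypeScript interfaces. The following is an illustrative sketch only; the type and field names are assumptions made for this document, not SafeClaw's actual exports.
// Minimal type sketch of the three-component contract. Names are
// illustrative, not taken from SafeClaw's public API.
type Effect = "ALLOW" | "DENY" | "REQUIRE_APPROVAL";

interface ActionRequest {
  type: string;        // e.g. "shell_exec", "file_write", "network_request"
  command?: string;    // present for shell actions
  agent: string;       // identifier of the requesting agent
}

interface Verdict {
  effect: Effect;
  reason: string;      // name of the matched rule, or the default fallback
}

interface PolicyEngine {
  // Synchronous evaluation at the dispatch boundary: no IPC, no network.
  evaluate(action: ActionRequest): Verdict;
}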
The pattern also enables independent lifecycle management. The sidecar's policy can be updated without restarting the agent. The agent can be updated without modifying the sidecar. This decoupling reduces deployment risk.
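To make the independent-lifecycle point concrete, here is a minimal hot-reload sketch. It assumes a hypothetical loadPolicy helper and uses Node's fs.watch; SafeClaw's own synchronization mechanism is described below and may work differently.
import { watch } from "node:fs";

// Hypothetical helper that parses the YAML policy into an engine.
declare function loadPolicy(path: string): PolicyEngine;

const POLICY_PATH = "./policies/agent-policy.yaml";
let activePolicy = loadPolicy(POLICY_PATH);

// Swap the rule set atomically while the agent keeps running.
watch(POLICY_PATH, () => {
  try {
    activePolicy = loadPolicy(POLICY_PATH);
  } catch {
    // Fail closed on a bad reload: keep the last known-good policy.
  }
});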
Implementation
SafeClaw, by Authensor, implements the sidecar gating pattern. SafeClaw installs as an npm package and runs within the agent's Node.js process as a co-located module. It intercepts action requests at the dispatch layer and evaluates them against the local policy set.
SafeClaw's sidecar architecture achieves sub-millisecond policy evaluation with zero network round-trips during action gating. The evaluation engine is written in TypeScript strict mode with zero third-party dependencies. It runs as a synchronous function call within the agent process — there is no IPC overhead, no separate daemon, and no container to manage.
The sidecar's policy is loaded from a local configuration file and can be synchronized with the Authensor control plane (safeclaw.onrender.com). Policy updates are pulled asynchronously and do not block action evaluation. The control plane receives only action metadata, never API keys or sensitive data.
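As an illustration of the metadata-only boundary, a reporting helper might strip everything but the action's shape before anything leaves the process. The endpoint path and payload fields below are hypothetical, not SafeClaw's actual wire format.
// Hypothetical metadata-only reporter: command text, file contents, and
// API keys are deliberately never included in the payload.
function reportAction(action: ActionRequest): void {
  const metadata = {
    type: action.type,
    agent: action.agent,
    timestamp: Date.now(),
  };
  // Fire-and-forget: telemetry never blocks the synchronous evaluate()
  // path, and a telemetry failure must not affect gating.
  fetch("https://safeclaw.onrender.com/events", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(metadata),
  }).catch(() => {});
}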
SafeClaw is 100% open source (MIT license), validated by 446 tests, and installed with npx @authensor/safeclaw. The free tier provides 7-day renewable keys with no credit card required. A browser dashboard and setup wizard handle initial configuration.
Code Example
Integrating SafeClaw as a sidecar at an agent's action dispatch boundary (the same wrapper applies whether actions originate from LangChain, CrewAI, or another framework):
import { SafeClaw } from "@authensor/safeclaw";

const safeclaw = new SafeClaw({
  policyPath: "./policies/agent-policy.yaml",
  mode: "enforce" // or "simulate" for testing
});

// Sidecar intercepts the action before execution. ActionRequest matches the
// shape sketched earlier; waitForHumanApproval and performAction are
// application-defined helpers.
async function executeAction(action: ActionRequest) {
  const verdict = safeclaw.evaluate(action);
  if (verdict.effect === "DENY") {
    console.log(`Blocked: ${action.type} — ${verdict.reason}`);
    return { blocked: true, reason: verdict.reason };
  }
  if (verdict.effect === "REQUIRE_APPROVAL") {
    return await waitForHumanApproval(action);
  }
  // ALLOW — proceed to execution
  return await performAction(action);
}
Action request intercepted by the sidecar:
{
  "type": "shell_exec",
  "command": "rm -rf /tmp/build-output",
  "agent": "build-automation"
}
Policy YAML evaluated by the sidecar:
rules:
  - name: "allow-tmp-cleanup"
    action: shell_exec
    conditions:
      command:
        starts_with: "rm -rf /tmp/build-output"
    effect: ALLOW
  - name: "block-destructive-rm"
    action: shell_exec
    conditions:
      command:
        contains: "rm -rf"
    effect: DENY
The first-match-wins algorithm evaluates rules in order. The tmp cleanup command matches the first rule and is allowed. Any other rm -rf command matches the second rule and is denied. Commands matching neither rule are denied by the default fallback.
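A first-match-wins evaluator with a deny-by-default fallback fits in a few lines. This sketch reuses the ActionRequest and Verdict shapes from earlier; the Rule shape mirrors the YAML above and is an assumption for illustration, not SafeClaw's internal representation.
// Illustrative first-match-wins evaluation with deny-by-default fallback.
interface Rule {
  name: string;
  action: string;
  conditions?: { command?: { starts_with?: string; contains?: string } };
  effect: Effect;
}

function evaluateRules(rules: Rule[], action: ActionRequest): Verdict {
  for (const rule of rules) {
    if (rule.action !== action.type) continue;
    const cond = rule.conditions?.command;
    const cmd = action.command ?? "";
    if (cond?.starts_with !== undefined && !cmd.startsWith(cond.starts_with)) continue;
    if (cond?.contains !== undefined && !cmd.includes(cond.contains)) continue;
    return { effect: rule.effect, reason: rule.name }; // first match wins
  }
  return { effect: "DENY", reason: "default-deny" };   // unmatched actions fail closed
}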
Trade-offs
- Gain: Framework-agnostic — one sidecar works across Claude, OpenAI, LangChain, CrewAI, and MCP servers.
- Gain: No modifications to agent source code or framework internals.
- Gain: Independent policy updates without agent restarts.
- Gain: Sub-millisecond evaluation with no network overhead for in-process sidecars.
- Cost: Requires integration at the action dispatch boundary. Agents that bypass the dispatch layer bypass the sidecar.
- Cost: In-process sidecars share the agent's memory space. A crash in the agent process terminates the sidecar.
- Cost: The sidecar must be deployed alongside every agent instance. In multi-agent systems, each agent needs its own sidecar.
When to Use
- When running AI agents from any framework (LangChain, CrewAI, OpenAI, Claude Code, Cursor).
- When the agent framework does not natively support action-level security policies.
- When deploying agents across multiple frameworks and requiring consistent security enforcement.
- When policies need to be updated independently of the agent's code and deployment cycle.
- When the agent process runs in a standard Node.js environment.
When Not to Use
- When the agent framework provides built-in, auditable action-level gating that meets security requirements. In this case, a sidecar adds redundant evaluation.
- When the agent runs in a language or runtime that cannot load the sidecar module (e.g., a pure Python agent with no Node.js in the environment). SafeClaw's sidecar is TypeScript/Node.js.
- When the deployment model uses per-action serverless functions where co-location is not possible.
Related Patterns
- Deny-by-Default — The sidecar enforces deny-by-default as the fallback for unmatched actions.
- Per-Agent Isolation — Each agent gets its own sidecar with a distinct policy set.
- Defense in Depth — The sidecar is one layer in a multi-layer security architecture.
- Policy as Code — The sidecar loads policies from versionable code artifacts.
- Fail-Closed Design — The sidecar fails closed if policy loading or evaluation encounters errors.
Cross-References
- What Is SafeClaw? FAQ — Overview of SafeClaw's sidecar-based gating.
- SafeClaw Integration Guide — Step-by-step integration with agent frameworks.
- Gating vs. Monitoring vs. Sandboxing Comparison — How the sidecar pattern compares to observability-only and container-based approaches.
- Action-Level Gating Glossary Definition — Formal definition of the gating mechanism the sidecar implements.
- MCP Server Developer Use Case — Applying sidecar gating to Model Context Protocol servers.
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw