2026-01-06 · Authensor

How to Secure AI Agents Running in Kubernetes

SafeClaw by Authensor deploys as a sidecar container in Kubernetes pods, providing deny-by-default action gating for AI agents. Every process execution, file operation, and network request your agent container attempts is checked against your YAML policy stored in a ConfigMap. Install SafeClaw with npx @authensor/safeclaw locally for development, then deploy the container image to your cluster.

Why Kubernetes Agents Need Application-Level Gating

Kubernetes provides network policies, RBAC, and pod security standards, but none of these operate at the application level. A Kubernetes NetworkPolicy can block traffic to external IPs, but it cannot distinguish between your agent calling api.openai.com for a legitimate completion and calling attacker.com to exfiltrate data over an allowed port. SafeClaw gates at the action level: the agent asks the sidecar for a decision before the action is ever executed.
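
Here is what that difference looks like from inside the agent, as a sketch. It assumes the Gate client and check() call shown in the agent-code section below, plus a host parameter that mirrors the policy's host matcher; both are assumptions about the request shape, not confirmed API details.

// Sketch only: assumes the gate.check API shown later in this post.
import { Gate } from '@authensor/safeclaw';

const gate = new Gate({ endpoint: 'http://localhost:9800' });

// Same destination port (443), very different verdicts:
const ok = await gate.check({ action: 'network.request', host: 'api.openai.com' });
console.log(ok.allowed);  // true: the host is explicitly allow-listed

const bad = await gate.check({ action: 'network.request', host: 'attacker.com' });
console.log(bad.allowed); // false: no rule matches, so defaultAction: deny applies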

446 tests validate the gate engine. Hash-chained audit trail for compliance. Works with Claude and OpenAI. MIT licensed.

Installation (Development)

npx @authensor/safeclaw

Policy as ConfigMap

# k8s/safeclaw-policy.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: safeclaw-policy
  namespace: ai-agents
data:
  safeclaw.policy.yaml: |
    version: 1
    defaultAction: deny

    rules:
      - action: file.read
        path:
          glob: "/app/data/**"
        decision: allow

      - action: file.write
        path:
          glob: "/app/output/**"
        decision: allow

      - action: process.exec
        decision: deny

      - action: network.request
        host:
          in: ["api.openai.com", "api.anthropic.com"]
        decision: allow

      - action: network.request
        host:
          endsWith: ".internal.svc.cluster.local"
        decision: allow
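
Conceptually, the engine checks each attempted action against the rules and falls back to defaultAction when nothing matches. The TypeScript sketch below illustrates that deny-by-default shape; it is our own illustration and assumes first-match-wins semantics, not SafeClaw's actual implementation.

// Illustration of deny-by-default rule evaluation, not SafeClaw's engine.
type Decision = 'allow' | 'deny';

interface Rule {
  action: string;                                       // e.g. 'file.read'
  matches: (params: Record<string, string>) => boolean; // path/host matcher
  decision: Decision;
}

function evaluate(rules: Rule[], action: string, params: Record<string, string>): Decision {
  for (const rule of rules) {
    if (rule.action === action && rule.matches(params)) {
      return rule.decision;                             // first match wins (assumed)
    }
  }
  return 'deny';                                        // defaultAction: deny
}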

Sidecar Deployment

# k8s/agent-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-agent
  namespace: ai-agents
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ai-agent
  template:
    metadata:
      labels:
        app: ai-agent
    spec:
      containers:
        # Your agent container
        - name: agent
          image: your-registry/ai-agent:latest
          ports:
            - containerPort: 3000
          env:
            - name: SAFECLAW_ENDPOINT
              value: "http://localhost:9800"

        # SafeClaw sidecar
        - name: safeclaw
          image: ghcr.io/authensor/safeclaw:latest
          ports:
            - containerPort: 9800
          volumeMounts:
            - name: policy
              mountPath: /etc/safeclaw
              readOnly: true
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          livenessProbe:
            httpGet:
              path: /health
              port: 9800
            initialDelaySeconds: 5
            periodSeconds: 10

      volumes:
        - name: policy
          configMap:
            name: safeclaw-policy
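
If the sidecar is not yet listening, gate checks from the agent will fail, so it is worth waiting for it before the agent starts work. The sketch below polls the same /health endpoint the liveness probe uses; the retry loop itself is our own convention, not part of SafeClaw.

// Startup sketch: block until the SafeClaw sidecar reports healthy.
async function waitForSidecar(endpoint: string, attempts = 30): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(`${endpoint}/health`); // same path the probe hits
      if (res.ok) return;
    } catch {
      // sidecar not listening yet; fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, 1000)); // 1s between tries
  }
  throw new Error('SafeClaw sidecar never became healthy');
}

await waitForSidecar(process.env.SAFECLAW_ENDPOINT ?? 'http://localhost:9800');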

Agent Code (Any Language)

Your agent container communicates with the SafeClaw sidecar over localhost:

// In your agent container
import { Gate } from '@authensor/safeclaw';

const gate = new Gate({ endpoint: process.env.SAFECLAW_ENDPOINT });

async function agentAction(action, params) {
  const decision = await gate.check({ action, ...params });
  if (!decision.allowed) {
    throw new Error(`SafeClaw denied: ${decision.reason}`);
  }
  // proceed with the action
}
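
For example, against the policy above (parameter names are assumed to mirror the policy's matchers: path for file actions, host for network requests):

// Usage against the policy above; parameter names are assumptions.
await agentAction('file.read',  { path: '/app/data/input.csv' });     // allowed
await agentAction('file.write', { path: '/app/output/result.json' }); // allowed
await agentAction('process.exec', { command: 'curl' });               // throws: denied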

NetworkPolicy Complement

Use Kubernetes NetworkPolicy alongside SafeClaw for defense in depth: NetworkPolicy limits which destinations the pod can reach at all, while SafeClaw decides which individual requests the agent may make to the destinations that remain.

# k8s/network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ai-agent-network
  namespace: ai-agents
spec:
  podSelector:
    matchLabels:
      app: ai-agent
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: ai-agents
      ports:
        - port: 9800  # SafeClaw sidecar
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - port: 443   # HTTPS only

Audit Trail with Persistent Volume

# k8s/safeclaw-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safeclaw-audit
  namespace: ai-agents
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Mount in the sidecar:

volumeMounts:
  - name: audit
    mountPath: /var/log/safeclaw
volumes:
  - name: audit
    persistentVolumeClaim:
      claimName: safeclaw-audit

Every decision is hash-chained to the one before it. The audit trail is tamper-evident and can be exported for compliance.
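
The verification idea is simple enough to sketch. The record shape below is hypothetical (SafeClaw's actual on-disk format may differ), but it shows why editing one entry invalidates every entry after it.

import { createHash } from 'node:crypto';

// Hypothetical record shape for illustration only.
interface AuditRecord {
  decision: string;  // e.g. 'deny process.exec'
  prevHash: string;  // hash of the previous record ('' for the first)
  hash: string;      // sha256(prevHash + decision)
}

// Recompute every link: changing any record changes its hash,
// which breaks the prevHash of every record that follows.
function verifyChain(records: AuditRecord[]): boolean {
  let prev = '';
  for (const rec of records) {
    if (rec.prevHash !== prev) return false;
    const expected = createHash('sha256')
      .update(rec.prevHash + rec.decision)
      .digest('hex');
    if (rec.hash !== expected) return false;
    prev = rec.hash;
  }
  return true;
}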

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw