How to Secure AI Agents on Render
SafeClaw by Authensor adds deny-by-default action gating to AI agents deployed on Render. Every file operation, network request, and shell command your agent attempts is intercepted and evaluated against your YAML policy before it runs. SafeClaw works with agents built on Claude, OpenAI, and other major LLM providers. Install with npx @authensor/safeclaw and deploy a safely gated agent on Render's managed infrastructure.
Prerequisites
- A Render account
- Node.js 18+ (auto-detected by Render)
- An AI agent codebase pushed to a Git repository
Step 1 — Install SafeClaw
npx @authensor/safeclaw
Zero dependencies, MIT-licensed, backed by 446 tests. This scaffolds your policy file and integrates into the agent runtime.
Step 2 — Define a Render-Specific Policy
version: 1
defaultAction: deny
rules:
  - action: "file:read"
    path: "/opt/render/project/src/**"
    effect: allow
  - action: "file:write"
    path: "/tmp/**"
    effect: allow
  - action: "file:write"
    path: "/var/data/**"
    effect: allow
    reason: "Render persistent disk mount"
  - action: "network:request"
    host: "api.openai.com"
    effect: allow
  - action: "network:request"
    host: "api.anthropic.com"
    effect: allow
  - action: "network:request"
    host: "*.onrender.com"
    effect: allow
    reason: "Inter-service communication"
  - action: "env:read"
    key: "DATABASE_URL"
    effect: deny
    reason: "Agent must not read database credentials"
  - action: "env:read"
    key: "RENDER_*"
    effect: deny
    reason: "Block Render platform variables"
  - action: "shell:execute"
    effect: deny
Render deploys your code to /opt/render/project/src/, so the file read rule is scoped to that path.
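The deny-by-default semantics of the policy above can be sketched in plain Node.js. This is not SafeClaw's implementation, just a minimal first-match evaluator; the rules are condensed to a single hypothetical target field for illustration:

```javascript
// Convert a policy glob into a regex: * stays within one path
// segment, ** crosses segments.
function globToRegex(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  const pattern = escaped
    .replace(/\*\*/g, "\u0000")   // placeholder so ** and * don't collide
    .replace(/\*/g, "[^/]*")
    .replace(/\u0000/g, ".*");
  return new RegExp(`^${pattern}$`);
}

// Condensed version of the policy above (path/host/key folded into "target").
const policy = {
  defaultAction: "deny",
  rules: [
    { action: "file:read", target: "/opt/render/project/src/**", effect: "allow" },
    { action: "file:write", target: "/tmp/**", effect: "allow" },
    { action: "file:write", target: "/var/data/**", effect: "allow" },
    { action: "network:request", target: "api.openai.com", effect: "allow" },
    { action: "network:request", target: "*.onrender.com", effect: "allow" },
    { action: "env:read", target: "RENDER_*", effect: "deny" },
    { action: "shell:execute", effect: "deny" },
  ],
};

// First matching rule wins; no match falls through to defaultAction.
function evaluate(action, target) {
  for (const rule of policy.rules) {
    if (rule.action !== action) continue;
    if (!rule.target || globToRegex(rule.target).test(target)) return rule.effect;
  }
  return policy.defaultAction;
}
```

Note that an unlisted action like file:write outside /tmp or /var/data needs no explicit deny rule: it simply falls through to defaultAction.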
Step 3 — Configure Render Web Service
In your Render dashboard or render.yaml Blueprint:
services:
  - type: web
    name: ai-agent
    runtime: node
    buildCommand: npm ci && npx @authensor/safeclaw
    startCommand: node agent.js
    envVars:
      - key: SAFECLAW_AUDIT_SINK
        value: stdout
      - key: NODE_ENV
        value: production
    disk:
      name: agent-data
      mountPath: /var/data
      sizeGB: 1
The buildCommand installs dependencies and initializes SafeClaw. The persistent disk at /var/data stores the audit trail across deploys.
Step 4 — Deploy as a Background Worker
For agents that process queues rather than HTTP requests:
services:
  - type: worker
    name: ai-agent-worker
    runtime: node
    buildCommand: npm ci && npx @authensor/safeclaw
    startCommand: node worker.js
    envVars:
      - key: SAFECLAW_AUDIT_SINK
        value: file
      - key: SAFECLAW_AUDIT_PATH
        value: /var/data/audit.log
    disk:
      name: worker-data
      mountPath: /var/data
      sizeGB: 1
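A hypothetical worker.js for this service might look like the sketch below. The in-memory queue is an assumption for illustration; a real worker would poll Redis, SQS, or a similar backend, with SafeClaw gating any file, network, or shell action each job triggers:

```javascript
// Handle one job. Real processing would invoke the agent here.
function processJob(job) {
  return { id: job.id, status: "done" };
}

// Drain whatever is queued, collecting a result per job.
function drainQueue(queue) {
  const results = [];
  while (queue.length > 0) {
    results.push(processJob(queue.shift()));
  }
  return results;
}

// Example run with an in-memory queue:
const results = drainQueue([{ id: 1 }, { id: 2 }]);
console.log(`processed ${results.length} jobs`);
```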
Step 5 — Deploy as a Private Service
If your agent should only be reachable by other Render services (not the public internet):
services:
  - type: pserv
    name: ai-agent-internal
    runtime: node
    buildCommand: npm ci && npx @authensor/safeclaw
    startCommand: node agent.js
Private Services have no public URL; they receive an internal hostname (the service name plus port) reachable only from other services in your Render account. Combine this with SafeClaw's network policy to allow only internal traffic.
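A calling service's SafeClaw policy can then be narrowed to that internal hostname. The host value below is a hypothetical example matching the service name used in this guide:

rules:
  - action: "network:request"
    host: "ai-agent-internal"
    effect: allow
    reason: "Internal-only agent service"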
Step 6 — Use a Dockerfile on Render
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
RUN npx @authensor/safeclaw
CMD ["node", "agent.js"]
In render.yaml:
services:
  - type: web
    name: ai-agent
    runtime: docker
    dockerfilePath: Dockerfile
Step 7 — Persist and Verify Audit Logs
With a persistent disk mounted, SafeClaw writes the hash-chained audit trail to /var/data/audit.log. Verify integrity:
npx @authensor/safeclaw audit verify --file /var/data/audit.log --last 100
Every action — allowed or denied — is logged with a tamper-evident hash chain for compliance-grade auditability: each entry incorporates the hash of the one before it, so any edit breaks the chain.
Step 8 — Connect Log Drains
Render supports log streams. If SAFECLAW_AUDIT_SINK is set to stdout, route logs to Datadog, Papertrail, or any syslog endpoint via Render's log stream settings. This gives your security team real-time visibility into every action your AI agent attempts.
Related Pages
- Docker Compose Deployment Guide
- Hash-Chained Audit Logs Deep Dive
- Data Exfiltration Prevention
- Environment Variable Protection
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw