How to Secure AI Agents in GitHub Codespaces
SafeClaw by Authensor secures AI agents running inside GitHub Codespaces with deny-by-default action gating. Because Codespaces are cloud-hosted and ephemeral, SafeClaw's policy enforcement and hash-chained audit logging are especially important for maintaining control over AI agent behavior in shared environments. SafeClaw supports both Claude and OpenAI and ships with 446 tests.
Prerequisites
- A GitHub account with Codespaces access
- A repository with a `.devcontainer/` configuration
- Node.js 18+ (included in most Codespaces images)
Step 1: Add SafeClaw to Your Devcontainer
Edit .devcontainer/devcontainer.json to include SafeClaw in the post-create setup:
```json
{
  "name": "My Project",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:18",
  "postCreateCommand": "npx @authensor/safeclaw",
  "customizations": {
    "vscode": {
      "settings": {
        "safeclaw.enabled": true,
        "safeclaw.policyPath": ".safeclaw/policy.yaml",
        "safeclaw.auditLog": true,
        "safeclaw.hashChain": true
      }
    }
  },
  "features": {
    "ghcr.io/devcontainers/features/node:1": {
      "version": "18"
    }
  }
}
```
Every time a Codespace is created or rebuilt, SafeClaw initializes automatically with your committed policy.
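If your devcontainer already runs project setup in `postCreateCommand`, SafeClaw can simply be chained after it. A sketch (the `npm ci` step is illustrative; substitute your own setup command):

```json
{
  "postCreateCommand": "npm ci && npx @authensor/safeclaw"
}
```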
Step 2: Commit Your Policy to the Repository
Create .safeclaw/policy.yaml and commit it to your repository:
```yaml
version: 1
default: deny
rules:
  - action: file.read
    paths:
      - "src/**"
      - "tests/**"
      - "docs/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
    decision: prompt
  - action: file.write
    paths:
      - ".devcontainer/**"
      - ".github/**"
    decision: deny
  - action: shell.execute
    commands:
      - "npm test"
      - "npm run build"
      - "git status"
      - "git diff"
    decision: allow
  - action: shell.execute
    decision: deny
  - action: network.request
    domains:
      - "api.openai.com"
      - "api.anthropic.com"
    decision: allow
```
Critically, this policy prevents AI agents from modifying the .devcontainer/ and .github/ directories. An AI agent modifying your devcontainer configuration could change the security posture of future Codespaces, and modifying GitHub Actions workflows could introduce supply chain risks.
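To build intuition for how deny-by-default gating behaves, here is a minimal illustrative sketch in shell. This is not SafeClaw's engine; the glob patterns simply mirror the `file.write` rules above, with a fall-through default that denies anything unmatched:

```shell
#!/bin/sh
# Illustrative only -- not SafeClaw's implementation. Deny-by-default
# gating for file.write, mirroring the policy rules above.
decide_write() {
  case "$1" in
    .devcontainer/*|.github/*) echo "deny" ;;   # protected config dirs
    src/*)                     echo "prompt" ;; # source edits need approval
    *)                         echo "deny" ;;   # default: deny
  esac
}

decide_write "src/index.js"             # prompt
decide_write ".github/workflows/ci.yml" # deny
decide_write "README.md"                # deny (no rule matched)
```

The key property is the last branch: any path the policy does not explicitly mention is denied, rather than silently allowed.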
Step 3: Add a Lifecycle Script for Audit Verification
Create .devcontainer/post-start.sh:
```bash
#!/bin/bash
echo "Verifying SafeClaw audit chain integrity..."
if ! npx @authensor/safeclaw audit --verify --quiet; then
  echo "WARNING: SafeClaw audit chain integrity check failed!"
  echo "Review the audit log: npx @authensor/safeclaw audit --tail 20"
fi
echo "SafeClaw is active. Policy: .safeclaw/policy.yaml"
```
Reference it in devcontainer.json:
```json
{
  "postStartCommand": "bash .devcontainer/post-start.sh"
}
```
This verifies audit integrity every time the Codespace starts, catching any tampering that occurred during previous sessions.
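The idea behind hash-chained audit logs can be sketched in a few lines of shell. This is illustrative only and does not reflect SafeClaw's actual log format: each entry's hash covers the previous hash, so editing any past entry changes every hash after it:

```shell
#!/bin/sh
# Illustrative only -- not SafeClaw's log format. Each entry's hash
# covers the previous hash, so tampering with any past entry changes
# every hash that follows it.
chain_hash() {
  printf '%s|%s' "$1" "$2" | sha256sum | cut -d' ' -f1
}

h1=$(chain_hash "genesis" "file.read src/app.js")
h2=$(chain_hash "$h1" "shell.execute npm test")

# Re-deriving the chain from the same events reproduces the head hash.
v1=$(chain_hash "genesis" "file.read src/app.js")
v2=$(chain_hash "$v1" "shell.execute npm test")
if [ "$v2" = "$h2" ]; then echo "chain verified"; fi

# Tampering with the first event breaks the head hash.
t1=$(chain_hash "genesis" "file.read src/SECRET")
t2=$(chain_hash "$t1" "shell.execute npm test")
if [ "$t2" != "$h2" ]; then echo "tamper detected"; fi
```

Because each hash depends on everything before it, a verifier only needs to recompute the chain and compare the final hash to detect tampering anywhere in the log.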
Step 4: Configure GitHub Codespace Secrets
If your AI agents need API keys, store them as Codespace secrets rather than in the repository. SafeClaw respects environment variables:
```bash
# Set in GitHub Settings > Codespaces > Secrets
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
SAFECLAW_STRICT=true
```
Setting SAFECLAW_STRICT=true converts all prompt decisions to deny when no interactive terminal is available, which is useful for automated workflows within Codespaces.
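The strict-mode behavior can be illustrated with a short shell sketch. This is not SafeClaw's implementation, and `resolve_decision` is a hypothetical helper; it just shows the rule described above, downgrading `prompt` to `deny` when stdin is not an interactive terminal:

```shell
#!/bin/sh
# Illustrative only -- resolve_decision is a hypothetical helper, not
# part of SafeClaw. In strict mode, "prompt" falls back to "deny"
# whenever stdin is not an interactive terminal.
resolve_decision() {
  if [ "$1" = "prompt" ] && [ "$SAFECLAW_STRICT" = "true" ] && ! [ -t 0 ]; then
    echo "deny"
  else
    echo "$1"
  fi
}

export SAFECLAW_STRICT=true
resolve_decision "prompt" </dev/null  # deny: strict mode, no terminal
resolve_decision "allow"  </dev/null  # allow: only prompt is downgraded
```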
Step 5: Team Policy Enforcement
For team Codespaces, add a .safeclaw/team-policy.yaml with organization-wide rules and reference it:
```bash
npx @authensor/safeclaw --policy .safeclaw/team-policy.yaml --overlay .safeclaw/policy.yaml
```
The --overlay flag merges project-specific rules on top of team rules, with team rules taking precedence for any conflicts. This ensures organizational security policies cannot be overridden by individual projects.
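A minimal team baseline might look like the following. This is a hypothetical example: the rule contents are illustrative, and the schema simply mirrors the project policy shown earlier:

```yaml
# .safeclaw/team-policy.yaml -- hypothetical org-wide baseline.
version: 1
default: deny
rules:
  # Org-wide: never let agents modify CI/CD configuration.
  - action: file.write
    paths:
      - ".github/**"
    decision: deny
  # Org-wide: only approved model APIs are reachable.
  - action: network.request
    domains:
      - "api.anthropic.com"
      - "api.openai.com"
    decision: allow
```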
Step 6: Test in a Codespace
Create a new Codespace from your repository. After the post-create command runs, open the terminal and verify:
```bash
npx @authensor/safeclaw status
```
You should see the active policy, audit log state, and last verification timestamp.
Summary
SafeClaw is designed for cloud-native development workflows like GitHub Codespaces. By committing your policy to the repository and initializing SafeClaw in your devcontainer lifecycle, every team member gets the same deny-by-default protections. Hash-chained audit logs persist across sessions. SafeClaw is MIT licensed and open source.
Related Guides
- How to Add AI Agent Safety to VS Code
- How to Run AI Agents Safely from the Terminal
- How to Send AI Agent Safety Alerts to Slack
- How to Monitor AI Agent Actions in Datadog
- How to Set Up Custom Webhooks for AI Agent Events
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw