AI Agent Leaked My API Keys: What to Do and How to Prevent It
An AI coding agent read a .env file to "understand the project configuration," then included AWS secret keys verbatim in a generated code comment that was committed to a public repository. SafeClaw by Authensor prevents this by denying all reads of credential files (.env, .credentials, key files) unless explicitly permitted, and by blocking output patterns that match known secret formats.
The Incident: Step by Step
A developer asked an AI agent to "set up the database connection." The agent autonomously:
- Read src/config/database.js to understand the existing setup
- Read .env to find the DATABASE_URL value — this file also contained AWS_SECRET_ACCESS_KEY, STRIPE_SECRET_KEY, and GITHUB_TOKEN
- Generated a new config file with hardcoded values instead of environment variable references
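A hypothetical reconstruction of what that generated file looked like; the file name and every value below are placeholders, not the actual leaked credentials:

// src/config/database.generated.js (hypothetical reconstruction)
// AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY   <- secret copied verbatim into a comment
const config = {
  // hardcoded value lifted straight from .env instead of a reference
  databaseUrl: "postgres://admin:hunter2@db.internal:5432/app",
};

// What it should have emitted: a reference that resolves at runtime, never the value itself
const safeConfig = {
  databaseUrl: process.env.DATABASE_URL,
};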
What happened next:
- The developer committed the generated code without reviewing every line
- Within 12 minutes, automated scanners on GitHub detected the exposed AWS key
- The AWS account was compromised and used to spin up crypto-mining instances that cost $2,400 before detection
Immediate Response If This Happens to You
- Rotate all exposed keys immediately — do not just delete the commit
- Revoke the compromised credentials in each service's dashboard (AWS IAM, Stripe, GitHub)
- Run git log --all --full-history -S "AKIA" to find every commit containing the key
- Use git filter-branch (or its faster successor, git filter-repo) or BFG Repo-Cleaner to scrub the key from history
- Check CloudTrail / audit logs for unauthorized usage during the exposure window
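A sketch of that response as shell commands, assuming the exposed credential was an AWS key; the key ID, user name, repo, and file names below are placeholders:

# 1. Deactivate the leaked key immediately (then issue a replacement in IAM)
aws iam update-access-key --access-key-id AKIAIOSFODNN7EXAMPLE \
  --status Inactive --user-name dev-user

# 2. Find every commit that ever contained the key
git log --all --full-history -S "AKIAIOSFODNN7EXAMPLE" --oneline

# 3. Scrub it from history with BFG (run against a fresh --mirror clone;
#    secrets.txt lists the strings to replace)
java -jar bfg.jar --replace-text secrets.txt repo.git
cd repo.git && git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push --force

# 4. Audit CloudTrail for activity performed with the leaked key
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAIOSFODNN7EXAMPLE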
How SafeClaw Prevents This
Quick Start
npx @authensor/safeclaw
Policy That Blocks Credential Access
# safeclaw.config.yaml
rules:
  # Block reading any credential files
  - action: file.read
    path: "**/.env*"          # matches .env, .env.local, .env.production, ...
    decision: deny
    reason: "Agent must not read environment files containing secrets"
  - action: file.read
    path: "**/*.pem"
    decision: deny
    reason: "Agent must not read private key files"
  - action: file.read
    path: "**/*credentials*"
    decision: deny
    reason: "Agent must not read credential files"
  - action: file.read
    path: "**/id_rsa*"
    decision: deny
    reason: "Agent must not read SSH private keys"
  # Allow reading source code
  - action: file.read
    path: "src/**/*.{js,ts,py}"
    decision: allow
  # Deny everything else by default
  - action: "**"
    decision: deny
What the Agent Sees
When the agent tries to read .env, SafeClaw returns a denial before the file contents are ever loaded into the agent's context:
Action DENIED: file.read on .env
Reason: Agent must not read environment files containing secrets
The agent never sees the secrets. It cannot leak what it never had access to.
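Output scanning is the second layer: it catches secrets that slip in from any other source. This is not SafeClaw's internal implementation, just a minimal TypeScript sketch of matching known secret formats like the ones mentioned above:

// Well-known secret formats; real scanners ship many more patterns
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,         // AWS access key ID
  /sk_live_[0-9a-zA-Z]{24,}/, // Stripe live secret key
  /ghp_[0-9a-zA-Z]{36}/,      // GitHub personal access token
];

// Returns true if generated text matches any known secret format
function containsSecretFormat(text: string): boolean {
  return SECRET_PATTERNS.some((pattern) => pattern.test(text));
}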
Why SafeClaw
- 446 tests cover credential file patterns, including .env, .env.local, .env.production, PEM files, SSH keys, and AWS credential files
- Deny-by-default ensures new credential file patterns are blocked automatically — you only allowlist what the agent needs
- Sub-millisecond evaluation means the policy check adds no noticeable delay
- Hash-chained audit trail logs every denied read attempt, so you can see exactly which files the agent tried to access (the chaining idea is sketched below)
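SafeClaw's exact log format aside, hash chaining itself is simple: each entry stores the hash of the previous one, so tampering with any record breaks every hash after it. A minimal TypeScript sketch with illustrative field names:

import { createHash } from "node:crypto";

interface AuditEntry {
  timestamp: string;          // e.g. "2025-01-15T09:42:11Z"
  action: string;             // e.g. "file.read"
  target: string;             // e.g. ".env"
  decision: "allow" | "deny";
  prevHash: string;           // hash of the previous entry; this is the chain link
}

// Hash an entry; the next entry stores this value as its prevHash
function entryHash(entry: AuditEntry): string {
  return createHash("sha256").update(JSON.stringify(entry)).digest("hex");
}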
Prevention Checklist
- Block agent reads on all files matching .env, .pem, credentials, secret, id_rsa*
- Never allowlist credential files even in development — agents should read configs via safe abstractions (see the sketch after this list)
- Use SafeClaw simulation mode first to discover which files your agent attempts to read
- Review generated code for hardcoded values before committing
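What a "safe abstraction" can look like in practice: generated code references the variable name and validates it at startup, so a secret value never appears in source. A minimal sketch; the file and variable names are assumptions:

// src/config/db.ts: reference the variable, never its value
const url = process.env.DATABASE_URL;
if (!url) {
  // Fail fast at startup instead of hardcoding a fallback value
  throw new Error("DATABASE_URL is not set");
}
export const databaseUrl: string = url;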
Related Pages
- Threat: API Key Exfiltration
- How to Stop Agent Leaking Keys
- AI Agent Sent Database Contents to External Server
- Threat: Credential File Read
- Define: Deny-by-Default
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw