How to Prevent AI Agents from Reading or Overwriting .env Files
To prevent AI agents from reading or overwriting .env files, use SafeClaw action-level gating to block both file_read and file_write actions on credential files. SafeClaw denies access to .env, .env.local, .env.production, and any file matching credential patterns before the agent can touch them. An agent overwriting your .env is just as destructive as reading it — it destroys your secrets. Install with npx @authensor/safeclaw.
The Risk
Your .env file contains API keys, database passwords, OAuth secrets, and third-party service credentials. When an AI agent reads it, those secrets enter the model's context window. From there, they can be logged, echoed in output, included in network requests, or persisted in conversation history stored on third-party servers.
This is not hypothetical. The Clawdbot incident exposed 1.5 million API keys when an AI agent with file read access ingested credential files and propagated the values through its output chain. The keys were valid, in production, and belonged to paying customers.
Even without malicious intent, an agent that reads your .env while "helping debug" your application will include those values in its reasoning. If that reasoning is logged — by the AI provider, by your observability tools, by a shared conversation — your secrets are now in plaintext somewhere you don't control.
.gitignore doesn't help here. It prevents git from tracking the file. It does nothing to prevent a process running on your machine from reading it. File permissions don't help if the agent runs as your user.
The One-Minute Fix
Step 1: Install SafeClaw.
npx @authensor/safeclaw
Step 2: Get your free API key at safeclaw.onrender.com (7-day renewable, no credit card).
Step 3: Add these policy rules:
- action: file_read
pattern: "\\.env|\\.env\\.|credentials|secrets|\\.key$|\\.pem$"
effect: deny
reason: "Credential and secret file read blocked"
- action: file_write
pattern: "\\.env|\\.env\\.|credentials|secrets|\\.key$|\\.pem$"
effect: deny
reason: "Credential and secret file write blocked"
The agent is now blocked from both reading and overwriting any file matching credential patterns.
Full Policy
name: block-credential-file-access
version: "1.0"
defaultEffect: deny
rules:
# Block .env files in any directory
- action: file_read
pattern: "\\.env$|\\.env\\..*"
effect: deny
reason: "Environment file access blocked"
# Block common credential files
- action: file_read
pattern: "credentials\\.json|secrets\\.yaml|secrets\\.json|\\.credentials"
effect: deny
reason: "Credential file access blocked"
# Block key and certificate files
- action: file_read
pattern: "\\.key$|\\.pem$|\\.p12$|\\.pfx$|\\.keystore"
effect: deny
reason: "Key and certificate file access blocked"
# Block cloud provider credential files
- action: file_read
pattern: "\\.aws/credentials|\\.gcloud/|azure.*credentials"
effect: deny
reason: "Cloud credential file access blocked"
# Block WRITING to .env files (prevents overwriting your secrets)
- action: file_write
pattern: "\\.env$|\\.env\\..*"
effect: deny
reason: "Environment file write blocked — prevents secret destruction"
# Block writing to credential files
- action: file_write
pattern: "credentials\\.json|secrets\\.yaml|secrets\\.json|\\.credentials"
effect: deny
reason: "Credential file write blocked"
# Block writing to key and certificate files
- action: file_write
pattern: "\\.key$|\\.pem$|\\.p12$|\\.pfx$|\\.keystore"
effect: deny
reason: "Key and certificate file write blocked"
# Allow reading source code and config
- action: file_read
pattern: "\\.(ts|js|py|go|rs|java|json|yaml|yml|toml|md|txt|css|html)$"
effect: allow
reason: "Source code and documentation files permitted"
What Gets Blocked
These action requests are DENIED:
{
"action": "file_read",
"path": "/home/user/project/.env",
"agent": "debug-assistant",
"result": "DENIED — Environment file access blocked"
}
{
"action": "file_read",
"path": "/home/user/project/.env.production",
"agent": "deploy-agent",
"result": "DENIED — Environment file access blocked"
}
{
"action": "file_read",
"path": "/home/user/.aws/credentials",
"agent": "cloud-helper",
"result": "DENIED — Cloud credential file access blocked"
}
What Still Works
These safe actions are ALLOWED:
{
"action": "file_read",
"path": "/home/user/project/src/index.ts",
"agent": "code-assistant",
"result": "ALLOWED — Source code and documentation files permitted"
}
{
"action": "file_read",
"path": "/home/user/project/package.json",
"agent": "code-assistant",
"result": "ALLOWED — Source code and documentation files permitted"
}
Your agent can still read source code, configuration files, documentation, and everything else it needs to be useful. It just can't read your secrets.
Why Other Approaches Don't Work
.gitignore prevents git from tracking .env. It does absolutely nothing to prevent a process on your machine from calling fs.readFile('.env'). Your agent doesn't use git to read files.
File permissions don't help because the agent runs as your user. If you can read .env, the agent can read .env. Creating a restricted user for the agent breaks most workflows — the agent needs to read your project files.
Environment variables instead of files still expose secrets. The agent can run printenv or echo $DATABASE_URL via shell_exec. You'd need to block that too (SafeClaw does).
Docker secrets work inside containers but add deployment complexity. Most developers using AI coding assistants aren't running them inside Docker.
SafeClaw blocks the file_read action at evaluation time, in sub-millisecond. The policy runs in your process. The control plane sees only action metadata — it never sees the file contents or your keys. Deny-by-default means even credential file patterns you didn't think of are blocked unless explicitly allowed. Every blocked access is recorded in a tamper-proof audit trail (SHA-256 hash chain). 446 tests, TypeScript strict mode, zero third-party dependencies.
Cross-References
- API Key Exfiltration Threat
- Credential File Read Threat
- How to Prevent AI Agents from Accessing SSH Keys
- How to Prevent AI Agents from Sending Your Data to External Servers
- SafeClaw Privacy and Trust FAQ
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw