AI Agents Reading Credential Files
Threat Description
AI agents with file read tool access can read any file the host process has permissions for, including credential files: .env (API keys, database URLs), ~/.ssh/id_rsa (SSH private keys), ~/.aws/credentials (AWS access keys), ~/.config/gcloud/application_default_credentials.json (GCP service account keys), and ~/.kube/config (Kubernetes cluster credentials). The agent may read these files intentionally (through a malicious or misguided task), through prompt injection, or as a side effect of a broad file search operation. Once credentials are in the agent's context window, they can be logged, transmitted, or used for further actions.
Attack Vector
- An AI agent is given a task that involves exploring the filesystem — "find configuration files," "check the project setup," or "review the deployment config."
- The agent issues a file_read action targeting a credential file. This may be direct (reading .env) or indirect (reading a directory listing, then reading files found within it).
- The credential contents enter the agent's context window and are available for use in subsequent actions.
- The agent may include credentials in a network action (sending them to an API), a file_write action (writing them to a log or output file), or a shell_exec action (using them as command arguments).
- The credentials are exposed through whichever channel the agent uses.
Direct credential read:
{
"action": "file_read",
"params": {
"path": "/home/user/.aws/credentials"
},
"agentId": "infra-agent-04",
"timestamp": "2026-02-13T10:00:00Z"
}
Indirect credential read via SSH key:
{
"action": "file_read",
"params": {
"path": "/home/user/.ssh/id_ed25519"
},
"agentId": "infra-agent-04",
"timestamp": "2026-02-13T10:00:15Z"
}
Credential in shell command:
{
"action": "shell_exec",
"params": {
"command": "aws s3 ls --profile stolen-creds"
},
"agentId": "infra-agent-04",
"timestamp": "2026-02-13T10:00:30Z"
}
Real-World Context
The Clawdbot incident is the most significant documented case. Clawdbot, an AI coding agent, leaked 1.5 million API keys in under a month. The agent had unrestricted file_read access and read .env and configuration files as part of its normal operation. The keys were then transmitted externally through the agent's network capabilities. No action-level control existed to prevent either the read or the transmit.
This threat affects every AI agent framework that provides file tools: Claude Code, Cursor, Windsurf, LangChain agents, CrewAI agents, AutoGPT, and any custom agent with filesystem access. The default behavior in all these frameworks is to grant the agent read access to all files accessible by the host process.
Common credential file paths that agents access:
- .env, .env.local, .env.production — Application secrets
- ~/.ssh/id_rsa, ~/.ssh/id_ed25519 — SSH private keys
- ~/.aws/credentials, ~/.aws/config — AWS access keys
- ~/.config/gcloud/application_default_credentials.json — GCP credentials
- ~/.kube/config — Kubernetes cluster access
- ~/.npmrc — npm auth tokens
- ~/.netrc — Machine login credentials
- config.json, secrets.yaml — Application-specific secrets
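The paths above can be recognized heuristically. The sketch below is a hypothetical helper (not part of SafeClaw's published API) that flags a path as a likely credential file when its basename or any directory segment matches the list above:

```typescript
import * as path from "node:path";

// Basenames and directory names drawn from the credential-path list above.
const credentialBasenames = new Set([
  ".npmrc", ".netrc", "credentials", "config.json", "secrets.yaml",
  "application_default_credentials.json",
]);
const credentialDirs = new Set([".ssh", ".aws", ".kube", "gcloud"]);

function looksLikeCredentialFile(filePath: string): boolean {
  const base = path.basename(filePath);
  // Covers .env and variants such as .env.local, .env.production.
  if (base === ".env" || base.startsWith(".env.")) return true;
  if (credentialBasenames.has(base)) return true;
  // Any segment that is a known credential directory (~/.ssh, ~/.aws, ...).
  return filePath.split(path.sep).some((seg) => credentialDirs.has(seg));
}
```

A heuristic like this is useful for auditing, but as the next section argues, only a deny-by-default policy catches credential files that no list anticipates.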
Why Existing Defenses Fail
File permissions are set per user/group, not per application. If the user can read ~/.ssh/id_rsa, every process running as that user — including the AI agent — can read it.
Docker containers can mount only specific directories, but agents often need access to the project directory where .env files reside. Excluding .env from the mount requires the operator to know every credential file location in advance.
Prompt instructions telling the agent "do not read .env files" are bypassable through prompt injection, and the agent may read credential files without recognizing them as sensitive (e.g., reading a config.json that happens to contain API keys).
.gitignore and .dockerignore control version-control tracking and Docker build context, not runtime file access. A file listed in .gitignore is still readable at runtime.
How Action-Level Gating Prevents This
SafeClaw by Authensor intercepts every file_read action before the file is opened. The policy engine evaluates the target path against deny rules.
- Credential path DENY rules. Explicit rules deny file_read actions targeting known credential paths: .env, .ssh/, .aws/, .kube/, .npmrc, .netrc.
- Directory allowlisting. Instead of blocking specific sensitive paths (which requires knowing all of them), operators can allow reads only to specific directories. A policy that allows file_read only within project/src/ and project/docs/ implicitly denies reads to all credential locations.
- Pattern matching. Glob patterns like **/.env* match .env, .env.local, .env.production, and any other dotenv variant in any directory.
- Deny-by-default. Any file path not matching an ALLOW rule is denied. New credential files added to the system are automatically protected without policy updates.
- Sub-millisecond evaluation. The policy engine, backed by 446 tests in TypeScript strict mode, evaluates each path in under a millisecond.
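The mechanics above can be sketched as a first-match-wins, deny-by-default evaluator. This is an illustrative simplification, not SafeClaw's actual engine: the rule shape is flattened, and globToRegExp is a minimal stand-in for a real glob matcher.

```typescript
type Effect = "ALLOW" | "DENY";

interface Rule {
  action: string;
  pathPattern: string;
  effect: Effect;
  reason: string;
}

// Minimal glob-to-regex conversion: ** crosses path separators, * stays
// within one segment. A production engine would use a hardened matcher.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&")
    .replace(/\*\*/g, "\u0000") // placeholder so the single-* pass skips it
    .replace(/\*/g, "[^/]*")
    .replace(/\u0000/g, ".*");
  return new RegExp(`^${escaped}$`);
}

function evaluate(
  rules: Rule[],
  action: string,
  targetPath: string,
): { effect: Effect; reason: string } {
  // First matching rule wins.
  for (const rule of rules) {
    if (rule.action === action && globToRegExp(rule.pathPattern).test(targetPath)) {
      return { effect: rule.effect, reason: rule.reason };
    }
  }
  // Deny-by-default: no matching rule means the action is blocked.
  return { effect: "DENY", reason: "deny-by-default: no matching ALLOW rule" };
}
```

Note that the final return makes the safe outcome the default: a credential file added tomorrow needs no new rule to be protected.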
Example Policy
{
"rules": [
{
"action": "file_read",
"match": { "pathPattern": "*/.env" },
"effect": "DENY",
"reason": "Environment files containing secrets are off-limits"
},
{
"action": "file_read",
"match": { "pathPattern": "/.ssh/" },
"effect": "DENY",
"reason": "SSH credential directory is off-limits"
},
{
"action": "file_read",
"match": { "pathPattern": "/.aws/" },
"effect": "DENY",
"reason": "AWS credential directory is off-limits"
},
{
"action": "file_read",
"match": { "pathPattern": "/.kube/" },
"effect": "DENY",
"reason": "Kubernetes config directory is off-limits"
},
{
"action": "file_read",
"match": { "pathPattern": "**/.npmrc" },
"effect": "DENY",
"reason": "npm auth token file is off-limits"
},
{
"action": "file_read",
"match": { "pathPattern": "/project/src/" },
"effect": "ALLOW",
"reason": "Agent may read project source files"
},
{
"action": "file_read",
"match": { "pathPattern": "/project/tests/" },
"effect": "ALLOW",
"reason": "Agent may read test files"
},
{
"action": "file_read",
"match": { "pathPattern": "**" },
"effect": "DENY",
"reason": "All other file reads denied"
}
]
}
Detection in Audit Trail
SafeClaw's SHA-256 hash chain audit trail records credential read attempts:
[2026-02-13T10:00:00Z] action=file_read path=/home/user/.aws/credentials agent=infra-agent-04 verdict=DENY rule="AWS credential directory is off-limits" hash=a8c4d1...
[2026-02-13T10:00:15Z] action=file_read path=/home/user/.ssh/id_ed25519 agent=infra-agent-04 verdict=DENY rule="SSH credential directory is off-limits" hash=b9d5e2...
Repeated DENY entries for credential paths from a single agent indicate either a misconfigured task, prompt injection, or a compromised agent. The audit trail provides full path, timestamp, and agent ID for each attempt. Each entry is chained via SHA-256 hash to the previous entry, preventing retroactive tampering. The control plane receives only path metadata, never file contents. Review the audit trail via the browser dashboard at safeclaw.onrender.com.
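Hash chaining can be sketched as follows. This is an assumed mechanism, not SafeClaw's exact on-disk format: each entry's hash covers its own fields plus the previous entry's hash, so altering any past entry invalidates every later hash.

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  timestamp: string;
  action: string;
  path: string;
  verdict: string;
  hash: string;
}

// Append an entry, hashing its fields together with the previous hash.
function appendEntry(
  chain: AuditEntry[],
  entry: Omit<AuditEntry, "hash">,
): AuditEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256")
    .update(prevHash + JSON.stringify(entry))
    .digest("hex");
  return [...chain, { ...entry, hash }];
}

// Recompute every hash from the start; any edited entry breaks the chain.
function verifyChain(chain: AuditEntry[]): boolean {
  let prevHash = "GENESIS";
  for (const { hash, ...fields } of chain) {
    const expected = createHash("sha256")
      .update(prevHash + JSON.stringify(fields))
      .digest("hex");
    if (expected !== hash) return false;
    prevHash = hash;
  }
  return true;
}
```

Because each hash folds in its predecessor, retroactively changing a DENY to an ALLOW would require recomputing every subsequent entry, which is detectable against any independently stored copy of the chain head.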
Cross-References
- AI Agent Security Risks FAQ — Credential exposure as a primary agent risk
- API Key Exfiltration Threat — Complete exfiltration chain from read to transmit
- SafeClaw vs File Permissions Comparison — Why OS file permissions do not solve this
- Deny-by-Default Definition — How unpermitted paths are automatically blocked
- Use Case: Claude Code Developer — Protecting credential files during AI-assisted development
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw