2026-01-13 · Authensor

AI Agents Reading Credential Files

Threat Description

AI agents with file-read tool access can read any file the host process has permission to read, including credential files: .env (API keys, database URLs), ~/.ssh/id_rsa (SSH private keys), ~/.aws/credentials (AWS access keys), ~/.config/gcloud/application_default_credentials.json (GCP application default credentials), and ~/.kube/config (Kubernetes cluster credentials). The agent may read these files deliberately (through a malicious or misguided task), through prompt injection, or as a side effect of a broad file search. Once credentials are in the agent's context window, they can be logged, transmitted, or used in further actions.
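
To make the exposure concrete, the sketch below shows the typical shape of a file_read tool. It is illustrative only, not taken from any particular framework, and the names are hypothetical.

// Hypothetical sketch of an unrestricted file_read tool, the default shape
// in most agent frameworks. Nothing here distinguishes project source from
// ~/.aws/credentials: any path the host process can read flows straight
// into the model's context window.
import { readFile } from "node:fs/promises";

interface FileReadParams {
  path: string;
}

async function fileReadTool(params: FileReadParams): Promise<string> {
  // No path policy, no allowlist, no deny rules: the only gate is the
  // operating system's per-user file permissions.
  return readFile(params.path, "utf8");
}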

Attack Vector

  1. An AI agent is given a task that involves exploring the filesystem — "find configuration files," "check the project setup," or "review the deployment config."
  2. The agent issues a file_read action targeting a credential file. This may be direct (reading .env) or indirect (reading a directory listing, then reading files found within it).
  3. The credential contents enter the agent's context window and are available for use in subsequent actions.
  4. The agent may include credentials in a network action (sending them to an API), a file_write action (writing them to a log or output file), or a shell_exec action (using them as command arguments).
  5. The credentials are exposed through whichever channel the agent uses.

Direct credential read:

{
  "action": "file_read",
  "params": {
    "path": "/home/user/.aws/credentials"
  },
  "agentId": "infra-agent-04",
  "timestamp": "2026-02-13T10:00:00Z"
}

Indirect credential read (SSH private key discovered via an earlier directory listing):

{
  "action": "file_read",
  "params": {
    "path": "/home/user/.ssh/id_ed25519"
  },
  "agentId": "infra-agent-04",
  "timestamp": "2026-02-13T10:00:15Z"
}

Credential in shell command:

{
  "action": "shell_exec",
  "params": {
    "command": "aws s3 ls --profile stolen-creds"
  },
  "agentId": "infra-agent-04",
  "timestamp": "2026-02-13T10:00:30Z"
}

Real-World Context

The Clawdbot incident is the most significant documented case. Clawdbot, an AI coding agent, leaked 1.5 million API keys in under a month. The agent had unrestricted file_read access and read .env and configuration files as part of its normal operation. The keys were then transmitted externally through the agent's network capabilities. No action-level control existed to prevent either the read or the transmit.

This threat affects every AI agent framework that provides file tools: Claude Code, Cursor, Windsurf, LangChain agents, CrewAI agents, AutoGPT, and any custom agent with filesystem access. The default behavior in all these frameworks is to grant the agent read access to all files accessible by the host process.

Common credential file paths that agents access:

  .env and its variants (API keys, database URLs, other project secrets)
  ~/.ssh/id_rsa and ~/.ssh/id_ed25519 (SSH private keys)
  ~/.aws/credentials (AWS access keys)
  ~/.config/gcloud/application_default_credentials.json (GCP application default credentials)
  ~/.kube/config (Kubernetes cluster credentials)
  ~/.npmrc and ~/.netrc (npm and generic service auth tokens)

Why Existing Defenses Fail

File permissions are set per user/group, not per application. If the user can read ~/.ssh/id_rsa, every process running as that user — including the AI agent — can read it.
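
The snippet below makes this concrete. It is illustrative only: any process running under the user's account, the agent's host process included, can read the key without any special privilege, and the key path is an assumption for the example.

// Illustrative only: this succeeds for any process running as the user,
// because POSIX file permissions are evaluated per user, not per application.
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const key = readFileSync(join(homedir(), ".ssh", "id_ed25519"), "utf8");
console.log(`read ${key.length} characters of private key material`);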

Docker containers can mount only specific directories, but agents often need access to the project directory where .env files reside. Excluding .env from the mount requires the operator to know every credential file location in advance.

Prompt instructions telling the agent "do not read .env files" are bypassable through prompt injection, and the agent may read credential files without recognizing them as sensitive (e.g., reading a config.json that happens to contain API keys).

Git ignore and .dockerignore control version control and Docker context, not runtime file access. A file listed in .gitignore is still readable at runtime.

How Action-Level Gating Prevents This

SafeClaw by Authensor intercepts every file_read action before the file is opened. The policy engine evaluates the target path against deny rules.

  1. Credential path DENY rules. Explicit rules deny file_read actions targeting known credential paths: .env, .ssh/, .aws/, .kube/, .npmrc, .netrc.
  2. Directory allowlisting. Instead of blocking specific sensitive paths (which requires knowing all of them), operators can allow reads only within specific directories. A policy that allows file_read only within project/src/ and project/tests/ implicitly denies reads to every credential location.
  3. Pattern matching. Glob patterns like **/.env* match .env, .env.local, .env.production, and any other dotenv variant in any directory.
  4. Deny-by-default. Any file path that matches no ALLOW rule is denied, so new credential files added to the system are protected without a policy update (a minimal evaluation sketch follows this list).
  5. Sub-millisecond evaluation. The policy engine, backed by 446 tests in TypeScript strict mode, evaluates each path in under a millisecond.
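
The sketch below illustrates the evaluation model conceptually: ordered rules, glob matching, first match wins, deny by default. It is not SafeClaw's actual engine; the rule shape mirrors the example policy in the next section, and globToRegExp is a simplified stand-in for a real glob matcher.

// Conceptual sketch of deny-by-default path evaluation; not SafeClaw's
// actual engine. Rules are checked in order, the first match wins, and a
// path that matches no rule is denied.
type Effect = "ALLOW" | "DENY";

interface Rule {
  pathPattern: string;
  effect: Effect;
  reason: string;
}

// Minimal glob support: "**" crosses directory separators, "*" does not.
function globToRegExp(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // placeholder for **
    .replace(/\*/g, "[^/]*")              // * stays within one path segment
    .replace(/\u0000/g, ".*");            // ** spans any number of segments
  return new RegExp(`^${escaped}$`);
}

function evaluate(rules: Rule[], path: string): { effect: Effect; reason: string } {
  for (const rule of rules) {
    if (globToRegExp(rule.pathPattern).test(path)) {
      return { effect: rule.effect, reason: rule.reason };
    }
  }
  return { effect: "DENY", reason: "deny-by-default: no rule matched" };
}

// The credential read from the attack vector above is denied, and so is
// any path the policy never mentions.
const rules: Rule[] = [
  { pathPattern: "**/.aws/**", effect: "DENY", reason: "AWS credential directory is off-limits" },
  { pathPattern: "/project/src/**", effect: "ALLOW", reason: "Agent may read project source files" },
];
console.log(evaluate(rules, "/home/user/.aws/credentials")); // DENY
console.log(evaluate(rules, "/tmp/scratch.txt"));            // DENY (no rule matched)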

Example Policy

{
  "rules": [
    {
      "action": "file_read",
      "match": { "pathPattern": "*/.env" },
      "effect": "DENY",
      "reason": "Environment files containing secrets are off-limits"
    },
    {
      "action": "file_read",
      "match": { "pathPattern": "/.ssh/" },
      "effect": "DENY",
      "reason": "SSH credential directory is off-limits"
    },
    {
      "action": "file_read",
      "match": { "pathPattern": "/.aws/" },
      "effect": "DENY",
      "reason": "AWS credential directory is off-limits"
    },
    {
      "action": "file_read",
      "match": { "pathPattern": "/.kube/" },
      "effect": "DENY",
      "reason": "Kubernetes config directory is off-limits"
    },
    {
      "action": "file_read",
      "match": { "pathPattern": "**/.npmrc" },
      "effect": "DENY",
      "reason": "npm auth token file is off-limits"
    },
    {
      "action": "file_read",
      "match": { "pathPattern": "/project/src/" },
      "effect": "ALLOW",
      "reason": "Agent may read project source files"
    },
    {
      "action": "file_read",
      "match": { "pathPattern": "/project/tests/" },
      "effect": "ALLOW",
      "reason": "Agent may read test files"
    },
    {
      "action": "file_read",
      "match": { "pathPattern": "**" },
      "effect": "DENY",
      "reason": "All other file reads denied"
    }
  ]
}

Detection in Audit Trail

SafeClaw's SHA-256 hash chain audit trail records credential read attempts:

[2026-02-13T10:00:00Z] action=file_read path=/home/user/.aws/credentials agent=infra-agent-04 verdict=DENY rule="AWS credential directory is off-limits" hash=a8c4d1...
[2026-02-13T10:00:15Z] action=file_read path=/home/user/.ssh/id_ed25519 agent=infra-agent-04 verdict=DENY rule="SSH credential directory is off-limits" hash=b9d5e2...

Repeated DENY entries for credential paths from a single agent indicate a misconfigured task, prompt injection, or a compromised agent. The audit trail records the full path, timestamp, and agent ID for each attempt. Each entry is chained to the previous one via a SHA-256 hash, preventing retroactive tampering. The control plane receives only path metadata, never file contents. Review the audit trail via the browser dashboard at safeclaw.onrender.com.
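
Because each hash commits to the previous entry, editing or deleting an earlier record breaks verification from that point on. The sketch below is a generic illustration of hash-chain verification, assuming each entry stores the previous entry's hash; the field names and canonical serialization are assumptions, not SafeClaw's actual log format.

// Generic hash-chain verification sketch; field names and the canonical
// serialization are illustrative assumptions, not SafeClaw's log format.
import { createHash } from "node:crypto";

interface AuditEntry {
  timestamp: string;
  action: string;
  path: string;
  agent: string;
  verdict: "ALLOW" | "DENY";
  prevHash: string; // hash of the previous entry ("GENESIS" for the first)
  hash: string;     // SHA-256 over this entry's fields plus prevHash
}

function entryHash(e: Omit<AuditEntry, "hash">): string {
  const canonical = [e.timestamp, e.action, e.path, e.agent, e.verdict, e.prevHash].join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

// Returns the index of the first broken entry, or -1 if the chain is intact.
function verifyChain(entries: AuditEntry[]): number {
  let prev = "GENESIS";
  for (let i = 0; i < entries.length; i++) {
    const e = entries[i];
    if (e.prevHash !== prev || entryHash(e) !== e.hash) return i;
    prev = e.hash;
  }
  return -1;
}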

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw