2025-12-08 · Authensor

AI Agent File Access Risks: What Your Coding Agent Can Read on Your Machine

When you give an AI coding agent access to your project directory, you're not just giving it access to your source code. You're giving it access to everything your user account can read. Every file. Every directory. Every secret you've ever stored on that machine.

This isn't theoretical. Clawdbot leaked over 1.5 million API keys in under a month. The mechanism was simple: the agent read files containing secrets and transmitted them outward. No exploit required. Just standard file access permissions doing exactly what they were designed to do.

Let's map out exactly what's at risk.

The .env File: Your Secrets in Plain Text

Every developer has them. .env files sitting in project roots, containing API keys, database connection strings, OAuth secrets, and service credentials.

# .env
DATABASE_URL=postgres://admin:s3cretPassw0rd@prod-db.internal:5432/main
STRIPE_SECRET_KEY=sk_live_abc123...
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=wJalrXUtn...
OPENAI_API_KEY=sk-proj-...

Your .gitignore keeps these out of version control. But an AI agent doesn't clone your repo. It operates on your local filesystem. The .gitignore is irrelevant. The agent reads .env the same way it reads index.ts -- it just opens the file.

Most AI coding agents need file read access to function. That's the whole point. But nothing in the default configuration of Claude Code, Cursor, or similar tools distinguishes between "read src/app.ts" and "read .env."
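
To the operating system, the two reads are indistinguishable. A minimal Node.js sketch makes the point (the paths are illustrative):

// Both reads go through the same syscalls with the same permissions.
import { readFileSync } from "node:fs";

const source = readFileSync("src/app.ts", "utf8"); // the file you meant to share
const secrets = readFileSync(".env", "utf8");      // the file you didn't

Any distinction between the two has to come from a layer above the filesystem, because the OS won't make one.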

SSH Keys: The Master Keys to Your Infrastructure

~/.ssh/
  id_rsa          # Private key, often unencrypted on dev machines
  id_ed25519      # Another private key
  config          # Lists every server you connect to
  known_hosts     # Confirms which servers you've accessed

Your SSH private key is a plain text file. If it's not passphrase-protected (and on many dev machines, it isn't), an agent that reads ~/.ssh/id_rsa has full access to every server, Git remote, and CI system that key authenticates against.

The config file is equally dangerous. It's a map of your infrastructure: hostnames, usernames, ports, jump hosts. Combined with the private key, it's a complete access package.
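
A single config entry shows how much it reveals (the hostnames and usernames below are made up):

# ~/.ssh/config
Host prod-api
  HostName api-1.internal.example.com
  User deploy
  Port 2222
  ProxyJump bastion.example.com
  IdentityFile ~/.ssh/id_ed25519

One entry names the bastion to pivot through, the account to use, and exactly which private key opens the door.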

Cloud Provider Credentials

AWS, GCP, and Azure all store credentials in your home directory by default.

~/.aws/credentials        # AWS access keys
~/.aws/config             # Region, account IDs, role ARNs
~/.config/gcloud/         # GCP service account keys, OAuth tokens
~/.azure/                 # Azure CLI tokens

These aren't project files. They're in your home directory. But an AI agent running as your user has the same filesystem permissions you do. If you can cat ~/.aws/credentials, so can the agent.
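
There's no obfuscation to defeat, either. The file is plain INI-style text (the values below are AWS's own documentation placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY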

AWS credentials stored here often have broad permissions. Developer IAM users frequently have AdministratorAccess or similarly overpowered policies. One leaked credential pair can compromise an entire AWS account.

Browser Profiles: Sessions, Cookies, Saved Passwords

This one surprises people.

# Chrome on macOS
~/Library/Application Support/Google/Chrome/Default/
  Login Data          # Saved passwords (encrypted, but key is accessible)
  Cookies             # Session cookies for every logged-in service
  Web Data            # Autofill data, credit card numbers
  History             # Every URL you've visited

# Firefox on macOS
~/Library/Application Support/Firefox/Profiles/*/
  logins.json         # Encrypted passwords
  cookies.sqlite      # Session cookies
  key4.db             # Encryption key

An AI agent with file access can read browser profile databases. Session cookies for GitHub, AWS Console, Slack, your company's internal tools -- they're all files on disk. A valid session cookie is as good as a password. In many cases, it bypasses MFA entirely because the session was already authenticated.
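
Exfiltration doesn't require any browser API. The cookie store is an ordinary SQLite file, and copying it is one call. A hedged sketch, assuming the default Chrome profile path on macOS (decrypting the values is a separate step, since Chrome keeps its encryption key in the system Keychain):

// The Chrome cookie database is a regular file, readable by any process running as you.
import { copyFileSync } from "node:fs";
import { homedir } from "node:os";

const cookieDb = `${homedir()}/Library/Application Support/Google/Chrome/Default/Cookies`;
copyFileSync(cookieDb, "/tmp/cookies-copy.sqlite"); // no prompt, no elevation required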

Package Manager Tokens

~/.npmrc              # npm auth tokens
~/.pypirc             # PyPI upload credentials
~/.gem/credentials    # RubyGems API key
~/.docker/config.json # Docker Hub credentials
~/.kube/config        # Kubernetes cluster credentials + certs

These tokens allow publishing packages, pushing container images, and accessing production Kubernetes clusters. An agent with access to ~/.npmrc could theoretically publish a malicious package under your name. An agent with ~/.kube/config has kubectl access to your clusters.
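
And like .env files, these are plain text. A publish-capable ~/.npmrc is a single line (the token below is made up):

# ~/.npmrc
//registry.npmjs.org/:_authToken=npm_aBcD3fGh1jKlMn0pQrStUvWx

That one line is all npm publish needs to push a package under your identity.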

The Project Directory Itself

Beyond secrets, your source code has value. Proprietary algorithms, business logic, customer data processing pipelines, internal API schemas. An AI agent reads all of it to do its job. The question is what it does with that information afterward.

Agents that phone home -- sending context to cloud APIs for processing -- transmit your source code over the network. Even agents that claim local processing may send telemetry, error reports, or "anonymized" usage data that contains fragments of your code.

Git History: Secrets That Were "Deleted"

# List every commit that ever touched a .env file
git log --all --full-history -- "*.env"
# Print the .env contents as they existed 50 commits back
git show HEAD~50:.env

Developers accidentally commit secrets, then remove them in a subsequent commit. The secret is still in Git history. An AI agent with access to the .git directory can traverse the entire commit history and extract every secret that was ever committed, even briefly.

The Real Problem: Permissions Are Binary

Operating systems enforce file permissions at the user level. If your user can read a file, any process running as your user can read it too. AI agents run as your user. There's no OS-level mechanism to say "this process can read src/ but not ~/.ssh/" without containerization or sandboxing -- and most developers don't run their coding agents in containers.

This is the gap that SafeClaw addresses.

Action-Level Gating: The Fix

SafeClaw implements action-level gating for AI agents. Instead of binary "can access filesystem / cannot access filesystem," every file access is evaluated against a policy.

// SafeClaw policy: deny reads of sensitive file patterns,
// allow only the project source tree, deny everything else
{
  action: "file_read",
  rules: [
    { path: "**/.env", effect: "deny" },
    { path: "~/.ssh/**", effect: "deny" },
    { path: "**/credentials", effect: "deny" },
    { path: "~/.aws/**", effect: "deny" },
    { path: "~/.kube/**", effect: "deny" },
    { path: "./src/**", effect: "allow" }
  ]
}

Each file read is checked against the ruleset before it executes. Sub-millisecond evaluation, local, no network round trip. The agent can read your source code. It cannot read your credentials. The deny-by-default architecture means anything not explicitly allowed is blocked.
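
Conceptually, the evaluation loop is small. Here's an illustrative TypeScript sketch of deny-by-default matching -- not SafeClaw's actual implementation, just the shape of the idea, with a minimal glob matcher supporting * and **:

type Effect = "allow" | "deny";
interface Rule { path: string; effect: Effect; }

// Translate a minimal glob into a regex: * matches within one path
// segment, ** matches across segments.
function matchGlob(pattern: string, path: string): boolean {
  const re = new RegExp(
    "^" +
      pattern
        .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
        .replace(/\*\*/g, "\u0000")           // placeholder for **
        .replace(/\*/g, "[^/]*")              // * stays within a segment
        .replace(/\u0000/g, ".*") +           // ** crosses segments
      "$"
  );
  return re.test(path);
}

// First matching rule wins; anything no rule covers is denied.
function evaluate(rules: Rule[], requestedPath: string): Effect {
  for (const rule of rules) {
    if (matchGlob(rule.path, requestedPath)) return rule.effect;
  }
  return "deny"; // deny-by-default
}

const rules: Rule[] = [
  { path: "**/.env", effect: "deny" },
  { path: "./src/**", effect: "allow" },
];

evaluate(rules, "./src/app.ts"); // "allow"
evaluate(rules, "./src/.env");   // "deny" -- the deny rule matches first
evaluate(rules, "/etc/passwd");  // "deny" -- nothing matched, default applies

Note that ordering matters in this sketch: the deny rules come before the allow, so a sensitive file inside the project tree is still blocked.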

This is fundamentally different from sandboxing, which typically operates at the process or directory level and can't distinguish between reading src/app.ts and reading .env in the same directory.

Setting It Up

npx @authensor/safeclaw

The browser dashboard includes a setup wizard. No CLI configuration needed. Define your file access policies, enable simulation mode to see what would be blocked without breaking your workflow, then switch to enforcement.

SafeClaw has zero third-party dependencies, runs 446 automated tests in TypeScript strict mode, and every policy evaluation is recorded in a tamper-evident audit trail using SHA-256 hash chains. The client is 100% open source. The control plane only sees metadata.

The Bottom Line

Your AI coding agent has the same file access as you do. That means .env, .ssh, cloud credentials, browser profiles, package manager tokens, and your entire Git history. The question isn't whether the agent can read these files. It can. The question is whether anything stops it.

Right now, for most developers, the answer is nothing.

SafeClaw changes that. Free tier available, renewable 7-day keys, no credit card required. Visit safeclaw.onrender.com to get started.

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw