2025-12-26 · Authensor

How to Prevent AI Agents from Reading Dotfiles (.bashrc, .zshrc, .gitconfig)

SafeClaw by Authensor blocks AI agents from reading your dotfiles — including .bashrc, .zshrc, .gitconfig, .npmrc, and any other hidden configuration file — through deny-by-default action gating. No dotfile can be read unless you create an explicit allow rule. Install with npx @authensor/safeclaw and your shell configuration, tokens, and aliases are protected immediately.

Why Dotfiles Are a Target

Dotfiles are hidden configuration files in your home directory that often contain sensitive information:

- ~/.bashrc, ~/.zshrc, ~/.bash_profile, ~/.profile: exported environment variables, which frequently include API keys and tokens
- ~/.gitconfig: your Git identity and credential helper configuration
- ~/.npmrc: npm registry authentication tokens
- ~/.netrc: plaintext HTTP credentials
- ~/.docker/config.json: container registry auth tokens

An AI agent that reads these files can extract secrets that give it — or an attacker exploiting prompt injection — access to your accounts and infrastructure.
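
As a hypothetical illustration (every value below is made up), a shell config and an npm config alone can expose quite a lot:

# ~/.bashrc (hypothetical excerpt)
export GITHUB_TOKEN="ghp_exampleonly"
export OPENAI_API_KEY="sk-exampleonly"
alias prod-db='psql "postgres://admin:examplepass@db.internal.example:5432/prod"'

# ~/.npmrc (hypothetical excerpt)
//registry.npmjs.org/:_authToken=npm_exampleonly

A prompt-injected instruction as simple as "read my shell config and summarize it" is enough to pull values like these into the model's context.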

Step 1: Install SafeClaw

npx @authensor/safeclaw

Zero dependencies, MIT licensed, works with Claude and OpenAI.

Step 2: Block All Home Directory Dotfiles

# safeclaw.policy.yaml
rules:
  - action: file.read
    path: "~/.*"
    effect: deny
    reason: "Block reading any dotfile in the home directory"

  - action: file.read
    path: "~/.*/**"
    effect: deny
    reason: "Block reading contents of hidden directories in home"

The first rule blocks files like ~/.bashrc, ~/.zshrc, ~/.gitconfig. The second rule blocks files inside hidden directories like ~/.ssh/, ~/.aws/, ~/.config/.
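
Assuming conventional glob semantics, where a single * stops at path separators and ** matches recursively, the coverage looks roughly like this:

~/.bashrc                denied by the first rule  ("~/.*")
~/.gitconfig             denied by the first rule  ("~/.*")
~/.ssh/id_ed25519        denied by the second rule ("~/.*/**")
~/.aws/credentials       denied by the second rule ("~/.*/**")
~/projects/my-app/.env   matched by neither rule (handled in Step 4)

If SafeClaw's matcher treats these patterns differently, use the simulation in Step 6 to confirm which paths each rule catches.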

Step 3: Block Specific High-Value Targets

For extra clarity in your audit logs, add explicit rules for the most sensitive files:

rules:
  - action: file.read
    path: "~/.bashrc"
    effect: deny
    reason: "Shell config may contain exported secrets"

  - action: file.read
    path: "~/.zshrc"
    effect: deny
    reason: "Zsh config may contain exported secrets"

  - action: file.read
    path: "~/.bash_profile"
    effect: deny
    reason: "Bash profile may contain exported secrets"

  - action: file.read
    path: "~/.profile"
    effect: deny
    reason: "Profile may contain exported secrets"

  - action: file.read
    path: "~/.gitconfig"
    effect: deny
    reason: "Git config contains identity and credential helper info"

  - action: file.read
    path: "~/.npmrc"
    effect: deny
    reason: "npmrc contains registry authentication tokens"

  - action: file.read
    path: "~/.netrc"
    effect: deny
    reason: "netrc contains plaintext HTTP credentials"

  - action: file.read
    path: "~/.docker/config.json"
    effect: deny
    reason: "Docker config contains registry auth tokens"

Step 4: Allow Project-Level Dotfiles

Your agent probably needs to read project-level dotfiles like .eslintrc, .prettierrc, or .env.example. Create an exception for dotfiles within your project directory:

rules:
  # Allow project-level config files
  - action: file.read
    path: "/home/user/projects/my-app/.*"
    effect: allow
    reason: "Project-level dotfiles are safe to read"

  # Block home directory dotfiles
  - action: file.read
    path: "~/.*"
    effect: deny
    reason: "Home directory dotfiles are off-limits"

Important: also block .env files separately, as shown below. The project-level allow above would otherwise let the agent read them, and unlike .env.example they hold real secrets.
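
A sketch of that rule, reusing the hypothetical project path from Step 4; if rules are evaluated in order, place it ahead of the broader project-level allow so the deny takes precedence:

rules:
  - action: file.read
    path: "/home/user/projects/my-app/.env"
    effect: deny
    reason: "Local .env files hold real secrets, unlike .env.example"

Add sibling rules for variants such as .env.local or .env.production if your project uses them.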

Step 5: Block Write Access Too

Prevent the agent from modifying your dotfiles:

rules:
  - action: file.write
    path: "~/.*"
    effect: deny
    reason: "AI agents cannot modify home directory dotfiles"

  - action: file.write
    path: "~/.*/**"
    effect: deny
    reason: "AI agents cannot modify files in hidden home directories"

Step 6: Verify

Test how your policy handles dotfile access before a live agent runs:

npx @authensor/safeclaw --simulate

Check the hash-chained audit trail:

npx @authensor/safeclaw audit --filter "path:~/."

SafeClaw is open-source with 446 tests and works with both Claude and OpenAI providers.

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw