How to Prevent Claude from Reading My .ssh Folder
SafeClaw by Authensor blocks Claude Code from reading your .ssh folder by default — no configuration required. Because SafeClaw uses deny-by-default action gating, any file-read targeting ~/.ssh is automatically denied unless you explicitly allow it in your policy file. Install it with npx @authensor/safeclaw and your SSH keys are protected immediately.
Why This Matters
Your ~/.ssh directory contains private keys, known_hosts, and config files that grant access to remote servers, GitHub repositories, and cloud infrastructure. If an AI agent reads these files — even accidentally through a broad directory scan — those secrets could end up in context windows, logs, or worse, exfiltrated through prompt injection attacks.
Claude Code is powerful, but by default it can access any file your user account can read. That includes ~/.ssh/id_rsa, ~/.ssh/id_ed25519, and every other key in the directory.
Step 1: Install SafeClaw
Run a single command in your terminal:
npx @authensor/safeclaw
SafeClaw requires zero dependencies and sets up deny-by-default gating instantly. It works with Claude Code, OpenAI agents, and any framework that exposes an action layer.
Step 2: Verify the Default Policy
SafeClaw ships with a default policy that already blocks sensitive file paths. You can inspect it:
# safeclaw.policy.yaml
rules:
  - action: file.read
    path: "~/.ssh/**"
    effect: deny
    reason: "SSH private keys and config are off-limits to AI agents"
  - action: file.read
    path: "~/.ssh/known_hosts"
    effect: deny
    reason: "Known hosts reveal server infrastructure"
Under deny-by-default, even without these explicit rules, the read would be blocked. These rules exist for clarity in audit logs.
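To make that behavior concrete, here is a minimal sketch of deny-by-default evaluation in TypeScript. This is illustrative only, not SafeClaw's actual source: the Rule shape, the matches helper, and the evaluate function are assumptions modeled on the policy format above.

// Illustrative model of deny-by-default gating (not SafeClaw's real source).
type Effect = "allow" | "deny";

interface Rule {
  action: string; // e.g. "file.read"
  path: string;   // glob pattern, e.g. "~/.ssh/**"
  effect: Effect;
  reason?: string;
}

// Toy glob check: "~/.ssh/**" matches anything under ~/.ssh.
function matches(pattern: string, path: string): boolean {
  if (pattern.endsWith("/**")) {
    return path.startsWith(pattern.slice(0, -2)); // keeps the trailing "/"
  }
  return pattern === path;
}

function evaluate(rules: Rule[], action: string, path: string): Effect {
  for (const rule of rules) {
    if (rule.action === action && matches(rule.path, path)) {
      return rule.effect;
    }
  }
  return "deny"; // deny-by-default: no matching rule means the action is blocked
}

// Even with an empty rule list, the read is denied:
console.log(evaluate([], "file.read", "~/.ssh/id_rsa")); // "deny"

The key line is the fallthrough at the bottom of evaluate: the absence of a rule is itself a deny.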
Step 3: Add a Surgical Allow Rule (Optional)
If your workflow requires Claude to read a specific SSH config value — say, a hostname from ~/.ssh/config — you can create a narrow exception:
rules:
  - action: file.read
    path: "~/.ssh/config"
    effect: allow
    conditions:
      - human_approval: required
    reason: "Allow reading SSH config only with human approval"
  - action: file.read
    path: "~/.ssh/**"
    effect: deny
    reason: "Block all other SSH file access"
SafeClaw uses first-match-wins evaluation, so the specific allow rule must appear before the broader deny rule. The human_approval: required condition means Claude will pause and ask you before proceeding.
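Reusing the toy evaluator from Step 2, you can see why ordering matters: putting the broad deny first would shadow the narrow allow. Again, this illustrates the evaluation strategy in general, not SafeClaw's code.

// Rule order decides the outcome under first-match-wins.
const ordered: Rule[] = [
  { action: "file.read", path: "~/.ssh/config", effect: "allow" },
  { action: "file.read", path: "~/.ssh/**", effect: "deny" },
];
const shadowed: Rule[] = [
  { action: "file.read", path: "~/.ssh/**", effect: "deny" },
  { action: "file.read", path: "~/.ssh/config", effect: "allow" }, // never reached
];

console.log(evaluate(ordered, "file.read", "~/.ssh/config"));  // "allow"
console.log(evaluate(shadowed, "file.read", "~/.ssh/config")); // "deny"
console.log(evaluate(ordered, "file.read", "~/.ssh/id_rsa"));  // "deny"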
Step 4: Test in Simulation Mode
Before enforcing, run SafeClaw in simulation mode to see what would be blocked without actually blocking anything:
npx @authensor/safeclaw --simulate
Simulation mode logs every action request and shows whether it would be allowed or denied. This is ideal for tuning your policy before going live.
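In terms of the toy evaluator above, simulation mode amounts to computing the verdict and logging it without enforcing it. A rough sketch of the idea follows; the gate function and its log format are hypothetical, not the real CLI's internals.

// Illustrative: simulate logs the verdict but lets every action proceed.
type Mode = "simulate" | "enforce";

function gate(mode: Mode, rules: Rule[], action: string, path: string): boolean {
  const verdict = evaluate(rules, action, path); // same toy evaluator as above
  console.log(`[${mode}] ${action} ${path} -> ${verdict}`);
  if (mode === "enforce" && verdict === "deny") {
    return false; // enforce mode blocks the action
  }
  return true; // simulate mode only records what would have happened
}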
Step 5: Check the Audit Trail
Every denied action is recorded in SafeClaw's hash-chained audit log. You can verify that SSH folder access attempts are being caught:
npx @authensor/safeclaw audit --filter "path:~/.ssh"
The hash-chained log is tamper-evident: each entry references the hash of the previous entry, so no record can be deleted or modified without breaking the chain on the next verification.
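The hash-chaining technique itself is easy to sketch. The following TypeScript, using Node's built-in crypto module, shows the general idea; the entry fields and serialization are assumptions for illustration, not SafeClaw's actual log format.

import { createHash } from "node:crypto";

// Illustrative hash-chained audit log (entry format is assumed, not SafeClaw's).
interface AuditEntry {
  action: string;
  path: string;
  verdict: string;
  prevHash: string; // hash of the previous entry
  hash: string;     // hash over this entry's fields plus prevHash
}

function appendEntry(log: AuditEntry[], action: string, path: string, verdict: string): void {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "GENESIS";
  const hash = createHash("sha256")
    .update(`${action}|${path}|${verdict}|${prevHash}`)
    .digest("hex");
  log.push({ action, path, verdict, prevHash, hash });
}

// Recompute every hash in order; an edited or deleted entry breaks the chain.
function verifyChain(log: AuditEntry[]): boolean {
  let prevHash = "GENESIS";
  for (const entry of log) {
    const expected = createHash("sha256")
      .update(`${entry.action}|${entry.path}|${entry.verdict}|${prevHash}`)
      .digest("hex");
    if (entry.prevHash !== prevHash || entry.hash !== expected) return false;
    prevHash = entry.hash;
  }
  return true;
}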
What Gets Blocked
With the default policy, Claude cannot:
- Read ~/.ssh/id_rsa or ~/.ssh/id_ed25519 (private keys)
- Read ~/.ssh/authorized_keys (reveals trust relationships)
- Read ~/.ssh/config (reveals hostnames, usernames, proxy settings)
- List the contents of ~/.ssh/ (directory enumeration)
- Copy SSH files to another location to read them indirectly
However the request is phrased, if the resolved path falls under ~/.ssh, it is denied.
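Catching the indirect cases depends on normalizing every path an action touches before matching. Here is a rough sketch of that idea; the normalize and touchesSsh helpers are hypothetical illustrations, not SafeClaw's implementation.

import { resolve } from "node:path";
import { homedir } from "node:os";

// Illustrative: expand "~" and normalize before matching, so roundabout paths
// (e.g. "/home/user/../user/.ssh/id_rsa") still hit the ~/.ssh rule.
function normalize(p: string): string {
  const expanded = p.startsWith("~") ? homedir() + p.slice(1) : p;
  return resolve(expanded);
}

function touchesSsh(paths: string[]): boolean {
  const sshDir = normalize("~/.ssh");
  return paths.some((p) => {
    const n = normalize(p);
    return n === sshDir || n.startsWith(sshDir + "/");
  });
}

// A copy action carries two paths; the source alone is enough to deny it.
console.log(touchesSsh([normalize("~/.ssh/id_rsa"), "/tmp/out"])); // true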
Why SafeClaw Over File Permissions
You could chmod 600 your SSH files (and you should), but that only works if the AI agent runs as a different user. Most local AI tools — including Claude Code — run as your user. SafeClaw operates at the action-gating layer, above the OS, catching requests before they ever reach the filesystem.
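To see where the check lives, consider a file-read tool wrapped by the evaluator from the earlier sketches. Because policy runs before the read, a denied request never issues a filesystem call at all. As before, this is an illustrative sketch, not SafeClaw's code.

import { readFileSync } from "node:fs";

// Illustrative: the agent's file tool is wrapped, so the policy check runs first.
// chmod cannot help here, because the agent runs as the same OS user you do.
function gatedRead(rules: Rule[], path: string): string {
  if (evaluate(rules, "file.read", path) === "deny") {
    throw new Error(`Denied by policy: file.read ${path}`); // never touches the fs
  }
  return readFileSync(path, "utf8"); // only reached for allowed paths
}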
SafeClaw is open-source, MIT licensed, and backed by 446 tests. It works with both Claude and OpenAI providers.
Related Pages
- How to Prevent AI Agents from Reading Dotfiles (.bashrc, .zshrc, .gitconfig)
- How to Block AI Agents from Accessing AWS Credentials
- What Is Deny-by-Default for AI Agents?
- How to Audit AI Agent Actions
Try SafeClaw
Action-level gating for AI agents. Set it up in your terminal in 60 seconds.
$ npx @authensor/safeclaw