How to Limit an AI Agent to One Directory
SafeClaw by Authensor lets you confine any AI agent to a single directory with a simple YAML policy. Under deny-by-default gating, all file reads, writes, and directory listings outside your specified folder are automatically blocked. Install with npx @authensor/safeclaw, and your agent is sandboxed to one directory in seconds.
Why Directory Isolation Matters
AI agents like Claude Code and GPT-based tools operate with the same filesystem access as your user account. Without restrictions, an agent asked to "fix a bug in my project" could traverse your entire home directory, reading .env files in other projects, browsing your downloads folder, or modifying configuration in unrelated repositories.
Limiting an agent to a single directory is the simplest and most effective safety boundary you can set.
Step 1: Install SafeClaw
npx @authensor/safeclaw
Zero dependencies. Works with Claude, OpenAI, LangChain, CrewAI, and any agent framework.
Step 2: Define the Allowed Directory
Create or edit your safeclaw.policy.yaml to specify the one directory your agent may access:
# safeclaw.policy.yaml
rules:
  # Allow all file operations within the project directory
  - action: file.*
    path: "/home/user/projects/my-app/**"
    effect: allow
    reason: "Agent is scoped to the my-app project directory"

  # Allow shell commands only within the project directory
  - action: shell.execute
    working_directory: "/home/user/projects/my-app"
    effect: allow
    reason: "Shell commands must execute within the project root"

  # Deny everything else (explicit for clarity; deny-by-default handles this)
  - action: "*"
    effect: deny
    reason: "All actions outside the project directory are blocked"
The file.* wildcard covers file.read, file.write, file.delete, and file.list. The ** glob matches the allowed directory and every file and subdirectory beneath it, recursively.
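To make those matching rules concrete, here is a minimal TypeScript sketch of deny-by-default evaluation with an action wildcard and a recursive path glob. It illustrates the behavior described above and is not SafeClaw's implementation; the Rule shape, the first-match-wins ordering, and the evaluate, actionMatches, and pathMatches helpers are assumptions made for this example:

import * as path from "node:path";

type Effect = "allow" | "deny";

interface Rule {
  action: string;   // e.g. "file.*" or "*"
  path?: string;    // e.g. "/home/user/projects/my-app/**"
  effect: Effect;
}

// "file.*" matches "file.read", "file.write", ...; "*" matches every action.
function actionMatches(pattern: string, action: string): boolean {
  if (pattern === "*") return true;
  if (pattern.endsWith(".*")) return action.startsWith(pattern.slice(0, -1));
  return pattern === action;
}

// "<dir>/**" matches the directory itself and anything beneath it.
function pathMatches(pattern: string, target: string): boolean {
  const base = pattern.endsWith("/**") ? pattern.slice(0, -3) : pattern;
  const rel = path.relative(base, path.resolve(target));
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
}

// Deny by default: first matching rule wins; no matching rule means deny.
function evaluate(rules: Rule[], action: string, target: string): Effect {
  for (const rule of rules) {
    if (!actionMatches(rule.action, action)) continue;
    if (rule.path && !pathMatches(rule.path, target)) continue;
    return rule.effect;
  }
  return "deny";
}

const rules: Rule[] = [
  { action: "file.*", path: "/home/user/projects/my-app/**", effect: "allow" },
  { action: "*", effect: "deny" },
];

console.log(evaluate(rules, "file.read", "/home/user/projects/my-app/src/index.ts")); // allow
console.log(evaluate(rules, "file.read", "/home/user/.bashrc"));                      // deny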
Step 3: Block Path Traversal Attempts
A clever agent might try to escape the sandbox using relative paths like ../../etc/passwd. SafeClaw resolves all paths to their canonical absolute form before evaluating rules:
rules:
  - action: file.*
    path: "/home/user/projects/my-app/**"
    effect: allow
    reason: "Scoped to project directory"

  - action: file.*
    path_traversal: block
    reason: "Prevent relative path escape attempts"
Even if the agent constructs a path like /home/user/projects/my-app/../../.ssh/id_rsa, SafeClaw resolves it to /home/user/.ssh/id_rsa and denies it because it falls outside the allowed directory.
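The core of that defense is canonicalizing the path before checking containment. Here is a short TypeScript sketch of the idea, assuming the same project root as above; isInsideRoot is an illustrative helper name, not part of SafeClaw's API:

import * as path from "node:path";

const ALLOWED_ROOT = "/home/user/projects/my-app";

// Collapse ".." segments first, then check containment against the root.
function isInsideRoot(requested: string, root: string = ALLOWED_ROOT): boolean {
  const canonical = path.resolve(requested);   // ".." segments are resolved away
  const rel = path.relative(root, canonical);
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
}

// The traversal attempt from the text resolves to /home/user/.ssh/id_rsa and fails the check.
console.log(isInsideRoot("/home/user/projects/my-app/../../.ssh/id_rsa")); // false
console.log(isInsideRoot("/home/user/projects/my-app/src/index.ts"));      // true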
Step 4: Handle Symlinks
Symbolic links can be used to escape directory restrictions. SafeClaw follows symlinks to their real target and evaluates the policy against the resolved path:
rules:
  - action: file.*
    path: "/home/user/projects/my-app/**"
    resolve_symlinks: true
    effect: allow
    reason: "Only allow access if the real path is within the project"
If someone creates a symlink inside your project that points to /etc/shadow, SafeClaw resolves the link and blocks the read.
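The underlying idea is to resolve the link to its real target before running the same containment check. A minimal sketch using Node's fs.realpathSync; as before, this illustrates the behavior rather than SafeClaw's own code, and realPathIsInsideRoot is a name chosen for this example:

import * as fs from "node:fs";
import * as path from "node:path";

const ALLOWED_ROOT = "/home/user/projects/my-app";

// Follow symlinks to the real path, then verify it still lives under the root.
// Note: fs.realpathSync throws if the path does not exist.
function realPathIsInsideRoot(requested: string, root: string = ALLOWED_ROOT): boolean {
  const real = fs.realpathSync(requested);
  const rel = path.relative(root, real);
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
}

// A symlink at my-app/notes.txt pointing to /etc/shadow resolves to /etc/shadow,
// which is outside the root, so a read through it fails this check.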
Step 5: Test in Simulation Mode
npx @authensor/safeclaw --simulate
Ask your agent to read a file outside the project directory. The simulation log confirms the denial:
[DENIED] file.read: "/home/user/.bashrc"
Rule: "All actions outside the project directory are blocked"
Then ask it to read a file inside the project:
[ALLOWED] file.read: "/home/user/projects/my-app/src/index.ts"
Rule: "Agent is scoped to the my-app project directory"
Step 6: Verify with the Audit Trail
npx @authensor/safeclaw audit --tail 20
SafeClaw's hash-chained audit log records every allowed and denied action, creating a tamper-evident record of exactly what your agent accessed.
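Hash chaining is what makes the log tamper-evident: each entry stores a hash of the previous entry, so editing or deleting any record breaks every hash that follows. Here is a generic TypeScript sketch of the concept; the entry fields and helper names are illustrative, not SafeClaw's on-disk format:

import { createHash } from "node:crypto";

interface AuditEntry {
  action: string;             // e.g. "file.read"
  target: string;             // e.g. "/home/user/.bashrc"
  effect: "allow" | "deny";
  prevHash: string;           // hash of the previous entry ("genesis" for the first)
  hash: string;               // hash over this entry's contents plus prevHash
}

function appendEntry(log: AuditEntry[], action: string, target: string, effect: "allow" | "deny"): void {
  const prevHash = log.length ? log[log.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(`${prevHash}|${action}|${target}|${effect}`)
    .digest("hex");
  log.push({ action, target, effect, prevHash, hash });
}

// Recompute every hash in order; a modified or deleted entry breaks the chain from that point on.
function chainIsIntact(log: AuditEntry[]): boolean {
  let prevHash = "genesis";
  return log.every((e) => {
    const expected = createHash("sha256")
      .update(`${prevHash}|${e.action}|${e.target}|${e.effect}`)
      .digest("hex");
    const ok = e.prevHash === prevHash && e.hash === expected;
    prevHash = e.hash;
    return ok;
  });
}

const log: AuditEntry[] = [];
appendEntry(log, "file.read", "/home/user/projects/my-app/src/index.ts", "allow");
appendEntry(log, "file.read", "/home/user/.bashrc", "deny");
console.log(chainIsIntact(log)); // true
log[1].effect = "allow";         // tamper: flip the denied read to allowed
console.log(chainIsIntact(log)); // false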
Multiple Projects
If you need the agent to access two directories, add a second allow rule:
rules:
  - action: file.*
    path: "/home/user/projects/my-app/**"
    effect: allow

  - action: file.*
    path: "/home/user/projects/shared-lib/**"
    effect: allow

  - action: "*"
    effect: deny
SafeClaw is open-source, MIT licensed, and backed by 446 tests. It works with both Claude and OpenAI providers.
Related Pages
- How to Prevent AI from Creating New Files Outside a Project
- How to Prevent AI from Accessing Other Git Repositories
- Deep Dive: Filesystem Isolation for AI Agents
- How to Make an AI Agent Read-Only (No Write Access)
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw