Students and new developers often give AI agents unrestricted access to their machines without understanding the risks — agents can delete files, install malicious packages, expose personal credentials, or run arbitrary shell commands. SafeClaw by Authensor is a free, open-source tool that installs with `npx @authensor/safeclaw` and teaches you deny-by-default security thinking while actively protecting your development environment. It works with both Claude and OpenAI agents, requires no security expertise to set up, and is configured with a single YAML file.
Why New Developers Are at Higher Risk
When you are learning to code, you are also learning to evaluate what code does. AI agents generate code and execute commands that you may not fully understand yet. This creates a dangerous combination:
- You may approve actions you do not understand — an agent asking to run a shell command that looks harmless but deletes important files
- Your machine has personal data — SSH keys, browser profiles, personal documents, and cloud credentials are all accessible
- You may not have backups — a destructive agent action on a machine without backups means permanent data loss
- Learning environments have loose permissions — tutorials often instruct you to use `sudo` or disable security features for convenience
A Student-Friendly SafeClaw Policy
This policy provides strong defaults while allowing the common operations students need:
```yaml
# safeclaw.yaml — student / new developer policy
version: 1
default: deny

rules:
  # Let the agent read and write project code
  - action: file_read
    path: "src/**"
    decision: allow
    reason: "Reading project source code"

  - action: file_write
    path: "src/**"
    decision: prompt
    reason: "Review generated code before saving"

  # Protect personal files
  - action: file_read
    path: "~/.ssh/**"
    decision: deny
    reason: "SSH keys are private"

  - action: file_read
    path: "*/.env"
    decision: deny
    reason: "Environment files may contain secrets"

  # Safe commands
  - action: shell_execute
    command: "npm test"
    decision: allow
    reason: "Running tests is safe"

  - action: shell_execute
    command: "node *"
    decision: prompt
    reason: "Review before running scripts"

  - action: shell_execute
    command: "python *"
    decision: prompt
    reason: "Review before running scripts"

  # Dangerous commands
  - action: shell_execute
    command: "sudo *"
    decision: deny
    reason: "Never let agents use sudo"

  - action: shell_execute
    command: "rm -rf *"
    decision: deny
    reason: "Block recursive deletion"

  - action: shell_execute
    command: "curl *"
    decision: deny
    reason: "Block network data transfer"

  # Network
  - action: network_request
    destination: "*"
    decision: deny
    reason: "No outbound network access"
```
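To build intuition for how a policy like this behaves, here is a deliberately simplified sketch of deny-by-default, first-match rule evaluation in Python. The rule set, the `evaluate` function, and the use of `fnmatch` are all illustrative assumptions — SafeClaw's real matching semantics may differ.

```python
from fnmatch import fnmatch

# Hypothetical, simplified evaluator mirroring the policy above.
# Note: fnmatch's "*" crosses "/" boundaries, so "src/*" here
# approximates the YAML's "src/**" pattern.
RULES = [
    {"action": "file_read",  "path": "src/*",    "decision": "allow"},
    {"action": "file_write", "path": "src/*",    "decision": "prompt"},
    {"action": "file_read",  "path": "~/.ssh/*", "decision": "deny"},
]
DEFAULT = "deny"

def evaluate(action: str, path: str) -> str:
    """Return the first matching rule's decision, else the default."""
    for rule in RULES:
        if rule["action"] == action and fnmatch(path, rule["path"]):
            return rule["decision"]
    return DEFAULT  # deny-by-default: anything unmatched is blocked
```

The key property to notice: an action the policy never mentions, such as `evaluate("shell_execute", "curl evil.example")`, falls through every rule and is denied. Safety does not depend on anticipating every attack.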
The `decision: prompt` rules are the most educational part. Every time SafeClaw prompts you to approve an action, you see exactly what the agent wants to do. This builds your security intuition over time — you start recognizing which commands are safe and which are suspicious.
What You Learn from SafeClaw
Using SafeClaw as a student teaches you three foundational security principles:
- Deny-by-default — nothing is allowed unless explicitly permitted. This is how firewalls, IAM policies, and production security systems work. Learning this principle early shapes how you think about security for your entire career.
- Least privilege — give agents (and programs) only the minimum access they need. The policy above lets agents read source code but not SSH keys. It lets agents write code but not delete files. Each rule is scoped as narrowly as possible.
- Audit trails — SafeClaw logs every action in a hash-chained trail. Looking at these logs teaches you what agents actually do behind the scenes, which is often more than you expect.
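The hash-chaining idea behind such an audit trail can be sketched in a few lines: each entry's hash covers the previous entry's hash, so editing any past record breaks every hash after it. This is a generic illustration of the concept, not SafeClaw's actual log format, and all names here (`append_entry`, `verify_chain`) are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def append_entry(log: list, action: dict) -> None:
    """Append an action record, chained to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    digest = hashlib.sha256(
        (prev + json.dumps(action, sort_keys=True)).encode()
    ).hexdigest()
    log.append({"action": action, "prev": prev, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash from the start; any tampering returns False."""
    prev = GENESIS
    for entry in log:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["action"], sort_keys=True)).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

If an attacker (or a misbehaving agent) rewrites one logged action, `verify_chain` fails, because the stored hash no longer matches the recomputed one.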
Getting Started
```sh
npx @authensor/safeclaw
```
This single command installs SafeClaw. Create a `safeclaw.yaml` file in your project directory with the policy above. The tool runs with zero dependencies, no configuration beyond the YAML file, and zero cost. SafeClaw is backed by 446 tests and is MIT-licensed, so you can also read the source code as a learning exercise in how security tooling works.
Related pages:
- AI Agent Security for Beginners
- What Can AI Agents Do to My Computer?
- SafeClaw Quickstart in 60 Seconds
- Deny-by-Default Explained
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
```sh
$ npx @authensor/safeclaw
```