2025-12-01 · Authensor

Freelance developers using AI agents face a risk that full-time employees rarely encounter: cross-client data contamination. When you switch between client projects with the same AI agent, that agent may retain context, read credentials from one project while working on another, or accidentally write proprietary code across project boundaries. SafeClaw by Authensor provides deny-by-default action gating that isolates each project with its own policy file, ensuring client confidentiality at the agent level. Install with npx @authensor/safeclaw — it is free, open source, and MIT-licensed.

Freelancer-Specific AI Agent Risks

Freelancers operate with less organizational safety net than employees at a company with a security team. The specific risks include:

  1. Context carryover: an agent may retain details from one client's project and surface them while working on another's
  2. Credential exposure: the agent can read API keys or .env files belonging to one client while executing tasks for a different one
  3. Code leakage: proprietary code can be written across project boundaries before anyone notices
  4. Silent exfiltration: an unrestricted agent can send project data to external endpoints, with no security team reviewing its actions after the fact

Per-Project SafeClaw Policy

Place a safeclaw.yaml in each client project root. This isolates policy enforcement per project:

# safeclaw.yaml — freelancer client project
version: 1
default: deny

rules:
  - action: file_read
    path: "src/**"
    decision: allow
    reason: "Project source code is readable"

  - action: file_write
    path: "src/**"
    decision: prompt
    reason: "Review all generated code"

  - action: file_read
    path: ".env*"
    decision: deny
    reason: "Client credentials are off-limits"

  - action: file_read
    path: "*/secret*"
    decision: deny
    reason: "Client secrets are off-limits"

  - action: file_read
    path: "*/key*"
    decision: deny
    reason: "Client keys are off-limits"

  - action: shell_execute
    command: "git push*"
    decision: prompt
    reason: "Review before pushing to client repo"

  - action: shell_execute
    command: "npm install *"
    decision: prompt
    reason: "Review dependencies before installation"

  - action: shell_execute
    command: "rm *"
    decision: deny
    reason: "Block destructive commands"

  - action: shell_execute
    command: "curl *"
    decision: deny
    reason: "Block outbound data transfer"

  - action: network_request
    destination: "*"
    decision: deny
    reason: "No network access from agent"

The network_request: deny rule is especially important for freelancers. It prevents agents from sending any project data to external endpoints, which could constitute a breach of your client's NDA.
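
The protection is only as strong as the policy file actually sitting in each project root, so it is worth a quick pre-flight check that every client directory carries its own deny-by-default policy before you start a session. The script below is a minimal sketch, not part of SafeClaw: it assumes the js-yaml package and an illustrative ~/clients/ directory layout, and simply flags projects that are missing a safeclaw.yaml, a deny default, or a catch-all network_request deny rule.

// check-policies.ts - hypothetical pre-flight check; not a SafeClaw API.
// Assumes client projects live under ~/clients/ and that js-yaml is installed
// (npm install js-yaml @types/js-yaml).
import * as fs from "fs";
import * as os from "os";
import * as path from "path";
import * as yaml from "js-yaml";

interface Rule {
  action?: string;
  decision?: string;
}

interface Policy {
  default?: string;
  rules?: Rule[];
}

const clientsRoot = path.join(os.homedir(), "clients"); // illustrative layout

for (const entry of fs.readdirSync(clientsRoot, { withFileTypes: true })) {
  if (!entry.isDirectory()) continue;

  const policyPath = path.join(clientsRoot, entry.name, "safeclaw.yaml");
  if (!fs.existsSync(policyPath)) {
    console.warn(`${entry.name}: no safeclaw.yaml, agent actions would not be gated`);
    continue;
  }

  // Parse the policy and check the two properties this article leans on most:
  // deny-by-default and a blanket network_request deny.
  const policy = yaml.load(fs.readFileSync(policyPath, "utf8")) as Policy;
  const denyByDefault = policy?.default === "deny";
  const networkDenied = (policy?.rules ?? []).some(
    (r) => r.action === "network_request" && r.decision === "deny"
  );

  const notes = [
    denyByDefault ? "default: deny" : "WARNING: default is not deny",
    networkDenied ? "network denied" : "WARNING: no network_request deny rule",
  ];
  console.log(`${entry.name}: ${notes.join(", ")}`);
}

The layout and the checks are placeholders to adapt; the point is that per-project isolation is something you can verify mechanically rather than take on faith.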

Building Client Trust

SafeClaw gives you a concrete answer when clients ask: "How do you protect our code when using AI tools?" You can point to:

  1. The policy file — show clients the exact YAML rules governing agent behavior in their project
  2. The audit trail — SafeClaw's hash-chained log proves what the agent did and did not access during your work (the sketch at the end of this section shows how such a chain is checked)
  3. Deny-by-default — explain that every action is blocked unless explicitly allowed, not the other way around

This is a professional differentiator. Freelancers who can demonstrate verifiable AI safety practices are better positioned to win contracts than those who cannot. Include your SafeClaw policy in project handoffs as part of your security documentation.
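
To make point 2 concrete for a non-technical client: in a hash-chained log, every record carries a hash of the record before it, so editing or deleting any entry after the fact breaks every link that follows. The sketch below illustrates that verification idea with a made-up JSON-lines format; it is not SafeClaw's actual log schema, and the field names are assumptions.

// verify-chain.ts - illustrates hash-chain verification with an assumed log format.
// The field names (entry, prevHash, hash) and the GENESIS sentinel are hypothetical,
// not SafeClaw's real schema.
import * as fs from "fs";
import { createHash } from "crypto";

interface LogRecord {
  entry: unknown;   // the logged agent action
  prevHash: string; // hash of the previous record
  hash: string;     // sha256 over this record's entry plus prevHash
}

function verifyChain(logPath: string): boolean {
  const lines = fs.readFileSync(logPath, "utf8").trim().split("\n");
  let prevHash = "GENESIS";

  for (const [i, line] of lines.entries()) {
    const record = JSON.parse(line) as LogRecord;
    const expected = createHash("sha256")
      .update(JSON.stringify(record.entry) + record.prevHash)
      .digest("hex");

    // Any edited or deleted record changes a hash and breaks the link here.
    if (record.prevHash !== prevHash || record.hash !== expected) {
      console.error(`Chain broken at record ${i + 1}`);
      return false;
    }
    prevHash = record.hash;
  }
  return true;
}

console.log(verifyChain("audit.log") ? "Chain intact" : "Chain has been tampered with");

Because a check like this needs nothing but the log file itself, a client can run it on the copy you hand over at the end of the engagement.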

Zero Cost, Zero Dependencies

SafeClaw is MIT-licensed with zero external dependencies. There is no subscription, no per-project fee, and no telemetry. The tool runs entirely locally and works with both Claude and OpenAI agents. The 446-test suite ensures policy evaluation is reliable, and the hash-chained audit trail provides forensic-quality records without any cloud service.


Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw