Development agencies and consultancies manage multiple client projects simultaneously, often with AI agents that speed up delivery across all of them. The core risk is cross-client data contamination: an agent reading Client A's proprietary code, credentials, or business logic while working on Client B's project. SafeClaw by Authensor enforces per-project deny-by-default policies that isolate each client engagement at the agent action level. Install with npx @authensor/safeclaw and include a safeclaw.yaml in every client project.
Agency-Specific Risks
Agencies face a distinct threat model compared to product companies:
- Cross-client contamination — agents accessing files from one client's project while working in another's, creating IP and confidentiality violations
- NDA violations — agents reading proprietary code, designs, or configurations and inadvertently including that information in context sent to LLM providers
- Varied client security requirements — each client may have different compliance needs (HIPAA, PCI DSS, SOC 2) that require different agent constraints
- Contractor access — agency engineers may be temporary, and their AI agents inherit the same elevated access
- Deliverable integrity — clients need assurance that AI-generated code in their project was produced under safety constraints
Per-Client Project Policy
Each client project gets its own safeclaw.yaml tailored to that engagement's requirements:
# safeclaw.yaml — agency client project (healthcare client)
version: 1
default: deny
rules:
  - action: file_read
    path: "src/**"
    decision: allow
    reason: "Project source code"
  - action: file_write
    path: "src/**"
    decision: prompt
    reason: "Review code before write"
  - action: file_read
    path: "data/**"
    decision: deny
    reason: "Patient data is HIPAA-protected"
  - action: file_read
    path: "**/.env"
    decision: deny
    reason: "Client credentials blocked"
  - action: file_read
    path: "**/secret*"
    decision: deny
    reason: "Client secrets blocked"
  - action: file_write
    path: ".github/**"
    decision: deny
    reason: "CI/CD is protected"
  - action: shell_execute
    command: "npm test"
    decision: allow
    reason: "Tests are safe"
  - action: shell_execute
    command: "npm install *"
    decision: prompt
    reason: "Review dependencies"
  - action: shell_execute
    command: "rm *"
    decision: deny
    reason: "No deletions"
  - action: network_request
    destination: "*"
    decision: deny
    reason: "No outbound network"
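The deny-by-default behavior in a policy like this can be sketched as first-match glob evaluation: check each rule in order, and fall through to the default when nothing matches. This is an illustrative model only — SafeClaw's actual rule ordering, glob semantics, and evaluation engine may differ, and every name below is hypothetical.

```python
from fnmatch import fnmatch

# Hypothetical model of a deny-by-default policy.
# Note: fnmatch's "*" matches across "/" (unlike gitignore-style globs),
# so "src/**" and "src/*" behave the same here.
RULES = [
    {"action": "file_read", "pattern": "data/**", "decision": "deny"},
    {"action": "file_read", "pattern": "src/**", "decision": "allow"},
    {"action": "shell_execute", "pattern": "npm test", "decision": "allow"},
]

def evaluate(action: str, target: str, rules=RULES, default: str = "deny") -> str:
    # First matching rule wins; no match falls through to the default.
    for rule in rules:
        if rule["action"] == action and fnmatch(target, rule["pattern"]):
            return rule["decision"]
    return default  # deny-by-default
```

Because no rule matches `file_write` at all in this sketch, any write attempt falls through to the `deny` default — the property that makes the deny-by-default posture safe for unlisted actions.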
For a fintech client, the policy would differ:
# safeclaw.yaml — agency client project (fintech client)
version: 1
default: deny
rules:
  - action: file_read
    path: "src/**"
    decision: allow
    reason: "Project source code"
  - action: file_write
    path: "src/**"
    decision: prompt
    reason: "Review code before write"
  - action: file_read
    path: "**/payment*"
    decision: deny
    reason: "Payment processing files blocked (PCI DSS)"
  - action: file_read
    path: "**/card*"
    decision: deny
    reason: "Card data files blocked"
  - action: shell_execute
    command: "npm test"
    decision: allow
    reason: "Tests are safe"
  - action: shell_execute
    command: "psql *"
    decision: deny
    reason: "No direct database access"
  - action: network_request
    destination: "*"
    decision: deny
    reason: "No outbound network"
Client-Facing Safety Documentation
SafeClaw gives agencies a concrete deliverable for client trust:
- Include safeclaw.yaml in every project — clients can inspect the exact policy governing agent behavior
- Export audit trails — the hash-chained logs prove what agents did during development
- Reference in contracts — specify that all AI-assisted development operates under deny-by-default constraints
- Use as a differentiator — agencies that demonstrate verifiable AI safety win proposals over those that do not
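Hash chaining — the mechanism behind the exportable audit trails above — is straightforward to verify independently: each log entry carries the hash of the entry before it, so altering any past entry breaks every link after it. The entry format below (dicts with a `prev` field) is a hypothetical illustration, not SafeClaw's actual log schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def entry_hash(entry: dict) -> str:
    # Hash the canonical JSON form of the entry, including its prev link.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, event: dict) -> None:
    # Link each new entry to the hash of the one before it.
    prev = entry_hash(log[-1]) if log else GENESIS
    log.append({"event": event, "prev": prev})

def verify(log: list) -> bool:
    # Recompute the chain; any tampered entry breaks a downstream link.
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev:
            return False
        prev = entry_hash(entry)
    return True
```

A client (or auditor) only needs the exported log and this kind of recomputation to confirm the record was not edited after the fact.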
Onboarding Agency Engineers
When new developers join the agency — especially contractors — SafeClaw provides immediate safety:
git clone client-project.git
cd client-project
npx @authensor/safeclaw
The policy activates immediately. No additional configuration, no security training prerequisite, no access control setup. The deny-by-default policy ensures that even an unfamiliar engineer's AI agent cannot access client secrets or make destructive changes.
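Since the protection depends on a safeclaw.yaml being present in each repository, an agency can add a simple pre-flight check across its workspace. The helper below is a hypothetical sketch, not part of SafeClaw, and assumes one client project per top-level directory.

```python
from pathlib import Path

def projects_missing_policy(workspace: str) -> list[str]:
    # Flag any client project directory without a safeclaw.yaml at its root.
    root = Path(workspace)
    return sorted(
        p.name
        for p in root.iterdir()
        if p.is_dir() and not (p / "safeclaw.yaml").is_file()
    )
```

Run in CI or a cron job, an empty result confirms every engagement is covered before any agent session starts.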
SafeClaw is MIT-licensed with zero dependencies and 446 tests. It works with both Claude and OpenAI agents. The local-only architecture means no client data flows to third-party services through the safety layer itself.
Related pages:
- AI Agent Safety for Freelance Developers
- HIPAA Agent Safeguards
- PCI DSS Agent Access
- Per-Agent Isolation Pattern
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw