2025-12-10 · Authensor

Open source maintainers face a new attack surface: contributors using AI agents to generate pull requests, where the agent may introduce vulnerabilities, supply chain attacks, or malicious code without the contributor's awareness. SafeClaw by Authensor is itself open source (MIT-licensed) and provides deny-by-default action gating: maintainers can include a policy file in their projects to ensure any AI agent contributing to the codebase operates under explicit constraints. Install with npx @authensor/safeclaw.

The Open Source Maintainer Threat Model

Open source projects accept contributions from untrusted sources by design. AI agents amplify the existing risks:

- Generated code can introduce vulnerabilities the contributor never reviewed.
- Dependency changes can pull in malicious packages, enabling supply chain attacks.
- Modified CI workflows can exfiltrate repository secrets on every PR.
- Agents with unrestricted shell or network access can push unreviewed changes or leak data.

SafeClaw Policy for Open Source Projects

Include this policy in your project root so contributors and maintainers both benefit:

# safeclaw.yaml — open source project policy
version: 1
default: deny

rules:
  # Source code operations
  - action: file_read
    path: "src/**"
    decision: allow
    reason: "Source is public and readable"

  - action: file_read
    path: "tests/**"
    decision: allow
    reason: "Tests are public and readable"

  - action: file_write
    path: "src/**"
    decision: prompt
    reason: "Review generated code before write"

  - action: file_write
    path: "tests/**"
    decision: prompt
    reason: "Review generated tests before write"

  # Protect project infrastructure
  - action: file_write
    path: ".github/**"
    decision: deny
    reason: "CI/CD workflows are write-protected"

  - action: file_write
    path: "package.json"
    decision: prompt
    reason: "Dependency changes require review"

  - action: file_write
    path: "**/Dockerfile"
    decision: deny
    reason: "Container definitions are protected"

  # Secret protection
  - action: file_read
    path: "**/.env"
    decision: deny
    reason: "No access to environment files"

  - action: file_read
    path: "**/token*"
    decision: deny
    reason: "No access to token files"

  # Shell controls
  - action: shell_execute
    command: "npm test"
    decision: allow
    reason: "Tests are safe to run"

  - action: shell_execute
    command: "npm install *"
    decision: prompt
    reason: "Review new dependencies"

  - action: shell_execute
    command: "git push*"
    decision: deny
    reason: "No direct pushes — use PRs"

  - action: network_request
    destination: "*"
    decision: deny
    reason: "No outbound network access"
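The gating model above can be pictured as a first-match rule scan with a deny fallback: each requested action is checked against the rules in order, and anything no rule explicitly covers is denied. A minimal sketch in TypeScript, using a hypothetical data model (SafeClaw's actual matching semantics are not documented here):

```typescript
type Decision = "allow" | "prompt" | "deny";

// Hypothetical rule shape mirroring the safeclaw.yaml entries above.
interface Rule {
  action: string;   // e.g. "file_write"
  pattern: string;  // glob for the path, command, or destination
  decision: Decision;
}

// Tiny glob matcher: "*" matches within one path segment, "**" matches any depth.
function globMatch(pattern: string, value: string): boolean {
  const re = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // protect "**" before handling "*"
    .replace(/\*/g, "[^/]*")
    .replace(/\u0000/g, ".*");
  return new RegExp(`^${re}$`).test(value);
}

function evaluate(rules: Rule[], action: string, target: string): Decision {
  for (const r of rules) {
    if (r.action === action && globMatch(r.pattern, target)) return r.decision;
  }
  return "deny"; // deny-by-default: no matching rule means no access
}
```

Note that the fallback return, not the rules themselves, is what makes the policy deny-by-default: an action the policy author never anticipated is still blocked.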

Protecting CI/CD Pipelines

The most critical rule for open source maintainers is blocking writes to .github/ (or .gitlab-ci.yml, Jenkinsfile, etc.). AI-generated CI modifications are a primary supply chain attack vector — a malicious workflow can exfiltrate repository secrets on every PR. SafeClaw's deny rule for CI files ensures agents cannot modify your pipeline definitions.
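The policy shown earlier only denies writes under .github/. For projects on other CI systems, the same rule shape extends naturally; the paths below are illustrative, reusing the schema from the policy above:

  - action: file_write
    path: "**/.gitlab-ci.yml"
    decision: deny
    reason: "CI/CD workflows are write-protected"

  - action: file_write
    path: "**/Jenkinsfile"
    decision: deny
    reason: "CI/CD workflows are write-protected"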

Encouraging Safe AI Contributions

Add SafeClaw to your CONTRIBUTING.md:

## AI Agent Usage

This project includes a safeclaw.yaml policy. If you use AI coding agents,
install SafeClaw to ensure your agent respects project safety boundaries:

npx @authensor/safeclaw

The policy enforces deny-by-default: your agent can read source code and tests
but cannot modify CI workflows, push directly, or access secrets.

This sets a community norm that AI agents should operate under policy constraints, not with unrestricted access.

Why SafeClaw Fits Open Source

SafeClaw is MIT-licensed, has zero dependencies, and is backed by 446 tests. It works with both Claude and OpenAI agents. The hash-chained audit trail is stored locally — no data is sent to external services. For open source maintainers, this means you can adopt and recommend SafeClaw without any licensing concerns, vendor lock-in, or telemetry worries. Contributors can inspect the SafeClaw source code themselves.
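The hash-chained audit trail works on a simple principle: each log entry records the hash of its predecessor, so tampering with any past entry breaks verification of everything after it. A minimal illustration of the idea in TypeScript — this is the general technique, not SafeClaw's actual log format:

```typescript
import { createHash } from "node:crypto";

// Hypothetical entry shape for illustrating the hash-chain technique.
interface AuditEntry {
  action: string;
  decision: string;
  prevHash: string; // hash of the previous entry (zeros for the first)
  hash: string;     // sha256(prevHash + action + decision)
}

const GENESIS = "0".repeat(64);

function sha256(s: string): string {
  return createHash("sha256").update(s).digest("hex");
}

// Append an entry whose hash commits to the entire log so far.
function appendEntry(log: AuditEntry[], action: string, decision: string): AuditEntry[] {
  const prevHash = log.length ? log[log.length - 1].hash : GENESIS;
  const hash = sha256(prevHash + action + decision);
  return [...log, { action, decision, prevHash, hash }];
}

// Recompute every hash; any edited or reordered entry makes this fail.
function verifyChain(log: AuditEntry[]): boolean {
  let prev = GENESIS;
  for (const e of log) {
    if (e.prevHash !== prev) return false;
    if (e.hash !== sha256(e.prevHash + e.action + e.decision)) return false;
    prev = e.hash;
  }
  return true;
}
```

Because the chain is verifiable offline, an audit log like this stays trustworthy without ever leaving the contributor's machine.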


Try SafeClaw

Action-level gating for AI agents. Set it up in your terminal in 60 seconds.

$ npx @authensor/safeclaw