2026-01-21 · Authensor

How to Stop an AI Agent from Installing Dependencies

SafeClaw by Authensor blocks AI agents from installing dependencies by denying package manager commands like npm install, yarn add, pip install, and cargo add at the action level. With deny-by-default gating, no dependency can be added to your project unless you explicitly allow it. Install SafeClaw with npx @authensor/safeclaw and lock down your dependency tree immediately.

Why Dependency Installation Is Dangerous

When an AI agent installs a package, it introduces third-party code into your project that executes with your full user permissions. Malicious or typosquatted packages can run postinstall scripts that steal credentials, open backdoors, or exfiltrate data. Even legitimate packages increase your attack surface and may have their own vulnerable dependencies.

AI agents are also prone to recommending packages from their training data, even if those packages have since been deprecated, compromised, or replaced with malicious forks.

Step 1: Install SafeClaw

npx @authensor/safeclaw

Zero dependencies. MIT licensed. Works with Claude, OpenAI, and every major agent framework.

Step 2: Block All Package Manager Install Commands

# safeclaw.policy.yaml
rules:
  # Node.js package managers
  - action: shell.execute
    command_pattern: "npm install *"
    effect: deny
    reason: "Block npm install"

  - action: shell.execute
    command_pattern: "npm i *"
    effect: deny
    reason: "Block npm i shorthand"

  - action: shell.execute
    command_pattern: "yarn add *"
    effect: deny
    reason: "Block yarn add"

  - action: shell.execute
    command_pattern: "pnpm add *"
    effect: deny
    reason: "Block pnpm add"

  # Python package managers
  - action: shell.execute
    command_pattern: "pip install *"
    effect: deny
    reason: "Block pip install"

  - action: shell.execute
    command_pattern: "pip3 install *"
    effect: deny
    reason: "Block pip3 install"

  - action: shell.execute
    command_pattern: "poetry add *"
    effect: deny
    reason: "Block poetry add"

  # Rust
  - action: shell.execute
    command_pattern: "cargo add *"
    effect: deny
    reason: "Block cargo add"

  # Ruby
  - action: shell.execute
    command_pattern: "gem install *"
    effect: deny
    reason: "Block gem install"

  - action: shell.execute
    command_pattern: "bundle add *"
    effect: deny
    reason: "Block bundle add"

  # Go
  - action: shell.execute
    command_pattern: "go get *"
    effect: deny
    reason: "Block go get"

Step 3: Allow Restoring Existing Dependencies

You likely want your agent to be able to restore already-declared dependencies (e.g., running npm install with no arguments to install from package.json):

rules:
  # Allow installing from lock file (no new packages)
  - action: shell.execute
    command_pattern: "npm ci"
    effect: allow
    reason: "npm ci installs from lock file only"

  - action: shell.execute
    command_pattern: "npm install"
    effect: allow
    reason: "npm install with no args restores existing deps"

  # Block adding new packages
  - action: shell.execute
    command_pattern: "npm install *"
    effect: deny
    reason: "Block adding new packages"

Note the distinction: npm install (no arguments) restores existing dependencies from package-lock.json, while npm install lodash adds a new package. SafeClaw matches these as different commands.
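
The same restore-versus-add split works in other ecosystems. As a sketch, and assuming the same pattern matching shown above, you can allow commands that only install already-declared dependencies while the Step 2 deny rules keep blocking new ones:

rules:
  - action: shell.execute
    command_pattern: "pip install -r requirements.txt"
    effect: allow
    reason: "Installs only what requirements.txt already declares"

  - action: shell.execute
    command_pattern: "poetry install"
    effect: allow
    reason: "poetry install restores from poetry.lock"

  - action: shell.execute
    command_pattern: "bundle install"
    effect: allow
    reason: "bundle install restores from Gemfile.lock"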

Step 4: Block Manifest File Modifications

An agent might try to manually edit package.json and then run npm install:

rules:
  - action: file.write
    path: "**/package.json"
    effect: deny
    reason: "Block direct modification of package.json"

  - action: file.write
    path: "**/requirements.txt"
    effect: deny
    reason: "Block modification of Python requirements"

  - action: file.write
    path: "**/pyproject.toml"
    effect: deny
    reason: "Block modification of Python project config"

  - action: file.write
    path: "**/Cargo.toml"
    effect: deny
    reason: "Block modification of Rust manifest"

Step 5: Test and Audit

npx @authensor/safeclaw --simulate

Ask your agent to install a package. The simulation log shows:

[DENIED] shell.execute: "npm install axios"
  Rule: "Block npm install"

Review the hash-chained audit trail:

npx @authensor/safeclaw audit --filter "reason:install"

SafeClaw is open-source with 446 tests and works with both Claude and OpenAI providers. Every blocked and allowed action is logged in a tamper-proof hash chain.
