How to Recover After an AI Agent Installed Suspicious Packages
When an AI agent installs a suspicious, malicious, or unknown npm package (or pip, gem, cargo, etc.), your system may be compromised. Supply chain attacks through package managers are among the fastest-growing attack vectors, and an AI agent that runs npm install unknown-package without oversight can introduce malware, data exfiltration scripts, or cryptocurrency miners into your project. SafeClaw by Authensor blocks all package installation commands by default, requiring explicit approval for every dependency change. If a suspicious package has already been installed, follow the containment steps below.
Immediate Containment
1. Disconnect from the Network
If you suspect the package contains malware that exfiltrates data:
# Disable network temporarily
# macOS:
networksetup -setairportpower en0 off
# Or physically disconnect Ethernet
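On Linux the equivalent depends on your network stack; a minimal sketch, assuming the interface is managed by NetworkManager:
# Linux (assumes NetworkManager):
nmcli networking off
# Turn networking back on once cleanup is complete
nmcli networking on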
2. Identify What Was Installed
# Check recent npm installs
npm ls --depth=0
# Check for recently modified packages
ls -lt node_modules/ | head -20
# Check package.json changes
git diff package.json
git diff package-lock.json
# For pip
pip list --format=freeze | diff - requirements.txt
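There is no built-in pip command that shows install times, but as a rough sketch (assuming a standard CPython where site.getsitepackages() is available) you can sort site-packages by modification time:
# Sort installed Python packages by modification time (rough heuristic)
ls -lt "$(python -c 'import site; print(site.getsitepackages()[0])')" | head -20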
3. Check if the Package Is Known Malicious
Search for the package name on:
- npm advisories
- Snyk vulnerability database
- Socket.dev for supply chain analysis
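You can also check from the command line. The sketch below queries the public OSV vulnerability API and inspects the package's publish history; the package name and version are placeholders for whatever the agent installed:
# Query the OSV database for known advisories (name/version are placeholders)
curl -s -d '{"version": "1.0.0", "package": {"name": "suspicious-package-name", "ecosystem": "npm"}}' "https://api.osv.dev/v1/query"
# A brand-new package or a version published minutes ago is a red flag
npm view suspicious-package-name time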
4. Run a Security Audit
# npm
npm audit
# yarn
yarn audit
# pip
pip-audit
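If the agent also touched Ruby or Rust dependencies, the equivalent audits are sketched below; these assume the bundler-audit gem and the cargo-audit crate are installed:
# gem (requires bundler-audit)
bundle audit check --update
# cargo (requires cargo-audit)
cargo audit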
5. Remove the Suspicious Package
# npm
npm uninstall suspicious-package-name
# Completely clean and reinstall from the lock file
rm -rf node_modules
npm ci # install from lock file only
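A pip sketch of the same cleanup, assuming your pinned requirements.txt is still trustworthy:
# pip
pip uninstall -y suspicious-package-name
# Reinstall from the pinned requirements (or recreate the virtualenv entirely)
pip install -r requirements.txt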
6. Check for Post-Install Scripts
Malicious packages often use post-install scripts to execute code during installation:
# Check if the package had install scripts
grep -A5 '"scripts"' node_modules/suspicious-package/package.json
If it declares preinstall, install, or postinstall scripts, the package manager ran them automatically at install time (unless scripts were disabled), so assume the malicious code has already executed.
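To survey every installed package for install-time hooks rather than just the one you suspect, a small shell sketch:
# List every installed package that declares a lifecycle install script
grep -lE '"(preinstall|install|postinstall)"[[:space:]]*:' \
  node_modules/*/package.json node_modules/@*/*/package.json 2>/dev/null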
7. Scan for Changes the Package May Have Made
# Check for new files created outside node_modules
git status
# Check for modified system files
# Look for cron jobs, startup scripts, or new processes
crontab -l
ps aux | grep -i suspicious
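A few other common persistence locations are worth a quick look; macOS paths are shown here, and the Linux equivalents differ:
# New or modified SSH keys
ls -la ~/.ssh
# Recently changed shell startup files
ls -lt ~/.zshrc ~/.bashrc ~/.profile 2>/dev/null
# macOS launch agents and daemons
ls -lt ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null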
Review the Audit Trail
npx @authensor/safeclaw audit --filter "action:shell.exec" --filter "resource:install" --last 20
npx @authensor/safeclaw audit --filter "action:file.write" --filter "resource:package" --last 20
SafeClaw's hash-chained audit trail shows exactly what install commands the agent executed and when.
Install SafeClaw and Block Unauthorized Package Installation
npx @authensor/safeclaw
Configure Package Installation Policies
Add to your safeclaw.policy.yaml:
rules:
  # Block all package installation commands
  - action: shell.exec
    resource: "npm install *"
    effect: deny
    reason: "Package installation requires human review"
  - action: shell.exec
    resource: "npm i *"
    effect: deny
    reason: "Package installation requires human review"
  - action: shell.exec
    resource: "yarn add *"
    effect: deny
    reason: "Package installation requires human review"
  - action: shell.exec
    resource: "pip install *"
    effect: deny
    reason: "Pip install requires human review"
  - action: shell.exec
    resource: "gem install *"
    effect: deny
    reason: "Gem install requires human review"
  - action: shell.exec
    resource: "cargo add *"
    effect: deny
    reason: "Cargo add requires human review"
  # Block modifying package manifests
  - action: file.write
    resource: "**/package.json"
    effect: deny
    reason: "Package manifest changes require human review"
  - action: file.write
    resource: "**/package-lock.json"
    effect: deny
    reason: "Lock file changes require human review"
  - action: file.write
    resource: "**/requirements.txt"
    effect: deny
    reason: "Requirements changes require human review"
  # Allow npm ci (install from lock file only — safe)
  - action: shell.exec
    resource: "npm ci"
    effect: allow
    reason: "Clean install from lock file is safe"
  # Allow npm test and build
  - action: shell.exec
    resource: "npm test"
    effect: allow
    reason: "Agent can run tests"
  - action: shell.exec
    resource: "npm run build"
    effect: allow
    reason: "Agent can run builds"
Troubleshooting Scenarios
Agent installed a typosquatted package: Packages with names similar to popular ones (e.g., lod4sh instead of lodash) are common attack vectors. Remove it, scan for damage, and block all installs in your policy.
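A quick way to confirm a typosquat is to compare the installed name with the package you actually intended; the sketch below reuses the lod4sh/lodash example and queries the public npm registry endpoints:
# A long-established package has years of publish history; a typosquat usually does not
npm view lod4sh time
npm view lodash time
# Weekly download counts from the npm registry downloads API
curl -s https://api.npmjs.org/downloads/point/last-week/lod4sh
curl -s https://api.npmjs.org/downloads/point/last-week/lodash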
Agent installed a package with known vulnerabilities: Run npm audit fix to update to a patched version, or remove the package entirely if it is not needed.
Agent installed a package that created a backdoor: This is a full security incident. Scan the entire system, check for new SSH keys, cron jobs, and running processes. Consider rebuilding the environment from a known-clean state.
Agent modified package-lock.json: This can introduce supply chain attacks through dependency resolution changes. Restore from git:
git checkout HEAD -- package-lock.json
rm -rf node_modules
npm ci
Prevention
SafeClaw's 446 tests validate that package installation commands are blocked by default across both Claude and OpenAI agents. The deny-by-default model ensures no package can be installed without explicit permission. MIT licensed, zero dependencies — SafeClaw itself does not introduce supply chain risk.
Related Resources
- Prevent Agent npm Install Malware
- Threat: Supply Chain Agent Attack
- AI Agent Corrupted Configuration Files: Recovery
- Define: Zero-Dependency Security
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw