2026-01-30 · Authensor

AI Agent Pushed Secrets to GitHub: Emergency Response Guide

If an AI agent committed and pushed API keys, passwords, tokens, or other secrets to a GitHub repository, treat this as a critical security incident. The secrets are compromised the moment they hit a remote repository — even a private one — because they exist in git history and may have been scanned by automated tools. SafeClaw by Authensor prevents this by blocking agent access to credential files and gating git push operations through deny-by-default policies. Right now, you need to rotate credentials and clean the repository.

Emergency Response: First 15 Minutes

Step 1: Rotate Every Exposed Credential (Do This NOW)

Do not wait to clean git history. Rotate first. Common credentials to rotate include:

- Cloud provider access keys (AWS, GCP, Azure)
- Database passwords and connection strings
- Third-party API keys (Stripe, OpenAI, Slack, and similar)
- OAuth client secrets and signing keys
- GitHub personal access tokens and deploy keys

Step 2: Check GitHub Secret Scanning Alerts

GitHub automatically scans for known secret patterns. Check:

https://github.com/YOUR_ORG/YOUR_REPO/security/secret-scanning

If GitHub detected the secrets, partner notifications may have already been sent to providers like AWS and Stripe, which may have auto-revoked the keys.
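If you use the GitHub CLI, you can also pull open secret-scanning alerts from the terminal. A minimal sketch, assuming `gh` is installed and authenticated with access to the repository (`YOUR_ORG`/`YOUR_REPO` are placeholders):

```shell
# List open secret-scanning alerts via the GitHub REST API
OWNER=YOUR_ORG
REPO=YOUR_REPO
gh api "/repos/$OWNER/$REPO/secret-scanning/alerts?state=open" \
  --jq '.[] | [.number, .secret_type, .created_at] | @tsv'
```

Each row shows the alert number, the detected secret type, and when GitHub first saw it, which helps prioritize rotation.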

Step 3: Identify What Was Exposed

# Find the commit that introduced secrets
git log --all --oneline -- ".env" "secret" "credential"

# Search history for specific secret patterns
git log -p --all -S 'sk-' | head -100
git log -p --all -S 'AKIA' | head -100
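The two searches above generalize to a loop over common provider prefixes. The list below is illustrative, not exhaustive; run it from inside the repository:

```shell
# Scan all history for common secret prefixes:
# sk- (OpenAI/Stripe), AKIA (AWS access keys), ghp_ (GitHub PATs),
# xoxb- (Slack bot tokens), AIza (Google API keys)
for pat in 'sk-' 'AKIA' 'ghp_' 'xoxb-' 'AIza'; do
  echo "=== $pat ==="
  git log --all --oneline -S "$pat"
done
```

Any commit listed under a prefix introduced or removed a string containing that prefix and deserves manual inspection.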

Scrub Secrets from Git History

Option 1: BFG Repo Cleaner (Recommended)

# Install BFG
brew install bfg

# Clone a fresh mirror of the repository (BFG operates on a bare mirror clone)
git clone --mirror git@github.com:YOUR_ORG/YOUR_REPO.git your-repo.git

# Create a file listing the secret values to remove
echo "YOUR_API_KEY_VALUE" >> secrets-to-remove.txt
echo "YOUR_DATABASE_PASSWORD" >> secrets-to-remove.txt

# Run BFG
bfg --replace-text secrets-to-remove.txt your-repo.git

# Clean up the rewritten history
cd your-repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive

# Force push the cleaned history (coordinate with your team first)
git push --force
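After the force push, confirm the values are actually gone from history before closing the incident. A quick check, substituting the real leaked value for the placeholder:

```shell
# Search every ref for the removed value; empty output means the scrub worked
git log -p --all -S 'YOUR_API_KEY_VALUE' --oneline

# Also grep every reachable commit directly
git grep 'YOUR_API_KEY_VALUE' $(git rev-list --all) || echo "value not found in any commit"
```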

Option 2: git filter-repo

pip install git-filter-repo

# Run in a fresh clone; filter-repo refuses to rewrite an already-used clone
git filter-repo --invert-paths --path .env --path credentials.json
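Note that git filter-repo removes the `origin` remote as a safety measure after rewriting, so you must restore it before pushing the cleaned history. The remote URL below is a placeholder:

```shell
# filter-repo strips 'origin' after a rewrite; re-add it, then push all refs
git remote add origin git@github.com:YOUR_ORG/YOUR_REPO.git
git push --force --all
git push --force --tags
```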

Important: Notify Your Team

After force-pushing cleaned history, every team member must re-clone or reset:

git fetch origin
git reset --hard origin/main

Review the SafeClaw Audit Trail

npx @authensor/safeclaw audit --filter "action:git" --last 30
npx @authensor/safeclaw audit --filter "action:file.read" --filter "resource:env" --last 20

The hash-chained audit trail shows whether the agent read credential files before committing and exactly when the push occurred.

Install SafeClaw and Prevent Future Secret Exposure

npx @authensor/safeclaw

Configure Secret Protection Policies

Add to your safeclaw.policy.yaml:

rules:
  # Block reading credential files
  - action: file.read
    resource: "*/.env"
    effect: deny
    reason: "Env files contain secrets"

  - action: file.read
    resource: "*/credentials"
    effect: deny
    reason: "Credential files are off limits"

  - action: file.read
    resource: "*/secret*"
    effect: deny
    reason: "Secret files are off limits"

  # Block adding credential files to git
  - action: shell.exec
    resource: "git add .env"
    effect: deny
    reason: "Cannot stage env files"

  - action: shell.exec
    resource: "git add credentials"
    effect: deny
    reason: "Cannot stage credential files"

  # Gate git push operations
  - action: git.push
    resource: "main"
    effect: deny
    reason: "Cannot push directly to main"

  - action: git.push
    resource: "feature/**"
    effect: allow
    pre_conditions:
      - "! git diff --cached --name-only | grep -qF '.env'"
    reason: "Push allowed only when no env files are staged"

  # Block force push entirely
  - action: git.force_push
    resource: "*"
    effect: deny
    reason: "Force push is never allowed for agents"

Add .gitignore Protection

Ensure your .gitignore blocks secrets, and prevent agents from modifying it:

rules:
  - action: file.write
    resource: "**/.gitignore"
    effect: deny
    reason: "Agents cannot modify gitignore"
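A minimal .gitignore covering the credential files referenced above (adjust the patterns to your project's layout):

```
# Keep secrets out of commits
.env
.env.*
*.pem
*.key
credentials.json
secrets/
```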

Post-Incident Checklist

- All exposed credentials rotated and the old values revoked
- GitHub secret scanning alerts reviewed and resolved
- Git history scrubbed and force-pushed; every team member re-cloned or reset
- Provider logs checked for unauthorized use of the exposed credentials
- SafeClaw policies updated to close the access path the agent used

Prevention

SafeClaw's 446 tests validate that credential file access and git operations are properly gated across both Claude and OpenAI agents. The deny-by-default model ensures agents never touch secrets unless you explicitly allow it. Combined with the hash-chained audit trail, you have full forensic evidence for compliance reporting.

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw