How to Add AI Agent Safety to JetBrains IDEs (IntelliJ, WebStorm, PyCharm)
SafeClaw by Authensor provides deny-by-default action gating for AI agents running inside any JetBrains IDE, including IntelliJ IDEA, WebStorm, and PyCharm. It intercepts every agent action, checks it against your policy, and records the decision in a hash-chained audit log. SafeClaw works with both Claude- and OpenAI-powered agents and is backed by 446 tests.
Prerequisites
- Any JetBrains IDE (2024.1 or later recommended)
- Node.js 18+
- Terminal access within the IDE
Step 1: Install SafeClaw
Open the built-in terminal in your JetBrains IDE (Alt+F12 on Windows/Linux, Option+F12 on macOS) and run:
npx @authensor/safeclaw
SafeClaw initializes a .safeclaw/ directory in your project root with a default deny-all policy and an empty audit log.
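The exact contents may vary between versions, but given the defaults described above you should end up with something like this (the audit log file name is illustrative):
.safeclaw/
├── policy.yaml    # starter deny-all policy, edited in the next step
└── audit.log      # hash-chained audit log, empty until the first agent action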
Step 2: Define Your Policy
Create or edit .safeclaw/policy.yaml:
version: 1
default: deny
rules:
  - action: file.read
    paths:
      - "src/**"
      - "test/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
    decision: prompt
  - action: refactor.rename
    decision: prompt
  - action: shell.execute
    decision: deny
  - action: network.request
    domains:
      - "api.openai.com"
      - "api.anthropic.com"
    decision: allow
JetBrains IDEs have deep refactoring capabilities. The refactor.rename rule above ensures that any AI-initiated rename operation requires your explicit confirmation before proceeding.
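Before wiring up the automation in Step 4, you can run the same validator the file watcher will use directly from the IDE terminal:
npx @authensor/safeclaw policy --validate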
Step 3: Add a Run Configuration
In your JetBrains IDE, go to Run > Edit Configurations and add a new Shell Script configuration:
- Name: SafeClaw Audit Verify
- Script text: npx @authensor/safeclaw audit --verify
- Working directory: $ProjectFileDir$
Step 4: Configure a File Watcher for Policy Validation
Go to Settings > Tools > File Watchers (if the page is missing, enable the File Watchers plugin) and add a new watcher:
- File type: YAML
- Scope: .safeclaw/policy.yaml
- Program: npx
- Arguments: @authensor/safeclaw policy --validate
Step 5: Add an External Tool for Quick Status
Navigate to Settings > Tools > External Tools and create a new entry:
- Name: SafeClaw Status
- Program: npx
- Arguments: @authensor/safeclaw status
- Working directory: $ProjectFileDir$
Step 6: Test with an AI Agent
Use your preferred AI agent plugin (JetBrains AI Assistant, a Claude plugin, or an OpenAI integration) and request an action such as creating a new file. SafeClaw should intercept the action and apply your policy. Verify with:
npx @authensor/safeclaw audit --tail 5
Each log entry contains the action type, the policy decision, a timestamp, and a cryptographic hash linking it to the previous entry.
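SafeClaw's published log format isn't reproduced here, so treat the following as a hypothetical entry with illustrative field names; the point is that each record carries the hash of its predecessor:
{
  "timestamp": "2026-02-03T10:41:07Z",
  "action": "file.create",
  "decision": "deny",
  "prev_hash": "9f2c…",
  "hash": "b41a…"
}
Because each hash covers the previous one, editing or deleting any entry invalidates every later hash, which is what the audit --verify run configuration from Step 3 detects.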
IDE-Specific Tips
- IntelliJ IDEA: SafeClaw works alongside the built-in code inspections. Denied actions appear in the Notifications tool window (the Event Log in older IDE versions).
- WebStorm: For JavaScript/TypeScript projects, add node_modules to your policy's deny paths to keep AI agents from modifying dependencies (see the snippet after this list).
- PyCharm: Use the Python Console integration to run SafeClaw commands interactively during debugging sessions.
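For the WebStorm tip, here is a minimal sketch of the node_modules rule, assuming the same rule schema and glob syntax as the Step 2 policy:
  - action: file.write
    paths:
      - "node_modules/**"
    decision: deny
With default: deny, writes to node_modules are already blocked unless another rule allows them; the explicit rule documents the intent and guards against a broad allow rule added later.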
Summary
SafeClaw integrates into the JetBrains ecosystem through the terminal, run configurations, file watchers, and external tools. The deny-by-default model ensures AI agents cannot act without your policy approval. The MIT-licensed, open-source project ships with 446 tests for reliability.
Related Guides
- How to Add AI Agent Safety to VS Code
- How to Run AI Agents Safely from the Terminal
- How to Track AI Agent Errors in Sentry
- How to Send AI Agent Audit Logs to Splunk
Try SafeClaw
Action-level gating for AI agents. Set it up in your terminal in 60 seconds.
$ npx @authensor/safeclaw