How to Track AI Agent Errors in Sentry
SafeClaw by Authensor integrates with Sentry to capture AI agent safety errors as trackable events. When an agent action is denied, a policy violation occurs, or the audit hash chain fails integrity checks, SafeClaw sends a structured error to Sentry with full context. SafeClaw supports Claude and OpenAI agents, has 446 tests, and provides hash-chained audit logging.
Prerequisites
- SafeClaw installed (npx @authensor/safeclaw)
- A Sentry account and project
- A Sentry DSN (Data Source Name)
Step 1: Install SafeClaw
Initialize SafeClaw in your project:
npx @authensor/safeclaw
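Step 2 edits the policy file this command scaffolds. Assuming the initializer creates .safeclaw/policy.yaml in the project root (the file edited below), a quick sanity check:

# Confirm the policy file exists before editing it in Step 2.
cat .safeclaw/policy.yaml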
Step 2: Configure Sentry Integration
Add Sentry settings to .safeclaw/policy.yaml:
version: 1
default: deny

notifications:
  sentry:
    dsn: "${SENTRY_DSN}"
    environment: "production"
    release: "1.0.0"
    events:
      - action.denied
      - audit.integrity_failure
      - policy.parse_error
      - agent.timeout
    tags:
      team: "platform"
      component: "ai-safety"

rules:
  - action: file.read
    paths:
      - "src/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
    decision: prompt
  - action: shell.execute
    decision: deny
Set the environment variable (you can copy the DSN from your Sentry project's settings, under Client Keys):
export SENTRY_DSN="https://examplePublicKey@o0.ingest.sentry.io/0"
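The ${SENTRY_DSN} placeholder in the policy file is resolved from the environment, so the DSN never needs to be committed. In CI, inject it from your secret store instead; a sketch for GitHub Actions (the step and secret names are illustrative):

# Hypothetical CI step: SENTRY_DSN must be defined as a repository secret.
- name: Run agent under SafeClaw
  run: npx @authensor/safeclaw wrap -- node my-agent.js
  env:
    SENTRY_DSN: ${{ secrets.SENTRY_DSN }}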
Step 3: Understand Sentry Event Structure
SafeClaw sends structured events to Sentry with these components:
Error Level Mapping:
- action.denied maps to Sentry warning
- audit.integrity_failure maps to Sentry fatal
- policy.parse_error maps to Sentry error
- agent.timeout maps to Sentry error
Event Context:
{
  "exception": {
    "values": [
      {
        "type": "SafeClawActionDenied",
        "value": "shell.execute denied by policy rule: default:deny"
      }
    ]
  },
  "tags": {
    "action_type": "shell.execute",
    "decision": "denied",
    "agent": "gpt-4o",
    "team": "platform"
  },
  "extra": {
    "target": "rm -rf /tmp/data",
    "policy_rule": "default:deny",
    "audit_hash": "a3f2...b71c",
    "chain_position": 172,
    "recent_actions": ["file.read:allowed", "file.write:prompted", "shell.execute:denied"]
  },
  "breadcrumbs": [
    {
      "timestamp": "2026-02-13T14:31:58.000Z",
      "category": "safeclaw",
      "message": "file.read allowed: src/main.js",
      "level": "info"
    },
    {
      "timestamp": "2026-02-13T14:32:00.000Z",
      "category": "safeclaw",
      "message": "file.write prompted: src/utils.js",
      "level": "warning"
    }
  ]
}
Breadcrumbs show the sequence of agent actions leading up to the error, providing crucial debugging context.
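Because the decision and action type arrive as tags, these events are filterable with Sentry's standard tag search. For example, to list only denied shell executions in the issue stream:

action_type:shell.execute decision:denied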
Step 4: Configure Issue Grouping
SafeClaw sets Sentry fingerprints to group related events:
notifications:
  sentry:
    dsn: "${SENTRY_DSN}"
    fingerprint:
      strategy: "action_and_rule"
With action_and_rule grouping, all shell.execute denials by the same policy rule are grouped into a single Sentry issue. This prevents thousands of individual issues while still tracking every occurrence.
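Concretely, the denial event shown in Step 3 would carry a fingerprint derived from the action type and the matching rule. The exact values SafeClaw emits may differ, but the shape is a standard Sentry fingerprint array:

{
  "fingerprint": ["safeclaw", "shell.execute", "default:deny"]
}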
Available strategies:
- action_type -- groups by action type only
- action_and_rule -- groups by action type and policy rule
- action_and_target -- groups by action type and target resource
- unique -- every event creates a new issue
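Pick the strategy that matches how you triage. If the resource being touched matters more to you than the rule that fired, switch the same notification block to action_and_target:

notifications:
  sentry:
    dsn: "${SENTRY_DSN}"
    fingerprint:
      strategy: "action_and_target"

Avoid unique in busy environments: every denied action opens a fresh issue, which defeats Sentry's grouping.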
Step 5: Set Up Sentry Alerts
In Sentry, go to Alerts > Create Alert Rule. For example, create one rule for denied shell commands and one for audit integrity failures:

Denied shell commands:
- When: A new issue is created
- If: Issue is tagged with decision:denied and action_type:shell.execute
- Then: Send a notification to Slack channel #ai-agent-alerts

Audit integrity failures:
- When: An event occurs with level fatal
- If: Event message contains "integrity_failure"
- Then: Send to PagerDuty
Step 6: Test the Integration
Send a test event to Sentry:
npx @authensor/safeclaw test-notify --channel sentry
Verify the event appears in your Sentry project dashboard. Then trigger a real denied action:
npx @authensor/safeclaw wrap -- node my-agent.js
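If you don't have an agent handy, a minimal stand-in for my-agent.js is enough to exercise the policy. This sketch assumes SafeClaw's wrapper gates file reads and child-process execution as the file.read and shell.execute actions from the example policy; the script itself is hypothetical test code, not part of SafeClaw:

// my-agent.js -- hypothetical test script for the example policy above.
const fs = require("node:fs");
const { execSync } = require("node:child_process");

// Allowed by the policy: file.read on src/**.
const source = fs.readFileSync("src/main.js", "utf8");
console.log(`read ${source.length} bytes from src/main.js`);

// Denied by the policy: shell.execute falls through to decision: deny.
// Under the wrapper, this is the action that should surface in Sentry.
try {
  execSync("rm -rf /tmp/data");
} catch (err) {
  console.error("shell.execute blocked:", err.message);
}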
Check Sentry for the new issue with full context, breadcrumbs, and tags.
Summary
SafeClaw integrates with Sentry to bring AI agent safety into your error tracking workflow. Structured events with breadcrumbs, tags, and fingerprinting make it easy to investigate and triage agent safety incidents. Sentry's alerting and issue management capabilities complement SafeClaw's deny-by-default approach. SafeClaw is MIT licensed and open source.
Related Guides
- How to Monitor AI Agent Actions in Datadog
- How to Integrate AI Agent Safety with PagerDuty
- How to Send AI Agent Audit Logs to Splunk
- How to Add AI Agent Safety to VS Code
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw