How to Integrate AI Agent Safety with PagerDuty
SafeClaw by Authensor integrates with PagerDuty to escalate critical AI agent safety events into your incident management workflow. When an AI agent attempts a dangerous action, fails an audit integrity check, or triggers repeated policy violations, SafeClaw automatically creates a PagerDuty incident. SafeClaw supports Claude and OpenAI agents, is covered by 446 tests, and provides hash-chained audit logs.
Prerequisites
- SafeClaw installed (npx @authensor/safeclaw)
- A PagerDuty account with a service configured
- A PagerDuty Events API v2 integration key
Step 1: Create a PagerDuty Service Integration
- Log into PagerDuty and navigate to Services > Service Directory.
- Select an existing service or create a new one named "AI Agent Safety".
- Go to the Integrations tab and click Add Integration.
- Select Events API v2 and click Add.
- Copy the Integration Key (also called a routing key).
Step 2: Configure SafeClaw for PagerDuty
Add PagerDuty settings to your .safeclaw/policy.yaml:
version: 1
default: deny

notifications:
  pagerduty:
    routing_key: "${PAGERDUTY_ROUTING_KEY}"
    severity_map:
      action.denied: warning
      audit.integrity_failure: critical
      policy.violation: error
      action.prompted: info
    dedup_key_prefix: "safeclaw"

rules:
  - action: file.read
    paths:
      - "src/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
    decision: prompt
  - action: shell.execute
    decision: deny
  - action: network.request
    decision: deny
Set the environment variable:
export PAGERDUTY_ROUTING_KEY="your-integration-key-here"
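The ${PAGERDUTY_ROUTING_KEY} placeholder is resolved from the environment when the policy is loaded. If you need the same behavior in your own tooling, a minimal expansion helper might look like the sketch below (illustrative only, not SafeClaw's actual config loader):

import { readFileSync } from "node:fs";

// Expand ${VAR} placeholders from process.env.
// Illustrative sketch only -- not SafeClaw's actual config loader.
function expandEnv(raw: string): string {
  return raw.replace(/\$\{([A-Z0-9_]+)\}/g, (_match, name: string) => {
    const value = process.env[name];
    if (value === undefined) {
      throw new Error(`missing environment variable: ${name}`);
    }
    return value;
  });
}

// Read the policy file and expand placeholders before parsing it as YAML.
const policyText = expandEnv(readFileSync(".safeclaw/policy.yaml", "utf8"));

Failing fast on a missing variable is deliberate: a silently empty routing key would make every PagerDuty delivery fail without an obvious cause.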
Step 3: Configure Severity Escalation
Map SafeClaw events to PagerDuty severities for proper routing:
notifications:
  pagerduty:
    routing_key: "${PAGERDUTY_ROUTING_KEY}"
    escalation:
      critical:
        events:
          - audit.integrity_failure
        auto_resolve: false
      high:
        events:
          - action.denied
        threshold:
          count: 5
          window_minutes: 10
        auto_resolve: true
        resolve_after_minutes: 30
      low:
        events:
          - action.prompted
        threshold:
          count: 20
          window_minutes: 60
This configuration creates a critical incident immediately on any audit integrity failure. For denied actions, it waits until 5 denials occur within 10 minutes before escalating. Prompted actions only trigger a low-severity alert after 20 occurrences in an hour.
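The count/window thresholds above amount to a per-event-type sliding-window counter. A minimal sketch of that logic, with illustrative names rather than SafeClaw's internals:

// Sliding-window threshold: escalate only once `count` events of the
// same type occur within `windowMinutes`. Illustrative sketch only.
class ThresholdTracker {
  private timestamps = new Map<string, number[]>();

  constructor(private count: number, private windowMinutes: number) {}

  // Record an event; returns true when the threshold is crossed.
  record(eventType: string, now: number = Date.now()): boolean {
    const windowMs = this.windowMinutes * 60_000;
    const recent = (this.timestamps.get(eventType) ?? []).filter(
      (t) => now - t < windowMs,
    );
    recent.push(now);
    this.timestamps.set(eventType, recent);
    return recent.length >= this.count;
  }
}

// With the config above: the fifth denial inside 10 minutes escalates.
const denied = new ThresholdTracker(5, 10);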
Step 4: Configure Deduplication
PagerDuty uses deduplication keys to group related events. SafeClaw generates these automatically:
notifications:
  pagerduty:
    routing_key: "${PAGERDUTY_ROUTING_KEY}"
    dedup_key_prefix: "safeclaw"
    dedup_strategy: "action_type"
With action_type deduplication, all shell.execute denials group into a single PagerDuty incident, preventing alert storms while still capturing every individual event in SafeClaw's audit log.
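The dedup key itself combines the configured prefix, the action type, and a date stamp, as the sample payload in Step 5 shows. A sketch of that construction (the format is inferred from that sample, not from SafeClaw's source):

// Build a PagerDuty dedup key for the action_type strategy.
// Format inferred from the sample payload: "<prefix>-<action>-<YYYYMMDD>".
function buildDedupKey(prefix: string, actionType: string, when = new Date()): string {
  const ymd = when.toISOString().slice(0, 10).replace(/-/g, ""); // e.g. "20260213"
  return `${prefix}-${actionType}-${ymd}`;
}

buildDedupKey("safeclaw", "shell.execute"); // => "safeclaw-shell.execute-20260213" on 2026-02-13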
Step 5: Add Custom Details to Incidents
SafeClaw includes rich context in the PagerDuty event payload:
{
  "routing_key": "your-key",
  "event_action": "trigger",
  "dedup_key": "safeclaw-shell.execute-20260213",
  "payload": {
    "summary": "SafeClaw: shell.execute denied (5 occurrences in 10 min)",
    "severity": "error",
    "source": "safeclaw-agent-monitor",
    "component": "ai-agent",
    "custom_details": {
      "action_type": "shell.execute",
      "target": "rm -rf /tmp/data",
      "agent": "gpt-4o",
      "policy_rule": "deny",
      "audit_hash": "a3f2...b71c",
      "recent_log_entries": 5
    }
  }
}
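SafeClaw sends this payload for you, but when debugging you can post the same JSON directly to PagerDuty's Events API v2 enqueue endpoint. A standalone sketch (the endpoint and field names are PagerDuty's documented API; the values are taken from the example above, and the severity must be one of PagerDuty's accepted values):

// POST an Events API v2 payload to PagerDuty's documented enqueue endpoint.
// Standalone debugging sketch -- not how SafeClaw ships events internally.
const event = {
  routing_key: process.env.PAGERDUTY_ROUTING_KEY,
  event_action: "trigger",
  dedup_key: "safeclaw-shell.execute-20260213",
  payload: {
    summary: "SafeClaw: shell.execute denied (5 occurrences in 10 min)",
    severity: "error", // Events API v2 accepts: critical, error, warning, info
    source: "safeclaw-agent-monitor",
    component: "ai-agent",
  },
};

const res = await fetch("https://events.pagerduty.com/v2/enqueue", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(event),
});
console.log(res.status, await res.json()); // 202 with "status": "success" on acceptance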
Step 6: Test the Integration
Send a test event to PagerDuty:
npx @authensor/safeclaw test-notify --channel pagerduty
Verify the incident appears in your PagerDuty dashboard. Then trigger a real denied action and check that the incident is created with the correct severity and deduplication.
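To confirm programmatically that the incident landed, you can also query PagerDuty's REST API. The sketch below assumes a REST API token in PAGERDUTY_API_TOKEN; note this is a separate credential from the Events API routing key:

// List the most recent PagerDuty incidents to confirm the test event arrived.
// Requires a REST API token (not the Events routing key).
const res = await fetch(
  "https://api.pagerduty.com/incidents?limit=5&sort_by=created_at:desc",
  {
    headers: {
      Authorization: `Token token=${process.env.PAGERDUTY_API_TOKEN}`,
      Accept: "application/vnd.pagerduty+json;version=2",
    },
  },
);
const { incidents } = await res.json();
for (const i of incidents) {
  console.log(i.created_at, i.urgency, i.title);
}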
Summary
SafeClaw integrates with PagerDuty to bring AI agent safety into your incident management workflow. Severity mapping, threshold escalation, and deduplication ensure your on-call team receives actionable alerts without noise. Hash-chained audit logs provide the forensic detail needed during incident investigation. SafeClaw is MIT licensed and open source.
Related Guides
- How to Send AI Agent Safety Alerts to Slack
- How to Track AI Agent Errors in Sentry
- How to Get SMS Alerts for AI Agent Safety Events
- How to Monitor AI Agent Actions in Datadog
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw