# How to Monitor AI Agent Actions in Datadog
SafeClaw by Authensor integrates with Datadog to provide full observability over AI agent actions. Every allow, deny, and prompt decision is forwarded as a metric and log entry, enabling you to build dashboards, set alerts, and correlate agent behavior with your existing infrastructure monitoring. SafeClaw supports Claude and OpenAI, ships with 446 tests, and uses hash-chained audit logs.
## Prerequisites

- SafeClaw installed (`npx @authensor/safeclaw`)
- A Datadog account with an API key
- The Datadog Agent installed on your host (optional but recommended)
## Step 1: Install SafeClaw

Initialize SafeClaw in your project:

```bash
npx @authensor/safeclaw
```
## Step 2: Configure Datadog Metrics

Add Datadog settings to `.safeclaw/policy.yaml`:
```yaml
version: 1
default: deny

notifications:
  datadog:
    api_key: "${DATADOG_API_KEY}"
    site: "datadoghq.com"
    metrics:
      enabled: true
      prefix: "safeclaw"
      tags:
        - "env:production"
        - "service:ai-agent"
        - "team:platform"
    logs:
      enabled: true
      source: "safeclaw"
      service: "ai-agent-safety"

rules:
  - action: file.read
    paths:
      - "src/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
    decision: prompt
  - action: shell.execute
    decision: deny
```
Set the environment variable:

```bash
export DATADOG_API_KEY="your_datadog_api_key"
```
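To make the flow concrete, here is a minimal sketch of how a gate decision could be forwarded as a count metric through Datadog's v2 metrics API, matching the `api_key`, `site`, and `prefix` settings above. The function names (`build_series`, `submit`) are illustrative, not SafeClaw's actual internals.

```python
import json
import os
import time
import urllib.request

DATADOG_SITE = "datadoghq.com"  # matches the `site` field in policy.yaml


def build_series(decision: str, action_type: str, tags: list[str]) -> dict:
    """Build a Datadog v2 metrics payload for one gate decision.

    Metric names mirror the Step 3 table (e.g. safeclaw.actions.denied);
    in the v2 series API, type 1 means "count".
    """
    return {
        "series": [{
            "metric": f"safeclaw.actions.{decision}",
            "type": 1,  # count
            "points": [{"timestamp": int(time.time()), "value": 1}],
            "tags": tags + [f"action_type:{action_type}"],
        }]
    }


def submit(payload: dict) -> None:
    """POST the payload to Datadog's v2 series endpoint."""
    req = urllib.request.Request(
        f"https://api.{DATADOG_SITE}/api/v2/series",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "DD-API-KEY": os.environ["DATADOG_API_KEY"],
        },
    )
    urllib.request.urlopen(req)


payload = build_series("denied", "shell.execute",
                       ["env:production", "service:ai-agent"])
# submit(payload)  # uncomment to actually send
```
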
## Step 3: Understand the Metrics SafeClaw Sends
SafeClaw emits the following custom metrics to Datadog:
| Metric | Type | Description |
|--------|------|-------------|
| `safeclaw.actions.total` | Counter | Total agent actions processed |
| `safeclaw.actions.allowed` | Counter | Actions that were allowed |
| `safeclaw.actions.denied` | Counter | Actions that were denied |
| `safeclaw.actions.prompted` | Counter | Actions requiring user confirmation |
| `safeclaw.audit.chain_valid` | Gauge | 1 if hash chain is valid, 0 if broken |
| `safeclaw.audit.entries` | Gauge | Total audit log entries |
| `safeclaw.policy.rules_count` | Gauge | Number of active policy rules |
All metrics are tagged with the action type (`action_type:file.write`), the agent identifier, and your custom tags.
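The `safeclaw.audit.chain_valid` gauge reflects whether the hash-chained audit log still verifies end to end. As a rough sketch of the idea, each entry's hash covers the previous hash plus the entry body, so any tampering breaks every subsequent link. The field names (`prev_hash`, `hash`) and the SHA-256 construction here are assumptions for illustration; SafeClaw's actual on-disk audit format may differ.

```python
import hashlib
import json


def verify_chain(entries: list[dict]) -> int:
    """Return 1 if the hash chain verifies, 0 if any link is broken --
    the value a chain_valid-style gauge would report.

    Each entry's `hash` must equal SHA-256(prev_hash + canonical JSON
    of the entry body), and its `prev_hash` must match the previous
    entry's hash.
    """
    prev = "0" * 64  # genesis value for the first entry
    for entry in entries:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if entry.get("prev_hash") != prev or entry["hash"] != expected:
            return 0
        prev = entry["hash"]
    return 1
```

Because each hash depends on all prior entries, re-checking the chain on every export gives a cheap, continuous integrity signal rather than a one-off audit.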
## Step 4: Forward Logs to Datadog

SafeClaw sends structured JSON logs that Datadog can parse automatically:

```json
{
  "timestamp": "2026-02-13T14:32:01.000Z",
  "level": "warn",
  "source": "safeclaw",
  "service": "ai-agent-safety",
  "message": "Action denied: shell.execute",
  "action_type": "shell.execute",
  "target": "rm -rf /tmp/data",
  "decision": "denied",
  "agent": "gpt-4o",
  "audit_hash": "a3f2...b71c",
  "policy_rule": "default:deny"
}
```
Create a Datadog log pipeline to extract facets from the `action_type`, `decision`, and `agent` fields for filtering and grouping.
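To see what those facets buy you, here is a small sketch that performs the same grouping locally on a stream of SafeClaw-style JSON log lines: tallying denied actions by `action_type`, exactly the breakdown a `decision:denied` facet query would give you in the Datadog Logs view. The helper name is hypothetical.

```python
import json
from collections import Counter


def deny_counts_by_action(log_lines: list[str]) -> Counter:
    """Tally denied actions by action_type -- the same grouping that
    faceting on `action_type` and `decision` enables in Datadog."""
    counts = Counter()
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("decision") == "denied":
            counts[entry["action_type"]] += 1
    return counts
```
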
## Step 5: Create a Datadog Monitor

Set up an alert for unusual agent behavior. In Datadog, go to Monitors > New Monitor > Metric and configure:

```text
Metric: safeclaw.actions.denied
Aggregation: sum by {action_type}
Alert threshold: > 10 in last 5 minutes
Warning threshold: > 5 in last 5 minutes
Notification: @slack-ai-safety-alerts
Message: "SafeClaw: High deny rate detected for {{action_type.name}}. Check audit log."
```
This alert fires when more than 10 actions are denied in a 5-minute window, indicating either a misconfigured agent or an attempted security bypass.
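If you prefer to manage monitors as code, the same alert can be expressed as a body for Datadog's create-monitor endpoint (`POST /api/v1/monitor`). This is a sketch of the equivalent definition; the monitor name and exact message wording are choices, not SafeClaw output.

```python
def build_deny_monitor() -> dict:
    """Monitor body mirroring the UI settings above, suitable for
    POST https://api.datadoghq.com/api/v1/monitor (requires API and
    application keys)."""
    return {
        "name": "SafeClaw: high deny rate",
        "type": "metric alert",
        # sum of denies over the last 5 minutes, grouped by action_type
        "query": ("sum(last_5m):sum:safeclaw.actions.denied{*} "
                  "by {action_type} > 10"),
        "message": ("SafeClaw: High deny rate detected for "
                    "{{action_type.name}}. Check audit log. "
                    "@slack-ai-safety-alerts"),
        "options": {"thresholds": {"critical": 10, "warning": 5}},
    }
```
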
## Step 6: Build a Dashboard

Create a Datadog dashboard with these widgets:

- Timeseries: `safeclaw.actions.total` grouped by `decision` (stacked area chart)
- Top List: Most denied action types in the last 24 hours
- Query Value: `safeclaw.audit.chain_valid` with a red/green conditional format
- Log Stream: Filtered to `source:safeclaw` and `decision:denied`
- Change: Deny rate compared to previous day
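If you manage dashboards via the Datadog Dashboards API instead of the UI, the first widget above can be sketched as a widget definition using the classic string-query request format. The title is a placeholder choice.

```python
def timeseries_widget() -> dict:
    """The first widget from the list above as a Datadog dashboard
    widget definition (classic "q" request format)."""
    return {
        "definition": {
            "title": "Agent actions by decision",
            "type": "timeseries",
            "requests": [{
                # stacked area of all actions, split by decision tag
                "q": "sum:safeclaw.actions.total{*} by {decision}",
                "display_type": "area",
            }],
        }
    }
```
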
## Step 7: Test

Trigger several denied actions and verify that metrics appear in Datadog:

```bash
npx @authensor/safeclaw test-notify --channel datadog
npx @authensor/safeclaw wrap -- node my-agent.js
```
Check your Datadog Metrics Explorer for `safeclaw.*` metrics and your Logs view for SafeClaw entries.
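You can also spot-check from the command line by hitting Datadog's v1 timeseries query endpoint. This sketch only builds the query URL; issuing the request additionally requires `DD-API-KEY` and `DD-APPLICATION-KEY` headers, and the helper name is illustrative.

```python
import time
import urllib.parse


def metrics_query_url(site: str = "datadoghq.com") -> str:
    """URL to check the deny counter over the last hour via
    GET /api/v1/query (authentication headers added separately)."""
    now = int(time.time())
    params = urllib.parse.urlencode({
        "from": now - 3600,          # one hour ago, epoch seconds
        "to": now,
        "query": "sum:safeclaw.actions.denied{*}",
    })
    return f"https://api.{site}/api/v1/query?{params}"
```
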
## Summary
SafeClaw integrates with Datadog to provide metrics, logs, monitors, and dashboards for AI agent safety. Real-time visibility into allow/deny decisions helps teams detect anomalous agent behavior. The hash chain validity metric provides continuous audit integrity monitoring. SafeClaw is MIT licensed and open source.
## Related Guides
- How to Build an AI Agent Safety Dashboard in Grafana
- How to Send AI Agent Audit Logs to Splunk
- How to Send AI Agent Safety Alerts to Slack
- How to Secure AI Agents in GitHub Codespaces
## Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

```bash
$ npx @authensor/safeclaw
```