How to Build an AI Agent Safety Dashboard in Grafana
SafeClaw by Authensor exposes Prometheus-compatible metrics that you can visualize in Grafana to build a comprehensive AI agent safety dashboard. Track allow/deny rates, audit chain integrity, policy rule hits, and agent behavior patterns in real time. SafeClaw supports Claude and OpenAI agents, ships with 446 tests, and provides hash-chained audit logs.
Prerequisites
- SafeClaw installed (npx @authensor/safeclaw)
- Grafana 10+ running (self-hosted or Grafana Cloud)
- Prometheus configured as a Grafana data source
Step 1: Enable the SafeClaw Prometheus Endpoint
Configure SafeClaw to expose a metrics endpoint in .safeclaw/policy.yaml:
version: 1
default: deny

metrics:
  prometheus:
    enabled: true
    port: 9742
    path: "/metrics"
    labels:
      environment: "production"
      service: "ai-agent"

rules:
  - action: file.read
    paths:
      - "src/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
    decision: prompt
  - action: shell.execute
    decision: deny
Start SafeClaw with the metrics server:
npx @authensor/safeclaw --metrics
Verify the endpoint:
curl http://localhost:9742/metrics
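If the endpoint is live, the response should include metric families with the safeclaw_ prefix (shown in Step 3). As a quick sanity check, assuming the port and path configured above:

curl -s http://localhost:9742/metrics | grep "^safeclaw_"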
Step 2: Add SafeClaw to Prometheus Scrape Config
Add the SafeClaw target to your prometheus.yml:
scrape_configs:
  - job_name: "safeclaw"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9742"]
        labels:
          instance: "dev-workstation"
Prometheus will scrape SafeClaw's metrics every 15 seconds.
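Before building panels, it helps to confirm the scrape target is healthy. In the Prometheus UI or Grafana Explore, this query (using the job_name from the config above) should return 1 for the SafeClaw target:

up{job="safeclaw"}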
Step 3: Understand the Available Metrics
SafeClaw exposes these Prometheus metrics:
# HELP safeclaw_actions_total Total agent actions processed
# TYPE safeclaw_actions_total counter
safeclaw_actions_total{decision="allowed",action_type="file.read"} 142
safeclaw_actions_total{decision="denied",action_type="shell.execute"} 7
safeclaw_actions_total{decision="prompted",action_type="file.write"} 23

# HELP safeclaw_audit_chain_valid Whether the audit hash chain is valid
# TYPE safeclaw_audit_chain_valid gauge
safeclaw_audit_chain_valid 1

# HELP safeclaw_audit_entries_total Total audit log entries
# TYPE safeclaw_audit_entries_total gauge
safeclaw_audit_entries_total 172

# HELP safeclaw_policy_rules_active Number of active policy rules
# TYPE safeclaw_policy_rules_active gauge
safeclaw_policy_rules_active 5

# HELP safeclaw_action_duration_seconds Time to process policy decisions
# TYPE safeclaw_action_duration_seconds histogram
safeclaw_action_duration_seconds_bucket{le="0.001"} 150
safeclaw_action_duration_seconds_bucket{le="0.01"} 170
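The histogram buckets feed the latency panel and alert later in this guide. As a sketch using standard PromQL over the bucket metric above, p99 decision latency can be computed with:

histogram_quantile(0.99, sum by (le) (rate(safeclaw_action_duration_seconds_bucket[5m])))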
Step 4: Build the Grafana Dashboard
Create a new dashboard in Grafana and add these panels:
Panel 1: Action Rate Over Time (Time Series)
rate(safeclaw_actions_total[5m])
Group by the decision label. Use a stacked area chart with green for allowed, red for denied, and yellow for prompted.
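If you prefer to do the grouping in the query itself rather than with a panel transformation, an equivalent expression is:

sum by (decision) (rate(safeclaw_actions_total[5m]))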
Panel 2: Deny Rate Percentage (Gauge)
sum(rate(safeclaw_actions_total{decision="denied"}[1h]))
  /
sum(rate(safeclaw_actions_total[1h]))
  * 100
Set thresholds: green < 5%, yellow < 20%, red >= 20%.
Panel 3: Audit Chain Health (Stat)
safeclaw_audit_chain_valid
Map value 1 to "Valid" (green) and 0 to "Broken" (red).
Panel 4: Top Denied Actions (Table)
topk(10, sum by (action_type) (increase(safeclaw_actions_total{decision="denied"}[24h])))
Panel 5: Policy Decision Latency (Heatmap)
rate(safeclaw_action_duration_seconds_bucket[5m])
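If several panels and alerts reuse the deny-rate expression from Panel 2, a Prometheus recording rule keeps them consistent. A minimal sketch (the group and rule names here are illustrative):

groups:
  - name: safeclaw-recording
    rules:
      - record: safeclaw:deny_rate_percent:1h
        expr: |
          sum(rate(safeclaw_actions_total{decision="denied"}[1h]))
            /
          sum(rate(safeclaw_actions_total[1h]))
            * 100

Panels and alerts can then query safeclaw:deny_rate_percent:1h directly.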
Step 5: Configure Grafana Alerts
Add alert rules to your dashboard panels:
- High Deny Rate: Alert when denied actions exceed 20% of total actions over 10 minutes
- Audit Chain Broken: Alert immediately when safeclaw_audit_chain_valid drops to 0
- Decision Latency Spike: Alert when p99 decision latency exceeds 100ms
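If you manage alerting as code rather than in the Grafana UI, the first two conditions can also be expressed as Prometheus alerting rules. A minimal sketch (group name, alert names, and severity labels are illustrative):

groups:
  - name: safeclaw-alerts
    rules:
      - alert: SafeClawHighDenyRate
        expr: |
          sum(rate(safeclaw_actions_total{decision="denied"}[10m]))
            /
          sum(rate(safeclaw_actions_total[10m]))
            > 0.20
        for: 10m
        labels:
          severity: warning
      - alert: SafeClawAuditChainBroken
        expr: safeclaw_audit_chain_valid == 0
        labels:
          severity: critical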
Step 6: Test the Dashboard
Generate some agent activity and verify the dashboard updates:
npx @authensor/safeclaw wrap -- node my-agent.js
Check that all panels populate with data and that alert rules evaluate correctly.
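One quick check in Grafana Explore is to confirm the action counters moved during the run (this assumes the wrapped agent triggered at least one gated action within the window):

increase(safeclaw_actions_total[15m]) > 0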
Summary
SafeClaw exposes Prometheus-compatible metrics that power rich Grafana dashboards for AI agent safety monitoring. Real-time panels for action rates, deny percentages, audit chain health, and decision latency give teams complete visibility. Grafana alerts close the loop by notifying when thresholds are breached. SafeClaw is MIT licensed and open source.
Related Guides
- How to Monitor AI Agent Actions in Datadog
- How to Send AI Agent Audit Logs to Splunk
- How to Set Up Custom Webhooks for AI Agent Events
- How to Run AI Agents Safely from the Terminal
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw