How to Send AI Agent Audit Logs to Splunk
SafeClaw by Authensor forwards its hash-chained audit logs to Splunk through the HTTP Event Collector (HEC), giving your security team full search, correlation, and alerting capabilities over AI agent actions. Every allow, deny, and prompt decision is sent as a structured event with the cryptographic hash chain intact. SafeClaw supports Claude and OpenAI agents and ships with 446 tests.
Prerequisites
- SafeClaw installed (npx @authensor/safeclaw)
- Splunk Enterprise or Splunk Cloud with admin access
- An HEC token configured in Splunk (Step 1 below walks through creating one)
Step 1: Create a Splunk HEC Input
- In Splunk, go to Settings > Data Inputs > HTTP Event Collector.
- Click New Token.
- Name it "SafeClaw Audit Logs".
- Select or create an index (e.g., ai_agent_audit).
- Set the source type to _json.
- Click Submit and copy the token.
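If you prefer to script token creation, Splunk's management REST API (port 8089, not the HEC port) exposes the same input. A hedged sketch; the hostname and admin credentials are placeholders, and the endpoint path follows Splunk's REST API reference:

# Create the HEC token via the management API; the response XML
# includes the generated token value.
curl -k -u admin:changeme \
  "https://splunk.yourcompany.com:8089/servicesNS/nobody/splunk_httpinput/data/inputs/http" \
  -d name="SafeClaw Audit Logs" \
  -d index=ai_agent_audit \
  -d sourcetype=_json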
Step 2: Configure SafeClaw for Splunk
Add Splunk HEC settings to .safeclaw/policy.yaml:
version: 1
default: deny
notifications:
  splunk:
    hec_url: "${SPLUNK_HEC_URL}"
    hec_token: "${SPLUNK_HEC_TOKEN}"
    index: "ai_agent_audit"
    source: "safeclaw"
    sourcetype: "_json"
    events:
      - action.allowed
      - action.denied
      - action.prompted
      - audit.integrity_check
    batch:
      size: 50
      interval_seconds: 10
rules:
  - action: file.read
    paths:
      - "src/**"
    decision: allow
  - action: file.write
    paths:
      - "src/**"
    decision: prompt
  - action: shell.execute
    decision: deny
Set environment variables:
export SPLUNK_HEC_URL="https://splunk.yourcompany.com:8088/services/collector/event"
export SPLUNK_HEC_TOKEN="your-hec-token"
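Before wiring up SafeClaw, you can confirm the URL and token work with a direct request to the HEC endpoint (the event body here is an arbitrary test payload):

# Send a hand-rolled test event straight to HEC.
curl -k "$SPLUNK_HEC_URL" \
  -H "Authorization: Splunk $SPLUNK_HEC_TOKEN" \
  -d '{"event": {"message": "safeclaw hec smoke test"}, "sourcetype": "_json", "index": "ai_agent_audit"}'

A healthy endpoint responds with {"text":"Success","code":0}.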
Step 3: Understand the Event Structure
SafeClaw sends structured JSON events to Splunk:
{
  "time": 1739456001,
  "host": "dev-workstation",
  "source": "safeclaw",
  "sourcetype": "_json",
  "index": "ai_agent_audit",
  "event": {
    "action_type": "shell.execute",
    "target": "rm -rf /tmp/data",
    "decision": "denied",
    "agent": "gpt-4o",
    "policy_rule": "default:deny",
    "timestamp": "2026-02-13T14:32:01.000Z",
    "audit_hash": "a3f2c8e9b1d4f6a7c3e5d2b8f0a1c4e6",
    "previous_hash": "e7d3a9f2c1b5e8d4a6f0c2e9b3d7a1f5",
    "chain_position": 172,
    "project": "my-ai-app",
    "environment": "development"
  }
}
The audit_hash and previous_hash fields preserve the hash chain in Splunk, enabling you to verify chain integrity directly from Splunk searches.
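You can also re-check the chain offline against events exported from a Splunk search. A minimal sketch, assuming the export is a JSON array of the event payloads shown above (the filename is illustrative); note it verifies only that adjacent links are consistent, not that each hash was computed correctly:

import json

# Load exported SafeClaw events (assumed: a JSON array of the "event"
# payloads shown above, exported from a Splunk search).
with open("safeclaw_events.json") as f:
    events = json.load(f)

# Order by chain position, then confirm each event's previous_hash
# matches the audit_hash of the event immediately before it.
events.sort(key=lambda e: e["chain_position"])
breaks = [
    curr["chain_position"]
    for prev, curr in zip(events, events[1:])
    if curr["previous_hash"] != prev["audit_hash"]
]

print(f"broken links at positions {breaks}" if breaks
      else f"chain intact across {len(events)} events")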
Step 4: Build Splunk Searches
Use these SPL queries to analyze AI agent behavior:
All denied actions in the last 24 hours:
index=ai_agent_audit source=safeclaw decision=denied
| stats count by action_type, target
| sort -count
Hash chain integrity verification (sort 0 lifts sort's default 10,000-result cap so long chains aren't silently truncated):
index=ai_agent_audit source=safeclaw
| sort 0 chain_position
| streamstats current=f last(audit_hash) as expected_previous
| eval chain_valid=if(previous_hash=expected_previous OR chain_position=1, "valid", "broken")
| search chain_valid="broken"
Agent activity timeline:
index=ai_agent_audit source=safeclaw
| timechart span=1h count by decision
Top agents by denied actions:
index=ai_agent_audit source=safeclaw decision=denied
| stats count by agent
| sort -count
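The chain search above flags inconsistencies between adjacent events; a complementary gap check on the same fields distinguishes events that never arrived from events that were altered:

index=ai_agent_audit source=safeclaw
| stats min(chain_position) as first, max(chain_position) as last, dc(chain_position) as received
| eval missing = last - first + 1 - received

A nonzero missing count means some chain positions were never indexed, which would also surface as broken links in the previous search.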
Step 5: Create Splunk Alerts
Set up alerts for critical events:
- Go to Search > Save As > Alert.
- Use the query:
  index=ai_agent_audit source=safeclaw decision=denied | stats count | where count > 10
- Set trigger conditions: "Number of results is greater than 0" in a 5-minute window.
- Configure actions: Send email, trigger a webhook, or post to Slack.
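If you manage Splunk as code, the same alert can live in savedsearches.conf. A hedged sketch; the stanza name, schedule, and recipient are illustrative, and key names follow Splunk's savedsearches.conf reference:

[SafeClaw - Denied Action Spike]
search = index=ai_agent_audit source=safeclaw decision=denied | stats count | where count > 10
dispatch.earliest_time = -5m
dispatch.latest_time = now
cron_schedule = */5 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
action.email = 1
action.email.to = security-team@yourcompany.com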
Step 6: Test the Integration
Send test events to Splunk:
npx @authensor/safeclaw test-notify --channel splunk
Verify events appear in Splunk by searching: index=ai_agent_audit source=safeclaw. Then generate real activity:
npx @authensor/safeclaw wrap -- node my-agent.js
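Once the wrapped agent has run, a scoped search confirms real events are flowing (the 15-minute window is arbitrary):

index=ai_agent_audit source=safeclaw earliest=-15m
| stats count by decision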
Summary
SafeClaw integrates with Splunk through HEC to forward hash-chained audit logs for comprehensive search and analysis. The structured event format enables powerful SPL queries for security investigations. Batch sending reduces network overhead. Splunk alerts close the loop on critical agent events. SafeClaw is MIT licensed and open source.
Related Guides
- How to Monitor AI Agent Actions in Datadog
- How to Build an AI Agent Safety Dashboard in Grafana
- How to Track AI Agent Errors in Sentry
- How to Integrate AI Agent Safety with PagerDuty
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw