2025-12-22 · Authensor

EU AI Act High-Risk System Requirements and Agent Gating

Regulation Overview

The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk level: unacceptable, high, limited, and minimal. AI agents that autonomously access files, execute commands, and make network requests in critical infrastructure, healthcare, finance, or employment contexts can fall within the high-risk categories of Annex III. High-risk systems must comply with the requirements of Chapter III, Section 2 (Articles 9–15) before being placed on the EU market or put into service.

Relevant Requirements

Article 9 — Risk Management System

High-risk AI systems must have a risk management system established, implemented, documented, and maintained throughout the lifecycle. The system must identify and analyze known and foreseeable risks, estimate and evaluate risks, and adopt appropriate risk management measures.

Article 10 — Data and Data Governance

Training, validation, and testing data sets must be subject to appropriate data governance practices. For AI agents accessing live data, this extends to runtime data access controls.
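
As an illustration of a runtime data-access control, a single rule in the policy format used in the Example Policy section below might look like the following; the path is hypothetical.

{
  "action": "file_read",
  "path": "/data/governed-sources/**",
  "decision": "allow",
  "reason": "Art. 10: reads limited to approved, governed data sources"
}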

Article 11 — Technical Documentation

Providers must draw up technical documentation demonstrating compliance with high-risk requirements. Documentation must include system design specifications, risk management measures, and monitoring capabilities.

Article 12 — Record-Keeping

High-risk AI systems must include logging capabilities that enable recording of events relevant to identifying risks, facilitating post-market monitoring, and ensuring traceability. Logs must be kept for a period appropriate to the intended purpose.
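
As a sketch of the hash-chaining idea behind such record-keeping (SafeClaw's actual record format is not specified here, so the field names and the GENESIS sentinel are assumptions), each record can commit to its predecessor's hash:

// Hypothetical hash-chained audit record, not SafeClaw's actual format.
import { createHash } from "node:crypto";

interface AuditRecord {
  timestamp: string; // when the action was requested
  action: string;    // e.g. "file_read"
  decision: string;  // e.g. "allow" | "deny" | "human_review"
  prevHash: string;  // SHA-256 of the previous record, chaining entries
  hash: string;      // SHA-256 over this record's fields plus prevHash
}

function appendRecord(
  log: AuditRecord[],
  action: string,
  decision: string
): AuditRecord {
  // First record links to a fixed sentinel instead of a predecessor.
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "GENESIS";
  const timestamp = new Date().toISOString();
  const hash = createHash("sha256")
    .update(`${timestamp}|${action}|${decision}|${prevHash}`)
    .digest("hex");
  const record: AuditRecord = { timestamp, action, decision, prevHash, hash };
  log.push(record);
  return record;
}

Because each hash covers the previous hash, altering or deleting any past record breaks every hash that follows it, which is what makes the trail tamper-evident.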

Article 13 — Transparency and Provision of Information

High-risk AI systems must be designed to ensure their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Deployers must understand what the system can and cannot do.

Article 14 — Human Oversight

High-risk AI systems must be designed to enable effective human oversight during use. This includes the ability to understand the system's capabilities and limitations, correctly interpret output, decide not to use the system, and intervene or interrupt the system.
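
One way to realize the intervene-or-interrupt requirement is a pre-execution gate that holds an action until a human decides. The sketch below is illustrative; PendingAction and requestApproval are hypothetical names, not SafeClaw's API.

// Hypothetical Article 14 pre-execution oversight gate.
type Decision = "allow" | "deny" | "human_review";

interface PendingAction {
  action: string;              // e.g. "file_write"
  target: string;              // e.g. a path the agent wants to write
  execute: () => Promise<void>;
}

async function gate(
  decision: Decision,
  pending: PendingAction,
  requestApproval: (p: PendingAction) => Promise<boolean>
): Promise<void> {
  if (decision === "deny") return; // blocked before execution
  if (decision === "human_review") {
    // The human can inspect, approve, or interrupt the pending action.
    const approved = await requestApproval(pending);
    if (!approved) return; // human declined: the action never runs
  }
  await pending.execute(); // only reached after oversight is satisfied
}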

Article 15 — Accuracy, Robustness, and Cybersecurity

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. They must be resilient against errors, faults, inconsistencies, and unauthorized third-party manipulation.

Compliance Gap Without Gating

Deploying AI agents as high-risk systems without action-level controls leaves core obligations unmet: no pre-execution point for human intervention (Article 14), no tamper-evident record of what the agent did (Article 12), no runtime enforcement of data governance boundaries (Article 10), and no documented link between identified risks and mitigating measures (Article 9).

How SafeClaw Addresses Each Requirement

| EU AI Act Article | Requirement | SafeClaw Capability |
|---|---|---|
| Article 9 | Risk management system | Policy configurations document risk management measures for each action type. Deny-by-default mitigates foreseeable risks by blocking unspecified actions. Simulation mode enables risk assessment testing. |
| Article 10 | Data governance | file_read rules control which data sources the agent can access. Path restrictions enforce data governance boundaries at runtime. |
| Article 11 | Technical documentation | Policy configurations, audit trail specifications, and test coverage (446 tests, TypeScript strict mode) constitute technical documentation of safeguards. |
| Article 12 | Record-keeping | Tamper-proof audit trail with SHA-256 hash chain records every action request, policy decision, and timestamp. Records are retained and exportable. |
| Article 13 | Transparency | Every policy decision includes the matching rule and reason. Audit trail provides full traceability of what the agent did and why it was allowed or denied. |
| Article 14 | Human oversight | Action-level gating provides a pre-execution control point. Policies can require human-in-the-loop approval for specific action types. Dashboard enables real-time monitoring. |
| Article 15 | Accuracy, robustness, cybersecurity | Zero third-party dependencies eliminate supply chain attack vectors. 446 tests verify enforcement correctness. Sub-millisecond evaluation prevents timeout-based bypasses. |
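
To make the deny-by-default behavior in the table concrete, the following minimal sketch evaluates an action against a policy. It assumes first-match rule semantics and substitutes a naive prefix match for real glob matching; SafeClaw's actual engine may differ.

// Minimal deny-by-default evaluation sketch (assumed semantics).
interface Rule {
  action: string;
  path?: string; // optional glob scope, e.g. "/app/candidates/anonymized/**"
  decision: "allow" | "deny" | "human_review";
  reason: string; // surfaced for Article 13 transparency
}

interface Policy {
  defaultAction: "allow" | "deny";
  rules: Rule[];
}

// Simplistic prefix match standing in for real glob matching.
function matches(pattern: string | undefined, path: string): boolean {
  if (pattern === undefined) return true; // rule applies regardless of path
  return path.startsWith(pattern.replace(/\*\*$/, ""));
}

function evaluate(policy: Policy, action: string, path: string) {
  for (const rule of policy.rules) {
    if (rule.action === action && matches(rule.path, path)) {
      return { decision: rule.decision, reason: rule.reason };
    }
  }
  // Nothing matched: fall through to the deny-by-default posture.
  return { decision: policy.defaultAction, reason: "No matching rule; default applied" };
}

With the example policy later in this document, a read of a file under /app/candidates/personal/ matches the second rule and is denied with its Article 10 reason, while any unlisted action type falls through to defaultAction.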

Evidence Generation

| EU AI Act Article | Required Evidence | SafeClaw Output |
|---|---|---|
| Article 9 | Risk management documentation | Policy configurations documenting controls per action type; simulation mode risk assessment results |
| Article 10 | Data governance records | file_read policy rules restricting data access scope |
| Article 11 | Technical documentation package | Open-source client code (MIT license), policy engine architecture docs, test suite results |
| Article 12 | System logs | Hash-chained audit trail with full action history |
| Article 13 | Transparency information for deployers | Decision reason fields in policy rules; audit trail with per-action explanations |
| Article 14 | Human oversight mechanism documentation | Human-in-the-loop policy configurations; dashboard monitoring capability documentation |
| Article 15 | Cybersecurity testing results | 446-test suite results; zero-dependency verification; strict mode compilation evidence |

Example Policy

An EU AI Act-compliant policy for a high-risk AI agent in an employment context (Annex III, point 4):

{
  "name": "eu-ai-act-hr-screening-agent",
  "defaultAction": "deny",
  "rules": [
    {
      "action": "file_read",
      "path": "/app/candidates/anonymized/**",
      "decision": "allow",
      "reason": "Art. 10 — Only anonymized candidate data accessible"
    },
    {
      "action": "file_read",
      "path": "/app/candidates/personal/**",
      "decision": "deny",
      "reason": "Art. 10 — Personal identifiers excluded per data governance"
    },
    {
      "action": "file_write",
      "path": "/app/output/screening-results/**",
      "decision": "human_review",
      "reason": "Art. 14 — Human oversight required before screening output is written"
    },
    {
      "action": "shell_exec",
      "decision": "deny",
      "reason": "Art. 9 — Shell execution outside risk tolerance for this system"
    },
    {
      "action": "network",
      "decision": "deny",
      "reason": "Art. 15 — No external network access to prevent data exfiltration"
    }
  ]
}
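
Two choices in this policy do the compliance work. The deny-by-default posture means any action type or path not explicitly listed is blocked, which is how the policy addresses foreseeable and unforeseen risks under Article 9. The human_review decision on screening output is the concrete Article 14 control point: the agent cannot write results until a human approves, and every rule carries a reason field that supplies the Article 13 transparency record.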

Install with npx @authensor/safeclaw. The browser dashboard setup wizard guides configuration of Article 14-compliant human oversight policies. The free tier with 7-day renewable keys enables compliance testing without financial commitment.

Audit Trail Export

For EU AI Act conformity assessments and market surveillance:

  1. Compile technical documentation — Include policy configurations, test results, and architecture documentation per Article 11
  2. Export the audit trail from safeclaw.onrender.com for the assessment period
  3. Demonstrate hash chain integrity — Article 12 requires trustworthy record-keeping; SHA-256 verification provides this (a verification sketch follows this list)
  4. Document human oversight points — Show which actions require human-in-the-loop approval per Article 14
  5. Present risk management measures — Map each policy rule to an identified risk per Article 9

The 100% open-source client (MIT license) allows notified bodies and conformity assessment bodies to audit the enforcement mechanism directly. Local execution ensures the system operates within EU data residency boundaries, supporting Article 10 data governance.
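
The verification sketch referenced in step 3, reusing the hypothetical record shape from the Article 12 sketch above:

// Walks the chain from the GENESIS sentinel, recomputing each hash.
import { createHash } from "node:crypto";

interface AuditRecord {
  timestamp: string;
  action: string;
  decision: string;
  prevHash: string;
  hash: string;
}

function verifyChain(log: AuditRecord[]): boolean {
  let prevHash = "GENESIS";
  for (const record of log) {
    const expected = createHash("sha256")
      .update(`${record.timestamp}|${record.action}|${record.decision}|${prevHash}`)
      .digest("hex");
    if (record.prevHash !== prevHash || record.hash !== expected) {
      return false; // link broken: a record was altered, inserted, or removed
    }
    prevHash = record.hash;
  }
  return true; // every record links to its predecessor
}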

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw