2025-11-24 · Authensor

AI Agent Safety for Rust Developers

SafeClaw by Authensor adds runtime, deny-by-default action gating to Rust AI agents, complementing Rust's compile-time memory safety with runtime behavioral safety. Each gated Command::new(), File::open(), or reqwest::get() call is checked against your YAML policy before it runs. Install SafeClaw with npx @authensor/safeclaw and call its local HTTP API from your Rust agent.

Compile-Time Safety Is Not Enough

Rust guarantees memory safety at compile time, but it cannot prevent an AI agent from making dangerous decisions at runtime. An agent can still:

- execute arbitrary shell commands
- read or write files outside its intended workspace
- send data to hosts you never approved
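
None of this is visible to the compiler. As a minimal sketch of the gap (run_agent_step is a hypothetical helper, not part of SafeClaw or any library):

use std::process::{Command, ExitStatus};

// Compiles and borrow-checks cleanly, yet runs whatever string the model
// produced: the type system has no opinion about the command's contents.
// (A Unix shell is assumed purely for illustration.)
fn run_agent_step(model_suggested: &str) -> std::io::Result<ExitStatus> {
    Command::new("sh").arg("-c").arg(model_suggested).status()
}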

SafeClaw provides the behavioral safety layer that Rust's type system cannot. 446 tests validate every gate decision. A hash-chained audit trail records every action.

Installation

npx @authensor/safeclaw

Add the HTTP client dependency to your Cargo.toml:

[dependencies]
reqwest = { version = "0.12", features = ["json", "blocking"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"

Policy

version: 1
defaultAction: deny

rules:
  - action: file.read
    path:
      glob: "/app/data/**"
    decision: allow

  - action: file.write
    path:
      glob: "/app/output/**"
    decision: allow

  - action: process.exec
    command:
      startsWith: "cargo"
    decision: allow

  - action: network.request
    host:
      in: ["api.openai.com", "api.anthropic.com"]
    decision: allow
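
As a rough illustration of the deny-by-default fall-through (a sketch that uses the Gate and ActionRequest types defined in the next section, and assumes SafeClaw is running with exactly this policy loaded):

fn policy_smoke_test(gate: &Gate) -> Result<(), Box<dyn std::error::Error>> {
    // Matches the process.exec rule (startsWith: "cargo") -> allowed.
    let build = gate.check(&ActionRequest {
        action: "process.exec".to_string(),
        command: Some("cargo build".to_string()),
        path: None,
        host: None,
    })?;
    assert!(build.allowed);

    // Matches no rule, so defaultAction: deny applies.
    let cleanup = gate.check(&ActionRequest {
        action: "process.exec".to_string(),
        command: Some("rm -rf target".to_string()),
        path: None,
        host: None,
    })?;
    assert!(!cleanup.allowed);

    Ok(())
}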

Rust Integration

use reqwest::blocking::Client;
use serde::{Deserialize, Serialize};

#[derive(Serialize)]
struct ActionRequest {
    action: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    path: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    command: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    host: Option<String>,
}

#[derive(Deserialize)]
struct GateDecision {
    allowed: bool,
    reason: String,
}

/// Thin client for the local SafeClaw sidecar.
struct Gate {
    client: Client,
    endpoint: String,
}

impl Gate {
    fn new() -> Self {
        Gate {
            client: Client::new(),
            endpoint: "http://localhost:9800".to_string(),
        }
    }

    /// Ask SafeClaw whether the proposed action is allowed.
    fn check(&self, req: &ActionRequest) -> Result<GateDecision, Box<dyn std::error::Error>> {
        let resp = self.client
            .post(format!("{}/check", self.endpoint))
            .json(req)
            .send()?;

        if !resp.status().is_success() {
            // Fail closed on any non-success response.
            return Ok(GateDecision {
                allowed: false,
                reason: "SafeClaw returned a non-success status".to_string(),
            });
        }

        Ok(resp.json()?)
    }
}

Using the Gate

use std::process::Command;
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let gate = Gate::new();

    // Gate a command execution
    let decision = gate.check(&ActionRequest {
        action: "process.exec".to_string(),
        command: Some("cargo build".to_string()),
        path: None,
        host: None,
    })?;

    if decision.allowed {
        let output = Command::new("cargo")
            .arg("build")
            .output()?;
        println!("{}", String::from_utf8_lossy(&output.stdout));
    } else {
        eprintln!("Denied: {}", decision.reason);
    }

    // Gate a file read
    let decision = gate.check(&ActionRequest {
        action: "file.read".to_string(),
        path: Some("/app/data/input.json".to_string()),
        command: None,
        host: None,
    })?;

    if decision.allowed {
        let contents = fs::read_to_string("/app/data/input.json")?;
        println!("{}", contents);
    }

    Ok(())
}

SafeClaw fails closed by design: if the sidecar is unreachable, all actions are denied. It works with Claude and OpenAI agents and is MIT licensed.
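
The example above propagates transport errors with ?; if you prefer to surface every failure as an explicit deny instead, a small wrapper does it (check_or_deny is a hypothetical helper, not part of SafeClaw's API):

impl Gate {
    // Treat any transport or parsing failure as a denial, so an unreachable
    // sidecar can never let an action through.
    fn check_or_deny(&self, req: &ActionRequest) -> GateDecision {
        self.check(req).unwrap_or_else(|e| GateDecision {
            allowed: false,
            reason: format!("SafeClaw unreachable, failing closed: {}", e),
        })
    }
}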

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw