2025-12-15 · Authensor

How to Add Action Gating to LlamaIndex Agents

SafeClaw by Authensor intercepts every tool call in LlamaIndex's agent and query engine pipelines, enforcing deny-by-default policies before any tool's call() method executes. LlamaIndex agents use FunctionTool, QueryEngineTool, and custom BaseTool subclasses; SafeClaw evaluates each invocation against your YAML policy with sub-millisecond latency.

How LlamaIndex Tool Execution Works

LlamaIndex's agent framework (AgentRunner and AgentWorker) manages a step-wise execution loop. The agent receives a task, the LLM decides which tool to call, and the AgentRunner dispatches the call to the selected tool, wrapping the result in a ToolOutput. Tools are either FunctionTool (wrapping a Python function), QueryEngineTool (wrapping a query engine), or custom BaseTool subclasses. SafeClaw sits in the gap between the LLM's tool decision and the tool.call() invocation.

LLM Decision → Tool Selection + Args → [SafeClaw Policy Check] → tool.call() or Deny
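
For contrast, here is what an ungated tool call looks like; send_email_fn is an illustrative stub, not part of SafeClaw or LlamaIndex. Whatever arguments the LLM chooses flow straight into tool.call():

from llama_index.core.tools import FunctionTool

def send_email_fn(to: str, subject: str, body: str) -> str:
    """Send an email (illustrative stub)."""
    return f"sent to {to}"

email_tool = FunctionTool.from_defaults(fn=send_email_fn, name="send_email")

# The agent invokes the tool like this; nothing checks the arguments first
output = email_tool.call(to="anyone@example.com", subject="hi", body="...")
print(output.content)  # a ToolOutput wrapping the function's return value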

Quick Start

npx @authensor/safeclaw

Creates a safeclaw.yaml in your project. SafeClaw maps LlamaIndex tool names directly to policy rules.

Step 1: Define LlamaIndex Tool Policies

# safeclaw.yaml
version: 1
default: deny

policies:
  - name: "llamaindex-query-tools"
    description: "Allow query engine tools"
    actions:
      - tool: "vector_query"
        effect: allow
      - tool: "sql_query"
        effect: allow
        constraints:
          operation: "SELECT"
      - tool: "knowledge_base_search"
        effect: allow

  - name: "llamaindex-function-tools"
    description: "Control function tool access"
    actions:
      - tool: "send_email"
        effect: deny
      - tool: "create_ticket"
        effect: allow
        constraints:
          project: "support|engineering"
      - tool: "update_crm"
        effect: allow
        constraints:
          operation: "read|update"

  - name: "llamaindex-file-tools"
    description: "Restrict file access"
    actions:
      - tool: "read_document"
        effect: allow
        constraints:
          path_pattern: "docs/|data/"
      - tool: "write_document"
        effect: allow
        constraints:
          path_pattern: "output/**"
      - tool: "delete_document"
        effect: deny
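
The constraint values read as pattern matches against the tool's arguments (the alternation syntax above suggests regex-style matching, an assumption here). A quick sanity check with the evaluate API used throughout this guide:

from safeclaw import SafeClaw

safeclaw = SafeClaw("./safeclaw.yaml")

# Allowed: create_ticket with a project matching "support|engineering"
print(safeclaw.evaluate("create_ticket", {"project": "support"}).allowed)    # expected: True

# Denied: the project constraint does not match
print(safeclaw.evaluate("create_ticket", {"project": "marketing"}).allowed)  # expected: False

# Denied by default: a tool no policy names
print(safeclaw.evaluate("drop_table", {}).allowed)                           # expected: False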

Step 2: Wrap LlamaIndex Tools with SafeClaw

from llama_index.core.tools import FunctionTool, QueryEngineTool, ToolOutput
from llama_index.core.agent import ReActAgent
from safeclaw import SafeClaw

safeclaw = SafeClaw("./safeclaw.yaml")

def gate_tool(tool):
    """Wrap a LlamaIndex tool with SafeClaw policy enforcement."""
    original_call = tool.call

    def safe_call(*args, **kwargs):
        # Normalize positional input (query tools) and keyword args
        # (function tools) into one dict for policy evaluation
        tool_args = {"input": str(args[0])} if args else {}
        tool_args.update(kwargs)
        decision = safeclaw.evaluate(tool.metadata.name, tool_args)
        if not decision.allowed:
            # Return a normal ToolOutput so the agent loop keeps running
            return ToolOutput(
                content=f"Denied by SafeClaw: {decision.reason}",
                tool_name=tool.metadata.name,
                raw_input=tool_args,
                raw_output=f"Denied: {decision.reason}",
            )
        return original_call(*args, **kwargs)

    tool.call = safe_call
    return tool

Then wrap your tools before handing them to the agent:

query_tool = gate_tool(QueryEngineTool.from_defaults(
    query_engine=index.as_query_engine(),
    name="vector_query",
    description="Search the knowledge base",
))

email_tool = gate_tool(FunctionTool.from_defaults(
    fn=send_email_fn,
    name="send_email",
    description="Send an email",
))

agent = ReActAgent.from_tools(
    [query_tool, email_tool],
    llm=llm,
    verbose=True,
)
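
With the gated tools registered, a denied action comes back to the agent as an ordinary observation rather than an exception. A quick smoke test (the final wording depends on your LLM):

# send_email is denied by policy; the agent sees the denial message as the
# tool result and can report it instead of crashing mid-task
response = agent.chat("Email the quarterly summary to the finance team")
print(response)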

Step 3: Integrate with LlamaIndex Workflows

For LlamaIndex's newer Workflow API:

from llama_index.core.workflow import Workflow, step, Event
from safeclaw import SafeClaw

safeclaw = SafeClaw("./safeclaw.yaml")

class ToolCallEvent(Event):
    tool_name: str
    tool_args: dict

class ToolResultEvent(Event):
    result: str
    tool_name: str

class SafeWorkflow(Workflow):
    @step
    async def handle_tool_call(self, ev: ToolCallEvent) -> ToolResultEvent:
        # Gate the call before any tool logic runs
        decision = safeclaw.evaluate(ev.tool_name, ev.tool_args)
        if not decision.allowed:
            return ToolResultEvent(
                result=f"Denied: {decision.reason}",
                tool_name=ev.tool_name,
            )
        # execute_tool is your own dispatch helper (e.g. a name-to-tool lookup)
        result = await self.execute_tool(ev.tool_name, ev.tool_args)
        return ToolResultEvent(result=str(result), tool_name=ev.tool_name)
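
As written, nothing produces a ToolCallEvent and nothing consumes the result. A minimal way to close the loop, assuming the caller passes tool_name and tool_args to run(); GatedToolWorkflow and its dispatch/finish steps are illustrative, not SafeClaw API:

from llama_index.core.workflow import StartEvent, StopEvent

class GatedToolWorkflow(SafeWorkflow):
    @step
    async def dispatch(self, ev: StartEvent) -> ToolCallEvent:
        # StartEvent carries the keyword arguments passed to run()
        return ToolCallEvent(tool_name=ev.tool_name, tool_args=ev.tool_args)

    @step
    async def finish(self, ev: ToolResultEvent) -> StopEvent:
        return StopEvent(result=ev.result)

# result = await GatedToolWorkflow(timeout=60).run(
#     tool_name="vector_query", tool_args={"input": "refund policy"}
# )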

Step 4: Query Engine Safety

Even query engines can be gated — preventing the agent from querying sensitive indexes:

policies:
  - name: "query-engine-access"
    actions:
      - tool: "public_docs_query"
        effect: allow
      - tool: "hr_docs_query"
        effect: deny
      - tool: "financial_data_query"
        effect: allow
        constraints:
          user_role: "finance|executive"
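
On the Python side this is the same gate_tool wrapper from Step 2 applied to each QueryEngineTool; hr_index and public_index below are your own indexes, named for illustration. A constraint like user_role also assumes you merge session context (e.g. the caller's role) into the argument dict passed to safeclaw.evaluate:

hr_tool = gate_tool(QueryEngineTool.from_defaults(
    query_engine=hr_index.as_query_engine(),
    name="hr_docs_query",
    description="Search HR documents",
))

public_tool = gate_tool(QueryEngineTool.from_defaults(
    query_engine=public_index.as_query_engine(),
    name="public_docs_query",
    description="Search public documentation",
))

# hr_docs_query is denied outright, so any agent call to it short-circuits
# with a ToolOutput explaining the denial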

Step 5: Audit the Agent Pipeline

npx @authensor/safeclaw audit --last 50

Every tool call in the agent's step-wise execution is logged — including which tool was selected, the arguments, and whether it was allowed or denied. The hash-chained format ensures log integrity.
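
The exact log schema is SafeClaw's own; an entry might carry fields along these lines (names purely illustrative, reflecting the tool, arguments, decision, and hash chaining described above):

{"ts": "2025-12-15T10:42:03Z", "tool": "send_email", "args": {"to": "..."}, "decision": "deny", "policy": "llamaindex-function-tools", "prev_hash": "sha256:..."}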

Try SafeClaw

Action-level gating for AI agents. Set it up in your browser in 60 seconds.

$ npx @authensor/safeclaw