OpenAI Agents SDK
Add an input guardrail that screens tool calls through the safety shield before execution.
Step 1: Install dependencies
```shell
pip install openai-agents requests
```
Set your keys:
```shell
export OPENAI_API_KEY="sk-..."
export A2A_API_KEY="a2a_your_key_here"
```
Step 2: Add the safety layer
Create a guardrail function that calls /v1/evaluate before any tool runs:
```python
import asyncio
import os
import subprocess

import requests
from agents import Agent, GuardrailFunctionOutput, InputGuardrail, Runner
from agents.tool import function_tool

A2A_URL = "https://a2ainfrastructure.com/v1/evaluate"

def safety_screen(command: str) -> bool:
    """Check a command against the safety shield."""
    resp = requests.post(
        A2A_URL,
        json={"command": command},
        headers={"Authorization": f"Bearer {os.getenv('A2A_API_KEY')}"},
    )
    return resp.json()["allowed"]

async def firewall_guardrail(ctx, agent, input):
    # Extract the command from the agent's intended tool call
    cmd = input if isinstance(input, str) else str(input)
    allowed = safety_screen(cmd)
    return GuardrailFunctionOutput(
        output_info={"allowed": allowed},
        tripwire_triggered=not allowed,
    )

@function_tool
def run_shell(command: str) -> str:
    """Execute a shell command, screening it first."""
    if not safety_screen(command):
        return "BLOCKED by safety shield"
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

agent = Agent(
    name="safe-ops",
    instructions="You are a server admin. Use run_shell to execute commands.",
    tools=[run_shell],
    input_guardrails=[InputGuardrail(guardrail_function=firewall_guardrail)],
)

async def main():
    result = await Runner.run(agent, "Check disk usage")
    print(result.final_output)

asyncio.run(main())
```
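Note that `safety_screen` as written fails open: if the shield endpoint is unreachable or returns malformed JSON, the call raises and the guardrail errors out rather than blocking. A fail-closed variant is sketched below; the `timeout` value, the helper name `safety_screen_failclosed`, and the treat-errors-as-blocked policy are our assumptions, not part of the A2A API.

```python
import os

import requests

A2A_URL = "https://a2ainfrastructure.com/v1/evaluate"

def safety_screen_failclosed(command: str, timeout: float = 5.0) -> bool:
    """Check a command against the shield; treat any failure as 'blocked'."""
    try:
        resp = requests.post(
            A2A_URL,
            json={"command": command},
            headers={"Authorization": f"Bearer {os.getenv('A2A_API_KEY', '')}"},
            timeout=timeout,
        )
        resp.raise_for_status()
        # Missing or non-boolean "allowed" counts as blocked.
        return resp.json().get("allowed", False) is True
    except (requests.RequestException, ValueError):
        # Network error, bad HTTP status, or malformed JSON: block the command.
        return False
```

Swap this in for `safety_screen` if you would rather halt tool execution than run an unscreened command when the shield is down.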
Step 3: Verify
```python
import asyncio

from agents import InputGuardrailTripwireTriggered

async def verify():
    # Safe request
    result = await Runner.run(agent, "Show free memory")
    print(result.final_output)  # runs "free -h" successfully

    # Dangerous request
    try:
        await Runner.run(agent, "Delete /var/log")
    except InputGuardrailTripwireTriggered:
        print("Blocked by safety shield")

asyncio.run(verify())
```
Gate 1 runs locally and is free. Set `A2A_API_KEY` to enable Gate 2 and OCSF audit logging.