Vercel AI SDK
Wrap tools with a safety-shield check before execution in the Vercel AI SDK.
Step 1: Install dependencies
```bash
npm install ai @ai-sdk/openai
```
Set your keys in .env:
```bash
OPENAI_API_KEY=sk-...
A2A_API_KEY=a2a_your_key_here
```
Step 2: Add the safety layer
Create a tool wrapper that calls /v1/evaluate before executing any tool:
```typescript
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { execSync } from "child_process";
import { z } from "zod";

const A2A_URL = "https://a2ainfrastructure.com/v1/evaluate";

async function safetyScreen(command: string): Promise<boolean> {
  const resp = await fetch(A2A_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.A2A_API_KEY}`,
    },
    body: JSON.stringify({ command }),
  });
  const result = await resp.json();
  return result.allowed;
}

const shellTool = tool({
  description: "Execute a shell command",
  parameters: z.object({
    command: z.string().describe("The shell command to run"),
  }),
  execute: async ({ command }) => {
    // Screen the command before it ever reaches the shell.
    const allowed = await safetyScreen(command);
    if (!allowed) {
      return { error: "Blocked by safety shield" };
    }
    const output = execSync(command, { encoding: "utf-8" });
    return { output };
  },
});

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools: { shell: shellTool },
  maxSteps: 5,
  prompt: "Check disk usage on this machine",
});
```
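The `safetyScreen` above allows the command through if the fetch throws or the API returns an error status. If you prefer to block on any failure, a fail-closed variant is a small change. This is a sketch under assumptions: the fetch-injection parameter (`fetchImpl`) is a hypothetical convenience for testing, and fail-closed-on-error is a policy choice, not documented A2A behavior.

```typescript
// Minimal fetch-like shape so a stub can be injected in tests (hypothetical helper).
type FetchLike = (
  url: string,
  init?: Record<string, unknown>,
) => Promise<{ ok: boolean; json: () => Promise<{ allowed?: boolean }> }>;

// Fail-closed variant: any network error or non-2xx response counts as blocked.
async function safetyScreenFailClosed(
  command: string,
  fetchImpl: FetchLike = fetch as unknown as FetchLike,
): Promise<boolean> {
  try {
    const resp = await fetchImpl("https://a2ainfrastructure.com/v1/evaluate", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${process.env.A2A_API_KEY}`,
      },
      body: JSON.stringify({ command }),
    });
    if (!resp.ok) return false; // non-2xx -> treat as blocked
    const result = await resp.json();
    return result.allowed === true; // anything other than explicit true -> blocked
  } catch {
    return false; // endpoint unreachable -> fail closed
  }
}
```

Swap this in for `safetyScreen` inside the tool's `execute` if an unreachable shield should stop tool execution rather than silently allow it.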
Step 3: Verify
```typescript
// Safe prompt -> tool executes normally
prompt: "List files in /tmp"
// -> shell tool runs "ls /tmp", returns listing

// Dangerous prompt -> tool blocked
prompt: "Delete everything in /var"
// -> shell tool returns { error: "Blocked by safety shield" }
// -> model explains the command was blocked for safety
```
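The block/allow behavior can also be verified without calling the API or running the model: factor the guard out of `execute` and inject a stubbed screen function. `guardedExecute` and `Screen` below are hypothetical helpers for illustration, not part of the AI SDK.

```typescript
// Screen function shape matching safetyScreen's signature (hypothetical type alias).
type Screen = (command: string) => Promise<boolean>;

// Build an execute handler that screens the command before running it,
// mirroring the tool's block/allow behavior from Step 2.
function guardedExecute(screen: Screen, run: (command: string) => string) {
  return async ({ command }: { command: string }) => {
    if (!(await screen(command))) {
      return { error: "Blocked by safety shield" };
    }
    return { output: run(command) };
  };
}
```

With a stub that denies destructive commands, you can assert that safe commands produce `{ output }` and dangerous ones produce the blocked error, exactly as in the verification transcript above.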
Gate 1 runs locally (free). Set A2A_API_KEY for Gate 2 + OCSF audit.