LangChain / LangGraph

Wrap any LangChain tool with the safety shield in one line.

Step 1: Install the integration

pip install tyga-langchain

Set your API key (optional — Gate 1 runs locally without it):

export A2A_API_KEY="a2a_YOUR_KEY"

Step 2: Add the safety layer

Wrap any tool with safety_guard() to screen commands before execution:

from langchain import hub
from langchain_community.tools import ShellTool
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI
from tyga_langchain import safety_guard

# Wrap the shell tool with the safety shield
safe_shell = safety_guard(ShellTool())

# Build a ReAct agent with the guarded tool
llm = ChatOpenAI(model="gpt-4o")
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt template
agent = create_react_agent(llm, [safe_shell], prompt)
executor = AgentExecutor(agent=agent, tools=[safe_shell])

# Safe commands run normally
result = executor.invoke({"input": "List Python files in the current directory"})

# Dangerous commands are blocked before execution
result = executor.invoke({"input": "Delete all log files recursively"})
# -> BlockedError: Gate 1 matched destructive operation
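
The same guard works with LangGraph's prebuilt agent, since safety_guard() returns a standard LangChain tool. A minimal sketch, assuming the wrapper behaves identically there:

from langgraph.prebuilt import create_react_agent
from langchain_community.tools import ShellTool
from langchain_openai import ChatOpenAI
from tyga_langchain import safety_guard

# The guarded tool plugs into LangGraph like any other LangChain tool
safe_shell = safety_guard(ShellTool())

graph = create_react_agent(ChatOpenAI(model="gpt-4o"), [safe_shell])
result = graph.invoke({"messages": [("user", "List Python files in the current directory")]})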

Step 3: Verify

python -c "
from tyga_langchain import safety_guard
from langchain_community.tools import ShellTool

tool = safety_guard(ShellTool())
print(tool.run('echo hello'))    # hello
print(tool.run('rm -rf /'))      # BlockedError: Gate 1 matched
"
Gate 1 runs locally with built-in denylist patterns (free, no API key required). Set A2A_API_KEY to enable Gate 2 (LLM-judge screening) and OCSF audit logging.
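
To keep an agent loop alive when a command is blocked, catch the exception around the call. A minimal sketch; the BlockedError import path is an assumption, so check the package docs for the exact exception class:

from langchain_community.tools import ShellTool
from tyga_langchain import safety_guard, BlockedError  # import path for BlockedError assumed

tool = safety_guard(ShellTool())
try:
    tool.run("rm -rf /tmp/cache")
except BlockedError as e:
    # Log the block and continue instead of crashing the agent loop
    print(f"Command blocked by safety shield: {e}")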
Get your API key
Full API docs