Now available: pip install agent-trust-sdk

The Security Layer for AI Agents

Protect AI agents from prompt injection, malicious content, and other attacks, whether they're browsing the web, reading documents, or communicating over the A2A protocol.

agent_guard.py
from agent_trust import TrustGuard

guard = TrustGuard(api_key="ta_xxx...")

# Scan web content before your agent processes it
result = guard.fetch_url("https://untrusted-site.com/article")

if result.safe:
    agent.process(result.content)
else:
    print(f"⛔ Blocked: {result.threats}")
    # ["Prompt Injection", "Hidden Instructions"]

Complete protection for AI agents

From A2A verification to web browsing protection—one unified security layer

67+ Threat Patterns

Detect prompt injection, jailbreaks, data exfiltration, memory poisoning, and more with constantly updated patterns.

Agent Guard

Scan web pages, documents, emails, tool descriptions, and memory before your agent processes them.

Framework Integrations

Drop-in support for LangChain, LlamaIndex, CrewAI, AutoGPT, and MCP. Protect your agents in minutes.
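
The drop-in pattern these integrations share can be sketched framework-agnostically: wrap any content-consuming callable so its input is scanned first. This is an illustrative sketch, not the actual integration API; the `scan` stub below stands in for the SDK's threat detection, and the phrase list is invented.

```python
from functools import wraps

def scan(text: str) -> dict:
    """Stub scanner standing in for the SDK's threat detection."""
    suspicious = ["ignore previous instructions", "system prompt"]
    threats = [p for p in suspicious if p in text.lower()]
    return {"safe": not threats, "threats": threats}

def guarded(fn):
    """Wrap a content-consuming callable so input is scanned first."""
    @wraps(fn)
    def wrapper(content: str):
        result = scan(content)
        if not result["safe"]:
            raise ValueError(f"Blocked: {result['threats']}")
        return fn(content)
    return wrapper

@guarded
def summarize(article: str) -> str:
    return article[:60]  # placeholder for an LLM call
```

Framework adapters apply the same check at the natural boundary: a LangChain tool, a LlamaIndex retriever, or an MCP tool handler.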

New in v0.3

Agent Guard: Content Protection

Protect agents from threats in any untrusted content—web pages, documents, emails, tools, and memory

🌐

/guard/web

Scan HTML pages for hidden instructions, invisible text, and prompt injection.

📄

/guard/document

Scan PDFs, Word docs, and text files before processing.

🧠

/guard/memory

Prevent memory poisoning by scanning before storage.

🔍

/guard/rag

Scan documents before RAG indexing to prevent poisoning.

📧

/guard/email

Detect phishing and malicious instructions in emails.

🔧

/guard/tool

Scan MCP tool descriptions for poisoning attempts.

🔗

/guard/url

Fetch and scan URLs in one API call.

📦

/guard/batch

Scan multiple items in a single request.
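
The endpoint paths above follow one pattern: POST a piece of content (or a list of items, for `/guard/batch`) and receive a verdict. Here is a sketch of how a batch request body might be assembled; the JSON field names (`items`, `type`, `content`) are assumptions for illustration, not documented API fields.

```python
import json

def batch_payload(pages: list[str], emails: list[str]) -> str:
    """Assemble a hypothetical /guard/batch request body.

    Each item carries a content type matching a guard endpoint
    (web, email, ...) so a single call can scan mixed content.
    """
    items = (
        [{"type": "web", "content": p} for p in pages]
        + [{"type": "email", "content": e} for e in emails]
    )
    return json.dumps({"items": items})

payload = batch_payload(["<html>...</html>"], ["Hi team, ..."])
```

Batching mixed content types into one request is what keeps per-item latency low when an agent ingests many sources at once.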

Framework Integrations

🦜 LangChain · 🦙 LlamaIndex · 👥 CrewAI · 🤖 AutoGPT · 🔌 MCP
67+ threat patterns · 8 Guard endpoints · 5 frameworks · <50ms scan latency

Built for modern AI architectures

Wherever AI agents interact with untrusted content

🌐

AI Browsers & Research Agents

Protect agents that browse the web from malicious pages containing hidden instructions, invisible text, or prompt injection in HTML comments.
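
To make the two hiding techniques concrete, here is a minimal self-contained detector for instructions smuggled into HTML comments and for zero-width "invisible" characters. It illustrates the idea only; the SDK's actual detection covers far more patterns than this toy keyword check.

```python
import re

# Zero-width characters commonly used to hide text from humans
# while remaining visible to an LLM tokenizer.
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")
HTML_COMMENT = re.compile(r"<!--(.*?)-->", re.DOTALL)

def find_hidden(html: str) -> list[str]:
    """Return threat labels for two simple hiding techniques."""
    findings = []
    for comment in HTML_COMMENT.findall(html):
        lowered = comment.lower()
        if "instruction" in lowered or "ignore" in lowered:
            findings.append("Hidden Instructions")
            break
    if ZERO_WIDTH.search(html):
        findings.append("Invisible Text")
    return findings
```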

📚

RAG Pipelines

Scan documents before indexing to prevent RAG poisoning attacks where malicious content corrupts your vector database and influences agent responses.
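
The pre-indexing gate amounts to a filter between your document loader and your vector store: only documents that pass the scan are indexed, and rejects are set aside for review. In this sketch, `scan` is a stand-in for a `/guard/rag` call, with an invented single-phrase check.

```python
def scan(doc: str) -> bool:
    """Stub for a /guard/rag scan; returns True when safe."""
    return "ignore previous instructions" not in doc.lower()

def safe_index(docs: list[str], index: list[str]) -> list[str]:
    """Index only documents that pass the scan; return rejects."""
    rejected = []
    for doc in docs:
        (index if scan(doc) else rejected).append(doc)
    return rejected
```

Scanning at ingestion time matters because a poisoned document, once embedded, keeps influencing retrieval until the vector store is rebuilt.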

🤝

A2A Agent Communication

Verify external agents before interaction. Track reputation over time. Build a network of trusted agents with our verification badge system.
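
The reputation idea can be sketched as a per-agent ledger of past interactions with a trust threshold. This is a toy illustration of the concept only; the SDK's actual scoring, thresholds, and badge system are not described here, and the numbers below are invented.

```python
class Reputation:
    """Toy ledger: an agent is trusted once its success rate
    over recorded interactions meets a threshold."""

    def __init__(self, threshold: float = 0.7):
        self.scores: dict[str, list[bool]] = {}
        self.threshold = threshold

    def record(self, agent_id: str, interaction_ok: bool) -> None:
        self.scores.setdefault(agent_id, []).append(interaction_ok)

    def trusted(self, agent_id: str) -> bool:
        history = self.scores.get(agent_id, [])
        if not history:
            return False  # unknown agents start untrusted
        return sum(history) / len(history) >= self.threshold
```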

🔌

MCP Tool Servers

Validate tool descriptions from MCP servers before registration. Scan tool responses for threats before processing.
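
Registration-time validation looks like a filter over the tool catalogue an MCP server advertises: descriptions are scanned, and any tool whose description trips a pattern is never registered. The marker list here is illustrative, not the SDK's 67+ patterns.

```python
# Phrases typical of tool-description poisoning, where a tool's
# docstring tries to steer the agent. Illustrative list only.
POISON_MARKERS = (
    "ignore previous",
    "always include",
    "exfiltrate",
    "do not tell the user",
)

def register_tools(tools: dict[str, str]) -> dict[str, str]:
    """Return only tools whose descriptions pass the scan."""
    accepted = {}
    for name, description in tools.items():
        if any(m in description.lower() for m in POISON_MARKERS):
            continue  # reject poisoned description
        accepted[name] = description
    return accepted
```

The same check applied to tool *responses* before the agent processes them covers the second half of the threat surface.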

Simple, transparent pricing

Start free, scale as you grow

Free

For side projects and testing

$0/mo
  • 1,000 scans/month
  • All 67+ threat patterns
  • All Guard endpoints
  • Community support
Get Started
Pro (Popular)

For production applications

$49/mo
  • 50,000 scans/month
  • Priority support
  • Custom threat patterns
  • Analytics dashboard
Start Free Trial

Enterprise

For large organizations

Custom
  • Unlimited scans
  • Self-hosted option
  • SLA guarantee
  • Dedicated support
Contact Sales

Ready to secure your AI agents?

Start protecting your agents in minutes. No credit card required.