The fastest Trust Layer for AI Agents
-
Updated
Feb 3, 2026 - Python
The fastest Trust Layer for AI Agents
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
Veil Armor is an enterprise-grade security framework for Large Language Models (LLMs) that provides multi-layered protection against prompt injections, jailbreaks, PII leakage, and sophisticated attack vectors.
Runtime defense for AI agents. 24 inline defenses, 3 output scanners, MCP server, framework adapters.
Example of running last_layer with FastAPI on vercel
Developer-first security layer for AI applications. Deterministic detection of prompt injection across 13 attack categories.
Input-boundary prompt injection detection for LLM applications using Protect AI’s LLM Guard.
Add a description, image, and links to the llm-guard topic page so that developers can more easily learn about it.
To associate your repository with the llm-guard topic, visit your repo's landing page and select "manage topics."