AgentGuard
Open-Source AI Guardrails & Governance to Eliminate AI Slop
Production-grade, open-source AI guardrails, checks, and validation platform to eliminate AI slop across enterprise applications. AgentGuard sits between your app and any LLM provider to enforce safety, accuracy, and quality standards on every AI interaction.
What's Included
AI Gateway
AuthN/AuthZ, tenant isolation, rate limiting, and model routing
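One way to picture the gateway's tenant-scoped rate limiting is a per-tenant token bucket. This is a minimal sketch with illustrative names (`TenantRateLimiter`, `allow`) that are assumptions, not AgentGuard's actual API:

```python
import time

class TenantRateLimiter:
    """Per-tenant token bucket (hypothetical sketch, not AgentGuard's real gateway)."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.burst = burst
        # tenant_id -> (remaining tokens, timestamp of last refill)
        self._buckets: dict[str, tuple[float, float]] = {}

    def allow(self, tenant_id: str) -> bool:
        now = time.monotonic()
        tokens, last = self._buckets.get(tenant_id, (float(self.burst), now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(float(self.burst), tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self._buckets[tenant_id] = (tokens - 1.0, now)
            return True
        self._buckets[tenant_id] = (tokens, now)
        return False
```

Because each tenant gets its own bucket, one noisy tenant exhausting its budget never starves the others.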
Input Guardrails
7 parallel safety checks (prompt injection, jailbreak, toxicity, PII, secrets, restricted topics, data exfiltration)
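Running the safety checks in parallel can be sketched with `asyncio.gather`. The two toy checks below (a regex PII heuristic and a phrase-match injection heuristic) are illustrative stand-ins, not AgentGuard's real detectors:

```python
import asyncio
import re

# Hypothetical check signatures; each returns (check_name, passed).
async def check_pii(text: str) -> tuple[str, bool]:
    # Toy heuristic: flag anything that looks like an email address.
    return ("pii", re.search(r"\b\S+@\S+\.\S+\b", text) is None)

async def check_prompt_injection(text: str) -> tuple[str, bool]:
    # Toy heuristic: flag common instruction-override phrasing.
    return ("prompt_injection", "ignore previous instructions" not in text.lower())

async def run_input_guardrails(text: str) -> dict[str, bool]:
    # All checks run concurrently; a request passes only if every check passes.
    results = await asyncio.gather(check_pii(text), check_prompt_injection(text))
    return dict(results)
```

Fanning the checks out concurrently keeps guardrail latency close to the slowest single check rather than the sum of all seven.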
Output Validation
7 quality checks (schema validity, citations, hallucination proxy, policy compliance, unsafe language, confidence, genericity)
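Two of the output checks can be sketched as simple pass/fail predicates; the phrase list and "JSON object" schema rule below are assumptions for illustration, not AgentGuard's actual validators:

```python
import json

# Illustrative filler-phrase list for the genericity check.
GENERIC_PHRASES = ("as an ai language model", "i hope this helps")

def check_schema(output: str) -> bool:
    """Toy 'schema validity' check: output must parse as a JSON object."""
    try:
        return isinstance(json.loads(output), dict)
    except json.JSONDecodeError:
        return False

def check_genericity(output: str) -> bool:
    """Passes when the output avoids boilerplate filler phrases."""
    lowered = output.lower()
    return not any(phrase in lowered for phrase in GENERIC_PHRASES)
```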
AI Slop Prevention Score
Composite 0.0–1.0 quality metric from 6 weighted components
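A weighted composite like this is typically a convex combination of per-check scores. The component names and weights below are illustrative assumptions; AgentGuard defines its own six components and weighting:

```python
# Hypothetical weights (must sum to 1.0); not AgentGuard's actual configuration.
WEIGHTS = {
    "schema": 0.20, "citations": 0.15, "hallucination": 0.25,
    "policy": 0.15, "confidence": 0.15, "genericity": 0.10,
}

def slop_prevention_score(components: dict[str, float]) -> float:
    """Weighted average of per-check scores, each already scaled to [0.0, 1.0]."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return round(sum(WEIGHTS[name] * components[name] for name in WEIGHTS), 4)
```

Because the weights sum to 1 and each component is in [0, 1], the composite is guaranteed to stay in the advertised 0.0–1.0 range.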
Action Governance
Tool allowlists, risk scoring, and human-in-the-loop approval gates
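The three governance mechanisms compose into a simple decision ladder: deny off-allowlist tools, route high-risk actions to a human, allow the rest. The tool names and threshold below are hypothetical:

```python
# Illustrative allowlist and threshold; real values come from AgentGuard policy.
ALLOWED_TOOLS = {"search", "calculator"}
RISK_THRESHOLD = 0.7  # at or above this, a human must approve

def govern_action(tool: str, risk_score: float) -> str:
    """Returns 'deny', 'needs_approval', or 'allow' (simplified decision ladder)."""
    if tool not in ALLOWED_TOOLS:
        return "deny"          # allowlist is checked first: unknown tools never run
    if risk_score >= RISK_THRESHOLD:
        return "needs_approval"  # human-in-the-loop gate for risky-but-allowed tools
    return "allow"
```

Checking the allowlist before the risk score means an unapproved tool is rejected outright rather than queued for human review.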
Policy Engine
Declarative YAML policy-as-code with tenant/role/channel scoping
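A scoped policy might look like the following. Every field name here is an illustrative assumption, not AgentGuard's actual policy schema:

```yaml
# Hypothetical policy file -- field names are illustrative only.
policy:
  name: pii-strict
  scope:
    tenants: ["acme-corp"]      # applies only to this tenant
    roles: ["support-agent"]    # and only to this role
    channels: ["chat"]          # on this channel
  input_checks:
    pii: block
    prompt_injection: block
  output_checks:
    min_slop_score: 0.8         # reject outputs scoring below this
```

Keeping scope, input rules, and output rules in one declarative file means policies can be reviewed and versioned like any other code.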
Prompt Framework
Versioned, lintable prompt packages replacing ad-hoc strings
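The idea of a versioned, lintable prompt can be sketched with a small dataclass; the `PromptPackage` shape and its lint rule are assumptions, not AgentGuard's real prompt format:

```python
import re
from dataclasses import dataclass
from string import Template

@dataclass(frozen=True)
class PromptPackage:
    """Hypothetical versioned prompt package (not AgentGuard's actual format)."""
    name: str
    version: str   # semantic version, so prompt changes are tracked like code
    template: str  # uses $placeholders, making required variables discoverable

    def required_vars(self) -> set[str]:
        # Lintable: the variables a caller must supply can be listed statically.
        return set(re.findall(r"\$(\w+)", self.template))

    def render(self, **variables: str) -> str:
        # Raises KeyError if a required variable is missing, instead of
        # silently shipping a half-filled prompt.
        return Template(self.template).substitute(variables)
```

Compared with ad-hoc f-strings scattered through the codebase, this makes missing variables a hard error and ties every prompt change to a version bump.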
67 tests passing
Full test coverage across all modules
Tech Stack
Who It's For
Platform engineers building AI-powered products that need safety guarantees
AI/ML teams deploying LLM agents that require structured guardrails
Compliance and risk teams enforcing data protection and policy controls
Security teams defending against prompt injection and PII leakage
Want to contribute?
We welcome contributions, feature requests, and bug reports from the community.
Get in Touch