Compliant LLM is your comprehensive toolkit for ensuring compliance and security of your AI systems.
Infosec, compliance, and gen AI teams use compliant-llm to ensure that their AI agents are secure and compliant with internal policies and frameworks like NIST, ISO, HIPAA, GDPR, etc.
It supports multiple LLM providers, and can be used to test prompts, agents, MCP servers and GenAI models.
Go through our documentation for more details.
- 🎯 Security Testing: Test against 8+ attack strategies including prompt injection, jailbreaking, and context manipulation
- 📊 Compliance Analysis: Ensure the compliance of your AI systems against NIST, ISO, OWASP, GDPR, HIPAA and other compliance frameworks
- 🤖 Provider Support: Works with multiple LLM providers via LiteLLM
- 📈 Visual Dashboard: Interactive UI for analyzing test results
- ⚡ End to End Testing: Test your AI systems end to end
- 📄 Detailed Reporting: Comprehensive reports with actionable insights
Configure your LLM provider and run attacks
Contact: [email protected]
Meet: 🔗 Find a slot
Community: 💬 Discord, X, LinkedIn
- OpenAI
- Anthropic
- Gemini
- Mistral
- Groq
- Deepseek
- Azure
- vLLM Ollama
- Ollama
- Nvidia Nim
- Meta Llama
- Full Application Pen Testing
- Compliant and Logged MCP Servers
- Support different Compliance Frameworks - NIST, HIPAA, GDPR, EU AI Act, etc.
- Multimodal Testing
- CI/CD
- Access Control checks
- Control Pane for different controls
- Internal audits and documentation
Those who build with compliant-llm. | Those who make compliant-llm better. |
(You have import compliant-llm somewhere in your project) | (You create a PR to this repo) |
We welcome contributions from the community! Whether it's bug fixes, feature additions, or documentation improvements, your input is valuable.
- Fork the repository
- Create your feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
We take data security and privacy seriously. Please refer to our Security and Privacy page for more information.
Compliant LLM tracks anonymized usage statistics to improve the product. No private or personally identifiable information is tracked.
You can opt-out by setting export DISABLE_COMPLIANT_LLM_TELEMETRY=true.
@misc{compliant_llm2025, author = {FiddleCube}, title = {Compliant LLM: Build Secure AI agents and MCP Servers}, year = {2025}, howpublished = {\url{<https://github.com/fiddlecube/compliant-llm}}>, }