Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
- Updated Jan 7, 2026
- Jupyter Notebook
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, and more.
the LLM vulnerability scanner
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
🐢 Open-Source Evaluation & Testing library for LLM Agents
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
A.I.G (AI-Infra-Guard) is a full-stack AI Red Teaming platform developed by Tencent Zhuque Lab that secures your AI ecosystem from infrastructure to agents.
The Security Toolkit for LLM Interactions
A secure low code honeypot framework, leveraging AI for System Virtualization.
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
A security scanner for your LLM agentic workflows
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Papers and resources related to the security and privacy of LLMs 🤖
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
Run coding agents in isolated Incus containers (sandboxes) with session persistence, workspace isolation, and multi-slot support.
🏴☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running any actual code on the victim's machine or thwart LLM-based fraud/moderation systems.
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."