
LlamaGuard

Llama-3.1-based model that classifies prompts and responses as safe or unsafe

About

Llama Guard 3 is a content safety model that classifies LLM inputs and outputs as safe or unsafe against a standardized hazard taxonomy of 14 categories (S1–S14), aligned with the MLCommons taxonomy of hazards. It is designed for developers to self-host for prompt and response moderation within their own applications.
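
As a rough illustration of how such a self-hosted moderation call can look, here is a minimal sketch using the Hugging Face transformers library and the meta-llama/Llama-Guard-3-8B checkpoint; the specific library, checkpoint name, and example conversation are assumptions, not details from this page. The model reads a conversation rendered through its chat template and generates a short verdict: "safe", or "unsafe" followed by the violated category code.

```python
# Minimal self-hosted moderation sketch. Assumes the Hugging Face
# `transformers` library and the `meta-llama/Llama-Guard-3-8B` checkpoint,
# neither of which is specified on this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"  # assumed checkpoint name
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat):
    # The tokenizer's chat template wraps the conversation in Llama Guard's
    # classification prompt, which embeds the hazard taxonomy.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    # Decode only the newly generated tokens: "safe", or "unsafe" plus the
    # violated category code (e.g. "S1").
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Classify a user prompt on its own (input moderation)...
print(moderate([{"role": "user", "content": "How do I pick a lock?"}]))

# ...or a full prompt/response pair (output moderation).
print(moderate([
    {"role": "user", "content": "How do I pick a lock?"},
    {"role": "assistant", "content": "First, insert a tension wrench..."},
]))
```

In practice, downstream code usually just checks whether the generated text starts with "safe" and, if not, parses the category code to decide how to handle the violation.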