AI prompts glossary
Prompt Injection
Prompt injection is an attack pattern where malicious or unexpected instructions are introduced into model inputs to override or subvert existing prompts and policies. It can appear in user content, retrieved documents, or third-party data. For security-conscious teams, defending against prompt injection requires input sanitization, robust system prompts, and careful retrieval design to prevent unauthorized behavior in Ai Messages. Prompt injection is an attack pattern where malicious or unexpected instructions are embedded in inputs or retrieved content to override a model’s intended behavior. It can cause policy violations, data leakage, or incorrect actions. For security-conscious teams, defending against prompt injection requires sanitization, hierarchy-aware prompts, and careful retrieval design.

