LLM Safety Patterns

The Core Principle

Identifiers flow AROUND the LLM, not THROUGH it. The LLM sees only content. Attribution happens deterministically.
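A minimal sketch of this pattern (hypothetical helper names, not from the skill itself): identifiers are stripped into a map before the call, the model sees only content keyed by position, and attribution is rejoined by index afterward, never by IDs the model emitted.

```python
def redact_ids(records):
    """Split records into LLM-visible content and an out-of-band
    index->id map. The map never enters the prompt."""
    contents = [r["content"] for r in records]
    id_map = {i: r["id"] for i, r in enumerate(records)}
    return contents, id_map

def reattach(results_by_index, id_map):
    """Deterministic attribution: join model output back to real IDs
    by positional index, not by anything the model generated."""
    return {id_map[i]: result for i, result in results_by_index.items()}

records = [
    {"id": "user_8f3a", "content": "Reset my password"},
    {"id": "user_c901", "content": "Cancel my subscription"},
]
contents, id_map = redact_ids(records)

# The LLM would receive only `contents`; suppose it returns a label
# per item, keyed by position (stubbed here for illustration):
fake_llm_output = {0: "account_access", 1: "billing"}

attributed = reattach(fake_llm_output, id_map)
# attributed maps the real IDs to labels without the model ever seeing them
```

Because the join key is the list position the caller controls, a hallucinated or injected ID in the model's text has no path back into attribution.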

Why This Matters

When identifiers appear in prompts, several failure modes follow:

  1. Hallucination: LLM invents IDs that don't exist
  2. Confusion: LLM mixes up which ID belongs where
  3. Injection: Attacker manipulates IDs via prompt injection
  4. Leakage: IDs appear in logs, caches, traces
  5. Cross-tenant: LLM could reference other users' data
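As a defense-in-depth complement to keeping IDs out of prompts, output can also be checked before use. A hedged sketch (the `user_<hex>` pattern and guard name are illustrative assumptions, not part of the skill): fail closed if anything ID-shaped appears in model output, whether hallucinated, confused, or injected.

```python
import re

# Hypothetical internal-identifier shape used for this example only.
ID_PATTERN = re.compile(r"\buser_[0-9a-f]{4,}\b")

def assert_no_ids(llm_output: str) -> str:
    """Reject model output containing an ID-like token.
    Failing closed keeps leaked or fabricated IDs out of
    downstream logs, caches, and responses."""
    if ID_PATTERN.search(llm_output):
        raise ValueError("identifier-like token in model output")
    return llm_output

assert_no_ids("The request is about billing.")  # passes through unchanged
```

A guard like this catches leakage late; the structural fix is still to keep identifiers out of the prompt in the first place.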

The Architecture
