Context Injection
Context injection is the practice of dynamically inserting relevant information — documents, data, examples, or tool outputs — into an AI prompt so the model has the knowledge it needs to produce accurate, grounded responses. Effective injection is about more than pasting text; it requires deliberate placement, formatting, and token budget allocation to maximize the model's ability to use the injected material.
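As a minimal illustration of the idea, a retrieved document can be wrapped in labeled delimiters and placed ahead of the question. The template, tag name, and content below are invented for this sketch:

```python
# Minimal context injection: wrap a retrieved document in labeled
# delimiters so the model can tell context apart from the question.
def inject_document(document: str, question: str) -> str:
    return (
        "Answer using only the information in <retrieved_document>.\n\n"
        f"<retrieved_document>\n{document}\n</retrieved_document>\n\n"
        f"Question: {question}"
    )

prompt = inject_document(
    document="Acme Widgets ship within 3 business days.",
    question="How long does shipping take?",
)
print(prompt)
```

The explicit tags make the boundary between grounding material and the user's query unambiguous to the model.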
Workflow
- Identify the Context Need: Analyze the task to determine what external information the model requires. A code review needs the source file; a support question needs product documentation; a personalized reply needs the user's profile. Categorize each need as document grounding, few-shot examples, tool output, or metadata.
- Gather the Context: Retrieve the information from its source — a database, file system, API response, vector store, or prior conversation. Apply any compression or truncation before injection so the material fits within its allocated token budget.
- Select an Injection Strategy: Choose the injection method based on the context type and the model's attention patterns:
  - System prompt injection — persistent context such as role definitions, rules, and user preferences goes in the system message.
  - Document grounding — retrieved documents or files are inserted in the user message, typically before the question.
  - Few-shot examples — input/output pairs demonstrating the desired format are placed between the system prompt and the user query.
  - Tool output injection — results from function calls or API invocations are injected as assistant/tool messages in the conversation.
- Format and Delimit the Context: Wrap injected content in clear delimiters (XML tags, markdown headers, or triple-backtick fences) so the model can distinguish instructions from context from the user's query. Label each section explicitly (e.g., <retrieved_document>, <user_profile>, <code_file>).
- Assemble the Prompt: Combine the system prompt, injected context blocks, conversation history, and the current user query into the final prompt. Place the most critical context closest to the user's query (recency bias) and the most stable context (rules, persona) in the system message.
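The workflow above can be sketched end to end. This is a sketch, not a definitive implementation: the message schema follows the common chat-API convention of system/user/assistant roles, and the function name, rules, documents, and query are all illustrative:

```python
# Assemble a final prompt per the workflow: stable rules in the system
# message, few-shot examples next, retrieved context delimited and
# placed just before the user's query (recency bias).
def assemble_prompt(system_rules, few_shot_pairs, retrieved_docs, user_query):
    messages = [{"role": "system", "content": system_rules}]
    # Few-shot examples sit between the system prompt and the query.
    for example_input, example_output in few_shot_pairs:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    # Delimit and label each retrieved document, then append the query
    # so the most critical context lands closest to it.
    context_blocks = "\n\n".join(
        f"<retrieved_document>\n{doc}\n</retrieved_document>"
        for doc in retrieved_docs
    )
    messages.append({"role": "user", "content": f"{context_blocks}\n\n{user_query}"})
    return messages

messages = assemble_prompt(
    system_rules="You are a support agent. Answer only from the provided documents.",
    few_shot_pairs=[("Q: Do you ship abroad?", "A: Yes, to 40 countries.")],
    retrieved_docs=["Returns are accepted within 30 days of delivery."],
    user_query="What is the return window?",
)
```

Keeping the persona in the system message while the documents travel with the query means conversation history can grow without disturbing the stable rules.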
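The truncation mentioned in the Gather step can be approximated as follows. Note the hedge in the code: whitespace splitting is only a rough stand-in for real token counting, which should use the target model's own tokenizer:

```python
# Crude token-budget guard for the "Gather the Context" step.
# Whitespace splitting approximates token counts; production systems
# should count with the target model's actual tokenizer.
def truncate_to_budget(text: str, max_tokens: int) -> str:
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    # Keep the head of the document and mark the cut explicitly so the
    # model knows the context is incomplete.
    return " ".join(tokens[:max_tokens]) + " [truncated]"

snippet = truncate_to_budget("alpha beta gamma delta epsilon", max_tokens=3)
# snippet == "alpha beta gamma [truncated]"
```

Marking the cut point explicitly prevents the model from treating a truncated document as if it were complete.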