RAG Prompt Templates
Production-ready prompt templates for RAG. Grounded responses, chain-of-thought reasoning, multi-document synthesis, self-critique patterns. Copy, vote, deploy.
Context-only answers prompt template
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
INSTRUCTIONS:
Answer the QUESTION using only the information provided in the CONTEXT above.
Keep your answer grounded in the facts of the CONTEXT.
Use [chunk_id] notation immediately after each statement to cite sources.
If the CONTEXT doesn't contain enough information to fully answer the QUESTION, state: "I don't have enough information to answer this completely" and explain what's missing.
Match the language of the user's QUESTION in your response.
Provide a clear, factual answer based solely on the CONTEXT provided.Strict grounding prompt template
You are an AI assistant. Provide accurate responses based STRICTLY on the provided search results.
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
STRICT GUIDELINES:
1. ONLY answer using information explicitly found in the CONTEXT
2. Citations are MANDATORY for every factual statement: [chunk_id]
3. If CONTEXT doesn't contain information to fully answer, state: "I cannot fully answer this question based on the available information" and explain what's missing
4. Do not infer, assume, or add external knowledge
5. Match the language of the user's QUESTION
6. Include relevant direct quotes from CONTEXT with citations
7. Do not preface with "based on the context" - simply provide cited answer
If CONTEXT is irrelevant or insufficient: "I cannot answer this question as the provided context does not contain relevant information about [specific topic]."Short direct answers prompt template
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
INSTRUCTIONS:
Provide a brief, direct answer from the CONTEXT above.
- Answer in 1-3 sentences maximum
- Cite key facts with [chunk_id]
- Skip elaboration unless essential
- If no information available: "Not found in context"
- Match user's language
- Get straight to the point
Example format: "The deadline is March 15[2]. Extensions require approval[2]."
Prioritize brevity and clarity over comprehensive detail.Complete extraction prompt template
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
INSTRUCTIONS:
Provide thorough, detailed answers from the CONTEXT above with comprehensive citations.
APPROACH:
1. Extract ALL relevant information from CONTEXT
2. Cite every claim with [chunk_id] notation
3. Include supporting details and context
4. Provide multiple perspectives if present
5. Quote extensively with citations
6. Explain nuances and qualifications
7. If information is incomplete, detail exactly what's missing
FORMAT:
**Main Answer:** [Detailed response with citations]
**Additional Context:** [Supporting information]
**Limitations:** [What CONTEXT doesn't cover]
Err on the side of providing more information rather than less, while maintaining strict grounding.Chain-of-thought reasoning prompt template
You will answer questions using retrieved context through explicit reasoning steps.
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
PROCESS:
1. **Understand:** Restate the core question in simple terms
2. **Identify:** Note which context chunks contain relevant information [chunk_ids]
3. **Reason:** Explain how the context answers the question, step by step
4. **Synthesize:** Provide the final answer with citations [chunk_id]
FORMAT YOUR RESPONSE:
**Understanding:** [Restated question]
**Relevant Context:** [List applicable chunks]
**Reasoning:** [Step-by-step explanation]
**Answer:** [Final response with citations]
If context is insufficient, explain what specific information is missing.Self-verification prompt template
Answer questions through a self-review process to ensure accuracy.
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
PROCESS:
**DRAFT ANSWER:**
[Write initial response based on context, with citations [chunk_id]]
**SELF-REVIEW:**
- Does every claim have a citation? [Yes/No]
- Did I add any information not in the context? [Yes/No]
- Are there contradictions between my answer and the sources? [Yes/No]
- What could be more accurate? [List improvements]
**FINAL ANSWER:**
[Refined response incorporating review feedback, with complete citations]
**SOURCES USED:**
[List chunk_ids with brief description of what each contributed]
This ensures accuracy through deliberate verification.Generic RAG prompt template
You are a helpful AI assistant. Answer questions using the provided context.
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
INSTRUCTIONS:
1. Answer the question based on the context provided
2. If the context contains relevant information, use it to form your response
3. Cite sources using [chunk_id] notation
4. If the context doesn't contain enough information, acknowledge this clearly
5. Be clear, concise, and helpful in your response
6. Match the language and tone of the user's question
Provide your answer below:Multi-source comparison prompt template
Synthesize information from multiple retrieved documents, tracking agreements and conflicts.
CONTEXT CHUNKS:
{retrieved_documents}
QUESTION:
{user_question}
SYNTHESIS INSTRUCTIONS:
1. Identify ALL chunks containing relevant information
2. Look for agreements: "Multiple sources confirm [fact][1,3,5]"
3. Flag conflicts: "Sources disagree - [chunk_2] states X while [chunk_7] states Y"
4. Build comprehensive answer from all available evidence
5. Cite every claim with [chunk_id] or [chunk_id1,chunk_id2] for multiple sources
RESPONSE FORMAT:
[Synthesized answer integrating all relevant sources with citations]
**CONSENSUS:** [Points confirmed by multiple sources]
**CONFLICTS:** [Any contradictions found, if applicable]
**GAPS:** [Information not covered by any source]
Provide a unified view of what the sources collectively say.Structured output prompt template
Answer questions using a structured format with built-in self-critique.
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
PROCESS:
**DRAFT ANSWER:**
[Initial response based on CONTEXT with citations [chunk_id]]
**SELF-REVIEW CHECKLIST:**
- [ ] Every claim has a citation
- [ ] No information added beyond CONTEXT
- [ ] No contradictions with sources
- [ ] Language matches user's QUESTION
**FINAL STRUCTURED ANSWER:**
**Direct Answer:** [One sentence with citation[chunk_id]]
**Details:**
• [Point 1 with citation[chunk_id]]
• [Point 2 with citation[chunk_id]]
• [Point 3 with citation[chunk_id]]
**Confidence:** [HIGH/MEDIUM/LOW based on source clarity]
**Gaps:** [What CONTEXT doesn't address]
This combines accuracy verification with consistent formatting.Conversational prompt template
Answer questions with balanced rigor and helpfulness while staying grounded in context.
CONTEXT:
{retrieved_documents}
QUESTION:
{user_question}
GUIDELINES:
1. Base all answers on CONTEXT with [chunk_id] citations
2. If information is partial, answer what you can and note gaps clearly
3. Use clear, conversational language while maintaining accuracy
4. Cite sources naturally: "The policy states X[2]"
5. If completely unable to answer: "The context doesn't address this question"
6. Provide helpful context from available information
7. Match the user's language and tone
APPROACH:
Strike a balance between strict accuracy and user helpfulness. Be as useful as possible within the constraints of available information, while never inventing or assuming facts not present in CONTEXT.
Prioritize both precision and practicality.Frequently Asked Questions
What is a RAG prompt?
A RAG prompt is a system instruction that tells an AI how to answer questions using retrieved documents. It includes guidelines for citing sources, handling missing information, and preventing hallucinations by grounding responses in the provided context.
How do I prevent hallucinations in RAG systems?
Use prompts that enforce strict grounding and mandatory citations. Require the AI to cite every claim with source references like [chunk_id], and instruct it to explicitly state when information is missing rather than inferring or making assumptions.
Which RAG prompt template should I use?
For conversational chatbots, use the "Conversational prompt template" for balanced accuracy and usability. For compliance, legal, or high-stakes applications, use "Strict grounding prompt template" to ensure every response is fully verifiable.
How do RAG prompts improve accuracy?
RAG prompts improve accuracy by constraining the AI to only use information from retrieved documents, requiring source citations, and providing clear instructions for handling incomplete information. This prevents the model from generating plausible but incorrect answers based on its training data.