Prompt Engineering for ChatGPT (2)
2. Introducing New Information to the Large Language Model
3. How to Deal with Prompt Size Limitations?
✅ 1. Query Only What’s Relevant (Selective Retrieval)
Idea: Instead of loading entire documents, pull only the parts that matter to the task.
Example:
You're working with a collection of meeting transcripts and want insights on a decision about budget planning.
Instead of this (too long):
“Here are 20 pages of meeting notes…”
Do this:
Use a search function or embedding model to extract only the paragraphs mentioning “budget,” “finance,” or “cost estimate.”
Prompt to LLM:
“Here are excerpts from recent meetings related to budget planning. Please summarize the decisions made and any unresolved issues.”
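Here is a minimal Python sketch of this idea. For simplicity it uses plain keyword matching to stand in for the search function or embedding model; the keyword list and sample transcript are illustrative, not from a real system.

```python
# Selective retrieval sketch: keep only the paragraphs that matter,
# then build a short, focused prompt from them.

KEYWORDS = {"budget", "finance", "cost estimate"}

def extract_relevant(paragraphs, keywords=KEYWORDS):
    """Keep only paragraphs that mention at least one keyword."""
    return [p for p in paragraphs
            if any(k in p.lower() for k in keywords)]

transcript = [
    "The team discussed the Q3 budget and approved a 5% increase.",
    "Lunch options for the offsite were debated at length.",
    "A revised cost estimate for the vendor contract is due Friday.",
]

excerpts = extract_relevant(transcript)
prompt = (
    "Here are excerpts from recent meetings related to budget planning. "
    "Please summarize the decisions made and any unresolved issues.\n\n"
    + "\n".join(excerpts)
)
```

An embedding-based version would replace `extract_relevant` with a nearest-neighbor search over paragraph embeddings, but the shape of the prompt stays the same: instruction first, then only the retrieved excerpts.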
✅ 2. Filter Out Extraneous Information
Idea: Manually or programmatically remove boilerplate, irrelevant sections, or repeated text before prompting the model.
Example:
You have product reviews, but many just repeat phrases like “great product” or include unrelated shipping complaints.
Instead of this:
All reviews, including filler or duplicate content.
Do this:
Pre-filter to keep only reviews that mention specific features (e.g., battery life, camera quality).
Prompt to LLM:
“Summarize customer feedback about the battery life and camera quality based on these filtered reviews.”
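A small Python sketch of the pre-filtering step. The feature list and sample reviews are made up for illustration; the point is that duplicates and off-topic reviews never reach the prompt.

```python
# Filtering sketch: drop duplicates and keep only reviews that
# mention the features we actually want summarized.

FEATURES = ("battery", "camera")

def filter_reviews(reviews):
    """Remove exact duplicates, then keep reviews mentioning a target feature."""
    seen, kept = set(), []
    for review in reviews:
        normalized = review.lower().strip()
        if normalized in seen:
            continue  # skip duplicate content
        seen.add(normalized)
        if any(feature in normalized for feature in FEATURES):
            kept.append(review)
    return kept

reviews = [
    "Great product!",
    "Battery life easily lasts two days.",
    "Great product!",
    "Shipping took three weeks, very annoying.",
    "Camera quality is excellent in low light.",
]

filtered = filter_reviews(reviews)
prompt = (
    "Summarize customer feedback about the battery life and camera quality "
    "based on these filtered reviews.\n\n" + "\n".join(filtered)
)
```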
✅ 3. Summarize or Compress in Chunks (Progressive Summarization)
Idea: If you have a long document, break it into parts, summarize each, then feed the summaries to the model for final reasoning.
Example:
You have a 50-page clinical study report you want to analyze.
Step 1 Prompt:
“Summarize the key findings and methodology of Section 1 of this study.”
(Repeat for all major sections.)
Step 2 Prompt:
“Based on the summaries of all sections, provide a final overview highlighting safety concerns and study limitations.”
Bonus tip: You can even ask the model to preserve specific elements during summarization, like “retain statistical outcomes and participant demographics.”
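The two-step flow above can be sketched in Python. The `summarize` function here is a placeholder for an actual LLM call, and the chunk size is an arbitrary example value; the runnable part is the chunking and prompt assembly.

```python
# Progressive summarization sketch: split a long document into chunks,
# summarize each chunk, then reason over the combined summaries.

def chunk_document(text, max_chars=2000):
    """Split text into chunks at paragraph boundaries, each under max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks

def summarize(section):
    # Placeholder for an LLM call with a Step 1 prompt such as:
    # "Summarize the key findings and methodology of this section.
    #  Retain statistical outcomes and participant demographics."
    return section[:200]

def build_final_prompt(document):
    section_summaries = [summarize(c) for c in chunk_document(document)]
    return (
        "Based on the summaries of all sections, provide a final overview "
        "highlighting safety concerns and study limitations.\n\n"
        + "\n\n".join(section_summaries)
    )
```

Because each chunk is summarized independently, the final prompt stays small no matter how long the original report is, at the cost of an extra round of model calls.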
🔁 Summary of the Three Approaches
| Approach | What You Do | Why It Works |
|---|---|---|
| 1. Selective Querying | Pull only relevant parts | Avoids feeding unnecessary info |
| 2. Filtering | Remove noise or irrelevant data | Keeps prompt lean and focused |
| 3. Summarizing in Chunks | Compress data step-by-step | Enables reasoning across large documents |
These strategies can extend the power of your prompts and help you work around model limits—especially when dealing with long documents, noisy data, or multi-step tasks.