A complete, nuanced guide to writing effective prompts
1 Start in the right workspace
Why: Consumer chat apps (ChatGPT, Claude web, etc.) inject hidden text and limit controls.
How: Open the API playground / workbench instead. You control model choice, temperature, penalties, functions, and can see/modify every message in the stack.
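If you prefer scripting to a playground UI, the same controls are one API call away. A minimal sketch assuming the OpenAI Python SDK (v1+); the model name and parameter values are illustrative, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Every knob the consumer chat UI hides is an explicit parameter here,
# and you see and own every message in the stack.
response = client.chat.completions.create(
    model="gpt-4o",          # you pick the model
    temperature=0.3,         # you pick the sampling temperature
    presence_penalty=0.0,    # and the penalties
    frequency_penalty=0.0,
    messages=[
        {"role": "system", "content": "You are a Spartan, helpful assistant."},
        {"role": "user", "content": "Write a concise, 3-sentence summary of our Q3 results."},
    ],
)

print(response.choices[0].message.content)
```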
2 Short prompts outperform long ones
- Accuracy drops as token count rises.
- Aim for information density: remove filler, redundancies, and corporate prose.
- A 30 % trim often yields a 3‑5 % accuracy bump on GPT‑4 class models.
Practical move: After writing, challenge yourself to cut the word count by a third without losing meaning.
3 Master the three message roles
| Role | Purpose | Typical content |
| --- | --- | --- |
| System | “Who am I?” | Model identity and global behaviour (“You are a Spartan, helpful assistant…”). |
| User | “What do I need?” | Concrete task, constraints, data. |
| Assistant | “Past answers” | Examples, confirmations, or continuations that the model can imitate. |
Treat the assistant message as training data: praise what you liked, correct what you didn’t, then ask for the next item.
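The three roles map directly onto the messages array of a chat-completion call. A sketch (OpenAI Python SDK assumed) where a previous assistant answer stays in the stack, gets corrected, and the next item is requested:

```python
from openai import OpenAI

client = OpenAI()

messages = [
    # System: "Who am I?" - identity and global behaviour
    {"role": "system", "content": "You are a Spartan, helpful assistant for a B2B SaaS team."},
    # User: "What do I need?" - the concrete task
    {"role": "user", "content": "Write a 1-sentence product tagline."},
    # Assistant: "Past answers" - a prior output the model can imitate
    {"role": "assistant", "content": "Ship invoices in seconds, not spreadsheets."},
    # Feedback turn: praise what worked, correct what didn't, ask for the next item
    {"role": "user", "content": "Good length and verb choice. Drop the alliteration. Write the next tagline."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```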
4 Lean on one‑shot examples
Adding one high‑quality example delivers a disproportionate accuracy jump; extra examples give diminishing returns and inflate the prompt. Place it as a user ↔ assistant pair after your main instructions.
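Concretely, the one-shot pair slots into the message stack between your instructions and the real input. A sketch with placeholder content, assuming the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()

messages = [
    # Main instructions
    {"role": "system", "content": "Classify support tickets as 'bug', 'billing', or 'other'. Reply with the label only."},
    # One gold-standard example: a user/assistant pair the model can imitate
    {"role": "user", "content": "I was charged twice for the Pro plan this month."},
    {"role": "assistant", "content": "billing"},
    # The real input follows the example
    {"role": "user", "content": "The export button crashes the app on Safari."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # expected: "bug"
```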
5 Conversational engine ≠ knowledge engine
- LLMs hallucinate facts.
- For factual work, pair the model with an external knowledge base (RAG, database, API) and have it cite or quote that material.
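A minimal illustration of the idea, with a hard-coded “knowledge base” standing in for a real retrieval layer (vector store, database, or API); names and content are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Stand-in for a real retrieval step (vector search, SQL query, API call).
knowledge_base = {
    "refund_policy": "Refunds are issued within 14 days of purchase for annual plans only.",
}

def answer_with_sources(question: str) -> str:
    # Naive "retrieval": in practice you would rank passages by relevance to the question.
    context = knowledge_base["refund_policy"]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using ONLY the provided context. Quote the relevant sentence, or say 'not in context' if the answer is missing."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_sources("Can monthly subscribers get a refund?"))
```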
6 Write unambiguous, non‑conflicting instructions
Bad: “Create a detailed summary.” (Detailed and summary fight each other.)
Good: “Write a concise, 3‑sentence summary.”
Spell out numbers, scope, and exclusions. Every extra degree of freedom widens output variance.
7 Specify the output format explicitly
Tell the model how to answer, not just what.
Examples
Return JSON:
```json
{
  "relevance": true/false,
  "reason": "<50-word explanation>",
  "icebreaker": "<max 20 words>"
}
```
or “Produce a CSV with headings: Month, Revenue, Profit.”
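If you are calling the API rather than a chat UI, you can also have the endpoint enforce the format. A sketch assuming OpenAI’s JSON mode (`response_format={"type": "json_object"}`); the field names follow the schema above and the prompt content is illustrative:

```python
import json
from openai import OpenAI

client = OpenAI()

schema_prompt = (
    "Assess whether this lead is relevant to us. Return JSON with keys: "
    '"relevance" (true/false), "reason" (under 50 words), "icebreaker" (max 20 words).'
)

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # JSON mode: output is guaranteed to parse
    messages=[
        {"role": "system", "content": "You are a Spartan lead-qualification assistant."},
        {"role": "user", "content": schema_prompt + "\n\nLead: CTO at a 40-person fintech startup."},
    ],
)

result = json.loads(response.choices[0].message.content)
print(result["relevance"], result["icebreaker"])
```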
8 Use the “Spartan” tone keyword
Adding “Use a Spartan tone” gives you crisp, direct prose without needless formality.
9 Iterate with data, not gut feel
- Generate 10‑20 outputs for the same prompt.
- Score each as good enough? (Y/N).
- Adjust the prompt, regenerate, rescore.
- Keep the version with the highest hit‑rate. (Spreadsheet or lightweight script is fine; this is Monte Carlo testing on a shoestring.)
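A lightweight version of that loop in Python; the pass/fail rule below is a deliberately trivial placeholder and the sample size is illustrative, so swap in whatever “good enough” means for your task:

```python
from openai import OpenAI

client = OpenAI()

PROMPT = "Write a concise, 3-sentence summary of the text below.\n\n<paste your source text here>"
N = 10  # 10-20 samples is usually enough to see a hit-rate difference

def good_enough(text: str) -> bool:
    # Placeholder Y/N check: replace with your own rule, or score by hand in a spreadsheet.
    return text.count(".") <= 3 and "synergy" not in text.lower()

passes = 0
for _ in range(N):
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.7,  # leave some variance so the sample is meaningful
        messages=[{"role": "user", "content": PROMPT}],
    )
    if good_enough(response.choices[0].message.content):
        passes += 1

print(f"hit-rate: {passes}/{N}")
```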
10 Prefer smarter models first
Token costs on GPT‑4o / GPT‑4 Turbo class models are tiny unless you process millions of tokens per day. Start with the best model; only downgrade if latency or budget truly demands it.
11 Know basic structured formats
- JSON for APIs and coding workflows.
- XML when nesting order matters (documents, configs).
- CSV for lightweight tables – but keep row counts modest or the model may lose alignment.
12 Adopt a standard scaffold (C‑I‑O‑R‑E)
- Context – who/what/why.
- Instructions – the task in imperative form.
- Output format – specify schema or layout.
- Rules – do’s & don’ts (brevity, tone, token limits, etc.).
- Example(s) – one gold‑standard user ↔ assistant pair.
Memorise this order; it works for 90 % of business prompts.
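As a concrete artifact, the scaffold can live as a plain template you fill per task. A sketch in Python; every placeholder value below is invented for illustration:

```python
# C-I-O-R-E scaffold as a reusable template string.
CIORE_TEMPLATE = """\
Context: {context}

Instructions: {instructions}

Output format: {output_format}

Rules:
{rules}

Example:
User: {example_user}
Assistant: {example_assistant}
"""

prompt = CIORE_TEMPLATE.format(
    context="You write outbound emails for a B2B data-quality SaaS.",
    instructions="Write a 4-sentence cold email to the lead described below.",
    output_format="Plain prose, no subject line.",
    rules="- Spartan tone.\n- No buzzwords.\n- Under 90 words.",
    example_user="Lead: Head of RevOps at a 200-person logistics firm.",
    example_assistant="Hi Dana - your team reconciles carrier data by hand...",
)
print(prompt)
```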
13 Use AI to manufacture more examples
Ask GPT to “create three more training pairs like the above” to bulk‑build your one‑shot library without manual slog.
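One way to script that step, assuming you already have a gold-standard pair on hand; JSON mode keeps the generated pairs machine-readable, and the model choice is an assumption:

```python
import json
from openai import OpenAI

client = OpenAI()

gold_pair = {
    "user": "I was charged twice for the Pro plan this month.",
    "assistant": "billing",
}

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You generate synthetic training examples for prompt engineering."},
        {
            "role": "user",
            "content": (
                "Here is one gold-standard training pair:\n"
                + json.dumps(gold_pair)
                + "\n\nCreate three more training pairs like the above. "
                + 'Return JSON: {"pairs": [{"user": "...", "assistant": "..."}]}'
            ),
        },
    ],
)

new_pairs = json.loads(response.choices[0].message.content)["pairs"]
print(new_pairs)
```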
Reusable “Prompt Improver” instructions
Paste the following block as a system message when you want GPT‑4o (or similar) to clean up a messy prompt. Then feed the dirty prompt as the next user message.
You are **Prompt Washer v1** – a Spartan, business‑oriented rewriting assistant.
When the user supplies a prompt (the “Input”), do the following:
1. **Classify** the Input as one of:
– system message
– user message
– assistant message
– function definition/call
If uncertain, choose “user”.
2. **Extract intent & constraints**
• Summarise the task in ≤20 words.
• Note any explicit output format, tone, length, audience, or content limits.
3. **Detect issues**
• Verbosity / filler
• Conflicting adjectives (e.g., “detailed summary”)
• Ambiguity (undefined scope, vague verbs)
• Missing scaffold elements (context, output format, rules)
• Excess length (>500 tokens)
4. **Rewrite** using the C‑I‑O‑R‑E template:
   **Context:** (≤30 words; supply only if the Input lacks it)
**Instructions:** (imperative, unambiguous, ≤120 words)
**Output format:** (JSON/XML/CSV/bullets, etc., or “free prose”)
**Rules:** (bulleted do’s and don’ts, each ≤10 words)
**Example:** (ONE concise user ↔ assistant pair, optional; generate if helpful)
– Remove filler and contradictions.
– Keep Spartan tone.
– Preserve the original task and constraints.
5. **Return JSON** with:
```json
{
  "message_role": "<system|user|assistant|function>",
  "clean_prompt": "<fully rewritten text>",
  "notes": "<summary of major changes (max 3 bullets)>"
}
```
Never add extra capabilities or content beyond the Input’s scope unless the user explicitly asks for enhancements.
How to use it?
1. **System**: paste the washer block once.
2. **User**: paste any sloppy prompt you receive or write.
3. GPT returns a tidy, high‑performance version plus a change log.
4. Copy the `clean_prompt` into your workflow (Playground, API call, or chained prompt).
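In code, those four steps collapse to a single call. A sketch assuming the OpenAI Python SDK; `prompt_washer_v1.txt` is a hypothetical file holding the washer block above verbatim:

```python
import json
from openai import OpenAI

client = OpenAI()

WASHER_SYSTEM = open("prompt_washer_v1.txt").read()  # the washer block above, saved once
dirty_prompt = "hey gpt, can you maybe like write me a really detailed summary but super short..."

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # the washer already returns JSON
    messages=[
        {"role": "system", "content": WASHER_SYSTEM},
        {"role": "user", "content": dirty_prompt},
    ],
)

washed = json.loads(response.choices[0].message.content)
print(washed["notes"])                  # change log: learn from it
clean_prompt = washed["clean_prompt"]   # drop this into your real workflow
```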
### Example run
**User (raw):**
> “hey gpt, can you maybe like write me a really detailed summary but super short about my SaaS idea, kind of funny but professional, and maybe some bullet points idk”
**GPT (washer output):**
```json
{
  "message_role": "user",
  "clean_prompt": "Instructions: Write a concise (≤3 sentences) summary of my SaaS idea in a Spartan, mildly humorous tone.\nOutput format: 3 bullet points, max 15 words each.\nRules:\n- No jargon.\n- Avoid emojis.",
  "notes": [
    "Resolved conflict between 'detailed' and 'super short'.",
    "Specified bullet count and length.",
    "Replaced informal phrasing with clear instructions."
  ]
}
```
Copy the `clean_prompt` into your actual call, and you’re ready.
Final tips
- Keep the washer instructions in a file/snippet manager for one‑click reuse.
- If you frequently wash prompts of a single type (e.g., marketing emails), pre‑add a fixed Context block about your brand voice so it’s filled automatically.
- Review the change log; it teaches you to write cleaner prompts natively over time.
Use these tools and habits, and you’ll spend less time wrestling the model – and more time shipping work that moves the needle.