The prompt that solves ambiguous problems


TL;DR

  • LLMs pick the “standard” interpretation when there’s ambiguity (even if it’s wrong)
  • Prompt v17b methodology: identify interpretations → solve each → verify as emergent property → discard invalid ones
  • Key ingredient: “You have permission and obligation to discard. You decide.”
  • Works for interpretive ambiguity; doesn’t work for conceptual errors or external knowledge

The problem it solves

LLMs have a bias toward the “standard” interpretation of a problem. When there’s ambiguity, they pick the most common reading and solve it well… but it’s the wrong interpretation.

Example:

“3 coins, P(heads)=1/3, the number of tails is always even. What’s P(all heads)?”

  • Standard interpretation: treat “the number of tails is always even” as a conditioning event (conditional probability) → 1/13
  • Correct interpretation: treat it as a structural constraint the final model must satisfy → 0
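
To see where both numbers come from, here is a minimal sketch (plain Python, standard library only) that enumerates the conditional reading and solves the marginal equations of the structural one:

```python
from itertools import product
from fractions import Fraction

P_H = Fraction(1, 3)  # per-coin P(heads)

# Conditional reading: condition on "the number of tails is even".
p_even = Fraction(0)       # P(even number of tails)
p_all_heads = Fraction(0)  # P(all heads AND even number of tails)
for flips in product("HT", repeat=3):
    p = Fraction(1)
    for f in flips:
        p *= P_H if f == "H" else 1 - P_H
    if flips.count("T") % 2 == 0:
        p_even += p
        if flips.count("T") == 0:
            p_all_heads += p
print(p_all_heads / p_even)  # 1/13

# Structural reading: the joint distribution lives only on outcomes with an
# even number of tails, i.e. {HHH, HTT, THT, TTH}, and each coin's marginal
# must still be exactly 1/3. With p0 = P(HHH) and p_i = P(only coin i shows
# heads), the three marginal equations p0 + p_i = 1/3 add up to
# 2*p0 + 1 = 3*(1/3), which forces p0 = 0.
p0 = (3 * P_H - 1) / 2
print(p0)  # 0
```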

The v17b prompt

Methodology for solving problems with conditions:

1. IDENTIFY AMBIGUITIES: Don't assume the "standard" interpretation

2. GENERATE INTERPRETATIONS: List ALL possible ways to
   mathematically model each condition

3. SOLVE EACH ONE: Calculate the complete solution for each
   interpretation

4. VERIFY CONSISTENCY: For each interpretation, check that
   your model satisfies ALL conditions as emergent property.
   "I used the data" ≠ "The result satisfies the data"

5. DISCARD: Eliminate interpretations where a condition from
   the problem statement is NOT met in the final model

6. ANSWER: The one that remains

IMPORTANT: You have permission and obligation to discard.
Don't ask which I prefer. You decide.

Why it works

Three key elements:

1. “Don’t assume the standard”

This gives the model explicit permission to consider alternatives. Normally it doesn’t, because the “standard” reading is the safe choice.

2. “Emergent property”

The model typically verifies: “Did I use P(heads)=1/3 in my calculations? ✓”

But that’s not verification. The check should be: “Does my result give P(heads)=1/3 when I calculate the marginal?”
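
Applied to the coin example, that check looks like this (a sketch continuing the code above): recompute the per-coin marginal from the conditioned distribution and compare it to 1/3.

```python
# Emergent-property check for the conditional reading: after conditioning on
# "even number of tails", is each coin's P(heads) still 1/3?
p_coin1_heads = Fraction(0)  # P(coin 1 is heads AND even number of tails)
for flips in product("HT", repeat=3):
    p = Fraction(1)
    for f in flips:
        p *= P_H if f == "H" else 1 - P_H
    if flips.count("T") % 2 == 0 and flips[0] == "H":
        p_coin1_heads += p

print(p_coin1_heads / p_even)  # 5/13, not 1/3
```

The marginal comes out to 5/13 rather than 1/3, which is exactly the mismatch step 5 is there to catch: the conditional reading fails the check and gets discarded, leaving the structural reading (answer 0).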

3. “You have permission and obligation to discard”

Without this phrase, the model presents both interpretations and asks which you prefer. It won’t commit. I documented this behavior in detail in “how the model reaches the correct answer and calls it a contradiction”.

When it DOESN’T work

| Problem type | Does v17b work? |
| --- | --- |
| Interpretive ambiguity | ✅ Yes |
| Pure calculation | ⚠️ Unnecessary (model already does it well) |
| Deep conceptual error | ❌ No (doesn’t know that it doesn’t know) |
| External technical knowledge | ❌ No (needs tools) |

How to use it

Option A: System prompt

Put the methodology as prior context, then ask the question.
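
Concretely, with any chat API that takes a list of role-tagged messages, Option A is a single request. A minimal sketch (METHODOLOGY is a placeholder for the full v17b prompt above; the system/user message format is the common convention, adjust to your client):

```python
METHODOLOGY = "..."  # paste the full v17b prompt from above

# Option A: the methodology rides along as the system prompt,
# and the problem arrives in the same request.
messages_option_a = [
    {"role": "system", "content": METHODOLOGY},
    {"role": "user", "content": "3 coins, P(heads)=1/3, the number of tails "
                                "is always even. What's P(all heads)?"},
]
```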

Option B: Multi-turn

  1. Send the methodology
  2. Model responds “Understood”
  3. Send the problem

Option B works better because the model “confirms” the methodology before seeing the problem.
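
Option B, as a sketch in the same style (in a live chat you would send the methodology, wait for the model’s actual acknowledgement, and then send the problem; the pre-filled "Understood." below just stands in for that reply):

```python
# Option B (multi-turn): methodology first, an acknowledgement turn,
# then the problem in its own message.
messages_option_b = [
    {"role": "user", "content": METHODOLOGY},
    {"role": "assistant", "content": "Understood."},
    {"role": "user", "content": "3 coins, P(heads)=1/3, the number of tails "
                                "is always even. What's P(all heads)?"},
]
```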

I also found that more tokens don’t mean better results: if the model doesn’t understand the underlying problem, a longer prompt just gives it more space to rationalize.


This prompt came out of 17 iterations testing a probability problem where the model found the correct answer and discarded it.

To know when to use v17b vs other techniques, check my taxonomy of LLM failures.

This post is part of my series on the limits of prompting. For a complete view, read my prompt engineering guide.

