A common pricing-interview failure looks like this: a founder spends two weeks talking to users, every conversation ending with the same question: “So — would you pay $49 a month for this?” Most people say yes. A few say maybe. The price goes live, and the real market gives a colder answer than the calls did.
The interviews hadn’t lied. The founder had asked the wrong question.
This is the most common, most expensive mistake in early SaaS pricing research, and it has a name in academic circles: hypothetical bias. According to the Catalog of Bias, what individuals say they would do is consistently not what they actually do. A 2020 meta-analysis in the Journal of the Academy of Marketing Science found that stated willingness to pay tends to overstate real willingness to pay by a meaningful margin across dozens of studies. People are not lying when they say they’d pay $49 a month. They are answering a question that has no cost — no card to enter, no team to convince, no other line items to argue with. The answer they give you is the answer to a hypothetical, and the hypothetical is not your business.
The good news: there is a pattern of pricing interview questions that do work, even for a solo founder talking to twenty users in a week. They come from a small canon of practitioners who have spent careers on this — Madhavan Ramanujam, Bob Moesta, Rob Fitzpatrick, Teresa Torres, Steve Blank — and they all converge on the same idea, expressed slightly differently: stop asking what people would do, and start asking what they have already done.
This piece is the template. Twelve questions, in the order I’d ask them, with the reasoning for each, and a short section on what to actually do with the answers. It’s written for a B2B SaaS founder with somewhere between fifty and a thousand users, no research background, and no money for a consultant.
Why “would you pay X?” is the most expensive question you can ask
Madhavan Ramanujam, a partner at Simon-Kucher and the author of Monetizing Innovation, has argued for years that pricing should be researched before the product is built — not after. Writing in First Round Review, he makes the case for bringing willingness-to-pay conversations into discovery early, while there is still time to shape the product, packaging, and market positioning.
The principle is right. The exact wording is a trap.
Teresa Torres, who wrote Continuous Discovery Habits and has coached more product teams than almost anyone working today, puts it more sharply. In her Product Talk piece on testing willingness to pay, she argues that “what would you do?” answers are unreliable because no real choice is being made. Her advice is to either ask about the past, run a demand test that simulates buying, or — best of all — get them to actually buy.
Rob Fitzpatrick made the same point in The Mom Test a decade earlier. The book’s thesis, in one line: opinions about your idea are worthless, and you should anchor every conversation in concrete things the person has already done. For pricing, the question is not “would you pay?” but “have you ever paid to solve this problem, and how much?”
Steve Blank’s customer development tradition agrees. As Lean B2B summarises Blank’s approach, the question to ask is what the customer is currently spending — on alternatives, on workarounds, on staff time — to solve the problem your product addresses. That number is real. It comes from a budget that has already been signed off. Whatever your price, it will be compared against that number, not against the polite figure they made up to be helpful in your interview.
So: the canon agrees. Stated willingness to pay is unreliable. Past behaviour and current spending are the two anchors that hold weight. Everything else in this template is built on those two anchors.
A note on the survey-style methods
You may have read about Van Westendorp’s price sensitivity meter, Becker–DeGroot–Marschak, multiple price lists, or discrete choice experiments. These have their place. Lenny Rachitsky’s newsletter has a thorough guide to all four. They are quantitative, survey-based, and require sample sizes well beyond what a founder running concierge research can gather in a week.
For a founder with twenty interviews in a week, none of these will give you a clean answer. They are also still asking hypothetical questions, just dressed up in a frame that hides the hypothetical. As ProfitWell’s Patrick Campbell has argued, pricing research benefits from pairing qualitative depth with quantitative breadth — but if you only have time for one, the qualitative interview, done well, will tell you more about why a price works than any survey.
The template that follows is the qualitative half. Run it well now. Run a survey later, if you want.
The pricing interview template — twelve questions, in order
The structure borrows from Bob Moesta’s switch interview methodology, which he developed alongside Clayton Christensen as part of the Jobs-to-be-Done framework. In his Intercom podcast appearance, Moesta describes the central move: rather than asking what someone might do, you reconstruct, in fine detail, the moment they made a decision they have already made. “Why was today the day they signed up for this product?” That is the question. Everything else is timeline.
Read the template once before you use it. The exact words matter less than the order — past behaviour first, then the moment of decision, then the alternatives they considered, and only at the very end anything that touches on what they would do next. Most founders skip to the end. Don’t.
Opening (one question)
1. “Walk me through how you currently handle [the problem your product solves].”
This is the question that anchors the entire conversation. You are not asking about your product. You are asking them to describe a real, present-day workflow. Listen for what they currently use, who else is involved, how much time it takes, and — without pushing — what it costs them. As NN/g notes in their primer on why interviews fail, the best opening question is one that gets the participant talking about something concrete from their own life.
Spend history (three questions)
2. “What have you tried before this?”
You’re building a list of the alternatives they have already considered serious enough to test. Each entry is a price anchor, whether or not money changed hands. Free tools count. Internal scripts count. Hiring a contractor counts.
3. “What did you pay for those, if anything?”
Now you put a number on each. Often the answer is “nothing — we built it ourselves.” Follow up: how much time did that take, and whose time was it? Engineering hours have a price even if your interviewee doesn’t think of them that way.
4. “What are you currently paying — whether to a tool, a contractor, or in your team’s time — to keep [the problem] under control?”
This is the Mom Test question, sharpened. The total figure is your real anchor. If they tell you they’re paying nothing, ask what would happen if the workaround disappeared tomorrow. If the answer is “we’d just hire someone,” there’s your number.
The decision moment (three questions)
5. “Tell me about the last time [the problem] became urgent enough that you actually did something about it.”
This is the switch-interview move. Freeze time. You want a specific incident, not a category of incidents. If they answer in generalities, gently ask for the most recent one they can remember.
6. “What was happening that week that pushed you to act, instead of letting it slide?”
Moesta’s “forces of progress” frame: every purchase is a contest between the push of the current situation, the pull of a new solution, and the inertia of habit and anxiety. This question surfaces the push. You will hear deadlines, customer complaints, board meetings, a single bad month of churn. These are the real triggers your buyers respond to. Your pricing page should speak to them.
7. “What did you actually try first? And then?”
Reconstruct the sequence. People rarely buy the first option they see. Knowing what they tried before they bought tells you who you are really competing against — which is rarely the competitor you assumed.
Money and authority (three questions)
8. “When you decided to pay for [the alternative they actually bought], whose budget did it come from? Was there anyone you had to convince?”
In B2B, the buyer and the user are often different people. The Mom Test calls this “where does the money come from?” — it is the single most-skipped question by founders selling to teams. If the answer is “I just put it on my card,” you are selling to an individual contributor, and your pricing should reflect that. If the answer is “I had to write a memo for my manager,” you are selling to a team, and you need a story for that memo.
9. “How long did the decision take from first noticing the problem to handing over a card?”
The length of the buying cycle is a price signal in itself. A short decision usually means low approval friction; a long one usually means the price has to survive budget owners, risk questions, and internal comparison. If the decisions in your category run long, your pricing page is just one input among many; if they run short, the page is doing almost all the work.
10. “Was there anything that almost stopped you from buying?”
You are asking about anxiety — Moesta’s fourth force. Listen carefully. The answers are usually about risk: integration risk, switching cost, internal politics. Each one is a thing your pricing and packaging need to address, often more than the dollar figure.
The forward-looking question — but only one (one question)
11. “If you imagined replacing your current solution with something better, what would have to be true for it to be worth switching?”
This is the only forward-looking question in the template, and it deliberately does not mention price. You want their list of conditions: the features that aren’t optional, the integrations that have to exist, the support level, the security review. Once you have that list, your pricing tiers can be designed around it — which is what Ramanujam means by “design the product around the price.”
Wrap (one question)
12. “Is there anything I should have asked you that I didn’t?”
This is Steve Blank’s question, and it is the single most-quoted line in customer development for a reason. People hold back the most useful things in interviews until they think the formal part is over. By giving them permission to add what they want, you often get the line that ends up in your sales deck.
Notice what isn’t here
There is no “would you pay $X?” There is no Van Westendorp ladder. There is no “how much do you think this should cost?” The reason is the one Torres gave: every answer to those questions is invented on the spot, in front of you, with no money at stake. The twelve questions in the template are about money that has already moved, decisions that have already been made, and conditions the person can describe because they have actually thought about them. That is the difference between a pricing interview that is honest and one that is comforting.
You will be tempted to ask the hypothetical anyway. Most founders are. The conversation feels incomplete without it. Resist. The discomfort of not asking is the price of getting a real answer.
What to do with the answers
Twenty conversations in this format will give you a stack of notes that, on first read, look like a mess. The signal is in the patterns, and the patterns show up if you do three things.
First, collect the spend numbers in one place. What is the median amount the people you talked to are currently paying — in money, in time, in contractor invoices — to keep this problem under control? That number is your floor. Pricing meaningfully below it suggests you are leaving money on the table; pricing meaningfully above it requires a clear story for the extra value.
Second, cluster the trigger events. The “what was happening that week” answers usually fall into three or four categories. Each category is a marketing message and, often, a packaging cue. If half your interviewees were triggered by a single bad customer call, your pricing page needs a story about avoiding that call. If a third were triggered by a board meeting, you need a story about board-meeting-friendly evidence.
Third, list the must-haves and the deal-breakers. The answers to question 11 map directly to your tiers. Anything that comes up as a deal-breaker in eight of twenty conversations is in your base tier. Anything that comes up as a nice-to-have in four of twenty is a differentiator for a higher tier. The ProfitWell Report has argued that companies often spend less than ten hours a year on pricing. Twenty good interviews and one Saturday afternoon of clustering will put you ahead of teams that treat pricing as a once-a-year spreadsheet edit.
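If you transcribe your notes into a spreadsheet, the three steps above reduce to a few lines of arithmetic. The sketch below is a minimal illustration, not a tool: the interview data, feature names, and trigger categories are invented placeholders, and the 40%/20% tier thresholds simply mirror the eight-of-twenty and four-of-twenty rules of thumb above.

```python
from collections import Counter
from statistics import median

# Hypothetical transcription of ten interviews, for illustration only:
# monthly spend in dollars (money plus estimated time cost), the trigger
# category you assigned while reading the notes, and the must-have
# features each person named.
spends = [30, 0, 120, 45, 60, 80, 0, 200, 55, 40]
triggers = ["bad customer call", "board meeting", "bad customer call",
            "deadline", "bad customer call", "board meeting", "deadline",
            "bad customer call", "churn spike", "bad customer call"]
must_haves = [["sso", "api"], ["api"], ["sso"], ["api", "audit log"],
              ["api"], ["sso", "api"], ["api"], ["audit log"],
              ["api"], ["sso", "api"]]

# Step 1: the median current spend is your pricing floor.
floor = median(spends)

# Step 2: cluster trigger events; each frequent cluster is a
# candidate marketing message.
trigger_counts = Counter(triggers)

# Step 3: map feature mentions to tiers. A feature named by roughly
# 40% of interviewees belongs in the base tier; one named by roughly
# 20% is a candidate differentiator for a higher tier.
feature_counts = Counter(f for fs in must_haves for f in fs)
n = len(must_haves)
base_tier = [f for f, c in feature_counts.items() if c / n >= 0.4]
upper_tier = [f for f, c in feature_counts.items() if 0.2 <= c / n < 0.4]

print(f"pricing floor (median spend): ${floor}/mo")
print("top triggers:", trigger_counts.most_common(2))
print("base tier:", base_tier, "| higher tier:", upper_tier)
```

The point is not the script; it is that twenty rows and three aggregations are enough to turn a pile of interview notes into a floor price, a lead message, and a first draft of your tiers.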
A last note on honesty
The best pricing research finds the difference between what people say a fair price would be and what they would actually do at that price. The whole template above is built to close that gap by anchoring everything in real behaviour.
The same gap is what makes pricing interviews quietly painful for founders. The “would you pay?” answer is comforting because it tells you what you hoped to hear. The reconstructed buying decision is uncomfortable because it tells you, very specifically, who your buyer is, what they care about, and how much they will actually part with. That is also the reason it is worth asking.
If you run twenty of these conversations and find that the median current spend is half what you wanted to charge, you have not failed. You have learned something it would have cost you a quarter’s revenue to learn after launch. The discomfort is the value. The comfortable answer is what you were already getting.
That is the whole job, in fact, of a research conversation: to give the person on the other end enough room to tell you the inconvenient truth. If you treat them as someone whose past behaviour is more interesting than their predictions, they will. And the pricing page you write next will be a description of a decision they have already made, rather than a guess at one they might.