PivotBuddy

Chapter 2 of 9

Chapter 2: Unpacking Assumptions - The Topography of Risk

Desirability, Viability, Feasibility: User Story & Impact Mapping.

What You'll Learn

Learn the three types of product risk, how to run Assumption Mapping workshops, and how to keep your MVP focused on results.

Every Business is Built on Assumptions

Behind every successful product is a set of validated assumptions. Behind every failed startup is a set of assumptions that were never tested -- or tested too late. This chapter gives you a framework to find, sort, and rank every assumption in your business.

The challenge is that assumptions are invisible. They're baked into your business plan, your pitch deck, your product roadmap, and your conversations with investors. They feel like facts because you've repeated them so many times. "Our target market is millennials who care about sustainability." Is that a fact or an assumption? "Users will pay $29/month for our premium tier." Fact or assumption? "We can acquire customers through Instagram ads at $5 per lead." Fact or assumption? Until you have evidence, every single one of these statements is an assumption -- and any one of them could be wrong enough to kill your business.

The discipline of assumption unpacking is about making the invisible visible. It's about forcing yourself to write down every belief that underpins your business and then honestly assessing how much evidence you have for each one. This is uncomfortable work. It requires intellectual honesty that goes against every instinct of the optimistic founder. But it's also the most important work you'll do before writing a single line of code.

The Core Insight

Your business plan isn't a set of facts -- it's a collection of beliefs. The faster you can distinguish between what you know and what you assume, the faster you'll learn what actually works.

The Triad of Product Risk

Product risks fall into three categories. A successful product must pass all three tests -- fail any one, and your startup fails:

Desirability

"Do they want this?"

This is where most startups fail. Key questions:

  • Is the problem real and frequent?
  • Is it painful enough that customers will seek a solution?
  • Do they want YOUR solution specifically?

Viability

"Should we do this?"

Even desirable products can be bad businesses. Key questions:

  • Can we acquire customers profitably?
  • Will they pay what we need to charge?
  • Is the market large enough?

Feasibility

"Can we do this?"

Technical and operational constraints. Key questions:

  • Do we have the technology?
  • Is it legal and compliant?
  • Can we handle operations at scale?

Understanding these three risk categories is essential because they require fundamentally different testing approaches. Desirability is best tested through customer conversations, fake door tests, and behavioral observation. Viability is tested through pricing experiments, unit economics modeling, and market sizing analysis. Feasibility is tested through technical spikes, prototypes, and regulatory research. A common mistake is using a feasibility test (building a prototype) to validate desirability (do people want this?) -- the two questions require different evidence.

Where to Start?

Test Desirability first. Most founders start with Feasibility ("Can we build it?") because it's comfortable. But building something nobody wants is the #1 startup killer. Always validate demand before engineering.

Case Study: Webvan -- The $800M Feasibility Fixation

Webvan invested $800 million building massive automated warehouses for online grocery delivery. They proved conclusively that they could deliver groceries to homes within 30-minute windows. The technology was impressive. The logistics were sophisticated. But they never adequately tested whether enough consumers valued the convenience at a price that covered the operational costs. When unit economics proved unsustainable, the company collapsed.

The lesson: Webvan answered the feasibility question brilliantly but failed on viability. They proved they could do it without proving they should. Had they started with viability testing -- running a concierge service in one ZIP code to validate willingness to pay -- they could have discovered the unit economics problem before investing hundreds of millions.

Bug #1: Testing the Wrong Assumptions

Not all assumptions are created equal. Many founders waste months validating assumptions that don't actually matter.

The Bug

"We validated that we can build the technology, so we're good to go."

Technical feasibility is often the LEAST risky assumption. Proving you can build something tells you nothing about whether anyone will buy it.

The Fix

Use the Assumption Mapping Matrix.

Plot all assumptions on a 2x2 grid of Importance vs. Evidence. Focus exclusively on assumptions that are CRITICAL but UNPROVEN -- the "Kill Zone."

The temptation to test feasibility first is understandable. Technical founders feel most competent in the engineering domain. Building a prototype is tangible and satisfying. And feasibility tests have clear binary outcomes: it either works or it doesn't. But this comfort is precisely the problem. You're testing what's easy to test rather than what's most important to test. The most important assumptions are almost always about people -- their behavior, their preferences, their willingness to pay -- and those are inherently messier to validate.

The Assumption Mapping Workshop

Run this workshop with your team to systematically identify and prioritize what you don't know.

Workshop Setup

Participants: Founders + key team members (3-6 people ideal)
Duration: 2-3 hours
Materials: Whiteboard, sticky notes, Sharpies, or a digital tool like Miro
Output: Prioritized list of assumptions with experiment plans

The workshop format matters because assumption mapping is not a solo activity. Individual founders have blind spots -- they unconsciously avoid questioning the assumptions they're most emotionally attached to. A team setting creates accountability: when you have to write an assumption on a sticky note and defend its position on the matrix to your co-founders, it's much harder to hide behind "I just know this is true." Diverse perspectives catch hidden assumptions that no single person would identify.

Step 1: Extraction (30-45 minutes)

Brainstorm every assumption underlying your business. Use these prompts to extract hidden beliefs:

Desirability Assumptions
  • "Our target customer experiences [problem] regularly"
  • "This problem is painful enough to pay for a solution"
  • "Our solution actually solves the problem"
  • "Customers will switch from their current solution"
  • "Users will understand how to use our product"
Viability Assumptions
  • "Customers will pay $X for this"
  • "We can acquire customers for less than $Y"
  • "Our target market size is at least $Z"
  • "Customers will stay for at least N months"
  • "We can reach customers through [channel]"
Feasibility Assumptions
  • "We can build this with [technology]"
  • "We can hire the talent we need"
  • "This is legally permissible in our markets"
  • "We can deliver within [timeframe]"
  • "Partners/suppliers will work with us"
Team & Market Assumptions
  • "We have the right team composition"
  • "The market timing is right"
  • "Competitors won't respond quickly"
  • "Key stakeholders will support this"
  • "Regulations won't change"

A useful technique for extraction is the "Pre-Mortem." Instead of asking "What assumptions are we making?" (which feels abstract), ask: "Imagine it's one year from now and our startup has failed. What went wrong?" This reframing activates different cognitive pathways and often surfaces assumptions that the team has been unconsciously avoiding. People are surprisingly good at predicting their own failures when given permission to be pessimistic.

Another powerful extraction technique is the "Stranger Test." Explain your business to someone completely outside your industry -- a parent, a friend who works in an unrelated field, or even a stranger at a coffee shop. Pay attention to the questions they ask. "Why would someone want that?" and "How would they find out about it?" are questions insiders stop asking because the answers feel obvious. But those "obvious" answers are often your most dangerous unvalidated assumptions.

Common Extraction Mistakes
  • Too abstract: "People want to save time" is too vague. Be specific: "Accountants spend 5+ hours/week on manual data entry."
  • Missing implicit assumptions: "Users have smartphones" might be obvious in the US but risky in rural emerging markets.
  • Skipping "obvious" ones: Your most dangerous assumptions often feel so obvious you don't question them.
  • Confusing solutions with problems: "Users need a dashboard" is a solution assumption. The underlying problem assumption might be "Users struggle to find key information quickly."

Step 2: Mapping (30-45 minutes)

Plot each assumption on a 2x2 matrix:

The Assumption Mapping Matrix

X-Axis: Evidence Level

  • Low Evidence (Left): Gut feeling, no data
  • High Evidence (Right): Customer interviews, data, market research

Y-Axis: Business Impact

  • High Impact (Top): If wrong, the business fails
  • Low Impact (Bottom): If wrong, we can adapt
Combining the two axes gives four quadrants:

  • High Impact + Low Evidence: KILL ZONE -- test immediately
  • High Impact + High Evidence: VALIDATED -- monitor & maintain
  • Low Impact + Low Evidence: NICE TO KNOW -- test if time permits
  • Low Impact + High Evidence: SAFE ZONE -- ignore for now

The mapping process often produces debate, and that debate is the point. When one co-founder says "We know our customers will pay $29/month" and another says "Actually, we've only asked three people," the gap between perceived and actual evidence becomes visible. Encourage disagreement during the mapping phase -- it surfaces the gaps in your collective knowledge that need to be filled through experimentation.

One critical nuance: evidence is not binary. You don't simply "have" or "not have" evidence. There's a spectrum from anecdote to data. A single customer interview is better than gut feeling but worse than 50 interviews. A landing page test with 200 visitors is more reliable than one with 20. When plotting assumptions on the matrix, be honest about where your evidence falls on this spectrum. Founders tend to overweight a single enthusiastic customer conversation as "strong evidence," when in reality it's just one data point with significant selection bias.
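The quadrant logic can be sketched as a small function. This is illustrative, not from the playbook: the 1-10 rating scales and the threshold of 5 are assumptions you would calibrate with your own team.

```python
def quadrant(impact: int, evidence: int) -> str:
    """Classify an assumption on the 2x2 matrix.

    impact:   1-10, how badly the business is hurt if the assumption is wrong
    evidence: 1-10, how much real data backs the assumption
    The threshold of 5 is an illustrative cut-off, not a fixed rule.
    """
    high_impact = impact > 5
    high_evidence = evidence > 5
    if high_impact and not high_evidence:
        return "Kill Zone"      # test immediately
    if high_impact and high_evidence:
        return "Validated"      # monitor & maintain
    if not high_impact and not high_evidence:
        return "Nice to Know"   # test if time permits
    return "Safe Zone"          # ignore for now

# "Users will pay $29/month" with only three interviews behind it:
print(quadrant(impact=9, evidence=2))  # Kill Zone
```

Writing the rule down this explicitly has the same effect as the workshop debate: the team must agree on a shared scale before any assumption can be placed.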

Step 3: Prioritize the Kill Zone (30 minutes)

For assumptions in the Kill Zone (high impact, low evidence), prioritize using this formula:

Prioritization Formula

Priority Score = Impact x Uncertainty x Speed of Testing

  • Impact (1-10): How catastrophic if wrong?
  • Uncertainty (1-10): How little do we know?
  • Speed (1-10): How quickly can we test it? (higher = faster)

Test the highest-scoring assumptions first.

The "Speed" factor is often overlooked but critically important. If two assumptions have similar impact and uncertainty scores, test the one you can validate faster. Quick wins build momentum and confidence. They also free up cognitive bandwidth -- once you've validated (or invalidated) an assumption, you can stop worrying about it and focus on the next one.

LeanPivot's Problem-Solution Fit tool automates much of this prioritization process, helping you score assumptions against the three dimensions and suggesting appropriate experiment designs for each Kill Zone assumption.

Step 4: Design Experiments (45 minutes)

For each priority assumption, define an experiment using this template:

Experiment Design Template

Assumption: [What we believe to be true]
Experiment Type: [Interview / Landing Page / Fake Door / Concierge]
Sample Size: [How many people/interactions]
Success Metric: [What we'll measure]
Pass Threshold: [The number that means "validated"]
Deadline: [When we'll have results]

When designing experiments, resist the temptation to make them too elaborate. The best experiment is the simplest one that produces a reliable signal. For desirability assumptions, five high-quality customer interviews often provide more insight than a 500-person survey. For willingness-to-pay assumptions, a single fake door test with $200 in ad spend can be more revealing than months of focus groups. Match the experiment complexity to the assumption complexity -- don't use a cannon when a slingshot will do.
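The template translates naturally into a small data structure. A minimal sketch, assuming Python; the field names are our own, and the fake door example with its numbers is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    """One filled-in row of the experiment design template (field names are illustrative)."""
    assumption: str
    experiment_type: str       # Interview / Landing Page / Fake Door / Concierge
    sample_size: int
    success_metric: str
    pass_threshold: float      # the number that means "validated"
    deadline: str
    result: Optional[float] = None  # filled in once the experiment has run

    def verdict(self) -> str:
        """Compare the measured result against the pass threshold."""
        if self.result is None:
            return "pending"
        return "validated" if self.result >= self.pass_threshold else "invalidated"

# Hypothetical fake door test for a willingness-to-pay assumption
exp = Experiment(
    assumption="Customers will pay $29/month for the premium tier",
    experiment_type="Fake Door",
    sample_size=200,
    success_metric="% of pricing-page visitors who click 'Upgrade'",
    pass_threshold=5.0,
    deadline="2 weeks from today",
    result=7.5,
)
print(exp.verdict())  # validated
```

The value of forcing a numeric pass threshold before running the test is that the verdict is computed, not negotiated after the fact.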

Bug #2: Building Features Without Outcomes

Many teams build features because they seem like good ideas -- without connecting them to measurable business outcomes.

The Bug

"Let's add dark mode -- users will love it."

Features that don't connect to business outcomes create bloat, delay launch, and waste engineering time. "Users might like it" isn't a strategy.

The Fix

Use Impact Mapping.

Every feature must connect to a behavior change that helps a business goal. No link = no build.

The "dark mode" example is deliberately provocative because dark mode feels like an obvious feature. But consider the opportunity cost: the week your team spends implementing dark mode is a week not spent improving onboarding (which directly impacts activation rate), fixing a conversion bottleneck (which directly impacts revenue), or building a feature that addresses your users' top pain point (which directly impacts retention). In a resource-constrained startup, every feature must earn its place by connecting to a business outcome.

Impact Mapping: Why Before What

Impact Mapping connects what you build to business goals in four steps:

The Impact Map Structure

GOAL → ACTORS → IMPACTS → DELIVERABLES

  • Goal -- Why are we doing this? Example: Increase revenue by 30%
  • Actors -- Who can help or hinder? Example: Event organizers, attendees, sponsors
  • Impacts -- How should their behavior change? Example: Organizers create events more frequently
  • Deliverables -- What can we build to cause this? Example: Mobile admin app, event templates

The power of Impact Mapping is that it forces you to articulate the causal chain between your feature and your goal. Building a mobile admin app is not inherently valuable -- it's only valuable if it causes organizers to create events more frequently, which in turn increases revenue. By making this chain explicit, you create accountability: after building the feature, you can measure whether the expected behavioral change actually occurred. If organizers don't create events more frequently after you ship the mobile admin app, you've learned something valuable -- the feature didn't have the intended impact, and you need to investigate why.
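The causal chain can even be made machine-checkable. Below is a minimal sketch using the chapter's event-platform example; the nested-dict layout and the `justify` helper are our own illustrative choices, not a standard Impact Mapping tool:

```python
from typing import Optional

# The chapter's example impact map as nested data: goal -> actors -> impacts -> deliverables
impact_map = {
    "goal": "Increase revenue by 30%",
    "actors": {
        "Event organizers": {
            "Create events more frequently": ["Mobile admin app", "Event templates"],
        },
    },
}

def justify(feature: str, imap: dict) -> Optional[str]:
    """Return the deliverable -> impact -> goal chain for a feature, or None."""
    for actor, impacts in imap["actors"].items():
        for impact, deliverables in impacts.items():
            if feature in deliverables:
                return f"{feature} -> {impact} ({actor}) -> {imap['goal']}"
    return None  # no link = no build

print(justify("Mobile admin app", impact_map))
print(justify("Dark mode", impact_map))  # None: no chain, so it stays out of the MVP
```

A feature either produces a chain or it produces `None`, which is the Feature Justification Test in executable form.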

The Feature Justification Test

Before adding any feature to your backlog, answer: "What behavioral change will this cause, and how does that impact our goal?" If you can't answer clearly, the feature doesn't belong in your MVP.

User Story Mapping: End-to-End Completeness

Impact Mapping asks why. User Story Mapping asks what -- making sure your MVP covers the full journey, not just one piece.

User Story Map Structure

Arrange stories in a grid that represents the user's journey:

Journey steps (left to right): Discover → Sign Up → Onboard → Core Action → Share

  • Discover: View landing page; Watch demo video; Read testimonials
  • Sign Up: Create account; OAuth login; SSO integration
  • Onboard: Complete profile; Tutorial flow; Import data
  • Core Action: Create first item; Edit item; Bulk actions
  • Share: Invite teammate; Share via link; Export report

Within each step, stories are ordered top to bottom by priority.

The MVP Line: Draw a horizontal line across the grid. Everything above it is your MVP -- the minimum stories needed to complete the full journey. Here, the first story under each step forms the MVP row; the remaining stories belong to later releases.

The story map reveals a critical insight that most feature lists miss: horizontal completeness matters more than vertical depth. A user who can discover your product, sign up, complete onboarding, perform the core action, and share it with others has experienced the full value loop -- even if each step is basic. Contrast this with a product that has an incredible sign-up flow but no onboarding, or beautiful onboarding but a broken core action. The first product has a chance of retaining users; the second never will.
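The horizontal-completeness check is easy to automate. A minimal sketch, assuming Python; the dict layout is ours, and this map deliberately leaves the Core Action step without an MVP story so the gap surfaces:

```python
# A story map as journey steps -> {"mvp": [...], "later": [...]}.
# Core Action is intentionally empty above the MVP line to illustrate a gap.
story_map = {
    "Discover":    {"mvp": ["View landing page"], "later": ["Watch demo video", "Read testimonials"]},
    "Sign Up":     {"mvp": ["Create account"],    "later": ["OAuth login", "SSO integration"]},
    "Onboard":     {"mvp": ["Complete profile"],  "later": ["Tutorial flow", "Import data"]},
    "Core Action": {"mvp": [],                    "later": ["Create first item", "Edit item", "Bulk actions"]},
    "Share":       {"mvp": ["Invite teammate"],   "later": ["Share via link", "Export report"]},
}

def journey_gaps(story_map: dict) -> list:
    """Return journey steps that have no story above the MVP line."""
    return [step for step, rows in story_map.items() if not rows["mvp"]]

print(journey_gaps(story_map))  # ['Core Action']: users can sign up, then do nothing
```

Run before each sprint, a check like this catches the "beautiful sign-up, broken core action" failure mode described below in code review rather than in production.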

LeanPivot's User Story Generator can help you rapidly create and organize user stories across your entire journey map, ensuring end-to-end completeness before you start building.

Common Story Mapping Mistake

Building 100% of "Sign Up" but 0% of "Core Action." A user can create an account beautifully -- but then has nothing to do. Story Mapping makes these gaps visible before you build.

The Living Assumption Board

Assumption mapping isn't a one-time exercise. Your assumptions evolve as you learn, and your mapping should evolve with them. Create a "Living Assumption Board" -- a persistent artifact that your team reviews weekly.

Maintaining Your Assumption Board

Weekly Review: Spend 15 minutes in your weekly standup reviewing the board. Have any assumptions moved quadrants based on new evidence?
After Each Test: Update the evidence level of the tested assumption. Move it from "Kill Zone" to "Validated" or flag it as "Invalidated."
After Pivots: When you change direction, the entire assumption map may need rebuilding. New strategies come with new assumptions.
New Discoveries: Customer conversations often reveal assumptions you didn't know you had. Add them to the board immediately.

The discipline of maintaining a living assumption board transforms how your team thinks about uncertainty. Instead of treating unknowns as threats to be ignored, the board makes them visible assets to be managed. Over time, watching assumptions move from the Kill Zone to the Validated quadrant creates a tangible sense of progress -- progress measured in learning, not in features shipped.

Your Assumption Unpacking Checklist

Before Moving to Build

Desirability validated: You have evidence that customers want this specific solution
Viability tested: You've validated willingness to pay at your target price
Kill Zone cleared: All high-impact, low-evidence assumptions have experiments
Impact map created: Every feature links to a behavioral change and business goal
Story map drawn: Your MVP is end-to-end complete, not deep in one area

Key Takeaways

Remember These Truths
  1. Test Desirability first. Most startups fail because no one wants the product -- not because they couldn't build it.
  2. Find the Kill Zone. Focus exclusively on assumptions that are high-impact and low-evidence.
  3. Link features to outcomes. Use Impact Mapping to justify every feature with a behavioral change and business goal.
  4. Ensure end-to-end completeness. Use Story Mapping to avoid building deep in one area while leaving critical gaps.
  5. Run the workshop. Assumption mapping isn't a solo activity -- diverse perspectives catch hidden assumptions.

Now that you can systematically unpack and prioritize assumptions, let's explore the specific techniques for validating them without writing code.



Works Cited & Recommended Reading

RAT vs MVP Philosophy
  • Ries, E. (2011). The Lean Startup. Crown Business.
  • "Why RAT (Riskiest Assumption Test) beats MVP every time." LinkedIn
  • "Pretotyping: The Art of Innovation." Pretotyping.org
  • "Continuous Discovery: Product Trio." Product Talk
  • "MVP Fidelity Spectrum Guide." SVPG
Minimum Lovable Product
  • Olsen, D. (2015). The Lean Product Playbook. Wiley.
  • "From MVP to MLP: Why 'Viable' Is No Longer Enough." First Round Review
  • "Minimum Lovable Product framework." Amplitude Blog
Assumption Mapping
  • Bland, D. & Osterwalder, A. (2019). Testing Business Ideas. Wiley.
  • "Risk vs. Knowledge Matrix." Miro Templates
  • "Identifying Riskiest Assumptions." Intercom Blog
User Story & Impact Mapping
  • Patton, J. (2014). User Story Mapping. O'Reilly Media.
  • Adzic, G. (2012). Impact Mapping. Provoking Thoughts.
  • "Jobs-to-Be-Done Story Framework." JTBD.info
  • "The INVEST Criteria for User Stories." Agile Alliance
  • "North Star Metric Framework." Amplitude
  • "Opportunity Solution Trees." Product Talk
  • Torres, T. (2021). Continuous Discovery Habits. Product Talk LLC.
This playbook synthesizes methodologies from Lean Startup, Design Thinking, Jobs-to-Be-Done, Pretotyping, and modern product management practices. References are provided for deeper exploration of each topic.