Appendices: Templates and Checklists
Regulatory checklists, pre-mortem templates, and scorecards.
Appendix A: Compliance Checklist
Regulatory compliance isn't optional -- it's a feasibility constraint. Use this checklist before committing to your MVP build. Each item represents a concrete requirement that could affect your architecture, timeline, or budget. Items you can't confidently answer require immediate attention and may need legal consultation.
The checklist is organized by category, with the most universally applicable items first. Even if you're building a "simple" SaaS product, you'll likely need to address most items in the Data Privacy section. AI-specific items apply if your product uses any form of machine learning or large language models. Industry-specific items depend on your target market.
2026 Startup Compliance Checklist
Data Privacy (Required for All)
- Data deletion: Can users request complete deletion of their data? Can you fulfill this within 30 days? Does deletion cascade through backups, logs, and third-party systems?
- Data residency: Can you store data in EU data centers if required? Are you aware of which jurisdictions your customers operate in and their data residency requirements?
- Consent management: Do users provide informed, granular consent before data collection? Can you distinguish between consent for different purposes (analytics, marketing, AI training)?
- Data portability: Can users export their data in a machine-readable format (JSON, CSV)? Is the export process automated or manual?
- Privacy policy: Does your privacy policy clearly describe what data you collect, why, how long you retain it, and who has access? Has a lawyer reviewed it?
- Third-party data sharing: Have you mapped every third party that receives user data? Do you have Data Processing Agreements (DPAs) with each?
AI-Specific Rules
- Risk classification: Under the EU AI Act, what risk category does your AI system fall into (minimal, limited, high, unacceptable)?
- Transparency: Can you label AI-generated outputs as such? Do users know when they're interacting with AI vs. a human?
- Human oversight: For consequential decisions (hiring, lending, medical), can humans review and override AI recommendations?
- Audit trail: Can you produce a complete log of AI decisions and the inputs that generated them? How long are these logs retained?
- Bias monitoring: Do you have a process for detecting and measuring bias in AI outputs? How often is this reviewed?
- Training data provenance: Can you document the source and licensing terms of all data used to train or fine-tune your models?
Legal Foundations
- Terms of Service: Do your ToS clearly address AI liability limitations? Have they been reviewed by a lawyer familiar with AI products?
- IP ownership: Who owns AI-created content -- your company, the user, or neither? Is this clearly stated in your ToS?
- Training data rights: Do you have legal rights to use all training data? Have you assessed copyright risk for any scraped or public data?
- Liability insurance: Do you have professional liability (E&O) insurance? Cyber liability insurance? D&O insurance if you have investors?
- Corporate structure: Is your corporate entity properly formed? Do you have founder agreements with vesting schedules?
Industry-Specific Requirements
- Industry regulations: Have you identified all industry-specific regulations that apply (HIPAA, FERPA, GLBA, PCI DSS, SOX)?
- Licensing: Do you need any professional licenses, money transmission licenses, or industry certifications?
- Insurance requirements: Do your target customers or industry standards require specific insurance coverage?
- SOC2/ISO certification: Will your target customers require SOC2 Type II or ISO 27001 certification? If so, have you budgeted the $20K-50K cost and 6-12 month timeline?
- Accessibility: Does your product need to comply with WCAG 2.1 accessibility standards? This may be legally required for government or education customers.
How to Use This Checklist
Go through each item and mark it as: (1) Addressed -- you've already handled this, (2) Planned -- you know what to do and when, or (3) Unknown -- you need to investigate. Any item marked "Unknown" in the Data Privacy or Legal sections should be resolved with a lawyer before building your MVP. Use the Assumption Mapper to track compliance requirements alongside your business model assumptions.
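If you track the checklist in a spreadsheet or config file, the triage rule above is easy to automate. The sketch below is a hypothetical example -- the item names, categories, and `review` function are our own, not part of the playbook -- showing how to tally statuses and surface the "Unknown" items in the blocking categories:

```python
# Hypothetical sketch: tally checklist statuses and flag blockers.
# Item names, statuses, and the data layout are illustrative assumptions.
from collections import Counter

checklist = {
    "Data Privacy": {
        "Data deletion": "Addressed",
        "Data residency": "Unknown",
        "Consent management": "Planned",
    },
    "Legal Foundations": {
        "Terms of Service": "Unknown",
        "IP ownership": "Planned",
    },
}

# Per the guidance above: any "Unknown" in Data Privacy or Legal
# should be resolved with a lawyer before the MVP build.
BLOCKING_CATEGORIES = {"Data Privacy", "Legal Foundations"}

def review(checklist):
    counts = Counter()
    blockers = []
    for category, items in checklist.items():
        for item, status in items.items():
            counts[status] += 1
            if status == "Unknown" and category in BLOCKING_CATEGORIES:
                blockers.append(f"{category}: {item}")
    return counts, blockers

counts, blockers = review(checklist)
print(dict(counts))   # status tally across all categories
print(blockers)       # items to resolve with a lawyer first
```

Re-running this after each compliance review gives you a simple trend line: the "Unknown" count should only go down as you approach your Go/No-Go decision.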
Appendix B: Pre-Mortem Template
Run this exercise with your team before making the Go/No-Go decision. The Pre-Mortem, developed by psychologist Gary Klein, is one of the most effective risk identification techniques available. It works by leveraging a cognitive quirk: people are better at explaining past events than predicting future ones. By imagining that failure has already occurred, you unlock insights that forward-looking risk analysis misses.
Preparation
- Participants: All co-founders and early team members. Include advisors if available. 3-8 people is the ideal group size.
- Time needed: 60-90 minutes for the full exercise.
- Materials: Sticky notes or shared digital document, timer, whiteboard or wall space for clustering.
- Facilitator: Designate someone to facilitate who can remain neutral and ensure all voices are heard.
The Prompt
"It is 2028. Our company has completely failed. We're shutting down. What happened?"
Important: The prompt uses definitive language ("has failed" not "might fail"). This is critical -- it removes the cognitive burden of predicting and replaces it with the easier task of explaining. Read the prompt aloud, slowly, and let it sink in before people start writing.
Process
- Individual writing (10 minutes): Each person independently writes 3-5 specific failure scenarios. No discussion during this phase. Encourage specificity: "We ran out of cash in Month 14 because our CAC escalated 40% faster than projected" is better than "We ran out of money."
- Round-robin sharing (15 minutes): Go around the table. Each person reads one scenario at a time. No debating or dismissing. The facilitator captures each item on the whiteboard. Continue until all items are shared.
- Clustering (10 minutes): Group similar failure modes together. Common clusters: financial failures, team failures, product failures, competitive failures, regulatory failures, timing failures.
- Voting (5 minutes): Each person gets 3 votes. Vote for the failure modes you believe are most likely AND most impactful. Items with multiple votes are your highest-priority risks.
- Mitigation planning (30 minutes): For the top 3-5 risks, discuss specific mitigation actions. Assign an owner and a deadline for each. Add these to your risk matrix from Chapter 8.
Common failure modes to seed the conversation if the group needs prompting:
Financial Failures
- Ran out of cash because burn rate exceeded projections by 40%
- Couldn't raise next round because metrics didn't meet investor thresholds
- Unit economics never reached the 3:1 LTV:CAC target despite 18 months of optimization
- Revenue plateaued at $30K MRR -- enough to survive but not enough to grow or attract investment
- AI inference costs rose instead of declining as projected, destroying margins
- Annual contracts churned at renewal because the product didn't deliver sustained value
Team Failures
- Co-founder dispute over equity split led to a messy breakup and legal costs
- CTO left with all the technical knowledge and no documentation
- Couldn't hire ML engineers at our budget -- competitors offered 2x compensation
- Remote culture broke down as team grew from 3 to 8 people, leading to miscommunication
- Burnout: founders worked 80-hour weeks for 18 months and made increasingly poor decisions
- First sales hire had wrong background -- we lost 4 months of runway on a bad fit
External & Regulatory Failures
- EU AI Act classified us as high-risk, adding $150K in compliance costs we hadn't budgeted
- Copyright lawsuit from a content creator whose work appeared in our training data
- Key API provider (OpenAI/Anthropic) changed their terms of service, prohibiting our use case
- CCPA enforcement action because our data deletion process missed cached copies
- Industry-specific regulation changed, requiring licenses we couldn't afford or obtain
- Macroeconomic downturn reduced customer willingness to pay for new SaaS tools
Competitive & Product Failures
- Google/Microsoft launched a free version of our core feature as part of their suite
- A YC-backed competitor raised $10M and out-spent us on customer acquisition
- Open-source alternative achieved 80% of our functionality at zero cost
- We built what we thought customers wanted instead of what they actually needed
- Technology breakthrough (new AI model, new framework) obsoleted our approach
- We expanded to enterprise before product was ready, losing credibility with early champions
Appendix C: Unit Economics Health Check Scorecard
Use this scorecard to quickly assess whether your unit economics are healthy, concerning, or in critical territory. Check your numbers against each row. Multiple "Warning" results indicate your model needs work. Any "Critical" result should be resolved before committing to your MVP build.
The benchmarks here are drawn from data published by SaaStr, OpenView, Lighter Capital, and other startup-focused research organizations. They represent broad industry averages -- your specific benchmarks may differ based on your vertical, target customer, and business model. Use them as starting points, not absolute standards.
| Metric | Healthy | Warning | Critical | Why It Matters |
|---|---|---|---|---|
| LTV:CAC Ratio | > 3:1 | 1:1 - 3:1 | < 1:1 | Below 3:1, there's not enough margin to reinvest in growth, product, and operations. Below 1:1, every customer makes you poorer. |
| Payback Period | < 12 months | 12-18 months | > 18 months | Long payback periods create enormous working capital requirements during growth. At 18+ months, the cash trough becomes a cash grave. |
| Gross Margin | > 70% | 50-70% | < 50% | SaaS investors expect 70%+ long-term. AI products may start at 50-60% but should show a path to improvement as costs decline. |
| Monthly Logo Churn | < 2% | 2-5% | > 5% | At 5% monthly churn, you lose half your customers every year. Growth becomes a treadmill -- you're running fast just to stay in place. |
| Net Revenue Retention | > 120% | 100-120% | < 100% | NRR above 100% means your customer base grows in revenue even without new acquisitions. Below 100%, you're on a leaky boat. |
| Burn Efficiency | < 1.5x | 1.5-2.5x | > 2.5x | Burn efficiency = net burn / net new ARR. At 2.5x, you're spending $2.50 for every $1 of new ARR -- a rate that drains cash fast. |
| Rule of 40 | > 40 | 20-40 | < 20 | Growth rate + profit margin. Measures the combined health of growth and profitability. Below 20 suggests neither growth nor margins are working. |
How to Use This Scorecard
Score your business against each row using the best data you have. For metrics you haven't measured yet, use your projections from the financial model -- but flag these as "projected" rather than "measured."
- All Green: Your unit economics are strong. Proceed to MVP with confidence.
- 1-2 Warnings: Your model works but has areas for improvement. Proceed, but prioritize improving these metrics during your MVP phase.
- 3+ Warnings: Your model has significant weaknesses. Consider a pivot on the weakest dimensions before building.
- Any Critical: A critical result is a stop sign. Do not build until you've addressed it -- either by fixing the underlying economics or by pivoting to a model that doesn't have this weakness.
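The thresholds in the table translate directly into code if you want to score your financial model programmatically. This is a minimal sketch -- the function names and the example inputs are our own -- covering four of the seven rows; the boundary handling (e.g. exactly 3:1 counting as "Warning") is our assumption, since the table leaves boundaries ambiguous:

```python
# Scorecard thresholds from the table above, as simple grading functions.
# Boundary values (exactly 3:1, exactly 12 months, etc.) are graded as
# "Warning" here -- an assumption, since the table doesn't specify.

def grade_ltv_cac(ratio):
    if ratio > 3:   return "Healthy"
    if ratio >= 1:  return "Warning"
    return "Critical"

def grade_payback_months(months):
    if months < 12:  return "Healthy"
    if months <= 18: return "Warning"
    return "Critical"

def grade_monthly_churn(pct):
    if pct < 2:  return "Healthy"
    if pct <= 5: return "Warning"
    return "Critical"

def grade_rule_of_40(growth_pct, margin_pct):
    score = growth_pct + margin_pct
    if score > 40:  return "Healthy"
    if score >= 20: return "Warning"
    return "Critical"

# Example: 2.5:1 LTV:CAC, 14-month payback, 3% churn, 30% growth at -5% margin
grades = [
    grade_ltv_cac(2.5),            # Warning
    grade_payback_months(14),      # Warning
    grade_monthly_churn(3.0),      # Warning
    grade_rule_of_40(30, -5),      # Warning (sum is 25)
]
print(grades.count("Warning"))     # 4 -> "3+ Warnings: consider a pivot"
```

The example business lands in "3+ Warnings" territory -- per the guidance above, that means reworking the weakest dimensions before building.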
Appendix D: AI Cost Benchmarks (2026)
These benchmarks represent market pricing as of early 2026. AI inference costs have been declining 50-70% annually, so these numbers will likely be lower when you read this. Use them as a starting point for cost modeling, but validate against current pricing from your specific providers.
The table below covers the major cost categories for AI-native products. Note the enormous range within each tier -- cost varies significantly based on provider, model size, and whether you're using the model's API or self-hosting.
| Model Tier | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) | Best Use Case | Example Models |
|---|---|---|---|---|
| Economy | $0.10 - $0.25 | $0.25 - $0.50 | Classification, simple extraction, formatting, data validation | Claude Haiku, GPT-4o mini, Llama 3 8B, Gemini Flash |
| Standard | $0.50 - $2.00 | $1.50 - $6.00 | General chat, summarization, Q&A, content generation | Claude Sonnet, GPT-4o, Llama 3 70B, Gemini Pro |
| Premium | $3.00 - $15.00 | $10.00 - $60.00 | Complex reasoning, code generation, multi-step analysis | Claude Opus, GPT-4 Turbo, Llama 3 405B |
| Frontier | $15.00 - $75.00 | $60.00 - $150.00 | Research-level analysis, advanced scientific reasoning | o1-pro, Claude Opus (extended thinking) |
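To turn the tier table into a cost model, multiply your expected request volume by per-request token counts and the per-million-token rates. The sketch below uses rough midpoints of the table's price ranges -- the prices, volumes, and token counts are illustrative assumptions, so substitute your provider's current published rates:

```python
# Back-of-envelope monthly inference cost. Prices are midpoints of the
# ranges in the tier table above -- illustrative, not provider quotes.

PRICE_PER_M_TOKENS = {          # (input, output) USD per 1M tokens
    "economy":  (0.175, 0.375),
    "standard": (1.25, 3.75),
    "premium":  (9.00, 35.00),
}

def monthly_cost(tier, requests_per_month, input_tokens, output_tokens):
    """Estimate monthly spend for one workload on one pricing tier."""
    in_price, out_price = PRICE_PER_M_TOKENS[tier]
    in_cost = requests_per_month * input_tokens / 1_000_000 * in_price
    out_cost = requests_per_month * output_tokens / 1_000_000 * out_price
    return in_cost + out_cost

# Example: 50,000 chat requests/month, ~1,500 input and 500 output tokens each
cost = monthly_cost("standard", 50_000, 1_500, 500)
print(f"${cost:,.2f}/month")   # roughly $187.50
```

Note how the same workload on the Premium tier would cost roughly an order of magnitude more -- which is why routing simple requests to cheaper tiers is usually the first optimization worth making.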
Additional AI Cost Components
| Component | Typical Cost Range | Scaling Factor | Optimization Tips |
|---|---|---|---|
| Embedding Generation | $0.02 - $0.13 per 1M tokens | Scales with document volume | Batch process documents. Use smaller embedding models for initial retrieval. |
| Vector Database Storage | $0.10 - $0.50 per GB/month | Scales with indexed content size | Archive old content. Optimize chunk sizes. Use tiered storage. |
| Fine-Tuning | $3 - $25 per 1M training tokens | One-time per training run | Use LoRA/PEFT for 90% cost reduction. Start with prompt engineering before fine-tuning. |
| Image Generation | $0.02 - $0.12 per image | Scales with generation volume | Cache common generations. Use lower resolution for previews. |
| Speech-to-Text | $0.006 per minute | Scales with audio minutes processed | Chunk long audio files. Use cheaper models for non-critical transcription. |
Cost Trend Projection
AI inference costs have been declining approximately 50-70% annually since 2023. Based on this trend, costs in your Year 2 financial model can reasonably assume a 40-50% reduction from current levels. However, your Year 1 model should use today's prices -- never build a model that requires future cost reductions to be viable. If the model works at today's costs and improves with cost declines, you have a robust foundation. If it only works with assumed cost declines, you're taking a bet that may not pay off.
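The "works at today's costs, improves with declines" test is a one-line calculation. This sketch (our own function names; the 45% decline rate is an assumption within the 40-50% range stated above) shows how to apply an assumed compound decline to Year 2 only:

```python
# Sketch of the guidance above: model Year 1 at today's prices and let
# Year 2 benefit from an assumed cost decline -- never the reverse.

def projected_unit_cost(current_cost, annual_decline=0.45, years=1):
    """Cost after `years` of compounded annual decline (an assumption)."""
    return current_cost * (1 - annual_decline) ** years

year1 = projected_unit_cost(0.010, years=0)  # today's price: $0.010/request
year2 = projected_unit_cost(0.010, years=1)  # assumed: ~$0.0055/request
print(year1, round(year2, 4))
```

If your unit economics only clear the bar with `year2` prices plugged into Year 1, the model fails the robustness test described above.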
Appendix E: Feasibility Analysis Glossary
Key terms used throughout this playbook, defined with precision for founders who may be encountering them for the first time:
| Term | Definition | Why It Matters |
|---|---|---|
| ARPU | Average Revenue Per User. Monthly revenue divided by active paying users. | The foundation of your revenue model. Every other metric depends on getting this right. |
| CAC | Customer Acquisition Cost. Total sales and marketing spend divided by new customers acquired. | The cost side of the unit economics equation. Must be "fully loaded" to be accurate. |
| LTV | Customer Lifetime Value. Total gross profit expected from a customer over their entire relationship. | The value side of the unit economics equation. Must use gross profit, not revenue. |
| NRR | Net Revenue Retention. Revenue from existing customers after accounting for churn, downgrades, and expansion. | Above 100% means your customer base grows in value without new acquisitions. |
| Gross Margin | (Revenue - COGS) / Revenue. The percentage of revenue remaining after direct costs. | Determines how much of each revenue dollar is available to cover fixed costs and profit. |
| Burn Rate | Net monthly cash consumption. Gross burn (total spending) minus revenue. | Determines your runway. Runway = Cash / Net Burn Rate. |
| Default Alive | A company that will reach profitability with current cash, without raising additional capital. | The most powerful strategic position. You can still take investors' money, but you don't need their permission to survive. |
| Rule of 40 | Growth Rate % + Profit Margin % >= 40. A benchmark for software company health. | Allows trade-off between growth and profitability. Both extremes are valid if the sum exceeds 40. |
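The glossary definitions above are all simple arithmetic, so they can be written out directly. This is a minimal sketch -- the function names and example figures are our own -- useful for sanity-checking a financial model against the definitions:

```python
# Direct translations of the glossary definitions above.
# Function names and example numbers are our own.

def arpu(monthly_revenue, paying_users):
    return monthly_revenue / paying_users

def cac(sales_marketing_spend, new_customers):
    return sales_marketing_spend / new_customers

def gross_margin(revenue, cogs):
    return (revenue - cogs) / revenue

def ltv(arpu_value, gross_margin_pct, avg_lifetime_months):
    # LTV uses gross profit, not revenue (per the definition above)
    return arpu_value * gross_margin_pct * avg_lifetime_months

def runway_months(cash, gross_burn, monthly_revenue):
    net_burn = gross_burn - monthly_revenue
    return float("inf") if net_burn <= 0 else cash / net_burn

# Example: $50 ARPU, 70% margin, 24-month average lifetime -> $840 LTV
print(ltv(50, 0.70, 24))
# $400K cash, $60K gross burn, $20K revenue -> 10 months of runway
print(runway_months(400_000, 60_000, 20_000))
```

Note the `runway_months` guard: once revenue covers gross burn, net burn is zero or negative and runway is effectively infinite -- that's the "Default Alive" condition from the glossary.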
What You Walk Away With
- Regulatory Checklist: A comprehensive, actionable compliance assessment covering data privacy, AI regulations, legal foundations, and industry-specific requirements. Ready to use as a working document during your MVP build.
- Pre-Mortem Template: A complete facilitation guide for running pre-mortem exercises, including seeded failure scenarios across financial, team, regulatory, and competitive categories.
- Unit Economics Scorecard: A quick-reference assessment tool with industry benchmarks for seven critical metrics, color-coded by health status with clear explanations of why each metric matters.
- AI Cost Benchmarks: Current reference prices for all major AI cost categories (inference, embeddings, vector storage, fine-tuning), with optimization strategies and cost trend projections.
- Feasibility Glossary: Precise definitions of all key terms used throughout this playbook, serving as a reference document for team alignment and investor conversations.
Interactive Versions
LeanPivot's AI-powered tools include interactive versions of these checklists and scorecards with automatic scoring and personalized recommendations.