PivotBuddy

Chapter 12 of 15

Chapter 12: Deep Dive - Bullseye Framework

ICE scoring, traction testing, and channel validation.

What You'll Learn

This deep dive expands on the Bullseye Framework introduced in Chapter 6. You will learn how to run structured channel experiments, calculate the true cost per acquisition for each channel, understand the concept of channel-market fit, and develop a systematic approach to scaling your winning channel while preparing the next one.

The Bullseye Framework: A Step-by-Step Implementation Guide

In Chapter 6, we introduced the Bullseye Framework as a method for prioritizing traction channels. This deep dive provides the operational playbook for executing the framework -- from initial brainstorm through testing to scaling your core channel. Most founders fail at channel selection not because they choose the wrong channel, but because they test channels incorrectly, declare failure prematurely, or spread their resources too thin across too many experiments simultaneously.

Gabriel Weinberg, who developed the Bullseye Framework in his book Traction, emphasizes a counterintuitive truth: the channel that will work best for your startup is almost certainly not the one you think it is. Founders have strong biases -- engineers gravitate toward SEO and content marketing because those are "build" activities. Former salespeople gravitate toward outbound sales. Former marketers gravitate toward paid ads. The Bullseye Framework forces you to consider all nineteen channels with equal weight before your biases kick in.

Phase 1: The Outer Ring -- Brainstorming All 19 Channels

The first phase requires you to spend at least one focused session (90 minutes minimum) brainstorming a concrete strategy for every single one of the nineteen traction channels. This feels excessive, but it is the core of the framework. By forcing yourself to think through channels you would normally dismiss, you often discover unexpected opportunities.

For each channel, answer three questions:

  1. What would we do in this channel? Describe a specific, actionable experiment. Not "do SEO" but "publish 10 comparison articles targeting '[competitor] alternative' keywords."
  2. How much would it cost to test? Estimate the budget and time required for a meaningful test (typically 2-4 weeks).
  3. What is the expected outcome? What specific metric would you measure, and what threshold constitutes "success"?

The 19 Channels, with Test Ideas

  • Viral Marketing: Build a referral program with a double-sided incentive. Test: Can 10% of new users refer at least one person?
  • PR: Pitch a data-driven story to 5 tier-2 publications. Test: Does a feature drive measurable sign-ups?
  • Unconventional PR: Create a shareable stunt or provocative report. Test: Does it generate social sharing and backlinks?
  • SEM: Run Google Ads on 20 high-intent keywords for two weeks. Test: What is the cost per sign-up?
  • Social/Display Ads: Test 3 ad creatives on LinkedIn/Facebook targeting your ICP. Test: What is the cost per qualified lead?
  • Offline Ads: Test a local radio or podcast ad. Test: Use a unique promo code to track attribution.
  • SEO: Publish 5 "best [category]" or "how to [problem]" articles. Test: Do they rank within 90 days?
  • Content Marketing: Publish a definitive guide or original research report. Test: Does it generate leads through gated content?
  • Email Marketing: Build a 500-person list through content and test a 5-email nurture sequence. Test: What is the conversion rate from subscriber to trial?
  • Engineering as Marketing: Build a free tool that solves a related problem (like HubSpot's Website Grader). Test: Does it drive organic traffic and sign-ups?
  • Targeting Blogs: Guest post on 5 industry blogs read by your ICP. Test: Does each post drive measurable referral traffic?
  • Business Development: Partner with 3 complementary products for co-marketing. Test: Do partnerships drive qualified leads at a lower customer acquisition cost (CAC) than paid channels?
  • Sales: Cold-email 100 prospects with a personalized pitch. Test: What is the reply rate and demo booking rate?
  • Affiliate Programs: Recruit 10 affiliates with a 20% commission. Test: Can affiliates drive quality customers at an acceptable CAC?
  • Existing Platforms: List on Product Hunt, AppSumo, or an industry marketplace. Test: Does a launch drive sustained traffic beyond launch day?
  • Trade Shows: Attend one industry conference with a booth or demo. Test: What is the cost per qualified lead?
  • Offline Events: Host a meetup or workshop for your target audience. Test: Do attendees convert at a higher rate than other channels?
  • Speaking Engagements: Speak at 3 events where your ICP attends. Test: Does a talk generate inbound inquiries?
  • Community Building: Launch a Slack or Discord community around your problem space. Test: Does community engagement correlate with product adoption?

Phase 2: The Middle Ring -- Running Cheap Traction Tests

From your brainstorm, select the top 3-5 channels that seem most promising based on your knowledge of your customers, your budget, and your team's capabilities. The goal in Phase 2 is not to "do marketing" -- it is to run experiments. Each experiment should be time-boxed (2-4 weeks), budget-capped (spend the minimum needed to get statistically meaningful data), and measured against a specific hypothesis.
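The chapter subtitle mentions ICE scoring (Impact, Confidence, Ease), a common way to rank your brainstormed channels down to a testable shortlist. A minimal sketch follows; the channels and 1-10 ratings are invented for illustration, and some teams multiply the three factors instead of averaging them:

```python
# Rank brainstormed channels by ICE score: Impact, Confidence, Ease,
# each rated 1-10. Channels and ratings below are hypothetical examples.

def ice_score(impact, confidence, ease):
    """Average of the three ratings (some teams use the product instead)."""
    return (impact + confidence + ease) / 3

channels = {
    "SEO":               (8, 5, 4),  # high impact, slow and uncertain
    "LinkedIn Outbound": (7, 8, 7),  # founder has sales background
    "Content Marketing": (8, 6, 5),
    "Paid Social":       (6, 5, 8),  # cheap to test, unclear fit
}

# Highest-scoring channels become the Phase 2 shortlist.
ranked = sorted(channels.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for name, (i, c, e) in ranked:
    print(f"{name}: ICE = {ice_score(i, c, e):.1f}")
```

Treat the scores as a tiebreaker for your own judgment, not a substitute for it: Confidence in particular is just a codified gut feeling.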

The Channel Experiment Template

Channel: [e.g., LinkedIn Outbound Sales]
Hypothesis: [e.g., "We can book 5 demos per week by sending personalized LinkedIn messages to VP-level marketing leaders at agencies with 10-50 employees"]
Budget: [e.g., $200/month for LinkedIn Sales Navigator + 10 hours/week of founder time]
Duration: [e.g., 3 weeks]
Success Metric: [e.g., 5+ demo bookings per week at less than $100 CAC]
Kill Criteria: [e.g., Fewer than 2 demos booked after 200 messages sent]
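The template above is easy to encode so that success and kill criteria are checked mechanically rather than argued about mid-experiment. A minimal sketch, using the example values from the template (the field names and thresholds are illustrative, not prescriptions):

```python
# A channel experiment as a record with explicit success and kill checks.
# Values mirror the example template above; adjust per channel.

from dataclasses import dataclass

@dataclass
class ChannelExperiment:
    channel: str
    budget_per_month: float
    duration_weeks: int
    target_demos_per_week: int
    max_cac: float
    kill_demos: int           # kill if fewer demos booked than this...
    kill_after_messages: int  # ...after this many messages sent

    def succeeded(self, demos_per_week, cac):
        return demos_per_week >= self.target_demos_per_week and cac < self.max_cac

    def should_kill(self, demos_booked, messages_sent):
        return messages_sent >= self.kill_after_messages and demos_booked < self.kill_demos

exp = ChannelExperiment("LinkedIn Outbound Sales", budget_per_month=200,
                        duration_weeks=3, target_demos_per_week=5,
                        max_cac=100, kill_demos=2, kill_after_messages=200)

print(exp.succeeded(demos_per_week=6, cac=80))        # meets both thresholds
print(exp.should_kill(demos_booked=1, messages_sent=200))  # kill criteria hit
```

Writing the kill criteria down before the test starts is the point: it prevents the sunk-cost reasoning that keeps dead channels alive.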

Common Mistakes in Channel Testing

There are four mistakes that cause founders to draw incorrect conclusions from channel tests:

Quitting Too Early

Most channels require 4-8 weeks to produce meaningful data. Declaring a channel "dead" after one week of testing is premature. SEO, in particular, requires 3-6 months to show results. Content marketing needs 20+ published pieces before you can evaluate its effectiveness. Set realistic timelines for each channel based on its typical feedback cycle.

Testing Too Many Simultaneously

If you test 5 channels at once with a 3-person team, each channel gets 20% of your effort. That is not enough to learn anything meaningful. Better to test 2 channels deeply than 5 channels superficially. Each test needs a dedicated owner who is responsible for the experiment design, execution, and analysis.

Measuring the Wrong Metric

Do not measure clicks or impressions. Measure cost per activated customer. A channel that generates 10,000 website visitors but zero sign-ups is worse than a channel that generates 50 visitors and 5 paying customers. Track the full funnel from impression to activation to retention, not just the top.
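The visitor comparison above can be made concrete with a few lines of arithmetic (the spend and funnel numbers are hypothetical):

```python
# Why cost per activated customer, not traffic, is the comparison metric.
# All figures below are invented for illustration.

def cost_per_activation(spend, activated):
    """Spend divided by activated customers; infinite if none activate."""
    return spend / activated if activated else float("inf")

channel_a = {"spend": 2000, "visitors": 10_000, "activated": 0}   # vanity traffic
channel_b = {"spend": 500,  "visitors": 50,     "activated": 5}   # real customers

for name, ch in [("A", channel_a), ("B", channel_b)]:
    print(f"Channel {name}: ${cost_per_activation(ch['spend'], ch['activated'])} per activation")
```

Channel A looks better on every top-of-funnel dashboard and is worthless; Channel B activates a customer for $100.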

Ignoring Channel-Market Fit

Not every channel works for every market. TikTok ads are unlikely to work for enterprise security software. Cold calling is unlikely to work for a $5/month consumer app. Match the channel's economics to your product's economics: high-CAC channels (sales teams, trade shows) require products with a high annual contract value (ACV), while low-ACV products need low-CAC channels (viral, SEO, content).

Phase 3: The Inner Ring -- Scaling Your Core Channel

After your Phase 2 tests, one channel should emerge as the clear winner -- the channel with the lowest cost per activated customer and the most promising scalability characteristics. Phase 3 is about going all-in on this channel. This means allocating 80% or more of your marketing resources to this single channel and optimizing every step of the acquisition funnel within it.

Scaling a channel is not simply "spend more money." It requires systematic optimization:

  • Optimize the creative. Test different headlines, images, copy, and calls to action. For paid channels, creative fatigue is a real phenomenon -- ads that worked last month may stop working this month.
  • Optimize the landing page. A/B test your landing page continuously. Small changes in headline, social proof, form length, and page speed can dramatically impact conversion rates.
  • Optimize the funnel. Track every step from first touch to activation. Where are users dropping off? Is it the sign-up form? The onboarding flow? The first session? Each drop-off point is an optimization opportunity.
  • Monitor CAC trends. As you scale, your CAC will increase (you are moving from early adopters to the mainstream). Track this closely and set a ceiling above which you will stop scaling and look for the next channel.
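The CAC ceiling in the last point is worth making explicit. A minimal monitoring sketch, with an invented ceiling and invented monthly cohorts:

```python
# Track CAC per monthly cohort against a preset ceiling; stop scaling
# spend once the ceiling is breached. All figures are hypothetical.

CAC_CEILING = 150.0  # the CAC above which this channel no longer pays back

monthly_cohorts = [          # (month, spend, activated customers)
    ("Jan", 4_000, 40),
    ("Feb", 8_000, 70),
    ("Mar", 16_000, 95),     # doubling spend no longer doubles customers
]

cac_by_month = {month: spend / customers for month, spend, customers in monthly_cohorts}

for month, cac in cac_by_month.items():
    status = "keep scaling" if cac <= CAC_CEILING else "hold spend, test next channel"
    print(f"{month}: CAC ${cac:.0f} -> {status}")
```

In this illustration, CAC rises from $100 to roughly $168 as spend quadruples; March breaches the ceiling, which is the signal to begin Phase 2 tests on the next channel.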

Channel Stacking: Preparing for the Next S-Curve

Every channel eventually saturates. The early adopters are cheap to reach; the mainstream is expensive. Your CAC within a channel will follow an S-curve: slow start (testing), rapid growth (scaling), and eventual plateau (saturation). The best companies begin testing their next channel while the current one is still in the growth phase, so they have a backup ready when saturation hits.

The S-Curve Stacking Strategy

Allocate your marketing resources using the 70/20/10 rule:

  • 70% on your proven core channel (the inner ring)
  • 20% on scaling a second channel that showed promise in Phase 2 testing
  • 10% on experimental tests of new channels (back to the outer ring)

This ensures you are maximizing returns from your proven channel while systematically building your next growth lever. When your core channel begins to saturate (CAC rises above your ceiling), you should already have a second channel ready to take over as the primary growth driver.
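Applied to a concrete budget, the 70/20/10 rule is a one-line calculation (the $10,000 monthly budget below is illustrative):

```python
# Split a monthly marketing budget per the 70/20/10 rule described above.

def allocate(budget, split=(0.70, 0.20, 0.10)):
    """Return dollar amounts for core channel, second channel, experiments."""
    labels = ("core channel", "second channel", "experiments")
    return {label: round(budget * share, 2) for label, share in zip(labels, split)}

plan = allocate(10_000)
print(plan)  # e.g. core channel: 7000.0, second channel: 2000.0, experiments: 1000.0
```

The exact percentages matter less than the discipline: some fixed, non-zero slice of budget is always funding the next S-curve.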

Channel-Product Fit Matrix

Different product types have natural affinities for different channels. While the Bullseye Framework encourages you to test broadly, understanding these natural affinities can help you prioritize your initial tests:

  • B2B SaaS ($50-500/mo)
    Natural channels: Content Marketing, SEO, LinkedIn Outbound, Partnerships
    Why: Buyers research online; long decision cycles favor education-based marketing.
  • Enterprise SaaS ($5K+/mo)
    Natural channels: Outbound Sales, Trade Shows, Referrals, Account-Based Marketing
    Why: High ACV justifies high-touch sales; buying committees need relationship building.
  • Consumer App (Free/Low-cost)
    Natural channels: Viral/Referral, Social Media, App Store Optimization, Influencer Marketing
    Why: Low ACV requires low-CAC channels; word-of-mouth is essential for consumer adoption.
  • Marketplace
    Natural channels: SEO, Community Building, Partnerships, Supply-Side Outreach
    Why: Must solve the chicken-and-egg problem; SEO captures intent from both sides.
  • Developer Tools
    Natural channels: Engineering as Marketing, Open Source, Community (Discord/Slack), Developer Relations
    Why: Developers distrust traditional marketing; they adopt tools discovered through peers and practice.

Attribution: Knowing What Actually Works

As you scale multiple channels, attribution -- knowing which channel actually drove a conversion -- becomes critical and increasingly difficult. A customer may see a LinkedIn ad, read a blog post, attend a webinar, and then sign up through a Google search. Which channel gets the credit?

There is no perfect answer, but here are practical approaches for early-stage companies:

  • Ask them. Add a "How did you hear about us?" field to your sign-up form. Self-reported attribution is imperfect but provides a useful signal, especially for channels like word-of-mouth that are invisible to tracking pixels.
  • Use UTM parameters religiously. Tag every link you share with UTM parameters so you can track which campaigns and channels drove traffic to your site.
  • Track first-touch and last-touch. First-touch attribution tells you which channel introduced the customer to your brand. Last-touch tells you which channel closed the deal. Both are useful; neither is complete.
  • Run sequential tests. The most reliable method at early stage: turn channels on and off one at a time and measure the impact on overall sign-ups. This is crude but eliminates multi-touch attribution complexity entirely.
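First-touch and last-touch attribution are easy to compute once touchpoints are recorded (for example via UTM-tagged visits). A minimal sketch over one user's journey; the channels and timestamps are invented:

```python
# First-touch vs last-touch attribution for a single user's journey,
# as it might be reconstructed from UTM-tagged visits. Data is hypothetical.

touchpoints = [              # (timestamp, channel), in chronological order
    (1, "linkedin_ad"),      # first saw the brand
    (2, "blog_post"),
    (3, "webinar"),
    (4, "google_search"),    # the visit on which the user signed up
]

first_touch = touchpoints[0][1]   # credit for introducing the brand
last_touch = touchpoints[-1][1]   # credit for closing the sign-up

print(f"first-touch: {first_touch}, last-touch: {last_touch}")
```

Note how the two models disagree here: last-touch would credit Google search, hiding the LinkedIn ad that started the journey, which is exactly why the text recommends tracking both.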
