Win/Loss Analysis · Guide

Win/Loss Analysis for Churned Customers (Not Just Lost Deals)

Churn interviews ask different questions than lost-deal interviews — and produce different, often more valuable, findings. The six-question script, the specific timing that matters, and the routing that prevents churn data from sitting unused.

9 min read·For CMO·Updated Apr 19, 2026

Lost-deal interviews and churn interviews are sometimes run by the same team with the same script. This is wrong. A lost-deal interview is asking "why didn't you pick us" — a compressed evaluation question. A churn interview is asking "why did you leave us after having picked us" — a long-term relationship question. The findings they produce are different, and so the questions and the timing should be different too.

Most companies run decent lost-deal programs and thin churn programs. The imbalance leaves the more revealing data source underused. A churned customer has used your product for months or years; their reasons for leaving reflect accumulated experience that no lost-deal interview can access. The specific discipline below is calibrated to extract that signal.

Why timing matters more for churn than lost deals

A lost-deal interview is best run 2–4 weeks after the decision — close enough that the reasoning is fresh, far enough that the buyer isn't in the adrenaline of having just made the call. The window is narrow but forgiving.

Churn timing is different. Interview too early (immediately after cancellation) and the customer is in the "firing you" emotional state, which produces unhelpfully angry feedback. Interview too late (six months after) and the customer has moved on and can't reconstruct the reasoning.

The practical implication: churn interviews are scheduled for weeks 2–6 post-cancellation. A churn program that runs interviews in week 1 produces unusable venting data; a program that waits until week 10+ produces vague generalities.
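The week 2–6 window above can be expressed as a small scheduling helper, useful if cancellation dates already live in a CRM export. A sketch in Python; the function name and return shape are illustrative, not from the article:

```python
from datetime import date, timedelta

def churn_interview_window(cancellation_date: date) -> tuple[date, date]:
    """Return the (earliest, latest) dates for a churn interview,
    per the weeks-2-to-6 post-cancellation guideline."""
    return (cancellation_date + timedelta(weeks=2),
            cancellation_date + timedelta(weeks=6))

# Example: a customer who cancelled on 2026-03-02 should be
# interviewed between 2026-03-16 and 2026-04-13.
start, end = churn_interview_window(date(2026, 3, 2))
```

Anything outside the returned window goes to the "too raw" or "too stale" bucket and is skipped rather than rescheduled.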

The six-question churn interview

Lost-deal scripts focus on "why them not us." Churn scripts focus on "why leave us at all." Six questions, run verbatim, producing roughly 35 minutes of conversation.

The six churn-interview questions, in order

"Question 3 changed our positioning. Multiple customers told us they thought we were one kind of product when they bought and realized we were a different kind of product while using us. The positioning was over-promising in the pre-sale stage. We'd been losing customers to that gap for years before the churn interviews surfaced it."

Head of CS, vertical SaaS, on the power of question 3

What to do with the data

Churn-interview findings route to three different places, depending on the finding type.

Route 1 · Product gaps route to product

Findings about specific missing capabilities, workflow gaps, or integration issues go to the product team. Not as a roadmap request — as context. The product team is not obligated to act on every churn-cited missing feature, but they should know which ones accumulated into churn.

A specific pattern: if 3+ churn interviews in a quarter cite the same missing capability, the capability is a retention-grade product gap. The product team's roadmap should reflect this knowledge, even if the specific feature doesn't make the next release.

Route 2 · Positioning gaps route to marketing

Findings from question 3 (perception-reality mismatch) are positioning findings. They go to the PMM and CMO. The pattern — what customers thought you were vs. what they found you to be — reveals how your positioning is over-promising or under-promising in the pre-sale stage.

This is usually the highest-return churn-interview output. Fixing a positioning over-promise saves future customers from churning for the same reason. The fix is usually a specific pricing-page or sales-deck change — cheap to implement, high-value in churn reduction.

Route 3 · Relationship and process gaps route to CS

Findings about implementation quality, CSM relationship, or support experience route to the CS team. Not for blame — for process improvement. Patterns across interviews reveal which CS operational elements are contributing to churn risk.

If 4+ churn interviews in a quarter cite "my CSM changed three times" or "onboarding took too long," the CS process has an issue the team can address. Without the churn data, the team might not notice the pattern.
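The two pattern thresholds in the routing rules (3+ product-gap citations, 4+ CS-process citations per quarter) amount to a tally-and-flag pass over the quarter's findings. A minimal sketch; the theme names and data shape are hypothetical, the thresholds come from the text:

```python
from collections import Counter

# Escalation thresholds from the routing rules: 3+ citations of the
# same product gap, or 4+ citations of the same CS-process issue,
# within one quarter.
THRESHOLDS = {"product": 3, "cs": 4}

def flag_patterns(findings: list[tuple[str, str]]) -> dict[str, list[str]]:
    """findings: (route, theme) pairs from one quarter's interviews.
    Returns, per route, the themes that crossed the threshold."""
    counts = Counter(findings)
    flagged: dict[str, list[str]] = {}
    for (route, theme), n in counts.items():
        if n >= THRESHOLDS.get(route, float("inf")):
            flagged.setdefault(route, []).append(theme)
    return flagged

quarter = [
    ("product", "missing SSO"), ("product", "missing SSO"),
    ("product", "missing SSO"), ("cs", "CSM turnover"),
    ("cs", "CSM turnover"), ("cs", "CSM turnover"),
]
# "missing SSO" is flagged (3 citations meets the product threshold);
# "CSM turnover" is not (3 citations, below the CS threshold of 4).
```

The point of encoding the thresholds is that the quarterly synthesis becomes mechanical: themes either crossed the line or they didn't, which removes one place for the politics described later to creep in.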

The cadence and volume

A healthy churn-interview program runs 3–5 interviews per month. Higher volume is usually unsustainable (CSMs have limited time to schedule interviews); lower volume misses patterns.

At 4 interviews per month, a company accumulates 48 churn interviews per year — enough for recurring themes to separate from one-off complaints. The quarterly synthesis (run by the CS lead or PMM) groups findings by theme, produces a one-page note, and routes it to the relevant teams.

The volume should scale with churn volume. A company churning 30 customers per quarter can interview 4 per month (40% of churned customers); a company churning 100+ per quarter should interview a larger sample, typically 8–12 per month.
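The scaling rule reduces to one line of arithmetic: a fixed share of churned customers, clamped to a sustainable band. The 40% sampling rate below is an assumption chosen to reproduce the article's 4-per-month example against 30 churners a quarter; the floor and cap restate the 3–5 and 8–12 per-month bands:

```python
def interviews_per_month(quarterly_churn: int, sample_rate: float = 0.4,
                         floor: int = 3, cap: int = 12) -> int:
    """Monthly churn-interview target: a fixed share of churned
    customers, clamped to a band CSMs can actually schedule."""
    monthly_churn = quarterly_churn / 3
    return max(floor, min(cap, round(monthly_churn * sample_rate)))

# 30 churners/quarter -> 4 interviews/month; 100+/quarter hits the cap.
```

At very low churn the floor dominates, which matches the article's warning that too few interviews never accumulate into patterns.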

What makes churn interviews harder than lost-deal interviews

Three challenges churn programs face that lost-deal programs don't.

Challenge 1: Scheduling the interview. Churned customers have no ongoing reason to engage with you, so response rates on interview requests run lower than for lost-deal prospects, who at least evaluated you recently. The solution: a small incentive (a $100 gift card is standard; higher for enterprise accounts), personal outreach from the former CSM, and clear framing that the interview is for your learning, not a retention save attempt.

Challenge 2: The retention-save temptation. Some churn interviews turn into retention conversations because the churned customer signals they might come back. This is a problem: the interview becomes a sales call, the data becomes unreliable, and other churned customers hear about the pattern and refuse to do interviews. Keep the interview a pure research conversation; run retention conversations separately.

Challenge 3: The emotional charge. Churned customers sometimes carry unresolved emotional reactions to the decision. Good interviewers defuse this by acknowledging the decision, not defending it, and staying curious. Interviewers who get defensive about churned customers' feedback produce worse data. Train the interviewer specifically on this.

The internal politics

Churn data reveals problems. Some point at product, some at marketing, some at CS, and each team has a political incentive to minimize or reframe the findings pointing at it.

The mitigation: the churn-interview program reports to the CEO or CMO, not to any of the teams being critiqued. Findings are delivered to the teams they implicate as written notes, with a follow-up meeting scheduled to plan the response, not to debate the findings.

Teams that learn from churn data get better at preventing churn. Teams that defend against churn data experience more of it. The difference is usually the political dynamic around how the data is received, which is a cultural choice the CEO and CMO set. A CEO who publicly engages with churn findings — asking specific questions, thanking the churn-interview program for hard-to-hear feedback — establishes the norm that the data is welcome. A CEO who signals discomfort with churn findings establishes the opposite.

The churn program's value is proportional to how seriously the company engages with what it reveals. The interviews themselves are the easy part. The culture that welcomes the findings and acts on them is where most of the value lives.

Related Stratridge Tool

Win/Loss Review

Turn every lost deal into something your team can actually act on.

Win/Loss Review takes your lost-deal notes and turns them into objection patterns, rebuttal suggestions, and positioning gaps — then writes the learning back to Strategic Context so the next deal benefits from it.

• Surfaces patterns across lost deals, not one-off anecdotes
• Generates rebuttal suggestions from real objections
• Feeds findings back into your strategic memory

Analyze your losses →