
Win/Loss Analysis for Usage-Based Churn

Usage churn doesn't announce itself the way sales churn does. Here's how to run a win/loss program that catches it before the renewal call.

9 min read · For CMOs · Updated Apr 28, 2026

Sales-driven churn shows up on a call. Someone tells the CSM the renewal is at risk, finance gets looped in, the deal team writes a save plan. Usage-based churn — the kind that quietly happens at a PLG or consumption-priced SaaS company — shows up as a bar chart that flattens. No call. No champion. No save play. By the time the CSM notices, the buyer has already chosen a different tool, a different team, or a different problem.

The win/loss playbook most teams inherited was built for the first kind of loss. It interviews economic buyers about a procurement decision. That playbook misses the second kind almost entirely, because there's no procurement decision to interview about — there's an absence of usage, and an absence is harder to ask questions about.

Usage churn is a loss of habit, not a loss of contract. The interview has to be designed for that.

Define the term before designing the program

"Usage-based churn" gets used loosely. For this guide: a customer's measured product usage drops below a threshold (active seats, API calls, monthly events, workflow runs) for a sustained window, and either the contract auto-downgrades, the consumption bill drops, or the renewal is declined without negotiation. The buyer didn't fire you. They stopped showing up.
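The definition above can be sketched as a simple flag. Everything here is illustrative, not a prescription: the metric, the floor of 20 weekly events, and the six-week sustained window are placeholder assumptions you would replace with your own thresholds.

```python
# Hypothetical sketch: flag accounts whose weekly usage stays below a
# floor for a sustained window. Floor and window are illustrative only.
def is_usage_churned(weekly_events, floor=20, sustained_weeks=6):
    """weekly_events: list of weekly event counts, most recent last."""
    recent = weekly_events[-sustained_weeks:]
    # Churned only if we have a full window and every week is under the floor.
    return len(recent) == sustained_weeks and all(w < floor for w in recent)

healthy = [80, 75, 90, 82, 78, 85, 88]
lapsed = [80, 75, 40, 12, 8, 5, 3, 2, 1]  # usage collapsed and stayed down
```

The point of the `sustained_weeks` guard is the "sustained window" in the definition: a single quiet week is noise, not churn.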

This is different from sales churn in ways that matter for the interview design: there is no decision moment to ask about, the economic buyer is often not the person who stopped showing up, and the loss accumulates gradually instead of arriving as a dated event.

The implication: a usage-churn win/loss program is less about asking "why did you not renew?" and more about reconstructing what happened in the weeks when the chart started bending.

The interview population is different too

In a sales-led loss, you interview the buyer. In a usage-based loss, the buyer doesn't know — they signed the contract or set the budget, and they haven't logged in for two months. The people who can tell you what happened are the people who used the product daily and stopped.

That's two practical changes:

  • Recruit from the activity log, not the CRM contact card. Pull the top three users by event volume from the 90 days before the inflection point. Those are your interviewees.
  • Expect a lower response rate and pay for it. Sales-loss interviews typically run 40–60% acceptance with a $100 honorarium. Usage-churn interviewees have less stake in the outcome — budget $200–250 and accept a 25–35% acceptance rate.
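Recruiting from the activity log can be sketched in a few lines. The event-log shape (user, date) and the field names are assumptions for illustration; the 90-day pre-inflection window comes from the bullet above.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical event log: (user_email, event_date) tuples. Field names
# are assumptions; the 90-day window matches the recruiting rule above.
def top_users_before_inflection(events, inflection, n=3, window_days=90):
    start = inflection - timedelta(days=window_days)
    counts = Counter(
        user for user, day in events
        if start <= day < inflection  # only pre-inflection activity
    )
    return [user for user, _ in counts.most_common(n)]

events = [
    ("ana@acme.io", date(2026, 1, 10)), ("ana@acme.io", date(2026, 2, 2)),
    ("ben@acme.io", date(2026, 1, 15)),
    ("cy@acme.io", date(2026, 2, 20)), ("cy@acme.io", date(2026, 2, 21)),
    ("cy@acme.io", date(2026, 2, 22)),
    ("dee@acme.io", date(2025, 10, 1)),  # outside the 90-day window
]
invite_list = top_users_before_inflection(events, inflection=date(2026, 3, 1))
```

`invite_list` is your interview shortlist, ranked by event volume rather than by who appears on the CRM contact card.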
14 days: the median window between the usage inflection point and the moment the lapsed user's memory of the switch becomes generic and unreliable (Stratridge interview-program data, 2026).

Step 1 — Identify the inflection point before you call

Don't interview blind. Before you reach out, instrument the account.

The point of this prep: you're not asking the user "why did you stop?" You're asking "in the week of October 14th, your team's workflow runs dropped from 80 to 12. Walk me through that week." Specificity surfaces real reasons. Open-ended questions surface rationalizations.
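Finding the inflection week before the call can be sketched as a trailing-median check. The four-week baseline and 40% drop cutoff are illustrative defaults, not a recommendation; the sample series mirrors the "80 to 12" drop in the example above.

```python
from statistics import median

# Hypothetical sketch: return the index of the first week whose volume
# falls below a fraction of the trailing median. Baseline length and
# drop ratio are illustrative assumptions.
def find_inflection_week(weekly_runs, baseline_weeks=4, drop_ratio=0.4):
    for i in range(baseline_weeks, len(weekly_runs)):
        baseline = median(weekly_runs[i - baseline_weeks:i])
        if weekly_runs[i] < drop_ratio * baseline:
            return i  # index of the first collapsed week
    return None  # no inflection found

runs = [78, 82, 80, 85, 80, 12, 9, 4]  # mirrors the "80 to 12" example
```

A median baseline is less sensitive to one-off spike weeks than a mean would be, which matters when the steady state is noisy.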

Step 2 — Run the interview around the timeline, not the relationship

The interview script for a usage-loss is structurally different from a sales-loss script. You're not asking about evaluation criteria. You're asking about a behavioral switch.

The script we use, in order:

1. Habit reconstruction. What was the team doing with the product in the steady-state period? Which workflow? How often? Who triggered it?
2. The disruption. What changed in the inflection week? A new project, a new tool introduced, a person leaving, a feature breaking, a price moving?
3. The substitute. What is the team doing now to accomplish what the product used to do? (This is the single most valuable question. The answer is your real competition — and it's often not who marketing thinks it is.)
4. The switching cost they paid. Did moving away cost them anything? If the answer is "no, it was easy" — that's a positioning problem, not a product problem.
5. The trigger they'd come back for. What would have to be true for them to start using the product again? Don't accept "lower price" as a final answer; probe for the underlying job.

Step 3 — Code the answers against four causes, not twelve

Sales-loss interviews often code against ten or fifteen reasons. For usage churn, four causes account for the vast majority of cases in the programs we've run.

You'll get a long tail of edge cases. Resist the urge to grow the taxonomy. The point is to make the program legible to product, marketing, and CS in a single quarterly review — not to be exhaustive.
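The coding step can be sketched as a small tally. The four cause labels below are hypothetical placeholders (this guide doesn't enumerate a universal taxonomy, and yours will differ); the point is the shape: a fixed set of causes, with the long tail forced into an explicit "other" bucket instead of growing the list.

```python
from collections import defaultdict

# Hypothetical four-cause taxonomy -- substitute your program's actual
# labels. "substitution" is the one cause named in this guide's example.
CAUSES = {"substitution", "champion_left", "workflow_changed", "product_friction"}

def code_interview(primary_cause):
    # Coerce long-tail answers into the fixed taxonomy rather than
    # growing it; "other" gets reviewed quarterly, not expanded.
    return primary_cause if primary_cause in CAUSES else "other"

def tally(interviews):
    """interviews: list of (primary_cause, account_arr) pairs."""
    arr_by_cause = defaultdict(int)
    for cause, arr in interviews:
        arr_by_cause[code_interview(cause)] += arr
    return dict(arr_by_cause)

result = tally([
    ("substitution", 40000),
    ("weird_edge_case", 5000),  # long tail -> coded as "other"
    ("champion_left", 20000),
])
```

Tallying ARR rather than interview counts sets up the rate computation in the next step.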

Step 4 — Compute the rate, not just the reasons

Most win/loss programs report which reasons came up most often. That's directionally useful, but it doesn't tell you which problem to fix first. To prioritize, you need rate-of-occurrence weighted by account value.

Cause priority = (% of churned ARR attributed to cause) × (estimated fix feasibility, 1–3)

Multiply, not add. A high-feasibility cause that only touches 4% of ARR is a distraction. A low-feasibility cause that touches 38% is a roadmap conversation.
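The priority formula, applied to the 4% and 38% contrast above (feasibility scores are illustrative):

```python
# Priority = (% of churned ARR attributed to cause) x (fix feasibility, 1-3),
# per the formula above. Feasibility scores here are illustrative.
def cause_priority(pct_churned_arr, feasibility):
    assert 1 <= feasibility <= 3, "feasibility is scored on a 1-3 scale"
    return pct_churned_arr * feasibility

easy_but_small = cause_priority(4, 3)   # high feasibility, 4% of ARR
hard_but_big = cause_priority(38, 1)    # low feasibility, 38% of ARR
```

Even at the lowest feasibility score, the 38% cause outranks the 4% cause at its highest: that's the "multiply, not add" point in numbers.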

A worked example from a recent program:

The top cause — substitution by an adjacent tool — was, in this case, a positioning problem masquerading as a product problem. The team had been hearing "we need more features" from the loudest accounts. The interviews showed that a quieter pattern was bigger: customers were filing the product mentally next to a tool they already had, and the adjacent tool kept absorbing the use case month by month. The fix wasn't features. It was changing what category noun the homepage used.

"We thought we had a product gap. We had a positioning gap. The interviews told us our customers couldn't explain to a new teammate why we existed alongside the tool they already had. So new teammates didn't open us."

VP of Customer Success, mid-stage data tooling company

Step 5 — Close the loop with marketing, not just product

A usage-churn finding routed only to product is half-routed. The four causes above each have a marketing implication that, in our experience, is faster to act on than the product fix.



What to do Monday

Pull last quarter's usage-churn list. Pick three accounts. For each, plot the 180-day curve, find the inflection week, identify the top three users from the activity log, and reach out to one of them with the two-question opener from Step 2. You'll learn more in those three conversations than in the next quarter of dashboard reviews.

The interviews are uncomfortable for the first month. The patterns start showing up around interview number nine. By interview fifteen, the four causes above will have sorted themselves into a rate table you can act on — and the question stops being "why are we losing usage?" and starts being "which cause do we fix first?"
