How to Use an Analyst for Win/Loss Pattern Detection

A working method for turning a year of win/loss interviews into named patterns your sales and product teams will actually act on next quarter

8 min read · For PMM · Updated Apr 27, 2026

A year's worth of win/loss interviews is roughly 60,000 words of transcripts, fifteen scattered Notion pages, and a Slack channel where deals go to be eulogized once and forgotten. The patterns are in there. The problem is that no one on your team has eight uninterrupted hours to read every transcript twice, which is what pattern detection actually requires.

This is the work an analyst — an LLM with the right context loaded and the right prompts — does well. Not "AI summarizes your calls." Pattern detection. Naming the three reasons you lost the mid-market segment last quarter, with quotes, and ranked by deal value.

A pattern isn't a theme. A theme is "pricing came up." A pattern is "deals over $80k stalled when procurement asked for SOC 2 evidence we had but didn't surface."

What pattern detection actually means

Most win/loss reports stop at themes. "Pricing was a concern in 40% of losses." That sentence has personally cost me a deal — it sends product into a discount spiral and leaves the actual cause untouched.

A pattern names four things: the segment it applies to, the deal stage where it shows up, the mechanism (what the buyer was trying to do), and the counterfactual (what would have changed the outcome). When you can fill all four, you have something a product manager can roadmap and a salesperson can rebut.
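The four fields above can be made concrete as a tiny data structure. This is a sketch, not a prescribed schema; the field and function names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    segment: str         # who it applies to, e.g. "mid-market, deals over $80k"
    stage: str           # deal stage where the pattern shows up
    mechanism: str       # what the buyer was trying to do
    counterfactual: str  # what would have changed the outcome

def is_actionable(p: Pattern) -> bool:
    """A candidate only graduates from theme to pattern when all four fields are filled."""
    return all(v.strip() for v in (p.segment, p.stage, p.mechanism, p.counterfactual))
```

The SOC 2 example passes the check; "pricing came up" fills one field and leaves three blank, which is exactly what makes it a theme instead of a pattern.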

The reason analysts help here isn't intelligence. It's stamina. An analyst will read 47 transcripts at the same level of attention as the first one. You won't, and neither will I.

Step 1 · Load the corpus, not the summary

The single most common mistake: feeding the analyst the polished deck the research vendor handed you. That deck was written to defend a hypothesis. You want raw input the analyst can re-read with a fresh question.

Step 2 · Ask questions that surface mechanism, not vibe

The default prompt — "what are the top reasons we lost?" — produces a list that sounds like the CRM dropdown. Useless.

Better prompts force the analyst to identify the buyer's job-to-be-done at the moment they chose someone else.

Prompts that surface real patterns
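For example, prompts along these lines. The wording is illustrative, not a canonical set; the point is that each one asks for mechanism and evidence, never "top reasons."

```python
# Hypothetical prompt templates for mechanism-surfacing questions.
MECHANISM_PROMPTS = [
    "For each lost deal, name the buyer's job-to-be-done at the moment they "
    "chose someone else, with one supporting quote.",
    "Group losses by segment, then by deal stage at loss. For each group, state "
    "what the buyer was trying to accomplish that we blocked or failed to evidence.",
    "For each candidate pattern, quote the two transcripts that best illustrate "
    "it and one transcript that contradicts it.",
]

def build_prompt(template: str, transcripts: list[str]) -> str:
    """Prepend the raw corpus so the analyst re-reads transcripts, not summaries."""
    corpus = "\n\n---\n\n".join(transcripts)
    return f"{corpus}\n\n{template}"
```

Note the third template asks for disconfirming evidence up front; a pattern that can't survive one contradicting transcript was never a pattern.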

Step 3 · Force the pattern into a four-field shape

Make the analyst output every candidate pattern in the same structured format. Free-form summaries are unfalsifiable and impossible to triage.

A four-field pattern is something a head of sales can act on Monday. A free-form summary is a slide.

Step 4 · Rank patterns by recoverable revenue, not frequency

Most pattern reports rank by how often the pattern showed up. This is wrong. A pattern that shows up in 12 sub-$30k losses matters less than one that shows up in three $400k losses.

3.2×
The value gap between the median enterprise loss and the median mid-market loss in our 2025 cohort of B2B SaaS clients (Stratridge win/loss aggregate analysis, 2026).

For each candidate pattern, the analyst should report: number of deals affected, total ACV at risk, segment concentration, and whether the pattern is accelerating or decaying over the four-quarter window. Patterns getting worse outrank patterns that are stable, even if stable ones are bigger today.
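Those metrics fold into a simple score. The weighting below, ACV at risk boosted when the pattern is accelerating, is one reasonable choice among many; the structure and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    deals_affected: int
    acv_at_risk: float            # total ACV across affected deals
    quarterly_counts: list[int]   # occurrences per quarter, oldest first

def trend(c: Candidate) -> float:
    """Positive when the pattern is accelerating over the window: compares
    the back half of the quarterly counts to the front half."""
    q = c.quarterly_counts
    half = len(q) // 2
    return (sum(q[half:]) - sum(q[:half])) / max(sum(q), 1)

def rank(candidates: list[Candidate]) -> list[Candidate]:
    # ACV at risk, boosted up to 2x when accelerating; decaying patterns
    # get no boost but keep their raw ACV weight.
    return sorted(
        candidates,
        key=lambda c: c.acv_at_risk * (1 + max(trend(c), 0.0)),
        reverse=True,
    )
```

Run the article's own example through it: twelve sub-$30k losses with a flat trend score lower than three $400k losses with a rising one, which is the ordering a frequency ranking gets backwards.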

"We had a pattern that explained 40% of our enterprise losses. It had been showing up for a year. Nobody saw it because each AE saw two or three deals fit the pattern, and the pattern needed forty deals to show its shape."

Sasha Reyes (composite: three VP-level PMMs at Series B/C SaaS companies)

Step 5 · Hand patterns to the team in the format they'll use

A pattern report read once and filed never moved a metric. The patterns that move metrics get translated immediately into artifacts the receiving team already uses.

• Sales gets battle card updates — one paragraph per pattern, in the existing card structure, with the counterfactual as the rebuttal.
• Product gets a roadmap input doc — pattern, segment, ACV at risk, suggested capability or evidence gap.
• Marketing gets messaging diffs — the specific page, deck slide, or email where the pattern's counterfactual should be visible earlier in the funnel.

If you generate a pattern and don't ship it into one of those three artifacts within two weeks, assume it died on the vine. That's the actual failure mode of win/loss programs — not bad analysis, but analysis that never reached the people who could act on it.
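The sales-facing translation is mechanical enough to template. A minimal sketch, assuming a pattern arrives as a dict with the four fields plus a name; the format string is a stand-in for whatever your existing battle card structure looks like.

```python
def battle_card_update(p: dict) -> str:
    """One paragraph per pattern, counterfactual positioned as the rebuttal.
    Field names (name, segment, stage, mechanism, counterfactual) are assumptions."""
    return (
        f"{p['name']} ({p['segment']}, surfaces at {p['stage']}): "
        f"{p['mechanism']} "
        f"Rebuttal: {p['counterfactual']}"
    )
```

The point of templating it is the two-week deadline: if shipping the artifact is one function call, "died on the vine" stops being the default outcome.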

What the analyst won't do for you

It won't decide which pattern to fix first. That's a strategy call about where the company has slack, what the product roadmap can absorb, and which competitor you're most worried about in the next two quarters. The analyst gives you ranked, evidenced candidates. You decide.

It won't conduct the interviews. The interview itself — the open-ended follow-up question, the moment the buyer says something unguarded — still requires a human researcher who knows the product. An analyst that only ever reads transcripts inherits whatever the interviewer missed.

It won't replace the read. The PMM who owns this still reads the top eight transcripts end to end. The analyst surfaces what to read closely; it doesn't substitute for the close read.

What to do this week

Pull last quarter's lost-deal transcripts into a single folder. Run one prompt: "Group these losses by segment, then by deal stage at loss. For each group, name the buyer's job-to-be-done at the moment they chose someone else, and find the two transcripts that best illustrate it." Read the output. If a single named pattern survives the disconfirming-evidence test, you've recovered more value from your win/loss program in an afternoon than most teams get in a year.
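The folder-to-prompt step is a few lines. A sketch assuming transcripts are plain `.txt` files in one directory; adjust the glob for whatever your export format actually is.

```python
from pathlib import Path

PROMPT = (
    "Group these losses by segment, then by deal stage at loss. For each group, "
    "name the buyer's job-to-be-done at the moment they chose someone else, and "
    "find the two transcripts that best illustrate it."
)

def assemble(folder: str) -> str:
    """Concatenate every transcript in the folder ahead of the one prompt,
    separated so the analyst can tell where one call ends and the next begins."""
    parts = [p.read_text() for p in sorted(Path(folder).glob("*.txt"))]
    return "\n\n---\n\n".join(parts) + "\n\n" + PROMPT
```

Paste the result into the analyst as-is: the raw corpus first, the question last, no vendor deck in between.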
