The standard launch retrospective is a forty-five-minute meeting that produces a Notion doc nobody reads. Three categories — what went well, what didn't, what we'd change — each with five bullets that read the same as last quarter's. The next launch repeats two of the same mistakes. The PMM who ran the retro knows it. So does everyone else in the room.
The failure isn't the participants. It's the format. A retrospective without structured prompts becomes a sentiment survey, and a sentiment survey can't tell you why the trial-to-paid rate dropped four points or why three AEs went off-script in week two. An analyst — used as a working partner, not a summarizer — changes the unit of analysis from "how did it feel" to "what specifically happened, and what does the evidence say about why."
What an analyst is good at, and what it isn't
An analyst is not a meeting replacement. It won't read the room, surface the political subtext, or notice that the head of sales has gone quiet because she disagrees with the launch lead. Those are human jobs.
What an analyst does well is the work most retros skip: cross-referencing the launch plan against the actual artifacts that shipped, pulling patterns out of unstructured win/loss notes, comparing the messaging hierarchy in the launch deck against what the homepage and pricing page actually say four weeks later, and asking the questions a fatigued team won't ask itself at 4pm on a Friday.
Step 1 · Feed the analyst the launch's actual artifacts
Before the meeting, load the analyst with everything the launch produced. Not the plan — the output. The plan tells you what was supposed to happen. The artifacts tell you what did.
What to load before the retro
- The launch plan and the messaging hierarchy from the launch deck
- The homepage and pricing page as they read today, not as they were briefed
- Win/loss interview notes, raw and unstructured
- Funnel metrics against their targets, week by week (e.g., trial-to-paid rate)
- Whatever the field actually ran, including where AEs departed from the script
The analyst's job in this phase is to read everything end-to-end before any human does. By the time the team sits down, the analyst should already have a draft of the gaps it's spotted.
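What "loaded" means in practice depends on your stack. Here is a minimal sketch, assuming a hypothetical `analyst` client that exposes `add_context` and `ask` methods; the file names and the API are illustrative, not a real SDK.

```python
from pathlib import Path

# Hypothetical file names -- substitute whatever your launch actually produced.
ARTIFACTS = [
    "launch/messaging-deck.md",     # the hierarchy the team agreed on
    "launch/homepage-week4.md",     # the homepage as it reads four weeks later
    "launch/pricing-page-week4.md",
    "launch/winloss-notes.md",      # raw win/loss interview notes
    "launch/funnel-metrics.csv",    # e.g. weekly trial-to-paid rate
]

def preload(analyst):
    """Feed every artifact to the analyst before any human reads it."""
    for path in ARTIFACTS:
        # add_context is an assumed client method, not a specific vendor API
        analyst.add_context(name=path, content=Path(path).read_text(encoding="utf-8"))
    # Ask for the first-pass gap draft the team will argue with in the room.
    return analyst.ask(
        "Compare the launch plan and messaging deck against the live pages, "
        "the win/loss notes, and the funnel numbers. List every gap you can "
        "evidence, and cite the artifact behind each one."
    )
```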
Step 2 · Run the retrospective against structured prompts, not open questions
"What went well" is the wrong opening question. It rewards the loudest voice and produces sentiment, not signal. Replace it with a structured set of prompts the analyst runs the team through.
The team's job through all four steps is to argue with the analyst, not nod along. The analyst is wrong about something — the question is what, and what that error reveals about the team's mental model of the launch.
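For concreteness, here is what such a prompt set might look like, written against the gaps this piece has already named. The wording is illustrative, and `analyst.ask` assumes the same hypothetical client as the loading sketch above.

```python
# Illustrative prompt set, ordered so evidence comes before judgment.
RETRO_PROMPTS = [
    "Which planned artifacts shipped late, shipped changed, or never shipped? "
    "Cite the plan line and the artifact.",
    "Where do the live homepage and pricing page diverge from the messaging "
    "hierarchy in the launch deck?",
    "Which funnel metric missed its target, by how much, and what do the "
    "win/loss notes say about why?",
    "Where did the field go off-script, in which week, and what prompted it?",
]

def run_structured_retro(analyst):
    # One prompt at a time: the team argues with each answer before moving on.
    return [(prompt, analyst.ask(prompt)) for prompt in RETRO_PROMPTS]
```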
Step 3 · Force the analyst to name what the next launch must change
This is the step most retros skip. The team identifies problems and files them under "we should…", with no owner, no deadline, and no review date. Six weeks later the same problems show up in the next retro.
"We had documented every messaging gap from the last three launches. The analyst pulled them up in our retro and we'd repeated two of them verbatim. That was the moment the format changed for us."
Ask the analyst three closing questions:
- What pattern repeats across this launch and the previous two?
- What single change to the next launch's plan would have the largest effect on the metric we missed?
- What's the smallest experiment we could run in the next four weeks to test whether that change works?
The answers go into the launch playbook for the next launch, not into a retro doc. If they don't change a future plan, they didn't earn the meeting.
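One way to make owner, deadline, and review date non-optional is to give every playbook change a record shape that fails without them. A sketch, not a prescription; the field values shown are invented examples.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PlaybookChange:
    """One retro output that edits the next launch's plan."""
    pattern: str       # what repeated across this launch and the previous two
    change: str        # the single plan change with the largest expected effect
    experiment: str    # the smallest test runnable in the next four weeks
    owner: str         # a name, not a team
    deadline: date
    review_date: date  # when someone checks whether the change actually landed

# Invented example -- constructing one without an owner or dates raises TypeError.
change = PlaybookChange(
    pattern="Pricing page drifts from launch messaging by week four",
    change="Add a week-three messaging audit to the launch plan template",
    experiment="Audit one live page this cycle and measure drift against the deck",
    owner="J. Rivera",
    deadline=date(2025, 7, 1),
    review_date=date(2025, 8, 1),
)
```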
Step 4 · Disagree with the analyst on the record
The analyst will get things wrong. It will misread a pattern, weight a win/loss interview too heavily, or miss context only the field knows. Capture those disagreements in writing, attached to the retro output, with the human's reasoning.
This serves two purposes. First, it builds the analyst's strategic context — every disagreement is a calibration signal that improves the next analysis. Second, it surfaces the implicit knowledge that lives in senior PMM heads and dies when they leave. A retrospective doc that includes "the analyst said X, but here's why we know that's wrong in our market" is a far better artifact than one that just lists what went well.
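The capture format matters less than its consistency; a disagreement is only a calibration signal if it lands in the same shape every time. A minimal sketch, with invented values rather than real client data:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Disagreement:
    """A human override of the analyst, attached to the retro output."""
    analyst_claim: str   # what the analyst concluded
    human_counter: str   # what the team knows instead
    reasoning: str       # the market or field context the analyst lacked
    author: str          # whose judgment this records
    logged: date = field(default_factory=date.today)

# Invented example of the "analyst said X, but here's why that's wrong" record.
entry = Disagreement(
    analyst_claim="Recent losses were primarily price-driven",
    human_counter="Price was the stated reason, not the operative one",
    reasoning="In our market, 'too expensive' usually means the champion "
              "couldn't build the internal case; the field sees this weekly.",
    author="Head of Sales",
)
```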
What changes when retros work
Three things, in our client work, separate retrospectives that change behavior from retrospectives that don't: the evidence is in the room, the questions are structured, and the conversation never collapses back into sentiment.
The analyst doesn't create these properties on its own. It creates the conditions for a team that wants them to actually achieve them: it holds the artifacts, asks the structured questions, and refuses to let the meeting drift.
What to do Monday
Pull your last three launch retro docs. For each one, count the documented decisions that actually changed the subsequent launch's plan. If the count across three retros is under five, the format is the problem, not the team. Replace the next retro's agenda with the four steps above and load the analyst before anyone walks into the room.
The retrospective is the cheapest research you'll run all quarter. It's already paid for in calendar time. The only question is whether it produces the document that changes the next launch — or the document that gets archived next to the last one.