A manual message-consistency audit takes a competent PMM about four hours: open every customer-facing surface, copy the headlines and value props into a spreadsheet, color-code the drift, write the memo, route it to the people who own each asset. Nobody schedules this. So nobody does it. Then a prospect quotes your homepage back to a sales rep and the rep doesn't recognize the sentence.
An AI analyst — meaning a chat interface with the right context loaded and the right prompts pointed at it — collapses those four hours to roughly thirty minutes. Not because the model is smarter than the PMM. Because the boring extraction-and-comparison work is exactly what a language model is good at, and the PMM's judgment shows up only at the end.
What "consistency" actually means when you're checking it
Three things, in order of how often they drift:
- Category noun. What you say you are. "Customer data platform" on the homepage, "marketing automation tool" in the sales deck, "growth platform" in the pricing tier names.
- Primary value claim. The before/after promise. "Cut onboarding time by half" on the website, "give your CSMs their Fridays back" in the one-pager, "reduce TTFV" in the deck.
- ICP signal. Who the copy is talking to. "Series B SaaS RevOps leaders" on the homepage, "any growing company" in the case studies, "enterprises looking to scale" in the deck.
A consistency check is the work of pulling those three things from every surface and lining them up. If the analyst can extract them reliably, the diff is trivial.
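To make "the diff is trivial" concrete, here is a minimal sketch in Python. The field names, surface labels, and example values are illustrative (taken from the examples above), not from any particular tool; the idea is just that once each surface is reduced to the same three fields, drift detection is a set comparison.

```python
# Sketch: diff the three positioning fields across surfaces.
# Field names and example values are illustrative, not prescriptive.

FIELDS = ("category_noun", "value_claim", "icp")

def diff_surfaces(extractions: dict[str, dict[str, str]]) -> dict[str, dict[str, str]]:
    """Return only the fields where surfaces disagree.

    `extractions` maps surface name -> {field -> extracted phrase}.
    A field with more than one distinct non-"absent" value counts as drift.
    """
    drift = {}
    for field in FIELDS:
        values = {s: e.get(field, "absent") for s, e in extractions.items()}
        distinct = {v for v in values.values() if v != "absent"}
        if len(distinct) > 1:
            drift[field] = values
    return drift

audit = {
    "homepage":  {"category_noun": "customer data platform",
                  "value_claim": "cut onboarding time by half",
                  "icp": "Series B SaaS RevOps leaders"},
    "sales_deck": {"category_noun": "marketing automation tool",
                   "value_claim": "reduce TTFV",
                   "icp": "enterprises looking to scale"},
    "one_pager": {"category_noun": "customer data platform",
                  "value_claim": "give your CSMs their Fridays back",
                  "icp": "absent"},
}

for field, values in diff_surfaces(audit).items():
    print(f"DRIFT in {field}: {values}")
```

With the sample data above, all three fields come back as drift, which is exactly the memo-worthy finding.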
The prompt pattern that works
Every prompt in the pack follows the same shape — give the model a role, a single surface, an extraction schema, and an instruction to refuse to fill in gaps. The refusal-to-guess part matters most.
A working extraction prompt looks like this:
You are a positioning analyst auditing a single web page. From the text below, extract: (1) the category noun used to describe the product, (2) the primary value claim in the hero, (3) the named ICP if any, (4) any secondary value claims in the next-fold sections. If a field is absent or ambiguous, write "absent" or "ambiguous — see notes" rather than guessing. Show the exact phrase you're extracting from in each case.
Run that across five surfaces. Paste the five outputs into a sixth prompt that diffs them. That's the audit.
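Mechanically, that audit is two prompt templates and a loop. A hedged sketch, assuming you wire in your own chat API where noted (the model call is deliberately left as a comment rather than a real library call):

```python
# Sketch of the two-step audit: one extraction prompt per surface,
# then one diff prompt over all the extraction outputs.

EXTRACTION_PROMPT = """You are a positioning analyst auditing a single web page.
From the text below, extract: (1) the category noun used to describe the
product, (2) the primary value claim in the hero, (3) the named ICP if any,
(4) any secondary value claims in the next-fold sections. If a field is absent
or ambiguous, write "absent" or "ambiguous - see notes" rather than guessing.
Show the exact phrase you're extracting from in each case.

--- PAGE TEXT ---
{page_text}"""

DIFF_PROMPT = """You are a positioning analyst. Below are field extractions
from {n} customer-facing surfaces. List every field where the surfaces
disagree, quoting the conflicting phrases. Do not invent fields or surfaces.

{extractions}"""

def build_audit_prompts(surfaces: dict[str, str]) -> tuple[dict[str, str], str]:
    """`surfaces` maps surface name -> raw page/deck text."""
    extraction_prompts = {
        name: EXTRACTION_PROMPT.format(page_text=text)
        for name, text in surfaces.items()
    }
    # In real use, run each extraction prompt through your chat API:
    #   outputs = {name: call_model(p) for name, p in extraction_prompts.items()}
    # Here we stub the outputs to show how the diff prompt is assembled.
    placeholder_outputs = {name: f"[extraction for {name}]" for name in surfaces}
    joined = "\n\n".join(f"## {name}\n{out}"
                         for name, out in placeholder_outputs.items())
    diff_prompt = DIFF_PROMPT.format(n=len(surfaces), extractions=joined)
    return extraction_prompts, diff_prompt
```

The extraction wording matches the prompt above; everything else (function names, the `##` headers) is one possible arrangement, not a required one.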
The thirty-minute workflow
The judgment call — which drift matters this quarter — is the one part the model can't do for you. It can rank drift by buyer visibility, but it doesn't know that your category noun is intentionally evolving because the board pushed back on the old one in February. Keep that context in your head, or write it into the diff prompt as a constraint.
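That institutional context can travel as an explicit constraint block appended to the diff prompt. A small sketch, with constraint wording that is mine, not a required format:

```python
# Append known-intentional changes so the diff doesn't flag them as drift.
# The example constraint text below is illustrative.
KNOWN_INTENTIONAL = [
    "The category noun is mid-transition; differences between the old and "
    "new noun are expected and should be noted, not flagged as drift.",
]

def with_constraints(diff_prompt: str, constraints: list[str]) -> str:
    """Return the diff prompt with a 'do not flag' constraint block appended."""
    if not constraints:
        return diff_prompt
    block = "\n".join(f"- {c}" for c in constraints)
    return f"{diff_prompt}\n\nKnown intentional changes (do not flag):\n{block}"
```

Calling `with_constraints(diff_prompt, KNOWN_INTENTIONAL)` before you send the diff prompt keeps February's decision out of next quarter's drift report.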
The first audit took me an afternoon because I was second-guessing the model. The third audit took twenty minutes because I'd learned which extractions to trust and which to spot-check.
What the analyst gets wrong
The analyst has a handful of failure modes worth knowing before you ship the memo. None of them are fatal. They mean the workflow is thirty minutes plus a ten-minute spot-check, not thirty minutes flat. Budget for it.
When to run this
Quarterly is the floor, and some triggers warrant an unscheduled consistency check the same week they happen.
The build-versus-buy question
You can do all of this in a generic chat interface with the prompt pack below. You'll spend setup time loading context every session, you'll re-paste the surfaces every time, and the analyst won't remember what last quarter's audit said. That's fine for a first pass.
A purpose-built analyst — Stratridge's Analyst capability is one example — keeps the surfaces, the prior audits, and the category-noun history in persistent memory, so the diff prompt can also answer what changed since last quarter without you re-explaining. The cost difference is roughly the price of a coffee subscription versus the price of a SaaS subscription. The time difference, after the third audit, is about two hours per quarter.
What to do Monday
Pick three surfaces — your homepage, your top sales deck, and your pricing page. Run the extraction prompt against each. Read the three outputs side by side. If they agree on the category noun and the primary value claim, the rest of your stack is probably fine and you can move on. If they don't, you've found the audit you were going to do anyway, and you can run the full workflow against the other four surfaces this week.
The point isn't to never drift. Drift is what happens when teams ship. The point is to catch it within a quarter, not within a board meeting.