The problem isn't that AI gives bad competitive analysis. The problem is that it gives the same competitive analysis it would give a junior analyst at a bank, a marketing intern, and a board director — interchangeable, public-website-grade, useful to no one with skin in the game.
A deep dive is supposed to surface the thing your competitor is hoping you don't notice. If the output reads like a Wikipedia rewrite, the prompt is the issue, not the model.
Why generic output is a context problem, not a model problem
When a PMM asks an analyst "give me a deep dive on Competitor X," the model has to invent a reader. It doesn't know whether you're prepping a board slide, briefing a new AE, or building a battle card for a specific deal. So it averages — feature list, funding history, pricing tiers, a SWOT, a closing paragraph about "the evolving landscape."
That output is not wrong. It's just the wrong altitude. A useful deep dive operates two clicks below the marketing site: how the competitor's pricing logic actually works, what their AEs say in deal three, what their roadmap implies about their bet, where their narrative quietly contradicts itself.
Getting there takes three things the analyst doesn't have until you supply them: your context, your question, and your evidence.
A good deep dive is a hypothesis test, not a summary.
The four-layer prompt structure
Most prompts collapse the request into one sentence. A deep dive needs four layers, each of which the analyst can act on and none of which it can guess: your situation, the question you're testing, the evidence to weight, and the shape of the output you want.
The four layers compress to roughly 400-700 words of prompt. That feels heavy. It's the smallest amount of context that produces non-generic output, and it's reusable — the situation paragraph carries across every deep dive you run that quarter.
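The assembly above can be sketched as a small helper that concatenates the layers into one prompt. This is a minimal illustration, not a fixed API: the function name, the layer labels, and the sample values are assumptions, chosen to mirror the reusable situation paragraph described above.

```python
# Sketch: assembling a four-layer deep-dive prompt from reusable parts.
# Layer names are illustrative; the situation paragraph is written once
# and reused across every dive you run that quarter.

def build_deep_dive_prompt(situation, hypothesis, evidence_notes, output_spec):
    """Concatenate the four layers into a single prompt string."""
    sections = [
        ("Our situation", situation),            # who we are, the decision at hand
        ("Hypothesis to test", hypothesis),      # a deep dive is a hypothesis test
        ("Evidence to weight", evidence_notes),  # pasted sources, ranked by trust
        ("Output format", output_spec),          # the shape you want back
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_deep_dive_prompt(
    situation="Series B infra company, losing late-stage deals to Competitor X.",
    hypothesis="X's usage-based pricing quietly penalizes mid-market accounts.",
    evidence_notes="[pasted pricing page, two win/loss call excerpts, changelog]",
    output_spec="Hypothesis tested, evidence found, contradictions, so-what.",
)
```

The result lands in the 400-700 word range once the situation paragraph and evidence notes are real rather than one-liners.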
What "evidence the analyst should weight" actually means
This is the layer most PMMs skip, and the one that determines whether the output is defensible or decorative.
Sources to feed the analyst before asking for analysis
The analyst will not go find these sources for you, and shouldn't pretend to. Pasting them in costs you 20 minutes, and it's the difference between an output that names a specific contradiction in the competitor's pricing logic and one that says "their pricing reflects a value-based approach."
The four deep-dive lenses worth running
Not every deep dive answers the same question. Pick the lens that maps to the decision you're about to make.
Running all four on the same competitor is a quarter's work, not a Tuesday's. Pick one, run it well, file the output, return for the next lens when the decision warrants it.
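The lens-to-decision mapping can be made explicit. The four lens names come from this article; the question phrasings and the routing helper below are illustrative assumptions, a sketch of "pick the lens that maps to the decision," not a prescribed taxonomy.

```python
# Sketch: the four deep-dive lenses, each keyed to the question it tests.
# Lens names are from the article; question wording is illustrative.
LENSES = {
    "pricing-logic": "How does their pricing actually work, and where does it break?",
    "gtm-motion": "What does their go-to-market motion imply about the deals they want?",
    "narrative-consistency": "Where does their public story quietly contradict itself?",
    "roadmap-signal": "What does their shipping pattern imply about their long-term bet?",
}

def pick_lens(decision):
    """Naive routing from an upcoming decision to a lens (illustrative only)."""
    decision = decision.lower()
    if "pricing" in decision or "discount" in decision:
        return "pricing-logic"
    if "battle card" in decision or "messaging" in decision:
        return "narrative-consistency"
    if "segment" in decision or "channel" in decision:
        return "gtm-motion"
    return "roadmap-signal"
```

One lens per dive keeps the hypothesis testable; running the dictionary top to bottom on one competitor is the quarter's work the paragraph above warns about.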
What the output should look like
A deep dive that earns its place in your stack has four sections, in this order: the hypothesis tested, the evidence found, the contradictions surfaced, the so-what for your next move. If it doesn't have section four, it's a research artifact, not a deep dive.
We stopped accepting deep dives that didn't end with three specific moves we could make in the next 30 days. The first month of that rule, output volume dropped 40% and battle card adoption doubled.
The "so what" section is where the analyst is most likely to retreat into hedges. Push back. Ask: "If you were the PMM here, what would you change in the battle card on Monday?" The model will answer. The answer will be specific. You'll disagree with some of it. That disagreement is the most valuable part of the exercise — it forces you to articulate what you believe and why.
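The four-section contract can be enforced mechanically before a dive enters your stack. A minimal sketch, assuming the section titles below (they paraphrase the list above; the function name and return shape are illustrative):

```python
# Sketch: the output contract for an accepted deep dive.
# A dive missing section four is a research artifact, not a deep dive.
REQUIRED_SECTIONS = [
    "Hypothesis tested",
    "Evidence found",
    "Contradictions surfaced",
    "So-what for our next move",
]

def accept_deep_dive(text):
    """Return (accepted, missing_sections); case-insensitive title match."""
    missing = [s for s in REQUIRED_SECTIONS if s.lower() not in text.lower()]
    return (len(missing) == 0, missing)
```

A stricter version would also reject any so-what section with fewer than three concrete 30-day moves, per the rule described above.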
When to re-run, when to archive
A deep dive has a half-life. Pricing-logic dives age out in roughly six months, or sooner if the competitor reprices. GTM-motion dives hold for nine to twelve months. Narrative-consistency dives need refreshing every time the competitor ships a new homepage. Roadmap-signal dives are the longest-lived — eighteen months is reasonable.
Tag every deep dive with the lens, the date, and the decision it informed. When the decision recurs, you re-run the same lens with the same prompt structure and compare. The compare is where the real signal lives — not in any single dive, but in what changed between two.
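The tagging-and-staleness policy above reduces to a small check. The half-life numbers come from the paragraphs above; the data structure and function are illustrative assumptions, and narrative-consistency is modeled as event-driven rather than time-driven since it refreshes on competitor homepage ships.

```python
from datetime import date

# Half-lives per lens, taken from the text above. Narrative-consistency
# is event-driven (refresh on every new competitor homepage), so it has
# no time-based half-life here.
HALF_LIFE_MONTHS = {
    "pricing-logic": 6,
    "gtm-motion": 12,        # holds for nine to twelve; use the outer bound
    "roadmap-signal": 18,
    "narrative-consistency": None,
}

def is_stale(lens, run_date, today=None):
    """True when a dive has outlived its half-life (sketch only)."""
    today = today or date.today()
    months = HALF_LIFE_MONTHS.get(lens)
    if months is None:
        return False  # event-driven: staleness decided by homepage ships, not age
    elapsed = (today.year - run_date.year) * 12 + (today.month - run_date.month)
    return elapsed > months
```

Pairing this with the lens/date/decision tags makes the re-run trigger automatic; the compare between the old and new dive is where the signal lives.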
What to do Monday
Pick one competitor that came up in three or more deals last quarter. Pick one lens — pricing logic is the highest-leverage starting point for most PMMs. Spend 20 minutes assembling the evidence pack from the checklist above. Write the four-layer prompt. Run it. Read the output with a red pen — strike anything that could have been written about any competitor in the category. What's left is your starting brief. Edit that, ship it to sales, and put the next dive on the calendar for the lens you didn't run.
Keep reading
Analyst Prompt Library for Positioning Work
Twenty prompts that turn an AI analyst into a useful positioning tool — organized by positioning-layer, each with the specific context the AI needs to produce non-generic output, and the reviewer discipline that keeps the output honest.
The Strategist's Prompt Book: 50 Questions for Your Analyst
Fifty specific, context-loaded questions organized by strategic problem. Each prompt is designed to produce output a senior PMM would find useful, not generic text. Use as a reference; replace bracketed elements with your specifics.
Analyst Use Case: Competitive Response Drafting
How PMMs use the Analyst to draft a defensible competitive response in twenty minutes instead of the usual three-day Slack thread.