The brainstorm produces nine ideas. Eight of them are recombinations of your homepage hero. The ninth — the one someone said half-joking on the way to refilling coffee — is the only one anybody will remember in two weeks. Adding an AI analyst to that meeting doesn't fix the math by itself. Used badly, it produces a tenth obvious idea, faster. Used well, it pressures the room into the territory where the interesting ideas actually live.
The difference is method. An analyst is a sparring partner with infinite patience and zero ego, not an idea generator. Treating it as the latter is why most positioning brainstorms with AI in the room produce sludge that reads like the LinkedIn feed of a generic SaaS company.
The failure mode to design around
Most teams open the session with a prompt like "give me ten positioning angles for our product." The model returns ten angles. Six of them are the angles every B2B SaaS company has used since 2018 — built for teams, save time, AI-powered, enterprise-ready, the modern X. The team picks the least bad one and ships it.
This isn't the model failing. It's the prompt asking for the average. Foundation models are trained on the corpus of all marketing copy ever written, weighted toward what got indexed. Asking for "positioning angles" without constraint returns the centroid of that corpus. The centroid is, by definition, the obvious answer.
The job, then, is to build the brainstorm so that the obvious answers are off the table before the model writes its first sentence.
The five-move method
The whole loop runs in roughly forty-five minutes if the context is pre-loaded. It runs in three hours if you're loading context as you go, which is why the pre-load matters.
What to load before you generate
The quality of the brainstorm is set by the context, not by the prompt. A model with thin context produces thin output regardless of how clever the prompt is.
The customer quote is the move most teams skip and the one that changes the output the most. A real customer sentence — "we picked you because the procurement guy didn't have to learn anything new" — gives the model a texture to write toward. Without it, the model writes toward marketing-blog texture by default.
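The pre-load can be treated as a small, explicit data structure rather than an ad-hoc paste. A minimal sketch, assuming three context categories drawn from the article (competitor claims, loss reasons, customer quotes) — the class and field names are illustrative, not a required schema:

```python
from dataclasses import dataclass


@dataclass
class BrainstormContext:
    """Context assembled before the first generation prompt."""
    competitor_claims: list[str]   # what each rival's homepage asserts
    loss_reasons: list[str]        # synthesized from lost-deal notes
    customer_quotes: list[str]     # verbatim customer sentences, unedited

    def to_preamble(self) -> str:
        """Render the context as a block pasted ahead of any prompt."""
        sections = [
            ("Competitor claims", self.competitor_claims),
            ("Why we lose deals", self.loss_reasons),
            ("Customer verbatims", self.customer_quotes),
        ]
        lines = []
        for title, items in sections:
            lines.append(f"## {title}")
            lines.extend(f"- {item}" for item in items)
        return "\n".join(lines)
```

The point of the structure is the empty-field check it makes possible: if `customer_quotes` is empty, the session isn't ready to start.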
Prompts that produce non-obvious angles
The phrasing of the generation prompt is the smallest variable in the system, but it's not zero. A few patterns reliably produce sharper output than "give me positioning angles."
- The negative-space prompt. "Given the loaded context, what does every competitor in this category claim? What's the position no one is occupying?" This forces the model to map the competitive frame before generating, which surfaces the gap rather than restating the consensus.
- The wrong-category prompt. "Argue we're not actually competing in [category]. We're competing in [adjacent category]. What changes about our positioning?" Often returns nonsense. Occasionally returns the angle the team has been circling for six months without naming.
- The hostile-reframe prompt. "A competitor's CMO has thirty seconds to dismiss our positioning to a board. What do they say?" The dismissal usually points at the weakest claim. Inverting the dismissal is the angle worth defending.
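The three patterns above are parameterized templates, so they are easy to keep in a shared file rather than retyping them each session. A minimal sketch — function names are illustrative, and the prompt text follows the wording given above:

```python
def negative_space_prompt(category: str) -> str:
    """Map the competitive frame before generating; surface the gap."""
    return (
        f"Given the loaded context, what does every competitor in {category} "
        "claim? What is the position no one is occupying?"
    )


def wrong_category_prompt(current: str, adjacent: str) -> str:
    """Force a reframe into an adjacent category."""
    return (
        f"Argue that we are not actually competing in {current}. "
        f"We are competing in {adjacent}. What changes about our positioning?"
    )


def hostile_reframe_prompt() -> str:
    """Elicit the strongest dismissal, then invert it by hand."""
    return (
        "A competitor's CMO has thirty seconds to dismiss our positioning "
        "to a board. What do they say?"
    )
```

Each template is prefixed with the pre-loaded context block before being sent; the template alone, without the context, reproduces the corpus-average failure described earlier.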
We stopped asking the model for ideas and started asking it to attack ours. The brainstorm got faster and the angles got weirder, in a good way. We finally killed the "built for modern teams" line we'd been defending for two years.
Where the analyst stops being useful
The analyst is good at expansion, mapping, and stress-testing. It is not good at deciding. Two failure modes show up when teams try to push it past its useful range.
The first is the consensus collapse. After enough back-and-forth, the model starts agreeing with whatever the most recent prompt implied. If the PMM running the session has a favorite angle, the analyst will eventually validate it — not because the angle is good, but because the conversation history has tilted that way. The fix is to cap the session at forty-five minutes and walk away.
The second is the plausibility trap. The model produces an angle that sounds great, reads cleanly, and survives the stress-test prompts. Then sales tries to use it on a call and it lands wrong, because the angle is plausible inside the model's worldview but doesn't match how buyers in this specific market actually talk. The analyst can't catch this. Three customer calls can.
What good looks like Monday
A brainstorm that works produces two or three positioning angles, each with a clear constraint it satisfies, a clear set of objections it answers, and a clear test you can run in the next two weeks — three sales calls using the new line, an A/B on the pricing page hero, a draft email to your top ten ICP accounts. If the output is ten angles and a vague feeling that some of them are good, the session didn't work. Re-run it with a tighter constraint.
The cost of doing this right is roughly two hours of pre-load work — competitor scraping, loss-reason synthesis, context assembly — for every forty-five-minute generation session. If you can't hold that ratio, the analyst will produce the corpus average and you'll ship it. The pre-load is the work. The prompt is the trigger.