Analyst · Guide

How to Use an Analyst for Battle Card Generation

A working PMM's process for generating battle cards with an AI analyst — what to feed it, what to verify, and where the human still has to do the work

8 min read · For PMM · Updated Apr 27, 2026

A first-draft battle card that used to take a PMM eight hours of interviews, web research, and synthesis now takes about ninety minutes — if the analyst is fed the right context and the human is willing to verify the output. If either of those things is missing, you get a card that sounds confident and is wrong, which is worse than no card at all.

This is the working process. It assumes you have access to an analyst tool with persistent context (Stratridge's Analyst, a custom GPT loaded with your positioning brief, or a Claude project with your wiki attached) and at least one closed-won and one closed-lost call recording for the competitor in question.

The analyst doesn't write the card. It compresses the research so you can.

What the analyst is good at, and what it isn't

The analyst is fast at three things: summarizing public artifacts (pricing page, G2 reviews, recent product launches), drafting structured prose in the format you specify, and surfacing the second-order question you forgot to ask. It is unreliable at three other things: pricing accuracy beyond what's on the public page, win/loss patterns it hasn't seen evidence for, and any claim about the competitor's roadmap.

Treat those unreliable categories as scaffolding. The model writes a placeholder, you replace it with what you actually know from sales calls and customer conversations.

Step 1 · Load the context the analyst needs

Skip this and the output is generic. The model has to know what you sell, who you sell to, and why deals against this competitor are won or lost — not in the abstract, but in your specific market.

What to feed the analyst before you ask for anything

The win/loss transcripts are the single biggest lift. Without them the model writes a feature comparison; with them it writes the reasons deals tip.
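If your analyst tool takes raw text rather than file attachments, the loading step can be sketched as a small helper. This is an illustrative sketch only — the function name and the `## ` section labels are my own, not part of any real tool's API.

```python
# Hypothetical sketch of Step 1: bundle the positioning brief and the
# win/loss transcripts into one labeled context block the analyst sees
# before you ask it anything. Labels and names are illustrative.
def build_context(brief, transcripts):
    """brief: positioning brief text; transcripts: {label: transcript text}."""
    parts = ["## Positioning brief\n" + brief]
    for label, text in transcripts.items():
        parts.append(f"## Transcript ({label})\n" + text)
    return "\n\n".join(parts)
```

The point of the labels is that the model can cite "the closed-lost transcript" back to you later, which Step 3 depends on.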

Step 2 · Ask for the structure first, not the content

A common mistake is to prompt "write me a battle card for Competitor X." You'll get a generic five-section card that reads like every other LinkedIn template. Instead, ask the analyst to propose the structure given your specific deal motion, then critique its proposal.

A useful first prompt: "Given the positioning brief and the two transcripts I attached, propose a battle card structure for AEs handling late-stage objections against Competitor X. Tell me what sections to include and what to leave out, and explain why."

You'll get back something like: cover the three objections that come up in stage four (pricing, integration depth, switching cost), skip the founder backstory and the funding history because they don't move the deal, include a one-line landmine question at the end. That structure is now the spec for everything that follows.

We were fighting the model when we asked for content first. Once we made it justify the structure against our actual call data, the section list got shorter and the cards got used more.

Maya R., Head of PMM, vertical SaaS, ~$40M ARR

Step 3 · Generate one section at a time, with evidence

Don't ask for the whole card in one shot. The output gets shallow and the hallucinations compound. Generate section by section, and require the analyst to show its source for every claim.

A working pattern: "Draft the 'Where they win' section. For each point, cite which transcript or which page on their site it came from. If you can't cite a source, say so and propose what evidence we'd need to confirm."

That last clause matters. It forces the model into one of three honest states: cite a real source, flag a gap, or refuse. The card you ship won't have anything in the third state.
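The section-by-section loop is mechanical enough to sketch. Here `ask` stands in for whatever chat function your analyst tool exposes, and the section names are examples — both are assumptions, not a real API.

```python
# Hypothetical sketch of Step 3: one call per section, with the citation
# rule appended to every prompt. `ask` is a placeholder for your tool's
# chat function; the section list comes from your Step 2 structure.
CITE_RULE = (
    "For each point, cite which transcript or which page on their site "
    "it came from. If you can't cite a source, say so and propose what "
    "evidence we'd need to confirm."
)

def draft_card(ask, sections=("Where they win", "Where we win", "Objection handling")):
    card = {}
    for name in sections:
        # One section per call, so shallow output and hallucinations
        # don't compound across the whole card.
        card[name] = ask(f"Draft the '{name}' section. {CITE_RULE}")
    return card
```

Keeping the citation rule in a constant means every section gets it, not just the one you remembered to ask carefully.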

Step 4 · Verify the three things the model lies about

Three categories cause more bad cards than anything else: pricing, integrations, and customer logos. The model will state a list price from a year-old blog post, claim an integration that was deprecated, or list a logo from a press release that turned into a churned account.

For each: open the source. If the analyst says "Competitor X charges $50/seat starting at the Pro tier," go to the pricing page and confirm. If it says "they integrate with Salesforce, HubSpot, and Marketo," check their integrations directory, not the homepage. If it lists three customer logos, check that those customers are still on the site and haven't been quietly removed.

This step takes thirty minutes and prevents the only failure mode that erodes sales trust permanently.
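If you want the verification pass to be more than vibes, a tiny record per claim keeps you honest. This structure is my own sketch, not a feature of any analyst tool: the rule is that `verified` flips only after a human has opened the cited source.

```python
# Hypothetical sketch of Step 4: track each claim in the three risky
# categories with the source the analyst cited, and flip `verified`
# only once a human has actually opened that source.
from dataclasses import dataclass

@dataclass
class Claim:
    category: str   # "pricing", "integrations", or "logos"
    text: str       # e.g. "$50/seat starting at the Pro tier"
    source: str     # the page or transcript the analyst cited
    verified: bool = False

def still_to_check(claims):
    """Claims that must not ship until a human confirms the source."""
    return [c for c in claims if not c.verified]
```

Printing `still_to_check` before you ship is the thirty-minute step in list form.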

Step 5 · Add the human-only sections

There are two sections the analyst cannot write well no matter how much context you give it: the landmine questions and the trap-objection responses. Both require knowing how your salespeople actually talk and what this specific competitor's AEs typically respond with.

Write these yourself, or — better — pull them from a thirty-minute call with your two strongest AEs. Ask them: "When you're losing to Competitor X in stage four, what's the one question you ask that flips the deal? When the prospect comes back with the standard objection, what's your line?" Those answers go in the card verbatim.

The card sections I actually use are the ones written in the language we already use on calls. Anything that sounds like marketing wrote it, I skip.

Senior AE, infrastructure SaaS

Step 6 · Have the analyst critique its own output

Before you ship, run one more prompt: "Read this draft as if you were a skeptical AE who has run dozens of deals against this competitor. What's weak? What sounds like marketing wrote it? What would you cut?"

The model is surprisingly good at this when asked. You'll get back three to five edits — usually around hedging language, generic phrasing, or sections that restate the obvious. Take the edits that ring true.

What ninety minutes actually buys you

A first draft that's 70–80% of the way to shippable, with sourcing attached, in a format your AEs recognize. The remaining 20–30% — the verification, the AE quotes, the landmine questions — is the work that was always going to require human judgment. The analyst doesn't replace that work. It removes the four hours of staring at G2 reviews and pricing pages that used to come before it.

What to do this week

Pick one competitor — the one you've lost the most deals to in the last quarter. Pull two transcripts. Block ninety minutes on Thursday. Run the six steps above. Ship the card to one AE on Friday and ask them to use it in a live call the following week. Iterate from what they tell you.

The first card will be rough. The third one won't be.
