A positioning audit built for sales-led B2B SaaS applied to a PLG company produces findings that are correct in the abstract and useless in practice. The sales-led audit looks at sales materials, reads the pricing page, audits the battle cards. All important. None of them capture where PLG positioning actually lives — inside the product, in the signup flow, in the activation metrics, in the product copy that 95% of prospects will encounter before they talk to a human.
The three differences below shape a PLG-specific audit. The scorecard at the end of the piece produces findings at the specific surfaces that matter for PLG outcomes.
Difference 1 · The product is the primary positioning surface
In PLG, the prospect signs up within minutes of arriving, and then spends hours inside the product. The product's copy — empty states, tooltips, onboarding prompts, error messages, feature descriptions — carries more positioning weight than the homepage.
Audit implication: The PLG audit has to sample product copy at the same depth as it samples marketing copy. Twenty product-copy surfaces (ten empty states, five tooltips, three onboarding screens, two error states) should be audited against the positioning brief, the same way homepages and pricing pages are audited in sales-led audits.
Most sales-led-trained auditors skip this entirely. They've been trained to look at external surfaces, and auditing tooltips feels like scope creep. For PLG, it's the central scope; when it's skipped, the audit misses the positioning drift that the majority of prospects actually experience.
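The sample plan above can be captured in a small tally structure. This is a minimal sketch: the surface types and counts come from this section, while the dictionary and function names are illustrative.

```python
# Product-copy sample plan for a PLG positioning audit.
# Surface types and counts follow the text; the structure is illustrative.
SAMPLE_PLAN = {
    "empty_state": 10,
    "tooltip": 5,
    "onboarding_screen": 3,
    "error_state": 2,
}

def total_surfaces(plan):
    """Total product-copy surfaces to audit against the positioning brief."""
    return sum(plan.values())

print(total_surfaces(SAMPLE_PLAN))  # 20 surfaces, matching the text
```

Each sampled surface gets scored against the brief the same way a homepage section would, so the plan doubles as the audit's checklist.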
Difference 2 · Activation metrics are positioning metrics
In sales-led, win rate is the positioning outcome. In PLG, activation rate is the positioning outcome. A prospect who signs up but doesn't activate is a prospect whose positioning expectations were set by the homepage and then not delivered by the product.
Audit implication: The PLG audit should include activation-funnel analysis as a positioning finding. Where are prospects dropping off, and does the drop-off pattern suggest positioning gaps? If 60% of signups never reach the first-value moment, that's not purely a product problem — it's often a positioning problem where the homepage promised a value that the product made too hard to reach.
Difference 3 · The buyer may never talk to you
In sales-led, the buyer is someone the sales team will eventually speak with. In PLG, the buyer may use the product, form an opinion, decide whether to pay, and either convert or churn — without ever speaking to anyone at your company. This inverts the buyer-research methodology of a standard positioning audit.
Audit implication: The PLG audit can't rely entirely on buyer interviews. It has to triangulate from multiple sources: in-product behavior data, self-reported signup-survey data, dormancy-outreach responses, and a smaller set of deeper interviews with activated customers.
The in-product signal is often the richest. A prospect who clicked the "Get started" CTA on the homepage and then bounced at the third onboarding step is a prospect whose positioning expectation was set and then not matched. Tracking this funnel as positioning data — rather than purely as UX data — produces audit findings that purely-qualitative audits miss.
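Tracking the funnel as positioning data can be sketched as a simple drop-off report. The step names and counts below are hypothetical; the point is that large step-to-step drops, especially between signup and first value, are flagged as candidate positioning gaps rather than purely UX issues.

```python
# Hedged sketch: activation-funnel drop-off read as positioning data.
# Step names and counts are hypothetical, not from any real product.
FUNNEL = [
    ("homepage_cta_click", 10_000),
    ("signup_complete",     6_500),
    ("onboarding_step_3",   2_600),
    ("first_value_moment",  1_800),
]

def dropoff_report(funnel):
    """Per-step drop-off rates. A large drop after signup suggests an
    expectation set by marketing copy that the product didn't meet."""
    report = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        drop = round(1 - n / prev_n, 2)
        report.append((prev_name, name, drop))
    return report

for prev_step, step, drop in dropoff_report(FUNNEL):
    print(f"{prev_step} -> {step}: {drop:.0%} drop-off")
```

In this hypothetical funnel, the steepest drop sits between signup and the third onboarding step, which is exactly the span a sales-led audit never examines.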
The PLG-specific scorecard
Building on the standard five-layer positioning framework, the PLG audit scores each layer with an additional PLG-specific sub-score.
Layer 1 · Category (and in-product category reinforcement)
Standard Layer 1 question: is the category noun consistent across surfaces? PLG-specific extension: does the product itself use the category noun? Many PLG companies have a sharp category noun on the homepage and a product whose language describes the same functionality using different terms. A user who signs up after reading "positioning audit" and spends the next 20 minutes inside a product that calls it "assessment" or "review" experiences category dissonance that erodes trust.
Layer 2 · Audience (and in-product persona fit)
Standard Layer 2 question: is the ICP clearly defined? PLG-specific extension: does the product assume the same persona the homepage describes? A homepage targeting mid-market PMMs paired with a product built for individual-contributor practitioners creates a persona mismatch visible within ten minutes of signup. The user decides the product isn't "for them" and churns to dormancy.
Layer 3 · Problem (and first-value moment)
Standard Layer 3 question: is the problem specifically named? PLG-specific extension: does the product deliver on the problem's resolution within the activation window? A homepage promising "ninety-second audits" paired with a product that takes twenty minutes of setup before the first audit can run is a Layer 3 promise-reality gap. Close the gap.
Layer 4 · Alternative (and in-product competitive framing)
Standard Layer 4 question: are the alternatives named? PLG-specific extension: does the product help the user understand why they should pay instead of using alternatives they might stack? A PLG user often has a patchwork of tools, spreadsheets, or free-tier competitor products on hand. The in-product copy should, at specific moments, make the case for paying you versus stacking alternatives.
Layer 5 · Claim (and product-delivered outcome)
Standard Layer 5 question: is the claim falsifiable with evidence? PLG-specific extension: does the product, through usage, produce the evidence for the claim? Activation data is your Layer 5 evidence in PLG. A claim about speed should be provable from activation timing; a claim about simplicity should be provable from onboarding completion rates. The product's own metrics are the positioning's best proof.
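The layered scorecard can be sketched as a simple data structure. The five layer names come from the framework above; the 0–5 scale, the equal weighting of the standard score and the PLG sub-score, and the example numbers are all assumptions for illustration.

```python
# Illustrative five-layer scorecard with PLG-specific sub-scores.
# Layer names follow the article; the 0-5 scale and equal weighting
# are assumptions, not part of the framework as stated.
LAYERS = ["category", "audience", "problem", "alternative", "claim"]

def layer_score(standard, plg_sub):
    """Blend a layer's standard score with its PLG sub-score.
    Equal weighting is an assumption; adjust per audit."""
    return (standard + plg_sub) / 2

def audit_score(scores):
    """scores: {layer: (standard, plg_sub)} on a 0-5 scale."""
    return {layer: layer_score(*scores[layer]) for layer in LAYERS}

example = {
    "category":    (4, 2),  # sharp noun on the homepage, drift in-product
    "audience":    (5, 3),  # clear ICP, product built for a different persona
    "problem":     (3, 1),  # promise-reality gap at the first-value moment
    "alternative": (4, 4),
    "claim":       (2, 2),
}
print(audit_score(example))
```

A large gap between a layer's standard score and its PLG sub-score is itself a finding: the positioning holds on marketing surfaces and breaks inside the product.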
The product-copy sample
A PLG audit's product-copy sample should cover five specific surface types.
1. The first empty state a user sees. What does the product say to a user who has logged in but done nothing? This is the first detailed text a new user reads. Audit against the brief.
2. The onboarding sequence. The first 3–5 screens of onboarding. Most onboarding flows are optimized for completion rate, not for brand voice — which means they often drift from positioning. Check for the drift.
3. The key feature's tooltip text. Each product has one or two "hero" features that the positioning brief claims. Read the tooltips on those features. Do they describe the feature using the brief's language or different language?
4. Error and empty-result messages. The language the product uses when something goes wrong. Error-message drift is extremely common — engineers write functional error messages, marketing never reviews them, and the brand voice disappears at the moment the user is most frustrated.
5. The in-product paywall or upgrade CTAs. The language used at upgrade moments. This is the PLG equivalent of the pricing-page sales conversation. What's said at the moment the user is deciding whether to pay matters enormously and is frequently off-brief.
The remediation that actually moves PLG positioning
A PLG audit's findings route to two distinct remediation tracks.
Track 1: Positioning brief remediation. Standard. If the audit found Layer 1, 2, 3, 4, or 5 gaps, update the brief. This track looks like any other positioning remediation.
Track 2: Product-copy remediation. This track is PLG-specific and usually bigger. Every product-copy surface flagged in the audit gets rewritten to match the brief's language. The rewrite requires design, engineering, and PMM collaboration — and the PMM usually has to advocate for the copy quality the audit recommends.
Track 2 is slower than Track 1 because product copy ships with code, not with marketing cycles. A positioning refresh that requires product-copy changes will take 2–3 product release cycles to fully land. Budget for this; don't expect the audit's findings to ship in a quarter.
The PLG-specific audit is more work than the sales-led version, in roughly the same 4–6 weeks. It produces findings the sales-led audit misses entirely, and the findings route to a different (broader) remediation plan. Companies running PLG who audit with sales-led frameworks get findings that improve their marketing and miss their product — which is the half of the positioning that most affects their business.