Most B2B content gets published to fill a calendar, not to win customers. The companies winning in 2026 publish less and think harder. Six well-built pieces beat thirty generic ones, every time. CITE is the framework that makes that possible: Capture buyer questions, Investigate competitor gaps, Target the right mix, Engineer for everywhere.
This page is the actual methodology. Not a teaser. If you read it carefully and have a strong content team, you can implement it yourself. Plenty of clients read it carefully and still hire me, because reading the playbook and running it consistently for six months are different problems.
Most agencies hide their methodology. I publish mine. The methodology is the product, and hiding it would mean there isn't much there.
C / Capture buyer questions
Every program starts with a map of what your buyers actually ask, across Google, ChatGPT, Perplexity, Reddit, and the communities they hang out in. Most agencies start with keyword volumes. I start with questions. Keywords describe what people search. Questions describe what people want to know. The difference shows up in every piece that follows.
The question harvest pulls from sources in priority order:
- Customer and sales call recordings (the richest source of real buyer language)
- Top question-format posts in the primary subreddits and forums
- Quora questions with real engagement
- Competitor blog comment sections and review platforms
- G2 review fields where buyers describe what they wish was different
- Direct prompts to ChatGPT, Claude, and Perplexity to surface the questions those engines already answer
Candidates are scored across five dimensions:
- Buyer intent. Would someone asking this be ready to buy in 0–90 days?
- AI prevalence. Does ChatGPT or Perplexity actually return a substantive answer? Some questions sound great but engines just return "ask a professional."
- Movement potential. Is the current top answer beatable? Are cited sources weak (random blogs) or strong (top G2 listings, viral Reddit threads)?
- Strategic value. Does ranking here actually affect the client's business?
- Coverage gap. Does the client currently not appear?
The output is the buyer query map: a documented, prioritized list of every question worth answering, scored, mapped to the surface where it appears, and split into the questions that get full flagship treatment vs. the questions that get shorter pieces.
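To make the scoring concrete, here is a minimal sketch of how the five dimensions can roll up into a prioritized query map. The 1–5 scale, the weights, and the flagship cutoff are illustrative assumptions for the sketch, not fixed values from the methodology.

```python
from dataclasses import dataclass

# Illustrative weights over the five CITE scoring dimensions.
# The 1-5 scale and these weights are assumptions, not canonical values.
WEIGHTS = {
    "buyer_intent": 0.30,        # ready to buy in 0-90 days?
    "ai_prevalence": 0.20,       # do engines return a substantive answer?
    "movement_potential": 0.20,  # is the current top answer beatable?
    "strategic_value": 0.20,     # does ranking here affect the business?
    "coverage_gap": 0.10,        # does the client currently not appear?
}

@dataclass
class Candidate:
    question: str
    surface: str   # where the question shows up (Google, Reddit, ChatGPT, ...)
    scores: dict   # dimension name -> 1..5, scored by hand during the harvest

def priority(c: Candidate) -> float:
    """Weighted score across the five dimensions."""
    return sum(WEIGHTS[dim] * c.scores[dim] for dim in WEIGHTS)

def build_query_map(candidates: list[Candidate], flagship_cutoff: float = 4.0):
    """Sort by priority and split into flagship vs. shorter-piece queues."""
    ranked = sorted(candidates, key=priority, reverse=True)
    flagship = [c for c in ranked if priority(c) >= flagship_cutoff]
    shorter = [c for c in ranked if priority(c) < flagship_cutoff]
    return flagship, shorter
```

The scores themselves come from human judgment during the harvest; the code just keeps the weighting consistent and repeatable across clients.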
If you want to see the buyer query map step in action, the public DIY version is documented step by step in how to run your own AI search visibility audit, with twenty prompt patterns and a free Google Sheet template.
Most content programs skip this step. They guess what buyers care about and write toward keyword volumes. Six months later, nothing ranks and nothing gets cited. Capturing real questions first is the single biggest difference between content that earns its place and content that fills a calendar.
I / Investigate competitor gaps
Where do your competitors show up in Google and AI search? What do they have that you don't? Where are the gaps you can win? This produces the priority list. The work isn't beating competitors at what they're already winning. It's finding the questions and surfaces they've ignored, and getting there first.
For each question on the buyer query map, I run the prompt three times on each of four engines: ChatGPT (with search enabled, logged out, US locale), Claude (with web search), Perplexity (default mode), and Google AI Overviews (clean browser, location matched to the buyer geography).
LLM responses are non-deterministic. One run is noise; three runs, averaged, are a real signal. This is the single most important methodological discipline. Agencies that report from a single run are reporting from a coin flip.
For each run I capture: did the client appear, where in the answer, what URL the engine cited, the sentiment, and which competitors showed up alongside. The output is the competitor gap report: a documented map of where competitors appear, where you don't, which sources the engines actually cite, and where the recoverable gaps are.
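A sketch of what each captured run can look like as a record, and how three runs per engine average into a signal. The field names mirror the protocol above; the engine keys and sentiment labels are illustrative assumptions, not a fixed schema.

```python
from dataclasses import dataclass
from statistics import mean

ENGINES = ["chatgpt", "claude", "perplexity", "google_ai_overviews"]
RUNS_PER_ENGINE = 3  # one run is a coin flip; three average into a signal

@dataclass
class Run:
    appeared: bool          # did the client show up in the answer?
    position: int | None    # where in the answer (None if absent)
    cited_url: str | None   # which URL the engine cited
    sentiment: str          # e.g. "positive" / "neutral" / "negative"
    competitors: list[str]  # who showed up alongside

def citation_rate(runs: list[Run]) -> float:
    """Fraction of runs where the client appeared; the per-engine signal."""
    return mean(1.0 if r.appeared else 0.0 for r in runs)

def engine_report(runs_by_engine: dict[str, list[Run]]) -> dict[str, float]:
    """Averaged appearance rate per engine for one question."""
    return {engine: citation_rate(runs_by_engine[engine]) for engine in ENGINES}
```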
T / Target the right mix
Six pieces a month, split between two flagship long-forms (the ones that get recommended by ChatGPT and earn links) and four shorter pieces (the ones that rank in Google for specific buyer searches). Both compound. Neither dominates. This balance is what makes a six-piece program outperform a thirty-piece one.
Flagship pieces (2,500+ words each, two per month):
- Built around the highest-priority buyer questions from the query map
- Original takes, original data, deep research
- Designed to be the best piece on the topic, not a competitor copy
- Earn inbound links and get recommended by ChatGPT and Perplexity
Shorter pieces (1,200–1,800 words each, four per month):
- Built to rank in Google for specific commercial searches
- Capture buyers actively comparing options or troubleshooting
- Tighter scope, faster turnaround, structured for AI extraction
The output is a monthly content plan, refreshed each cycle. Plans don't get locked in for 12 weeks. Each month's plan responds to what moved in the prior month: which questions got citations, which surfaces produced inbound, which formats outperformed. The discipline is the framework. The plan adjusts.
E / Engineer for everywhere
Each piece is built to rank in Google, get recommended by ChatGPT, and work as social content. Not three different pieces. One piece that performs in three places. This is the production discipline that justifies the price. Generic content can rank or get recommended by accident. CITE-built content does both on purpose.
What goes into engineering each piece:
- The answer in the first 100 words. AI engines recommend answers, not setups.
- Headers phrased as questions where natural ("What is X?", "How does X work?", "When should you use X?")
- Original data, surveys, or your own frameworks. AI engines heavily prefer original sources over recycled ones.
- Real attribution and credentials visible. Bylines with named operators get recommended more than anonymous posts.
- Schema markup where applicable: Article, FAQ, HowTo (a minimal example follows this list). Recommendations correlate with structured data.
- Named tools, platforms, and people the AI already knows. Co-mentions build the entity graph that includes the client.
- Internal linking that connects flagship pieces to the shorter pieces, building topical authority.
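To make the schema bullet concrete: a minimal FAQPage example, built as a Python dict and serialized into the JSON-LD payload a page template would embed. The @type values are standard schema.org vocabulary; the question and answer strings are placeholders, not client content.

```python
import json

# Minimal FAQPage structured data (schema.org vocabulary). The question
# and answer strings are placeholders for real on-page content.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How does X work?",  # header phrased as a question
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "The answer, stated in the first 100 words of the piece.",
            },
        }
    ],
}

# The payload a page template would drop into the document head.
snippet = f'<script type="application/ld+json">{json.dumps(faq_schema)}</script>'
```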
A flagship that gets cited will outperform a polished piece that doesn't. Every time.
Monthly tracking and reporting
End of every month, I re-run the priority questions using the same three-runs-per-engine protocol as the baseline. The scorecard updates. Movement is calculated. The monthly brief has six sections:
- Headline number. Where you moved on the priority questions.
- Per-piece breakdown. What was published, where it's showing up, what it earned.
- Surface insights. Which Reddit thread now cites you, where Perplexity returns you in the top three.
- What didn't work and why, written honestly.
- Next month's plan, sequenced based on what moved.
- Three to five verbatim buyer-language quotes captured during the work.
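Behind the headline number, the movement calculation can be as simple as comparing this month's averaged appearance rates against the baseline, per question. A minimal sketch; the 0.15 threshold for calling something "moved" is an assumption, not a documented value.

```python
def movement(baseline: dict[str, float], current: dict[str, float],
             threshold: float = 0.15) -> dict[str, list[str]]:
    """Compare per-question appearance rates (0.0-1.0, averaged across
    three runs per engine) between the baseline and the latest re-run.

    The 0.15 threshold for a meaningful shift is illustrative."""
    report = {"gained": [], "lost": [], "flat": []}
    for question, base_rate in baseline.items():
        delta = current.get(question, 0.0) - base_rate
        if delta >= threshold:
            report["gained"].append(question)
        elif delta <= -threshold:
            report["lost"].append(question)
        else:
            report["flat"].append(question)
    return report
```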
Briefs are delivered as a written PDF plus a 30-minute Loom walkthrough. The Loom is what builds the relationship. Static reports get skimmed.
The adaptation protocol
AI search is not stable. Engines change retrieval logic. Platforms shift moderation rules. Reddit hardens against marketing one quarter and softens the next. The methodology is built to evolve.
First Monday of every month, I run a small set of control prompts across all four engines to detect retrieval-pattern shifts. If the patterns change, the SOP gets updated within 14 days. Surfaces that stop producing citations get demoted. New surfaces that start showing up get promoted. None of this is ad hoc. The protocol is documented. The protocol changes when the data demands it.
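A sketch of what that drift check can look like, assuming each control prompt's cited sources are recorded as a set per month. The Jaccard-overlap heuristic and the 0.5 cutoff are illustrative, not part of the documented SOP.

```python
def source_overlap(last_month: set[str], this_month: set[str]) -> float:
    """Jaccard overlap between cited-source sets for one control prompt."""
    if not last_month and not this_month:
        return 1.0
    return len(last_month & this_month) / len(last_month | this_month)

def detect_drift(history: dict[str, tuple[set[str], set[str]]],
                 cutoff: float = 0.5) -> list[str]:
    """Flag control prompts whose cited sources shifted sharply month over
    month; a flagged prompt triggers the 14-day SOP review. The 0.5 cutoff
    is an assumption for the sketch."""
    return [prompt for prompt, (prev, curr) in history.items()
            if source_overlap(prev, curr) < cutoff]
```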
Senior editing plus AI production
AI handles drafting and research. Editing, fact-checking, voice matching, and final approval go through me. AI is leverage, not a replacement, and I tell clients that openly. Premium buyers respect honesty about AI use more than they respect agencies pretending to be 100% human at $8K–25K/month.
The senior editing pass is what makes the difference. Generic AI output gets skipped by AI engines just like it gets skipped by readers. Original framing, real expertise, and real attribution are what gets cited.
Tool stack
Every tool I use, named publicly. No affiliate disclosures needed, because there are no affiliate relationships.
- Ahrefs for keyword research and competitor SERP analysis
- Perplexity Pro for citation source mapping
- Claude and Claude Code for drafting and research
- Notion for per-client tracking and documentation
- Google Search Console and Bing Webmaster Tools for owned-content indexation
- The four LLM engines (ChatGPT, Claude, Perplexity, Google AI Overviews) for measurement
- Loom for monthly walkthroughs
- PayPal for invoicing
What this methodology will not do
It won't guarantee revenue. Citations and rankings are leading indicators, not directly attributable conversion events. Most clients see pipeline lift from the work. Some don't, because their product or sales motion is the bottleneck. I'm honest about this on every sales call.
It won't replace traditional SEO. Companies winning in 2026 do both. The shorter pieces in the six-piece-a-month rhythm are doing traditional SEO work. The flagships are doing AI search work. They reinforce each other.
It won't work for clients who want 20+ pieces a month. The whole point is depth, not throughput. Anyone whose first question is volume is the wrong fit, and I'll say so on the discovery call.