Case study

From flat to 2,400 search clicks in 30 days.

One anonymized AI tool client. Six pieces published over 30 days. The numbers, and the honest disclaimer about what transfers to B2B SaaS.

Category: AI image generation tool
Engagement: April 2026 · 30 days
Status: Active engagement

2,414 search clicks · vast majority captured in the 30-day window
20.5k impressions · from a near-zero baseline before engagement
11.8% average CTR · industry baseline: ~3%
+77% active users (MoM) · during the 30-day engagement · 9,000 users
Read this first

These results came from a consumer AI tool with active category demand and weak long-tail competition. B2B SaaS engagements operate in narrower markets with longer consideration cycles. Expect 60 to 90 days before similar inflection patterns appear, and expect smaller absolute numbers because B2B audiences are smaller. Industry, baseline visibility, product-market fit, and competitive density all change the curve. The methodology produces movement; the magnitude depends on factors outside any operator's control. Past performance is not a guarantee of future results.

When this engagement started in early April 2026, the AI image generation tool had limited organic visibility. The site was indexed but ranking nowhere meaningful, and traffic was almost entirely direct or referral. Thirty days later, it ranks for terms across its category, pulls 2,400+ search clicks for the month, and is appearing in the kind of long-tail queries the methodology was built to capture.

This page documents what happened, what was done to cause it, and how to read the numbers honestly. The data is real, the methodology is documented, and the limits of the claim are stated openly above.

The starting state: where this began

For the three months prior to engagement (late January through early April 2026), the tool's Google Search Console performance was a flat line. Sub-20 daily clicks, occasional spikes from one-off Reddit traffic, and otherwise minimal organic visibility. Average position floated around 40 to 50 across a small set of indexed pages. Most queries were branded or near-branded.

The product itself was solid; the problem was discoverability. Buyers in the category were researching tools through community forums, comparison threads, and AI-engine queries, and the tool was invisible in those answers. The 3-month flat baseline shows up clearly in the GSC chart further down, where the inflection coinciding with engagement start makes the attribution unambiguous.

What I did: the methodology applied in 30 days

The same four phases that run on every Zilwaris client engagement, compressed into a 30-day cycle:

  1. Capture. Mapped the questions the tool's users were already asking AI engines and search engines. Pulled 30+ candidate queries from Reddit threads, Quora answers, and competitor SERPs. Scored and selected the 20 with the highest movement potential.
  2. Investigate. Ran each query across ChatGPT, Perplexity, and Google to map citation sources. Documented where competitors appeared, where the tool didn't, and where the citation surfaces were weak (i.e. recoverable).
  3. Target. Built a 4-week content sequence prioritizing surfaces with the highest citation weight per category: long-tail blog content for Google, Reddit comment placements where competitors were already cited, and answer-shaped content engineered for AI extraction.
  4. Engineer. Each piece was written for citation, not engagement. Answer-first openings, named-entity density, schema markup, and consistent author attribution. Published on a regular cadence over 4 weeks.
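
The Capture-phase cut from 30+ candidates down to 20 can be sketched as a simple weighted scoring pass. This is an illustrative sketch only: the field names, weights, and example queries are assumptions, not the scoring model actually used in the engagement.

```python
# Hypothetical scoring step for the Capture phase. Weights and the
# three input signals (demand, citation_gap, fit) are illustrative
# assumptions, not the engagement's actual model.

def movement_score(query: dict) -> float:
    """Score a candidate query's movement potential from 0-1 inputs."""
    return (
        0.4 * query["demand"]         # how often users ask it
        + 0.3 * query["citation_gap"]  # how weak current citation surfaces are
        + 0.3 * query["fit"]           # how well the tool answers it
    )

candidates = [
    {"name": "best free ai image generator", "demand": 0.9, "citation_gap": 0.3, "fit": 0.8},
    {"name": "ai image upscaler comparison",  "demand": 0.5, "citation_gap": 0.8, "fit": 0.9},
]

# Keep the top 20 by score, mirroring the 30+ -> 20 cut described above.
top = sorted(candidates, key=movement_score, reverse=True)[:20]
for q in top:
    print(f"{q['name']}: {movement_score(q):.2f}")
```

A query with strong demand but a weak citation surface can outrank a higher-demand query that is already well defended, which is the point of scoring rather than sorting by volume alone.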

Results: the 30-day curve

The intervention point in the data is unambiguous. For 60+ days before engagement, the GSC line is flat, hovering at 5 to 20 daily clicks. From early April when the methodology started running, clicks rise from sub-20 daily to over 100 daily within two weeks. Impressions follow on the same trajectory. By the end of the month, the site is pulling 2,400 monthly clicks and 20,500 monthly impressions.

Google Search Console performance for the 30-day window: 2.41k total clicks, 20.5k impressions, 11.8% average CTR, average position 32.4. Clicks (blue) and impressions (purple) move together from early April; synchronized acceleration on both indicates the methodology hit threshold rather than coincidental traffic.
11.8% CTR is the signal

Industry-average organic CTR is around 3%. An 11.8% CTR at an average position of 32.4 means the content ranks well below the top of page one on average, yet pulls a disproportionate share of clicks from the impressions it earns. That's the citation-engineered content doing its job: when it surfaces, it gets clicked.
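
The CTR figure is simple arithmetic against the totals reported above; a quick check using the numbers from this page:

```python
# CTR check from the figures on this page.
clicks = 2414
impressions = 20500

ctr = clicks / impressions
print(f"CTR: {ctr:.1%}")  # roughly 11.8%

# At the ~3% industry baseline, the same impressions would have
# yielded far fewer clicks.
baseline_clicks = round(impressions * 0.03)
print(f"Clicks at a 3% baseline CTR: {baseline_clicks}")
```

Roughly four times the clicks the same impression volume would produce at the baseline rate.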

Engagement: what happened after the click

The traffic didn't just arrive; it engaged. The 30-day engagement period shows what the methodology actually drove on the site itself:

GA4 dashboard: 9k active users (+77.4%), 16k views (+81.3%), 50k events (+74.0%), 3m 39s average session duration (+34.9%).
The 30-day GA4 snapshot, compared against the prior 30 days. Note the prior-period dotted line: this isn't seasonal lift; it's a real step-change.

Active users up 77%. Views up 81%. Event count up 74%. Average session duration up 35%. These are not the kinds of numbers that happen by accident or through seasonal drift. They happen when an audience the tool wasn't reaching before starts arriving and engaging with the product.

Channel mix: where the traffic came from

GA4 channel breakdown showing Unassigned 46%, Organic Search 29%, Referral 13%, Direct 11%, Organic Social 0.07% with engagement rates per channel.
Channel breakdown. Organic Search at 29% of traffic. Engagement rates above 70% across the top three channels.

Organic Search jumped to 29% of traffic with a 71.7% engagement rate. That rate is not a vanity metric here: these visitors came from queries the methodology specifically targeted. Referral traffic at 13% reflects the surface-seeding work. The "unassigned" 46% is a mix of dark social, AI-engine referrals, and bookmarked direct entries that GA4 can't fully categorize.

The most important number: ChatGPT is the third-largest traffic source

This is the single most direct piece of evidence that the methodology is working as designed. When you drill from channel-level data down to specific source/medium attribution, ChatGPT shows up as a source in its own right.

GA4 source breakdown: top 10 traffic sources by active users. chatgpt.com is the third-largest source, behind only aiagentstore.ai and google, sending 1,245 active users (13.81%) in 30 days at a 65.6% engagement rate.
What this means

1,245 users arrived directly from ChatGPT in 30 days, with a 65.6% engagement rate. That isn't accidental traffic. Users were getting recommendations from ChatGPT and clicking through to the site. The methodology's whole purpose is to make this happen. Here it's measurable. The top referrer overall (aiagentstore.ai) is also AI-adjacent, suggesting the surface-seeding work placed the tool in directories and aggregators that AI engines then cite.

The "not set" row at #4 (1,025 users) is also worth understanding. Some of that is direct traffic without a referrer; a meaningful portion is likely Claude, Perplexity, and Google AI Overviews referrals where the engine doesn't pass a standard referrer header. Combined with the explicit chatgpt.com row, the realistic share of AI-search-driven traffic is likely 20 to 25% of total, higher than what channel-level reporting alone would show.
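
The 20 to 25% estimate can be bracketed with back-of-envelope arithmetic. The fraction of "(not set)" traffic that is actually AI-engine referral is an explicit assumption here (GA4 does not report it), so the sketch brackets it between a low and a high guess:

```python
# Back-of-envelope for the 20-25% AI-search share estimated above.
# The AI fraction of "(not set)" is an assumption, bracketed low/high.
total_users = 9000      # 30-day active users (GA4)
chatgpt_users = 1245    # explicit chatgpt.com source row
not_set_users = 1025    # "(not set)" source row

estimates = {}
for ai_fraction_of_not_set in (0.5, 0.9):
    ai_users = chatgpt_users + not_set_users * ai_fraction_of_not_set
    estimates[ai_fraction_of_not_set] = ai_users / total_users
    print(f"AI share (not-set fraction {ai_fraction_of_not_set:.0%}): "
          f"{estimates[ai_fraction_of_not_set]:.1%}")
```

Even the conservative bracket lands near 20%, which is why the channel-level view alone understates AI-search-driven traffic.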

If the goal of the methodology is to be the answer AI engines give, this is what success looks like at the data level: the AI engine itself becomes a top traffic source.

What this proves, and what it doesn't

What this proves: the methodology produces measurable visibility movement when run on a 30-day cycle. Indexation, citation, and click-through patterns improve in the trajectory the framework predicts. The flat-then-inflection curve, with the inflection coinciding exactly with engagement start, is the methodology's signature working as designed.

What this doesn't prove: that consumer AI tooling traffic dynamics translate directly to B2B SaaS. They don't. B2B SaaS has narrower buyer pools, longer consideration cycles, and different citation surfaces. A B2B SaaS engagement will produce the same shape of curve, but the timing and absolute numbers will differ. Expect 60 to 90 days to inflection rather than 30, and expect total impressions and clicks to be smaller because the addressable audience is smaller. The first true B2B SaaS case study is the next one to publish. That work is in progress with Zilwaris's first founding clients.

Lessons: what I'd take into the next engagement

  1. The intervention point is the proof. Pre-engagement baseline data matters more than peak numbers. A flat line followed by an inflection coinciding with engagement start is the only way to attribute results to the work rather than to coincidence. Documenting the baseline before starting is non-negotiable for every future engagement.
  2. CTR is the early signal. Before clicks scale, CTR climbs. If the right content is being placed, even pre-inflection impressions pull above-average click rates. Watching CTR independently of clicks is the leading indicator I trust most.
  3. The "unassigned" channel matters and is hard to measure. Almost half of recent traffic isn't categorically attributable. Some of it is AI search, some is dark social, some is direct from bookmarks. Better tooling for AI referral attribution is needed industry-wide.
  4. Schema and named-entity density are the real moats. The content that pulled the highest CTR was the most operationally specific: named tools, named comparisons, schema-marked FAQs. Generic explainers underperformed even when they ranked.
  5. Industry context is everything. A consumer AI tool with active demand and weak long-tail competition produced a 30-day inflection. A B2B SaaS in a competitive niche with mature category authority will not produce the same curve in the same window. Setting that expectation before signing is mandatory.
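
The "schema-marked FAQs" in lesson 4 refer to structured data of the FAQPage type. The exact markup shipped on the client pages isn't shown here; this is a minimal, generic sketch of FAQPage JSON-LD, with illustrative question and answer text:

```python
import json

# Minimal FAQPage JSON-LD sketch (lesson 4). Question/answer copy is
# illustrative, not the client's actual markup.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is the tool free to use?",
            "acceptedAnswer": {
                "@type": "Answer",
                # Answer-first, entity-dense copy, per the Engineer phase.
                "text": "Yes. The free tier covers standard image generation.",
            },
        }
    ],
}

# Embedded in the page as <script type="application/ld+json">...</script>.
print(json.dumps(faq, indent=2))
```

The same answer-first discipline applies to the visible copy: the markup only amplifies content that is already shaped for extraction.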