Why Zero-Click Searches Happen (And What to Do When Clicks Disappear)

Zero-click searches happen when SERPs answer queries directly. Learn why CTR collapses while rankings stay stable, and how to measure and engineer visibility when clicks disappear.

January 4, 2026 · 14 min read
[Hero image: medieval landscape with converging paths that end before reaching their destination, floating ember lights symbolizing zero-click searches]

Zero-click searches happen when the search engine results page (SERP) answers the query directly, so users don't need to click through to a website. Featured snippets, knowledge panels, and AI Overviews all satisfy intent without requiring an outbound click.

Key takeaways:

  • The SERP got taller: more pixels of answers appear before the first traditional result
  • The click moved inside: users interact with AI summaries and Google-owned properties instead of clicking out
  • The new goal is being cited and remembered, not just being ranked

The bottom line: If you only measure clicks, you'll think you're losing even when you're winning visibility.

What the research shows

  • 15% → 8%: Clicks on traditional results drop by nearly half when an AI summary appears (Pew Research Center, 2025)
  • 58.5% of U.S. Google searches end with zero clicks (SparkToro + Datos, 2024)
  • ~34.5% lower CTR for position #1 when AI Overviews appear (Ahrefs, 2025)
  • -37% CTR when AI Overviews and featured snippets both appear (Amsive, 2025)

"Rankings are stable. CTR collapsed. What happened?"

That's the question showing up in SEO communities everywhere. And it's confusing because the traditional playbook says: if you're ranking, you're winning. But now you can hold position #1 and watch your clicks evaporate.

This isn't a penalty. The SERP just got taller.

AI Overviews, featured snippets, knowledge panels, "People Also Ask" boxes—they all answer the question before users reach the organic results. Google's incentive is clear: keep users on Google longer. Your traffic is collateral damage.

Understanding zero-click is table stakes. The hard part is building the footprint that gets you cited when the SERP answers the question—and tracking that visibility even when the click never happens.

Here's what you need to know, what to measure, and what to build.

Check if your brand appears in ChatGPT, Perplexity, and Google AI Overviews →

Why do zero-click searches happen in the first place?

Zero-click happens because search engines have gotten better at synthesizing answers directly on the results page. Users get what they need without clicking through.

This isn't new. Featured snippets and knowledge panels have existed for years. But AI Overviews changed the scale.

The SERP is turning into the answer

Google now answers queries in three main ways before showing traditional results:

  1. Featured snippets: Pull a direct answer from a webpage and display it at the top
  2. Knowledge panels: Show structured data (business info, people, places) without requiring a click
  3. AI Overviews: Synthesize information from multiple sources into a generated summary

Each of these reduces the need to click. According to Pew Research, when an AI summary appears, traditional result clicking drops from 15% to 8%. And only 1% of users click the links inside the AI summary itself.

Why it's accelerating now

Two things are driving the acceleration:

AI Overviews are rolling out broadly. In the Pew study from March 2025, AI summaries appeared in 18% of searches sampled. Google continues expanding where they show up.

Query fan-out is happening. According to Google Search Central, AI features may use "query fan-out"—running multiple related searches behind the scenes to develop a response. Your content might inform an AI answer without you ever seeing that query in Search Console.

"For years, digital marketers have focused on optimising search engine rankings to drive brand discoverability and traffic to their websites. Now, AI-generated search results are rewriting the rules, and SEO optimisation is no longer enough." — Natasha Sommerfeld, Partner, Bain & Company

How much is clicking actually dropping?

The macro numbers are stark. But the impact varies by query type and SERP layout.

Most searches already end without an open-web click

SparkToro and Datos analyzed clickstream data and found that 58.5% of U.S. Google searches and 59.7% of EU Google searches resulted in zero clicks. For every 1,000 searches, only 360-374 clicks went to the open web.

Year-over-year, the trend is accelerating. According to Search Engine Land (covering Datos + SparkToro data), between March 2024 and March 2025:

  • Searchers clicking an organic result: 44.2% → 40.3%
  • Searches ending with no click: 24.4% → 27.2%
  • Searches clicking a Google-owned property: 12.1% → 14.3%

That last number matters. It's not just that clicks are disappearing—they're moving to Google's own properties.

AI Overviews change CTR even when you rank #1

Position #1 used to mean something. Now it depends on what's above it.

Ahrefs studied 300,000 keywords and found that AI Overviews correlate with approximately 34.5% lower CTR for the top-ranking page.

Amsive's study of 700,000 keywords found:

  • Overall CTR decline: -15.49% for keywords triggering AI Overviews
  • Non-branded CTR decline: -19.98%
  • AI Overview + Featured Snippet overlap: -37.04% CTR decline

The worst-case scenario is a query where both an AI Overview and a featured snippet appear. Nearly 40% of your expected clicks vanish.

Why are rankings stable while CTR collapses?

This is the pattern that confuses people: "I'm ranking in the same position, but my clicks dropped off a cliff."

As one practitioner put it on r/bigseo: "CTRs collapsed... rankings are stable (1-3)."

There are two things happening.

The page got taller

Your position #1 ranking hasn't changed. But the SERP real estate above it has.

If an AI Overview takes up 400 pixels, a featured snippet takes 200 more, and "People Also Ask" adds another 300, your position #1 result might be below the fold on mobile. Users never see it.

This is a layout problem, not a ranking problem. Your rank is the same. Your visibility is not.

The click moved inside the SERP

When users do engage, they're increasingly staying within the SERP. Clicking an AI Overview link. Expanding a "People Also Ask" answer. Going to a Google-owned property like YouTube or Maps.

Google CEO Sundar Pichai has claimed that "if you put content and links within AI Overviews, they get higher clickthrough rates than if you put it outside of AI Overviews."

We can't verify this because Google doesn't share the breakdown publicly. But it signals where Google sees the future: engagement inside their AI features, not clicks to external sites.

Which query types go zero-click, and which still drive clicks?

Not all queries are equally affected. Understanding the difference helps you prioritize.

"Instant answer" queries are the hardest hit

These are queries where the SERP can fully satisfy intent:

  • Definitions: "What is SEO?"
  • Simple facts: "Capital of France"
  • Calculations: "100 USD to EUR"
  • Weather and time

If Google can answer in a sentence or a quick data lookup, there's little reason to click. These queries have always had low CTR potential, but AI Overviews make them even more likely to end on the SERP.

"Investigation" queries still drive clicks

Some queries require depth that doesn't fit in a SERP feature:

  • Pricing and vendor comparisons: "Best CRM for startups"
  • Complex how-tos: "How to set up conversion tracking in GA4"
  • Reviews and evaluations: "Is [Product] worth it?"

Users doing research often need to visit multiple sources. AI Overviews can provide a starting point, but they can't replace the need to evaluate options or execute multi-step processes.

"YMYL" queries require trust signals beyond the SERP

"Your Money, Your Life" queries—health, finance, legal—carry higher stakes. Users are more likely to click through to verify sources when the decision matters.

But here's the catch: if you're not cited in the AI Overview, users may never scroll far enough to find you. For these queries, being absent from the answer can hurt more than ranking a few positions lower.

What should you measure instead of clicks?

Clicks are an incomplete picture now. You need additional metrics.

But here's the practitioner reality: as one r/bigseo user put it, "GA shows we're getting traffic from ChatGPT but I have no clue what the queries are." And from r/TechSEO: "LLMs don't pass referrers, everything just gets dumped into direct."

Search Console and analytics won't tell you everything. Here's what to track instead.

CTR deltas by query class

Segment your keywords by intent type (instant answer, investigation, YMYL) and track CTR changes separately. A 20% CTR drop on "what is [term]" queries is expected. A 20% drop on "best [product] for [use case]" might signal a problem.

Watch for AI Overview triggers. If CTR drops but impressions stay flat, an AI Overview probably started appearing for that query.
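The segmentation above is simple enough to script. Here's a minimal sketch, assuming you've exported query-level rows (query, clicks, impressions) from Search Console and maintain your own intent labels; the queries and numbers below are illustrative, not real data.

```python
from collections import defaultdict

# Hypothetical Search Console export rows: (query, clicks, impressions).
rows = [
    ("what is seo", 40, 2000),
    ("best crm for startups", 90, 1500),
    ("is acme crm worth it", 30, 600),
]

# Hand-maintained intent labels (an assumption: you classify these yourself).
intent_map = {
    "what is seo": "instant_answer",
    "best crm for startups": "investigation",
    "is acme crm worth it": "investigation",
}

def ctr_by_class(rows, intent_map):
    """Aggregate clicks and impressions per intent class, then compute CTR."""
    clicks = defaultdict(int)
    imps = defaultdict(int)
    for query, c, i in rows:
        cls = intent_map.get(query, "unclassified")
        clicks[cls] += c
        imps[cls] += i
    return {cls: clicks[cls] / imps[cls] for cls in imps}

print(ctr_by_class(rows, intent_map))
```

Run this on two date ranges and diff the results per class: a drop confined to "instant_answer" is expected erosion, a drop in "investigation" deserves a closer look.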

Visibility scoring across engines

Build a stable prompt set—a list of 20-50 queries your prospects actually ask—and run them weekly across:

  • Google (with and without AI Mode)
  • ChatGPT
  • Perplexity
  • Claude

Score each response: Are you mentioned? Are you cited with a link? Are competitors mentioned instead?

This is visibility measurement. Clicks are one signal; being part of the answer is another.
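Scoring can be as simple as string checks over each engine's response text. A minimal sketch, where the brand name, domain, and competitor names are placeholders and the response text is whatever you collect from each engine, manually or via whatever API access you have:

```python
# Placeholder brand identifiers -- substitute your own.
BRAND = "Acme Analytics"
DOMAIN = "acmeanalytics.example"
COMPETITORS = ["RivalMetrics", "StatForge"]

def score_response(text):
    """Score one engine response: mentioned, cited with a link, competitors present."""
    low = text.lower()
    return {
        "mentioned": BRAND.lower() in low,
        "cited": DOMAIN in low,  # a citation link will usually contain the domain
        "competitors": [c for c in COMPETITORS if c.lower() in low],
    }

sample = ("For dashboards, RivalMetrics and Acme Analytics are common picks "
          "(acmeanalytics.example).")
print(score_response(sample))
```

Logged weekly per prompt and per engine, these three booleans are enough to see trends: share of prompts where you're mentioned, share where you're cited, and which competitors keep appearing instead of you.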

Assisted conversion thinking

If AI answers your prospect's question and mentions you by name, they may go directly to your site. That shows up as direct traffic in analytics, not organic.

Look for correlations: when your visibility scores go up, does direct traffic increase? When you publish new content, do branded searches rise?

You may win the answer and lose the click attribution. That's fine—as long as you're tracking both.

The operational reality: Understanding zero-click is table stakes. The execution—tracking visibility across AI engines, engineering presence in communities and comparisons, leveraging expert time efficiently—is where most teams get stuck. That's the Track → Engineer → Leverage → Own system we build for clients.

Check if your brand appears in ChatGPT, Perplexity, and Google AI Overviews →

What should you change on-site vs off-site?

Winning in a zero-click world requires two things: (1) make your content easy to cite, and (2) build presence across every surface AI looks at.

On-site: answer-ready blocks

AI systems pull from content that answers questions directly. Structure matters.

Write standalone definitions. "X is Y" format in the first sentence of key sections. Make it quotable.

Use clear question-answer pairs. H2s as questions, first paragraph as the direct answer. Then expand.

Add proof density. Statistics with sources. Expert quotes with attribution. Numbered steps for processes.

One r/ContentMarketing user asked: "How do you write so that an LLM picks YOUR paragraph as the definitive answer?" The answer is: be specific, be direct, and include evidence.

On-site: schema helps parsing, but isn't magic

A common question from r/TechSEO: "Has anyone ran a split test on this? Content with Schema vs. without Schema in AI responses?"

Schema helps search engines parse and disambiguate your content. It's useful. But it's not a switch that guarantees citations.

Citations follow three things: trust (are you a credible source?), quote-ready answers (can AI pull a clean response?), and reinforcement (does other content reference you?).

Schema is one lever. Not the lever.
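For reference, the lever itself is small. This sketch generates FAQPage JSON-LD from question/answer pairs; the schema.org types and properties (`FAQPage`, `Question`, `Answer`, `mainEntity`, `acceptedAnswer`) are real, while the Q&A content is illustrative.

```python
import json

# Your question/answer pairs (illustrative content).
faqs = [
    ("Why do zero-click searches happen?",
     "The SERP answers the query directly, so users have no need to click through."),
]

# Build the schema.org FAQPage structure.
jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faqs
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(jsonld, indent=2))
```

Shipping this takes minutes, which is exactly why it can't be the differentiator: everyone can do it. The trust and reinforcement work can't be templated.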

Off-site: citation gravity

This is where most teams underinvest. AI doesn't just pull from your website. It pulls from:

  • Lists and roundups (top tools, best services)
  • Comparison articles on third-party sites
  • Community discussions (Reddit, Quora, industry forums)
  • Reviews and mentions across the web

If you only exist on your own domain, you're invisible to most of what AI synthesizes. Building presence across these surfaces—what we call "citation gravity"—is the engineering work that makes citations possible.

The "answer-ready blocks" workflow

Here's a way to scale this without burning out your experts.

Extract the 10 questions your market asks

Mine your sales calls, support tickets, and Reddit threads for the questions prospects actually ask. Not keyword variations—actual questions.

Pick 10 that matter most for your business.

Turn each into a 120-word answer + a 3-row table + one proof block

For each question:

  1. Write a direct 120-word answer (quotable, standalone)
  2. Create a simple comparison or decision table (3 rows max)
  3. Add one proof block (stat with source, or expert quote with attribution)

This takes maybe an hour of SME time per block. The output is reusable across pages, FAQs, community answers, and comparison content.

Ship the blocks across pages, comparisons, and community answers

One block shouldn't live in one place. Use it:

  • On your definitive guide page
  • In your FAQ section
  • As a Reddit or Quora answer (where appropriate)
  • In guest content or comparison articles

Reuse creates reinforcement. Reinforcement creates citation gravity.

A simple operating cadence

Zero-click isn't a one-time fix. The SERP layout shifts. AI features evolve. You need a repeatable system.

Weekly: prompt set scoring + query class watchlist

Every week, run your stable prompt set across ChatGPT, Perplexity, and Google AI Mode. Score visibility and track changes.

Maintain a watchlist of your highest-value keywords. Check for new AI Overview triggers and CTR shifts in Search Console.

Monthly: refresh proof blocks, update answer-ready pages

Review your answer-ready blocks. Are the stats still current? Are there new proof points to add?

Update your key pages based on what's working. Double down on content that's getting cited.

Quarterly: re-sample prompts, evaluate competitors, reclassify queries

Expand your prompt set based on new questions you're hearing.

Check which competitors are showing up in AI answers. What are they doing that you're not?

Reclassify your query types. Some "investigation" queries may have shifted to "instant answer" as AI features expand.


Ready to see where you're invisible?

We'll run your key queries through ChatGPT, Perplexity, and Google AI Overviews and show you exactly where competitors get cited and you don't. Takes 30 minutes.

Get your AI visibility audit →

Not ready for an audit? Read our complete guide to GEO →


Frequently asked questions

How do you know what prompts are making LLMs mention your brand?

You can't see the exact prompts users type. But you can build a stable prompt set—queries your prospects likely ask—and run them across AI engines weekly. Tools exist to automate this, but even manual sampling gives you visibility. The goal is trends over time, not perfect measurement.

LLMs don't pass referrers. How do I attribute this traffic?

Most LLM traffic shows up as "direct" in analytics because referrers aren't passed. Look for correlations: when your AI visibility scores improve, does direct traffic increase? Track assisted signals like branded search volume and time-on-site for direct visitors.

Does schema markup help get cited by AI?

Schema helps search engines parse your content and disambiguate entities. That's useful. But it's not a guarantee of citations. AI systems prioritize trust, quote-ready answers, and third-party reinforcement. Schema is one input, not a magic switch.

How do you write content that AI systems want to quote?

Be direct. Answer questions in the first sentence. Use "X is Y" format for definitions. Add proof (statistics with sources, expert quotes). Structure content with clear H2s as questions and concise first-paragraph answers. Make every key sentence standalone and quotable.

How can I find which websites get the most traffic from LLMs?

There's no public leaderboard. Some tools estimate AI traffic share based on referrer data and crawl patterns, but the data is incomplete. Focus on what you can control: your visibility in AI answers and the citation gravity you build across the web.



Typescape makes expert brands visible everywhere AI looks. Get your AI visibility audit →