Most teams talk about “AI visibility” like it’s one thing. New data covering 3.7 million citations across ChatGPT, Perplexity, and Google AI Overviews suggests it isn’t. And the gap between the three engines is wider (and more strategically important) than your scorecard probably admits.
Today’s memo breaks down:
- Why a blended AEO score hides the only result that matters.
- Which types of pages and domains actually travel across the engines.
- The shift from measuring AI presence to measuring portability.
One of the biggest differences between AEO and SEO is that AEO plays out across more platforms.
Omnia’s data shows, across multiple samples, that only 2.35% to 2.45% of cited URLs appeared in ChatGPT, Perplexity, and Google AI Overviews for the same prompt. 91% of citations appeared in a single engine.
Bottom line: AI visibility is not one ranking. It is three different distribution systems that sometimes overlap and usually don’t.
Only ~2% of URLs are cited by all three engines
Most people would assume that if a URL is cited by one major AI engine, it has a reasonable chance of appearing in others.
But the sample of 20,000 prompts shows only 2.37% of cited URLs appear in all three engines for the same prompt.
Meanwhile, 91.07% show up in just one. These two numbers explain each other: if 91% of citations are engine-exclusive, there is little left to overlap. The remaining ~7% overlap in pairs, meaning the engines draw from largely disjoint pools rather than ranking the same pool differently.
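The overlap math reduces to a set operation per (prompt, URL) pair. A minimal sketch, assuming citation logs arrive as (prompt, engine, url) tuples; the field layout and engine labels are illustrative, not Omnia’s actual schema:

```python
from collections import defaultdict

def overlap_distribution(citations):
    """citations: iterable of (prompt, engine, url) tuples.
    Returns the share of cited URLs seen by exactly 1, 2, or 3 engines
    for the same prompt."""
    engines_per_url = defaultdict(set)
    for prompt, engine, url in citations:
        engines_per_url[(prompt, url)].add(engine)
    counts = {1: 0, 2: 0, 3: 0}
    for engines in engines_per_url.values():
        counts[len(engines)] += 1
    total = sum(counts.values())
    return {n: counts[n] / total for n in counts}

sample = [
    ("best crm", "chatgpt", "a.com/guide"),
    ("best crm", "perplexity", "a.com/guide"),
    ("best crm", "google_aio", "a.com/guide"),  # cited by all three engines
    ("best crm", "chatgpt", "b.com/review"),    # engine-exclusive
    ("best crm", "perplexity", "c.com/list"),   # engine-exclusive
]
print(overlap_distribution(sample))  # roughly {1: 0.67, 2: 0.0, 3: 0.33}
```

On the article’s numbers, a real log would put bucket 1 near 0.91 and bucket 3 near 0.02.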

For AEO/SEO teams, this means a single composite visibility score is the wrong unit of measurement. Averaged AEO scores hide this. A brand can look strong in aggregate and still be invisible in two of the three engines. Teams chasing a blended AI visibility number are compressing three ranking systems into one metric and calling it strategy.
The ~2% holds across every cut
The ~2% overlap rate and the ~91% exclusivity rate remain almost perfectly stable across all four samples.

This consistency matters more than the exact decimal point. The consensus gap is not an artifact of one query set or one time window. It appears structural.
In Q3 2025, universal overlap was 2.2%. In Q4 2025 and Q1 2026, it was 2.7%. Engine-exclusive citations dropped from 90.1% to around 88%. So yes, a little convergence. But even after that shift, fragmentation still dominates.
Commercial prompts don’t converge either
The intent split is one of the quietest but most useful parts of the dataset. You could argue that commercial queries should produce more consensus. When someone searches for “best CRM,” “best running shoes,” or “best project management software,” the pool of acceptable sources seems narrower than for general informational prompts.
Surprisingly, the data does not support a big difference.

Commercial prompts show a universal overlap of 2.4%. Informational prompts show 2.0%. Even when the query should constrain the set of answers, the engines still mostly choose different sources.
This cuts against a common instinct in SEO and content strategy. Teams often assume high-intent queries are where shared authority will show up. The opposite is closer to true. Even in commercial territory, each engine’s own retrieval logic, the sources it trusts, and the formats it prefers do most of the work.
Guides beat homepages by 2x
The page-type breakdown below shows that guides and tutorials have the highest cross-engine overlap at 2.3%, followed by blogs at 1.8%, category pages at 1.6%, product pages at 1.2%, and homepages at 1.1%.

Two lessons:
- First, explanatory content travels better than brand or transactional assets. If you want the best chance of appearing across engines, the strongest candidate isn’t the homepage or the product page. It’s the page that helps, explains, compares, or teaches, though keep in mind these are also the formats AI engines can answer directly.
- Second, even the best page types perform poorly in absolute terms. Guides don’t win big across engines. The right reading here isn’t “publish more guides and you’ll win everywhere.” It’s simpler: useful content travels better than branded content.
Visibility is not the same as portability
One of the easiest mistakes in this space is to confuse citation frequency with citation portability. Wikipedia is the clearest example. It appears 16,073 times in the dataset, but only 1.3% of those appearances are universal across all three engines. Reddit appears 14,267 times, but only 0.1% are universal. Reuters appears 1,202 times and lands at 0.0% universal overlap.

This is why portability matters as a metric. A domain can appear everywhere within one engine and barely travel beyond it, which means a brand that looks dominant in an aggregate dashboard may be a single platform away from invisibility. Presence tells you whether you are visible. Portability tells you whether that visibility is resilient.
What this means for operators
The practical implication is simple: stop treating AI visibility as one thing. Break down your domain’s visibility by measuring:
1. Presence: the % of your tracked prompts where your domain appears in any engine. Presence tells you whether you are visible.
2. Portability: the % of your cited URLs that appear in all three engines. Portability tells you whether that visibility is resilient.
3. Focus: the % of your citations that come from a single engine. Focus tells you which engine your current dashboard is secretly built on.
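All three metrics fall out of the same citation log. A minimal sketch, again assuming (prompt, engine, url) tuples with illustrative engine labels, not Omnia’s actual schema:

```python
from collections import Counter, defaultdict

ENGINES = {"chatgpt", "perplexity", "google_aio"}

def visibility_metrics(citations, tracked_prompts, domain):
    """citations: iterable of (prompt, engine, url) tuples across all domains.
    Returns (presence, portability, focus) for the given domain."""
    mine = [(p, e, u) for p, e, u in citations if domain in u]
    # Presence: % of tracked prompts where the domain appears in any engine.
    presence = len({p for p, _, _ in mine}) / len(tracked_prompts)
    # Portability: % of the domain's cited URLs that appear in all three
    # engines for the same prompt.
    engines_per_url = defaultdict(set)
    for p, e, u in mine:
        engines_per_url[(p, u)].add(e)
    portability = sum(
        1 for s in engines_per_url.values() if s == ENGINES
    ) / len(engines_per_url)
    # Focus: share of the domain's citations coming from its biggest engine.
    focus = max(Counter(e for _, e, _ in mine).values()) / len(mine)
    return presence, portability, focus

citations = [
    ("q1", "chatgpt", "mysite.com/guide"),
    ("q1", "perplexity", "mysite.com/guide"),
    ("q1", "google_aio", "mysite.com/guide"),
    ("q2", "chatgpt", "mysite.com/blog"),
    ("q2", "chatgpt", "other.com/review"),
]
print(visibility_metrics(citations, ["q1", "q2", "q3", "q4"], "mysite.com"))
# → (0.5, 0.5, 0.5)
```

A focus value near 1.0 is the warning sign: almost all of your visibility is riding on a single engine.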
If the overlap between engines is this small, a single AEO strategy is too abstract to be useful.
Treating AI visibility as three systems instead of one leads to more pointed questions:
- Which engine matters most to us?
- Which of our assets travel across multiple engines, and which live in only one?
- Are we measuring presence when we should be measuring portability?
It also changes how brand teams should read diagnostics. A homepage that is weak across all engines may not be a homepage problem. It’s a symptom of something larger: the engines prioritize utility over brand centrality. In this world, visibility comes less from being the official source and more from being the useful source.
The strategic question is no longer “How do we rank in AI?” It’s “How do we create assets that survive different engines’ preferences?” That’s a narrower question. It’s also a better one.
Methodology
There are a few caveats to this analysis:
- The dataset skews toward Omnia’s customer base.
- Intent and page type cuts rely on regular expression classification, which is useful for directional analysis but not for perfect taxonomy work.
These caveats don’t meaningfully weaken the main conclusion. The strongest signal isn’t precision at the edges. It’s consistency at the center. However the cuts change, the same pattern resurfaces: very little overlap, very high engine specificity, and only modest differences by timing, intent, or page type.
Dataset size and time window
The analysis is based on four samples: three cohorts of 5,000 prompts each, tracked from January 1, 2025; July 1, 2025; and January 1, 2026, plus a separate random sample of 20,000 prompts that underlies the 2.37% and 91.07% figures. The time-based cut spans Q3 2025 through Q1 2026 (to date) and covers 3.7 million URL citations in total. The commercial/informational/other intent splits draw on approximately 2.6 million URLs in the combined sample. The page-type splits cover 4.1 million URL appearances.
How the prompts were selected
The 20,000 prompts were drawn at random from Omnia’s live monitoring pool. The pool reflects what real marketing teams chose to track, shaped by the geography of Omnia’s customers (primarily Spain, plus the UK, the Nordics, and other EU markets). Each prompt runs in its country’s primary language, so Spanish is overrepresented relative to a US-only dataset. The industry mix skews toward fintech/insurtech, travel, SaaS, and B2B services. Treat the results as directional for European AI search.
Engine coverage
The study covers three engines: ChatGPT, Perplexity, and Google AI Overviews. Each prompt fires on all three engines within the same minute, twice a day, with country-level geolocation, and each engine is queried in its default, web-enabled, unauthenticated state. Perplexity tracking runs on Sonar, while ChatGPT and Google AI Overviews use each vendor’s default production model with web browsing (which neither OpenAI nor Google publicly pins to a specific version).
Classification methodology
Intent and page type are assigned by regex. The intent categories are commercial, informational, and other. The page-type buckets are Guide/Tutorial, Article/Blog, Category Page, Product Page, Home Page, Wikipedia, and Other. The rules are based on keywords and URL patterns, which makes them fast enough for a dataset of millions of URLs but rough around the edges. Edge cases fall into the Other bucket, which is why Other has a high share in both the intent and page-type tables. Treat the regex classification as directional, not authoritative.
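To make the trade-off concrete, a keyword- and URL-pattern classifier of this kind might look like the sketch below. The patterns are illustrative, not the study’s actual rules; anything unmatched falls through to Other, which is why that bucket inflates:

```python
import re

# Ordered rules: first match wins. Patterns are illustrative only.
PAGE_TYPE_RULES = [
    ("Wikipedia",      re.compile(r"wikipedia\.org", re.I)),
    ("Guide/Tutorial", re.compile(r"/(guide|guides|tutorial|how-to|howto)\b", re.I)),
    ("Article/Blog",   re.compile(r"/(blog|article|news|post)s?/", re.I)),
    ("Category Page",  re.compile(r"/(category|categories|collections?)/", re.I)),
    ("Product Page",   re.compile(r"/(product|products|item)s?/", re.I)),
    ("Home Page",      re.compile(r"^https?://[^/]+/?$", re.I)),
]

def classify_page_type(url):
    for label, pattern in PAGE_TYPE_RULES:
        if pattern.search(url):
            return label
    return "Other"  # edge cases fall through, inflating the Other bucket

print(classify_page_type("https://example.com/guides/crm-setup"))  # Guide/Tutorial
print(classify_page_type("https://example.com/"))                  # Home Page
print(classify_page_type("https://example.com/about-us"))          # Other
```

Regex rules like these scan millions of URLs quickly, but a URL such as `/about-us` above shows the cost: anything that doesn’t match a known pattern is simply labeled Other.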
Featured image: FGC/Shutterstock; Paulo Bobita/Search Engine Journal





