The New Vanity Metrics: Why AI Visibility Scores Are the Klout of 2026
There's a new number on your marketing dashboard.
It's called an AI visibility score. It goes up. It goes down. You pay $200 a month to watch it move.
And it has absolutely no proven connection to whether anyone visited your website, bought your product, or even knows your name.
I'm David, founder of Clickport Analytics. I build web analytics software, so take my perspective with the appropriate grain of salt. But I've spent the last two weeks pulling apart how these AI visibility tools actually work, and what I found is a pattern I've seen before. A proprietary number. A secret formula. A dashboard that sells anxiety. And a complete absence of proof that any of it matters.
This isn't a hit piece on specific companies. Some of them are well-funded, have real customers, and employ talented people. The problem isn't the companies. The problem is the category. The entire premise is built on measuring something that cannot be reliably measured, then selling that measurement as insight.
A number that buys you nothing
The SEO industry spent a decade worshiping Domain Authority. A score from 1 to 100 that claimed to predict how well your site would rank on Google.
Google's own John Mueller said it plainly: "We don't use domain authority. That's a metric from an SEO company." Statistician Jen Hood analyzed the data and found DA explains 0.1% of ranking variance for top-5 positions. The creator of the metric, Rand Fishkin, is described as embarrassed by its continued prominence.
And yet. Agencies built entire business models around "improving DA." Job listings required minimum DA scores. An industry of link buying and selling emerged around manipulating a number that Google ignores entirely.
Now replace "Domain Authority" with "AI Visibility Score." Same structure. Same promise. Same gap between the metric and reality.
How AI visibility tools actually work
Here's what happens behind the curtain.
You sign up. You pick 25 to 300 prompts. Things like "best project management tool" or "top CRM for startups." The tool runs these prompts through ChatGPT, Perplexity, and other AI chatbots once a day using automated browser sessions. It parses the responses. Did your brand get mentioned? What position? Positive or negative?
Then it puts a number on your dashboard.
That's it. That's the product.
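To make the thinness concrete, here is a minimal sketch of the scoring step, with the prompt-running stubbed out. Everything here is hypothetical: real tools drive browser sessions or APIs, but the core logic reduces to string matching over a response.

```python
# A hypothetical sketch of an AI visibility tool's core scoring step.
# Real tools automate the prompt-running; the "analysis" looks like this.

def score_response(response: str, brands: list[str]) -> dict:
    """Check which brands a response mentions and in what order."""
    text = response.lower()
    positions = {b: text.find(b.lower()) for b in brands}
    mentioned = {b: p for b, p in positions.items() if p != -1}
    # "Rank" is just the order of first mention -- the whole metric rests on this.
    ranking = sorted(mentioned, key=mentioned.get)
    return {"mentioned": list(mentioned), "ranking": ranking}

response = "For startups, Asana and Trello are popular; Linear is rising fast."
result = score_response(response, ["Asana", "Trello", "Linear", "Jira"])
print(result["ranking"])  # ['Asana', 'Trello', 'Linear']
```

Run that once a day, average the results, apply a proprietary weighting, and you have a visibility score.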
Some tools go further. They buy data from unnamed "third-party consumer panels." People who installed a browser extension or VPN that tracks their AI conversations in exchange for a free product. The tool takes this panel data and projects it to global scale to estimate what millions of people are asking AI.
This is the same methodology that made traffic estimation tools notoriously inaccurate. Screaming Frog tested the major traffic estimators against 25 real websites and found individual site estimates off by up to 94%. Promodo analyzed 184 websites and found an average 50% error rate across all major tools. One tool overestimated search traffic by an average of 3,242%.
Now apply that same methodology to a market 100x smaller and 100x more volatile than web search. That's what AI visibility tools are selling you.
Less than 1 in 100
In January 2026, Rand Fishkin and Patrick O'Donnell from Gumshoe.ai published research that should have ended the AI visibility tool category overnight.
They had 600 volunteers run 2,961 queries across ChatGPT, Claude, and Google AI. Same prompts. Repeated multiple times. They measured consistency.
The result: less than a 1 in 100 chance that an AI tool produces the same list of brand recommendations when asked the same question twice. Less than 1 in 1,000 chance the brands appear in the same order.
Read that again.
The same question. The same AI. Different answer almost every time.
Fishkin's conclusion: "Any tool that gives a 'ranking position in AI' is full of baloney."
This isn't a minor methodological quibble. It's a structural problem. AI responses are non-deterministic by design. They vary by context, time of day, conversation history, and random seed. Building a monitoring tool on top of something that changes with every query is like building a weather station on a trampoline.
The Digiday interviews with publishers confirm the skepticism from the people who would benefit most. Neil Vogel, CEO of People Inc: "This whole conversation is not rooted in any fact." Lily Ray, VP of SEO Strategy at Amsive: "Anybody that's pretending to be an expert in this, they're lying."
[Figure: the same prompt, run three times, returns three different brand lists: Brand D, B, F, C in one run; Brand A, E, G in the next; Brand F, D, A, H, C in the third.]
We've seen this movie before
Every era produces a vanity metric. A number that looks like insight, feels like progress, and measures nothing that matters. The pattern is always the same: a company invents a score, the industry adopts it, businesses optimize for it, and years later everyone quietly admits it was meaningless.
Klout Score (2008-2018). Reduced your entire social media presence to a number from 1 to 100. Justin Bieber scored higher than Barack Obama. The Pope was listed as an expert in "Miss Universe." A spammer with 11 followers achieved a score of 59 by posting 9,156 tweets. Job listings required "Klout score of 35 or higher." The Palms Casino secretly upgraded hotel rooms for high scorers. Analysis showed that follower count alone explained 95% of the variance in scores, making the rest of the algorithm redundant. Shut down in 2018. Nobody missed it.
Alexa Rank (1996-2022). Global website ranking based primarily on data from a browser toolbar. Google's Director of Research Peter Norvig documented a 50-to-1 distortion: his site received twice the pageviews of Matt Cutts' site, yet ranked 25 times lower on Alexa. The toolbar user base was heavily skewed toward webmasters and South Korean users, making rankings meaningless for everyone else. Amazon shut it down in 2022 after years of declining relevance.
Facebook Video Metrics (2015-2018). Facebook overestimated video viewing times by 150-900%. Internal documents showed engineers knew about the error for over a year before disclosure. Trusting these numbers, major media companies pivoted to video, laid off writers, and restructured entire organizations. Vice, Mic, MTV News, Fox Sports, and Mashable all gutted their editorial teams. Reuters Institute data showed users spent only 2.5% of visit time on video pages. Facebook settled for $40 million.
Domain Authority (2004-present). Still alive. Still explains 0.1% of ranking variance. Still spawning an entire industry of link buying. Google still doesn't use it. The pattern refuses to die because the metric is useful to the people selling services around it.
The through-line is always the same. A proprietary score that's easy to track, hard to verify, and impossible to tie to revenue. The people who benefit most are the people selling the tracking tool.
The 190:1 gap
Here's the number that should end every conversation about AI visibility tools before it starts.
ChatGPT processes roughly 12% of Google's search volume. But it sends 190 times less traffic to websites.
That's the gap between "AI mentioned your brand" and "AI sent you a visitor." It's not a crack. It's a canyon.
Pew Research Center studied 68,879 Google searches and found users clicked on links within AI summaries only 1% of the time. Google's own AI Mode ends 93% of sessions without a single click to an external website. All AI platforms combined account for 0.15% of global web traffic.
Let's put a funnel on it: mentions at the top, visits in the middle, conversions at the bottom.
AI visibility tools measure only the top of this funnel. The "1,000 mentions" part. They cannot see the bottom, because they don't track your website. They have no idea whether a single mention turned into a single visit.
And here's the irony: the few visitors who do arrive from AI search are extremely valuable. AI referral traffic converts at 14.2% compared to Google's 2.8%. They spend 67.7% more time on your site. These are high-intent visitors who came to you because AI specifically recommended you.
But you can only see that if you measure the visit. Not the mention.
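The funnel arithmetic is worth doing once, explicitly. The rates below come from the studies cited above; the 1,000-mention starting point is invented for illustration.

```python
# Rough funnel math using the cited rates. The mention count is hypothetical.

mentions = 1_000
click_rate = 0.01         # Pew: ~1% of links inside AI summaries get clicked
ai_conv_rate = 0.142      # RankScience: AI referral conversion rate
google_conv_rate = 0.028  # RankScience: Google referral conversion rate

visits = mentions * click_rate
conversions = visits * ai_conv_rate
print(f"{mentions} mentions -> {visits:.0f} visits -> {conversions:.1f} conversions")
print(f"conversion multiple vs Google: {ai_conv_rate / google_conv_rate:.1f}x")
```

A thousand mentions collapse to a handful of visits, but each visit converts about five times as often as a Google visit. Both halves of that story live in your analytics, not in a mention counter.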
Google is not dying
The entire AI visibility category is built on a premise: AI search is replacing Google, and you need to be ready.
The data says otherwise.
Google holds 89.98% of global search market share as of February 2026. AI chatbots don't even appear in StatCounter's search engine rankings. Google grew 21.64% in 2024, reaching 5 trillion annual searches. Google receives 373 times more searches than ChatGPT.
In February 2024, Gartner predicted that traditional search engine volume would drop 25% by 2026 due to AI. The opposite happened.
SparkToro's research across 41 websites with significant search activity found AI tools account for 3.2% of total search activity. Amazon, Bing, and YouTube all receive more desktop search activity than ChatGPT. 95% of Americans still use traditional search engines monthly. And when users adopt ChatGPT, their Google usage actually increases rather than decreases.
Fishkin puts it directly: "The 'AI vs. Search' narrative is largely made-up by media and influencers seeking attention, rather than an accurate reflection of reality."
This doesn't mean AI search doesn't matter. It's growing. The visitors it sends are valuable. But it represents a tiny fraction of where your traffic actually comes from. Spending $200-500 a month on a specialized dashboard for 0.15% of web traffic, when your analytics tool already shows you that traffic, is a solution in search of a problem.
What gets measured gets managed (into the ground)
There's a law in economics called Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
The Soviet nail factory was given a target for number of nails. It produced tiny, useless nails. When the target changed to weight, it produced enormous, useless nails. The factory optimized perfectly for the metric while producing nothing of value.
AI visibility scores create the same dynamic.
If you measure "visibility in AI responses," you'll optimize for AI responses. You'll restructure your content to be more easily summarized. You'll add the statistics, quotes, and citations that research suggests make AI more likely to mention you. You'll create content that gives AI better raw material to work with.
The Content Marketing Institute identified the paradox: optimizing for AI answers means giving AI better content to summarize, which means AI can give more complete answers, which means fewer people need to click through to your site. You're solving your traffic problem by accelerating it.
The alternative is to measure what actually matters: did someone visit, and what did they do?
Scroll depth tells you if people read your content. Session duration tells you if they stayed. Conversion tracking tells you if they acted. These are observed behaviors, not modeled estimates. They connect directly to business outcomes.
The vanity loop: optimize for the score → AI gives better answers → fewer people click through → score goes up, traffic goes down.
The alternative: find the pages AI visitors land on → improve engagement on those pages → more conversions, longer sessions → revenue goes up. That's the point.
The data you already have
Here's what most people don't realize: you don't need a specialized AI monitoring tool. Your analytics already tracks AI search traffic.
When someone clicks a link in ChatGPT, Perplexity, or Claude, the browser sends a referrer header containing the AI platform's domain. Any analytics tool that reads referrer data can identify these visits. It's the same mechanism that has identified Google, Bing, and social media traffic for decades.
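A referrer classifier is a few lines of code. This sketch shows the mechanism; the domain list is illustrative, not exhaustive, and any real analytics tool maintains its own.

```python
# A minimal referrer classifier: the mechanism described above.
# The AI_REFERRERS map is an illustrative, non-exhaustive assumption.
from urllib.parse import urlparse

AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

def classify_referrer(referrer: str) -> str:
    """Map a Referer header value to a traffic-source label."""
    if not referrer:
        return "Direct"
    host = urlparse(referrer).hostname or ""
    host = host.removeprefix("www.")
    return AI_REFERRERS.get(host, "Other")

print(classify_referrer("https://chatgpt.com/"))                   # ChatGPT
print(classify_referrer("https://www.perplexity.ai/search?q=x"))   # Perplexity
print(classify_referrer(""))                                       # Direct
```

That's the whole trick. No synthetic prompts, no panel data: just a header the browser already sends.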
The difference is what happens next.
A synthetic monitoring tool shows you a mention count. Your analytics shows you a visitor. With a real visitor, you get everything: which page they landed on, how far they scrolled, how long they stayed, whether they clicked anything, whether they converted. You can compare AI search visitors to organic search visitors, to social media visitors, to direct traffic. Same metrics, same dashboard, same connection to outcomes.
SE Ranking analyzed 63,987 websites and found AI visitors spend 67.7% more time on sites than organic search visitors. RankScience analyzed 12 million visits and found AI traffic converts at 14.2% versus Google's 2.8%. Microsoft Clarity data from 1,200+ sites shows AI traffic converts to sign-ups at 1.66% versus 0.15% from search.
These are genuinely useful insights. And they come from observing actual behavior, not from running synthetic prompts through a chatbot once a day.
AI search is a real traffic source. It's small but growing, and the visitors it sends are high-quality. The right response is to track it like you track every other source: with real analytics on your own site. Not with a separate $200/month dashboard that measures what a chatbot said in a simulated conversation.
Measure what happened, not what might happen
The distinction is simple.
Estimated metrics tell you what might be true. Observed metrics tell you what is true.
A visibility score is estimated. It's derived from synthetic prompts, panel extrapolation, and proprietary weighting. It tells you what AI might say about you if someone asks the right question in the right way on the right day.
A visitor session is observed. It's a real human on your real site, recorded by your real analytics. The scroll depth, the time on page, the conversion: all of it happened. No modeling. No extrapolation. No formula.
BCG and Google found that businesses using first-party data for marketing achieved up to 2.9x revenue uplift. Forrester research shows 75% of marketers say collecting real-time behavioral data is critical, yet less than half actually do it. McKinsey found companies making intensive use of customer analytics are 2.6x more likely to have significantly higher ROI.
The entire trend in analytics is moving toward direct measurement and away from modeled estimates. That's why GA4's modeled data draws criticism: it mixes estimates with real events and you can't tell which is which. That's why privacy-focused analytics tools are growing rapidly: people want accurate data from 100% of their visitors, not sampled, modeled, consent-dependent approximations.
The question isn't "is AI talking about you?"
The question is: "Did AI send you a visitor, and what did they do?"
One of those questions costs $200 a month and produces a number you can't act on. The other is a filter in your analytics dashboard.
What the tools measure: share of voice, citation count, estimated reach, sentiment score.
What your analytics measures: pages viewed, scroll depth, conversions, revenue.
What to measure instead
If you cancel your AI visibility tool tomorrow, nothing changes about your business. The score disappears and your traffic, conversions, and revenue stay exactly the same.
Here's what actually matters.
AI search as a traffic source. Track it the same way you track organic search, paid search, and social. When someone arrives from ChatGPT or Perplexity, your analytics records it through the referrer header. No special tool needed. You can see per-platform breakdowns, landing pages, engagement metrics, and conversions.
Engagement quality by source. Compare how AI visitors behave versus other channels. Do they scroll further? Stay longer? Convert at higher rates? This tells you whether AI traffic is valuable to your specific business, not whether AI mentions your name in synthetic tests.
Content performance. Which pages attract AI referral traffic? What do visitors do on those pages? If your guide to "best project management tools" gets 50 visitors a month from AI search and they convert at 14%, that's a concrete, measurable insight worth acting on. If they bounce at 90%, that's equally useful.
Conversion attribution. When an AI visitor fills out a form, starts a trial, or makes a purchase, that conversion is attributed to the AI Search channel. You can calculate actual ROI from AI traffic. Not estimated ROI. Not projected ROI. Actual, observed ROI based on things that happened.
AI crawler activity. Separately from visitor tracking, you can monitor which AI bots crawl your site and how often. This is the supply side: are AI systems indexing your content? Combined with visitor data (the demand side), you get a complete picture without synthetic testing.
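The per-channel comparison described above is an aggregation you can run over session records your analytics already stores. The records and field names below are hypothetical.

```python
# A sketch of comparing engagement by channel over hypothetical session
# records; the schema is an assumption, not any particular tool's API.
from statistics import mean

sessions = [
    {"source": "AI Search", "duration_s": 310, "converted": True},
    {"source": "AI Search", "duration_s": 245, "converted": False},
    {"source": "Organic",   "duration_s": 140, "converted": False},
    {"source": "Organic",   "duration_s": 95,  "converted": False},
    {"source": "Organic",   "duration_s": 180, "converted": True},
]

for source in ("AI Search", "Organic"):
    group = [s for s in sessions if s["source"] == source]
    avg_time = mean(s["duration_s"] for s in group)
    conv = sum(s["converted"] for s in group) / len(group)
    print(f"{source}: avg {avg_time:.0f}s on site, {conv:.0%} converted")
```

Same metrics, same dashboard, same connection to outcomes: exactly what a mention counter cannot give you.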
These aren't revolutionary new metrics. They're the same metrics you use for every other traffic source. That's the point. AI search doesn't require a new category of tooling. It requires the same disciplined measurement you already apply everywhere else.
The question that matters
A growing industry is selling you a number. The number is built on synthetic prompts, estimated panel data, and proprietary formulas. It cannot tell you if a single person visited your site. It cannot tell you if a single dollar of revenue resulted from an AI mention. It fluctuates with every query because AI responses are non-deterministic by design.
The same industry produced Klout scores, Alexa rankings, and Facebook video metrics. All of them looked like insight. All of them measured noise. All of them eventually collapsed under the weight of their own irrelevance.
Meanwhile, every visitor who arrives at your site from an AI chatbot leaves a trail of real data. Where they came from. What they read. How far they scrolled. Whether they converted. This data exists in your analytics right now, waiting for you to look at it.
The question was never "is AI talking about you?"
The question is: "Did anyone show up? And what did they do when they got here?"
That's the only question that has ever mattered. The channel changes. The question doesn't.
