An industry of 27+ AI visibility tracking tools has emerged — at an average price of $337 per month — backed by $31 million or more in venture funding. Most of them answer only one question: “Am I mentioned?” They cannot tell you whether the mention is accurate, whether your pricing is being hallucinated, or whether the tool claiming to measure your visibility is itself conflating platforms that share only 11% domain overlap.
In our last analysis, we showed that AI visibility measurements contradict each other depending on who measures what — 615× citation variation across platforms, 36% visibility decline in five weeks, and only 11% domain overlap between ChatGPT and Perplexity. We asked whether “AI visibility” is even a coherent concept. These 27+ tools have emerged to answer that question. Most of them make it worse.
The confusion runs deeper than features. One of the three products most commonly compared — GrowthX — isn’t even a tool. It’s an agency. The market is building dashboards for a problem it hasn’t yet defined.
Why Does GrowthX Appear in Every AI Tools Comparison?
This post exists partly because of a Google Search Console query: “profound vs growthx for measuring referral traffic from llms.” The comparison is natural to search for but structurally misleading. GrowthX is an agency — it provides expert-led, AI-powered growth strategy. You hire people. You don’t log in to a dashboard.
The confusion matters because it reveals something deeper. The market hasn’t decided whether AI visibility is a tool problem (buy software, monitor dashboards, interpret data yourself) or a service problem (hire experts who run strategy and execute content). Both answers have merit. For a Fortune 500 brand with an in-house team, Profound is the right shape. For a small business without the expertise to interpret 615× citation variation across platforms, an agency may deliver more value than any dashboard. The fact that search queries conflate the two tells you the category is still pre-definition.
What Do AI Visibility Tools Actually Measure?
The Ekamoira comparison usefully categorises these 27+ tools into three tiers: Monitor (run prompts, check for mentions, report frequency), Intelligence (analyse citation patterns, competitor tracking, cross-platform reconciliation), and Execution (content optimisation, automated response, workflow integration). Most tools sit firmly in tier one. They count how often you appear. They don’t check whether what appears is true.
This is where the structural weakness lives. The founder of LLMClicks tested 10 platforms and found Otterly tracked 18 mentions across its coverage — marking all 18 as positive sentiment. Four of those 18 contained factually wrong information about the brand being tracked. Otterly detected zero of the four. No alert. No flag. The dashboard showed green across the board while AI was actively lying about the product.
This matters because the business question isn’t “am I mentioned?” It’s “is what’s being said about me correct?” But here’s the counter: Conductor’s 2026 benchmarks (13,770 domains, 3.3 billion sessions) show 87.4% of all AI referral traffic comes from ChatGPT. So ChatGPT-only monitoring — like Profound’s $99 starter — isn’t irrational. But citation rates tell a different story: ChatGPT cites URLs only 0.7% of the time versus Perplexity at 13.8%. You can dominate the traffic source that barely links back to you, or the one that links back twenty times more often but sends far less total traffic.
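To make that trade-off concrete, here is a back-of-envelope sketch. ChatGPT's 87.4% traffic share and the 0.7% vs 13.8% citation rates come from the figures quoted above; Perplexity's traffic share is not reported here, so the 4% used below is purely a hypothetical placeholder.

```python
# Illustrative arithmetic only. chatgpt_share and the citation rates
# are the figures quoted in this article; perplexity_share is a
# hypothetical placeholder, not a reported statistic.
chatgpt_share, chatgpt_cite = 0.874, 0.007
perplexity_cite = 0.138
perplexity_share = 0.04  # HYPOTHETICAL

# Rough chance an AI answer both comes from the platform and links to you:
chatgpt_linked = chatgpt_share * chatgpt_cite
perplexity_linked = perplexity_share * perplexity_cite

# How much more often Perplexity links back per answer:
ratio = perplexity_cite / chatgpt_cite
print(f"ChatGPT: {chatgpt_linked:.4f}, Perplexity: {perplexity_linked:.4f}, "
      f"citation ratio: {ratio:.1f}x")
```

Even with a generous hypothetical share for Perplexity, the linked-traffic expectations land in the same order of magnitude, which is why neither "track ChatGPT only" nor "chase the high-citation platform" is obviously wrong.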
How Do Profound, Otterly, and Rankability Compare?
As we covered above, GrowthX is an agency, not a tool. The actual three-way comparison is between the software platforms that compete in this space: Profound, Otterly, and Rankability.
Profound is the enterprise option. A $35 million Series B led by Sequoia Capital positions it as the category leader by funding. The $99 starter plan tracks ChatGPT only with 50 prompts per month. The $499+ Lite plan covers four platforms with 200 prompts. It monitors 8+ AI platforms via direct interface rather than API, which gives it broader coverage than most competitors. G2 rates it 4.6/5 and Capterra 4.9/5. The weakness: no self-serve access on premium tiers, no content execution, and — critically — no accuracy detection.
Otterly is the mid-market workhorse. It launched in October 2024 and has built to 15,000+ users. The $29 starter covers 15 prompts per month. But the jump to the $189 Standard plan (100 prompts) represents a roughly 550% price increase. The $989 Pro plan unlocks 1,000 prompts. Gemini and AI Mode tracking require paid add-ons at $9–$149 per month extra. Clean UI, Semrush integration, good entry price — but the same blind spot: no accuracy detection, no hallucination alerts.
Rankability takes a different approach. Starting at $149 per month, it combines traditional SEO and AI tracking in a single platform. Its AI Analyzer tracks ChatGPT, Perplexity, and Google AI. For agencies managing multiple clients, this is the strongest proposition — one platform for both legacy search and AI visibility. The trade-off: its AI-specific features are newer and less mature than Profound’s or Otterly’s dedicated coverage.
The row that matters most in any comparison table is the one where every cell says “No.” None of these tools detect accuracy. None flag hallucinations. The category has standardised around mention frequency as its core metric — and mention frequency is the wrong question for any business whose product AI is actively misrepresenting.
Do You Actually Need Any of These Tools?
The sceptic position deserves a serious hearing. AI referral traffic accounts for 1.08% of all website traffic. At the $337 per month average, you’re spending roughly $4,000 (about £3,200) per year to monitor a rounding error. GA4 custom channel groups with regex filters can segment AI referral traffic at no cost. Bing Webmaster Tools offers AI traffic reports. Google Search Console surfaces AI Overview data. Manual prompt testing — asking ChatGPT and Perplexity about your business once a week — costs nothing.
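As a sketch of what those free regex filters look like, the snippet below mimics a GA4 custom channel group rule in Python. The domain list is an assumption for illustration; referrer hostnames change as platforms rebrand, so any real filter needs periodic maintenance.

```python
import re

# Illustrative referrer pattern for an "AI" channel group. The domain
# list is an assumption, not an official registry; keep it updated.
AI_REFERRER = re.compile(
    r"(^|\.)(chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
    r"gemini\.google\.com|copilot\.microsoft\.com)$"
)

def is_ai_referral(referrer_host: str) -> bool:
    """Return True if the referrer hostname matches a known AI source."""
    return bool(AI_REFERRER.search(referrer_host.lower()))

print(is_ai_referral("chatgpt.com"))        # True
print(is_ai_referral("www.perplexity.ai"))  # True
print(is_ai_referral("news.google.com"))    # False
```

The same alternation pattern, minus the Python wrapper, is what you would paste into a GA4 channel group condition matching on session source.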
The smart move may be to invest in fundamentals — structured data, review platform presence, entity building — that our previous analyses show drive AI citations, and wait for the tooling to mature.
But the counter to the counter is real: 1.08% understates influence because AI-assisted discovery — where users ask AI, then search directly — doesn’t appear in referral data. And $31 million or more in venture funding means these tools are getting better fast. The question isn’t whether you’ll need AI visibility tracking. It’s whether you need it at today’s prices and today’s feature sets.
One more thing the sceptic should know: the review ecosystem reviewing these tools has structural conflicts. Rankability publishes reviews of Profound. LLMClicks reviews Otterly. The entire evaluation layer is written by competitors evaluating each other. When every comparison article comes from a rival, whose assessment do you trust?
Uncomfortable Questions We’re Still Working Through
None of these tools detect hallucination. If ChatGPT tells a prospect your product costs £79 when it costs £49, no current tool will flag it. The LLMClicks 10-platform test confirmed this across every major tracker. The accuracy gap isn’t a feature request — it’s a structural failure in how the category defines its own value. An AI visibility tool that can’t tell you when AI is wrong about you is measuring the wrong thing.
The review ecosystem is reviewing itself. When every comparison article is written by a competitor, who evaluates the evaluators? This is also our disclosure: Findcraft operates its own AI visibility scanner — a free tool that competes, at a basic level, with the products reviewed here. We name this conflict because you deserve to evaluate our analysis knowing the incentive behind it.
Does blocking AI crawlers affect your visibility scores? This is the question we’re researching next. 79% of top news sites block at least one AI training bot, and 71% also block search and retrieval bots. These tools measure citation visibility — but if the content they’re measuring was blocked from training in the first place, what are the scores actually tracking? The technical architecture of robots.txt doesn’t cleanly separate training from retrieval, and the consequences of getting this wrong are invisible. We’re looking at this data more closely.
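For context on why robots.txt doesn’t cleanly separate the two, blocking decisions are made per user agent, roughly as in the sketch below. The user-agent names follow vendor documentation at the time of writing and may change; treat this as an illustration, not a recommended policy.

```
# Illustrative robots.txt: block training, allow retrieval.
# User-agent names per vendor docs at time of writing; verify before use.

# OpenAI's model-training crawler
User-agent: GPTBot
Disallow: /

# OpenAI's search-indexing crawler
User-agent: OAI-SearchBot
Allow: /

# Fetches pages on behalf of a ChatGPT user at answer time
User-agent: ChatGPT-User
Allow: /
```

A site that sets only a blanket `Disallow` for every AI bot removes itself from retrieval as well as training — which is exactly the ambiguity that makes visibility scores hard to interpret for the 79% of publishers blocking something.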
Frequently Asked Questions
Which AI visibility tool should I use if I’m just starting out?
Otterly at $29 per month is the lowest-cost entry point for basic mention tracking. But understand what you’re getting: frequency data, not accuracy data. If AI is hallucinating your pricing or inventing features, no current tool at any price point will tell you. Start with free alternatives — GA4 AI channel groups and manual prompt testing — before committing to a subscription.
Is Profound worth the price for small businesses?
Profound is designed for Fortune 500 brands. The $99 starter tracks only ChatGPT with 50 prompts. For small businesses, that’s a narrow window into one platform when citation patterns vary 615× across platforms. The value proposition strengthens significantly at enterprise scale with the $499+ multi-platform plans.
Can I track AI referral traffic without paid tools?
Yes. GA4 custom channel groups with regex filters can segment traffic from ChatGPT, Perplexity, Gemini, and other AI sources at no cost. Bing Webmaster Tools provides AI-specific traffic reports. Google Search Console surfaces AI Overview data. This won’t tell you about brand mentions in AI responses, but it will show you the traffic that actually reaches your site.
What’s the difference between GrowthX and tools like Profound or Otterly?
GrowthX is an agency — you hire people who run AI visibility strategy and content production for you. Profound and Otterly are software platforms — you log in, configure tracking, and interpret the data yourself. The comparison appearing in search results reflects category confusion, not direct competition.
Further Reading
These are independent sources — not Findcraft content:
- 2026 AEO/GEO Benchmarks Report — Conductor. 13,770 domains, 3.3 billion sessions. The definitive AI traffic baseline.
- 10 AI Visibility Tracker Tools Tested — LLMClicks. The accuracy test that found zero hallucination detection across major trackers.
- 27+ AI Brand Visibility Tools Compared — Ekamoira. The most comprehensive capability-tier comparison available.
- State of GEO Q1 2026 — Superlines. Market overview including funding data and citation rate disparities.
- How Much Should You Pay for AI Visibility Tools? — Rankability. Pricing analysis across the market (note: Rankability is itself a tool reviewed in this post).
- Publishers Blocking AI Study — BuzzStream. Data on 79% of top news sites blocking AI training bots.
Incentive disclosure: Findcraft is an AI visibility consultancy — we sell the services this article discusses. That’s a conflict of interest worth naming. Findcraft also operates its own AI visibility scanner — a free tool that competes, at a basic level, with the products reviewed here. We’ve written this piece to be genuinely useful whether or not you ever contact us. But we’d be dishonest if we didn’t acknowledge that we benefit when businesses take AI visibility seriously. Read accordingly, verify independently, and trust your own judgement.
Content methodology: This post was produced through the M.A.R.C. methodology (Machine-Assisted, Research-driven, human-Curated content). AI tools assisted with research synthesis and drafting. A human reviewed all claims, verified all sources, and made all editorial decisions. Every statistic links to its primary source.