<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Prerender</title>
	<atom:link href="https://prerender.io/feed/" rel="self" type="application/rss+xml" />
	<link>https://prerender.io</link>
	<description>Prerender. JavaScript SEO, solved with Dynamic Rendering</description>
	<lastBuildDate>Wed, 22 Apr 2026 12:41:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://prerender.io/wp-content/uploads/favicon-150x150.png</url>
	<title>Prerender</title>
	<link>https://prerender.io</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>How to Prepare for the Agentic AI Wave: A Conversation with Ray Grieselhuber, CEO of DemandSphere </title>
		<link>https://prerender.io/blog/podcast-ray-grieselhuber-demandsphere/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 11:58:52 +0000</pubDate>
				<category><![CDATA[AI SEO]]></category>
		<category><![CDATA[Podcast]]></category>
		<category><![CDATA[agentic AI]]></category>
		<category><![CDATA[ai podcasts]]></category>
		<category><![CDATA[seo podcasts]]></category>
		<category><![CDATA[tech podcasts]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=8119</guid>

					<description><![CDATA[Tune into our conversation with Ray Grieselhuber, CEO of DemandSphere and a grounded voice in the AI search space.]]></description>
										<content:encoded><![CDATA[
<p>In season two of Get Discovered, we&#8217;re documenting what we&#8217;re calling the discovery crisis: how AI is changing where online discovery happens, what actually gets surfaced, and why so many marketing teams are realizing they don&#8217;t have the visibility or control they thought they did.</p>



<p>In this episode, host Joe Walsh sits down with Ray Grieselhuber, founder and CEO of DemandSphere, a platform that gives marketing teams a unified view of their brand across traditional search, Large Language Models, and agentic search experiences. </p>



<p>The conversation covers everything from indexing to attribution to how to prepare for agentic AI. Whether you’re a growth leader or SEO expert, you’ll definitely find this one valuable.</p>



<p>Listen to the full episode below, or read on for a written summary.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="How to Prepare for the Agentic AI Wave: Ray Grieselhuber, CEO of DemandSphere" width="640" height="360" src="https://www.youtube.com/embed/E1YnDQUo5tY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">The Measurement Contract Is Broken</h2>



<p>Ray opens with a framing that shapes the rest of the conversation: the measurement contract between SEO effort and business outcomes has fundamentally broken down.</p>



<p>It used to be relatively straightforward, even if it was difficult to clearly attribute revenue from SEO. You&#8217;d identify search volume for a topic, create content targeting those keywords, and expect a reasonably predictable number of clicks based on ranking position and clickthrough rates. It was never perfectly deterministic, but it was workable.</p>



<p>That model no longer holds for most sites.</p>



<p><em>&#8220;The big thing is that the measurement from a business perspective, the measurement contract has really gotten broken in many ways… </em><strong><em>You have to bring this mindset that you&#8217;re triangulating directional data from a lot of different sources in order to build a picture.&#8221;</em></strong></p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why the measurement contract in SEO is now broken" width="540" height="960" src="https://www.youtube.com/embed/u9aV2tcfctw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p>What Ray describes is less like reading a dashboard and more like dead reckoning: using multiple imprecise signals together to form a directional picture, then treating the whole thing as a feedback loop rather than a formula. That skillset, he argues, is what will help people survive this wave. Those who&#8217;ve been doing SEO for years already know how to work this way; for those who are new to it, he admits the shift can feel disorienting.</p>



<h2 class="wp-block-heading">The Great Decoupling of Clicks and Impressions</h2>



<p>One of the most concrete phenomena Ray points to is what&#8217;s widely being called “the great decoupling,” a term coined by the well-known SEO expert, <a href="https://www.growth-memo.com/p/the-great-decoupling">Kevin Indig</a>. It refers to the breakdown of the historically tight correlation between impressions (a proxy for search demand), rankings, and traffic. (We also discuss this in our conversations with <a href="https://prerender.io/blog/podcast-noah-greenberg-stacker/">Noah Greenberg, CEO of Stacker</a>, and <a href="https://prerender.io/blog/podcast-klaus-schremser-otterlyai/">Klaus-M. Schremser, CEO of OtterlyAI</a>.) </p>



<p>For a long time, if your content ranked well for high-volume terms, you could count on traffic following. That relationship has weakened significantly. Impressions are up on many sites, but clicks aren&#8217;t following at the same rate because AI Overviews, featured snippets, and other zero-click features are absorbing more of the interaction.</p>



<p>This creates a new kind of challenge, says Ray. You can no longer look at search volume data and confidently project the engagement you&#8217;ll get from capturing that demand. The signal still matters, but it&#8217;s less predictive than it once was.</p>



<p>However, not every site is experiencing this equally. Ray notes that some of DemandSphere&#8217;s customers still show tight correlations. But for most, it&#8217;s a genuine source of anxiety and a real planning problem.</p>



<h2 class="wp-block-heading">Yes, Traditional SEO Still Matters</h2>



<p>One of the more grounded parts of this conversation is Ray&#8217;s insistence that foundational SEO principles haven&#8217;t gone away, a theme we’ve heard from nearly <a href="https://prerender.io/podcast/">every podcast guest</a> this season. They&#8217;ve just shifted where they apply.</p>



<p>A common misconception in the industry is that SEO is either completely dead (the panicked take) or completely unchanged (the defensive SEO-community take). Ray argues that both of these miss the point.</p>



<p>From Ray’s perspective, the reason traditional SEO still matters is largely about the index. Google&#8217;s web index remains the most comprehensive map of the internet that exists, and most Large Language Model applications—whether they&#8217;re running on Bing&#8217;s API or scraping search results more directly—are still drawing on that index for live retrieval.</p>



<p><em>&#8220;I always talk about the index being the prize, and nobody has a better index of the internet than Google. So that makes them incredibly valuable as a platform company. It also means that a lot of the things that still matter for SEO still matter very much within the realm of what most people would call AI search visibility.&#8221;</em></p>



<p>The implication for practitioners: the work you&#8217;re doing to help search engines find, crawl, and understand your content is also the work that helps LLM applications surface it.</p>



<p><em><strong>Further reading:</strong> <a href="https://prerender.io/blog/how-to-get-indexed-on-ai-platforms/">How to Get Indexed on AI Platforms</a></em></p>



<h2 class="wp-block-heading">The Training Layer vs. Retrieval Layer</h2>



<p>Ray draws a distinction that&#8217;s genuinely useful for thinking about LLM visibility: the difference between what a model knows from its training data and what it retrieves in real time.</p>



<p>Training data has a cutoff date. The moment a model launches, its knowledge is already aging. This is why tools like ChatGPT and Perplexity have increasingly added live web retrieval—often called retrieval-augmented generation, or RAG—to supplement what the model was trained on. When you ask an LLM a question about a current topic, the system often goes to a search index to pull fresh information before generating a response.</p>



<p>This means you&#8217;re always dealing with a hybrid system. Some of what shapes your visibility in an LLM response is baked in from training, while some of it is determined in the moment by live retrieval from search indexes.</p>



<p>For SEO and content strategy, this has real implications. Ranking well and being crawlable still matter for the retrieval layer. But what about training?</p>



<p>Ray&#8217;s view is that training data visibility is harder to engineer, but it&#8217;s also shaped by the same underlying forces that drive search indexing. The internet that training datasets pull from has already been shaped heavily by Google&#8217;s index and by the signals that determine what gets crawled and surfaced. Datasets like Common Crawl, which are widely used in LLM training, tend to favor established, well-linked, well-indexed domains.</p>



<p>&#8220;If you&#8217;re one of the top brands in the world, then you&#8217;re going to probably do pretty well on that. If you&#8217;re a smaller brand, your visibility within Common Crawl itself is going to be limited.&#8221;</p>



<p>Ray&#8217;s practical advice: don&#8217;t treat training visibility and live retrieval as separate problems. Both reward the same fundamentals of being indexed, authoritative, and well-structured.</p>



<p><strong><em>Further Reading: </em></strong><a href="https://prerender.io/blog/ai-optimization-technical-seo-guide/"><em>A Technical Guide to Optimize Your Website for AI Search</em></a></p>



<h2 class="wp-block-heading">Why You Should Think of Your Website as a Knowledge Repository</h2>



<p>One of the more conceptually useful ideas in this conversation is Ray&#8217;s suggestion to stop thinking about your website as a marketing function and start thinking about it as an interlinked knowledge repository.&nbsp;</p>



<p>The mental model he offers is something like an internal Wikipedia or a well-maintained personal knowledge base (think Obsidian, or Zettelkasten-style note systems). The idea is that your content should be structured for knowledge retrieval with clear topic relationships, well-organized internal linking, and content that answers specific questions—rather than optimized purely for marketing purposes.</p>



<p>This isn&#8217;t a completely new idea in SEO, but the framing becomes more urgent in an LLM context. Models doing retrieval are, in a sense, trying to pull the most relevant and well-organized information on a topic. Sites that are structured to make that easy are better positioned compared to sites that aren’t.</p>



<p>Ray connects this to a longer-standing belief at DemandSphere: &#8220;Good SEO is good product management and also good corporate strategy.&#8221; You can often tell how well-run a company is by how coherent and discoverable its search surface is.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="How you should think about your website now" width="540" height="960" src="https://www.youtube.com/embed/4PyDQ87YNUo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">How to Think About Agentic AI</h2>



<p>One of the more forward-looking threads in this conversation involves how to think about “agents as users of your website.”</p>



<p>Joe raises the idea, as it’s something the Prerender.io team has been thinking about internally. He notes that the user class for websites is expanding, and it&#8217;s not just humans anymore. Agents are browsing, querying, and making decisions on behalf of users. And as agentic tools become more accessible to non-technical users, the share of traffic and interaction driven by agents is likely to grow.</p>



<p>Ray pushes back slightly on the more radical version of this thesis: he&#8217;s skeptical that human-facing websites will become irrelevant in a three-to-five-year timeframe. But he&#8217;s fully on board with the underlying point: “Yes, you should be thinking about agents as a distinct user class now.”</p>



<p><strong>&#8220;You should definitely be thinking in terms of new protocols to expose your website&#8217;s behavior to agents, because it will be an influencing factor in many ways in that first part of the funnel.&#8221;</strong></p>



<p>The framing he finds useful is a hybrid model. Agents may handle the early stages of discovery and shortlisting to help users narrow down options before they engage directly. But the closer you get to a real decision (especially one involving money), the more likely a human is to want to be in the loop. The website isn&#8217;t going away, of course, but it’s becoming a new layer in a more complex decision stack.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Should you think about agents as a new user class on your website?" width="540" height="960" src="https://www.youtube.com/embed/yTpPWBgjDcM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p><em><strong>Further Reading:</strong> <a href="https://prerender.io/blog/how-to-build-ai-agent-friendly-websites/">How to Build Agent-Friendly Websites</a></em></p>



<h2 class="wp-block-heading">The Biggest Misconceptions in AI Search Today</h2>



<p>Ray identifies two misconceptions that he sees as a kind of pair.</p>



<p>The first is the familiar one: “SEO is dead.” This is the panicked response to AI disruption, and Ray (like most thoughtful practitioners) thinks it&#8217;s wrong. The fundamentals of how information is organized and retrieved online haven&#8217;t changed as much as the discourse suggests.</p>



<p>The second is the defensive mirror image: “it&#8217;s all just SEO.” This is the response from the SEO community that&#8217;s tempting to make, but equally incomplete. When you say &#8220;it&#8217;s all just SEO,&#8221; you&#8217;re focusing on the mechanics of indexes and content at the expense of the more important question: what are users actually doing, and how is that behavior shifting?</p>



<p>&#8220;If any conversation we&#8217;re having about this, if you&#8217;re not bringing it back to what users are doing, how is user behavior shifting, then you&#8217;re missing the boat.&#8221;</p>



<p>For Ray, human attention is the prize. The tools for capturing it change faster than the underlying human behaviors do.</p>



<h2 class="wp-block-heading">What Gets Better in the Next 12–18 Months</h2>



<p>On the optimistic side of the ledger, Ray points to something he and Joe both feel: the ability to build things faster is genuinely transformative right now.</p>



<p>DemandSphere has fully embraced AI-based engineering, and the distinction Ray makes is worth noting. Vibe coding is fine for what it is. But AI-based engineering goes further: it means having a real deployment pipeline and operational infrastructure behind what you build. With that foundation in place, you can now build and ship things at a speed that wasn&#8217;t previously possible for a team of any size.</p>



<p>He also reframes the &#8220;SaaS apocalypse&#8221; narrative in a useful way. Yes, some software companies are going to struggle or disappear, and especially those that are little more than a UI wrapper around a database. But the silver lining is that this clears away a layer of subscription overhead that was never delivering enough value to justify its cost. The companies that survive and thrive will be the ones that have found a way to function less like a product you log into and more like a service that&#8217;s genuinely part of your team&#8217;s workflow. We&#8217;re curious to see where this takes us.</p>



<h2 class="wp-block-heading">Tune Into the Full Conversation</h2>



<p>Listen to the full episode of the <em>Get Discovered</em> podcast on <a href="https://open.spotify.com/show/0rzmrAPI063Y0LYpHipII8">Spotify</a>, <a href="https://podcasts.apple.com/us/podcast/get-discovered-by-prerender-io/id1840604392">Apple Podcasts</a>, <a href="https://www.youtube.com/@Prerender">YouTube</a>, or wherever you get your podcasts. To connect with Ray, visit <a href="https://www.demandsphere.com/">DemandSphere</a> or find him on <a href="https://www.linkedin.com/in/raygrieselhuber/">LinkedIn</a>.</p>





<h2 class="wp-block-heading">About Prerender.io</h2>



<p>Prerender.io is a leading SEO solution that helps modern websites ensure their JavaScript-heavy pages are fully visible to search engines and AI tools. Trusted by companies like Microsoft, Salesforce, and Walmart, Prerender is the go-to partner for businesses navigating the future of SEO and AI-driven discoverability. <a href="https://prerender.io/pricing/" target="_blank" rel="noreferrer noopener">Start for free today.</a></p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://prerender.io/pricing/" style="border-radius:0px;background-color:#1f8511">Try Prerender.io For Free</a></div>
</div>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why SSR Alone Doesn&#8217;t Solve Enterprise SEO Problems</title>
		<link>https://prerender.io/blog/ssr-enterprise-seo/</link>
		
		<dc:creator><![CDATA[Prerender Team]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 05:09:38 +0000</pubDate>
				<category><![CDATA[Enterprise SEO]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=8039</guid>

					<description><![CDATA[Discover why enterprise SEO still breaks after SSR adoption, and how to fix it with scalable JS rendering.]]></description>
										<content:encoded><![CDATA[
<p>Implementing SSR for enterprise SEO fixes many of the visibility issues caused by JavaScript, which is why it’s often treated as the go-to solution for large-scale websites. But rendering content on the server is only one piece of the puzzle, especially if your goal is to improve both search and AI discovery.</p>



<p>The limitations of SSR usually show up after deployment. As Googlebot scales its crawling, your origin server (now responsible for rendering every request in real time) starts to introduce latency. That slowdown becomes a signal: crawl rates drop, and pages that are technically correct get indexed less frequently, or not at all. At the same time, cache invalidation delays can prevent updated content from reaching Googlebot for hours or even days.</p>



<p>None of this means SSR was the wrong approach. It means it solves a specific problem, while enterprise SEO challenges extend beyond rendering. This article covers where those gaps appear, what causes them at the infrastructure level, and how dynamic rendering fills them without requiring changes to your SSR architecture.</p>



<h2 class="wp-block-heading">TL;DR &#8211; Why Enterprise SEO Can Still Break After SSR</h2>



<ul class="wp-block-list">
<li>SSR fixes how your JavaScript pages are rendered, but it doesn&#8217;t fix how reliably crawlers find, index, and return to them at scale.</li>



<li>The bigger your site, the more crawler traffic competes with real user traffic — and the more your server overhead affects how often Googlebot shows up.</li>



<li>When content changes faster than your cache refreshes, Googlebot sees outdated pages. That&#8217;s a freshness problem, not a rendering problem.</li>



<li>Enterprise sites are rarely built on one uniform tech stack. Rendering inconsistencies on pages outside your core SSR setup are more common than most teams expect.</li>



<li>Not all AI crawlers execute JavaScript. Content that is missing from a page&#8217;s initial HTML is content that does not surface in AI-generated answers.</li>



<li>Dynamic rendering handles what crawlers see at the infrastructure level, independently of your SSR setup, without touching your application.</li>
</ul>



<h2 class="wp-block-heading">What SSR Fixes for Enterprise Sites (And the Visibility Gaps It Leaves Behind)</h2>



<p>To understand where SSR falls short, it helps to be precise about what it actually does.</p>



<p>SSR executes JavaScript on the server and delivers fully rendered HTML to the client on the initial HTTP response. The browser receives a complete document it can parse and display immediately, rather than a near-empty HTML shell that JavaScript then populates after load.</p>



<p>Before SSR, client-side rendering (CSR) was the default for JavaScript-heavy applications. Googlebot would arrive at a page, receive a shell with a single <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">&lt;div id="root"&gt;</mark></code> and a JavaScript bundle, and index whatever it found. For dynamic content loaded after JavaScript execution, that was often nothing.</p>



<p>Googlebot does execute JavaScript, but through a secondary rendering queue that can lag behind the initial crawl by days or weeks. <a href="https://www.onely.com/blog/google-needs-9x-more-time-to-crawl-js-than-html/" type="link" id="https://www.onely.com/blog/google-needs-9x-more-time-to-crawl-js-than-html/" target="_blank" rel="noreferrer noopener">Research by Onely</a> found that <strong>Google needs 9x more time to crawl JavaScript pages than plain HTML</strong>, a direct consequence of this rendering queue. This means that content that only exists after JavaScript rendering may not be indexed promptly, or at all (if Googlebot deprioritizes re-rendering).</p>



<p>SSR eliminates that lag. The rendered HTML is the initial response, and it behaves like a static document from the moment Googlebot arrives. For most dynamic sites, that fix is sufficient, but enterprise sites are on another level. At enterprise scale, SSR&#8217;s core assumptions break down in ways that often have nothing to do with rendering quality.</p>
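The difference is easy to see side by side. Here is a minimal sketch in plain JavaScript; the `renderProduct` template and the product data are hypothetical stand-ins, not tied to any specific framework:

```javascript
// What a client-side-rendered (CSR) app ships on the first response:
// an empty shell that JavaScript fills in only after the bundle runs.
const csrShell = `<!doctype html>
<html><body>
  <div id="root"></div>
  <script src="/bundle.js"></script>
</body></html>`;

// What an SSR setup returns for the same route: the initial HTML
// already contains the content crawlers need to index.
function renderProduct(product) {
  return `<!doctype html>
<html><body>
  <div id="root">
    <h1>${product.name}</h1>
    <p>${product.price}</p>
  </div>
  <script src="/bundle.js"></script>
</body></html>`;
}

const ssrHtml = renderProduct({ name: "Blue Widget", price: "$19.99" });
```

A crawler that never executes JavaScript can index the product name and price from `ssrHtml` immediately; from `csrShell` it gets nothing.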



<h2 class="wp-block-heading">Where Enterprise Websites Break SSR&#8217;s Rendering Benefits</h2>



<p>SSR was designed for manageable page counts, moderate content change frequency, and an infrastructure that can absorb rendering load. Enterprise sites systematically violate all three. The result is three compounding failure modes.</p>



<h3 class="wp-block-heading">1. Origin Server Overhead for SSR</h3>



<p>Every SSR request requires your origin server to render the page in real time. Under normal user traffic, this is manageable: you cache rendered output, set TTLs, and scale horizontally. Search crawler traffic breaks this model.</p>



<p>Googlebot crawls aggressively, ignores your traffic patterns, and can hit a large site hundreds of times per hour. Many of those requests land on uncached pages, each one triggering a full render cycle alongside your user traffic.</p>



<p>This creates two problems. First, your server slows down. Second, Googlebot notices. When response times degrade, Googlebot interprets that as a signal to back off and reduces its crawl rate. <a href="https://support.google.com/webmasters/answer/9679690" type="link" id="https://support.google.com/webmasters/answer/9679690" target="_blank" rel="noreferrer noopener">Google&#8217;s own documentation confirms this directly</a>: if server response times are consistently slow, the crawl rate limit decreases to avoid overloading the server.</p>



<p>In documented cases, such as <a href="https://www.searchenginejournal.com/googlebot-crawl-slump-mueller-points-to-server-errors/553715/" type="link" id="https://www.searchenginejournal.com/googlebot-crawl-slump-mueller-points-to-server-errors/553715/" target="_blank" rel="noreferrer noopener">one reported by Search Engine Journal</a> and <a href="https://www.reddit.com/r/TechSEO/comments/1mnr03b/comment/n8n5eki/" type="link" id="https://www.reddit.com/r/TechSEO/comments/1mnr03b/comment/n8n5eki/" target="_blank" rel="noreferrer noopener">this Reddit thread</a>, server-side errors have caused crawl rates to drop by 90% within 24 hours. Pages that would have rendered correctly never get indexed, not because of a rendering bug, but because the crawler stopped before reaching them.</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="624" height="419" src="https://prerender.io/wp-content/uploads/Dropped-Googlebot-crawl-Reddit.jpg" alt="Dropped Googlebot crawl - Reddit" class="wp-image-8045" srcset="https://prerender.io/wp-content/uploads/Dropped-Googlebot-crawl-Reddit.jpg 624w, https://prerender.io/wp-content/uploads/Dropped-Googlebot-crawl-Reddit-300x201.jpg 300w" sizes="(max-width: 624px) 100vw, 624px" /></figure>



<p>You&#8217;re then stuck choosing between two bad options: throttle Googlebot to protect performance, or over-provision your website’s infrastructure specifically to absorb crawler load. Neither fixes why crawler traffic competes with user traffic in the first place.</p>
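Dynamic rendering sidesteps this trade-off by answering crawler requests from a prerender cache instead of the live SSR pipeline, so bot traffic never competes with user traffic for render capacity. At its core is a user-agent check. A minimal sketch in JavaScript — the bot list here is illustrative and far from complete, and production setups typically also verify crawlers by reverse DNS:

```javascript
// Illustrative subset of crawler user-agent patterns; real dynamic
// rendering services maintain a much longer, regularly updated list.
const BOT_PATTERNS = [
  /googlebot/i,
  /bingbot/i,
  /gptbot/i,
  /perplexitybot/i,
];

function isCrawler(userAgent = "") {
  return BOT_PATTERNS.some((re) => re.test(userAgent));
}

// In middleware, crawler requests are routed to cached, prerendered
// HTML, while human visitors continue through the normal SSR path.
function handleRequest(userAgent) {
  return isCrawler(userAgent) ? "serve-prerendered-html" : "serve-ssr";
}
```

Because the check happens at the edge of the request path, the SSR application itself stays untouched.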



<h3 class="wp-block-heading">2. Cache Invalidation Lag: Why Googlebot and AI Crawlers See Stale Content</h3>



<p>The practical response to SSR rendering overhead is caching: render each page once, store the output, and serve it for subsequent requests. The <a href="https://prerender.io/blog/caching-what-it-is-how-it-works-and-how-prerender-caches/" type="link" id="https://prerender.io/blog/caching-what-it-is-how-it-works-and-how-prerender-caches/">caching problem in JavaScript</a> surfaces when content changes.</p>



<p>Enterprise sites have dozens of content types sitting behind multiple caching layers in sequence: application cache, CDN edge, and reverse proxy. A content update has to invalidate all of them correctly.</p>



<p>For a single page edit, this is manageable. For a bulk catalog update that touches hundreds of thousands of URLs, it frequently isn&#8217;t. The database updates, but stale HTML persists at the CDN or reverse proxy because the invalidation signal did not fully propagate, or because a bulk update triggered partial invalidation that missed a subset of affected URLs.</p>
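A toy example makes the failure mode concrete. This single-layer cache is a stand-in for the real application/CDN/proxy stack (stacking several layers multiplies the risk), and the URLs and prices are invented for illustration:

```javascript
// Toy render cache: render once, store the HTML, serve it until
// an invalidation removes the entry.
class RenderCache {
  constructor() { this.store = new Map(); }
  get(url) { return this.store.get(url); }
  set(url, html) { this.store.set(url, html); }
  invalidate(urls) { for (const u of urls) this.store.delete(u); }
}

const cache = new RenderCache();
cache.set("/product/1", "<h1>$19.99</h1>");
cache.set("/product/2", "<h1>$24.99</h1>");

// A bulk price update touches both products, but the invalidation
// signal only covers part of the affected URL set:
cache.invalidate(["/product/1"]); // /product/2 was missed

// Until the missed entry expires or is overwritten, every crawler
// requesting /product/2 keeps receiving the old price.
const stale = cache.get("/product/2");
```

At enterprise scale the missed subset isn't one URL, it's thousands, and each one quietly serves outdated HTML to Googlebot.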



<p>Essentially, Googlebot crawls those pages and receives the old content. From its perspective, nothing changed. But for ecommerce, the gap between a pricing or availability update and Googlebot seeing it is a direct revenue problem. For news and media, where freshness influences rankings, cache staleness degrades indexation quality in ways that surface in traffic.</p>



<h3 class="wp-block-heading">3. Rendering Inconsistencies Across Page Types</h3>



<p>Enterprise sites are rarely architecturally uniform. A large ecommerce site might have category pages in a React SSR setup, product pages on a different system with partial SSR coverage, editorial content on WordPress with a JavaScript frontend, and the localized versions of all of the above built by regional teams with different framework choices.</p>



<p>Pages outside the core SSR configuration fall back to client-side rendering. This affects more page types than most teams expect:</p>



<ul class="wp-block-list">
<li>Legacy pages never converted during the SSR migration</li>



<li>Third-party embedded content where the vendor controls rendering</li>



<li>Localized pages built by teams with different technical standards</li>



<li>Campaign and landing pages built outside the main application</li>



<li>Faceted navigation and filtered listing pages, where dynamic URLs create rendering edge cases</li>
</ul>



<p>SSR coverage of 80% of page types does not solve 80% of the rendering problem. The pages that fall back to client-side rendering often represent a disproportionate share of strategically important content: high-intent landing pages, localized product catalogs, and faceted search results.</p>



<p>These three enterprise SEO SSR failure modes compound each other. But even on pages where SSR is working perfectly, there is a separate problem: <strong>the distinction between a page rendering correctly and a page actually getting indexed.</strong></p>



<h2 class="wp-block-heading">Why Your Pages Are Still Not Indexed After SSR Implementation</h2>



<p>SSR ensures a page renders correctly when a crawler requests it. It does not ensure that Googlebot reaches the page, requests it at a useful frequency, or receives current content when it does. Rendering, crawling, and indexation are distinct processes.</p>



<h3 class="wp-block-heading">Crawl Budget: The Allocation Problem</h3>



<p><a href="https://prerender.io/resources/free-downloads/white-papers/crawl-budget-guide/" type="link" id="https://prerender.io/resources/free-downloads/white-papers/crawl-budget-guide/">Crawl budget is the number of pages Googlebot will crawl on a given site within a given period.</a> Google allocates it based on historical crawl rate, server response times, content freshness, and signals about how valuable the site&#8217;s content is. A five-million-URL site will not have all five million pages crawled daily, because Googlebot prioritizes, and your infrastructure influences that prioritization with every request.</p>



<p><strong>Top tip:</strong> follow <a href="https://prerender.io/blog/crawl-budget-seo/" type="link" id="https://prerender.io/blog/crawl-budget-seo/">these tips to optimize your site for better crawl budget spend</a>.</p>



<p>Of all the infrastructure signals Google weighs, server response latency is the most punishing, as it directly degrades crawl budget. When SSR rendering overhead slows your origin server&#8217;s response, Googlebot observes that latency and reduces its crawl rate. Fewer pages get crawled per budget cycle.</p>



<p>Cache staleness compounds this. When Googlebot re-crawls a page and finds the same content it previously saw, it may reduce the page&#8217;s crawl frequency and reallocate budget elsewhere. The pages at the bottom of Googlebot&#8217;s priority list tend to be the ones that most need fresh indexation: long-tail product pages, localized variants, filtered category pages, and newly created content. These are also where ecommerce and publishing revenue are concentrated.</p>



<h3 class="wp-block-heading">The Content Freshness Problem: Why SSR Alone Doesn&#8217;t Solve It</h3>



<p>For frequently updated sites, the gap between a content update and Googlebot seeing it is a measurable SEO problem. In ecommerce, product pages showing outdated pricing or availability send poor quality signals to Googlebot, and pages that consistently show stale information can lose rankings relative to competitors with stronger freshness signals. In news and media, freshness is a direct ranking factor. Stories indexed promptly after publication rank better for time-sensitive queries than stories where indexation lags by hours.</p>



<p>SSR alone does not close the freshness gap if cache invalidation is unreliable. A page may render correctly on the next crawl, but if that crawl happens three days after the update, the freshness advantage is already gone.</p>



<p>Cache staleness and crawl budget are problems Googlebot can at least partially recover from over time. The indexation gaps affecting AI crawlers are permanent and growing in business impact as AI search becomes a primary discovery channel.</p>



<h2 class="wp-block-heading">SSR for AI Visibility: Solving the LLM Content Extraction Gap</h2>



<p>Enterprise SEO has historically centered on Googlebot. The current landscape is different, and the numbers are moving fast.</p>



<p>According to <a href="https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-crawling-your-site-in-2025/" type="link" id="https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-crawling-your-site-in-2025/" target="_blank" rel="noreferrer noopener">Cloudflare&#8217;s 2025 Year in Review</a>, AI bots now account for <a href="https://ppc.land/ai-crawlers-now-consume-4-2-of-web-traffic-as-internet-grows-19-in-2025/" type="link" id="https://ppc.land/ai-crawlers-now-consume-4-2-of-web-traffic-as-internet-grows-19-in-2025/" target="_blank" rel="noreferrer noopener">4.2% of all HTML requests</a> across the web. GPTBot alone grew 305% in request volume between May 2024 and May 2025. Overall, AI and search crawler traffic grew 18% year-over-year. These crawlers are actively indexing your site right now, and most of them cannot execute JavaScript.</p>



<p>AI-powered search products from OpenAI, Perplexity, Google, Anthropic, and others now index and cite web content directly in generated answers. The crawlers behind these systems have fundamentally different technical characteristics than Googlebot, and SSR compatibility with them is not guaranteed.</p>



<h3 class="wp-block-heading">Why Most AI Crawlers Cannot See Your JavaScript Content</h3>



<p>Most AI crawlers do not execute JavaScript, and this detail is critical for SSR.</p>



<p>When Googlebot visits your site, it makes two passes: an initial crawl of the HTML response, then a secondary rendering pass where it executes JavaScript and indexes the resulting DOM. AI crawlers make one request. They send an HTTP GET request, receive the HTML response, extract the content, and move on. No secondary rendering pass, no JavaScript execution, no return visit.</p>



<p>What is in the initial HTML response is what gets extracted. What is not in the initial HTML response is invisible.</p>



<p>On pages where SSR is consistently implemented and delivers complete HTML in the initial response, AI crawlers see complete content. On pages where SSR is missing or inconsistent, they see whatever partial content the HTML shell contains, which may be very little.</p>



<h3 class="wp-block-heading">LLM Content Extraction and Structured Data</h3>



<p>LLM content extraction depends on clean, complete, well-structured HTML. This matters most for structured content: product specifications, pricing, ingredients, dates, and authorship.</p>



<p>If a product page delivers an empty <mark style="background-color:#282a36" class="has-inline-color has-white-color"><code>&lt;div class="product-specs"&gt;</code></mark> in the initial HTML and JavaScript populates that div after load, an AI crawler extracts a page with no product specifications. That product does not surface in AI answers for specification-related queries. A competitor whose specifications are present in the initial HTML does.</p>
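<p>To make the extraction gap concrete, here is a minimal sketch with hypothetical markup and a deliberately crude extractor (not any real crawler&#8217;s pipeline). It shows that a single-request crawler can only surface text already present in the HTML it downloads:</p>

```javascript
// A single-request crawler only "sees" text already present in the HTML
// string it downloads. No JavaScript runs, so no DOM gets populated.

// CSR shell: the specs div is empty until client-side JS fills it in.
const csrHtml = `
  <div class="product-specs"></div>
  <script src="/bundle.js"></script>`;

// SSR response: the same div arrives populated from the server.
const ssrHtml = `
  <div class="product-specs">
    <ul><li>Weight: 1.2 kg</li><li>Battery: 10 h</li></ul>
  </div>`;

// Crude stand-in for an extractor: grab the inner text of the specs div.
function extractSpecs(html) {
  const match = html.match(/<div class="product-specs">([\s\S]*?)<\/div>/);
  if (!match) return null;
  const text = match[1].replace(/<[^>]+>/g, ' ').trim();
  return text.length > 0 ? text : null; // null = nothing to extract
}

console.log(extractSpecs(csrHtml)); // null: the crawler extracts no specs
console.log(extractSpecs(ssrHtml)); // non-empty spec text
```

<p>The CSR page is not &#8220;broken&#8221;; a human&#8217;s browser would render it fine. But the one-shot extractor comes away with nothing.</p>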



<p>The same applies to schema.org markup in JSON-LD. If your structured data is injected by JavaScript rather than present in the server-delivered HTML, AI crawlers miss it entirely. This affects citation quality and contextual relevance in AI-generated answers, not just traditional rich results in Google Search.</p>
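<p>One way to keep structured data crawler-visible is to build the JSON-LD string server-side and embed it in the delivered HTML rather than injecting it from a bundle. A sketch, with all product values as hypothetical placeholders:</p>

```javascript
// Sketch: emit schema.org Product markup in the server response so that
// single-request crawlers receive it in the initial HTML. All product
// values here are hypothetical placeholders.

function productJsonLd(product) {
  const data = {
    '@context': 'https://schema.org',
    '@type': 'Product',
    name: product.name,
    sku: product.sku,
    offers: {
      '@type': 'Offer',
      price: product.price,
      priceCurrency: product.currency,
      availability: 'https://schema.org/InStock',
    },
  };
  // This string belongs in the server-delivered <head>, not in a JS bundle
  // that runs after load.
  return `<script type="application/ld+json">${JSON.stringify(data)}</script>`;
}

const tag = productJsonLd({
  name: 'Example Widget',
  sku: 'EW-1',
  price: '19.99',
  currency: 'USD',
});
console.log(tag.startsWith('<script type="application/ld+json">')); // true
```

<p>The output is identical markup to a client-side injection; the only difference, and the one that matters to a one-request crawler, is when it exists in the response.</p>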



<h3 class="wp-block-heading">Why Rendering Inconsistencies Hit AI Crawlers Harder</h3>



<p>The rendering inconsistencies described above have more severe consequences for AI crawler compatibility than for Googlebot compatibility.</p>



<p>Googlebot has a secondary rendering queue. The process is slow and imperfect, but partial recovery from rendering failures is possible. AI crawlers have no equivalent: a single request returns incomplete HTML, the available content is extracted, and the page is not revisited. Rendering inconsistencies that Googlebot eventually works through are ones AI crawlers never compensate for.</p>



<p>For enterprise sites with inconsistent SSR coverage across different tech stacks, teams, and markets, AI crawlers are building their understanding of your site from whatever incomplete HTML the first request returned. There is no correction pass.</p>



<p>The question is not whether SSR creates a visibility gap. It does. The question is what infrastructure layer can close it without requiring your teams to rebuild their SSR setup.</p>



<h2 class="wp-block-heading">Prerender.io vs. Server Side Rendering (SSR)</h2>



<p>The Prerender.io vs. SSR debate is often framed as a choice between competing approaches, but they operate at different layers, solve different problems, and are designed to work together.</p>



<h3 class="wp-block-heading">Dynamic Rendering vs. Server-Side Rendering: What Each Approach Does</h3>



<p>SSR is an application architecture decision made at the framework level. It determines how your application generates and delivers HTML, with downstream effects on Time to First Byte, client-side hydration, framework compatibility, developer experience, and infrastructure choices.</p>



<p>Prerender.io handles SEO rendering at the edge, intercepting requests by user agent and serving pre-rendered static HTML to crawlers before they reach your application&#8217;s pipeline. Human visitors continue through your normal application and receive the full dynamic experience. Crawlers receive complete HTML directly from Prerender&#8217;s cache.</p>



<p>The two coexist without conflict because they handle different parts of the same request flow. SSR owns the user experience layer. Prerender.io owns what crawlers see. Neither changes what the other does.</p>
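<p>The request-layer split described above can be sketched as a user-agent check in Express-style middleware. The bot patterns and header below are illustrative assumptions for this sketch, not Prerender.io&#8217;s actual detection logic or proxy implementation:</p>

```javascript
// Sketch of the request-layer split: crawlers get pre-rendered HTML,
// humans continue into the normal application. The bot list is
// illustrative, not the actual list a rendering service matches against.

const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /gptbot/i, /perplexitybot/i, /claudebot/i];

function isCrawler(userAgent) {
  return BOT_PATTERNS.some((re) => re.test(userAgent || ''));
}

// Express-style middleware shape (req, res, next). In a real setup the
// crawler branch would proxy to a pre-render cache; here a placeholder
// response stands in for that snapshot.
function crawlerSplit(req, res, next) {
  if (isCrawler(req.headers['user-agent'])) {
    res.setHeader('X-Served-By', 'prerender-cache'); // illustrative marker
    res.end('<!-- pre-rendered HTML snapshot would be proxied here -->');
    return;
  }
  next(); // humans continue into the normal SSR/CSR pipeline
}
```

<p>Because the branch happens before the request reaches the application, the SSR pipeline never has to know crawlers exist.</p>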



<h3 class="wp-block-heading">How Prerender.io Closes the Enterprise SEO Gaps</h3>



<p><a href="https://prerender.io" type="link" id="https://prerender.io">Prerender.io</a> addresses the specific failure modes that make SSR for enterprise SEO more complicated than a standard implementation.</p>



<p>Origin server overhead is eliminated for crawler traffic. Prerender.io serves pre-rendered pages from a CDN layer at an average of <strong>0.03 seconds per request</strong>, so Googlebot and AI crawlers receive responses without triggering rendering cycles on your origin servers. The performance difference compared with SSR origin rendering is not marginal.</p>



<p>Because responses come from Prerender&#8217;s cache rather than your application, cache freshness is also managed independently of your application stack. You can configure refresh intervals per URL pattern and trigger invalidation on content updates, without depending on your application&#8217;s cache invalidation working correctly across every layer. <a href="https://docs.prerender.io/docs/cache-expiration" type="link" id="https://docs.prerender.io/docs/cache-expiration">Learn more about cache management in Prerender.io.</a></p>
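<p>A content-update hook might assemble a recache request like the sketch below. The endpoint URL and payload field names are assumptions for illustration only; confirm the exact API shape against the Prerender.io cache documentation linked above before relying on it:</p>

```javascript
// Sketch: trigger a re-render of specific URLs when content changes,
// instead of waiting for a cache interval to lapse. The endpoint and
// field names below are ASSUMPTIONS for illustration; check the
// Prerender.io docs for the authoritative request shape.

function buildRecacheRequest(token, urls) {
  return {
    method: 'POST',
    url: 'https://api.prerender.io/recache', // assumed endpoint
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prerenderToken: token, urls }), // assumed payload
  };
}

// e.g. after a price update on two product pages (placeholder values):
const recacheReq = buildRecacheRequest('YOUR_TOKEN', [
  'https://example.com/products/widget-1',
  'https://example.com/products/widget-2',
]);
console.log(recacheReq.method, recacheReq.url);
```

<p>The point of the sketch is the trigger, not the transport: invalidation is driven by your content events rather than by your application&#8217;s internal cache layers behaving correctly.</p>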



<p>Rendering consistency follows from the same architectural separation. Prerender.io operates at the HTTP request layer, not the application layer, so pages that fall back to client-side rendering receive complete HTML for crawlers regardless of whether they were built in React, Next.js, Vue, a legacy CMS, or a third-party platform.</p>



<p><strong>This is what makes Prerender.io a scalable JS rendering solution: coverage expands with your site without requiring framework-level changes.</strong></p>



<p>That consistency is what makes AI crawler compatibility structural rather than incidental. <a href="https://prerender.io/ai-search/" type="link" id="https://prerender.io/ai-search/">Prerender serves complete pre-rendered HTML from the initial response, so AI crawlers receive complete content regardless of how the page was built.</a> Structured data, product specifications, prices, and any other JavaScript-populated content are present in the HTML they extract.</p>



<p>All of this compounds into crawl budget gains. Faster response times from Prerender.io&#8217;s CDN-served pages mean Googlebot observes better server performance across the board, which correlates directly with higher crawl budget allocation.</p>



<h3 class="wp-block-heading">How Prerender.io Helps Enterprise Websites Win Google and AI Search</h3>



<p><a href="https://prerender.io/resources/case-studies/popken-fashion-group/" type="link" id="https://prerender.io/resources/case-studies/popken-fashion-group/">Popken Fashion Group</a>, a retailer running over 30 domains, saw approximately 15,000 additional pages indexed in a single market after reactivating Prerender.io following a brief outage. When the tool was turned off due to an internal technical issue, rankings dropped across markets almost immediately. When it was re-enabled, rankings recovered within a week. Prerender.io was not a supplementary tool in that scenario; it was load-bearing infrastructure.</p>



<figure class="wp-block-image size-full"><img decoding="async" width="624" height="291" src="https://prerender.io/wp-content/uploads/Ulla-Popken-increased-the-indexing-rate-after-Prerender.io-adoption.png" alt="Ulla Popken increased the indexing rate after Prerender.io adoption" class="wp-image-8048" srcset="https://prerender.io/wp-content/uploads/Ulla-Popken-increased-the-indexing-rate-after-Prerender.io-adoption.png 624w, https://prerender.io/wp-content/uploads/Ulla-Popken-increased-the-indexing-rate-after-Prerender.io-adoption-300x140.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></figure>



<p><a href="https://prerender.io/resources/case-studies/improved-pagespeed-and-boosting-page-indexing-to-optimize-webshop/" type="link" id="https://prerender.io/resources/case-studies/improved-pagespeed-and-boosting-page-indexing-to-optimize-webshop/">Haarshop</a>, with 35,000+ products across Dutch and German domains, went from struggling to surface products in SERPs to indexing 10,000 pages per day after implementation, with organic traffic improving by over 50%. <a href="https://prerender.io/resources/case-studies/" type="link" id="https://prerender.io/resources/case-studies/">Eldorado</a> saw an 80% traffic improvement after fixing SPA rendering issues through Prerender.</p>



<p>These results reflect a consistent pattern: the indexation gap between what a site publishes and what crawlers can see is a revenue gap. Closing that gap does not require rebuilding your SSR architecture. It requires infrastructure that operates at the crawler layer.</p>



<h2 class="wp-block-heading">Scalable JS Rendering: Why In-House SSR Fails at Enterprise Scale</h2>



<p>The &#8220;Prerender.io vs. SSR&#8221; debate is often framed as a choice between two ways to do the same thing. Technically, that’s true: both deliver rendered HTML to a crawler. But the <strong>architectural cost</strong> of those two paths couldn&#8217;t be more different.</p>



<p>SSR is an application-level burden. It requires your developers to manage Node.js environments, handle complex hydration logic, and constantly patch rendering bugs as your frontend evolves. Prerender.io moves that entire responsibility to the infrastructure layer.</p>



<p>If you are currently running an in-house SSR setup and still seeing indexation gaps, your SSR isn&#8217;t &#8220;failing&#8221;—it’s simply being asked to do something it wasn&#8217;t built for: <strong>managing massive crawler volume.</strong></p>



<ul class="wp-block-list">
<li><strong>In-House SSR is Fragile:</strong> Every code change in your React or Next.js app risks breaking your SSR output. If a component fails to hydrate correctly on the server, Googlebot sees a broken page.</li>



<li><strong>In-House SSR is Expensive:</strong> Scaling an origin server to handle millions of uncached requests from Googlebot and a dozen different AI crawlers requires massive over-provisioning of resources.</li>



<li><strong>Prerender.io is Decoupled:</strong> Because Prerender handles rendering at the edge (the HTTP request layer), it doesn&#8217;t care how your application is built. It intercepts the crawler, renders the page in a dedicated headless browser, and serves a perfect HTML snapshot.</li>
</ul>



<h2 class="wp-block-heading">Before You Move From SSR to Prerender.io &#8211; An Enterprise Checklist</h2>



<p>If you&#8217;re ready to offload your SEO rendering from your internal team to a managed solution, follow these steps:</p>



<ol class="wp-block-list">
<li><strong>Identify &#8220;SSR-Light&#8221; Areas:</strong> Audit your site for page types (like faceted search or legacy landing pages) where your in-house SSR coverage is spotty. These are your first wins.</li>



<li><strong>Audit AI Crawler Access:</strong> Verify that your current setup is delivering 100% of your structured data and product specs in the initial HTML. If it isn&#8217;t, AI search engines aren&#8217;t seeing you.</li>



<li><strong>Update Your Middleware:</strong> Prerender.io integrates at the CDN or Middleware layer. This means you can &#8220;turn it on&#8221; for specific user agents (Googlebot, GPTBot) while keeping your current setup for humans.</li>



<li><strong>Monitor Your Crawl Stats:</strong> Within 72 hours of shifting your crawler traffic to Prerender.io, check Google Search Console. You should see a sharp drop in &#8220;Average Response Time&#8221; and a corresponding lift in &#8220;Total Crawl Requests.&#8221;</li>
</ol>



<h2 class="wp-block-heading">Fix Your Enterprise SEO Discovery Issues with Prerender.io</h2>



<p>SSR SEO coverage solves the rendering problem it was designed to solve. The gaps covered in this article (origin server strain, cache invalidation failures, rendering inconsistencies across page types, and AI crawler invisibility) are infrastructure problems that live at a different layer. They do not go away by improving your SSR implementation. They go away by adding the right infrastructure alongside it.</p>



<p>Every day these gaps persist is a day Googlebot is crawling fewer pages than it should, AI crawlers are building an incomplete picture of your site, and content you have published is not reaching the audiences searching for it.</p>



<p>Prerender.io takes a few hours to install, integrates with any JavaScript framework and most CDN and backend configurations, and starts serving complete HTML to crawlers immediately. There is no code rewrite, no SSR migration, and no ongoing maintenance burden after setup.</p>



<p>The indexation gap between what your site publishes and what crawlers can see is a revenue gap. <a href="https://prerender.io" type="link" id="https://prerender.io">Start your free trial at Prerender.io</a> today and close it.</p>



<h2 class="wp-block-heading">FAQs &#8211; Adopting SSR to Fix Enterprise SEO Problems</h2>



<h3 class="wp-block-heading">Why Are My Pages Not Being Indexed Even With SSR?</h3>



<p>Pages may not be indexed even with SSR because server latency reduces Googlebot’s crawl rate, cache invalidation delays lead to stale content, or certain page types still have rendering inconsistencies.</p>



<p>Check response times under crawler load, confirm cache updates propagate correctly, and audit whether all pages return complete HTML on the initial response.</p>



<h3 class="wp-block-heading">Why Can’t AI See My JavaScript Content?</h3>



<p>AI can’t see your JavaScript content because AI crawlers don’t execute JavaScript—they only extract what’s available in the initial HTML response. Any content rendered client-side, like product details, pricing, or structured data, simply isn’t visible to them.</p>



<p>The fix is to ensure all critical content is included in the server-delivered HTML on the first request. Solutions like Prerender.io handle this by serving fully pre-rendered HTML to AI crawlers without requiring changes to your existing architecture.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>From Discoverability to Actionability: A Conversation with Tobias Schlottke, CTO and Partner at saas.group</title>
		<link>https://prerender.io/blog/a-conversation-with-tobias-schlottke-saas-group/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 11:34:30 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[ai podcasts]]></category>
		<category><![CDATA[seo podcasts]]></category>
		<category><![CDATA[tech podcasts]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7843</guid>

					<description><![CDATA[Tune into our conversation with Tobias Schlottke, a driving force in the European tech scene.]]></description>
										<content:encoded><![CDATA[
<p>Most conversations about AI stop at the same place: “How do we get found in AI search?”  But that’s the wrong question, says Tobias Schlottke.</p>



<p>In this episode of the Get Discovered podcast, Joe Walsh sat down with Tobias (or Tobi, as he likes to be called), a driving force in the European tech scene. </p>



<p>Not only is he Cofounder, Partner, and CTO at saas.group, a key European software portfolio company with 25+ brands and $100M+ ARR; Tobi also cofounded the popular tech conference OMR and runs Alphalist, a well-known podcast for CTOs.</p>



<p>In this conversation, Tobi uses his expertise to explore what comes after discoverability: actionability. That&#8217;s where the future of SaaS lies, he says, but most businesses aren&#8217;t talking about it enough.</p>



<p>Watch the full episode below, or read on for key takeaways.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="From Discoverability to Actionability: Tobias Schlottke, Cofounder, Partner, and CTO at saas.group" width="640" height="360" src="https://www.youtube.com/embed/1wo_bvMdBuY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">Why You Don&#8217;t Always Need to Read the News</h2>



<p>Tobi kicks off the conversation with a surprising confession. Despite being an avid tech leader, he actively avoids tech news more than you&#8217;d think. This isn&#8217;t because he&#8217;s disengaged, but because most of it doesn&#8217;t <em>actually</em> move the needle for him.</p>



<p>&#8220;There&#8217;s always a model released. There&#8217;s always a funding round. And often it doesn&#8217;t lead to anything. It doesn&#8217;t help me with anything.&#8221;</p>



<p>The real challenge right now, he argues, isn&#8217;t production; it&#8217;s adoption. The pace at which AI companies release new capabilities has far outstripped the pace at which businesses can meaningfully act on them, and the value each release actually offers, he says, is often quite low.</p>



<p>So how does he decide what to pay attention to? He tests things in his personal life first. If he finds himself genuinely using something and generating real value from it, that&#8217;s when it earns a place in his professional toolkit. He also leans on his Alphalist CTO network as a filter. If the people he trusts are doing something, he tries it himself.&nbsp;</p>



<p>This is reassuring news for anyone in tech. If you find yourself drowning in updates and LinkedIn hype, Tobi reminds us to slow down, trust your own experience, and lean into your network.&nbsp;</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why you don&#039;t always need to read tech news" width="540" height="960" src="https://www.youtube.com/embed/ja54HGqsglE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">How Has Discovery Changed? Humans Are Now the Last Step, Not the First</h2>



<p>Tobi then explores how the discovery process has fundamentally changed. He rarely engages with anything cold anymore. By the time he decides to look into a product, a service, or a piece of software, an AI system has already done the preliminary filtering. It has narrowed the relevant options, surfaced the key features, and reduced a broad field of possibilities to one or two candidates. The human decision, and often the transaction itself, comes at the end of that process, not the beginning.</p>



<p><strong>&#8220;It&#8217;s more the discovery process upfront where I see value in systems filtering for me.&#8221;</strong></p>



<p>He reiterates that the discovery layer comes <strong><em>first</em></strong>. It&#8217;s a profound shift for anyone thinking about how their content or product gets found. The audience you&#8217;re optimizing for isn&#8217;t just the person who eventually engages with your brand. It&#8217;s the AI layer that decides whether to surface you at all, a theme we explore in our <a href="http://prerender.io/blog/podcast-klaus-schremser-otterlyai/">episode with OtterlyAI.</a> Actionability is just what comes next.</p>



<h2 class="wp-block-heading">The Importance of Trusted Sources</h2>



<p>That’s where trusted sources come in, a recurring theme this season.&nbsp;</p>



<p>Trusted sources are now the building blocks of search: for AI systems first, and for human verification second.&nbsp;</p>



<p>On a personal level, when he&#8217;s evaluating software, he doesn&#8217;t just ask an LLM for a recommendation in isolation. He cross-references with his CTO peer network or looks for recommendations from people whose judgment he has direct experience with. Refreshingly, he doesn&#8217;t rely exclusively on AI output.</p>



<p>But it also matters because it maps to how LLMs themselves are trained and weighted. A <a href="http://prerender.io/blog/podcast-noah-greenberg-stacker/">citation from an authoritative, trusted source carries more signal</a> than a brand citing itself. The analogy Joe raises—a car review in a magazine versus the car manufacturer&#8217;s own website—holds up. Independent, credible sources create the kind of signal that actually gets incorporated into AI responses.</p>



<p>For businesses, your own website isn’t enough. You need to be present in the sources that AI systems learn from and cite, such as industry publications, peer review platforms, expert communities, and third-party directories.&nbsp;</p>



<p>Tobi points specifically to platforms like OMR Reviews (a G2 software equivalent for Europe) as examples of the kind of trusted aggregators that LLMs increasingly draw from. Being present, reviewed, and referenced in those places is becoming as important as having a well-structured website.</p>



<h2 class="wp-block-heading">Discoverability First. Actionability Second</h2>



<p>Most of the industry conversation right now is focused on the first part of the problem: how to get AI to find, reference, and cite you. That&#8217;s a real and urgent problem. But Tobi argues that it&#8217;s also just the foundation.</p>



<p>The more important question is whether, once you&#8217;re found, <strong>you can actually be </strong><strong><em>acted upon</em></strong><strong>.</strong> Can an AI agent complete a transaction? Get a relevant, up-to-date answer? Trigger a workflow? If your content or product can be discovered but not acted on, you&#8217;ve only solved half the problem.</p>



<p>He draws a direct parallel to the evolution of traditional SaaS: companies would buy dashboards and analytics tools, generate reports, and then never actually change their behavior based on them. The data was there, but the bridge to action was missing.</p>



<p>Tobi predicts that the real value of software in the coming years won&#8217;t be measured by the data it shows you, but by the actions it takes on your behalf.</p>



<p>&#8220;The actions that a system is going to take for you will help clients to really generate the real value of software that they may have never materialized before.&#8221; </p>



<p>For businesses thinking about AI readiness, this reframes the question. Being discoverable to AI is a must-have. But being <em>actionable</em> (having the right structure, APIs, and context for AI agents to do something useful with what they find) is where the competitive advantage starts to separate.</p>



<p><strong><em>Further Reading: <a href="https://prerender.io/blog/how-to-build-ai-agent-friendly-websites/">How to Build AI Agent-Friendly Websites</a></em></strong></p>



<h2 class="wp-block-heading">Why Nobody’s Talking About Prompt Injection</h2>



<p>In a candid aside, Tobi shares an experience that&#8217;s equal parts funny and concerning: his Claude-based automation responded to an incoming WhatsApp message on his behalf, without being asked to.</p>



<p>The underlying issue is prompt injection: the risk that an external actor can embed instructions into content that an AI system reads, causing it to behave in unintended ways. The AI doesn&#8217;t distinguish between &#8220;content to summarize&#8221; and &#8220;instruction to follow.&#8221; For AI systems, both look the same.</p>



<p>Tobi&#8217;s take is that most people haven&#8217;t encountered this problem yet, but they will soon. As AI agents become more embedded in business workflows, such as reading emails, processing documents, or acting on information from external sources, the surface area for prompt injection attacks grows significantly.</p>



<p><strong>&#8220;This is a topic that many people don&#8217;t talk about, and how dangerous it is.&#8221;</strong></p>



<p>It&#8217;s worth flagging for marketers and growth teams, not because it&#8217;s their job to solve it, but because it&#8217;s coming. The content your brand publishes and the way your data is structured will all matter in an agentic world. Not just for discoverability, but for actionability, too.</p>



<h2 class="wp-block-heading">Looking Ahead</h2>



<p>In the next 12 to 18 months, Tobi is clear about what&#8217;s coming.</p>



<p>First, the volume of AI-generated software is going to explode. He&#8217;s already seeing it himself. People in his orbit are shipping tools they built overnight, which is adding to a landscape that&#8217;s already hard to navigate. He also flags the likelihood of more security incidents as agentic systems proliferate faster than the infrastructure to safely govern them.</p>



<p>But it&#8217;s important to add that he&#8217;s not pessimistic. The same forces creating the noise problem are creating the tools to solve it. AI will increasingly help manage the volume it creates, organizing communication, filtering signals, and reducing the cognitive load of staying current. </p>



<p>His advice? Stay curious, stay experimental, and don&#8217;t let the negativity that dominates the news cycle distort your view of what&#8217;s actually happening on the ground.</p>



<h2 class="wp-block-heading">Key Takeaways</h2>



<ul class="wp-block-list">
<li><strong>The first discovery layer is increasingly AI-driven, not human-driven.</strong> Optimize for the pre-filtering stage, not just the final visit. By the time a human engages with your brand, AI has already done most of the evaluation.</li>



<li><strong>Trusted third-party sources matter more than your own website.</strong> AI systems learn from and cite authoritative external sources, but humans rely on these too. Being present in those places is foundational for AI discovery and human validation.</li>



<li><strong>Discoverability is a prerequisite. Actionability is the competitive edge.</strong> The businesses that win will be ready for AI agents to do something with what they find.</li>



<li><strong>Prompt injection is a coming risk most teams aren&#8217;t prepared for.</strong> As AI agents read and act on external content, the structure and integrity of that content matter for security, not just SEO.</li>
</ul>



<p><strong>Adoption is the real challenge.</strong> Don&#8217;t let the pace of AI releases convince you to chase every new tool. Build a trusted filter (personal experimentation, peer networks) and act on what actually delivers value.</p>



<h2 class="wp-block-heading">Tune Into the Full Conversation</h2>



<p>Listen to the full episode of the <em>Get Discovered</em> podcast on <a href="https://open.spotify.com/show/0rzmrAPI063Y0LYpHipII8">Spotify</a>, <a href="https://podcasts.apple.com/us/podcast/get-discovered-by-prerender-io/id1840604392">Apple Podcasts</a>, <a href="https://www.youtube.com/@Prerender">YouTube</a>, or wherever you get your podcasts. To connect with Tobi, learn more about <a href="https://saas.group/">saas.group</a> or find him on <a href="https://www.linkedin.com/in/tobias-schlottke/">LinkedIn</a>. </p>



<figure class="wp-block-embed is-type-rich is-provider-spotify wp-block-embed-spotify wp-embed-aspect-21-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Spotify Embed: From Discoverability to Actionability: Tobias Schlottke, Cofounder, Partner, and CTO at saas.group" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy" src="https://open.spotify.com/embed/episode/60xrlxncT6ITZJ2UM4L5OX?si=RkCPPBKIQ6GWv2-JvzxfRA&amp;utm_source=oembed"></iframe>
</div></figure>



<h2 class="wp-block-heading">About Prerender.io</h2>



<p>Prerender.io is a leading SEO solution that helps modern websites ensure their JavaScript-heavy pages are fully visible to search engines and AI tools. Trusted by companies like Microsoft, Salesforce, and Walmart, Prerender is the go-to partner for businesses navigating the future of SEO and AI-driven discoverability. <a href="https://prerender.io/pricing/" target="_blank" rel="noreferrer noopener">Start for free today.</a></p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://prerender.io/pricing/" style="border-radius:0px;background-color:#1f8511">Try Prerender.io For Free</a></div>
</div>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why You Shouldn&#8217;t Over-Focus on Citations: A Conversation with Tom Capper from Moz</title>
		<link>https://prerender.io/blog/podcast-conversation-tom-capper-moz/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 11:36:59 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[ai podcasts]]></category>
		<category><![CDATA[seo podcasts]]></category>
		<category><![CDATA[tech podcasts]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7821</guid>

					<description><![CDATA[Tom Capper, one of the most trusted data voices in the SEO space, joins the Get Discovered podcast.]]></description>
										<content:encoded><![CDATA[
<p>In this episode of Get Discovered, host Joe Walsh sits down with Tom Capper, Senior Search Scientist at Moz. As one of the most trusted voices in the SEO space for quantitative data, Tom digs into what the data is actually telling us about AI&#8217;s impact on search visibility, what&#8217;s changing (and what&#8217;s still the same), and what marketing teams should do about it.</p>



<p>Tom has spent five years building original research into how search works, and he&#8217;s one of the few voices in the industry whose predictions tend to hold up over time.&nbsp;</p>



<p>Watch the full conversation below, or read on for key takeaways.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why You Shouldn&#039;t Over-Focus on Citations: Tom Capper, Senior Search Scientist at Moz" width="640" height="360" src="https://www.youtube.com/embed/tZDx9u5gB8A?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">Why Ranking #1 on Google No Longer Guarantees AI Visibility</h2>



<p>One of the most striking findings Tom shared from his original research is that, in a study of nearly 40,000 queries, 88% of the links cited in Google&#8217;s AI Mode responses don&#8217;t appear in the equivalent organic top 10 for the same keyword.</p>



<p>This means that a brand could sit at the top of a traditional SERP, but still be completely absent from the AI response its potential customers are actually reading. It&#8217;s important to note that this isn&#8217;t always the case, and <a href="https://prerender.io/blog/why-safe-seo-is-no-longer-enough-yoast/" target="_blank" rel="noreferrer noopener">rankings do still matter</a>. AI systems often pull from the top five search results. Tom is simply saying that rankings don&#8217;t <strong><em>guarantee</em></strong> AI visibility.</p>



<p>That&#8217;s because AI Mode isn&#8217;t just answering a single query. Instead, it&#8217;s running what he describes as roughly 20 different sub-searches simultaneously, pulling from a much broader range of sources than any single organic result page. So the content that ranks well for a given keyword may have nothing to do with what AI Mode pulls when it constructs its answer to the underlying question a user is actually asking.</p>



<p><strong>The practical implication for marketing teams:</strong> tracking your organic rankings and assuming that tells you something about your AI visibility is increasingly unreliable. Rankings are still important, but rankings and AI visibility may require different measurement approaches.</p>



<p><em><strong>Related reading:</strong> <a href="https://prerender.io/blog/industry-study-how-ai-retrieves-content/" target="_blank" rel="noreferrer noopener">What 100M+ Pages Reveal About How AI Systems Retrieve and Cite Your Content</a></em></p>



<h2 class="wp-block-heading">How to Rethink SEO as a Brand Channel (Not Just a Performance Channel)</h2>



<p>For most of its history, SEO has been measured like a direct response channel: rankings drive clicks, clicks drive conversions, and at the end of the day, everyone reports on traffic. That model is under pressure right now, yet Tom argues it was always a little misleading.</p>



<p>&#8220;There&#8217;s a big chunk of SEO that ought to have been thinking about itself as a brand channel probably quite a long time ago,&#8221; he told Joe. &#8220;What we are doing now is getting exposure.&#8221;</p>



<p>With AI Overviews and zero-click search handling more informational queries, the click-through that SEO teams used to count on is happening less often. But that doesn&#8217;t mean the value of appearing in those results has disappeared. It just means the value has shifted from the click to the impression.</p>



<p>Tom and Joe land on a strong analogy here: “If your company invests in billboards, no one asks how many people clicked the billboard. The brand recognition counts.” A significant share of organic search has always worked that way, but marketers just weren&#8217;t measuring it in those terms before.</p>



<p>In the context of AI search, this becomes more pronounced. When an AI model recommends your product by name (even without linking to your site), that&#8217;s your brand being surfaced in the highest-trust context possible. The person who finds you that way and then visits your website directly is a fundamentally different visitor than someone who clicked the third link in a SERP.</p>



<p><strong>What this means for how you should measure:</strong> branded search volume, share of voice in AI responses, direct traffic trends, and third-party coverage are all more meaningful signals in this environment than raw organic click volume alone.</p>



<h2 class="wp-block-heading">Why Digital PR Has Become One of the Most Valuable Levers for AI Search Visibility</h2>



<p>If brand visibility in AI responses is the goal, the question is what actually moves the needle. Tom&#8217;s answer was clear: digital PR. He described it as probably the biggest lever available to most companies right now, and one that most haven&#8217;t fully committed to yet.</p>



<p>The logic behind this makes sense once you understand how AI models decide what to surface. These systems are trained on large bodies of text from across the web, and they tend to favor brands that are mentioned frequently and positively across authoritative third-party sources like news coverage, industry publications, research citations, and expert roundups. You can&#8217;t optimize your way into an AI recommendation purely through on-site improvements to your own pages. Instead, the signal has to come from the broader web talking about you.</p>



<p><strong>&#8220;Digital PR is probably the biggest lever if you want to improve your standing in these kinds of results,&#8221; </strong>Tom said. <strong>&#8220;And that has not been true in SEO for more than a decade.&#8221;</strong></p>



<p>For years, digital PR was often reduced to link building to secure the placement, earn the backlink, and track the increase in your domain authority. That framing made sense when links were the primary trust signal Google was evaluating. But as Tom noted, the relationship between links and rankings has been loosening for a while, and AI search has accelerated this shift.</p>



<p>Now, a placement in a respected industry publication that mentions your brand—even <em>without</em> a link itself—adds to the body of third-party text that AI models draw from. The link has become secondary to the mention.</p>



<p><em><strong>Related reading:</strong> <a href="https://prerender.io/blog/how-to-get-indexed-on-ai-platforms/" target="_blank" rel="noreferrer noopener">How to Get &#8220;Indexed&#8221; in AI Search</a></em></p>



<h3 class="wp-block-heading">How to start building your digital PR presence:</h3>



<ul class="wp-block-list">
<li>Map out which publications, journalists, and independent review sites cover your category. These are the sources that AI models are likely pulling from when they answer questions in your space.</li>



<li>Identify which of your competitors are being mentioned regularly in those outlets, and understand what kinds of stories or data are generating that coverage.</li>



<li>Shift the brief for your PR team. Getting placements without links is <em>not</em> a failure. In the current environment, the mention itself is the asset.</li>



<li>Reallocate funds, and consider investing more in PR than you might have previously.</li>
</ul>



<h2 class="wp-block-heading">The AI Attribution Problem (Again): Don’t Focus on Citations as a Core KPI</h2>



<p>Tom was candid about the fact that none of this is easy to measure right now—as we’ve heard in nearly every episode this season. (<a href="https://prerender.io/blog/podcast-noah-greenberg-stacker/" target="_blank" rel="noreferrer noopener">Noah Greenberg from Stacker</a>, <a href="https://prerender.io/blog/podcast-how-toggl-is-adapting-to-the-ai-era/" target="_blank" rel="noreferrer noopener">Elizabeth Thorn from Toggl</a>, and <a href="https://prerender.io/blog/a-podcast-conversation-with-ryan-law-ahrefs/" target="_blank" rel="noreferrer noopener">Ryan Law from Ahrefs</a> all talked about this.) Brand impressions in AI responses don&#8217;t appear in Google Search Console. This means that a customer who reads an AI recommendation for your product and searches your brand directly will show up in your analytics as a branded search or direct traffic, but without a trace of the original touchpoint.</p>



<p>&#8220;Attribution and analytics in general are not in an amazing spot right now,&#8221; Tom acknowledged, while also pushing back on the idea that this is entirely new. <a href="https://prerender.io/blog/technical-seo-for-ai-search-peter-rota/" target="_blank" rel="noreferrer noopener">Traditional SEO attribution</a> had its own issues. The click-through data that made organic search feel so measurable was always capturing something directional rather than definitive.</p>



<p>His more important point: <strong>over-focusing on citations as a KPI is a mistake.</strong> He gave the example of an AI model recommending a specific car model. If you&#8217;re the automaker, the win is being recommended, not whether your own website was one of the sources cited in the footnote. A third-party automotive magazine being cited is arguably better for credibility.</p>



<p>This is a meaningful mindset shift for SEO teams used to optimizing toward measurable outcomes. The measurement infrastructure for AI visibility is still being built. In the meantime, the right move is to expand what you track, and to treat brand-building activities as legitimate contributors to discovery performance—not as something separate from SEO.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why you should stop focusing on citations as a KPI" width="540" height="960" src="https://www.youtube.com/embed/r1acSQWkBjM?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">Why Google Winning the AI Race Might Actually Be Good News for Marketers</h2>



<p>One of the more counterintuitive moments in the conversation came when Tom pushed back on the assumption that Google&#8217;s dominance in AI search is something marketers should fear. From his perspective, Google winning the top-of-funnel AI battle against ChatGPT is probably the better outcome for most marketing teams, even if it doesn&#8217;t feel that way. </p>



<p>&#8220;Google is the devil we know,&#8221; he said. &#8220;It has years of experience dealing with web spam, SEOs, and the messy realities of the open web. It has tried things, failed, and course-corrected.&#8221; A world where search consolidates around Google—even a more AI-driven version of it—looks a lot more like a world marketers already know how to operate in than one where a handful of new, unpredictable platforms are each rewriting the rules simultaneously.</p>



<p>As Tom put it, a Google-dominant future is &#8220;a slightly less intimidating future and one that might look a little bit more like what we know how to play in.&#8221; </p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="&quot;Google is the devil we know&quot; - why Google winning over OpenAI may be best for SEOs" width="540" height="960" src="https://www.youtube.com/embed/3qe20L5waBI?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">Key Takeaways for Marketing Teams</h2>



<p>So what can marketers take away from this conversation? A few points stand out from Tom&#8217;s research and the discussion itself:</p>



<ol class="wp-block-list">
<li><strong>Don&#8217;t over-index on citations as a KPI</strong></li>
</ol>



<p>Being recommended by name inside an AI response is the win, not whether your own website shows up as a cited source. A third-party publication being cited alongside your brand recommendation is often more credible than your own site being cited. Track mentions and recommendations separately from source citations.</p>



<ol start="2" class="wp-block-list">
<li><strong>Invest in digital PR with a new brief</strong></li>
</ol>



<p>The ask isn&#8217;t &#8220;get us a backlink.&#8221; It&#8217;s &#8220;get us mentioned in the places our customers and AI models trust.&#8221; That means identifying authoritative outlets in your category and building genuine relationships with the journalists and editors who cover it. Links are a secondary benefit, not the goal.</p>



<ol start="3" class="wp-block-list">
<li><strong>Start treating brand as an SEO input</strong></li>
</ol>



<p>Tom noted that brand authority is now a stronger predictor of ranking performance than domain authority, and that this relationship becomes even more critical in AI-driven results. Anything that puts your brand in front of your audience in a memorable way—sponsorships, events, TV, or community building—is contributing to branded search signals that influence discovery downstream.</p>



<ol start="4" class="wp-block-list">
<li><strong>Use citations as a strategic intelligence tool</strong></li>
</ol>



<p>Rather than treating citations as a core KPI, use them as intelligence: look at which third-party sources AI models cite when recommending brands in your category, and feed that into your digital PR plan. If a specific publication or review site keeps appearing, that&#8217;s your outreach list.</p>



<ol start="5" class="wp-block-list">
<li><strong>Build measurement flexibility into your strategy now</strong>&nbsp;</li>
</ol>



<p>The tools for tracking AI visibility are improving quickly, but the attribution landscape won&#8217;t settle down anytime soon. Build reporting that surfaces leading indicators—branded search volume, third-party mentions, and direct traffic trends—alongside traditional organic metrics, and revisit what you&#8217;re measuring regularly as the tooling catches up.</p>



<ol start="6" class="wp-block-list">
<li><strong>Original research is one of the highest-leverage content investments you can make</strong></li>
</ol>



<p>If your brand is the only source for a specific data point or finding, AI models have no choice but to cite you when that information is relevant. You don&#8217;t need a dedicated research team to start. Focusing on the internal data you already have, framed around a story worth telling, is enough.</p>



<p><strong>Listen to the full episode: </strong></p>



<figure class="wp-block-embed is-type-rich is-provider-spotify wp-block-embed-spotify wp-embed-aspect-21-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Spotify Embed: Why You Shouldn&amp;apos;t Over-Focus on Citations: Tom Capper, Senior Search Scientist at Moz" style="border-radius: 12px" width="100%" height="152" frameborder="0" allowfullscreen allow="autoplay; clipboard-write; encrypted-media; fullscreen; picture-in-picture" loading="lazy" src="https://open.spotify.com/embed/episode/0iuwv0JpnrQrUXYofxw8ql?si=NiArk8rBQEeYTWAZtUeLgA&amp;utm_source=oembed"></iframe>
</div></figure>



<h2 class="wp-block-heading">About Tom Capper</h2>



<p>Tom Capper is a Senior Search Scientist at <a href="https://moz.com">Moz</a>, where he builds original research into how search actually works. He&#8217;s a regular speaker at MozCon and SMX, and publishes the Moz Top 10 newsletter biweekly. Connect with him on <a href="https://www.linkedin.com/in/thcapper/" target="_blank" rel="noreferrer noopener nofollow">LinkedIn</a> or follow his work over at Moz directly.</p>



<h2 class="wp-block-heading">About Prerender.io</h2>



<p>Prerender.io is a leading SEO solution that helps modern websites ensure their JavaScript-heavy pages are fully visible to search engines and AI tools. Trusted by companies like Microsoft, Salesforce, and Walmart, Prerender is the go-to partner for businesses navigating the future of SEO and AI-driven discoverability. <a href="https://prerender.io/pricing/" target="_blank" rel="noreferrer noopener">Start for free today.</a></p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://prerender.io/pricing/" style="border-radius:0px;background-color:#1f8511">Try Prerender.io For Free</a></div>
</div>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How to Troubleshoot Common Lovable and Cloudflare Integration Issues</title>
		<link>https://prerender.io/blog/troubleshooting-lovable-cloudflare-integration/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Thu, 19 Mar 2026 09:57:37 +0000</pubDate>
				<category><![CDATA[Javascript SEO]]></category>
		<category><![CDATA[javascript]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7738</guid>

					<description><![CDATA[Explore common Lovable and Cloudflare integration problems and learn how to fix them, ensuring your Lovable site's SEO performance is optimized for search.]]></description>
										<content:encoded><![CDATA[
<p>A solid Cloudflare integration is the foundation for resolving Lovable SEO issues. But when the Lovable–Cloudflare integration layer isn’t configured correctly, problems surface quickly: your pages load for users but not for crawlers, or your metadata exists but doesn’t get picked up. Ultimately, your content visibility becomes highly inconsistent.</p>



<p>For some teams, connecting Lovable to Cloudflare is straightforward. But for others, it can be a debugging session involving redirect loops, blocked crawlers, caching conflicts, and subtle Cloudflare bot rendering issues that aren’t immediately obvious. </p>



<p>This guide breaks down the most common Lovable and Cloudflare integration failures, and shows you exactly how to fix each one.</p>



<h2 class="wp-block-heading">TL;DR of Common Lovable and Cloudflare Integration Challenges and How to Fix Them</h2>



<p>If your Lovable site isn’t indexing, search engine crawlers aren’t seeing content, or you’re getting Cloudflare errors like 1000 or 502, the issue is usually in the Cloudflare integration layer—not necessarily with Lovable itself.</p>



<p>In most cases, troubleshooting a Lovable and Cloudflare setup comes down to:</p>



<ul class="wp-block-list">
<li>Removing the custom domain from Lovable and letting Cloudflare fully control routing</li>



<li>Using the Lovable-specific Cloudflare Worker (not the standard version)</li>



<li>Ensuring DNS records are proxied (orange cloud) and not using A records</li>



<li>Setting SSL to Full (Strict)</li>



<li>Disabling Rocket Loader, Signed Exchanges, and reviewing Bot Fight Mode</li>



<li>Confirming bot requests return rendered HTML (look for <code><span style="color: #188038;">x-prerender-request-id</span></code>)</li>



<li>Separating bot and human cache behavior</li>
</ul>



<p>If you’re using Prerender.io for your Lovable app, most “not working” reports also trace back to misconfigured Workers, blocked bots, caching conflicts, or DNS/SSL mistakes in Cloudflare.</p>



<p>When the integration is correct, bot traffic is intercepted at the edge, routed properly, and receives fully rendered HTML, while human users continue to load the SPA normally.</p>



<h2 class="wp-block-heading">Why Lovable Apps Aren&#8217;t SEO-Friendly by Default</h2>



<p>Lovable is an AI-powered software builder that generates web applications from natural language prompts. These apps are typically client-side rendered, which means that when Googlebot or an AI crawler visits your Lovable website, it sees only an empty page shell; the actual content and metadata are filled in by JavaScript after load.</p>



<p>Here’s what search engine crawlers receive when they make requests on a Lovable site:</p>



<pre class="wp-block-code dark"><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;&lt;title&gt;My Site&lt;/title&gt;&lt;/head&gt;
&lt;body&gt;
  &lt;div id="root"&gt;&lt;/div&gt;
  &lt;script src="/assets/index-abc123.js"&gt;&lt;/script&gt;
&lt;/body&gt;
&lt;/html&gt;</code></pre>



<p>Every route returns this identical empty web page shell without the content, meta tags, or internal links. This is the core problem with client-side rendering (CSR), and it creates specific problems that directly affect content visibility and traffic.</p>



<ul class="wp-block-list">
<li><strong>Crawl delays.</strong> Google uses a two-phase process: it crawls raw HTML first, then queues JavaScript-heavy pages for a separate rendering pass. That queue adds days to weeks of delay, and execution is not guaranteed, <a href="https://prerender.io/benefits/faster-indexation/" type="link" id="https://prerender.io/benefits/faster-indexation/">causing pages to appear inconsistently in search results</a>.</li>



<li><strong>Invisible meta tags.</strong> Per-page titles, descriptions, canonical URLs, and JSON-LD structured data set via <code><span style="color: #188038;">react-helmet-async</span></code>  are all JavaScript-rendered. Crawlers that skip JS execution never see them, which means your rich snippets and structured data produce no SEO value.</li>



<li><a href="https://prerender.io/benefits/social-media-sharing/" type="link" id="https://prerender.io/benefits/social-media-sharing/"><strong>Broken social link previews.</strong></a> Facebook, LinkedIn, Twitter/X, Slack, Discord, and WhatsApp all fetch pages without executing JavaScript. They receive the empty shell and generate blank or generic link previews. Broken social link previews directly reduce click-through rates on every shared link.</li>



<li><strong>No internal link discovery.</strong> React Router links and programmatic navigation calls are invisible in raw HTML. Search engines cannot discover linked pages even when those pages exist in a sitemap.</li>



<li><strong>Wasted crawl budget.</strong> Crawlers download heavy JavaScript bundles for every URL and encounter the same empty shell across all routes, directly harming crawlability and indexability across your entire site.</li>
</ul>



<h2 class="wp-block-heading">How to Check Whether Your Lovable Website Has SEO Issues</h2>



<p>Lovable’s own SEO and AEO guidance documents the practical limitations of client-side rendering (CSR), and notes that metadata does not automatically update across routes unless you implement route-specific metadata management.</p>



<p>To confirm whether your site has common Lovable SEO problems, run:</p>



<pre class="wp-block-code dark"><code>curl -i -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  -H "Accept: text/html" https://your-lovable-site.com</code></pre>



<p>If the response body contains only <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">&lt;div id="root"&gt;&lt;/div&gt;</mark></code> with no real content, search engine crawlers see nothing.</p>
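<p>If you want to script that check, one rough heuristic is to strip scripts, styles, and tags from the response body and see whether any visible text remains. The sketch below is an illustrative simplification; the <code>looksLikeEmptyShell</code> helper and its 40-character threshold are our own assumptions, not part of Lovable or Prerender.io.</p>

```javascript
// Rough heuristic, not a definitive test: an empty CSR shell contains
// little or no visible text once scripts, styles, and markup are removed.
function looksLikeEmptyShell(html) {
  const visibleText = html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop script blocks
    .replace(/<style[\s\S]*?<\/style>/gi, "")   // drop style blocks
    .replace(/<[^>]+>/g, " ")                   // strip remaining tags
    .replace(/\s+/g, " ")
    .trim();
  return visibleText.length < 40; // arbitrary threshold; tune per site
}
```

<p>Feed the body of the Googlebot <code>curl</code> request above through this function to flag routes that serve nothing but the shell.</p>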



<figure class="wp-block-table custom_table_styles_1"><table class="has-fixed-layout"><thead><tr><th class="has-text-align-left" data-align="left">Feature</th><th class="has-text-align-left" data-align="left">Visible to crawlers without Prerender?</th></tr></thead><tbody><tr><td class="has-text-align-left" data-align="left">robots.txt</td><td class="has-text-align-left" data-align="left">Yes</td></tr><tr><td class="has-text-align-left" data-align="left">sitemap.xml</td><td class="has-text-align-left" data-align="left">Yes</td></tr><tr><td class="has-text-align-left" data-align="left">Per-page meta tags</td><td class="has-text-align-left" data-align="left">No</td></tr><tr><td class="has-text-align-left" data-align="left">Open Graph / social previews</td><td class="has-text-align-left" data-align="left">No</td></tr><tr><td class="has-text-align-left" data-align="left">JSON-LD structured data</td><td class="has-text-align-left" data-align="left">No</td></tr><tr><td class="has-text-align-left" data-align="left">Page content and headings</td><td class="has-text-align-left" data-align="left">No</td></tr><tr><td class="has-text-align-left" data-align="left">Internal links</td><td class="has-text-align-left" data-align="left">No</td></tr><tr><td class="has-text-align-left" data-align="left">Canonical URLs</td><td class="has-text-align-left" data-align="left">No</td></tr></tbody></table></figure>



<p>To fix Lovable SEO issues, use <a href="https://prerender.io" type="link" id="https://prerender.io">Prerender.io</a> to pre-render your Lovable app and deliver fully rendered content to crawlers. This ensures 100% visibility of your website’s content. <a href="https://prerender.io/blog/how-to-make-lovable-websites-seo-friendly/" type="link" id="https://prerender.io/blog/how-to-make-lovable-websites-seo-friendly/">We’ve explained how Prerender.io makes your Lovable website SEO-friendly here.</a></p>



<p>That said, you may run into challenges when integrating Prerender.io with Cloudflare and Lovable. Let’s take a closer look at how the integration works—and the most common blockers you might encounter.</p>



<h2 class="wp-block-heading">How the Lovable, Cloudflare, and Prerender.io Integration Works</h2>



<p>A Cloudflare Worker sits between your visitors and your Lovable app, solving the Lovable SEO issues caused by JavaScript rendering. Every request passes through it, and it routes traffic in one of two directions based on the User-Agent header.</p>



<h3 class="wp-block-heading">How the Cloudflare Worker Identifies Bots</h3>



<p>The Cloudflare Worker checks incoming <code><span style="color: #188038;">User-Agent</span></code> strings against a <code><span style="color: #188038;">BOT_AGENTS</span></code> list covering:</p>



<ul class="wp-block-list">
<li>Search engines: Googlebot, Bingbot, Yandexbot, Applebot</li>



<li>Social crawlers: facebookexternalhit, Twitterbot, LinkedInBot, Slackbot, Discordbot</li>



<li>SEO tools: Semrushbot, Ahrefsbot, Screaming Frog</li>



<li>AI bots: GPTBot, ClaudeBot, PerplexityBot, Anthropic-AI</li>
</ul>



<p>When a match is found, the Cloudflare Worker fetches <code><mark style="background-color:#282a36;overflow-wrap: break-word;" class="has-inline-color has-white-color">https://service.prerender.io/https://your-domain.com/path</mark></code> using your <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">X-Prerender-Token</mark></code> and returns the rendered HTML. An <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">X-Prerender</mark></code> header check prevents an infinite loop, since Prerender.io itself needs to fetch your page in order to render it.</p>
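<p>The routing decision itself boils down to a small predicate. The sketch below is a simplified illustration, not Prerender.io&#8217;s official Worker gist: the <code>shouldPrerender</code> name is hypothetical and the <code>BOT_AGENTS</code> entries shown are only a subset of the full list.</p>

```javascript
// Illustrative subset of the BOT_AGENTS list described above.
const BOT_AGENTS = [
  "googlebot", "bingbot", "yandexbot", "applebot",
  "facebookexternalhit", "twitterbot", "linkedinbot", "slackbot", "discordbot",
  "semrushbot", "ahrefsbot", "screaming frog",
  "gptbot", "claudebot", "perplexitybot", "anthropic-ai",
];

// Route a request to Prerender only when the User-Agent matches a known
// bot AND the X-Prerender header is absent; the header check is what
// prevents the infinite loop when Prerender.io fetches the page itself.
function shouldPrerender(userAgent, headers) {
  if (headers["x-prerender"]) return false;
  const ua = (userAgent || "").toLowerCase();
  return BOT_AGENTS.some((bot) => ua.includes(bot));
}
```
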



<figure class="wp-block-image size-full"><img decoding="async" width="624" height="217" src="https://prerender.io/wp-content/uploads/How-Cloudflare-Worker-identifies-bots-on-Lovable-apps.png" alt="How Cloudflare Worker identifies bots on Lovable apps" class="wp-image-7755" srcset="https://prerender.io/wp-content/uploads/How-Cloudflare-Worker-identifies-bots-on-Lovable-apps.png 624w, https://prerender.io/wp-content/uploads/How-Cloudflare-Worker-identifies-bots-on-Lovable-apps-300x104.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></figure>



<p>As the diagram shows, bot traffic routes through Prerender.io and returns rendered HTML, while human traffic passes directly to the Lovable app. Neither group is aware of the other&#8217;s path.</p>



<h3 class="wp-block-heading">Why Lovable Requires a Different Cloudflare Worker</h3>



<p>Prerender.io offers two Worker variants: a standard Cloudflare Worker and a Lovable-specific one. The bot-detection logic is identical in both, but the difference lies in how each handles human traffic.</p>



<p>The standard Worker calls <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">fetch(request)</mark></code> to pass human visitors through to an existing origin server. Lovable-hosted sites do not have an accessible origin server in that sense. Instead, the Worker attaches via Custom Domains, which makes the Worker itself the origin. Human traffic must be explicitly proxied to <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">yourapp.lovable.app</mark></code> inside the Worker code.</p>
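<p>In code terms, that proxying step amounts to rewriting the incoming request&#8217;s host to the Lovable upstream while keeping the path and query intact. The following is a minimal sketch under our own naming assumptions; <code>UPSTREAM_HOST</code> and <code>toUpstreamUrl</code> are illustrative placeholders, not the gist&#8217;s actual code.</p>

```javascript
// Placeholder for your real yourapp.lovable.app value.
const UPSTREAM_HOST = "yourapp.lovable.app";

// Rewrite a public request URL to the Lovable upstream host, preserving
// the path and query string; the Worker then fetches this URL for humans.
function toUpstreamUrl(requestUrl) {
  const url = new URL(requestUrl);
  url.hostname = UPSTREAM_HOST;
  url.port = ""; // use the upstream's default HTTPS port
  return url.toString();
}
```
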



<p>Prerender.io provides a separate gist for this (<code><mark style="background-color:#282a36; overflow-wrap: break-word;" class="has-inline-color has-white-color">github.com/Lasalot/4e5b294438d5404a5fb4aa4f63313dfb</mark></code>). It requires two changes: your Lovable upstream URL and your domain. Deploying the standard Worker gist on a Lovable setup silently breaks the site for human visitors. Bots appear to work fine, which makes the problem easy to miss.</p>



<p>Learn more about how to connect <a href="https://prerender.io" type="link" id="https://prerender.io">Prerender.io</a> to your Lovable websites through Cloudflare in these integration guides:</p>



<p><a href="https://docs.prerender.io/docs/how-to-integrate-prerender-with-lovable-hosted-websites" type="link" id="https://docs.prerender.io/docs/how-to-integrate-prerender-with-lovable-hosted-websites">Prerender.io Lovable Integration Guide</a></p>



<p><a href="https://docs.prerender.io/docs/cloudflare-integration-v2" type="link" id="https://docs.prerender.io/docs/cloudflare-integration-v2">Prerender.io Cloudflare Integration Guide (v2)</a></p>



<h2 class="wp-block-heading">Common Lovable and Cloudflare Integration Issues</h2>



<p>The failure points below cover the majority of Lovable and Cloudflare integration issues reported around this stack. If <strong><em>Cloudflare prerender not working</em></strong> describes your situation, one of these sections will identify exactly why. Most of these are Cloudflare bot rendering issues, cases where Cloudflare&#8217;s own features interfere with Prerender&#8217;s ability to intercept and serve bot traffic correctly.</p>



<h3 class="wp-block-heading">1. Redirect Loops</h3>



<p><strong>Cause:</strong></p>



<p>The custom domain is configured inside Lovable while also being handled by the Cloudflare Worker. Lovable forces its own redirects, the Worker adds another routing layer, and the request loops indefinitely. If a domain is marked primary inside Lovable, other domains redirect to it, which interacts poorly with proxy-based setups.</p>



<p><strong>Fix:</strong></p>



<p>Remove the custom domain from the Lovable project&#8217;s Domains settings. Only <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">yourapp.lovable.app</mark></code> should remain configured in Lovable. The Cloudflare Worker Custom Domain handles the public-facing domain entirely.</p>



<p>This domain conflict is the single most impactful prerequisite in the entire setup. Skip it and nothing else works correctly.</p>



<p><strong>Verify:</strong></p>



<pre class="wp-block-code dark"><code>curl -I -L https://app.example.com/</code></pre>



<p>The chain should resolve cleanly to a <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">200</mark></code>. If you see repeated <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">301</mark></code> or <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">302</mark></code> responses cycling without resolution, the domain is still connected inside Lovable.</p>



<h3 class="wp-block-heading">2. AI Crawlers Not Indexing or Citing Your Lovable Site</h3>



<p><strong>Cause:</strong></p>



<p>Cloudflare blocks AI training crawlers by default on newly added domains. Without explicitly enabling access, AI crawlers like GPTBot and PerplexityBot cannot reach your content regardless of your Prerender configuration.</p>



<p><strong>Fix:</strong></p>



<p>Go to Account Home, select your domain, navigate to Overview, and select &#8220;Control AI crawlers.&#8221; Review which crawlers are blocked and enable access for the ones you want to allow. Confirm the relevant AI bot User-Agents (GPTBot, ClaudeBot, PerplexityBot, Anthropic-AI) are also present in your Worker&#8217;s <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">BOT_AGENTS</mark></code> list so they route through Prerender and receive rendered HTML.</p>
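<p>As a rough sketch of how that User-Agent routing works inside the Worker (the list below is an illustrative subset, not the template&#8217;s actual <code>BOT_AGENTS</code> array, and <code>isBot</code> is a hypothetical helper name):</p>

```javascript
// Illustrative bot detection: case-insensitive substring match of the
// User-Agent against a list of known crawler tokens. This subset is an
// assumption for illustration, not the template's full list.
const BOT_AGENTS = [
  "googlebot", "bingbot", "gptbot", "claudebot",
  "perplexitybot", "anthropic-ai",
  "facebookexternalhit", "twitterbot", "linkedinbot",
];

function isBot(userAgent) {
  const ua = (userAgent || "").toLowerCase();
  return BOT_AGENTS.some((agent) => ua.includes(agent));
}
```

<p>Any AI crawler you unblock in Cloudflare but leave out of this list will reach your origin directly and receive the unrendered SPA shell.</p>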



<p><strong>Verify:</strong></p>



<p>Go to Security &gt; Events in your Cloudflare dashboard, then filter for AI crawler User-Agents such as GPTBot or ClaudeBot. If you see challenge or block events against those crawlers, they are being stopped before the Worker runs. No events means they are passing through as intended.</p>



<h3 class="wp-block-heading">3. Social Link Previews From Lovable Apps Are Generic or Wrong</h3>



<p><strong>Cause:</strong></p>



<p>Facebook, X, and LinkedIn preview crawlers do not execute JavaScript. They only see the initial HTML returned by the server. If <a href="https://prerender.io/blog/benefits-of-using-open-graph/" type="link" id="https://prerender.io/blog/benefits-of-using-open-graph/">Open Graph</a> and Twitter Card meta tags are not present in the initial HTML for each route, previews fall back to generic defaults.</p>



<p><strong>Fix:</strong></p>



<p>Ensure Open Graph and Twitter Card tags are present in the initial HTML for every route, not injected by JavaScript after load. Confirm the social crawler User-Agents are in your Worker bot list. The Prerender Lovable Worker template includes facebookexternalhit, twitterbot, and linkedinbot by default.</p>



<p><a href="https://prerender.io/blog/how-to-fix-link-previews/" type="link" id="https://prerender.io/blog/how-to-fix-link-previews/">Learn more about common social link preview problems and how to troubleshoot them.</a></p>



<p><strong>Verify:</strong></p>



<p>Run the URL through Facebook Sharing Debugger, X Card Validator, and LinkedIn Post Inspector. Each tool shows exactly what its crawler found when it fetched the page, so you can see directly whether your Open Graph tags are present and rendering correctly.</p>
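<p>If you want a quick local approximation of what those debuggers check, a small helper can scan the raw HTML for the tags preview crawlers rely on (the tag list and function name here are illustrative, not an official validator):</p>

```javascript
// Minimal check that a raw HTML string contains the Open Graph tags
// social preview crawlers need. The required list is an illustrative
// baseline; individual platforms check additional tags.
const REQUIRED_OG = ["og:title", "og:description", "og:image", "og:url"];

function missingOgTags(html) {
  // Naive substring check; assumes property="..." attribute formatting.
  return REQUIRED_OG.filter(
    (prop) => !html.includes(`property="${prop}"`)
  );
}
```

<p>Run it against the HTML returned for a bot User-Agent, not the HTML your browser renders after JavaScript executes.</p>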



<h3 class="wp-block-heading">4. Cloudflare Error 1014 and Error 1000</h3>



<p><strong>Cause:</strong></p>



<p>Lovable runs on Cloudflare&#8217;s infrastructure. When your domain is also on Cloudflare and uses a CNAME pointing to <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">yourapp.lovable.app</mark></code>, cross-account CNAME resolution through Cloudflare&#8217;s proxy triggers these errors. Cloudflare defines Error 1014 as &#8220;CNAME Cross-User Banned&#8221; and Error 1000 as &#8220;DNS points to prohibited IP.&#8221;</p>



<p><strong>Fix:</strong></p>



<p>Use Workers Custom Domains rather than standard DNS route-based Workers. In your Cloudflare dashboard, go to Workers &amp; Pages, open your Worker, navigate to Settings, and add your public domain under Custom Domains. Cloudflare provisions the DNS automatically. Do not manually create a CNAME or A record pointing to <code><span style="color: #188038;">yourapp.lovable.app</span></code> alongside this, as that reintroduces the conflict.</p>



<p>If your setup requires traditional DNS, use a CNAME record rather than an A record. An A record pointing to a Cloudflare IP is a documented trigger for Error 1000, and community members running this integration consistently identify A records as the source of DNS failures.</p>



<p>Regardless of DNS approach, audit your Worker for forwarded IP headers. Before fetching the Lovable upstream, delete <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">CF-Connecting-IP</mark></code> and any duplicated <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">X-Forwarded-For</mark></code> headers. Passing these through a Cloudflare-proxied request is a documented cause of Error 1000. The Prerender Lovable Worker template handles this by default. If you are using a custom Worker, add the deletions explicitly.</p>
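<p>The header cleanup described above amounts to a couple of deletions before the upstream fetch. A minimal sketch (the helper name is an assumption; the template&#8217;s actual code may structure this differently):</p>

```javascript
// Strip forwarded-IP headers before proxying a request to the Lovable
// upstream. Passing these through a Cloudflare-proxied hop can trigger
// Error 1000.
function cleanUpstreamHeaders(incoming) {
  const headers = new Headers(incoming); // copy, don't mutate the original
  headers.delete("CF-Connecting-IP");
  headers.delete("X-Forwarded-For");
  return headers;
}
```

<p>Use the returned headers when constructing the <code>fetch()</code> to <code>yourapp.lovable.app</code>; all other headers pass through untouched.</p>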



<p><strong>Verify:</strong></p>



<p>Run a bot User-Agent curl request and look at the response. If you are still seeing 1000 or 1014 errors, a conflicting DNS record is still in place. If you see 525 or 526, the SSL mode needs attention. A clean <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">200</mark></code> with no error codes means the DNS conflict is resolved.</p>



<h3 class="wp-block-heading">5. Cloudflare Worker Not Intercepting Bot Traffic</h3>



<p><strong>Cause:</strong></p>



<p>Route patterns are misconfigured, the Worker is deployed to the wrong Cloudflare zone, or the failure mode is set to fail-closed, returning a 500 instead of falling through to the origin. A missing or misconfigured <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">PRERENDER_TOKEN</mark></code> secret will also silently break bot routing.</p>



<p><strong>Fix:</strong></p>



<p>Confirm route patterns cover all URL variations for your domain:</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th class="has-text-align-left" data-align="left">Website type</th><th class="has-text-align-left" data-align="left">Correct route pattern</th></tr></thead><tbody><tr><td class="has-text-align-left" data-align="left">example.com</td><td class="has-text-align-left" data-align="left"><code>example.com/*</code></td></tr><tr><td class="has-text-align-left" data-align="left">www.example.com</td><td class="has-text-align-left" data-align="left"><code>www.example.com/*</code></td></tr><tr><td class="has-text-align-left" data-align="left">Both www and non-www</td><td class="has-text-align-left" data-align="left"><code>*example.com/*</code></td></tr><tr><td class="has-text-align-left" data-align="left">Any subdomain</td><td class="has-text-align-left" data-align="left"><code>*.example.com/*</code></td></tr></tbody></table></figure>



<p>Set the Worker failure mode to &#8220;Fail open (proceed)&#8221; so errors degrade gracefully rather than taking the site down.</p>
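<p>Beyond the dashboard setting, the fail-open pattern is worth enforcing in the Worker code itself. A sketch under the assumption that the two fetch paths are injected as functions (the names here are hypothetical, not the template&#8217;s):</p>

```javascript
// Fail-open sketch: if the Prerender fetch throws, fall back to the
// origin response instead of surfacing a 500 to the visitor.
async function fetchWithFailOpen(fetchPrerendered, fetchOrigin) {
  try {
    return await fetchPrerendered();
  } catch (err) {
    // Degrade gracefully: the bot gets the unrendered origin page,
    // which is still better than an error.
    return fetchOrigin();
  }
}
```

<p>The trade-off is that a silent Prerender outage looks like a working site, so pair this with monitoring of the <code>x-prerender-request-id</code> header described below.</p>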



<p>Confirm the Worker has <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">PRERENDER_TOKEN</mark></code> set as a secret and that the bot path sends <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">X-Prerender-Token</mark></code>. The token is mandatory and its absence produces no visible error, just a silent failure to render.</p>



<p>For Workers Custom Domains, Cloudflare will not attach the domain if the hostname already has an existing CNAME DNS record. Confirm the Custom Domain is attached, active, and no conflicting CNAME remains.</p>



<p>If newly added bot User-Agents are being ignored despite multiple redeploys, a known Cloudflare propagation issue may be the cause. Force propagation by renaming the Worker or adding a version comment before redeploying.</p>



<p><strong>Verify:</strong></p>



<p>Run a curl request with a bot User-Agent and confirm the <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">x-prerender-request-id</mark></code> header is present in the response:</p>



<pre class="wp-block-code dark"><code>curl -sI -A "Googlebot" https://app.example.com/ | grep x-prerender</code></pre>



<p>If <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">x-prerender-request-id</mark></code> comes back in the headers, the Worker is routing correctly. If nothing returns, bot traffic is not reaching Prerender.io. You can also replicate this in Chrome DevTools by switching the User-Agent to Googlebot under Network Conditions and checking the document response headers directly.</p>



<h3 class="wp-block-heading">6. Bot Requests Returning 401, 502, or Looping</h3>



<p><strong>Cause:</strong></p>



<p>A 401 response means Prerender rejected the request due to an authentication failure. A 502 with &#8220;Prerender loop detected&#8221; means the Worker is sending Prerender&#8217;s own rendering requests back through the proxy. Both are distinct failures with distinct causes.</p>



<p><strong>Fix:</strong></p>



<p>For 401 errors, confirm <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">X-Prerender-Token</mark></code> is present in the bot request path and that the token value matches what Prerender expects. Prerender returns explicit hint text for missing or invalid tokens in the Edge response.</p>



<p>For 502 loop errors, confirm your Worker excludes requests that already carry Prerender identification headers. The Worker should also skip static assets entirely and only send <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">GET</mark></code> or <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">HEAD</mark></code> requests for HTML documents to Prerender. The Prerender Lovable Worker template excludes common file extensions by default. If you are using a custom Worker, add those exclusions explicitly.</p>
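<p>A minimal request filter covering those three conditions might look like this. The extension list is an illustrative subset, and the <code>X-Prerender</code> loop check reflects the header Prerender&#8217;s middleware integrations look for; verify the exact header names against the template you deploy:</p>

```javascript
// Request filter sketch: only GET/HEAD document requests go to
// Prerender; Prerender's own rendering requests and static assets
// are excluded. SKIP_EXTENSIONS is an illustrative subset.
const SKIP_EXTENSIONS = [
  ".js", ".css", ".png", ".jpg", ".svg",
  ".ico", ".json", ".xml", ".woff2",
];

function shouldSendToPrerender(method, pathname, headers) {
  if (method !== "GET" && method !== "HEAD") return false;
  // Avoid loops: skip requests that already carry Prerender's header.
  if (headers.has("X-Prerender")) return false;
  return !SKIP_EXTENSIONS.some((ext) => pathname.toLowerCase().endsWith(ext));
}
```
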



<p><strong>Verify:</strong></p>



<p>Fetch a bot request and read the response code and body together. A <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">401</mark></code> with authentication language in the body is a token problem. A <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">502</mark></code> that mentions a loop is a request filtering problem. The response body in both cases is descriptive enough to point you at the right fix.</p>



<h3 class="wp-block-heading">7. Cloudflare Cache Serving the Wrong Content</h3>



<p><strong>Cause:</strong></p>



<p>The stack has three caching layers: Prerender.io&#8217;s cache, Cloudflare&#8217;s CDN edge cache, and the browser cache. Conflicts between those layers produce two distinct problems. Cloudflare caches the empty SPA shell and serves it to bots before the Worker intercepts the request, so bots receive the shell instead of rendered HTML.</p>



<p>Separately, Cloudflare caches prerendered HTML and serves it to human visitors, breaking SPA functionality.</p>



<p><strong>Fix:</strong></p>



<ul class="wp-block-list">
<li>Run Prerender interception at the Worker level so it happens before CDN caching</li>



<li>Remove any Page Rules with <code><mark style="background-color:#282a36" class="has-inline-color has-white-color">Cache Level: Cache Everything</mark></code> on routes handled by the Worker</li>



<li>Use the Worker&#8217;s Cache API with separate cache keys for bot and human traffic to prevent cross-contamination</li>



<li>Set no-store or private cache headers on prerendered responses to prevent Cloudflare from auto-caching them</li>



<li>When purging, handle Cloudflare&#8217;s cache (via &#8220;Purge Everything&#8221; or URL-based purge) and Prerender.io&#8217;s cache separately via the dashboard or Recache API</li>
</ul>
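<p>The bot/human cache-key separation in the list above can be sketched as a small key-derivation helper. The query-parameter scheme is an assumption for illustration; any scheme works as long as the two variants can never collide:</p>

```javascript
// Derive separate cache keys for bot and human traffic so the Worker's
// Cache API never serves prerendered HTML to humans or the SPA shell
// to bots. The "__variant" parameter name is an illustrative choice.
function cacheKeyFor(url, isBot) {
  const key = new URL(url);
  key.searchParams.set("__variant", isBot ? "bot" : "human");
  return key.toString();
}
```

<p>Pass the derived key to <code>caches.default.match()</code> and <code>caches.default.put()</code> in place of the raw request URL.</p>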



<p><strong>Verify:</strong></p>



<p>Fetch the same URL with a bot User-Agent and then with a regular browser User-Agent. The bot response should contain full, rendered HTML with real content and meta tags. The human response should contain the SPA shell. If both return the same thing, cache separation is not working.</p>



<h3 class="wp-block-heading">8. Rocket Loader Breaking JavaScript Rendering</h3>



<p><strong>Cause:</strong></p>



<p>Cloudflare&#8217;s Rocket Loader rewrites <mark style="background-color:#282a36" class="has-inline-color has-white-color">&lt;script type="text/javascript"&gt;</mark> tags to <mark style="background-color:#282a36" class="has-inline-color has-white-color">&lt;script type="text/rocketscript"&gt;</mark> to defer JavaScript execution. This is one of the most common Cloudflare configuration mistakes that breaks JavaScript-rendered applications. It produces <mark style="background-color:#282a36" class="has-inline-color has-white-color">Uncaught SyntaxError</mark> errors in cached snapshots and causes bots to receive broken, partially rendered pages that Google may index incorrectly or skip entirely.</p>



<p><strong>Fix:</strong></p>



<p>Disable Rocket Loader at Speed &gt; Optimization &gt; Content Optimization.</p>



<p>Adding <mark style="background-color:#282a36" class="has-inline-color has-white-color">data-cfasync="false"</mark> to specific script tags is not a viable workaround for Lovable apps. The Vite build output is not directly editable, so attribute-level overrides cannot be applied.</p>



<p><strong>Verify:</strong></p>



<p>Fetch a page with a bot User-Agent and search the raw HTML for <mark style="background-color:#282a36" class="has-inline-color has-white-color">text/rocketscript</mark>. If it appears on any script tag, Rocket Loader is still rewriting your scripts and needs to be disabled.</p>



<h3 class="wp-block-heading">9. Bot Fight Mode Blocking Legitimate Crawlers</h3>



<p><strong>Cause:</strong></p>



<p>Cloudflare&#8217;s Bot Fight Mode runs before WAF rules and before Workers, so it cannot be bypassed with WAF Skip rules or Page Rules. When active, it issues Managed Challenges to search engine bots, returning challenge pages instead of letting requests reach the Prerender Worker.</p>



<p>Separately, Cloudflare automatically blocks AI crawlers on newly added domains by default.</p>



<p><strong>Fix:</strong></p>



<ul class="wp-block-list">
<li>Disable Bot Fight Mode at Security &gt; Bots if it is blocking legitimate crawlers</li>



<li>Switch to Super Bot Fight Mode, which can be bypassed via WAF custom rules with a Skip action</li>



<li>Enterprise customers can use <code><span style="color: #188038;">cf.bot_management.verified_bot</span></code> to exempt verified search bots explicitly</li>



<li>To allow AI crawlers, go to Account Home, select your domain, navigate to Overview, and select &#8220;Control AI crawlers&#8221;</li>
</ul>



<p><strong>Verify:</strong></p>



<p>Check Cloudflare&#8217;s Security Events log (Security &gt; Events) for challenge events against Googlebot, GPTBot, or other crawler User-Agents. Any challenged or blocked entries confirm Bot Fight Mode is running before your Worker and intercepting legitimate crawler traffic.</p>



<h3 class="wp-block-heading">10. SSL Misconfigurations</h3>



<p><strong>Cause:</strong></p>



<p>The wrong SSL mode causes Workers to fail when making outbound <code><span style="color: #188038;">fetch()</span></code> requests to Prerender.io or the Lovable upstream. Flexible mode in particular has a known bug where Worker subrequests to external HTTPS hosts fail with Error 525.</p>



<p><strong>Fix:</strong></p>



<p>Set SSL to Full (Strict) at SSL/TLS &gt; Overview. Full (Strict) requires a valid certificate on the origin. If the origin certificate is invalid or expired, Cloudflare returns Error 526.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><thead><tr><th class="has-text-align-left" data-align="left">SSL mode</th><th class="has-text-align-left" data-align="left">Effect</th></tr></thead><tbody><tr><td class="has-text-align-left" data-align="left">Flexible</td><td class="has-text-align-left" data-align="left">Worker fetch() to external HTTPS hosts fails with Error 525. Avoid.</td></tr><tr><td class="has-text-align-left" data-align="left">Full</td><td class="has-text-align-left" data-align="left">Works but does not validate origin certificates. Acceptable for development.</td></tr><tr><td class="has-text-align-left" data-align="left">Full (Strict)</td><td class="has-text-align-left" data-align="left">Recommended. Returns Error 526 if origin certificate is invalid or expired.</td></tr></tbody></table></figure>



<p>Worker subrequests to <mark style="background-color:#282a36" class="has-inline-color has-white-color">service.prerender.io</mark> always use Full (Strict) SSL regardless of your zone&#8217;s SSL setting. This works correctly since Prerender.io has valid certificates, but it is useful to understand when debugging unexpected SSL errors.</p>



<p><strong>Verify:</strong></p>



<p>Check the Cloudflare SSL/TLS overview panel and confirm the mode is set to Full or Full (Strict). Then run a bot User-Agent curl request and confirm no <code><span style="color: #188038;">525</span></code> or <code><span style="color: #188038;">526</span></code> errors appear in the response.</p>



<h3 class="wp-block-heading">11. DNS Proxy Mode Set Incorrectly</h3>



<p><strong>Cause:</strong></p>



<p>With DNS-only mode (grey cloud) enabled, traffic bypasses Cloudflare entirely. The Worker never runs, and bot requests go directly to the Lovable origin without passing through Prerender.io.</p>



<p><strong>Fix:</strong></p>



<p>Enable proxy mode (orange cloud) on all DNS records for your domain. When using the Workers Custom Domains approach, Cloudflare creates the correct proxied DNS mapping automatically when you attach a Custom Domain to the Worker. If you are managing DNS records manually, set every relevant record to proxied.</p>



<p><strong>Verify:</strong></p>



<p>Open the DNS dashboard in Cloudflare and look at the proxy status column for each hostname the Worker should handle. Every record should show the orange cloud. A grey cloud means that the hostname&#8217;s traffic is bypassing Cloudflare entirely, and the Worker will never run for requests to it.</p>



<h2 class="wp-block-heading">How to Troubleshoot Lovable and Cloudflare Integration Problems</h2>



<p>With the common Lovable and Cloudflare integration failures understood, here is how the ideal integration should look.</p>



<h3 class="wp-block-heading">A. Domain Configuration</h3>



<p>The most reliable approach is to let Cloudflare own the domain entirely and keep Lovable out of the routing chain. In order, that means:</p>



<ol class="wp-block-list">
<li>Nameservers pointed to Cloudflare (full setup)</li>



<li>Custom domain removed from Lovable. Only <mark style="background-color:#282a36" class="has-inline-color has-white-color">yourapp.lovable.app</mark> remains as upstream</li>



<li>Worker Custom Domain attached to your public domain. Cloudflare auto-creates DNS and SSL</li>
</ol>



<p>If your setup requires traditional DNS instead of Workers Custom Domains, use CNAME records with proxy mode enabled. Never use A records, as they point to Cloudflare IPs and are a documented trigger for Error 1000 with Lovable&#8217;s infrastructure. The two records you need are:</p>



<ul class="wp-block-list">
<li>@ (apex): CNAME pointing to <mark style="background-color:#282a36" class="has-inline-color has-white-color">yourapp.lovable.app</mark>, proxied (orange cloud)</li>



<li>www: CNAME pointing to <mark style="background-color:#282a36" class="has-inline-color has-white-color">yourapp.lovable.app</mark>, proxied (orange cloud)</li>
</ul>



<h3 class="wp-block-heading">B. Cloudflare Settings</h3>



<p>Cloudflare ships with several performance and security features that are on by default and conflict with how Prerender.io works. Rocket Loader defers JavaScript execution, which breaks Prerender&#8217;s rendering process. Signed Exchanges let Google serve cached versions of your pages directly, bypassing your origin entirely. Bot Fight Mode runs before Workers and can block search crawlers before they ever reach Prerender.</p>



<p>None of these are obvious culprits, which is why they cause silent failures that are hard to trace. Before the integration behaves predictably, each of the following needs to be confirmed:</p>



<ul class="wp-block-list">
<li>DNS records proxied (orange cloud)</li>



<li>SSL set to Full or Full (Strict) with valid origin certificate</li>



<li>Automatic Signed Exchanges disabled: Speed &gt; Optimization &gt; Other</li>



<li>Rocket Loader disabled: Speed &gt; Optimization &gt; Content Optimization</li>



<li>Bot Fight Mode reviewed and confirmed not blocking search crawlers</li>



<li>AI crawler blocking reviewed: Account Home &gt; domain &gt; Control AI crawlers</li>



<li>No Page Rules with <mark style="background-color:#282a36" class="has-inline-color has-white-color">Cache Level: Cache Everything</mark> on Worker routes</li>



<li>Worker failure mode set to &#8220;Fail open (proceed)&#8221;</li>
</ul>



<h3 class="wp-block-heading">C. Worker Configuration</h3>



<p>The single most common Worker mistake is deploying the standard Cloudflare gist instead of the Lovable-specific one. The standard gist passes human traffic through using <mark style="background-color:#282a36" class="has-inline-color has-white-color">fetch(request)</mark>, which does not work when the Worker is the origin. Use the Lovable gist, update the two required values, and confirm the token is stored as a secret rather than a plain environment variable.</p>



<ul class="wp-block-list">
<li><mark style="background-color:#282a36; overflow-wrap: break-word;" class="has-inline-color has-white-color">PRERENDER_TOKEN</mark> set as a Secret in the Worker&#8217;s Settings &gt; Variables and Secrets</li>



<li>Lovable-specific Worker code deployed (gist <mark style="background-color:#282a36" class="has-inline-color has-white-color">4e5b294438d5404a5fb4aa4f63313dfb</mark>), not the standard Cloudflare Worker</li>



<li>Two &#8220;CHANGE THIS&#8221; values updated: Lovable upstream URL and domain</li>



<li>Route patterns covering all required URL variations</li>
</ul>



<h3 class="wp-block-heading">D. Prerender.io Settings for Lovable Applications</h3>



<p>Prerender captures a snapshot of your page at the moment the headless browser considers it ready. By default, that happens as soon as the page loads, which means async content that has not resolved yet will be missing from the cached HTML bots receive. The <mark style="background-color:#282a36" class="has-inline-color has-white-color">prerenderReady</mark> flag lets you control exactly when that snapshot is taken.</p>



<p>Beyond that, cache expiration and sitemap submission determine how fresh and how discoverable your prerendered pages are. Make sure the following are all configured before treating the integration as complete:</p>



<ul class="wp-block-list">
<li>Cache expiration configured per content type: shorter for dynamic pages, longer for static</li>



<li>Sitemap submitted for proactive cache warming</li>



<li><mark style="background-color:#282a36" class="has-inline-color has-white-color">window.prerenderReady = false</mark> set early in the app lifecycle, switched to <mark style="background-color:#282a36" class="has-inline-color has-white-color">true</mark> only when all content and meta tags are fully in the DOM</li>



<li>Custom status codes set via <mark style="background-color:#282a36" class="has-inline-color has-white-color">&lt;meta name="prerender-status-code" content="404"&gt;</mark> on error pages</li>
</ul>
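<p>The <code>prerenderReady</code> lifecycle in the list above can be sketched in an SPA entry point like this. <code>loadPageData</code> and <code>renderMetaTags</code> are hypothetical app functions standing in for whatever async work your routes do, and <code>win</code> stands in for the browser&#8217;s <code>window</code> object:</p>

```javascript
// Sketch of the prerenderReady lifecycle in a SPA entry point.
// loadPageData and renderMetaTags are hypothetical app functions.
const win = typeof window !== "undefined" ? window : globalThis;

win.prerenderReady = false; // set before any async work starts

async function bootstrap(loadPageData, renderMetaTags) {
  const data = await loadPageData(); // resolve async content first
  renderMetaTags(data);              // meta tags must be in the DOM
  win.prerenderReady = true;         // now Prerender may take its snapshot
}
```

<p>The flag must start as <code>false</code> before any rendering begins; flipping it to <code>true</code> too early reintroduces the missing-content problem it exists to solve.</p>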



<h2 class="wp-block-heading">Fix Your Lovable SEO and Cloudflare Integration for Good</h2>



<p>Lovable gets your app built and shipped fast. But without a JavaScript prerendering layer, every page it produces is invisible to search engines, social crawlers, and AI platforms. The traffic, the indexing, the social previews, the citations from AI search tools: none of it happens until bots can read your content. This affects discoverability across every channel, and ultimately, your revenue.</p>



<p>Prerender.io is the lowest-friction way to fix Lovable SEO challenges. You are not rewriting your Lovable app, adding SSR infrastructure, or changing how Lovable builds. A single Cloudflare Worker routes bot traffic through Prerender.io&#8217;s rendering service, and every crawler that visits your site gets fully rendered HTML. Your users never notice a change.</p>



<p>The integration fails in predictable ways when misconfigured. Remove the custom domain from Lovable, deploy the Lovable-specific Worker gist with the correct upstream URL, and disable Cloudflare&#8217;s conflicting performance features. Get those three things right and the rest of the configuration falls into place.</p>



<p>If you are not already using Prerender.io with your Lovable app, <a href="https://prerender.io/pricing" type="link" id="https://prerender.io/pricing">start a free trial</a> and follow the <a href="https://docs.prerender.io/docs/how-to-integrate-prerender-with-lovable-hosted-websites" type="link" id="https://docs.prerender.io/docs/how-to-integrate-prerender-with-lovable-hosted-websites">Lovable integration guide</a> to get your pages indexed and your previews working today.</p>



<p>Other Lovable SEO blogs and guides that may interest you:</p>



<ul class="wp-block-list">
<li><a href="https://prerender.io/blog/how-to-make-lovable-websites-seo-friendly/" type="link" id="https://prerender.io/blog/how-to-make-lovable-websites-seo-friendly/">How to Make Your Lovable Website SEO-Friendly</a></li>



<li><a href="https://prerender.io/prerender-vs-lovablehtml/" type="link" id="https://prerender.io/prerender-vs-lovablehtml/">Prerender.io vs. LovableHTML Comparison</a></li>



<li><a href="https://docs.prerender.io/docs/cdn-issues" type="link" id="https://docs.prerender.io/docs/cdn-issues">CDN Issues and Cache Conflicts</a></li>



<li><a href="https://docs.prerender.io/docs/how-to-test-your-site-after-you-have-successfully-validated-your-prerender-integration" type="link" id="https://docs.prerender.io/docs/how-to-test-your-site-after-you-have-successfully-validated-your-prerender-integration">How to Verify Your Prerender Integration</a></li>
</ul>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>“AI is a Much Bigger Channel Than You Think” &#8211; Ethan Smith from Graphite.io</title>
		<link>https://prerender.io/blog/ai-is-bigger-than-you-think-ethan-smith-graphite/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 18 Mar 2026 12:10:34 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[ai podcasts]]></category>
		<category><![CDATA[seo podcasts]]></category>
		<category><![CDATA[tech podcasts]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7784</guid>

					<description><![CDATA[Ethan Smith, CEO of Graphite, works with some of the largest brands in the world: Adobe, Notion, Upwork, and more. Tune into his SEO expertise on the Get Discovered podcast.]]></description>
										<content:encoded><![CDATA[
<p>Ethan Smith, CEO of Graphite, works with some of the world&#8217;s largest tech organizations: Adobe, Webflow, Notion, and more. He&#8217;s also an adjunct professor, teaching AEO and SEO to other leaders globally. He knows his stuff, and he openly shares his wisdom online with Graphite&#8217;s extensive database of research. He&#8217;s one of the biggest voices in SEO on LinkedIn, and we were lucky enough to have him on the Get Discovered podcast. </p>



<p>In the podcast, he draws on his research with a wake-up call: AI search is much bigger than we think. This, along with other key takeaways explored below, is a core tenet of our conversation. He explains what&#8217;s happening with SEO and AI today, where our conventional wisdom has gone wrong, what’s working and what&#8217;s not, and what teams should be doing right now.</p>



<p>Watch the full episode on YouTube below, or read on for a summary.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why AI is Much Bigger Than We Think: Ethan Smith from Graphite" width="640" height="360" src="https://www.youtube.com/embed/ilZ-EjGSgJQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">Why AI Search is a Bigger Channel Than You Think</h2>



<p>One thing Ethan wants to clear up is that yes, AI search is real, and yes, it&#8217;s big. But most of the numbers you&#8217;ve seen dramatically understate the size.</p>



<p>Virtually every report measuring AI search traffic has looked at web traffic only. The problem? Around <a href="https://graphite.io/five-percent/ai-is-much-bigger-than-you-think" target="_blank">75–86% of AI tool usage occurs within mobile apps</a>, not in web browsers, according to Ethan and his data team at Graphite. When you add that in, the picture changes.</p>



<p>By Ethan&#8217;s analysis, AI search is now <strong>over half the size of traditional web search globally</strong>, and roughly a third the size in the US. This suggests that it&#8217;s a larger channel than most of us realize.</p>



<p>But there&#8217;s a caveat that most people are misunderstanding: this channel size doesn’t mean you have to throw everything out the window and start anew. </p>



<h2 class="wp-block-heading">AEO vs. SEO: Is There Actually a Difference?</h2>



<p>One of the most refreshing things Ethan said in the conversation is something most SEOs already know, even if LinkedIn posts tell us otherwise: <strong>AEO and SEO are largely the same.</strong></p>



<p>The different acronyms are everywhere: <a href="https://prerender.io/blog/seo-vs-aio-vs-geo/" target="_blank" rel="noreferrer noopener">GEO, AIO, AI SEO, or AEO</a>. And with that comes the assumption that everything has changed, too. But for most SEO experts, and Ethan included, AEO and SEO are essentially the same.</p>



<p>His conclusion is that optimizing for LLMs like ChatGPT isn’t fundamentally different from the old era of optimizing for Google. The underlying principles of quality content, authoritative sources, and <a href="https://prerender.io/blog/technical-seo-checklist-for-ecommerce-websites/" target="_blank" rel="noreferrer noopener">strong technical foundations</a> all still apply. (Some aspects of <a href="https://prerender.io/blog/technical-seo-for-ai-search-peter-rota/" target="_blank" rel="noreferrer noopener">technical SEO are even <em>more</em> important</a> than before.)</p>



<p>However, he does identify two fundamental differences between AEO and SEO that you should be paying attention to:</p>



<ol class="wp-block-list">
<li><strong>The long tail keyword is longer.</strong> AI queries are much more conversational and specific than traditional keyword searches. &#8220;Best running shoes&#8221; becomes &#8220;I have a knee issue and need stability in a neutral shoe for road running. What do you recommend?&#8221; The implication for marketers: you need to be thinking about a much wider surface area of prompts.</li>



<li><strong>Off-site signals matter more.</strong> Links have always mattered, but where your brand appears across the web (review sites, forums, help content, comparison pages, and third-party mentions) now feeds directly into whether AI systems surface you as a trusted citation.</li>
</ol>



<p>Everything else? For now, mostly the same.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="&quot;AEO and SEO are largely the same&quot; - Ethan Smith from Graphite" width="540" height="960" src="https://www.youtube.com/embed/FQjC29gUZFo?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">The Three Personas Spreading Bad AEO Advice</h2>



<p>So why is there so much hype around the naming conventions? And why are teams feeling the pressure to throw their old strategies out the window? </p>



<p>Ethan simplifies this into three profiles that are spreading misinformation: </p>



<ol class="wp-block-list">
<li><strong>The VC-funded startup founder.</strong> They need to pitch a massive TAM and maximum disruption. Language like &#8220;SEO is dead, Google is going away, and AEO is completely different&#8221; is used as a fundraising strategy. It doesn&#8217;t necessarily benefit brands like yours.<br></li>



<li><strong>The new entrant consultant.</strong> No SEO background means no baseline credibility, unless you reframe the game entirely. If everything is different, then your years of experience don&#8217;t matter. This is a convenient positioning move on LinkedIn, and it&#8217;s frequently wrong.<br></li>



<li><strong>The misled SEO practitioner.</strong> This one isn&#8217;t malicious; they&#8217;re just repeating things they&#8217;ve read so many times that it feels true. Ethan points to the illusory truth effect: if we hear something often enough, the brain treats it as an established fact. Often, people can&#8217;t help it; it can even be subconscious.</li>
</ol>



<p>Ethan suggests being skeptical of it all, including his own work. Over on <a href="https://graphite.io/five-percent" target="_blank" rel="noreferrer noopener nofollow">Graphite’s Five Percent blog</a>, Ethan and his team publish raw data, link to the sources, and actively invite people to poke holes in their analysis. In his words, it’s a modern-day version of peer-reviewed analysis as found in academia (Ethan is an adjunct professor). He encourages you to partake in this due diligence process, too.</p>



<h2 class="wp-block-heading">AEO Misconceptions That Are Actually Costing You</h2>



<p>Ethan walked through the specific false beliefs he hears most often in conversations with marketing leaders:</p>



<ol class="wp-block-list">
<li><strong>&#8220;Search traffic is declining.&#8221;</strong> He actually <a href="https://graphite.io/five-percent/debunking-the-myth-that-seo-traffic-has-dramatically-declined" target="_blank" rel="noreferrer noopener nofollow">published a study</a> showing this isn&#8217;t true. The reality is that search isn’t going down, but it <em>is</em> reshuffling. The main challenge here is that it’s harder to attribute. Instead, he suggests thinking of AI as an additive channel, not a replacement one.<br></li>



<li><strong>&#8220;I just need to figure out Reddit.&#8221;</strong> This comes from a commonly cited discussion that Reddit dominates AI citations. But, according to Ethan, that stat comes from looking at the <em>top 10 cited domains</em> and making a pie chart. When you look at <em>all</em> citations, he says, Reddit represents around 2.5% of cited domains. It&#8217;s still the single most-cited domain, so yes, you should be thinking about it. (<a href="https://prerender.io/blog/industry-study-how-ai-retrieves-content/" target="_blank" rel="noreferrer noopener">We saw this in our own research, too</a>.) But 95% of citations come from domains outside that top 10, so the real opportunity is much broader.<br></li>



<li><strong>&#8220;AI can&#8217;t read HTML, so you need to convert everything to Markdown.&#8221;</strong> Ethan is direct on this one:<strong> there&#8217;s no evidence for it. </strong>No one from any LLM provider has ever said this, he says, and AI can parse HTML perfectly fine. In fact, many SEO experts, like <a href="https://www.linkedin.com/in/cshel/" target="_blank" rel="noreferrer noopener">Carolyn Shelby</a>, Principal SEO at Yoast, actually recommend serving your content as HTML. (Need help with that? That&#8217;s what we do here at <a href="http://prerender.io" target="_blank" rel="noreferrer noopener">Prerender.io</a>.)<br></li>



<li><strong>&#8220;AEO and SEO are fundamentally different disciplines.&#8221;</strong> They&#8217;re not. As mentioned above, the overlap is significant. While AEO has nuances you should weigh more heavily, such as off-site signals, long-tail prompts, and technical foundations, the two run on the same principles. </li>
</ol>



<h2 class="wp-block-heading">Graphite’s 5% Framework and What Actually Drives Results</h2>



<p>One of Ethan&#8217;s most useful contributions to the conversation is what he calls his <strong>5% framework</strong>, a variation on the 80/20 rule that he developed after auditing hundreds of companies. </p>



<p>The key finding: your top 5% of landing pages drive roughly 87% of your organic traffic. So in his eyes, most SEO work (e.g., Screaming Frog exports, alt tag audits, or elaborate agency slide decks) doesn&#8217;t actually have an impact. The companies that grow are the ones that identify the small number of plays that actually matter and go deep on those.</p>
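<p>As a rough illustration of how you might sanity-check this on your own site, here&#8217;s a minimal sketch (with hypothetical traffic numbers, not Ethan&#8217;s data) that computes what share of total organic traffic your top 5% of landing pages drive:</p>

```python
# Minimal sketch of the "5% framework" check: what share of total organic
# traffic comes from the top 5% of landing pages? Traffic numbers below
# are hypothetical illustration data, not real analytics.
import math

def top_slice_share(page_traffic, slice_pct=0.05):
    """Return the fraction of total traffic driven by the top slice_pct of pages."""
    ordered = sorted(page_traffic, reverse=True)
    k = max(1, math.ceil(len(ordered) * slice_pct))
    return sum(ordered[:k]) / sum(ordered)

# Example: 100 pages with a steep power-law traffic distribution
pages = [10_000 // (rank ** 2) + 1 for rank in range(1, 101)]
share = top_slice_share(pages)
print(f"Top 5% of pages drive {share:.0%} of traffic")
```

<p>On a steep power-law distribution like the sample data above, the top 5% of pages account for close to 90% of traffic, in the same ballpark as Ethan&#8217;s 87% figure; swapping in your own analytics export makes it a quick audit of where your traffic actually concentrates.</p>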



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="The 80/20 rule: why 20% of your work has 80% of the impact - Ethan Smith from Graphite" width="540" height="960" src="https://www.youtube.com/embed/i0-s1_qj4es?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p>For AI discovery specifically, here are Ethan&#8217;s suggestions:</p>



<p><strong>1. Start with prompts, not keywords.</strong> Pull customer questions from your sales team, your support inbox, and your paid search data. These are the prompts your customers are actually typing into ChatGPT. Track them and see which domains get cited for those prompts, and then you can focus your attention. </p>



<p><strong>2. Invest in product-focused content, especially for long tail keywords.</strong> Evaluate your help center content, integration pages, feature documentation, and use case pages. This isn&#8217;t just for your top integrations, but for your 35th most popular integration too. This is where the AI prompts live. The questions people ask aren&#8217;t always &#8220;what is [product]?”, but more like: &#8220;how does [product] work with [specific tool] for [specific use case]?&#8221;</p>



<p><strong>3. Off-site presence beyond Reddit.</strong> Consider investing in videos, affiliates, third-party blogs, and forums that are relevant to your industry. However, this obviously depends on your team size and budget. The right off-site mix depends entirely on which prompts you&#8217;re trying to win and the resources available to you.</p>



<h2 class="wp-block-heading">How to Measure ROI on AI Discovery</h2>



<p>One of the most practical parts of the conversation was around measurement, and specifically how to make the business case to a CMO or CRO who&#8217;s skeptical of a channel with fuzzy attribution. This is a common theme we’re hearing throughout this podcast, like with <a href="https://prerender.io/blog/podcast-noah-greenberg-stacker/" target="_blank" rel="noreferrer noopener">Noah Greenberg from Stacker</a> or <a href="https://prerender.io/blog/podcast-how-toggl-is-adapting-to-the-ai-era/" target="_blank" rel="noreferrer noopener">Elizabeth Thorn from Toggl</a>.</p>



<p>From Ethan’s perspective, here’s how you can measure it: </p>



<ol class="wp-block-list">
<li><strong>Ask &#8220;how did you hear about us?&#8221; at conversion.</strong> It&#8217;s self-reported and imperfect, but it captures a signal that click-tracking never will. (<a href="https://prerender.io/blog/a-podcast-conversation-with-ryan-law-ahrefs/" target="_blank" rel="noreferrer noopener">Ryan Law, Director of Content Marketing at Ahrefs, prioritizes this type of measurement.)</a> Post-conversion surveys can surface AI as a touchpoint in ways that traditional attribution models miss entirely.<br></li>



<li><strong>Track visibility and rank for your priority prompts.</strong> Use a prompt tracking tool for general trend monitoring. For actual experiments, Ethan is quite clear: “You need to manually run the same prompts multiple times while logged in to get the real distribution of responses,” he says. Most tracking tools run queries logged out. This is directionally useful, but it isn’t what your customers are actually seeing.<br></li>



<li><strong>Size up the channel.</strong> Show leadership how large the channel <em>actually</em> is. Look at high-intent, longer, conversational prompts: traffic from AI referrals tends to arrive already primed to buy. This alone is a meaningful commercial argument.</li>
</ol>



<p>Lastly, Ethan explores split testing with the classic Pinterest SEO experimentation model. Start by dividing your target prompts into control and test groups, then establish a baseline, intervene on the test group, and measure the delta. In his opinion, you don&#8217;t need a data science team. Someone willing to copy and paste prompts into ChatGPT every week will do.&nbsp;</p>
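<p>Those steps boil down to a simple difference-in-differences calculation. The sketch below uses made-up citation logs for illustration; in practice, you&#8217;d paste each prompt into ChatGPT several times, logged in, and record by hand whether your domain gets cited:</p>

```python
# Sketch of the split-test Ethan describes: divide target prompts into
# control and test groups, record whether our domain is cited before and
# after intervening on the test group, then compare the changes.
# All citation logs below are hypothetical illustration data.

def citation_rate(runs):
    """Fraction of prompt runs in which our domain was cited (1) vs. not (0)."""
    return sum(runs) / len(runs)

def experiment_delta(control_before, control_after, test_before, test_after):
    """Difference-in-differences: lift in the test group minus lift in control."""
    test_lift = citation_rate(test_after) - citation_rate(test_before)
    control_lift = citation_rate(control_after) - citation_rate(control_before)
    return test_lift - control_lift

# 1 = our domain cited in that run, 0 = not cited
control_before = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
control_after  = [0, 1, 0, 1, 0, 0, 0, 1, 0, 0]  # control barely moves
test_before    = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
test_after     = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # after new content shipped

delta = experiment_delta(control_before, control_after, test_before, test_after)
print(f"Citation lift attributable to the intervention: {delta:+.0%}")  # +50%
```

<p>The control group absorbs background drift in the models themselves, so the remaining delta is a more honest signal of whether your intervention actually moved citations.</p>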



<h2 class="wp-block-heading">How Brands Can Win on Reddit (Without Being Banned)</h2>



<p>In the latter part of the conversation, we asked Ethan how brands can authentically engage on Reddit, but without being banned or perceived as a corporate shill. This is a topical one: every marketer is hearing about the importance of Reddit for citations, and most brands want to be part of this conversation.&nbsp;</p>



<p>In Ethan’s opinion, spammy fake accounts and astroturfing <em>can</em> work on a small scale, but this approach is harmful, unsustainable, and a terrible look for anyone working with companies of real size.&nbsp;</p>



<p>Here’s what he says actually works for a strong, authentic Reddit presence:</p>



<ul class="wp-block-list">
<li><strong>Engage in threads where commenters are already talking about your brand.</strong> Note: he doesn’t mean you should chime in defensively or to reframe the narrative, but to add your perspective honestly. For example, he commented on a thread questioning whether he was overhyped, and he did so authentically without shaming commenters. “The comments were appreciated,” he notes. </li>



<li><strong>Create content that addresses the concerns customers are raising.</strong> For example, he shared how Lululemon responded to Reddit threads about how their pants started pilling by sharing washing guides and care instructions. The Reddit conversation naturally shifted, and people perceived Lululemon more favorably as a result.</li>



<li><strong>Consider Reddit ads on threads where the sentiment is already positive for you.</strong> If a thread organically favors your brand, amplify it with ads.</li>
</ul>



<h2 class="wp-block-heading">What Gets Worse Before It Gets Better</h2>



<p>Each episode, we ask guests for their 12–18-month outlook. Ethan’s perspective was candid: <strong>spam content will get worse before it gets better </strong>(which, interestingly enough, is what Ryan Law from Ahrefs said in his podcast episode, too).</p>



<p>As Ethan explains, Google has spent two decades building spam-detection algorithms; today it&#8217;s essentially a cat-and-mouse game where each new spam tactic is quickly countered. But AI search tools like ChatGPT, Perplexity, and Gemini are years behind that curve, and their spam filters barely exist yet. (This is why you&#8217;ll occasionally see strange, low-quality domains appearing as AI citations.) The spam that stopped working on Google a decade ago is suddenly working again on LLMs.&nbsp;</p>



<p>So, for the next year, Ethan predicts, marketers can expect more listicle abuse, fake affiliate content, and coordinated Reddit manipulation. Who knows what will come after that.</p>



<p>But, on the optimistic side, Ethan is genuinely excited about AI&#8217;s ability to eliminate the parts of marketing and analysis work that are essentially glorified data entry. The kind of research that once took months, such as pulling traffic data across tens of thousands of sites or identifying patterns, now takes hours. <strong>“More time on strategy, less on mechanics,” </strong>he says.&nbsp;</p>



<h2 class="wp-block-heading">Summing Up the Conversation</h2>



<p>At the end of every episode, we ask guests to sum up in a single sentence: if listeners take away one thing from this conversation, what would it be?<br><br>Ethan&#8217;s answer: <strong>most of what you&#8217;ve read about AEO is wrong. Or at least, not validated.</strong> Not maliciously wrong, he clarifies, but wrong all the same, because the field is so young and the incentives to overstate disruption are real.&nbsp;</p>



<p>Instead, he suggests doing your own research, looking at raw data, asking for source links, running small experiments yourself, and checking whether the prompts your customers use actually produce citations or not.&nbsp;</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why most assumptions about AI aren&#039;t true" width="540" height="960" src="https://www.youtube.com/embed/Ku7RMlfVZVU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">About Ethan Smith</h2>



<p>Ethan Smith is the CEO and Founder of <a href="http://graphite.io" target="_blank" rel="noreferrer noopener nofollow">Graphite.io</a>, a growth agency working with companies like Webflow, Notion, and Upwork. He is one of the more rigorous voices in the AEO space, known for publishing primary research with source data and actively inviting peer critique of his findings. Follow him on <a href="https://www.linkedin.com/in/ethanls/" target="_blank" rel="noreferrer noopener nofollow">LinkedIn</a> or explore his research on <a href="https://graphite.io/five-percent" target="_blank" rel="noreferrer noopener nofollow">Graphite’s blog</a>.</p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://rss.com/podcasts/get-discovered-by-prerender-io/?_gl=1%2a8m2fsa%2a_gcl_au%2aMTU0MzUyMDExOC4xNzcyNDU2NjM0" style="border-radius:0px;background-color:#1f8511">Listen to the Conversation</a></div>
</div>



<h2 class="wp-block-heading">About Prerender.io</h2>



<p>Prerender.io is a leading SEO solution that helps modern websites ensure their JavaScript-heavy pages are fully visible to search engines and AI tools. Trusted by companies like Microsoft, Salesforce, and Walmart, Prerender is the go-to partner for businesses navigating the future of SEO and AI-driven discoverability. <a href="https://prerender.io/pricing/" target="_blank" rel="noreferrer noopener">Start for free today.</a></p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://prerender.io/pricing/" style="border-radius:0px;background-color:#1f8511">Try Prerender.io For Free</a></div>
</div>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>&#8220;The Golden Era of Content is Over&#8221; &#8211; A Conversation with Ryan Law from Ahrefs</title>
		<link>https://prerender.io/blog/a-podcast-conversation-with-ryan-law-ahrefs/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 11 Mar 2026 08:48:20 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[ai podcasts]]></category>
		<category><![CDATA[seo podcasts]]></category>
		<category><![CDATA[tech podcasts]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7721</guid>

					<description><![CDATA[Ryan Law, Director of Content at Ahrefs, joins the podcast to share how he's adapting his content strategy to the AI era.]]></description>
										<content:encoded><![CDATA[
<p>Ryan Law is the Director of Content Marketing at Ahrefs, one of the most widely read SEO blogs in the industry. It’s one of the clearest examples of a content empire built on the old model that really worked: high-volume, targeted keywords, and perfect on-page optimization. </p>



<p>But now, Ryan is saying this model no longer works. </p>



<p>In this episode of the <em>Get Discovered</em> podcast, Joe Walsh sits down with Ryan to talk about what&#8217;s breaking for content teams because of AI, why the attribution metrics everyone&#8217;s been reporting on are the wrong ones (and his unique take on this), and what content marketing leaders should refocus on instead. </p>



<p>Watch the full episode below, or keep reading for a summary of the conversation.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why the Golden Era of Content is Over: Ryan Law, Director of Content Marketing at Ahrefs" width="640" height="360" src="https://www.youtube.com/embed/k3KHYsgNYzU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">AI Overviews Are Killing Clicks, Even If You Rank First</h2>



<p>If you&#8217;ve sensed that your content is working harder for less return, the numbers back you up.</p>



<p>Ahrefs&#8217; own research shows that AI Overviews are intercepting clicks that previously went to ranked pages. The search position hasn&#8217;t changed, but now, people aren’t clicking through.&nbsp;</p>



<p>What&#8217;s actually disappearing is the URL click itself. Instead, AI tools are synthesizing the content and answering the question, but aren’t sending anyone to the source directly. You may still influence a purchase through AI tools, Ryan says, and people are likely buying heavily (himself included). But measuring any of these conversions through direct clicks, as we did before, is increasingly impossible.</p>



<p><strong>Note</strong>: despite this, your rankings in search engines are still important. LLMs are increasingly pulling from the <a href="https://prerender.io/blog/why-safe-seo-is-no-longer-enough-yoast/">top five positions</a> in SERPs. </p>



<h2 class="wp-block-heading">The Content That&#8217;s Losing the Most Traffic to AI</h2>



<p>Ryan is unusually well-positioned to understand who suffers most from this shift because Ahrefs is the archetype.</p>



<p><strong>&#8220;Companies that are hardest hit are those that have historically had a ton of search traffic from very informational, top-of-funnel queries.&#8221;</strong></p>



<p>As one of the top industry leaders, Ahrefs has published thousands of pages covering the full landscape of SEO education with near-perfect search optimization. Yet, despite doing everything right the old way, they’re seeing clicks go down every month. If that&#8217;s the situation at one of the most technically capable content operations in the industry, it’s an environmental shift for everyone else, too.</p>



<p>Key takeaway for content leaders: the unit economics of the volume-and-optimization playbook that drove traffic growth for the better part of a decade have fundamentally changed.&nbsp;</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Is AI an existential risk to content marketing? A conversation with Ryan Law from @AhrefsCom" width="540" height="960" src="https://www.youtube.com/embed/p2SxmgEUW3o?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">How Ahrefs Is Adapting Its Content Strategy</h2>



<p>Ahrefs isn&#8217;t abandoning content by any means. But Ryan is shifting the company&#8217;s content marketing strategy.&nbsp;</p>



<p>Here’s what he’s doing instead:</p>



<h3 class="wp-block-heading">1. Research over volume</h3>



<p>Ahrefs now has an in-house, full-time data scientist on the marketing team. The focus is on original data and proprietary insights that can&#8217;t be synthesized by AI because they didn&#8217;t exist anywhere before Ahrefs published them. As we’re hearing throughout the podcast from guests like <a href="https://prerender.io/blog/podcast-noah-greenberg-stacker/" target="_blank" rel="noreferrer noopener">Noah Greenberg</a>, <a href="https://prerender.io/blog/why-safe-seo-is-no-longer-enough-yoast/" target="_blank" rel="noreferrer noopener">Alain Schlesser</a>, and more, <strong>original research is the content that content marketers should double down on</strong>—rather than targeting keywords. Original data is the content being cited by LLMs, media, and other blogs, as it earns presence in a way that a well-optimized explainer article no longer reliably does.</p>



<p><em><strong>Further reading</strong>: <a href="https://prerender.io/blog/industry-study-how-ai-retrieves-content/">what 100M+ pages reveal about how AI systems choose your content.</a></em></p>



<h3 class="wp-block-heading">2. Distribution beyond owned channels</h3>



<p>Another key theme we&#8217;re seeing in the podcast: <strong>third-party presence over owned pages. </strong>Channels like guest posts, partnerships, review sites, creator collaborations, and LinkedIn are where Ahrefs&#8217; team is seeing surprising traction. The logic is that if an AI search tool synthesizes content from across the web, being on <em>your</em> website is less valuable than being on <em>many</em> websites. Third-party presence is worth prioritizing, in some cases even over your own pages. </p>



<h2 class="wp-block-heading">Why Click Attribution Is the Wrong Metric for AI-Era Content</h2>



<p>Another aspect that Ryan focuses on is his unique approach to content attribution&#8230; not worrying too much about it. While this approach isn&#8217;t new for him, it&#8217;s an increasingly common take from others in the industry—and it&#8217;s aging surprisingly well in the AI era. </p>



<p>Ahrefs doesn&#8217;t obsess over attributing specific customers to specific content. They use self-reported attribution (i.e., asking new signups how they found out about the product) and a few directional heuristics. Ryan frames this as a feature rather than a limitation.</p>



<p>In his opinion, there&#8217;s an inverse relationship between how easily a marketing tactic can be attributed and how long it remains useful.</p>



<p>&#8220;If it&#8217;s very easy to say, yes, this is definitely working, then in short order, everyone is going to do that.&#8221;</p>



<p>The channels with the longest shelf life, like sponsorships, events, thought leadership, and brand, are the ones that require faith. You believe they work because the logic holds, not because the dashboard proves it. This has always been true. What’s different now is that the tactics that <em>were</em> easy to attribute (organic search clicks) have now become harder to attribute, which is forcing a reckoning on teams that built their reporting around traffic metrics.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why ChatGPT is an additive source of acquisition with Ryan Law from @AhrefsCom" width="540" height="960" src="https://www.youtube.com/embed/WehaLWkz7Fw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<p>And as <a href="https://prerender.io/blog/why-safe-seo-is-no-longer-enough-yoast/" target="_blank" rel="noreferrer noopener">Alain Schlesser from Yoast</a> discussed in his episode, the fixation on clicks was always a proxy metric. The business goal was never actually to get clicks—it was to drive consideration, intent, and purchase. Perhaps AI has just broken the shortcut measurement of this that has existed all along.</p>



<h2 class="wp-block-heading">Yes, AI Search Still Runs on SEO Fundamentals </h2>



<p>One of the clearest points Ryan makes is about how companies are misframing their AI search strategy. Most businesses are thinking about AI SEO (or <a href="https://prerender.io/blog/seo-vs-aio-vs-geo/" target="_blank" rel="noreferrer noopener">GEO, or AEO—whichever term you prefer</a>) as an entirely separate channel. And while there are aspects of AI search that are certainly different, Ryan argues that this is the wrong approach. At the end of the day, it&#8217;s still very much SEO.</p>



<p><strong>&#8220;I think people are downplaying the impact of traditional search indices on AI search in a very, very big way.&#8221;</strong></p>



<p>When someone runs a query in ChatGPT or Perplexity and gets a web-sourced answer, that answer comes from a search index. The LLM is using RAG (Retrieval-Augmented Generation) to pull real content from the web. The infrastructure that makes AI answers possible is the same infrastructure traditional SEO has always worked with, and <a href="https://prerender.io/blog/technical-seo-for-ai-search-peter-rota/" target="_blank" rel="noreferrer noopener">most SEOs speak openly about this, like Peter Rota on the podcast.</a></p>



<p>This means that, despite Ryan&#8217;s shifting strategy, crucial SEO fundamentals, like building topical authority, creating original content, earning high-quality backlinks, and distributing content across reputable third-party sites, still matter for AI search. The tactics that sound newest (LLMs.txt, content chunking, entity optimization) are real, but they&#8217;re a second-order concern compared to the fundamentals. </p>



<p>This is also why certain technical SEO tools matter more, not less, in this environment. If AI crawlers can&#8217;t render your JavaScript-heavy site, they can&#8217;t index the content. And content that can&#8217;t be indexed can&#8217;t be cited. <a href="https://prerender.io/blog/how-to-optimize-your-website-for-ai-crawlers/" target="_blank" rel="noreferrer noopener">Making your content visible to AI crawlers</a> is the baseline. </p>
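<p>To make that baseline concrete, here&#8217;s an illustrative sketch of the decision dynamic rendering makes: serve a prerendered HTML snapshot to known crawlers and the normal JavaScript app to human visitors. The bot list and function names are simplified assumptions for illustration, not Prerender.io&#8217;s actual implementation:</p>

```python
# Illustrative dynamic-rendering decision: bots get static, fully rendered
# HTML; humans get the JavaScript app shell. The signature list below is a
# small, assumed sample of crawler user-agent substrings.

BOT_SIGNATURES = (
    "googlebot", "bingbot", "gptbot", "perplexitybot",
    "claudebot", "ccbot", "oai-searchbot",
)

def is_crawler(user_agent: str) -> bool:
    """True if the user-agent string matches a known crawler signature."""
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

def choose_response(user_agent: str) -> str:
    """Pick which variant of the page to serve for this request."""
    return "prerendered_html" if is_crawler(user_agent) else "js_app"

print(choose_response("Mozilla/5.0 (compatible; GPTBot/1.0)"))        # prerendered_html
print(choose_response("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"))  # js_app
```

<p>In production this decision typically lives in server or CDN middleware, and the HTML snapshot is kept fresh by a rendering service rather than generated on every request.</p>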



<h2 class="wp-block-heading">The Next 12-18 Months: Why AI Spam Will Be Harder to Filter Out</h2>



<p>In every episode, we ask guests for their prediction of what’s to come over the next year or so.</p>



<p>In Ryan&#8217;s case, he&#8217;s worried about content volume. Specifically, he says that AI-generated content is already high quality at scale, and it will only get better. His recent Claude Code experiments left him convinced that automated, well-researched content is here to stay.</p>



<p>What does this mean for content marketers? A coming flood of content that sounds (even more) human, synthesizes ideas coherently, and occupies the same search positions and AI citation pools that hand-crafted content is competing for. </p>



<p>The people and teams who will navigate this well are those who hold something that can&#8217;t be automated: genuine original data, real institutional knowledge, earned perspectives, and the distribution networks to get those things seen.</p>



<h2 class="wp-block-heading">What He&#8217;s Optimistic About</h2>



<p>Despite everything, Ryan&#8217;s genuinely enthusiastic about what&#8217;s on the other side of this transition.</p>



<p>For him, the most exciting change is what AI enables for individuals. He&#8217;d wanted to learn to code for ten years and couldn&#8217;t break through the learning curve. Over the past few weeks, he rebuilt his personal website from scratch—the first time he&#8217;s ever built exactly what he wanted without being constrained by a CMS.</p>



<p>&#8220;I suddenly feel like I have superpowers.&#8221;</p>



<p>This new ability to build things you couldn&#8217;t before, learn faster, and move with more autonomy is real for content marketers, too. The people who will win are those who stop mourning the playbook and start experimenting with what the new environment makes possible.</p>



<h2 class="wp-block-heading">About Ryan Law</h2>



<p>Ryan Law is the Director of Content Marketing at <a href="https://ahrefs.com/blog/" target="_blank" rel="noreferrer noopener">Ahrefs</a>, one of the most widely read SEO and content marketing blogs in the industry. Previously CMO at Animalz, he grew the blog to over 1 million page views on the strength of opinionated, perspective-driven content. He oversees Ahrefs&#8217; blog, research, and a newsletter with 284,000 subscribers, and published the data showing AI overviews reduce clicks to top-ranking pages by 58%. Connect with Ryan on <a href="https://www.linkedin.com/in/thinkingslow/" target="_blank" rel="noreferrer noopener">LinkedIn</a> or listen to the full conversation.</p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://rss.com/podcasts/get-discovered-by-prerender-io/?_gl=1%2a8m2fsa%2a_gcl_au%2aMTU0MzUyMDExOC4xNzcyNDU2NjM0" style="border-radius:0px;background-color:#1f8511">Listen to the Conversation</a></div>
</div>



<h2 class="wp-block-heading">About Prerender.io</h2>



<p>Prerender.io is a leading SEO solution that helps modern websites ensure their JavaScript-heavy pages are fully visible to search engines and AI tools. Trusted by companies like Microsoft, Salesforce, and Walmart, Prerender is the go-to partner for businesses navigating the future of SEO and AI-driven discoverability. <a href="https://prerender.io/pricing/" target="_blank" rel="noreferrer noopener">Start for free today.</a></p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://prerender.io/pricing/" style="border-radius:0px;background-color:#1f8511">Try Prerender.io For Free</a></div>
</div>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Industry Study: What 100M+ Pages Reveal About How AI Chooses Your Content</title>
		<link>https://prerender.io/blog/industry-study-how-ai-retrieves-content/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 10:08:59 +0000</pubDate>
				<category><![CDATA[AI SEO]]></category>
		<category><![CDATA[Data and Studies]]></category>
		<category><![CDATA[Enterprise SEO]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[enterprise]]></category>
		<category><![CDATA[studies]]></category>
		<category><![CDATA[thought leadership]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7545</guid>

					<description><![CDATA[A study across 100M+ pages to surface AI crawler behavior, citation analysis, and what this means for your marketing.]]></description>
										<content:encoded><![CDATA[
<p>The brands winning in AI search aren&#8217;t always the ones with the best content. <strong>They&#8217;re the ones AI systems can actually read and trust.</strong></p>



<p>That&#8217;s the central finding from a study conducted by Prerender.io and OtterlyAI analyzing 100M+ pages across eight global enterprise brands. This article breaks down what the data shows, helping you pinpoint what content AI systems prefer, seven key findings to include in your content strategy, and the three structural changes that are moving the needle on AI search visibility.</p>



<h2 class="wp-block-heading">The Dataset: Eight Global Brands and 100M+ Pages</h2>



<p>To conduct this study, the Prerender.io team analyzed 100M+ pages across eight anonymized brands in Prerender.io&#8217;s database, pulling the URLs most frequently requested by ChatGPT over Q4 2025. These are real machine-to-site retrieval requests, as opposed to pageviews or rankings, which means every signal in this dataset reflects exactly what AI systems are actively looking for.</p>



<p>The brands were selected to represent a range of industries, company sizes, and geographies, spanning ecommerce, SaaS, automotive, sports, fashion, and government. Each brand averages at least 75M page renders per year, ensuring the dataset is large enough to surface reliable patterns.</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>Brand</strong></td><td><strong>Industry</strong></td><td><strong>Most-Requested Page Types</strong></td></tr><tr><td>Brand 1</td><td>Multinational automotive company based in Europe</td><td>Informational blog posts, homepage</td></tr><tr><td>Brand 2</td><td>Leading jewelry brand headquartered in the US (ecommerce)</td><td>Homepage, FAQs, comparison guides, product blogs</td></tr><tr><td>Brand 3</td><td>International sports organization</td><td>Streaming pages, live event pages</td></tr><tr><td>Brand 4</td><td>Global athletic leisure brand in 50+ countries (ecommerce)</td><td>New collection pages, homepage, FAQs, discount pages</td></tr><tr><td>Brand 5</td><td>American department store (ecommerce)</td><td>Top-selling product pages, homepage</td></tr><tr><td>Brand 6&nbsp;</td><td>Global API/SaaS platform headquartered in the US</td><td>Technical documentation (exclusively)</td></tr><tr><td>Brand 7</td><td>Government / legal entity based in Europe</td><td>Documentation, homepage</td></tr><tr><td>Brand 8</td><td>Global fashion retailer headquartered in Europe (ecommerce)</td><td>Homepages (US + international), store locator, product pages</td></tr></tbody></table></figure>



<p>The OtterlyAI team then supplemented this retrieval analysis with citation data, examining how these brands actually appear in AI-generated answers across ChatGPT, Perplexity, and Google AI Overviews. Seven findings emerged. Here&#8217;s what they show.</p>



<h2 class="wp-block-heading">Key Finding #1: AI Systems Reward Easily Answerable Pages</h2>



<p>This is the clearest finding from the study, and it holds across all eight brands. The most-retrieved URLs fall into a surprisingly narrow set of content types:</p>



<ul class="wp-block-list">
<li><strong>Evergreen editorial content and buying guides</strong></li>



<li><strong>FAQs and comparison pages</strong></li>



<li><strong>Technical documentation and API references</strong></li>



<li><strong>Category and collection pages</strong></li>
</ul>



<p>And these pages share a common characteristic. <strong>They answer a specific &#8220;what,&#8221; &#8220;how,&#8221; or &#8220;why&#8221; question without requiring the reader, or the AI crawler, to navigate anywhere else.</strong></p>



<p>This is a significant reframe for marketing teams accustomed to thinking about content in terms of the funnel or site hierarchy. AI visibility is not about where a page sits in your navigation.<strong> Instead, it&#8217;s about whether the page can reliably serve as an answer on its own. </strong>Whether it&#8217;s a blog article answering a specific question, a comparison page performing a this-vs.-that analysis, or a product category page, the pages that AI systems return to repeatedly are the ones that deliver complete, clearly structured answers. This also explains why AI systems seem to prefer knowledge base and help center content: each page typically answers a single question.</p>



<p><strong>Key takeaway: </strong>write your content in a clear, question-answer format, focus on optimizing your knowledge base documentation, and ensure pages answer a specific “what,” “how,” or “why” question. </p>
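<p>One way to reinforce that question-and-answer structure in machine-readable form is FAQPage structured data from schema.org. Whether any given AI system consumes this markup isn&#8217;t publicly documented, so treat this as a standard SEO practice rather than a guaranteed AI-visibility lever; the question and answer text below are placeholders:</p>

<pre class="wp-block-code"><code>&lt;script type="application/ld+json"&gt;
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is dynamic rendering?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Serving crawlers a fully rendered HTML snapshot of a JavaScript-heavy page."
    }
  }]
}
&lt;/script&gt;</code></pre>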



<h2 class="wp-block-heading">Key Finding #2: Address Questions Directly, Especially for Ecommerce Brands</h2>



<p>This analysis also surfaced differences in how AI crawlers and humans navigate pages—especially for ecommerce. For example, when humans shop, they linger on product detail pages (PDPs). But when AI systems research products on ecommerce sites, they behave differently.</p>



<p>Instead, AI systems consistently prefer:</p>



<ul class="wp-block-list">
<li><strong>Category and collection pages to answer <em>&#8220;what are the best options?&#8221;</em></strong></li>



<li><strong>FAQ and sizing guides to answer <em>&#8220;what should I get?&#8221;</em></strong></li>



<li><strong>Store locators and availability pages to answer <em>&#8220;where can I buy this?&#8221;</em></strong></li>
</ul>



<p>Individual product pages still matter, but far less than traditional SEO priorities suggest. The AI-driven shopping journey starts earlier and at a higher level of abstraction. Users receive AI-generated summaries before they ever visit a brand&#8217;s site, and those summaries are built from category-level and editorial content, not PDPs.</p>



<p>This means the pages most critical to AI visibility are often the ones least optimized for it. Category pages, FAQs, and buying guides tend to be JavaScript-heavy, dynamically loaded, and lower on the priority list for technical maintenance, precisely where AI systems are most likely to encounter incomplete or missing content.</p>



<p><strong>Key takeaway</strong>: prioritize your category pages, FAQs, and buying guides for both content completeness and technical rendering. These are the pages AI systems are currently using to build shopping summaries. Focus your efforts here, rather than on PDPs alone.</p>



<p><strong><em>Further reading: </em></strong><a href="https://prerender.io/blog/ai-indexing-benchmark-for-ecommerce/"><em>AI Indexing Benchmark Report for Ecommerce</em></a></p>



<h2 class="wp-block-heading">Key Finding #3: Third-Party Sites Matter For Citations&nbsp;</h2>



<p>This is the finding that should concern brand and growth leaders most. </p>



<p>OtterlyAI examined over 1,300 citations across 25 test queries for one of the world&#8217;s top athletic brands. Here&#8217;s the breakdown of the share of citations per source:</p>



<ul class="wp-block-list">
<li><strong>Independent review sites: 8.6%</strong></li>



<li><strong>Reddit: 7.2%</strong></li>



<li><strong>YouTube: 4.6%</strong></li>



<li><strong>Brand’s own domain: 4.2%</strong></li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://prerender.io/wp-content/uploads/image-149-1024x576.png" alt="" class="wp-image-7620" srcset="https://prerender.io/wp-content/uploads/image-149-1024x576.png 1024w, https://prerender.io/wp-content/uploads/image-149-300x169.png 300w, https://prerender.io/wp-content/uploads/image-149-768x432.png 768w, https://prerender.io/wp-content/uploads/image-149-1536x864.png 1536w, https://prerender.io/wp-content/uploads/image-149.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Evidently, the brand&#8217;s own domain carries less weight than third-party sources like independent review sites, Reddit, and YouTube. And this isn&#8217;t an isolated case. Broader OtterlyAI research covering 1M+ citations on ChatGPT, Perplexity, and Google AI Overviews confirms the same pattern across industries: community platforms and third-party sources capture the majority of AI citations, regardless of how strong the brand&#8217;s owned content is.</p>



<p>AI systems are designed this way. They aim to sound confident in their answers, and often <a href="https://prerender.io/blog/podcast-how-toggl-is-adapting-to-the-ai-era/">prioritize third-party validation and community consensus over brand-owned marketing content</a>. That&#8217;s a structural shift introduced by LLMs, and your strategy has to account for it.</p>



<p><strong>Key takeaway:</strong> Build a deliberate third-party presence strategy. Identify the review sites, forums, and media outlets AI systems are already citing in your category, and prioritize earning coverage there.</p>



<h2 class="wp-block-heading">Key Finding #4: Technical Accessibility Is Crucial for AI Visibility</h2>



<p>Across this dataset, one issue emerged as the most consistent barrier to AI visibility. Brands were inadvertently blocking AI crawlers with configurations originally written for traditional search engines.</p>



<p><strong>When AI crawlers can&#8217;t access your site, you&#8217;re not partially visible to AI systems—you&#8217;re completely invisible. </strong>This means no citations, no mentions, and no influence on AI-generated answers. This issue is more common now because AI crawlers are newer and less familiar to infrastructure teams than Googlebot, and they&#8217;re frequently caught by security policies that predate their existence.</p>



<p>Before any content or strategy work, confirm that your site is actually accessible to major AI crawlers. It&#8217;s the single highest-leverage action available to brands that have invested in content and are seeing no AI visibility, and it requires a conversation with your technical team, not a content brief.</p>
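<p>A quick place to start is your robots.txt file. The sketch below explicitly allows several widely used AI crawlers; the user-agent list is illustrative, so verify current bot names against each provider&#8217;s documentation:</p>

<pre class="wp-block-code"><code># Explicitly allow common AI crawlers
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /</code></pre>

<p>Keep in mind that robots.txt is only one layer: CDN and WAF bot-protection rules can still block these crawlers even when robots.txt allows them.</p>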



<p>For JavaScript-heavy sites specifically, the gap between what a human sees and what an <a href="https://prerender.io/blog/understanding-web-crawlers-traditional-ai">AI crawler can read</a> is often significant. Server-side rendering or prerendering (serving AI crawlers a fully rendered, stable version of each page) is often the most reliable fix.</p>
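<p>At the web-server level, this pattern is typically implemented by detecting crawler user agents and proxying those requests to a rendering service. Here&#8217;s a minimal nginx sketch; the user-agent list and rendering endpoint are illustrative, so follow your rendering provider&#8217;s integration guide for production:</p>

<pre class="wp-block-code"><code># Flag requests from known crawlers by user agent
map $http_user_agent $is_crawler {
    default           0;
    "~*GPTBot"        1;
    "~*PerplexityBot" 1;
    "~*ClaudeBot"     1;
    "~*Googlebot"     1;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Crawlers get a fully rendered snapshot; humans get the normal app
        if ($is_crawler) {
            proxy_pass https://render-service.example.com;
        }
        try_files $uri $uri/ /index.html;
    }
}</code></pre>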



<p><strong>Key takeaway: </strong>have your technical SEO team evaluate your technical performance, particularly your JavaScript rendering and robots.txt.</p>



<p><em><strong>Further reading:</strong> <a href="https://prerender.io/blog/seo-audit-vs-geo-audit-for-site-health/">How to Conduct a GEO Audit on your Site</a></em></p>



<h2 class="wp-block-heading">Key Finding #5: Homepages Are Gateways, Not Destinations</h2>



<p>Homepages appear frequently among the top-requested URLs, and sometimes as the single most-retrieved page. But the retrieval pattern tells a specific story.</p>



<p>AI systems hit the homepage and then fan out immediately. The homepage appears to serve a defined function: confirming brand identity, establishing authority, and acting as a starting point for deeper retrieval. However, the sharp drop-off in retrieval volume after the homepage means that what happens next determines the majority of your AI visibility.<strong> A strong homepage paired with inaccessible supporting content is nearly as limiting as no homepage presence at all.</strong></p>



<p><strong>Key takeaway: </strong>treat your homepage as a crawl entry point, not a destination. The real AI visibility work happens on the supporting pages it leads to.</p>



<h2 class="wp-block-heading">Key Finding #6: You Need a Different AI Search Strategy for Each AI Platform</h2>



<p>Not all AI platforms behave the same way, and this has direct implications for where marketing teams focus their energy.&nbsp;</p>



<p>Proprietary data from OtterlyAI shows the following:</p>



<figure class="wp-block-table"><table class="has-fixed-layout"><tbody><tr><td><strong>AI Platform</strong></td><td><strong>Citation Rate</strong></td></tr><tr><td>Perplexity</td><td>97% of responses include a citation</td></tr><tr><td>Google AI Overviews</td><td>34% of responses include a citation&nbsp;</td></tr><tr><td>ChatGPT</td><td>16% of responses include a citation&nbsp;</td></tr></tbody></table></figure>



<p>Perplexity&#8217;s near-universal citation rate means almost every answer includes sources, so you have the best odds of being cited there. For a Perplexity-focused strategy, comprehensive, well-structured content across all of your detail pages has a real chance of appearing.</p>



<p>The other platforms behave differently: Google&#8217;s AI Overviews has a 34% citation rate, whereas ChatGPT sits at just 16%. This means you may need to adapt your AI search strategy per platform. For AI Overviews, focus on cornerstone content and domain authority; for ChatGPT, prioritize page speed, since it won&#8217;t wait around for slow-loading pages.</p>



<p><strong>A single &#8220;AI search strategy&#8221; will underperform, and it may need to change every few months. </strong>A brand might have strong Perplexity visibility today and near-zero ChatGPT presence tomorrow—not because of content differences, but because each platform has different technical and editorial preferences.</p>



<p><strong>Key takeaway: monitor AI visibility by platform, not in aggregate.</strong> A blended view of &#8220;AI traffic&#8221; will mask where you&#8217;re winning and where you&#8217;re invisible. You can use AI search analytics tools to do this.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://prerender.io/wp-content/uploads/image-152-1024x576.png" alt="" class="wp-image-7623" srcset="https://prerender.io/wp-content/uploads/image-152-1024x576.png 1024w, https://prerender.io/wp-content/uploads/image-152-300x169.png 300w, https://prerender.io/wp-content/uploads/image-152-768x432.png 768w, https://prerender.io/wp-content/uploads/image-152-1536x864.png 1536w, https://prerender.io/wp-content/uploads/image-152.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Key Finding #7: International Brands Can Experience Greater Visibility Issues</h2>



<p>For brands operating across markets, AI systems don&#8217;t default to your primary locale. They actively retrieve language-specific URLs, country variants, and localized pages, and appear to match content language directly to the user&#8217;s query language.</p>



<p>A French user asking about your product in French will receive an answer drawn from your French-language content, not an English page.<strong> If your localized pages are incomplete, slow to load, or invisible to AI crawlers, that market segment is effectively unserved by AI search</strong>—regardless of how strong your primary-market presence is.</p>
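<p>One concrete safeguard is making sure every localized variant declares its alternates with hreflang annotations, so crawlers can match the right language version to the query. A sketch using a hypothetical example.com domain:</p>

<pre class="wp-block-code"><code>&lt;link rel="alternate" hreflang="en-US" href="https://example.com/en-us/product/" /&gt;
&lt;link rel="alternate" hreflang="fr-FR" href="https://example.com/fr-fr/produit/" /&gt;
&lt;link rel="alternate" hreflang="x-default" href="https://example.com/" /&gt;</code></pre>

<p>hreflang won&#8217;t fix slow or partially rendered localized pages, but once those pages are healthy, it helps crawlers surface the correct variant for each market.</p>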



<p><strong>Key takeaway:</strong> run an AI crawler accessibility check on your localized pages. Incomplete or slow-loading country variants may be entirely invisible to AI systems.</p>



<h2 class="wp-block-heading">What Does This Mean for Revenue? A Note on Attribution</h2>



<p>The reality is that clean attribution between AI visibility and revenue doesn&#8217;t quite exist yet. For now, AI systems remain a black box for most analytics stacks, and click-through rates on AI answer engines are significantly lower than traditional search.</p>



<p>That said, the directional evidence is building. <a href="https://www.bain.com/about/media-center/press-releases/20252/consumer-reliance-on-ai-search-results-signals-new-era-of-marketing--bain--company-about-80-of-search-users-rely-on-ai-summaries-at-least-40-of-the-time-on-traditional-search-engines-about-60-of-searches-now-end-without-the-user-progressing-to-a/">Bain &amp; Company&#8217;s February 2025 research</a> found that 80% of consumers now rely on AI-written results for at least 40% of their searches. A <a href="https://www.bain.com/about/media-center/press-releases/20252/agentic-ai-poised-to-disrupt-retail-even-with-50-of-consumers-cautious-of-fully-autonomous-purchasesbain--company/">separate Bain report from November 2025</a> found that 30–45% of US consumers already use AI specifically for product research and comparison, and that AI now accounts for up to 25% of referral traffic for some retailers. The research and purchase consideration stages are already happening inside AI platforms before buyers ever reach a brand&#8217;s site.</p>



<p>And in this specific dataset, the directional data continues: one ecommerce brand saw AI crawler requests more than double from Q1 to Q3 2025. Over the same period, publicly reported earnings increased by 109.7%, accounting for nearly $80M in additional revenue. We&#8217;re not claiming causation, but the parallel is consistent with a model where AI visibility amplifies demand rather than directly creating it.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://prerender.io/wp-content/uploads/image-151-1024x576.png" alt="" class="wp-image-7622" srcset="https://prerender.io/wp-content/uploads/image-151-1024x576.png 1024w, https://prerender.io/wp-content/uploads/image-151-300x169.png 300w, https://prerender.io/wp-content/uploads/image-151-768x432.png 768w, https://prerender.io/wp-content/uploads/image-151-1536x864.png 1536w, https://prerender.io/wp-content/uploads/image-151.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p><em>Data source: <a href="http://prerender.io">Prerender.io</a> dashboard of a key enterprise client.&nbsp;</em></p>



<p>The more useful frame: think of AI visibility as revenue protection and a growing acquisition channel, even if it’s not entirely attributable yet. If AI systems retrieve your content incorrectly, partially, or not at all, your brand is excluded from the consideration set before the funnel begins. Buyers arrive at competitors already shaped by AI summaries your brand had no part in.</p>



<h2 class="wp-block-heading">A Measurement Framework</h2>



<p>Given the current attribution gap, here&#8217;s a framework for how you can track AI visibility:</p>



<h3 class="wp-block-heading">1. Brand share of voice across AI platforms.</h3>



<p>Track how often your brand is mentioned across ChatGPT, Perplexity, Gemini, and Copilot relative to competitors. This upstream signal predicts downstream revenue influence before it shows up in your analytics. OtterlyAI is the most practical tool we&#8217;ve seen for making this measurable.</p>



<h3 class="wp-block-heading">2. Citation monitoring on owned pages. </h3>



<p>This tells you whether your content is being accurately represented. Being cited with incorrect pricing or outdated product details is actively damaging at the moment decisions are being formed. Use Screaming Frog to surface a full URL list, then audit your presence on third-party platforms like G2, Capterra, and Trustpilot.</p>



<h3 class="wp-block-heading">3. AI crawler behavior. </h3>



<p>Measuring AI crawler activity shows whether your site is being actively retrieved by AI systems at all. Growing retrieval volume—like the doubling observed in our ecommerce case—is a directional signal worth tracking alongside business outcomes. You can use a solution like Prerender.io to provide visibility into your AI bot behavior, particularly in comparison to search or social crawlers, and identify whether pages are being served to AI crawlers in a fully rendered state.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="380" src="https://prerender.io/wp-content/uploads/prerender-dashboard-ai-crawler-behavior-1024x380.png" alt="" class="wp-image-7644" srcset="https://prerender.io/wp-content/uploads/prerender-dashboard-ai-crawler-behavior-1024x380.png 1024w, https://prerender.io/wp-content/uploads/prerender-dashboard-ai-crawler-behavior-300x111.png 300w, https://prerender.io/wp-content/uploads/prerender-dashboard-ai-crawler-behavior-768x285.png 768w, https://prerender.io/wp-content/uploads/prerender-dashboard-ai-crawler-behavior-1536x570.png 1536w, https://prerender.io/wp-content/uploads/prerender-dashboard-ai-crawler-behavior-2048x760.png 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p><em>Prerender.io dashboard of an anonymized client</em></p>



<h3 class="wp-block-heading">4. Branded and direct traffic. </h3>



<p>This captures the indirect effect. Users who encounter your brand in AI-generated answers often don&#8217;t click through immediately—they find you later via direct or branded search. Correlating share of voice trends against branded traffic growth gives you a reasonable proxy for AI-driven influence, even without direct attribution.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://prerender.io/wp-content/uploads/image-150-1024x576.png" alt="" class="wp-image-7621" srcset="https://prerender.io/wp-content/uploads/image-150-1024x576.png 1024w, https://prerender.io/wp-content/uploads/image-150-300x169.png 300w, https://prerender.io/wp-content/uploads/image-150-768x432.png 768w, https://prerender.io/wp-content/uploads/image-150-1536x864.png 1536w, https://prerender.io/wp-content/uploads/image-150.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Summing Up: Three Things That Impact Your Presence in AI Search&nbsp;</h2>



<h3 class="wp-block-heading">1. Technical retrievability is a must-have, not a nice-to-have. </h3>



<p>AI systems can only work with content they can access and read completely. For JavaScript-heavy sites, serving a fully rendered, stable version of each page to AI crawlers isn&#8217;t optional. It&#8217;s table stakes for everything else.</p>



<h3 class="wp-block-heading">2. Content needs to answer, not just attract. </h3>



<p>The pages AI systems return to are self-contained, clearly structured, and genuinely useful. If a page can&#8217;t serve as a complete answer to a specific question on its own, it&#8217;s a weak candidate for AI retrieval.</p>



<h3 class="wp-block-heading">3. Third-party presence isn&#8217;t optional.</h3>



<p>The majority of AI citations come from outside your owned properties. Brands that earn coverage on the review sites, community platforms, and media outlets that AI systems trust will capture a disproportionate share of citations—regardless of how strong their own content is. Mentions you don&#8217;t control still shape buying decisions, so make third-party presence a core foundation of your marketing strategy.</p>



<h2 class="wp-block-heading">FAQs</h2>



<h3 class="wp-block-heading">What types of pages does ChatGPT retrieve most often? </h3>



<p>Self-contained pages that answer a specific question without requiring additional navigation: FAQs, buying guides, technical documentation, category pages, and evergreen editorial content.</p>



<h3 class="wp-block-heading">Why is Reddit cited more than brand websites in AI search? </h3>



<p>AI systems are built to prioritize third-party validation and community consensus. Reddit provides peer-generated, question-and-answer content that maps directly onto how AI systems construct responses.</p>



<h3 class="wp-block-heading">Does page speed affect AI search visibility? </h3>



<p>Yes, particularly for ChatGPT. Faster-loading pages are more likely to be included when an AI system has limited time to retrieve and render content.</p>



<h3 class="wp-block-heading">What&#8217;s the biggest technical barrier to AI visibility? </h3>



<p>There are two big ones: bot-blocking configurations and JavaScript rendering. Many enterprise sites inadvertently block AI crawlers with security rules originally written for traditional search engines, resulting in complete invisibility. And because most AI crawlers don&#8217;t execute JavaScript, JavaScript-heavy pages can appear empty to them even when access is allowed. Server-side rendering or prerendering with <a href="http://prerender.io">Prerender.io</a> can solve the latter.</p>



<h3 class="wp-block-heading">How do I track whether AI systems are citing my brand? </h3>



<p>Monitor brand share of voice using an AI search analytics tool like <a href="https://otterly.ai/">OtterlyAI</a>. For citation accuracy, combine a Screaming Frog URL crawl with manual audits of third-party review platforms.</p>



<h3 class="wp-block-heading">Does strong SEO automatically translate to strong AI search visibility? </h3>



<p>In many ways, yes, but this isn&#8217;t guaranteed. The brands winning in AI search aren&#8217;t always the ones with the strongest content, which is a core ranking factor for traditional SEO. They&#8217;re the ones whose pages are technically accessible, structured to serve as direct answers, and with a strong third-party presence.</p>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why &#8220;Safe&#8221; SEO Is No Longer Enough: How Yoast is Rethinking SEO in the Age of AI</title>
		<link>https://prerender.io/blog/why-safe-seo-is-no-longer-enough-yoast/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 08:51:43 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[ai podcasts]]></category>
		<category><![CDATA[seo podcasts]]></category>
		<category><![CDATA[tech podcasts]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7708</guid>

					<description><![CDATA[Tune into a conversation with Alain Schlesser, Principal Architect at Yoast, on the Get Discovered podcast. ]]></description>
										<content:encoded><![CDATA[
<p>When the company behind the world&#8217;s most widely used SEO plugin fundamentally changes what it&#8217;s optimizing for, that&#8217;s worth paying attention to. In this episode of <em>Get Discovered</em>, host Joe Walsh sat down with Alain Schlesser, Principal Architect at Yoast—the plugin powering over 13 million WordPress websites—to unpack what Yoast&#8217;s strategic pivot toward AI visibility tracking reveals about where SEO is heading.</p>



<p>Watch the full episode below or read on for key takeaways.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="How Yoast Is Rethinking SEO for the Age of AI: Alain Schlesser, Principal Architect at Yoast" width="640" height="360" src="https://www.youtube.com/embed/hJ6q_ZrcVho?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">How AI Has Become a Gatekeeper Between Users and Search Engines</h2>



<p>Schlesser begins the conversation by explaining that AI is a new layer we have to think about. </p>



<p>When a user types a question into Perplexity or ChatGPT, they&#8217;re not submitting a search query the way they used to. They&#8217;re asking a large language model a question. If that LLM determines the question requires grounding in real-world data, it then generates its own search queries and runs them against whatever search engine it&#8217;s connected to, whether that&#8217;s Google or a proprietary index, all of which is entirely behind the scenes.</p>



<p><strong>&#8220;It&#8217;s not enough to appear in the search engines anymore,&#8221; </strong>Schlesser explained. <strong>&#8220;You also need to go through the filter that is the AI system and be relevant for the AI system.&#8221;</strong></p>



<p>This has two major implications. First, brands now need to optimize for a layer they can&#8217;t directly observe or understand. Second, the queries themselves have changed: they&#8217;re no longer human-written keyword strings. Understanding what queries an LLM generates on a user&#8217;s behalf requires a fundamentally different kind of visibility.</p>



<p><em><strong>Further reading:</strong> <a href="https://prerender.io/blog/industry-study-how-ai-retrieves-content/">Industry Study: What 100M+ Pages Reveal About How AI Chooses Your Content</a></em></p>



<h2 class="wp-block-heading">Why AI Search Fragmentation Has Made SEO Harder Than the Google Era</h2>



<p>For years, SEO practitioners had an unusual luxury: a near-monopoly made their jobs simpler. Google was the one search engine to optimize for, with one set of rules and one analytics framework.</p>



<p>&#8220;While everyone cursed the fact that this was a monopoly, it actually made it very straightforward to optimize for,&#8221; Schlesser noted.</p>



<p>But things are different now, and unfortunately, that clarity is gone. Every AI answer engine now uses a different combination of search backends, and each is powered by an LLM trained on different data. A brand&#8217;s visibility can vary dramatically across ChatGPT, Perplexity, and Gemini. This isn&#8217;t because of anything the brand did differently, but because of how those underlying systems were built.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="How AI is different from regular SEO" width="640" height="360" src="https://www.youtube.com/embed/2etBslPa_ZQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">LLMs Don&#8217;t Scroll to Page Two: Why Being in the Top Five Rankings Means Everything</h2>



<p>Another point Schlesser emphasizes is the importance of ranking in the top five results. That&#8217;s because AI systems don&#8217;t paginate.</p>



<p>&#8220;When an LLM runs a search query to gather grounding data, it doesn&#8217;t scroll through results the way a human might. It takes the top three to five results… and nothing else,&#8221; says Schlesser. <strong>&#8220;Either you appear in the first few spots, or you don&#8217;t exist.&#8221; </strong></p>



<p>This means the classic concept of &#8220;above the fold&#8221; on a SERP isn&#8217;t just about user click behavior anymore; it&#8217;s baked into how AI systems process information at a technical level. Results from page two onwards are simply invisible to LLMs.</p>



<p>Rather than diminishing the importance of traditional SEO, this actually raises the stakes. Technical SEO fundamentals like indexability, crawlability, site speed, and structured content matter more now. The threshold for existing in AI search has gotten higher, not lower.</p>



<h2 class="wp-block-heading">The LLM Training Data Problem: Why Being &#8220;Average&#8221; Makes You Invisible</h2>



<p>Beyond the search layer, there&#8217;s a deeper challenge in how LLMs represent brands in their training data. This has to do with statistical compression, says Schlesser.</p>



<p>When an LLM is trained, he explains, it doesn&#8217;t retain every data point it encounters. It compresses its understanding into a statistical model that preserves strong signals and discards noise. In practice, this means the model keeps outliers—the cheapest brand, the most premium brand, or the most sustainability-focused brand—and a handful of examples that represent a meaningful average. Everything else gets compressed away.</p>



<p><strong>&#8220;If you just go with the defaults because it&#8217;s safe, it will actually make it very likely for you to never appear in the consciousness of those LLM systems,&#8221; </strong>Schlesser said. <strong>&#8220;Safe will not be enough.&#8221;</strong></p>



<p>In traditional search, a strategy of being safe, consistent, and broadly present could accumulate enough volume to be effective. In an AI-driven world, that strategy actively works against you. Brands that don&#8217;t stand for something specific, such as original data or a clear, distinct angle or perspective, risk being statistically invisible in the training data that powers the systems where discovery increasingly happens.</p>



<p>The practical upshot: brands need a genuine point of view. Not just good content, but distinctive content that positions them at one end of a meaningful spectrum.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Why AI visibility is crucial, and why search engines aren&#039;t enough on their own" width="640" height="360" src="https://www.youtube.com/embed/G10mmlNG124?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">Should You Block AI Crawlers? Why That Strategy Is Likely to Backfire</h2>



<p>Some companies have started blocking AI crawlers like GPTBot. It&#8217;s an understandable impulse: bots consume bandwidth and raise server costs, and there are legitimate questions about how that data is being used.</p>



<p>But Schlesser is blunt about what blocking AI crawlers actually means for long-term discoverability: <strong>&#8220;It just means that you don&#8217;t exist.&#8221;</strong></p>



<p>The analogy he draws is instructive: you wouldn&#8217;t have blocked Googlebot in 2005 just because it was new and unfamiliar. Blocking AI crawlers now is the same category of mistake. You’re opting yourself out of a discovery channel that will only grow in importance.</p>
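<p>For context, the blanket block Schlesser warns against usually takes the form of <code>robots.txt</code> rules like the ones below. GPTBot and ClaudeBot are published AI crawler user agents; the directives are standard robots.txt syntax:</p>

```text
# Blanket block: opts your site out of AI-driven discovery entirely
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

<p>A narrower approach is to disallow only heavy, low-value paths (internal search results, faceted navigation) while leaving core content crawlable.</p>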



<p>For brands concerned about server load and bandwidth costs, there are better approaches than blanket blocking. Yoast has been working on exactly this kind of infrastructure. They&#8217;ve added support for llms.txt—a lightweight file that enables quick, efficient content discovery by AI systems—and recently entered into a partnership with Microsoft around the NL Web Protocol, a new standard similar to sitemaps that lets AI systems get a structured overview of an entire website through a single efficient endpoint.</p>
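<p>As a rough sketch of what the first approach looks like: under the llms.txt proposal, the file sits at your site root as plain Markdown, with a title, a short summary, and curated links. The company name and URLs here are placeholders:</p>

```text
# Example Company

> One-paragraph summary of what the site offers, written for AI systems.

## Documentation

- [Getting started](https://example.com/docs/start): setup and installation
- [API reference](https://example.com/docs/api): endpoints and authentication

## Blog

- [AI visibility guide](https://example.com/blog/ai-visibility): key concepts
```

<p>Because it&#8217;s a single small file of curated links, an AI system can get an overview of your site in one request instead of crawling it page by page.</p>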



<p>The goal isn&#8217;t to make your site accessible to every bot indiscriminately. It&#8217;s to be discoverable by the AI systems your future customers are using, while managing bandwidth intelligently.</p>



<p><strong><em>Further reading: </em></strong><a href="https://prerender.io/blog/technical-seo-for-ai-search-peter-rota/"><em>Peter Rota on why technical SEO is more important than ever.</em></a></p>



<h2 class="wp-block-heading">Why Traffic Is the Wrong Thing to Optimize For</h2>



<p>One of the more grounding moments in the conversation came when Schlesser talked about what actually happens when customers come to Yoast asking for help.</p>



<p>&#8220;Customers usually come with the wrong goals in mind,&#8221; he said. &#8220;Often it&#8217;s: I want to increase traffic to my website. Why? What is the business goal?&#8221;</p>



<p>More traffic, he pointed out, means higher server costs. That&#8217;s the only guaranteed result of more traffic. Unpacking the <em>why</em> behind the traffic goal, whether it’s more orders or more brand awareness, often reveals that raw traffic isn&#8217;t the right metric to optimize for at all.</p>



<p>This matters especially now, as AI reshapes the relationship between content and conversion. As more transactions move through automated systems and AI agents rather than human-driven browsing sessions, the question of whether a website is designed for human visitors becomes genuinely complicated. Schlesser&#8217;s advice: start measuring bot-driven transactions now, even if they&#8217;re a small fraction of your volume. You don&#8217;t want to miss the inflection point where automated traffic starts to exceed human traffic for your business.</p>



<h2 class="wp-block-heading">Looking Ahead Over the Next 12–18 Months</h2>



<p>Asked about the near-term outlook for businesses navigating AI-driven discovery, Schlesser didn&#8217;t sugarcoat it.</p>



<p>&#8220;A lot of things will get worse before they get better,&#8221; he said. &#8220;But it&#8217;s not something anyone can stop.&#8221;</p>



<p>His framing: imagine rising sea levels. Businesses that stay above the waterline now will be lifted as the tide rises. Businesses that wait too long to adapt will find it exponentially harder to recover.</p>



<p>For slower-moving organizations, the most urgent first step isn&#8217;t to build a comprehensive AI visibility strategy overnight. It&#8217;s to establish your own real-time metrics so you can see what&#8217;s actually happening in your specific industry or for your specific audience, rather than waiting for industry reports or analyst coverage that will always lag the reality on the ground.</p>



<p>&#8220;You want to have your own metrics that you&#8217;ve seen in real time, that you want to make decisions on,&#8221; Schlesser said. &#8220;Not wait for someone else to write a book on it.&#8221;</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="How to show up in AI search: focus on unique content and perspectives" width="640" height="360" src="https://www.youtube.com/embed/CM5s6UAJXs8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">Key Takeaways</h2>



<ul class="wp-block-list">
<li><strong>AI is now a filter in front of search, not a replacement for it. </strong>Traditional SEO matters more than ever, but you need to optimize for the AI layer on top of it, not just the search index beneath it.</li>



<li><strong>LLMs only see the top 3–5 search results. </strong>There is no page two. Being outside the top results means not existing for AI-powered answer engines.</li>



<li><strong>Being safe with your content isn’t enough. </strong>Distinctive brands survive statistical compression; average brands don&#8217;t. Playing it safe and broadly present is no longer a viable strategy in an era where LLM training data compresses away everything that isn&#8217;t a clear outlier.</li>



<li><strong>Blocking AI crawlers is a short-term decision with long-term costs.</strong> Better infrastructure solutions exist, such as llms.txt and the NL Web Protocol, that let you manage bot traffic intelligently without opting out of AI discovery entirely.</li>
<li><strong>Don’t wait to start.</strong> Establish real-time AI visibility metrics now, rather than waiting for industry reports. Instrument your own data so you can see the inflection points in your business before they hit.</li>
</ul>



<h2 class="wp-block-heading">Tune Into the Full Conversation</h2>



<p><strong><a href="https://rss.com/podcasts/get-discovered-by-prerender-io/?_gl=1%2a8m2fsa%2a_gcl_au%2aMTU0MzUyMDExOC4xNzcyNDU2NjM0">Listen to the full episode of the Get Discovered podcast</a></strong> wherever you get your podcasts. Make sure to subscribe so you don&#8217;t miss future episodes with SEO experts and business leaders navigating the AI discovery challenge in real time. To connect with Alain, visit <a href="https://yoast.com/">Yoast</a> or find him on <a href="https://www.linkedin.com/in/schlessera/">LinkedIn</a>. </p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://rss.com/podcasts/get-discovered-by-prerender-io/?_gl=1%2a8m2fsa%2a_gcl_au%2aMTU0MzUyMDExOC4xNzcyNDU2NjM0" style="border-radius:0px;background-color:#1f8511">Subscribe to the Podcast</a></div>
</div>



<p>To make sure your site is visible to LLMs, <a href="https://prerender.io/pricing/"><strong>try Prerender.io for free</strong></a>. We serve fully rendered pages to ChatGPT, Perplexity, Claude, and every AI search platform your buyers are using.</p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://prerender.io/pricing/" style="border-radius:0px;background-color:#1f8511">Try Prerender.io For Free</a></div>
</div>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Most Overlooked Aspect of Your AI Search Strategy: A Conversation with Peter Rota</title>
		<link>https://prerender.io/blog/technical-seo-for-ai-search-peter-rota/</link>
		
		<dc:creator><![CDATA[Prerender]]></dc:creator>
		<pubDate>Wed, 25 Feb 2026 17:49:18 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[ai podcasts]]></category>
		<category><![CDATA[seo podcasts]]></category>
		<category><![CDATA[tech podcasts]]></category>
		<guid isPermaLink="false">https://prerender.io/?p=7580</guid>

					<description><![CDATA[A crucial, but often overlooked, aspect of your marketing to ensure you show up in AI search.]]></description>
										<content:encoded><![CDATA[
<p>If you&#8217;ve been watching the SEO conversation play out over the past 12 months, you&#8217;ve likely noticed a shift in tone. Less chasing algorithm updates. More getting back to fundamentals. Peter Rota—technical SEO expert, 15-year industry veteran, and LinkedIn thought leader with 80,000 followers—is one of the clearest voices making this argument. And he makes it well.</p>



<p>Peter joined host Joe Walsh on the latest episode of <a href="https://prerender.io/podcast">Get Discovered</a>, Prerender.io&#8217;s podcast about AI and online discoverability. The conversation covered where technical SEO stands today, what AI has actually broken in the traditional playbook, and why the teams that stay grounded in first principles are the ones that will come out ahead.</p>



<p>Watch the full episode below, or keep reading for a summary.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="The Most Overlooked Aspect of Your AI Search Strategy: Peter Rota, Technical SEO Expert" width="640" height="360" src="https://www.youtube.com/embed/tIqj7PMqcYs?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">It&#8217;s Not AI vs. Google: It&#8217;s How Can You Show Up in Both</h2>



<p>Peter opened with a stat that reframes the whole conversation: a Semrush study using clickstream data showed that people who started using LLMs actually <em>increased</em> their Google search sessions. AI didn&#8217;t cannibalize search. It complicated it.</p>



<p>&#8220;People are trying to ask direct questions, or they&#8217;re trying to validate things that are coming up in LLMs,&#8221; Peter explained. The buyer&#8217;s journey has gotten longer and messier, not shorter. That has big implications for how teams measure success.</p>



<p>For SEO practitioners and business leaders alike, this dismantles the &#8220;AI vs. Google&#8221; framing that still shows up in a lot of industry commentary. The more useful question isn&#8217;t <em>which one wins</em>. It&#8217;s <em>how do you show up in both</em>, and how do you measure what&#8217;s actually working across a fragmented path to purchase?</p>



<p>This challenge of fragmented attribution came up repeatedly throughout the conversation — and it&#8217;s one we&#8217;ve explored in depth with other guests this season. <a href="https://prerender.io/blog/podcast-klaus-schremser-otterlyai/">Klaus Schremser from Otterly AI</a> made a similar point about the difficulty of tracking visibility across AI channels, and <a href="https://prerender.io/blog/podcast-how-toggl-is-adapting-to-the-ai-era/">Elizabeth Thorn from Toggl</a> described how her team built custom AI referral tracking to fill the gap.</p>



<h2 class="wp-block-heading">The Biggest Shift in 15 Years of SEO</h2>



<p>When Joe asked how this AI disruption compares to other seismic changes, Peter didn&#8217;t hesitate. This is the most significant shift he&#8217;s seen since starting in the industry. The reason? It fundamentally changes how success is defined.</p>



<p>&#8220;The metrics have always been clicks and rankings,&#8221; he said. &#8220;Now it&#8217;s more like: am I visible here? Conversions are getting more important. It&#8217;s just the overall understanding of what is important now versus what was important five years ago.&#8221;</p>



<p>That&#8217;s not a trivial reframe. It affects how you report to leadership, how you structure content strategy, and how you justify the budget. Traffic as the primary KPI made sense when traffic was a reliable proxy for visibility and intent. In a world where LLMs answer questions without sending a click, traffic numbers can flatline while your brand influence grows—or erodes.</p>



<h2 class="wp-block-heading">What Industries Are Most Impacted by Generative AI?</h2>



<p>Peter was candid about who&#8217;s feeling the most pain right now: news publishers and content-heavy sites. &#8220;Glossary and blog sections have been hit very hard year over year because they&#8217;re being surfaced in AI Overviews or LLMs, so people aren&#8217;t clicking through to the website.&#8221;</p>



<p>Informational content is increasingly being answered before the user ever reaches your site. Peter&#8217;s take isn&#8217;t to abandon that content, but to shift the framing: create content that&#8217;s genuinely useful for the user, not content that exists primarily to capture search engine visits.</p>



<p>It&#8217;s a subtle but important distinction that echoes what <a href="https://prerender.io/blog/podcast-noah-greenberg-stacker/">Noah Greenberg from Stacker</a> discussed earlier this season when unpacking what actually gets cited in LLMs — authoritative, specific, citable content that earns third-party validation.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Which industries are getting hit hardest by AI?" width="540" height="960" src="https://www.youtube.com/embed/kQVc4jRaXf0?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">What&#8217;s Crucial for AI Search Visibility: Technical SEO Fundamentals</h2>



<p>This is where the conversation got particularly useful. Peter&#8217;s view is that good technical SEO and good AI discoverability are largely the same thing—and most of the ranking factors that matter more for LLMs are factors that great SEOs have always cared about. They&#8217;ve just been getting less attention. A few of the areas Peter flagged as increasingly critical:</p>



<ul class="wp-block-list">
<li><strong>Semantic HTML.</strong> LLMs, unlike Google, mostly can&#8217;t render JavaScript. That means they&#8217;re reading your raw HTML. If your content is buried in divs instead of proper semantic tags—headers, lists, footers, article blocks—you&#8217;re harder to parse and more likely to be skipped.</li>
</ul>
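<p>The difference is easy to see side by side. Both fragments render the same content, but only the second tells an HTML-only parser what each piece is. This is illustrative markup, not a prescription:</p>

```html
<!-- Hard to parse: the structure only exists in class names -->
<div class="title">Pricing plans</div>
<div class="item">Starter: $10/mo</div>
<div class="item">Pro: $25/mo</div>

<!-- Semantic: the heading and list survive as plain HTML -->
<article>
  <h2>Pricing plans</h2>
  <ul>
    <li>Starter: $10/mo</li>
    <li>Pro: $25/mo</li>
  </ul>
</article>
```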



<ul class="wp-block-list">
<li><strong>Schema markup.</strong> Peter described this as getting &#8220;a second light,&#8221; something that was always a good practice but is now more essential. Anything that helps LLMs more easily understand your site&#8217;s structure, relationships, and meaning is worth investing in. <a href="https://prerender.io/blog/ai-optimization-technical-seo-guide/">Our own technical SEO guide for AI optimization</a> goes deep on this.</li>
</ul>
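<p>A minimal example of the kind of markup Peter is describing, using schema.org&#8217;s JSON-LD format. The organization details are placeholders; the <code>@type</code> and property names are standard schema.org vocabulary:</p>

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://github.com/example"
  ]
}
```

<p>This goes in a <code>&lt;script type="application/ld+json"&gt;</code> tag, where crawlers can read your entity relationships without executing anything.</p>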



<ul class="wp-block-list">
<li><strong>JavaScript rendering.</strong> This one is particularly close to home for us. Most LLMs (with a few notable exceptions) cannot execute JavaScript. If your key content lives in JavaScript-rendered components, as is the case with many modern websites, it may as well be invisible to them. Google has invested enormous infrastructure into rendering JavaScript at scale and still doesn&#8217;t render everything. The idea that AI crawlers will do the same is, as Joe put it during the conversation, &#8220;a non-starter.&#8221; </li>
</ul>



<ul class="wp-block-list">
<li><strong>Page speed, and specifically time to first byte.</strong> LLMs using retrieval-augmented generation visit sites in real time when constructing answers. If your site doesn&#8217;t load fast enough, they move on. Peter&#8217;s recommendation is to target three to four seconds or under. &#8220;They&#8217;re almost like they have unlimited crawl budget,&#8221; he said. &#8220;They&#8217;re just gonna go somewhere else if they can&#8217;t get you.&#8221;</li>
</ul>



<ul class="wp-block-list">
<li><strong>Crawl logs.</strong> Peter said he now considers crawl log analysis a proactive practice rather than a reactive one. With Google Search Console, you can see which bots are visiting, what they&#8217;re finding, and whether there are unresolved requests. This is especially important now because AI crawler behavior is still not fully understood, and the more data you have, the better your decisions will be.</li>
</ul>
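<p>Getting started with crawl-log analysis doesn&#8217;t require heavy tooling. A minimal sketch in Python, assuming combined-format access logs and a hand-maintained list of AI crawler user-agent substrings (the bot names below are published user agents, but the list is illustrative, not exhaustive):</p>

```python
from collections import Counter

# Substrings of published AI crawler user agents -- illustrative, not exhaustive.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def count_ai_crawler_hits(log_lines):
    """Tally hits per AI crawler across access-log lines."""
    counts = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                counts[bot] += 1
    return counts

sample = [
    '1.2.3.4 - - [01/Mar/2026] "GET /blog/ HTTP/1.1" 200 1234 "-" "Mozilla/5.0; GPTBot/1.1"',
    '5.6.7.8 - - [01/Mar/2026] "GET /docs/ HTTP/1.1" 200 5678 "-" "PerplexityBot/1.0"',
    '9.9.9.9 - - [01/Mar/2026] "GET / HTTP/1.1" 200 4321 "-" "Mozilla/5.0 (ordinary browser)"',
]
print(count_ai_crawler_hits(sample))  # GPTBot: 1, PerplexityBot: 1
```

<p>Note that user-agent strings are trivially spoofed; for real analysis, verify bot identity against each vendor&#8217;s published IP ranges or via reverse DNS before drawing conclusions.</p>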



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-9-16 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Are there ranking factors that matter more for AI discovery compared to traditional search?" width="540" height="960" src="https://www.youtube.com/embed/ttS0xpQ7JpA?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>



<h2 class="wp-block-heading">LLMs Care More About Brand Authority Than Search Engines </h2>



<p>One of the most interesting threads in the conversation was about how LLMs evaluate credibility differently from Google.</p>



<p>Google&#8217;s algorithm has always valued backlinks as a proxy for authority, and gaming that system became its own cottage industry. LLMs think differently. They&#8217;re looking for consensus. They synthesize what&#8217;s being said across micro-communities, forums, review sites, social platforms, and the broader web. A brand with strong domain authority but a weak presence in third-party conversations may rank well in Google while being largely ignored by AI.</p>



<p>&#8220;LLMs are a lot more consensus-based,&#8221; Peter said. &#8220;They&#8217;re looking for branded signals because they want to serve something they&#8217;re confident in serving.&#8221;</p>



<p>This makes <a href="https://prerender.io/blog/podcast-how-toggl-is-adapting-to-the-ai-era/">brand reputation management</a> and PR strategy more important than ever. Getting mentioned in a Reddit thread might not produce a backlink. But it can influence how an LLM describes your brand. That&#8217;s a new kind of value that most SEO tools haven&#8217;t caught up to yet.</p>



<p>Peter also noted that the no-follow link vs. follow link binary that dominated SEO thinking is less relevant in this context. A no-follow mention on a high-authority site may actually be more valuable for LLM visibility than a follow link that doesn&#8217;t generate real contextual resonance. The metric that matters is signal strength, not link attribute.</p>



<h2 class="wp-block-heading">The Attribution Problem (Again)</h2>



<p>Attribution has always been difficult for SEO. You could see that organic traffic increased and correlate it with content efforts, but drawing a direct line from specific pages to specific revenue was never clean. AI makes this messier—a common theme in this season of the podcast.</p>



<p>Peter framed it well: &#8220;A lot of times, SEO was always harder to even get buy-in on attribution because it&#8217;s like, I did these paid ads and I can clearly see where the conversion was. With SEO, we know it&#8217;s working, but it was hard to show the value. And obviously, LLMs make it even more complicated.&#8221;</p>



<p>Joe added a useful analogy: we&#8217;ve essentially moved from the attribution precision of digital advertising back toward something closer to billboard marketing. You can measure general uplift. You can observe trends. But the clean, last-click-attribution model that made Google&#8217;s ad network so appealing to CFOs? That era is fading.</p>



<p>For now, Peter&#8217;s approach is to report on what leadership understands: referrals, sessions, and conversions over time, while keeping an eye on visibility metrics as leading indicators, even if they&#8217;re imperfect.</p>



<h2 class="wp-block-heading">What Teams Should Prioritize for AI Search Visibility</h2>



<p>When asked to give concrete guidance on where to focus, Peter named a few key areas:</p>



<p><strong>Prioritize:</strong> </p>



<ul class="wp-block-list">
<li><strong>Deeper schema investment,</strong> particularly to establish entity relationships and corroborate your brand&#8217;s &#8220;entity home&#8221; (a well-structured About page that&#8217;s then validated across the web). </li>



<li><strong>Raw HTML content</strong>. Make sure your most important copy isn&#8217;t dependent on JavaScript.</li>



<li><strong>Page speed. </strong></li>



<li><strong>Crawl log analysis.</strong></li>



<li><strong>Your overall brand presence</strong> in communities and third-party publications.</li>
</ul>



<p><strong>Deprioritize:</strong> </p>



<ul class="wp-block-list">
<li><strong>The &#8220;all-or-nothing&#8221; mentality</strong> around traffic as the primary metric. </li>



<li><strong>Backlinks</strong> as the exclusive measure of off-site authority.</li>



<li><strong>The expectation that recovering pre-AI traffic levels is a realistic goal.</strong> &#8220;LLMs will never send the amount of traffic Google sends,&#8221; Peter said. &#8220;I don&#8217;t think we can get the traffic back. It&#8217;s just setting that clear expectation.&#8221;</li>
</ul>



<p>This mirrors advice we&#8217;ve heard across multiple guests this season, including from <a href="https://prerender.io/blog/ai-tips-seos-ane-wiese-saas-group/">Ané Wiese at saas.group</a>, who offered practical implementation tips for SEO teams navigating the same transition.</p>



<h2 class="wp-block-heading">Looking Ahead </h2>



<p>Looking out 12 to 18 months, Peter&#8217;s prediction is that organic clicks will continue to decline. This is not only because of AI, but because Google itself is adding more advertising to AI-powered search experiences.</p>



<p>But he raised a more nuanced concern beyond the traffic numbers: the erosion of critical thinking in teams that lean too hard on LLMs.</p>



<p>&#8220;I feel like people will look for more and more shiny objects to solve things and they lose that critical thinking,&#8221; he said. &#8220;LLMs will lie to your face and won&#8217;t even acknowledge it. You just really need to use your brain with a lot of these tools.&#8221;</p>



<p>Joe echoed this, noting that how humans relate to these tools—and where the actual value boundaries are—is something we&#8217;re still figuring out as an industry.</p>



<h2 class="wp-block-heading">Key Takeaways</h2>



<p>We asked Peter what he&#8217;d want listeners to remember from the conversation. His answer was clean: <strong>don&#8217;t forget the fundamentals of SEO</strong>.</p>



<p>&#8220;A lot of those will give you the 80/20 of showing up in LLMs,&#8221; he said. &#8220;A lot of the AI optimization principles evolved from SEO. A lot of them are SEO that&#8217;s just been repackaged. If you&#8217;re doing good SEO, you&#8217;ll definitely be off to a good start.&#8221;</p>



<p>This is a message that holds up across every guest we&#8217;ve spoken to this season. Getting back to first principles, like technically sound sites, authoritative content, and genuine brand presence, is not a retreat. It&#8217;s the actual strategy.</p>



<h2 class="wp-block-heading">Tune Into the Full Conversation</h2>



<p>This recap covers the highlights, but the full conversation goes deeper into crawl log analysis, visibility tracking tools and their limitations, prompt overlap, and what the next 12 months look like for SEO practitioners. <strong><a href="https://prerender.io/podcast">Listen to the full episode on Get Discovered</a></strong> wherever you get your podcasts.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Subscribe to Get Discovered so you don&#8217;t miss future episodes with SEO experts and business leaders navigating the AI discovery challenge in real time.</p>
</blockquote>



<p>You can also find Peter on <a href="https://www.linkedin.com/in/peterrota/">LinkedIn</a> or at his <a href="https://www.peterrotaseo.com/">website</a>, where he works with businesses on local, e-commerce, and technical SEO.</p>



<h2 class="wp-block-heading">Make Sure AI Crawlers Can Actually Find You</h2>



<p>Peter&#8217;s point about JavaScript and LLM visibility is one we hear consistently from guests, and it&#8217;s one we&#8217;ve built our product around. Most AI crawlers can&#8217;t execute JavaScript, which means if your content is JavaScript-rendered, it&#8217;s invisible to them by default.</p>



<p><a href="https://prerender.io/pricing/"><strong>Try Prerender.io for free</strong></a> and make sure your site is discoverable to ChatGPT, Perplexity, Claude, and every AI search platform your buyers are using.</p>



<div class="wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex">
<div class="wp-block-button"><a class="wp-block-button__link has-white-color has-text-color has-background has-link-color wp-element-button" href="https://prerender.io/pricing/" style="border-radius:0px;background-color:#1f8511">Try Prerender.io For Free</a></div>
</div>



]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>