
Google JS Rendering: Noindex No Longer Means Not Rendered

Published on September 18, 2025


For most site owners, Google’s search pipeline feels like a black box. We know the broad strokes: Google crawls, renders, indexes, and ranks content, but the finer details are often obscure.

That is why technical SEOs pay close attention to every shift in behavior, as even subtle changes in how Google processes websites can have big implications. One of those shifts is happening right now in JavaScript rendering: pages with noindex directives are still being rendered.

In this article, we’ll explore Google’s new rendering behavior, why pages marked noindex are now being rendered, what it means for your site’s health, and what SEOs should do differently.

A Quick Refresher: How Google Processes Websites

Before we dive into the main topic, let’s quickly revisit the three core stages of Google’s search pipeline:

  1. Crawling: this is the first stage where Googlebot discovers a page and fetches its HTML, CSS, and JS.
  2. Rendering: Google’s Web Rendering Service (based on Chromium) then executes the JavaScript and builds a DOM to see what the page actually displays.
  3. Indexing: finally, Google decides whether to store the page in its index, making it eligible to rank in search results.

And where does noindex come in? A noindex directive (via meta tag or HTTP header) tells Google to exclude a webpage from its index. Basically, it’s like hanging a “you can look, but don’t list me” sign for Googlebot.
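
In practice, the two forms look like this. The meta tag version goes in the page’s <head>:

```html
<meta name="robots" content="noindex">
```

The header version is returned with the HTTP response instead, typically set in your server or CDN configuration:

```
X-Robots-Tag: noindex
```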

Related: Learn More About How Google Crawls and Indexes Websites

The Old Understanding: Noindex Meant No JS Rendering (and No Indexing)

For years, the SEO community’s general understanding was that a noindex tag would both stop Google from indexing a page and prevent Googlebot from rendering its JavaScript. This is still spelled out in the Google Search documentation:

When Google encounters noindex in the robots meta tag before running JavaScript, it doesn’t render or index the page.

For this reason, noindex became a reliable safety net that SEOs built strategies around. You could keep certain pages out of search results while still allowing Google to crawl them for link discovery. And since rendering was skipped, the assumption was that fewer resources were being used, conserving your crawl budget.

This principle of handling and managing noindex pages also shaped how technical specialists approached site architecture, navigation, and audits, particularly for large-scale websites with thousands of pages.

However, recent observations suggest this long-established understanding no longer holds true.

The New Understanding: Noindex No Longer Stops JS Rendering

It appears that Google is now rendering noindex pages, at least when it comes to executing JS and handling fetch requests. To put this into context, Dave Smart, a technical SEO expert, ran a series of controlled tests to confirm and document the behavioral shift.

He set up pages that triggered JavaScript fetch() calls to a logging endpoint and monitored Googlebot’s behavior. If Googlebot rendered the page and executed the script, the requests would appear in the server logs.
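
Smart’s exact test code isn’t published here, but the setup can be reproduced with a page along these lines (the /log endpoint and payload are hypothetical placeholders):

```html
<!-- Test page: noindex via meta tag, plus a client-side fetch beacon -->
<meta name="robots" content="noindex">
<script>
  // If Googlebot renders the page and executes this script,
  // a POST request will show up in the logging endpoint's server logs.
  fetch('https://example.com/log', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ page: location.pathname, ts: Date.now() })
  });
</script>
```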

The results were conclusive across multiple test scenarios:

Test 1: Page with <meta name="robots" content="noindex"> and JS fetch. Googlebot made the POST fetch request (indicating it executed the script). However, the page was still treated as noindexed and was not put in the index.

Test 2: Page with X-Robots-Tag: noindex (HTTP header) and JS fetch. Googlebot again performed the POST fetch. The page remained excluded from the index, consistent with Test 1.

Test 3: 404 page with JS fetch. No rendering happened here. Googlebot didn’t execute the script at all, which shows that a hard 404 still stops the rendering pipeline.

Test 4: Noindex page with JS redirect. Googlebot executed the script and logged the POST request, but the page was still excluded and, interestingly, the redirect target wasn’t discovered or followed.

In short, Google still respects noindex for indexing, but it no longer skips rendering. And in Smart’s words:

The fact that the requests made to the test API endpoint were made with a POST method, and not a GET method, gives me more confidence that these requests are being made as part of the rendering process.

These findings have also been corroborated by other technical SEO experts, pointing to a systematic change in Google’s rendering behavior.

Why Would Google Render Noindex Pages?

On the surface, this seems inefficient. Why spend resources rendering a page that will not be indexed? Google hasn’t officially confirmed the reasoning, but three likely drivers stand out:

  • Detecting manipulation: some websites try to game the system by dynamically removing noindex tags via JavaScript after initial crawling. Rendering allows Google to spot these tricks.
  • Extracting signals: even if a page is “noindex,” it may contain useful internal links, structured data, or other signals that help Google understand the rest of the site.
  • Comprehensive site analysis: rendering provides Google with a complete picture of how a website functions, including user experience metrics and technical implementation quality.

To be fair, from Google’s perspective, this makes sense for maintaining search quality and preventing abuse. For website owners, however, it introduces unexpected challenges.

SEO Impacts of Rendering Noindex Pages

Google’s shift to rendering noindex pages affects how your site is crawled, analyzed, and reported. Here’s what you need to know:

1. Technical Issue Visibility

Noindex tags no longer shield technical problems from Google’s analysis. Issues that were previously hidden behind the noindex directive, such as JS errors, slow load times, broken API calls, or accessibility problems, now become visible in Search Console and other SEO testing tools.

Now, while these pages won’t directly impact rankings, the technical issues they reveal may indicate broader site health problems.

Resource: See 19 Technical SEO Issues That Hurt Your Website Performance

2. Signals Still Get Processed

Noindexed pages don’t disappear from Google’s understanding of your site. Structured data, canonical tags, and internal links are still read, meaning misconfigurations can bleed into your broader site signals.

For example, if a noindexed filter page points to the wrong canonical, it could confuse Google about which product page should be authoritative.
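
For illustration, a misconfigured filter page might look something like this (the URLs are made up):

```html
<!-- Noindexed filter page: /products?color=blue&sort=price -->
<meta name="robots" content="noindex">

<!-- The canonical mistakenly points at another filtered URL instead of the
     clean category page, sending Google a conflicting authority signal -->
<link rel="canonical" href="https://example.com/products?sort=price">
```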

3. Crawl Budget and Server Strain

It’s no secret that JavaScript rendering is resource-intensive. On small sites, this may be negligible. But on large ecommerce sites with thousands of URLs, the extra load can waste both your server capacity and crawl budget.

For example, imagine an ecommerce site with 50,000 noindex filter URLs. If each requires a few seconds of JS execution, that could mean dozens of wasted rendering hours every crawl cycle, on top of thousands of extra subresource and API fetches. That’s serious server strain and crawl budget drain that could have been spent on indexable, revenue-driving pages.
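
To put rough numbers on that example: assuming three seconds of JS execution per URL, 50,000 URLs × 3 seconds ≈ 150,000 seconds, or roughly 42 hours of rendering work for a single pass over those filter pages alone.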

4. Reporting Complexity

Because Google renders these pages, diagnostic tools may flag irrelevant issues, creating noise in your SEO reporting. As a result, you’ll need to filter out that noise while still catching the rendering issues that genuinely impact site health.

How Can Marketers and SEOs Handle This New Behavior?

The impacts we’ve covered clearly point to this: marketers and SEO specialists can no longer treat noindex as a simple “end of the line.” Instead, they must be deliberate in optimizing what Google sees and processes, especially when resources and crawl budget management matter.

Apply the Proper Directive Strategy

Whether you reach for noindex or a robots.txt disallow, a clean directive strategy prevents wasted crawl budget and helps Google allocate resources more efficiently across your site. Too often, though, the two get lumped together as if they solve the same problem, but they do not.

  • Noindex: best for pages that can be crawled and rendered, but should not appear in Google’s search results (e.g., thank-you pages or private members-only content).
  • Disallow (robots.txt): best for pages you don’t want Google to crawl at all (e.g., admin areas, private user data, or staging environments). Google will not fetch or render these pages.

Note: never combine noindex with a robots.txt disallow on the same URL; if Google can’t crawl the page, it will never see the noindex instruction. Check out our guide for detailed instructions on how to apply robots.txt directives to your website.
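
For reference, a robots.txt disallow looks like this (the paths are hypothetical examples):

```
# robots.txt at the site root
# Googlebot will not fetch or render anything under these paths
User-agent: *
Disallow: /admin/
Disallow: /staging/
```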

Expand Your Technical Audit Requirements

SEO audits that only check whether a page is “indexed or not” are no longer enough. Audits now need to:

  • Analyze server logs or GSC Crawl Stats to identify how much Googlebot activity is spent on noindex pages.
  • Compare raw HTML vs. rendered HTML to identify signals or errors hiding behind JS execution. Tools like Screaming Frog, Sitebulb, or Google’s URL Inspection tool can help you detect mismatches (see the sketch after this list for a scripted version).
  • Check for conflicting directives, like pages marked noindex but also blocked in robots.txt, or canonical tags pointing to noindexed pages.
  • Evaluate internal linking and sitemaps to ensure noindex pages are not heavily linked from your main navigation or included in XML sitemaps.
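
As a sketch of the raw vs. rendered comparison mentioned above, the check can be scripted with Node and Puppeteer (the package choice and URL are assumptions; dedicated crawlers like Screaming Frog do the same thing at scale):

```js
// Compare raw vs. rendered HTML for a robots "noindex" mismatch.
// Assumes Node 18+ (global fetch) and the puppeteer package.
const puppeteer = require('puppeteer');

async function compareRobotsMeta(url) {
  const noindexPattern = /<meta[^>]+name=["']robots["'][^>]*noindex/i;

  // 1. Raw HTML, as delivered before any JavaScript runs
  const rawHtml = await (await fetch(url)).text();

  // 2. Rendered HTML, after a headless browser executes the page's JS
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const renderedHtml = await page.content();
  await browser.close();

  console.log({
    url,
    noindexInRawHtml: noindexPattern.test(rawHtml),
    noindexInRenderedHtml: noindexPattern.test(renderedHtml),
  });
}

compareRobotsMeta('https://example.com/sample-page');
```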

Implement a JS Prerendering Solution

Several approaches, such as self-built SSR, static rendering, and hydration, can improve how search engines process JavaScript, but they’re costly to build and maintain. Instead, a prerendering solution like Prerender.io offers a more powerful and efficient alternative for saving crawl budget.

Prerender.io is a dynamic JS rendering solution that serves fast, static HTML versions of your JS-heavy pages directly to search engine crawlers such as Googlebot and Bingbot. This improves your crawl efficiency, reduces server load, and avoids the high costs of maintaining an in-house rendering setup.

It also helps ensure consistent handling of noindex directives, so your pages are rendered and processed exactly as intended. In other words, as Google’s rendering behavior evolves, Prerender.io gives you stability and technical control.
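
For an Express-based site, the integration is typically a few lines of middleware. The sketch below assumes the prerender-node package and a placeholder token; check the official docs for your specific framework:

```js
// Sketch: Express app using the prerender-node middleware (assumed setup).
// Known crawler user agents receive prerendered static HTML;
// regular visitors still get the normal JavaScript application.
const express = require('express');
const prerender = require('prerender-node');

const app = express();

// Placeholder token identifying your Prerender.io account
app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

app.use(express.static('dist')); // your built JS app

app.listen(3000);
```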

Learn how Prerender.io works in more detail and how it compares to other solutions.

Prerender vs. SSR vs. Static Rendering vs. Hydration

Improve Website JavaScript Rendering With Prerender.io

Again, while Google itself hasn’t officially confirmed this new rendering pattern (and it’s possible the behavior could evolve again), evidence from multiple SEO experts has pointed to it, and site owners can’t afford to ignore it.

The takeaway is simple: refine your directive strategy, broaden your audit scope, and make sure Google’s resources are spent on the pages that drive real value through effective dynamic content indexing.

Also, by integrating Prerender.io, you can solve your site’s technical SEO issues, bid crawl budget troubles goodbye, and future-proof your SEO strategy. It’s easy to install, fits right into your JS-based framework, and doesn’t require changing your tech stack.

Enjoy the rendering benefits for your JavaScript website by adopting Prerender.io for free today!

