Common Google Search Console Errors

Modified on: Tue, 23 Apr, 2024

Google Search Console is a tool for monitoring how your site interacts with Google Search. It's common to see a lot of errors listed inside Google Search Console, but most of the time these aren't anything to worry about. This article discusses some of the most common issues that can arise in Search Console, so you know which ones are worth your time.

 

What is indexing and crawling?

 

The "Pages" section of Google Search Console will show you the number of pages that have been indexed on your site, as well as a list of all the pages that have not been indexed.

 

[Image: the Pages report in Google Search Console, showing indexed and non-indexed pages]

 

A page being "indexed" means that you are able to find that page inside of Google Search. For example, if I were to search "Ezoic How to Link Search Console", I would find a relevant page that had been indexed by Google.

 

[Image: an indexed page appearing in Google Search results]

 

This is different from "crawling", which is the process of Google searching your site and seeing what it can find. Google needs to crawl a page for it to be indexed, but not every page that Google crawls will be indexed.

 

If it helps, let's think of the internet as a big city, and Google as a tour guide.

 

"Discovering" is Google's tour guide exploring the city and looking at every building (website) and every room (webpage) inside those buildings, making notes on what exists and where it is.

 

"Crawling" is the tour guide visiting each attraction and evaluating how well it meets certain needs.

 

"Indexing" is when the tour guide notes the attraction down, as one that it could potentially recommend to visitors. These notes (the Index) are then used when someone asks the tour guide (makes a Google search) for a specific place or thing, like "Where can I find the best pizza?" or "Show me the museums".

 

So, in simple terms, "discovering" is Google finding and exploring websites; "crawling" is Google evaluating websites; and "indexing" is Google making a list of what it found so it can quickly find it again later when someone searches for it.

 

How to find pages that aren't indexed

 

Inside the "Pages" section of Google Search Console, you will find a number of pages which Google has found but has decided not to index, for a variety of reasons.

 

[Image: the list of reasons why pages aren't indexed, in the Pages report]

 

Selecting a reason will give you a full breakdown of every page impacted by the issue, and inspecting a given URL will provide further information on an individual page.

 

However, before you start panicking about a page that isn't indexed, it's worth taking a step back and asking yourself some questions before diving into technical troubleshooting:

 

  • Do you want this URL to appear on Google?
  • Does an alternative version of this page already exist on Google?

 

Do you want this URL to appear on Google?

 

Google can be very nosy when it comes to crawling a site. It will crawl any link that it finds (even if it's on a completely different site!), including any JavaScript. It's not uncommon to see Google crawling Ezoic placeholder scripts or our tracking scripts (e.g. /detroitchicago/).

 

These aren't pages that you would want a user finding on Google Search, and Google recognizes that, so it won't index them. This will be reported in Google Search Console as an error. You don't need to worry about these errors, though; they're just Google's way of saying that it knows these pages exist but won't put them in the search results.

 

Don't worry, there's no penalty for these scripts being crawled and returning a 404 error; Google expects to find scripts on your site. These scripts also shouldn't impact crawl budget, as they are orphans (i.e. pages that aren't internally linked to).

 

However, if you are concerned, you can stop Google from crawling them by editing your robots.txt file. This won't impact Ezoic functionality, but it will stop Googlebot from being able to access these pages.

To do this, open your robots.txt file via FTP (or, if you use WordPress, some plugins may allow you to edit the robots.txt file from within your WordPress dashboard) and add the following rules to your robots.txt file:

User-agent: *
Disallow: /detroitchicago/
Disallow: /porpoiseant/
Disallow: /beardeddragon/
Disallow: /tardisrocinante/
Disallow: /parsonsmaize/
Disallow: /edomontonalberta/
Disallow: /ezais/
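If you want to confirm the rules above behave as intended before deploying them, Python's standard-library `urllib.robotparser` can evaluate a robots.txt body against a user agent. A quick sketch (the example.com URLs are placeholders):

```python
import urllib.robotparser

# The disallow rules from this article, as a robots.txt body.
rules = """\
User-agent: *
Disallow: /detroitchicago/
Disallow: /porpoiseant/
Disallow: /beardeddragon/
Disallow: /tardisrocinante/
Disallow: /parsonsmaize/
Disallow: /edomontonalberta/
Disallow: /ezais/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot falls under "User-agent: *", so the script paths are blocked...
print(parser.can_fetch("Googlebot", "https://example.com/detroitchicago/script.js"))  # False
# ...while ordinary content pages remain crawlable.
print(parser.can_fetch("Googlebot", "https://example.com/my-article/"))  # True
```

This is also a handy way to double-check that a new disallow rule doesn't accidentally cover real content pages.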

 

Example errors: "Blocked due to forbidden access (403)", "Not Found (404)", "Crawled - currently not indexed", "Discovered - currently not indexed"

 

Does an alternative version of this page already exist on Google?

 

Google will only index one version of a page, which means that if you have two pages with slight variations in the URL - only one URL will be indexed. Don't panic if you see the other URL in your indexing errors list. The best way to check for this is to simply search the page URL in Google and see if it comes up.

 

Some examples of URLs that may be flagged as duplicates:

 

  • ezoic.com & ezoic.com?nocache
  • ezoic.com/home & ezoic.com (where ezoic.com/home redirects to ezoic.com)
  • ezoic.com & ezoic.com/ (notice the '/')

 

Equally, if you have two pages that have identical content (but completely different URLs) - Google will recognize this and will only index one of the two.

 

The URL that Google chooses to index is known as the "canonical URL".
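You can see which URL a page itself suggests as canonical by looking for the `<link rel="canonical">` tag in its HTML. A minimal sketch using Python's standard-library `html.parser` (the sample HTML is illustrative; Google treats this tag as a hint and may still choose a different canonical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if (tag == "link" and attributes.get("rel", "").lower() == "canonical"
                and self.canonical is None):
            self.canonical = attributes.get("href")

html = '<html><head><link rel="canonical" href="https://www.ezoic.com/"></head></html>'
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # https://www.ezoic.com/
```

If the declared canonical points at a different URL than the one reporting the error, the "duplicate" report is expected behavior rather than a problem.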

 

Example errors: "Redirect error", "Alternative page with proper canonical tag"

 

How to fix an indexing issue

 

If you think your page should be indexed, you will need to troubleshoot it using the URL Inspection tool. Using this tool, you can analyze a specific URL to see the reason why Google was unable to index it.

 

Some key things to check for pages that are not indexed are:

 

 1. Is access to the page blocked by robots.txt?

 

You can access your robots.txt by going to yoursite.com/robots.txt (where yoursite.com is your root domain). Look out for disallow rules that cover the location of the missed page. If these exist, they will be stopping Googlebot from accessing this page. You'll need to remove this rule and have Google recrawl the page.

 

Example error: "Blocked by robots.txt" (or nothing at all!)

 

2. Is access to the page blocked by a "noindex" meta tag?

 

Check the HTML of your webpage for the "meta robots" tag. If this tag includes the "noindex" directive, Google will not index the page. If you're using Google Chrome, you can access the HTML of the page by pressing "Ctrl" + "U".
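If you'd rather check this programmatically, a small sketch with Python's standard-library `html.parser` can flag the directive (the sample HTML snippets are illustrative; this also checks the Googlebot-specific variant of the meta tag):

```python
from html.parser import HTMLParser

class NoindexChecker(HTMLParser):
    """Flags the page if a robots meta tag carries the noindex directive."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name", "").lower() in ("robots", "googlebot"):
            directives = {d.strip().lower() for d in attributes.get("content", "").split(",")}
            if "noindex" in directives:
                self.noindex = True

blocked = NoindexChecker()
blocked.feed('<head><meta name="robots" content="noindex, follow"></head>')
print(blocked.noindex)  # True

allowed = NoindexChecker()
allowed.feed('<head><meta name="robots" content="index, follow"></head>')
print(allowed.noindex)  # False
```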

 

Example error: "Submitted URL marked 'noindex'"

 

3. Is the page accessible?


It may not just be Googlebot that is having trouble accessing the page. Make sure you can access it yourself, and make sure there are working links to it on your site.
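As a rough guide, the main HTTP status ranges map onto Search Console's messages as follows. This is a hypothetical helper, not part of any Google API; the labels mirror the error messages quoted in this article:

```python
def describe_status(status: int) -> str:
    """Maps an HTTP status code to the indexing obstacle it typically implies."""
    if 200 <= status < 300:
        return "Accessible"
    if status in (301, 302, 307, 308):
        return "Redirect (Google will index the destination URL instead)"
    if status == 403:
        return "Blocked due to forbidden access (403)"
    if status == 404:
        return "Not found (404)"
    if 400 <= status < 500:
        return "Blocked due to other 4xx issue"
    if 500 <= status < 600:
        return "Server error (5xx)"
    return "Unexpected status"

print(describe_status(404))  # Not found (404)
print(describe_status(403))  # Blocked due to forbidden access (403)
```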

 

Example error: "Submitted URL not found (404)", "Blocked due to forbidden access (403)", "Blocked due to other 4xx issue".

 

4. Is the content high quality?

 

Google will only index content that it deems useful for searcher intent. It's possible that Google has crawled your page and decided not to index it. If you disagree with this decision, or have made changes that increase the value of the page's content, you can request that Google reindex the page using the URL Inspection tool.

 

Example error: "Crawled - currently not indexed"

 

If none of the above has helped, then you can share Google Search Console access with shared-emea@ezoic.com and reach out to our Support teams for further assistance.
