
Google explains how website crawling works for SEO

In a YouTube video, John Mueller, Google’s Search Advocate, explains the process of website crawling. The video is from the English Google SEO office hours session that aired live on February 18, 2022.

Broadly speaking, two main aspects influence crawling:

1. Crawl Demand

Google uses a mechanism called “Crawl Demand” to determine how many pages need to be crawled per website. It depends on a number of factors, including the overall quality of the website and how frequently content on the site changes.

“On the one hand, we try to figure out how much we need to crawl from a website to keep things fresh and useful in our search results. And that relies on understanding the quality of your website, how things change on your website. We call that the crawl demand”, said John Mueller.

2. Server Limitations

Googlebot reduces crawling if a website returns a large number of server errors. The amount of load a server can handle is also considered: if your server cannot respond quickly to crawl requests, Googlebot slows its crawling.

“And on the other hand, there is the limitations that we see from your server, from your website, from your network infrastructure with regards to how much we can crawl on a website. So if we see a lot of server errors, then we will slow down crawling. Because we don’t want to cause more problems. If we see that your server is getting slower, then we will also slow down crawling”, explained Mueller.
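The behavior Mueller describes can be pictured as a feedback loop: slow down when the server returns errors or responds slowly, speed up cautiously when it is healthy. Here is a minimal sketch of such a policy in Python. This is a hypothetical illustration, not Google’s actual algorithm; the thresholds and multipliers are made-up assumptions.

```python
def adjust_crawl_delay(delay, status, response_time,
                       min_delay=0.5, max_delay=60.0):
    """Return a new delay (seconds) between crawl requests.

    Hypothetical policy, not Google's actual algorithm:
    - back off (double the delay) on 5xx errors or slow responses,
    - otherwise cautiously speed up, within fixed bounds.
    """
    if status >= 500 or response_time > 2.0:
        # Server is struggling: crawl less aggressively.
        delay = min(delay * 2, max_delay)
    else:
        # Server is healthy: gradually crawl faster.
        delay = max(delay * 0.9, min_delay)
    return delay
```

For example, a 503 response would double the current delay, while a string of fast 200 responses would slowly shrink it back toward the minimum.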

John Mueller also touched upon the impact of speed on website crawling.

Crawl rate depends on server speed rather than page speed

If the crawl rate of your website is dipping, it is most likely related to your server’s average response time. Core Web Vitals metrics have no direct effect on crawling speed. Therefore, rather than focusing on page speed, you should examine your server speed.

Here is John Mueller’s explanation:

“So specifically for the crawl rate, we just look at, how quickly can we request a URL from your server? And the other aspect of speed that you probably run into is everything around Core Web Vitals and how quickly a page loads in a browser. And the speed that it takes in a browser tends not to be related directly to the speed that it takes for us to fetch an individual URL on a website.

If you’re trying to diagnose a change in crawl rate, then don’t look at how long it takes for a page to render. And instead, look at just purely how long it takes to fetch that URL from the server.”
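Following Mueller’s advice, the number to measure is the raw time it takes to fetch a URL from the server, not the time it takes a browser to render the page. The sketch below times a plain HTTP fetch with Python’s standard library; it is an illustrative one-off measurement, and real monitoring would average many samples over time.

```python
import time
import urllib.request

def time_fetch(url, timeout=10):
    """Time a raw fetch of a single URL, in seconds.

    This measures request + body transfer only; it deliberately excludes
    rendering, JavaScript, CSS, and subresources, which do not factor
    into crawl rate according to Mueller.
    """
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # include the body transfer in the measurement
    return time.perf_counter() - start

# Example usage (hypothetical URL):
# print(f"{time_fetch('https://example.com/'):.3f}s")
```

Comparing numbers like this against the average response time reported in Search Console’s Crawl Stats can help confirm whether the server, rather than the pages, is the bottleneck.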

You can watch the entire discussion on Googlebot Crawling here:

