Request Google Indexing: How to Make New Pages Visible Quickly
If you create content for a website at an agency or another company, or otherwise work in search engine optimization, you know the situation: you have written a blog post, a landing page, or another subpage and now want it to show up on Google as quickly as possible so that searchers can find it. Usually this works without problems, but sometimes it takes a long time for a new page to be included in the index. In this article, you will learn how to find out whether a page has been indexed by Google and how to request Google indexing with little effort.
What is Google Crawling and Indexing?
Google uses a so-called crawler, a bot that is sent out to discover pages that are not yet in the index and add them to it. This process is called crawling. Googlebot is supposed to ensure that the index, the large archive of all indexable websites, always stays up to date.
How do I know if a page has been indexed?
You may now be wondering how to tell whether a website is in the index or not. To see the number of indexed pages of a website on Google, enter the following command in the search bar:
site:deinewebseite.com
If you want to check the indexing status of a single webpage, enter the following operator:
site:deinewebseite.com/web-page-slug
If no result appears in this search, it is not indexed.
Another way to verify this is the Google Search Console. Under Google Search Console > Index > Coverage you can check how many individual pages of a website have been indexed. If the number is zero, you should definitely investigate why none of the pages have been indexed; error messages are also displayed here. The same check works not only for the entire website but also for individual pages, via the URL Inspection Tool that is integrated into the Search Console.
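If you want to run this check programmatically, the Search Console API offers a URL Inspection endpoint. The following Python sketch is only an illustration: it assumes you already have a valid OAuth 2.0 access token with a Search Console scope, that the property is verified in your account, and it uses the placeholder domain from above.

import requests

# Assumption: ACCESS_TOKEN is a valid OAuth 2.0 token with the
# "https://www.googleapis.com/auth/webmasters.readonly" scope.
ACCESS_TOKEN = "ya29...."  # placeholder
SITE_URL = "https://deinewebseite.com/"  # the Search Console property
PAGE_URL = "https://deinewebseite.com/web-page-slug"

resp = requests.post(
    "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"inspectionUrl": PAGE_URL, "siteUrl": SITE_URL},
)
resp.raise_for_status()

# The verdict is "PASS" if the URL is on Google, otherwise e.g. "NEUTRAL".
result = resp.json()["inspectionResult"]["indexStatusResult"]
print(result["verdict"], "-", result.get("coverageState", ""))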
Older page not indexed? The solution
If an older page has not yet been included in the index, consider whether submitting an indexing request will help. In some cases a website has a technical problem, or Google considers it too low-quality to include in the index. Possible issues and their solutions follow here:
Remove the crawl block
If the entire website is not indexed, the cause may be a crawl block in the robots.txt file. If the following directives appear in robots.txt, they should be removed, because they tell Googlebot not to crawl the entire site:
User-agent: Googlebot
Disallow: /
User-agent: *
Disallow: /
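You can quickly verify what the live robots.txt allows with Python's standard library; a minimal sketch, again using the placeholder domain:

from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt (placeholder domain assumed).
rp = RobotFileParser("https://deinewebseite.com/robots.txt")
rp.read()

# True means Googlebot may crawl the URL; with "Disallow: /" in place,
# this returns False for every page on the site.
print(rp.can_fetch("Googlebot", "https://deinewebseite.com/web-page-slug"))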
Remove noindex
While you are checking crawl directives, also check whether relevant pages carry a "noindex" directive. Note that noindex does not belong in robots.txt (Google stopped supporting it there in 2019); it is set either as a robots meta tag such as <meta name="robots" content="noindex"> in the HTML head or as an X-Robots-Tag HTTP header. If a page carries it, then, as the name already suggests, it will not be indexed.
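To check both places for a single page, a small Python sketch like the following can help; the requests library and the placeholder URL are assumptions.

from html.parser import HTMLParser

import requests


class RobotsMetaFinder(HTMLParser):
    """Collects the content of <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.directives.append((a.get("content") or "").lower())


resp = requests.get("https://deinewebseite.com/web-page-slug", timeout=10)

# noindex can be sent as an HTTP header ...
header = resp.headers.get("X-Robots-Tag", "").lower()

# ... or as a robots meta tag in the HTML head.
finder = RobotsMetaFinder()
finder.feed(resp.text)

if "noindex" in header or any("noindex" in d for d in finder.directives):
    print("noindex found - the page will not be indexed.")
else:
    print("No noindex directive found.")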
Include page in the sitemap
A page should also appear in the sitemap. You can check this in the Search Console: if you inspect a URL and it says "Sitemap: N/A", the page is not included in the sitemap.
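Whether a URL is listed in the sitemap can also be checked directly; a minimal sketch, assuming a simple urlset sitemap at the usual location (a sitemap index file would need one more fetch level):

import xml.etree.ElementTree as ET

import requests

SITEMAP_URL = "https://deinewebseite.com/sitemap.xml"  # assumed location
PAGE_URL = "https://deinewebseite.com/web-page-slug"

resp = requests.get(SITEMAP_URL, timeout=10)
resp.raise_for_status()

# Sitemap files use this fixed XML namespace.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(resp.content)
urls = {loc.text.strip() for loc in root.findall(".//sm:loc", ns)}

print("in sitemap" if PAGE_URL in urls else "NOT in sitemap")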
Remove unwanted canonical tags
A canonical tag tells Google which version of a webpage is the preferred one. It can look like this: <link rel="canonical" href="https://deinewebseite.com/page.html">.
It is possible that a canonical tag was set incorrectly, so that Google tries to index a different version, one that may not even exist. The correct page is then not included in the index.
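A quick way to spot such a conflict is to compare a page's canonical URL with the page's own URL; a rough sketch follows (the regex is deliberately crude, a real HTML parser would be more robust, and the URL is a placeholder):

import re

import requests

url = "https://deinewebseite.com/web-page-slug"  # placeholder
html = requests.get(url, timeout=10).text

# Crude extraction that assumes rel comes before href inside the tag.
match = re.search(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html, re.I
)
canonical = match.group(1) if match else None

# A canonical pointing elsewhere can keep this page out of the index.
if canonical and canonical.rstrip("/") != url.rstrip("/"):
    print(f"Warning: canonical points to {canonical}")
else:
    print("No conflicting canonical found.")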
Avoid orphan pages, link internally
Orphan pages are pages that no link anywhere on the website points to. As a result, Googlebot cannot find them. To prevent this, apply a solid internal linking strategy: at the very least, there must be one indexed page on your domain that links to the previously unindexed page. For new pages, always make sure they are well linked internally, because only then can the crawler find them easily. It is useful to link to the new page from a strong, well-linked page.
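One way to find orphan pages is to compare the URLs in the sitemap with the URLs that are actually reachable by following internal links from the start page. The following Python sketch does exactly that, under several assumptions: the placeholder domain, a simple urlset sitemap, and a hard page limit instead of proper crawl politeness.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import xml.etree.ElementTree as ET

import requests

START = "https://deinewebseite.com/"  # placeholder domain
SITEMAP = "https://deinewebseite.com/sitemap.xml"
MAX_PAGES = 200  # safety limit for this sketch


class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)


# 1. All URLs the sitemap says should exist.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
sitemap_urls = {loc.text.strip() for loc in root.findall(".//sm:loc", ns)}

# 2. All URLs reachable by following internal links from the start page.
seen, queue = set(), [START]
while queue and len(seen) < MAX_PAGES:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        absolute = urljoin(url, href).split("#")[0]
        if urlparse(absolute).netloc == urlparse(START).netloc:
            queue.append(absolute)

# 3. Sitemap URLs that no internal link reaches are orphan candidates.
for orphan in sorted(sitemap_urls - seen):
    print("Orphan page:", orphan)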
Clean up internal nofollow links
The Google crawler does not follow links marked with the rel="nofollow" attribute. Links to pages that should be indexed must therefore not carry this attribute. If a page is not supposed to be indexed, it is better to set it to noindex or delete it altogether.
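To audit a page for nofollow links, a compact sketch like this one can serve as a starting point (placeholder URL assumed):

from html.parser import HTMLParser

import requests


class NofollowFinder(HTMLParser):
    """Prints every link whose rel attribute contains "nofollow"."""

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "nofollow" in (a.get("rel") or "").lower():
            print("nofollow link:", a.get("href"))


html = requests.get("https://deinewebseite.com/", timeout=10).text
NofollowFinder().feed(html)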
Unique content
Google includes pages that it considers high-quality and useful. Thin content, a lack of uniqueness, and poor overall quality can all contribute to a webpage not being indexed.
Make good use of crawl budget
Low-quality pages should also be avoided because Google only spends a certain crawl budget per site. If there are too many low-quality pages, the bot moves on without discovering the new content. And if many pages go online at once, it is quite common that not all of them are indexed by the bot's next visit.
Build high-quality backlinks
High-quality backlinks increase a page's reputation. Google does index websites without backlinks, but good backlinks contribute to a good ranking overall.
Simple steps for Google Indexing
If an older page has not been included in Google's index, the following steps can be taken:
Open Google Search Console
Call up the URL Inspection Tool
Paste the relevant URL into the search field
Wait for the check to finish
Click the "Request Indexing" button
Requesting indexing also makes sense after an important change to an existing page. However, you need some patience: the request places the URL in a virtual queue, and the process is not accelerated by submitting multiple requests.
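For completeness: Google also offers a programmatic Indexing API, but it is officially supported only for pages with job posting or broadcast event structured data; for normal pages, the Search Console request described above remains the intended route. A minimal sketch of such a call, assuming a valid OAuth token with the indexing scope:

import requests

# Assumption: ACCESS_TOKEN carries the
# "https://www.googleapis.com/auth/indexing" scope.
ACCESS_TOKEN = "ya29...."  # placeholder

resp = requests.post(
    "https://indexing.googleapis.com/v3/urlNotifications:publish",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "url": "https://deinewebseite.com/web-page-slug",
        "type": "URL_UPDATED",  # or "URL_DELETED" for removed pages
    },
)
resp.raise_for_status()
print(resp.json())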
Several URLs for indexing?
If several URLs are to be indexed rather than just one, it makes sense to submit a sitemap. In the left menu bar of the Search Console, simply select "Sitemaps" under the "Index" section and submit it there. Here too, indexing may take some time. In any case, the sitemap is the better choice when you want to get multiple URLs into the index, because the Search Console limits the number of individual URL requests per day, and a sitemap lets you work around this limit.
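Sitemaps can also be submitted via the Search Console API instead of the web interface; a minimal sketch, again assuming a valid OAuth token and the placeholder domain:

from urllib.parse import quote

import requests

# Assumption: ACCESS_TOKEN carries the
# "https://www.googleapis.com/auth/webmasters" scope.
ACCESS_TOKEN = "ya29...."  # placeholder
SITE = "https://deinewebseite.com/"
SITEMAP = "https://deinewebseite.com/sitemap.xml"

# Both URL parameters must be percent-encoded in the request path.
resp = requests.put(
    "https://www.googleapis.com/webmasters/v3/sites/"
    f"{quote(SITE, safe='')}/sitemaps/{quote(SITEMAP, safe='')}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()  # the response body is empty on success
print("Sitemap submitted.")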
Image sources:
@Sughra – stock.adobe.com