Why would you want to remove a page from Google’s index?
There are many possible reasons, but most of the time it’s one of these:
- The page is not relevant to user searches from Google. For example, you probably wouldn’t want your privacy policy coming up for searches for your brand. Unless you are a privacy policy provider and want to show off how good yours is!
- The page has changed. If something important (e.g. your old pricing) is in Google’s snippet, this might be an urgent fix for you.
- The page has been deleted. If the page was important, this might also be urgent, to ensure a good experience and avoid disappointing people who click through from search results.
- The page contains sensitive content. If you’ve password-protected the page you probably want to stop search traffic too.
- The page in question is a duplicate of (or very similar to) another page on your website. If you keep both pages indexed, the duplicate content will split your search traffic between them, bloat your presence in Google’s index, and potentially reduce your overall SEO performance.
- You have too many pages indexed. If you run a small website and have somehow gotten thousands (or more) of pages indexed, this will not improve your SEO. If anything, it may reduce how much of its crawl budget Google spends on your important pages.
I’ve heard of robots.txt, can I use that?
Your robots.txt file lets bots that follow it (like Googlebot) know which pages they are allowed to access. The issue is that accessing a page and keeping it in the index are two different things.
If Google has never indexed your page, then disallowing it from crawling the page will keep it out of the index. But if Google has already indexed your page, then disallowing it from crawling the page again means that whatever it has already indexed will remain unchanged.
Think of it like an electricity company sending someone out to read your meter. The robots.txt file can be used like a “keep out” sign to stop them from going near your meter. So if they already have an incorrect reading, this sign will only make things worse.
Confusingly, if you want to get an image de-indexed, disallowing it in robots.txt is the way to go; see this page from Google for more info (and for Google’s own warning about using it for webpages).
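For reference, here’s a sketch of what those image-blocking rules could look like in robots.txt (the paths are made-up examples for your own site):

```
# Block Google's image crawler from a single image you want de-indexed
User-agent: Googlebot-Image
Disallow: /images/example-photo.jpg

# Or block it from a whole folder of images
User-agent: Googlebot-Image
Disallow: /private-images/
```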
So what’s the best way to remove a page?
You have a few options:
- [Temporary] Google Search Console request. For a fast update, you can submit a request with your Google Search Console. Under Removals choose New Request.

Then fill in the URL you want removed. If you use the prefix option, you can remove a whole folder; in this case we are removing the blog.
Note that this only lasts about six months and is only meant for urgent cases, to give you time to implement one of the permanent fixes below.
- Delete the page or make it return a 404. If the page is no longer active, Google will automatically remove it from the index the next time it crawls that URL. Be wary of soft 404s: showing a “page not found” template for URLs that don’t work while still returning an HTTP status of 200 (which means a page was found). Google can often tell, but not always, and it’s best to avoid confusion by making sure your not-found page actually returns the 404 status (i.e. not found).
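One way to check for a soft 404 is to look at the status code your “not found” page actually returns, rather than what it says on screen. Here’s a small self-contained sketch using Python’s standard library; the toy server and its paths are invented purely to demonstrate the difference:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotFoundHandler(BaseHTTPRequestHandler):
    """Serves the same 'page not found' HTML at every path, but only
    one path returns the correct HTTP status code."""

    def do_GET(self):
        if self.path == "/correct":
            self.send_response(404)  # a real 404: crawlers will drop this URL
        else:
            self.send_response(200)  # a soft 404: error page, but "found" status
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Page not found</h1>")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), NotFoundHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

def status(path):
    try:
        with urllib.request.urlopen(base + path) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # urllib raises on 4xx/5xx; the code is what we want

real_404 = status("/correct")  # 404 — Google treats this as gone
soft_404 = status("/missing")  # 200 — looks fine to Google despite the error page
server.shutdown()
print(real_404, soft_404)
```

Both URLs show visitors the exact same error page; only the first one tells Google the page is actually gone.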
- Redirect to another page. A 301 redirect tells Google that the URL has moved permanently to a new URL. When Google next crawls your URL, it will follow the redirect and know the page has moved. This is usually the best option for merging duplicate content.
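How you set up a 301 depends on your server or platform. As one example, if your site runs on nginx it could look like this (the URLs here are hypothetical):

```nginx
# Permanently redirect the duplicate URL to the canonical one
location = /old-duplicate-page/ {
    return 301 /canonical-page/;
}
```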
- Meta noindex tag. This is the best way to tell Google that you don’t want a page indexed while keeping it accessible to visitors; our own privacy policy carries this tag, for example.
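The tag itself is a single line inside the page’s `<head>`:

```html
<!-- Tells crawlers not to index this page -->
<meta name="robots" content="noindex">
```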

To see this tag, Google needs to crawl the page, so make sure you allow it to (i.e. don’t also block the page in robots.txt).
When you need to update Google’s index, the main question is how urgent it is. If the page no longer needs to exist, deleting it and submitting a Google Search Console removal request should get it out of your hair forever. If the page still needs to be publicly accessible, the meta noindex tag is the best solution. If it’s duplicate content, use a redirect. But don’t use robots.txt, or you may need to undo the robots.txt changes and start again!
