Last week I wrote and released an article on how it is possible to kick a website out of Google’s Search Engine Ranking Pages (SERPs), and how to defend yourself against it. I released it under the title How to Defend your Website from the Google Duplicate Proxy Exploit, and under the pseudonym “Sophie White”.
Why the pseudonym? It’s a Search Engine Optimisation (SEO) trick: it’s easier to track where articles go and who publishes them if you use different names. I have several; Sophie White is just one I occasionally use. Also, sadly, sexism is definitely rife on the internet, and some sites only publish articles written by women. Such is life.
Anyway, back to the point: how can it be possible to knock a website out of Google? You’d think Google were pretty on the ball about this sort of thing these days. Is this a new exploit that Google have yet to become aware of? No: Google have known about it for at least two years, and have done nothing despite repeated appeals from SEO experts, Dan Thies for example.
I’m not going to repeat the entire contents of my article here; you can click the link above and read it in its full glory. So here’s the short version:
Google has a duplicate content filter. Every time the GoogleBot indexes a page, that page gets checked against Google’s database to see whether they’ve found it before, and to determine whether it’s a copy of something else. They do this because Google doesn’t want to be serving up duplicate pages all the time. It’d be irritating if you searched Google for “blue widgets” and the first 10 results all had the same page content.
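To give a feel for how a duplicate filter might work, here’s a toy near-duplicate check using word shingles and Jaccard similarity. This is purely an illustrative sketch, not Google’s actual algorithm: the function names, the shingle length of 4, and the 0.8 threshold are all my own invented assumptions.

```python
def shingles(text, k=4):
    """Split text into overlapping word k-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_duplicate(page_a, page_b, threshold=0.8):
    """Flag two pages as near-duplicates if their shingle sets overlap heavily."""
    return jaccard(shingles(page_a), shingles(page_b)) >= threshold
```

A real system works at web scale, so it would use something far cheaper per comparison (fingerprinting schemes like simhash, for instance) rather than pairwise set intersection, but the idea is the same: pages whose content overlaps beyond some threshold get treated as copies of one another.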
Let’s say Google finds a page and determines that it’s a duplicate (how they do that is beyond the scope of this post). What happens is that Google decides to heavily penalise one of the pages. Simple, huh? Here’s the rub: how does Google know which is the original page and which is the copy? What if they get it wrong and penalise the legitimate site in favour of the duplicate? Doh!
And this is where it gets worse: what if a competitor of yours deliberately sets up a duplicate page and gets Google to rank it instead of yours? OK, there’s a raft of things you can do in these circumstances, many involving changing content, engaging lawyers, sending cease and desist letters to ISPs, etc. However, there is one kind of webserver that is built to do exactly this: proxy servers.
Proxy servers are designed to copy the web and serve it up as a local cache, and they are integral to making the internet run faster. If a competitor gets your page cached by a proxy, and then starts using normal white (or black) hat techniques to promote the proxy’s copy of your site, there’s every chance that Google will decide that the big proxy server’s copy of your web page is the true, legitimate one. And pow, there go your site’s rankings as Google boots you down its index.
There are various defence mechanisms you can use to protect your site, and they are detailed in the article I wrote. If this has happened to you and you are completely stuck, my SEO company, Intrinsic Marketing, will be able to help.
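As one example of the kind of defence involved (a sketch of a widely known countermeasure, not necessarily the exact steps in my article): you can verify that a visitor claiming to be GoogleBot really is Google, by doing a reverse-DNS lookup on the IP and then forward-confirming the hostname. Anything that fails the check can be blocked or served different content, so a proxy fetching your pages can never feed Google a cached copy while posing as the crawler. The function names below are my own; the domain suffixes are the ones Google’s crawlers resolve to.

```python
import socket

# Reverse-DNS hostnames used by Google's crawlers.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_google(hostname):
    """Check whether a reverse-DNS hostname belongs to Google's crawler domains."""
    return hostname.rstrip(".").endswith(GOOGLE_SUFFIXES)

def is_real_googlebot(ip_address):
    """Verify a claimed GoogleBot: reverse-DNS the IP, check the domain,
    then forward-resolve the hostname and confirm it maps back to the IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip_address)
    except OSError:
        return False
    if not hostname_is_google(hostname):
        return False
    try:
        forward_ips = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except OSError:
        return False
    return ip_address in forward_ips
```

The forward confirmation matters: anyone can point reverse DNS for their own IP at a Google-looking hostname, but they can’t make Google’s DNS resolve that hostname back to their IP.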
(On a side note: I submitted my article to be published on eZineArticles.com. Sadly, despite my being a Platinum Author at eZineArticles, they refused to publish it, claiming that it might be used for malicious intent. Whilst that’s true, I believe far more good will come from informing the masses about the problem and the solution than from sticking our collective heads in the sand and hoping the bad guys don’t notice!)