If you own a website and one day you find a larger website ranking above yours in the search results with content identical to your own, your website's search results have likely been hijacked.
- Search result hijacking takes place when someone sets up a subdomain and builds a webpage by copying the original HTML and images from your site. When several URLs serve the same content, the URL with the higher PageRank wins and is the one that shows up in the index.
- Sometimes authorship detection fails: a search engine like Google cannot distinguish your original webpage from the duplicate page that copied your content.
It is therefore up to web developers to defend against search result hijacking by stronger, more authoritative websites by taking the following preventive measures.
1. Internal links – Use full (absolute) URLs when linking to your homepage and other pages on your website. If someone duplicates your content wholesale, the copied pages will still link back to your site, passing Google's PageRank signals back to you.
2. Authorship – Web developers should take care when adding content to a webpage, making its authorship clear so that it cannot easily be passed off as someone else's.
3. Content monitoring – Using Google Alerts or Copyscape, web developers can track mentions of their brand and segments of their content online as they appear. If you notice a high-authority domain replicating your web page, you can request either removal of the content or a link/citation back to your site.
4. Canonicalisation – Many sites will simply replicate your content or scrape a considerable amount of it. The defence is handled at the code level: a properly set rel="canonical" tag (containing a full URL) ensures that Google knows which web document is the canonical version. To protect non-HTML documents from copying, use HTTP header canonicalisation, which is explained in detail on Google's Webmaster Central Blog.
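Steps 1 and 4 above can be sketched in markup. This is a minimal illustration only; example.com and the page paths are placeholders, not URLs from the original article:

```html
<!-- Step 1: absolute internal links. If this page is scraped verbatim,
     these links still point back to the original site and pass PageRank. -->
<a href="https://www.example.com/">Home</a>
<a href="https://www.example.com/blog/original-article/">Read the article</a>

<!-- Step 4: a rel="canonical" tag in the <head> tells Google which URL
     is the canonical version of this content. -->
<link rel="canonical" href="https://www.example.com/blog/original-article/">

<!-- For non-HTML documents such as PDFs, the same canonical signal can be
     sent as an HTTP response header instead of a tag:

       Link: <https://www.example.com/blog/original-article.pdf>; rel="canonical"
-->
```

By contrast, relative links such as `<a href="/">` would resolve to the scraper's own domain once the HTML is copied, which is why the absolute form matters here.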
A professional SEO company can help you implement the right techniques to retain high page rankings for your website and prevent it from being hijacked.