Google’s history of differentiating good content from bad content
Just as it is common to come across good and bad people in life, it is common for internet users to come across good and bad content. After Google’s inception, an algorithm was developed as the basis for search ranking and has been used to help internet users find information more conveniently. However, the spread of bad content devalued Google’s platform to a considerable extent; since then, Google has continuously modified its original algorithm, making it rely on more than 200 unique signals in order to consistently differentiate “good content” (which is ranked higher) from “bad content” (which is ranked lower, or ignored). Differentiating good content from bad content involves crawling and indexing both types of content. Google developed robots (collectively called Googlebot) to crawl the web so that content can be categorized as “good” and “bad”, or “acceptable” and “unacceptable”.
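The crawl-then-classify process described above can be sketched in miniature. The snippet below walks a toy in-memory “web” breadth-first and labels each page with a crude repeated-word heuristic; the data, the heuristic, and the 0.5 threshold are all illustrative assumptions, not Google’s actual signal set.

```python
from collections import deque

# A toy "web": page -> (text, outgoing links). Illustrative only; the real
# Googlebot fetches pages over HTTP and feeds them to Google's indexer.
WEB = {
    "a.example": ("A helpful, original article with real depth.", ["b.example"]),
    "b.example": ("buy now buy now buy now", ["a.example", "c.example"]),
    "c.example": ("Another well written and informative page.", []),
}

def crawl_and_index(start: str, web=WEB) -> dict:
    """Breadth-first crawl from `start`, labelling each page 'acceptable'
    or 'unacceptable' using a crude repetition heuristic (an assumption
    standing in for Google's 200+ ranking signals)."""
    index = {}
    queue, seen = deque([start]), {start}
    while queue:
        url = queue.popleft()
        text, links = web[url]
        words = text.lower().split()
        # Crude quality signal: heavy word repetition suggests spammy content.
        repetition = 1 - len(set(words)) / len(words) if words else 0.0
        index[url] = "unacceptable" if repetition > 0.5 else "acceptable"
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index
```

Running `crawl_and_index("a.example")` visits all three pages and flags only the repetitive one as unacceptable.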
Changes in Google’s algorithms that differentiate good content from bad content
After developing its first search engine algorithm, PageRank, in 1997, Google continuously modified its algorithms in a quest to differentiate between good and bad content. We take a look at the major changes (from 2011 to date) that have helped Google differentiate good content from bad:
(i) Google Panda
Google Panda (launched in 2011) was created with the aim of detecting “content farms” and blocking them from appearing in Google search results. Content farms produce bad content: shallow, grammatically incorrect, improperly punctuated information that is overly stuffed with keywords. In a further attempt to differentiate between good and bad content, Google Panda also acted against scraper websites (sites that create bad content by “scraping” original content from existing websites) by demoting them from the upper echelons of Google’s results pages.
(ii) Google Penguin
Google Penguin (launched in 2012) targeted web spam with the aim of decreasing the rank of bad content that violated Google’s quality guidelines on keyword stuffing and intentional duplication of original content from other websites. Google considers that cramming in too many keywords creates a negative experience for site users and makes content incomprehensible. Incomprehensibility and a lack of unique, relevant content are treated as signals for Penguin to lower the rank of bad content.
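Keyword stuffing of the kind Penguin penalizes can be approximated with a simple density check. The sketch below flags text in which one keyword makes up too large a share of all words; the 5% threshold is an illustrative assumption, not a figure published by Google.

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` that match `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

def looks_stuffed(text: str, keyword: str, threshold: float = 0.05) -> bool:
    """Flag text whose keyword density exceeds a chosen threshold.
    The 5% default is a hypothetical cut-off for illustration only."""
    return keyword_density(text, keyword) > threshold

stuffed = "cheap shoes cheap shoes buy cheap shoes cheap shoes online"
natural = "We review several affordable pairs of shoes and compare their comfort."
```

Here `looks_stuffed(stuffed, "cheap")` is true while `looks_stuffed(natural, "cheap")` is false, mirroring the distinction between stuffed and naturally written copy.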
(iii) Google Hummingbird
Google launched Hummingbird in 2013 as a brand new algorithm that reuses parts of older systems such as Panda and Penguin. Hummingbird does not affect SEO, nor does it differentiate between good and bad content. Although Google released the Pigeon update in 2014 and the Mobile update in early 2015, neither was designed to differentiate between good and bad content.
(iv) Other Google Algorithm Updates
Prior to the release of Google Fred in 2017, the Possum update was released in 2016 to improve location-based searches, although it did not differentiate good content from bad. The Google Fred update was released with the aim of lowering the rank of bad (low-quality) content created for the sole purpose of bringing in ad revenue.
It can be observed that, from Panda to Fred, Google has continuously updated its algorithms to make it more difficult, or impossible, for bad content to manipulate Google’s search network.
References
1. Lievonen, M. (2013). Understanding Google algorithms and SEO is essential for online marketers. Tampere University of Applied Sciences, International Business Marketing. Available at: https://www.theseus.fi/bitstream/handle/10024/67859/Lievonen_Marjut.pdf?sequence=1
2. Fishkin, R. (2017). How to Determine if a Page is “Low Quality” in Google’s Eyes. [Weblog] Posted August 25, 2017. Available at: https://moz.com/blog/low-quality-pages