By Tersoo Achineku
The internet can be likened to that one friend who looked like a cross between a human and a troll in high school, but blossomed into a Victoria’s Secret model just before puberty decided to call it a day. Over the past decade, we have gone from surfing the web on mangy landlines to broadcasting lunch live to followers who, to be frank, do not care about our love for broccoli. With an estimated 3 million pieces of content shared on Facebook and Twitter every day, accompanied by 220,000 pictures and another 72 hours of streamed video, it has to be said that the internet has come of age. But here comes the conundrum: how can you tell which content will satisfy your query?
Search engines play a major role in the discovery of content on the internet, and, as expected whenever search comes into play, Google stands out as the most popular. An estimated 80 percent of internet users turn to Google every day, making it the ‘big brother’ of the web. In effect, the world has unconsciously tasked Google with filtering content on the web, since it serves search results to the larger share of cyberspace. But can Google actually tell the difference between good and bad content?
First off, what can be classified as ‘bad’ content? Aside from the usual blacklist of porn, terrorist propaganda, malware and fraudulent transactions, what really counts as bad content? In layman’s terms, bad content contains information that does not meet the expectations of its audience.
Apparently aware of its influence on the internet, Google has, within its capacity, tried to make sure that audiences get the best content. Nobody quite understands Google’s obsession with wildlife and candy (see Android for more information), but over the space of six years the search engine has yoked the task of finding appropriate content to three separate algorithms: Panda, Penguin and Hummingbird. Each released at a different time with its own task, these algorithms have tried, in truth, to filter content across the internet by making sure the information users seek out is of high quality.
• Panda, released in February 2011, made sure that content providers in the habit of stealing or copying information from other sources were demoted in search results, giving users access to content that was original, on point and deemed to be of ‘high quality’ (a toy sketch of how copied content might be spotted follows this list).
• Penguin, which waddled its way in during 2012, cracked down on spammy and manipulative links designed to artificially inflate search rankings.
• Hummingbird (2013) combined the attributes of the two above with an added spice: pinpoint accuracy in interpreting queries. Experts have also opined that Hummingbird was deployed to enhance voice and conversational searches, which means you quite literally get what you asked for.
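For the curious, here is a rough idea of what ‘spotting copied content’ can look like under the hood. The sketch below is a hypothetical, bare-bones Python illustration that flags near-duplicate pages by comparing overlapping word chunks; it is in no way Google’s actual Panda algorithm, whose inner workings have never been published.

```python
# A toy, purely illustrative sketch of duplicate detection: compare two pages
# by the overlap of their word "shingles". NOT Google's actual algorithm.

def shingles(text: str, k: int = 3) -> set:
    """Break text into overlapping k-word chunks ('shingles')."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(page_a: str, page_b: str) -> float:
    """Jaccard similarity of the two pages' shingle sets (0.0 to 1.0)."""
    a, b = shingles(page_a), shingles(page_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "Google released the Panda update in February 2011 to reward original content."
copycat = "Google released the Panda update in February 2011 to reward original content!"

# A high score suggests the second page is a near-copy of the first and, in
# this toy model, would be a candidate for demotion in search results.
print(f"similarity: {similarity(original, copycat):.2f}")
```

In this toy model, a page scoring above some threshold against an already-indexed page would simply be ranked lower; the real systems are, of course, far more sophisticated.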
A few years ago, it would have been a mammoth task for Google to differentiate between good and bad content, but with the current advances in its algorithms, it has to be said that users can sleep peacefully knowing that Google will almost always serve the right results to their numerous queries.

