
SEO: 2 years of big changes


Since the beginning of summer 2013, we have seen big changes being made to Google's algorithms and, as a result, to SEO techniques and strategies.

The release of Hummingbird

Before that, it had been Google Panda, a filter that prevents low-quality sites and pages from ranking well in the search engine results pages (SERPs), rolling out monthly in multi-week updates. A year later, the Knowledge Graph, the knowledge base Google uses to enhance its search results, expanded remarkably: its presence grew by 50.4% across the enormous data set held by MozCast, and more than a quarter of all searches showed some kind of Knowledge Graph entry.

Then, at the end of August 2013, Google started using Hummingbird but waited until the company's 15th anniversary, on 26th September, to announce the change. This was a brand new algorithm, not just an update to the older one; however, some have compared it to the earlier Google Caffeine – a search architecture from 2009, designed to return search results faster.

Caffeine also had to adapt to perform different types of searches in order to accommodate the emerging importance of new services and social networks, especially the two leaders, Facebook and Twitter. Like Caffeine, Hummingbird was designed to cover emerging needs, benefiting more modern forms of search in which the user asks Google a question. This produces a cleaner, more definite result than the old method of entering keywords into a search box. This new behaviour has been driven mainly by searches through smartphone personal assistants such as Siri, Cortana and Google Now.

The importance of Google's update names

So, what's with all the major Google updates being named after cute black and white animals? Until the launch of Hummingbird, its predecessors had been given the names Panda and then Penguin, although those updates hadn't changed the whole algorithm, only updated part of it.

Google's name choices are cleverly conceived to give its crawler a much friendlier feel than the scarier names that immediately come to mind, such as 'spider'. By choosing the hummingbird, Google has at once made its crawler more pleasing and less frightening. In fact, the main reason Google picked up on the black-and-white animal 'theme' was the SEO community's talk of Black Hat and White Hat techniques.

In this respect, the names described exactly what the algorithm was going to do: tag the on-page and off-page SEO factors as White (conforming to the search engines' guidelines, with no deception involved) or Black (attempting to improve rankings in ways the search engines disapprove of, or that involve deception).

Hummingbird is more than a meticulous colourful bird

With the introduction of Hummingbird, Google has completely changed its 'theme'. Previously we had Caffeine and now we have a meticulous, colourful bird.

[Image: the differences between Google Panda, Penguin and Hummingbird]

Unlike previous search algorithms, which focused on each individual word in the search query, Hummingbird considers not only each word but how those words combine to form the question as a whole. That is to say, the whole sentence, conversation or meaning is taken into account, rather than simply particular keywords. This makes search more 'conversational'; think of typed queries such as 'where is the closest pizzeria' or 'where can I buy a bus ticket'.

Traditionally, a search engine would pick out words like "pizzeria", "buy", "bus" and "ticket" to give you general information on the topic. That started to change in the summer of 2013, when Google began returning more and more relevant results. A query of "where is the closest pizzeria" will now produce relevant restaurant listings with pictures, reviews and a map with a pin for each business.

[Image: Google results for the query "where is the closest pizzeria?"]

The Knowledge Graph and the introduction of "conversational search"

Google has been working on this change to its search architecture since 2012 and possibly even before that. In May 2012, Google rolled out the Knowledge Graph, a SERP-integrated display providing complementary information on certain people, places, and things.

[Image: the Knowledge Graph panel for Marie Curie]

A year later, at the Google I/O conference, Mountain View personnel demonstrated how the Chrome browser could hold an actual conversation with its users: a significant improvement, with the Chrome voice search feature now speaking back to you.

[Image: Google conversational search showing more info on the Taj Mahal]

It should also be mentioned that in May 2012 Google started returning direct answers at the top of search results pages, thereby reducing the SERP "blue links" and making results increasingly informative. This process was initially supported by authorship mark-up, before that was phased out and removed completely.

Pigeon: a local update named by Search Engine Land

As summer 2014 progressed, rumours of a new update affecting local searches overwhelmed the SEO scene. Google has never named this update, but Search Engine Land dubbed it Pigeon. The update changed how Google interprets location cues, and it has finally produced closer ties between the local algorithm and the core algorithm, the better to reflect a query like "where is the closest pizzeria".

Local searches are no longer triggered by local modifiers alone, but also by our geo-location. Google is able to understand where you are, how to fulfil your needs, and the correlations among the words you use. Consequently, if we query just "pizzeria" instead of "where is the closest pizzeria", Google will show a listing of pizzeria restaurants at the top of the SERP and, to the right-hand side, the Knowledge Graph entry on pizza.

[Image: the SERP for the query "pizzeria"]

A query as simple as "pizzeria" does not reflect how search is meant to be used today; it demonstrates the way people searched in the previous decade. Faced with it, Google tries to put together all the information available to answer "pizzeria": restaurant listings, website links, news links and the Knowledge Graph. On its own, "pizzeria" is too vague a keyword. A more specific query such as "what is (a) pizzeria", by contrast, will show a direct answer and links to definitions of the word.

[Image: Google's direct answer for "what is a pizzeria"]

The Knowledge-Based Trust (KBT) path

While the number of direct answers, or "rich answers", used to supply a quick response to a query is constantly increasing, the magazine New Scientist reported that a Google research team had published a paper outlining a search architecture based on facts rather than backlinks. The paper discusses using Knowledge-Based Trust (KBT) to determine web page quality by looking at how accurate the facts on a given page are.

The facts on a page, alongside the page's reputation across the web, indicate the reliability of that page. Backlinks also add weight, as they are comparable to votes for the reputation and value of the page's information. However, this paper should not be viewed as a Google development roadmap; it is purely part of the ongoing research at Mountain View. Having said that, it does fit the pattern of Google's latest implementations, such as the Schema.org mark-ups and the Knowledge-Based Trust tests discussed below.

Schema.org mark-ups: the "entities" on your pages

The web marketing community viewed the Google Knowledge Graph as a menace because, in most cases, it and direct answers do not require a click-through to a website, causing a downturn in click-through rates (CTR). However, it also opens up an opportunity: it is one of the very first steps towards a search engine based on entities rather than purely on keywords. In other words, the data on all your pages is becoming increasingly important.

In particular, the agreement between explicit and implicit information plays a key role in this process. You add explicit information using structured data mark-up, while implicit information is delivered through natural language. A well-optimised page sends the same signals from its implicit and explicit entities: consistent meta tags - such as title tags, meta descriptions and keywords - that match the readable content on each page, as sketched below.
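
As a minimal sketch, consider a page for a hypothetical pizzeria (the business name, description and copy are invented for illustration). The explicit signals in the head and the implicit signals in the body assert the same entity and the same facts:

    <head>
      <title>Luigi's Pizzeria | Wood-fired pizza in Uxbridge</title>
      <meta name="description" content="Luigi's Pizzeria serves wood-fired pizza in Uxbridge town centre, open daily from noon.">
    </head>
    <body>
      <h1>Luigi's Pizzeria</h1>
      <p>Wood-fired pizza in Uxbridge town centre, open daily from noon.</p>
    </body>

When the title tag, the meta description and the readable content all agree like this, the page sends one consistent message about what it is.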

The main difference is that Schema.org hosts a collection of structured data mark-up schemas; it is not just a field to populate with natural language. This mark-up allows you to define entities and their relationships. Google, however, pulls information from authoritative sources, such as Wikipedia, Freebase, Google Maps and the FDA, to build the Knowledge Graph.
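
To illustrate the mark-up side, the hypothetical pizzeria above could declare itself as an entity using Schema.org's Restaurant type in JSON-LD. The types and properties here are genuine Schema.org vocabulary; the business details are placeholders:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "Restaurant",
      "name": "Luigi's Pizzeria",
      "servesCuisine": "Pizza",
      "telephone": "+44 1895 000000",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 High Street",
        "addressLocality": "Uxbridge",
        "addressCountry": "GB"
      }
    }
    </script>

Rather than leaving the search engine to infer from the prose that the page is about a restaurant at a particular address, the mark-up states the entity and its relationships outright.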

The Knowledge-Based Trust early tests

The Google research team behind the paper has run early tests to see how Knowledge-Based Trust works. According to the team, it looks remarkably promising: "We applied it to 2.8 billion triples extracted from the web and were thus able to reliably predict the trustworthiness of 119 million web pages and 5.6 million websites."

The authors define "triples" as the factual elements found and extracted from a page. They have, however, also pointed out that this wouldn't work consistently across every web page, since not all of them are about entities and facts that exist in a Knowledge Graph-style database. Almost certainly the next big change to search engine architecture will see the KBT working in conjunction with existing signals.
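
As a rough illustration of the idea (these examples are ours, not taken from the paper), the facts on a page about the Taj Mahal might be extracted as subject-predicate-object triples such as:

    (Taj Mahal, locatedIn, Agra)
    (Taj Mahal, constructionStarted, 1632)

KBT would then check each extracted triple against a reference knowledge base; the higher the proportion of a page's triples that agree with known facts, the more trustworthy the page is judged to be.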

To learn more, read the team's paper, Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources (Dong et al., 2015).
