Roundup of Google updates from September 2020
Autumn has officially arrived as people and businesses wake up to another September. Further Covid-19 restrictions have been put in place by the government as infection rates continue to climb.
The hospitality sector has taken the hardest hit to date, and more voices have expressed anger at governmental advice for people to stay at home. In the meantime, the migration of business online has been steadily growing, and more business owners are seeking expert advice to support their new online endeavors.
In this climate, Google gurus and SEO agencies have inevitably grown in popularity, and their sought-after advice has proved valuable for business growth.
For the search engine world, September was business as usual, with plenty of new updates and insights that helped the SEO community better understand how Google values the relationship between valuable, relevant content and the user. This is reflected in many of the month's developments, which focus on new types of structured data and on featured snippets that answer search queries with information tailored to time and location. Google also announced new measures to protect search results from manipulation and cast further light on recovery from core updates and manual penalties.
2nd Sept – Google’s Mueller weighs in on a Twitter discussion about ideal web content
Google’s John Mueller joined an energetic Twitter discussion about the optimum way to publish content.
The Google Webmaster Trends Analyst effectively debunked long-standing myths about content size, such as word count, and clarified what a publisher’s main focus should be.
The discussion revolved around the relationship between word count and what is considered comprehensive content, and whether word count is a ranking factor at all.
The discussion was sparked by a Twitter user who asked a question about competitor research:
The question stems from the common perception that competitor research essentially means reviewing the top-ranked pages in the SERPs, creating content with the same keywords, and matching the word count of those top-ranked pages.
The observation that top-ranked pages average over 1,300 words of content, with the number-one result often sporting 8,000 words or so, has led many of us to assume that word count matters when it comes to top rankings.
Until John Mueller responded with the following:
The Twitter user then elaborated on his point:
So, what is the deal here? Google seems to share a fair part of the blame for this confusion, as the word “comprehensive” has been mentioned countless times in its guidelines in reference to web page content.
- Google SEO Starter Guide:
- Google Quality Raters Guidelines:
The idea of “comprehensive” content comes up numerous times in Google’s Quality Raters Guidelines, so it makes sense that publishers have taken the instruction seriously. However, this seems to have produced plenty of examples of excessively long content that stopped ranking.
After all, comprehensiveness is about including all the relevant information and aspects of a given topic. Yet there is a fine line between comprehensive content and overly wordy content that ultimately loses rankings.
The Twitter thread became even more interesting when a participant claimed that the BERT algorithm gives preference to lengthy content.
John Mueller went on:
Until one of the participants nailed it:
The key takeaway of the thread came a little earlier, when Mueller confirmed that word count is not a ranking factor, but that it’s fine to use word count as a guideline if it results in better content.
According to Mueller’s replies, the bottom line is that word count is not a ranking factor, even though Google’s SEO Starter Guide and Quality Raters Guidelines imply otherwise through their emphasis on comprehensiveness. Mueller also provided another great insight: “not all pages need to be comprehensive”, and sometimes people just need a fast, simple answer.
The seemingly conflicting stances between Google’s guidelines and John Mueller’s comments suggest that the SEO Starter Guide could use a little more clarification.
4th Sept – Google’s John Mueller gives insights into core algorithm update recovery times
On 4th September, during Google’s Webmaster Central office-hours hangout, John Mueller responded to a question about recovering from a core algorithm update.
For background, many sites hit by past core updates recovered only at the next core update, despite making improvements in the meantime.
Based on this observation the question was very simple:
John Mueller explained that the main objective of Google’s core algorithm is to determine the relevance of search results. A site doesn’t have to wait for the next update; it should keep working on improvements, and things will get better over time.
It’s true that in Google’s early days (the early 2000s) algorithm updates happened roughly once a month, and sites had to wait a whole month to see how their improvements were received.
Yet Mueller confirmed that the algorithm is constantly refreshing and the index always reflects the latest changes. He insisted that sites focusing on improvements can start gaining traction even before the next core update.
9th Sept – Google’s SEO Mythbusting: “Is more content better?”
As the episode’s title suggests, SEO Mythbusting regular Martin Splitt discusses with Lily Ray of Path Interactive whether more content is better.
The pair busted myths about whether more content is better, underperforming content, word count as a ranking factor, and more.
Ray’s first question touched on the issue of updating the same content each year versus creating new pieces.
If a publisher creates content about the same topic every year, should they just create new articles or update old, existing ones?
Splitt said that if there are only incremental changes to be made, updating existing articles is the way to go rather than creating similar new content, which helps avoid duplicate content issues.
How much content should I have and to what extent does this help my performance?
Martin suggested that “rambling on” with article after article won’t get you anywhere, and that producing lots of content regularly only helps industry blogs where new information is constantly coming out.
Another point discussed was whether producing new content helps performance on Google.
Splitt responded that publishing new content frequently is not a site-wide ranking factor per se; what matters is keeping your blog updated with industry news that is relevant to the user and helps visitors understand your content better.
Does underperforming content bring down the overall trustworthiness or authority of the website?
Underperforming content as such will not necessarily affect the way Google sees your site.
Martin said it depends on why the content is underperforming; if it’s spammy or thin content, that would most probably impact your site negatively.
In either case, Splitt advised that underperforming content is a good opportunity to reassess: decide whether to update it or take it down.
10th Sept – Google announces new measures to protect search results
Google deploys measures to protect information quality in search and news results.
Google announced that it has developed an intelligence desk that monitors and identifies information threats.
Google’s Intelligence Desk aims to make sure its systems function properly for every query people search for. To achieve this, Google has significantly improved the way it delivers information during breaking news and crises.
Google’s algorithms can now recognise breaking news within minutes, rather than the 40 minutes previously needed. Google expressed confidence that its automated systems are accurate and surface the most authoritative, relevant information available on the web.
Google has partnered with Wikipedia, health organisations and governmental agencies, and is in a position to provide accurate information through Knowledge Graph panels.
In the past, Wikipedia information has repeatedly been manipulated to present false or misleading content. Google’s team can now detect 99% of Wikipedia vandalism cases and act quickly.
11th Sept – Google experiences indexing issues with new articles
Google confirms an indexing issue that affected the appearance of new articles in ‘Top Stories’.
The issue was fixed quite promptly within the same day but content restoration took a few hours.
Google proactively identified the issue and acted quickly updating SEOs and webmasters as soon as the problem was resolved.
Google spotted and fixed the error before people started asking questions on social media, which is a rare occurrence: usually the Twitter community sparks such discussions well before Google reports an issue.
News publishers likely noticed the issue in their traffic reports, especially those whose content appears in ‘Top Stories’.
14th Sept – Google’s John Mueller: Don’t expect same rankings after manual recovery
On the latest Google Webmaster Central office-hours, John Mueller advised site owners to manage their ranking expectations after recovering from a manual penalty.
The question was posed by a site owner who kept losing search rankings. Even though the penalty was lifted after hundreds of backlinks were removed as part of the recovery, the rankings never returned.
Mueller said that more information about the site would be needed to properly understand the problem.
He then explained the difference between search rankings before a manual penalty and search rankings after recovery.
When Google imposes a manual penalty on a website, it means the site was ranking artificially, i.e. it employed unacceptable tactics to inflate rankings. This is why, after recovery, the site ranks on completely different terms.
The more a site has manipulated search rankings, the more severe the changes will be. It’s unrealistic to expect rankings to return to their previous levels right after penalty recovery, especially when they were achieved artificially. It’s not impossible; it will just require more work and time.
15th Sept – John Mueller: Keywords in domain name aren’t needed
In that Tuesday’s AskGoogleWebmasters session, John Mueller debunked a common SEO myth that is actually quite self-explanatory.
John Mueller advised that having a keyword in the domain name, including the top-level domain, will not help your website rank for that keyword.
Just because a website has a keyword in its domain name doesn’t mean that it’s more relevant than others for that keyword.
– John Mueller, Senior Webmaster Trends Analyst at Google
John Mueller’s AskGoogleWebmasters video addressed the subject through answering this question:
Does a .jobs domain improve ranking in Google for jobs?
In fact, John Mueller went on to say that using a keyword in your site’s domain name can even make your site look irrelevant as the market evolves over time, and changing a domain name is not easy either. Mueller suggests you are best off choosing a domain name that will last in the long run, rather than a keyword that matches what you offer today.
16th Sept – Google introduces support for regional video structured data
Google announced support for the ‘regionsAllowed’ property, a type of structured data that lets publishers tell Google that a video is specific to a region.
Structured data is effectively a very precise way of telling search engines the attributes and features of the item to be shown in search results; in this case a video, and more specifically a region-restricted video.
Google’s new capability is part of the VideoObject structured data type.
The regionsAllowed structured data property gives the search engine the information it needs to determine in which regions rich results for a specific video should be shown.
The regionsAllowed property affects organic search results, video search results, Google Images and Google Discover.
These are the regionsAllowed structured data property definitions from Google and Schema.org respectively.
There are two kinds of structured data properties for rich results:
- Required properties, which must be present for content to be eligible for rich results
- Recommended properties, which add more information and can improve the result, but are optional
It is recommended to use this structured data property only if there is a good reason for it.
Use the regionsAllowed property only if you want to restrict the regions where rich results are shown, and label each video with the regions where it is allowed.
If you are not looking to target specific countries or regions, don’t use the regionsAllowed property.
The following examples show how to use regionsAllowed structured data.
Video targeting US region:
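A minimal JSON-LD sketch along the lines of Google’s VideoObject documentation; the video name, description, URLs and date here are placeholders, and regionsAllowed takes an ISO 3166 country code:

```json
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example video title",
  "description": "A placeholder description of the video.",
  "thumbnailUrl": "https://example.com/thumbnail.jpg",
  "uploadDate": "2020-09-16",
  "contentUrl": "https://example.com/video.mp4",
  "regionsAllowed": "US"
}
```

The markup would normally sit inside a `<script type="application/ld+json">` tag in the page’s HTML.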
Video targeting the United Kingdom:
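A similar sketch for the United Kingdom; note that the ISO 3166 code for the UK is GB, not UK. Again, the video details are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example video title",
  "description": "A placeholder description of the video.",
  "thumbnailUrl": "https://example.com/thumbnail.jpg",
  "uploadDate": "2020-09-16",
  "contentUrl": "https://example.com/video.mp4",
  "regionsAllowed": "GB"
}
```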
23rd Sept – Google people speak on their relationship with SEOs
Google developer advocate Martin Splitt and Barry Schwartz of Search Engine Roundtable reflect on Google’s relationship with the SEO community.
In this SEO Mythbusting episode, the pair dispelled commonly believed myths and shared insights into how we should interpret advice from Google’s ambassadors.
In fact, as it turns out, we shouldn’t be interpreting what Google people tell us at all, but rather taking it quite literally, without second-guessing.
They discussed a range of interesting topics, such as featured snippets “stealing” traffic, transparency, SEOs’ misconceptions about Google’s updates, and more.
The following are a couple of interesting points of the discussion:
Barry: “so what’s it depend on?”
As most of us in the SEO community know quite well “it depends” is a common response from Google to a lot of related questions.
Without further ado, Splitt went on and named a series of factors:
- Is it a new site?
- Is the site undergoing a move?
- Has there been a change in URL structure?
- What does the site’s server setup look like?
- How fast is the site?
- What is the site’s content like?
- Does the site’s content have a lot of competition?
- Is there duplicate content on the site?
Splitt stressed that the entire process and infrastructure on Google’s end is large and complex, with quite a few factors to take into consideration.
Barry: [Featured snippets] “Because of that, people feel like, why should I write content that I’m getting zero traffic from?”
Barry Schwartz touched on a controversial topic, as it’s not uncommon for webmasters to complain about Google’s featured snippets.
The main idea behind featured snippets is that Google, where possible, displays enough information in the snippet itself to answer the searcher’s query, so the searcher doesn’t even have to click through to the web page.
Splitt suggested that featured snippets can actually lead to more qualified, better traffic, and that this type of complaint usually comes from people whose content isn’t that great.
Splitt also maintained that the fundamental idea of a search engine is to bring people and content publishers together. If users want to follow through, that’s great; however, “zombie traffic” is something we should look into as well.
25th Sept – Google: “Changing web layout can affect rankings”
Even if URLs and content remain the same, updating a site’s layout can affect rankings, Google warned through SEO office-hours host John Mueller.
In the latest office-hours hangout, John Mueller was asked whether changing a web design layout can affect rankings, and he confirmed that it could.
Mueller’s explanation is worth bearing in mind before deciding on any layout changes, as they can impact on-page SEO and, in turn, rankings.
In John’s words, even an SEO-friendly layout change will involve CSS and HTML updates and improvements.
Updating a web design can often be a daunting task but making a web layout more intuitive is a good way to move forward.
29th Sept – Google’s Danny Sullivan explains why time and location matter for the selection of featured snippets
Danny Sullivan from Google published an explainer page detailing how and when Google chooses to employ featured snippets.
Time and place appear to play a crucial role in Google’s decision on featured snippets.
Danny Sullivan stressed the importance of time and place, speaking of the “critical context” that helps Google better understand what people are looking for in relation to when and where.
Sullivan also introduced the concept of “freshness indicators”: signals the search engine uses to identify trending content around a specific topic. The idea is similar to “query deserves freshness”, where a surge of interest in a topic indicates that content can become topical and relevant. Google sees these signals as good opportunities for featured snippets.
Sullivan’s article gives good examples with the trending search queries “orange sky” and “why is it hazy”. Google determined that both were trending in specific locations, and that information automatically gave the queries context.
Google users in California searching for “orange sky” saw featured snippet content explaining why the sky was orange in California, and likewise for why it was hazy.
For the record, these queries referred to the effect of the West Coast wildfires on the sky, visible from both the west and east coasts.
The key takeaways: Sullivan identified time and place as the main factors Google uses to give critical context to spikes in specific search queries, and he introduced freshness indicators as an algorithmic concept.