
In Search Podcast: How to Analyze & Approach Google’s Algorithm Updates

Don’t forget, you can keep up with the In Search SEO Podcast by subscribing on iTunes or by following the podcast on SoundCloud!

What’s the single most important thing you can do (or maybe not do) when a major Google algorithm update rolls out?

Summary of Episode 28: The In Search SEO Podcast

Today the great SEO experimenter and innovator Dan Petrovic hits the airwaves with us to explore the deep dark depths of Google’s algorithm:

  • Is there something new brewing within Google’s algorithm?
  • Are Google’s confirmed updates a red herring that distract us from even bigger algorithmic changes?
  • What’s the best way to approach a Google update?

Plus, we look at rank stability trends over the past few years to see if maybe there is something deeper to Google’s recent “quiet” period!

Is Google Planning Something Big for the Very Near Future? [2:44 – 18:57]

During this week’s interview, you’ll hear Mordy and Dan mention that Google’s been sort of quiet lately and you’ll also hear that something ill-defined seems to be brewing.

So are things actually quieter? That’s hard to quantify. 2019 has certainly not produced the same number of blockbuster changes to Google’s SERP features as usual. We do a monthly digest called the SERP News, and it’s been harder and harder to find a long series of impactful updates to the results page, simply because the news has slowed down.

Certainly, there has been a slowdown in the number of tests to the SERP and whatnot. It used to be that you would see a good 2-3 changes reported each week. Now, maybe you see one, if that.

Now, let’s look at SERP feature data trend shifts. It’s actually something Mordy watches very closely, and here, too, he noticed a tremendous slowdown. Outside of PLA and ad shifts, we had some Image Box ups and downs, some video carousel increases, some carousel spikes, but really nothing unbelievable… it’s been a bit quiet on that front for sure.

How about rank? Sure, there have been fewer big ol’ ranking earthquakes, but that doesn’t mean rank is quieter! People should realize that most of these ‘weather’ tools that track rank fluctuations treat ‘stable’ and ‘volatile’ as relative terms.

Compared to some really unstable rankings, moderate rank fluctuations seem like molehills next to mountains. It’s all relative. So if high fluctuations become the new norm, that will be considered “stable” and only when those already unstable rankings become far more volatile will we start to hear about an update rolling out.

In short, just because rank is stable on the weather tools doesn’t mean that rank is really stable, it’s just relatively stable.
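To make the “relative” point concrete, here is a minimal Python sketch of how a weather-style tool might label days against a trailing baseline. The window and threshold are illustrative assumptions, not how any particular tracker actually works:

```python
from statistics import mean

def label_days(daily_fluctuation, window=30, factor=1.5):
    """Label each day 'volatile' only relative to the trailing baseline."""
    labels = []
    for i, value in enumerate(daily_fluctuation):
        # Baseline = average fluctuation over the preceding window of days;
        # the first day has no history, so it serves as its own baseline.
        history = daily_fluctuation[max(0, i - window):i]
        baseline = mean(history) if history else value
        labels.append("volatile" if value > factor * baseline else "stable")
    return labels

# If high fluctuation becomes the new norm, the baseline rises with it,
# and the same absolute movement now reads as "stable".
print(label_days([2, 2, 2, 6]))  # ['stable', 'stable', 'stable', 'volatile']
print(label_days([6, 6, 6, 6]))  # ['stable', 'stable', 'stable', 'stable']
```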

Mordy did a whole study on this topic, taking a look at rank stability from 2016 to 2018, which you can check out here.

Mordy dug into the numbers again, covering a dozen or so niches, and looked at their average position change. The average position change is the average number of positions a site moves up or down the SERP when Google shuffles things around. In other words, when Google decides to rework the rankings for a specific keyword… is site #3 moving down to site #5, or is it falling, on average, to #100?
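To make that metric concrete, here is a minimal Python sketch of the calculation, assuming two ranking snapshots for the same keyword. The sample data and function name are illustrative, not the actual methodology behind Mordy’s study:

```python
def average_position_change(before, after):
    """Mean absolute rank movement for URLs present in both snapshots."""
    moves = [abs(after[url] - before[url]) for url in before if url in after]
    return sum(moves) / len(moves) if moves else 0.0

# Hypothetical rankings for one keyword, before and after a reshuffle.
before = {"site-a.com": 3, "site-b.com": 7, "site-c.com": 12}
after  = {"site-a.com": 5, "site-b.com": 7, "site-c.com": 8}

print(average_position_change(before, after))  # (2 + 0 + 4) / 3 = 2.0
```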

From 2016 on, the average position change has trended upward. In January 2016, sites moved an average of around 2 positions. In January 2017, the average was around 3.5 to 4. In January 2018, it was a bit over 4 and moving towards 4.5 positions.

During the Medic Update, some niches were pushed towards an average position change of 5.

Then, after the Medic Update, something weird started to happen: the numbers started to drop. By January 2019, we were already hitting numbers around 3 position changes on average. The trend data is very clear, and you can see a vivid downslope; the average number of positions started to fall back down. It has since spiked a bit, what with the Google algorithm updates in March and so forth, and even apart from those it’s gone up a little. At the start of May, the average number of positions sites tended to move was about 3.5. That’s still well under the 4.5 we were seeing.

To put this in one simple sentence, rank is a bit more stable, at least in terms of this one metric.

Which is weird, because with machine learning you would think there would be constant changes and recalibrations, but there haven’t been, or at least not to the extent seen in the past.

Okay, so here comes the conjecture.

Mordy thinks Google’s figured it out. In 2016, rank became far more unstable, which makes sense: RankBrain came into the picture and started to help Google figure out what’s relevant and what’s not. The problem is that it’s a machine, it needs to learn, and Mordy thinks it’s taken about 3 years for it to learn.

In other words, Mordy speculates that we’re at the point where Google’s machine learning has learned enough not to have to undergo extensive recalibrations. It’s adjusting all the time, but not with the same shocking swings seen in the past. The adjustments it’s making are becoming more and more refined, and in the process there are fewer and fewer bumps in the road… or in our terms, fewer position moves on average.

Hence the relative quiet. But that does not mean nothing is brewing.

Rather, what Mordy thinks is happening is that one milestone begets another. As Google and its machine learning have reached a certain milestone of stability, it puts other milestones in reach.

So let’s go full-on crazy here and suppose that Google is like the Marvel Cinematic Universe: one phase begets the next phase. If Google has reached a certain pinnacle via its machine learning, then the next pinnacle is in sight; it allows Google to reach for the next mountaintop.

Mordy thinks Google is entering a new phase. That the quiet on all fronts, be it the number of SERP changes, SERP feature data trends, or Google algorithm updates, is the calm before the storm.

Think about all of the crazy Google bugs that have been cropping up almost endlessly. When do you have bugs in a system? When you’ve just built something new and haven’t yet worked out the kinks.

If you want to go full-on conspiracy theory, you could argue that the changes to Search Console are part of paving the way for a new paradigm. After all, a new construct needs some new reports, does it not?

Look at the new mobile SERP with its ad label that blends 100% into the SERP. They even changed the color of the URL so that it all blends together; the URLs, like the ad label, are now black. Sometimes an external change is indicative of an internal change, which is what Mordy thinks is happening.

Long story short, Mordy thinks there are a lot of signs that something is changing in a big way. That something ill-defined is brewing.

Which is how you have “quiet” with a tinge of foreboding!


How to Analyze and Approach Google’s Algorithms Updates: A Conversation with Dan Petrovic [18:57 – 51:19]

[This is a general summary of the interview and not a word-for-word transcript. You can listen to the podcast for the full interview.]

Mordy: Today we have an SEO all-star for you. He has spoken at every search conference you can possibly think of, he has written about every search topic you can imagine, and he is the managing director of Dejan Marketing out of Australia. He is Dan Petrovic! Welcome!

Dan: Pleasure to be with you.

M: So, before we start, I have to ask you, from one bearded man to another, how do you get that perfectly awesome chin strip?

D: The simplest possible method I use is the usual plastic disposable shaver and not too much fuss. Maybe I’m talented because I used to paint in school.

M: You must be talented as I definitely would have messed that up.

Let’s start with this: where do you stand on the whole E-A-T debate? I don’t mean with regard to the exact elements of the Quality Rater Guidelines being present within the algorithm. Rather, do you think that there are general, yet strong, similarities between what’s been added to the guidelines and what Google is capable of algorithmically?

D: The two things are vastly different. The Quality Rater Guidelines are there to instruct the raters to provide the most useful input so search engineers can evaluate their output; Google’s algorithms generate the output for the user that the raters evaluate. So the two are totally different and cannot be compared. On one side, we have the machine learning algorithms and whatever else Google uses to produce its search results, and on the other, a set of guidelines that helps Google get the most value out of its rating team.

I don’t think people should be obsessing over these guidelines. There’s a reason that Google “leaked” the guidelines; obviously, if it were a protected asset, it wouldn’t have leaked. It’s not like Google’s algorithms leaked. Perhaps the first iteration was circulated without permission, but I think Google embraced it and is now using it for PR. They’re sending a great message: have great content, be accessible and crawlable, and everything will fall into place.

M: That’s probably the best answer I’ve heard to that question.

Let’s get a little “mystical.” Something is in the air. Something has changed, something has evolved algorithmically in the more recent past in my opinion. Do you get the sense that something significant, yet ill-defined (at this point) has entered the algorithmic fray? If so, what do you suspect is behind this undercurrent?

D: In fact, I’ve also had this weird sensation. We search so much that we know what to expect from Google. I have this very long, structured query that I run every day, and interestingly, its results keep changing.

One thing I noticed is that the results vastly changed after Hummingbird: instead of showing search results, Google shows search vectors, directions for users. This creates a problem for more complex queries. Google is serving users what it thinks they want and not what they actually want. So if you have a power user on Google who knows what they want, with a very structured query and very set expectations, Google will start ignoring the query and dropping terms.

This is along the same lines as Microsoft’s Clippy. Do you remember Clippy? You would start writing something in Microsoft Word and Clippy would pop up and say, “It looks like you’re writing a letter.” And I never write letters, only assignments or documents. Similarly, Google keeps suggesting to users what they are doing. One of my suspicions is that the Google we have now is what I call “Google Lite,” a Google that spares resources, that doesn’t show too much. It’s heavily optimized to save Google time and resources. And it’s a very pushy Google. It tries to steer you in a particular direction in your research, which can be annoying for power users, but I’m sure for 90% of the user base it might be quite useful. It forms the machine equivalent of an opinion of what a user is after and offers that. It is trying to be helpful, but it can annoy a portion of its users, mainly us.

M: Do you think that it’s ultimately going to change? Do you think they’ll find a way to build in more resources and go in another direction with it, or are they going to find a way to show exactly what you want as you hone in?

D: It’s capitalism: minimize cost. Google has a great product, but I think its objective is to be just good enough, better than everybody else, so it can sell ads and minimize competition. So unless Google gets a serious competitor, the status quo will stay. I do have my fingers crossed, though, because a serious competitor would be a great thing for the user. We would see better quality in Google’s results, a lot more research, and a lot more innovation. And it’s not that I think they’re resting on their laurels, as they are still quite agile and innovative, but the resource saving is here to stay.

M: Let me dive into something more recent. The March 2019 Core Update took place almost exactly a year after the first of the confirmed broad core updates. Coincidence?

D: Yes, I believe it’s a complete coincidence. Either that, or there’s an internal reason for it to be released on that date like for a technical advantage or an internal schedule. Otherwise, I think it makes no sense to schedule it for once a year.

M: Right, the only thing it could be is if there was a certain calibration set up at certain intervals. I am curious if there will be an update in August again. There wasn’t one in April, so my theory is already a little shot, but here’s hoping.

There are people who believe that the March update was somehow related to the Medic Update. What’s your take on that? Do you think there is an essential relationship?

D: The March update wasn’t as strong as the Medic Update. From the data I see, Medic wasn’t a particularly strong or exciting update. If you look on Algoroo, our Google algorithm tracking tool, you’ll find other dates in the past five years almost as big as Medic that almost no one talks about.

The second half of 2016, the second half of 2017, and the first half of 2018 saw HUGE changes with the results varying from day to day. And if you compare those periods of time to the past four months you’ll notice the past four months have been boring.

The biggest spike I ever saw was on September 14th, 2017, on mobile. But nobody is analyzing it. We don’t know what happened.

M: Do you think it’s imprudent to focus so strongly on these individual updates? That is, there is a much larger algorithmic context that you can place a given update, confirmed or unconfirmed, within. Does that make honing in on a “confirmed” core update a bit of a red herring?

D: The only thing we can tell from these multi-day rank fluctuations on search trackers is that something took place; beyond that, it’s anecdotal evidence. So for me, I look at this whole process in a binary way. Was there an update? Yes or no.

I have three categories of Google events that I use with my own staff. The first is global events: you try to correlate and understand why there was movement in organic traffic and rankings, and it was Google that changed something. I know that it wasn’t me changing a title tag, 301ing a page, or getting some new links.

The second is automated events, where I didn’t specifically do anything, but something was detected in my systems: a broken link, a 404 page, a 301 redirect, somebody changed the page or its content, or we gained some links.

So let’s say, we gained six new links and suddenly, two weeks after that, we gain rank. So I’ll check my algorithm checker tool and see if there was an update. Was there an update? Yes or no. And that’s it.

The third level of events is manual annotations. For example, when I optimize the title tag for my page I just plug it in and I can see it in my chart.

So when I try to correlate things I check if I did something or if something happened out of my control. And that’s good enough for me.
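As a rough illustration of Dan’s three categories, here is a minimal Python sketch of what such an annotation log might look like. The field names, dates, and sample entries are hypothetical, not Dan’s actual tooling:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    day: date
    category: str  # "global" (Google changed something),
                   # "automated" (detected by our systems),
                   # "manual" (something we did ourselves)
    note: str

log = [
    Event(date(2019, 3, 12), "global", "Algorithm tracker flagged an update"),
    Event(date(2019, 3, 20), "automated", "Six new inbound links detected"),
    Event(date(2019, 4, 2), "manual", "Optimized the title tag on a page"),
]

def nearby_events(day, window_days=14):
    """Pull annotations near a rank change: did we act, or did Google?"""
    return [e for e in log if abs((day - e.day).days) <= window_days]

# A rank jump on April 3rd correlates with our own link gain and title change,
# not with the global update three weeks earlier.
for event in nearby_events(date(2019, 4, 3)):
    print(event.category, "-", event.note)
```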

M: I love that. One of the things I hate most is diving into algorithm updates. It’s like trying to find a needle in a haystack. What we see is really only a very, very small sliver of what’s actually happening out there.

And I hate doing winners-and-losers lists. When all of these updates land right next to each other, it’s such an easy mistake to say that a site lost or gained a ton of rank because of this update when in reality it’s just a reversal of the previous update.

Do you think analyzing ranking factors, whether in general or even according to niche, is as helpful or relevant as it was even just a year ago? That is, as Google takes a more qualitative, more holistic approach, better understands entities and queries, and leans more on machine learning (however you want to define and describe the current construct), has the “ranking factor” lost some of its relevance vis-a-vis trying to understand what does and does not work?

D: I think one thing the industry should be doing is reading about experiments a lot less and doing experiments a lot more. People know me as the guy who runs experiments and shares the results, and I think experimentation, probing into what works and what doesn’t, should be the mindset of a modern marketer. Even if you’re not doing it for the purpose of disassembling Google’s algorithms, I think our role is to try things and record them. Does it work or does it not? Repeat, try again, and again, and again.

One of my most successful articles in recent times was when I ran an experiment and said, “I tried this and nothing happened.” I published results that said nothing happened, and that’s good enough for me, because now others know they don’t need to try it; it didn’t work for me.

As far as ranking factors go, I love that kind of stuff. Keep testing and probing, because any knowledge we get will help our clients. Soon we will have less of a bottleneck, less of a barrier to entry, with machine learning. Once the tools of machine learning are available to us and the marketing industry matures, I think a lot more powerful probing of Google’s algorithm will come, and I’m excited to see what comes out of that.

M: I want to talk to you about authority, site authority, URL authority, page authority… Has the way Google determines or evaluates how authoritative a site or a page is changed?

D: Google couldn’t be clearer that they don’t look at websites; they look at pages. Now, of course, pages in a single website are interconnected and by default form a unit in which all pages benefit from each other. But all the ranking factors and signals are focused on the page level. I believe Google’s website authority is more like website trust, and it’s a lot simpler than we think.

Page authority/relevance is complex, but trust can be put in binary terms: trusted or not trusted. In a recent interview, John Mueller said they try to look at the good stuff on a bad site and avoid the bad stuff on a good site. For Google, a few good pages on a bad site are still worth presenting. But if there’s so much bad that they can’t trust the website at all, they have measures to make sure that site doesn’t appear.

So I would not put too much emphasis on site-wide authority. I would think more in the context of the brand, as Google understands brands, entities, and authors. For example, if you search my name in Google, they will suggest similar personalities. They’ve figured it out. They know who Dan Petrovic is, who Bill Slawski is, and who Rand Fishkin is. So site authority, not so much. Page authority, definitely. But entity authority is a new, exciting, and emerging thing.

Optimize or Disavow It

M: If you could focus on and analyze just one thing, Google’s Quality Rater Guidelines or Google’s algorithm updates, which would you study and which would you leave by the wayside?

D: So I’ve only skimmed over Google’s guidelines because I don’t give them too much weight. And I understand that chasing the exact composition of Google’s algorithm is worthless. So I would choose understanding how Google’s algorithm works, because that’s what my job is. What’s in the Google guidelines is commonplace stuff. My job is to understand how Google’s algorithm works and what makes it tick. And the only way to do that is not to speculate too much (although speculation is good, as it gives you a hypothesis) but to run tests and experiments.

People might say that it’s impossible to break down their algorithm, but I think our experimentation can yield results and uncover things that are actually useful for improving our clients’ websites.

I’ve said it many times, and I’ll say it again: we don’t have the ability to make it rain. We don’t control the weather, but we’re the weatherman. We can predict and prepare our clients for the conditions that are about to hit. So I think it’s useful to look at the algorithm updates, and I think it’s useful to understand what goes into Google’s algorithms so we can prepare for them. But if you compare a simple document like the Quality Rater Guidelines with understanding Google’s algorithm updates through experimentation, it’s quite a simple answer.

M: Yeah, I had a feeling you were going to go that way with this. Hard to imagine you would pick the Quality Rater Guidelines with your background.

I really appreciate it. Thank you so much for coming on the show and sharing all of your wonderful ideas with us today.

D: You’re welcome. It was good fun.

SEO News [54:59 – 01:00:54]

New Recommendations for Google Ads Optimization Score: Google is now giving you a new way to bring up your Google Ads optimization score. The new recommendations focus on ways to optimize your bidding for better performance.

Hotel Price Insights Right on the SERP!: Google’s hotel price insights are now on the desktop SERP itself. You no longer need to click over to Google’s travel site via the Knowledge Panel.

New Redesign to Mobile SERP: A new design has hit the mobile SERP. The redesign includes a new ad label without a colored background or colored text, favicons as part of the organic results, and a black URL (instead of the normal green).

More Indexing Bugs Hit the SERP: Bugs are still plaguing Google. Aside from the bug that halted new indexing for a short time, Google announced there was another indexing issue unrelated to the initial indexing bug.

SEO Send-Off Question [01:00:54 – 01:03:47]

What does Google buy for their partner’s birthday?

Mordy thinks Google buys its partner MORE SHOPPING PARTNERS! Why buy a gift when you could buy an entire store?!

Of course, a simple birthday cake would suffice as our co-host Sapir believes!

Thank you for joining us! Tune in next Tuesday for a new episode of The In Search SEO Podcast.

by Darrell Mordecai

Darrell creates SEO content for Similarweb, drawing on his deep understanding of SEO and Google patents.

