In Search Podcast: BERT & SEO – Getting the Story Straight
Don’t forget, you can keep up with the In Search SEO Podcast by subscribing on iTunes or by following the podcast on SoundCloud!
What Is Google Bert, How Does It Work, and How Will It Impact SEO? A Conversation With Dawn Anderson
A big-time guest joins us as The Women In Search SEO Series continues with the one and only Dawn Anderson, who gets into everything BERT and SEO:
- Clear & understandable details on how the heck BERT actually works
- The real story on how BERT’s implications will hit SEO
- What better contextual conceptualization means for content
Plus, we take a look at how the very fabric of the SERP might be changing!
How Google Can Balance Intent Diversity & Personalization
Mordy knows that last week he said he would never ever do a segment on SEO trends for 2020, but he’s doing it anyway as he has thoughts about the future of SEO. Maybe it will come in 2020 and maybe it will come in 2021, but it is coming and it’s a bit offbeat, perhaps funky if you will.
We all know Google is very good at understanding search intent and showing multiple intents on the SERP, but this is going to cause a problem for the search engine. The problem is that Google’s multiple-intent targeting will get too good for the SERP. How is that a problem? Google will keep developing its ability to break down intent into smaller “particles,” but since multiple intents appear on the same SERP, Google won’t have room to manifest that awesomeness. There’s just not enough space on the SERP.
Think about it like this. You do a search and Google determines it can show you a ton of different result segmentations, but what can it do other than show you only a few results for each segmentation? Or think of an entity or a concept. There are so many wonderful aspects of a concept for a user to grab onto, but how many of those aspects can Google show on one SERP? You can only touch on each intent; you can’t fully tailor to any single intent.
Now, Google does have a grip on this within Image Search. Type in a query and you get a carousel of bubble filters at the top of the Image SERP. If you search for flowers you get filters for flower types (rose, sunflower, etc.), bouquets (red flowers, purple flowers, etc.), and so on. You can customize the Image SERP to your liking. And this makes sense since there are so many ways to organize flowers (type, color, arrangement, etc.). The thing is, this level of conceptual segmentation is coming to the main SERP and, in many ways, is already here for numerous queries.
Mordy thinks the type of segmentation you see on the Image SERP is coming to the regular SERP. So instead of seeing flower colors in the filters, you’ll see things like site types. Or take an entity/concept. Let’s say you search for baseball. In Mordy’s fictional world of what a SERP should look like you will see filters for teams, rules, history, places to play, etc. This is all for Google to guide the user to what they’re looking for so they won’t have to do an additional search.
All this, of course, will make rank tracking a total nightmare, but that’s another story altogether. All in all, the idea that you can customize your SERP is… funky.
A cool idea: let’s say you want to see a few different types of sites and want them grouped into sections. For example, you’re searching for a product and you want to see review sites, and then, after reading the reviews, you want to see places to buy. It would be funky if Google let you segment the SERP by site type or purpose, so you’d first see review sites and then, underneath, sites where you can buy the product.
And this isn’t some Twilight Zone universe, as Mordy recently saw a post from Bill Slawski highlighting a patent that lets Google cluster results according to their “entity identity.” So let’s say you search for “cardinals”: all of the results or features about the birds are grouped together, all of the results about the baseball team are together, and all of the results about the football team are together. These are entity/concept clusters, so it’s not crazy for Google to offer customizable clusters, as funky as that sounds.
The point is something that allows user input into the SERP format is coming!
Getting BERT Right: A Conversation with Dawn Anderson
[This is a general summary of the interview and not a word for word transcript. You can listen to the podcast for the full interview.]
Mordy: Welcome to another In Search SEO podcast interview session. She’s a famed SEO speaker, author, and university lecturer and she just might be the smartest person in the Search industry. She is the manager of Bertey… she is Dawn Anderson!
Welcome!
Dawn: Thank you for having me.
M: By the way, I’ve seen a dog pop up on your Twitter feed every once in a while…. So I’m assuming you’re a dog person?
D: Yeah, I have one dog named Bertie and another named Tedward, who’s a Pomeranian.
M: Nice. I’m a major dog fan. I don’t have any now, but when I was a kid I had a few of them.
D: If I could I would have ten Pomeranians, but I can’t because if I did I would never go on holiday.
M: So we can talk about dogs all day, but let’s instead switch to BERT. Not the dog, Bertie, but the NLP. Just so everyone is on the same page, fundamentally speaking, what is BERT?
D: BERT is two things. It’s an acronym for an open-source natural language processing model, and it’s also Google’s recent algorithm update, which uses BERT in search results and Featured Snippets. Anyone can use the open-source BERT (Microsoft, for example, uses it too), but that’s not Google’s BERT.
M: I like to joke that BERT helps Google understand words like ‘for,’ ‘of,’ and so forth.
D: Yeah, and you can have words with hundreds of meanings. It’s not that the meaning changes, but the way the word is used in language can change.
M: I don’t think people realize how nuanced language can be. Even a synonym has various shades of meaning. I used to be a teacher and we had this synonym thermometer: among various synonyms, some words are closer to the word you’re analyzing and some are further away. I think the nuance of meaning is under-appreciated by many people.
D: I think synonyms are easier to disambiguate compared to the likes of homonyms (words that sound alike but have different meanings) or homographs (words that are spelled the same but have different meanings).
M: Right, so I want to go into that, but to make sure everyone’s up to speed, what does BERT stand for?
D: Bidirectional Encoder Representations from Transformers.
M: Let’s try to break that down. What do you mean by bidirectional and why does it make BERT so unique?
D: Let’s say I had a sentence with ten words in it. To truly understand the context of a word in relation to the other words, you really need to see the words to the left and to the right of that target word. BERT can see the words on the left and on the right when building the context of the word in the middle. All previous models used unidirectional training, so the context only came from the words that came before the target word, not the words after it. Some earlier models, like ELMo, looked left-to-right and right-to-left separately and only combined the two at the end, while BERT looks at everything at the same time.
M: By the way, this is how you know it’s based on Sesame Street: you notice there was an ELMo model.
D: Yeah, I saw one model the other day called Big Bird.
M: Seriously? Wow. I’m going to jump back to this a few times. Back when I was teaching we used to have context clues. When a 4th grader didn’t know a word, we would create a sentence with that word and try to use the context in the sentence to build an understanding of the definition. It seems that’s how the bidirectional aspect of BERT works in terms of helping it understand via context.
D: Part of the training process is masked language modeling, which has BERT choose the best word from a list of possible alternatives based on the context of the sentence.
M: Which is basically how humans understand content.
D: Yeah, when we look at a sentence we can usually make a good guess at the context. Humans have a common sense of which words relate to each other that machines don’t have.
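To make masked language modeling concrete, here’s a minimal sketch using the open-source BERT through Hugging Face’s transformers library (purely illustrative; the model name and example sentence are our own choices, not anything from Google’s production systems). It hides one word and asks BERT to fill it in using the context on both sides of the gap:

```python
# A minimal sketch of masked language modeling with the open-source BERT,
# via Hugging Face's `transformers` library (not Google's production setup).
# pip install transformers torch
from transformers import pipeline

# "fill-mask" asks BERT to pick the best word for the [MASK] token, using
# the words on BOTH sides of the gap (the "bidirectional" part).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("She deposited the cheque at the [MASK] on Friday."):
    # Each candidate comes with a probability score, i.e. BERT choosing
    # "the best word from a list of possible alternatives."
    print(f"{candidate['token_str']:>10}  {candidate['score']:.3f}")
```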
M: And this is super important for Google.
D: Another big thing with synonyms is neighborhood words and word embeddings: knowing a word by the company it keeps. Words that keep the same company in a body of text end up living near each other semantically.
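That “company it keeps” idea is easy to see with pre-trained word embeddings. Below is a small sketch of our own (using gensim and a publicly available GloVe vector set, nothing Google-specific) that lists a word’s nearest neighbors in embedding space:

```python
# A small illustration of "knowing a word by the company it keeps":
# words that appear in similar contexts end up as close neighbors
# in a pre-trained embedding space.
# pip install gensim
import gensim.downloader as api

# Illustrative choice of vectors; any pre-trained word-embedding set works.
vectors = api.load("glove-wiki-gigaword-100")

for word, score in vectors.most_similar("flower", topn=5):
    print(f"{word:>12}  {score:.3f}")
# Expect neighbors like "flowers", "roses", "blossom" -- the company "flower" keeps.
```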
M: Let’s jump ahead to the T, for Transformers. What does that mean?
D: The T, for Transformers, is a key part of how BERT works. Let’s say you have the word ‘bank’ and you want to figure out whether it refers to a riverbank or a financial bank. The transformer looks at all of the other words in relation to each other to work out the meaning of the word.
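You can see this behavior in the open-source BERT itself. The sketch below (again just an illustration with Hugging Face’s transformers, not Google’s ranking code) pulls BERT’s vector for “bank” out of three different sentences and typically shows that the two financial senses sit closer to each other than either does to the riverbank sense:

```python
# A rough illustration of how a transformer gives the same word different
# representations depending on its neighbors (open-source BERT, not Google's).
# pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence: str) -> torch.Tensor:
    """Return BERT's contextual vector for the token 'bank' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)
    bank_id = tokenizer.convert_tokens_to_ids("bank")
    idx = inputs["input_ids"][0].tolist().index(bank_id)
    return hidden[idx]

river = bank_vector("We sat on the bank of the river and watched the water.")
money = bank_vector("She went to the bank to open a savings account.")
money2 = bank_vector("The bank approved the loan application.")

cos = torch.nn.functional.cosine_similarity
# The two financial 'bank's end up closer to each other than to the riverbank.
print("river vs money :", cos(river, money, dim=0).item())
print("money vs money2:", cos(money, money2, dim=0).item())
```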
M: So it’s ascribing weight to the various words so it knows what to use to build contextual understanding. In a given document or a given page, how far back does BERT go to understand context? Does it go back an entire paragraph or the whole page?
D: I’m not entirely sure, but what I do know is that it can normally guess the next sentence. This is something other models, like ALBERT, had trouble with, which is partly why Google has been working hard on this new release of BERT.
M: I’m amazed at what BERT can do, but what are the limitations of BERT? What problems are left that Google can’t currently solve with BERT?
D: Language is complex. Google’s not perfect, nor is any search engine. Search engines still don’t have common sense. And every day, around 15% of queries are ones that have never been seen before, or are about news stories that haven’t happened yet.
M: Yeah, and there’s this idea that in the age of machine learning AI will take over, but there’s an enormous gap; I don’t think it will ever get to the point where a machine can think the way a human can.
D: Yeah, there’s only so much you can do until we need the next big innovation. The current solution to understanding context is to build bigger and better models. That obviously has its limits, as it’s not really innovation; it’s feeding the models more data and making them more expensive.
M: It’s kind of funny that the quality of human understanding is so deep and hard to grasp yet we’re so worried we’re going to complete this AI picture so quickly.
D: Look on Twitter and you’ll see people misinterpret what someone else has written, and that’s a human thing to do. Imagine how difficult it can be for a search engine to disambiguate. There are other cues apart from language, and those aren’t easy either. For example, there are dozens of cities around the world named Manchester, so it can be easy to confuse them all.
M: I see your point. I saw an article that claimed that Walmart is coming to Jerusalem, Israel, which wasn’t true as it was really coming to Jerusalem, Illinois or something like that.
D: And what sometimes happens is people build websites with the wrong location parameters, which can lead to ranking issues.
M: There’s been this idea of optimizing for BERT, which is obviously insane. Leaving that aside, is there anything to making sure the language and terms you use on any given webpage are consistent and uniform, so you don’t create discord in the terminology and BERT understands it correctly? Or does it not matter?
D: You can’t optimize for BERT. I don’t think people understand that these are words and this is a machine that’s trying to understand words. People who don’t work in SEO, like those in marketing, think the search engines are cleverer than they really are, and they’re not. There is a limit to what machines can do. You have to assist with a sound structure and watch out for ambiguity.
Optimize It or Disavow It
M: If you were faced with the terrible decision to either “optimize for BERT” or “optimize for RankBrain,” assuming you had to do one or the other… which would you do?
D: I’d say neither. You can’t optimize for either. Just fill your site with great content, so long as it doesn’t cannibalize itself. You should be creating a big impact. We don’t have much time, so think about spending it on things you actually can optimize for.
M: Thank you, Dawn, for coming on the show!
D: Thank you for having me.
SEO News
Google Working on Some Search Console Bugs: A few Google bugs… again. The URL Parameters tool in Search Console is having an issue showing accurate data, while the Crawl Stats report is lagging with “old” data.
Google Slow on Reconsideration Requests: Reports abound that Google is very slow these days with their reconsideration requests!
DuckDuckGo is Now a Default Option on Android: The EU is forcing Google to offer DuckDuckGo as a default search engine option on Android.
Google Drops Table Schema: There are some reports showing that Google may have dropped support for Table schema.
Fun SEO Send-Off Question
What did Google think of the last Star Wars movie?
Mordy thought Google would consider the last Star Wars movie “duplicate content,” as it retreads previous plot points from Return of the Jedi.
Tune in next Tuesday for a new episode of The In Search SEO Podcast.