Thoughts on Bizible’s Local Ranking Factors
A few days ago Bizible released a survey on the impact of different possible ranking factors on Google’s organic local search results. This survey was based solely on statistics, as opposed to David Mihm’s Local Search Ranking Factors, which is based on the feedback of numerous local SEO specialists. Bizible claims that this is just the first part of the results, focused mostly on the on-listing Google Places factors such as category usage, keywords in different attributes (business title, business description), presence or absence of photos and videos, physical location, review count and rating, etc.
Problems with the methodology:
1) Isolating different potential ranking factors from each other.
The researchers decided to look at “each ranking factor in isolation and accounted for variation in competitiveness across search terms”. While this might be a method that matches the purposes of their study and the way they decided to present the results (in parts), it might jeopardize the accuracy of the findings. Yes, we all agree that “statistical correlation … does not imply causation”, but this cannot be an excuse for providing results based on isolating each factor, when Google’s algorithm weighs all of these factors (and many more) together. There are many potential pitfalls with such an analytic methodology, and they grow when we take into account the fact that the sample surveyed was relatively small – the top 30 results for 22 business categories across 22 cities in the US (overall 477 queries, as they excluded 7 which did not produce local results).
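To illustrate the kind of distortion I am worried about, here is a minimal sketch in Python, using made-up data (not Bizible’s): two “factors” that tend to appear together on the same listings, where by construction only one of them actually drives rank. Looked at in isolation, both appear to help; a simple joint fit separates them.

```python
import numpy as np

# Made-up example data, NOT Bizible's: 500 listings with two correlated
# "factors" (say, has_description and has_category_keyword) where, by
# construction, only the category keyword actually drives rank.
rng = np.random.default_rng(0)
n = 500
has_category_keyword = rng.integers(0, 2, n)
# Listings that add the category keyword usually also add a description.
has_description = np.where(rng.random(n) < 0.8,
                           has_category_keyword,
                           rng.integers(0, 2, n))
rank = 15 - 3 * has_category_keyword + rng.normal(0, 2, n)  # lower = better

# "In isolation": average rank without the factor minus average rank with it.
for name, factor in [("description", has_description),
                     ("category keyword", has_category_keyword)]:
    diff = rank[factor == 0].mean() - rank[factor == 1].mean()
    print(f"{name}: isolated 'rank improvement' = {diff:.2f}")

# Jointly: a two-variable least-squares fit separates the factors and shows
# the description adds almost nothing once the keyword is accounted for.
X = np.column_stack([np.ones(n), has_description, has_category_keyword])
coef, *_ = np.linalg.lstsq(X, rank, rcond=None)
print("joint coefficients (description, category keyword):", np.round(coef[1:], 2))
```

In this toy setup the isolated comparison credits the description with a sizeable “improvement” purely because it travels together with the keyword, which is exactly the kind of artifact a factor-by-factor analysis cannot rule out.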
2) Using the same methodology when researching different types of businesses.
Researching businesses in different niches with the same methodology is already risky for the accuracy of the final results, but it might be even riskier to do so for both brick-and-mortar businesses and service-based businesses. As Bill Slawski points out, Google treats some local searches differently based on the potential intent of the searcher. A search for a “restaurant” should return places that are relatively close to the searcher’s physical location. Using the same logic, and knowing that Google uses the geographical center of an area as the reference point for a query such as “[keyword] + [area/city]”, it could be predicted that the results for a “[restaurant] + [city]” query would return places that are nearer to the geo-center of [city] (the centroid). At the same time, if someone searches for “[wedding planner] + [city]”, they might put much more weight on finding the best in the industry, as opposed to the one that is located closest to them.
3) Researching only major cities.
As the researchers chose to analyze each factor in isolation, it might not have been the best idea to pick only large cities with severe competition. Given that methodology, they might rather have picked some smaller towns with fewer competitors and potentially “cleaner” search results, where analyzing each factor on its own might have returned more accurate and less biased results.
Results that intrigued me:
1) The power of the “at a glance” phrases:
- In blended search results “having the search category or a synonym in “at a glance” was associated with a 0.36 improvement in rank.”
- In pure search results “having the search city in “at a glance” was associated with a 1.42 improvement in rank”; and “having the search category or a synonym in “at a glance” was associated with a 0.85 improvement in rank.”
As I’ve previously written, descriptive terms appear to be very important for local SEO. Bill Slawski notes that these terms might come from both structured reviews (such as those on Yelp, Citysearch, or Google Places itself) and unstructured ones (practically any local citation page on the web that implies sentiment). This is a very strong signal that helps Google understand which “categories” a particular business might be associated with.
2) Google Places factors not so important in blended search:
One thing that could be derived from these results and taken as a near-certain finding, regardless of the problems with the methodology, is that the purely Google Places factors (keywords in title/description, photos on the Place page, Google reviews) matter much less in blended search. However, as we still don’t know for sure how Google determines when to display which type of results (although it is most probably based on user behavior data that no one outside Google has access to), it would be good practice to keep improving the Google Places factors you have control over, even if the SERP for your main target keywords is currently of the blended type (it might change in an hour).
3) Importance of Google reviews:
- In blended search results “having five or more Google reviews was associated with a 0.31 improvement in rank.”
- In pure search results “having five or more Google reviews was associated with a 1.47 improvement in rank.”
As Aaron Bird, CEO of Bizible, noted: “The average Places page has very few reviews and if you are one of the only businesses in the results that shows a star rating, this will likely drive clicks, which will help your ranking.” Therefore, the number of reviews itself might not be the actual factor. It might rather be the additional click-through rate that the 5+ reviews bring, as supported by this case study. This interpretation is also supported by the finding that “getting your fifth Google review significantly helped ranking, although incremental reviews between one and four and above five had a very small impact on ranking”.
4) Some findings that are a result of the methodological problems:
The research found that “the presence of a business description alone did not help ranking, but having the search category in the business description did help”. This could potentially mean that the business description is not a factor at all. However, businesses that do care about their online presence and have invested time (and probably money) in getting high in the local search results would most probably have filled out their Google Places profile 100%, including a keyword-rich description added in an attempt to gain relevance. It seems very possible to me that the apparent importance of photos and of the listing being owner-verified (according to the survey results) derives from the same effect.
I am looking forward to seeing all the parts of the research, as well as the raw data, but I believe that if Bizible proceeds with the same methodology, the results might be seriously compromised, and controversial at best.
Hey Nyagoslav,
That was a great, thoughtful, judicious take on the study (and I know first-hand that your critiques are pretty thorough and tough!).
I liked the first part of the study, though I agree that taking each factor in isolation requires taking the results with at least a grain of salt. Although I think some people have seen it as a contrast to David’s LSRF, I think it reinforces much of what we already know about the ranking factors (like, for example, that distance from the city centroid matters more than being located *in* that town, which is the main reason why many 7-packs consist of businesses from one town and businesses from another). I think its main service is that it has largely reinforced our experience, rather than contradicted it.
One minor issue I have with your excellent critique is the assertion that the “At a glance” snippets themselves affect rankings, i.e. that “descriptive terms appear to be very important for local SEO.” They’re mostly a product of unstructured info on third-party sites and (I believe) structured reviews, as you, I, Bill Slawski, the folks at Bizible, and others know. I’m sure you’d agree that it’s *having* that info on third-party sites and *having* reviews with service- and location-specific wording that can actually help your rankings. My point is that the “At a glance” snippets are an indicator that you’re doing something right, and they do often accompany good rankings, but are not themselves the *cause* of the good rankings.
(Maybe that was what you were saying and I just didn’t pick up on it.)
I also look forward to parts 2-5, and I too would REALLY be interested in seeing the raw data first hand.
Hey Phil,
Thanks for the thorough comment!
Yes, I completely agree with you on reinforcing things that we already know. That is why I decided not to comment on those in the article – they have already been discussed too many times.
However, I believe my thinking on descriptive terms slightly differs from yours. What I think (maybe not well enough expressed in the article) is:
- descriptive terms serve as a factor Google uses to determine which keywords (or synonymous groups of keywords) a particular business is relevant to, based on sentiment
- the descriptive terms that Google displays are not all the terms that Google thinks a particular business is relevant to
- descriptive terms help rankings as long as you are attempting to rank for a keyword (or a synonym of it) that is the same as (or similar to) the particular descriptive term
Therefore, I do believe descriptive terms are a direct ranking factor, as they are the visual expression of what Google considers a particular business relevant to (which is a very important piece of information, as Google does not share such information that frequently).
Thanks again! Would really love to hear your further feedback.
Ah, I think I did misunderstand what you were saying: that the descriptive terms themselves–which Google may or may not extract and turn into “At a glance” snippets–are what can help one’s relevance and ranking. And that the descriptive terms are NOT the same thing as the “At a glance” snippets (I originally *thought* that’s what you were saying, which is what I disagreed with). In essence, both the AAG snippets and good rankings are influenced by the same upstream cause (descriptive terms), as opposed to influencing each other.
If that’s what you were saying, then I think I was on the same page all along, and therefore all is well in the universe.
(Btw, I and I know others would definitely look forward to your take on parts 2-5.)
I’m sorry for the misunderstanding. I guess there is a lot more to be improved with my English.
So … can someone explain to me what 0.25 and 1.47 really mean? What is the scale?
The numbers represent by how much the organic ranking would improve if the potential ranking factor is in place. Note that these are numbers “in isolation”, i.e. this is the ranking increase when a particular ranking factor is in place, with other things being equal. Example (in a perfect world):
Business A has the same name, address, phone number, and all other information on the listing the same as Business A’. Business A would potentially rank 1.47 positions higher than Business A’ if “5 or more” Google reviews were associated with it, as opposed to none for Business A’.
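To make that comparison concrete, here is a tiny sketch (with invented positions, not the study’s data) of how an “improvement in rank” figure like 1.47 could be estimated: for each query, take the average position of listings without the factor minus the average position of those with it, then average those per-query differences.

```python
import pandas as pd

# Invented example positions for two queries (position 1 = best); this is
# NOT Bizible's data, just an illustration of the metric.
results = pd.DataFrame({
    "query":        ["dentist boston"] * 5 + ["dentist seattle"] * 5,
    "position":     [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    "five_reviews": [1, 1, 0, 0, 0, 1, 0, 1, 0, 0],
})

# Per-query difference: average position without the factor minus average
# position with it, so a positive number means the factor is associated
# with better (lower) positions.
def rank_improvement(group):
    return (group.loc[group.five_reviews == 0, "position"].mean()
            - group.loc[group.five_reviews == 1, "position"].mean())

per_query = {query: rank_improvement(group)
             for query, group in results.groupby("query")}
print(per_query)                                  # improvement per query
print(sum(per_query.values()) / len(per_query))   # overall "improvement in rank"
```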
P.S. I’m happy to see that you guys at 411 are following my blog.