Many local business owners create duplicate business listings without realizing it. Duplicate records can cost your business valuable time and money for reasons you may not have considered:
Consequences of having duplicate listings:
Duplicate listings create customer confusion. If a potential customer is looking for a local service, more than one listing for the same business will only confuse the issue. Having identical duplicate listings isn’t good, but it’s even more harmful to have listings with slightly different information, such as two different phone numbers or addresses. It looks unprofessional, and your real listing may get lost in the electronic void, resulting in a loss of customers.
Duplicate listings will hurt your rankings. Google draws on a variety of sources, such as online yellow pages and local directories, to build its business listing information. If each listing shows different information, Google’s crawlers may not recognize them as the same business, so you may not rank as well as you otherwise would. Some of your listings may even be flagged as spam, and the more of your listings that are flagged, the lower your ranking will be.
Your social presence will be weakened. Many of the local search sites now offer opportunities to share reviews, photos, videos and check-ins. Having your own social presence is crucial to your business. Social signals are emerging as ranking factors, and are almost as important as links used to be. Google approves of businesses that are regularly generating content, but again, multiple listings will create confusion, and your social activity will be spread thin.
How do duplicate listings happen?
They are created from within the business itself. It may happen gradually, over time, if the business doesn’t have a strategy or a plan for dealing with its business listings. Different people may add profiles to various directories without realizing it’s already been done. And if a third-party tool is used to add a listing, the problem becomes even worse.
They are created by the business listing aggregators. Information about the business name, address and phone number is gathered from a myriad of sources. The aggregators can’t always match up the entries with previous listings, so duplicates are inadvertently created all the time.
They are created by the publisher. Local directory publishers see it as their job to improve the presence of a local business in their listings. How many duplicates exist, or how this might appear to Google, doesn’t concern them, so they have no need or desire to remove their duplicates.
How to solve the problem:
Every local search publisher works from dozens of local data sources. These sources are then put through a two-step process: cluster and conflate. The cluster process involves identifying which records in each source apply to a specific location. The problem here is that each source may record the same information differently.
Here’s an example for the same business shown from two different sources:
Source 1 (user generated):
Name: Maria’s Hair Salon
Address: 1234 West Street, #5
Phone: 888-210-1927

Source 2:
Name: Maria’s Hair & Beauty
Address: Suite 5, 1234 West St.
Bear in mind that there will likely be many different listings for Maria’s business. In the cluster process the computer will try to analyze which records are the same. For the human eye, it would probably be immediately obvious that the two above examples are the same business, but a computer would have a lot of difficulty with this, resulting in a duplicate listing. And unfortunately the process cannot be accomplished by humans simply because of the sheer volume.
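To see why this is hard for a computer, here is a minimal sketch using Python’s standard-library string matcher. The records are the two from Maria’s example above; the 0.9 auto-merge threshold mentioned in the comments is an illustrative assumption, not any publisher’s actual cutoff.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough ratio in [0, 1] of how alike two strings are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# The two records for Maria's business, as a clustering program sees them.
name_score = similarity("Maria's Hair Salon", "Maria's Hair & Beauty")
addr_score = similarity("1234 West Street, #5", "Suite 5, 1234 West St.")

# A human sees one business instantly; the scores land well below a
# cautious auto-merge threshold (say, 0.9), so the program keeps both
# records and a duplicate listing survives.
print(name_score, addr_score)
```

Abbreviations (“St.” vs “Street”), reordered address parts, and renamed businesses all drag the scores down, which is exactly why clustering programs leave duplicates behind.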
Next comes the conflation process. This is when the computer decides which name, address, phone number, etc., to show in the listing. This is done by ranking each source at the element level. The data that has the highest rank wins.
Trying to delete information at the source won’t work: publishers don’t name all their sources, so you would have to guess at them, which would be an impossible task.
The solution lies in using a “Hide” or “Redirect” flag at the publisher level. This will prevent duplication at the most important level – the view on the actual publisher’s site. With a hide flag overlay, the duplicate record will be hidden from view at the time of clustering.
This process works differently with each publisher, depending on whether they have a “stable” listing ID or an “unstable” listing ID.
A “stable” listing ID means they keep the same identification for all their listings. This makes the process fairly easy, because you can tell them which listing you want to “hide” by providing them with the ID number.
An “unstable” listing ID means that the publisher changes the identification every time they produce a new listing. So in this case you need to give them a way of identifying the duplicates. You accomplish this by providing the information in an “aka” format, which they can feed into their merging algorithms, like this:
Maria’s Hair Salon, 1234 West Street, #5
AKA
Maria’s Hair & Beauty, Suite 5, 1234 West St.
With this information, the next time the publisher runs a merge, one listing will result instead of two.
According to Yext, incorrect business listing data cost businesses $10 billion in 2013 alone. Fixing your duplicate listings using this method will be well worth your time.