At the end of last year, we launched Moz Local Insights, and then I read this comment on Phil Rozek’s blog:
“To be honest, we’ve been finding issues in the citation quality — it seems like their directory partners are changing up how they update information and not keeping stuff as consistent as we’d all like. I’d like Moz to focus on that and make that rock solid first — because right now, we really can’t rely on the results we’re getting from InsiderPages and SuperPages (for example). And listings that were ‘fixed’ magically get unfixed weeks later — I don’t think this is Moz’s fault, but I do think they want to have a ‘red alert’ feature when something gets broken that was previously resolved.”
Well, that wasn’t the comment I was looking for when we launched an important addition to Moz Local. But where there’s smoke, there’s fire. We took that blog comment and other feedback to heart, and we’ve spent our time since Insights launched on just this.
So how did we put out this fire? Well first, here’s a bit of backstory:
When Moz Local launched two years ago, the bubbles next to each listing in the dashboard were there to let customers know that Moz Local was working for them. We provided simple transparency into how things were working with each of our partners.
The bubbles are popular, but they also raise two questions:
- Why isn’t my listing accurate on X partner?
- What goes into determining whether my listing is accurate for a given partner?
Better distribution
In February, we started the process of figuring out why our bubble gun wasn’t producing green bubbles. Hundreds of emails with partners, and countless (actually, probably countable) hours later, the process of submitting a listing to a partner and having it accepted has improved dramatically across the board.
The end result of all the hard work put into making listing distribution perfect: Listings submitted to Moz Local see an average accuracy score increase of 28% within 3 months. That’s a large lift across a broad set of listings, independent of business size.
Step 1 was improving the underlying Moz Local distribution with partners. Step 2 was making the visual improvements people needed to better understand their accuracy and what goes into it.
Better visualization
So, we’ve replaced the accuracy bubbles with an overall accuracy score and a bar graph that breaks the accuracy score down by its factors:
- Name
- Address
- Phone number
- Website
- Categories
If you don’t have 100% accuracy, then you can find out which critical component of the listing is contributing to that issue.
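To make the breakdown concrete, here’s a minimal sketch of how a field-level accuracy score *could* be derived from partner listings. The field names, matching logic (exact string comparison), and unweighted averaging are all assumptions for illustration — this is not Moz’s actual formula.

```python
# Hypothetical example: score a listing's accuracy across partners.
# Canonical listing data the business owner submitted (made-up values).
CANONICAL = {
    "name": "Roadside Cafe",
    "address": "123 Main St",
    "phone": "555-0100",
    "website": "https://roadside.example",
    "categories": "Restaurant",
}

def field_accuracy(partner_listings):
    """Per-field share of partners whose value matches the canonical data."""
    scores = {}
    for field, truth in CANONICAL.items():
        matches = sum(1 for p in partner_listings if p.get(field) == truth)
        scores[field] = matches / len(partner_listings)
    return scores

def accuracy_score(partner_listings):
    """Overall accuracy: unweighted mean of the per-field scores."""
    per_field = field_accuracy(partner_listings)
    return sum(per_field.values()) / len(per_field)
```

With two partners where one has a stale phone number, the phone factor drops to 0.5 while the other four stay at 1.0, pulling the overall score down to 0.9 — which is exactly the kind of gap the bar graph surfaces.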
Quick side note: “Listing score” is the same score you see in the Moz Local Check Listing service. It grades a listing on accuracy, completeness of categories and photos, and whether the listing has duplicates. The “accuracy score” measures only NAP+WC (name, address, phone, website, categories) consistency across our distribution partners. So, listing score is a superset of accuracy score and includes other important sites like Google and Facebook.
We think the accuracy score is more representative of the progress your listing has made since it was distributed with Moz Local, and the factors help you see where the gaps are across partners. But we also realized that, as local SEO experts, arming you with the underlying data helps you diagnose problems better. So, we’ve also broken all of the data factors down by partner.
The expanded accuracy factors view allows you to see whether you’re having an issue with a particular attribute of your listing or if it’s a distribution partner that is having issues with your data. For example, if the company you manage does a name change, it wouldn’t be surprising to see the name show as inaccurate across partners. In this example, the fact that I haven’t associated a Foursquare account with the listing helps explain why there are accuracy issues.
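That “attribute vs. partner” diagnosis can be sketched as a simple heuristic: a field that is wrong at *every* partner points to a data change (like the name-change example), while a partner that is wrong on *every* field points to a distribution problem. The partner names and data below are made up for illustration, not real Moz Local partner output.

```python
# Hypothetical example: after a business name change, the old name
# lingers at every partner while the phone number is still correct.
CANONICAL = {"name": "Roadside Cafe", "phone": "555-0100"}

PARTNER_DATA = {
    "PartnerA": {"name": "Old Cafe Name", "phone": "555-0100"},
    "PartnerB": {"name": "Old Cafe Name", "phone": "555-0100"},
    "PartnerC": {"name": "Old Cafe Name", "phone": "555-0100"},
}

def diagnose(partner_data, canonical):
    """Flag fields wrong at every partner (likely your data changed)
    and partners wrong on every field (likely a distribution issue)."""
    bad_fields = [
        f for f, v in canonical.items()
        if all(d.get(f) != v for d in partner_data.values())
    ]
    bad_partners = [
        p for p, d in partner_data.items()
        if all(d.get(f) != v for f, v in canonical.items())
    ]
    return bad_fields, bad_partners
```

Here the name shows up as systematically inaccurate while no single partner is flagged, matching the name-change scenario described above.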
We hope existing Moz Local customers are excited about these changes. There is more work to be done, and I’m looking forward to sharing those improvements soon.