
Problems with Web Survey Design & An Example from the SEJournal Blog Awards

I love SearchEngineJournal’s Annual Awards. I think it’s terrific that even a small community like search marketing can have its own mini-version of the Oscars each year 🙂 It’s fun, it builds friendly competition, and it inspires those of us who compete to work harder and earn our keep.

However, this year I noticed some particular problems that plague many web surveys and figured it would be worthwhile to point them out. The following are some important guidelines to keep in mind while designing web-based surveys and contests.

Use a Definitive System to Establish Nominations

Some complaints at the SEJ awards centered around the nomination process, which consisted of comments posted to a blog entry. This can be avoided in a number of ways, so long as a systematic, established process is worked out. For example, when Jane puts together the Web 2.0 Awards, she accepts 300-500 nominations, then runs through a few dozen lists of “Web 2.0” sites and identifies those that have an established presence, a certain level of popularity, and fit the criteria.

My suggestion for SEJ might be to attempt to find all blogs that fulfill certain category-specific criteria, whether that be topical focus, subscriber size, PageRank, monthly visits, etc. SEJ could, for example, set the bar for “best link building blog” to be a blog that:

  • Produced at least 3 posts in each of the 12 months of 2007
  • At least 30% of all blog posts were on the specific subject of link building
  • Has in excess of 100 blog subscribers (according to Google or Bloglines subscriber numbers)
  • Has no fewer than 5,000 external links according to Yahoo! Site Explorer (or a homepage PageRank of 4/10)

These aren’t perfect criteria (just examples), but they at least create standards that would give the nomination process a fairer, more even distribution. Applying this same type of systematic control to nominations for any awards or survey will produce better results in the end (and certainly end much of the complaining that plagues this type of content on the web).
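To make the idea concrete, here’s a minimal sketch (in Python) of how category criteria like these could be applied mechanically to a pool of candidates. The blog names, field names, and metric values below are entirely hypothetical, not anything SEJ actually uses:

```python
# A purely hypothetical sketch: apply category-specific nomination criteria
# mechanically to a pool of candidate blogs. Field names, blog names, and
# metric values are invented for illustration.

candidate_blogs = [
    {"name": "Example Link Blog", "posts_per_month": [4] * 12,
     "pct_posts_on_link_building": 0.45, "subscribers": 250, "external_links": 12000},
    {"name": "Occasional SEO Notes", "posts_per_month": [3, 1, 0, 2, 5, 3, 2, 0, 1, 4, 3, 2],
     "pct_posts_on_link_building": 0.20, "subscribers": 80, "external_links": 3000},
]

def qualifies_for_link_building_award(blog):
    """Check the example 'best link building blog' bar described above."""
    return (
        all(count >= 3 for count in blog["posts_per_month"])   # at least 3 posts every month of 2007
        and blog["pct_posts_on_link_building"] >= 0.30         # 30%+ of posts on link building
        and blog["subscribers"] > 100                          # more than 100 feed subscribers
        and blog["external_links"] >= 5000                     # 5,000+ external links
    )

nominees = [b["name"] for b in candidate_blogs if qualifies_for_link_building_award(b)]
print(nominees)  # ['Example Link Blog']
```

The point isn’t these particular thresholds; it’s that any candidate either meets the published bar or doesn’t, so nominations stop depending on who happened to comment on a blog post.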

Don’t Ask Partisan Fans to Rate on a Sliding Scale

This was almost certainly the SEJ Awards’ biggest mistake. In any kind of survey environment that asks for popularity ratings and offers an incentive for inaccuracy (favoring one blog or site over all others), the use of a sliding scale voting system is going to produce badly skewed results.

Here’s an example of how SEJ’s Awards were laid out:

[Image: Example of SEJournal's blog survey layout]

In the above sample (which I’ve re-created from memory, as the survey itself is no longer accessible), I’ve illustrated how the survey was laid out. Although participants could leave any line blank (if, for example, they had never read that blog), this wasn’t clear in the initial instructions and did end up causing some confusion.

As you might imagine, this system creates the antithesis of a positive rating system because of how partisan voters will contribute. If, for example, I wrote a post on SEOmoz asking our readers to vote for us at the awards, you might expect that rabid SEOmoz fans would see how the survey is constructed and rate SEOmoz a “5” and give all the others a “1” to help boost our chances of winning while simultaneously damaging everyone else (I’ve illustrated this using TropicalSEO as an example).

In the blog post on the subject of the “best SEO blog”, for example, you’ll see that 55 voters gave SEOmoz a score of “1,” 47 gave that score to SEOBook, and 27 gave a “1” to SEO By the Sea. I have a hard time believing that this many people truly felt that these sites were of such low quality (particularly SEOBook, which is consistently excellent). The more likely scenario is the one I’ve described above, where partisan voters wanted to help the blogs they cared about through any means possible.

As a survey designer you cannot throw up your hands and simply say “Well, the Internet’s full of @ssholes.” You have to become smarter than the partisan voters and create a system that finds the signal amongst the politics. A good move for this particular survey would have been to use a ranking order – forcing users to rank the blogs listed in order from most to least favorite. With a system like this, little room is left to negatively influence the results:

[Image: SEJournal blog survey redux]

In the example above, the options should ideally be randomized for each different visitor. Participants then fill in the red text areas themselves, ordering the sites from 1-8, which prevents the high-low partisan voting problem presented above.
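As a rough illustration of the randomize-then-rank idea (not how SEJ’s survey software actually worked), here’s a short Python sketch using placeholder blog names:

```python
import random

# Placeholder blog names; in a real survey these would be the nominated blogs.
BLOGS = ["Blog A", "Blog B", "Blog C", "Blog D", "Blog E", "Blog F", "Blog G", "Blog H"]

def options_for_visitor(seed=None):
    """Return the blog list in a fresh random order for one participant."""
    order = BLOGS[:]
    random.Random(seed).shuffle(order)
    return order

def record_ranking(ordered_options, submitted_ranks):
    """Require each rank 1..N to be used exactly once, then pair ranks to blogs."""
    n = len(ordered_options)
    if sorted(submitted_ranks) != list(range(1, n + 1)):
        raise ValueError(f"Each rank from 1 to {n} must be used exactly once.")
    return dict(zip(ordered_options, submitted_ranks))

# One example submission
shown = options_for_visitor(seed=42)
ranking = record_ranking(shown, [3, 1, 5, 2, 8, 4, 7, 6])
print(ranking)
```

Because every rank must be used exactly once, a partisan voter can still put their favorite first, but they can no longer hand every competitor the lowest possible score.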

Craft Clear, Concise, Unimpeachably Exact Questions

This is probably the hardest thing to do when creating a survey (as SEOmoz certainly learned during our SEO Quiz process). Nearly every question is going to have some room for interpretation, but by taking care and using an unhealthy degree of paranoia about potential interpretation problems, you can prevent squabbles like those taking place at Sphinn and SEJ.

For that specific example, rather than saying “Who is the Most Giving Search Blogger,” I might seek to involve the criteria Loren noted into the question itself, perhaps crafting something like “Which of the Following Bloggers Provided the Most Overall Value in Posts through Research, Influence, Coverage, and Openness?”

Questions, in general, should also be goal-oriented, so if the goal is to discover which blogger is most popular, the question should be framed in that way. If the goal is to find out which blogger voters think provides the best content quality overall, then a different approach (and a different question) is needed.

Don’t Declare a Winner with Tiny Margins

The number of survey participants will dictate your margin of error, and in a small survey (with fewer than a thousand total voters), it’s a given that a substantial margin of error will exist. Thus, unless you’re considering the survey participants to truly be the entire universe of judges on the subject (which some contests, like the AP News College Sports Polls or the Oscars, in fact do), I would be hesitant to declare a singular winner unless you have stats showing a victory well beyond the margin of error.
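For a back-of-envelope sense of how wide that uncertainty is, here’s a quick calculation using the standard 95% margin-of-error formula for a proportion. The vote counts are made up, and the formula assumes simple random sampling, which self-selected web polls never are, so the real uncertainty is larger still:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Rough 95% margin of error for a reported share p out of n respondents.

    Assumes simple random sampling; self-selected web polls violate that,
    so treat this as a lower bound on the real uncertainty."""
    return z * math.sqrt(p * (1 - p) / n)

# Made-up numbers: a "winner" holding 35% of 400 total votes
moe = margin_of_error(0.35, 400)
print(f"±{moe:.1%}")  # ≈ ±4.7%, i.e. roughly ±19 votes — a 4-5 vote lead is well inside the noise
```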

For example, in the SEJournal awards, I was given the award for “most giving blogger.” While I certainly appreciate the sentiment, when I look at the voting and see that 2 other bloggers had only 4 and 5 fewer votes than I did, I’d probably suggest a shared title between the top three candidates (Danny Sullivan, Barry Schwartz, & myself).

Be Wary of Referral Sources & Biasing

Online survey software needs to be savvy, needs to track referrals, and needs to map them to entries. While I strongly suspect that the voting at the SEJournal awards was actually fairly balanced, when you’re building a web-based survey, being able to pull out data showing the skews based on referral source is incredibly valuable.

If I were running the SEJournal awards, I think one of the most interesting numbers to see would be the votes of non-partisan referrers (e.g., those voters whose referral source to the blog post or voting page did not include any of the mentioned websites). Comparing that data to the final results might show some fairly serious skewing that one could systematically remove (by not counting votes in categories where the referring site was nominated, for example). After all, in a perfect world, the awards shouldn’t be a measure of who can get the highest numbers of their readers to vote for them, but an actual measure of what the average industry insider thinks is best.
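Here’s a hypothetical sketch of that kind of segmentation in Python. The vote records, domains, and field names are invented purely for illustration, not taken from SEJ’s data:

```python
from collections import Counter
from urllib.parse import urlparse

# Domains of nominated sites (illustrative)
NOMINATED_DOMAINS = {"seomoz.org", "seobook.com", "tropicalseo.com"}

# Invented vote records: each has the voter's choice and the HTTP referrer
votes = [
    {"choice": "SEOmoz",  "referrer": "http://www.seomoz.org/blog/vote-for-us"},
    {"choice": "SEOBook", "referrer": "http://www.google.com/reader/"},
    {"choice": "SEOmoz",  "referrer": ""},  # direct visit / no referrer
]

def is_partisan(referrer):
    """True if the vote arrived from one of the nominated sites."""
    host = urlparse(referrer).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return host in NOMINATED_DOMAINS

all_votes = Counter(v["choice"] for v in votes)
neutral_votes = Counter(v["choice"] for v in votes if not is_partisan(v["referrer"]))

print("All votes:    ", all_votes)
print("Neutral votes:", neutral_votes)  # a big gap between the tallies suggests referral-driven skew
```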


Now a sharp rebuke of myself. Posting something like this after the survey’s already complete is easy and it’s even somewhat reprehensible. After all, if I really knew all this ahead of time, shouldn’t I have alerted Loren and the SEJournal crew when the survey first launched? As is clear from this post, he responds to and accepts criticism quite well! Shame on me for my late timing. I do apologize for that. Nonetheless, I hope it’s still valuable and interesting and will help everyone who’s working in the realm of survey design think carefully about the process.

ADDENDUM: SEOmoz is (no surprise) launching its own survey of search marketing industry demographics (not an awards or contest) next week. Hopefully, we can take some of our own advice to heart! I’ve personally been working with a professional survey design company over the last month, learning tons of interesting things about the process (and please realize that what I’m sharing here is only the tip of the iceberg when it comes to survey design). In fact, I think the following resources might provide even greater insight for survey crafters:

  • Questionnaire Design & Survey Sampling – Professor Hossein Arsham from the Univ. of Baltimore offers insight into survey crafting and interpretation techniques.
  • Writing Good Survey Questions: Examples – from the Berman Blog, some great advice on crafting good survey questions to minimize biases and errors.
  • Violin Duel a Draw for Antique Stradivarius – although it’s not a web survey, note the great care taken to produce solid results, with both blind and sighted testing, and with trained musicians and amateurs alike. Yet, even with all that evidence, no firm conclusion was drawn because the scores were so close.

BTW – No insult or fault is intended towards Loren Baker, whose generous donation of time organizing and promoting the contest is fantastic (as is his sharing of the data reports, without which this post would have been impossible to write). I’m merely trying to illustrate missteps that I myself have taken in the past, in the hope that it can help raise awareness for the future.
