
SEOmoz’s Landing Page Competition Winner & an Interview with Jonathan Mendez on Landing Page Testing & Design

After nearly 11 weeks of running our landing page competition, we have a winner… Paul Robb of CredibleCopy.org, who put together this winning entry. Congratulations to Paul, who wins the grand prize of $1,000, a year's membership to SEOmoz's premium content, a 3-month trial of the Offermatica software and more. Our second place winner, Carlos Del Rio (aka inflatemouse), put together a terrific page, too, and deserves equal recognition (as you can see below). Our third place winner, who finished less than half a percent behind, was David Mihm. Congratulations to all of you!

We had hoped to end the contest nearly 4 weeks ago, but struggled to do so thanks to a heated, close battle between two of the contest's landing pages. Finally, it was Jonathan Mendez of OTTO Digital who helped us bring the contest to a close last week. Here's Jon's email to us:

Congrats! Page 9 is your winner. It has a higher overall conversion rate (over 7% lift vs. page 3) and has outperformed page 3 on a daily basis (on only 9 of the last 29 days did page 3 perform better).

HOWEVER: you need to look at page 3 and see why it is KICKING ASS (41% lift w/ 99% confidence) at getting people to the "choose" step.

[Chart: Daily Conversion Rate Results]

Before Jon’s email, we were actually debating whether and how we could ever finish the competition, as Pages 3 & 9 seemed to be in a constant battle, though Page 3 would only ever “catch up” or get very close to Page 9 and never surpass it. Below, I’ve listed the data collected through Offermatica during the testing period:

[Table: Landing Page Stats from Offermatica]

You can see what Jon means when he talks about how well Page 3 did in getting folks to the "choose" page (where you enter your email or log in to your account, then head to PayPal to pay; this page is identical for all the landing pages).

As Mark Barrera cleverly discerned in his YOUmoz post a few weeks ago, the individual pages are available to view on SEOmoz. I've listed them in the order they appear in the Offermatica data (along with the nicknames we've used internally).

Update, June 19th, 2009: Unfortunately, these files are no longer available. Please see our new landing page for best practices: https://moz.com/gopro/

I wanted to follow up by asking Jonathan Mendez more about landing page design & testing in general, as well as touch on the specific results of the SEOmoz contest, so I asked him if he'd agree to an interview, and he said yes! Here are the results:

First off, Jon, I'd like to thank you not only for the interview, but for taking the time to navigate us through this difficult process of landing page testing. In fact, that brings me to my first question: in your experience, what are the big elements that make constructing these tests challenging?

Thanks Rand. It was a pleasure working with you guys. There are two big challenges in constructing an effective landing page test. One is the test design and the other is the creative differentiation.

By test design I'm referring to selecting how many elements to test: entire pages, parts of a page, A/B, A/B/C/D, or choosing the right multivariate (MVT) array (e.g. 7×2 or 4×3). Most failures I see happen when people try to test too many things at one time. After you have decided what you want to test, the amount of traffic and conversion expected over certain periods of time should determine your test design.
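
To make that traffic-and-conversion arithmetic concrete, here is a minimal sketch of how many visitors each variation needs. The baseline rate and target lift below are illustrative, and the formula is the standard two-proportion z-test sample-size estimate, not anything Offermatica-specific:

```python
from statistics import NormalDist

def sample_size_per_variation(p_base, rel_lift, alpha=0.05, power=0.8):
    """Visitors needed per variation to detect a relative conversion
    lift with a two-sided two-proportion z-test."""
    p1 = p_base
    p2 = p_base * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# e.g. a 2% baseline conversion rate and a hoped-for 20% relative lift:
print(sample_size_per_variation(0.02, 0.20))  # ~21,000 visitors per page
```

At a 2% baseline, even a healthy 20% relative lift needs on the order of 21,000 visitors per page; split the same traffic across a large MVT array and individual cells may never reach significance, which is exactly the failure mode Jon describes.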

The second challenge is creating pages or elements that are as unique as possible from one another. What I've seen over the years is that the greater the creative differentiation, the more likely visitors are to respond to one creative treatment over another. This is important because the net effect is higher confidence levels due to a reduction in the margin of error. This allows us to get results faster (or "fail faster"). Some of the best differentiation comes from testing elements being present or not present at all. In sum, a successful test is usually determined prior to any traffic entering it.
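
That "fail faster" effect is visible in the math: with the same traffic, a bigger gap between treatments drives the z-statistic, and therefore the confidence level, up. A rough sketch using a pooled two-proportion z-test on made-up counts:

```python
from statistics import NormalDist

def confidence_b_beats_a(conv_a, n_a, conv_b, n_b):
    """One-sided confidence that page B's true conversion rate exceeds
    page A's, via a pooled two-proportion z-test on raw counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    return NormalDist().cdf((p_b - p_a) / se)

# Same traffic, small gap vs. large gap (counts are hypothetical):
print(f"{confidence_b_beats_a(100, 5000, 110, 5000):.1%}")  # ~75.7%: too close to call
print(f"{confidence_b_beats_a(100, 5000, 140, 5000):.1%}")  # ~99.6%: distinct pages separate fast
```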

What are some of the less obvious benefits of landing page testing (apart from the obvious increase in conversion rates)? Are there secondary goals and positives that outsiders may not always perceive?

The great part about multivariate testing is that you can learn the factor of influence, as a percentage, that each tested element has on conversion. This provides a tremendous understanding of what drives intent. We've had many clients that have taken this learning and applied it elsewhere, namely in their ad creative and messaging. For example, a recent client learned that the eTrust and Verisign seals had a 98% factor of influence for increasing conversion. Not only did they start to use this messaging more prominently in the remainder of their conversion funnel, they started using it in their offline advertising.
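
Offermatica's factor-of-influence calculation is its own; a crude stand-in for the same idea is each element's main effect in a full-factorial test, i.e. the average conversion-rate change when the element is present vs. absent. All numbers below are invented for illustration:

```python
# Hypothetical full-factorial results: (seal_shown, alt_headline) -> (conversions, visits)
cells = {
    (True, True): (60, 2000), (True, False): (55, 2000),
    (False, True): (35, 2000), (False, False): (30, 2000),
}

def main_effect(element_index):
    """Average conversion-rate difference when an element is on vs. off."""
    on = [c / n for key, (c, n) in cells.items() if key[element_index]]
    off = [c / n for key, (c, n) in cells.items() if not key[element_index]]
    return sum(on) / len(on) - sum(off) / len(off)

print(f"seal effect:     {main_effect(0):+.2%}")  # +1.25%, dominates conversion
print(f"headline effect: {main_effect(1):+.2%}")  # +0.25%, comparatively minor
```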

The other great benefit is found in segmenting your results. Here we can learn about what is important for different sources and behaviors, or even temporally. As you might expect, these factors can vary greatly. For example, looking at results segmented by branded keywords, generic keywords and even match type can yield winning ads, pages or elements that are entirely different from one another. This is where we start moving into behavioral targeting and personalization.
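
As a toy illustration of that segmentation step (the records below are made up; in practice they would come from your analytics or testing-platform export):

```python
from collections import defaultdict

# Hypothetical (segment, converted) records pulled from a traffic log
visits = [
    ("branded", True), ("branded", False), ("branded", True),
    ("generic", False), ("generic", False), ("generic", True),
]

totals = defaultdict(lambda: [0, 0])   # segment -> [conversions, visits]
for segment, converted in visits:
    totals[segment][0] += converted
    totals[segment][1] += 1

for segment, (conv, n) in sorted(totals.items()):
    print(f"{segment:8s} {conv}/{n} converted ({conv / n:.0%})")
```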

How does the Offermatica software operate – and how does it stack up against the competition?

Offermatica is an ASP platform that uses JavaScript tags to change content remotely based on rules. Then, it measures the effects of those changes in real time. It's really that simple.

I think Offermatica's vast leadership position in the market validates it as the best and most versatile tool. Certainly the clients and the success speak for themselves. One cool fact is that Offermatica is serving 5 billion impressions a month. The software has continued to develop since my first involvement with it over two years ago. There are new releases every quarter, and the tool has moved into targeting and personalization with the release of affinity targeting capability, as well as moving quite effectively from on-site optimization into ad optimization. This provides holistic optimization capability from impression to conversion. Not to mention, the new UI in beta has an onsite editor that is insanely cool and simple.

Offermatica (and your company, OTTO Digital) were recently acquired – can you talk about that deal? Good thing? Bad thing? Are you staying on board? What changes should we expect to see?

It was a fantastic deal for both Offermatica and Omniture. Omniture gets a great technology that we've already integrated through their Genesis program. Offermatica, besides being validated as the market-leading platform, gets the opportunity for continued growth and success through the extensive Omniture client base and global reach. Besides the obvious ability to offer easy A/B and MVT testing to clients, ultimately the value is leveraging the valuable segment and behavioral data that Omniture captures. With Offermatica, that data can now become immediately actionable for marketers for validation, testing, targeting and retargeting.

As far as OTTO Digital, it's business as usual for now. I'll be speaking with Omniture leadership in the coming weeks to hear their thoughts on how OTTO Digital can continue to provide measurable results for clients, grow the market for optimization services and keep doing the most interesting and cutting-edge work in digital marketing. Over the last 18 months we've created an amazing company: 20 employees with offices in NY, Dallas, LA and SF. We've worked for 4 of the top 11 brands in the world, and our more than 35 clients include the market leaders in most major verticals, including search, retail, insurance, entertainment, finance and auto. I'm most proud that we've had a 100% success rate: every one of our clients has experienced measurable increases in performance from our work.

Personally, while A/B and MVT testing still make up a large part of my work, the fastest-growing piece of what I do revolves around leveraging the web as a platform to improve business results: creating applications that are highly dynamic, targeted and personalized, and using Offermatica to measure and optimize their design, delivery and performance. We've had great success at OTTO using structured data through REST APIs in this way. Since the web is quickly moving away from "one-size-fits-all" static site experiences toward personalized, targeted experiences, understanding how to effectively marry content and creative with technology to provide relevance, and how to quantify that, continues to be what I'm passionate about.

Moving specifically to the SEOmoz landing page contest – it was really, really close at the end, and we ran two of the pages side by side for nearly a month to get enough results to have a statistically meaningful winner. Can you talk about why it took so much data to declare a winner when our eventual winner had basically always held the lead?

First of all, congratulations on a successful test. When determining the end of a test, we look for two things in the data: stability and confidence. The reason it took so much data to get confidence in the results was that it was so close. In your test there were two temporal factors that we really needed to keep measuring. First, the runner-up page was actually getting many more people into the conversion funnel. This meant there was a high possibility of latent conversions from this page, so we needed more time to take that effect into consideration. Second, we wanted to get more daily results. Seeing that the winning page performed equal to or better than the runner-up on the vast majority of the days tested gave us a higher degree of confidence that the results were stable.
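
For the curious, the daily-results half of that reasoning can be approximated with a simple sign test. This is a rough sketch only: it treats each day as an independent head-to-head and ignores ties, which a real analysis would not:

```python
from math import comb

def sign_test_p(wins, days):
    """Two-sided binomial sign test: how likely is a split at least this
    lopsided if the two pages were truly even (win probability 0.5)?"""
    k = min(wins, days - wins)
    tail = sum(comb(days, i) for i in range(k + 1)) / 2 ** days
    return min(1.0, 2 * tail)

# Page 3 beat page 9 on only 9 of the 29 days:
print(f"p = {sign_test_p(9, 29):.3f}")  # p = 0.061: the daily pattern leans toward page 9
```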

Have you experienced results like the SEOmoz contest in the past, where two pages that were incredibly different had such similar results? What can folks take away from the fact that these two pages were completely different, yet almost identically effective in converting?

Well, I would challenge that they were almost identical in converting. The winning page had a 7% lift in conversion vs. the runner up. I would regard that as a comfortable margin.

What would you say is the next step in the process for landing page design with SEOmoz? How would you proceed from here?

More testing! You have an ideal situation for doing a multivariate test that follows up on the learning from the A/B. I would go back to the fact that the runner-up was getting more people into the funnel, but those people converted at a much lower rate than the people who entered the funnel from the winning page. Why? MVT should answer that for you.
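
For readers who haven't run one, an MVT crosses every treatment of every element and splits traffic across the resulting cells. Here's a minimal sketch of deterministic bucketing for a hypothetical 2×2 array; the element names and treatments are invented, not SEOmoz's actual candidates:

```python
import hashlib
from itertools import product

# Hypothetical 2x2 MVT array: two page elements, two treatments each
HEADLINES = ["benefit-led", "feature-led"]
CTAS = ["Join now", "Start your free trial"]
CELLS = list(product(HEADLINES, CTAS))   # 4 combinations to split traffic across

def assign_cell(visitor_id: str):
    """Deterministically bucket a visitor into one MVT cell so they see
    the same combination of treatments on every visit."""
    digest = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16)
    return CELLS[digest % len(CELLS)]

print(assign_cell("visitor-42"))  # always the same cell for this visitor
```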

Finally, do you have 3-4 top tips that you could share on landing page design and testing that you find to be exceptionally beneficial?

Sure, how about I give you three for design and three for testing:

Design

  • Simplify everything

  • Reinforce intentions

  • Limit choices

Testing

  • Let the data end your test, not your calendar

  • Understand results by segment

  • Technology is only as good as the marketer using it

Thanks, Jon. This is greatly appreciated. I really feel that although the direct format and style of the landing page results may not be applicable to all of our readers (though certainly to many), the testing process and the value of testing can't be overstated. We've gone from around 0.5% conversion to over 2.5% conversion with this contest, a remarkable change by any standard.

We're still contemplating our next move, which may include hiring some professional landing page designers to see if they can do even better. We're also thinking about creating multiple versions of the landing page based on how visitors reached it (paid search vs. on-site ads, logged-in members vs. new visitors, etc.).

As always, I’d love to hear your thoughts on this contest, landing page testing in general and your own experiences with the process.
