Back when I first started doing SEO, I remember changing a client’s site (probably to look slightly more spammy than it does now) and nervously awaiting the results, waiting for the site to be crawled but in reality having to wait a good six weeks or so to see any appreciable effect on rankings. You’d make recommendations about title tags, internal linking and content, but you wouldn’t really know how soon you’d see results. Questions like “should you put the brand first in the title tag or the keyphrase first?”, “should you link to the homepage saying ‘home’ or using the money keyphrase?” or “should you suggest 10 links on that page or 20?” were difficult to answer definitively. What we used to do was work out the best answer beforehand and put that forward as a recommendation, usually without even letting the client know there might be another solution. Sometimes we’d come back and revisit these recommendations 6-12 months later, but essentially once you’d done a site review, it was done.
At Distilled (with more maths degrees than you can shake a stick at) we’re big fans of analytical methods and rigorous testing (and auction theory, but that’s for another day…), so we’ve always strived to base recommendations on cold hard fact rather than speculation or hunches. Back in the good old days though, good analytical data was kind of hard to come by. Remember the days before Google Analytics?!
But that’s changed now. We have more analytics than you can shake a stick at (there’s a lot of stick shaking going on in this post) and, equally importantly, the search engines can now index new content and site changes quicker than you can say google-flashbangkaboom-bot. So what does this mean? Well, at a fundamental level, changes are now indexed by the search engines much, much quicker (apart from Live, bless ’em), which means you can see the impact faster and, just as importantly, correct changes which have a negative impact more quickly.
For me, this means that SEO has entered an era of testing. I’m not claiming to be the first person to come up with this; people have been doing this for a while now and it’s something that we’ve been integrating into our processes more and more in the past 12 months, but I only recently managed to step back and look at the bigger picture and realise that this is quite a fundamental shift away from old-school SEO.
Here’s a quick idea of what I’m talking about:
- Information Architecture. Should those product pages link to 3 or 10 related products? Should there be a sub-sub-category level for the site or should you just show more results in the sub-category? Should your sub-categories link to other sub-categories or just back to the level above? How much content do you need on product pages, anyway? Should I be looking to hire a copywriter for my (potentially thousands of) products to help them rank? Is that really necessary?
- What’s the best advice for alt text on images? Should I keep them short or long and a little keyphrase stuffed? What about the alt text for the image link to my home page? Exactly how effective is keyphrase stuf- sorry, optimising that?
- How many footer links should I have on my site? How effective are they?
- How many products should I link to from my homepage? Should I have lots of links to all kinds of places or a few links to my top category pages?
- How long should I make my title tags? Which works best, brand before or after the keyphrase? (This is usually cut and dried, but keep CTR in mind too – while CTR is harder to test, it’s not impossible, especially if you’re running PPC on the exact phrase at the same time to monitor impressions.)
These are just a sample of the kinds of things I’d recommend playing around with, where the answers are not immediately obvious and more than likely change from site to site.
Of course, all this is very well and good, but there are a few things which are harder to test. For example:
- Should you buy links? How effective are these paid links I’m buying? While you can monitor the short-term gain, the long-term impact (if you get a penalty, for example) can be much harder to determine.
- Should you cloak content based on user-agent? Sure, this might boost your rankings in the short term but again, this might also result in a penalty further down the line, which could be very costly.
I’m not saying don’t do these things by any means, I’m simply saying that testing them can be tricky.
Of course, essential to testing is monitoring results. You have to ensure that you set some accurate benchmarks and monitor closely. Wherever possible, try and remove external factors from the test (this can be impossible at times, but if you’re intelligent about it you can usually come up with some kind of control).
For large sites with multiple sub-sections and sub-sub-sections, I’d strongly recommend split-testing. Run different tests on different sections of the site and see which is most effective. By running multiple tests at once you can home in on the best answer much, much quicker. On the flip side, for small static sites where all changes have to be run through a web-development team which takes months to implement them, perhaps this isn’t for you (though if this is the case you’ve probably got bigger concerns!).
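If you do split-test different sections, it helps to check whether a difference between the control section and the changed section is more than noise. One simple approach (my suggestion, not something prescribed above) is a two-proportion z-test on something like organic entrances that convert. A minimal stdlib-only Python sketch – the function name and all the traffic numbers are hypothetical, purely for illustration:

```python
import math

def two_proportion_z_test(success_a, total_a, success_b, total_b):
    """Compare conversion rates of a control section (a) and a test section (b).

    Returns the z statistic and a two-tailed p-value under a pooled
    two-proportion z-test.
    """
    p_a = success_a / total_a
    p_b = success_b / total_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: converting organic entrances vs. total organic
# entrances, for an unchanged control section and a section with the
# new internal linking applied.
z, p = two_proportion_z_test(120, 4000, 165, 4100)
print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 2.5, p = 0.01 here
```

A p-value well under 0.05 suggests the change genuinely moved the needle rather than the sections just varying naturally; with small sections or short test windows the numbers will be too noisy for this to say much either way.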
The moral of the tale is this:
If you’re not doing it already, I strongly recommend that you integrate testing into your SEO site reviews, particularly for large sites with distinct sub-sections where you can suggest different changes for different sections and see which works better.
And remember that most changes can be reverted very quickly (especially if your site is large and crawled regularly), so don’t be afraid to try a few more maverick tactics now and again. You’ll be surprised at what works!