A/B testing is dead, long live A/B testing!
If you are not using A/B testing yet, it is really time to catch up! If you are already using A/B testing, let's review how you could do it BETTER.
A/B testing is a very simple yet efficient method to improve your product and marketing over time (if you don't know what A/B testing is all about, read "Increase your web results today with A/B testing").
While A/B testing applies to virtually anything, for the purpose of this article I will focus on how A/B testing improves your online product, website or sales pages.
How traditional A/B testing works
The standard procedure of A/B testing goes as follows: you create a page with a single conversion purpose (purchase, registration, click on a button, like, etc.) and build different versions of it, each with minor variations in content, colors, layout, call-to-action or images.
After a certain period of time, once you have received sufficient traffic on your page, you can analyze your results. You should quickly find out which variation is working best (which one triggered more conversions with the same number of visitors) and take it as a base for a new A/B testing. You will keep the elements that worked best and make new variations by testing different aspects of your content or layout.
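The analysis step above boils down to comparing conversion rates. Here is a minimal sketch of picking the winning variation; the page names and numbers are hypothetical:

```python
def pick_winner(stats):
    """Return the variation with the highest conversion rate.

    stats maps a variation name to (visitors, conversions).
    """
    return max(stats, key=lambda v: stats[v][1] / stats[v][0])

# Hypothetical results after the test period: (visitors, conversions)
results = {"A": (1000, 30), "B": (1000, 45)}
winner = pick_winner(results)
print(winner)  # B: 4.5% vs. 3.0% for A
```

The winner then becomes the base page for the next round of variations.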
This method is very traditional and widely used because it has one obvious benefit: you KNOW what works best and you use it as a benchmark to improve your results over time. Great. But what REALLY happens?
What is the core problem of A/B testing?
The main problem with A/B testing's traditional approach is that it takes time. A lot of time.
Not so much to implement (you might have an incredible web designer that does it in no time or use tools that do it for you), but to run.
For your findings to make sense, you will need to run your A/B campaign long enough to get a sufficient number of visits. What number is that? Well, it depends!
It depends on the number of versions you have (you will need more traffic to compare 5 versions than you would need for 2) and on the conversions you get. With fewer than a few hundred visitors per version and a few dozen conversions, you can hardly tell which page works best.
And getting to these numbers might take a long time. The problem with this is that you might simply forget about your test. Worse, you are wasting valuable traffic on low-performing pages!
How is that? Simple math! If you get 2,000 visits that you split equally between 2 versions of your page, each one gets 1,000. Now, one of your pages is better than the other, which means one of your pages is getting traffic that would be better used if sent to the other page! The problem is that you can't know that until you have finished your test run.
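To put numbers on that waste, here is the same 2,000-visit scenario with two hypothetical conversion rates (5% for the better page, 3% for the other):

```python
visits = 2000
rate_a, rate_b = 0.05, 0.03  # hypothetical "true" conversion rates

# Equal 50/50 split for the whole test period:
equal_split = 1000 * rate_a + 1000 * rate_b  # 50 + 30 conversions

# If all traffic had gone to the better page instead:
best_only = visits * rate_a

print(equal_split, best_only)  # 80.0 vs. 100.0 conversions
```

Twenty conversions are lost to the weaker page before the test even concludes, which is exactly the traffic the rest of this article tries to recover.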
What is a better solution for A/B testing?
The solution is simple: direct your traffic to the best page!
Hold on... isn't that what we do already? Not really. You WILL drive more traffic to the best page, once you've identified it. But it may take weeks to find out. And until then, you "waste" valuable traffic on a low-performing page.
Here is what you should do. Organize your A/B testing so that it goes through a two-level split.
On the first level, divide your traffic into two zones: 80%/20%.
For the 20%, use the same system you've always used: divide this traffic equally among your A/B versions.
For the remaining 80%, drive the traffic to your best-converting page.
This way you keep A/B testing while giving most of your traffic to whichever page version shows the most promising results.
With this method, you don't even need to worry about following up anymore: even if you were to forget about your test, your visitors would be automatically directed to the right place.
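This two-level split is essentially what the multi-armed-bandit literature calls an epsilon-greedy strategy. Here is a minimal sketch of such a router, with hypothetical page names and conversion rates; a real setup would live in your web server or testing tool:

```python
import random

def route_visitor(stats, explore_share=0.2):
    """Two-level split: explore_share of visitors keep feeding the
    A/B test; the rest go straight to the current best performer.

    stats maps a variation name to [visitors, conversions].
    """
    if random.random() < explore_share:
        # Test zone (20%): rotate evenly through all versions.
        version = min(stats, key=lambda v: stats[v][0])
    else:
        # Winner zone (80%): send traffic to the current leader.
        version = max(stats, key=lambda v: stats[v][1] / max(stats[v][0], 1))
    stats[version][0] += 1
    return version

# Simulate 1,000 visits with hypothetical "true" conversion rates.
true_rates = {"A": 0.03, "B": 0.05}
stats = {"A": [0, 0], "B": [0, 0]}
for _ in range(1000):
    version = route_visitor(stats)
    if random.random() < true_rates[version]:
        stats[version][1] += 1  # record a conversion
```

After a few hundred visits, the better page ends up receiving most of the traffic automatically, while the 20% test zone keeps collecting the data needed to change course if another version pulls ahead.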
What is your A/B strategy?
Last update: 2017-11-17 Tags: A/B testing web results leads