For any e-commerce and sales venture, website design can often prove as difficult as walking a tightrope. On the one hand, you need to create a site that conveys your brand’s identity and is aesthetically pleasing, while on the other you need to optimise each element so that you’re generating a steady stream of conversions – whether that’s sales or beginning new relationships with potential clients.
That’s where A/B testing – or split testing, as it’s sometimes known – comes in. Imagine that you’re buying a new pair of glasses. You head down to the local opticians and get your eyes checked to ensure you have the right prescription. The optician will then present an array of lenses and test them on you – enquiring whether your vision is better with lens A or lens B. You confirm which is best, and the optician uses your answers to determine your prescription.
This is exactly how A/B testing works. It’s all about comparing two versions of your website to see which one performs better with your prospects, with the aim of improving key performance indicators (KPIs) like conversion rate, sales and revenue.
You can use it to test and refine almost every facet of your website – anything from landing pages and promotions to calls-to-action, emails and search ads. Usually, A is the existing feature you’re currently working with (known as the ‘control element’) while B is the alternative option you want to test (‘the variant’). You then split your live traffic into two groups and direct each group towards one of the two versions. When the experiment is done and dusted, you select the version that has produced the best results and put it to work improving your bottom line.
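The traffic split itself is usually deterministic rather than a coin flip on every page load, so that a returning visitor always sees the same version. Here’s a minimal sketch of that idea in Python – the experiment name and the 50/50 split are illustrative assumptions, not a prescription:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-cta") -> str:
    """Deterministically bucket a user into the control (A) or the variant (B).

    Hashing the user ID together with the experiment name keeps the split
    stable: the same visitor is always assigned to the same version, and a
    different experiment name reshuffles the buckets independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A" if bucket < 50 else "B"  # 50/50 split (adjust as needed)
```

Most testing tools handle this for you, but the principle is the same: consistent assignment is what makes the two groups comparable.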
Let the experiments begin
However, A/B testing isn't something you walk into unprepared – it takes experience and know-how to get right.
So how should sales teams approach A/B testing if they’re to successfully optimise their conversions?
1. Make a plan
Whoever fails to plan, plans to fail – the first step is to determine the goal of the test. Perhaps it’s to improve conversion rates or get more repeat purchases. In any case, there should be a clear target in mind.
2. Pick one variable
If you're starting out fresh, then don't overcomplicate things with multivariate testing (that comes later).
Almost anything on your website can be tested, so change one thing and see if your conversion rate changes in a statistically significant way.
3. Run tests for between a week and two months
A typical A/B test runs for at least one week but no more than two months. Running for a full week or more accounts for the natural variations in traffic – weekday visitors often behave differently to weekend ones.
Any longer than that, and too many variables come into play. Maximise the learnings by minimising the effect of time marching on.
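How long is long enough also depends on how much traffic you need. A common rule of thumb for a two-variant test (roughly 80% power at 5% significance) is 16·p(1−p)/δ² visitors per variant, where p is your baseline conversion rate and δ the smallest lift you care to detect. A quick sketch – the example rates are purely illustrative:

```python
import math

def required_sample_size(baseline_rate: float, min_detectable_lift: float) -> int:
    """Rough per-variant sample size via the 16*p*(1-p)/delta^2 rule of thumb
    (approx. 80% power, 5% significance). A back-of-the-envelope estimate,
    not a substitute for a proper power calculation."""
    p = baseline_rate
    delta = min_detectable_lift
    return math.ceil(16 * p * (1 - p) / delta ** 2)

# e.g. a 5% baseline conversion rate, hoping to detect a 1-point lift:
# required_sample_size(0.05, 0.01) -> 7600 visitors per variant
```

Divide that figure by your daily traffic per variant and you have a sensible lower bound on the test’s duration.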
4. Eliminate FUDs
Fears, uncertainties and doubts – these are what stand between you and a successful conversion of traffic.
Identify any element – copy, CTAs, images – that might cause friction or prevent visitors from following through to a conversion.
5. Do the maths
Before you start implementing changes, you need to ensure your results are statistically significant. If a test is set up poorly – too small a sample, or too many variables changing at once – the results will be misleading.
A/B testing is all about probability – positive results mean your change will probably work better, not that it’s a dead certainty.
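The standard way to put a number on that probability is a two-proportion z-test. A minimal version using only the Python standard library – the conversion counts in the example are made up for illustration:

```python
import math

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value from a two-proportion z-test.

    Returns the probability of seeing a difference at least this large if
    A and B actually convert at the same underlying rate. A small value
    (commonly below 0.05) is evidence the variant genuinely differs -
    still a probability, never a dead certainty.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# e.g. 100/1000 conversions on A vs 150/1000 on B:
# ab_significance(100, 1000, 150, 1000) -> well under 0.05
```

Dedicated testing tools and libraries will do this for you, but it’s worth understanding what the p-value they report actually means.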
6. Test, repeat. Test, repeat
So you ran the test, got the results you were hoping for, and implemented the changes on your site. But don't throw the test away just yet.
Review your tests at the end of every quarter – this will show whether the results painted an accurate picture or if seasonal factors came into play.
With A/B testing, you have nothing to lose and only insights to gain. Whether that's button tests, copy tweaks or layout changes, it can seriously improve your bottom line.