The Pros and Cons of A/B Testing

Running A/B tests has its ups and downs; find out why...


Here at No Pork Pies we’ve run A/B tests for a lot of our clients. While we’ve seen some significant results from the tests, there are also downsides that are rarely talked about. This post gives some of my opinions on the pros and cons of A/B tests, as well as how and when they should be run. The post is based partly on a session I hosted at this year’s UX Camp Brighton.


What is A/B testing?

Before I start, I should really define A/B testing. A/B testing, also known as split testing, is a way to test changes to a webpage against the current design and determine which produces the best results. At its most basic level, users are split into two groups: one group is shown your existing design and the other is shown your new, improved design. The two variations are compared over time to see which one generates the most conversions on your website. This is a way to test out new design ideas and see how they perform against your current webpages.
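To make the mechanics concrete, here's a minimal sketch (my own illustration, not any particular tool's implementation) of how visitors might be assigned to the two groups. Hashing the visitor id keeps each person in the same group on every visit; the experiment name and ids are made-up examples.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-cta") -> str:
    """Deterministically assign a user to 'A' (existing design) or 'B' (new design).

    Hashing the user id together with the experiment name keeps each
    visitor in the same group on every visit, while the split stays
    roughly 50/50 across many users.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number between 0 and 99
    return "A" if bucket < 50 else "B"

# The same visitor always sees the same variation.
print(assign_variant("visitor-42"))  # e.g. 'B'
print(assign_variant("visitor-42"))  # same answer every time
```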

Why run A/B tests?

Firstly let’s look at some of the reasons to do A/B testing. These days it’s fairly easy to set up and run tests. We’re Certified Partners of Visual Website Optimiser so we often use their tool for our tests, but there are plenty of other options out there for more basic testing. Once they’re set up, tests can just be left running with minimal maintenance and will automatically inform you of a winning variation.

These tests also give clear, quantitative results, meaning they are less open to interpretation and provide more definite answers than other types of testing. This means that, if the test is set up correctly, the winning variation is almost guaranteed to deliver an improvement when you roll it out.
[Image: A/B testing results]
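As a rough illustration of what "clear, quantitative results" looks like under the hood, here is a sketch of a two-proportion z-test comparing the conversion rates of the two variations. This is my own simplified example rather than how any particular testing tool works, and the visitor and conversion numbers are invented.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z statistic and two-sided p-value for the difference
    between two observed conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented numbers: 200 conversions from 10,000 visitors for A,
# 250 conversions from 10,000 visitors for B.
z, p = two_proportion_z_test(200, 10_000, 250, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value below 0.05 is conventionally called a 'winner'
```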
Finally, small changes can make big differences. You only have to read case studies to see how tweaking parts of your website can have a big impact on its performance. There are lots of examples of how making simple changes to text or colours on webpages has led to dramatic improvements.

However, there are also downsides to relying on A/B testing to dictate the design of your site.

What are the downsides to A/B Testing?

One of the main issues that people have with A/B testing is that it can often take a long time to see results. That’s fine if you’re Amazon and your site gets millions of visitors a month, but if you run a smaller site, or you want to test a page within your site that gets less traffic, it can take a long time to find a winning variation. This is even more of a problem if the changes you’ve made are quite minor. While small changes can make a big difference, more often than not small changes mean a small difference to your results, and tests of minor changes are often inconclusive no matter how long they are run for.
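To show why traffic matters so much, here is a back-of-the-envelope sample-size sketch using the standard formula for comparing two proportions at 95% confidence and 80% power. The baseline conversion rate and the hoped-for uplift are made-up figures purely for illustration.

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variation(baseline_rate, relative_uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed in each group to reliably detect the uplift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # about 0.84 for 80% power
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# A 2% baseline conversion rate and a hoped-for 10% relative uplift need
# roughly 80,000 visitors per variation, which is months of traffic for many smaller sites.
print(visitors_per_variation(0.02, 0.10))
```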

Another area to look out for when you’re running your tests is cloaking. Cloaking is a black hat SEO technique where the content shown to a search engine spider is different from the version that’s shown to a user. By showing different versions of a page to different people A/B testing runs the risk of being seen as cloaking by search engines. This shouldn’t be a problem as long as you follow the official Google Guidelines, but it is something you should consider if you’re planning on running tests.

A problem with relying on quantitative testing is that this often leads to a ‘conversion first’ mind-set and can dehumanise your users, turning them into little more than lab rats in your experiments! A conversion-driven mind-set makes business sense, but it should not come at the expense of the overall user experience. Focusing purely on conversions can lead to short term ‘fixes’ that end up doing more harm than good in the long term.

There are also some difficult ethical questions around focusing purely on making your users convert: the line between doing conversion rate optimisation and using Dark Patterns (user interfaces designed to trick people into taking actions that they did not mean to take) can become blurred.

“Conversion optimisation should be a win-win situation with users having a better purchasing experience and website owners making more sales. Tricks and tactics that are used to prize more money from users against their will may lead to an increase in online sales, but will almost always be outweighed by the costs of additional customer service to deal with complaints as well as the negative reputation that dark patterns will bring for your brand.”

There really isn’t anything to be gained in the long term by using Dark Patterns in your design but there is a temptation to move towards some of these underhand techniques when A/B testing. In the short term, and in isolation, they are likely to lead to higher conversions, but normally at a cost.

[Image: a journey map of dark patterns]
Moving away from Dark Patterns and back to more conventional web design, the impact of major design changes often needs to be reviewed over a long time. Changes to a website can often lead to users experiencing what is known as change aversion. Change aversion is an entirely natural negative short-term reaction to changes in a website (or product or service) that often occurs after any major redesign. Think about the initial negative reaction to the introduction of iOS 7, and before that to the last major Facebook updates. There are certain approaches that a designer can take to minimise the impact of change aversion; these include:
  • Warning users about big site changes
  • Communicating the reasons behind the changes
  • Letting users switch between old and new versions
  • Providing users with support to avoid confusion
  • Encouraging users to give feedback on the design
  • Communicating with users how you plan to act on this feedback
None of these tactics are easy with A/B testing though, so you have to just put your new design out there and hope for the best. You could restrict the test to new users, but this is very difficult to do accurately: users often access your website from different devices, which makes identifying returning users increasingly difficult. There may even be a problem for new users if your proposed design differs from existing web conventions and you don’t give them time to get used to your ground-breaking new ideas.

Overall, focusing purely on A/B tests can mean obsessing about minor design changes and losing sight of the bigger picture. In the picture below I’ve looked at the hypothetical ‘problem’ of users not clicking on a button on a website. On the left we have the kind of possibilities that can be tested using A/B testing, while on the right we have some bigger issues that can’t be tested in this way, and that are more likely to be the real reasons why people aren’t clicking.
[Image: why is nobody clicking on my ‘buy now’ button?]
It’s easy to become obsessed with A/B testing and to run more and more tests to get answers to all of your design questions, but a lot of the time the issue lies outside what can be tested on a single webpage.

Is A/B testing right for me?

A/B testing can be a really useful way to improve the conversions on your website, but it should be part of a bigger user testing plan. This should include research, analytics and user testing sessions. You can also use some of these other testing techniques to indicate where you should set up your A/B tests. All of your A/B tests should start with a hypothesis based on a good understanding of the current state of your website. You may find that people aren’t scrolling on a certain page where you need them to scroll, or that the exit rate is unusually high on one page of your main conversion path. Before setting up any A/B tests on your site you should first find out where the problems are, instead of trying to ‘fix’ problems that don’t exist.

When running A/B tests, don’t rely on your tests always being conclusive; remember, small changes often mean small differences. Also make sure that you set aside plenty of time for tests to reach a conclusion, particularly if your web page isn’t getting a large amount of traffic.
[Image: the A/B testing process]
You should try to ensure that your tests use a fair 50/50 split across all browsers and devices. Be wary of skewing your test results in any way, and always ‘test your tests’ to make sure they’re set up correctly before running them. You should also treat any failed tests as a learning process rather than being too disappointed by them. It may be that your idea didn’t work, which is a shame, but it’s better to know now than to have made the wrong change and have to live with the consequences.
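One simple way to ‘test your tests’ is a sample-ratio check: compare the traffic split you actually observed against the intended 50/50 and flag anything suspicious. The sketch below is my own illustration with invented visitor numbers, not a feature of any specific testing tool.

```python
from math import sqrt
from statistics import NormalDist

def split_looks_fair(visitors_a, visitors_b, expected_share=0.5, alpha=0.001):
    """Check whether the observed traffic split is plausibly the intended one.

    A very small p-value (a sample ratio mismatch) suggests the assignment
    or tracking is broken and the test results shouldn't be trusted.
    """
    total = visitors_a + visitors_b
    observed_share = visitors_a / total
    std_err = sqrt(expected_share * (1 - expected_share) / total)
    z = (observed_share - expected_share) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value > alpha

# 5,230 vs 4,770 visitors looks close to 50/50 at a glance, but it fails the
# check, which is a reason to inspect the setup before trusting the results.
print(split_looks_fair(5_230, 4_770))  # False
```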

In short, A/B testing can still be useful if run in the right way and for the right reasons. Be sure to include other types of testing, both qualitative and quantitative, to get the full picture, as most problems cannot be solved by A/B testing alone. I’m not saying that you shouldn’t use A/B testing as a way to test particular design hypotheses on your site, but it needs to be part of a much broader approach to conversion optimisation. It’s also worth remembering that nobody has ever A/B tested their way to innovation!
