A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a web page or application against each other to determine which performs better. A/B testing is essentially an experiment in which two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.
Running an A/B test that directly compares a variation against the current experience lets you ask focused questions about changes to your website or app and then collect data on the impact of those changes.
A/B testing takes the guesswork out of website optimisation and enables data-informed decisions, shifting business conversations from "we think" to "we know". By measuring the impact that changes have on your metrics, you can ensure that each change yields positive results.
In an A/B test, you take a web page or application screen and modify it to create a second version of the same page. This change can be as simple as a single headline or button, or it can be a complete redesign of the page. Then, half of your traffic is shown the original version of the page (known as the control) and half is shown the modified version of the page (the variation).
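One common way to implement the 50/50 split is deterministic, hash-based bucketing, so that a returning visitor is always shown the same version. The Python sketch below is a minimal illustration of that idea; the function, experiment name and visitor ID are hypothetical, and dedicated testing tools handle this assignment for you.

```python
import hashlib

def assign_variant(visitor_id: str, experiment_name: str) -> str:
    """Deterministically bucket a visitor into 'control' or 'variation'.

    Hashing the visitor ID together with the experiment name gives a stable
    assignment: the same visitor always sees the same version, and different
    experiments bucket visitors independently of each other.
    """
    key = f"{experiment_name}:{visitor_id}".encode("utf-8")
    bucket = int(hashlib.md5(key).hexdigest(), 16) % 100  # a number from 0 to 99
    return "control" if bucket < 50 else "variation"

# Example: decide which version of the page this visitor should be served.
print(assign_variant("visitor-12345", "headline-test"))  # e.g. "variation"
```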
When visitors are served either the control or the variation, their engagement with each experience is measured, collected in an analytics dashboard and analysed with a statistical engine. You can then determine whether changing the experience had a positive, negative or no effect on visitor behaviour.
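To illustrate the kind of calculation a statistical engine performs, the sketch below runs a simple two-proportion z-test on invented conversion counts for the control and the variation. Real testing platforms typically add safeguards such as minimum sample sizes and corrections for repeated peeking, so treat this as a rough approximation of the underlying idea.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical data: 480 of 10,000 control visitors converted vs 540 of 10,000 on the variation.
p_a, p_b, z, p_value = two_proportion_z_test(480, 10_000, 540, 10_000)
print(f"control {p_a:.2%}, variation {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
```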
A/B testing allows individuals, teams and companies to collect data about the results of carefully considered changes to their user experience. This allows them to construct hypotheses and to learn why certain elements of their experience influence user behaviour. In other words, their view of the best experience for a particular goal can be proven wrong through A/B testing.
Rather than answering a one-off question or settling a dispute, A/B testing can be used to continually improve a given experience over time against a single goal, such as conversion rate.
For example, a B2B technology company may want to improve the quality and volume of sales leads generated from campaign landing pages. To achieve this goal, the team would use A/B testing to experiment with changes to the headline, visual imagery, form fields, call to action and overall layout of the page.
Testing one change at a time helps you determine which changes affect your visitors' behaviour and which do not. Over time, you can combine the impact of multiple winning changes from your experiments to demonstrate the measurable improvement of the new experience over the old one.
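As a back-of-the-envelope illustration of combining winning changes, the figures below are invented and assume that the individual relative lifts compound independently; in practice, a follow-up test of the combined experience should confirm the overall improvement.

```python
# Hypothetical relative lifts from three separate winning experiments.
baseline_rate = 0.020                  # 2.0% conversion rate before any changes
winning_lifts = [0.05, 0.08, 0.03]     # +5%, +8% and +3% relative lifts

combined = 1.0
for lift in winning_lifts:
    combined *= 1 + lift               # compound the relative lifts

print(f"Combined relative lift: {combined - 1:.1%}")                     # ~16.8%
print(f"Estimated new conversion rate: {baseline_rate * combined:.2%}")  # ~2.34%
```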
This methodical approach to introducing changes allows the experience to be optimised for the desired outcome and can make key steps in a marketing campaign more effective.
By testing the ad copy, marketers can learn which version gets more clicks. By testing the subsequent landing page, they can learn which layout best converts visitors into customers. If the elements of each step work as efficiently as possible to win new customers, the overall expenditure on a marketing campaign can be reduced.
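The sketch below uses invented spend, traffic and conversion figures to show how improving each step of the funnel lowers the cost of acquiring a customer.

```python
def cost_per_customer(spend: float, impressions: int, ctr: float, landing_conversion: float) -> float:
    """Cost per acquired customer for a simple two-step ad-to-landing-page funnel."""
    clicks = impressions * ctr
    customers = clicks * landing_conversion
    return spend / customers

spend, impressions = 10_000.0, 500_000

# Hypothetical results before and after winning ad-copy and landing-page tests.
before = cost_per_customer(spend, impressions, ctr=0.020, landing_conversion=0.05)
after = cost_per_customer(spend, impressions, ctr=0.024, landing_conversion=0.06)

print(f"Cost per customer before: ${before:.2f}")   # $20.00
print(f"Cost per customer after:  ${after:.2f}")    # ~$13.89
```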
A/B testing can also be used by product developers and designers to demonstrate the impact of new features or changes to the user experience. As long as the goals are clearly defined and you have a clear hypothesis, user engagement, modals and in-product experiences can all be optimised with A/B testing.
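One lightweight way to keep the goal and hypothesis explicit is to record each experiment as a small structured definition before it launches. The fields and example values below are hypothetical and not tied to any particular testing platform.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """A minimal experiment definition: hypothesis, goal metric and variants."""
    name: str
    hypothesis: str
    goal_metric: str
    variants: list[str] = field(default_factory=lambda: ["control", "variation"])

# Hypothetical in-product experiment on an onboarding modal.
onboarding_modal_test = Experiment(
    name="onboarding-modal-copy",
    hypothesis="Shorter modal copy will increase completion of the onboarding flow.",
    goal_metric="onboarding_completion_rate",
    variants=["control", "short-copy"],
)
print(onboarding_modal_test)
```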
An A/B testing framework you can use to start running tests is given below:
If your variation is a winner, congratulations! See if you can apply the learnings from the experiment to other pages of your site and keep trying to improve your results. Don't worry if your experiment produces a negative result or no results. Use the experiment as a learning experience and create new hypotheses that you can test.
Whatever the outcome of your experiment, use your experience for future actions and continually iterate on optimising the experience of your app or site.
Google permits and encourages A/B testing and has stated that performing an A/B or multivariate test poses no risk to your website's search ranking. However, you can jeopardise your search ranking by misusing an A/B testing tool for purposes such as cloaking. Google provides some recommendations to ensure that this does not happen: