The beautiful thing about A/B testing is that the focus is on progress – not on being correct in your hypothesis.
Step 1: Define a success metric.
The first step in creating an A/B test is to identify a metric that is critical to your business’ success. For example, if you own a hotel, then “bookings” would be an important metric. You may also want to consider testing secondary metrics. For instance, if you are a B2B business, the number of meetings your sales team sets might be an important metric, perhaps second to closed deals. Whatever metric you choose, it will serve as your starting point for figuring out what you will test.
Step 2: Gather data.
Once you’ve chosen an important metric, it’s time to analyze the funnel where that metric can be measured. Look specifically at where customers are dropping off. Gather all of this data and figure out which area may be good to test. Areas with high traffic, like your homepage, are especially good places to analyze, as a test on such a page will produce results faster. As an example, if you are having trouble booking sales meetings, you may want to consider testing the “Request a demo” CTA on your homepage, if that is the primary way people book meetings. Another place to look for inspiration is important segments, like device or browser. Perhaps you find out that conversion rates for your “Request a demo” CTA are lower on mobile than on desktop. Why could this be? Sounds like a great place to test!
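If you have raw visit data, even a few lines of analysis can surface segment gaps like this. Here’s a minimal sketch in pandas; the column names and numbers are purely illustrative:

```python
import pandas as pd

# Hypothetical export of visits: one row per visitor,
# with a "device" segment and a 0/1 "converted" flag.
visits = pd.DataFrame({
    "device":    ["mobile", "mobile", "desktop", "desktop", "mobile", "desktop"],
    "converted": [0,        0,        1,         0,         1,        1],
})

# Conversion rate and sample size for each segment
by_device = visits.groupby("device")["converted"].agg(["mean", "count"])
by_device = by_device.rename(columns={"mean": "conv_rate", "count": "visitors"})
print(by_device)  # a large mobile/desktop gap is a candidate for testing
```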
Step 3: Formulate a hypothesis.
Based on the data you’ve gathered, come up with a hypothesis for what you want to test, what you think will happen when you test it, and why you think it will happen. Some things you can test include copy, layout, CTAs, offers, and more. Testing is a chance to challenge assumptions, so be bold with your hypothesis. Here’s a general template: If we change [this], then [this will happen] because [this reason].

Here are some examples of possible hypotheses: If we change the copy of our CTA from “Contact Sales” to “Schedule a demo”, then more people will click on the button because “Schedule a demo” sounds friendlier and is about a customer benefit. It is focused on the customer receiving something (a demo), rather than Sales getting something (an email address). If we move our customer validation logos above the fold on our marketing asset, then more people will convert because they will see the logos first and therefore trust us more.

A simple yet very important type of test you can set up is called an existence test. In essence, you test whether the existence of a particular element, say, a secondary CTA, is helping performance. To do so, you direct part of your traffic to a version with the selected element and the rest to a version without it. This type of test helps you narrow down which elements are boosting performance and which ones may be holding your website back. Here is an example of an existence test hypothesis: If we remove the customer validation logos, then our conversion rate will drop because people will not trust us.
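A quick note on mechanics: your testing tool will normally handle the traffic split for you, but under the hood a 50/50 split is often just a deterministic hash of the visitor ID, so each visitor sees the same variant on every visit. A minimal sketch, with all names hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "logo-existence-test") -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing (experiment + user_id) keeps the assignment stable across
    visits and independent between different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # bucket in 0-99
    return "A" if bucket < 50 else "B"  # 50/50 split

print(assign_variant("visitor-123"))  # same visitor, same variant, every time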
Step 4: Check your sample size.
Before you can test your hypothesis, you need to determine the sample size. It’s important to preface this by saying that A/B testing can be done on websites with anything from a small to a large amount of traffic. Nonetheless, in order to determine how long you need to run your test, your test needs to reach a minimum number of participants. Use a sample size calculator to find out your required sample size.
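If you’d like to see what such a calculator does under the hood, the standard two-proportion formula is easy to compute yourself. A minimal sketch, assuming a 5% baseline conversion rate and a hoped-for lift to 6% (both numbers illustrative):

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in EACH variant to detect a shift from p1 to p2
    with a two-sided test at the given significance level and power."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative: 5% baseline, hoping to detect a lift to 6%
print(sample_size_per_variant(0.05, 0.06))  # ~8,155 visitors per variant
```

Note how sensitive the result is to the size of the lift you want to detect: smaller expected lifts require dramatically larger samples.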
Step 5: Setup.
Now that you’ve squared away your minimum sample size and your hypothesis, it’s time to set up your test. Setup depends entirely on the A/B testing solution you are using (see the Setting up for A/B Testing section).
Step 6: QA, QA, QA!
No A/B test is complete without a thorough quality assurance process. Without QA, you run the risk of running a faulty test, producing faulty results, and ultimately drawing false conclusions that can have negative consequences for your business. Run through your test multiple times. Have others test it. Try it on different browsers, devices, IP addresses, etc. Your A/B testing platform should have all of the necessary staging capabilities for you to QA your test.
Step 7: Launch!
Congratulations! You’ve launched, and now it’s time to monitor your test. But don’t try to call the test too early: even if you reach your minimum sample size, there are many factors that may be affecting performance, and the goal is to ride out the seasonality of those effects. In general, seven days is the minimum a test should be live, and ideally it should span two business cycles to capture normal fluctuations.
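A quick way to sanity-check your timeline is to divide the total required sample by your daily traffic and round up to whole weeks. A rough sketch, with the traffic figures purely illustrative:

```python
from math import ceil

def min_test_days(sample_per_variant: int, num_variants: int,
                  daily_visitors: int, floor_days: int = 7) -> int:
    """Estimated days to reach the required sample size, never less than
    floor_days, rounded up to whole weeks to smooth day-of-week effects."""
    days = ceil(sample_per_variant * num_variants / daily_visitors)
    days = max(days, floor_days)
    return ceil(days / 7) * 7  # round up to full weeks

# Illustrative: 8,155 per variant, 2 variants, 1,200 visitors/day
print(min_test_days(8155, 2, 1200))  # 14 days
```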
Step 8: Call the test and analyze.
Before you can call a test, it’s important to use an A/B testing statistical significance calculator. As a rule, a test that shows a gain with 95% certainty can be declared a winner, though you can accept a lower threshold depending on your risk tolerance. As noted above, be sure to run your test long enough before checking the statistical significance of the results.

Even so, you should also consider digging deeper and segmenting your results. While it may seem that your hypothesis was wrong, you may discover that one particular demographic greatly favored your proposed variation. If so, then perhaps it is worth targeting those customers with that message. Or maybe your test revealed that your experience is not optimized for mobile. Whatever the insights, I really encourage you to segment your tests.

You also learn a lot from failed or inconclusive tests. Take a look at your assumptions: what might you have gotten wrong, and what can you test in the future? If, in the end, your results are tied, you can elect to run more radical tests (for instance, proposing a whole redesign rather than just a button color change), or even choose to implement the variation you believe delivers the best user experience.

Regardless of your results, you should always take time to analyze them. Ask yourself: What did we learn about our users? What can we do to improve our processes? How will these insights inform future testing? For example, suppose this was your hypothesis: If we move our customer validation logos above the fold on our marketing asset, then more people will convert because they will see the logos first and therefore trust us more. If the results come back flat or negative, then perhaps you should consider testing further. Perhaps the logos you are using don’t resonate with customers? Perhaps your logos aren’t prominent enough? There are many directions you can take; the important part is reflecting on and using all of the information your customers give you to inform further testing.
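If you want to see what a significance calculator is doing, the usual approach for conversion rates is a two-proportion z-test. A minimal sketch, with all counts illustrative:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided
    return z, p_value

# Illustrative counts: control converted 410/8,200, variant 492/8,200
z, p = two_proportion_z_test(410, 8200, 492, 8200)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 -> significant at the 95% level
```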
Step 9: Document.
Finally, it’s important to thoroughly document your tests, conclusions and analysis, in order to build a testing culture in your company. Testing early and often will help you keep up with changes in the market and your customers’ needs and attitudes, and can help drive your organization’s growth. With this in mind, make it a goal to always be testing some aspect of your business. You may be surprised by the insights you uncover.
Step 10: Rinse & Repeat!
There is always another test to run. As you analyze and log your results, formulate new hypotheses and repeat steps 1-9! Iteration is the key to success.