Growth Playbook:
The 10 Steps To Launching An A/B Test
By Colin Gardiner, Chief Business Officer at cryptohunt, formerly Chief Revenue Officer at Outdoorsy and Roamly
A/B testing, otherwise known as split testing, is a simple yet powerful way to optimize your marketing strategy, product and overall business. True to its name, an A/B test pits two variations (A and B) of something on your website against each other – a CTA button’s color or copy, a landing page’s design and more – for the purpose of discovering which variation, if any, resonates better with your audience.
Why should I A/B Test?
Experimentation is one of the most integral parts of building a product and finding product-market fit. By A/B testing iteratively, you can create a constant feedback loop of customer experiences, which you can then use to continuously optimize your business to be a better fit for your customers’ needs.
Remember, your greatest asset is time. When you elect not to test, you lose out on new information about your customers or your market – a huge opportunity cost! By testing regularly, however, you can discover things that are critical to your business, such as which marketing campaign is most effective – saving you money.
Similarly, designers and product managers can test critical points of the user journey to increase product adoption, and eventually produce customers who promote your product for you.
As an example, let’s say you want to decrease cart abandonment on your ecommerce site. If your site requires users to sign up for an account before checking out (what we’ll consider Variation A), you might want to test whether adding an option for users to check out as a “guest” (Variation B) increases your conversion rate. You can still prompt customers to create an account at the end of the transaction, but by removing barriers to checkout, you can hypothesize that orders will increase. After you run the test, you can evaluate your data to see whether one version performed better than the other. Let’s say Variation B performed significantly better – you can then implement that solution, knowing it will help you sell more for the time being.
The beautiful thing about A/B testing is that the focus is on progress – not on being correct in your hypothesis.
This is especially useful for strategy work, which can be rife with biases that ultimately hinder your business.
Setting up your A/B Testing Tech Stack
In this article, we will be focusing on running tests on your own website, but if you also run paid ads, Facebook and Google AdWords have resources for conducting tests on their respective platforms.
For testing on your own website, there are three routes: paid platforms, open source software and server-side solutions. Paid and open source platforms typically work out of the box and use JavaScript-based tracking. They are quick to implement, but a potential drawback is that they rely heavily on front-end manipulation.
Examples of A/B testing platforms include Google Optimize, Optimizely and VWO.
Server-side solutions come in varying flavors. If this is the route you go, you’ll want to pick one that matches your code stack. An upside to going this route is that tracking will be more integrated and native. Even though it will give you more control, it’s important to keep in mind that you will likely need developer help to implement and operate it.
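To make the server-side route more concrete, here is a minimal sketch in Python of the core mechanic most bespoke solutions share: deterministic bucketing, where hashing a stable user ID with the experiment name gives each user a sticky variation without storing extra state. The function and experiment names here are illustrative assumptions, not any particular platform’s API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to Variation A or B."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1) and compare to the split.
    bucket = int(digest[:8], 16) / 0x100000000
    return "A" if bucket < split else "B"

# The same user always lands in the same variation, with no extra state stored.
print(assign_variant("user-123", "guest-checkout-test"))
```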
So, which solution is right for you? To build or to buy?
As a general guideline, early stage companies won’t need to pay for a solution, as the fidelity of tests won’t be as important: changes will be more sweeping, which requires less precision. However, as your company grows and needs to be more certain of results, you will likely need to move to an A/B testing platform and eventually migrate to a bespoke solution.
Specifically, for companies that require less technical product testing, it is often best to stick with platforms that offer front-end manipulation. This makes it easier to test things like marketing collateral.
For companies that have intricate product experiences requiring logins, payments, search algorithms and more, it is often best to adopt server-side solutions, bespoke or from a platform.
All in all, there is no one size fits all solution. In my experience, the level of technical prowess within a company will probably be the deciding factor.
How to Conduct an A/B Test
Testing in general can easily get complicated, but let’s start with the basics. Here, I’ve boiled down the process to the most essential steps you need to take to conduct your first A/B test.

Step 1: Define a success metric.

The first step in creating an A/B test is to think about a metric that is critical to your business’ success. For example, if you own a hotel, then “bookings” would be an important metric. You may also want to consider testing secondary metrics. For instance, if you are a B2B business, the number of meetings your sales team sets might be an important metric, perhaps second to closed deals. Whatever metric you choose, it will serve as your starting point for figuring out what you will test.

Step 2: Gather data.

Once you’ve chosen an important metric, it’s time to analyze the funnel where that metric can be measured. Look specifically at where customers are dropping off. Gather all of this data and figure out which area may be good to test. Areas with high traffic, like your homepage, are especially great places to analyze, as a test on that page will produce results faster. As an example, if you are having trouble booking sales meetings, you may want to test the “Request a demo” CTA on your homepage, if that is the primary way people book meetings. Another place to look for inspiration is important segments, like device or browser. Perhaps you find out that conversion rates for your “Request a demo” CTA are lower on mobile vs. desktop (a check like the sketch below can surface this). Why could this be? Sounds like a great place to test!
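As a sketch of what that segment digging can look like in practice, here is a hypothetical pandas snippet, assuming you can export page events with a device column and a converted flag (the file and column names are illustrative):

```python
import pandas as pd

# Assumed analytics export; "device" and "converted" are illustrative columns.
events = pd.read_csv("homepage_visits.csv")

# Conversion rate on the "Request a demo" CTA, split by device.
by_device = events.groupby("device")["converted"].agg(
    visitors="count", conversions="sum", rate="mean"
)
print(by_device)  # a markedly lower mobile rate points to a place to test
```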

Step 3: Formulate a hypothesis.

Based on the data you’ve gathered, come up with a hypothesis for what you want to test, what you think will happen when you test it, and why you think this will happen. Some things you can test include copy, layout, CTA, offer and more. Testing is a chance to challenge assumptions, so be bold with your hypothesis.

Here’s a general template: If we change [this], then [this will happen] because [this reason].

Here are some examples of possible hypotheses:

If we change the copy of our CTA from “Contact Sales” to “Schedule a demo”, then more people will click on the button because “Schedule a demo” sounds friendlier and is about a customer benefit. It is focused on the customer receiving something (a demo), rather than Sales getting something (an email address).

If we move our customer validation logos above the fold on our marketing asset, then more people will convert because they will see the logos first and therefore trust us more.

A simple yet very important type of test you can set up is called an existence test. In essence, you test whether the existence of a particular element, say, a secondary CTA, is helping performance. To do so, you direct part of your traffic to a version with the selected element and another version without it (see the sketch below). This type of test helps you narrow down which elements are boosting performance, and which ones may be holding your website back. Here is an example of an existence test hypothesis: If we remove the customer validation logos, then our conversion rate will drop because people will not trust us.
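As a rough illustration of how an existence test splits traffic, here is a Python sketch that deterministically shows or hides the logos per user. All names are illustrative, and the hashing helper mirrors the bucketing sketch from the tech stack section:

```python
import hashlib

def in_variation_a(user_id: str, experiment: str) -> bool:
    """Sticky 50/50 split based on a hash of the user and experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 2 == 0

def render_landing_page(user_id: str) -> str:
    # Variation A keeps the customer validation logos; Variation B omits them.
    logos = '<section class="customer-logos">...</section>'
    shown = logos if in_variation_a(user_id, "logo-existence-test") else ""
    return f"<main><h1>Our product</h1>{shown}</main>"
```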

Step 4: Check your sample size.

Before you can test your hypothesis, you need to determine your sample size. It’s important to preface this by saying that A/B testing can be done on websites with anywhere from a small to a large amount of traffic. Nonetheless, in order to determine how long you need to run your test, your test needs to reach a minimum number of participants. Use an online sample size calculator to find out your required sample size.
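If you’d rather compute it yourself, here is a sketch using statsmodels’ power calculations. The numbers are illustrative assumptions: a 5% baseline conversion rate, a hoped-for lift to 6%, and the common defaults of 5% significance and 80% power.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative inputs: 5% baseline conversion, hoping to detect a lift to 6%.
effect = proportion_effectsize(0.05, 0.06)
n = NormalIndPower().solve_power(effect, alpha=0.05, power=0.8, ratio=1.0)
print(f"Roughly {round(n):,} participants needed per variation")
```

The smaller the lift you want to detect, the larger the sample you need, which is why sweeping early-stage changes can get away with less traffic.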

Step 5: Setup.

Now that you’ve squared away your minimum sample size and your hypothesis, it’s time to set up your test. Setup depends entirely on the A/B testing solution you are using (see the Setting up your A/B Testing Tech Stack section above).

Step 6: QA, QA, QA!

No A/B test is complete without a thorough quality assurance process. Without QA, you run the risk of running a faulty test, producing faulty results and ultimately drawing false conclusions that can have negative consequences for your business. Run through your test multiple times. Have others test it. Try it on different browsers, devices, IP addresses, etc. Your A/B testing platform should have all of the staging capabilities necessary to QA your test.

Step 7: Launch!

Congratulations! You’ve launched – and now it’s time to monitor your test, but don’t try to call it too early. Even if you reach your minimum sample size, there are a number of factors that may be affecting performance, and the goal is to ride out the seasonality of those effects. In general, a test should be live for at least 7 days, and ideally for two business cycles, to capture normal fluctuations.
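A quick back-of-the-envelope check ties your Step 4 sample size to how long the test must stay live. The numbers below are illustrative; plug in your own:

```python
# Illustrative inputs: substitute your own sample size and traffic figures.
needed_per_variation = 4200  # from the Step 4 sample size calculation
daily_visitors = 1000        # average daily traffic to the tested page
variations = 2

days = needed_per_variation * variations / daily_visitors
print(f"Minimum run time: about {days:.0f} days")  # still respect the 7-day floor
```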

Step 8: Call the test and analyze.

Before you can call a test, it’s important to use an A/B testing statistical significance calculator (or run the check yourself, as sketched below). As a rule, a test that shows a gain with 95% certainty can be declared a winner, although you can go even lower based on your risk tolerance. As noted above, be sure to run your test long enough before checking the statistical significance of the results.

Even so, you should also consider digging deeper and segmenting your test. While it may seem that your hypothesis was wrong, you may discover that one particular demographic greatly favored your proposed variation. If so, then perhaps it is worth targeting those customers with that message. Or maybe your test revealed that your experience is not optimized for mobile. Whatever insights they provide, I strongly encourage you to segment your tests.

You also learn a lot from failed or inconclusive tests. Take a look at your assumptions – what might you have gotten wrong, and what can you test in the future? If, in the end, your results are tied, you can elect to run more radical tests (for instance, proposing a whole redesign vs. just a button color change), or even choose to implement one of the variations based on the experience you think would be best for the user.

Regardless of your results, you should always take time to analyze them. Ask yourself: What did we learn about our users? What can we do to improve our processes? How will these insights inform future testing?

For example, say this was your hypothesis: If we move our customer validation logos above the fold on our marketing asset, then more people will convert because they will see the logos first and therefore trust us more. If your results come back flat or negative, then perhaps you should consider testing further. Perhaps the logos you are using don’t resonate with customers? Perhaps your logos aren’t prominent enough? There are many ways you can go about it; the important part is reflecting and using all of the information your customers give you to inform further testing.
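If you prefer to check significance yourself rather than use a calculator, here is a sketch of a two-proportion z-test with statsmodels; the counts are illustrative:

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative results: conversions and visitors for Variations A and B.
conversions = [120, 165]
visitors = [2400, 2400]

stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.3f}")  # below 0.05 corresponds to 95% certainty
```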

Step 9: Document.

Finally, it’s important to thoroughly document your tests, conclusions and analysis, in order to build a testing culture in your company. Testing early and often will help you keep up with changes in the market and your customers’ needs and attitudes, and can help drive your organization’s growth. With this in mind, make it a goal to always be testing some aspect of your business. You may be surprised by the insights you uncover.

Step 10: Rinse & Repeat!

There is always another test to run. As you analyze and log your results, formulate new hypotheses and repeat steps 1-9! Iteration is the key to success.