Growth Playbook

How To Run An Effective Testing Program At Scale

By Khattaab Khan, Product Lead at Shopify, formerly Product at TikTok, LinkedIn

To create an effective testing program, you need two things: a defined workflow and a culture in which people feel empowered.

In this post, we will go through the phases of the testing cycle and break them down into actionable tactics so you can go forth and build a successful testing program within your organization.

With that, let’s dig right in!

🧠 Step 1: Brainstorm

Getting started with testing can seem intimidating. How do I generate test ideas?

To generate test ideas, focus on optimization opportunities that matter by evaluating your user paths. Concentrate testing on problem areas that drive business value.

👟 Evaluate User Path

For instance, if you run an ecommerce site, you can evaluate the user path from product pages to checkout. Using Google Analytics, Optimizely, or another analytics tool, explore your funnel for key drop-off and conversion points and document them like the example below.

As you can see from this flow, 48% of users exit the website from the product page. What could be preventing them from adding the product to the cart? Perhaps the product page does not offer enough information, and they go searching for the product elsewhere?
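This kind of funnel review is easy to automate once you export step-level counts from your analytics tool. Below is a minimal sketch; the step names and visitor counts are illustrative (chosen so the product-page drop-off matches the 48% figure above), not real data from any tool:

```python
# Hypothetical funnel counts exported from an analytics tool.
funnel = [
    ("Product page", 10_000),
    ("Add to cart", 5_200),
    ("Checkout", 2_600),
    ("Purchase", 1_900),
]

# Compare each step with the next to find the biggest drop-off points.
for (step, users), (_, next_users) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_users / users
    print(f"{step} -> next step: {drop_off:.0%} drop off")
```

The steps with the largest drop-off percentages are your candidate problem areas for testing.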

🤔 Hypothesize

Using insights from exploring your user path, create a list of hypotheses and tests you’d like to run and plug them into a spreadsheet like this:

In the end, at the heart of generating test ideas is creativity and the willingness to explore beyond the familiar. Don’t be afraid to write down all of the “out there” ideas that come to mind – you can refine them later in the prioritization phase.

Here are more ideas to consider testing:

  • Number of questions in a form
  • Call to action button copy
  • Landing page design
  • Call to action button placement

The next section is about “reeling it back in” and narrowing your hypotheses down into viable test ideas.

🔢 Step 2: Prioritization

While you may want to test all of your ideas, limited resources oblige you to prioritize tests intelligently. Here’s a simple way to get started:

For each test idea, assign a value of low, medium, or high for effort and potential impact.

For example:

With this high-level overview of all of your tests in hand, you can assign priority based on your team’s capacity and available resources. For example, if your team is new to testing and perhaps a bit skeptical, you may want to run tests with medium-to-high impact and low effort first. This makes your team more comfortable with testing and ensures they understand the fundamentals before you dive into more complicated, resource-intensive tests.
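The low/medium/high ratings are easy to turn into a sortable backlog. Here is a small sketch; the test names and ratings are made up for illustration:

```python
# Map the low/medium/high labels to numbers so ideas can be ranked.
LEVEL = {"low": 1, "medium": 2, "high": 3}

ideas = [
    {"test": "CTA button copy",       "impact": "medium", "effort": "low"},
    {"test": "Landing page redesign", "impact": "high",   "effort": "high"},
    {"test": "Form question count",   "impact": "high",   "effort": "low"},
]

# Rank by impact (descending), then effort (ascending), so
# high-impact, low-effort tests land at the top of the backlog.
ranked = sorted(ideas, key=lambda i: (-LEVEL[i["impact"]], LEVEL[i["effort"]]))
for idea in ranked:
    print(f'{idea["test"]}: impact={idea["impact"]}, effort={idea["effort"]}')
```

A team new to testing would start at the top of this list, exactly as suggested above.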

Advanced Custom Scoring

While the chart above is a great first step, those who want a more advanced approach to prioritization should consider custom scoring. With custom scoring, you reduce bias as much as possible by assigning points to each test based on the following criteria:

After going through this exercise, see which tests have the highest scores and prioritize from there.
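Mechanically, custom scoring is a weighted sum of ratings per criterion. The criteria and weights below are hypothetical placeholders (the post’s actual criteria are not reproduced here); substitute your own:

```python
# Hypothetical criteria and weights -- replace with your own.
WEIGHTS = {"traffic": 3, "ease": 2, "strategic_fit": 1}

def score(ratings: dict) -> int:
    """Weighted sum of 1-5 ratings, one rating per criterion."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

tests = {
    "Checkout button color": {"traffic": 5, "ease": 5, "strategic_fit": 2},
    "New pricing page":      {"traffic": 3, "ease": 1, "strategic_fit": 5},
}

# Highest score first = highest priority.
for name, ratings in sorted(tests.items(), key=lambda t: -score(t[1])):
    print(name, score(ratings))
```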

📄 Step 3: Documentation & Design

Naturally, when it comes to testing, there may be some hesitation from key stakeholders. However, you can reassure them by emphasizing that any change made for a test is temporary and will be carefully documented before any permanent decisions are made. Effective planning documents are necessary to build trust within your organization and hold your team accountable.

Additionally, detailed documentation ensures organizational consistency and helps preserve organizational knowledge when people move on. As your testing program grows and becomes more ingrained in the fabric of your organization, it’s important to reflect on the findings of past tests to inform future tests and avoid repeating them.

Remember to make your documentation publicly available – whether as a spreadsheet, a knowledge base, or through other means. Keeping people informed will not only make them more invested in fostering a testing culture, but it can also be fun. For example, you can ask people which version they think will win a test. Adding a little competition makes things more exciting and also showcases the fact that results are not always what we expect, thereby reinforcing a minimum-bias testing culture.

Your document should also include information about the resources you will be using for the test. Work with key managers to budget for designer and developer needs and hours, and set aside plenty of time for quality assurance (QA). As a general rule, be as detailed as possible so that those outside the testing team can understand the goals without much context.

Lastly, whatever analytics tool you opt to use, keep your documentation secure by saving a copy outside of the testing platform. You don’t want to lose years of knowledge if something happens to the tool.

🏃 Step 4: Setup, QA, Run

For exact details on how to run a successful A/B test, see Colin Gardiner’s “The 10 Steps To Launching An AB Test”.

The process usually differs from test to test, but here are some general guidelines you should follow:

  • Use a homegrown tool or a tool like Optimizely to set up your test
  • Calculate the minimum sample size you need to complete the test with this calculator
  • Do not modify any variables during a test
  • Ensure that experiments work correctly by following a thorough QA process. Make sure to:
      • Walk through all the possible visitor flows to ensure that they work as expected
      • Double-check that your goals are being tracked
      • Properly segment the visitors you are trying to target

After a proper QA process, it’s time to launch! Continuously monitor your test and regularly update your team to keep the test top of mind.
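If you’d rather not rely on an online calculator, the minimum sample size per variation can be approximated with the standard two-proportion formula. A sketch using only the Python standard library (the 5% baseline and 1-point lift are example inputs, not recommendations):

```python
from statistics import NormalDist

def min_sample_size(p_baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed *per variation* to detect an
    absolute lift of `mde` over baseline conversion rate `p_baseline`."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_b = NormalDist().inv_cdf(power)          # statistical power
    p1, p2 = p_baseline, p_baseline + mde
    p_bar = (p1 + p2) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return int(n) + 1  # round up to be conservative

# Example: 5% baseline conversion, detect a 1-point absolute lift
print(min_sample_size(0.05, 0.01))
```

Note how quickly the required sample grows as the detectable lift shrinks; this is why small sites should test bigger, bolder changes.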

📉 Step 5: Analytics & Review

Now that the test is done, it’s time to analyze the results and get ready to share them with your team. Analyzing your results correctly is crucial. You’ve worked hard to minimize bias up until now, and it’s important not to jump to conclusions at this stage either.

At the end of a test, you may be excited to see that Variation B seemingly did better than Variation A, the control. However, things aren’t always what they seem, and it’s important that your excitement isn’t premature. This is where statistical significance comes into play.

While on the surface Variation B may have received more clicks (or whatever other metric you were testing), without statistical significance it may have just been a fluke. Before drawing any conclusions, use this free statistical significance calculator. Refer to the A/B testing article to delve deeper into statistical significance.
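For the curious, such calculators typically run a two-proportion z-test under the hood. A minimal sketch with the Python standard library (the conversion counts are invented example numbers):

```python
from statistics import NormalDist

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return z, p_value

# Variation B got more clicks, but is it significant at alpha = 0.05?
z, p = ab_significance(conv_a=200, n_a=4000, conv_b=240, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant only if p < 0.05
```

If the p-value comes out above your significance threshold, the observed lift may indeed be “just a fluke,” and the test should not be declared a winner.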

📣 Step 6: Results sharing

Hold your team accountable to the strategy that inspired the variation design by sharing the results and learnings company-wide. Were your hypotheses confirmed or disproven? Was there anything that surprised you? Be transparent about any hiccups, such as bugs, and how they might have affected the results.

For example, you can use a template like this one to provide a 2-minute overview:

When you share your results, you can get your team excited about the process and spread testing culture to other teams.

It’s important to remember that testing is an iterative process; it never actually stops. Things can change from one quarter to another, so it’s important to retest every so often, especially for tests with closer results.

Many businesses start off by relying on intuition and looking at other companies’ successes – but this method will only take you so far. To optimize your web presence and outmaneuver your competition, you need to run objective tests on your own site and product, as what worked for one company may not be entirely applicable to yours.

🦩 Closing Thoughts

Ultimately, testing is an enablement tool, because you can leverage your findings to focus on goals that directly drive business value. Testing is also incredibly humbling. It forces you to challenge your assumptions – which, more than likely, are barring you from reaching the next level – and ultimately helps improve your funnels. Of course, testing can also be quite fun. Your users may behave in ways that you hadn’t predicted, and you’ll most likely encounter some surprising results.