
Whether you’re a product or marketing professional or work alongside those teams in customer experience, you’re likely involved in key processes to improve your company’s products and performance, and you likely draw on user perspectives to do so. One of the most powerful tools in your toolkit is A/B testing.

Why? Because it takes the guesswork out of questions like: which of these feature versions best supports our KPIs? Who would pass up the chance to know for sure which version of a product iteration best supports their key metrics, like conversion rate or another engagement metric?

Answer: mostly no one. 

As a User Research Lead, I work with multiple teams who execute A/B tests in order to understand what each test teaches us about users and how it can inform better product iterations in the future. Let’s dive into what A/B testing is, when and why you should use it, how to do it, and how to use it to build internal knowledge about your user base.

What is A/B testing: Definition and example

Simply put, A/B testing, also known as split testing, is a method of comparing two versions of something to see which one performs better according to your key metrics. It is commonly used in product development to test different variations of a product or feature and their respective effects on KPIs.

In an A/B test, you create two versions of something (for example, a feature), and randomly assign different groups of users to see each version. By analyzing the results of the test, you can determine which version is more effective, and use this information to make data-driven decisions about your product.

Here’s an example:

Let’s say that your company has a mobile app that helps users plan large events. Your team has a feature that allows users to import their phone contacts and send invites directly via text messages. Internally, some members of the product and customer experience teams have different opinions about when you should ask a user to import their phone contacts: before or after they have customized the text invite. The main KPI of this particular feature is the percentage of users who start the invite process and ultimately send their text invites.

In an A/B test, your team develops the two different versions in question. The only variable, or difference, between the two versions is when the CTA to import phone contacts appears. Your team randomly assigns users to two groups, each seeing one version of the feature. Once you have enough data (ideally, reaching statistically significant results), you then look at each group: which one had a higher percentage of users who actually completed the flow and sent their text message invites? That’s the winning version, and you turn it on for all users.
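
To make that final comparison concrete, here’s a minimal sketch in Python of how the two groups might be compared once the test is done. The counts are hypothetical and just for illustration; in practice they would come from your analytics tool.

```python
# Hypothetical results -- real counts would come from your analytics tool.
started_a, sent_a = 4_200, 1_890   # variant A: import contacts before customizing the invite
started_b, sent_b = 4_150, 2_075   # variant B: import contacts after customizing the invite

rate_a = sent_a / started_a   # share of users who started the flow and sent their invites
rate_b = sent_b / started_b

print(f"Variant A completion rate: {rate_a:.1%}")   # 45.0%
print(f"Variant B completion rate: {rate_b:.1%}")   # 50.0%
print("Winning version:", "A" if rate_a > rate_b else "B")
```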

Author's Tip

I also love this resource for additional A/B testing examples, which can help you brainstorm the applicability of A/B testing in your business or organization. When you’re in the brainstorming stage, it’s also helpful to spend a bit of time searching for A/B testing examples specific to your market or target audience.

Why do A/B testing?

It’s definitely true that A/B testing uses a lot of resources—design, development, data analysis, and so on. But if you look at the values and goals of most companies, you’ll find that they’re often very aligned with the benefits of A/B testing. Let’s take a look at some of these goals and values, which can help you make a case for an A/B testing project. 


Data-driven decision-making

A/B testing allows you to make data-driven decisions about your product, rather than relying on intuition or guesswork. By testing different variations of a feature, you can determine which version is more effective and make decisions based on hard data. I have yet to meet anyone on any product team, anywhere, who doesn’t aspire to be data-driven. A/B testing is a key tool for any data-driven product team.

Improved user experience

By testing different versions of a product or feature, you can identify ways to improve the user experience. This could mean making a button in your app more prominent, changing the color of a CTA, or streamlining a checkout process on an e-commerce site. A/B test results tell you which flows, features, and variables work best for your users based on whichever key metrics you choose. Often, when we skip A/B testing, we find out that we guessed wrong and spend even more resources correcting the mistake, which also lengthens the timeline for optimizing a feature to the point of user satisfaction.

Potential revenue gains

When you run an A/B test, you choose the key metric(s) by which you’ll decide the winner when you get the test results. This means that if revenue is one of your key goals, you can actually know for sure which version of a product or feature results in the highest revenue.  

Building knowledge about users over time

We’ll talk more about this later, but another key benefit of integrating A/B testing into your product workflow is that over time, you build knowledge about users. It’s not only that each A/B test helps you make decisions in the moment, but it also teaches you something about how your users interact with your product, and that knowledge can help in future product work.


The A/B testing process in 7 simple steps

Here we are—the fun part! You’re already convinced that A/B testing the placement of your call-to-action or the best way to execute an exciting new feature is the way to go. How do you do it?

Step 1: Define your objective

Before you start an A/B test, it's important to define your objective. What do you hope to achieve with the test? The key here is that your objective must be a metric. In other words, you must be able to know in numbers how each variable performs against the other.

For example:

Creating an easier user experience is a fine goal to have in mind, but it doesn’t count as an A/B test objective. Let’s say that you have an online store. A good objective for an A/B test may be something like: increase the percentage of users who complete the checkout process and make a purchase, or, more simply, the paid conversion rate. It’s measurable in numbers, and it likely aligns with your overall business goals.

Or, let’s say that you’re using A/B testing to optimize a specific element of your marketing strategy. You may release two versions of a landing page, each with the same campaign messaging but a different button color for the main CTA. Your A/B test objective may be the click-through rate (CTR) on that button.
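
As a quick illustration of what “measurable in numbers” means, here’s a minimal sketch of how those two objectives translate into metrics. The event counts are hypothetical; in practice they come from your analytics.

```python
# Hypothetical event counts -- in practice these come from your analytics tool.
checkout_starts, purchases = 10_000, 1_150   # online store example
impressions, cta_clicks = 25_000, 900        # landing page example

paid_conversion_rate = purchases / checkout_starts
click_through_rate = cta_clicks / impressions

print(f"Paid conversion rate: {paid_conversion_rate:.1%}")    # 11.5%
print(f"CTA click-through rate: {click_through_rate:.1%}")    # 3.6%
```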

Author's Tip

Your key metrics could be positive—like increasing open rates for your email list or increasing revenue via a pricing experiment—or they could be aimed at mitigating harm, such as reducing email bounce rates or mobile app uninstalls. Things like conversion rate optimization are key goals for A/B testing, but keep in mind that there are plenty of other options.
Before you go through the rest of the steps, make sure that you and your team define the metrics by which you’ll determine the winner based on your overall goals.

Step 2: Identify the variable to test

Once you have defined your objective, the next step is to identify the variable you want to test. A variable is the difference between the two versions in your A/B test, and I’m going to say something kind of extreme about the variable:

If you have more than one variable, the results of your A/B test are limited in how much they can teach you. 

If your two versions have more than one difference between them, you will still be able to see which version won based on your key metric, but you won’t know why, and that can get sticky. Let’s go back to our landing page example. Each page has a CTA with a different button color, and let’s say that the marketing copy is different between the two versions as well. You may see that version B more positively affects your KPIs, but you won’t know whether it was the button color, the different copy, or both.

This matters primarily because of future iterations: what if you want to change your UI colors or your marketing copy later? You won’t know which element drove the result, or how to formulate your next A/B test strategically. So the rule is: one A/B test, one variable.

That being said, there is something called multivariate testing, which is what it sounds like: testing versions of something that differ in multiple variables. The rules and steps for executing a multivariate test are different and outside the scope of this guide, but we love this resource as a starting point.

Author's Tip

Since in an A/B test you only have one variable, try to choose a variable based on your team’s hypothesis around which aspect of your feature or other iteration will affect your key metric. For example, if your key metric is a paid conversion rate, the variable that you’re testing should be closely related to where and how users decide whether or not to pay for your product or service.

Step 3: Create your variations

By now, you’ve defined your goals and you’ve decided what the one variable between your two versions is going to be. This is the point at which your designers and developers get to work and produce both versions.

When planning your A/B test and letting other stakeholders know when to expect results, make sure that you consult with the design and development teams about their timeline: how long will it take them to create the two versions that you need?

Step 4: Set up your test

Making sure that your test is set up properly is a crucial step. Here’s a quick checklist of things to look out for that may or may not be relevant to your company or organization.

  • Have you checked both versions of your web page/mobile app/feature to make sure that they both work properly?
  • Is your data or product team aware that this A/B test is going to happen, and have they allocated the time to analyze it as the data comes in?
  • Do you have a dashboard where key KPIs and user behavior metrics can be monitored as the A/B test runs? This is also important to ensure that one of the versions isn’t affecting a key metric in a drastically negative way, in which case you may want to turn off the test and investigate.

In terms of executing your test, it could be that your data team already has an internal mechanism for doing so. If not, there is a wide variety of A/B testing tools out there, everything from basic Google Analytics functionality to more complex SaaS A/B testing platforms. Discuss internally which option best meets your needs.
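
If your team does roll its own mechanism, a common approach is to hash a stable user ID so that each user is assigned a variant at random but always sees the same version. Here’s a minimal sketch of that idea; the experiment name and 50/50 split are assumptions for illustration.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "contact-import-timing") -> str:
    """Deterministically assign a user to variant A or B for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100            # map the hash to a bucket from 0 to 99
    return "A" if bucket < 50 else "B"        # 50/50 split between the two versions

print(assign_variant("user-12345"))   # the same user always gets the same variant
```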

Step 5: Collect and analyze your data

Consult with your internal data team about when you’ve reached statistical significance, meaning that enough users have been exposed to each of your A/B test versions that you can reliably draw conclusions from the current sample size. You want to avoid making the call too soon, when the user groups aren’t yet big enough to draw conclusions from.
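
For reference, here’s a minimal sketch of one common significance check, a two-proportion z-test, applied to the hypothetical invite-flow counts from the earlier example. Your data team may well prefer a different test or a dedicated experimentation platform, so treat this as an illustration rather than a prescription.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare two conversion rates; returns the z statistic and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

# Hypothetical counts: users who sent invites out of users who started the flow.
z, p = two_proportion_z_test(conv_a=1_890, n_a=4_200, conv_b=2_075, n_b=4_150)
print(f"z = {z:.2f}, p = {p:.4f}")   # a p-value below 0.05 is a common (not universal) bar
```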

Once they’ve given the green light, work with them to look at the results of the test. You’ve already decided on your key metric(s) by which you’ll determine the winning variant, so this part should be relatively quick and straightforward.

Step 6: Implement the winning version

After you have analyzed your data, it’s time to implement the winning version of your A/B test. At this stage, it’s important to make sure that all stakeholders are aware of the A/B test results and how you determined the winning variant. If there is a relevant Slack channel or something along those lines, even better: you’ve just made a decision about the user experience, and you want transparency around when, why, and how.

Once everyone is aware of the decision and it’s time to move forward, you can ask your tech and product teams to turn on the winning variant for all users, and watch your key metric go up and up!

That being said, your work isn’t quite done yet…

Step 7: Iterate and optimize

It’s true that the core work of your A/B test is over at this point, but it’s important to take some time to understand what you’ve learned and think about future iterations of whatever you tested. Remember that just because your winning variant was better than the other variant based on your test objectives doesn’t mean that there isn’t an even better iteration for your next round of A/B testing.

Have a talk with everyone who was involved in your A/B test and discuss the variable: what are your hypotheses around why the winning variant most positively influenced your KPI? What was it about the losing variant that made it fall short in this case? You won’t be able to know for sure without doing some qualitative user research, but even just talking through your hypotheses can help you think about your next iterations and future A/B tests to continuously optimize the user experience.

Use A/B testing to build knowledge about your user base over time

Though each A/B test is designed to collect data about one specific product iteration and make a real-time call on a specific issue, you’ll see that once you incorporate A/B testing methodology into your core product workflows, you’re actually amassing a lot of knowledge about your user base that could also have some longevity.

Your team may want to consider using something as simple as a spreadsheet or as robust as a research and experimentation repository to track and tag your insights from each A/B test, along with other internal insights about users.
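
If a spreadsheet feels too manual, even a tiny script can keep the log going. Here’s a minimal sketch of a CSV-based insights log; the file name, columns, and tags are all assumptions, and a research repository works just as well.

```python
import csv
from datetime import date

def log_ab_insight(path, experiment, winning_variant, key_metric, result, tags):
    """Append one A/B test insight to a shared CSV log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            date.today().isoformat(), experiment, winning_variant,
            key_metric, result, ";".join(tags),
        ])

log_ab_insight("ab_test_insights.csv", "contact-import-timing", "B",
               "invite completion rate", "45.0% -> 50.0%", ["CTA placement", "invite flow"])
```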

Over time, it’s likely that you’ll notice patterns. For example, if one specific CTA button color or subject line is usually the winning variant, that’s an insight with potential longevity, especially when it’s backed by more than one A/B test. Another example: your team might notice that copy changes tend to have a less substantial effect than UI changes on conversion-rate-oriented A/B tests.

The possibilities are endless and highly dependent on the nature of your product, but the overall point is that you should review A/B test data with some level of regularity so that you can not only gain insights in the moment, but also build knowledge over time.

Your A/B testing journey is going to be fruitful, guaranteed! 

A/B testing is far and away one of the most powerful tools you have as a user-facing or product professional for optimizing the user experience, delivering value to your users, and meeting your key goals and objectives.

Hopefully by now, you feel like you have the know-how to move forward with A/B testing methodology. But if you're hungry for more knowledge, be sure to subscribe to the CX Lead newsletter for a regular dose of curated articles that can help you reach your goals.


Happy testing!

By Cori Widen

Cori Widen currently leads the UX Research team at Lightricks. She worked in the tech industry for 10 years in various product marketing roles before honing in on her passion for understanding the user and transitioning to research.