
As a User Research Lead, I’ll always be the first to argue that having experienced, full-time researcher(s) is the best way for a company to harness the power of UX research methods. That being said, reality is a thing—not all companies and organizations have these resources available, and not all management teams are convinced that it’s worth rethinking the team structure and adding roles for researchers.  

This means that often, non-researchers are left to do research without really knowing much about UX research methods or user research in general. Just because you've studied UX design or anything else related to user experience doesn't necessarily mean that you know how to do reliable user research.

If this is you, you may have already been Googling away at things like user interview best practices, but I’m here to tell you that:

1) There are a lot of user research methods and the research toolkit contains a lot more than interviewing, and

2) Choosing the right methodology for your UX research goals is a key factor in determining whether your research will be relevant and actionable. 

Below, I'll walk you through an overview of user research methods and which types of research questions they’re best suited for. You’ll notice that, overall, I’m embracing the mixed methods research approach, which means that you’ll find both quantitative methods and qualitative methods to ensure that you’ve got the right tool for each research goal.  

The good news is that you’ll find that user research involves a lot of the same concepts that one learns when studying UX design or User Experience, so even though there is a lot to learn, you won’t likely be starting off from a place of cluelessness. Let’s dive in!

Before choosing a user research method, define your research goal. 

Defining good research questions is the first step in doing good user research, and it’s impossible to be sure that you’re choosing the right UX research methods if you haven’t specifically defined what you’re trying to learn. 

User research is different from academic research in the sense that your goal, in all likelihood, is not to just chip away at building an internal body of knowledge. You want to learn about your users as efficiently as possible so that you can make key design decisions for your website or product. So when you’re defining your research question, ask yourself:

What do I need to learn about users in order to move forward with this project, feature, etc.?

It’s based on the answer to this question that you’ll choose the right research techniques.

Good user research questions seek to answer the specific questions that you need to move forward with your work. They should also challenge your internal assumptions. “Do users like this layout?” is less useful than "Are users able to navigate this layout successfully?"  

Your research questions may be usability-oriented, like in the example above, or they may be more general. For example: "What unaddressed pain points do our users have?"

A research question that’s more exploratory can help you generate ideas for your user personas, customer journey map, service design blueprint, or product roadmap.

Whether you’re doing research to generate ideas or to evaluate certain aspects of your existing offering, the bottom line is to know your goal before you choose your method. 

Overview of common user research methods and the research questions they address

Alright—you know what you want to know. The next step in doing successful user research is to choose your user research methods intentionally. Don’t assume that you’ll just talk to users or ship a quick survey. The user research toolkit is big and with good reason: each method gives slightly different perspectives and has its strengths and weaknesses in terms of the types of insights that it can provide. Let’s dive in.

Usability Testing: a crucial evaluative method in any design process

Usability testing is a common user research method, and with good reason – it’s the most straightforward way to check whether or not a specific flow or feature is intuitive for your users. Almost always, usability testing results in a list of problems and accompanying potential solutions to improve the user experience.  

Usability testing can be done live with test participants (this is usually referred to as moderated testing) when you sit together and watch users try to complete tasks and flows. This enables you to ask questions as they go.  

That being said, unmoderated testing is often the more practical option: you can use platforms such as usertesting.com to screen for relevant users, who then complete the flows on their own time, leaving you with recordings to analyze. This means that you avoid having to schedule time with test participants and get the usability sessions that you need faster.

Note: usability testing is an evaluative research method, which means that it’s used to understand the effectiveness of something that exists within your current product, a competitor’s product, or a working prototype. If you’re looking to generate new ideas, you’ll want to look beyond usability testing for methodology—more on that later. 

User Interviews: current users and potential users

Interviewing is another very common user research method, and with good reason – it’s an excellent way to really dive in-depth into user sentiments, motivations, challenges, and behaviors. Interviews can be done face to face or virtually, and they allow you to get to know users holistically based on much more than how they interact with your brand.

Interviewing is a good method to choose when you’re searching for insights about people rather than answering usability questions. For example:

  • What motivates users to use your competitors' products? 
  • What's frustrating about your product? 
  • What unaddressed pain points does your target audience still have, despite all of the products on the market? 

Note: a lot of times, people refer to interviewing as talking to users, as though it were a casual affair. The truth is that interviewing is a science and it’s best to brush up on best practices to make sure that you conduct interviews that yield reliable data.

Secondary Research: a qualitative method to use when the data is already collected

Don’t fall into the trap of assuming that you always need to generate new data: often, there is a lot of existing data out there for you to analyze! For example, if you work at a company with a weight loss app, you can learn a lot about the value propositions and frustrations with your product and your competitors by looking at App Store reviews and social media commentary.

When you do secondary research, think to yourself:

Where are people talking about the brands in our space?

Create a plan for systematically finding and analyzing what’s out there to understand more about your brand—and your competition—from user perspectives.
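To make "systematically" a bit more concrete, here's a minimal sketch of one way to tally recurring themes across reviews you've collected. The review snippets and theme keywords are purely hypothetical; in a real study you'd derive the themes from an initial read-through of the data.

```python
# Hypothetical review snippets collected from app store listings and social media.
reviews = [
    "Love the progress tracking, but the barcode scanner keeps failing",
    "Too expensive compared to the competitor app I switched from",
    "The meal logging is quick, wish there was a dark mode",
]

# Purely illustrative theme keywords; in practice these come from reading the data first.
themes = {
    "pricing": ["expensive", "price", "subscription"],
    "logging": ["logging", "scanner", "track"],
    "ui": ["dark mode", "layout", "design"],
}

# Count how many reviews touch on each theme.
counts = {theme: 0 for theme in themes}
for review in reviews:
    text = review.lower()
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            counts[theme] += 1

for theme, count in counts.items():
    print(f"{theme}: mentioned in {count} of {len(reviews)} reviews")
```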

Surveys: a classic quantitative method when you need a sense of scale

Surveys can be overused or under-utilized because their true value isn’t always well understood. The rule of thumb is this: surveys are most useful for answering simple questions about user sentiments and behaviors where quantitative significance is important. 

What this means is that you'll almost certainly get more in-depth, richer insights about your target audience from methods like interviewing—but sometimes, you really need to know how pervasive certain things are in terms of numbers.

For example: 75% of our power users use another product to compensate for a lack of features in ours. That’s not an insight that you can get from a sample of 15 user interviews, but it could be really useful if you’re trying to prioritize a feature roadmap.
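To illustrate what "quantitative significance" can look like in practice, here's a minimal sketch that puts a margin of error around a survey proportion using the standard normal approximation. The respondent counts are hypothetical.

```python
import math

# Hypothetical survey result: 150 of 200 surveyed power users say they
# use another product to compensate for missing features (75%).
respondents = 200
yes_answers = 150
p = yes_answers / respondents

# 95% confidence interval for a proportion (normal approximation).
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / respondents)
print(f"{p:.0%} ± {margin_of_error:.1%} of power users (95% confidence)")
```

With a sample that small from 15 interviews, the interval would be far too wide to say anything useful about scale, which is exactly why surveys earn their place here.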

Like interviewing, writing a survey is a true science. There is a rampant misconception that a survey is an informal questionnaire and that, as long as users understand your questions, you’re good to go. This is far from true, and it’s important to make sure that you’re aware of best practices when it comes to writing survey questions and interpreting survey data.

As far as practicalities go, you can create surveys that are embedded within your website or product if you'd like to gain insights about your own users. If you're looking for insights from your target audience in general, and not specifically your users, you can use platforms such as SurveyMonkey to create online surveys and buy responses from relevant participants.

Concept Testing: a method for the idea stage of the design process

Concept Testing is a go-to user research method when a team is working on something new. If you have an idea for a feature or a product iteration, you can use concept testing to figure out whether or not it actually answers user needs and also get a sense as to how users react to your concept in general. This can help you decide whether or not your idea is worth pursuing, and also give you some actionable insights for how to execute your new idea.

The way it works is that you show users some representation of your new idea: it could be anything from a few slides with visuals describing your idea all the way to a functional prototype. You ask questions to determine their reactions and sentiments, and then analyze the data from several different users.

Card Sorting: everything you need for your information architecture 

Card sorting is a tried and true qualitative method that helps you understand how your users or target audience conceptualize, order, and group various concepts. This type of qualitative data yields invaluable insights for the information architecture of a digital product.

The way it works is that research participants are asked to group and categorize cards with words or topics on them, and it can be done either virtually or in-person.  

For example, let’s say that you’re a UX Designer working on organizing the toolbar for a photo editing app. In this instance, you’re faced with decisions about which editing tools to group together. A card sorting exercise can help you understand which editing tools your target audience or users associate with each other and why. 

With that information, you can build a more intuitive toolbar where users are more likely to be able to find the various features that they need while editing a photo.
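If you run the card sort digitally, analysis often starts with a simple co-occurrence count: how many participants put each pair of cards in the same group? Here's a minimal sketch with hypothetical participants and editing-tool cards; the tool names and groupings are made up for illustration.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical open card sort results: each participant grouped the
# editing tools (cards) into whatever piles made sense to them.
sorts = {
    "P1": [["Crop", "Rotate", "Resize"], ["Brightness", "Contrast", "Saturation"]],
    "P2": [["Crop", "Resize"], ["Rotate", "Brightness"], ["Contrast", "Saturation"]],
    "P3": [["Crop", "Rotate", "Resize"], ["Brightness", "Contrast"], ["Saturation"]],
}

# Count how often each pair of cards lands in the same group.
co_occurrence = defaultdict(int)
for groups in sorts.values():
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1

# Pairs grouped together by most participants are strong candidates
# to sit together in the toolbar's information architecture.
for (a, b), count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```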

Participatory Design: a qualitative method to spice up your design process

Participatory design is a research method where we shift our mindset and regard our user base as partners in the design and product development process. Users are usually given art and craft materials, or just writing materials, and asked to ‘design’ their ideal experience.  

For example, if you’re working on a web platform that helps small businesses manage their finances, you may conduct a participatory design session where users design their ideal homepage.

Unless you’re building a platform for designers or product managers, your users are unlikely to give you actual features or design concepts in your participatory design session. However, the purpose of this research method is to give you a sense of what is important to your users in general, what your current platform is lacking, and so on. Pay careful attention to why your participants put various elements in their creations, because that is the relevant data to be analyzed later.

Focus Groups: classic, but proceed with caution

Focus groups are a controversial research method because in a group discussion, participants tend to influence each other. Without very skilled moderation, a focus group runs the risk of groupthink—whereby participants articulate themselves in a way that meets their social need for approval more than it reflects their true thoughts and experiences. For this reason, participatory design or 1:1 research methods are often chosen instead.

The primary benefit of a focus group is the ability to get data from multiple participants at once. If you do choose to run a focus group, be sure to brush up on best practices so that you can moderate it in a way that gives you the best data possible in spite of its limitations.

Customer Feedback: involve other teams in your design process

When deciding on which user research method to use based on your goals, don’t overlook the user data that you already have! If your company or organization has a Sales or Customer Support team, it’s likely that you already have a treasure trove of feedback internally that could be relevant to your research goals.

Often, customer-facing teams tag and categorize customer feedback based on topics. The best thing to do is to approach a colleague in Sales or Support, explain your research goals, and brainstorm what user feedback could be relevant. Generally speaking, these teams have feedback on everything from general sentiments to specific features.

A/B Testing: the quantitative method that trumps them all

Frequently, CX, marketing, and product teams have questions about user preferences. You may have two competing ideas for anything from copy to how a specific feature works. When you’re in this situation, you should always consider A/B testing.  

A/B testing is a research method where you put out two different versions of something, ideally with only one key difference or variable between them, and then take a look at which is more successful based on your key metrics.  

For example, if you’re working on a media site whose main KPI is Click Through Rate (CTR), you may want to test two different calls-to-action (CTAs) that lead users from one article to the next. You’d have two versions of your site released to equal numbers of users randomly, each with a different CTA. Ultimately, you’d compare: which version of the CTA yielded more clicks? That would end up being the winning variant of your A/B test that you’d then release to all of your users.

The main advantage of an A/B test over more qualitative research methods is that it yields quantitative data: you can achieve statistical significance and know for sure which of the versions you're testing impacts your key metrics in the most positive and significant way.
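To make that concrete, here's a minimal sketch of a two-proportion z-test on the CTA example above, with made-up click and impression counts. In practice, most experimentation platforms run this kind of check for you, so treat this as an illustration of the logic rather than a recommended workflow.

```python
import math

# Hypothetical results from the CTA experiment described above.
clicks_a, impressions_a = 1_150, 20_000   # variant A
clicks_b, impressions_b = 1_320, 20_000   # variant B

ctr_a = clicks_a / impressions_a
ctr_b = clicks_b / impressions_b

# Two-proportion z-test: is the difference in CTR larger than
# we'd expect from random noise alone?
pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
standard_error = math.sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
z = (ctr_b - ctr_a) / standard_error

print(f"CTR A: {ctr_a:.2%}, CTR B: {ctr_b:.2%}, z = {z:.2f}")
# |z| > 1.96 corresponds to significance at the usual 95% confidence level.
print("Statistically significant at 95%" if abs(z) > 1.96 else "Not significant; keep the test running")
```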

Note: Though A/B testing gives you a level of certainty that can be very beneficial for important product decisions, keep in mind that A/B testing is expensive in the sense that your development and design teams have to actually implement two versions of something. For that reason, you should reserve A/B testing for high-risk decisions and use other methods, such as concept testing, when there is less at stake.

Diary Studies: don’t let a good or bad experience affect your data

The diary study is a qualitative research method that's growing in popularity by the minute, mainly because it gives you a better sense of how your target audience behaves over a longer period of time.

Let’s back up for a second: qualitative methods like user interviews are invaluable, but they do have a serious limitation in the sense that when you interview a user, you’re getting a snapshot of their self-perception and perceived behaviors that is often affected by recent experiences. For example, if you were interviewing me about my experiences using a parking app and just this morning, it took me more than 20 minutes to find a spot, and I was overcharged—well, that’s going to influence what I tell you, even if most of my prior experiences were positive. 

Enter: diary studies.

In diary studies, research participants complete questionnaires or have periodic interviews several times during a given time period. Keeping with our parking app theme, your participants in a relevant diary study may complete a survey twice a day for 10 days about their morning and evening parking experiences. This way, you’ll mitigate the issue of one good or bad day skewing your data about users.

Integrating user research methods into your product or design flow can truly affect your bottom line.

Why rely on intuition when it comes to things like user behavior, information architecture, mental models, and preferences when you can know for sure? User Experience research delivers insights about what’s happening in the real world, making your work and the work of other stakeholders more likely to succeed.

Whether you’re a CX professional, a product manager, a UX designer, or on the marketing team—you’d likely benefit from incorporating user research methodology into your workflow. Start by using evaluative research methods like usability testing to look at your current flows from a user perspective, and progress from there.   

My journey to become a User Research Lead started out exactly like that. I began by incorporating user research methods into my work as a Product Marketing Manager and then, well, it’s a professional love story for another time. 

If you want to truly gain more user research skills beyond this overview, I wholeheartedly recommend the book Just Enough Research by Erika Hall. Though this book alone won’t turn you into a full-fledged user researcher, you’ll learn a lot more about best practices to increase the reliability and effectiveness of your work.

You could also consider a variety of online courses and content, including those offered by the Nielsen Norman Group or the Interaction Design Foundation.

One last thing—don’t forget to subscribe to the CX Lead newsletter, which is a constant flow of actionable tips for incorporating user voice into everything you do.  Happy researching!

By Cori Widen

Cori Widen currently leads the UX Research team at Lightricks. She worked in the tech industry for 10 years in various product marketing roles before honing in on her passion for understanding the user and transitioning to research.