There are lots of questions to ask ourselves in product design and development. Are we solving the right problems? Are we satisfying users’ expectations? Will it produce the business outcomes we expect? We all understand the importance of answering these questions with evidence.
The dilemma we face is that we need data and evidence for decision-making, yet finding it can take a lot of time. Unless the research activities have a clear purpose and structure, the process itself can be perceived as a bottleneck.
I believe it is possible to gather just enough evidence without slowing teams down. My approach is to break the work into three steps:
Step 1: Validate the problem.
Problems can be identified through multiple channels. For example, your customer success team sees multiple customers blocked during onboarding by similar issues. Your sales team hears comparable reasons why conversations are not moving forward. The support team receives related complaints from active users.
At this point, the key is to understand the true problem and the underlying issue behind it. This discovery stage requires collaboration between product management and product design. From a product design perspective, this is where user research can provide insights. To make this happen effectively, we can look at two types of data and mix them together to validate the problem.
- Quantitative data: Tools like Google Analytics or Pendo can show which types of users you have and where they are getting stuck.
- Qualitative data: Comments, narratives, and open-ended feedback from your users can be invaluable. If you are creating a new product, exploratory user interviews are great for understanding how and why people might use it.
- Mix & Match: The user research landscape is complex. The key is to gather insights from a variety of methodologies. Compare users’ attitudes (qualitative) with their true behavior (quantitative). I find this article from NN group shows a great summary.
Step 2: Validate your proposed solution.
By this point, designers have usually come up with a proposed solution. The focus here is to determine whether that solution will solve the problem and bring about the right outcome.
- First, I recommend finding which parts do NOT need research or testing. This is an essential step for scoping user testing. Design research groups and big companies publish their best practices and keep them up to date with design trends; leverage their research. For example, use common design heuristics that come from academic research, and save your energy and time for something more specific to your product.
- Conduct user testing with prototypes as early as possible. Even if the solution seems logical to you and your team, users may perceive it very differently, and skipping user testing will eventually create more design debt. Sharing the results from user testing is not just good for validation; it also reduces pushback from the internal team about why a particular path was taken. It is essential to communicate that the solution was validated by real users.
Step 3: Validate the outcome of the solution.
This is the step a lot of teams tend to delay or skip. After shipping a feature, it is very easy to forget about it and move on to something new, but that leaves the team without a smarter path forward. Understanding which features delivered the intended outcomes is crucial to improving a product. I propose reviewing the success metrics regularly and holistically with the entire product team.
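As a deliberately simplified example of such a review, the sketch below computes two common success metrics, adoption and repeat usage, from exported usage events. The feature, dates, and numbers are hypothetical; your own metrics should come from whatever success criteria the team defined before shipping.

```python
from datetime import date

# Hypothetical usage events for a feature shipped earlier in the quarter;
# each record is (user_id, day the feature was used).
events = [
    ("u1", date(2024, 3, 2)), ("u1", date(2024, 3, 20)),
    ("u2", date(2024, 3, 5)),
    ("u3", date(2024, 3, 6)), ("u3", date(2024, 4, 1)),
]
active_users = 50  # users active in the product over the same period

# Adoption: share of active users who tried the feature at least once.
adopters = {user for user, _ in events}
adoption_rate = len(adopters) / active_users

# Repeat usage: share of adopters who came back to the feature on a later day.
usage_days = {}
for user, day in events:
    usage_days.setdefault(user, set()).add(day)
repeat_rate = sum(1 for days in usage_days.values() if len(days) > 1) / len(adopters)

print(f"Adoption: {adoption_rate:.0%} of active users")
print(f"Repeat usage: {repeat_rate:.0%} of adopters")
```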
Bonus step: Use a design system to organize proven conventions with evidence. This is an extra step to (eventually) save time.
- Bottom to top: If your product design went through testing and delivered, publish its components, patterns, and experiences in the design system so they can be applied to other features. Any future user testing then becomes more streamlined because it builds on previously validated outcomes.
- Top to bottom: List common design best practices (sometimes called heuristics) that are relevant to your product in the design system. I usually write them up in a “design principles” section. Refine them for your product and reference existing research findings. Then audit your product’s components, content, patterns, and experiences against these design principles to see if they align; a sketch of an evidence-backed audit follows this list.
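One lightweight way to keep this evidence attached to the design system is to record, for each component, the studies that validated it and the principles it supports, then audit for gaps. The sketch below shows the idea; component names, principles, and study references are all hypothetical.

```python
# A minimal sketch of how design-system entries could carry their evidence.
# Component names, principles, and study references are all hypothetical.
design_principles = {
    "visibility_of_status": "The UI always shows what the system is doing.",
    "recognition_over_recall": "Options are visible instead of memorized.",
}

components = [
    {
        "name": "InlineStepper",
        "validated_by": ["usability-test-2024-03-onboarding"],  # prior user testing
        "principles": ["visibility_of_status"],
    },
    {
        "name": "LegacyWizardModal",
        "validated_by": [],
        "principles": [],
    },
]

# Audit: flag components with no research evidence and no principle mapping,
# so the team knows where future user testing should focus.
for component in components:
    missing = [p for p in component["principles"] if p not in design_principles]
    if not component["validated_by"] and not component["principles"]:
        print(f"{component['name']}: no evidence on record - candidate for testing")
    elif missing:
        print(f"{component['name']}: references unknown principles {missing}")
```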
Once your team’s design system has patterns backed by user testing and design principles aligned with your target user groups, your validation process will become more focused.