12/36 - User interview analysis – turning raw data into insights effectively
“It’s the act of taking raw data and turning it into something useful.”
🎓 Revolutionize your product management process and turn your ideas into successful products with our industry-leading product discovery course.
What is user interview analysis?
User research analysis is the umbrella term for the process of classifying, organizing, and transforming raw data into valuable information, and eventually a conclusion. When performed correctly, your analysis will generate the building blocks you’ll need to construct your research deliverables. Since data can be interpreted in an infinite number of ways, part of your job as a researcher is to decide how to analyze your data and use it to tell a compelling story.
Analyzing user interviews is the most exciting, but also the messiest, step in conducting user research. It’s exciting because it can reveal eye-opening insights that help to create game-changing products and services. At the same time, it’s messy because there is no standard procedure to follow, no objective measure of progress or success, and the sheer amount of unstructured data can be paralyzing.
Research analysis vs. synthesis
Research analysis, as defined above, is the process of sorting and categorizing data.
Synthesis involves interpreting research data and pulling out insights and key findings that can be used to impact decisions.
Synthesis may follow once all the analysis is done, or the two processes might happen more or less in tandem, depending on the methods you use at this stage.
Together, research analysis and synthesis are key processes that create meaning from raw data. With this meaning, you can make better, more informed decisions.
Challenges and common mistakes in analyzing research data
Before you begin analyzing your data, you should be aware of some of the most common mistakes people make during analysis, including:
Presenting a large volume of unassimilated and uncategorized data in an effort to be “perfectly objective”.
Letting initial biases or assumptions about the study focus go unexamined, and thus misrepresenting the relevant and important data.
Over-reduction, or “flattening” of the data collected into close-ended survey responses (i.e. binary “yes or no” questions).
Jumping to decisions based only on statistical numbers (e.g. redesigning a full page because of a high bounce rate—when the real issue might be as small as a poorly-placed button).
Getting lost in the details and simply rehashing information collected during the study, without applying any real analysis to that information.
When analyzing qualitative data, ask yourself the following questions:
What are the major patterns and common themes in users’ responses?
Did any findings surprise you, your colleagues, and/or the client? How so?
In what context did users express the greatest emotional response to questions?
What interesting user stories emerged from the responses?
How do people view this product overall and how does it fit into their daily lives? How indispensable is this product to them? Why?
What features were most important to these users?
What did they like most about this product? What did they like least about this product? Why?
What values are most important to these users?
How are these users different from other users?
Are there any use-cases not adequately supported by the current interface?
These questions should be in the back of your mind the second you start collecting data.
How to do data analysis in user research
We start by introducing the goals of interview analysis, shed light on what good results look like, and show when analysis can take place. The core part is a step-by-step approach describing how to analyze user interviews for the best outcomes.
The goal of analyzing user interviews
The question about the goal of analyzing user interviews may seem trivial, but it turns out it’s not. In fact, there are two goals, both of which are important: one is obvious, the other less so.
Turn raw data into insights
This is what most people think of when it comes to interview analysis: you start with a pile of notes and recordings and extract valuable learnings from them. And indeed, the transformation of raw data into insights is a central objective of the analysis phase. While analysis makes raw interview data more actionable, it is important to note that this step doesn’t generate absolute truths. At best, we gain increased certainty about a hypothesis or approach.
Get buy-in from stakeholders
The second and less obvious goal of the analysis phase is to achieve buy-in among stakeholders and get them behind the findings. Insights are only valuable if they are subsequently used as a basis for decisions. This requires that colleagues or clients truly understand, acknowledge, and retain these insights.
A proven way to achieve that is to let stakeholders actively participate in the analysis instead of merely presenting results to them. The activity of analyzing interviews as a group is just as important as the resulting insights.
“It’s all about bringing those stakeholders into your research early and often, so you have an audience that’s already ‘bought in’ before your findings are ready to share.” - Beth Godfrey, UX Researcher at Google
What constitutes a good insight
How do you know you have done a good job and the analysis was successful? While there is no objectively measurable way to determine a good insight, there are indicators that show the right direction.
Trustworthy - grounded in data
The process of interview analysis is the abstraction of raw observations into more general insights. For these to be reliable, it’s important that they are based on evidence from the interviews.
A common problem to watch out for here is cognitive bias, e.g. the tendency to confirm what we already believe. Such biases can distort the analysis process and misguide decisions. Read this article about common biases in user research and tips to avoid them.
While it’s important to ground insights in evidence, we shouldn’t forget the limitations of qualitative data. For instance, user interviews will not yield statistically significant results. Rather, focus on the strengths of qualitative data: revealing causal relationships, users’ emotional states, and previously unnoticed perspectives.
Relevant - fitting the research goal
Interview analysis will likely take several hours over the span of multiple days. It’s easy to get distracted by details and lose sight of the bigger picture. This may result in spectacular findings that have nothing to do with the initial research questions (hint: still save those findings, as they may become relevant in the future). To avoid unintended deviations, continuously remind yourself of the main questions to be answered and keep them visible throughout the process.
Novel - uncovering what was hidden
To be clear: it’s totally fine if insights confirm previous beliefs. Analysis shouldn’t bend evidence just to produce new and exciting findings. At the same time, looking a bit deeper into the data instead of only scratching the surface can reveal unexpected connections or entirely new topics. These unexpected findings multiply the value of user interviews.
When analysis takes place
User interview analysis is a distinct step in the course of a research project. But this doesn’t mean analysis only happens during this designated time. The brain immediately starts to process new data by trying to make sense of it. Ideas can spark and patterns can emerge while you are still conducting the interviews. Also, a joint recap with the team after each interview is a great way to identify early ideas while memory is fresh. Be sure to capture all these ideas immediately when they arise.
There are two possible approaches to scheduling the main analysis work within the course of the project.
Analysis in one go
With this approach, analysis starts after all interviews are completed. All data is available from the start, which can make it easier to recognize patterns since there is more related evidence. One longer block of analysis also makes it easier to get into a state of flow, as there are fewer interruptions. Of course, you have to be aware of fatigue and keep an eye on the team’s energy level.
Batch-wise analysis
The idea of this approach is to divide the interviews into batches and conduct shorter analysis sessions after each batch. One advantage is that you can still adjust the questions for upcoming interviews, for instance to focus on an underrepresented topic. It also allows you to share preliminary results with stakeholders, and it is useful when there is not enough time for analysis after the last interview, e.g. due to a tight deadline. From a practical standpoint, it is also easier to find several shorter blocks of time in the calendars of busy stakeholders than one large block.
A disadvantage of this approach is the higher switching cost: mentally preparing for analysis takes time, in particular to get the evidence back into short-term memory, and this happens multiple times here.
A related question at this point is how much time should be allocated for analysis. You may have guessed that the answer is: it depends. The tendency is to underestimate how long it takes and to not reserve enough time. If faced with the decision, we’d rather recommend reducing the scope and focusing on the most important topics first instead of rushing through. On the other hand, you could always do more analysis, so be sure to timebox the sessions.
The step-by-step approach to user interview analysis
Now let's start digging into the data. The basis for successful analysis – or synthesis as it is also referred to – is good note taking and we assume you have documented all interviews amply and consistently. When working collaboratively with stakeholders, block sufficient time in their calendars and inform them up front about what to expect.
We will tackle the analysis in three steps:
Familiarize with the data
Synthesize
Convert findings into output
Step 1: Familiarize with the data
The goal of this first step is to prepare the brain to forge connections by getting the data into short-term memory. It’s like loading information into a computer to be able to work with it. In practical terms, this usually means reading the interview notes carefully. This is easier if team members were involved in the interview phase, for example as note-takers.
To turn this familiarization step into a group activity, assign each stakeholder to a participant, let them read through the respective notes, and have them introduce themselves to the team from their assigned participant’s perspective. Then take some time to discuss each participant as a group. As there are usually more interviewees than team members, you can repeat this multiple times.
Step 2: Synthesize
This part doesn’t follow a very strict process. Here we’ll show you four techniques that can serve as starting points. Use them flexibly and adapt them to your needs if necessary.
Structure data into themes
As qualitative data is inherently unstructured and thus difficult to analyze, the initial task is to make it comparable across participants. To do that, we assign individual responses to more generic themes.
The topics that you asked about during the interviews make good starting points for these themes.
When working with digital tools, a practical way to assign notes to themes is using tags. A tag is a label indicating which theme a note belongs to. Regular text-editing software doesn’t support this sort of tagging easily, so use a spreadsheet or a dedicated user research tool instead.
A common question is whether to come up with the tag names before you start tagging or to create them as you go. The short answer: both are possible. Usually, you will need to iterate over these tags while working through the data, as new themes come up or two themes merge into one.
After you have tagged the first few interviews together as a team and built a common understanding, you may want to split up and do the remaining tagging in smaller groups or individually to progress more quickly.
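For readers using a spreadsheet or script rather than a dedicated tool, the tagging step can be sketched in a few lines of Python. The notes, tags, and participant labels below are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical interview notes; each note carries one or more theme tags.
notes = [
    {"participant": "P1", "text": "I lose track of findings across projects", "tags": ["organization"]},
    {"participant": "P2", "text": "Stakeholders rarely read my reports", "tags": ["sharing"]},
    {"participant": "P3", "text": "Old research is hard to find again", "tags": ["organization", "sharing"]},
]

# Group notes by theme so each theme can later be reviewed across participants.
by_theme = defaultdict(list)
for note in notes:
    for tag in note["tags"]:
        by_theme[tag].append(note)

for theme in sorted(by_theme):
    participants = sorted({n["participant"] for n in by_theme[theme]})
    print(f"{theme}: {len(by_theme[theme])} note(s) from {participants}")
```

Because a note can carry several tags, the same piece of evidence may appear under multiple themes; that is intentional and mirrors how digital research tools handle multi-tagged notes.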
The analog alternative to tagging data is to use post-its. For that, write one response per post-it and place similar responses closely together on the wall. Make sure to note the participant’s name or a short sign to have a link to the raw data and be able to get the context again. Colors are great for segmentation, e.g. to indicate a certain user group. In the image below we used yellow for in-house researchers, pink for freelancers, and blue for researchers in agencies.
Look for cross-participant connections and cluster related evidence
With the notes organized into themes, you can now dig into each theme separately. In a digital tool, use filters to focus on one tag in particular and look for commonalities or contradictions among the responses.
Encourage team members to share their thoughts with the group as this helps to form new concepts and understanding.
Next, you can pull together related observations into clusters. This method is also called affinity mapping or affinity diagramming and enables you to connect pieces of evidence to build up a broader understanding.
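The tag-filter step above can be mimicked in code when your notes live in a spreadsheet export. This is only a sketch; the `filter_by_tag` helper and all data are hypothetical:

```python
# Hypothetical tagged notes, as produced in the tagging step.
notes = [
    {"participant": "P1", "text": "I lose old findings", "tags": ["organization"]},
    {"participant": "P2", "text": "Reports go unread", "tags": ["sharing"]},
    {"participant": "P3", "text": "Hard to find past research", "tags": ["organization"]},
]

def filter_by_tag(all_notes, tag):
    """Mimic a digital tool's tag filter: keep only notes carrying this theme."""
    return [n for n in all_notes if tag in n["tags"]]

# Focus on one theme and see which participants contributed evidence to it.
organization_notes = filter_by_tag(notes, "organization")
print([n["participant"] for n in organization_notes])  # ['P1', 'P3']
```

Reviewing one theme at a time like this keeps the discussion focused and makes contradictions between participants easier to spot.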
Use segmentation to reveal underlying patterns
It helps to look at the research data from different perspectives to get a deeper understanding of a topic. It’s like applying different lenses that help you see connections more clearly: depending on where you look, the world can seem very different.
The metadata about the participants can be a key to discovering hidden patterns. In a B2B context, that could be the participants’ job title, the size of the company they work at, or the industry. In a consumer context, demographic data or the level of experience with a certain product could be relevant criteria.
Splitting the responses according to metadata (in this case company type) helps to reveal underlying patterns. Of course, not all phenomena are explainable with the available data, and it takes critical thinking, and potentially some more research, to avoid presuming causal relationships that aren’t actually there.
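Segmenting by metadata can also be sketched in code. Here we count responses on a single theme by an invented `company` field and a simple sentiment label; both the field names and the data are assumptions for illustration:

```python
from collections import Counter

# Hypothetical responses on one theme, annotated with participant metadata.
responses = [
    {"participant": "P1", "company": "in-house", "sentiment": "negative"},
    {"participant": "P2", "company": "agency", "sentiment": "positive"},
    {"participant": "P3", "company": "in-house", "sentiment": "negative"},
    {"participant": "P4", "company": "agency", "sentiment": "positive"},
]

# Split the same theme by a metadata field to surface segment-level patterns.
by_segment = Counter((r["company"], r["sentiment"]) for r in responses)
for (segment, sentiment), count in sorted(by_segment.items()):
    print(f"{segment}: {count} {sentiment} response(s)")
```

A split this clean (all in-house researchers negative, all agency researchers positive) would be a pattern worth investigating further, not a conclusion in itself.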
Analyze across themes
Besides changing the perspective, changing the resolution is another way to get a clearer understanding of the data. After identifying themes and doing a deep dive into each of them in the previous steps, we now zoom out and look at the bigger picture.
Identify how the themes relate to each other and try to understand their relative importance, chronological order or causal relationships. We used a purple post-it to indicate a theme and put the respective name on it.
Step 3: Convert findings into output
With the extensive analysis phase finished, there comes the point when you ask yourself what to do with all of these insights. The final step is to turn what you learned into a tangible output.
There are two purposes for this:
It makes it easier to convey the insights to stakeholders who were not directly involved in the project, and it helps them retain what you found. Think of the output as a tool for sharing findings.
It initiates the transition towards putting the insights into action and thereby helps to move from learning mode to doing mode.
The best form of output depends on your initial research questions. Examples of commonly used outputs are:
A prioritized list of pain points and opportunity areas
A user journey including highlights and lowlights
Jobs to be done
User personas
Before the team parts, make sure to have concrete next steps planned to put the findings into action. This could be a decision workshop, prototyping session, or design sprint. Also, think about how you want to store your data and findings so they remain accessible in the future. It should be easy for stakeholders to go back and look up certain aspects of the research at any time.
🔥 Top three quotes from our Instagram page
Good features make you think. Great features make you feel. - Product Mindset
The best products don't focus on features, they focus on clarity. - Scott Belsky
You’ve got to start with the customer experience and work backward to the technology. - Steve Jobs
📺 Monthly Tech Snippets
TikTok CEO Shou Zi Chew answered US lawmakers’ questions in a tense five-hour hearing yesterday. Congress was largely skeptical while China preempted the hearing with firm opposition to a forced sale.
OpenAI launched plug-ins for its popular ChatGPT bot, allowing it wider access to the internet. The rollout is limited as safeguards are tested; the company notes that drawing from less curated environments may produce less reliable behavior.
Canva added AI tools to generate templates, copy, and more via text and image prompts. Also: automatic beat syncing that matches video to music, and automatic translation in 100+ languages.
Tall task: New Starbucks CEO Laxman Narasimhan said he will work once a month as a barista. Thoughts go out to his shiftmates whose conversations about unionization may get a little awkward.
Apple will spend $1B per year to produce feature films, possibly including a spy thriller and a Napoleon biopic, per Bloomberg. Apple won an Oscar for best picture in 2022 for CODA.