Customer interviews are one of the most impactful activities a product team can do. But only if we use the right methods.
An early customer interviewing mistake is to spend your interview time exploring your solution ideas. You might present a prototype and ask, “What do you think?” Or even, “Would you use this?”
This feels like progress. Our goal is to figure out if our solution ideas will work. But this type of customer interview doesn’t get us reliable feedback. In fact, it often builds confidence in the wrong ideas.
Let’s dive into why.
Customer Interviews Aren’t the Best Way to Evaluate Solutions
When we are evaluating our solutions, our primary question is, “Will our customers do what we need them to do to get value from this solution?”
To answer this question, we want to evaluate their behavior. We don’t want to evaluate what they think they will do. This is an important distinction.
It’s easy to think we’ll go to the gym, eat healthy, put in the extra time at work, save money, and so on. When asked about our future behavior, we tend to answer based on what we know we should do. We are eternal optimists.
But reality always creeps in. We skip the gym to get an extra hour of sleep. We stay at work late and grab McDonald’s on the way home. We put in the extra hours at the office, but we get distracted by the ping-pong tournament. We try to save money, but we get enticed by our favorite band coming to town and splurge on front row seats.
We are all doing the best we can. But our best rarely lives up to what our brains tell us we should be doing. We are way harder on ourselves than any reasonable human would be on another person.
Why does this matter? Well, when I ask you if you would use my product or service, two things happen:
- You want to be nice to me. You can see how much I like my solution (no matter how neutral I try to be) and so you say nice things about it, regardless of what you really think.
- More importantly, you evaluate my solution based on your idealized, aspirational self. You might believe that you’ll use my hypothetical future product. You might genuinely want to. But that doesn’t mean that you will—no matter how excited about it you are.
When evaluating solutions, I want to evaluate what you do, not what you think you do. The best way to evaluate your solutions is with assumption tests. An assumption test is an activity where we simulate an experience and observe real behavior. You can learn more about assumption tests here.
If we aren’t using our customer interviews to explore solution ideas, what are they good for?
Customer Interviews Help Us Understand Our Customers’ Goals, Context, and Unmet Needs
I recommend product teams interview customers at least weekly. My goal with this recommendation is to help teams build a sustainable habit that gives them a persistent feedback loop as they make daily decisions.
It’s easy to be overconfident with our daily product decisions. We think about our product all day, every day. We understand our market. We are inundated with customer requests, support tickets, and stakeholder feedback. We review our product analytics every morning. We have no shortage of ideas of what to build next.
But what’s missing from all of these inputs is context. Our sales team can pass on a feature request from their latest prospect, but that feedback rarely includes the customer’s goal, the context in which they would use that feature, or the underlying need the feature might address. Without that context, it’s nearly impossible to build the right solution.
Customer interviews help us understand the missing context. They help to ensure that we have a tight match between the need and the solution.
In Continuous Discovery Habits, I argue the purpose of a customer interview is to discover unmet customer needs, pain points, and desires. I collectively call these opportunities because they are opportunities to intervene positively in our customers’ lives.
The key to designing a good solution is to first start with a deep and rich understanding of the problem you are trying to solve. For product teams, that means we need to have a deep and rich understanding of our customers’ needs, when they arise, where they arise, and what our customers are willing to do to meet those needs.
A regular cadence of customer interviews helps us understand this context. This context helps us make better daily decisions about what to build.
When we frame customer interviews this way, it makes it clear that we shouldn’t be spending our interview time exploring solutions.
So let’s look at what we should do instead.
Asking the Right Questions: Most Advice Isn’t Specific Enough
If you do a quick Google search, you’ll find an abundance of interviewing advice, including:
- Be prepared: Decide what you’ll ask upfront.
- Ask open-ended questions. Or sometimes phrased as, “Don’t ask closed questions” or “Don’t ask yes or no questions.”
- Don’t ask leading questions.
- Don’t ask about future behavior.
These are all good recommendations. We do want to prepare adequately for our interview, ask open-ended questions, avoid leading questions, and avoid asking about future behavior (I’ve even written about that last one).
But for most product teams, this advice isn’t adequate. I want to dig into why. We’ll start by assessing how teams typically prepare for an interview, then explore what we tend to ask as a result, and then finally, look at what to do instead.
Preparing for an Interview: Identifying What You Want to Learn
A good researcher will tell you to start by defining your discussion guide. This is a great place to start. But it can often lead inexperienced product teams astray. Let’s dive into an example to see why.
Suppose you work on a product team at Netflix and you are tasked with increasing viewing engagement (e.g. increase the average minutes watched each week). This is a new outcome for your team and you want to start to understand how people use Netflix, what keeps them engaged, and why they might disengage.
As a product team, you sit down to define a discussion guide. You start by generating a list of questions that capture what you want to learn about your customers.
- Do you watch Netflix?
- What do you like/dislike about Netflix?
- How often do you watch Netflix?
- What do you like to watch?
- How many shows do you typically watch at a time?
- How many episodes do you typically watch in a given session?
- Do you watch Netflix with other people?
This is a great activity. It’s important for a team to get aligned on what they want to learn from an interview. The mistake teams make is they ask these same questions in their customer interviews.
Deconstructing a Common Interview Technique: Where We Go Wrong
In our course Continuous Interviewing, we distinguish between research questions and interview questions. Research questions are what we want to learn from our customers. However, they don’t always make the best interview questions.
Let’s see what happens when we ask our research questions in an interview:
Now let’s break down how effective each question was.
Do you watch Netflix?
This is a closed, yes or no question.
We should not ask this question in our interview. Instead, we should recruit people who use (or don’t use) Netflix based on what we’re trying to learn. You can design screener questions as part of your recruiting process to ensure you’re interviewing people who meet your specific criteria rather than waiting to ask this question in your interview.
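To make this concrete, here’s a minimal sketch of what that screener logic might look like. Everything in it is hypothetical (the question IDs, the thresholds, and the recruiting tool will vary), but it shows the idea: disqualify people during recruiting so you don’t have to ask this question in the interview itself.

```python
# Hypothetical screener logic; the question IDs and criteria are
# illustrative, not from any real recruiting tool.

def passes_screener(answers: dict) -> bool:
    """Return True if a prospective participant meets our recruiting criteria.

    `answers` maps screener question IDs to responses, e.g.
    {"watches_netflix": True, "days_watched_last_30": 12}.
    """
    # Must be a current Netflix viewer -- so we never have to burn
    # interview time asking "Do you watch Netflix?"
    if not answers.get("watches_netflix", False):
        return False
    # Must have watched recently enough to recall a specific instance.
    if answers.get("days_watched_last_30", 0) < 1:
        return False
    return True


candidates = [
    {"watches_netflix": True, "days_watched_last_30": 12},
    {"watches_netflix": False, "days_watched_last_30": 0},
]
recruits = [c for c in candidates if passes_screener(c)]  # keeps only the first
```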
What do you like/dislike about Netflix?
This is an open-ended question.
This is not a leading question.
This question does not ask about future behavior.
This question passes all of our criteria. We also learned some valuable things when we asked this question. We learned that this participant likes older television series and likes to download shows to her iPad and iPhone. She doesn’t like that the shows don’t change very often. We’ll see in a bit, however, that this answer is not complete.
How often do you watch Netflix?
This is a closed question.
It’s easy to think only yes or no questions are closed questions. But that’s not true. A closed question is one that is designed to collect a limited response. In other words, it generates a short answer. It doesn’t encourage the participant to elaborate on their response or to provide context. We see that in this interview when the participant simply responds, “let’s say two hours a week.”
Quick tip: If you aren’t sure whether one of your questions is a closed question, GPT-4 (available with a ChatGPT Plus subscription) is pretty good at evaluating whether a question is open or closed.
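If you want to automate this check across a whole discussion guide, here’s a minimal sketch using the OpenAI Python SDK. The model name and prompt wording are just placeholders; any capable model can handle this classification.

```python
# A minimal sketch using the OpenAI Python SDK (openai>=1.0). The model
# name and prompt wording are assumptions; swap in whatever you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_question(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute your preferred model
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the interview question as OPEN or CLOSED. "
                    "A closed question invites a short, limited response; "
                    "an open question invites elaboration. Reply with one word."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify_question("How often do you watch Netflix?"))  # likely CLOSED
```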
There’s another reason why we should not ask this question in a customer interview. Humans—all of us—are not very good at summarizing our own behavior. Unless we actively track the specific behavior in question, we are unlikely to give an accurate answer when asked.
Instead, our brains generate a fast answer and we accept it as truth. This is classic System 1 vs. System 2 thinking. In Thinking, Fast and Slow, Daniel Kahneman describes System 1 as the fast response system, whereas System 2 is more deliberate. System 1 saves energy and effort, but it is prone to errors (e.g. cognitive biases). System 1 responses can sometimes be accurate, but they are often faulty.
You’ll see later that in this particular instance, this answer doesn’t reflect the participant’s actual behavior.
Rather than asking this question in an interview, we can (and should) answer this question by using behavioral analytics.
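For example, here’s a minimal sketch of how a team might answer “how often?” from product data rather than from an interview. The event schema is hypothetical (one row per viewing session); your analytics stack will have its own shape.

```python
# A sketch of answering "how often?" from behavioral data instead of an
# interview. The event schema is hypothetical: one row per viewing session.
import pandas as pd

sessions = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2"],
    "started_at": pd.to_datetime([
        "2024-03-04 21:00", "2024-03-06 22:15",
        "2024-03-09 14:30", "2024-03-05 20:00",
    ]),
    "minutes_watched": [42, 80, 160, 35],
})

# Average minutes watched per user per week -- the outcome our
# hypothetical Netflix team is trying to increase.
weekly_minutes = (
    sessions
    .groupby(["user_id", pd.Grouper(key="started_at", freq="W")])
    ["minutes_watched"]
    .sum()
    .groupby(level="user_id")
    .mean()
)
print(weekly_minutes)
```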
What do you like to watch?
This is an open-ended question.
It’s not a leading question.
It doesn’t ask about future behavior.
This question also follows all of our criteria. We learned that this participant likes to watch older television series. This is the second time she has expressed this. This time she even provides some specific shows that she likes. We’ll see later, however, that this is also not a complete answer.
How many shows do you typically watch at a time?
This is a closed question.
How many episodes do you typically watch in a given session?
This is a closed question.
We should not ask these questions in an interview. Instead, we should use behavioral analytics to answer these questions.
Do you watch Netflix with other people?
This is a closed question.
This is a tough question. Do we mean ever? If the participant has ever watched a movie with another person, should she answer yes? Do we mean typically? What do we mean by “typically”? Most of the time, some of the time, ever?
Overall, this wasn’t a great interview. But we did learn some “facts” about this participant’s Netflix behavior. I put “facts” in quotes, because it’s not clear yet if these “facts” reflect this participant’s real behavior.
Even from a small number of brief answers, I already see some conflicts. She likes older shows, but she doesn’t like that the content doesn’t change. Does she watch so much that she’s running out of older shows? It doesn’t seem like it, as she says she only watches two hours each week. Is it that Netflix doesn’t have enough of the older shows that she likes? We don’t know.
She typically watches two hours per week, but she watches one or two 40-minute shows at a time, and sometimes as many as three or four on the weekend. Does she only watch once per week, or is she way off on her two-hour estimate? This is why reviewing behavioral analytics is a much better way to answer questions about how often.
We are also missing a lot of context. When does she watch Netflix? Where? On what device? How does it fit in her day? Is she excited about what she watches or is she just pushing boredom away?
What’s actionable in this interview? If you worked on a product team and watched this video, how would it inform your decisions? It would be easy to think we need to add more content—specifically older shows.
Now I realize this was a simple, two-minute, illustrative interview and I could have asked her far more questions. But with this way of interviewing, I have to think of all the questions. I only learn about what I think to ask about.
It turns out, there is a simpler, more effective way to conduct a customer interview.
Shifting to a Story-Based Interview Uncovers Missing Context
Many teams are familiar with the question criteria I outlined above. And they know not to ask the questions I asked. Instead, they ask something like, “Tell me about your experience on Netflix.”
This is an open-ended question. It’s not a leading question. And it asks about the past, not the future. But our list of question criteria is still incomplete.
The problem with this question (for our purposes) is that it’s a speculative question. We are asking the customer to tell us their thoughts about Netflix in general. If our goal was to understand their feelings toward Netflix, this might be a valid question. But our goal is to uncover goals, context, and unmet needs.
We need to add a new piece of advice to our question criteria: Keep the interview grounded in specific instances of past behavior.
When I say, “tell me about your experience on Netflix,” I’m not asking about a specific instance. I’m not asking about specific behavior. I’m encouraging you to summarize and speculate. I’m encouraging you to give me a fast System 1 answer. And as the interviewer, I have no way of knowing if your answers reflect your real behavior.
Instead, I want to ask about a specific instance. I can say, “Tell me about the last time you watched Netflix.”
Let’s see what happens when I ask this question:
Can you hear the difference? We are only hearing a very short story, but even so, we get much more context. We still learn that she likes to download episodes to her phone. We still learn that she likes older shows. But this time, those facts are grounded in a specific instance.
She was on a plane. She doesn’t want to have to rely on Wi-Fi. She’s willing to download the episodes ahead of time. She fell asleep while watching the show and had to rewind. She’s already watched two episodes and is planning to watch two more at the end of the day when she flies home. That’s 2 hours and 40 minutes of viewing in a single day.
But maybe this story is atypical. Let’s collect another one.
Interesting. Now we learned a lot of new things. She doesn’t just like old TV shows, but she also likes historical fiction/period pieces. She likes movies.
We are also hearing another story about how she watched on a mobile device. This starts to put more context around how she likes to engage with Netflix.
Compare the following:
| Summary from the first interview | Specific stories from the story-based interviews |
| --- | --- |
| “I like to download shows on my iPhone/iPad.” | “I downloaded episodes to my phone ahead of time so that I could watch Breaking Bad on my flight.” “I had a ton of downtime when visiting my mom because she goes to bed early. So I laid in bed and watched a movie on my iPad.” |
Suppose you were working on a feature that allowed viewers to auto-download their favorite content to their connected devices. Based on what you learned in the first interview, you might focus on selecting which show to auto-download. But if you collected the two specific stories, you might look at letting people download specific shows ahead of time, or you might explore genre filters (e.g. download all historical fiction/period piece movies), as sketched below.
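To make that design space concrete, here’s one hypothetical data model that supports both behaviors: pre-downloading specific shows and auto-downloading by genre. None of these names come from a real Netflix API; they are purely illustrative.

```python
# An illustrative data model for auto-download preferences, inspired by
# the two stories above. Every name here is hypothetical; none of this
# reflects a real Netflix API.
from dataclasses import dataclass, field

@dataclass
class DownloadPreferences:
    shows: set[str] = field(default_factory=set)   # specific shows (first story)
    genres: set[str] = field(default_factory=set)  # genre filters (third story)

def titles_to_download(prefs: DownloadPreferences, catalog: list[dict]) -> list[str]:
    """Pick catalog titles that match either a chosen show or a genre filter."""
    return [
        item["title"]
        for item in catalog
        if item["title"] in prefs.shows or item["genre"] in prefs.genres
    ]

prefs = DownloadPreferences(shows={"Breaking Bad"}, genres={"Period piece"})
catalog = [
    {"title": "Breaking Bad", "genre": "Drama"},
    {"title": "Elizabeth", "genre": "Period piece"},
    {"title": "Some Reality Show", "genre": "Reality"},
]
print(titles_to_download(prefs, catalog))  # ['Breaking Bad', 'Elizabeth']
```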
Now these interviews aren’t wildly different. But there’s a key difference that I want to highlight. In the first interview, the participant gave me fast answers to each of my questions. She relied on System 1 to generate those answers, which means they are susceptible to a number of cognitive biases: recency bias, the availability heuristic, aspirational responses, and many more.
In the second and third interviews, we dove into the specifics of her actual behavior. She didn’t speculate; she told me what she did.
This means that in the second and third interviews, I got context and I got more reliable answers: answers that are much more likely to reflect her actual behavior. The differences might seem small, but over the course of a 20–30 minute interview, they add up quickly.
Collecting Specific Stories Isn’t Easy. It Takes Practice.
It’s one thing to know that we should spend our interview time collecting specific stories about past behavior. It’s another thing to be able to do it in practice.
When we say to someone, “Tell me about the last time you watched Netflix,” we are likely to get a short response. They might say, “It was last night after dinner.”
Our tendency is to respond by asking direct questions about specific aspects of the experience like, “What did you watch?” or “Where were you?” But this encourages the participant to only provide short answers. It also means we only learn what we think to ask about.
When collecting stories, we want the participant to do most of the talking. The art of the interview is knowing what to ask when in a way that encourages the participant to open up and share their experience.
When collecting stories, we want the participant to do most of the talking. The art of the interview is knowing what to ask when in a way that encourages the participant to open up and share their experience. – Tweet This
Even when a participant is able to tell a full story, most people will flip-flop between the specific story and generalities. We saw this in the third interview above. The participant started telling us about a specific instance (e.g. watching Elizabeth) and quickly jumped back to a generality (“I like historical fiction/period pieces”). The interviewer’s job is to gently guide the participant back to the specific instance. This sounds simple, but to do it well, you have to first learn to recognize when it’s happening (which isn’t always easy), then you have to wait for a natural pause in the conversation (don’t interrupt your participant), and then you have to segue back to the specific story.
Inevitably, there will be moments in the story that the participant can’t remember. It’s easy to gloss past these moments. But an experienced interviewer can help a participant remember more of their story by developing a basic understanding of how memory recall works and using techniques designed to aid memory.
Finally, we have to practice and develop our active listening skills so that we don’t misinterpret the participant’s story. This can often be the hardest step.
When asked, most people believe they are an above-average listener. We can’t all be above average. That’s statistically impossible. Because we listen all day, we assume we are good at it. But active listening is a skill that requires practice.
If product teams want to get reliable feedback from customer interviews, they have two choices: 1) they can rely on skilled researchers to conduct their customer interviews or 2) they can learn to be good interviewers themselves.
Companies that have enough skilled researchers to meet the needs of all of their product teams are unfortunately few and far between. Thus most product teams need to rely on the second option. This is why we teach story-based interviewing to product teams.
In our Continuous Interviewing course, we teach you how to collect a specific story from beginning to end. We give you specific tactics to help you develop your active listening skills. We provide specific prompts designed to encourage the participant to elaborate on their short answers. And most importantly, you get over four hours of hands-on practice with real-time feedback.
It’s the easiest way I know of for you to develop your interviewing skills quickly. You should come join us.