It’s easy to think you already do continuous product discovery.
Most of the teams that I work with come into coaching thinking that they don’t need help. They’ve read the industry books, they attend the popular product conferences, and they follow all the leading blogs. They’ve got this.
And they aren’t wrong.
Most product teams are starting to integrate discovery practices into their product development process. They interview customers, run usability tests, and conduct A/B tests. What more is there to learn?
The problem is most teams don’t do any of these activities often enough. Nor do they use these activities to inform their product decisions. Instead, they use them to validate the decisions they’ve already made.
Last May, I spoke at the Front conference in Salt Lake City. In my talk, I shared a clear definition of continuous product discovery that I hope will act as a benchmark to help teams evaluate their own product discovery practices. I also shared a case study telling the story of two product teams as they adopted my definition of continuous product discovery.
You can watch the video below.
Show Notes:
- [1:03] Continuous Product Discovery: A Definition
- [3:28] Arity: A Case Study of an Organization Adopting Continuous Product Discovery
- [9:16] Continuous Customer Interviews
- [20:12] Continuous Rapid Prototyping
- [26:55] Continuous Product Experiments
- [34:35] The Value of Continuous Discovery
Resources Mentioned:
Articles:
- Customer Interviews
- The Evolution of Modern Product Discovery
- Hypothesis Testing
- Improve Your Product Decisions
- The Ladder of Evidence: Get More Value From Your Customer Interviews and Product Experiments
- 2017 Product Conferences
- Product Managers Don’t Own the Problem (And Designers Don’t Own the Solution)
- Take a Team Approach to Product Delivery
Full Transcript
[Edited for brevity and clarity.]
Today I want to share with you a story about a specific company that’s in the process of adopting continuous product discovery across the entire company. What I found working with companies that are trying to do this is that, as individual product managers, as individual designers, we can start to think about how we want to work, but if our organizations don’t support it, it can be hard.
For those of you that aren’t familiar with these terms, I’d like to start at the beginning. Continuous discovery is the easiest to describe in the context of delivery. I look at discovery as the set of activities that we do as product teams to decide what to build, whereas delivery is the set of activities that we do as product teams to actually build and ship our products.
I want to be clear, when I say product teams, I mean the product manager, the designers, and the software engineers. I don’t get caught up on who does what. We’re a team. We’re responsible for discovering value and delivering it. This talk is going to apply to the team.
Continuous discovery is more specific. By continuous discovery I mean weekly touch points with customers by the team building the product, where they themselves conduct small research activities in pursuit of a desired outcome.
Each line of this definition is important to me. Many of us work in companies where a customer support team talks to customers, or a centralized user research team talks to customers. But our product managers and our designers and our software engineers learn about our customers through research reports or through personas, or through customer journey maps.
It’s important for those of us that are building the product to have regular interaction with our customers.
Continuous discovery requires weekly touch points with customers by the team building the product. – Tweet This
We aren’t full-time researchers so we can’t do big box, big project research—we have to do it in small bite-sized activities. So it fits along with all the other stuff that we do in our jobs, like writing code and designing. We’re going to talk a lot about what those small research activities look like today.
And then finally, it is really important that we focus on outcomes. Our goal is not just to ship a product, but it’s to ship a product that has an impact on our customers in a way that creates value for our business.
We can’t just do open-ended research indefinitely; our goal should be to identify the next question that we need answered that’s going to help us get to our next product outcome.
Continuous discovery requires weekly research activities that drive the pursuit of a desired product outcome. – Tweet This
I’m here to tell you a story about a specific company. Let’s get back to that tale. I’m going to start with some context.
This is the story of a company called Arity, based in downtown Chicago. Arity has 380 employees, so they’re a fairly small company, but they’re actually a subsidiary of a much larger one. They’re in the Allstate family, and Allstate is a very large company with more than 40,000 employees.
What this means is that this company, Arity, sometimes acts like a startup because they’re a small company that’s being protected to go and do their own thing. But they sometimes act like a big company because they grew up in the Allstate culture.
Allstate isn’t just a big company; they’re also not a traditional tech company. Most people who work there don’t know how digital products work.
They’re also in a regulated industry, and if you’ve never worked at a bank or an insurance company or anywhere else that’s regulated, what this means is the company culture is set up to be safe. It’s set up to follow these regulations. This can lead to a lot of sign-offs, a lot of red tape—things move slowly.
This company builds both consumer and enterprise products.
I’ve been in your shoes sitting at a conference, listening to a speaker tell a great story. It’s easy to think, “That would never work in my company.” It’s really easy to say, “Yes, you guys can do that, but my place is different.”
What I like about Arity is that there’s something here for everybody, whether you’re consumer, whether you are enterprise, whether you are small, whether you’re big, whether you’re regulated, whether your company’s on board or not. There’s a lot to learn from this company’s journey, so let’s get into it.
I want to introduce you to Frank and Gina. Frank and Gina are real people. They are two product managers that work at Arity. They represent two of the nine product teams that I’ve worked with at Arity over the last year. Let’s learn a little bit about their products and their journey.
Frank’s team is working on a consumer app. Allstate has data about how you drive, where you go, your relationship with your car, and they looked at this and said, “We think we can provide a compelling service to consumers to help them with all the challenges around owning a car.” Their mission is that broad: find something that’s of service to consumers in their relationship with their vehicle. This team’s goal is to engage consumers through some sort of app or service.
Frank’s team, when I met with them for the first time, were excited about this idea of continuous discovery. They want to build something that customers care about.
There was just one problem. They had never talked to a customer before. All very smart people, all very capable, but they grew up, career-wise, in a company that didn’t have a strong history of discovery. So we were starting from square one.
I want to emphasize that because for those of you that don’t have a regular cadence of interacting with your customers, both of the teams that we’re going to talk about today were starting from scratch.
Frank’s team, in the early days, was a product manager, Frank, a second product manager, Luis, and a tech lead. They were supported by a centralized user research team, a centralized design team, and a centralized market insights team. All three of those teams sat in Allstate, not at Arity. So there was a little bit of this divide between the product team building the product and all the design and research resources they were supported by. This is going to be a part of our tale that we’ll come back to.
Gina, on the other hand, was working on a business intelligence product. Think about it as a scoring algorithm. I can’t share specific details, but it’s a scoring algorithm for insurance companies to help them assess risk.
She is working on an enterprise B2B product. It’s not a product with a graphical user interface (GUI). Not an app, not a website, but really a data product. It’s a scoring function that’s helping these companies assess risk. Think about it like a credit score. A credit score helps you assess risk when you’re applying for a credit card, or buying a home.
Like Frank’s team, Gina’s team was also sold on this idea of continuous discovery. They had talked to customers, but only in a sales context. Their goal was to find their first referenceable customer for their scoring product.
Gina had a tech lead and a data analyst on her team, and because she was working in the insurance industry, she was supported both by the president of Arity, who has many industry connections, and by Allstate and everything they know about the insurance business.
Remember, our goal is to get Frank and Gina to have weekly touch points with their customers so that they themselves can conduct small research activities, and learn very quickly in pursuit of their desired outcome.
The first thing is, we need to get them doing small research activities on a regular basis. So we started by looking at this set of activities. How can they do regular customer interviews, rapid prototyping, and product experiments? They were doing none of this when we started, so we started with customer interviews. How can we get them learning about their customers?
Continuous discovery teams have a regular cadence of customer interviews, rapid prototyping, and experimentation. – Tweet This
If you remember, Frank’s team had never talked to a customer before. They were supported by a centralized user research team. So they did what most people would do and they reached out to that team and they asked for help. They said, “Hey, we’re working with Teresa. She wants us to talk to a customer every week. Help, how do we do that?” They ran into a couple of problems.
Their central research team wanted to schedule six interviews for three weeks out. Now, let’s think about this from their perspective. They’re a centralized research team, they support dozens of product teams across both Arity and Allstate. They’ve got a pipeline of projects they have to get through. So when a team comes to them and says, “I need to talk to customers,” they have to get in line.
They also work from this big project mindset. You can’t just talk to one customer; you’ve got to talk to 6–12 customers. That’s research.
Well, this team got stuck because they came and told me this and I said, “This isn’t how it works. I want you to talk to one customer this week. I don’t care how you do it, just find somebody to talk to.”
Frank’s team was concerned. They responded, “We can talk to people, but are we allowed to? We don’t want to step on the user research team’s toes. We know they can help us, why can’t we just wait?” I said, “Because you don’t make product decisions every three weeks. You make product decisions every day. You need to move your product forward every day.”
If we want to co-create with our customers, we need to make sure that the teams building the product are interacting with customers on a regular basis.
The good news: I said, “Look, let your user research team schedule those six interviews three weeks out. In the meantime, let’s just start talking to people. You’re working on a consumer product, get outside, talk to people. Let’s see what we can learn.”
I introduced them to this idea that I call “research in the wild.” When we think about research, we think about formal hour-long discussions with long discussion guides where somebody has vetted every question. It’s research, big research.
I said, “Look, you’ve never talked to a customer. You’re trying to build a product for people about how they interact with their cars. Let’s just go talk to people and see what they want and see what they think.”
Now here’s the danger in this: Most product teams hear that and they go, “Here’s my cool idea, what do you think?” And they don’t get very good feedback.
I sent this team out and I said, “I want you to talk to your friends, your family, your coworkers, your neighbors, the barista at Starbucks, and I want you to ask them one question, ‘Tell me about your experience with your car.’ That’s it. And I want you to listen to how they respond to that question.”
The conversation starter was intentionally broad, and it’s because how people answer that question gives you hints as to their mental model, how they think about their vehicle, what’s important to them, what’s the most salient thing that comes up.
Then from there, I just want you to have a conversation. It can last five minutes, it can last ten minutes. It can last an hour if they want to talk to you that long, but your goal is just to get out there and talk to people.
They did this, and they did it multiple times in a week and within one week they went from having never talked to a customer to talking to almost a dozen customers.
Try a one-question, five-minute customer interview. Repeat every day. You’ll be surprised by how much you learn. – Tweet This
They collected a bunch of valuable stories along the way. Much of what they heard aligned with what they were thinking. But they also heard new things, and there were many things they were thinking that they heard nothing about. Just by starting this regular cadence, they started to learn a ton and were able to iterate on their ideas.
Three weeks went by and their user research team had set up formal interviews, and they did those interviews and they learned a lot from those interviews too.
The other thing they did was they started sharing with their user research team some of the things that they were learning in their “research in the wild” interviews. Their research team got really excited about this—they wanted to do those interviews too.
Eventually we started to converge our methods. The research team started to recruit one to two formal interviews every week, and both Frank’s team and the research team started doing all these ad-hoc interviews, all these ad-hoc conversations. The rate of learning on this team went way up.
Like I said, I’ve been a skeptical audience member, so if you work on an enterprise product I’m sure what you’re thinking is, “Great, this works in the consumer world, but how in the world does this work in a B2B world?” That’s why we’re going to talk about Gina. If you remember, Gina is working on a business intelligence product, a scoring function to help insurance companies assess risk.
Gina can’t go out and talk to friends and family, she has to talk to companies, and more specifically she needs to talk to small insurance firms. So research in the wild isn’t going to work for her.
Now she did have a huge advantage—she works at a large insurance company that has many relationships in her industry. She was able to leverage those inside connections.
So she set up her first couple of weeks of interviews by going to her president and saying, “Hey do you know someone at this insurance company? What about this other insurance company?”
This, however, is not a long-term solution. If her goal is to do one customer interview every week, eventually she’s going to run out of connections. So she also spent time using a service to recruit participants for her. This was a good supplement, so she did both at the exact same time to make sure that she was able to have a consistent cadence of interviews.
As she talked to those folks, they got interested in her idea and eventually they wanted to learn more, they became part of her sales pipeline. She asked all of them, “Hey who else should I be talking to?” And over time, she too built up a regular cadence of customer interviews.
If you’re in a startup, or you’re launching a product in an industry you haven’t worked in before, you may not have these inside connections to take advantage of, but we all can pay a service to recruit for us.
What can we learn from Frank and Gina’s experience?
You have to try many ways to recruit. What works for one company might not work for yours. What worked for Frank didn’t work for Gina. The goal is to keep trying until you find something that works.
If you have a consumer service, one of the best ways to recruit interview participants is to use a product like Ethn.io to pop up a one-question screener, “Are you available to talk to us on Thursday at 10am for 30 minutes? We’ll give you a $30 Amazon gift card.” Turns out, when you turn that on, you get interview participants quickly. You can turn it off as soon as your interview schedule is full.
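To make the mechanics concrete, here is a minimal sketch of what a one-question screener might look like in the browser. To be clear, this is not Ethn.io’s actual API; the config shape, sampling logic, and scheduling URL are all hypothetical, and a real tool handles recruiting, storage, and scheduling for you.

```typescript
// Hypothetical sketch of a one-question interview screener, similar in
// spirit to what a tool like Ethn.io provides. Not Ethn.io's real API.

interface ScreenerConfig {
  question: string;       // the single screener question
  incentive: string;      // e.g., "$30 Amazon gift card"
  sampleRate: number;     // fraction of visitors who see the prompt
  slotsRemaining: number; // stop recruiting once the schedule is full
}

function maybeShowScreener(config: ScreenerConfig): void {
  // Only prompt a small fraction of visitors, and only while
  // interview slots are still open.
  if (config.slotsRemaining <= 0) return;
  if (Math.random() > config.sampleRate) return;

  const interested = window.confirm(
    `${config.question}\nWe'll thank you with a ${config.incentive}.`
  );
  if (interested) {
    // Hand off to your scheduling flow (hypothetical URL).
    window.location.href = "/research/schedule-interview";
  }
}

maybeShowScreener({
  question:
    "Are you available to talk to us on Thursday at 10am for 30 minutes?",
  incentive: "$30 Amazon gift card",
  sampleRate: 0.02, // show the prompt to roughly 2% of visitors
  slotsRemaining: 3,
});
```

The detail that matters is the last field: as soon as your interview schedule is full, the prompt turns itself off.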
Again, every company is different. If you’re on the enterprise side, you really don’t have any connections, and paying a service to recruit isn’t an option, you can set up customer support triggers where you tell your customer support team, “When this happens, send that person my way. I want to interview them.” So the key here: you have to try a lot of things.
When you find something that works, you want to automate it. Here’s what I want you to imagine: you wake up on Monday morning, you show up to work, you look at your calendar, and later in the week you have a customer interview on your calendar, and you did absolutely nothing to make that happen. That’s awesome. That’s how interviewing customers becomes the norm rather than the exception. We need to automate the process.
Try many ways to recruit interview participants. When you find one that works, automate it. – Tweet This
Frank was able to do this because he had a centralized user research team that worked with him. They had a long history of recruiting people, and they started using tools like Ethn.io and Usabilla to recruit directly from the people using their product.
And Gina, eventually, as her product grew, got help from a sales rep who was out in the field. She set up sales rep triggers, just like the customer support triggers I described, so that she always had something on her calendar every week without doing anything herself. This is what makes continuous discovery sustainable.
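A trigger like this can be as simple as a rule that routes matching interactions to an interview invitation. Here is a minimal sketch; the ticket shape, category names, and invite function are hypothetical, and in practice this would usually be a rule configured in your support or CRM tool rather than custom code.

```typescript
// Hypothetical sketch of a customer support trigger: when an
// interaction matches a condition, automatically invite that
// person to an interview.

interface SupportTicket {
  customerEmail: string;
  category: string;
}

// The conditions you told your support team about, expressed as a rule.
const triggerCategories = new Set(["billing-confusion", "feature-request"]);

function handleTicket(ticket: SupportTicket): void {
  if (triggerCategories.has(ticket.category)) {
    sendInterviewInvite(ticket.customerEmail);
  }
}

function sendInterviewInvite(email: string): void {
  // In a real setup this would call your scheduling tool so the
  // interview lands on your calendar without you doing anything.
  console.log(`Interview invite sent to ${email}`);
}
```

The same pattern works for Gina’s sales rep triggers: swap the support ticket for a sales interaction and the rule stays the same.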
Third, you need to do your own research. Our goal each week is to learn the different nuances of our very specific research questions. This is too much detail for a centralized user research team unless they’re embedded with you, and they know your product and they know your context.
Do your own research. Your product questions are too specific for a centralized user research team to answer. – Tweet This
So, that’s great if you have an embedded researcher on your team: use them. If you don’t, you need to learn how to do some of this yourself. And here’s the great thing: you can use your centralized researchers as research experts; they can help you improve your research methods.
Finally, especially in the early days of setting up continuous interviews, you have to do things that don’t scale.
Doing things that don’t scale is the secret to success. We worry too much about the thousandth person when we really should be focusing on the first, second, and third people.
As you’re trying to get to a point where you’re integrating regular interviews, I don’t want you to worry about, “Oh, well, this guy only knows three people, who do I talk to in week four?” Start by talking to those three people because when we do things that don’t scale we find ways to scale them.
Don’t worry about the 1,000th person before you’ve taken care of the first, second, and third people. – Tweet This
At this point, Gina and Frank are both doing regular customer interviews so we’ve moved on. It’s time to do some rapid prototyping.
If you remember Frank’s team doesn’t have a designer on the team. They worked with their centralized design team to create concept designs, but they hadn’t gotten any feedback on those designs.
The first thing they had to do was integrate rapid prototyping into their customer interviews. Now what’s great is when you already have an interview on your calendar every week you can use half of your interview to do what you’ve always been doing, which is your generative interview. And you can use the other half to test a couple prototypes.
Because Frank’s team already had designs, they just started by prototype testing their concept designs. This was fairly easy for them to do, but they quickly ran into a problem. They did most of their interviews virtually because they’re based in Chicago and they were worried about just focusing on Chicago’s needs. They wanted to talk to people that lived all over the US. They didn’t actually know how to prototype test virtually.
Their first hurdle was that they had to learn new digital tools. We have a ton of remote prototype testing tools, whether it’s UserTesting.com, Validately, or a dozen others. This team had never used one of those tools. They quickly learned one and started doing all of their prototype tests virtually.
Once you start doing weekly interviews and weekly prototype tests, you learn what doesn’t work with your design. You need to be able to change your designs quickly. Otherwise, in week three all you learn is the same thing you learned in week two, and you’re no longer continuously learning. This team had to learn how to iterate quickly on their prototypes.
The challenge here is they didn’t have a designer embedded on the team. This is a classic example where even if a team wants to do continuous discovery, if their organization doesn’t support it, they’re going to have a hard time. I was able to help them with this. I went back to their leaders and said, “This team needs a designer.”
But before they got a designer, the product managers went in and got their hands dirty. They started to learn how to do simple prototypes, not full-blown, beautiful, pixel-perfect designs, but rough ones on a whiteboard or in Balsamiq. They created simple prototypes that allowed them to iterate on what they were learning from their customers week over week, so that every week their product got better.
What about Gina? Gina is working on a scoring function. What does a prototype look like in that world?
It’s easy to think about prototypes as mock-ups, but they’re not just that. The goal of a prototype is to collect qualitative feedback on our ideas. And Gina had a big idea that needed qualitative feedback.
The goal of rapid prototyping is to get qualitative feedback on your ideas. – Tweet This
Instead of doing mock-ups because her product had no graphical user interface, she stepped back and she said, “What do I need feedback on?” and she realized what she needed feedback on was her concept. Are we approaching the scoring function the right way? Is this how insurance companies think about what we’re doing?
She started by building a pitch deck. Her product didn’t exist yet, but she put together a pitch deck as if it did. She used her pitch deck as her prototype.
In her customer interviews and sales conversations, she started to pitch this idea. She got feedback, some people liked it, some people didn’t. She iterated on her pitch deck.
She didn’t let her engineers build anything until she got to a pitch deck that insurance companies were excited about. And she didn’t stop there, because pitch decks tend to present high-level benefit concepts, and she knew that wasn’t enough to validate her ideas.
She moved to what we call “what-if” scenarios. She started to define specific scenarios around how an insurance company would use her scoring function. She went back to her same customers and said, “You loved our concept, here’s how we think you’ll use it—in one of these three or four ways.”
This was invaluable to her. She learned quickly that the way she was thinking about the product was radically different from the way her customers were.
All of her “what-if” scenarios were wrong, even though her background was in insurance: she’d been working at Allstate, and she was developing the product idea with Allstate people. But Allstate is a large insurance company, and her customers were small insurance companies. She quickly realized they have different abilities and different needs. So she was able to make progress just by iterating through “what-if” scenarios before she even had a product.
Again, what can you learn from Frank and Gina about prototyping?
Start with low-fidelity prototypes. If we’re going to learn every week, we need our designs to change every week. We need our prototypes to change every week. Frank’s team started with pixel-perfect concept designs, but they quickly moved to Balsamiq designs because that’s what the product managers could learn and iterate on quickly to get feedback.
Start with low-fidelity prototypes so that you can quickly change them as you learn from customers. – Tweet This
Gina started with a pitch deck. It’s easy to change PowerPoint, and she did it every week.
After every conversation, both Frank and Gina iterated on their prototypes. They both needed people on their team to have prototyping skills. They had to pick mediums that they knew how to create in.
The product managers on Frank’s team weren’t designers. They didn’t know how to use big Adobe design tools, so they picked something easy. Honestly, they started on a whiteboard with a marker, with the customer in the room, and when they moved to virtual interviews they learned simple prototyping tools.
Gina used PowerPoint to walk through different “what if” scenarios.
Every product team needs prototyping skills full-time on the team. – Tweet This
Prototypes come in different shapes and sizes. They are not just mockups of a GUI. I’ve seen teams prototype by role-playing—having someone on the product team role-play with the customer. We’ve seen Wizard of Oz experiments where people behind the scenes are acting like software.
There are many ways to prototype. The key question to ask is, “How can I get qualitative feedback on my idea quickly?”
Both teams are now at a point where they’re doing regular customer interviews and rapid prototyping. By regular I mean every week. These two teams, who at the beginning had never talked to a customer, are now talking to customers and rapid prototyping every week. So it’s time to introduce product experiments.
This word gets thrown around a lot in the industry so I want to be really clear what I mean by experiment. An experiment starts with a prediction or a hypothesis. We put a stake in the ground and we say this is what we think is going to happen and then we design something that is meant to test that prediction.
An experiment starts with a prediction or a hypothesis. – Tweet This
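In practice, “starting with a prediction” can be as simple as writing the number down before the test runs and comparing the result against it afterward. Here is a minimal sketch with made-up numbers; a real team would also check sample size and statistical significance before acting on the result.

```typescript
// Hypothetical sketch: commit to a prediction before the experiment
// runs, then check the observed result against it.

interface ExperimentResult {
  visitors: number;
  signups: number;
}

// The prediction, written down before the test runs.
const predictedSignupRate = 0.05; // "We think 5% of visitors will sign up."

function evaluate(result: ExperimentResult): string {
  const observed = result.signups / result.visitors;
  const pct = (observed * 100).toFixed(1);
  return observed >= predictedSignupRate
    ? `Prediction held: ${pct}% of visitors signed up.`
    : `Prediction failed: only ${pct}% of visitors signed up.`;
}

console.log(evaluate({ visitors: 400, signups: 12 }));
// "Prediction failed: only 3.0% of visitors signed up."
```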
When I told Frank’s team it was time to start experimenting, they said, “Oh, we already have an experiment in flight.” I said, “Oh, you do? What’s that?” They said, “We’re running a six-month pilot with 200,000 Allstate customers.” I said, “Okay, when did that start?” And they said, “Oh, it’s starting in November.” This was May of last year. I said, “Okay, that’s great. Run your pilot. You’ve got all of your sign-offs, you went through the red tape, your pilot is running. But I want to learn something in the next six months, so we need a faster experiment.”
With Frank’s team, I introduced the idea of landing page tests. Their product was pretty sophisticated. They wanted to help you out when your check engine light came on and help you understand what it meant, where you could get service. The problem with that is your check engine light doesn’t come on every week. Finding an audience that we can test with where this is going to happen is surprisingly hard. Which is why they set up a six-month pilot with 200,000 customers because they knew that at least in that six-month window some of those 200,000 people would have their check engine light come on. But it’s not a fast way to learn.
So we introduced this idea of landing page tests where they could at least build a landing page that says, “Hey, we’ll help you when your check engine light comes on.”
The problem is landing page tests scared the daylights out of this team. Let me remind you they worked at Allstate. There were concerns from their marketing team, “Is it going to hurt our brand? You can’t put Allstate on it.” The marketing team offered, “Oh, we have a search engine marketing team, let’s get them involved.” And I was like, “Whoa, whoa, no we are not doing this.”
The product team themselves were saying, “I don’t even know how to build a landing page. And, Teresa, you already know we don’t have a designer. We’re prototyping on a whiteboard, how are we going to do a landing page test?”
Well, here’s the thing: this team was awesome because even though they were terrified and had a million reasons not to do this, we picked those reasons apart one by one. We invited people from the Allstate marketing team to come to our coaching sessions. We explained what we were doing and why it wasn’t the same as search engine marketing. We told them we wouldn’t use the Allstate brand. We didn’t even use the Arity brand. We used a product brand no one had ever heard of, and we launched a landing page.
By the way, the product manager went out and found Wix and created landing pages in Wix, which gave him A/B testing right out of the box.
The other product manager learned how to set up Google Ads, how to drive traffic to that landing page. This team had never done anything like this before. But because they worked through all of those obstacles, they learned so much in a fraction of the time.
They didn’t wait six months to get their first feedback on their product idea. Every week, they were able to tweak their landing page test and run another set. They played with value propositions, they played with messaging, they played with different feature sets.
By adding landing page tests to their cadence of interviews and rapid prototypes, they were able to learn faster in three weeks than they had in the six months before this. This was one of the most awesome stories to watch happen.
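For anyone curious what an A/B split does under the hood, here is a minimal sketch. Wix handled this for Frank’s team out of the box; the second headline and the hashing scheme below are hypothetical, included only to show the mechanics.

```typescript
// Hypothetical sketch of the mechanics behind a landing page A/B test.
// Tools like Wix handle this for you; this only illustrates the idea.

const variants = [
  { id: "a", headline: "We'll help you when your check engine light comes on." },
  { id: "b", headline: "Know what your check engine light means, instantly." },
];

// Hash the visitor id so each visitor sees the same variant on
// every visit.
function assignVariant(visitorId: string) {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return variants[hash % variants.length];
}

// Record which variant a converting visitor saw; in a real setup this
// would go to your analytics tool instead of the console.
function recordConversion(visitorId: string): void {
  const variant = assignVariant(visitorId);
  console.log(`conversion for variant ${variant.id}`);
}
```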
Again, I know the enterprise folks in the room are thinking, “We can’t do landing page tests.” I get it.
Let’s come back to Gina. For Gina, a landing page test wasn’t going to work. She already knew that people were excited about her concept; she learned that from her pitch decks. She had already worked through the “what-if” scenarios; she knew what her product needed to do.
But with experiments our goal is to remove risk. Her risk wasn’t, “Do people want it?” And by this point it also wasn’t, “Do we know how they want to use it?” Her risk was, “Can we do it?”
Scoring functions are not easy. Google makes a bajillion dollars for a reason. They’ve got really good algorithms. So for Gina, her product experiments had to be focused on feasibility. This is where she had to work with her data analyst, and they had to come up with their own predictions. This is what we think our scoring function is going to predict, and they had to run their own experiments to make sure that happened.
This started a year ago, so we’re a year into this process. They’ve run enough experiments that they now have one set of “what-if” scenarios they know they can deliver on, and they’re selling those, and another set of “what-if” scenarios that they know their customers need, where they’re still figuring out how to algorithmically deliver on them.
In Gina’s case, her product experiments are not A/B tests with consumers, they are number-crunching, data analyst, feasibility tests. We see the same thing with engineering problems; we often have cool ideas, but we don’t know if they’re technically feasible. All of Gina’s experimenting was focused on feasibility.
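What might one of those feasibility experiments look like? Here is a minimal sketch under stated assumptions: the record shape, the outcome field, and the separation threshold are all hypothetical, but the pattern matches what Gina’s team did, which is to commit in advance to what the score must do, then measure it.

```typescript
// Hypothetical sketch of a feasibility check for a scoring function:
// state up front how well the score must separate outcomes, then
// measure it on real data. Field names and threshold are made up.

interface ScoredRecord {
  score: number;     // the scoring function's output
  hadClaim: boolean; // the outcome the score is supposed to predict
}

// The prediction, committed to in advance: records with claims should
// score at least this much higher, on average, than records without.
const requiredSeparation = 10;

function meanScore(records: ScoredRecord[], hadClaim: boolean): number {
  const subset = records.filter((r) => r.hadClaim === hadClaim);
  if (subset.length === 0) return NaN; // need examples of both outcomes
  return subset.reduce((sum, r) => sum + r.score, 0) / subset.length;
}

function feasibilityCheck(records: ScoredRecord[]): boolean {
  const separation = meanScore(records, true) - meanScore(records, false);
  return separation >= requiredSeparation; // NaN compares false: not feasible yet
}
```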
What can we learn from Frank and Gina?
Chip away at those obstacles.
Frank’s team was faced with a mountain of obstacles when it came to running a landing page test. The first time I suggested it to them, they all said, almost in unison, “That would never work here.” What I left out of the story was, once they ran a successful landing page test and shared it with everybody else in the organization, everybody started doing landing page tests. It spread like wildfire.
Some of them weren’t using them quite correctly and they had to come to coaching and learn how to do them better. But they all got excited about a tool that allowed them to learn every week because Frank’s team was willing to chip away at the obstacles.
It’s easy to think about experiments as large-scale pilots where we test the whole product idea. That’s not what we want to do. We want to design our experiments so they allow us to learn fast. I don’t want to wait until November; I want to know what I can learn this week. And that’s a really powerful question. I get my teams to ask, “What can we learn today?” and “What can we learn this week?” to accelerate the pace.
One of the best ways to make your experiments smaller is to test specific assumptions. Don’t test the whole product idea. Identify the piece that’s riskiest.
In Frank’s case, they didn’t need to test the whole product idea at once. The riskiest assumption came first: “Will anybody even download our app?” A landing page test is great for testing that.
Then again, just like we saw with prototypes, product experiments come in all shapes and sizes.
Risk shows up in many forms. We have risk around desirability: “Does anybody want it?” We have risk around feasibility: “Can we even build it?” And we have risk around viability: “Is there a market for it? Can we make money from it? Does it support our business?” As product teams, we need to run experiments on all three of these dimensions.
Frank and Gina are now iterating every week. They are talking to their customers, they are prototyping, they are running experiments. What do they get from all of this?
What happens when you interview, prototype test, and experiment every week?
You get fast answers to that week’s questions from multiple research activities. Now, this is important. We make product decisions every day. We don’t want them to be guesses.
We also don’t want to rely on a single research activity. I can’t tell you how many times both Frank and Gina would hear one thing in an interview and get a conflicting result in a landing page test. They could have gone with what they heard in the interview, or with what they learned in the landing page test, but because the two conflicted, it meant there was more to learn. They were able to dig in, ask why there was a difference, and turn that into a new research question they could answer with the following week’s activities.
Continuous discovery allows you to stop guessing and start learning. – Tweet This
Setting up this infrastructure where these activities are already on your calendar makes it easy to get fast answers. When you can get fast answers, you stop guessing and you start learning. This is how we get to great products. I want to encourage all of you to start thinking about: How can you talk to customers every week? How can you prototype every week? How can you run good product experiments every week? Thank you.
nilsdavis says
Teresa – fantastic talk (which I read, didn’t watch)! I love that you have something for everyone in it. I also will keep in mind that mantra about “what can I learn today?”
ttorres says
Thanks, Nils! I’m glad you enjoyed it.
TJ says
Teresa, it takes great skill to break down some common clichés and show practical ways to implement them. Thank you for doing it.
Teresa Torres says
Thanks, TJ. I’m glad you enjoyed it.
Rahul Desai says
I just discovered this site today and I’m blown away by what you wrote. Was so simple to understand and yet covered such complex areas. Thank you for taking the time to do this!
Teresa Torres says
Thanks, Rahul and welcome to Product Talk!