How do you know that you are doing discovery well?
If you want to improve your discovery process, what outcomes help you track your progress?
I get asked these questions often. I like that teams are trying to bring an outcome-focused mindset to their discovery practice.
The good news is that setting discovery outcomes is no different from setting any other outcome. Start by asking, what does success look like? How will we know when we are doing discovery well?
Setting discovery outcomes is no different from setting any other outcome. Start by asking, what does success look like? – Tweet This
If I envision a high-performing discovery team, what do I see? I see a team that is engaging with customers on a regular basis, mapping the opportunity space, identifying and testing assumptions, iteratively exploring ideas, and shipping value (not just code) often.
While many teams are adopting these discovery activities (even if on an ad hoc basis), it’s the last item—shipping value often—that’s the hardest to measure.
Good Discovery Drives Outcomes
In a perfect world, a high-performing discovery team would hit their product outcomes quarter over quarter. This is the ultimate measure of discovery.
Unfortunately, we don’t live in a perfect world. Whether or not we hit our outcome depends on a number of confounding variables that are outside of our control (COVID-19, anyone?).
However, a high-performing discovery team should hit their outcomes at a higher rate than a low-performing discovery team, and that rate should grow over time as their discovery practice improves.
A high-performing discovery team should hit their outcomes at a higher rate than a low-performing discovery team and that rate should grow over time as their discovery practice improves. – Tweet This
To measure discovery, chart your progress toward your desired outcome quarter over quarter. Here’s what this might look like:
In this chart, we are tracking the percentage of all users who are engaged and how that metric changes over time. For example, at the start of tracking, roughly 30% of users are engaged, and by the end of tracking, about 60% of users are engaged.
How you track engagement will change depending on your product. Netflix might define an engaged user as someone who watches at least one show per week, whereas Facebook might define an engaged user as someone who logs in at least three times per week.
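To make this concrete, here’s a minimal Python sketch of how a team might compute this metric each quarter. The event log, the user set, and the three-distinct-weeks threshold are all hypothetical; substitute your own definition of an engaged user:

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage log for one quarter: (user_id, activity_date) pairs.
events = [
    ("u1", date(2024, 1, 3)), ("u1", date(2024, 1, 10)), ("u1", date(2024, 1, 17)),
    ("u2", date(2024, 1, 4)),
    ("u3", date(2024, 2, 15)), ("u3", date(2024, 2, 22)),
]
all_users = {"u1", "u2", "u3", "u4"}  # every user who could have engaged

# Count the distinct (year, week) pairs each user was active in.
active_weeks = defaultdict(set)
for user_id, day in events:
    year, week, _ = day.isocalendar()
    active_weeks[user_id].add((year, week))

# "Engaged" here means active in at least 3 distinct weeks (an assumed
# threshold; swap in your own definition, e.g. Netflix's one show per week).
WEEKS_REQUIRED = 3
engaged = {u for u, weeks in active_weeks.items() if len(weeks) >= WEEKS_REQUIRED}

print(f"Engagement: {len(engaged) / len(all_users):.0%}")  # Engagement: 25%
```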
Notice how we aren’t charting progress as a percentage of the goal. If a team sets a goal of increasing engagement by 10% and manages to increase it by 8%, you might be tempted to track that they reached 80% of their goal.
The challenge with this strategy is that it doesn’t account for how ambitious the team was when setting the goal. You don’t want to encourage your teams to sandbag their goals in order to make their trend line look good.
Instead, track progress toward your outcome as absolute progress. If the team’s outcome is to increase engagement, measure engagement quarter over quarter.
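A quick worked example (with hypothetical numbers) shows why percent-of-goal tracking rewards sandbagging while absolute tracking doesn’t:

```python
# Hypothetical results for two teams working on the same engagement metric.
team_a = {"goal_lift": 0.10, "actual_lift": 0.08}  # ambitious goal, fell short
team_b = {"goal_lift": 0.05, "actual_lift": 0.05}  # sandbagged goal, "hit" it

for name, team in (("Team A", team_a), ("Team B", team_b)):
    pct_of_goal = team["actual_lift"] / team["goal_lift"]
    print(f"{name}: {pct_of_goal:.0%} of goal, absolute lift {team['actual_lift']:+.0%}")

# Team A: 80% of goal, absolute lift +8%
# Team B: 100% of goal, absolute lift +5%
# Percent-of-goal ranks Team B higher, but Team A moved the metric more.
```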
This, of course, assumes that you are letting your team focus on an outcome over time (i.e. for more than one quarter). I can’t recommend this enough.
When we are working on a new outcome for the first time, it may take time to figure out what will move the metric. A team might do all of the right things and make zero progress toward their outcome.
However, with time, quarter over quarter, their rate of impact on that outcome should improve. It won’t be a steady climb. We may get slammed one quarter by a series of failed tests. We might see a significant shift in the world (e.g. working remotely) that slows us down.
But take the long view, looking at several quarters at once: if we are continuously improving our discovery practices, we should see a continuous improvement in the rate at which we drive our desired outcomes.
If we are continuously improving our discovery practices, we should see a continuous improvement in the rate at which we drive our desired outcomes. – Tweet This
An important caveat to note is that most outcomes have a natural ceiling. You will never engage 100% of your end users. As a team gets close to that ceiling, their rate of improvement will slow. But teams often shift to a higher-priority outcome long before this happens.
While measuring impact on our desired outcome is the truest measure of discovery work, the rate at which we drive our desired outcomes is a lagging indicator. Lagging indicators don’t make good outcomes. To measure discovery week over week, we’ll need to find some good leading indicators to track.
The Behaviors of a Strong Continuous Discovery Team
We know what a good continuous discovery team looks like. They are focused on driving their outcome. They have a continuous cadence of interviewing, prototyping, and assumption testing. They aren’t afraid to throw away ideas that don’t work and they double down on the ideas that do work.
We can use this picture of what good looks like to identify our leading indicators.
First, I like to measure the cadence of discovery activities. Now, many people misinterpret this to mean counting the number of interviews you conduct or the number of experiments that you run. This is a mistake.
Counts that always go up are vanity metrics. They make us feel good about our progress, but they don’t accurately measure how we are doing.
Counts that always go up are vanity metrics. They make us feel good about our progress, but they don’t accurately measure how we are doing. – Tweet This
If two teams set a goal of interviewing 12 customers in the quarter, and the first team interviews a customer every week while the second team interviews all 12 customers in the last two weeks of the quarter, are they both equally good continuous discovery teams? Of course not.
Instead of counting how many times an activity is done, measure the cycle time between activities. Measure the number of days since your last interview. Measure the number of days since your last assumption test. And work to reduce the number of days in between activities.
Here’s what this might look like:
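For instance, here’s a minimal Python sketch (the interview dates are hypothetical) that computes the days since your last interview and the longest gap between any two interviews:

```python
from datetime import date

# Hypothetical log of customer interview dates for one product trio.
interviews = [date(2024, 3, 1), date(2024, 3, 8), date(2024, 3, 29), date(2024, 4, 5)]
today = date(2024, 4, 12)

def days_since_last(dates: list[date], as_of: date) -> int:
    """Days since the most recent activity."""
    return (as_of - max(dates)).days

def max_gap(dates: list[date]) -> int:
    """Longest gap, in days, between consecutive activities."""
    ordered = sorted(dates)
    return max((later - earlier).days for earlier, later in zip(ordered, ordered[1:]))

print(days_since_last(interviews, today))  # 7  -> a week since the last interview
print(max_gap(interviews))                 # 21 -> one three-week stretch to shrink
```

The same two functions work for any discovery activity: pass in your assumption test or prototype test dates instead of interview dates.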
However, teams don’t need to be super quantitative about this. I typically recommend that product trios talk to a customer at least once a week. Depending on how easy it is to connect with your customer base, I may recommend two or three times a week.
But take a continuous improvement mindset to this. If your average cycle time between customer interviews is monthly, try to get to biweekly. If it’s biweekly, try to get to weekly, and so on.
And remember, rather than focusing on average cycle time (which can also be gamed), it’s really about limiting the maximum number of days you’ll go between any two activities.
Cycle time helps us understand cadence and cadence is critical to adopting a continuous mindset.
What Did You Consider That Didn’t Work?
Cycle time, however, doesn’t tell us whether we are getting value out of these activities. I’ve seen too many teams do all of the right activities but fail to learn from them.
To get value out of discovery activities, teams need to adopt the right mindsets. We need to trust the experimental data. We need to be prepared to be wrong. We need to have intellectual honesty about what we are learning. And we need to be able to connect the dots between what we are learning and the product decisions that we need to make.
To get value out of discovery activities, teams need to adopt the right mindsets. We need to trust the experimental data. We need to be prepared to be wrong. – Tweet This
The best leading indicator for measuring the effectiveness of discovery activities is to track how often a solution is thrown out.
Be careful not to turn this into a vanity metric. We don’t want to count the number of times we throw an idea out. It’s easy to brainstorm 100 ideas and throw away 99 of them.
Instead, we want to measure cycle time: the number of days since your last idea was thrown out. And then we want to minimize the number of days between discarded ideas.
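The same computation from the interview sketch applies here; only the log changes (again, the dates are hypothetical):

```python
from datetime import date

# Hypothetical log of the dates on which the team threw out an idea.
discarded = [date(2024, 3, 4), date(2024, 3, 20), date(2024, 4, 1)]
today = date(2024, 4, 12)

print((today - max(discarded)).days)  # 11 days since we last killed an idea
```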
Cycle time between throwing out ideas is a great metric because it does two things. First, it measures (and celebrates) when we learn that we shouldn’t build something, saving us the time and energy of building the wrong stuff.
And second, it reinforces a compare-and-contrast mindset. We know that we make better decisions when we consider more options. If we are measured by how regularly we throw out options, we’ll consider more options, and ultimately make better decisions.
Retrospectives Drive Continuous Improvement
Tracking interviewing, prototyping, and assumption testing will help you monitor the cadence of your discovery. Tracking how often you abandon your ideas will help you understand if those activities are having an impact. But neither will tell you how to improve.
For that, we need to modify our team retrospectives. Whenever your team gets together to reflect on your practice (you do do that, don’t you?), add the following to your agenda:
First, ask, “What surprised us during the past two weeks?” Make a list.
Then, for each item on the list, ask, “How could we have learned that sooner?”
Your list might include delivery items that took longer than expected. Was there a feasibility assumption that you missed? How could you have uncovered that assumption sooner?
Did a recently released feature fall short of your expectations? What went wrong? Did you miss a desirability or usability assumption? How could you have tested those assumptions sooner?
These two questions pack a lot of power. They will help your team catch your discovery blind spots. You’ll shift more of your learning from delivery to discovery, saving your team from wasted effort.
A Simple Framework for Evaluating Discovery Practice
The simplest way to measure discovery activities is to track your cycle time between customer interviews, prototyping, and assumption tests.
Tracking how often you abandon ideas will help you measure if those activities are having an impact on your decisions.
Using your retrospectives to reflect on your discovery practice will help you continuously improve.
Using your retrospectives to reflect on your discovery practice will help you continuously improve. – Tweet This
But remember, these metrics only work if they truly are leading indicators of our lagging indicator: driving our desired outcomes. So track your progress toward your desired outcomes quarter over quarter and watch the trend over time. Keep in mind that this chart will be noisy. Ups and downs are normal. The key is for the rate to trend up over time.