When a team sets out to develop a product, they generate a series of hypotheses. In my experience, good teams are explicit about these hypotheses and test them before they invest too much time in a particular product plan. More often than not, however, I see development teams that aren’t even aware these hypotheses exist, building their products as if the hypotheses were facts. The problem with this approach is that a team rarely gets all, if any, of these hypotheses right from the get-go. If they don’t test them first, they run the risk of building a product that nobody wants. Let’s take a look at an example to illustrate how this works.
Explicitly Enumerating Product Assumptions as Hypotheses
I have an idea for a Caltrain mobile app. Caltrain is the local train that runs from San Jose to San Francisco. Riders include daily commuters and occasional train riders. As a daily commuter, I witness a number of problems experienced by occasional riders. They don’t know how to buy tickets, they don’t know how to pay for parking, they don’t know which platform to stand on. They don’t know how to tell what stop they are at or when and where to get off the train.
I suspect that if Caltrain tackled some of these problems, they might convert more of these occasional riders into daily commuters, growing their ridership. I’ve often considered building a Caltrain mobile app that addresses some of these usability problems.
The app would walk the occasional train rider through each step of riding the train: where to park, how to buy a ticket, when the next train is coming, and where you currently are relative to where you want to get off. My goal would be to make it as easy as possible to ride the train.
I ride the train every day. I know how to do all of these things. I’ve observed these problems first hand for years. Should I just get started and build the app? How do I know that occasional riders will want it?
Let’s take a look at some of the assumptions, or hypotheses, underlying my app idea.
- H1: Occasional train riders will download a Caltrain mobile app before they ride the train.
- H2: The problems that occasional train riders experience are big enough that they will remember to use the app they downloaded earlier to help solve their problems.
- H3: The desire to ride the train is great enough that if occasional train riders had help they would ride the train more frequently.
It’s quite possible that occasional train riders don’t anticipate having problems. If they did, they would probably choose to drive rather than take the train. So H1 could be a big hurdle.
H2 may not be as big of a hurdle. It’s possible that if I downloaded the app and later ran into a problem, the problem itself would act as a trigger, reminding me that I have the app. But that still needs to be tested. It might be easier for me to just ask somebody nearby. Or I might just give up, something I see people do daily.
With H3, if my goal with the app is to help grow Caltrain ridership, then I am assuming that the reason people don’t ride the train more often is because it’s hard. This might be part of the problem. But there are a number of other reasons why occasional riders might not take the train more often – the big one being that it is often slower than driving. Even if people had all the help they needed, they might still choose to drive over taking the train because they simply want to get to their destination sooner.
Testing Each Hypothesis
Before I decide whether or not it’s worth it to build this app, I want to test each of these three hypotheses. But how do I do that?
H1 is fairly easy to test. It requires that riders take action on their phones or computers before they ride the train. Using the Google keyword tool, I can easily find out how many people search for keywords like “Caltrain schedule,” “Caltrain tickets,” “Caltrain parking,” and other phrases I identify as related to the problems I want to tackle. If the numbers are high enough, it will validate that people do in fact seek out this type of information.
But that’s only part of the story. I also want to validate that the riders seeking this information would download a mobile app to solve their problems. How do I do this without actually building a mobile app?
Again, Google can help. I can build a simple splash page for my mobile app – a webpage that talks about the features and benefits of the app and asks visitors to download it. To get visitors to this page, I can buy Google Ads for the keywords I uncovered in my first test. But wait – there’s no app to download. That’s okay. I’m just testing whether or not visitors will click on the “Download” button. If they do, I can apologize that the app is not yet ready and let them sign up to be notified when it is. This does two things: 1) it validates that people will take action to download an app, and 2) it lets me build up a list of potential users before I start building.
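For the curious, here’s roughly what the instrumentation on that splash page could look like. This is a minimal sketch, not a prescription: the /track endpoint, the element IDs, and the event names are all placeholders I’ve made up, and any off-the-shelf analytics tool would do the same job.

```typescript
// Minimal click tracking for the fake "Download" button.
// The "/track" endpoint, element IDs, and event names are
// hypothetical; any analytics service would work just as well.

function trackEvent(name: string): void {
  // sendBeacon delivers the event even if the visitor leaves the page.
  navigator.sendBeacon("/track", JSON.stringify({ event: name, ts: Date.now() }));
}

trackEvent("splash_viewed"); // the denominator: everyone who saw the page

const downloadButton = document.getElementById("download-button")!;
const signupForm = document.getElementById("signup-form") as HTMLFormElement;

downloadButton.addEventListener("click", (event) => {
  event.preventDefault();
  trackEvent("download_clicked"); // the H1 signal: intent to download
  signupForm.hidden = false;      // apologize and offer the notify-me list
});

signupForm.addEventListener("submit", () => {
  trackEvent("signup_submitted"); // builds the potential-user list
});
```

The number to watch is download clicks divided by page views. If enough ad-driven visitors click, that’s evidence for H1, and the sign-ups become the potential-user list I mentioned above.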
H2 might be a little harder to test, but here’s where some creativity can help. How do I test whether or not someone will remember they have an app when they don’t in fact have an app? I can fake it, just like I did for my H1 test. Here are a couple of ideas:
- I can ask friends who are occasional train riders to call me when they run into any of these problems. Do they remember to call me? Do they figure it out on their own? Do they just give up?
- Better yet, for the folks who clicked on my “Download” button when testing H1, I can ask them to bookmark a page that acts like an app, so it’s available on their phone. The page could be as simple as an FAQ, or it could allow users to enter their own questions that I answer directly. Either approach would be simpler to manage in the short run than the full app, and either gives me the data I need to know whether or not riders will remember to use the app.
H3 is tougher yet, but still not impossible. I can ask my early users (remember, I have users even though I don’t yet have an app) to tell me how often they ride the train. After they’ve accessed the fake app a certain number of times, I can ask them again. If their ridership goes up over time, I’ve validated my hypothesis.
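To make that measurement concrete, here’s a small sketch of the before-and-after comparison. The data shape is hypothetical – it assumes I’ve asked each user for their self-reported rides per month at signup and again after they’ve used the fake app a few times.

```typescript
// Hypothetical self-reported ridership, collected twice per user:
// once at signup and once after several uses of the fake app.
interface RidershipReport {
  userId: string;
  ridesPerMonthBefore: number;
  ridesPerMonthAfter: number;
}

// Average per-user change in rides per month. A clearly positive
// average across enough users is the signal H3 predicts.
function averageRidershipChange(reports: RidershipReport[]): number {
  const totalChange = reports.reduce(
    (sum, r) => sum + (r.ridesPerMonthAfter - r.ridesPerMonthBefore),
    0
  );
  return totalChange / reports.length;
}

// Example: two of three users ride more after getting help.
const reports: RidershipReport[] = [
  { userId: "a", ridesPerMonthBefore: 2, ridesPerMonthAfter: 6 },
  { userId: "b", ridesPerMonthBefore: 1, ridesPerMonthAfter: 1 },
  { userId: "c", ridesPerMonthBefore: 0, ridesPerMonthAfter: 3 },
];
console.log(averageRidershipChange(reports)); // ≈ 2.33 more rides per month
```

Self-reported numbers are noisy, of course. The point is only that I can get a directional read on H3 without building anything.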
I already know what’s going through your head. If I’m solving the problem with an FAQ or by answering one-off questions, why would I build the app at all? That’s exactly the point: I wouldn’t necessarily have to. I would have all the data I need to know exactly how much product to build to solve the problem I set out to tackle. If I’m answering one-off questions, I probably still want to build a product that automates this. If an FAQ is getting the job done, I might learn that my problem was much easier to solve than I originally thought. It’s not about my original product vision; it’s about solving a problem and meeting a market need.
It’s An Iterative Process
Even if I can validate all three of these hypotheses and decide to build the app, I’m not done testing. At each step along the way, as I build out the app, I’ll keep layering new hypotheses onto the ones I’ve already tested. Product development is an iterative process. But if I keep testing each assumption, I’m unlikely to invest too much in a product that nobody wants.
Next Steps
In a future post, I’ll explore how to get better at surfacing assumptions and turning them into explicit hypotheses. In the meantime, what do you do to make sure you aren’t building products on too many untested assumptions?
Stephen says
Faking it with a splash page with no download sounds like a terrible idea. Think of the user experience.
ttorres says
Hi Stephen,
Thanks for your comment. You are absolutely right. A splash page with no download is not a great user experience. But it’s a trade-off: if you build a product that nobody wants, that’s not a great user experience either.
And I’d argue that while the splash page with no download isn’t a great experience, it’s not a bad one either. Let’s look at it from the user’s perspective. If I do a search for a Caltrain app and come across the splash page, one of two things is going to happen. Either I’m not interested in the app at all, in which case I just go away and never learn that the download button doesn’t work. No harm done. Or I am interested in the app and I click on the download button. In this second case, is it really that bad to find out that the app doesn’t exist yet, but have the option to sign up to get it when it’s available? I’m still learning that there is an app that will eventually meet my needs. This isn’t much different from pre-orders.
If the concern is with the “Download” button itself, you can always soften it by using a “Learn More” button or a “Sign Up” button. These may be less effective than a “Download” button, but that may be a fair trade-off to improve the user experience.
The devil is always in the details, but the point I really want to emphasize is to focus on learning as quickly as possible. It’s better to learn early whether you have users who are interested in your app than to build an app that nobody wants.
Thanks again for taking the time to comment!
Teresa
Diane says
Interesting post.
I know in my experience, ‘save it for later’ apps frequently end up in my phone’s wasteland – I never remember I have them at the times when I could have used them. But isn’t that a universal trait? I.e., would there be any statistical significance in testing H2 with dummy clicks, versus just acknowledging that drop-off after download will occur, in line with similar apps?
I’m probably stating the obvious here, but this hypothesis testing is dependent on market size and window. If your market is large, you could get reasonable results within a few days. If your market is small, it could take weeks to months to get enough valid feedback, and you could risk alienating a good portion of your customer base, especially if customer interest peaks during testing.
Also, as to the timeline of testing, you would have to ask whether the profile of occasional riders changes with the seasons – e.g., tourism peaking in May.
ttorres says
Hi Diane,
Thanks for your comment! I think you are right. You can’t test H2 with dummy clicks. You would have to find a way to mimic the behavior when the problem occurs. For H2, I suggest actually mimicking the functionality of the app you intend to build. This could be through a phone call to a real person rather than an automated system, so you can test before you build.
As for your comments on market size and window, you are absolutely right. You do have to make sure that you are testing your hypotheses with your target market, otherwise it’s not really a valid test.
Thanks again for your comment!