In the last post we ended by concluding that the results of many A/B tests lead to further questions. This is a good thing. Let’s take a look at how you can string A/B tests together to build up a knowledge base over time.
Let’s continue with our previous example, where our insight is that creating a sense of urgency in the email subject line will increase open rates. We started by testing the following hypothesis:
- Including an expiration date in the subject line will increase open rates.
But this is just one way to create a sense of urgency. There are many others. For example, we could just as easily have tested the following hypothesis:
- Indicating a reward is limited to the first 10 people in the subject line will increase open rates.
Or we could have tested any of the following variations on these same ideas:
- Asking people to act now in the subject line will increase open rates.
- Indicating something is hot in the subject line will increase open rates.
- Highlighting the missed opportunity of not using something in the subject line will increase open rates.
And so on.
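Mechanically, each of these hypotheses is evaluated the same way: split your list at random, send each group a different subject line, and check whether the difference in open rates is bigger than chance alone would explain. Here’s a minimal sketch of that check using a two-proportion z-test; the function name and the counts are made up for illustration:

```python
import math

def open_rate_ztest(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test: is variant B's open rate different from A's?"""
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    # Pooled open rate under the null hypothesis of no difference.
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical send: control vs. an "expires tonight" subject line.
z, p = open_rate_ztest(opens_a=420, sends_a=5000, opens_b=495, sends_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p (say < 0.05) is a "pass"
```

A p-value below whatever threshold you’ve committed to in advance is what “pass” means in the next paragraph.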
Some of these hypotheses may pass. Some may fail. Some may work for some segments of your audience, but not for others. Some may work once. Some may work over and over again.
But they all test the same underlying insight: that creating a sense of urgency will increase open rates.
Each test tells us whether a hypothesis holds in a specific context. If we test a number of related hypotheses, we start to understand the nuances of the original insight. This is where real learning happens.
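One lightweight way to capture that context is to log every result alongside the insight, hypothesis, and segment it applies to, then summarize across tests. A minimal sketch, with entirely made-up outcomes:

```python
from collections import defaultdict

# Hypothetical test log: (insight, hypothesis, segment, passed).
results = [
    ("urgency", "expiration date",     "new users",    True),
    ("urgency", "expiration date",     "repeat users", False),
    ("urgency", "limited to first 10", "new users",    True),
    ("urgency", "act now",             "repeat users", True),
]

# Group outcomes by hypothesis to see where each one holds and where it doesn't.
by_hypothesis = defaultdict(list)
for insight, hypothesis, segment, passed in results:
    by_hypothesis[hypothesis].append((segment, passed))

for hypothesis, outcomes in by_hypothesis.items():
    summary = ", ".join(f"{seg}: {'pass' if ok else 'fail'}" for seg, ok in outcomes)
    print(f"{hypothesis} -> {summary}")
```

Even a log this small starts to surface nuance: the same hypothesis can pass in one segment and fail in another.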
It’s one thing to learn a tactic – adding an expiration date to a subject line will increase open rates. It’s another thing to learn the why behind the tactic. The more you can uncover about what works and what doesn’t, the closer you can get to the why behind your insight.
It’s one thing to conclude that a sense of urgency will increase open rates. It’s much more valuable to know that expiration dates work the first time, but tend to lose their potency with each use; strong calls-to-action create urgency over and over again; and missed opportunities are the most powerful at engaging new users.
Of course, I just made all of those conclusions up. To know whether they hold in your own context, you have to run your own tests. Every context is different. Every segment is different. The point is to keep learning. Keep coming up with related tests to run. Reach for depth of knowledge. Go beyond the tactics and seek to understand the nuances of your use cases and your audience.
I leave you with one of my favorite quotes by Richard Feynman. He was talking about so-called experts and what it means to know something. It’s quite applicable here and captures the essence of intellectual honesty:
“See, I have the advantage of having found out how hard it is to get to really know something, how careful you have to be about checking the experiments, how easy it is to make mistakes and fool yourself. I know what it really means to know something. And therefore, I see how it is that they get their information and I can’t believe that they know it—they haven’t done the work necessary, they haven’t done the checks necessary, they haven’t done the care necessary. I have a great suspicion that they don’t know how this stuff is done and they are intimidating people by it.” -Richard Feynman
What do you really know? Have you started to build up a body of knowledge related to your context and your audience? If not, what’s stopping you?