A Systematic Approach for Data-Driven Decision Making

I am currently working on my bachelor’s thesis on “Metrics Design for SaaS Startups”. While interviewing many founders, I found that almost all of them consider metrics very important for product management decisions. And yet I have the strong impression that many founders only scratch the surface every now and then and lack a systematic, regular approach to using metrics for decision making.

TL;DR? Jump to the end of this post to view slides about this topic.

Build-Measure-Learn too simplistic?

Managing an existing product in an early-stage startup (let’s say before product/market fit) is hard. In customer development interviews, potential customers tell you that your solution is exactly what they need to solve their problem. But your numbers aren’t skyrocketing. Why don’t users come back? Why don’t they upgrade to a paid plan after the trial?

Is Build-Measure-Learn too simplistic after problem/solution fit?

The problem many founders face is where to start looking for satisfaction and usage barriers (when users aren’t using your product the way you intended them to). Following the Build-Measure-Learn approach, many people forget how much effort has to go into a good hypothesis for a product change (not necessarily a feature; most of the time another feature is the last thing you need to implement!). Often, most of the work in a Build-Measure-Learn loop has to be done before Build even starts.

A Systematic Approach

Metrics are indicators. If you know how to use them, they can teach you a lot about where something is wrong. But there is a lot more you need to know about your product than just numbers; focusing on numbers alone is not enough. So I designed a step-by-step process that reminds you that you also need to talk to users along the way…

My systematic approach for regular data-driven decision making

Here is a short explanation of each step in the process:

  • Indication: Look at a dashboard of macro metrics (e.g. AARRR funnel metrics) on a regular basis to spot when something is wrong. Identify the key metric you want to improve (a minimal funnel sketch follows this list).
  • In-depth Analysis: Dive into the data and into customer development interviews. Use your support channel to gain insights into the problems users face at the given lifecycle stage.
  • Diagnosis: Based on the gathered insights, identify the main reason why users get stuck.
  • Hypothesis: You know the problem. Now, what can be done to satisfy the users’ needs? Is it a new feature? A blog post? A support video? Do you have to change pricing? State a falsifiable hypothesis!
  • Test: Always test a product change so you get validated learning. You do not always need a split test (see the significance-check sketch below).
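
As a minimal sketch of the Indication step: once you export per-stage user counts from your analytics tool, a macro-metrics funnel can be reduced to a handful of conversion rates. The stage names and numbers below are placeholders, not real data.

```python
# Minimal sketch of an AARRR macro-metrics dashboard.
# Assumes per-stage user counts exported from your analytics tool;
# the stage names and numbers below are placeholders.

funnel = {
    "acquisition": 5000,  # visitors who landed on the site
    "activation":  1200,  # signed up and reached the "aha" moment
    "retention":    400,  # came back during the following weeks
    "revenue":       80,  # upgraded to a paid plan
    "referral":      25,  # invited at least one other user
}

def conversion_rates(stages):
    """Step-to-step conversion rate for each funnel stage."""
    names = list(stages)
    return {
        curr: (stages[curr] / stages[prev] if stages[prev] else 0.0)
        for prev, curr in zip(names, names[1:])
    }

if __name__ == "__main__":
    for stage, rate in conversion_rates(funnel).items():
        print(f"{stage:>11}: {rate:6.1%}")
    # A stage with an unusually low rate points to the key metric
    # to dig into during the In-depth Analysis step.
```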
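
And for the Test step, here is a sketch of how a simple split test could be checked for statistical significance with a two-proportion z-test, using only the Python standard library. The variant names and counts are made up for illustration.

```python
# Sketch of a significance check for a simple split test on a conversion
# rate, using a two-proportion z-test. All counts are made up.
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # via standard normal CDF
    return z, p_value

if __name__ == "__main__":
    # Variant A: old onboarding flow, variant B: new one (placeholder numbers).
    z, p = two_proportion_z_test(conv_a=40, n_a=1000, conv_b=65, n_b=1000)
    print(f"z = {z:.2f}, p = {p:.3f}")
    # p < 0.05 suggests the change really moved the conversion rate.
```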

I will go deeper into each of these steps in later blog posts. For now, I really need your feedback: is this a viable approach? Thank you!

For deeper insights and some tool suggestions, please view my slides about this topic:

[slideshare id=16009719&doc=2013-01-15processproductimprovement-130115142740-phpapp01]

Thank you for reading and giving me feedback!

Regards from Germany,

Jan