Making the Most of User Data in Early-Stage Products
Apurva Mudgal
Head of Consumer Product at Simpl
Problem
One could track many things in a new or early-stage product, but narrowing that down to the right set of user data can separate success from failure. It is rarely an easy call. For example, an early-stage product may have data that is both too thin and too sparse, and over-reliance on it can lead to erroneous conclusions. A seasoned product leader knows when to rely on user data and when to back it up with qualitative insights.
Regardless of the specifics of any given situation, two main challenges when tracking data in early-stage products are:
- Knowing what to measure and how to measure it;
- Being careful not to be misled by data. Data can tell almost any story depending on how it is interpreted. However, there are a number of techniques one can use alongside data analysis to understand a user problem more comprehensively.
Actions taken
Before setting up any metrics, a product leader should have a deep understanding of how the product is likely to be used. They should list out the expected user behaviors and set the metrics accordingly. At the same time, they should complement the quantitative data with qualitative insights drawn from session recordings and user interviews. Whenever possible, I rely on unmediated interaction with users to verify a hypothesis, and then closely examine how those insights relate to the data coming from the metrics we set.
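To make that concrete, here is a minimal, hypothetical sketch of such a plan: each expected behavior is mapped to the event that will be logged and the metric that will be read from it. The behaviors, event names, and metrics are illustrative assumptions, not an actual tracking schema.

```python
# Hypothetical instrumentation plan: map each expected user behavior to an
# event to log and the metric derived from it (all names are illustrative).
METRIC_PLAN = [
    {
        "expected_behavior": "New user completes signup and finishes onboarding",
        "event": "onboarding_completed",
        "metric": "activation rate = onboarding completions / signups",
    },
    {
        "expected_behavior": "User performs the core action for the first time",
        "event": "core_action_completed",
        "metric": "time from signup to first core action",
    },
    {
        "expected_behavior": "User returns and repeats the core action",
        "event": "core_action_completed",
        "metric": "week-over-week repeat rate",
    },
]

for item in METRIC_PLAN:
    print(f"{item['expected_behavior']} -> {item['event']} -> {item['metric']}")
```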
For example, we recently launched a product and started collecting data on the industry of the companies signing up. The idea was to better understand who was signing up and use that to shape our next set of features and our user acquisition strategy. Some eight weeks and 2,000 signups later, we looked at the analysis and noticed two verticals standing out: education technology and healthcare providers. But here was the catch: had we relied solely on that data, we would have erroneously concluded that our product was being used mainly by education companies and medical providers and that we should double down on those verticals. That was not the case. We had other insights into what was happening in the market in general; the second wave of Covid-19 was plaguing India, and these two verticals were the only ones doing well and the only ones that could still afford to invest in software. So our data was showing which verticals were doing well, not who was really using our product.
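A rough sketch of that kind of segmentation is below. The file name and column names are hypothetical, not our real schema; the point is that raw counts per vertical can reflect market conditions as much as product-market fit.

```python
import pandas as pd

# Hypothetical signup export: one row per signup with a self-reported
# industry vertical (file and column names are illustrative).
signups = pd.read_csv("signups.csv", parse_dates=["signed_up_at"])

# Count signups per vertical over the most recent eight weeks.
cutoff = signups["signed_up_at"].max() - pd.Timedelta(weeks=8)
recent = signups[signups["signed_up_at"] >= cutoff]
by_vertical = recent.groupby("industry_vertical").size().sort_values(ascending=False)
print(by_vertical.head(10))

# Caveat: a skew toward one or two verticals may simply mirror which
# industries are buying software right now, not who the product fits best.
```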
Furthermore, one needs to understand the full context behind the data. For example, we build marketing software that helps people run campaigns on Facebook. Users create templates that are then submitted to Facebook for approval. Our data showed people submitting 30 or 40 templates within the first week, which looked like they simply wanted to create a lot of templates. But when we dug deeper to better understand their behavior, we learned that 90 percent of the templates created within the first week were being rejected. Because of the low acceptance rate, users kept creating template after template. So the real problem was not to optimize how many templates one could create and view on the dashboard, but to help users get through the approval process more easily.
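As a minimal sketch of the follow-up analysis that would surface this, assume a hypothetical export with one row per template submission, the user's signup date, and the approval decision; the file and column names are illustrative.

```python
import pandas as pd

# Hypothetical export: one row per template submission, with the user,
# submission time, the user's signup time, and the approval decision.
templates = pd.read_csv(
    "template_submissions.csv",
    parse_dates=["submitted_at", "user_signed_up_at"],
)

# Keep only submissions made during each user's first week on the product.
first_week = templates[
    templates["submitted_at"]
    <= templates["user_signed_up_at"] + pd.Timedelta(days=7)
]

# Submissions per user: the headline number that looked like high engagement.
per_user = first_week.groupby("user_id").size()
print("Median first-week submissions per user:", per_user.median())

# Rejection rate: the context that changes the interpretation entirely.
rejection_rate = (first_week["status"] == "rejected").mean()
print(f"First-week rejection rate: {rejection_rate:.0%}")
```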
When we did thorough research using various techniques, we learned more about our users' behavior, their motivation, and the outcome of their actions. Had we acted without understanding the entire context, the bare numbers would have led us astray. Instead, we invested in helping users succeed on their first few attempts, and we managed to drastically increase their acceptance rate.
Lessons learned
- Think of data as one of several tools that can help your overall understanding of users. Unlike interviews and other qualitative techniques, data can cover a large group of users while avoiding the biases common to individual respondents. But data becomes a double-edged sword if it is not set up correctly and understood within its own constraints.
- Data is all about reading and interpretation. It has to be coupled with a holistic understanding of the product and its users. Never track only one metric, no matter how important it is; combine it with other metrics that let you understand the problem in its entirety. Too often, people get obsessed with tracking a single data point, which drives them into a ditch and narrows their perspective to whatever they can see from it.
- Let the data speak for itself. Suspend your interpretations and approach the data with a fresh mind. Get a good understanding of the numbers before combining them with other insights and letting them unfold into a story. Too many people let presupposed narratives precede the reading of the data and color the interpretation. That applies particularly to situations where you are measuring something for the first time; your first analysis should be as holistic as possible.