5 Steps to Evaluate and Audit New Data Products
With the recent buzz around AI, and in today's data-rich world, more and more products and features are powered by algorithms and AI. As a product leader, it's important to evaluate these "data products" rigorously to ensure success.
Unfortunately, many product teams fail to properly evaluate data products before and after launch, leading to wasted resources, bad user experiences and, as we've seen recently with the "pizza glue" example, loss of trust.
The primary reason? Lack of a clear evaluation framework throughout the development cycle.
To avoid this pitfall, I've found myself developing and using a simple 5-step approach for auditing data products. Let's dive in:
Step 1: Define the Data Quality Bar
Begin by establishing a user-centric quality bar required to deliver a good experience. Set measurable targets for precision, recall, and diversity, and align your team and leadership on these precise definitions. For example, when filtering error notifications, you'll want to bias toward recall, since your users don't want to miss errors. Or, when we built a "things to do" travel recommendation product, we biased toward high precision in the top 5 results and added diversity across categories like museums and parks: if you show something that isn't touristy, trust is lost, and diverse recommendations ensure there's something appealing for everyone.
Actionable Tip: Before scaling up, run through 50-100 examples with your team using a simple spreadsheet to evaluate data quality and refine your targets. Don’t skip this step!
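If the spreadsheet outgrows manual tallying, a few lines of code can compute the same quality-bar numbers. Here's a minimal sketch, assuming a hypothetical CSV with columns item_id, predicted (0/1), relevant (0/1 human label), and category; adapt the column names to your own labeling sheet.

```python
# Minimal sketch of a quality-bar check over a hand-labeled spreadsheet.
# Column names below are illustrative assumptions, not a standard schema.
import csv

def quality_metrics(path: str) -> dict:
    rows = list(csv.DictReader(open(path, newline="", encoding="utf-8")))

    tp = sum(1 for r in rows if r["predicted"] == "1" and r["relevant"] == "1")
    fp = sum(1 for r in rows if r["predicted"] == "1" and r["relevant"] == "0")
    fn = sum(1 for r in rows if r["predicted"] == "0" and r["relevant"] == "1")

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0

    # Diversity: share of distinct categories among the items actually surfaced.
    shown = [r["category"] for r in rows if r["predicted"] == "1"]
    diversity = len(set(shown)) / len(shown) if shown else 0.0

    return {"precision": precision, "recall": recall, "diversity": diversity}

if __name__ == "__main__":
    print(quality_metrics("labeled_examples.csv"))
```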
Step 2: Estimate Reach Across the User Base
Even the most brilliant data product means little if hardly anyone can use it. Estimate potential reach by identifying the subset of users eligible to interact with your product based on their properties, past interactions, and opt-ins. Targets here are company- and business-specific: at Google I worked on a data product that reached 1% of search queries and was commercially valuable, while at Facebook Video Search, which was much smaller, we didn't build specialized/vertical implementations for anything below 15-20% of search volume.
Actionable Tip: Define a minimum threshold for reach that makes the product viable for your business. At past companies, we targeted data products reaching at least 15-20% of the relevant user base.
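As a rough illustration of how such a reach estimate might be computed, here's a small sketch; the user fields and eligibility rules are made-up assumptions to show the shape of the calculation, not a recommendation.

```python
# Sketch: reach = share of users eligible to see the feature.
# Eligibility criteria below (opt-in, market, recent activity) are illustrative.

def is_eligible(user: dict) -> bool:
    return (
        user.get("opted_in", False)
        and user.get("country") in {"US", "CA", "GB"}   # example market rollout
        and user.get("sessions_last_30d", 0) >= 1       # active recently
    )

def estimate_reach(users: list[dict]) -> float:
    if not users:
        return 0.0
    return sum(is_eligible(u) for u in users) / len(users)

users = [
    {"opted_in": True, "country": "US", "sessions_last_30d": 4},
    {"opted_in": False, "country": "US", "sessions_last_30d": 9},
    {"opted_in": True, "country": "DE", "sessions_last_30d": 2},
]
print(f"Estimated reach: {estimate_reach(users):.0%}")  # compare to your 15-20% bar
```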
Step 3: Measure Engagement and Task Completion
Engagement metrics like clicks are a good start, but dig deeper. Define proxy metrics that represent successful task completion, such as items saved to a wishlist for travel planning, or video watch time.
Actionable Tip: Set clear targets for these proxy metrics before launch to gauge whether users are truly deriving value from your data product.
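One way to operationalize such a proxy is a funnel-style task-completion rate over event logs: of the users who engaged at all, how many reached the action that signals success? The sketch below assumes hypothetical event names ("click" and "save_to_wishlist"); substitute whatever maps to task completion in your product.

```python
# Sketch of a task-completion proxy over raw event logs.
# Event names are hypothetical placeholders.
from collections import defaultdict

def task_completion_rate(events: list[dict],
                         start: str = "click",
                         done: str = "save_to_wishlist") -> float:
    by_user = defaultdict(set)
    for e in events:
        by_user[e["user_id"]].add(e["event"])

    started = [u for u, evs in by_user.items() if start in evs]
    if not started:
        return 0.0
    completed = [u for u in started if done in by_user[u]]
    return len(completed) / len(started)

events = [
    {"user_id": "a", "event": "click"},
    {"user_id": "a", "event": "save_to_wishlist"},
    {"user_id": "b", "event": "click"},
]
print(f"Task completion: {task_completion_rate(events):.0%}")
```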
Step 4: Evaluate Overall Business Incrementality
Ultimately, tie your data product back to core business metrics. A/B tests and holdback analyses are the gold standard for measuring true incrementality, accounting for any cannibalization. If scale makes them impractical, you can fall back on SPC.
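For teams that do run A/B tests, the incrementality read often boils down to comparing conversion rates between control and treatment. Here's an illustrative two-proportion z-test using the normal approximation; the counts are made up, and in practice you'd pull them from your experiment logging.

```python
# Sketch of an A/B incrementality read via a two-proportion z-test.
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation
    return p_b - p_a, z, p_value

# Illustrative numbers: control converts 4.8%, treatment 5.45%.
lift, z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=545, n_b=10_000)
print(f"Absolute lift: {lift:.2%}, z={z:.2f}, p={p:.3f}")
```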
Actionable Tip: Define a minimum threshold for incremental business impact that justifies further investment in the data product.
Step 5: Continuously Monitor and Iterate
Data drift and changing user behavior can degrade a once-successful data product. Continuously monitor performance across the previous steps and be prepared to iterate or sunset the product if it no longer delivers value.
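One common drift check (by no means the only one) is the Population Stability Index, which compares the distribution of a key feature or model score between a baseline window and the current window. A rough sketch, with the conventional 0.2 alert threshold noted as a rule of thumb rather than a universal constant:

```python
# Sketch: Population Stability Index (PSI) between baseline and current data.
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    span = (hi - lo) or 1.0

    def histogram(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / span * bins), 0), bins - 1)
            counts[idx] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    b, c = histogram(baseline), histogram(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

drift = psi(baseline=[0.1, 0.2, 0.3, 0.4, 0.5], current=[0.3, 0.4, 0.5, 0.6, 0.7])
print(f"PSI: {drift:.2f}  (rule of thumb: > 0.2 suggests meaningful drift)")
```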
Following this structured evaluation process helps you concentrate investment in data products that move the needle while limiting spend on those that lack incremental value.
Now it's your turn: What challenges have you faced when evaluating data products? How have you approached addressing them? Share your thoughts and experiences in the comments!