Thinking Critically About Sports Data Measurements
Russell Scibetti
Vice President, Strategy & Business Intelligence at New York Football Giants | SBJ 40 Under 40, Class of 2020
This post was originally published on TheBusinessOfSports.com (July 14, 2016)
Earlier this morning I read a great article from Harvard Business Review titled 4 Steps for Thinking Critically About Data Measurements by Thomas C. Redman, which I’d encourage all of you to read.
After reviewing it, I thought it would be a worthwhile exercise to take the author’s four steps and tie them back to an example from sports business.
1. Clarify what you want to know.
In sports, as in any industry, you need to start with your goals before digging into the data; otherwise you can end up down any number of rabbit holes without finding actionable insights. Within ticket sales, you might want to know something along the lines of: what is our conversion rate? Conceptually, this is simple: divide the number of sales by the total number of prospects you’ve tried to sell to. However, you need to clarify several factors:
- How do we define the number of sales: the # of seats or the # of buyers?
- What types of products count as a successful sale? (e.g., there’s a big difference between a season ticket and a single-game ticket)
- How do we define the audience of people we tried to sell to?
Depending on how you answer these questions, you will end up with very different conversion rates, as the sketch below illustrates.
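To make this concrete, here’s a minimal Python sketch of how the same raw sales data can yield very different conversion rates under different definitions. The records, numbers, and product names are hypothetical, purely for illustration:

```python
# Hypothetical campaign results: three buyers, eight total seats sold
sales = [
    {"buyer_id": 1, "product": "season", "seats": 4},
    {"buyer_id": 2, "product": "single_game", "seats": 2},
    {"buyer_id": 3, "product": "season", "seats": 2},
]
prospects_contacted = 1000  # size of the audience we tried to sell to

# Definition A: conversions = unique buyers of any product
rate_buyers = len({s["buyer_id"] for s in sales}) / prospects_contacted

# Definition B: conversions = total seats sold
rate_seats = sum(s["seats"] for s in sales) / prospects_contacted

# Definition C: conversions = unique buyers, counting only season tickets
season_buyers = {s["buyer_id"] for s in sales if s["product"] == "season"}
rate_season_only = len(season_buyers) / prospects_contacted

print(f"By buyers:   {rate_buyers:.1%}")       # 0.3%
print(f"By seats:    {rate_seats:.1%}")        # 0.8%
print(f"Season only: {rate_season_only:.1%}")  # 0.2%
```

Same data, three defensible "conversion rates." None of them is wrong; they just answer different questions.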
2. Understand how actual measurements line up with what you want to know.
Let’s say that our initial goal was to increase our conversion rate, hence the need to actively measure it. After your analysis, you end up with a conversion rate of 4% over the last two months. Well, what now? Did we do anything over the last four weeks that we thought would help our conversion rate? If so, let’s look at month-over-month. Maybe it was 3.5% two months ago and 4.5% this past month. Have we achieved our goal? Maybe or maybe not – there is cyclicality in ticket sales, so maybe we need to compare the last two months to the same two-month window from last season. The key here is that there’s more to understanding the analysis than simply “running the numbers.”
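As a rough illustration of why the comparison window matters, here’s what those checks might look like in code. The rates and months below are made up:

```python
# Hypothetical monthly conversion rates (fraction of prospects converted)
rates = {
    "2015-05": 0.044, "2015-06": 0.046,  # same two-month window last season
    "2016-05": 0.035, "2016-06": 0.045,  # the two months we just analyzed
}

# Month-over-month: looks like a solid improvement
mom_change = rates["2016-06"] - rates["2016-05"]

# Year-over-year for the same month: essentially flat
yoy_change = rates["2016-06"] - rates["2015-06"]

print(f"Month-over-month: {mom_change:+.1%}")  # +1.0%
print(f"Year-over-year:   {yoy_change:+.1%}")  # -0.1%
```

The month-over-month view says the campaign worked; the year-over-year view says we’re roughly where seasonality alone would put us. You need both to judge the goal.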
3. Account for weaknesses in the measurement process.
In order to have accurate, actionable results, you need accurate, reliable data. As the old saying goes, “Garbage In, Garbage Out.” So we need to take a close look at the entire process, from raw data to final result. We already showed that depending on how we clarify what we want to know in step one, we can end up with different results.
Let’s dig deeper into the second part of our conversion rate formula – the audience we tried to sell to. Maybe we ran a campaign that had 1,000 prospects receiving phone calls and emails. Depending on the level of CRM adoption by your staff, you may have a challenge in truly identifying how many of those people were actually called. Depending on the presence of duplicate records in your system, you may be calling people who are already customers. Depending on the deliverability and capabilities of your email marketing system, you may have challenges identifying how many prospects opened or clicked your email. Any “weaknesses” in this part of the process ultimately impact how you interpret the results.
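As a hypothetical sketch, here’s what auditing that denominator might look like once the campaign data is exported. The field names (`called`, `is_existing_customer`) are assumptions about what a cleaned-up CRM extract could contain, not any particular system’s schema:

```python
# Hypothetical campaign export: who we think we tried to sell to
prospects = [
    {"id": 1, "called": True,  "is_existing_customer": False},
    {"id": 2, "called": True,  "is_existing_customer": True},   # duplicate of a customer record
    {"id": 3, "called": False, "is_existing_customer": False},  # never actually reached
    {"id": 4, "called": True,  "is_existing_customer": False},
]

# Naive denominator: everyone on the campaign list
naive_audience = len(prospects)

# Cleaned denominator: actually contacted, and not already a customer
clean_audience = sum(
    1 for p in prospects if p["called"] and not p["is_existing_customer"]
)

print(naive_audience, clean_audience)  # 4 vs. 2 -- the same sales count
                                       # yields double the conversion rate
```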
4. Subject results to the “smell test.”
This is something I like to talk about when it comes to the “fight” between data-driven and gut-driven decision making. What’s often overlooked is that your gut feel is largely an internalization of past data. There are fewer cases than people think in which the numbers and the gut point in completely opposite directions.
So let’s assume your estimated historical conversion rate, based on staff-reported numbers or “feel,” is around 10%, but your analyst says it’s actually 2%. Or perhaps there’s a league-reported benchmark and your number varies dramatically from it. When this happens, you need to dig deeper, and looking back at steps 1-3 is the right place to start. Maybe in our audience of 1,000 prospects, we’ve closed 20 sales (20/1000 = 2%) but have only called 400 of them so far. That means our conversion rate should really be 5% (20/400).
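For completeness, that adjustment written out as a sketch:

```python
sales_closed = 20
full_prospect_list = 1000
actually_called = 400  # the portion of the list the staff has worked so far

reported_rate = sales_closed / full_prospect_list   # 2% -- fails the smell test
adjusted_rate = sales_closed / actually_called      # 5% -- the fairer read

print(f"Reported: {reported_rate:.0%}, Adjusted: {adjusted_rate:.0%}")
```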
However, you also need to resist the urge to manipulate the analysis to get the desired results. Maybe you’re selling a more expensive product than your peer teams, so it may make sense that your conversion rate is lower than the benchmark. The combination of the data plus a thorough understanding of the situation will help you accurately interpret the results and determine whether there are flaws in your analysis, your “gut feel,” or perhaps both.