Data-driven mismanagement

Much good has been said about data-driven management, and this article does not intend to contradict any of it. Decisions made on a sound basis of facts tend to produce good outcomes. Instead, this article explores another facet: how data is misused in organizations. Data can be used wrongly through genuine mistake, through lack of sophistication, or to further an underlying agenda. Over the years I have seen a few recurring patterns, outlined below.

The first pattern is common and often a mistake without malice. I call it naive generalization: drawing a broad conclusion from too few data points. I have seen it in technical matters, in characterizing a group, and in interpersonal judgment. Say a product has a large enterprise customer base. Very often I have seen feature validation decisions made from a small group of ten or so customers who are relatively similar to one another, which is prone to biased conclusions. It is hard to scale these validations, but the sample should at least include customers from all major representative segments or industries. Another common case arises when working with, say, a newly acquired company. A few quality problems are found in the first integrated release and quickly attributed to a lack of quality culture in the acquired company, when the real cause may be a poor understanding of integration needs rather than a quality issue. In interpersonal judgment, a new colleague may sit quietly through a few meetings before speaking up, and I have seen such colleagues quickly judged as not contributing or not collaborating.
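
To make this concrete, here is a minimal Python sketch with entirely invented numbers: the segments, satisfaction scores, and sample sizes are all hypothetical. It shows how validating with ten similar customers can look very different from a sample stratified across segments.

```python
import random

random.seed(42)

# Hypothetical enterprise customer base: feature satisfaction (0-10) varies
# by segment. All segments, scores, and counts are invented for illustration.
population = (
    [("finance", random.gauss(8.0, 1.0)) for _ in range(400)]
    + [("healthcare", random.gauss(5.0, 1.0)) for _ in range(300)]
    + [("retail", random.gauss(4.5, 1.0)) for _ in range(300)]
)

def mean(xs):
    return sum(xs) / len(xs)

# Naive generalization: ten relatively similar (finance-only) customers.
naive = [score for seg, score in population if seg == "finance"][:10]

# Stratified sample: ten customers from each major segment.
stratified = []
for segment in ("finance", "healthcare", "retail"):
    stratified += [score for seg, score in population if seg == segment][:10]

print(f"naive sample mean:      {mean(naive):.1f}")       # looks like a clear win
print(f"stratified sample mean: {mean(stratified):.1f}")  # much less rosy
print(f"population mean:        {mean([score for _, score in population]):.1f}")
```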

Another technique I have seen used to promote a point of view (or an existing bias) is selective sampling. Depending on the situation, this is unethical at best. In software release management I have seen P3 and P4 bugs excluded so that only P1 and P2 bugs appear in release go/no-go decisions. There may be zero P1s and a few P2s, yet a significant number of P3s and P4s that together cause a very annoying user experience and an unacceptable load on support. In M&A situations I have often seen presentations showing growth for the past several quarters and projecting more, while conveniently omitting earlier dips in revenue that would invite a discussion about what actually happened and whether it could happen again. In user studies or NPS reviews I have seen verbatim detractor comments placed toward the bottom or interspersed among many positive comments. Read carefully, some of those are very material, but the tactic is to hide a few bad samples among good ones.
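
A tiny sketch of the release example, again with invented counts and thresholds: a go/no-go check that only sees P1/P2 bugs happily ships a build that a severity-weighted view would hold back.

```python
# Hypothetical bug counts for a release candidate; all numbers are invented.
bugs = {"P1": 0, "P2": 3, "P3": 45, "P4": 120}

def ship_selective(bugs):
    # Selective sampling: P3/P4 bugs are excluded from the decision entirely.
    return bugs["P1"] == 0 and bugs["P2"] <= 5

def ship_weighted(bugs):
    # Full picture: many "minor" bugs still add up to an annoying experience
    # and an unacceptable support load. Weights and threshold are arbitrary.
    weights = {"P1": 20, "P2": 10, "P3": 2, "P4": 1}
    score = sum(weights[p] * n for p, n in bugs.items())
    return score <= 100

print("selective view says ship:", ship_selective(bugs))  # True
print("weighted view says ship: ", ship_weighted(bugs))   # False (score = 240)
```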

Then there is a phenomenon, especially among managers and executives, that I call data begets more data. Sometimes it reflects the hope that more data will finally prove something. But for sufficiently complex problems, data never proves anything: it is empirical, correlative rather than causative. Sometimes management wants more and more data, endlessly, before making a decision. This culture is often called analysis paralysis; while not great, it is innocent, or at worst risk-averse. The not-so-innocent version is purposely sending a working team to gather more data in order to delay a decision, either to further a political agenda or because the decision goes against the beliefs of an influential person. Rather than saying ‘no’ or stating a position clearly, the influential person simply asks for more and more data in a veiled effort to tire out the working team.

Yet another technique I have seen is obfuscating signal with noise, something of an anti-twin of selective sampling. Here, a conclusion that would be clear from a representative sample is obscured by the deliberate inclusion of other samples in an effort to hide a valid pattern. I have seen this in presentations of performance marketing metrics. Suppose there has been a significant investment in event marketing but the number of leads from those fancy events is low. That lead generation data is then surrounded by lead data from a multitude of other sources, such as websites and outbound calls. If the website leads are of markedly better quality, the overall picture does not look so bad, and the low-quality leads from expensive events are obscured, quietly inflating the company’s cost per lead. Another example is hiding security weaknesses: a few open critical security defects are surrounded by many semi-critical and non-critical defects that have been fixed, diverting the reader’s attention from the few critical ones.
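
The marketing example can be sketched the same way; the channels and conversion numbers below are made up. A blended conversion rate looks respectable, while a per-channel breakdown immediately exposes the expensive, low-quality event leads.

```python
# Invented lead data: raw lead counts and conversions by channel.
leads = {
    "website":        {"count": 5000, "converted": 400},  # 8.0%
    "outbound_calls": {"count": 2000, "converted": 120},  # 6.0%
    "events":         {"count": 1000, "converted": 10},   # 1.0%, and expensive
}

total = sum(ch["count"] for ch in leads.values())
won = sum(ch["converted"] for ch in leads.values())

# The blended number buries the weak channel among the strong ones.
print(f"blended conversion: {won / total:.1%}")  # 6.6%

# Disaggregating by channel restores the hidden signal.
for channel, ch in leads.items():
    print(f"{channel:>14}: {ch['converted'] / ch['count']:.1%}")
```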

Decorative use of data is also commonplace in organizations. It often happens in big operations reviews and similar meetings in front of wide audiences, to “look good”. Many positive data points are shared that may not correlate with the business outcomes that really matter; these are often called vanity metrics. For example, a marketing organization may share data about very successful lead generation in the same meeting where sales is lagging behind. The discussion circles on how many leads are being generated rather than how the whole pipeline converts to bookings, and it draws attention away from lead quality, lead nurture, and lead conversion. On the more technical side, I have seen engineering managers proudly share a high number of daily active users as a sign of “success” while end-user NPS is falling at the same time. Net-net, a large number of users are logging in, but they are not leaving satisfied. If that continues, daily active users will eventually drop and the product will suffer. The vanity metrics in both cases show some form of “goodness” while taking attention away from underlying challenges.
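
As a final sketch, here is the vanity-metric case with invented quarterly figures: pairing the headline metric (daily active users) with a counter-metric (NPS) is a simple guard against decorative reporting.

```python
# Invented quarterly trend: DAU climbing while NPS quietly collapses.
quarters = ["Q1", "Q2", "Q3", "Q4"]
dau = [10_000, 12_000, 14_500, 17_000]  # the vanity metric, up and to the right
nps = [42, 35, 27, 18]                  # the counter-metric telling the real story

for q, d, n in zip(quarters, dau, nps):
    warning = "  <-- growth masking dissatisfaction" if n < 30 else ""
    print(f"{q}: DAU={d:>6,}  NPS={n:>3}{warning}")
```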

I am going to conclude by saying something obvious: data and intuition are not the same; they are complementary. Many good things have happened without a basis in a corpus of data. The iPhone was not born out of a user study, the Tesla was not born out of consumer research, and the theory of relativity was not born out of a large language model. In those cases something not yet well understood, called intuition, gave rise to successful things. Intuitions can also be very wrong, except that fancied stories are not written about those. Data is a tool to validate or invalidate intuition. Sometimes data is indeed the only tool we have for empirical insight, as in the whole field of clinical trials and drug manufacturing. Insight from a preponderance of data is also invaluable when we do not analytically understand something, which is the basis of the empirical sciences. In those sciences, data is treated with the utmost care and respect: it is dissected, cleaned, sampled, re-sampled, correlated, and statistically validated before a correlative conclusion is drawn, a conclusion on which lives may depend. In the world of corporate decision making, data is often used wrongly, as in the examples above, and that is what we need to watch out for.


If you enjoyed reading this feel free to simply repost it in your feed. If you have colleagues who you think will enjoy regularly reading the newsletter, encourage them to connect with Apratim Purakayastha and subscribe to this newsletter.

Mohit Agrahari

Assistant Manager | Technology Consulting | Helping companies to achieve their technical roadmap | Digital Transformation Services

1y

Very Interesting Article Apratim Purakayastha

Really liked this post, and could totally relate to it!

Very interesting and insightful Apratim Purakayastha. I particularly like "Decorative Use of Data". I can recall a few occasions where I've looked a bit side-eyed at data that was intended to prove a point that it really didn't prove. Often the data came dressed in marketing collateral that was widely distributed because it was available. Of course, this never happened at SKIL. Love your insights!

Urvashi Parashar

Director of Analysis & Chief Economist, Department for Culture, Media and Sport

1y

Really insightful - all of these points are valid in my world too - evidence-based policy making, which can sometimes become policy-based evidence making. The important thing I feel is to be clear what the purpose is - often the role of data is to help us learn and inform. You don’t know what you don’t know, but you can also not know it all!

Bharath Sundararaman

Vice President & General Manager, Supply Network Orchestration

1y

Great article. I've seen a healthy dose of naive generalization and decorative use / vanity metrics. I think vanity metrics also come from a place of each function speaking to the input that they control (e.g. the input that marketing can control is high quality leads) as opposed to looking at the end result. One way I'm trying to combat it in my day job is by taking a "pod" vs "pillar" approach. Happy to tell you more offline if you're curious ;)
