Measuring Anything
image credit: @sxoxm on Unsplash (https://unsplash.com/photos/A-btl_OPYWA)

Nasreddin had lost his ring. His wife found him looking for it in the yard and exclaimed, “You lost your ring in the living room! Why are you looking for it out here?”

Nasreddin stroked his beard and said: “The room is too dark. I came out to the yard because there is much more light out here.”


Ever been asked to measure something that defies easy quantification? The bottom-line impact of development time, say, or the effect a new vacation policy will have on employee retention?

Ever go back to looking for the ring in the yard?

Compared with the precise calculations and binary logic we’re used to in software, the fuzziness of the world outside can feel like a very, very dark room. Measures that are qualitative or less-than-totally-accurate are easily disregarded – if they’re collected at all. But add in Peter Drucker’s aphorism that “what gets measured gets managed,” and a problem emerges. We tend to reward, and hence do, things that are easy to measure.

In the yard.

At the same time, interesting insights are piling up in the corners of the dark room. Paradoxically, the complex, ill-defined problems that have the most to teach us are also the ones we’re most averse to studying.

We need two things to start digging into them confidently:

  1. a definition of “measurement” that allows for uncertainty
  2. confidence in estimating, collecting, and applying such measurements

Let’s take a look.

What is measurement?

Measurement is an observation that reduces uncertainty.

This definition of measurement, offered by the management consultant Douglas Hubbard in his book “How to Measure Anything”, absolves the sin of imprecision and lets uncertainty in. Not “eliminates.” Reduces.

It’s an important distinction. Assuming that software should return at least 10x on the time invested in writing it, a measurement within an order of magnitude of the actual value is enough to make a go/no-go decision. Even with 10% certainty and huge error bars on either side, we can still make plans around the low forecast, the high forecast, and a best-guess down the middle. Ten percent is infinitely more than zero.
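
As a sketch of that arithmetic (a minimal Python example; the 10x threshold comes from the assumption above, and every other number is invented):

```python
# Go/no-go sketch: an order-of-magnitude estimate is enough to act on.
# All inputs are illustrative assumptions, not measured values.

cost = 4          # estimated cost to build, in developer-months
best_guess = 120  # best-guess value delivered, in developer-months
oom = 10          # error bars: one order of magnitude on either side

low, high = best_guess / oom, best_guess * oom

# The rule assumed above: software should return at least 10x its cost.
threshold = 10 * cost

for label, forecast in [("low", low), ("best guess", best_guess), ("high", high)]:
    verdict = "go" if forecast >= threshold else "no-go"
    print(f"{label:>10}: {forecast:8.1f} developer-months of value -> {verdict}")
```

Even when the low forecast says “no-go” and the high forecast says “go,” the spread itself tells us where more measurement would pay off.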

And we may not even need to measure to get there.

Estimating complex quantities

The physicist Enrico Fermi was known for asking his students to solve what appeared to be impossibly hard problems. A classic example of a Fermi Problem is the question,

How many piano tuners are there in the city of Chicago?

The goal of this question was not a precise answer, but to give students practice working towards estimates that land within the realm of possibility. An estimate accurate to within an order of magnitude (OOM, unfortunately) of its measured value can still inform better decisions about system performance, project timelines, or cost, and many estimates will be closer than that.
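
As an illustration, here is the classic decomposition in Python. Every input is an assumed round number; the output is only expected to land within an order of magnitude of reality:

```python
# Fermi estimate: piano tuners in Chicago.
# Every input below is an assumed round number.

population = 3_000_000            # people in Chicago (assumed)
people_per_household = 2          # (assumed)
pianos_per_household = 1 / 20     # fraction of households with a piano (assumed)
tunings_per_piano_per_year = 1    # (assumed)
tunings_per_tuner_per_year = 4 * 5 * 50  # 4/day, 5 days/week, 50 weeks/year

pianos = population / people_per_household * pianos_per_household
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year

print(f"~{tuners:.0f} piano tuners")  # ~75: the right order of magnitude
```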

To increase confidence, it can be helpful to produce multiple estimates starting from different base assumptions. I once oversaw a complex enterprise implementation that the project team anticipated completing within four developer-months, based on the team’s previous experience and the relative (dollar-value) size of the client. A second estimate based on the complexity of projects within the implementation plan suggested a number closer to 60 developer-months – and gave us the warning we needed to adjust staffing and successfully deliver on time.
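
A sketch of that cross-check, using the numbers from the anecdote (the 3x divergence threshold is an arbitrary, illustrative choice):

```python
# Triangulation: two estimates of the same quantity from independent
# starting points. Numbers mirror the anecdote above; the 3x threshold
# for "divergence" is an illustrative choice, not a rule.

by_analogy = 4         # developer-months, from similar past projects
by_decomposition = 60  # developer-months, from summing the implementation plan

ratio = max(by_analogy, by_decomposition) / min(by_analogy, by_decomposition)

if ratio > 3:
    print(f"estimates diverge by {ratio:.0f}x -- revisit assumptions before committing")
else:
    print("estimates roughly agree; proceed with the midpoint")
```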

Getting in the habit of estimation also provides a falsifiable hypothesis for a measurement to test. Strong alignment between the expected and observed values provides at least some basis for trusting both, while wild deviations suggest a need for more measurement.

Measuring from fuzzy sources

Our aversion towards qualitative, subjective, or imprecise measurements can also discourage us from using the sorts of instruments that produce them. For instance:

  • ad-hoc employee surveys
  • hallway usability tests
  • interview Q&A
  • working group notes

These data sources all share a certain “fuzziness” in that they’re drawn from subjective experience. Encoded in natural language, their data lack the inherent quantitative authority of a latency measure or the DORA metrics, say; still, they contain valuable signals on the state of the team, product, and business as a whole.

Don’t discard them out of hand.

When we do employ fuzzy sources, however, we tend to go out of our way to return them to the quantitative realm. Statistical analyses discard portions of the input in return for a veneer of quantitative legitimacy; after all, numbers are easier to compare and visualize than the range of human language.
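
A small illustration of the loss (the survey scores are invented): two sets of 1-5 responses share a mean, but tell very different stories.

```python
from statistics import mean, stdev

# Two invented sets of 1-5 survey responses with identical means.
lukewarm = [3, 3, 3, 3, 3, 3]   # everyone mildly satisfied
polarized = [1, 1, 1, 5, 5, 5]  # the team split into camps

for label, scores in [("lukewarm", lukewarm), ("polarized", polarized)]:
    print(f"{label:>10}: mean={mean(scores):.1f}, stdev={stdev(scores):.1f}")

# Both means are 3.0. The average alone hides the split -- and no summary
# statistic preserves the free-text comments that would explain it.
```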

If projecting qualitative data onto a number line inspires more confidence in the insights embedded within them, so be it. But fuzzy sources carry information beyond the numbers themselves – and deserve less confidence than our trust in numbers would have us believe.

Applying uncertain measurements

The world has a funny way of defying assertions about it. To take a familiar example, the Pythagorean ideal of a 3-4-5 triangle fails if drawn with a pencil more than 0mm thick – reality is somewhat more complicated than a high school geometry exam.

How, then, can we trust any measurement at all?

Embedded in the definition of measurement (an observation that reduces uncertainty) is a clue. Even measurements obtained from automated systems are burdened by clock-synchronization issues and the inevitable risk of failure. They may be more certain than the data obtained from a Developer Experience Survey, but never to the point of absolutes.

However a measurement was obtained, it’s safe to assume some degree of imprecision built in. Acknowledging this reality, digging into methods and assumptions, and assigning appropriate error bars are all important parts of applying a measurement safely.
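
A minimal sketch of that habit, assuming a handful of noisy latency samples (the values are invented): report the measurement as an interval, not a bare point.

```python
from statistics import mean, stdev

# Invented latency samples (ms) from repeated runs of the same request.
samples = [102, 98, 110, 95, 240, 101, 99, 105]

m, s, n = mean(samples), stdev(samples), len(samples)
sem = s / n ** 0.5  # standard error of the mean

# +/- 2 standard errors is a rough ~95% interval, assuming well-behaved noise.
print(f"latency: {m:.0f} ms +/- {2 * sem:.0f} ms (n={n})")
```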

Do that, though, and whether we’re measuring in the yard or the room, an “uncertain” measurement becomes exactly what it is: a measurement.

“No measurement survives contact with reality,” to adapt a line from the Prussian field marshal Helmuth von Moltke (the Elder). Pretending otherwise, or shying away from a (doomed) measurement of any sufficiently complex quantity, does a disservice to whatever decision the measurement was meant to inform.

Uncertainty? That’s a given. But we can still reduce it.



"Measuring anything " originally published on rjzaworski.com
