If you're not measuring, you're not agile.

Agility is in vogue. Your company is probably talking about it too. Everyone from your boss to your CEO says it: we need to be more agile. Less waste. Respond quickly to new requirements. Fail fast, learn fast.

This is commonly called Fake Agile.

Basically, it's keeping a traditional waterfall (or simply chaotic) way of working and painting a fresh layer of agile buzzwords over it: buying JIRA licenses for everyone and scheduling some unproductive meetings to justify the money paid to consultants. If you work in this industry, you've probably seen it more than once. However, there is another kind of Fake Agile that is harder to detect because, to some extent, it does follow agile principles. I'm referring to Build-Only Agile.

Build Agile, and then what?

Some teams are very good at the build phase. They work efficiently, with very little waste, and deliver solid products. But what comes next? How does the team know its effort was successful?

A solid, well-designed, effective, bug-free product is only half the battle. The other half is making sure the team actually solves the customer's problems. And this is where the measurement step comes in. Think of a product you built in the past or one you're working on right now.

Ask yourself the following questions:

  • Which of the dozens of features of your product do your customers use the most?
  • Which ones don't interest them at all?
  • Were the assumptions you made during the build phase correct?
  • Which backlog feature offers the most value?
  • Does the customer use the product?

If you can't answer these questions with certainty, then you need to measure. If you can't measure the effects of your work, you can't gain any insight from it, which means you can't iterate on your product. Without iteration, YOU ARE NOT AGILE.

This isn't optional, and it isn't something you can put off for when you have time. Empiricism is the only thing you can trust. The rest is guesswork and assumptions, which can be very dangerous for your end product.

Alright then, measure - how?

1. Qualitative

The easiest way to verify the results of your work is… to ask your customers and your PO.

I didn't say it was rocket science, folks.

All kidding aside, getting customer feedback is essential. You can ask whether your work made the product faster or better, whether their user experience became simpler, whether their specific needs were met, and so on. It will help you understand how your product makes them feel.

However, there is a dilemma: customers are notoriously bad at identifying what they want. They may say they want or even need specific features that, once built, turn out to be of little or no value. That's why you have to combine qualitative feedback with quantitative measures.

2. Quantitative

Quantitative metrics should be the backbone of your measurement. They will help you identify (without guesswork) what your customers actually do with the product.

For example, some of these metrics could be:

  • Sign-up rate (how many customers, when introduced to our product, decide to use it?)
  • Retention rate (how many customers, after signing up, come back to use the product?)
  • Conversion rate (how many customers actually pay for our product?)
  • Usage (how many customers actually use feature X of our product?)
  • Behavior (after doing A, how many customers do B?)
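
To make these concrete, here's a minimal sketch of how a few of them could be computed from a raw in-app event log. Everything here is an assumption for illustration: the event names ("visited", "signed_up", "paid", "used_feature_x"), the data layout and the rate helper are made up, not the API of any particular analytics tool.

```python
from collections import defaultdict
from datetime import date

# One row per product event: (customer_id, event_name, day). In practice this
# would come from your telemetry pipeline; the sample data here is invented.
events = [
    ("c1", "visited",        date(2023, 3, 1)),
    ("c1", "signed_up",      date(2023, 3, 1)),
    ("c1", "used_feature_x", date(2023, 3, 8)),
    ("c1", "paid",           date(2023, 3, 9)),
    ("c2", "visited",        date(2023, 3, 2)),
    ("c2", "signed_up",      date(2023, 3, 2)),
    ("c3", "visited",        date(2023, 3, 3)),
]

who_did = defaultdict(set)   # event name -> customers who did it at least once
signup_day = {}              # customer -> day they signed up
for customer, name, day in events:
    who_did[name].add(customer)
    if name == "signed_up":
        signup_day[customer] = day

# Retention: signed-up customers who came back on a later day.
returned = {c for c, _name, day in events if c in signup_day and day > signup_day[c]}

def rate(part, whole):
    """Share of `whole` that also appears in `part`, shown as 'n/d (p%)'."""
    n, d = len(part & whole), len(whole)
    return f"{n}/{d} ({n / d:.0%})" if d else "n/a"

visitors, registered = who_did["visited"], who_did["signed_up"]
print("Sign-up rate:   ", rate(registered, visitors))                   # decide to use it
print("Retention rate: ", rate(returned, registered))                   # come back later
print("Conversion rate:", rate(who_did["paid"], registered))            # actually pay
print("Usage of X:     ", rate(who_did["used_feature_x"], registered))  # touch feature X
```

The behavior metric (after doing A, how many customers do B?) follows the same pattern: filter the event log for customers who did A, then check how many of them later did B.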

A tale of two pipes

For example, let's say our product is a 3D CAD application for designing piping.

Your product has a feature (let's call it feature A) that lets you manually route a pipe point by point, then extract the drawings and isometrics needed to buy the material and finally build the line.

Your development team wants to implement feature B, which does the same thing, but automatically. Given a start point and an end point, the product will automatically route a path for the pipe and extract the initial drawings.

The development team assumes feature B is a brilliant idea, because hey, automatic is better than manual, right? It should save the customer time, reduce errors and improve the user experience. They even asked a few customers what they thought and received enthusiastic feedback about this fantastic feature. So the team implements the feature, ships it, opens a bottle of champagne and toasts to a job well done.

But their assumption was wrong. Here's a real case: most engineering CAD software (Everything3D, SmartPlant, CADWorx, etc.) has this feature, and most designers don't use it. Why?

  • Auto-routing has problems determining system constraints such as traffic/maintenance zones or other design elements (structures, electrical, etc.).
  • When multiple components are possible for the same element, the user has to decide which one to use.
  • It is nearly impossible for auto-routing algorithms to correctly guess the allowable support points for the piping, which also affects routing.
  • Designers are wary of automatic algorithms and prefer to be in control of routing.

If you implement usage metrics and gather qualitative feedback, you'll find that both point to the same conclusion:

  • Quantitative: if you implement metrics that track which routing method was used, you will find that most pipes were routed manually rather than automatically (see the instrumentation sketch after this list).
  • Qualitative: If you ask designers, most of them will say they use manual routing.
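
The quantitative side of this is cheap to build. Below is a minimal sketch of how the product could record which routing method was used; the track_route_created hook, the event payload and the sample calls are all hypothetical, just to show the shape of the instrumentation.

```python
import json
from collections import Counter
from datetime import datetime, timezone

EVENT_LOG = []  # stand-in for a real telemetry pipeline (file, queue, analytics service...)

def track_route_created(user_id: str, method: str) -> None:
    """Record one 'route_created' event; method is 'manual' (feature A) or 'auto' (feature B)."""
    EVENT_LOG.append(json.dumps({
        "event": "route_created",
        "user": user_id,
        "method": method,
        "ts": datetime.now(timezone.utc).isoformat(),
    }))

# Hypothetical usage: the product calls the tracker wherever a route is completed.
track_route_created("designer_01", "manual")
track_route_created("designer_01", "manual")
track_route_created("designer_02", "auto")
track_route_created("designer_03", "manual")

# Aggregation: which routing method do designers actually use?
counts = Counter(json.loads(e)["method"] for e in EVENT_LOG)
total = sum(counts.values())
for method, n in counts.most_common():
    print(f"{method:>6}: {n}/{total} routes ({n / total:.0%})")
```

One event per completed route is enough to answer the question; anything fancier (per-user breakdowns, time spent routing) can be layered on later, once the basic signal exists.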

Here you can see the trap mentioned above: if you rely only on qualitative methods, your results may be biased because you only get information from some of your users. You might decide to interview a designer and be lucky enough to find the world's biggest fan of auto-routing. But that feedback isn't backed up by our quantitative metrics, so you have to keep digging to get the full picture.

And our development team? Well, they invested a lot of time in developing a very complex and time-consuming feature, only to find that their users don't care. That's a bucket of cold water, no doubt. But now, they know; and for the next iteration of the product, they will try to improve the features that their customers actually use often, rather than just guessing.

Got the data? Put it into practice.

Once you understand your product, your customers and their behavior, the path will present itself. It will be clear where the product is not adding value, and you will be able to come up with ideas to change that situation. This is the learning phase of the cycle.

Sometimes that means abandoning features or even entire products. While this no doubt hurts, if it's based on solid research and metrics, it will be the truth, and you'll be glad to hear it sooner rather than later.

The likely outcome is that the product will evolve into something much better, and the team will thrive. Remember these words:

If you're not measuring, you're not agile.
