Estimating software tasks
Daniel Jones
CTO doing a barn conversion after a successful IT startup exit, open to fractional roles
I think that developers should[1] estimate the size of tasks that they're due to work on. I do not believe that developers should, in the first instance, estimate how long something will take.
By "size" I mean very specifically the product of the novelty and complexity of the task. The unit that size is given in is another, less important topic: t-shirt sizes, points... They're all fine, as long as they're not excessively granular, and as long as they're not so available as to short-circuit the conversation about complexity.
Complexity implies a degree of unpredictability - complex things are generally hard to predict. This is intrinsic uncertainty, fundamental to the task: no matter who performs it, that unpredictability will still exist.
Novelty implies unpredictability in the form of known unknowns and unknown unknowns. This is epistemic uncertainty that depends on the person performing the task. It could be that we do a thing for the first time and everything works immediately[2], or it could be that there are all sorts of rakes in the lawn, ready for us to step on.
Neither high complexity nor high novelty necessitates that things will take longer - only that there's a higher chance that they will.
Complexity is determined by the following factors:
Number of types: what are we changing? Just some Golang code? Do we also need to change the Terraform, some YAML, and a database schema? Do we need to go and talk to marketing?
Variation within types: are all the Golang files the same, or were they written in different styles by different people? Are all the marketing folks of the same opinion?
Connectivity: how many connections are there between things? Is the Golang code only used by one client? Or is this a microservice used by hundreds of others?
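If you wanted to make these three factors concrete (you don't need to), a toy sketch might look like the following. The struct, field names, and simple additive scoring are arbitrary illustrative choices of mine, not a prescribed method:

```go
package main

import "fmt"

// Complexity is a toy model of the three factors above. The
// 1-5 ratings and additive scoring are illustrative assumptions.
type Complexity struct {
	Types        int // how many kinds of thing change: code, Terraform, YAML, schema, people
	Variation    int // rough 1-5 rating of how much those things differ from one another
	Connectivity int // rough 1-5 rating of how connected the changed things are
}

// Score combines the factors into a single number for discussion.
func (c Complexity) Score() int {
	return c.Types + c.Variation + c.Connectivity
}

func main() {
	// A change touching Go code, Terraform, and a database schema,
	// written in varied styles, used by many other services.
	c := Complexity{Types: 3, Variation: 4, Connectivity: 5}
	fmt.Println(c.Score()) // prints 12
}
```

The point isn't the arithmetic - it's that each field forces the question the team should be discussing.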
We can classify novelty quite simply: how familiar are the people doing the work with this kind of task?
One could totally make a mathematical model out of this, but I don't think it's at all necessary. Look at a story as a group, agree on what it means, then discuss a rough first idea of how you'll go about achieving it. Think about the things involved in the change, their variability, and their connectivity; if you can't answer those questions, there's a higher risk. The more of each, the higher the estimate. Have a chat about whether anyone's done something similar before - the less familiar and the more novel the work, the higher the estimate. Then translate that into your estimation scale of choice.
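To illustrate the earlier definition of size as the product of complexity and novelty, here's a toy translation into t-shirt sizes. The thresholds and the 1-5 rating scales are arbitrary illustrative assumptions - any coarse bucketing would serve:

```go
package main

import "fmt"

// TShirtSize converts the product of rough complexity and novelty
// ratings (each 1-5) into a t-shirt size. The bucket boundaries
// are illustrative, not a recommendation.
func TShirtSize(complexity, novelty int) string {
	size := complexity * novelty
	switch {
	case size <= 4:
		return "S"
	case size <= 9:
		return "M"
	case size <= 16:
		return "L"
	default:
		return "XL"
	}
}

func main() {
	fmt.Println(TShirtSize(2, 2)) // prints S
	fmt.Println(TShirtSize(3, 3)) // prints M
	fmt.Println(TShirtSize(5, 4)) // prints XL
}
```

Deliberately coarse buckets like these avoid the excessive granularity that short-circuits the conversation.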
I'll leave explaining why time-based estimates are a bad idea for another post. Estimating based on complexity and novelty is more honest and communicates more of the reality of a situation. By considering the complexity and novelty of a task, teams are encouraged to have meaningful conversations that facilitate knowledge-sharing.
Again, that's another post, but I believe the act of software development is complexity refinement, akin to the refining of metal ores. Teams transmute requirements into irreducible computational steps, then translate those into a context (the existing code base, the deployment target, the social environment, et al.). It's taking complexity, refining it down to irreducible computation (minimal complexity), and then embarking on the practical endeavour of making all that a reality.
Development teams are complexity refiners. It makes sense to think of their throughput in terms of the amount of complexity that they can refine in a given period.
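Under that framing, a toy measure of throughput is simply the total estimated size refined per period. Again, this is purely illustrative - the numbers are made up:

```go
package main

import "fmt"

// Throughput sums the sizes (complexity x novelty) of the stories
// a team completed in one period. The values are illustrative.
func Throughput(completedSizes []int) int {
	total := 0
	for _, s := range completedSizes {
		total += s
	}
	return total
}

func main() {
	sprint := []int{3, 5, 8, 2} // sizes of stories finished this period
	fmt.Println(Throughput(sprint)) // prints 18
}
```

Tracking this over several periods gives a trend to discuss, without ever pretending the unit is hours.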
Am I wrong? Have I missed something? I'd appreciate considered feedback and critique based on first principles.
[1] Estimates should be given in any context where understanding the throughput of the team is important. It's quite common for organisations not to have fully embraced just-in-time delivery, and so to want to coordinate different activities - to do this, you need to know when things will be done. One might also want to understand the throughput of the team to facilitate discussions about the delivery system. Is too much work being given to this team? Is the output of business value slowing, and if so, why? Do we need to make the case for extra team members?
[2] This happened to me precisely twice in 30 years of programming, and it was deeply emotionally unsettling on both occasions.