Estimation, what's the Point?
Being asked to provide an estimate has to be one of the most thankless tasks a delivery team faces. You know you're going to get it wrong, secretly the person asking the question knows you're going to get it wrong, and you know it's going to come back to bite you further down the line.
This post is aimed at the people asking for estimates, to help them understand the inherent inaccuracy of an estimate, the risk it carries when budgeting a project, and a better way of using estimates when initiating a project.
As is so often the case, there are a couple of Dilbert strips that perfectly sum up this situation. You can read them at:
Basically there are a number of fundamental issues when asking for an estimate:
- The person asking for the estimate doesn't fully understand what is required.
- The person providing the estimate doesn't fully understand what is required.
- The person providing the estimate assumes that the person asking for the estimate does fully understand what is required.
- The person asking for the estimate assumes a level of accuracy from the person providing the estimate which cannot be achieved.
Then there are the unknowns to consider:
- If it's a new technology then there is a learning curve to be followed, so the person providing the estimate is not fully informed.
- If it's enhancements to an existing system then there will be hidden complexities in the legacy code base which will only become visible when work commences. By then of course, the estimates have been baked into a business plan and are considered to be inviolable.
- The mythical man-month comes into play. You have to assume how many hours a day each team member will actually work so you can convert estimates of work into a viable timeline, taking into account variables such as:
- Are the team dedicated to the project full time?
- How much of their time is taken up by administrative/training activities?
- Do they have to also support a production system? How much support does that require?
- How much time will be taken reworking functionality because the requirements were misunderstood?
- How much time will the team take as leave or sickness?
- How much time do you need to allow for coffee drinking and water fountain conversations?
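To make the effect of those variables concrete, here is a minimal sketch of converting an effort estimate into a calendar timeline. Every figure in it (dedication, overheads, rework, leave) is an invented assumption for illustration, not a measured value:

```python
# Sketch: converting estimated effort into a calendar timeline.
# All percentages below are illustrative assumptions, not measured values.

IDEAL_HOURS_PER_DAY = 8.0

def effective_hours_per_day(
    dedication=0.8,       # fraction of time dedicated to the project
    admin_overhead=0.1,   # administration and training
    support_overhead=0.1, # supporting a production system
    rework_factor=0.15,   # rework due to misunderstood requirements
):
    """Estimate how many of the 8 'ideal' hours actually move the project forward."""
    available = IDEAL_HOURS_PER_DAY * dedication
    available *= (1 - admin_overhead - support_overhead)
    available *= (1 - rework_factor)
    return available

def calendar_days(effort_hours, team_size, leave_factor=0.9):
    """Convert estimated effort into elapsed working days for a whole team."""
    daily_capacity = effective_hours_per_day() * team_size * leave_factor
    return effort_hours / daily_capacity

# 400 hours of estimated work for a team of 4:
print(round(calendar_days(400, 4), 1))  # → 25.5
```

Even with generous assumptions, "400 hours" is nothing like "400 / 8 = 50 person-days", which is exactly the conversion the person asking for the estimate tends to make.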
Of course it's possible to address these by looking at historic estimates vs. actuals, then converting the inaccuracy into a contingency factor that you can apply to your future estimates. This of course assumes that you have accurate, meaningful metrics from historic projects (which is rarely the case) and that you are comparing like with like; it would be meaningless to use the metrics gathered for a .NET development and then apply them to a J2EE one.
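The contingency calculation above can be sketched in a few lines. The historic figures here are invented for illustration; real metrics are rarely this tidy, which is the point being made:

```python
# Sketch: deriving a contingency factor from historic estimates vs. actuals.
# The project figures are invented for illustration.

historic = [
    # (estimated_days, actual_days) for comparable past projects
    (30, 42),
    (60, 75),
    (45, 68),
]

# Average overrun ratio across comparable projects
contingency = sum(actual / estimate for estimate, actual in historic) / len(historic)

def adjusted_estimate(raw_estimate_days):
    """Apply the historic overrun ratio to a new raw estimate."""
    return raw_estimate_days * contingency

print(round(contingency, 2))            # → 1.39
print(round(adjusted_estimate(50), 1))  # → 69.4
```

The calculation is trivial; the hard part is that the inputs must come from genuinely comparable projects, which is the assumption that rarely holds.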
The next issue becomes one of preconceptions, where the person asking for the estimate has a preconceived notion of what the estimate should be. This preconceived value is often already baked into a business case, and so what then happens is that the team has to revisit their estimates to make them fit. This usually means you can say goodbye to any contingency that was built in. At this point the person asking for the estimate is pleased, because they have heard what they wanted to hear, but the team starts off disillusioned because they haven't been listened to: the first footsteps of a death march.
Delivery then commences. At some point (ideally early on, but generally at the last minute) it becomes clear to everyone that the original estimate was too low and that the project is going to overrun. The ability of the team to provide accurate estimates is then called into question, the team becomes more disillusioned and delivery slows down further. The project then fails to deliver in line with the original estimate, the team are blamed for the inaccuracy of that estimate, and the project is considered, if not a complete failure, then at the very least challenged.
At this point you have disappointed stakeholders who have no trust in the delivery team, a delivery team who have lost hope because they aren't listened to and a project manager who is either leaving or being marched out of the door.
Clearly this isn't a satisfactory situation for anyone concerned, but the whole process gets repeated with the same results next time round. It's a problem that is endemic across the software industry and always ends with a blame game, rather than being recognised as an inherent problem with how work is commissioned, one that demands a radically different approach.
Any organisation commissioning development work needs indicative costings to build into business cases, to formulate budgets and to enable teams to be built. The problem is that these costings are treated as absolutes when the science behind their derivation is very weak.
Estimates become more meaningful, and I would argue more useful, when they are used, combined with accurate metrics, to determine the rate at which new functionality can be delivered. To do this we need to move away from absolute figures of time and cost, and think instead about relative measures of complexity. This is where concepts such as story pointing and planning poker come into their own. I don't want to go into the details of these here, as there are plenty of resources on the web which describe them; I'm more interested in how these metrics are used from an estimation/budgeting perspective.
Delivery Velocity
The principal outcome of story pointing is that the team agrees the number of points for each piece of functionality based on a multi-disciplinary review of its complexity. After a number of delivery sprints it then becomes clear how many points a stable team can deliver, on average, in a single sprint. The key point here is that the team remains stable: if team members change, or any stop being 100% dedicated to the delivery, then the average number of points delivered per sprint will undergo a step change. This number of points is known as the velocity and is the measure of the rate at which the team can deliver work. It's also important to note that velocity is team specific. You can't compare two teams' velocities; I have seen attempts to do this and it is meaningless, since the way each team agrees the points for complexity will differ. The chart below illustrates the trend line over a period of time, with differing numbers of points being claimed each sprint.
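As a minimal sketch of the measurement itself: velocity is just an average of points completed per sprint, usually over a recent window since claimed points vary sprint to sprint. The sprint history below is invented for illustration:

```python
# Sketch: measuring a team's velocity from completed sprints.
# Only meaningful while the team remains stable; a change in team
# membership or dedication causes a step change in these numbers.
# The sprint history is invented for illustration.

points_per_sprint = [18, 23, 15, 21, 24, 19, 22]

def velocity(history, window=3):
    """Average points delivered over the most recent `window` sprints."""
    recent = history[-window:]
    return sum(recent) / len(recent)

print(round(velocity(points_per_sprint), 1))
```

The choice of window is a judgement call: a longer window smooths noise but reacts slowly to genuine step changes in the team.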
Velocity Based Forecasting
For project managers this then enables the application of Velocity Based Forecasting. Given a proven velocity, and a requirements backlog which has been consistently estimated in terms of relative complexity, it is possible to forecast forward. This can be seen in the trend line graph below. It shows a target sprint for the Minimum Viable Product (MVP) to be delivered, based on a target velocity being achieved, together with variants where the team is measured to be running over or under that velocity. This gives early warning if the team's relative complexities were out, and an opportunity to change team size/skill sets, scope or process in order to bring the delivery date back on track, or to manage and reset expectations. In traditional estimation and delivery models this only becomes clear at a very late stage, when it is typically too late to do anything about it. In this model the stakeholders are buying not a final product but a delivery team for the duration of that budget, and the team will focus solely on delivering what is important.
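The forecast itself is simple arithmetic: remaining backlog points divided by velocity, rounded up to whole sprints, computed for the target velocity and for over- and under-performance variants. The backlog size and velocities below are illustrative assumptions:

```python
# Sketch: velocity-based forecasting of the sprint in which the MVP lands.
# Backlog size and velocities are illustrative assumptions.
import math

def sprints_to_deliver(backlog_points, velocity):
    """Whole sprints needed to burn down the remaining backlog."""
    return math.ceil(backlog_points / velocity)

mvp_backlog = 200  # points remaining to the Minimum Viable Product

# Target velocity plus over/under variants, giving early warning
# if the delivery date is drifting.
for label, v in [("under", 16), ("target", 20), ("over", 24)]:
    print(f"{label:>6} ({v} pts/sprint): MVP in sprint "
          f"{sprints_to_deliver(mvp_backlog, v)}")
```

Re-running this each sprint with the measured velocity is what turns the chart into an early-warning system rather than a one-off plan.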
Cumulative Value Flow
What the above graph doesn't clearly indicate is the value that has been delivered into production over time, and this can be illustrated by the use of a cumulative value flow chart as shown below.
The cumulative value flow shows, over time, the accumulated points that have been deployed into production and the level of work in progress. Growth in the value delivered into production is clearly a good thing, but any growth in the amount of work in progress needs urgent investigation: it's an early indicator that work isn't being finished and that focus is drifting from prioritised value. It also shows how the total number of points increases over time, indicating that the scope is not static but evolving. Given this, it becomes clear that within a given period of time the project may have delivered the forecast value without necessarily having delivered the planned scope.
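The bands of that chart come from per-sprint snapshots of total scope, points delivered to production, and points in progress. A minimal sketch, with invented numbers, including a crude check for the WIP growth flagged above (the 25% threshold is an arbitrary assumption):

```python
# Sketch: cumulative value flow from per-sprint snapshots.
# The snapshot data is invented for illustration.

snapshots = [
    # (sprint, total_scope, delivered, in_progress)
    (1, 180, 10, 20),
    (2, 180, 25, 22),
    (3, 195, 38, 30),  # scope has grown; WIP is creeping up
    (4, 210, 55, 45),  # WIP growth: work isn't being finished
]

for sprint, scope, delivered, wip in snapshots:
    remaining = scope - delivered - wip
    print(f"sprint {sprint}: delivered={delivered} wip={wip} "
          f"not started={remaining} scope={scope}")

# Flag sprint-on-sprint WIP growth above an arbitrary 25% threshold.
for (s1, _, _, w1), (s2, _, _, w2) in zip(snapshots, snapshots[1:]):
    if w2 > w1 * 1.25:
        print(f"warning: WIP jumped between sprints {s1} and {s2}")
```

Note that the scope column grows over time, which is exactly the "evolving scope" signal described above.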
To Conclude
In summary, we need the courage to move away from the reliably unreliable practice of fixed-price costing of projects based on estimates, and move to a production-line model where money is allocated as the budget for an on-going stream of work based on a given team size and structure: effectively a subscription service, as we are seeing for many licensing models. I've rarely known a project which, once delivered, then remains unchanged. The original value proposition is delivered into production, and then there is always an on-going stream of new or modified functionality required. Taking a different approach to budgeting and estimating then allows there to be a delivery team on tap with a proven rate of delivery, the cost of which can be forecast indefinitely based on a financial run rate per sprint.
In terms of management reporting we can then move away from the traditional Gantt chart, which gives an unreliable illusion of certainty, to the above charts which give a tangible metric driven indication of progress and early visibility of problems. This gives all stakeholders a positive view of real progress and moves away from the confrontational management of budget against scope and date.
VP UK Software Engineering at Vocalink, a Mastercard company · 8y
I hate the whole process of providing estimates as it only provides value to everyone apart from the team. But I do understand we need to know when things will get done.

Retired · 8y
Great article Phil, for me it is the flexibility in this approach that encourages the customer to understand the variability of the development, but that he can be confident that key functionality will be deployed first with additional functionality later. I've been managing an ongoing software programme that is based on a fixed stable team size and we manage our MoD customer's expectations on what's deliverable by managing the scope and development effort (including support to the existing delivered software). Although it's an MoD programme we've managed to avoid the issues of fixed price estimating and the MoD fund our fixed team with quarterly progress payments. We agree the tasks with the customer at regular meetings, where we also report progress. Our focus is on 'value for money' so when we get requests for change we brainstorm potential solutions and determine which we believe would be acceptable to the customer. We then discuss the options with our customer to agree the most 'cost effective' solution, which obviously focuses on the requirements that are particularly important. It is often the case that the original requirements are for a 'Rolls Royce' solution and we find that there is a similar solution that is acceptable, but at a much reduced cost. Working outside of the usual 'fixed price' constraints also allows flexibility on both sides to develop what is best for the users, without getting hung up with what has been contracted for (how many times have you expended significant effort developing a feature that is never used since things have changed from when the requirements were captured, but can't be dropped as it's in the contract?). Another benefit is that our company doesn't have to get involved in adding additional risk funding into the fixed estimates, thus the management overheads are greatly reduced.