Probabilistic Forecast
Sharing a compilation of 14 insights from Troy Magennis' workshop yesterday in Sydney. I usually share technical insights from courses and conferences on Twitter; if you are interested, you can follow me at https://twitter.com/marciosete
When forecasting, the sampling data needs to be chosen carefully. You don't want as many data points as possible; you want the ones that best describe the possible future in the current situation.
In forecasting, the most important thing is not the maths; it is how you communicate it.
When choosing a sampling dataset for forecasting, highlight the outliers that might create distortion and ask those interested in the forecast to decide, consciously, whether they should be included or not.
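As a minimal sketch of one common way to surface candidate outliers for that conversation (the 1.5×IQR rule; the rule, threshold and example numbers are my assumptions, not Troy's method):

```python
import statistics

def flag_outliers(samples, k=1.5):
    """Flag values outside the k*IQR fences so stakeholders can decide
    whether to keep them in the forecasting dataset."""
    q1, _, q3 = statistics.quantiles(samples, n=4)  # quartiles
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in samples if x < low or x > high]

# Hypothetical weekly throughput samples; 19 stands out and gets flagged.
weekly_throughput = [4, 5, 6, 5, 7, 4, 19, 6, 5]
print(flag_outliers(weekly_throughput))  # -> [19]
```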
When forecasting, you need to consider the "backlog split rate" - the rate at which your PBIs will be broken down. The reason is that your completed work has already been split (obviously), but your backlog has not. The observed split rate typically varies between 1 and 3.
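A rough sketch of applying that idea, assuming the 1-3 range and an example backlog size (both illustrative, not prescribed numbers):

```python
import random

def adjusted_backlog(remaining_items, low=1.0, high=3.0):
    """Inflate the remaining backlog by a randomly sampled split rate,
    reflecting that not-yet-started items still tend to be broken down."""
    split_rate = random.uniform(low, high)
    return round(remaining_items * split_rate)

print(adjusted_backlog(40))  # 40 backlog items may really be 40..120 once split
```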
A forecast is meant to give you an early indication of whether your assumptions are holding true. If your actuals fall outside the forecast, you need to review your assumptions. Forecasts are not meant to be set in stone; a forecast is a "live organism".
Instead of giving a date when the work will be done, ask for a desired date and say how much work is likely to get done by that date, e.g. 30/11, 30/12, 30/01. See how much work is likely to be completed by each date and decide which one is more appropriate.
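A minimal sketch of that "how much by each date" question, bootstrapping weekly throughput history; the dates, samples and 85% confidence level are assumptions for illustration:

```python
import random
from datetime import date

def items_done_by(target, start, weekly_samples, trials=10_000, confidence=0.85):
    """Bootstrap weekly throughput to estimate how many items are likely
    (at the given confidence) to be finished by the target date."""
    weeks = max((target - start).days // 7, 0)
    totals = sorted(
        sum(random.choice(weekly_samples) for _ in range(weeks))
        for _ in range(trials)
    )
    # Conservative answer: the amount we beat in `confidence` of the trials.
    return totals[int((1 - confidence) * trials)]

weekly_samples = [4, 5, 6, 5, 7, 4, 6]          # hypothetical history
start = date(2018, 11, 1)                        # hypothetical start date
for target in (date(2018, 11, 30), date(2018, 12, 30), date(2019, 1, 30)):
    print(target, items_done_by(target, start, weekly_samples))
```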
General sample count advice: the minimum sample count is 5; 7 is acceptable; 11 is good; diminishing returns after 30.
Two big sources of variability in software are the amount of work and the delivery rate. More often than you could ever imagine, these are misunderstood and neglected, especially the variation in the volume of work.
When using historical data to forecast, to understand the quality of your sampling data, divide it into 2-3 random groups, run the forecast on each and compare the results. Variability below 25% is good, below 10% is great. That gives you the "sample data stability".
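A minimal sketch of that stability check; the specific spread measure (relative difference between group medians of a simple bootstrap forecast) and the sample numbers are my assumptions, only the 25%/10% thresholds come from the workshop:

```python
import random
import statistics

def stability(samples, groups=2, trials=2_000, horizon_weeks=8):
    """Split the throughput samples into random groups, run the same simple
    bootstrap forecast on each, and report the spread between group medians."""
    shuffled = random.sample(samples, len(samples))
    chunks = [shuffled[i::groups] for i in range(groups)]

    def forecast(chunk):
        totals = [sum(random.choice(chunk) for _ in range(horizon_weeks))
                  for _ in range(trials)]
        return statistics.median(totals)

    medians = [forecast(c) for c in chunks]
    return (max(medians) - min(medians)) / min(medians)  # relative spread

weekly_samples = [4, 5, 6, 5, 7, 4, 6, 5, 8, 5, 6, 4]    # hypothetical history
print(f"spread between groups: {stability(weekly_samples):.0%}")
```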
Story points have proved not to be a good predictor. Learn how to break your work down so that it is just "small enough" and from there just use counts. Focus on spotting a change in distribution rather than a change in magnitude.
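One way to look for a distribution change rather than a magnitude change is a two-sample Kolmogorov-Smirnov test; this sketch and its data are illustrative assumptions, not a method prescribed in the workshop:

```python
from scipy.stats import ks_2samp

# Hypothetical cycle times (days) before and after a process change.
before = [2, 3, 3, 4, 5, 3, 2, 6, 4, 3, 5, 4]
after  = [3, 5, 6, 7, 5, 8, 6, 4, 7, 6, 5, 9]

stat, p_value = ks_2samp(before, after)
if p_value < 0.05:
    print(f"distribution likely shifted (p={p_value:.3f})")
else:
    print(f"no clear evidence of a shift (p={p_value:.3f})")
```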
A cost-of-delay approach that keeps working after the consultants leave: do a fast classification of what people know for sure is high and low, and take those out. Take what has a moderate CoD and do a quick qualitative analysis. Eventually do a quantitative analysis for the top ones.
Ask your team: what could prevent us from going live? What could blow up and give us unplanned work? Treat these as real risks! Drive them to 0% or 100% as soon as possible. Don't leave risks at 30% forever, and don't leave risks for someone else to manage. That makes us look stupid.
Forecasts done properly are probabilistic and take into consideration start date, backlog growth (split rate), delivery rate, dependencies, risks and prioritisation (cost of delay). Also, make sure to keep all your assumptions in one single place/view.
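A minimal sketch of such a probabilistic forecast, covering only start date, split rate and delivery rate (dependencies, risks and prioritisation are left out); backlog size, throughput history and dates are assumed numbers for illustration:

```python
import random
from datetime import date, timedelta

def finish_dates(backlog, weekly_samples, start, split=(1.0, 3.0), trials=10_000):
    """Monte Carlo forecast: inflate the backlog by a sampled split rate,
    then draw weekly throughput from history until the work runs out."""
    results = []
    for _ in range(trials):
        remaining = backlog * random.uniform(*split)
        weeks = 0
        while remaining > 0:
            remaining -= random.choice(weekly_samples)
            weeks += 1
        results.append(start + timedelta(weeks=weeks))
    results.sort()
    return results

weekly_samples = [4, 5, 6, 5, 7, 4, 6]   # hypothetical delivery-rate history
runs = finish_dates(backlog=40, weekly_samples=weekly_samples,
                    start=date(2018, 11, 1))
for pct in (0.5, 0.85, 0.95):
    print(f"{pct:.0%} likely done by {runs[int(pct * len(runs)) - 1]}")
```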
5 Basic Metric Rules: 1) Respect individual safety; 2) Show trends; 3) Compare data in context; 4) Show the unusual clearly; 5) Be balanced - avoid over-focusing. The four balancing pillars are usually quality, productivity, predictability, and responsiveness.