Probabilistic Forecast

Sharing a compilation of 14 insights from Troy Magennis' workshop yesterday in Sydney. I usually share technical insights from courses and conferences on Twitter; if you are interested, you can follow me at https://twitter.com/marciosete

When forecasting, the sampling data needs to be carefully thought through. You don't want as many data points as possible; you want the ones that best describe the possible future in the current situation.
In forecasting, the most important thing is not the maths but how you communicate it.
When choosing a sampling dataset for forecasting, highlight the outliers that might create distortion and ask those interested in the forecast to decide, consciously, whether or not they should be included.
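One lightweight way to surface such outliers for that conversation is the standard 1.5 × IQR rule. This is a sketch of my own, not a technique from the workshop, and the `weekly_throughput` numbers are made up for illustration:

```python
import statistics

def flag_outliers(samples):
    """Flag points outside 1.5 * IQR so stakeholders can consciously
    decide whether to keep them in the forecasting dataset."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [(x, x < lo or x > hi) for x in samples]

weekly_throughput = [6, 7, 5, 8, 6, 22, 7, 5]  # 22 was a one-off spike
for value, is_outlier in flag_outliers(weekly_throughput):
    print(value, "<- review with stakeholders" if is_outlier else "")
```

The point is not the statistics: the rule only nominates candidates, and the people who depend on the forecast make the keep-or-drop call.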
When forecasting, you need to consider the "backlog split rate": the rate at which your PBIs will be broken down. The reason is that your completed work is already split (obviously), but your backlog is not. The observed split rate varies between 1 and 3.
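A minimal Monte Carlo sketch of how the split rate could be applied: inflate the backlog by a random factor in the observed 1–3 range on each run, then burn it down with resampled historical throughput. The function name, the throughput samples, and the choice of the 85th percentile are my assumptions for illustration, not Troy's method:

```python
import random

def forecast_days(backlog_items, daily_throughput_samples, runs=10_000):
    """Monte Carlo forecast of days to finish the backlog, applying a
    random split rate (observed range 1-3) because the backlog is not
    yet broken down, unlike the completed work the samples came from."""
    results = []
    for _ in range(runs):
        split_rate = random.uniform(1.0, 3.0)
        remaining = backlog_items * split_rate
        days = 0
        while remaining > 0:
            remaining -= random.choice(daily_throughput_samples)
            days += 1
        results.append(days)
    results.sort()
    return results[int(runs * 0.85)]  # days needed with ~85% confidence

# hypothetical samples of items completed per day
samples = [0, 1, 2, 1, 0, 3, 1, 2, 1, 1, 0]
print("85% likely done within", forecast_days(40, samples), "days")
```

Note how much of the spread in the answer comes from the split rate alone: ignoring it silently anchors the forecast to the most optimistic backlog size.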
A forecast is meant to give you an early indication of whether your assumptions are holding true. If your actuals fall outside the forecast, you need to review your assumptions. Forecasts are not meant to be set in stone; a forecast is a "live organism".
Instead of giving a date when the work will be done, ask for a desired date and say how much work is likely to get done by that date, e.g. 30/11, 30/12, 30/01. See how much work is likely to be completed by each date and decide which one is most appropriate.
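Turning the question around like this is easy to simulate: for each candidate date, resample historical weekly throughput and report a count you are likely to reach at a chosen confidence. The weekly numbers and the 85% confidence level below are assumptions of mine, purely for illustration:

```python
import random

def items_done_by(days, weekly_throughput, runs=10_000, confidence=0.85):
    """Items likely to be completed within `days`, at the given
    confidence, by resampling historical weekly throughput."""
    totals = []
    weeks, leftover_days = divmod(days, 7)
    for _ in range(runs):
        done = sum(random.choice(weekly_throughput) for _ in range(weeks))
        done += random.choice(weekly_throughput) * leftover_days / 7
        totals.append(done)
    totals.sort()
    # take the low end: with this confidence we finish at least this many
    return int(totals[int(runs * (1 - confidence))])

weekly = [4, 6, 5, 7, 3, 8, 5]  # hypothetical weekly samples
for label, days in [("30/11", 30), ("30/12", 60), ("30/01", 90)]:
    print(f"by {label}: likely at least {items_done_by(days, weekly)} items")
```

The decision then becomes a scoping conversation ("which of these cut lines is acceptable?") rather than a date negotiation.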
General sample count advice: the minimum sample count is 5; 7 is acceptable; 11 is good; there are diminishing returns after 30.
Two big sources of variability in software are the amount of work and the delivery rate. More often than you could ever imagine, these are misunderstood and neglected, especially the variation in volume of work.
When using historical data to forecast, to understand the quality of your sampling data, divide it into 2-3 random groups, run the forecast on each and compare the results. Variability below 25% is good; below 10% is great. That gives you your "sample data stability".
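The stability check above can be sketched in a few lines. Here the "forecast" from each group is a simple bootstrap of mean weekly throughput; the function name, group count and sample values are my own illustrative assumptions, not a prescribed procedure:

```python
import random
import statistics

def stability(samples, groups=3, runs=5_000):
    """Split historical samples into random groups, run a simple
    bootstrap forecast on each, and report the spread between the
    groups as a percentage: below 25% is good, below 10% is great."""
    shuffled = samples[:]
    random.shuffle(shuffled)
    size = len(shuffled) // groups
    forecasts = []
    for g in range(groups):
        group = shuffled[g * size:(g + 1) * size]
        boots = [statistics.mean(random.choices(group, k=len(group)))
                 for _ in range(runs)]
        forecasts.append(statistics.mean(boots))
    spread = (max(forecasts) - min(forecasts)) / statistics.mean(forecasts)
    return spread * 100

weekly_throughput = [5, 6, 4, 7, 5, 6, 8, 5, 4, 6, 7, 5]
print(f"sample data stability: {stability(weekly_throughput):.1f}% variation")
```

If the groups disagree wildly, the problem is the data (a process change, mixed work types, too few samples), and no amount of simulation will fix that.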
Story points have proven not to be a good predictor. Learn how to break your work down so items are just "small enough", and from there just use counts. Focus on spotting a change in distribution instead of spotting a change in magnitude.
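One way to illustrate "distribution, not magnitude": compare weekly item counts from two periods at several percentiles rather than by a single average. In the hypothetical data below the median is unchanged while the spread widens sharply, a shift an average would hide:

```python
import statistics

def distribution_shift(before, after):
    """Compare item-count distributions at the quartiles; a change in
    shape (e.g. a fatter tail) matters even when the median holds."""
    report = {}
    for i, label in enumerate(["p25", "p50", "p75"]):
        b = statistics.quantiles(before, n=4)[i]
        a = statistics.quantiles(after, n=4)[i]
        report[label] = (b, a)
    return report

before = [4, 5, 5, 6, 5, 4, 6, 5]   # hypothetical weekly item counts
after = [5, 5, 6, 2, 9, 1, 10, 5]   # same median, much wider spread
for label, (b, a) in distribution_shift(before, after).items():
    print(f"{label}: {b} -> {a}")
```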
Cost of delay that still works after the consultants leave: do a fast classification of what people know for sure is high and low, and take those out. Take what has a moderate cost of delay and do a quick qualitative analysis. Eventually do a quantitative analysis for the top ones.
Ask your team: what could prevent us from going live? What could blow up and give us unplanned work? Treat these as real risks! Drive them to 0% or 100% as soon as possible. Don't leave risks at 30% forever, and don't leave risks for someone else to manage. That makes us look stupid.
Forecasts done properly are probabilistic and take into consideration start date, backlog growth (split rate), delivery rate, dependencies, risks and prioritisation (cost of delay). Also, make sure to keep all your assumptions in one single place/view.
5 basic metric rules: 1) Respect individual safety. 2) Show trends. 3) Compare data in context. 4) Show the unusual clearly. 5) Be balanced; avoid over-focusing. The four balancing pillars are usually quality, productivity, predictability, and responsiveness.

Picked up some new info on good sample set....thanks for sharing

Carl Weller

Principal Consultant and Ways of Working Practice Lead at Sentify

Thanks Marcio. I have Troy’s book on my Xmas list. Looks like it was a great session.

