The lies we tell ourselves in engineering
[Header image: DALL·E, "the lies we tell ourselves in the style of Klimt"]


In my experience, good engineers are natural optimists (Stripe used to have an operating principle, "Stripe is for optimists," that encouraged this). It is damn hard to realize a new idea in software, test it properly, and make it work under stress. Having a lot of natural optimism helps you push through the hard work without getting discouraged.

But lately I've been musing about some of the common and convenient untruths that we may rely on for this optimism. Can we get beyond these "coping mechanisms" or are they intrinsic to the work?

"This will take N weeks to build"

This is the most common lie we rely on daily: the idea that we can even roughly estimate how long it will take to build a feature or new system. The truth is, if we are moving fast enough, often we don't even know exactly what we are trying to build. The whole point of "agile" is to discover the full and critical requirements iteratively over time. But no, we eyeball a project, SWAG an estimate, and some manager doubles our estimate to get the final number. Then we roll up all the estimates and decide if we have "enough" resources to get through the quarter! Realizing the difficulty of "estimation" as a task, we add "Gets better at making estimates!" to our performance ladder.
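Just to make the ritual concrete, here is a minimal sketch of that roll-up. Everything in it is hypothetical: the project names, the 2x "manager buffer," and the simplifying assumption that capacity is just engineers times weeks.

```python
# A minimal sketch of the quarterly "roll-up" ritual, not a real planning tool.
# All names and numbers below are invented for illustration.

QUARTER_WEEKS = 13
ENGINEERS = 5
MANAGER_BUFFER = 2.0  # the classic "double whatever the engineer said"

# SWAG estimates in engineer-weeks, straight from eyeballing each project
swag_estimates = {
    "feature_a": 4,
    "feature_b": 6,
    "migration_x": 8,
    "tech_debt_cleanup": 3,
}

capacity = QUARTER_WEEKS * ENGINEERS
planned = sum(weeks * MANAGER_BUFFER for weeks in swag_estimates.values())

print(f"Capacity: {capacity} engineer-weeks, planned: {planned:.0f}")
print("Looks fine!" if planned <= capacity else "Over-committed before we even start.")
```

The arithmetic is trivially easy; the lie is in the inputs.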

It's hard to imagine how we could effectively operate otherwise. Not giving any estimates means no dates or expectations for how long a project might take. How is anyone supposed to plan if there are no schedules? How do we balance an aspiration towards lofty goals with the desire for a realistic plan?

I like the innovative thinking that Basecamp has done around scheduling and planning. They basically just run a 6-week sprint and make almost everything fit into that bucket. Rather than ask "How long will feature X take to build?" they invert the exercise and ask "How much of feature X can we build in 6 weeks?" I think this effectively captures the idea that scope and time are almost equally unknown at the start of a project, so it may be easier to just constrain your time variable.

In my own experience I have developed a toolbox of imperfect tools for this problem:

  • Break things down into smaller pieces as much as possible.
  • Avoid fixed deadlines whenever possible. "Sometime in month X" is about the most time resolution you should expect to get.
  • Optimize for throughput instead of predictability. Better (and easier) to ship small, often, and fast than to hit the deadline on one huge project.
  • Be explicit and specific about the scope v. time trade-offs you are making on each project.

"We're gonna ship this by the end of the quarter!"

Humans need deadlines. This is the core counterargument to "let's do all agile dev with no deadlines." People need deadlines to organize their effort, and often to force the reasonable trade-offs that we might otherwise neglect in favor of "getting it perfect." Unfortunately, whatever your planning cycle is (typically quarterly) will create a natural deadline at the end of each cycle, and your system will easily trend toward delivering everything at that deadline.

When I see this happening on teams, my first, optimistic reaction is "Better to ship it all at the end of the quarter than to miss the deadline!" But the truth is that this points to poor planning and poor division of work into smaller chunks. I try to look at the plan each quarter and make sure that teams are planning to deliver chunks of work in months 1 and 2, not just in month 3. The best teams and managers know they need to generate their own urgency around smaller deadlines rather than "fall back" to the planning boundary.

"We start each quarter with a blank slate"

One of my favorite fictions in software development is the "factory" model for planning our work. We plan a set of projects each quarter with the concept that we will start and finish the project in that quarter, and that whatever we have built will be "done." Like a factory, we will have "shipped" it to the customer and can now move on to the next thing. But the truth is that building software is NEVER like that! Every time we build something new, we have to keep working on it, sometimes a lot. Unless we decide to kill the feature and rip out the code, it will require some level of ongoing investment forever.

And yet our planning process is mostly ignorant of this basic fact. It's left to good intentions and best practice for teams to surface "fast follow" and "tech debt" projects that make all this additional work visible in the planning process. And forget about any multi-quarter project plan or roadmap. You think we have any idea what kind of ongoing work will be required for a feature that we haven't even built yet?

My observation is that features and code follow a typical lifecycle. There is intense development and enhancement in the early life of any feature, followed by a declining level of maintenance effort as the feature matures. Eventually the code matures and you usually get 12-18 months of stability where that feature may require very little effort to maintain. But after this your feature starts to break! APIs get deprecated, dependent systems change, or scale demands outstrip the capacity of the original implementation. And now you have the "tech debt/re-architecture" part of the feature lifecycle, where suddenly you have to invest work to re-build something you already built once.
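Just to illustrate the shape of that curve, here is a rough sketch. Every number in it is invented; it is a cartoon of the lifecycle, not a planning tool, and "fraction of a team's quarter" is an assumption I'm using as a convenient unit.

```python
# A cartoon of the feature lifecycle described above: heavy investment at launch,
# a declining tail, a quiet stretch, then a re-architecture bump around 18-24 months.
# All numbers are invented for illustration.

def quarterly_investment(quarters_since_launch: int) -> float:
    """Rough fraction of a team's quarter a single feature might consume."""
    q = quarters_since_launch
    if q <= 1:
        return 1.0   # build quarter plus the fast-follow quarter
    if q <= 3:
        return 0.3   # declining maintenance as the feature matures
    if q <= 6:
        return 0.1   # the stable 12-18 month stretch
    if q <= 8:
        return 0.5   # APIs deprecate, scale bites: re-architecture work
    return 0.15      # steady state after the rebuild

# Sum the expected load from every feature still alive, by how long ago it launched.
launch_quarters_ago = [0, 1, 2, 5, 7]  # a hypothetical portfolio of shipped features
total = sum(quarterly_investment(q) for q in launch_quarters_ago)
print(f"Expected load this quarter: {total:.2f} team-quarters")
```

Summing even a made-up curve like this across everything a team has already shipped gives a rough sense of how little "blank slate" is actually left each quarter.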

I have been working with my teams to try to encode this "lifecycle model" into our planning process. Any project that is building a new capability will likely require intense effort this quarter AND next quarter. Projects from prior quarters should have declining but non-zero investment demands. And ideally we would be surveying the things we built 18-24 months prior and asking whether it's time to schedule their replacement. So far we have found that traditional planning is already quite difficult, and trying to incorporate this new model adds a good bit of complexity. Maybe lying to ourselves is easier...

Bob Lockwood

Backend and Data Engineering (VP)

1y

If you can do it: under promise and over deliver (UPOD). Estimating is really difficult, period, but even more difficult under pressure, especially when you are being told that it's easy and should take no time. Lest we forget good tech plans, solid unit tests, integration testing, and resiliency strategies. I totally agree with breaking it down into the smallest atomic units possible. Also, releasing incrementally, if you can, is a big win.


Excellent write-up, Scott. I agree with you that estimation is hard. The planning fallacy is that, as humans, we underestimate and are optimistic about it. What are your thoughts on the cone of uncertainty and refining the estimates iteratively, from yearly/quarterly down to every sprint in the agile process? Here are a few tricks that I learned and encourage teams to consider:

1. Consider historical data on similar projects.
2. Estimate in ranges such as t-shirt sizes (XS, S, M, L, and XL) to start with for yearly and quarterly planning exercises, for resourcing and to communicate the scale of investment.
3. Break down the initiative and projects into shippable work that you can fit and ship in a sprint or two. Often this may mean shipping behind a feature flag for a large project.
