In re: Yes you can measure software developer productivity
Howard Wiener, MSIA, CERM
Author | Educator | Principal Consultant | Enterprise Architect | Program/Project Manager | Business Architect
Read this post from Chris Lockhart this morning and kind of went off. It seems like an article will work better for a response than trying to fit this into multiple comments:
Yes, it's certainly nice to know who is adding to and who is detracting from productivity, and who is driving and who is coasting, among your development teams. However, important context, much of which is alluded to in the article, is missing.
In my opinion, altogether too much development takes place in the course of most product development initiatives. To the degree that we all agree we should be managing outcomes and not output (at least, I hope we do), there is a major chasm between how many executive teams manage and how agile delivery teams do. Too many executive teams manage on a PROJECT basis, with the traditional triple-constraint approach, instead of on a PRODUCT basis, which relies on experimentation, insight acquisition, adjustment and iteration. This constitutes what I call the Waterscrumfall Taco, which occurs when an agile implementation is sandwiched inside a traditional management paradigm, and Marty Cagan has demonstrated how this effectively creates a waterfall project structure regardless of the Agile framework employed to churn out the deliverables.
The fly in the ointment (or the turd in the punchbowl, if you prefer) is that there seems to be a need to maintain the backlog and workload of the assigned developers once an initiative gets under way. This gaping maw, which must be fed constantly, undermines experimentation and product discovery and tilts teams to action rather than necessary introspection. Once developers are assigned, releasing them to other work is anathema. Unfortunately, the discipline to work with as-needed development teams seems to be rare, thus my remark earlier about altogether too much development.
Look, accountable executives need and deserve to know where things stand and what to expect from the investments they approve. They also need and deserve reliable information about the performance of the people executing the initiatives in which they have invested. The complication is that the management context in which initiatives are run and the execution of development work on them are codependent and influence each other. Ill-defined products and late-stage surprises can seriously impair the value that can be realized from investments and negate the benefits of high-performing dev teams.

Ultimately, the measure of success in developing or enhancing products must be the cycle time from recognition of need to ideation to realization. In the course of this process, interim outcome goals are identified and, in most cases, development work is defined, estimated and executed. It is in these self-contained units of work that productivity can be measured, though not easily. How should you value people who contribute ideas that others incorporate and act on? How should you value those who take a little extra time to complete their work but produce great quality? How should you value those who help others do better work at the expense of their own productivity?
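The need-to-realization cycle time argued for above is, mechanically, just elapsed time between two milestones aggregated per team. As a minimal sketch only, the initiative names, field names, and dates below are invented for illustration and are not from the article:

```python
from datetime import date

# Hypothetical initiative records; "need_recognized" and "realized"
# are assumed milestone names, not terms defined in the article.
initiatives = [
    {"name": "checkout-revamp", "need_recognized": date(2023, 1, 10), "realized": date(2023, 4, 2)},
    {"name": "search-tuning",   "need_recognized": date(2023, 2, 1),  "realized": date(2023, 3, 15)},
]

def cycle_time_days(item):
    """Elapsed days from recognition of need to realization."""
    return (item["realized"] - item["need_recognized"]).days

times = [cycle_time_days(i) for i in initiatives]
print(times)                       # per-initiative cycle times in days
print(sum(times) / len(times))    # team-level average, per the team-first view below
```

Note that this is a team- or initiative-level number by construction; nothing in it attributes the elapsed time to individual contributors, which is exactly the attribution problem the questions above raise.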
The McKinsey article acknowledges these issues, but the fact is that there are so many factors determining end-to-end performance that quantification is terribly difficult, if not impossible. If an initiative's management context is not optimized (product management and dev-team coordination, for instance), then it is impossible for the dev teams to operate at maximum efficiency, and this must be taken into account in any analysis of development performance. It is probably fairest, at least at the outset, to assess team-level performance before attempting to dive down to the level of individual contributors.
From the article:
"Remember that measuring productivity is contextual. The point is to look at an entire system and understand how it can work better by improving the development environment at the system, team, or individual level."
Team leads know who they want working on the initiatives they run and who they don't. Perhaps that is something that should be given consideration. In addition, the entire capability of product definition and implementation should be subject to scrutiny with the intent of continuous improvement. Shouldn't the same discipline that is used to execute initiatives be applied here as well?
Designer, Architect, Philosopher
Took a while to get through the content here. I suggest that the core challenge is determining what one is trying to measure, and I agree with Howard that the only valid measure is a team's ability to deliver what it is asked to deliver. All of this talk about matching the detail of a sales team is nonsense. Development teams are not salespeople, and developers do not have individual goals. Imagine telling a salesperson that they will be measured on their sales, but that they are not allowed to sell anything until four different groups get around to approving their sale, and that those groups are incented not to approve sales. Measuring impact to customers is equally absurd. Many industries have obligations to stakeholders other than customers; how do you put a value on 'the regulator is happy'? Development efforts are a team effort for a reason, and attempting to find a single measure that applies fairly to every member of a team is going to destroy team cohesion. Either the team achieves the objectives or it does not, and if the team is not too large, everyone knows who contributes.