How to Drive Developer Productivity Through Improved Experience

Authors: Greg Davis and Michael Kingery

Inspired by the research papers DevEx: What Actually Drives Productivity and The SPACE of Developer Productivity.

What is Developer Experience (DevEx)

Developer Experience (DevEx) treats development teams as customers of the practices, processes, tools, and teams they interact with. By streamlining developers' ability to deliver outcomes efficiently and effectively, organizations aim to improve business results, such as reduced time-to-market and higher quality, while also increasing developer job satisfaction.

[Chart: Google Trends interest in "Developer Experience", 2010-01-01 to 2023-12-19]

The rise of remote work and pressure to “do more with less” have heightened attention on efficiency and productivity. DevEx is increasingly seen as a key facilitator. However, confusion over “productivity” has obscured the relationship between DevEx and productivity.

Developer Experience Facilitates Rather Than Reflects Productivity

It’s important to recognize that DevEx does not directly correlate with individual or team productivity. Rather, DevEx aims to eliminate obstacles that hinder productivity.

Modern product team practices, like Amazon’s two-pizza teams, give teams end-to-end ownership and accountability. Productivity thus measures business outcomes against costs at the team level. Teams self-organize delivery within governance guardrails to suit their business case. This expands beyond writing code into all activities needed to create value.

DevEx facilitates greater productivity while maintaining team ownership and self-direction. First, it standardizes commodity tasks that are peripheral to the team's unique business case, preserving effort for the distinct value each team provides, which can't be compared across teams. Second, it minimizes overhead interactions with dependent groups, retaining accountability while cutting coordination that adds no value. Together, these reduce productivity detractors so teams can deliver more business value.

In this way, DevEx enables rather than reflects productivity gains. Productivity calculation remains with the product team, requiring a holistic view of the team and business outcomes. DevEx simply clears obstacles from developers’ path.

Why Developer Experience is Important

The ability for software developers to work efficiently and deliver value quickly is critical for organizations seeking a competitive edge. DevEx, encompassing developers’ daily interactions with tools, processes, documentation and colleagues, is increasingly recognized as a productivity accelerator.

A recent McKinsey report emphasizes that companies fostering a positive work environment for developers enjoy 4-5 times more revenue growth than their competitors. This underscores the strong connection between DevEx, developer productivity, and overall business performance.

Framework for Developer Experience

Consider this three-pronged approach to tackle DevEx.

Shorten feedback loops

Focus on optimizing feedback loops throughout the development process. Accelerate repetitive tooling, such as builds and tests, to provide near-instant validation of code changes. Streamline review and approval processes to minimize the time developers spend waiting on feedback. Promote close collaboration between teams to further reduce human latency. By shortening these feedback loops, developers can correct course faster, resulting in more nimble software development and ultimately better products. Fast feedback through quicker testing, streamlined reviews, and close team alignment is crucial to rapid innovation and greater agility. Feedback loop length is also a key indicator of developer satisfaction and sentiment, since it shapes the day-to-day experience of working with tools, platforms, and processes.

Reduce cognitive load

Minimize context switching by using self-service tools and API interfaces that integrate seamlessly with developer workflows. Provide well-organized documentation with code snippets and tutorials to help developers quickly map requirements to the appropriate services. Standardize architecture and best practices so developers can apply proven patterns instead of reinventing solutions. Automate infrastructure provisioning and deployment to reduce the mental load of server setup and configuration. Automate testing and release through continuous integration and delivery pipelines that would otherwise require extensive manual orchestration. By leveraging these techniques, developers can focus their cognitive resources on delivering value and innovation.

Promote flow

Minimize disruptions and delays through strategies like clustering meetings, avoiding unplanned work, and batching help requests. Leaders should also aim to create positive team cultures that give developers autonomy over their work structure, clear goals, and optimal challenges. By structuring work in ways that allow developers to fully immerse themselves in energized focus, teams can boost developer experience, performance, and product quality. Research suggests focusing on facilitating developer flow leads to higher job satisfaction, better code, and team innovation.

"A delightful flow, for all developers, on every dev team" - Adam Seligman, Vice President of Developer Experience, Amazon Web Services, AWS re:Invent 2022 - Delighting developers: Builder experience at AWS

Measuring Developer Experience

There are many pitfalls in measuring DevEx. First is relying on traditional productivity metrics like commits, lines of code, story points, and task times. These quantify effort rather than results, confusing outputs with outcomes. Enhancing these metrics may not improve experience or productivity. For example, inflating lines of code or story points can occur without adding value.

Another pitfall is discounting the human experience, potentially enabling burnout and attrition. Platforms and processes overlooking user experience will struggle with adoption.

Over-indexing on specific metrics is another pitfall in measuring DevEx: overemphasizing individual metrics and improving them without a holistic view. Any suggested metric indicates only one aspect of DevEx; it is not a definition. For instance, teams with stellar DevEx and productivity frequently deploy to production, but forcing deployment frequency without considering developer satisfaction hurts both experience and productivity. The metrics themselves aren't the goal; they are different angles and indicators. Focus on the metrics that reflect your biggest gaps, and judge success by business outcomes.

Finally, another common pitfall is failing to measure productivity itself. Productivity compares business outcomes against delivery costs. Even outstanding DevEx paired with weak business results signals low productivity. DevEx enables productivity, and productivity can only be measured through business outcomes.

DevEx metrics should foster a team culture rooted in collaboration, transparency, psychological safety, and shared responsibility. Here are several key metrics to consider:

DORA metrics

  • Lead time for changes - The time it takes for a committed change to reach production. Shorter lead time indicates greater efficiency.
  • Deployment frequency - How often new code is deployed to production. Higher frequency allows for smaller batches and faster feedback.
  • Change failure rate - The percentage of deployments that result in a failure or rollback. This includes any kind of defect after being deployed and not just issues with the deployment itself. Lower rates indicate higher quality and predictability.
  • Time to restore service - How long it takes to recover from an incident. Faster restoration results in less downtime.
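As a concrete illustration, the four DORA metrics above can be derived from basic deployment and incident records. The sketch below is a minimal example; the field names (`committed`, `deployed`, `failed`, `started`, `restored`) and data shapes are hypothetical, not taken from any specific tool's API:

```python
from datetime import datetime

# Hypothetical deployment records: one dict per production deployment.
deployments = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 2, 9), "failed": False},
    {"committed": datetime(2024, 1, 3, 9), "deployed": datetime(2024, 1, 3, 15), "failed": True},
    {"committed": datetime(2024, 1, 4, 9), "deployed": datetime(2024, 1, 5, 9), "failed": False},
]
# Hypothetical incident records for time to restore service.
incidents = [
    {"started": datetime(2024, 1, 3, 16), "restored": datetime(2024, 1, 3, 18)},
]

def dora_metrics(deployments, incidents, period_days=30):
    # Lead time for changes: average hours from commit to production.
    lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments]
    # Deployment frequency: deployments per day over the reporting period.
    deploys_per_day = len(deployments) / period_days
    # Change failure rate: share of deployments that caused a failure.
    change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
    # Time to restore service: average hours from incident start to recovery.
    restore_times = [(i["restored"] - i["started"]).total_seconds() / 3600 for i in incidents]
    return {
        "lead_time_hours": sum(lead_times) / len(lead_times),
        "deploys_per_day": deploys_per_day,
        "change_failure_rate": change_failure_rate,
        "time_to_restore_hours": sum(restore_times) / len(restore_times),
    }

print(dora_metrics(deployments, incidents))
```

Averages are used here for brevity; in practice, medians or percentiles are more robust against outliers such as one very slow deployment.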

Cycle time - The time it takes to go from start to finish of a process, for example from code commit to code deployment. Shorter cycle times allow for faster feedback loops. Cycle time can also measure onboarding: for instance, the number of days a new hire takes to deploy a simple change ("Hello World") to production.

Escaped defects - The number of defects that make it to production. Lower numbers demonstrate better quality.

Developer satisfaction - Measured via surveys containing Employee Net Promoter Score (eNPS) questions. High satisfaction indicates developers are able to focus and be productive.
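The eNPS calculation itself is simple: respondents scoring 9-10 are promoters, those scoring 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors (range -100 to 100). A minimal sketch:

```python
def enps(scores):
    """Employee Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; the score is the percentage
    of promoters minus the percentage of detractors.
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(enps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors out of 6 -> 0.0
```

A score above zero means promoters outnumber detractors; tracking the trend over repeated surveys matters more than any single reading.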

Developer engagement - Measured via surveys. High engagement suggests developers are intrinsically motivated.

Developer retention - The rate at which developers voluntarily leave. High retention minimizes loss of knowledge.

System uptime/availability - The percentage of time systems are operational. Higher uptime results in less developer disruption.

Technical support response time - The time it takes to get answers to technical questions. Shorter time indicates better communication and documentation.

Uninterrupted time - The amount of time without meetings or interruptions. Longer blocks of uninterrupted time allow developers to focus on tasks with less context switching.
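Uninterrupted time can be estimated directly from calendar data. The sketch below is a hypothetical example that merges overlapping meetings and finds the longest meeting-free block in a workday:

```python
from datetime import datetime, timedelta

def longest_focus_block(workday_start, workday_end, meetings):
    """Longest meeting-free stretch between workday_start and workday_end.

    `meetings` is a list of (start, end) datetime pairs; overlapping
    meetings are merged before measuring the gaps between them.
    """
    # Sort by start time and merge overlapping meetings.
    merged = []
    for start, end in sorted(meetings):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    # Measure the gaps before, between, and after meetings.
    longest = timedelta(0)
    cursor = workday_start
    for start, end in merged:
        longest = max(longest, start - cursor)
        cursor = max(cursor, end)
    return max(longest, workday_end - cursor)

day = datetime(2024, 1, 8)
meetings = [
    (day.replace(hour=10), day.replace(hour=11)),
    (day.replace(hour=10, minute=30), day.replace(hour=12)),  # overlaps the first
    (day.replace(hour=15), day.replace(hour=16)),
]
print(longest_focus_block(day.replace(hour=9), day.replace(hour=17), meetings))
# longest gap is 12:00-15:00 -> 3:00:00
```

Averaged across a team and a sprint, this kind of calculation shows whether meeting clustering is actually producing the long focus blocks developers need.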

Incident frequency - The number of incidents or tickets that require developer attention. A lower frequency indicates better code quality, system stability, and documentation.

As the research shows, developer productivity ultimately translates into measurable business gains. By making DevEx a strategic priority, businesses will continue to innovate and drive success for customers.


How Amazon Approaches Developer Experience

One Amazon initiative is the Amazon Software Builder Experience team (ASBX), formed with the primary goal of making it easier to build, every day. A key ASBX delivery is an annual internal Tech Survey sent to technology employees. This survey gauges sentiment, experience, innovation potential, and customer obsession culture. Driven by and for developers, the survey has informed Amazon's builder culture for 16 years. Results are distributed across the company to spotlight improvement opportunities and empower teams to act. Metrics tracking improvement efforts also circulate with the survey findings.

Questions cover building constraints such as pipeline health, deployments, maintenance, and bug fixes, as well as cultural topics such as job satisfaction and psychological safety. The survey supplies the compass, while empowered teams steer improvements for developers. This developer focus has helped propel Amazon's builder culture and business success over the long term.

You can read the latest output of the Amazon Annual Tech Survey here:

https://www.aboutamazon.com/news/workplace/amazons-annual-tech-survey-results-now-available


Tools to Measure Developer Experience

Version Control Systems

  • Examples: GitHub, GitLab, AWS CodeCommit, Azure Repos, Google Cloud Source Repositories
  • Metrics: Escaped defects, Change failure rate, Developer engagement (commit frequency, branch activity, pull request interactions)
  • Importance: Essential for understanding collaboration efficiency and code integration issues.

CI/CD Tools

  • Examples: Jenkins, CircleCI, AWS CodeBuild, Google Cloud Build
  • Metrics: Deployment frequency, Lead time for changes, Time to restore service
  • Importance: Indicates the health of the codebase and efficiency of the deployment process.

Code Review Tools

  • Examples: GitHub, GitLab, Azure Repos, JetBrains Space, Bitbucket
  • Metrics: Change failure rate, Cycle time, Developer engagement (code review participation, issue interactions, merge/pull request activity)
  • Importance: Reflects on the collaborative aspect of coding and the efficiency of peer review.

Issue Tracking and Project Management Tools

  • Examples: JIRA, Trello, Azure DevOps, ClickUp, Asana, Monday
  • Metrics: Cycle time, Incident frequency, Uninterrupted time (task status changes, comment timing, calendar analysis), Developer engagement (task completion rate, issue resolution involvement, active participation in project planning)
  • Importance: Provides insights into the planning and execution speed of development tasks.

Automated Testing Tools

  • Examples: Selenium, Jest, Pytest, Cypress, Puppeteer, Cucumber
  • Metrics: Escaped defects, Change failure rate, Incident frequency
  • Importance: Crucial for maintaining code quality and reducing bugs.

Performance Monitoring Tools

  • Examples: Dynatrace, New Relic, Datadog, Splunk, AWS CloudWatch, Azure Monitor, Google Cloud’s Operations Suite
  • Metrics: System uptime/availability, Time to restore service, Incident frequency
  • Importance: Helps in identifying performance bottlenecks and improving reliability.

Developer Satisfaction Surveys

  • Examples: Internal surveys, feedback tools, DX
  • Metrics: Developer satisfaction, Developer retention, Uninterrupted time, Technical support response time
  • Importance: Direct feedback about experiences and challenges.


About Greg Davis

With over 25 years of experience in technical leadership, emphasizing people, product, and project management, I bring a wealth of expertise to roles such as Senior Engineering Manager, Solutions Architect Manager, and Database Engineer. My approach involves high-velocity decision-making, utilizing Agile methodologies, and viewing mistakes as valuable learning experiences. I excel in working backwards from customer problems to craft effective solutions, particularly in backend infrastructure, serverless development, microservices, and event-driven architecture. What sets me apart is a deep passion for continuous learning, reflected in the acquisition of over 30 technical certifications. Beyond this, my extreme interest in artificial intelligence fuels my commitment to staying at the forefront of technological advancements.

https://www.dhirubhai.net/in/gregtx/


About Michael Kingery

Experienced technology leader with a demonstrated history of working with both hard technical skills and the cultures and practices that produce high performing teams. Strong professional experience in Agile, DevOps, and SRE Methodologies / Transformation as well as APIs, Web Applications, Mobile Development, and Databases. A passion for innovation and an ability to apply it from startups to Fortune 50 companies.

https://www.dhirubhai.net/in/kingerymike/

Andrew Cornwall

Senior Analyst at Forrester

11 months ago

I was getting nervous as I read "Tools to Measure Developer Experience" until the end, when I saw "Developer Satisfaction Surveys." Whew! Glad it's there, but people with a development background have a tendency to collect data from tools rather than just asking the people who are doing the work. As a manager of developers, your first job is to talk to them. Only once you understand their issues should you think about deciding what the right metrics should be, how to collect them, and how to balance metrics against each other.

Kristian Andaker

Evolving Windows for customer needs of the future

11 months ago

I love this. It's very similar to how Atlassian (which I have no relationship with, other than liking their devex approach) aims to make their devs productive. I have a tool to pitch that helps with several of the approaches you describe for improving dev productivity: you can help devs stay more in-flow by automating away a small task which steals their time today: scheduling who is oncall when. I'd love to hear an Amazon or AWS perspective on whether oncallscheduler.com could save your devs precious time.

Emma Reifenberger

Engineering Intelligence @ DX

11 months ago

Love what you said about how "DevEx enables rather than reflects productivity gains." Good read, thanks for sharing Greg Davis!

Leo Epstein

Support at DX

11 months ago

Thanks for referencing our paper! Glad you're finding the concepts useful to you :)
