Day 2: Developer Productivity Engineering Summit
I’m back from the Developer Productivity Summit, and wow, did it deliver. The talks were of extremely high quality, the attendees were facing interesting and challenging problems, and everyone was happy to share and learn from each other.
Day 2 felt like a change of tone, with two themes. The first was a focus on the psychology and human factors that contribute to developer productivity. The link between developer happiness and productivity was highlighted in multiple talks, driving home the need to listen to our customers (developers) and deliver the right support so they can perform at their peak. And support doesn’t just mean tools that streamline day-to-day work, or even tools that revolutionize the job. More than that, it is about creating an environment where people feel valued and safe, where they can continue to grow professionally, and where they can make a difference.
The second theme was a very large segment of talks focused on using AI to reduce developer toil and increase what can be delivered to customers.
The first keynote opened with an interview of Margaret-Anne Storey (Peggy) by Abi Noda. Peggy’s work on the SPACE framework of developer productivity has been vital to our community. Her talk on the Past, Present, and Future of Developer Productivity Research showed the evolution of the discipline and helped put context around the work that she and others continue to do to improve the experience and productivity of developers.
The second keynote, from Kelly Hirano and the engaging Akshay Patel, walked through Meta’s productivity framework and, more importantly, how they collect and use both qualitative and quantitative data. Using surveys as a data collection mechanism was repeatedly highlighted as a must-do for teams working on developer productivity. At Amazon, we ran these surveys monthly, sampling a subset of the population, so we gained regular insights without inducing survey fatigue; other companies seem to opt for quarterly or annual surveys (though I did hear of one company that asks for 90% participation in monthly dev tools surveys!).
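The monthly subset-sampling approach can be sketched in a few lines. This is a minimal illustration of the idea, not Amazon's actual tooling: each developer is deterministically hashed into one of twelve rotating cohorts, so everyone is surveyed roughly once a year and nobody is hit month after month.

```python
import hashlib

def monthly_cohort(usernames, month_index, num_cohorts=12):
    """Assign each developer to one of `num_cohorts` rotating cohorts.

    Hashing the username gives a stable bucket, so the same person is
    always surveyed in the same month of the cycle and the cohorts
    partition the whole population across a full rotation.
    """
    cohort = []
    for name in usernames:
        bucket = int(hashlib.sha256(name.encode()).hexdigest(), 16) % num_cohorts
        if bucket == month_index % num_cohorts:
            cohort.append(name)
    return cohort

devs = ["alice", "bob", "carol", "dave", "erin", "frank"]
for month in range(3):
    print(f"month {month}: survey {monthly_cohort(devs, month)}")
```

Because assignment is a pure function of the username, no state needs to be stored between survey rounds, and adding new hires just slots them into an existing cohort.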
The high-quality talks kept rolling in with Ty Smith and Adam Huda from Uber. One of the three parts of their talk shared how they are using AI to reduce toil. AI is being heavily deployed to improve testing: they built an AI agent to generate test code (it seems similar to Meta’s TestGen-LLM (summary)). They also built tooling to migrate their Java codebase to Kotlin at scale for their mobile development (their slide said they “banned Java” in 2024), and beyond that, they used AI to speed up the migration itself. They are aggressively looking for ways to use AI to make their developers' lives better, and I love it.
The talk by Rebecca Fitzhugh and Maurits Evers homed in on why data scientists are needed when dealing with (large amounts of) data, such as the data we collect when we’re trying to measure productivity and happiness. Their talk was fun and engaging, highlighting how a naive approach to data analysis can lead to wildly incorrect results (the Datasaurus paper from Autodesk, by Justin Matejka and George Fitzmaurice, does a great job of showing how summary statistics can mislead). The takeaway was that adding data scientists to your developer productivity team will improve your ability to deliver meaningful improvements to your customers (developers).
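The core of the Datasaurus point is easy to demonstrate with a toy example (my own construction, not from the paper): two datasets with identical mean and standard deviation but completely different shapes, so anyone judging them by summary statistics alone would call them the same.

```python
import math
from statistics import mean, stdev

# Evenly spread values vs. two tight clusters around the middle.
a = [4, 5, 6, 7, 8]
c = math.sqrt(2.5)
b = [6 - c, 6 - c, 6, 6 + c, 6 + c]

# Both have mean 6 and sample standard deviation sqrt(2.5),
# yet the distributions look nothing alike.
print(f"mean(a)={mean(a):.4f}  stdev(a)={stdev(a):.4f}")
print(f"mean(b)={mean(b):.4f}  stdev(b)={stdev(b):.4f}")
```

This is why the talk's advice holds: plot the data and involve someone who knows what questions to ask, rather than trusting a dashboard of averages.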
Yanina Ledovaya and Olga Lvova from JetBrains showcased their view of cognitive psychology in the area of developer experience and developer productivity. They showed why job enthusiasm, useful feedback about job performance, and peer support for new ideas are key to predicting software developers’ productivity, and they cited Peggy’s essential paper, “Towards a Theory of Software Developer Job Satisfaction and Perceived Productivity” (PDF). I didn’t get to see Adam McCormick’s talk, but his slides on psychological safety and performance are excellent; he makes the case that three things are mistakes: 1) prescriptivism (trying to be fair, we lose equity and diversity); 2) threats (under the guise of transparency, we induce dread, e.g. performance reviews); and 3) instability (chasing progress, we lose steady growth).
One of the most exciting talks of the conference for me was Juan Lopez Marcano's on Uber’s new approach to mobile testing. In the past I’ve seen brittle UI and mobile tests, built on a loop of retrieving a page, finding an element by identifier, and making assertions. For Uber the difficulty is magnified: their range of scenarios is huge (each city has different rules, plus there are multilingual considerations), and the app is constantly updated by a huge number of engineers. This meant UI tests were often breaking and rollbacks were extremely common. So Juan is using a multi-modal LLM to write tests in plain English, and then executing the mobile tests by taking screen grabs and using the LLM to work out how to proceed. The results were impressive, and I believe there is a commercial spin-off coming from this work. You can read more about DragonCrawl on the Uber AI blog.
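The screenshot-then-decide loop Juan described can be sketched roughly as below. This is a hypothetical illustration, not Uber's actual DragonCrawl implementation: the model, device, and action types are all stand-ins (a real system would send the screenshot and the plain-English goal to a vision-language model).

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "tap", "type", or "done"
    target: str = ""   # element description chosen by the model
    text: str = ""     # text to enter for "type" actions

def fake_model(goal, screenshot):
    """Stand-in for a multi-modal LLM call (hypothetical logic)."""
    if "login form" in screenshot:
        return Action("type", target="email field", text="rider@example.com")
    if "email filled" in screenshot:
        return Action("tap", target="request ride button")
    return Action("done")  # model judges the goal satisfied

def run_test(goal, device, max_steps=10):
    """Loop: screenshot -> ask model for next action -> apply it."""
    for _ in range(max_steps):
        action = fake_model(goal, device.screenshot())
        if action.kind == "done":
            return True
        device.apply(action)
    return False  # step budget exhausted: treat as a failure

class FakeDevice:
    """Toy device whose 'screenshots' are just state strings."""
    def __init__(self):
        self.state = "login form"
    def screenshot(self):
        return self.state
    def apply(self, action):
        if action.kind == "type":
            self.state = "email filled"
        elif action.kind == "tap":
            self.state = "ride requested"

print(run_test("request a ride, described in plain English", FakeDevice()))
```

The appeal over identifier-based tests is visible even in this toy: nothing in the test references an element ID, so a UI refactor that renames identifiers does not break the test as long as the screen still reads the same to the model.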
Thanks to Gradle Inc. for hosting a great conference. I met a ton of interesting people who are pushing the state of the art forward. I hope to see you all next year.
Director of Software Engineering
There are a ton of links in the article to source material and conference talks. Go check it out!