AI and Agility Learnings from XP 2023
I’ve had the pleasure of attending XP 2023 in Amsterdam this year. This was my first time at an event that has been held for over 20 years in cities across the world. And it was truly a gem.
What caught my eye about this conference was the workshop track. Since I am heavily involved in AI for Agile conversations (and yes, more news to come on this), I’ve been dying to connect closely with professors and researchers in academia. My drive was to get past the marketing case studies on Agile, which serve a different purpose and often present findings in a way designed to generate more business. Academia doesn’t serve a commercial purpose. Its purpose is knowledge, evolution, creativity, and self-discovery. An admirable stance when your goal is to accomplish something different, something great, something life-changing and unadulterated by monetary rewards. I'm glad to say, this event did not let me down!
My 4-day conference began with an AI workshop track, led by one of the leading academics on this topic, Pekka Abrahamsson. We dove immediately into the first topic, the elephant in the room: the infamous ChatGPT. “The fastest adopted software solution in the world, it is everywhere now and it seems we are all getting incredibly overwhelmed by it. Not a discussion goes by that doesn’t mention AI or ChatGPT. But, is it hype or is it real?” said Pekka. You’ll be the judge. However, there are things to consider, and we dove right in.
When asked the same question over and over, ChatGPT has a tendency to generate the same answer for certain questions and different answers for others. As Pekka Abrahamsson described, the early release of ChatGPT would always answer 4 when asked to “roll the dice” and always return 57 when asked to “generate a number.” This phenomenon has been fixed, and randomization in ChatGPT now works properly. Meanwhile, if you ask AI a question, let's say in history, considering that AI learns from data all the time, it could very well provide a different answer each time. Like a child, the AI can give you a wrong or imprecise answer to a question.
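To make the mechanics a bit more concrete, here is a minimal Python sketch of how a sampling “temperature” governs whether a language model gives the same answer every time or a varied one. This is an illustration of the general decoding mechanism, not a claim about how OpenAI actually implemented or fixed anything, and the token scores below are invented for the example.

```python
import math
import random

def sample_with_temperature(token_scores, temperature):
    """Pick a token from raw model scores (logits).

    temperature ~ 0  -> effectively deterministic (always the top-scoring token)
    temperature >= 1 -> increasingly varied answers for the same prompt
    """
    if temperature <= 1e-6:
        # Greedy decoding: the same prompt always yields the same token.
        return max(token_scores, key=token_scores.get)
    # Softmax over scores scaled by temperature, then a weighted random draw.
    scaled = {tok: math.exp(score / temperature) for tok, score in token_scores.items()}
    total = sum(scaled.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical scores a model might assign to answers for "roll the dice":
dice_scores = {"1": 0.5, "2": 0.7, "3": 0.6, "4": 2.0, "5": 0.4, "6": 0.3}

print([sample_with_temperature(dice_scores, 0.0) for _ in range(5)])  # always '4'
print([sample_with_temperature(dice_scores, 1.5) for _ in range(5)])  # varies run to run
```

With the temperature near zero, the model behaves like the early ChatGPT anecdote: the highest-scoring answer (here, 4) comes back every time. Raise the temperature and the same prompt starts producing different answers on each call.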
Now here’s a philosophical dilemma. A child learning and providing wrong answers shows real human intellect. For a human being, it is OK to provide a wrong answer, be corrected, and learn from the correction. If AI does the same, it truly does the job of operating as an artificial intellect. But are we willing to accept wrong answers from the AI?
Which of the cases below are acceptable for AI “learning”?
In either of these cases, we as human beings probably do not want to trust a system, no matter how smart it is, because the risks and costs of being wrong are just too high!
In his keynote speech, Prof. Dr. Dr. Tony Gorschek, considered one of the top scientists in Software Engineering, shared a lot about the impact and benefits of AI. What stuck with me from his presentation are some of the critical questions raised during the keynote.
The answers to these questions expose another weakness in our understanding of AI's implications.
“If you can judge the quality of the output, by definition you don’t need the output.” -- Tony Gorschek
Meaning, if the AI provides an answer that you already know, what’s the point of asking AI? The true value of intellect is discovery. Getting an answer that is already known is search. Yes, AI can offer an efficient search, but it is still a knowledge-based search. Another question: who or what would validate AIs? Will we need one AI to validate another AI?
The impact of AI on the world is significant and profound, and we will surely see the signs of change quite soon. This means we need to start changing the way we teach engineering. Twenty years ago, we taught developers to know and remember; they used books to find answers and experimented with the information provided. Ten years ago, we taught engineers not to memorize, but to have the ability to find the answers to their questions. Today, we need to start teaching engineers to work with knowledge-based tools like GitHub Copilot that assist developers with just-in-time implementation. But what’s next?
As the workshop continued, many more PhDs, PhD candidates, and practitioners shared their work. We covered many topics, from early results of using only ChatGPT prompting to develop software solutions and how well it worked (it was far from easy and far from perfect) to the benefits of AI in Agile. We learned that AI-assisted tooling was able to come up with a correct design pattern for a software problem 93% of the time (a good percentage, but what’s the potential damage in the 7% of cases where the suggested design pattern is wrong?). We learned that using AI in Agile is a close possibility in a knowledge-based advisory capacity, but never as a full replacement. And we also learned that research on future applications of AI in Agile software development is still scarce, with much more work needed to uncover and realize the potential in this area.
We also got together in groups to explore additional areas. For example, we agreed that Product Management is not something that AI can reliably take on. Not because it can’t come up with answers or suggestions, but because it will ONLY be able to come up with data-supported answers, and data doesn’t tell the whole story. There are just as many startups that succeeded by doing something everyone else said “will never work” as there are startups that went in the direction everyone believed would work, but failed. Sometimes success is driven by mere luck, and sometimes by gut feeling, even when the logic we understand says something “wouldn’t really work.”
The day closed with a panel discussing the impact of AI on education. The topic: how do we prevent students from using AI to generate their answers? A logical question, since AI can be easily accessed by anyone. I raised my hand to share a different perspective: why do we need to check whether a student used AI to generate an answer or write a paper? Perhaps the way we teach and assess work no longer fits the modern world. Our current education curriculum requires homework, tests, and papers to validate and certify acquired knowledge. But if that knowledge is easy to get from AI, is it knowledge that we need to teach? Perhaps we need to teach discovery, adaptation, and exploration of thought, rather than mere knowledge? Perhaps, instead of worrying about AI being used to provide answers, we should adapt so that students can use AI to teach them how to think, how to grow, how to evolve and discover? A very deep and thought-provoking conversation followed.
This was just the first day. Over the next 3 days, I had the pleasure of immersing myself in the keynote of Dave Snowden, who described organizations, complexity, and their intersection with engineering teams’ anthropological behaviors; Stuart Munton’s deep dive into team metrics; Eelco Rustenburg’s demonstration of how we as humans function at the physiological, hormonal level and why being Agile is not a mindset; and much, much more.
Bravo! A brilliant event!
Thank you to all who made XP 2023 happen: the organizers, volunteers, attendees, and especially the presenters!