ChatGPT - Developer serendipity
Sylvain Hellegouarch
Helping you engineer your resilience with @Reliably. Cup of tea lover.
ChatGPT has been all the rage in late 2022. There will be much debate for years to come about the capabilities and ethics of such technology. The whole question of it taking over developer and operations jobs is moot in my view. Evolution in our industry is a good thing; bits of ChatGPT will find their way into our workflows because they will have value, and that is fine.
As for me, once I used ChatGPT, I was left with the joy of being a newbie again. The serendipity of its replies was way more valuable to me than their actual correctness.
An assistant or a buddy?
When you use the ChatGPT interface, what stands out is really the conversational aspect of it. I mean, it says it on the tin:
ChatGPT: Optimizing Language Models for Dialogue
Indeed, I found myself almost immediately adopting a conversational interaction style. This was the opposite of the experience of using a search engine, where you take a crossword-like approach, hunting for the right keywords. It's quite a profound change, because the last two decades have optimised us for the latter. I have to say the chatbot succeeds at that imitation game fairly efficiently, and I found myself keen to say "Hi", "Thanks" and other human-specific pleasantries. This was not something I anticipated.
From a technology perspective, did the movie Her anticipate this would be possible so soon?
Perhaps this is the strangest part of it all. ChatGPT is so good at pretending to be a person on the other end of the line that, if you feel like a lone engineer - maybe remote work leaves its mark, or your colleagues are not that nice - it can become quite attractive to project human emotion onto the "discussion" you and the bot are having. It can become a buddy.
Form over substance
ChatGPT is impressive when it comes to form, but is it convincing when it comes to substance? At least in the contexts our industry might find relevant. It has clearly been trained on a very large and diverse dataset, but breadth is not depth. I was more seduced by its capability to be a presence than by it being a place to find the solution I'm looking for. ChatGPT does not try to be a Clippy, and that's where it succeeded for me. No one loves a know-it-all colleague looking over your shoulder.
The strength of ChatGPT is that I didn't have to browse a variety of links for a solution - though Google is so good that I rarely need to go past the first few links of the first results page. The challenge is that I don't know whether I can be confident in its solution. When I ask a peer, whether it's a colleague in my office or Stack Overflow, I'm looking for some sort of authority.
ChatGPT doesn't deserve that trust - yet. Obviously, that's unfair to ChatGPT, since it is not a Stack Overflow-like product. Still, for now, I would not use it to find technical solutions I could safely ship to production. However, I would likely use it in combination with Google when I cannot formulate my goal precisely.
Interestingly, with ChatGPT, it's the model we need to grant authority to, not individual instances of its output. It certainly makes sense to me that Stack Overflow disallowed ChatGPT answers, as they would have been a way for ChatGPT to piggyback on Stack Overflow's authority.
So, does that mean ChatGPT cannot be used meaningfully by a developer? I don't think so, but it does mean you need to use it with a different purpose in mind.
Serendipity
As we discussed, ChatGPT invites a conversational approach. I'm not thinking about the right keywords to locate the solution I need. Instead, I'm much fuzzier. The model has to find a way to accommodate this sparse context to propose a response.
What I did not expect during my exploration of the tool was to be surprised by answers that, while I knew them to be wrong, would open new opportunities.
As the maintainer of the Chaos Toolkit, a Chaos Engineering tool - what, you are not using it yet? - I was curious to see whether the model had been trained on it. Honestly, considering the responses I received, I'm not sure how much of it the model has seen, but it at least knew the context of Chaos Engineering. I wanted to see if it could come up with an example of an experiment. Unfortunately, ChatGPT is stubborn. When it doesn't know a good, precise example, it will feed you generic, rather bland, pieces of information, or simply tell you that, as a trained model, it has no way to interact with the outside world. So no luck for me. Meaning no luck for users of the tool either.
Yet, something happened that I didn't anticipate. When asked to show me a Chaos Toolkit experiment, it responded with a snippet of imperative Python.
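Reconstructed from memory with illustrative names - and remember, none of this API actually exists - the snippet looked roughly like this:

# A reconstruction of the kind of snippet ChatGPT returned.
# NOTE: this is not a real API. The chaostoolkit package does not
# expose an Experiment class, nor any of these methods, so this
# code will not run. It only illustrates the imperative style
# ChatGPT invented.
from chaostoolkit import Experiment

experiment = Experiment(
    title="Terminate an instance and verify the service recovers"
)
experiment.add_probe("service-responds", url="http://localhost:8080/health")
experiment.add_action("terminate-instance", instance_id="i-1234567890")
experiment.run()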
Now, everything in this snippet is invalid. The chaostoolkit package does not expose an Experiment class and, more importantly, Chaos Toolkit experiments are JSON/YAML resources, not Python code. For instance, here is one that runs a mild load test while dropping an AZ from an AWS ELB.
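The sketch below shows the general shape of such an experiment in YAML, assuming the chaosaws extension is installed. The endpoint, resource names, and the exact detach/attach functions are illustrative assumptions, not verbatim chaosaws API - check the extension's documentation before running anything like this:

title: The service tolerates losing an AZ behind its load balancer
description: Apply a mild load while an availability zone is dropped from the ELB.
steady-state-hypothesis:
  title: The application still responds
  probes:
    - type: probe
      name: app-responds
      tolerance: 200
      provider:
        type: http
        url: https://my-app.example.com/health   # hypothetical endpoint
method:
  - type: action
    name: apply-mild-load
    background: true    # run the load test concurrently with the fault
    provider:
      type: process
      path: vegeta      # assumes the vegeta load tester is installed
      arguments: "attack -duration=60s -rate=10 -targets=targets.txt"
  - type: action
    name: drop-availability-zone
    provider:
      type: python
      module: chaosaws.elb.actions              # hypothetical module path
      func: detach_load_balancer_from_subnets   # hypothetical function name
      arguments:
        load_balancer_name: my-elb
        subnets: ["subnet-0abc1234"]
rollbacks:
  - type: action
    name: reattach-availability-zone
    provider:
      type: python
      module: chaosaws.elb.actions              # hypothetical module path
      func: attach_load_balancer_to_subnets     # hypothetical function name
      arguments:
        load_balancer_name: my-elb
        subnets: ["subnet-0abc1234"]

The structure itself - a steady-state hypothesis probed over HTTP, a method mixing a background process action with a fault injection, and rollbacks to undo the change - is the declarative model the article contrasts with ChatGPT's imperative suggestion.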
What was fascinating to me was not how wrong ChatGPT was, but how much its response made me think. Chaos Toolkit was always built around the idea that you could define Chaos Engineering as code through a declarative model. Yet the imperative model ChatGPT suggested is elegant and opens interesting doors to using Chaos Toolkit in different ways.
I kept digging, and because its replies come fast and concise, you can avoid losing yourself in the details and focus on the "what if?" picture. It allowed me to prototype, at least in my head, what some new features and approaches could look like. This felt empowering, I must say.
Good or bad?
Technologies like ChatGPT are neither good nor bad in themselves. They are tools. I do believe the debate about taking over tech jobs is irrelevant, because that's not the value proposition it can aim for. Sure, some bits of automation will rely on it, but because there is no crowd-sourced authority, like a Stack Overflow, I think it'll struggle there - not to mention the legal risks a single authority would lead to. Yet, if you use ChatGPT as a way to explore solutions, I think it's a fantastic tool.
Use it or don't, but have fun when coding.