ChatGPT and the Software Profession
One of the amazing things that ChatGPT seems to be able to do is write code. Does that mean programmers are going to lose their jobs? Ha, ha, very funny. No. Of course not. Let’s give a simple example to see why it can’t replace programmers.
First programming assignment
Well look at that! It did in fact write a Python program and it is a correct program that does what was asked. And it even gives an example of what the output would be on two example strings and it’s the right output. That’s right, programmers. Start packing up your belongings and get ready for your new career as a bartender or massage therapist. Software developer jobs are now just for bots.
But hold on just a minute. Let's see if it has any idea what this program does and can make sensible use of it.
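The screenshots of the conversation aren’t reproduced here, but from the failure described below, the assignment was evidently along these lines: count how often a word appears in the concatenation of two strings. This is a hedged reconstruction, not the exact program ChatGPT produced:

```python
def count_occurrences(s1, s2, word):
    """Count occurrences of `word` in the concatenation of s1 and s2,
    including occurrences that span the boundary between the two strings."""
    combined = s1 + s2
    count = 0
    start = combined.find(word)
    while start != -1:
        count += 1
        start = combined.find(word, start + 1)
    return count
```

The interesting case is exactly the one ChatGPT got wrong: `count_occurrences("the ca", "t sat", "cat")` is 1, because "cat" only appears across the join of the two inputs.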
Uuuuurrrrrrrrr…. Crash!
Ok, that didn’t take long to show that it isn’t very good at programming. You can see that it appears to have concatenated the two strings but then doesn’t notice that the word ‘cat’ (highlighted) does in fact show up inside the result. The correct answer here is 1, not 0.
So obviously, it didn’t actually run the program. It cannot do that at the moment. It just guesses at what the answer is going to be; often correct, but not always. That’s not all that useful, is it? Let’s see what it thinks about its own program.
Obviously that is wrong, as this last example shows.
So what this shows is that ChatGPT can write some simple, easy-to-describe functions, but it cannot reliably reason about what the code it writes will actually do.
Because of this, it clearly cannot replace programmers. Reasoning out what code will do is quintessential to the act of software development. Real software is a hierarchical combination of different programs and the developer needs to understand what those programs do so they can determine the correct structure of their combination.
In fact, the whole idea of functions or subroutines as a programming structure is to create referential transparency. You don’t have to know how a function is implemented. You can compute on its output if you know what its output is going to be like, given certain inputs. It helps to give the functions good names and write documentation strings, but that is no real replacement for knowing exactly what they do in all cases. When you know that, you can set aside the details of how a function does its job, which shields us from excess complexity and allows us to keep building larger, more complex codebases.
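As a toy illustration (the names here are mine, not from the article): a caller can build on `median` knowing only its contract, never its implementation.

```python
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def median_absolute_deviation(values):
    """Build on median() knowing only its contract ("returns the median"),
    not the sort-based implementation above."""
    m = median(values)
    return median([abs(v - m) for v in values])
```

If `median` is later reimplemented with a faster selection algorithm, nothing in `median_absolute_deviation` needs to change; that is the referential transparency doing its job.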
The circular nature of ChatGPT’s programming chops
This is basically what ChatGPT does (AFAIK). It reads through loads of open source code. It also reads through any demos that are online about how to program. Then it recognizes various patterns that it can learn. Then, when prompted to write a particular program, it looks for patterns in the prompt, connects them to the deep features that correlate with them, and generates reasonable-looking code.
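The internals of GPT are far more sophisticated than this (transformers, not lookup tables), but the core idea of "predict what usually comes next from patterns seen in training data" can be caricatured with a toy bigram model:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it in the training data."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequently seen continuation of `token`, or None."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]
```

Trained on lots of Python, even this crude model learns that `in` is usually followed by `range` inside for-loops. It produces plausible continuations with zero understanding of what the code does, which is exactly the limitation described above.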
So it is basically limited to programs with certain characteristics: short, self-contained, and closely resembling the demo programs it has already seen.
And how do people test it to see if it can write good programs? Well, of course they choose one of these typical demo-type programs. And so yes, it does a good job on those kinds of tasks, but they are not the kind of code that we actually need to write.
We generally work on more complex and highly structured codebases where we start with some simple stand-alone programs but then combine them together hierarchically. We know how to combine them because we understand the context of the larger problem being solved. We also factor the code well to contain complexity and make reasoning about the software easier.
For example, here is a program I wrote recently (somewhat modified):
Note that there is no great way to describe this program that a computer could understand. In fact it is just a wrapper around two other functions, make_plot and make_bar_chart. What these actually do is not described by the documentation string.
Here is what make_plot looks like:
Again, there is a doc string which describes more about what the plot should look like, but it’s not enough to actually write the program. It also depends yet again on some other functions: get_date_nums, make_ts_plot and plot_title.
What are they like?
Again, there is a doc string which describes it some more but does not tell you enough to write the code if you don’t understand the context of the larger problem being addressed. And it also depends on this function get_counts. And so on and so on.
And then
Yet again, this has a doc string which guides a human reader into what the function does. But it doesn’t tell you enough. You have to read the code to learn that it expects a raw_data stream of records which have fields ‘CustID’ and ‘date’ etc. Apparently those records will not have unique CustIDs, so they need to be grouped and each group sorted by date so the first date can be returned.
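Since the screenshot is not reproduced, here is a rough sketch of what a helper with that behavior might look like. The field names ‘CustID’ and ‘date’ come from the description above; everything else is assumed:

```python
from collections import defaultdict

def first_date_per_customer(raw_data):
    """Return each customer's earliest date from a stream of records.

    `raw_data` is an iterable of dicts with (at least) 'CustID' and 'date'
    fields; the same CustID can appear in many records.
    """
    dates_by_customer = defaultdict(list)
    for record in raw_data:
        dates_by_customer[record["CustID"]].append(record["date"])
    # Earliest date per customer (assumes dates are comparable, e.g. ISO strings)
    return {cust: min(dates) for cust, dates in dates_by_customer.items()}
```

Notice that even this sketch only makes sense because the surrounding prose explained what the record fields mean; the doc string alone would not have been enough.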
The point of this is to show that real code that developers actually write doesn’t look like demo code. It is not self-contained. It depends on other functions which may or may not have been created yet. You can’t easily describe what the code should do. You can only summarize it. The only thing that specifies exactly what it will do is the code itself.
In addition, when we write code, we generally know what we are going to use it for. Often that means building yet another layer on top. The layers can be stacked very deeply, especially in a mature codebase. We can also write the code in either a top-down or bottom-up fashion.
There are ways to write the code that might make ChatGPT more helpful; for example, writing the tests first. But ChatGPT and LLMs in general cannot do the real job of a developer and will likely not be able to no matter how much code they look at. It requires some of the most difficult and mysterious parts of human intelligence.
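A test-first workflow, for example, gives the model a precise target to hit. The names here are hypothetical, not from the article:

```python
import unittest

def slugify(title):
    """Turn a title into a lowercase, hyphen-separated slug.

    This is the kind of small, well-specified function an LLM could be asked
    to write once the tests below pin down the expected behavior.
    """
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace_is_collapsed(self):
        self.assertEqual(slugify("  Tabs\tand  spaces "), "tabs-and-spaces")
```

The human still writes the tests, because the tests encode the context of the larger problem; the model just has to satisfy them.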
What can it be used for in programming?
This is not to imply that LLMs cannot aid us in writing code. They certainly can. But that is not to replace a developer, any more than a chainsaw replaces a lumberjack. It just changes how they work and gives them the tools to work more productively.
Consider auto-complete. We have this in word processing, at the Unix terminal, on our phones, etc. It is helping me right now to write this article. It basically recognizes patterns and tries to predict what comes next; exactly what ChatGPT does. If it predicts correctly, we can hit a single key and it will write it for us. In fact, you can think of ChatGPT as a glorified auto-complete. It can just go much further and suggest much more complex things. And we know what havoc auto-complete can cause if we aren’t careful to check what is being suggested.
If you are not especially worried about controlling what is said, you might find this very useful. It might even be able to write the whole thing for you. But when you do care what is going to be said and want more control, you are not going to want it to continue very far ahead and make too many decisions for you.
So clearly in programming, we want this to act like a better auto-complete and we want to have control over how far it extrapolates and an effective review mechanism. In many cases, and especially when it improves, it will make programming much more productive. Instead of spending so much time typing up simple things that require little skill, we will be able to focus more time and attention on the truly difficult tasks that require our actual expertise.
This is where things are going. It’s also a very straightforward continuation of a trend that has been in place for the entire history of computing. It’s not so different; just the obvious next step in creating higher productivity tools.
The first computing was done with vacuum tubes and brittle electric circuits, so we came up with transistors and integrated circuits. Then we invented programming languages like assembly rather than working in raw machine code. Then we invented higher-level programming languages like C and, later, Python. We got the internet, which allows easier sharing of code libraries, so we no longer needed to write the same common programs over and over. We got databases and operating systems and cloud computing, etc.
The main theme that has always been there is to remove repetitive work that doesn’t really have much to do with the rare software development skills that we have. I don’t want to be fixing burnt out tubes, or implementing my own database, or writing Quicksort for the millionth time. I don’t even want to have to deal with computer infrastructure. I don’t want to have to look through a dozen files to find some function that I wrote a few weeks ago. So I have tools which allow me to focus more on what I am actually best at and so that I don’t have to do repetitive work.
IDEs of the future
Most software developers, data scientists, etc. work with integrated development environments (IDEs). These are programs designed for writing code that have many built-in utilities for higher productivity. You can of course use Microsoft Word or any text editor to write code, but then you need to do everything yourself. An IDE does useful things like spotting syntax errors and letting you rename a function without having to find everywhere it is used in the code manually. It can in fact do some very complex transformations, such as refactoring, or check whether code will compile, etc.
The obvious use of ChatGPT and LLMs is in making our IDEs better. The net result is just more productive developers. In other words, the same natural continuation of an 80 (or so) year old trend. Exactly how this will work is uncertain. That’s for IDE developers to figure out in communication with their customers, us developers.
Not Luddites again?
There is absolutely nothing bad or threatening about ChatGPT for developers. It will just make you more productive. Or rather, it will make you more productive if you adopt these tools. If you do not, and you cling to the way you have always done things, you may fall behind your peers. In this case, you may indeed lose your job, but not to a bot; rather, to a more productive programmer. This is the way the economy has always worked. You need to adapt and stay competitive using whatever technology is available.
You might say something like: “What happened to those rooms full of nice ladies who did numerical computations for NASA from the 1940s through 1960s? (Look it up, it’s a good story.) Didn’t they lose their jobs when electronic computers came along?” Yes, they lost those jobs, and many of them became the first computer programmers. They wrote code in languages like Fortran and assembly that put humans on the Moon. Some of them probably retired. Some of them probably found other manual computation jobs at institutions that didn’t have the resources to buy electronic computers. Very few of them ended up unemployed and miserable. They were intelligent and skilled people, and people like that are always in demand.
There are very few examples of some technology coming along to replace workers entirely and instantaneously. Yes, better coal mining technology led to fewer miners and better textile machines led to fewer textile workers (see the Luddite movement). But these two examples have something in common. The demand for coal or textiles did not increase fast enough in reaction to the drop in production costs. Same with farming. The people of the World can only eat so much. That’s why we have fewer farmers than 100 years ago.
But many things are not like this. Software in general is something in very high demand by businesses and consumers. The problem is that it costs so much to produce. If we can produce it cheaper, the demand at that new price will skyrocket. Many companies will invest just as much in technology or even more if they are getting so much value out of it. In fact, there will probably be even more programming jobs to soak up all this new demand. The result is just an economy with higher output, and therefore people, including programmers, bringing home higher incomes and enjoying a higher standard of living.
Conclusion
Don’t fear ChatGPT. It is not here to take our jobs. It is here to make us more productive. It can write some code, but not very well. Like most tools, it is best used in the hands of skilled humans, so that the human can focus their time on things that only humans can do, rather than wasting time on repetition. LLMs’ real place will be in super-powered IDEs. They will also allow everyone to learn faster, just as Google and the internet did before. Google might need to worry some, but it can develop its own competing products or, if needed, buy out OpenAI. Let’s not worry too much about them. They can take care of themselves.