No, ChatGPT Is Not Writing Code For You
"create some javascript code that shows a donut chart in chart.js for me"


I’m sort of being lightly driven round the bend by the million-plus people on social media who are falling over themselves about this whole “ChatGPT is writing code?!?!” thing.

As such, it is time for a gentle rant.

ChatGPT is without a doubt a watershed moment in that it moves us into a different mode of information discovery. As I wrote about before, the whole “here’s a bunch of web pages, summarise them yourself” approach is old hat. There will now be “software agents” like ChatGPT that do summarisation work for us.

It is an amazing achievement, and I think one of the reasons why it feels so “woah” is that I don’t think anyone knew the state of the art was where it is. I would remind you, dear reader, that there are some issues with it. For example, ask it “who is queen elizabeth 2” and it describes her in the present tense.

Anyway, back to code.

If you open TikTok and look for “chatgpt coding”, the videos you get back are nearly all from non-professional programmers. The videos show ChatGPT responding with blocks of code in response to prompts.

One example has the user asking the tool to “create a block of JavaScript that uses Chart.js to create a donut chart showing business costs and expenses”. The tool responds with a block of code that the user copies and pastes into CodePen and, voila, it works.

The output is actually decent. The code looks sensible, and importantly it also emits the code required to import the Chart.js library and a sample of the HTML needed to host it.
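To make the example concrete, here is a minimal sketch of the kind of snippet the tool emits for that prompt. The labels and figures are sample data standing in for “business costs and expenses” — they are illustrative assumptions, not output copied from ChatGPT.

```javascript
// Chart.js configuration for a donut chart (Chart.js calls the type 'doughnut').
// Labels and values below are sample data, as in the generated example.
const config = {
  type: 'doughnut',
  data: {
    labels: ['Rent', 'Salaries', 'Marketing', 'Other'],
    datasets: [{
      label: 'Business costs',
      data: [1200, 5400, 800, 600],
    }],
  },
};

// In a browser, this is paired with a <canvas id="myChart"></canvas> element
// and a <script> tag importing Chart.js from a CDN, then rendered with:
if (typeof document !== 'undefined') {
  new Chart(document.getElementById('myChart'), config);
}
```

Pasted into CodePen alongside the HTML scaffold, a snippet of this shape renders a working chart — which is exactly what the TikTok videos show.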

And, as I said before, it runs. There are millions of other examples; I’ve spent a good while mucking around with it, and it does some clever stuff.

What it completely misses, however, is the “engineering” part of software systems development. Referring back to our example that generates a donut chart, the code emitted by ChatGPT has sample data in it. To be useful, that chart needs to get its data from somewhere – it needs to make a call over to (e.g.) a REST endpoint. That endpoint has to be able to get the data from somewhere, e.g. a database. That data likely has to be massaged into shape and returned.
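A sketch of that missing glue makes the point. The endpoint path (`/api/costs`) and the row shape (`{ category, amount }`) here are hypothetical assumptions for illustration – they are not part of any real API, and the “massaging” step is exactly the code ChatGPT’s sample-data snippet never has to write.

```javascript
// Massage rows from a (hypothetical) REST endpoint into the shape Chart.js expects.
// Assumed row shape: { category: string, amount: number }.
function toChartData(rows) {
  return {
    labels: rows.map(r => r.category),
    datasets: [{ label: 'Business costs', data: rows.map(r => r.amount) }],
  };
}

// Browser-only wiring: fetch from the hypothetical endpoint, transform, render.
if (typeof document !== 'undefined') {
  fetch('/api/costs')
    .then(res => res.json())
    .then(rows => new Chart(
      document.getElementById('myChart'),
      { type: 'doughnut', data: toChartData(rows) }
    ));
}
```

Even this sketch glosses over error handling, authentication, and the server side of that endpoint – all of which is engineering work, not chart-drawing work.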

Building software – separate to the concept of engineering – is the process of taking a great number of separate parts and connecting them together so that inputs are transformed into outputs. You cannot build anything of meaning without a “great number of separate parts”, as you need to get over a threshold of triviality in order to create something that delivers value.

The problem with how people are looking at ChatGPT for “coding” is that there is an expectation that it is doing something more than the very trivial. Don’t get me wrong – it is *amazing* that it can do what it does, but the outputs are trivial. They are the smallest building blocks that go into an engineered solution.

For example, let’s say I am a capable full stack developer with ten years’ experience who has been asked to put a donut chart on a screen in an application. With that much experience, I am likely to be very skilled, and I know how to use Bootstrap, and React, and WebAPI, and SQL Server, and C#, and .NET 7, and blah blah. But I may well not have ever had to use Chart.js before.

If I Google for “create a donut chart in chart.js”, the first link happens to be the Chart.js docs, which I can read, and there is an example there of how to do it. Alternatively, I can limit my search to Stack Overflow and, voila, there’s an example there as good as the ChatGPT one.

The key here is that, as an engineer, there is a gap in my knowledge. I’ve never used Chart.js before, and I need to get this chart in. I’m also glossing over the assessment of whether Chart.js is a safe library to use – what risk-management thinking is going into the decision to include or exclude that library? Assuming it is safe, within five minutes ChatGPT gets that chart into the application on a proof-of-concept basis.

What ChatGPT is looking like it will be good at is creating baseline proof-of-concept examples that inform the engineering process.

For example, I did manage to synthesise some interesting examples of things I didn’t know how to do, such as “how to create objects that can hold an address in Erlang”, or “how do I host a Rust process in IIS”. If I just want to know where to start, ChatGPT looks very useful – but I can imagine that sort of question ending up being asked via whatever Google integrates as its competitor to ChatGPT.

There are two more factors to consider with ChatGPT – one specific to the engineering domain, and one general.

Firstly, ChatGPT via GPT-3 is trained on Stack Overflow. If people *en masse* stop using Stack Overflow and start using ChatGPT, you’ll end up with some sort of informational void. Post a question on Stack Overflow and someone will answer it – and this adds to the corpus of data that is Stack Overflow. Ask ChatGPT, and all you’re doing is mining the existing set – there’s none of the accretive effect that has made Stack Overflow so successful.

Secondly, GPT-3 (and therefore ChatGPT) is not real time. It thinks one Lizzie is still alive, and that the other Lizzie (Liz Truss) is a lowly MP. A snapshot of the model has to be trained, and this realistically is the “moat” that Google still has, because Google is real time. Peculiarly, I suspect this might be less of a problem than it sounds for software engineering activities, because by the time things find their way into general use they’re usually at least 12-18 months old – “bleeding edge” is not that important in this space.

Thomas Radman

Philosophising with my AI Agents about technology, post-labour economics and the future of nutrition

1y

Totally agree. We just need to manage our expectations. Though as a toddler (if counted in human years) it does some pretty decent things when guided properly ... https://www.dhirubhai.net/posts/thomas-radman-5316837_check-out-this-sharegpt-conversation-activity-7032729461867339776-TRAc?utm_source=share&utm_medium=member_desktop

Theresa Southern

Director of Marketing at Brand Hause

1y

I agree. As someone who is most definitely NOT an engineer or developer, I find that searching for solutions and having to sift through various answers to find one—or create one—is far more educational. For actual coders, even if you are proofing it, doesn't this also affect YOUR learning process and/or ability to improve your code? It's like writing your own book vs. editing someone else's writing. Anyways, seems more like a gimmick for this. I'm sure there will be an amazing use but imo, AI is not the magical tool that will allow us all to become coders.

Nikolai Petrov

Original Developer of BizRuleAnalyzer, LoanDebugger, FormDebugger & CodeWizard

1y

Try asking it to write code with recursion and it will likely make mistakes. It's definitely not a replacement for writing proper code, but it can potentially be used to write some parts with clear inputs and clear expectations for results. In some industries that can be as little as 5% – for example if you have a rather proprietary existing codebase where all new code is literally patching something. And in some it may be as much as 70% of all the logic, while the remaining 30% is actually written by someone. That someone will likely be a significantly more senior dev than the one ChatGPT would replace, though, since they should be able to spot issues with generated code.

Eric M.

Co-founder and architect at Eruditis, an ml-based fintech startup

1y

I appreciate what you are saying. I've been using it to write chunks of code which I assemble into a larger system, and it works well for that. I would submit that it is writing code for me; it's just not building systems or engineering entire projects. Your point about not having Stack Overflow to learn from is well taken, and similar to statements about writing in general: how will a model know what is "great" writing without great examples to learn from? I suspect that this won't be a material issue for much longer, however – with reinforcement learning, it is or will be possible to define actors and an action space (programming language, operating system, libraries) that teach themselves how to write efficient, high-performance, stable, scalable, safe coding solutions. Each of these parameters (efficiency, performance, stability, scalability, safety) is measurable and can therefore be expressed in a cost function, which becomes a guide to learning. Just as with chess, eventually the human chess masters are too limited to teach the computer anything, and they instead become the student. This is the natural way of things, and will also be true of coding, probably within a few years.

Anthonio Rajaonarisoa

étudiant(e) (ESPA Vontovorona)

1y

I asked it to write code to convert longitude and latitude coordinates to UTM xy coordinates using a specific C++ library called Eigen. I haven't tested the resulting code, but it looks spot on and more relevant than the first two pages of a Google search. In fact the code I found on Google was more complex than needed and not very specific to this mathematical problem AND this specific library.
