On "ChatGPT is bullshit" and code generation
Morten Heine Sørensen
A recent paper on ChatGPT claims that "ChatGPT is bullshit" https://lnkd.in/dus_xz-E
The authors use multiple more or less rigorously defined notions of "bullshit" including so-called "soft bullshit": "speech or text produced without concern for its truth – that is produced without any intent to mislead the audience about the utterer’s attitude towards truth."
The authors argue that "bullshit" is a more appropriate term rather than thinking of ChatGPT's untrue claims as lies or hallucinations. For instance, the authors mention that to lie often means to "make a believed-false statement to another person with the intention that the other person believe that statement to be true."
It's puzzling why the authors, so concerned about finding just the right term to describe the texts of ChatGPT, choose for their title a string that suggests that all or most texts produced by ChatGPT are worthless. This is certainly not demonstrated in the paper, and not true. Perhaps the authors' intention is to put that image in the mind of the reader, at least temporarily, to attract attention, which would make them liars according to their own definition.
A more scholarly and appropriate title would be "A new characterization of untrue statements produced by ChatGPT."
Incidentally, by their definition, isn't any application that reads text from a database and displays it on a web page producing bullshit? A program has no concern or intent, and its data may contain strings that are not true in the real world.
So the main point of the paper is that ChatGPT is bullshit, but not lying. But can a program actually lie? I think it can be programmed to lie. For instance, imagine a search engine that searches the internet and returns the full texts of the hits, but with the word "not" inserted at the correct place in every sentence.
Conversely, ChatGPT is programmed to present correct conclusions from the training set. If the training set consists of true statements, ChatGPT's conclusions will usually be true too. If the training set statements are untrue, most of the conclusions will be untrue as well, and that is not lying.
But the distinction between bullshit and lying is not the main point, I think.
The real point is probably that ChatGPT may be trained on a set containing only true facts or correct instructions and still provide as a result untrue statements or incorrect instructions. As the authors say, "ChatGPT itself produces text, and looks like it performs speech acts independently of its users and designers."
And this is why it is fantastic for code generation.
Why?
Because it combines elements from the training set into customized answers to all our requests for code, rather than simply returning the examples from the training set.
And the issue of getting wrong code from ChatGPT is a familiar condition of life in every project, because developers also make mistakes. So when we get the code from ChatGPT, we run it against our automatic test cases. If there are errors, we point them out and have ChatGPT fix them. And whenever we get new code, we run the same test cases plus the ones for the new code.
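The loop described above can be sketched as follows. This is a minimal illustration, not a real integration: `ask_model_to_fix` is a hypothetical stand-in for an actual call to ChatGPT, and the buggy `add` function stands in for generated code of any size.

```python
# Sketch of the review loop: generated code must pass the existing
# test suite before it is accepted; failures are sent back for a fix.

def ask_model_to_fix(code: str, failures: list) -> str:
    # Hypothetical stand-in: in practice this would send the code and
    # the failing test names back to the model and return revised code.
    return code.replace("a - b", "a + b")  # simulated "fix"

def run_tests(namespace: dict, tests: list) -> list:
    """Run each (name, check) pair; return the names of failing tests."""
    return [name for name, check in tests if not check(namespace)]

generated = "def add(a, b):\n    return a - b\n"  # buggy generated code
tests = [
    ("adds positives", lambda ns: ns["add"](2, 3) == 5),
    ("adds zero",      lambda ns: ns["add"](0, 7) == 7),
]

for attempt in range(3):          # bounded number of fix rounds
    ns = {}
    exec(generated, ns)           # load the generated code
    failures = run_tests(ns, tests)
    if not failures:
        break                     # all tests pass: accept the code
    generated = ask_model_to_fix(generated, failures)
```

The point is simply that acceptance is decided by the test suite, not by trusting the generator, which is exactly how code from a human team member is treated.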
As I tried to explain, the process is, indeed, similar to working with a team of developers. Trained developers who read the books and did the coding, also produce incorrect code. https://formalit.dk/ChatGPT_paper.pdf
Some people write a one-line sentence to ChatGPT and expect to get a working application back. The result is a disaster. But the same would be the case if you gave the same level of guidance to most developers. When we ask ChatGPT, it matters a lot how we ask. In fact, we must have a good understanding of what we want.
When we use ChatGPT to generate code, it is not - at least in some scenarios - because we don't know how to write it. We may know exactly how to write it. But ChatGPT can do it way faster than we can, or a developer in the team can.
So do you think the sentence ""ChatGPT is bullshit" is bullshit" is bullshit?
Well said. My issue is a different one. I still haven't seen or heard of an application of generative AI that addresses truly impactful problems. Now, we might jump into a discussion of what that means, but stay with me: I have only seen applications of optimisation: Doing the same things faster. In that sense, I am not particularly impressed by the quality, and I worry that the amount of generated content and code over the next few years will constitute the vast majority of what is available on the internet. At that point, if it happens, it will be a priori bullshit. Don't get me wrong. Optimising existing workflows is a great thing. But is that the promise of generative AI?
Even naturally intelligent humans sometimes generate bullshit. Maybe that is not a metric to use for measuring AI?
Hi Morten, Thanks for the insightful comment and article, which make a lot of sense to me.

Now, reading your comment here made me think of the term "bullshit" independently of this context. And I wanted to point you to the work of the now (and far too early) deceased American-English sociologist David Graeber, who turned the term "bullshit job" into a technical term with his work published in a wonderful book entitled (you guessed it) "Bullshit Jobs". Subtitle: The Rise of Pointless Work and What We Can Do About It. Graeber was an eminently gifted and humorous man. A true intellectual force. May I recommend also the Youtube video: https://www.youtube.com/watch?v=kikzjTfos0s where one can experience Graeber at his best: talking about bullshit jobs.

Now, perhaps back to the context (although, as said, my recommendation is quite independent of that). Perhaps ChatGPT teaches us that there is such a thing (technically speaking) as bullshit jobs in software development. That would be the kind of work best left to ChatGPT, almost by definition. Just an afterthought. Best wishes Jakob