Will AI Make Programming Obsolete?


By now we can see that the rise of GPT-4, Midjourney, Gemini (formerly Google Bard), and many other advanced AIs has begun to take over parts of our everyday work. ChatGPT can write articles, legal documents, and code, and Midjourney can even draw. These models have begun to learn how to reason and can produce reasoning of their own. The architecture behind all of these AIs, the neural network, has become so effective at learning from billions, even trillions, of words of human language data that it offers many advanced capabilities that can help with, or even replace, manual work. We have come a long way from the von Neumann architecture of 1945 to today's neural networks.

With ChatGPT, you can even ask it to write code in any language you know. You can ask it to write C or even assembly if you want to. AI models work through generalization, as explained in our earlier article here, and ChatGPT has been fed so much programming data that it can now write code on its own. It does this by treating programming languages just like any other human language, such as English, and generalizing over the vast number of programming patterns found in public GitHub repositories and many other sites on the web. Many have reported that ChatGPT "stole" their code (which, for our part, we do not mind).

Knowing that ChatGPT can write code, and that AI models like ChatGPT will only get more powerful over time, a question starts to arise: will ChatGPT make programming obsolete? To answer this, we need to look at the quality of the code that ChatGPT writes.

ChatGPT is a Large Language Model (LLM): it works by generalizing over a huge amount of language data. Programming languages are no exception, so ChatGPT learns a combination of human and programming languages. It can also do mathematics, so you can, for example, ask ChatGPT to write Go code that does calculus.

package main

import (
    "fmt"
    "math"
)

// Define the function you want to integrate
func f(x float64) float64 {
    return math.Sin(x) // Example function: sin(x)
}

// Trapezoidal integration method
func integrate(a, b float64, f func(float64) float64, n int) float64 {
    h := (b - a) / float64(n)
    sum := (f(a) + f(b)) / 2.0

    for i := 1; i < n; i++ {
        sum += f(a+float64(i)*h)
    }

    return sum * h
}

func main() {
    // Define integration bounds
    a := 0.0
    b := math.Pi // Integration bounds for example: 0 to π

    // Number of subdivisions (higher value -> more accurate result)
    n := 1000

    // Perform integration
    result := integrate(a, b, f, n)

    fmt.Printf("Result of integration: %.6f\n", result)
}        

The function above was written entirely by ChatGPT. It even includes human-readable comments that explain how the code works. ChatGPT approximates the integral numerically with the trapezoidal rule, a refinement of the Riemann sum and one of the most common ways of doing integration in code.
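
For readers who want to see exactly what the generated function computes: with step size h = (b − a)/n, the trapezoidal approximation is

\int_a^b f(x)\,dx \approx h\left[\frac{f(a)+f(b)}{2} + \sum_{i=1}^{n-1} f(a+ih)\right]

which is precisely the sum that integrate accumulates before multiplying by h at the end.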

Some people, however, have reported bugs in the code that ChatGPT has written. ChatGPT itself is not perfect, but then we cannot expect people to write perfect code either. From the result above, we can already see that ChatGPT has surpassed many amateur coders, since many would struggle to write the integration code above themselves. In some cases, though, people report spending so much time debugging ChatGPT's code that writing it themselves might have been faster. That can be true as well; after all, we can expect the same with human coders too.

Are We Going to Be Obsolete?

So are we, software developers, going to be obsolete? Are we going to be replaced by AI? It is very difficult to answer this question, as nobody can predict the future. We do, however, have some opinions about this:

Calculators do not replace the need for mathematicians. Excel does not replace the need for statisticians.

Will AI replace the need for software developers? In our opinion, it will not. Even ChatGPT admits that it is not intended to replace software developers:

As an AI language model, I'm here to assist and augment the capabilities of software developers rather than replace them. While I can generate code snippets, provide suggestions, and offer solutions to various problems, developers bring creativity, critical thinking, domain knowledge, and problem-solving skills to the table. Additionally, developers are essential for understanding the broader context of projects, ensuring quality, maintaining codebases, and collaborating with other team members. Think of me as a tool in a developer's toolkit, capable of providing assistance, speeding up certain tasks, and offering insights, but ultimately, developers are the ones who use these tools to create innovative solutions and drive progress in software development.

ChatGPT's capability to reason always amazes me. It recognizes the most important trait needed for programming: creativity. We do not yet have a clue what creativity actually is in a technical sense, how our brain works to produce it, let alone how to implement an AI model that has it. There is a huge number of possible moves in chess, and there are practically limitless possibilities in writing even a simple piece of code. Even with countless musicians, we never run out of notes to compose music from. With a number of possibilities approaching infinity, it seems improbable to design an AI that can capture every possible code pattern. Humans have limitless creativity, and with creativity we can solve problems in ways never seen before. Creativity also sits at the heart of innovation, and almost every discovery, in pretty much every field of science, was made with the help of creativity.

As programmers, we tend to forget that we are actually software engineers. As engineers, just like in other fields (civil engineering, chemical engineering, and so on), we are tasked to analyze problems, design solutions, and then implement them. This whole process is impossible without creativity. Creativity does not belong only in art; it belongs, to a similar or even greater degree, in engineering. This is something that AI cannot replace, at least not yet. None of the modern AI models were designed to inherently capture creativity, because we do not even know, in a technical sense, how creativity develops in people. Unless we can somehow measure and quantify creativity, we cannot train an AI model to be creative.

Generative AIs like ChatGPT and Midjourney can indeed generate a lot of things. Midjourney can draw, and drawing requires creativity in humans. The thing is, Midjourney does not use creativity to draw at all. It learns patterns from an enormous number of images and tries to recreate them. It does not need creativity for that, because what AI does is capture and generalize over a huge number of patterns. That is something humans can never do: we can never absorb as much data at the pace AI can. But even though we cannot compete with AI at pattern matching, we can still create better drawings than AI.

When humans are tasked with drawing a certain picture, creativity comes into play. Humans learn by capturing patterns, just like AI, but we need far fewer images to learn how to draw. This is what creativity does: it makes the learning process very efficient. We can somehow produce things that did not previously exist, completely out of imagination. AI, of course, cannot use imagination, and that may not change in the near future.

AI as a Tool, Not a Replacement

Instead of treating AI as a complete replacement for programming, maybe what we need to do is learn how to use AI effectively to augment our everyday tasks. AI does very well at generating simple code, like the integration example above, so we can use it simply to write code faster. This does not render our knowledge obsolete; on the contrary, using AI well requires a high level of knowledge, because reading code is harder and can take longer than writing it from scratch. Reading code is a skill developed in parallel with writing code, and it is actually the more difficult of the two.

Apart from the ability to read and understand code quickly, we as programmers usually end up learning only how to write code, not how to write effective code that is easy to read. Clean-code practice evolves over time too, following the evolution of programming languages. Clean Code by Robert C. Martin is the go-to book for clean-code best practices, but its examples mainly cover Java, a heavily object-oriented (OOP) language. Newer languages are usually multi-paradigm and use syntax radically different from Java's, so they probably come with slightly different clean-code practices. The AI does not care much about clean code, as it does not currently learn what is clean and what is not. After all, if all of our code is going to be written by AI and then read and improved by AI, is clean code still important?
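
As a hypothetical illustration (the names and numbers below are ours, not taken from the book or from any real codebase), the same logic can read very differently depending on how it is written:

package main

import "fmt"

// d computes a discounted price, but the terse name and the magic
// number force the reader to guess what it does.
func d(p float64) float64 {
    return p - p*0.1
}

// memberDiscountRate gives the magic number a name.
const memberDiscountRate = 0.1

// ApplyMemberDiscount makes the intent obvious at the call site.
func ApplyMemberDiscount(price float64) float64 {
    return price - price*memberDiscountRate
}

func main() {
    fmt.Println(d(100))                   // unclear what this result means
    fmt.Println(ApplyMemberDiscount(100)) // self-documenting
}

Both functions compute the same thing; the difference is only in how quickly another engineer, or a reviewer of AI-generated code, can understand them.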

Code practices sometimes encourage us to continually upgrade our code and adopt newer dependencies. Recently, Go 1.22 introduced version 2 of the math/rand package, math/rand/v2, which can be imported to replace existing integer random generation code and offers more fine-grained ways of generating random numbers. I asked ChatGPT to generate a simple snippet that produces a random number specifically with this newer v2 package. This is the response:

ChatGPT Refuses to Import the math/rand/v2 Package

Based on the response, ChatGPT uses the math/rand package, not math/rand/v2, even though it claims otherwise. This only shows how it works: it generalizes and predicts responses over a large number of different inputs. The language model was trained on a huge number of question-and-answer exchanges, in many different languages. In the end, ChatGPT knows how to generate a coherent response to many questions and statements, but it is not really able (at least for now) to capture the reasoning behind the written language. After all, Go 1.22 is still new, and since ChatGPT learned from a large amount of code on GitHub, probably not many people have math/rand/v2 in their code yet, so ChatGPT may simply not have caught up with it.
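
For reference, here is a minimal sketch of what using the v2 package actually looks like; among other changes, Intn becomes IntN and the top-level generator no longer needs manual seeding:

package main

import (
    "fmt"
    "math/rand/v2" // the new package from Go 1.22; the package name is still rand
)

func main() {
    // IntN replaces math/rand's Intn; the top-level generator is
    // seeded automatically, so no Seed call is required.
    n := rand.IntN(100) // random integer in [0, 100)
    fmt.Println("Random number:", n)
}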

Who created the math/rand/v2 package, and why? The open-source community did. They have their own reasoning and needs, and meeting those ultimately requires creativity in order to create something new. The math/rand package was lacking features, so new solutions had to be made; this is where creativity comes into play. The people behind the new package discussed many things together to come up with new requirements, which in the end were transformed into the v2 package.

New requirements were developed for the v2 package. As this obviously requires creativity, AI may not be able to do it, at least for now. However, AI might be able to generate code based on those requirements. As software engineers, we are obliged not only to manufacture code but also to analyze problems and design solutions to them. AI seems to get better and better at helping us manufacture code, and might in the future replace us in the job of manufacturing code. But in the foreseeable future, we have not seen it replace us in analyzing and designing solutions to problems, the way the community came up with the relatively simple math/rand/v2 package. After all, it contains only a few functions, which makes it comparatively easy to design. Still, to design this package one needs baseline knowledge of the random generators provided by operating systems. One needs experience for that, something we do not currently know how to teach an AI. AI understands only data and models; it does not understand "experience" the way we do. To an AI, experience is just historical data. And just like creativity, we do not currently know how to measure how good an AI is at accumulating experience.
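
As a small aside from our side (an illustration only, not part of the v2 design discussion), the operating-system randomness mentioned above is what Go's crypto/rand package exposes directly, in contrast to the deterministic, seeded generators in math/rand and math/rand/v2:

package main

import (
    "crypto/rand"
    "fmt"
)

func main() {
    // crypto/rand.Read fills the buffer from the operating system's
    // cryptographically secure random source.
    buf := make([]byte, 8)
    if _, err := rand.Read(buf); err != nil {
        panic(err)
    }
    fmt.Printf("8 random bytes from the OS: %x\n", buf)
}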

We Are not Going Away

In the near future, we as software engineers are probably not going away. However, the bar seems to be rising, as AI can now help with many menial tasks. As I have shown with the math/rand/v2 example, debugging code written by AI requires experience, and designing such a package requires not only experience but also creativity. Therefore, if as software engineers we cannot use our creativity and past experience effectively to develop solutions to complex problems, we are going to be overtaken by AI. The standard has risen: what is needed are high-quality software engineers who can develop not only good code but also good solutions and designs, and who can effectively read code and solutions created by other engineers or by AI.

Just as, years ago, we acknowledged that being able to search Google effectively was a required skill in software engineering, we now have to acknowledge that being able to use AI effectively to increase development efficiency is also a skill. And it requires everything I have described above: creativity and experience.
