Will AI Eat Software Engineering?
"Will LLMs take coding jobs?" is the most common question I hear from software engineers. It is also a massive concern for kids studying computer science today. Several statements made by leaders in the AI space lean towards "Yes".
State of LLMs
Having worked with LLMs from the early days of Google BERT to GPT-4 and the various open source models of today, I often think about this as well. It also helps that there are relatively few secrets in the LLM space; for the most part it's very transparent, and almost everything you need to know is on Twitter, ArXiv and GitHub.
LLMs are powerful pattern (distribution) learning machines, and we still don't have a deep understanding of what they learn or how they use it. Early LLMs like BERT could be trained to do specific tasks like classification and extraction, but each of these tasks required the model to be trained further for that specific task. With the arrival of larger LLMs this changed: they gained some interesting capabilities (emergent abilities). GPT-3, for example, could do something quite powerful called in-context learning, where you didn't have to tune the model for a specific task; you could just provide examples in the prompt, and that would allow the LLM to perform your task without being tuned for it.
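To make in-context learning concrete, here is a minimal sketch (mine, not from the article) of a few-shot prompt using the OpenAI Python SDK. The model name and the sentiment-classification task are placeholder assumptions; any chat-style LLM would behave similarly, with no fine-tuning involved.

```python
# Minimal sketch of in-context learning: the model is never fine-tuned for
# sentiment classification, it just sees a few labeled examples in the prompt.
# Assumes the OpenAI Python SDK (>=1.0) and OPENAI_API_KEY in the environment;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts two days, I love it."
Sentiment: positive

Review: "The screen cracked within a week."
Sentiment: negative

Review: "Setup took five minutes and everything just worked."
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected: "positive"
```

Swap the examples for extraction, translation or anything else and the same untuned model follows along; that is the whole trick.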
There's some debate over whether "emergent abilities" are something large LLMs gained on their own, simply a product of being trained on more data, or some combination of the two.
This new ability to learn from what's provided in the context (the prompt) launched the area of "prompt engineering", which has led to breakthroughs like chain of thought, use of tools, function calling, a perceived ability to reason, and more. And we're not stopping at text: multi-modal LLMs are being trained on images, text and other modalities.
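As a rough illustration of tool use and function calling, here is a hedged sketch using the chat completions `tools` parameter. The `get_weather` function, its schema and the model name are made-up placeholders rather than anything from the article; the point is only that the model decides when to call our code.

```python
# Sketch of function calling: describe a tool in JSON Schema and let the model
# decide when to call it. get_weather and the model name are illustrative
# placeholders, not part of the original article.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current temperature for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Should I pack a jacket for Berlin today?"}],
    tools=tools,
)

for call in response.choices[0].message.tool_calls or []:
    # The model returns the function name and JSON arguments; our code runs
    # the real function and feeds the result back in a follow-up message.
    print(call.function.name, json.loads(call.function.arguments))
```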
I'm telling you all of this to baseline our understanding of LLMs and their capabilities, and to paint a picture of how rapidly things are changing. Everyone involved in building LLMs is in a race to build bigger, better models trained on more data. Meta is on track to release its largest Llama 3 model (400B+ parameters), and even their smaller models are trained on far more data than other, larger models, so just imagine how much data that model will see.
So, can LLMs write code?
So, can LLMs write code? Well, yes they can. They've been trained on massive amounts of open source code that covers most of the kinds of code we might want them to generate. If the code you write involves instances of patterns the LLM has seen during training, then the LLM can probably write it.
Now imagine an engineering team at a startup or a large company. What are some examples of tasks they may be working on?
What do we see here? Do you see "patterns" that can be learned, especially when there is a vast amount of similar open source code out there to learn from? As it stands today, LLM-powered software like GitHub Copilot Workspace can do exactly this: write entire code like the above from task documentation. v0 by Vercel can create fully functional web user interfaces from a text description, or even from a designer's image. Using agentic workflows plus function calling, LLMs can write, test, debug and execute code, then rinse and repeat.
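That "rinse and repeat" loop can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (file names, prompt wording, retry budget, model name), not how Copilot Workspace or v0 actually work: it just asks a model for code, runs pytest, and feeds any failures back until the tests pass or the attempts run out.

```python
# Toy agentic "write, test, debug, repeat" loop. The prompt wording, file
# layout, retry budget and model name are illustrative assumptions; real
# systems are far more involved (sandboxing, planning, diff-based edits).
import subprocess
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def ask_llm(prompt: str) -> str:
    # Any code-capable chat model would do; gpt-4o-mini is a placeholder.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def build_until_green(task: str, max_attempts: int = 5) -> bool:
    feedback = ""
    for _ in range(max_attempts):
        # Ask the model for the full contents of solution.py.
        code = ask_llm(
            f"Task: {task}\nReturn only the Python code for solution.py.\n{feedback}"
        )
        Path("solution.py").write_text(code)

        # Execute the test suite and capture the output.
        result = subprocess.run(
            ["pytest", "tests/", "-q"], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # tests pass, stop iterating

        # Feed the failure output back to the model and try again.
        feedback = (
            "The tests failed with this output, fix the code:\n"
            f"{result.stdout}\n{result.stderr}"
        )
    return False
```

Simplistic as it is, a loop like this already captures why well-specified, pattern-heavy tasks are within reach of today's models.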
So, can LLMs build software systems?
But what if the tasks were more "creative", more groundbreaking, like the list below? Software that requires a deeper understanding of hardware and software, and leverages mechanical sympathy to a higher degree. What if the software is harder to define, something that evolved from research and work over time? Can LLMs still do it? As of right now, "No".
10. Create a game as engaging as Minecraft or Dota
Each of the examples above is real and critical to the success of multi-billion dollar companies. This is just a short list; there are probably millions more that exist or need to exist.
The LLMs of today, beyond auto-completing (Copilot) tiny snippets, won't be much use in building complex software systems. But can the LLMs of tomorrow do it? That's the hard question: are these software systems examples of our unique human superpowers, "creativity", "ingenuity", "expertise", or is this pattern matching too, and we just fail to see the patterns?
So is it game over for us?
So is it game over when the next generation of LLMs arrives? If I extrapolate from the current trajectory, then "Yes": a large amount of the coding work that keeps us busy today will be done by machines. One caveat is that access to high quality code examples to train and tune on is hard and expensive. With instruction tuning contributing heavily to the perceived quality of these models, it's not entirely clear how and where they would find or synthesize the huge amount of hand-labeled code examples needed to take these models to the next level.
But even without AI, over the last 20+ years things have only gotten easier for software developers: better tools, serverless cloud infrastructure, open source libraries and packages for everything under the sun, no-code solutions, and so on. Even without AI writing the actual code, software development is getting more efficient by the day. The arrival of large language models just accelerated things by a few years.
Will we win against the machine?
I can't believe you gave up so easily. Hold on, I did not say that. All I said was that LLMs can write code as instructed; they are not superior to humans and never will be. Human intelligence, ingenuity, creativity, aspiration and drive are so complex and so powerful that we have not been able to deeply describe or understand these human superpowers. We are a race capable of splitting atoms, detecting gravitational waves from star systems lightyears away, blanketing the planet with satellite-backed data networks, building self-powered submarines that live at the bottom of deep oceans, and one that can build AI. We're not going to just roll over and let some basic linear algebra walk all over us.
The truth is, when AI can write all of the simple software, we will aspire for more. The complexity and capabilities of the software we build when machines do the coding will not look anything like what we have today. Data will flow more freely in realtime rather than being trapped in databases; more code will be ephemeral, generated on the fly and not preserved in repos. It's a world where code understands the natural world: it can see, hear and read the world around us. That world will still need engineers, builders and people who understand software. The architectures will change, more software will run on GPUs and TPUs than on CPUs, our work might involve cleaning and synthesizing data more than writing code, and we'll pull together models, not libraries.
Marc Andreessen said it best: "software is eating the world", and AI is only accelerating this. Every field, from geosciences, materials, medicine, energy, defence and finance to food and climate, is increasingly powered by software.
Software and engineering will evolve as they always have. The question you must ask yourself, Padawan: am I studying for the past or the future?