Hot take: AI will not replace programmers
Is it really true that by 2026, 100% of code will be written by AI and programmers will no longer be needed? No way! Let me explain.
The hot topic of the week has been Anthropic CEO Dario Amodei proclaiming that within a year, AI will write all software. If I'm being generous, I think he means that all programmers will use AI-assisted tools for writing code. That is certainly possible. Tools like Copilot and Cursor provide a huge boost to productivity and even let you build things you could not have built before.
But if you take this to literally mean that all code would be written by AI without the need for programmers, we are not there yet. And we may not get there in the foreseeable future. AI is best described as an over-enthusiastic junior programmer, and I have reasons to believe it may not evolve beyond that.
I've been a programmer for 30+ years, so I know a thing or two about coding. I'm also one of the first developers to fully embrace generative AI and LLMs, all the way from the GPT-3 days before it was a thing, so I've seen the evolution firsthand. I'm building a product that depends on AI being as good as possible at writing code, so I have every reason in the world to root for AI coding.
The fundamental challenge with LLMs is that they don't see what they output. When an LLM writes code, it doesn't know what it writes. It can make a mistake that is glaringly obvious, yet be unaware of it. You could take the output and show it to the same LLM, and it would instantly point out the error. It will know how to fix it, but it may still make the same mistake the next time it writes new code. It may even promise not to make the mistake again, and proudly proclaim that this time it avoided it, and the code can still be flawed.
Reasoning to the rescue?
The new reasoning models are a partial solution to this. In a crude simplification, reasoning means that the AI iterates over the same task multiple times. This way it sees its own output and can refine it, fix bugs, and so on. This often results in a dramatic improvement in output quality, but it's important to understand that it doesn't really improve the AI itself. It's essentially an automated version of the user telling the AI to check the code because it seems to have an issue.
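As a rough illustration, here is a minimal TypeScript sketch of that feedback loop. This is my own simplification, not how any vendor actually implements reasoning; the `complete` function stands in for whatever chat-completion API you use, and the prompt wording and round count are arbitrary.

```typescript
// Any chat-completion call: prompt in, text out. Plug in the API of your choice.
type Complete = (prompt: string) => Promise<string>;

// A crude "reasoning" loop: feed the model its own output so that it can
// finally see, and hopefully fix, its own mistakes.
async function iterateOnTask(
  complete: Complete,
  task: string,
  rounds = 3
): Promise<string> {
  let code = await complete(task);
  for (let i = 0; i < rounds; i++) {
    code = await complete(
      `Review the following code for bugs and return a corrected version:\n\n${code}`
    );
  }
  return code;
}
```

Note that every round is another full model call, which is part of why reasoning costs so much more compute than a single completion.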
The underlying LLMs are not improving that much anymore. We've already exhausted the available training material. We can provide more code samples or API documentation and iteratively improve quality, but current LLM technology is approaching a saturation point where the return on investment in training is diminishing.
Reasoning helps, but it requires a lot more processing power, because it keeps reworking the original, flawed results. As anyone who has tried to do serious programming with AI knows, there are many issues the AI simply gets stuck on, and no amount of iteration will overcome them.
Multi-LLM reasoning will probably be a thing at some point. I've noticed that if Grok gets stuck, for example, and doesn't seem to find a way to fix a bug, asking Claude what's wrong with the code often helps. But then Claude gets hung up on another issue, and this time it's Grok that may be able to fix it.
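A hedged sketch of what that hand-off could look like, reusing the `Complete` type from the sketch above. The round-robin scheme and the prompt are made up for illustration; real tooling would need a smarter way to detect that a model is actually stuck.

```typescript
// Alternate between two models, handing the code over each round so that
// one model gets to review what the other one produced.
async function crossModelFix(
  grok: Complete,
  claude: Complete,
  buggyCode: string,
  rounds = 4
): Promise<string> {
  const models = [grok, claude];
  let code = buggyCode;
  for (let i = 0; i < rounds; i++) {
    code = await models[i % models.length](
      `What's wrong with this code? Return a fixed version:\n\n${code}`
    );
  }
  return code;
}
```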
If all code were written 100% by AI, we would quickly reach a point where code quality has deteriorated so badly that humans need to be called back into service. If 100% of the code in 2026 is written by AI, then 2027 will be the year 2000 all over again, when COBOL experts were called back from retirement to fix critical systems.
I'll give you a real-world example. I asked Grok 3 (my favorite coding assistant today) to create a retro-style version of an old classic game, Breakout, such that the game is rendered onto a texture so I could place it on a TV screen within a 3D world. Unsurprisingly, it created something that works. The code had some bugs, so I had to ask it to iterate a few times, but pretty quickly I had something up and running that looked and played like Breakout.
But it felt weird that the game ran slowly on my desktop computer, which should be powerful enough for a simple retro game. I started looking through the code and found that even though it created the polygonal blocks, moved them, and checked collisions in a rather efficient manner, the problem was in how it simulated the classic 320x200 TV screen when rendering the game to a texture.
The correct way would be to create a camera that renders the scene into a texture render target. Instead, the AI had written code that performed a ray cast for every pixel to see if anything was displayed at that point, and then used a SetPixel call to set or clear the color at that point of the target texture. It did this for every one of those 64,000 pixels!
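To make the difference concrete, here is a minimal sketch of the correct approach using three.js. That library choice and every identifier below are mine for illustration, since the original project's engine wasn't named. The point is that the whole game scene lands in the 320x200 texture in a single GPU pass, instead of 64,000 per-pixel raycast-and-SetPixel round trips on the CPU.

```typescript
import * as THREE from "three";

// A 320x200 render target stands in for the retro TV screen.
const screenRT = new THREE.WebGLRenderTarget(320, 200);

// An orthographic camera that frames the 2D Breakout playfield.
const gameCamera = new THREE.OrthographicCamera(0, 320, 200, 0, 0.1, 10);
gameCamera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
const gameScene = new THREE.Scene(); // blocks, paddle, and ball live here
const tvScene = new THREE.Scene();   // the surrounding 3D world

// The TV screen mesh simply samples the render target's texture.
const tvScreen = new THREE.Mesh(
  new THREE.PlaneGeometry(1.6, 1.2),
  new THREE.MeshBasicMaterial({ map: screenRT.texture })
);
tvScene.add(tvScreen);

function renderFrame(worldCamera: THREE.PerspectiveCamera): void {
  // One draw pass renders the whole game into the texture on the GPU...
  renderer.setRenderTarget(screenRT);
  renderer.render(gameScene, gameCamera);
  // ...then the 3D world, TV screen included, is drawn to the display.
  renderer.setRenderTarget(null);
  renderer.render(tvScene, worldCamera);
}
```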
If you don't know what the previous two paragraphs mean, that's exactly the problem: you probably wouldn't see the issue either. If the solution had not been quite so heavy, say if the number of pixels were smaller, you wouldn't even notice the game running slowly. You would be happy that the AI created something that works, and move on. That is what will happen, and what is already happening. More code is being written than ever before, but much of it is terribly flawed. It looks good, it's nicely formatted, it might even do what was intended, but it will have serious issues that nobody notices.
When you have a large codebase full of issues like that, it becomes very expensive to fix. At some point, someone is going to wake up to the reality that the best way to use AI for coding is to pair it with a skilled programmer. I can see a future where programmers are even more valuable than today, because one programmer can do so much more than before (thanks to AI!), while AI without a programmer cannot be trusted to do anything on its own.
If you are a programmer - don't be afraid. Learn how to work really well together with AI, and your future should be safe, for now! Something new will come up one day, but the current LLM tech will not take your job.