Is Coding "Dead"?
Mike Barksdale
Senior Technology Leader | Vision, Strategy, & Execution | Architecture, Azure, AI | Passionate Mentor & Hands-On Developer
Maybe you've followed the development of GitHub Copilot from its earliest stages to now, giving it a whirl or using it (or some other coding assistant) daily.
Maybe you've talked with others and wonder what the odd juxtaposition of doomsaying and excitement in tech is all about.
For the question at hand: "If AI continues to evolve, is coding really dead?"
The answer: Not really.
The longer answer: It's been dying since long before AI showed up on the scene.
Hold on to those pitchforks and let me explain.
What is Coding Anyway?
Let's define it so that we're all on the same page. "Coding", in this article, refers to the mechanics of arranging the sequence of tokens that can eventually be interpreted into machine code for a computer to execute.
Example:
fn main() {
    println!("Hello, world!");
}
In Rust, this sequence of tokens -- the keywords, function definition, punctuation, etc. -- will, once compiled and run, print "Hello, world!" to your terminal.
Cool, right?
Different programming languages can do the same thing just with different commands, similar to spoken languages like "hello" in English and "bonjour" in French.
And just like spoken languages, sometimes things can get lost in translation.
These differences lead to the nuance in my definition of "coding". "Coding" is tied to two main dependencies: the computer and the human.
A language's compiler or interpreter takes the code that's written and translates it into something the operating system can run, so that "Hello, world!" is appropriately output. The computer doesn't really see -- nor care to see -- the text of the written software.
Who does care, however, is us.
Humans read and write code. We even leave behind comments (hopefully) in plain language for someone to read later and understand what something is doing. Just like this:
// this swaps two numbers
int a = 5, b = 2;
a += b;
b = a - b;
a -= b;
The comment "this swaps two numbers" helps the person reading it understand what that odd piece of code is doing.
Why is it odd? It's not immediately intuitive. It's weird. Someone could take a few seconds, skim through it and say "Ok, I get it," which would most likely be followed up with "Why was it written that way in the first place?"
Can it be written in a way that is more readable? More maintainable?
Of course.
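For instance, here's a minimal sketch of the more conventional approach, in Rust (the language from the earlier example): a temporary variable, or the standard library's swap helper, makes the intent obvious at a glance.

```rust
fn main() {
    let mut a = 5;
    let mut b = 2;

    // The boring, readable version: remember one value in a temporary.
    let tmp = a;
    a = b;
    b = tmp;
    println!("a={a}, b={b}"); // prints "a=2, b=5"

    // Or lean on the standard library and skip the question entirely.
    std::mem::swap(&mut a, &mut b);
    println!("a={a}, b={b}"); // prints "a=5, b=2"
}
```

Compilers typically emit the same machine code for the temporary-variable version as for the arithmetic trick, so the readability costs nothing.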
And still, there could be more to the picture that we're not seeing. Performance constraints, language quirks, it-was-cool-and-it's-only-my-code-that-no-one-will-touch-so-why-not... all of these could be reasons why, on the surface, a piece of code may appear esoteric.
Whatever the reason, this code -- and by extension all code -- has the same goal: solving problems.
Enter Software Engineering
How do we go from writing code to software engineering?
Well, software engineering is the practice of writing software to solve problems in a way that addresses one or more known constraints. Those constraints are handled through design, coding, testing, etc.
For example, let's say I need to cross a river. It's not too far or deep, but I don't want to get my shoes wet. I've got a plank of wood that I can plunk down and shimmy across. Problem solved.
For me.
Someone else with the exact same problem could come to my plank bridge and shimmy across. But what if they need to cross at a different spot? Or they need to bring a lot of stuff with them? What if their feet are bigger than mine?
What if they don't have feet?
We'll have to think about this a bit more. So we rebuild our bridge taking in these new constraints. We engineer it so that we cover the known possibilities and maybe some unknown ones, too.
If our code is the plank of wood to get across the river, engineering ensures we take into account a lot of different constraints to make sure the solution holds up.
Going back to the analogy, you might ask: "Why are you building bridges in the first place?"
And you'd be right to ask.
Over the years, we've gotten better at engineering software. Frequently encountered problems are abstracted away into patterns, libraries, and best practices. Code is written, tested, engineered, and exposed for use by others, who write code to piece these together and produce something new. With an amazing community dedicated to learning, observations and results from engineers across the globe are only a keyword search away.
Don't build your own bridge: just use this one.
Our software and even the infrastructure it runs on has abstractions, enabling serverless, edge, fog, and other computing paradigms. What orchestrates those abstractions? More abstractions chained together through CI/CD pipelines and DevOps.
Those writing the libraries focus on engineering their abstractions to be robust and scalable, coding in a language that fits the problem they, and others, hope to solve. People rally together, creating standards around these abstractions. These standards enable us to integrate quickly, iterate rapidly, and deploy confidently.
It becomes less about "this piece of code is unique" and more about "does it meet the need for its problem space".
This is where "coding" begins to die.
Lost in Abstraction
I know, I know. Pitchforks.
I used the term "abstraction" earlier. An abstraction is a way to hide the immediate details of how something works from the implementer. Abstractions show up in different ways, from libraries to full-blown frameworks.
They help you focus on solving your specific problem, rather than having to reinvent the wheel. You want to focus on hammering that nail into the plank for your bridge, not the details of how to power the lamp so that you can see what you're doing.
Software engineers love abstractions. They also hate them. Some love to hate them. Others hate to love them.
Why this odd relationship (and what does any of it have to do with "coding")?
Solutions often come with multiple levels of abstraction, sometimes abstracting other abstractions. Like an onion. An engineer who has taken the time to understand why the abstractions exist, what problems they solve, and how they solve them will build a mental model of how they all interact with each other.
When something goes wrong, they peel back the layers of the onion in their mental model, digging through logs (if there are any), running traces, and testing behavior to determine the root cause.
And then? They compromise.
Unless you own the abstraction, you work within its confines. Depending on how it works, there may be ways to tweak how it's used to fix the problem. Other times, it's what-you-see-is-what-you-get, adding a constraint that you'll have to get creative to overcome.
What holds all of this up?
Coding.
The abstraction is coded. Its interaction with other abstractions is coded. How they all fit together is coded.
If it's open source, you can see it. You can fork it, change it, and incorporate it yourself. If it's not open source, maybe a vendor or someone out in the world has worked with it before and knows a tip or trick to get you over your hurdle.
But then you ask yourself (or are asked by your project team): is this worth going down the rabbit hole to do it ourselves?
We use abstractions to save time (sometimes money) so that we can focus on solving our real problems. We build bridges here, we don't make planks.
If I want to swap numbers, do I write it myself or use a NumberSwapper library?
Should I care that the NumberSwapper library uses the odd swapping technique I mentioned earlier, or should I focus on how my solution uses it?
What about knowing how it works? That it's secure? Performant? Scalable?
Unit tests could do that for me. I could code some myself if I'm worried that my specific problem isn't addressed by the ones included with the library (if they're included). There are also code scanning tools and other automation that can help.
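As a sketch of what that might look like, here's the kind of unit test I could write around the hypothetical NumberSwapper library. The `swap` function below is a stand-in for the library call (the library and its API are invented for illustration):

```rust
// Stand-in for the hypothetical NumberSwapper library, implemented
// with the arithmetic trick from earlier. Assumed for illustration.
pub fn swap(a: &mut i32, b: &mut i32) {
    *a += *b;
    *b = *a - *b;
    *a -= *b;
}

#[cfg(test)]
mod tests {
    use super::swap;

    #[test]
    fn swaps_two_numbers() {
        let (mut a, mut b) = (5, 2);
        swap(&mut a, &mut b);
        assert_eq!((a, b), (2, 5));
    }

    #[test]
    fn leaves_equal_values_unchanged() {
        let (mut a, mut b) = (7, 7);
        swap(&mut a, &mut b);
        assert_eq!((a, b), (7, 7));
    }

    // Worth adding: behavior near i32::MAX, where the arithmetic
    // trick can overflow but a temporary-variable swap cannot.
}
```

A test run like this tells me the library meets *my* constraints, not just its own.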
Coding becomes integration-by-abstraction. Building the abstractions and libraries themselves, along with novel use-cases -- embedded systems, specialized hardware, sheer curiosity, and the truly unique there-simply-isn't-anything-else-out-there-to-solve-this-problem problems -- remains the home of fun, quirky, innovative coding.
And then, there's your new AI companion.
Full Circle
Your AI companion has seen those frameworks, delved into those abstractions, examined their insides, and watched how they're used in action for real use-cases.
It's been trained on millions of examples.
It can explain existing code. Yours, your company's, and pretty much anything it can consume.
It can write code.
"I need to swap two numbers. Write it in C#."
Simple, right?
What if you need to do something slightly more complicated, like the normal task of pulling a filtered set of widgets from a data source, transforming them, sending them to something else, and handling the response?
To a software engineer, this leads to a lot of questions. What's the data source? How often is it updated? Is there high concurrency? What happens if the "something else" is down? How do I know if "something else" did what it was supposed to do?
So on and so forth.
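Each answer pushes the code in a different direction. As a rough illustration -- every name here is hypothetical -- even a bare-bones version of that widget pipeline forces those decisions into the open:

```rust
// A bare-bones, hypothetical version of the pipeline. Every name is
// invented for illustration; a real source and sink would be async,
// paginated, authenticated, and so on.
struct Widget { id: u32, active: bool }

enum SendError { DownstreamUnavailable }

fn fetch_widgets() -> Vec<Widget> {
    // Stand-in for the data source. How often is it updated?
    vec![Widget { id: 1, active: true }, Widget { id: 2, active: false }]
}

fn transform(w: &Widget) -> String {
    format!("widget-{}", w.id)
}

fn send(payload: &str) -> Result<(), SendError> {
    // Stand-in for the "something else". What if it's down?
    if payload.is_empty() {
        Err(SendError::DownstreamUnavailable)
    } else {
        Ok(())
    }
}

fn run_pipeline() -> (usize, usize) {
    let (mut sent, mut failed) = (0, 0);
    for widget in fetch_widgets().iter().filter(|w| w.active) {
        match send(&transform(widget)) {
            Ok(()) => sent += 1,
            // Retry? Dead-letter queue? Alert someone? That call is
            // engineering, not coding.
            Err(_) => failed += 1,
        }
    }
    (sent, failed)
}

fn main() {
    let (sent, failed) = run_pipeline();
    println!("sent={sent}, failed={failed}"); // prints "sent=1, failed=0"
}
```

The code is the easy part; deciding what belongs in each branch is where the questions above get answered.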
An engineer using an AI companion would think through these (and more) concerns, craft something for the companion to produce, and then validate that it was done correctly.
Breaking down the problem and working with an AI to craft a solution is a skill that is similar to working with another colleague. Seasoned engineers are used to that workflow. They may even have a pretty good idea of what "correct" looks like in their context and, if not, can bounce ideas off of others to make sure they've covered the known constraints.
So the companion goes to work, producing code as fast as you can feed it guidance. You ask it to swap two numbers. It produces this:
public void Swap(int a, b)
{
    a += b;
    b = a - b;
    a =- b;
}
Syntax errors? Oh gosh, of course. Let's tell it what the problem was so it can fix it.
public void Swap(int a, int b)
{
    a += b;
    b = a - b;
    a -= b;
}
Ok, it compiles now. Why don't the changes show up in the caller? The variables aren't passed by reference...
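In C#, the fix is to mark the parameters `ref` (or return the swapped values). Rust, the language from the opening example, makes the same requirement explicit with mutable references -- a sketch:

```rust
// Passing mutable references makes the swap visible to the caller --
// exactly what a by-value version fails to do.
fn swap(a: &mut i32, b: &mut i32) {
    let tmp = *a;
    *a = *b;
    *b = tmp;
}

fn main() {
    let (mut a, mut b) = (5, 2);
    swap(&mut a, &mut b);
    println!("a={a}, b={b}"); // prints "a=2, b=5"
}
```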
Coding concerns.
As AI gets better and the solutioning around it improves, we'll spend more time reading and validating code. Code reviewing the AI's code that was guided by an engineer, ensuring that the code fits into the engineered solution.
But what about the less seasoned engineers?
AI doesn't care if you've been doing this for ten years or ten days. With automation, deadlines may shorten, increasing the pressure to produce more, faster.
The code is produced, looks like it works, and passes the unit tests. The PR is reviewed (by people?) and accepted, integrated, and deployed.
Work item closed and months go by: no issues.
Whew.
In that time, more work is released. The deployment pipeline is humming.
An issue arises.
A well-thought-out prompt goes into the machine, along with relevant logs, code, and other artifacts, to figure out the root cause.
The answer comes back.
Weird code. All over the place.
Except that it's only weird in aggregate. In isolation, chunks of it may seem perfectly fine.
So what do we do?
Engineers pour a cup of their favorite, work-approved beverage, and get to work.
Coding.
Working out the language quirks, the performance issues, the odd side effects, the general human readability for how it all fits together with meaningful comments.
"Buenos noches" in spoken language can be understood, if technically incorrect. Stopping to correct someone in passing, every single time, would get tiring. As people, we can smile and know what they meant.
In a complex system those small, seemingly fleeting misunderstandings can mount over time.
So we think, review, test.
And code.
So, Coding Isn't Dead, Then?
No, it isn't.
Learning to code still matters because it is currently the only way to truly know how software works. And knowing how something works always matters because when you know how something works, you know how to fix it.
General coding concepts still apply: conditions, variables, control statements. Concepts like data structures, memory management, and concurrency are still important.
AI companions have seen these and will use them to solve the problems they're tasked with solving. They'll get better at using them, too.
But applying these concepts to solve problems efficiently, reliably, and securely ourselves, with our fellow engineers, or with our AI companions is relevant now and in the future. This means there will be increased emphasis on reading, reviewing, and fixing code, which is something that has traditionally been learned later rather than sooner.
The ability to break down a problem in a way that is easily consumable, understandable, and actionable will be one of the most important skills needed going forward. Abstractions will grow, change, and evolve, composing answers to problems just like selecting and connecting LEGO bricks.
Each brick developed, connected, tested, and deployed with code.
Still holding onto those pitchforks? Either way, let's collaborate!
Every word lovingly chosen and sequenced by a human... for better or worse.