A Farewell to Developers
A developer walking off into the sunset. Prompt created by Claude 3.5 Sonnet. Image created using Ideogram.


We Had a Good Run

The more I think about my last article, "Prompt and Tag," the more I realize there's a much bigger takeaway than the one I initially focused on. The deeper insight lies in why I needed to create that method in the first place - the underlying workflow I've been iterating on and improving. While many are fixated on tooling - integrating AI into IDEs and terminals - I'm starting to see that these new tools, though awesome, are a distraction from the real revolution.

A Glimpse at my Current Workflow

Currently, I use Perplexity as my initial jumping-off point. Its annotated iterative search is perfect for exploring the vast solution space of possible stacks - languages, frameworks, cloud platforms, IaC providers, deploy pipelines, and component architectures. Then I switch to Claude for implementation-specific details, refining my plan with a divide-and-conquer approach that scales well with projects of any size. Finally, I take my results to Cursor, where my "AI intern" handles the grunt work.

These tools are great, but far more valuable is the formal process I've developed. You could burn these tools to the ground, and my process would remain intact. I can operate at almost full efficiency with access to any frontier model. The real game-changer here is the LLM itself. While developers worldwide are busy scooping up golden eggs, those who will come out ahead are the ones operating at a process level. Tool makers will always be playing catch-up. Relying on these tools will hold you back, while honing your process will propel you forward.

"formally versioning our conversational threads with LLMs will become more important than versioning the code we write as a result of those threads"

Threads vs Diffs

As I navigate this new AI-assisted workflow, it's becoming clear that source code is no longer the ultimate source of truth. As AI improves in reasoning and context windows grow, the importance of code will diminish further. We're approaching a future where committing code will make as much sense as committing compiler-generated assembly for an Unreal Engine game. You don't think twice about using a high-level model-driven editor without considering the underlying machine code. This realization leads me to believe that formally versioning our conversational threads with LLMs will become more important than versioning the code we write as a result of those threads.

We'll need a way to look back at old threads and understand how certain decisions were made. This requires a method to view the state of the context window at that decision point. When dealing with multiple LLMs, we also need to ensure - or at least measure - context window parity. It's like having a team of brilliant but amnesiac coders locked in separate rooms, working on the same project. You'll quickly tire of explaining the problem repeatedly to each of them, along with what you've done, what you plan to do, and what you're currently working on.

The "Prompt and Tag" method I developed isn't what's important here. What matters is that I envision this emerging as a crucial part of "coding" in this new paradigm. It will become as ubiquitous as Git.
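To make the idea concrete, here is a minimal sketch of what "versioning a conversational thread" could look like. This is not the author's "Prompt and Tag" method - the class, function names, and tagging scheme are all illustrative assumptions - but it shows the core moves: an append-only message log, Git-style content hashing of the thread, and named tags that let you reconstruct the context window as it stood at a decision point.

```python
import hashlib
import json

def snapshot(messages):
    """Hash a conversation thread so any change to the context window
    produces a new, comparable version (mirroring how Git hashes content)."""
    payload = json.dumps(messages, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class ThreadLog:
    """Toy append-only log of a conversational thread, with named tags
    marking the state of the context window at decision points."""

    def __init__(self):
        self.messages = []  # full thread, in order
        self.tags = {}      # tag name -> {"hash": ..., "length": ...}

    def append(self, role, content):
        self.messages.append({"role": role, "content": content})

    def tag(self, name):
        # Record both the hash (for parity checks across LLMs)
        # and the length (so the window can be replayed later).
        self.tags[name] = {
            "hash": snapshot(self.messages),
            "length": len(self.messages),
        }

    def context_at(self, name):
        """Reconstruct the context window as it stood at a tag."""
        return self.messages[: self.tags[name]["length"]]
```

Usage might look like `log.tag("chose-terraform")` right after the thread settles an architectural question; comparing tag hashes across two assistants is one crude way to measure the "context window parity" mentioned above.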

Determinism vs Non-Determinism

There's a significant difference we can't ignore - compilers are deterministic. This is so fundamental that it feels strange even to say it out loud. If it weren't true, the "works on my machine" meme would be less funny and more serious. I recently had Perplexity work out a plan for a project I was ideating. Thanks to spotty internet on a train ride, I rage-clicked submit multiple times and ended up with 10 completely different "best ways to implement my app."

AI skeptics might cite this non-deterministic behavior as a reason to hesitate incorporating it into their workflow. At first glance, this makes sense. But upon deeper reflection, you realize this unique behavior is what makes LLMs so magical. After all, I'm non-deterministic, as are all developers. Every employee at a company is non-deterministic, and yet we still have Google.
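Where that "10 different best ways" behavior comes from can be sketched in a few lines. This is a toy model, not any real LLM API - the scores and names are invented - but it captures the contrast: greedy (temperature-0) decoding always picks the top-scoring token, like a compiler, while temperature sampling draws from the whole distribution and can answer differently every time.

```python
import math
import random

# Toy next-token scores, standing in for an LLM's logits (illustrative only).
LOGITS = {"Terraform": 2.0, "Pulumi": 1.6, "CloudFormation": 0.5}

def greedy(logits):
    """Temperature-0 decoding: always returns the top-scoring token.
    Deterministic - same input, same output, every time."""
    return max(logits, key=logits.get)

def sample(logits, temperature=1.0, rng=random):
    """Temperature sampling: draws a token in proportion to its
    softmax weight, so repeated calls on the same prompt can differ."""
    weights = [math.exp(score / temperature) for score in logits.values()]
    return rng.choices(list(logits), weights=weights, k=1)[0]
```

Calling `greedy(LOGITS)` repeatedly yields the same answer forever; calling `sample(LOGITS)` ten times is a fair stand-in for ten rage-clicks on a train.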

Explaining the Paradox

How do we reconcile these two views: the intrinsic determinism of code versus the non-deterministic nature of LLMs? The key is to recognize the paradigm shift that AI imposes upon developers. Every layer up the stack - from assembly to C to JavaScript to TypeScript - creates a higher-level abstraction. Each layer's purpose is to tell the computer how to do what you want it to do. Now, this has changed, causing cognitive dissonance and disruption.

This is the first time in history we don't care about the "how" - well, not completely, yet, but we're orders of magnitude closer than ever. Each abstraction layer has been slowly removing "hows." With C, we no longer had to worry about "how" bits were manipulated - we just told the compiler to "copy this value." Further up the stack, we no longer had to worry about "how" memory management works - we just let garbage collection do its thing. But all these abstractions have one thing in common - they are all higher-level ways to tell a system "how" to achieve a result. In this new paradigm, we won't be bothered by this "how" nonsense anymore - we'll just have requirements and acceptance criteria. We will no longer be developers slinging code; we will be product designers slinging requirements.
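The march up the stack described above shows up in miniature even within a single language. A small Python sketch (the list and variable names are just illustrative) contrasts spelling out the "how" with stating only the "what":

```python
# Each abstraction layer keeps the "what" and sheds some of the "how".
nums = [3, 1, 4, 1, 5]

# Lower level: spell out HOW to accumulate, step by step.
total_how = 0
for n in nums:
    total_how += n

# Higher level: state WHAT you want; the runtime decides how.
total_what = sum(nums)

assert total_how == total_what == 14
```

Requirements and acceptance criteria are, in this framing, just the next rung: the "what" with no executable "how" attached at all.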

A Future Not-So Far Away

Imagine telling AI to "inventory everything in my house." Within seconds, a containerized environment is deployed, complete with a properly initialized database, an API for adding items, and a React frontend with a basic interface. The AI sends you a push notification with a link to your inventory server's frontend interface. Then you decide to share the link with 10 million devoted fans. The AI, realizing your true intent, modifies the acceptance criteria. A migration plan is generated, a new load-balanced Kubernetes cluster is deployed to handle the high traffic, and everything is migrated and rolled out with nothing more than a few angry fans hitting "refresh" for 5 seconds until the AI sorts it out.

Nowhere in that process will a person need (or want) to tell AI how to implement this app. The only reason AI is even forced to use this tech is that we've integrated it into our infrastructure. Eventually, that too will be replaced - perhaps by something so optimized we can't even understand how it works. If you have trouble envisioning this, just open an executable file in vim, and you'll see what I mean.

Conclusion

While we're not quite there yet, I'm certain the role of the "developer" as we know it will fade away - quite simply because everyone will be able to develop anything they want, provided they can explain what they want. There will still be people who are better at this than others, creating a new role similar to that of a product developer. In other words, the Steve Jobs of the world will no longer need their Wozniak.

But until that time comes, we are witnessing a hyper-unique slice of time in our civilization - one that will be over as quickly as it fell quietly into our laps. It's a magical time where developers have a once-in-a-civilization opportunity to possess apparent superpowers. We have the vocabulary needed to describe the tiny bit of "how" still required to make anything we want with almost zero effort. When the "how" requirement fades away, this asymmetric advantage we have will be leveled out, and we'll all be superheroes.

Until then, drop everything, enjoy the ride, and take advantage of how lucky you are to be alive in this tiny slice of history.

Edwin He

Infra at Alma

6 mo

This industry has always been about making things easier and faster. If not, we would still be writing assembly. AI will change how we code and build software, and the developers who evolve with it will still be around to manage it.

Lars Opitz

Passionate Software Craftsperson | Seasoned Agile Leader @ eBay

6 mo

I recently watched an episode of ST:TNG in which Captain Picard visits a holodeck and asks the computer to generate a certain café in Paris in the late '20s. His prompt was short and imprecise, yet the computer generated the exact environment he was looking for. This is the future you lay out in your article, and I really like the idea. Yet when it comes to coding today, I have a hard time seeing this work. I feel disturbed by Copilot's inline code suggestions - so much so that I abandoned it completely. In the end, it might all boil down to a trust issue. The more code the AI generates, the more code I would need to review, assess, and potentially change. How could you possibly let the AI write test code for production code the AI wrote? And I have a moral issue with how we got here. The work of millions of people - developers, writers, artists, social media users, etc. - was simply gulped up without asking for permission. And if that alone weren't questionable enough, we now have to pay for these tools. Privatize gains - socialize losses... And then there is the environmental footprint of AI. However, as I'm eager to learn, I try out new approaches. Hence, I'm thankful for your thoughts.
