Why 99% of Coders Are Dead Wrong About AI (And How It's Costing Them Everything)
In response to a comment on my Farewell to Developers post
This ended up being too long to post as a reply, but I see these same concerns far too often. I believe it's important to address each of these misconceptions formally in the hopes I can drive some change in the right direction.
Oh boy, where do I start? I want to address each of these concerns, but let's start with Copilot.
...when it comes to coding today I'm having a hard time seeing this work. I feel disturbed by Copilot's inline code suggestions - so much so that I abandoned it completely.
People still talk about Copilot?
When I first played with Copilot, I thought it was magic! Today, I'm quite certain this single tool has held back AI acceptance in the developer community more than anything else. Copilot is virtually useless. I don't even consider it an AI tool - it's a parlor trick. If AI were my shiny new M3 MacBook, Copilot would be the plastic toy in the toddler aisle at Target - and not even the actual toy, but the demo button kids press before you pull that metal strip from the battery that activates the actual stupid toy. Developers are always quick to point out how useless it is (which it is), so they give up on AI before they've even skimmed the surface of what's possible. Copilot is nothing more than the lazy application of the most basic thing an LLM does right out of the box - predict the next word. It's basically an LLM's version of 'Hello World'. It's as useful as a test suite with a single unit test. Please stop talking about Copilot!
require "minitest/autorun"
class MyTests < Minitest::Test
  def test_it_works
    assert(true, "yay it works! See? i wriTe tEstS!")
  end
end
In the end it might all boil down to a trust issue. The more code the AI is generating, the more code I would need to review, assess and potentially change. How could you possibly let the AI write test code for production code the AI wrote???
Let's talk about the trust issue. This makes sense if you remember AI isn't a new utility or tool - think of it more as your personal intern: really smart, but it doesn't know much about you or what you need from it. It's right not to blindly trust its output. And I address this in my write-up when I say that "we are not there yet". AI needs developers for the time being. It needs us to guide it down the right path and improve its output through careful iteration. How efficiently you do this is the measure of your success in harnessing AI's capabilities. Don't let the AI code for you - let it code with you. Instead of asking it to "write all my tests!", ask it if it sees any potential corner cases you might have missed. You're asking the wrong questions, and delegating the wrong tasks.
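To make that concrete, here's a rough sketch of the difference in Ruby. The ask_ai helper is hypothetical - a stand-in for whichever model or tool you actually use - because the point is the shape of the question, not the plumbing:

# You just wrote this and want confidence in it.
def parse_price(input)
  Float(input.strip)
end

# The wrong delegation - outsourcing your judgment wholesale:
#   ask_ai("Write all my tests for parse_price!")

# The right question - guided iteration, with you still in charge:
#   ask_ai(<<~PROMPT)
#     Here's parse_price. What corner cases am I missing?
#     So far I've covered "19.99" and "  5  ".
#   PROMPT

You review what comes back, keep the cases that matter, and throw out the rest. That review step is exactly where the trust gets earned.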
The secret sauce here is that ANY effort you put into honing this workflow will net you compounding gains. There is nothing else you can dedicate your effort to in this world that would give you the same return. And on top of that, the returns create a huge asymmetric advantage in your favor, given how few developers have woken up to this new paradigm and embraced it fully. That is why I say we are in an extremely unique era, one that can happen once and only once in a civilization - because once we get there fully, there's no going back.
...and I have a moral issue with how we got here. The work of millions of people, developers, writers, artists, social media users etc. was simply gulped in without asking for permission.
You then mention your moral apprehension, citing that the data was gulped down without permission. I ask you again to remember that this "thing" gulping down data should be thought of as a new species. It was discovered - not invented. It's not stealing your data without permission any more than a grade-school student browsing the web, eager to learn, is. It's hard to wrap your head around this for a few reasons. This species' true power isn't intelligence - it's speed. Gulping down trillions of words, ideas, and videos just doesn't seem like learning - it feels more like stealing. It also doesn't help that companies are profiting from this. "They're getting rich by stealing my content" is the wrong way to look at it. We need these large tech entities to enable this revolution, and the economic reward system is a crucial driver. Remember, these are entities, not people. They are uniquely positioned to give our world this gift, and nothing is stopping you from profiting along with them - that's why we have a stock market.
And if this alone was not questionable enough, we now have to pay for these tools. Privatize gains - socialize losses...
That brings me to your next concern: "we have to pay for these tools!" This one's simple. No, you don't! Every tool out there - every. single. one. - is a convenient wrapper around a model. Sure, I'm willing to pay for the convenience of a tool if someone spends the time wrapping it in a nice user experience. I'm also willing to pay for the use of a frontier model if I find value in it - but this is no different from any tech in existence. If you want, you can get the full benefit of having your very own AI intern (or a thousand of them) running locally on your home network. Meta has dedicated itself to providing the world with free, open-source models that are inching closer to frontier-model capabilities. There are countless communities out there building open-source tools around these models. So yes, companies are making billions on AI, but it wasn't created in a lab for the purpose of stealing data and stuffing their own pockets - it was discovered, and made available to anyone with the desire and motivation to make it work for them. Of course that includes big companies that exploit this new species as hard as humanly possible to stuff their pockets, because WHY NOT?! It's a brilliant idea! Who wouldn't?! You should too!
Conclusion
When I woke up this morning and saw this reply, I was beyond excited to be the one to tell him "no, no, no! It's much better than you think!" - while also being horrified that the proliferation of AI misinformation and misunderstanding is this pervasive. Maybe horrified is the wrong word - it's definitely a mix of feelings, and confusion is a big one. I'm pretty average, and it's not hard for me to see how this changes literally everything. So why do I feel like I'm the crazy one - that homeless, disheveled blind guy holding a cardboard sign that reads "THE END IS NEAR" while everyone casually walks past, paying no attention, living their lives as if we didn't all have limitless AI agents at home and in our pockets, waiting for us to ask them to do whatever we want? How does everyone not see this? Was Copilot the Great Filter?
Todd: "Yea, I tried the AIs. Don't see what all the hype's about"
Mary: "I asked Claude for a joke and it wasn't even funny! PASS!"
I think this phenomenon we're seeing could be called the "Nah, Can't Be" effect. It's almost as if the benefit is too good to be true, so our natural instinct is to be skeptical - as if our progress is outpacing our ability to believe it. Well, it doesn't fool me. The best thing I can do is keep holding this cardboard sign, grabbing the attention of whoever looks my way.