Are we the Future Assistant to the AI?

If you buy into the hype, all software developers will be replaced by AI in 18 months. However, anyone who knows anything about software development knows that developers don’t spend most of their time writing code anyway.

Developers spend most of their time reading and debugging code. We spend a lot of time fixing bugs or making minor improvements to existing code. Every day, we scan through hundreds of thousands or even millions of lines of code just to change a few of them.

Our job is to understand how existing code works and make changes without breaking the entire system.

But supposedly, AI is going to do that now. I welcome its ability to do all production deployments with no problems.

Is AI going to be our assistant, or will we be the AI's assistant?

What can AI actually do for development?

I have been using GitHub Copilot recently and am a huge fan.

Although, sometimes, it makes me mad.

I’m coding away in a rhythm, it keeps making excellent suggestions, and I keep accepting them. I’m in the flow, and the magic is happening. Then, all of a sudden, it won’t provide a suggestion… I just stare at the screen, waiting and giving it a scornful look.

Seriously though, the current Copilot features are helpful, but it is mostly a smarter autocomplete.

I also use ChatGPT regularly to ask it how to do things I can’t remember the exact syntax for. Like how to create a unique SQL index or do a specific C# LINQ query. Stuff I have done many times but always forget the syntax for.
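For illustration, here is a minimal sketch of the two examples I mean, written from memory; the table, column, and class names are all made up, so treat it as a reminder of the shape of the syntax rather than production code.

using System.Collections.Generic;
using System.Linq;

public record Order(int CustomerId, decimal Total);

public static class SyntaxIAlwaysForget
{
    // The unique SQL index I can never remember (T-SQL, kept as a string here):
    public const string UniqueIndexSql =
        "CREATE UNIQUE INDEX IX_Users_Email ON Users (Email);";

    // A typical C# LINQ query: total order amount per customer, highest first.
    public static IEnumerable<(int CustomerId, decimal Total)> TotalsByCustomer(
        IEnumerable<Order> orders) =>
        orders
            .GroupBy(o => o.CustomerId)
            .Select(g => (CustomerId: g.Key, Total: g.Sum(o => o.Total)))
            .OrderByDescending(x => x.Total);
}

ChatGPT is great at spitting this kind of thing back out on demand. The catch, as I get into below, is that you still have to check its work.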

The future of what GitHub Copilot X, ChatGPT, and other AI models will do is exciting. I see them as an assistant to help developers write code. AI pair programming sounds incredible, and we are just getting started with the tech.

However, you must still be more intelligent than the AI because it will make bad suggestions.

Speaking of Copilot, airplanes pretty much fly themselves these days. A pilot once told me his job was to sit there and be ready if the autopilot stopped working or some other weird problem happened. Years and years of training to stare at the autopilot all day in case it fails, the plane has a mechanical failure, or the weather turns bad.

Is our future job to babysit our AI copilot?

AI also makes mistakes

If you know anything about how AI and ChatGPT work, you know it is all a prediction model of text. It uses statistics based on its training data to predict what words come next in a sentence. That works great for asking it weird history questions buried in Wikipedia. But for computer programming, it will just as happily make stuff up based on that same prediction model.
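To make the prediction idea concrete, here is a deliberately tiny sketch in C#. It just counts which word most often follows a given word in some sample text and calls that its prediction. Real models are vastly more sophisticated, but the core principle of choosing a statistically likely continuation is the same, and so is the failure mode: a likely answer is not necessarily a true one.

using System;
using System.Collections.Generic;
using System.Linq;

public static class ToyPredictor
{
    // "Predict" the word most likely to follow the given word, purely by
    // counting what followed it in the training text.
    public static string PredictNext(string trainingText, string word)
    {
        word = word.ToLowerInvariant();
        var words = trainingText
            .ToLowerInvariant()
            .Split(' ', StringSplitOptions.RemoveEmptyEntries);

        var followerCounts = new Dictionary<string, int>();
        for (int i = 0; i < words.Length - 1; i++)
        {
            if (words[i] == word)
            {
                followerCounts.TryGetValue(words[i + 1], out int count);
                followerCounts[words[i + 1]] = count + 1;
            }
        }

        // The most frequent follower wins: fluent, confident, and possibly wrong.
        return followerCounts.Count == 0
            ? "(no prediction)"
            : followerCounts.OrderByDescending(kv => kv.Value).First().Key;
    }
}

Trained on "the cat sat on the mat because the cat was tired", PredictNext(text, "the") returns "cat", simply because "cat" followed "the" most often. It has no idea what a cat is.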

For example, I asked ChatGPT how to use Stackify with an application written in COBOL. It straight-up provides an answer like it’s really possible. There is no COBOL documentation or support like it mentions.

Anytime you are using AI to generate code, you have to review the code and test it to ensure it works. It will absolutely make mistakes. The answer it predicts as most likely is not necessarily fact or accurate.

We only love our own code

If there is one thing all developers can agree on, it’s that we don’t like other people’s code. Reading and debugging someone else’s code is way harder than working with code we wrote ourselves. By writing the code, we gain a deep understanding of what it does.

Many developers quickly call other developers’ code technical debt and want to rewrite it. Not because there is anything wrong with it, but because they didn’t write it and don’t fully understand it. They are scared to modify it.

It’s hard to work with other people’s code for a few reasons:

  • Lack of consistency
  • Poor design choices
  • Lack of documentation
  • Different coding styles
  • Legacy code

It’s understandable why developers don’t like working on other people’s code. It can take hours to fully understand how code works if you are trying to troubleshoot problems with it.

Debugging other people’s code is one of our least favorite things.

What if it’s all AI code?

If the goal is to use AI to generate large percentages of code and developers have to review it, test it, and debug it… that doesn’t sound fun.

Some people love testing and QA work. Testing video games all day is a dream job for some people. However, testing code from AI generators all day sounds like a nightmare.

We want AI to help us do our job. We don’t want our entire job to be fixing the crappy code that comes out of AI. Troubleshooting bugs is hard, especially in other people’s code. Nobody wants that to be their full-time job.

We don’t want to be the assistant to the AI.

What about AI Coding Standards?

One of the funniest things to do with ChatGPT is to ask it to write a bunch of text and then tell it to write it like Darth Vader, Snoop Dogg, Donald Trump, or others. It is fantastic how it can change the writing voice and style.

You can’t tell ChatGPT to write code like Snoop Dogg, but you can tell it to use “more, smaller methods” or “10 spaces instead of tabs”.

Generating code with AI opens up a debate around coding style and standards it needs to follow.

Here are some examples of things to consider about the output of the AI code:

  • Code format - Tabs, spaces, curly brace placement, variable names
  • Abstractions - Interfaces and abstractions around everything?
  • Configuration - How does it use configuration or magic strings?
  • Method size - Are we going to extremes, like a max of 5 lines of code per method?
  • Inline code - Are we using inline ifs and other strategies to compact the code so much it looks minified?
  • Error handling - How does it handle exceptions?
  • Security - Hopefully, it doesn’t create SQL injection and other vulnerabilities (see the sketch after this list)
  • Performance - Is the code optimized for performance, and does it avoid N+1 type issues?
  • Comments - Will it put comments in the code to explain it?
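On the security point above, here is a hypothetical example of the difference I mean; the table and column names are invented, and the point is the pattern, not the specific query.

using Microsoft.Data.SqlClient;

public static class UserQueries
{
    // BAD: user input is concatenated straight into the SQL statement.
    // An email like "'; DROP TABLE Users; --" becomes executable SQL.
    public static SqlCommand FindUserUnsafe(SqlConnection conn, string email)
        => new SqlCommand(
            $"SELECT Id, Email FROM Users WHERE Email = '{email}'", conn);

    // GOOD: a parameterized query. The input travels as data, not as SQL,
    // so the database never treats it as part of the statement.
    public static SqlCommand FindUserSafe(SqlConnection conn, string email)
    {
        var cmd = new SqlCommand(
            "SELECT Id, Email FROM Users WHERE Email = @email", conn);
        cmd.Parameters.AddWithValue("@email", email);
        return cmd;
    }
}

If an AI generator hands you the first version, a human reviewer still has to be the one to catch it.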

Taking code from it and cleaning it up to follow our standards defeats the purpose. In the future, I imagine we will train the AI on our existing code so that it follows our conventions.

It’s one thing to make a solution for FizzBuzz or a bubble sort. It’s another to create a new method in an existing code library that uses specific frameworks, configurations, conventions, etc. We will eventually get there with things like GitHub Copilot X that process our entire codebase.

The AI needs to work for us!

If I know anything about developers, nobody is signing up to debug broken AI code all day. We want to build things. We are excited to have AI help us build stuff. It makes for a great assistant to do repetitive tasks and spot potential bugs in our code.

The AI needs to be our assistant. None of us want to be the assistant to the AI and fix its mistakes.

We don’t want to be the Assistant to the Regional Manager. We want to be the Assistant Regional Manager. Thanks, Dwight.

Please subscribe to my Substack blog for all my updates. You can also follow me on Twitter or TikTok for more daily content. I also host a podcast called Startup Hustle about entrepreneurship.

If your company needs to scale up your development team, please consider my company Full Scale. We have 300+ software developers in the Philippines. We do outstaffing for dedicated teams, not project outsourcing, which usually doesn’t work well.

Also, if your team needs help with application monitoring, please check out Stackify, a company I founded in 2012. The product is explicitly built with developers in mind to track errors, logs, and code-level performance (APM).

You’re my bro. I use this tool for exactly the same purposes.

Carl DeAmaral

Experienced .NET Programmer/Engineer

My high school guidance counselor told me that programming was a good field right now. But he made sure to caution me that “by 1985 or so, all the programs will be written.”

I was actively developing during the period when neural networks and SVMs (Support Vector Machines) were all the rage. That technology was touted as the be-all and end-all for certain tasks. The problem was that they were only as good as the data used in the training sets. To make that statement clear: what was needed was a validated data set that spanned the space to which the technology was to be applied. That set was then divided into a training portion and a test portion.

After release, if anything in that space changed, the neural networks or SVMs needed to be retrained. This may seem obvious, but it required monitoring the space for changes and then going through the process of retraining and releasing, something a lot of companies did not understand or plan for.

Now we come to ChatGPT, which also relies on a warehouse of information that it accesses through natural language to provide suitable responses. Its information needs to be validated and continuously updated with corrections and additions, and its weighting and correlating algorithms need to be improved. Users of AI should not blindly accept the results but should have orthogonal ways to validate them.

We all know what happened to Clippy

David Blomeyer

Founder - Parable, Jack of Tech and Many Trades, and Veteran

My take is that it is “to be determined”. Imagine both outcomes and imagine the “infrastructure” each would take, both technical and social. Which set do you want to build?
