The missing teammate - Generative AI
I don't know about you, but I am tired of seeing all of these silly posts from CEOs and laymen claiming that AI will replace programmers anytime soon. It shows a lack of understanding of the current models and their trajectory. This isn't to say that LLMs don't have a place in programming, or that they can't help improve code, because they can. The swath of information and misinformation on the internet has made it very difficult for new developers to find working code, much less up-to-date working code. The latter is NOT solved by LLMs alone, but it MAY be solved with better RAG-based coding assistants.
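To make that last point concrete, here is a minimal sketch of what a RAG-style coding assistant loop might look like. The in-memory doc store and the `ask_llm` function are hypothetical placeholders, not a real library API; a real assistant would use a proper vector index and your LLM client of choice.

```python
# Minimal sketch of a RAG-style coding assistant, assuming a tiny
# in-memory doc store; `ask_llm` is a hypothetical placeholder for
# whatever LLM client you actually use.

DOCS = [
    "pathlib.Path.read_text() reads a file and returns its contents as str.",
    "requests.get(url, timeout=...) performs an HTTP GET; always set a timeout.",
]

def search_docs(query: str, k: int = 2) -> list[str]:
    """Naive keyword retrieval; a real assistant would use a vector index."""
    terms = query.lower().split()
    scored = [(sum(t in d.lower() for t in terms), d) for d in DOCS]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def ask_llm(prompt: str) -> str:
    """Placeholder: call your model here and return its reply."""
    raise NotImplementedError

def answer_coding_question(question: str) -> str:
    # Ground the model in current documentation instead of whatever it memorized.
    context = "\n\n".join(search_docs(question))
    prompt = (
        "Answer using ONLY the documentation below; "
        "if it does not cover the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

The key idea is simply that the model is handed current, curated documentation at question time, rather than relying on whatever stale code it saw during training.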
Quick Facts:
None of the above statements negates the usefulness of LLMs for humans! We can use LLMs in conjunction with the many public and private data sources available to help our respective industries improve data quality, improve code quality, and convert legacy systems into modern code bases that carry less technical debt. But none of that removes the developer from the loop; instead, it augments developers so they become more capable and more productive.
I recently read a post from someone on LinkedIn describing their new workflow: generate the code, check it in, then head off for a latte or some other nonsense. No developer should blindly trust an LLM's response; for goodness' sake, the model itself even tells you not to do this!
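The opposite of that latte workflow is a simple gate: nothing generated gets committed until the tests pass and a human has read the diff. Here is a rough sketch under the assumption of a pytest suite and git; the commands and messages are illustrative, not a prescribed setup.

```python
# Sketch of a "trust, but verify" gate for LLM-generated changes.
# Assumes a pytest suite and git; adjust the commands to your project.
import subprocess
import sys

def run(cmd: list[str]) -> bool:
    """Run a command and report whether it succeeded."""
    return subprocess.run(cmd).returncode == 0

def main() -> int:
    # 1. The generated code must pass the existing test suite.
    if not run(["pytest", "-q"]):
        print("Tests failed; do not commit the generated code.")
        return 1
    # 2. A human still reads the diff before anything is checked in.
    run(["git", "diff", "--stat"])
    if input("Reviewed the diff and happy with it? [y/N] ").lower() != "y":
        print("Aborting; nothing committed.")
        return 1
    return 0 if run(["git", "commit", "-am", "Reviewed LLM-assisted change"]) else 1

if __name__ == "__main__":
    sys.exit(main())
```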
Here's where LLMs are powerful, and the conclusion to this chain of thought: LLMs are the missing teammate. They can answer questions at any time of day, regardless of time zones. They sometimes get the answer wrong, and when they do, the developer should correct the LLM and provide feedback: "No, that's not how you spell Strawberry, it has three r's in it." As we move toward agent-to-agent and agent-to-data-agent models, the curve to becoming productive will only get shorter. But until we see a dramatic change in hardware and computational requirements, developers will still be needed; they will need to grow their skill sets, and they will become better with LLMs.
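For what "correct the LLM and provide feedback" means in practice, here is a minimal sketch of a chat loop that keeps the running conversation, so a correction like the Strawberry example actually reaches the model on the next turn. The `send_to_model` function is a hypothetical stand-in for a real chat-completion client.

```python
# Sketch of a correction loop: the developer's feedback is appended to
# the running conversation so the model sees it on the next turn.
# `send_to_model` is a hypothetical stand-in for a real LLM client.

def send_to_model(messages: list[dict]) -> str:
    """Placeholder: call your chat model with the full history."""
    raise NotImplementedError

def chat() -> None:
    history = [{"role": "system", "content": "You are a coding teammate."}]
    while True:
        user_msg = input("you> ")
        if not user_msg:
            break
        history.append({"role": "user", "content": user_msg})
        reply = send_to_model(history)
        history.append({"role": "assistant", "content": reply})
        print("llm>", reply)
        # If the answer is wrong, the next user turn is the correction,
        # e.g. "No, that's not how you spell Strawberry; it has three r's."
```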