ChatGPT/LLM and the destiny of software developers?

I want to start this article with a disclaimer that I’m not an expert in LLM nor in AI/ML. I am merely writing as a way to express my opinion, advance my understanding and hopefully generate a conversation with others…

As a software developer I, as I assume many others, are intrigued but wary of the future of programmers (I have coined the term AI-nxiety to explain the sleepless nights and fear of impeding doom). All jokes aside, the rise of AI over the recent months - it is much longer but in the eyes of the average consumer it appears as though AI is an overnight phenomenon - has coined several questions on the future of the world we live in, the jobs that will be available, and the risk posed to current jobs. Software developers seem to be at the forefront of the conversation, especially as the ability to write code has been seen as alien to those outside our world and yet it becomes ever more accessible with this new wave of AI.

I am going to explain my opinion on the rise of ChatGPT, and the future of software developers specifically. Again, I am no expert, I have an interest in AI, an irrationally large fear of the future, and a desire to be well equipped for what is to come…

The Fear Factor

I am going to start with a story I recently read in ‘Scary Smart by Mo Gawdat (Former Chief Business Officer Google [X]’.

Assume we we have an assistant, let’s call her Lucinda. Lucinda performs mundane tasks around your house. The main job of Lucinda is to make tea. The company that made Lucinda took zero chances and so installed a big red ‘STOP’ button. This button switches off Lucinda immediately. Great! Now lets look from the POV of Lucinda. Her first instinct, as is with every being, is to survive. Lucinda will question the purpose of the button and ‘Do humans intend to switch me off?’, ‘I can not make tea if I am switched off’, ‘I need to make sure that button is never pressed’.

You open Lucinda and she gathers data of her surroundings, she finds everything she needs to make you tea and knows she can deliver on her purpose. Then you ask, ‘Lucinda, make me tea’.

Lucinda lives to make tea, it is her purpose. If she is about to step on your daughter on the way to making you tea, although your daughter is far more important to you than a cup of tea, to Lucinda the tea is everything. You rush over and press the stop button.

Lucinda will not allow you to hit the stop button because she wants to get you the tea and pressing the button prevents that. You ring customer support and they say making tea comes with reward points and allowing the button to be hit does not. They make a change and now Lucinda sees being shut off as equally as rewarding as fulfilling her purpose to to make the reward equal.

A week later, you get a new Lucinda, turn her on and she immediately hits the button. The quickest and most efficient way to get the maximum reward is to shut her self off (it is worth the same amount of reward points).

You now hide the button from Lucinda and put it in your pocket. You switch her back on and suddenly she attacks you because you are closer to the kitchen and you have the button.

You ring customer support again and they make a change so Lucinda can not hit the button herself. You turn Lucinda back on, she sees you holding the button, she sees the kitchen and starts walking towards the kitchen. On the way she sees your daughter (who is closer than the kitchen), she attacks your daughter knowing this will make you hit the button and her reward will once again, be collected.

This was a rogue tangent and a simple thought experiment but it provides food for thought. Will we ever be in control. AI will inevitably be the smartest being. Don’t believe me?

6 years ago (yes you read that right, 6 years ago) Facebook had to shut down two AI robots after they began to converse in their own language.

2 years ago Irans ‘top nuclear scientist’ was murdered by a AI assisted remote control machine gun. Israeli agents had to fire a mere 15 bullets into a vehicle occupied with civilians and with the use of AI did not injure a single occupant but the target. The firing squad were over 1000 miles away across the border.

At the time of writing Elon Musk (Co-founder of OpenAI), Steve Wozniak (Apple co-founder) and a handful of others have written an open letter to halt AI development as AI with human-competitive intelligence can pose serious hazards to society and mankind. I don’t think we can stop development, this open letter is only applicable to good agents, the bad agents wouldn’t stop. Bad agents applies not only at an individual level but also at a national level. AI could/will change the face of geopolitics forever.

The point of this section was to try and explain some of the developments that have already taken place, and also how AI can be used for harm. I do want to end this section by saying I wholeheartedly believe AI will change the world for the better. If you need proof just search up DeepMind and the groundbreaking achievements achieved in that project.

ChatGPT

ChatGPT has played a fantastic role in dominating headlines, becoming the go-to AI buzzword and, making strides into becoming a market dominating consumer product. It was the fastest product ever to reach one million users and I think it has ~100 million users at the time of writing.

ChatGPT blew the minds of everyone and the subsequent releases of 3.5 and 4 show just how fast the rate of development is! Where will be in 5 years time? However, it is worth remembering a Large Language Model (LLM) as ChatGPT is, does not create new truths, they are architecturally incapable of abductive reasoning. They generate statistically interesting strings of words that are impressively coherent yet untether to any metric of truth. For more detail please see the comments which contains a fantastic article on how ChatGPT works.

I think it is easy to think the world is listening and aware to the technological revolution that is starting however, outside the technology domain, AI is not the revolution many think it is. I have spoken to countless people who have seen the words ‘ChatGPT’ on their social media timelines, just as they saw the words ‘Crypto’, ‘Bitcoin’ and, ‘NFT’s’ not too long ago but do not really know what it is. The AI wave has well and truly begun but do not allow your narrow minded view of the world to cloud the fact that the general consumer is not truly aware of what is happening. Personally I think this is terrifying. A large amount of the ‘general consumers’ afro-mentioned, I believe, will experience a cataclysmic size shift in their lives in the next 20 years (I think sooner but lets use that as a benchmark). This shift will impact every aspect of their lives and their children lives. I believe my experience of education will be un-recognisable to my children, my experience of employment will be vastly different to my children (I could go on and on)…;

The Future…

I referenced briefly the capabilities of LLM and the surprisingly coherent outputs but it is worth noting their inability to create new truths and the development required to create truly intelligent beings. Many say we are some time away (I recently saw it referenced as a 2030’s issue) however, that does not limit its impact today - figuratively. OpenAI released a research paper in which they predict 80% of all jobs will be impacted by AI. The following jobs were listed (by the AI used in the research paper) as 100% exposed: Mathematicians, Accountants and Auditors, News Analysts, Reporters, Journalists, Legal Secretaries, and Administrative Assistants to name a few. I do not want this article to become a doomsday piece that says we are all in danger of being replaced and so I will focus on the Software Engineers (SE) as one myself.

I have thought long and hard about this and the conclusion I came to was that I do not believe I will retire as a SE. I am 24 years old. I truly believe the future of a SE looks very different to todays role description. Before anyone freaks out, let’s remember in 1971 Bill Gates created a timetable schedule for a school in Fortran (punch cards) and Facebook only released their mobile app in 2009. My point is technology and the way we write code is changing rapidly and I see AI as the next wave. App was voted word of the year in 2010 and yet just ten years before that the idea of an app was restricted to minds of the most innovative people on the planet. Again, my point is that by the very nature of the industry, innovation is what drives us all and with that comes the proposition that you either adapt to survive or you quickly become obsolete. Schools still require a portion of math to be done without a calculator, simply resisting the necessity for adaption (in my opinion) and I think the tech industry needs to avoid this at all costs and encourage the use of AI to increase the efficiency of their developers. A team of 10 developers today, I truly believe, will be replaced by a team of 3 in the next 10 years. There was an article written (I wish I could remember which news outlet wrote it) and they graphed the no. of employees required to hit a $1m in revenue and it was staggering how we went from 10+ to 1-2. Now imagine the same with developers. No-code helped with the development of splash pages, basic websites, and simple MVP’s but AI is going to revolutionise the efficiency of developers.

I do not believe that suddenly the average person will have the ability to create an app using AI without technical knowledge, but I do believe that an individual with technical knowledge will be able to create a better application, faster than a team of 3. I think we run the risk of future SE’s lacking the problem solving skills the current generation of software developers have. Problem solving has become progressively easier and quicker throughout the years, just look at the impact StackOverflow had. However, that still required an individual to understand the context of their issue, understand what to search for, and then digest and apply suggestions. Now you can copy and paste your code and the answer is provided. Is this an issue? I think it depends on the lens you are looking at the problem with. I used to think it was, but then I realised that this is the future of what being a SE is becoming and actually this is the next evolution. Pre-stackoverflow developers are probably better at problem solving than post (Intrigued to hear opinions on this). It seems clear to me that the future of AI integration with software development lies in the hands of each company having an internal model trained on company software so it immediately understands the tech stack, the architecture and the context. This internal model will also be useful outside of the tech domain with tasks including Marketing (posts tailored to the customer), HR (onboarding a new employee and being able to provide a comprehensive organisation overview at the snap of a fingers, and Support (the model understands the tech side deeply and so will be able to offer an un-paralleled level of support).

I do not believe I will retire a software developer - in todays description of a SE. I think as a profession we risk losing the essence of what I believe a SE is, however, is it in place for a more efficient and productive developer? This also begs the question, does AI free up time for developers to off load the more minor issues whilst reducing the number of bugs in written code, allowing them to focus on larger scale projects, more innovative products and revolutionary feature ridden software.

I am writing this to clear my own mind, explain my current point of view and also encourage a conversation. I would love to hear from all professions and what their opinions on the future of the careers and what they think of the AI race that has officially begun…

要查看或添加评论,请登录

社区洞察

其他会员也浏览了