The AI overlords won’t come for you – they will be too distracted
Mirco Hering
Global DevOps Practice Lead - IT Transformation & Delivery Lead - CIO Advisor - Blogger, Author, Public Speaker
So much has been written about Generative AI (GenAI) recently. There is a lot of hype and a lot of opportunity. At work, I focus on the fantastic abilities it has brought us through Copilot and similar tools, but I have also spent time thinking about what comes next…where will this lead…
The good news is that, at the moment, I am not at all worried that the AI overlords will come for us or our jobs. The advances we are seeing will change how we work and live, and more people need to think and write about the implications to be ready for them. GenAI is an excellent tool that makes it much easier to create content…and mostly pretty appropriate content. And that is the thing; it creates appropriate things, not necessarily great things. “So what?” you might say. Well, let me give you a few examples of why we need to think twice about where we want to use it. Of course, the horse might have already bolted, given the frequency of GenAI posts I see.
When we look at how the models were created, we see that they learned from what already exists on the internet. We should expect an article written by GenAI to be about as good as, or perhaps a bit better than, the average article from its learning base (a.k.a. the internet). Now, if we start using GenAI to create more and more content, the content on the internet will increasingly resemble that average. And if that content is fed back into the training of the next model, the model will drift further towards the average. If you have read computer-generated commentary for sports events in the past, you got a flavour of what is coming – very bland, “fast food”-like articles. It might become increasingly difficult to find genuinely new material rather than generated material, so you will likely seek out the more interesting human-written articles by good journalists and pay for them. It might become the next boom cycle for “human” journalism as a way to consume new ideas and thoughts.
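This drift towards the average can be sketched with a toy simulation – a deliberately crude stand-in for model training, not how real LLMs work. Each generation of “articles” is just a number (a style score); we “train” by fitting the corpus mean and spread, then “generate” the next corpus by sampling near that average. The 0.9 shrink factor is an illustrative assumption standing in for the way generated output under-represents rare, distinctive material:

```python
import random
import statistics

def train_and_generate(corpus, n_samples, rng):
    """'Train' on the corpus by fitting its mean and spread, then 'generate'
    new articles by sampling near that average. The 0.9 factor is an assumed
    loss of tail diversity in each model generation."""
    mu = statistics.mean(corpus)
    sigma = statistics.stdev(corpus)
    return [rng.gauss(mu, sigma * 0.9) for _ in range(n_samples)]

rng = random.Random(42)
# Generation 0: human-written articles with a wide spread of style scores.
corpus = [rng.uniform(0, 100) for _ in range(1000)]

spreads = []
for generation in range(10):
    spreads.append(statistics.stdev(corpus))
    # Feed the generated content back in as the next training set.
    corpus = train_and_generate(corpus, 1000, rng)

print(f"diversity at generation 0: {spreads[0]:.1f}, at generation 9: {spreads[-1]:.1f}")
```

Under these assumptions, the spread of the corpus shrinks geometrically with each feedback round: after a handful of generations, everything clusters around the original average, which is exactly the blandness described above.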
That AI is not a panacea, and is very specific to the problem it was created to solve, can be seen with Watson…the famous Jeopardy player. It was created for that particular purpose and won the hearts and imaginations of many geeks like myself. But the revolution in medicine that Watson promised (well, IBM promised) on the back of it never came to pass. You can listen to some great insights about this on the Human vs Machine podcast.
I want to look at three areas of human life as examples before concluding with some thoughts:
Writing code
There is no doubt that GenAI will have a powerful influence on writing code. There is a direct lineage from me buying a book to learn Java, to searching for advice on Stack Overflow, to using Copilot. Each step made it easier to get code samples and find ideas to solve the problem at hand. I like the Copilot name because it indicates what it is best at: being a partner to a developer. There is a risk, however, that we delegate too much to the co-pilot, to the point that we don’t question what is being produced, which will increase the risk and bloat in our software. As an article in the WSJ put it so nicely, using the common analogy of technical debt being like credit card debt: “People have talked about technical debt for a long time, and now we have a brand new credit card here (with Gen AI) that is going to allow us to accumulate technical debt in ways we were never able to do before.”
So we must remain vigilant in how we use GenAI for code generation, and train the next generation of developers to improve code, curate what GenAI creates, and understand the risks of using something derived from an existing public code base. It won’t be as easy as scanning for a vulnerability in open-source code; you will need more dynamic methods in the ongoing fight against hackers who will try to find ways to “infect” your language model with vulnerabilities.
The legal system
The legal system is quite a different area of life from blog posts on the internet and writing code, but it shows that it is not just IT being impacted by this latest technology trend. I recently heard an anecdote from a friend. He told me that before the age of computers, major contracts were only a few pages long because someone needed to type them up on a typewriter. With the advent of the PC and, more importantly, “copy and paste”, contracts became longer and longer. There was no need to be precise, as the incremental cost of including additional pre-existing boilerplate material was nearly zero. With GenAI, the same friend foresees that it will be much easier to create the contextual paperwork for a new court case, which might reduce the cost of suing significantly. And cost is often what deters frivolous court cases. Each lawyer could create the necessary paperwork for many court cases, with GenAI doing the heavy lifting based on client input. This could very easily overwhelm the court system – or we create AI to consume the paperwork and judge for us…but do we want that? The same reduction in the cost of creating paperwork might overwhelm other socio-human systems, like government agencies. After all, if there is no cost to doing something, do you care how likely it is to succeed? You could just create millions of submissions to the patent office in the hope that one of them works out. (The Economist published an article making the point that this increased access democratises the legal system – a more positive perspective on the same situation.)
YouTube
This one was most shocking to me…and truth be told, I learned about it a few years ago when my son was born. As a toddler, he “introduced” me to computer-generated videos for toddlers on YouTube. As an engineer, I could see that the algorithms combined popular children’s songs like “Baby Shark” with colours and characters from popular franchises like Disney. It also encouraged me to find out more. I came across James Bridle, who talks about a new dark age in his book of the same title. The original Dark Ages were a time when humanity used technologies and practices, like leaving agricultural land fallow, without understanding the underlying science. He argues that we are entering a period built on the same principle: we use technology to achieve things that we no longer understand or question.
His examples about YouTube in his book and his talks blew my mind and are very relevant for what comes next (here you can see one of his videos).
A lot of computer-generated content exists on YouTube. And given how easy it is to produce compared with real human-created content, it is easy to predict that the proportion of algorithm-based videos will increase. The toddler videos are a great example – they do not require any human interaction during creation. On the other side are algorithms that “watch” the content, be it to learn from it for an AI model or perhaps even to “artificially” inflate viewer numbers. There is minimal incentive for a platform like YouTube to limit this; after all, larger upload and view numbers define success for such a platform to some degree. Okay, so why should we be bothered by AI consuming AI-produced content without any human ever being involved in the process? Well, one reason is that humans might be affected. There are pretty shocking computer-generated videos on YouTube that will terrify your toddler (and yourself). To me, there is an economic issue here as well. Companies pay real-world marketing money for ads on YouTube; if that money is spent on “AI on AI” activity, it is a waste of a resource we could use more productively as humankind. And that is what I personally find most difficult at the moment – how do we make sure that, as much as possible, we use AI where it adds real value, and avoid the temptation to create lots of AI-on-AI activity in all walks of life: in our software systems, our socio-human systems, and our entertainment and social media?
So with all this in mind, there is an ethical implication here. Suppose we produce more and more content that adds no value, or only marginal value. Does this mean the energy we consume to create it is wasted? Does human engagement with noisy content in the search for value distract us from focusing on more important problems? Those are the questions we should ask ourselves…and companies will not solve them for us. Whether we choose uniqueness over generated content will be down to us as consumers. And as I said, I am not worried about AI at the moment, because the robots will be busy creating content and consuming it again in an endless loop; but will humans be stuck in the same loop…and will this consume our precious resources in a wasteful manner?
For now, we can focus on the productivity improvements coming with the latest AI technologies and on augmenting our work with them – there are plenty of important problems to solve. The conscious decision about how far we allow content creation to go will, however, come sooner rather than later. I hope we make the right choices with our attention and our wallets so that the signal-to-noise ratio on the internet and in our lives remains manageable.
(Note - yes, the picture in this article was also created by AI. The article itself was written by me, with a little help from the Grammarly AI as my co-pilot.)
-------------- For an AI-created version of the article, read below --------------
Title: The Trade-Off: Generative AI, Average Quality Content, and Resource Consumption
Introduction: Generative Artificial Intelligence (AI) has emerged as a powerful tool for creating content across various domains. From text generation to image synthesis and even music composition, generative AI algorithms have shown promise in producing content autonomously. However, while generative AI has achieved remarkable feats, it often falls short in terms of quality when compared to content produced by human creators. Moreover, the computational resources required to train and operate these models are substantial. In this article, we explore why generative AI may produce average quality content while consuming significant resources.
1. Complex Training Process: Generative AI models typically require extensive training on large datasets to learn patterns and generate content. The training process involves training on millions of data samples, which demands significant computational power and time. While the models can generate content autonomously, they lack the depth of understanding, creativity, and intuition that human creators possess. Consequently, the output often lacks the nuanced elements that make content truly exceptional.
2. Dataset Limitations: The quality of generated content heavily relies on the quality and diversity of the datasets used for training. If the training data is biased, incomplete, or unrepresentative of the desired output, the generative AI model may struggle to produce high-quality content. Additionally, training models on vast amounts of data introduces the risk of perpetuating biases and existing inequalities present in the data, leading to subpar content that fails to capture the essence of creativity and human experience.
3. Lack of Contextual Understanding: Generative AI models lack a genuine understanding of the context in which content is created and consumed. While they excel at replicating statistical patterns from training data, they struggle to grasp the nuances, emotions, cultural references, and social context that inform human creativity. As a result, the content they generate often lacks the depth, originality, and personal touch that distinguishes exceptional human-made content.
4. Creative Intuition and Expertise: Human creators possess years of experience, artistic sensibilities, and a deep understanding of their craft. Their ability to bring together diverse influences, innovate, and adapt to changing trends is a testament to their creative intuition and expertise. Generative AI models, on the other hand, lack this artistic intuition and struggle to replicate the intricate decision-making processes that human creators employ. As a result, the content they produce tends to be formulaic, lacking the depth and originality that audiences seek.
5. Fine-Tuning Challenges: Fine-tuning generative AI models to produce higher quality content is a complex task. Adjusting the model's parameters, dataset composition, or training methodologies can have unpredictable effects on the output quality. It often requires extensive experimentation, trial-and-error, and expertise to achieve the desired results. This fine-tuning process further adds to the computational resources and time required to improve the quality of the generated content.
6. Computational Resource Requirements: Generative AI models, especially those with high complexity and large-scale architectures, demand significant computational resources to operate effectively. The training phase often involves weeks or even months of processing time on powerful hardware infrastructure, consuming substantial amounts of electricity. Additionally, deploying these models in real-time applications, such as generating video content or running interactive experiences, requires powerful servers or cloud computing resources, contributing to the overall energy consumption and environmental impact.
7. Ethical Considerations: As generative AI continues to advance, ethical concerns surrounding its usage and impact on society become increasingly important. The massive computational resources required for training and operation contribute to increased energy consumption, carbon emissions, and environmental impact. Balancing the potential benefits of generative AI with its resource-intensive nature is a crucial challenge that needs to be addressed to ensure sustainable and responsible usage.
Conclusion: Generative AI has made significant strides in content creation, but it often produces average quality content due to limitations in training, lack of contextual understanding, and the absence of creative intuition possessed by human creators. The computational resources required to train and operate these models are substantial and have implications for environmental sustainability. While generative AI holds promise, it is essential to acknowledge its current limitations and explore ways to refine the technology further to bridge the gap between machine-generated and human-created content. By addressing these challenges, we can strive to unlock the full potential of generative AI while ensuring that it contributes positively to the creative landscape and minimizes its impact on resources.
I re-read parts of James Bridle’s New Dark Age - and it is still one of the best things written about the upcoming challenges for us, even though it is from pre-ChatGPT days. Check it out if you get the chance: https://www.amazon.com.au/New-Dark-Age-Technology-Knowledge/dp/178663547X?nodl=1&dplnkId=fcfa01c0-9c80-4308-9d5f-9a2cb2313967