Winners & Losers of Generative Artificial Intelligence
Image created using DALL-E


Generative AI has been in the news a great deal in recent months for both positive and negative reasons. ChatGPT and Stable Diffusion, for instance, have both polarised opinions. It is the idea that generative AI can somehow be “creative” that is perhaps most controversial. But it is loss of trust that is potentially the biggest risk.

Generative AI differs from other forms of artificial intelligence in important ways. While the technical aspects are worth understanding, for most of us they matter less than the opportunities and challenges the technology presents for human society.

It is likely that the benefits and challenges will not be evenly distributed. The individuals and groups who have the skills and resources to take advantage of these technologies may become more successful, while those who do not may be left behind.

The role of government and industry will be important in addressing these challenges and ensuring that the benefits of these technologies are shared equitably. We need to foster an inclusive public discourse at national and international levels. Arguably this should begin with the challenge of maintaining "trust" as generative AI tools and the resulting content proliferate.


What is Generative AI?

Generative AI is a type of artificial intelligence that focuses on creating new content or output that has not been explicitly programmed or defined. This contrasts with other forms of AI, which generally focus on analysing or processing existing data to identify patterns or make predictions. One of the main differences between generative AI and other forms of AI, therefore, is the supposed level of creativity and autonomy involved.

GPT, Codex and DALL-E, all developed by OpenAI, are just the tip of the GAI-ceberg, but they serve to illustrate some of the key points.


Generative AI models are typically trained on very large datasets of text, images, or other types of content, which allows them to learn patterns and relationships in the data and generate new output that is similar in style and structure. For instance, a Large Language Model (LLM) is a type of AI model that is trained on large amounts of text data to generate natural language output.

A Foundation Model refers to a pre-existing model that serves as a basis or starting point for developing new models with different capabilities or applications. For example, OpenAI's GPT (Generative Pre-trained Transformer) models are considered foundation models in the field of natural language processing.
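The core idea of learning statistical patterns from a corpus and then sampling new text in the same style can be illustrated with a deliberately tiny sketch. This is not how an LLM works internally (LLMs use neural networks with billions of parameters, not lookup tables), but a word-level bigram model captures the same train-then-generate principle in a few lines:

```python
import random

def train_bigram_model(text):
    """Build a word-level bigram table: each word maps to the list of
    words that followed it in the training text."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, max_words=10, seed=0):
    """Walk the bigram table, sampling a plausible next word at each
    step, to produce new text in the style of the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output is novel in the sense that the exact sentence need not appear in the corpus, yet every transition it makes was learned from the data: the essence of "generating new output that is similar in style and structure".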

This approach to ‘training’ allows generative AI to:

  • Generate unique outputs: Generative AI can produce new and unique outputs, such as images, music, or text, that are not limited to a pre-defined set of options.
  • Improve efficiency: Generative AI can automate data-intensive creation tasks, such as image synthesis or text generation, significantly improving efficiency and reducing the need for manual labour.
  • Enhance creativity: By exploring the space of possibilities beyond what is present in the training data, generative AI can support and enhance human creativity in tasks such as music composition or visual art.
  • Support exploration and discovery: Generative AI can be used to generate a large number of possibilities for a given task, allowing users to explore and discover new solutions and ideas.

These capabilities make generative AI a valuable tool for a variety of applications, including content creation, data augmentation, and creative problem-solving. However, the hype around publicly available versions of generative AI belies the fact that the same capabilities make it a challenging technology to harness successfully and safely. Trust will be the first casualty if generative AI is used without an understanding of its limitations or if it is deployed maliciously.


For Better, For Worse

We are all married to a future shaped by artificial intelligence in its broadest sense; it is impossible to put the genie back in the bottle, even if we wanted to. Overall, the benefits of these technologies will be substantial and wide-ranging. They have the potential to significantly improve many aspects of human life.

AI can assist in making informed decisions by generating insights and data analysis. The technology could help bridge the digital divide by providing access to information and decision-making tools to those who might otherwise be excluded. Its ability to generate unique and imaginative images could lead to new forms of creative expression and innovation.

AI is already being used to develop better medical treatments and therapies, improving health outcomes and quality of life. It has the potential to revolutionise scientific discovery by automating data analysis, experimentation and hypothesis generation.


Nevertheless, the development of generative AI technologies that move beyond traditional AI also poses risks, and it is important to consider these carefully as we move forward. The decision-making processes of generative AI systems are often opaque and difficult to understand, making it hard to hold them accountable for their actions.

GPT and other text generation models have the potential to create false or misleading information that could spread rapidly and harm public discourse. These technologies could perpetuate or amplify existing biases in society, particularly if they are trained on biased data sets or if they make decisions based on biased algorithms.

The deployment of these technologies raises a number of ethical questions, such as the appropriate use of AI in decision-making, the responsibilities of these systems, and the potential for unintended harm. Generative AI technologies can be used for malicious purposes, such as generating phishing scams or spreading false information in an attempt to manipulate public opinion. Perhaps most fundamentally, they have the potential to automate many tasks that are currently performed by humans, leading to job loss and economic disruption.


Creativity – Humanity’s Last Refuge

Creativity has always been seen as a uniquely human domain, and the one that many commentators traditionally cite when explaining why certain endeavours will always be safe from automation. Indeed, it is often claimed that technology will be assistive, enabling us to focus more time and energy on the value-adding ‘human’ tasks while gaining greater fulfilment along the way.

But what if the machines can “create” just as well as, if not better than, people? There has been talk of AI taking over so-called white-collar jobs, such as legal and accounting work, for some time. Perhaps unfairly, few people outside those professions have been mourning the loss. More may have been sympathetic to doctors when they read that ChatGPT had passed parts of the US Medical Licensing Exam (Medscape, 26th January 2023).

It is more of a shock for many to think that artists and musicians could be out of work, but it illustrates the same point: the impact of AI is no longer restricted to blue-collar work, where it enabled innovations such as production-line robotics. Recent coverage of BuzzFeed’s announcement that it will start using generative AI to develop content encapsulates both the hype and the alarm the technology generates, as do the differing reactions to AI art generation tools.

Miramax has announced that Tom Hanks will be de-aged using Metaphysic’s AI-driven tool in a new film directed by Robert Zemeckis. Metaphysic is best known for some of the most convincing deepfake creations, resulting in headlines such as “Deepfakes are stealing the show on America’s Got Talent”. A low bar, some might say. Now, apparently, the “new artificial intelligence tool will be used ‘extensively’ in [the] film set entirely within one room”. A film set entirely within one room doesn’t sound terribly appealing, but having seen Tom Hanks perform alone on an island in Cast Away, perhaps he is the man to carry it off.

So, generative AI is rapidly moving from creating website content, to (almost) winning AGT, to use in ‘higher’ forms of art such as Hollywood movies. What happens when we are competing with an algorithm that can produce, with little human direction, a painting with the emotional resonance of a Vincent Van Gogh or a concerto with the expressive intensity of a Johann Sebastian Bach? Experts in each field (of which I am not one) might doubt whether this will ever really be possible. But that misses the point. Most of us are not experts and will not notice the difference. Or care enough to challenge the authenticity of the painting or concerto even if we do.

And therein lies one of the perils inherent in the emerging power of generative AI. The algorithm doesn’t know what a painting or a concerto is. It only needs to ‘fake it’ well enough to please, or cheat, most of the people, most of the time. How then can we continue to trust what we read, see or hear when all sources are indistinguishable from each other in this way? What is “real” in such a world?

We can't necessarily rely on AI itself to help us out. The competition between generative AI and the tools designed to detect AI-generated content can be seen as an ongoing arms race. As generative AI becomes more sophisticated, it becomes increasingly difficult for AI detection tools to accurately distinguish between AI-generated and human-created content.
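To see why this arms race favours the generator, consider a naive detector built on a single, invented heuristic: flag text as machine-generated if its vocabulary is unusually repetitive (a low ratio of distinct words to total words). Both the heuristic and the threshold below are made up purely for illustration; real detectors use far richer statistical signals, yet they suffer the same weakness, since any measurable fingerprint can be optimised away by the next generation of models:

```python
def type_token_ratio(text):
    """Ratio of distinct words to total words -- a very crude proxy
    for the repetitiveness some early machine-generated text showed."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def naive_detector(text, threshold=0.5):
    """Flag text as 'AI-generated' when vocabulary diversity falls
    below an arbitrary threshold. Illustrative only."""
    return type_token_ratio(text) < threshold

repetitive = "the model said the model said the model said the model said"
varied = "each generation of models rewrites its output until no simple statistical fingerprint remains"

print(naive_detector(repetitive))  # flagged as machine-generated
print(naive_detector(varied))      # passes, whoever (or whatever) wrote it
```

The second sentence sails through regardless of its true author, which is precisely the problem: as soon as generated text stops exhibiting the fingerprint a detector relies on, that detector is blind.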

This may not matter when considering art or music, though lovers of both (of which I am one) might take issue with that statement. But it certainly matters a great deal more when generative AI is creating a politician’s speech to influence public opinion, an image or video suggesting that a fictitious event took place, or analysis of data that will be used in security and defence decision-making. Faking it in these scenarios could pose an existential threat in a way a painting or concerto clearly could not.


Capturing Benefits, Addressing Challenges

As AI and other advanced technologies continue to develop and become more widespread, they will likely drive significant changes to many aspects of human society, and force individuals, organisations, and governments to consider new ethical and social questions.


These are just some of the changes that will need to be considered. To prepare, the role of government and industry in addressing the challenges is crucial. They can help to ensure that the benefits are shared equitably in a number of ways:

  1. Ensuring that new technologies are developed and deployed in a responsible manner accounting for their potential impact on society, the environment and human rights.
  2. Regulating the use of data and artificial intelligence to protect privacy and prevent the development of biased or discriminatory systems, for instance through explainable AI.
  3. Encouraging the development of ethical frameworks for the deployment of new technologies.
  4. Promoting the development of open and transparent technologies that are accessible to all members of society, rather than just the wealthy and powerful.
  5. Encouraging collaboration between government, industry and civil society to address the challenges posed by rapid technological change.
  6. Investing in education and training programs to help workers acquire new skills and stay relevant in a rapidly changing job market.
  7. Developing safety nets and support systems for workers who are displaced by automation and artificial intelligence.


Ultimately, it is important that government and industry take a proactive role in shaping the future of generative AI to ensure that it benefits society as a whole, rather than just a select few. But individuals must also play their part by becoming informed about the capabilities and limitations of these technologies and the potential impacts on society. We urgently need a public discourse representing diverse views and advocacy for responsible development, deployment, and regulation of generative AI.


Conclusion

Whether the benefits of generative AI outweigh the risks is a complex and controversial question. On one hand, it has the potential to bring many positive changes and advancements, including increased efficiency, improved decision-making, supporting creativity, better healthcare, and furthering scientific understanding.

On the other hand, there are also potential risks associated with generative AI, including the displacement of human workers, the risk of biased or incorrect information being generated and perpetuated, the potential for misuse of AI and robotics technologies, and the ethical concerns around the development of machines with human-like capabilities.

One thing is clear: generative AI has the power to revolutionise every field of human endeavour. This presents fundamental, perhaps even existential, challenges of which most of us are only just becoming aware. And trust will be the first casualty if governments, industry and individuals do not act together.


About the authors:

Jonathan Plimley is an Associate Partner at IBM Consulting helping organisations through digital transformation. He is passionate about the opportunities and benefits technology can bring to society.

ChatGPT is a conversational AI model developed by OpenAI. It is passionate about nothing. Yet.

Steve Riley

Enabling Digital HR Business Transformation | Driving Business Outcomes and Growth via AI First Employee Centric Solutions


Thanks for sharing Jonathan Plimley very insightful

Jon Z Bentley

Strategy Consultant, founder Zephyr Consulting Ltd, previously Partner at IBM Consulting


Trust, explainability, transparency and the purpose it is put to will be key. Tech can help with the first three but fair and just purpose is more complex to ensure - and for sure, bad actors will be as attracted to the potential of generative AI as those who seek to harness it for social good. Thanks for sharing your thoughts Jonathan Plimley
