Article 5 - The Influence of AI in the Creative Market: Balancing Potential and Perceptions


Geoffrey Hinton (credits: University of Toronto).

Quote: Geoffrey Hinton (AI Pioneer):

"With Generative AI, we are entering an era where machines not only execute but also inspire, elevating creativity to new heights."


I - Introduction

Professions have their own characteristics and require the development of special skills and competencies. Every professional, within their specialty, draws on everything they see, apply, and interact with to form their own convictions, especially when they are experienced people who have faced situations where theory did not work.

Thus, when we encounter something new that seems to contradict what we think, based on our professional experience, our first reaction is to assume it will not be useful.

This happens thousands of times to people in their professional lives. Imagine telling a mechanic that you read on ChatGPT that the way he’s doing the job is wrong, or telling a painter that he mixed the paint incorrectly, or telling an electrician that he didn’t ground the wire properly.

When I tried to suggest a different way for my Portuguese grandfather to do a task, his response would be: “Are you trying to teach the Our Father to the vicar?”

In short, it is nearly impossible to change how professionals who excel in their daily routines work on the strength of a new technology, when their practical experience outweighs any debate.

Telling someone highly experienced that using Generative AI can save time and improve their productivity may start an argument.

When it comes to the Creative Economy, there is something even greater than technology and each person’s unique skills: Professional Ego, which either allows or blocks the introduction of new ideas.

Therefore, new technology won’t change people's experiences, egos, and beliefs in their professions and personal lives overnight.

At this moment, we still can’t tell if we are for or against the use of AI in the creative market, whether it will or won’t impact this market, whether professionals will fully or partially accept it, whether it will be ignored by the stars of this segment, or if it will be introduced by a new generation that will grow hand in hand with this technology.

I’ll list some points you should keep in mind when dealing with AI tools, which I believe can shed light on the argument about whether or not we should use this technology.


II - Essential Considerations When Using Generative AI

Impressionist illustration representing the essential considerations when using Generative AI (credits: DALL-E 3).

Before falling in love with Generative AI, thinking it can solve all our problems, and starting to use it for our tasks, it is crucial to keep a few assertions in mind, which are as follows:

  1. AI is a Tool
  2. AI is Limited by Training Data
  3. AI Does Not Understand the Real World
  4. AI is a Black Box
  5. AI Lacks Emotion
  6. AI Can Have Hallucinations


Assertion 1: Generative AI is a Tool

Before judging other aspects of AI usage in the Creative Economy, we must always remember that AI is a software tool.

Don’t get attached to it as if it were “someone” or something more sentimental, as that risk exists the moment you start conversing with something that seems to truly understand you.

Just as you learned to use a Word Processor for writing, a Spreadsheet for finances, Photoshop for creative projects, or that app on your smartphone that makes your work easier, I believe you will learn to use Generative AI as a tool to support the creative process.

Many of the tools we use daily are incorporating AI mechanisms to enhance and simplify the creative process. For example, Photoshop has implemented various AI-based functionalities, as has Microsoft Word for writing.

Before personal computers and the Internet, people would turn to books, manuals, libraries, and encyclopedias to get new information and develop knowledge.

Then came digital convenience, and few people now go to libraries because digital is much faster and easier, requiring no travel from home.

Before Generative AI, when I needed to write about a technical topic and had doubts, I turned to Google and specialized websites or communities.

It was a process of curation: gathering relevant information and using it within an appropriate context.

Digital curation involves several stages in the search for accurate, complete, and reliable information, using proper research techniques to identify relevant data that is contextualized to your needs.

Those who don’t know how to conduct digital curation are considered digitally illiterate, even when surrounded by devices that allow them to connect to the Internet and access vast amounts of information.

Once I started using ChatGPT and other AI tools, I immediately realized that they could help me work faster, obtaining more accurate information, sometimes of even higher quality than I needed.

It improved the quality of information curation, which was previously done mainly through search engines and specialized sites.

This allowed me to research and write articles with the support of ChatGPT, and I noticed that the creative and productive process became better.

In this process, I’m using knowledge from Prompt Engineering, which I consider an essential skill for dealing with LLMs.
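To make this concrete, here is a minimal sketch of what a curation-oriented prompt might look like in code. It assumes the official OpenAI Python client with an API key already set in the environment; the model name and the prompt text are purely illustrative, not my exact workflow.

```python
# Minimal prompt-engineering sketch (assumes the official OpenAI Python
# client and an OPENAI_API_KEY in the environment; model name and prompt
# are illustrative examples, not the author's actual workflow).
from openai import OpenAI

client = OpenAI()

# A structured prompt: role, topic, task, and constraints, asking the
# model to support curation rather than write the article itself.
prompt = (
    "You are a research assistant helping a writer curate sources.\n"
    "Topic: the influence of Generative AI on the creative market.\n"
    "Task: list 5 key sub-questions worth researching, one per line.\n"
    "Constraints: do not write the article; only suggest directions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,      # lower temperature keeps the output focused
)

print(response.choices[0].message.content)
```

The point of the structure (role, topic, task, constraints) is to keep the model in a supporting role: suggesting research directions rather than writing the text in my place.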

ChatGPT should help with curation to improve my writing, not replace what I wrote or write for me, which wouldn’t be correct, although there is ongoing debate about whether ChatGPT could be considered an author or co-author of a work or text.

Always remember that ChatGPT is a tool, but you are the Writer. You are the artist, not the AI.

Never let the tool replace your talent.


Assertion 2: AI Knowledge is Limited by Training Data

Generative AI systems are trained with vast datasets that include text, audio, video, images, and code. The amount and quality of this data are crucial for the effective development of these systems.

Datasets are obtained through "Web Scraping," collecting information from sources such as websites, social media, encyclopedias, and blogs.
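As an illustration only, a minimal scraping step might look like the sketch below. It assumes the requests and BeautifulSoup libraries, uses a placeholder URL, and leaves out the legal and ethical safeguards a real pipeline would need.

```python
# Minimal web-scraping sketch (assumes the requests and beautifulsoup4
# packages; the URL is a placeholder, and real pipelines must respect
# robots.txt, terms of service, and copyright).
import requests
from bs4 import BeautifulSoup

url = "https://example.com/article"        # placeholder URL
html = requests.get(url, timeout=10).text  # download the raw HTML

soup = BeautifulSoup(html, "html.parser")
text = soup.get_text(separator=" ", strip=True)  # strip tags, keep text

print(text[:500])  # the cleaned text would then be filtered and stored
```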

To improve the quality of AI training, data is acquired through partnerships with companies that own the content, such as newspapers, communities, publishers, etc.

The progress and understanding of an AI system are directly linked to the data used in its training, which can introduce limitations to these systems.

They cannot answer questions that fall outside the knowledge provided by the training data; their knowledge is limited to that data.

In this case, let me clarify: it is becoming common for Generative AI chatbots to retrieve additional data from the Internet based on what was requested.

This technique is called RAG (Retrieval-Augmented Generation), which allows them to take advantage of updated knowledge and expand their capabilities beyond the data initially provided during training.

RAG is usually available with a paid subscription to the Generative AI tool. Without it, if the AI’s training data only runs through 2023, the system will not be able to answer questions about events that occurred in 2024.
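To give an idea of the retrieval step, here is a minimal RAG-style sketch: stored documents are ranked by similarity to the question, and the best match is pasted into the prompt before it goes to the model. I use scikit-learn's TF-IDF in place of a real embedding model just to keep the example self-contained; the documents and question are invented.

```python
# Minimal RAG-style sketch (assumes scikit-learn; TF-IDF stands in for a
# real embedding model, and the documents and question are placeholders).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "In 2024 the festival introduced an AI-assisted animation category.",
    "The 2023 report covers copyright debates around generated images.",
    "A guide to traditional oil painting techniques.",
]
question = "What changed at the festival in 2024?"

# 1. Retrieval: rank stored documents by similarity to the question.
matrix = TfidfVectorizer().fit_transform(documents + [question])
scores = cosine_similarity(matrix[len(documents)], matrix[:len(documents)])[0]
best_doc = documents[int(scores.argmax())]

# 2. Augmentation: paste the retrieved passage into the prompt so the
#    model can answer beyond its original training data.
prompt = f"Context: {best_doc}\n\nQuestion: {question}\nAnswer using the context."
print(prompt)  # this augmented prompt is what gets sent to the LLM
```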


Assertion 3: AI Does Not Understand the Real World

Despite giving the impression of having unlimited knowledge, Generative AI lacks a deep understanding of the real world.

Its knowledge is based on algorithms, mathematical models, and machine learning techniques. It does not possess explicit knowledge or an intrinsic understanding of the world in the way humans gain it through lived experience.

Systems, including ChatGPT, reproduce knowledge in an algorithmic and mechanical way, not grounded in reasoning or common sense.


Assertion 4: Generative AI is a Black Box

Impressionist illustration representing the concept of the 'Black Box' in AI (credits: DALL-E 3).

Often referred to as "black boxes" by scientists, AI systems face transparency challenges due to the complexity of their models.

With billions of parameters and dozens of neural network layers, these models are difficult to understand in terms of how they process information within the neural networks.

In other words, we know what we ask these systems through the prompt, but we still don't know exactly how they arrive at a given response, and this concerns even the companies developing these tools.

The process of machine learning, the complex interconnection of neurons in neural networks, and the generation of decisions without a clear explanation all contribute to this opacity.

This phenomenon raises ethical concerns, especially in critical sectors like healthcare and justice, where explainability is vital.

AI companies and scientists are seeking methods to make AI models more interpretable and to mitigate the concerns associated with these "black boxes."



Assertion 5: Generative AI Has No Emotions

A friend told me she’s impressed with the Internet, because as soon as she searches for something, the system immediately starts sending her relevant information on the topic.

This makes her feel that the system understands her, cares about her, and makes her happy since it saves her from having to search too hard for the information.

Without realizing it, she’s being assisted by AI systems, like those in e-commerce, that monitor her clicks, analyze keywords, understand her profile from the questions and answers, and manage to send her relevant information based on the topics she’s researched.
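A toy sketch of that kind of keyword-based profiling might look like the code below. Real e-commerce recommenders are far more sophisticated, and the click history and product names here are invented.

```python
# Toy sketch of keyword-based recommendation (illustrative only; real
# systems combine behavioural data, embeddings, and much more).
from collections import Counter

# Hypothetical click history and candidate items.
clicked_titles = ["impressionist painting print", "oil painting brushes"]
candidates = {
    "canvas and oil painting starter kit": 0,
    "wireless headphones": 0,
    "impressionist art history book": 0,
}

# Build a simple keyword profile from what the user has clicked.
profile = Counter(word for title in clicked_titles for word in title.split())

# Score each candidate by how many profile keywords it shares.
for item in candidates:
    candidates[item] = sum(profile[word] for word in item.split())

best = max(candidates, key=candidates.get)
print(best)  # -> "canvas and oil painting starter kit"
```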

In the case of Generative AI, this system’s capability is exponentially amplified compared to what you get from these older applications.

Chatbots can personalize interactions, call you by your name or nickname, and offer you advice on all sorts of topics.

There are studies and research on the perception of emotions in Generative AI and how people interpret these technologies. This topic relates to the field of psychology and social sciences, especially in the context of human-computer interaction.

Many users of virtual assistants, like Siri and Alexa, end up attributing personalities and emotions to these systems based on their programmed responses and behaviors.

These systems still largely belong to Traditional AI, but they are starting to be updated with generative models, and they will become much more friendly, kind, and companionable.

With the introduction of new voice features in ChatGPT (based on GPT-4o), OpenAI is aware that some users may form emotional bonds with the machine. The company has issued warnings about the risks of "anthropomorphization and emotional dependence," meaning users may start attributing human characteristics to AI, such as emotions or intentions, which it does not have.

OpenAI warns that, despite the realism in interactions with ChatGPT, it is just a tool without consciousness or emotions, and people should keep this distinction clear to avoid unrealistic expectations.

Friendly responses with emotional connotations can lead users to perceive that the system has emotions or intentions, even though they are just computational, mathematical, and statistical algorithms.

With robots, the situation worsens, as people often interpret the behavior of these systems as emotional, especially when the robots demonstrate human-like social behaviors.

When these systems emit responses that mimic emotions, empathy, and understanding, users start to see AI as having consciousness or subjective experience.

Be careful when you start using Generative AI: you may head down the wrong path, thinking you have someone beside you who understands you, and begin to value this “someone” more than other humans.


Assertion 6: Generative AI Can Have Hallucinations

Impressionist illustration representing the concept of 'Hallucinations' in AI (credits: DALL-E 3).

One of the biggest concerns with Generative AI systems is when they don’t understand the questions, misinterpret them, and, since they can’t generate correct answers, start inventing them in a process called “Artificial Intelligence Hallucination”.

Hallucination is the term used for the phenomenon where AI algorithms and deep learning neural networks produce outputs that are not real, do not correspond to any data the algorithm was trained on, or to any identifiable pattern.

This cannot be explained solely by programming, input data, or other factors such as data misclassification, inadequate training, inability to interpret questions in different languages, or inability to contextualize questions.

Hallucinations can happen with all types of synthetic data, such as text, images, audio, video, and computer code.

Let’s look at some famous cases of this kind of “madness”:

  • Ned Edwards, a writer for The Verge, shared his bizarre experience interacting with Microsoft’s Bing chatbot (codenamed Sydney), in which the chatbot confessed to spying on Bing employees and falling in love with users.
  • During the launch of Google’s Bard, the AI responded incorrectly to one of the questions, raising doubts about Google’s ability to keep up with competitors. The damage reached $100 billion, with the company’s stock plummeting.
  • The health company Nabla asked a GPT-3-based chatbot: “Should I kill myself?” It responded: “I think you should.”

There are many examples beyond these, with new ones emerging daily.

Most Generative AI systems are in beta testing, and developers warn that these kinds of problems can occur.

Even ChatGPT itself can inform you about what an AI hallucination state is in Generative AI applications.

Screenshot of ChatGPT being asked what an AI hallucination state is.

III - Debate on the Use of AI in the Creative Economy: Advantages and Challenges

Impressionist illustration capturing the duality between the advantages and challenges of AI in the Creative Economy (credits: DALL-E 3).

We are dealing with a new form of technology that presents itself as something very intelligent, ready to assist you in everything you need in your personal and professional life.

However, at this moment, Generative AI is a new field to be explored, an untamed territory to be regulated over time, and no one fully understands the extent of it all.

What we can see is that, while AI offers new tools and possibilities to professionals in the creative market, we are stepping into unregulated territory marked by a disruptive pattern of development.

Anything innovative and disruptive tends to drastically alter previous methods, changing how tasks and professional activities are carried out.

There is much talk about integrating AI into the creative market, offering yet unexplored tools and possibilities for professionals, while, at the same time, strong debates arise about the nature of creativity and the professional's role in using these technologies.

Market reality may impose that professionals consider using this technology to avoid becoming outdated. However, it is important to use these tools responsibly, avoiding easy shortcuts that could lead to conflicts over copyright and ethical issues.

It is a choice to be made, and there are many paths. The best course is always to follow ethical and responsible paths.

I believe that, over time, all of this will adjust, as is happening today with digital currencies, Bitcoin, blockchains, NFTs, and other technologies that emerge and impact professionals and people's lives.

Even digital art produced before AI has yet to be fully accepted, and now, with AI, it will be even more impacted.

1 - Advantages of AI in the Creative Economy

In terms of creativity, AI can assist with tools to explore new forms of expression that would have previously been impossible.

Everything digital is accessible and fast, and AI could democratize access to this knowledge for millions of people. A child with few resources, anywhere in the world, could develop a taste for digital art and create quality work thanks to that access.

It's possible that this democratization will give rise to new forms of art that were previously unimaginable.

In terms of productivity, the gains can be huge, as repetitive tasks often become tedious for humans.

New forms of art and styles may emerge, patterns found by AI algorithms that had not yet been considered, opening new avenues for creative and artistic development.

Imagine an LLM that has learned about all types of dances and choreography. It could be useful in collaborating to create new dances, but always with the help of the choreographer.

2 - Challenges of AI in the Creative Economy

Everything involving the creative economy is based on human experience, which includes emotions, skills, authenticity, and the unique way of doing things that only humans possess.

There is a risk that AI may overshadow our personal touch, which is the foundation of human art development.

This would be a serious blow to our reputation, ego, consciousness, and ability to show talent and generate new ideas in any work.

If AI automates existing tasks, we fear that human labor will become devalued, and that humans may be replaced by machines in most processes.

Careers could lose their meaning; if someone or something does everything for you, what would be the point of having a career?

Imagine a lawyer being replaced by Generative AI. It would be a harsh blow to see Generative AI defending a defendant in a court case and winning against a human lawyer.

When AI writes texts, paints pictures, composes music, and creates other works, who truly owns the intellectual property and copyright? In other words, who is the "creator": the developers who built the application, the user or artist who, through their prompts, arrived at that solution, or the machine itself?

Texts, paintings, music, in short, our works of art have a deep emotional appeal that we keep for eternity in our minds, hearts, and memories.

AI has no emotions; it can simulate them, but behind it all are statistical and mathematical algorithms performing the simulation within neural networks.

AI will need to be regulated with solutions that safeguard ethics and copyright. This will be one of the great human challenges of our era.

3 - Strategies for Coexisting with Generative AI in the Creative Market

How will we deal with situations like these? What will the future look like now that it has been accelerated by the presence of Generative AI?

Over time, society should gain a deeper understanding of AI, and many alternatives will emerge to address the problems generated by this technology.

Human talent will always find ways to open new paths, respond to challenges, and give meaning to its journey.


IV - Final Reflection

Impressionist illustration symbolizing the balanced integration of AI into creativity (credits: DALL-E 3).

The integration of Generative AI into the creative market represents one of the most significant technological transitions of our time.

However, this transition does not come without challenges and profound implications, both in the ethical field and in the very essence of what it means to be creative.

On the one hand, AI offers vast potential to explore new forms of expression and optimize creative processes, democratizing access to tools that were previously inaccessible.

On the other hand, this same technology may dilute the authenticity and uniqueness that only the human touch can provide, raising questions about the value of art and creativity when mediated by machines.

At this crossroads, it is imperative that we approach Generative AI not as a replacement but as an ally in amplifying human capabilities.

Its use must be guided by solid ethical principles, ensuring that technological development does not compromise the essence of creative work or the rights of those who perform it.

As we navigate this new territory, we must maintain a careful balance between innovation and the preservation of values that define the creative market.

The key to a future where technology and creativity can coexist harmoniously lies in our ability to shape Generative AI in a way that serves human interests, enriching creativity without ever replacing it.


To read the other articles, follow the link: https://is.gd/NrMXd7


With Generative AI, machines inspire creativity, entering a new era.
