"Let’s not call it 'AI'."
Made with Midjourney

"Let’s not call it 'AI'."

Since embarking on my journey into the world of AI, I've been following courses, YouTube videos, and articles, experimenting with new tools weekly and growing increasingly excited about the future.

I refrain from using the term 'foreseeable future' due to the rapid pace of change.

Several aspects fascinate me:

1. AI has existed in various forms before.

It had a different name. What's revolutionary now is the speed of data processing and generation. If you're familiar with tools like the eraser tool or Content-Aware Fill in Adobe Photoshop or Lightroom, you're basically acquainted with the core principle of generative fill. Similarly, many companies use machine learning programs to categorize documents based on their content, such as images and their respective subjects. With a different programming approach behind these tools, "AI" can now reach these new heights. I won't get technical here; ask your friendly IT friend instead! :D
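To make that concrete, here is a minimal sketch of the "categorize documents by content" idea, assuming Python with scikit-learn installed. It is not any particular company's system; the labels and text snippets are made up. It just shows the classic machine-learning principle that predates the current "AI" wave.

```python
# Minimal sketch of "categorize documents by their content" (point 1).
# Not any vendor's actual system; just classic machine learning with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: document text -> category label.
docs = [
    "invoice total amount due payment bank transfer",
    "contract signed by both parties effective date",
    "portrait photo of a person smiling outdoors",
    "landscape photo of mountains at sunset",
]
labels = ["finance", "legal", "photo/people", "photo/landscape"]

# Turn text into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)

# Classify a new, unseen document.
print(model.predict(["payment reminder for open invoice"]))  # likely ['finance']
```

Nothing here "understands" the documents; it just learns statistical patterns from examples, which is exactly the point about speed and scale rather than magic.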


2. We are still at the beginning stages of this journey.

Every week brings new tools and updates, creating an overwhelming landscape. We currently find ourselves in the middle of the hype cycle, where "AI" is super hyped, prompting countless individuals and companies to jump on board. As time progresses, however, the frenzy will slow down, allowing us to determine the areas, occupations, and projects where "AI" can be used effectively.


3. There are, and will continue to be, intriguing limitations.

  • Ethical limitations are undoubtedly prevalent, especially regarding deepfakes and voice cloning. Unfortunately, solutions to prevent misuse don’t exist at the moment. But we shouldn’t wait for politics to come up with a plan: politicians, much like the average person, often lack the technical knowledge required to devise effective solutions. Instead, companies must take the lead in addressing these concerns; they possess the expertise and capabilities to spearhead such initiatives. There is simply no excuse for waiting until politics establishes legally mandated minimum requirements. It is obvious what that minimum should be: maximum effort to protect individuals from potential harm caused by misuse of one’s products, for example by implementing markers that are impossible to remove. #notadeveloper
  • Copyright limitations: this has always been a legal minefield, even predating generative AI, and I leave it to the lawyers. The intriguing aspect lies in how major content companies are approaching the issue. While The New York Times is suing OpenAI for using its content to train models, Axel Springer Verlag in Germany is adopting the exact opposite approach (https://www.axelspringer.com/en/ax-press-release/axel-springer-and-openai-partner-to-deepen-beneficial-use-of-ai-in-journalism).
  • The source of training data for models is crucial. While most models are trained on vast datasets scraped from the internet, some, like Adobe Firefly, rely solely on specific material: in Adobe's case, the contents of its own stock library. So "AI" can only generate what it knows, i.e. what it has been trained on. However, as highlighted by AI artist Nina Puri in an interview with DOCMA magazine (DOCMA 01/24), when prompted to go beyond its training data, "AI" often struggles to understand the task at hand, revealing inherent biases and a reliance on stereotypes. A valid point, which Google tried to counter with its Gemini model, but the correction overshot, leading to the outcry we have all heard about.
  • "AI" training on its own generated content results in a "dumber AI." As mentioned by Sabine Hossenfelder in a video, two studies suggest that the diversity of output diminishes due to an increasing volume of "average/general" training content, leading to correspondingly "average" generated output. This output, in turn, is then also used to further train the models, feeding into the cycle.


4. "AI" is neither "intelligent" nor "creative"!

The last two points underscore one thing: human creativity is still the major part when working with “AI”. It cannot independently generate concepts.

Contrary to the popular belief that generating content is effortless, it takes a lot of creativity, skill, and patience. You must guide the "AI", articulating the desired outcome much as you would with a human. You enter a creative relationship, bouncing ideas back and forth.


5. It is a new way of visual storytelling!

It is akin to the advent of photography in the late 19th century or the subsequent invention of film: storytellers need to adopt a new set of storytelling principles while navigating the existing limitations of the medium.


6. You might have noticed my use of quotation marks around "AI".

This stems from a thought-provoking statement made by a friend:

"Let’s not call it 'AI'. It’s more an APA (Artificial Production Assistant)."

And he is right. As previously mentioned, human input remains integral to using these tools. That’s what they are: tools! Tools that help you with tasks you either can’t do yourself or that they can handle more time-efficiently than anyone else.

They are your personal creative assistant.


I highly recommend listening to the interview with Renard T. Jenkins by the team at Curious Refuge. They delve into some of these points in a bit more depth. https://open.spotify.com/episode/44LZN6ftnAn3u2GZYb1D2F?si=ed2e7ed1306b4930


Try it out! It is frustrating and so much fun at the same time. Totally worth it!

(Polished with the help of APA ChatGPT)


