Generative AI: This is what Tipping Points look like

This is really fast.

I just returned from a Sunday visit to Stuttgart's Kunstmuseum, located at the Schlossplatz right in the city's heart. The current exhibition, called "Shift", shows recent works about and with Artificial Intelligence. It's a great show, and I was very pleased to see so many people interested in AI. (In 2018, we made an episode of our podcast #digdeep about "Mixed Reality" with the head of the museum, Dr. Eva-Marina Froitzheim - listen here.)

But one thing was missing: Generative AI appeared only as a side note in one of the works.

Current show at the Kunstmuseum Stuttgart: "AI and the future society"

The show opened in February, and it was surely prepared several months in advance - starting only a few months before ChatGPT entered the stage in November 2022. Last week, #GPT4 was released to paying users of OpenAI, and I can't remember many moments in my professional life that were as hyped, but also as impactful, as the recent advances in Generative AI.



Apropos "hype": Let's have a look at the Gartner Hype Cycle of 2019, just four years back. It's interesting to see that Generative AI doesn't appear at all among the emerging technologies. In the 2019 portfolio, only GANs - Generative Adversarial Networks - appear.


Source: https://towardsai.net/p/l/generative-ai-gans

Most of the "AI" you will have seen so far was based on so-called discriminative algorithms. Look at this picture: is it a car or a bike? Those models make decisions based on their training data.

Generative algorithms do something different: they learn conditional (Bayesian) probabilities from the training data - given this, how probable is that? - and use them to produce new data. The best-known member of the generative family today is the Transformer:

The #Transformer model was introduced by Google in 2017 (see the original paper here) and is, together with older techniques, the basis for GPT, DALL-E and many others. It's quite ironic that Google's competitors Microsoft and OpenAI have so far benefitted more from this work than Google itself.

Generative text AI built on Transformers is also known as Large Language Models (LLMs).
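To make "given this, how probable is that" concrete, here is a toy bigram model: it counts word transitions in a tiny corpus and then samples new text word by word. It is a drastically simplified sketch of the next-word prediction an LLM performs at vastly larger scale; the corpus and helper functions are made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale training data of a real model.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count word transitions: given this word, how often does that one follow?
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Conditional probabilities P(next | word) estimated from the corpus."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()} if total else {}

def generate(start, length, seed=0):
    """Sample a new sequence word by word - generation, not classification."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        probs = next_word_probs(out[-1])
        if not probs:          # dead end: word never seen mid-corpus
            break
        words, weights = zip(*probs.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(next_word_probs("sat"))   # {'on': 1.0}
print(generate("the", 6))       # a recombination of the training patterns
```

Note how the sampled sentence can recombine fragments into sequences that never occurred in the corpus - the same mechanism, scaled up enormously, is why LLM output feels new rather than copied.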

Different from all of the above is so-called Artificial General Intelligence (AGI), or Strong AI. This is the end-game of AI: the capability to tackle any kind of challenge with intelligence equal or superior to a human being's. Whether AGI can be achieved at all is controversial within the AI community. Either way, it is a different beast from what we see today.

Exciting new workhorses

Even if much of the buzz is around GPT (the generic models GPT-3/4 and their application to dialogue as ChatGPT), much more is emerging at the moment. Let's try a shortlist:

  • OpenAI's GPT-4 is a multimodal (text, audio, photo/video) model, now integrated into Microsoft Bing for chat-enhanced search and, as "Microsoft Copilot", into the future Microsoft Office/365 suite, where it can assist the user in generating text, PowerPoint slides and more. Consultants will love that ;-)
  • Midjourney 5 is the machine for crazily realistic (or futuristic) photo generation.
  • DALL-E 2 is the image creation network of OpenAI.
  • An open source image creator is Stable Diffusion. Click here to hear our interview with Stable Diffusion co-founder Prof. Björn Ommer.
  • Google has opened its PaLM API to developers and has announced new AI tools for its Workspace suite. Its GPT competitor is called Bard.
  • Startup Anthropic has released its chatbot Claude.
  • Meta/Facebook's Large Language Model is called LLaMA; a team from Stanford University used the 7B version to create the chatbot Alpaca 7B. Science teams always love word games.
  • GitHub released an AI programming assistant called Copilot.

This list is far from complete, and every week (!) new products are released - though they often build on the existing foundation models.

On its website, OpenAI lists many example use cases for GPT-4 in a non-exhaustive list. The list is impressive. It's worth working through and trying them out to understand the power of these tools.

Use Case examples. Source: openai.com

User adoption as never seen before

Users immediately understood the power of these tools - have a look at the adoption rate compared to other already fast-scaling platforms. This is insane.

Source: Statista.de

So, the world is going crazy for this gang of clever algorithms. But what exactly are they? Let me give you two options to choose from (I don't want to overstrain you on a Sunday night).

Option Red: This is the end of the world as we know it. Computers will take over. If you are working in a creative job, retire NOW. Generative algorithms will soon write their own optimization and we need to contain them.

Option Blue: Come on, this is just a statistical language model with roots in the 1970s. Only the uninformed public is excited; it can only repeat what it has learned somewhere. Nice to play around with, but if you want to introduce those capabilities into a real-life workstream, there is still a long way to go.

Understanding the Excitement

So which of the two options is true? My point of view: neither. Let me explain.

  • As said above: Generative AI is not to be confused with AGI, Artificial General Intelligence aka human-like intelligence. Generative models have no explicit understanding or description of the world, as e.g. ontologies try to provide.
  • But generative models do not just repeat what they crawled somewhere. They learn patterns and recombine them into something new. (This is also a key element in many innovation methodologies...)
  • Only a few models are fully open source and transparent (see Stable Diffusion), so not all details are known to the public. In general, the models take massive amounts of web-crawled training data, tokenize the input (at word level or below), map the proximity of words in a high-dimensional vector space, and use Google's Transformer method to weight the significance of tokens (the so-called attention mechanism).
  • But there is also a lot of human interaction inside: humans give feedback to the algorithms about the quality and fit of the output, and manual control mechanisms are in place to prevent biased, unethical or criminal use. This is explained in a bit more detail in the following three-step approach OpenAI uses for GPT.


Source: OpenAI.com
Why do those models behave so freakishly human? Because they learned from - and are regulated by - humans.

  • The result is a leap in many formalized tests such as university exams, where questions and answers are accessible online. GPT-4's performance is excellent - in the exam. But that does not mean it has acquired the same capabilities as a student after years of education and training.


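The pipeline described above - tokenize, embed in a vector space, weight tokens with attention - can be illustrated with a minimal NumPy version of scaled dot-product attention, the core operation of the Transformer. All matrices here are random stand-ins for what a real model learns during training.

```python
import numpy as np

# Scaled dot-product attention, the core of the Transformer
# ("Attention Is All You Need", 2017). Sizes and values are made up.
np.random.seed(0)
x = np.random.randn(3, 4)                 # 3 token embeddings, 4 dimensions

# Query/key/value projections (random here; learned in a real model)
W_q, W_k, W_v = np.random.randn(3, 4, 4)
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(K.shape[1])    # pairwise token relevance
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax: rows sum to 1
output = weights @ V                      # attention-weighted mix of values

print(weights.round(2))  # how strongly each token attends to every other token
```

Each row of `weights` tells one token how much to "look at" every other token - these are the attention parameters mentioned above, computed freshly for every input rather than stored.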

How to use it - and how not to

OpenAI, as an example, makes it very clear: the current models are in a beta phase, they were trained mostly on data ending in 2021, and they are NOT telling the truth - they are making statistical guesses. So the user input, the so-called "prompt", decides what you will get.

Give directions to generative models the way a film director works with actors - not the way a carpenter uses a tool.

Playing around with the models, you will at first enter familiar questions and sentences. But prompting can go much further:

  • "Behave as if you were an HR person in a job interview. Do not explain your questions, just ask."
  • "Behave as a Linux terminal."
  • "Photo-realistic photo, AGFA Scala film at ISO 800, aspect ratio 4:3"

A "prompting industry" has rapidly evolved, helping customers formulate the prompts that get the best out of the models.
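To see what such role-play prompts look like on the wire, here is a sketch of the chat-message format that chat-completion APIs such as OpenAI's accept: a list of role/content dicts, where the "system" turn sets the persona. The `build_prompt` helper is my own illustration, not part of any SDK.

```python
def build_prompt(role_instruction, user_message):
    """Package a persona instruction and a user turn in the
    role/content message format used by chat-completion APIs."""
    return [
        {"role": "system", "content": role_instruction},
        {"role": "user", "content": user_message},
    ]

messages = build_prompt(
    "Behave as if you were an HR person in a job interview. "
    "Do not explain your questions, just ask.",
    "I am ready for the interview.",
)
print(messages[0]["role"])   # system
```

The whole "prompting industry" essentially competes on what to put into that system string.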

Source: Digital work platform fiverr.com

So far, internet links or cited scientific papers were mostly invented by GPT-3 (so-called hallucination). The successor GPT-4 is expected to reduce this problem and be more factual. Effort is also going into combining real-time search and information with the pre-trained generative models. This will pave the way for their use (perhaps as hybrid models) in domains where exact information and ground truth are decisive. You do not want to fly in an airplane designed by GPT-4.
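The combination of search and generation can be sketched in a few lines: retrieve relevant sources first, then instruct the model to answer only from them. The keyword-overlap retrieval and the mini knowledge base below are deliberately naive stand-ins for a real search engine.

```python
def retrieve(query, documents, k=1):
    """Naive keyword-overlap search, standing in for a real search engine."""
    q = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def grounded_prompt(query, documents):
    """Prepend retrieved sources so the model answers from facts, not guesses."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

docs = [                                   # made-up mini knowledge base
    "GPT-4 was released to paying users in March 2023.",
    "The Transformer was introduced by Google in 2017.",
]
print(grounded_prompt("When was the Transformer introduced?", docs))
```

Grounding the model in retrieved text does not eliminate hallucination, but it gives the answer a traceable source - exactly what is missing when links and citations are invented.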

More on use cases in engineering, sales & marketing in one of the following editions.

And a whole bunch of relevant topics needs intense discussion, and perhaps also regulation: the algorithms crawl the IP of humans and enterprises and reproduce it in a non-traceable way. They can expose restricted information. They could produce harmful or illegal output. They are stochastic, and they do not disclose or make traceable their internal rationale.

Tipping point towards Hyper-Renaissance

So why are these tools still so extremely important? Here are my very personal hypotheses.

  1. They pass, for the first time in human history, the Turing test - it is impossible to tell that this photo was created by Midjourney.
  2. They successfully crossed the uncanny valley - while many robots and computer-generated avatars seemed scary, it feels natural to use those tools.
  3. They democratize AI for everyone - no, you don't need to learn Python before.
  4. They make the world's knowledge accessible at your fingertips - still want to learn Python? Go for it. Want an app? Draw the UI on a napkin and talk it through.
  5. They finally finish off the crippled, inhuman user interface of Google & Co. - we all adapted to a crappy technology and trained our brains to type keywords into a small text box.
  6. They make us redefine how we think about "expertise" and "creativity" - still we are unique, but where is the fundamental difference?
  7. They create an incredible opportunity to learn and grow. For everyone.

Want to learn more about Tipping Points?

In 2020, my podcast partner Frauke Kreuter and I wrote a book on the power and mechanics of #TippingPoints. Get it here. (Sorry: German only)


#GAI #ChatGPT #GPT #GenerativeAI #AI

#DALLE2 #StableDiffusion #Midjourney
