Generative AI: This is what Tipping Points look like
This is moving really fast.
I just returned from a Sunday visit to Stuttgart's Kunstmuseum, located on the Schlossplatz right in the heart of the city. The current exhibition is called "Shift" and shows recent works about and with Artificial Intelligence. It's a great show, and I was very pleased to see so many people interested in AI. (In 2018, we recorded an episode of our podcast #digdeep about "Mixed Reality" with the head of the museum, Dr. Eva-Marina Froitzheim - listen here.)
But one thing was missing: generative AI barely appeared, surfacing only as a side note in one of the works.
The show opened in February, and it was surely prepared several months in advance, which means the curation largely predates ChatGPT's entrance on stage in November 2022. Last week, #GPT4 was released to paying OpenAI users, and I can remember few moments in my professional life that were as hyped, and at the same time as impactful, as the current advances in generative AI.
Apropos "hype": let's have a look at the Gartner Hype Cycle of 2019, just four years back. It's interesting to see that generative AI doesn't appear at all among the emerging technologies. In the 2019 portfolio, only GANs (Generative Adversarial Networks) appear.
Most of the "AI" you have seen so far was built on so-called discriminative algorithms. Look at this picture: is it a car or a bike? Those models make decisions based on their training data.
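For the technically minded, here is a minimal sketch of the discriminative idea, using scikit-learn (my choice for illustration, not mentioned above) on invented toy features. The model only learns a decision boundary between classes; it cannot produce anything new.

```python
# Discriminative model: learn a decision boundary between classes.
# Features (weight in kg, number of wheels) and labels are invented
# purely for illustration.
from sklearn.linear_model import LogisticRegression

X = [[1200, 4], [1500, 4], [12, 2], [9, 2]]  # [weight_kg, wheels]
y = ["car", "car", "bike", "bike"]           # labels from the training data

clf = LogisticRegression().fit(X, y)
print(clf.predict([[1100, 4]]))              # -> ['car']
```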
Generative algorithms do something different: they learn conditional probabilities from the training data (given this, how probable is that?) and produce new data. The generative family has three famous members:
The #Transformer model was introduced by Google in 2017 (see the original paper here) and is, alongside older techniques, the basis for GPT, DALL-E and many others. It is quite ironic that Google's competitors Microsoft and OpenAI have so far benefited more from this work than Google itself.
Transformer-based generative text AIs are also called Large Language Models (LLMs).
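To make the "given this, how probable is that" idea concrete, here is a deliberately tiny sketch: a bigram model that learns which word tends to follow which in a toy corpus, then samples new text. Real LLMs condition on much longer contexts with a Transformer, but the generative principle is the same. The corpus is invented.

```python
# A minimal generative model: learn P(next word | current word)
# from a toy corpus, then sample new sequences from it.
import random
from collections import defaultdict

corpus = "the car is fast the bike is light the car is light".split()

# Count bigram transitions: which word follows which in the training data.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Generate: repeatedly sample a likely successor
# ("given this, how probable is that").
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))  # e.g. "the bike is light the car is"
```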
A different concept is so-called Artificial General Intelligence (AGI), or Strong AI. This is the endgame of AI: the capability to handle any kind of challenge with intelligence equal or superior to a human being. Whether AGI can be achieved at all is controversial within the AI community. But it is still a different beast from what we see today.
Exciting new workhorses
Even though most of the buzz is around GPT (the generic models GPT-3/4 and their dialogue application ChatGPT), much more is emerging at the moment. Let's attempt a shortlist:
This list is far from complete; new products are released every week (!), though they often build on the existing foundation models.
OpenAI lists many example use cases for GPT-4 on its website, explicitly non-exhaustive. The list is impressive, and it is worth working through and trying things out to understand the power of these tools.
User adoption like never before
Users understood the power of these tools right away: have a look at the adoption rate in comparison to other platforms that already scaled famously fast. This is insane.
So, the world is going crazy for this gang of clever algorithms. But what exactly are they? Let me offer you two options to choose from (I don't want to overstrain you on a Sunday night).
Option Red: This is the end of the world as we know it. Computers will take over. If you work in a creative job, retire NOW. Generative algorithms will soon write their own optimizations, and we will need to contain them.
Option Blue: Come on, this is just a statistical language model with roots in the 1970s. Only the uninformed public is excited; it can only repeat what it has learned somewhere. Nice to play around with, but if you want to introduce these capabilities into a real-life workstream, there is still a long way to go.
Understanding the Excitement
So which of the two options is true? My point of view: neither. Let me explain.
Why do these models behave in such a freakishly human way? Because they learned from humans, and are kept in check by humans.
How to use it - and how not to
OpenAI, for example, makes it very clear: the current models are in a beta phase, they were trained mostly on data ending in 2021, and they are NOT telling the truth; they are making statistical guesses. So the user input, the so-called "prompt", decides what you will get.
Direct generative models the way a film director works with actors, not the way a carpenter handles a tool.
Playing around with the models, you will probably start with simple questions and sentences. But prompting can go much further than that:
Rapidly, a "prompting industry" has evolved, helping customers formulate the prompts that get the best results out of the models.
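As an illustration of the "film director" style of prompting, here is a minimal sketch with the openai Python package (the pre-1.0 API, as of early 2023); the model name, wording and parameters are illustrative choices of mine, not a recommendation.

```python
# Prompting as direction-giving: a system message frames the role,
# the user message carries the task. Requires OPENAI_API_KEY in the
# environment; uses the openai package's pre-1.0 ChatCompletion API.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-3.5-turbo"
    messages=[
        # The "director's brief": persona, audience, constraints.
        {"role": "system",
         "content": "You are a patient teacher explaining to a ten-year-old. "
                    "Answer in three short sentences."},
        # The actual task.
        {"role": "user",
         "content": "Why do transformers pay attention to some words "
                    "more than others?"},
    ],
    temperature=0.7,  # higher = more varied, lower = more deterministic
)
print(response.choices[0].message.content)
```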
So far, internet links or cited scientific papers produced by GPT-3 were mostly invented (so-called hallucination). Its successor GPT-4 is expected to mitigate this problem and be more factual. Effort is also going into combining real-time search and information retrieval with the pre-trained generative models. This will pave the way for using (perhaps hybrid) models in domains where exact information and ground truth are decisive. You do not want to fly in an airplane designed by GPT-4.
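Schematically, such a hybrid approach could look like the sketch below: retrieve trusted, current snippets first, then let the model answer only from them. The my_search() function here is a stand-in for any real search or database lookup, not an actual API.

```python
# Schematic retrieval-augmented prompting: ground the model in retrieved
# facts instead of letting it guess. my_search() is a placeholder for a
# real search backend; it is NOT an existing library call.
import openai

def my_search(query: str) -> list[str]:
    # Placeholder: return trusted, up-to-date snippets for the query.
    return ["Snippet 1 about " + query, "Snippet 2 about " + query]

question = "What changed in the latest release?"
context = "\n".join(my_search(question))

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer ONLY from the provided context. "
                    "If the context is insufficient, say so."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```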
More on use cases in engineering, sales & marketing in one of the following editions.
And a whole set of relevant topics needs intense discussion, and perhaps also regulation. The algorithms crawl the intellectual property of people and enterprises and reproduce it in a non-traceable way. They can expose restricted information. They can produce harmful or illegal output. They are stochastic, and they do not disclose or make traceable their internal rationale.
Tipping point towards Hyper-Renaissance
So why are these tools still so extremely important? Here are my very personal hypotheses.
Want to learn more about Tipping Points?
In 2020, my podcast partner Frauke Kreuter and I wrote a book on the power and mechanics of #TippingPoints. Get it here. (Sorry, German only.)