How to make efficient use of generative AI
Thomas Wieberneit
Management Consultant, Technology Analyst, Podcast Host, Startup Advisor
Generative AI is here to stay.
It is not just hype, though the hype will probably get worse before it gets better. That we are still in a hype phase is clear from the following chart from Google Trends, which shows search interest in ChatGPT between October 1, 2022 and April 12, 2023.
Similarly, the Gartner Group sees Generative AI technology approaching the peak of inflated expectations in its 2022 hype cycle for artificial intelligence.
To be sure, we see only the tip of the iceberg when looking at the voice, text, or image based services that we all know and use. The Gartner Group also foresees many industrial use cases, ranging from drug and chip design to the design of parts and overall solutions.
You think that these scenarios lie far in the future? Read this Nature article from 2021 and think again.
And in contrast to some of the other hypes that we have seen in the past few years, there are actual use cases that will carry the technology through the trough of disillusionment. Unlike the "Metaverse", blockchain, or NFTs, generative AI is not a solution in search of a problem.
Apart from OpenAI’s GPT and DALL-E models, which surely caught everybody’s attention in the past weeks and months, there are a good number of large language models that are simply less well known. A brief piece of research that I recently conducted unearthed more than 50 models published over the past few years. For their paper A Survey of Large Language Models, which focuses on “review[ing] the recent advances of LLMs by introducing the background, key findings, and mainstream techniques”, a group of AI researchers identified a similarly impressive number of large language models developed in the past few years.
This increased competition, along with research into how the resource consumption of large models can be reduced, e.g. through sparsification, will lead to improved pricing.
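To make the idea of sparsification concrete, here is a minimal sketch of one simple variant, magnitude pruning, which zeroes out the smallest weights of a model so the resulting matrices can be stored and multiplied more cheaply. This is an illustrative toy using NumPy, not how production LLM sparsification actually works; real approaches are considerably more sophisticated.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value. Illustrative only: real LLM sparsification uses
    far more careful, often structured, pruning schemes."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Prune a random "weight matrix" standing in for a model layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"non-zero fraction: {np.count_nonzero(pruned) / pruned.size:.2f}")
```

A matrix with 90 percent zeros can be kept in a sparse format, which is where the memory and compute savings, and hence the pricing pressure, come from.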
Need more evidence? Even powerful tools like ChatGPT are meanwhile no longer the hottest things around. A short while ago, Auto-GPT was published via GitHub and partly made available via a website named AgentGPT. Now, why does this matter? It matters because we are seeing the next significant iteration. The success of using ChatGPT relies mainly on the user's ability to prompt the system. Auto-GPT does this on its own. It is capable of breaking down a complex prompt into smaller ones and of prompting itself with these smaller prompts. It basically augments ChatGPT with a set of additional bots.
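The self-prompting loop described above can be sketched in a few lines. This is a hypothetical toy, not Auto-GPT's actual code: `ask_model` is a stand-in that returns canned answers, where a real agent would call an LLM API at that point.

```python
from collections import deque

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call (hypothetical; a real agent would
    call a model API here and parse its response)."""
    if prompt.startswith("PLAN:"):
        # The model decomposes the goal into sub-tasks.
        return "research the topic | draft an outline | write the text"
    return f"done: {prompt}"

def auto_agent(goal: str) -> list[str]:
    """Toy self-prompting loop: ask the model to split the goal into
    sub-tasks, then feed each sub-task back in as its own prompt."""
    plan = ask_model(f"PLAN: {goal}")
    queue = deque(task.strip() for task in plan.split("|"))
    results = []
    while queue:
        task = queue.popleft()
        results.append(ask_model(task))
    return results

print(auto_agent("write a blog post on generative AI"))
```

The essential point is the feedback loop: the model's own output becomes the next prompt, so the human only supplies the top-level goal.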
It is hard to imagine the limits of this concept.
Consequently, businesses need to find a way to leverage this technology without jeopardizing their own data security and/or exposing their intellectual property (IP).
That this can easily happen has already been learned the hard way by companies as diverse as Amazon and Samsung, to name just two of the better known cases. And then there are ongoing legal cases around the unauthorized use of IP-protected data for training, many open questions around the ownership of generated (derivative?) work products of generative AI, and, not least, the problem of unwanted bias.
The path to be walked involves three major steps.
Which ones? Read them here!