Is this time actually different? A closer look at the promise of generative AI
'Hurry slowly' is a phrase often attributed to Caesar Augustus

The latest corporate panacea

Although divided at times, the entire AI industry is rallying around the flag of generative AI. Investors are pouring unprecedented amounts of capital into the space. Technology vendors are racing to release generative AI offerings, either as standalone services or integrated into existing products (e.g., GitHub Copilot). Early adopters promote successful use cases en masse, in a bid to bolster their reputations as innovative, fast-moving organizations that can attract scarce talent. And the major consultancy firms, never hesitant to join the latest trend, are hailing the current developments in generative AI as the precursor to 'the next frontier of productivity.'

On one hand, I can’t help but enjoy this new surge of developments and the resulting enthusiasm for AI. On the other hand, it all feels eerily familiar after prior AI hype cycles. The previous decade was marked by the rise of deep learning and reinforcement learning. It produced examples, such as AlphaGo and the automation of medical diagnoses, that are still often cited today. Similar to what we are seeing now, all participants in the AI value chain joined forces to encourage adoption. Victory over the infamous ‘AI winter’ was declared. Unfortunately, the momentum faded as converting AI investments into increased profitability proved to be neither obvious nor easy.

Notwithstanding prior AI busts, the sentiment surrounding generative AI remains one of unbounded optimism. It reminds me of the “this time is different” mentality which occurs when financial markets are starting to overheat (and underlying fundamentals are starting to be ignored). In such a state, we feel like we are on the brink of a paradigm shift. And in this exciting new world, the structural issues that were always there are brushed away as artifacts of history. So, let’s take a closer look at what differentiates the current generative AI wave from prior AI hype cycles. We’ll also examine how these differentiating factors relate to the main obstacles concerning successful AI adoption.

What is actually different

Firstly, the large language models (LLMs) that underpin OpenAI’s ChatGPT and similar offerings far exceed the performance of prior techniques. Moreover, model performance can be incredibly versatile. When the training data spans a wide enough array of text documents, the models can work in a manner that is largely independent of context and industry. As in the case of ChatGPT, value can be generated for the majority of the user base in a relatively plug-and-play manner; for many use cases, proprietary fine-tuning is not required. Generative AI systems can also be directed towards the creation of content other than text (e.g., images, video, or audio).

This increase in performance, however, comes with a caveat. Progress in language understanding and generation has largely been driven by an increasing number of parameters. But increasing the number of parameters equally implies an increase in model complexity – and hence, model fragility. The high cost of initial LLM training also marks a reversal of the long-standing trend of declining AI training costs. Lastly, the law of diminishing returns will start to kick in at some point (if it hasn’t already). This means that incremental performance improvements will come at the cost of exponentially more complexity (fragility) and rapidly increasing training costs. Sadly, altering this approach is not trivial, as demonstrated by the community’s reports of decreased performance for GPT-4.
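The diminishing-returns dynamic can be illustrated with a toy calculation. Empirical scaling-law studies often model loss as a power law of parameter count; the sketch below assumes that shape, and the constants are made up purely for illustration, not taken from any real model.

```python
# Illustrative only: a toy power-law loss curve of the assumed form
# loss(N) = a * N**(-alpha) + c, as commonly reported in neural
# scaling-law research. The constants a, alpha, c are invented here.
def loss(n_params, a=1e3, alpha=0.3, c=1.7):
    """Toy scaling-law loss as a function of parameter count."""
    return a * n_params ** (-alpha) + c

# Each 10x jump in parameters buys a smaller absolute improvement,
# while training cost grows roughly with model size.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Under these assumptions, every tenfold increase in parameters roughly halves the absolute loss improvement of the previous tenfold increase, which is the intuition behind "exponentially more cost for incremental gains."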

A second factor that differentiates the current wave of generative AI from prior AI hype cycles is the astounding pace at which the technology has been adopted by users, with ChatGPT breaking multiple records in terms of user base growth. Aside from the generalizability aspect mentioned above, I feel that a large part of this has to do with improved accessibility. Most generative AI offerings have an easy and fast registration process. The applications feature straightforward UIs. And these offerings are directly available from any browser and often feature companion apps for on-the-go use. This means that the towering barriers that users commonly face when interacting with AI systems have been broken down for generative AI.

So, what’s the catch? Even though its value has been proven in several domains, such as customer service operations and software engineering, generative AI is neither rare nor inimitable in most cases. Both rarity and inimitability, however, are essential for firms trying to turn a temporary advantage into a lasting one. From this perspective, generative AI as an economic resource ironically suffers from its widespread adoption, ease of access, and the versatile nature of pretrained models. There are, however, ways to increase the rarity and inimitability of generative AI. Think of proprietary fine-tuning, guided by a well-defined vision for generative AI and linked to use cases with clear business value.

What remains the same

Unfortunately, the prerequisites to a sustainable advantage are what make us circle back to the foundational issues related to the use of AI. Study after study has demonstrated that for most organizations, the major obstacles regarding AI adoption involve strategy, data, and talent. In fact, AI adoption is stagnant. About 55 percent of organizations report that they have adopted AI in at least one business function, which is consistent with prior years. And for the majority of those organizations, AI adoption remains limited to that single business function, suggesting that scaling the use of AI within an organization is by no means easy. It is difficult to see how generative AI will make these problems go away.

For those looking to invest in generative AI, taking a step back to address these structural challenges seems to be the most promising route for the long term. Aside from strategy development, this also means taking the time to curate high-quality datasets. Such datasets, especially ones containing data that is unavailable to your competitors, are what will allow you to fine-tune LLMs and future models to your organization-specific context and objectives. Additionally, make sure you remain in control of these increasingly complex models. Think about a proper governance framework for generative AI at an early stage, so you don’t run the risk of transforming the potential advantages of generative AI into an unforeseen liability.

Although its exact origins are unknown, the phrase ‘hurry slowly’ is often attributed to Caesar Augustus. This doesn’t mean that you should ignore the recent developments or that you aren’t allowed to be excited by what’s going on in the field. But with all the hype and enthusiasm going around, be sure to remain thoughtful and deliberate. It often takes courage to innovate. At times like these, however, it may require even more courage to go against the herd. To say no to generative AI when it doesn’t make sense for the specific context of your organization. To take the time to put a proper foundation in place. Many of Augustus’ most successful reforms were slow and gradual, yet their impact is still felt today.

Sources

https://www.cbinsights.com/research/generative-ai-funding-top-startups-investors/

https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier

https://towardsdatascience.com/the-ai-winter-is-over-heres-why-1001e6b7bb0

https://awealthofcommonsense.com/2014/01/defining-time-different-2/

https://www.edureka.co/blog/how-chatgpt-works-training-model-of-chatgpt/

https://docs.kanaries.net/articles/chatgpt-parameters

https://openai.com/research/ai-and-efficiency

https://www.businessinsider.com/openai-gpt4-ai-model-got-lazier-dumber-chatgpt-2023-7

https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

https://www.dhirubhai.net/pulse/what-resource-based-view-tells-us-potential-analytics-ruben-van-wijk/

https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year
