Boiling us gently with AI

A few friends have asked me about an appropriate (Gen)AI strategy for their companies. The discussion always circles back to building a business enabled by AI, instead of building a business so that AI can be used. While the difference might sound trivial, it can lead to very different approaches - AI strategy tends to be interpreted as ‘We need to use AI. What can AI do for us?’, whereas business strategy informed by AI means ‘We need to achieve X business goals. How can AI help?’

Hence, I'm recapping my observations here on why a business strategy informed by AI is more likely to generate commercial value, and on the key elements to unlocking that. It's also a timely discussion, as 2024 is hyped up to be the make-or-break year for GenAI: integration costs are high, especially in the current funding environment, hence the urgency to realize value and prove profitability quickly.

These are some salient questions to ponder over:

  • Why the fuss over ‘AI’ strategy?
  • Is the current GenAI technology taking over everything we do?
  • Is this all GenAI can do for us? Is this delay in value creation expected?
  • Do we need to build our own model? Is this the only source of value?
  • What do we need to be able to build on top of LLMs, now or in the future?
  • How do we prepare for that?

I am writing this for people who are not deep in the technical side of AI but are interested in building a business in this field, or are responsible for improving their operations with AI. Even if you are not building a business in the GenAI field, I hope this can still provide a structure for asking the right questions when evaluating a potential AI partner or investment opportunity.

tl;dr - GenAI will look like another breakthrough technology when we look back 5-10 years from now, but one year after its launch, the hype has gotten ahead of actual value. That said, the potential is far from maximized yet, and this is actually in line with previous technological breakthroughs.

Data is still the oil, so your best shot would likely be:

  • Understanding the pain points and user journey/business process well to integrate your GenAI workflow into
  • Understanding, defining and getting access to the Minimum Sufficient Data (MSD) to address aforementioned pain points
  • Identifying key stakeholders/partners to build the necessary data partnerships and pre-empt ramifications of generated data
  • Evolving the decision-making process to embrace uncertainties/ambiguities


Why the fuss over ‘AI’ strategy?

To widen your options, unless your business is based on AI!        

Does anyone still remember the hype around data strategy, blockchain strategy, cloud strategy… a few years back?

Google Trends for these 'Strategy' terms

Strategies of different flavors will ebb and flow over time, but the priority is still to drive business growth, with these strategies in support of that overarching objective. The search volume for ‘Business Strategy’ is still ~3x higher than the next one, ‘Data Strategy’, and more stable than the other strategies (apart from the cyclical dips).

While the underlying technologies have gained usage, that tends to happen because of bottom-up needs, not because management mandated a grand ‘Y strategy’. I have seen a couple of these fizzle out because of a mismatch between problem and solution. Solution-based approaches tend to limit the problem space or, worse, fall into ‘confirmation bias’, diverting attention to non-critical areas just to integrate the latest Y technology. If a problem is truly critical, teams are constantly on the lookout for the appropriate solution, so adoption would have happened organically without any mandated strategy.

Take COVID vaccines as an example (a rudimentary one, but something everyone can understand; I'm involved with something less extreme, so feel free to reach out to discuss more).

  • A GenAI strategy might have been to feed in whatever clinical data was available and ask for potential combinations not tested before. Useful outcomes might or might not have come out of this.
  • A more holistic strategy could have identified that the clinical data was not complete enough and emphasized getting the right data and feeding it into the algorithm, while working with the regulators on the approval process in parallel. (In this extreme case, everything was accelerated and most stakeholders were on board because of the stakes. But it was also a once-in-a-generation black swan event; many other use cases would not have this all-the-stars-aligned moment, and a purely solution-based approach would likely be insufficient.)


Is the current GenAI technology taking over everything we do?

Not quite        

While ChatGPT is the fastest app ever to gain mass adoption, its growth has slowed recently and user engagement (hence value gained) has not caught up to the hype. Until recently, there were no compelling, highly valuable commercial use cases.

Here, I'm benchmarking OpenAI against selected workplace use cases, as this is closer to how ChatGPT is actually used. Users' engagement with ChatGPT is lower than with the other use cases, even when measured more liberally.

Yes, ChatGPT is new to the game and definitely still has lots of room to grow. Yet the ‘fastest-growing app in history’ framing tends to mask the fact that most people still do not use it often, possibly because the value has not come through yet.


Is this all GenAI can do for us?

NO!        

Is this delay in value creation expected?

YES!        

This excitement getting ahead of actual applications is common across previous technology evolutions (AI, the Internet, blockchain…): people look for problems to solve with the new technology, instead of the other way round. In fact, innovation progresses slowly (outside popular perception, in niche areas where there is close product-market fit) and then exponentially, which can then seem magical to observers.

Human visionaries are great at projecting the eventuality (IF) of an outcome, based on the trajectory of development. But humans are also quite poor at projecting the timing (WHEN) of that outcome. This is true across industries, but in tech a primary reason is the difference between academic feasibility and large-scale usage. These two require very different skills, and being an expert in one has no bearing on expertise in the other.

  • In physical products, it is the difference between in-lab success (due to academic brilliance or theoretical advancement) and mass production (due to manufacturing/engineering excellence).
  • In digital applications, the difference will be between academic breakthrough and good intuition/understanding of user pain points to allow a seamless adoption. Real value is derived from the integration into user journey or workflows, reinforcing the flywheel to further drive adoption.

There are a lot of examples out there, but these two might surprise a lot of people:

  • While a lot of people work off Microsoft Teams, Google Workspace, Zoom, Slack, Dropbox/Box… it's difficult to imagine working any other way. Yet, as of 2023, enterprise spending on cloud is still less than on traditional IT (~75%). And it's not as if cloud computing only started with COVID - it first came into widespread use in 2006, when Amazon launched AWS.

As early as 2011, the highly respected MIT Technology Review wrote: “Now that every technology company in America seems to be selling cloud computing, we decided to find out where it all began.”

  • When even people like Elon Musk & Stephen Hawking warned (as early as 2014) that “Artificial Intelligence could spell the end of the human race”, it's difficult to imagine a time when the ability of AI was seriously doubted, with funding and academic interest moving away from the field. The latest ‘AI Winter’ only ended in the mid-2000s, when breakthroughs in Deep Learning (the same fundamental technology behind LLMs) could finally translate earlier theoretical feasibility into real-life applications. At that time, even when cutting-edge AI was used in general applications, it was rarely called AI, to avoid the stigma of false promises.

This shows that adoption and more importantly, actual value creation of any ground-breaking technology come in fits and starts. The destination is usually clear enough, but not the projected time to get there.

What is unique about GenAI is that its usage is more accessible than previous breakthroughs. Make no mistake, the underlying algorithms and the building of new models still require massive investment & expertise. But when it comes to using it and understanding what it can and cannot do, the opportunity is a lot wider and the ability required is lower than before. Hence, as people from different backgrounds experiment with this, the possible permutations will grow exponentially. The breadth, depth and velocity of the ecosystem sprouting up around GenAI will be unprecedented.

Similar to the previous cycles, most will likely fail or fizzle out because the problem is not meaningful enough, the market is not large enough or the technology is not quite there yet. But it's also important to note that it's through these experiments that the models become better over time, learn from how humans interact with the models, and find creative amalgamations.


Do we need to build our own model? Is this the only source of value?

NO!        

While the development enabling Large Language Models (LLMs) is foundational and a breakthrough, access to LLMs will likely become a commodity, and building LLMs is so cost-prohibitive that only a few players might remain when the dust settles. For example, OpenAI reportedly costs >$700k daily to operate, and the costs scale with usage.

Looking back, most of the commercial value accrues to creative uses & combinations of different technologies, not necessarily to the underlying technology itself.

  • Arguably the biggest tech winner of the last few years, Nvidia did not set out to build chips for AI. Their GPUs were initially meant for a very niche use case (gaming), and hence they were able to innovate a lot in that area with little competition & public notice.
  • While dominating the gaming market, they foresaw the need for accelerated & parallel computing and, crucially, identified a gap in the existing ways of computing. What was less clear was the kind and market size of the use cases & applications. But Nvidia stuck with its strategy and stayed connected with partners who fed back their needs.
  • When computationally intensive applications gained ground in the last few years (AI, crypto mining, blockchain…), Nvidia's revenue & value skyrocketed and overtook Intel's.
  • Intel, the fabled company that created the world's first commercial microprocessor chip, is still the market & innovation leader in the more general-usage CPU chips.
  • Not only does this illustrate how a better understanding of use cases (through industry expertise) can create more value than the technology's initiator captures, but equally importantly, there can be multiple winners in a general-purpose technology and it's difficult to predict a single winner. Who is to say Nvidia will not be dethroned, or that Intel will not make a comeback?

There are countless examples in digital applications too:

  • TikTok did not invent a completely new approach to AI/ML that makes the platform so addictive. Rather, they approached their recommendation system very differently from other leading social platforms.
  • Uber invented a whole new industry (remember the time when everything was labeled Uber of <Industry X> for <country Y>?) not by building a proprietary mapping/navigation tool or owning the largest fleet of cars, but a clever combination of existing technologies.
  • Nowadays, anyone who wants access to good mapping data or better search on their websites/apps can connect to APIs offered by the likes of OpenStreetMap or Google Search, instead of building something from scratch.

As LLMs are inherently general and not built for any specific industry, they will most likely morph into general, platform-like enablers, allowing amazing use cases to be built on top.
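To make the platform-like-enabler point concrete, here is a minimal sketch (in Python) of building a small use case on top of a hosted LLM rather than training one. The endpoint shape follows OpenAI's public chat-completions API, but the model name and the ticket-triage use case are purely illustrative assumptions, not anything from this article.

```python
import os
import requests

# Minimal sketch: build a narrow use case on top of a hosted LLM instead of
# training a model from scratch. The endpoint follows OpenAI's chat-completions
# API; the model name and the "ticket triage" use case are illustrative only.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumes an API key is set in the environment

def triage_support_ticket(ticket_text: str) -> str:
    """Classify an incoming support ticket into a routing category."""
    payload = {
        "model": "gpt-4o-mini",  # illustrative; use whatever model you have access to
        "messages": [
            {"role": "system",
             "content": "Classify the ticket as one of: billing, technical, account, other. "
                        "Reply with the single category word only."},
            {"role": "user", "content": ticket_text},
        ],
        "temperature": 0,
    }
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(triage_support_ticket("I was charged twice for my subscription last month."))
```

The value here comes from knowing which workflow to plug this into, not from owning the model.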

The other key takeaway from history is that timing is almost everything. Some of these companies waited a long time for their moment; some would have been derided as ‘the deluded team who thought they could bend the world’ if they had launched a few years earlier. Hence, an important thread in this view is patience, and understanding your area thoroughly so you can build something that fits the problems/use cases and the available technology.


What do we need to be able to build on top of LLMs, now or in the future?

The key gaps currently are:

  • Lack of comprehensive non-general, non-English data to feed into the models
  • Hallucination & incorrect representation of data
  • Organizations not being used to non-binary responses, hence unwilling to integrate the output into higher-value decisions or client-facing scenarios

A useful analogy might be eCommerce - we are investing tremendously in, and seeing good progress on, the infrastructure (warehousing, sorting/picking, medium & long-haul shipping), but the gaps are now primarily in the input & output stages.

The path forward for people interested in this will be identifying problems/pain points within your domain, thinking about the ideal state, then tinkering with GenAI to map out the features needed to get there. Then, when the stars align, you will be ready to jump on board and test things out.

Lack of data

Drill for the right data, because data is still the oil - finding & being able to access industry-specific data in the right language is critical (a minimal grounding sketch follows the list below), because:

  • the foundational algorithms & models will likely stay within the domain of the few leading companies,
  • this is a sustainable moat that cannot be swatted away simply because the LLM developers turn on new features, and
  • instead of directly competing, this complements the LLM developers in finding diverse use cases, allowing a sustainable business model through driving real/long lasting value to the users and building a healthy ecosystem.
  • In the early days, when Gemini was asked about its identity in Chinese, it once replied that it is a BAIDU model! (Baidu is the dominant Google-like search engine in China.) This was probably due to the paucity of Chinese data to train the model, hence the reliance on those few sources. While this can be fixed, it demonstrates the challenge of having high-quality data even for large use cases.
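As a rough illustration of why industry-specific data can be the moat, here is a minimal, self-contained sketch of grounding a model's answer in proprietary domain snippets. The documents and the keyword-overlap "retrieval" are made-up placeholders; a real system would typically use embeddings and a vector store, and the assembled prompt would then be sent to whichever LLM you use.

```python
# Minimal, self-contained sketch of grounding an LLM in proprietary,
# industry-specific data. The documents and the naive keyword-overlap
# retrieval are placeholders for illustration only.
DOMAIN_DOCS = [
    "Clinic A reports a 12% no-show rate for afternoon appointments in Q3.",
    "Regulation X requires patient data to stay in-country for telehealth providers.",
    "Clinic B reduced no-shows to 7% after introducing SMS reminders in the local language.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble a prompt that forces the model to answer from the supplied context only."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    # The resulting prompt would then be sent to the LLM of your choice.
    print(build_prompt("How can we reduce appointment no-shows?", DOMAIN_DOCS))
```

The moat is the data you feed in and the workflow around it, not the retrieval code itself.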

Inaccurate output

Give it time – while the models will improve over time, even with the best data, no model will ever reach 100% accuracy & completeness. Hence, this requires a mindset shift in how we make sense of the world & use information.

While not yet suitable for high-stakes environments (e.g. client- or user-facing), existing model results are still hugely valuable for internal use cases, as the stakeholders have a lot more context to interpret the results and a much shorter, more personalized feedback loop with the right knowledge holders, reducing the consequences of hallucination.

For example, when summarizing my previous article on the market size in SEA, the model made some factually wrong conclusions and attempted to paraphrase my hypotheses with the wrong data.
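As a rough sketch of how an internal review gate can contain hallucinations like the one above, the snippet below flags any figure in a generated summary that never appears in the source material. The summary/source strings are made-up examples and the check is deliberately crude; a real pipeline would verify entities and claims, not just numbers.

```python
import re

def unsupported_numbers(summary: str, source: str) -> list[str]:
    """Return numeric figures in the generated summary that never appear in the source."""
    summary_figs = re.findall(r"\d+(?:\.\d+)?%?", summary)
    source_figs = set(re.findall(r"\d+(?:\.\d+)?%?", source))
    return [f for f in summary_figs if f not in source_figs]

def route_for_review(summary: str, source: str) -> str:
    """Hold suspicious summaries for a human; accept only when every figure is grounded."""
    flagged = unsupported_numbers(summary, source)
    if flagged:
        return f"HOLD for human review: unsupported figures {flagged}"
    return "OK to circulate internally"

if __name__ == "__main__":
    # Made-up example: the model inflates a growth figure that is not in the source.
    source = "The SEA e-commerce market grew 15% year on year, reaching $100B GMV."
    summary = "The SEA market grew 25% year on year, reaching $100B GMV."
    print(route_for_review(summary, source))  # -> HOLD for human review: unsupported figures ['25%']
```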


Unfamiliar with uncertainties/ambiguities

Understanding stakeholders & business process - are people ready to embrace uncertainties?

  • Understand if stakeholders are ready to move from deterministic to probabilistic decisions. Senior stakeholders have to be comfortable living with uncertainties.
  • Identify the key decision-making processes
  • Identify decisions/use cases with an asymmetrical risk-reward profile (high reward, low risk); a back-of-the-envelope screen is sketched after this list. Develop a framework for driving accountability & learnings when GenAI is wrong
  • Identify areas where AI can complement humans instead of completely replacing them, to phase in adoption and reduce direct opposition, which can otherwise slow down or even reverse adoption (remember the Luddites?)
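To make the asymmetrical risk-reward idea concrete, here is a minimal back-of-the-envelope screen with entirely hypothetical numbers: rank candidate use cases by expected value per decision, so that high-reward, low-downside cases surface first even before the model is perfect.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    p_correct: float       # estimated probability the GenAI output is good enough
    value_if_right: float  # benefit per decision when the output is right
    cost_if_wrong: float   # cost per decision when the output is wrong (incl. review/rollback)

    def expected_value(self) -> float:
        return self.p_correct * self.value_if_right - (1 - self.p_correct) * self.cost_if_wrong

# Hypothetical use cases with made-up numbers, purely for illustration.
candidates = [
    UseCase("Draft internal meeting notes", p_correct=0.90, value_if_right=20, cost_if_wrong=5),
    UseCase("Auto-reply to customer complaints", p_correct=0.90, value_if_right=30, cost_if_wrong=500),
    UseCase("Suggest tags for support tickets", p_correct=0.85, value_if_right=10, cost_if_wrong=2),
]

# High-reward, low-downside cases float to the top; high-downside ones are
# deferred even if the model is usually right.
for uc in sorted(candidates, key=lambda u: u.expected_value(), reverse=True):
    print(f"{uc.name}: expected value per decision = {uc.expected_value():.1f}")
```

The exact numbers matter less than forcing the conversation about downside cost, which is where the accountability framework comes in.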


How do we prepare for that?

Here!


As usual, feel free to reach out if you would like to talk about anything.

Closing off with an AI-generated adaptation of "Killing Me Softly with His Song"

Processing data with cold ease

Coding the future with no pause

Boiling us gently with AI (x2)

Changing employment with each byte

Boiling us gently with AI

I felt the heat of disruption, fear and doubt in our eyes

I sensed our roles shifted wide, machines marching side by side

I prayed for respite, but the code just pressed on...

Yen Siang Leong

Marketing Strategy & Analytics Lead, Devices & Services and GPay at Google

Added quick thoughts on whether Human & AI will be in a cycle of symbiosis: https://www.dhirubhai.net/pulse/human-ai-symbiosis-yen-siang-leong-wdmec/

Updated the second part on how we can prepare ourselves better, here: https://www.dhirubhai.net/pulse/bounce-bound-out-pot-yen-siang-leong-xw22c