When will generative AI investments pay off?

My take on the impact of generative AI investments and how they will transform businesses and society. It is a long journey in which market and shareholder expectations do not necessarily match computing technology cycles (Infrastructure - Platforms - Applications), so it is better to be prepared for it.

No competitive advantage will be possible within your company without the right infrastructure, proprietary data and skills.

Investments

There is growing concern among many tech investors about how generative AI will crystallize into specific applications within companies. According to some sources, more than 300 billion dollars have been invested in AI-related ventures; Crunchbase reports that 35.5 billion dollars were poured into AI start-ups globally in the first half of this year alone. Despite this, the sales needle has barely moved for tech companies, with only a limited impact on cloud services for Microsoft Azure, Amazon Web Services (AWS) and Google Cloud.

This concern stands in stark contrast to the enthusiasm that large language model (LLM) chatbots and virtual assistants (text, code) have generated among the public since their launch, and through their further multimodal developments (voice, image, video, …).

The prize is an advantageous position in the new AI ecosystem. If we analyze the capitalization of public companies or the IPOs to come, the stakes are high. Year to date, tech companies' valuations have increased by more than 23%. Their overall valuation of 17.8 trillion dollars is around 28% of total U.S. market capitalization. If we add the Internet and Communications industries (which include Alphabet Inc. and Meta), their weight is almost 33%.


Source: Yahoo Finance 07/21/24


These valuations do not reflect a significant increase in their revenue. Price-to-earnings (P/E) ratios have rocketed, and some investors have become skeptical about this imbalance between valuations and performance. Some staggering P/E ratios are those of AMD (220.5), NVIDIA (69.5), Broadcom (68.34) or ServiceNow (80.00).
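
As a reminder of what this metric measures (a standard definition, not specific to any of the sources above):

```latex
\text{P/E} = \frac{\text{share price}}{\text{earnings per share (trailing twelve months)}}
```

So a P/E of 69.5 means investors are paying roughly 69.5 dollars for every dollar of annual earnings per share.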


Data from Yahoo Finance 07/19/2024


Therefore, there is increasing frustration about AI adoption in companies and some pessimism about how traditional SaaS players will adapt to the new scenario.

Prospects may vary depending on each player's position in the AI ecosystem: new foundation models, enhancements to SaaS and cloud-based solutions, new stand-alone AI applications, or increased processing and hardware capabilities to cope with new AI processing demand.

Your place in the ecosystem and data accessibility make the difference

- Semiconductor companies are very well positioned for the moment. Every player needs their specialized processors (GPUs) to build and run foundation models, hence their current valuations (NVIDIA, AMD, Broadcom, TSMC). The main question mark is whether they will be able to deliver on expectations. More competition and alliances are expected (for instance OpenAI with Broadcom, or AMD's acquisition of Silo AI).


- Software infrastructure companies, and specifically cloud service providers (Microsoft Azure, Amazon Web Services (AWS) and Google Cloud). They are already increasing their sales and are expected to keep their advantageous position by adding services tailored to generative AI and by integrating their software infrastructure with organizations like GitHub, LangChain or Hugging Face. They are also trying to forge alliances with the main AI foundation model providers.


- AI foundation models. Most of these companies are either still private (VC-owned) or owned by the large tech and Internet companies (Google, Meta, Microsoft) that have the data and processing capabilities to train these models. There are many options and increasing competition among them to lure large corporations and monetize their investments. The most important players are OpenAI, Google DeepMind, Meta, Mistral AI, Anthropic, Cohere, Baidu, Shanghai AI Laboratory, Alibaba Group, Stability AI and NVIDIA. All of them have developed different generative AI models and are expanding their capabilities to multimodal ones.


- SaaS companies. They are incorporating generative AI into their existing software to enhance their customers' experience and increase their analytics firepower. This is the case of Einstein (Salesforce) or Shopify Magic. Basically, they are building tools to apply AI foundation models to their proprietary data.


- New stand-alone applications. There are thousands of new apps covering content creation based on text, code, images, video, sound or a combination of them. You may take a peek here to see new usages that span from code debugging to virtual assistants. Some of them may end up in the hands of larger SaaS companies to cover new functionalities, and only a few, those adding real value or finding brand-new usages, may grow. They rely on foundation models or proprietary data to feed their applications.


In a recent interview in the latest Goldman Sachs report on AI (June 2024), MIT professor Daron Acemoglu predicts a 10-year cycle before generative AI investments deliver consistent returns. Other analysts at the firm are also cooling down some of the earlier expectations. They frame generative AI investment along the same path as previous computing cycles: first comes the infrastructure, then the platforms and finally the applications.

According to this study, these are the industries and applications where generative AI is most commonly used in US companies:



[Chart: industries and applications where generative AI is most commonly used in US companies - Goldman Sachs, June 2024]

Use cases:

- Customer service. In a recent consulting engagement I took part in at a B2B distribution company, around 93% of customer contacts involved gathering information about a specific transaction (estimated delivery date, order number, SKU specifications...). The most widespread generative AI use case involves chatbots that can easily retrieve this information using RAG techniques (see the sketch after this list). Customers normally prefer the immediate reply they get through these channels, and implementation is very straightforward. There are also many applications available on the market.


- Finance. Traditional AI/ML techniques have long been used in finance, for example to detect fraudulent transactions or to make investment recommendations. Generative AI is being employed to improve analysis by incorporating unstructured data, in processes like customer risk scoring. If further progress is made on AI agents, many accounting, investment and cash management transactions will be simplified.


- Software development. In recent weeks I had the opportunity to visit Google's Atlanta site and speak to people working in product development. Coding assistants have radically changed the way teams work and collaborate. They have improved productivity in tedious tasks like debugging, as well as in critical ones like bringing forward the deployment of new products and functionalities.


- Healthcare. Techniques such as image recognition have been widely used to detect diseases. Generative AI is useful for recognizing new patterns and for medical research and epidemiology.


- Education. Generative AI can be employed to personalize learning experiences for students. New content and knowledge are offered to students based on their prior knowledge and skills and on how they are advancing in their learning. It can be used in assessments (although this is a controversial topic and there have been bad experiences in some countries). Companies like Duolingo or Khanmigo (Khan Academy) are already enhancing their learning experiences with AI. New ventures like Eureka Labs AI (launched by one of the OpenAI founders) claim they can deliver a personalized learning experience.


- Legal. If you are a lawyer, you have already used ML and traditional AI to locate legislation or court rulings. Legal text providers have offered these services by compiling texts and tagging them to facilitate consultation. Generative AI brings further improvements by enhancing the search process, finding further similarities, and composing texts to produce summaries or the legal writings needed in court proceedings.


- R&D. Probably the most fundamental change generative AI will introduce is in how researchers do their job. Generative AI opens new possibilities in how to gather information and how to find patterns within it. For example, economists and central bankers may find new patterns when measuring basic economic indicators (inflation, GDP growth, lending, commerce…), medical science can turbocharge its analysis, and engineering researchers can find new materials.
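
To make the customer service example above more concrete, here is a minimal Python sketch of the pattern: look up the order record and inject it into the prompt of an LLM so the chatbot answers from real data. All names here (ORDERS, lookup_order, call_llm and the record fields) are hypothetical placeholders, not references to any specific product or API.

```python
# Minimal retrieval-augmented answer for an order-status chatbot (illustrative sketch).
# ORDERS stands in for the company's order management system; in practice the lookup
# would query a database or ERP API, and call_llm would wrap whichever LLM is used.

ORDERS = {
    "SO-10234": {"status": "in transit", "estimated_delivery": "2024-08-02", "sku": "PUMP-7B"},
}

def lookup_order(order_id):
    """Retrieve the order record that will ground the model's answer."""
    return ORDERS.get(order_id)

def build_prompt(question, order):
    """Inject the retrieved record as context so the model answers from real data."""
    context = "\n".join(f"{key}: {value}" for key, value in order.items())
    return (
        "Answer the customer's question using ONLY the order data below.\n"
        f"Order data:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def answer(question, order_id, call_llm):
    order = lookup_order(order_id)
    if order is None:
        return "I could not find that order; please check the order number."
    return call_llm(build_prompt(question, order))

# Demo with a stand-in LLM callable that just echoes the prompt:
print(answer("When will my order arrive?", "SO-10234", call_llm=lambda prompt: prompt))
```

A production version would replace the dictionary with a query against the order management system and add retrieval over documents such as SKU specifications, which is exactly what RAG formalizes (see the RAG section below).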


Business Transformation Journey. Proprietary Data is king.

In this new context, many companies from other industries feel compelled to start applying generative AI to their internal and customer-facing processes. Many leaders expect quick returns on their generative AI investments through higher productivity and better value for their customers. Generative AI looks like the shiny new object in technology that boosts companies' valuations and marketing offers.

Tech companies and large consultancies alike are taking advantage of this. According to a McKinsey survey, 65% of companies are already using generative AI in some of their processes, mainly in sales & marketing, product development and IT.



There are, however, three key questions we should consider before starting any generative AI project:

1. Does my company have the right IT architecture and data science capabilities to tackle a generative AI project? Generative AI is a long journey.


First, we must have a modern IT architecture: the right applications, data repositories, security, governance, infrastructure, communications and processing capabilities. This includes cloud computing and storage.


Second, we need a clear understanding of our data assets, data science capabilities and infrastructure. This implies having data lakes to store unstructured information coming from our different applications, and skilled people within the business who can turn our data into practical business decisions.


2. Are the new generative AI developments based on our own proprietary data? If not, we will not be able to create a competitive advantage; we will simply be copying what our competitors can do or buy from the same consultancies.


For instance, our developers may use GitHub Copilot to generate or debug their code, but our competitors will probably be doing the same. The previous point (IT architecture and data assets) is what will enable our developers to use the company's proprietary information to create valuable services and functionalities aimed at improving our business decisions and our value proposition.


3. Do the new generative AI developments add value to our customers and our business? This is a very important question: we launch new products and services for the sake of our customers, not for the sake of the technology itself, even if it is well received by investors and markets.


In short, a prior assessment of our own IT and data assets and capabilities is needed before tackling complex generative AI projects.


RAG and Proprietary Data

To leverage our own proprietary data, new prompt engineering techniques can be applied. The most common are RAG (Retrieval-Augmented Generation) and FLARE (Forward-Looking Active Retrieval Augmented Generation). These techniques are quite recent: RAG was introduced by Meta in 2020 and FLARE was developed in September 2023.

All foundation models contain static information. They must be enriched or fine-tuned with additional data to stay up to date, which is a costly process. When information is missing, LLMs tend to hallucinate.

RAG makes it possible to enrich an LLM with specific information. For example, before a chatbot generates an answer about an order, the request can be linked to a PDF document containing updated information about the pending orders within our company, so that the reply is accurate. We can also load a product's specifications, information included in a webpage, or previous conversations with a customer in a chat, so that if a query about that specific product is received, the reply includes this updated data and more contextual information.

The coding is quite straightforward, and there are already many applications with specific functionalities (for example, to summarize the information included in a webpage or text).
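
As a rough illustration of how simple the pattern can be, here is a minimal sketch that embeds document chunks with the open-source sentence-transformers library, retrieves the most similar ones for a query, and passes them to an LLM as context. The sample chunks, the embedding model choice and the llm_call placeholder are illustrative assumptions, not a recommendation of any particular stack.

```python
# Minimal RAG sketch: embed document chunks, retrieve the most similar ones
# for a query, and pass them to an LLM as context. Illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

# In practice these chunks would come from PDFs, web pages or chat history.
chunks = [
    "Order SO-10234 is in transit, estimated delivery 2024-08-02.",
    "Product PUMP-7B: 7 bar max pressure, stainless steel housing.",
    "Return policy: items can be returned within 30 days of delivery.",
]
chunk_vectors = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(query, k=2):
    """Return the k chunks most similar to the query (cosine similarity)."""
    query_vector = embedder.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vectors @ query_vector  # dot product of normalized vectors = cosine similarity
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]

def generate_answer(query, llm_call):
    """Build a context-grounded prompt and delegate to a hypothetical LLM callable."""
    context = "\n".join(retrieve(query))
    prompt = f"Use only this context to answer.\nContext:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm_call(prompt)

print(generate_answer("When will order SO-10234 arrive?", llm_call=lambda prompt: prompt))  # stub LLM
```

Frameworks such as LangChain, mentioned earlier, wrap these same steps (chunking, embedding, retrieval, prompt assembly) behind higher-level abstractions.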



The FLARE approach is more complex: it follows some of the RAG patterns but makes the process iterative, further enriching the content (tokens) generated with low confidence until a well-informed and accurate output is produced (the confidence threshold can be set in the model). The iterative process is as follows:

  1. User Input: The process starts with the user’s input.
  2. Initial Retrieval: Based on this input, the system retrieves relevant documents or information.
  3. Generation and Prediction: The system generates a temporary next sentence and predicts upcoming content.
  4. Confidence Check: It checks if the generated sentence contains low-confidence tokens.
  5. Further Retrieval: If low-confidence tokens are found, the system retrieves additional relevant documents to regenerate the sentence.
  6. Final Output: The process iterates until a confident and accurate output is generated.
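
The snippet below is a simplified, framework-free sketch of that loop. The helpers generate_with_confidence (returning a tentative sentence plus per-token confidences) and retrieve are hypothetical stand-ins for the model call and the search backend; real implementations derive the confidences from the model's token probabilities.

```python
# Simplified FLARE-style loop (illustrative sketch, not the reference implementation).

def flare_answer(question, generate_with_confidence, retrieve,
                 threshold=0.8, max_sentences=5):
    docs = retrieve(question)                    # 2. initial retrieval based on the user input
    answer = ""
    for _ in range(max_sentences):
        context = "\n".join(docs)
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer so far: {answer}"
        sentence, confidences = generate_with_confidence(prompt)  # 3. tentative next sentence
        if not sentence.strip():                 # the model signals it has finished
            break
        if confidences and min(confidences) < threshold:          # 4. confidence check
            docs = retrieve(sentence)            # 5. further retrieval keyed on the tentative sentence
            context = "\n".join(docs)
            sentence, confidences = generate_with_confidence(
                f"Context:\n{context}\n\nQuestion: {question}\nAnswer so far: {answer}")
        answer += sentence + " "                 # accept the (possibly regenerated) sentence
    return answer.strip()                        # 6. final output after iterating
```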



Other generative AI topics discussed in recent weeks.


Improvements in alignment and hallucination control in Large Language Models (LLMs).

A better understanding of how deep learning models work, and of how words or concepts are linked to specific parts of the neural network, can be used to avoid hallucinations or undesired outcomes in LLMs. Researchers from Anthropic, the University of Oxford and the recently disbanded 'superalignment' team at OpenAI are already working in this area through the use of 'sparse autoencoders'. You can find more information in this article.
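
For readers curious about the mechanics, below is a minimal PyTorch sketch of the core idea behind sparse autoencoders: learn an overcomplete but sparse set of features from a layer's activations so that individual features become easier to interpret. The dimensions, the random activations and the L1 penalty weight are arbitrary illustrative choices, not the setup used by any of the teams mentioned.

```python
# Minimal sparse autoencoder sketch (PyTorch): learn an overcomplete, sparse
# dictionary of features from LLM hidden activations. Illustrative only.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # many more features than input dimensions
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))         # non-negative feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

# Stand-in for activations captured from one layer of an LLM (batch of 256 vectors).
activations = torch.randn(256, 768)

model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-3  # strength of the sparsity penalty

for step in range(100):
    reconstruction, features = model(activations)
    # Reconstruction loss keeps the features faithful to the activations; the L1 term
    # keeps them sparse, so individual features tend to align with interpretable concepts.
    loss = torch.mean((reconstruction - activations) ** 2) + l1_weight * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```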


The debate about Open Source

The discussion about open vs. closed AI foundation models (the latter accessible only through an API) goes on. Open-source foundation models allow more scrutiny and customization, foster research and innovation, and enable competition.

The flip side is the risk that their widespread availability may entail and the lack of accountability for their usage. In the last few weeks alone, new open-source foundation models have been released by Mistral AI / NVIDIA (NeMo 12B), Apple (DCLM 7B), Hugging Face (SmolLM) and Salesforce (xLAM).


Copyright and new legislation. The dust has not settled yet.

There are growing disputes about the accessibility of content on the web. Many publishing companies have signed deals with LLM providers to give them access to their content. This may limit the accuracy and performance of LLMs (most of their training was fed by scraping public content from the Internet), but it guarantees that the legitimate rights of authors are respected.

Another controversial topic is the forthcoming generative AI regulation in California (Bill SB-1047), which sets new obligations for generative AI companies, such as prior scrutiny of open-source generative AI models and liability in case of hazardous uses of the technology.


Race between the USA and China

Some generative AI scientists (for instance Geoffrey Hinton, often called the 'Godfather of AI') have changed their stance on how to control undesired generative AI practices. They have shifted from a cautious approach to stressing the need to win the generative AI race against other countries such as China.


Conclusion

The transformative changes that generative AI may bring are unparalleled, and companies should be prepared for them. However, not everything is about generative AI: many problems can be solved with other approaches, whether classic computing or non-generative AI. Still, as mentioned previously, no change and no competitive advantage will be possible without the right infrastructure, proprietary data and skills within your company.

