Predict & Influence Muses - Series 017: AI 2.0?
After ChatGPT stormed the world (or even before that, with the less impactful but still impressive Stable Diffusion rollout), the terms AIGC (AI-Generated Content) and Generative AI have become talking points across the IT industry and among knowledge and office workers alike. Nvidia CEO Jensen Huang has branded this a possible "iPhone moment" for AI, which it may very well be.
Discussion points range from what kinds of jobs it will replace, to the boundaries and limits of such software, to why some generated results are wrong and cannot be fully trusted (much like human beings). While it is not really safe to bring ChatGPT or GPT-4 into organisations for in-house usage (as recently reported in the news, parts of Samsung's internal codebase were leaked when staff used ChatGPT to guide them in writing proprietary software), it is entirely possible, and useful, to take such large language models (LLMs) as a base and add enterprise knowledge and data on top. However, this enterprise version should not feed data back into the public online models, in order to protect the privacy and secrets of the organisation.
In other words, take a stable, open-source LLM and add enterprise knowledge and data to it, so that the resulting system can serve as a personal agent (which I prefer to call an "advisor") for anyone in the organisation, from the board of directors and C-suite executives to general workers. This is what Dr. Kai-Fu Lee calls AI 2.0.
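The "enterprise knowledge on top of an LLM" pattern described above is commonly realised as retrieval-augmented generation: relevant internal documents are retrieved and stitched into the prompt sent to a locally hosted model. The sketch below is a minimal, hypothetical illustration of that idea; the scoring logic, documents, and function names are my own assumptions, not any vendor's API.

```python
# Minimal sketch of retrieval-augmented generation over enterprise data.
# The crude keyword-overlap scoring stands in for a real vector search;
# all names and documents here are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Crude relevance score: count query words that appear in the doc."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in query.lower().split() if w in doc_words)

def build_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
    """Retrieve the most relevant internal documents and compose a
    prompt for a locally hosted, open-source LLM -- the enterprise
    data never leaves the organisation."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    internal_docs = [
        "Expense claims above 500 USD require director approval.",
        "The cafeteria opens at 8am on weekdays.",
        "Travel bookings must go through the procurement portal.",
    ]
    prompt = build_prompt("Who approves large expense claims?", internal_docs)
    print(prompt)  # this prompt would then be sent to the in-house LLM
```

In a real deployment the keyword match would be replaced by embedding-based retrieval, but the privacy property is the same: only the composed prompt reaches a model the organisation controls.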
My thoughts are aligned with Dr. Lee's. What we are essentially doing now is AI 1.0. In the AI 1.0 era, we have helped our customers build "point solutions": AI for fraud and irregularity detection, AI combined with behavioural science to increase sales by contextually nudging, recommending, and rewarding staff and consumers, and AI for credit risk scoring or debt collection at banks and financial institutions (FIs). While these solutions deliver real outcomes and results, each is built on the organisation's data to solve only that particular pain point.
AI 2.0, on the other hand, is about giving organisations a foundation, much like the "data lake" concept did for data. It gives individuals and departments a fast way to create an AI that can help them. Beyond acting as a personal advisor or assistant, AI 2.0 should also provide APIs so that users can integrate its results into operational systems, adding intelligence to business processes. Of course, in this context there is a host of other considerations: testing the APIs, putting them through DevSecOps, ensuring interoperability, and more.
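To make the API point concrete: operational systems want a stable, versioned contract, not raw model output. The sketch below wraps a toy fraud-scoring model behind such a contract; the field names, thresholds, and `fraud_score` logic are hypothetical illustrations, not a real product's interface.

```python
import json

# Hypothetical sketch: exposing an in-house model behind a versioned
# JSON contract that operational systems can depend on. The scoring
# rules and field names are illustrative assumptions only.

def fraud_score(transaction: dict) -> float:
    """Toy stand-in for a deployed model: flag large overseas transfers."""
    risk = 0.0
    if transaction.get("amount", 0) > 10_000:
        risk += 0.5
    if transaction.get("country") != "home":
        risk += 0.3
    return min(risk, 1.0)

def score_api(request_body: str) -> str:
    """API layer: parse the request, score it, and return a versioned
    response so downstream systems are insulated from model changes."""
    tx = json.loads(request_body)
    risk = fraud_score(tx)
    return json.dumps({
        "api_version": "v1",
        "transaction_id": tx["id"],
        "fraud_score": risk,
        "action": "review" if risk >= 0.5 else "allow",
    })

response = score_api('{"id": "t-42", "amount": 25000, "country": "SG"}')
print(response)
```

Keeping the model behind an explicit contract like this is also what makes the DevSecOps and interoperability testing mentioned above tractable: the API surface, not the model internals, is what gets validated.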
The vision of the future is there, and the AI industry is putting significant effort into realising it. Palantir, a company I admire, has already announced that its AI Platform, which adopts the AI 2.0 concept, will be available in May. Let's see whether they will be the first to roll out an enterprise-grade AI 2.0 product. Adoption by organisations will be key. Will it be convincing enough for organisations to consider, knowing they may be the first to adopt? What risks and pains might they face? And are the benefits sufficient for them to take that first leap of faith?