2023: 12 predictions for AI
Antonio Gulli
Google Sr Director, CTO Office. AI Cloud Search HAM: HB9IAZ IU5SKA. Angel Investor
1 - Search and Generative AI will progressively converge. Generative AI went mainstream during 2022 with ChatGPT and DALL-E (OpenAI), after the seminal work on Transformers done at Google. However, generative AI still needs a lot of effort to check factuality / groundedness and to prevent hallucinations: information that looks correct but is not based on real data. Web search will help, both with fact-checking algorithms and with web references provided alongside the generated text. - likelihood 9/10
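To make the convergence concrete, here is a minimal Python sketch of the retrieve-then-generate loop this prediction points at. The functions web_search() and generate() are stub placeholders I made up for illustration, not any real API; only the shape of the loop matters.

```python
# Sketch of search-grounded generation: retrieve evidence, condition
# the model on it, and return references so claims can be checked.
# web_search() and generate() are stand-in stubs, not real APIs.

def web_search(query, top_k=5):
    # Stub: a real system would call a search backend here.
    return [{"text": f"snippet about {query}", "url": "https://example.com"}]

def generate(prompt):
    # Stub: a real system would call a large generative model here.
    return f"answer conditioned on: {prompt[:40]}..."

def grounded_answer(question):
    snippets = web_search(question)                   # 1. retrieve evidence
    context = "\n".join(s["text"] for s in snippets)  # 2. build grounding context
    draft = generate(f"Context:\n{context}\n\nQ: {question}")
    # 3. return the answer together with its web references
    return {"answer": draft, "references": [s["url"] for s in snippets]}

print(grounded_answer("Who invented the Transformer architecture?"))
```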
2 - Music Generative AI will become a real thing. So far, we have seen generative AI go big for NLP on text and for images (Stability.AI and MidJourney). Music is the next territory to explore, and I think about the myriad opportunities to innovate. Humans play music with two hands only, while a generative model can add more dimensions. - likelihood 8/10
3 - Video AI will go mainstream. Dawn.AI and Lensa went viral by showing how to use AI to create fantastic avatars from your pictures. Video is the next direction to explore. During 2023+, I would expect to see short videos generated on demand from prompts. This will require solid thinking about AI principles, and completely new levels of computational power - likelihood 9/10
4 - Serverless AI is a need. AI needs burst computation with rapid scaling up and down. Serverless AI would let you pick the AI framework you like, fine-tune models, and run inference without worrying about the underlying infrastructure and accelerators - likelihood 8/10
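As a rough sketch of what that could look like from the developer's side: a stateless handler that loads the model lazily on a cold start and lets the platform scale instances (and accelerators) up and down. The event/context signature mimics common FaaS platforms, and the model loader below is a stub I invented for illustration.

```python
# Hypothetical serverless inference handler. The platform, not the
# developer, owns provisioning, scaling, and accelerator placement.

class _StubModel:
    def predict(self, x):
        return f"prediction for {x}"

_model = None  # cached across warm invocations of the same instance

def load_model(name):
    # Stand-in for downloading and deserializing fine-tuned weights.
    return _StubModel()

def handler(event, context=None):
    global _model
    if _model is None:                       # cold start: load once
        _model = load_model("my-finetuned-model")
    return {"prediction": _model.predict(event["input"])}

print(handler({"input": [1, 2, 3]}))
```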
5 - Different ML techniques will be blended. During 2022, we saw more and more ML techniques blended together. For instance, Reinforcement Learning was used to improve the quality of generative Transformers by taking human feedback into account in the loop. This trend is going to become the norm, with different techniques used together - likelihood 10/10
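A toy illustration of that human-feedback loop: a reward model scores candidate generations, and the generator is nudged toward the higher-scoring ones. Real RLHF (e.g. PPO over a transformer policy) is far more involved; the stubs below only show the shape of the blend.

```python
# Toy RLHF-style loop: generate candidates, score with a reward model,
# prefer the best. Both functions are illustrative stubs.

import random

def generate_candidates(prompt, n=4):
    # Stub generator: returns n candidate completions.
    return [f"{prompt} ... completion {i}" for i in range(n)]

def reward_model(text):
    # Stub reward: a real one is trained on human preference rankings.
    return random.random()

def feedback_step(prompt):
    candidates = generate_candidates(prompt)
    best = max(candidates, key=reward_model)
    # A real system would now take a policy-gradient step that raises
    # the probability of `best`; here we simply return it.
    return best

print(feedback_step("Explain transformers"))
```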
6 - A new learning algorithm will be conceived. Geoffrey Hinton proposed a new learning algorithm, "Forward-Forward". The idea is to replace the traditional forward-backward passes of backpropagation with two forward passes: one with positive (i.e. real) data and the other with negative data generated by the network itself. Lots of research is still needed, but this would be big - likelihood 5/10
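For readers curious how two forward passes can replace backpropagation, here is a compact numpy sketch of the Forward-Forward idea as I understand Hinton's paper: each layer is trained with a purely local objective, so its "goodness" (sum of squared activations) should be high on positive data and low on negative data, and no gradient ever crosses a layer boundary. Layer sizes, threshold, and learning rate below are illustrative choices, not values from the paper.

```python
# Minimal Forward-Forward sketch: local, per-layer updates only.

import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    def __init__(self, n_in, n_out, theta=2.0, lr=0.03):
        self.W = rng.normal(0, 0.1, (n_out, n_in))
        self.theta, self.lr = theta, lr

    def forward(self, x):
        return np.maximum(0.0, self.W @ x)           # ReLU activations

    def train_step(self, x, positive):
        h = self.forward(x)
        g = np.sum(h ** 2)                           # layer "goodness"
        p = 1.0 / (1.0 + np.exp(self.theta - g))     # P(input is positive)
        err = p - (1.0 if positive else 0.0)         # local error signal
        mask = (h > 0).astype(float)                 # ReLU derivative
        self.W -= self.lr * err * np.outer(2 * h * mask, x)  # local update
        return h / (np.linalg.norm(h) + 1e-8)        # normalize for next layer

layers = [FFLayer(16, 32), FFLayer(32, 32)]
for step in range(200):
    pos = rng.normal(1.0, 0.5, 16)   # stand-in "real" sample
    neg = rng.normal(-1.0, 0.5, 16)  # stand-in self-generated negative
    for is_positive, x in ((True, pos), (False, neg)):
        for layer in layers:         # two forward passes, no backward pass
            x = layer.train_step(x, positive=is_positive)
```

Note the activations are normalized before feeding the next layer, so each layer must find its own evidence rather than reusing the previous layer's goodness.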
7 - Distributed learning and ML Hybrid Cloud are the new norm. More resource-efficient and flexible distributed execution frameworks such as Ray.io (Anyscale) and Petals (Hugging Face) will start to be largely adopted and will push the bar for ML Hybrid Cloud. - likelihood 8/10
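A minimal, real example with Ray, the kind of lightweight distributed-execution framework this prediction refers to: the same code runs on a laptop or a cluster. Assumes `pip install ray`.

```python
import ray

ray.init()  # starts a local Ray runtime (pass an address to join a cluster)

@ray.remote
def score_shard(shard):
    # Any Python function becomes a distributed task via @ray.remote.
    return sum(x * x for x in shard)

shards = [list(range(i, i + 1000)) for i in range(0, 10000, 1000)]
futures = [score_shard.remote(s) for s in shards]  # tasks run in parallel
print(sum(ray.get(futures)))                       # gather the results
```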
8 - New coding languages and software best practices will be developed. 2022 saw GitHub Copilot (Microsoft), AWS CodeWhisperer, and Replit Ghostwriter become your partners in code. Now, people are starting to write code comments to give hints to AI code companions. This trend will grow, with new programming languages designed to be aware of AI companions - likelihood 8/10
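The "comments as hints" pattern looks like this: the developer writes an intent comment and the companion proposes the body. The completion below is illustrative, not the output of any specific tool.

```python
# Parse an ISO-8601 date string and return the weekday name.
def weekday_of(date_string: str) -> str:
    from datetime import date
    return date.fromisoformat(date_string).strftime("%A")

print(weekday_of("2023-01-01"))  # "Sunday"
```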
9 - Large Multimodal Models are the kings. People will stop talking about Large Language Models (LLMs) and will move to Large Multimodal Models (LMMs). Training models is expensive, so why do it only for text? Google's LIMoE is a step towards the goal of a single AI, and more will surely emerge in 2023+ - likelihood 10/10
10 - We will get to a quadrillion parameters. This is just a matter of time. GCP was already used to train recommender models with 100 trillion parameters, and GPT-4 is expected to pass 100 trillion parameters. It's not just a matter of size: as the scale of the model increases, performance improves across tasks while also unlocking new capabilities. - likelihood 10/10
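To make that scale concrete, a back-of-the-envelope calculation on memory for the weights alone (ignoring optimizer state and activations; the precision choices are illustrative):

```python
# One quadrillion parameters: storage for the weights alone.
params = 1e15
for label, bytes_per in [("fp16", 2), ("int8", 1)]:
    petabytes = params * bytes_per / 1e15
    print(f"{label}: {petabytes:.0f} PB just to store the weights")
# fp16: 2 PB, int8: 1 PB -- far beyond any single accelerator, which
# is why sparsity, mixture-of-experts routing, and distribution matter.
```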
11 - Neuromorphic computing and Mortal computing will finally emerge. I would expect neuromorphic computing to attract more and more VC money, and a lot of research to be devoted to Hinton's new hope of mortal computing, with potential for amazing breakthroughs - likelihood 5/10
12 - More students for Multidisciplinary AI. AI will be a topic to study in any scientific university, together with Mathematics. Today AI is still a topic for Engineering, Computer Science, and Maths. Why don't we have it in Biology, Architecture, Medicine, and so on at university? - likelihood 10/10
East-Flanders - Belgium | Digital Transformation Leader | Program/Project/Change Management | Innovation Management | AI Enabler | Artificial Intelligence | Executive Leadership | Official SAP and AWS Partner
1y Thank you for sharing this. It is important that we know what top players think about the evolution of AI. I have a different view, because I think that you are going in the wrong direction and that you will change direction; your current progress is, however, a very important building block for the future. I think that you try to put too much functionality/information into one model, which you call an LLM (or LMM), and that you think about increasing the number of parameters even further (GPT-4 is already 170 trillion parameters), etc. I think that in the future we will have a modular AI (similar to how our brain functions). A language model will be relatively small and will focus on language understanding, not on hundreds of other functions! Another module will be for visual understanding, another for sound understanding, another for reasoning (thinking), and so on. In this way we will be able to build an optimized system depending on the needs of the application. ... continued below
Co-founder of Advascale | A cloud sherpa for Fintech
1y Antonio, thanks.
Hobby Beekeeper / Information Security Director & Security Architect / Privacy Advocate / Security Strategy / Driven to Automate (DIE Triad + AI) / Force multiplier
1y Let me ask you: when will standards bodies and NIST get involved with multimodal models?
IT Director | Tech Disruptor | Innovation Strategist (Digital Workplace & Cybersecurity)
1y Thanks for sharing your thoughts on this. I wish most people had an open mind to see what's coming! The future looks bright and promising, but not everyone can see (or wants to see) it coming. Happy New Year, and I wish you an amazing 2023 :)
Google Sr Director, CTO Office. AI Cloud Search HAM: HB9IAZ IU5SKA. Angel Investor
1y I wonder if we need any fine-tuning at all when we hit hundreds of quadrillions of parameters, or whether it would be prompting only? That would surely be multiple orders of magnitude less expensive.