AI's rapid progress: expect the unexpected
While current AI applications are already highly impressive, we anticipate that AI's rapid progress will lead to astonishing breakthrough technologies and tremendous economic value creation.
If you're willing to entertain the notion that an artificial neuron isn't fundamentally different from its biological counterpart, you might envision an artificial neural network comparable in scale to the human brain, which, with proper training, could have similar capabilities.
The human brain is estimated to contain around 100 billion neurons, each forming connections with about 1,000 other neurons via synapses, totalling over 100 trillion synapses.
GPT-4 is said to have about 1.8 trillion parameters (“synapses”). That’s roughly a ten-fold increase compared to GPT-3.5, which reportedly uses 175 billion parameters.
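To put these scales side by side, here is a minimal back-of-the-envelope sketch in Python; every figure is one of the rough, publicly reported estimates cited above, not a measured fact:

```python
# Back-of-the-envelope scale comparison using the figures cited above.
brain_neurons = 100e9                 # ~100 billion neurons (estimate)
synapses_per_neuron = 1_000           # ~1,000 connections each (estimate)
brain_synapses = brain_neurons * synapses_per_neuron  # ~1e14, i.e. 100 trillion

gpt4_params = 1.8e12                  # reported, not officially confirmed
gpt35_params = 175e9                  # reported

print(f"GPT-4 vs GPT-3.5 parameters: {gpt4_params / gpt35_params:.1f}x")
print(f"Brain synapses vs GPT-4 parameters: {brain_synapses / gpt4_params:.0f}x")
# -> ~10.3x and ~56x: a ten-fold jump between model generations, with the
#    brain's synapse count still ~56x larger than GPT-4's parameter count.
```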
Leading AI researchers anticipate that models will continue growing in size and capability. Ilya Sutskever, former chief scientist at OpenAI, even suggests that the current AI architecture knows no bounds in terms of capabilities.
However, parameter count isn't the sole factor determining an AI model's performance. Advancements in model architecture and access to better, more extensive training data can significantly enhance performance.
This is evident in Meta's latest Llama 3 model, which, despite having fewer parameters, achieves impressive scores on AI benchmarks. Notably, during Llama 3's training, scaling up the amount of training data improved the model well beyond expectations.
AI model release and capabilities timeline
Source: Anthropic, 2024
Analogously, human intelligence isn't fixed either: with more and better training, individuals become more intelligent.
Sam Altman, CEO and co-founder of OpenAI, says there is a high degree of scientific certainty that GPT-5 will be a lot smarter than GPT-4, and that GPT-6 will be a lot smarter than GPT-5. He emphasises that we're far from reaching the peak of this trajectory and possess the knowledge to continue advancing. GPT-5 is rumoured to be released after summer.
The next breakthroughs in AI model capabilities are likely to come from longer context windows (i.e., the length of the input), reduced latency (i.e., the speed of the answers), cheaper inference (i.e., the usage cost of the model), and better reasoning, planning and memory (i.e., the model recalling past interactions). These advancements should pave the way for personalised virtual super-assistants, enhancing productivity for all.
The current leading AI model in terms of context window is Google’s Gemini 1.5, which has a massive 1 million-token context window, equivalent to about 1 hour of video, 11 hours of audio, 30,000 lines of code, or 700,000 words (i.e., 8 books). The bigger the context window, the more information the AI model can take from a prompt, making its output more useful.
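As a quick sanity check on those equivalences, the sketch below converts the 1-million-token window into words and books; the words-per-token ratio and the book length are assumptions inferred from the figures above, not official conversion factors:

```python
# Rough capacity of a 1-million-token context window, using the ratio implied
# above. The ~0.7 words-per-token figure is an assumption derived from
# "1M tokens ~ 700,000 words", not an official constant.
context_tokens = 1_000_000
words_per_token = 0.7
words_per_book = 87_500   # assumption: an average-length book

words = context_tokens * words_per_token
print(f"~{words:,.0f} words, i.e. ~{words / words_per_book:.0f} books")
# -> ~700,000 words, i.e. ~8 books
```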
The concept of neural networks is not novel; in fact, it's been around for almost a century. It's only through remarkable advancements in hardware capabilities and the availability of vast amounts of data that we've been able to train the massive AI models seen today.
Sceptics point out that we may soon reach limitations in terms of available data to further enhance AI models. However, it appears data won't be a limiting factor. AI models can generate data, known as synthetic data, which can then be used to train other AI models.
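A minimal sketch of that synthetic-data loop, where a "teacher" model produces examples that are filtered and reused to train a "student"; generate_with_teacher() and is_high_quality() are hypothetical stand-ins for a real model API and a real quality filter:

```python
# Minimal sketch of synthetic-data generation: a "teacher" model produces
# question/answer pairs that are filtered and reused to train a "student".
# generate_with_teacher() is a hypothetical stand-in for a real model API.

def generate_with_teacher(prompt: str) -> str:
    # Placeholder: in practice this would call a large teacher model.
    return f"Synthetic answer to: {prompt}"

def is_high_quality(example: dict) -> bool:
    # Placeholder filter: real pipelines score examples for correctness,
    # diversity and safety before admitting them to the training set.
    return len(example["answer"]) > 0

topics = ["photosynthesis", "compound interest", "binary search"]
training_set = []
for topic in topics:
    prompt = f"Write a question and a model answer about {topic}."
    example = {"question": prompt, "answer": generate_with_teacher(prompt)}
    if is_high_quality(example):
        training_set.append(example)

print(f"{len(training_set)} synthetic examples ready for student training")
```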
Furthermore, multi-modal AI models are on the rise, no longer solely dependent on text input but capable of using images, video, or audio as training data. As the saying goes, a picture is worth a thousand words; the same might hold true for training AI models.
Tesla’s Optimus humanoid
Source: Tesla, 2024
Indeed, virtually everything can be "tokenised" and serve as the input or output of an AI model: from speech and video to movement and sensing. This holds particular significance for training applications like self-driving cars or humanoid robots, which are experiencing rapid advancements thanks to the utilisation of AI models.
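As a toy illustration of how a non-text signal can be "tokenised", the sketch below uniformly quantises a continuous waveform into a 256-symbol vocabulary; real multi-modal models use learned tokenisers (e.g., vector quantisation), so this is illustrative only:

```python
import math

# Toy illustration of tokenising a non-text signal: uniformly quantise a
# continuous waveform into a 256-symbol vocabulary. Real multi-modal models
# use learned tokenisers; this uniform quantiser is illustrative only.
VOCAB_SIZE = 256

def tokenise(signal, lo=-1.0, hi=1.0):
    """Map each sample in [lo, hi] to an integer token in [0, VOCAB_SIZE-1]."""
    step = (hi - lo) / VOCAB_SIZE
    return [min(int((x - lo) / step), VOCAB_SIZE - 1) for x in signal]

# A fake "sensor" reading: one cycle of a sine wave, 16 samples.
signal = [math.sin(2 * math.pi * t / 16) for t in range(16)]
tokens = tokenise(signal)
print(tokens)  # e.g. [128, 176, 218, 246, 255, 246, ...] - a token sequence
```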
Another commonly heard objection is that further hardware advancements will become increasingly challenging to achieve. However, Nvidia has presented its vision for future AI compute enhancements based on a systems-level approach: improvements in compute capacity no longer rely solely on Moore’s Law (increasing transistor density per chip) but on connecting chips into a network where they function as a unified entity. This systems-level view opens a new pathway for ongoing efficiency gains in computing.
The escalation in model size translates into a projection of exponentially increasing AI training costs. GPT-4 is said to have cost over USD 100 million to train. Recently, OpenAI and Microsoft unveiled plans for a USD 100 billion mega AI datacentre, named Stargate.
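To see why training bills reach nine figures, here is an order-of-magnitude sketch based on the widely cited ~6 × N × D rule of thumb for transformer training FLOPs; every number below is an illustrative assumption, not a disclosed figure for any specific model:

```python
# Order-of-magnitude training-cost sketch using the common ~6*N*D rule of
# thumb for transformer training FLOPs. All numbers are illustrative
# assumptions, not disclosed figures for any specific model.
N = 280e9     # assumed *active* parameters per token (mixture-of-experts)
D = 13e12     # assumed number of training tokens
flops = 6 * N * D                     # ~2.2e25 FLOPs

gpu_flops_per_s = 1e14                # assumed effective throughput per GPU
gpu_cost_per_hour = 2.0               # assumed cloud price per GPU-hour (USD)
gpu_hours = flops / gpu_flops_per_s / 3600
cost = gpu_hours * gpu_cost_per_hour
print(f"~{flops:.1e} FLOPs, ~{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:.0f}M")
# -> roughly $100M+, the order of magnitude of the figure cited above.
```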
AI can’t beat humans at everything… yet
Source: Epoch, 2023; AI Index Report, 2024
It’s highly probable that AI models will continue to advance considerably in intelligence. While current AI use cases already demonstrate impressive productivity gains, we are likely just scratching the surface of its potential. If AI can create, reason, and interact in a human-like capacity, it has the potential to accomplish tasks that traditional software has never been capable of.
Therefore, it would be incorrect to assess the total addressable market (TAM) of AI solely based on the current size of the software market, which is approximately EUR 600 billion. A more appropriate TAM would encompass the current services and manufacturing markets, which together amount to several tens of trillions of euros.
S&P 500 number of employees per USD 1 million of revenue (inflation adjusted)
Source: Bloomberg, DPAM, 2024
Capturing only a minor portion of that vast TAM through either decreased operating costs (resulting in more productive workers) or the creation of new revenue streams would provide strong economic justification for a massive investment wave in AI infrastructure. The future of AI holds immense promise, with the potential to revolutionise industries and redefine the boundaries of what technology can achieve.
Want to know more about AI? Visit: https://www.dpaminvestments.com/professional-end-investor/be/en/A-focus-on-AI
Disclaimer
Marketing Communication. Investing incurs risks.
The views and opinions contained herein are those of the individuals to whom they are attributed and may not necessarily represent views expressed or reflected in other DPAM communications, strategies or funds.
The provided information herein must be considered as having a general nature and does not, under any circumstances, intend to be tailored to your personal situation. Its content does not represent investment advice, nor does it constitute an offer, solicitation, recommendation or invitation to buy, sell, subscribe to or execute any other transaction with financial instruments. Neither does this document constitute independent or objective investment research or financial analysis or other form of general recommendation on transaction in financial instruments as referred to under Article 2, 2°, 5 of the law of 25 October 2016 relating to the access to the provision of investment services and the status and supervision of portfolio management companies and investment advisors. The information herein should thus not be considered as independent or objective investment research.
Investing incurs risks. Past performance does not guarantee future results. All opinions and financial estimates reflect the situation at issuance and are subject to amendment without notice. Changed market circumstances may render the opinions and statements incorrect.