Will AI growth necessarily be exponential?

This week was, for me, a week of conferences and events, all in my city of Warsaw: Women in Tech Summit, the Aula Polska birthday party, Data Science Summit ML Edition, and the AI Breakfast "Night Edition".

And the most interesting thing at conferences: conversations, conversations, and more conversations with people.

I had a particularly interesting conversation with one of the organisers of one of the conferences. I won't identify her because I haven't spoken to her about it yet, but I will add that she has a lot of experience at a big tech company and is an academic at the same time. Referring to my post from a week ago, we started talking about a topic that never disappears from the headlines: will AI take our jobs?

But the most interesting part of the discussion was our dispute over AI adoption scenarios.

In my interlocutor's opinion, exponential growth of AI capabilities is bound to happen. A must. Full stop.

Is it, though?

A book I've been drawing on a lot lately, Ethan Mollick's 'Co-Intelligence', lays out various scenarios for AI adoption, and not at all necessarily an exponential one.

An increase in AI adoption of 10-20% per year ('slow variant') is entirely possible.
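To make the gap between the two scenarios concrete, here is a tiny, purely illustrative Python sketch. The growth rates are my own assumptions for the sake of the picture (15% a year as the middle of the 10-20% "slow variant", doubling every year as the "exponential" case), not figures from Mollick's book.

```python
# Illustrative only: compare a "slow variant" (steady compound growth)
# with an "exponential" doubling-every-year scenario.
# The rates below are assumptions, not figures from the book.

def compound(start: float, annual_rate: float, years: int) -> float:
    """Value after `years` of steady compound growth at `annual_rate`."""
    return start * (1 + annual_rate) ** years

baseline = 100.0  # arbitrary index of AI adoption/capability today
for years in (1, 3, 5, 10):
    slow = compound(baseline, 0.15, years)  # ~15%/year, middle of 10-20%
    fast = compound(baseline, 1.00, years)  # doubling every year
    print(f"after {years:>2} years: slow ~ {slow:7.0f}, doubling ~ {fast:7.0f}")
```

Under these assumed rates, the slow variant roughly quadruples over a decade, while yearly doubling ends up about a thousand times higher. Which curve we are actually on is the whole argument below.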

What are the arguments for the scenario in which AI grows slowly rather than exponentially?

1. Costs. The sheer power consumption involved in training models is growing exponentially. I no longer remember the exact figures, but going from GPT-3.5 to GPT-4 probably meant roughly a tenfold increase in training cost; the electricity alone to train GPT-4 reportedly cost 100 million dollars. Today everyone is impressed with NVIDIA's growth. I'm not a robotics engineer, but a lot of people say that NVIDIA is a technically obsolete company: its old GPU solutions cost a lot, and many say its competitors can show solutions that are much "lighter and more precise", yet so far we don't see them. Economic history shows that every bubble bursts at some point, including this one. When companies cut back on GPUs, they will cut back on AI as well.

2. Geopolitics. NVIDIA, like other chipmakers, is heavily dependent on global supply chains. Eighty per cent of the most advanced chips are manufactured in Taiwan, which lives under the threat of a Chinese invasion. "Shifting" chip production from East Asia (where it has been concentrated for at least the last 40 years) to Europe or the US is happening, but building factories, let alone training technicians, will take time. It will be difficult to build ever bigger GPU farms. Literally.

3. Decline in innovation. It is a paradox, but in the age of AI we are seeing a huge decline in innovation. Mollick cites research showing that over the last 13 years innovation has fallen by 50%, in all walks of life, from agriculture to cancer research. The number of start-ups founded by STEM PhDs has fallen by 38% in the last 20 years. We are living in a time of 'scientific baroque': the number of scientific papers has grown so much that it is difficult for scientists to keep track of what to read. Communication in the scientific world is therefore deteriorating, and scientists are leaving for corporations.

4. Regulation. Neither the US, the EU nor China currently has a clear idea of how to regulate AI. We do have draft regulations everywhere, but it seems to me it will still be a long time before mature regulations emerge. For now, I see corporate lawyers struggling to deal with AI implementations, and I assume this is slowing down AI deployments in large corporations. The legal risks associated with data ownership and copyright are enormous.

Whether television is or is not a general-purpose technology is a somewhat academic question. Nevertheless, it is a technology that has profoundly changed our lives, and for 100 years it has developed step by step rather than exponentially. I, for one, remember the 14-inch portable black-and-white TV on which I watched my favorite cartoons as a child, back when I lived in a poor communist country. Today I have a 65-inch flat-screen TV. Compared to the 14-inch black-and-white set: unbelievable. My TV is already eight years old, and you know what: I have no intention of replacing it at all. I don't see the need. I know that if I were a gamer or a sports fan, current TVs would have an advantage over mine. But for kids watching cartoons in 2D, and for my wife and me watching the news at 7.30?

And here we come back to AI: in business, to stay with the TV comparison :), do we really need an "even deeper black"?

For chatbots: probably not.

For recommendation systems: probably not.

For drug discovery: maybe?

I do not want to categorically state that the 'slow growth' scenario for AI will materialise. There are several scenarios for the development of AI deployments, and thus the speed of AI development. Including the 'exponential growth' scenario.

But from my observations over the last 18 months, I see that for now we are on the slower rather than the faster AI adoption path. I justify this by the four factors above, but also by the fact (partly factor 3) that we don't quite know what to do with as much potential as we already have.

Instead of bigger and bigger LLMs, I see more and more small start-ups making small breakthroughs in the use of AI. And that is how big companies will be born.

