The "fooled by parrot lyrics" epoch just started
Recent statements about freezing “AI experiments”, and comparisons with threats like pandemics or nuclear war, are marketing hyperbole and power-play tactics.
Systems like ChatGPT & Co are neither intelligent nor a threat to humanity. They need vast amounts of energy, long training times, and huge training databases; they cannot replicate on their own, have only primitive interfaces to the real world, and cannot explain the logic of their actions or the sources of their knowledge. This is not to say that we might never get a digital competitor to humanity sometime in the future. But I cannot see one anywhere for the time being.
It is more likely that some companies that have invested large amounts of money into the latest breakthroughs want to protect their competitive advantage. Front-running regulation offers the opportunity to prescribe how competitors may use existing solutions, or how and under which circumstances alternatives may be created.
Given the current state of “AI”, I cannot see how any newborn baby is a smaller risk to humanity than a computer program running in a large datacenter. So why is nobody warning that any baby may become a new Hitler or Stalin? Nobody seems to be afraid of that possibility, and it worries me much more.
Let’s take a historical perspective. “AI” is nothing new, and its first step is typically attributed to a publication by Legendre in 1805:
A. M. Legendre, Nouvelles méthodes pour la détermination des orbites des comètes, Firmin Didot, Libraire pour les Mathématiques, Paris.
It’s all about points: separating different domains marked by points on their boundaries, and finding the signal despite noisy measurements. The only difference between 1805 and 2023 is the vast amount of data available today. The development of deep neural networks took many steps forward and, along the way, some steps backward. Here are some of the more remarkable steps forward (a minimal least-squares sketch follows the list):
· 1958 F. Rosenblatt, The perceptron: A probabilistic model for information storage and organization in the brain, Psychological Review, 65(6)
· 1965 A. G. Ivakhnenko, V. G. Lapa, Cybernetic Predicting Devices, CCM Information Corporation, JPRS 37(803)
· 1986 D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning internal representations by error propagation, in D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, MIT Press
· 1989 Y. LeCun, et al., Backpropagation Applied to Handwritten Zip Code Recognition, Neural Computation, 1(4)
· 1997 S. Hochreiter, J. Schmidhuber, Long Short-Term Memory, Neural Computation, 9(8)
· 1998 S. Brin, L. Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, Computer Networks and ISDN Systems, 30
· 2006 G. E. Hinton, R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science, 313(5786)
· 2009 R. Raina, A. Madhavan, A. Y. Ng, Large-scale deep unsupervised learning using graphics processors, ICML ’09
· 2018 J. Devlin, et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, https://arxiv.org/pdf/1810.04805.pdf
· 2023 S. Bubeck, et al., Sparks of Artificial General Intelligence: Early experiments with GPT-4, https://arxiv.org/pdf/2303.12712.pdf
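To make the 1805 starting point concrete, here is a minimal sketch of Legendre’s method of least squares in Python: fitting a line through noisy points, i.e., finding the signal despite noisy measurements. The numbers and variable names are illustrative assumptions of mine, not anything taken from the references above.

```python
import numpy as np

# Noisy measurements of an assumed underlying line y = 2x + 1.
rng = np.random.default_rng(seed=0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.shape)

# Design matrix [x, 1], so the model is y ≈ slope*x + intercept.
A = np.column_stack([x, np.ones_like(x)])

# Legendre's idea: choose the parameters that minimize the sum of squared errors.
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"estimated slope={slope:.2f}, intercept={intercept:.2f}")
```

Everything on the list above can be read as this same exercise scaled up: more parameters, nonlinear models, and vastly more data, but still fitting points under noise.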
“AI” did not fall from the clear blue sky yesterday. It has been a long process of trial and error. History has not stopped and progress continues. Only in the rear-view mirror might we someday be able to say that “intelligence” was created. Current chatter is just noise.
It is fine to celebrate breakthroughs like ChatGPT. It is nonsense to see the dominance of AI just around the corner. What we see is the steady progress of science & technology. Nassim Taleb coined the term “fooled by randomness”. OpenAI has just kicked off the age of “fooled by lyrics”. Disappointment is assured down the road.
One should keep in mind that the transformer systems on which ChatGPT is built were originally developed as a tool for language-to-language translation. ChatGPT also translates an input sequence into something else; in this case the target is the same language, but with the content extrapolated one step into the future.
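Mechanically, “extrapolated one step into the future” is just a loop: score all possible next tokens, append the best one, repeat. Here is a minimal sketch; next_token_logits is a hypothetical stand-in for a trained transformer, not ChatGPT’s actual code, and the toy vocabulary is my own invention.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
VOCAB = ["the", "parrot", "repeats", "what", "it", "heard", "."]

def next_token_logits(tokens: list[str]) -> np.ndarray:
    # Hypothetical stand-in for a trained transformer: a real model would
    # run attention layers over the whole token sequence. Random scores
    # keep this sketch runnable without any trained weights.
    return rng.normal(size=len(VOCAB))

def generate(prompt: list[str], steps: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(steps):
        logits = next_token_logits(tokens)
        # "Translate" the sequence into itself plus one extrapolated token
        # (greedy decoding: always take the highest-scoring continuation).
        tokens.append(VOCAB[int(np.argmax(logits))])
    return tokens

print(" ".join(generate(["the", "parrot"], steps=5)))
```

There is no goal, memory, or self-model in this loop, only a repeated mapping from a sequence to scores over the next token; scale and training data improve the quality of the scores, not the nature of the loop.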
There is no reason to believe that this kind of system becomes “self-aware” or “intelligent” by any means. True, if sometime in the future we can build a digital form of “life”, it may use transformer technology to articulate its thoughts. But transformers by themselves neither lead to “intelligence” nor pose any threat to humanity. I am sure that humans will use transformers for their own evil purposes long before anything else does.