AI Unplugged: How It Learns
Michael Tresca
Director, Marketing & Communications for Global Talent Acquisition at GE Vernova
How generative #AI learns is an important factor in its development, shepherded by humans who guide AI along certain paths. It's also a key constraint, because humans can't train AI as fast as AI can learn. So what happens when AI teaches AI?
Transfer Learning vs. AI Training
Training an AI takes work. A lot of work. Traditionally, AI models are trained from scratch for each specific task, requiring vast amounts of labeled data and computational resources. Training a model this way can take weeks or even months, depending on its size. That's why AI companies have increasingly shifted to transfer learning, which enables "few-shot" learning: the AI can pick up a new task from only a few examples.
At its core, transfer learning involves leveraging knowledge gained from solving one problem and applying it to a different, but related, problem. Just as humans draw on past experience to tackle new challenges, AI systems can use pre-existing knowledge to expedite learning in new domains. For instance, a model trained to recognize objects in images can transfer its knowledge to a related task, such as identifying different types of animals. Instead of starting from scratch, the model fine-tunes its pre-existing knowledge, requiring significantly less data and computation.
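To make this concrete, here is a minimal sketch of transfer learning, assuming PyTorch and torchvision as the framework (the article isn't tied to any particular library, and the ten animal classes are hypothetical). A model pretrained on ImageNet keeps its general visual knowledge frozen, and only a small new classification head is trained for the animal task:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet already trained on ImageNet: this is the "pre-existing knowledge."
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so its general visual features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head sized for the (hypothetical) new task.
num_animal_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_animal_classes)

# Only the new head is trained, which needs far less data and compute
# than training the whole network from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the final layer's weights are updated, each fine-tuning step touches a tiny fraction of the model, which is where the data and compute savings come from.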
The real magic of transfer learning lies in its ability to accelerate AI development. By reusing and adapting knowledge, AI systems can learn new tasks with unprecedented speed and efficiency. Some estimates suggest transfer learning can boost productivity by up to 40% compared to training from scratch. But what if humans weren't involved at all?
AI Learning with AI
Now, imagine taking transfer learning to the next level by incorporating AI systems into the mix. With AI learning from other AI, the possibilities are limitless. Not only can AI systems transfer knowledge between tasks, but they can also collaborate, exchange insights, and collectively improve their performance. On March 18, in a paper published in Nature Neuroscience, scientists did exactly that.
A composite AI known as a sensorimotor recurrent neural network (RNN) was trained on a set of 50 psychophysical tasks (tasks like reacting to a light). Through an embedded language model, the RNN could understand full written sentences. This let it perform tasks from natural language instructions, getting them right 83% of the time on average, despite having no prior training on or experience with those tasks.
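The paper's actual models are more involved, but the core idea can be sketched in a few lines, with every layer name and size here an illustrative assumption rather than the authors' code: a sentence embedding from a pretrained language model conditions a recurrent network that turns a stream of stimuli into responses.

```python
import torch
import torch.nn as nn

class InstructedRNN(nn.Module):
    """Toy sketch: an RNN whose behavior is steered by a language embedding."""
    def __init__(self, embed_dim=512, stimulus_dim=32, hidden_dim=256, output_dim=33):
        super().__init__()
        # Project the (frozen) language model's sentence embedding into the RNN's space.
        self.instruction_proj = nn.Linear(embed_dim, hidden_dim)
        self.rnn = nn.GRU(stimulus_dim + hidden_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, output_dim)

    def forward(self, instruction_embedding, stimuli):
        # stimuli: (batch, time, stimulus_dim). The instruction is broadcast
        # across every timestep so it continuously shapes the dynamics.
        instr = self.instruction_proj(instruction_embedding)        # (batch, hidden)
        instr = instr.unsqueeze(1).expand(-1, stimuli.size(1), -1)  # (batch, time, hidden)
        hidden, _ = self.rnn(torch.cat([stimuli, instr], dim=-1))
        return self.readout(hidden)  # a motor-style response at each timestep

# Usage sketch: the embedding would come from a frozen sentence encoder;
# here it's faked with random numbers.
instruction = torch.randn(1, 512)
stimuli = torch.randn(1, 100, 32)  # 100 timesteps of sensory input
responses = InstructedRNN()(instruction, stimuli)
```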
This dialogue was accomplished through natural language processing (NLP), a subfield of AI that seeks to recreate natural human language in computers. The advantage is that if an NLP system is convincing enough, it can use the same interface a human can, which means it can perform the same tasks linguistically as a human, including transfer learning. The difference is that, because AI operates at much faster speeds, it can train another AI far faster than a human could.
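A form of AI teaching AI is already standard practice: knowledge distillation, in which a large "teacher" model's outputs become the training signal for a smaller "student," with no human labeling in the loop. A minimal sketch, with the toy models, sizes, and temperature purely illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in models: in practice the teacher would be a large pretrained network.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student sees richer signal

def distill_step(inputs):
    with torch.no_grad():  # the teacher only supplies targets; it is never updated
        teacher_probs = F.softmax(teacher(inputs) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(inputs) / temperature, dim=-1)
    # Train the student to match the teacher's output distribution.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The teacher can generate training signal as fast as the hardware allows, which is exactly the speed advantage over human-supervised training described above.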
This is a major milestone, because training AI is a huge hurdle in its development, as Scott Alexander explains:
GPT-3 used 300 billion tokens. GPT-4 used 13 trillion tokens (another source says 6 trillion) ... That means GPT-5 will need somewhere in the vicinity of 50 trillion tokens, GPT-6 somewhere in the three-digit trillions, and GPT-7 somewhere in the quadrillions. There isn’t that much text in the whole world. You might be able to get a few trillion more by combining all published books, Facebook messages, tweets, text messages, and emails. You could get some more by adding in all images, videos, and movies, once the AIs learn to understand those. I still don’t think you’re getting to a hundred trillion, let alone a quadrillion.
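The arithmetic is easy to check. A back-of-the-envelope sketch that simply reuses the GPT-3-to-GPT-4 growth ratio for every later generation (an illustrative assumption; the quote's own extrapolation is gentler, but the conclusion is the same):

```python
# Figures are taken from the quote above, not official sources.
gpt3_tokens = 300e9   # 300 billion
gpt4_tokens = 13e12   # 13 trillion (another source says 6 trillion)
growth = gpt4_tokens / gpt3_tokens  # ~43x per generation

tokens = gpt4_tokens
for generation in ("GPT-5", "GPT-6", "GPT-7"):
    tokens *= growth
    print(f"{generation}: ~{tokens:.1e} tokens")
# Prints ~5.6e14, ~2.4e16, ~1.1e18 -- far beyond the ~1e14 (hundred trillion)
# token ceiling the quote estimates for all human-produced text.
```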
Or to put it another way, we are rapidly approaching constraints on how advanced AI can get with human help. It takes a lot of energy and compute to train an AI, so anything that speeds this process up speeds up the development of AI overall. To get to the next level of AI development will require assistance from other AI. This AI-on-AI learning paradigm not only accelerates individual AI training but also fosters a continuous cycle of improvement. As AI systems become more adept at learning from each other, the rate of innovation and progress in AI development is poised to skyrocket.
And this is why people worry about the FOOM.
KA-FOOM!
If AI can teach AI, there is theoretically no limit to an artificial general intelligence's (AGI) ability to train itself. Or rather, the limit is for the AI to learn everything there is to know. That moment is called the FOOM, an onomatopoeic term coined by Eliezer Yudkowsky, a researcher in artificial intelligence and rationality. It's meant to evoke the sudden and explosive nature of a hypothetical scenario called the "Fast Takeoff."
When Sam Altman of OpenAI talks about safety, he's at least partially acknowledging concerns of a FOOM:
We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally. A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.
AI development is moving so fast that even OpenAI is pivoting away from just releasing its systems to the world. There's a general sense that everyone needs to slow down, but there's so much money at stake that it seems highly unlikely any business will voluntarily slow-walk the training of AI for the betterment of humanity. While a FOOM scenario may seem unlikely, the first step towards it -- AI teaching AI -- is already here.
Please Note: The views and opinions expressed here are solely my own and do not necessarily represent those of my employer or any other organization.