AI-GA!
Clune, J. (2019). AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence. arXiv preprint arXiv:1905.10985.
AI-GAs are built on top of meta-learning techniques. Today's machine learning community largely follows a manual path to AI: engineers and scientists hand-design pieces of intelligence and hope that assembling enough of them will eventually yield AGI. That is not wrong, since we have no other proven approach so far, but it is slow. The current methodology works in two phases: in phase one, human scientists discover or design building blocks of AI; in phase two, they wire those blocks together into a complex system in the hope that it performs as well as a general learner. As an alternative to both phases, the author proposes AI-GAs (AI-generating algorithms): launch an outer loop, a slow, compute-inefficient optimisation process, that searches for and optimises an AI agent which, in the inner loop, is extremely sample efficient, because the outer loop has imbued it with the priors and inductive biases (in addition to the other complex building blocks that are today designed by hand) needed to be a general learner. In other words, you do not have to learn everything yourself: when a new problem arrives, the deployed agent does not start from scratch, because it is already primed by the outer-loop process that produced it. We can hope this works because it already has once: the evolution of humans from single-celled organisms on Earth shows that a remarkably sample-inefficient, unintelligent algorithm can produce the human brain. Importantly, this research is not committed to the outer-loop algorithm being evolution (recall meta-learning). The paper names three pillars to invest in: meta-learning architectures, meta-learning the learning algorithms themselves, and automatically generating effective learning environments.
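To make that two-loop picture concrete, here is a minimal sketch in Python (everything here is invented for illustration and is not from the paper: the toy "agent configuration" of an innate prior plus a learning rate, the task distribution, and all function names). An expensive evolutionary outer loop searches over configurations, and each candidate is scored by how sample-efficiently it learns fresh tasks in a cheap inner loop:

```python
import random

def inner_loop_score(config, task, budget=10):
    """Hypothetical inner loop: how well does an agent built from
    `config` learn `task` within a small sample budget?  The 'agent'
    is a toy: its innate prior sets the starting error, and each
    sample shrinks the error by its learning rate."""
    prior, learning_rate = config
    error = abs(prior - task)
    for _ in range(budget):
        error *= (1.0 - learning_rate)   # one sample of inner-loop learning
    return -error                        # higher is better

def outer_loop(generations=200, pop_size=20):
    """Expensive outer loop: evolve configurations whose agents are
    sample efficient across a whole distribution of tasks."""
    population = [(random.uniform(0, 1), random.uniform(0, 0.5))
                  for _ in range(pop_size)]
    for _ in range(generations):
        tasks = [random.uniform(0, 1) for _ in range(5)]   # fresh tasks each generation
        scored = sorted(population,
                        key=lambda c: sum(inner_loop_score(c, t) for t in tasks),
                        reverse=True)
        parents = scored[:pop_size // 2]                   # truncation selection
        population = parents + [
            (max(0.0, min(1.0, p + random.gauss(0, 0.05))),    # mutate prior
             max(0.0, min(0.5, lr + random.gauss(0, 0.02))))   # mutate learning rate
            for (p, lr) in random.choices(parents, k=pop_size - len(parents))
        ]
    return max(population, key=lambda c: sum(
        inner_loop_score(c, random.uniform(0, 1)) for _ in range(20)))

print("evolved (prior, learning_rate):", outer_loop())
```

The division of labour is the whole point: the outer loop pays a large compute bill once, and the agent it produces starts every new task with useful priors rather than from scratch.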
Initially, hand-crafted systems worked well, but learned systems eventually matched and surpassed them, and AI-GAs bet that this trend (articulated by Sutton) will continue. Ten years ago, running a deep learning model was expensive; in 2022 it is not. So AI-GAs wager that compute will keep getting cheaper, that today's resource-intensive tasks will become routine, and that we should spend that compute on producing general AI itself, our most powerful AI. The core idea is to take the learn-as-much-as-possible approach and apply it to the machinery of machine learning itself. The author's hope is that exploiting the compute we will have in the future (we should not need anything like the compute it took to produce intelligence on Earth) will give us better learners, and this is the Grand Challenge of AI-GAs: finding methods that allow an efficient AI-GA to run within the amount of compute we will have in the coming years. The author firmly believes this approach will be the fastest way for the research community to reach that ambitious goal.
One concrete line of work in this direction is meta-learning synthetic data to accelerate neural architecture search. The dominant paradigm in AI today, however, is still classical machine learning and deep learning. There is clearly a long way to go to Artificial General Intelligence (AGI); the question is how we get there, and the scientific community rightly calls it a grand challenge. The current paradigm is dominated by the Manual Path to AI, which focuses on optimisation, not invention. The manual path starts with phase one: crucial building blocks that are being studied intensively in the community. Some of them are:
1. Deep Networks: CNNs, RNNs, LSTMs, attention layers, their applications, techniques to improve deep networks, DNN optimisation, regularisation, AutoML
2. Representation Learning: unsupervised learning, pre-training, transfer learning, domain adaptation, distributed representations, discovering underlying causes, AutoDL, neural architecture search, network compression, graph neural networks
3. Generative Models: probabilistic generative models, DBNs, RBMs, deep generative models, encoder-decoders, variational autoencoders, GANs, deep convolutional GANs, and variants and applications of GANs
To appreciate the scale of the manual path, consider how many hand-designed topics a single deep-learning curriculum already contains:
Foundations: what a neural network is and its history, the neuron diagram, perceptron architecture, the perceptron training and learning rules, gradient descent and the delta rule (with derivations and the standard weight-update rule), the XOR problem and the origin of multilayer networks, feed-forward networks (each perceptron is linear, but their combination makes nonlinear and even irregular decision boundaries), worked training and backpropagation examples, and the major deep learning frameworks.
Activation functions: why networks need them at all (a neural network without activation functions is just a linear regression model), binary step, linear, saturated linear, standard sigmoid, hyperbolic tangent, ReLU and the dead-neuron problem, Leaky ReLU, parametrized ReLU, and softmax.
Convolutional networks: why regular networks don't suffice, convolution vs. cross-correlation, linear time- and shift-invariance, padding, strided convolutions, ReLU and pooling layers, stacking layers, why ConvNets should be deep, Gaussian filters, and the landmark architectures: LeNet (1998), AlexNet (2012, with its top-5 error rate), VGG-16 (2014) and its differences from AlexNet, GoogLeNet and the Inception module, ResNet, Wide ResNet, DenseNet (a universal approximator), and YOLO.
Recurrent networks and attention: processing sequential data, the RNN computational graph across time, backpropagation through time, vanishing and exploding gradients and their remedies, LSTMs and the forget gate, bidirectional and pyramidal RNNs, deep recurrent networks, attention models, image captioning with attention, "LSTMs are dead, long live transformers", transformers and self-attention and their advantages, BERT, and XLNet.
Training tricks and optimization: Xavier and He initialization, data-level tricks (data augmentation, batch size, mini-batch gradient descent, batch normalization), model-level tricks (learning rate, momentum, Adagrad, RMSProp, Adam), early stopping, choosing the right loss function (triplet loss, center loss), dropout and its consequences and extensions (DropConnect), L1 regularization, and maxout.
Autoencoders and kernel methods: undercomplete vs. overcomplete autoencoders, choice of loss function, PCA vs. autoencoders, sparse, denoising (denoising as a kind of regularization), and convolutional autoencoders, latent fingerprint representation using autoencoders, RBF networks, types of kernels, Mahalanobis distance, optimal weight vectors, learning kernel parameters, and two-stage learning.
Transfer and related paradigms: domain adaptation (including mapping and mapping-coupling approaches), incremental online learning, transfer learning (deep networks as feature extractors, freeze vs. fine-tune), co-training and co-transfer learning with iterative training, multi-task learning, deep relationship networks, dictionary and transform learning, self-supervised learning, active learning (with numericals), and the SHEAL model.
AutoML and search: grid vs. random search, genetic algorithms, neural architecture search (representation and evaluation), Genetic CNN, NAS with RL, and DARTS; plus GANs, RBMs, contrastive divergence, and Gibbs sampling.
All the building blocks mentioned above are part of manual AI: each was discovered, invented, or improved and published as novel research in a paper or at a conference, in the hope that, combined into a complex system, they bring the community closer to AGI. But this is neither optimal nor fast, and it is daunting to ask how many more such building blocks must be discovered or invented before a fully-fledged AGI system is possible. Consider the pool of all tools and techniques in the open world, discovered and undiscovered, and then think about the undiscovered ones. Is it possible to find all of them? Is it possible to find them individually and then study their combinations? That is what phase two of the manual path to AI demands: combining building blocks into a complex thinking machine that solves this Herculean task. Is that even possible? Debugging, tweaking, combining, and optimising would be a nightmare. It would require a massive team, and even where we have communities working together, such as OpenAI, each team has different motives. We cannot set up a team like the Apollo programme, where thousands of ML scientists sit together to experiment with every building block.
The trends in ML and DL suggest that the alternative approach is gaining ground fast: learned pipelines are ultimately replacing hand-crafted ones. Look at features (learned deep features replaced HAAR/SIFT cascades), architectures (hand-designed ones are giving way to learned ones), and hyperparameters (manual tuning gives way to grid search and neural architecture search). For architectures and hyperparameters, the hand-crafted versions often still win today, but at least an alternative path now exists, and the trend points its way. In the same spirit, we can have an alternative to the Manual Path to AI, called AI-generating algorithms: the algorithm itself learns end-to-end, as a total solution, learning as much as possible. The idea is to start from bootstrap conditions, simple initial conditions with no intelligence in them. As the learning agents evolve and improve, the expensive outer loop keeps searching for better combinations of learning environments, agents, and learning algorithms that are sample efficient. Eventually this meta-learning process produces an agent that is sample efficient, even though the process that got to it was wildly inefficient. For an existence proof, consider one Earth-sized computer that evolved single-celled organisms into the human brain, which is sample efficient and generalises well.
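As a tiny illustration of the "learned instead of hand-tuned" trend at the hyperparameter level, here is a minimal random-search sketch (the `train_and_validate` function is a hypothetical stand-in for a real training pipeline, and its toy score landscape is invented):

```python
import math
import random

def train_and_validate(learning_rate, num_layers):
    """Hypothetical stand-in: return a validation score for one
    hyperparameter setting.  A real pipeline would train a model;
    this toy landscape peaks near lr = 1e-3 and 4 layers."""
    return -((math.log10(learning_rate) + 3) ** 2) - 0.1 * (num_layers - 4) ** 2

def random_search(trials=50):
    """Replace manual tuning with search: sample settings at random
    and keep the best, with no human in the loop."""
    best_cfg, best_score = None, -math.inf
    for _ in range(trials):
        cfg = (10 ** random.uniform(-5, -1),   # learning rate, log-uniform
               random.randint(1, 8))           # number of layers
        score = train_and_validate(*cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

print(random_search())
```

Grid search would enumerate a fixed lattice of settings instead; random search often finds good regions faster when only a few hyperparameters actually matter.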
Evolution has a key role to play as we push towards artificial general intelligence, and I think so for three reasons. First, the neuroevolution and evolutionary algorithms communities have developed a lot of very important ideas that have a role to play in general AI research; whether they are ultimately used on an evolutionary backbone or hybridised with other ideas from machine learning, the community has produced a tremendous amount of insight, and we are seeing those ideas spread into the wider machine learning community. Second, evolutionary algorithms themselves may be a key technology, a part to play in the overall solution to AI. Third, to some extent the names of these fields are metaphorical. If you think about evolution a little more broadly as the outer-loop learning algorithm, something that learns across generations while something else learns within a lifetime, then that outer loop, which in nature was evolution and in machine learning is sometimes specifically an evolutionary algorithm and sometimes a different algorithm, say reinforcement learning, is definitely going to be critical as we push towards artificial general intelligence. Some will call it evolution and some will not, but at its heart we are taking inspiration from natural evolution and evolutionary algorithms to do that outer-loop activity, and that is essential. Even more broadly, things from natural evolution, such as interactions between agents and exploring and expanding a population through different niches, are essential to get into our algorithms to ultimately push to AGI. So in many different ways I think evolutionary algorithms are going to be essential to our push towards AI, and we are currently seeing the machine learning community take note of that and play with many of the ideas from evolution, under its various interpretations and definitions.
One example is using Bayesian optimisation, a very data-efficient machine learning algorithm; this is a case of hybridising the best of both worlds, the creativity of evolution (though it is expensive) with the sample efficiency of Bayesian optimisation. What we found is that a robot that has been damaged can, in seconds or maybe a minute or two, figure out a gait that works despite the damage, and soldier on with its mission or limp its way back to a repair station. The paper was called "Robots that can adapt like animals", and every time I explain the work I give the analogy of an animal in the forest. If you yourself are in a forest and you sprain your leg, what do you do? What you don't do is launch an optimisation process that tries a bunch of subtle variations on every single theme to figure out what works. Instead, you come pre-built with intuitions from your childhood about very different ways to walk. You try one type of behaviour, such as walking on the ball of your foot; if that doesn't work, you rule out that entire family of behaviours and try another type, like hopping on your left foot if your right foot is injured, until you say "aha, that's good enough" and hop out of the forest. In this work we used evolution to provide that simulated childhood, in which you gain knowledge of all these different ways to walk and become good at all of them, and then we used Bayesian optimisation, which is very efficient, to figure out live which of those gaits is best despite the damage. I really like that approach because it married the creative force of evolution with Bayesian optimisation, a traditional machine learning algorithm; and though evolution was expensive, we figured out how to do it offline ahead of time and then used a different algorithm to be data efficient when the clock is ticking.
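Here is a heavily simplified sketch of that idea ("Robots that can adapt like animals", Cully et al., 2015, combined MAP-Elites with Gaussian-process-based Bayesian optimisation; the repertoire, damage model, and all constants below are invented for illustration). Simulated performance over a gait repertoire acts as the prior, a small Gaussian-process correction is fit to the few real trials, and an upper-confidence rule picks the next gait to try:

```python
import numpy as np

# A behaviour repertoire, as if produced offline by evolution: each gait
# has a 1-D behaviour descriptor and a performance predicted in simulation.
descriptors = np.linspace(0.0, 1.0, 50)
sim_performance = np.sin(3 * descriptors) + 1.0

def real_performance(i):
    """Hypothetical damaged robot: reality disagrees with simulation
    in a smooth, unknown way (damage centred near descriptor 0.4)."""
    return sim_performance[i] - 1.5 * np.exp(-((descriptors[i] - 0.4) ** 2) / 0.02)

def kernel(a, b, length=0.15):
    """Squared-exponential kernel over behaviour descriptors."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))

tried_idx, tried_resid = [], []
for trial in range(6):
    if tried_idx:
        # GP regression on the residual (real minus simulated) performance.
        X = descriptors[tried_idx]
        K = kernel(X, X) + 1e-6 * np.eye(len(X))
        k_star = kernel(descriptors, X)
        mean_resid = k_star @ np.linalg.solve(K, np.array(tried_resid))
        var = 1.0 - np.sum(k_star * np.linalg.solve(K, k_star.T).T, axis=1)
    else:
        mean_resid, var = np.zeros_like(descriptors), np.ones_like(descriptors)
    # UCB acquisition over the repertoire: simulated prior + learned correction.
    ucb = sim_performance + mean_resid + 0.5 * np.sqrt(np.maximum(var, 0))
    i = int(np.argmax(ucb))
    tried_idx.append(i)
    tried_resid.append(real_performance(i) - sim_performance[i])
    print(f"trial {trial}: gait {i}, real performance {real_performance(i):.2f}")
```

The pattern mirrors the forest anecdote: the expensive "childhood" (building the repertoire) happens offline, and only a handful of real trials are needed when the clock is ticking.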
CAN EVOLUTIONARY COMPUTATION MAKE ROBOTS CREATIVE?
There are lots of ways to harness evolution as an extremely creative force. I like to think about evolution as creativity in a bottle; it is almost like the ancient genie myth: sometimes you open the bottle and evolution gets out and does all these mischievous things, because it is so creative that it can't help solving problems in ways you might not have anticipated. We recently had a paper called "The surprising creativity of digital evolution", in which we got legends and pioneers of the field, all the way through to some of its brightest new stars, to share their anecdotes of when evolution proved to be very creative and often surprised them. It is a huge collection of anecdotes that I love, showing time after time how routine it is, when working with evolution, to be surprised by how creative and clever it is. So one answer to the question is simply that evolution is, almost by default, exceptionally creative and will routinely surprise you; as a scientist you actually have to get good at thwarting its efforts, because otherwise it usually subverts your experimental intentions. But there is another approach, which I think is extremely exciting and to some extent the future; we have been working on it a lot lately and I think there is a lot more excitement to come. That is not just to use evolution as a creative force, but to use evolution, or any type of machine learning and stochastic optimisation, to produce agents that are themselves creative: an agent that is curious, seeks out novel situations, and learns how to explore efficiently. One of the first major salvos in this direction was Ken Stanley and Joel Lehman's novelty search, in which robots learned to do something different from what their ancestors had done before. In my lab we have done a lot of work on curiosity search, which is about an agent that, when it wakes up in the world, doesn't just want to do something different from what was done ten generations ago; it wants to keep doing new things compared to what it has already done. First it learns to walk, then it gets bored of walking and learns to run, then it learns to do backflips, then it explores the upstairs attic, then it wants to take a journey to Africa and find itself, and so on. Ultimately I think combining these two approaches will be key. One of the things we have shown in my lab is that novelty search, which was a great idea by Joel and Ken, produces agents that don't just go to new places; they learn general skills for exploration, so when you put such an agent in an entirely new environment, it has learned how to explore and figures out how to do so. Increasingly we are pushing to apply that to curiosity as well, so the agent learns how to efficiently go to new places within its own world. We have shown that these techniques produce state-of-the-art results on, for example, Montezuma's Revenge, the Atari game; they tie the state of the art in producing agents that explore this extremely challenging domain and collect its very sparse rewards, because they are incentivised to explore. So I like both evolution as a creative force, an optimisation algorithm in itself, and using evolution or any other machine learning algorithm to produce agents that are themselves very creative and exploratory, because ultimately that is what we want: robots and intelligent AI agents, each of which is curious and goes off and finds new things to do in the world and new solutions to challenging problems.
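The exploration incentive described above can be illustrated with a minimal intrinsic-reward sketch (a simple count-based novelty bonus; the actual systems mentioned, e.g. for Montezuma's Revenge, are far more sophisticated, and this toy is not their method):

```python
from collections import defaultdict
import random

visit_counts = defaultdict(int)

def intrinsic_reward(state):
    """Reward inversely related to how often a state has been visited:
    rarely seen states pay more, pushing the agent to explore."""
    visit_counts[state] += 1
    return 1.0 / (visit_counts[state] ** 0.5)

# Toy random walk on a line: the bonus decays wherever the agent lingers,
# so revisiting familiar ground becomes progressively less rewarding.
state = 0
for step in range(20):
    state += random.choice([-1, 1])
    print(step, state, round(intrinsic_reward(state), 3))
```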
Novelty Search vs Curiosity Search:
One of the really brilliant insights behind novelty search is that we want to reward robots or AI agents for doing different things in behaviour space. Imagine a robot in this room that needs to walk around: it would be interesting if it first climbed on the couch, or opened a door, or walked up a flight of stairs, and so on. That is very different from encouraging random search in genotype space, the space of neural network weights. You could forever randomly mutate the weights of a neural net and so encourage different neural nets, but all of them might do nothing; they might just sit there and flail their legs, or fall over. What we want is to encourage novel behaviours, not novel configurations of neural nets, because two neural nets might be very similar and yet one writes Shakespeare and the other writes a Dan Brown novel. Those are interestingly different (my preference is the Shakespeare one), and even though they differ only subtly in weight space, they are very different in the space we ultimately care about, which is behaviour; and conversely, many, many neural nets might write that same Dan Brown novel, or might all fall over. So we want to encourage novelty in the space we ultimately care about, which might be different solutions to an engineering challenge, or different behaviours a robot performs in a very complicated environment. That is a profound rethink of the right way to explore for novelty, and the same applies to curiosity: we want robots that can be curious within their lifetimes and exhibit qualitatively different behaviours, even though their neural nets might not be that different or might even be identical. I want to point out one final dimension of how to encourage creativity in robots. There was some work out of my lab called the creative thinking approach, and what I really like about it is that it doesn't just encourage novel behaviours; it encourages novel ways of thinking about a problem. We actually look into the brains of these agents, at the neural activation patterns, and we say: you are still solving the problem much as agents have done before, but you are thinking about it in an entirely new way, which might be a stepping stone to a later discovery, so we will encourage you, because we see novel patterns in your brain that previous agents had not accessed. That, I think, is a promising approach to encouraging creativity and novelty as well.
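Here is a minimal sketch of rewarding novelty in behaviour space rather than genotype space (the toy domain, behaviour descriptor, and all parameters are invented; the original novelty search is by Lehman and Stanley). Novelty is the mean distance to the k nearest neighbours in an archive of previously seen behaviours, and selection favours novelty instead of any fitness objective:

```python
import random

def behavior(genome):
    """Toy genotype-to-behaviour mapping: the behaviour descriptor is
    just where a 'robot' ends up on a line.  Many different genomes
    can collapse onto the same behaviour."""
    return round(sum(genome), 1)

def novelty(b, archive, k=5):
    """Mean distance to the k nearest behaviours seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(b - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

archive = []
population = [[random.gauss(0, 1) for _ in range(5)] for _ in range(20)]
for gen in range(50):
    # Select for novelty, not for any task objective.
    scored = sorted(population, key=lambda g: novelty(behavior(g), archive),
                    reverse=True)
    archive.extend(behavior(g) for g in scored[:3])   # archive the most novel
    parents = scored[:10]
    population = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                  for _ in range(20)]
print("behaviours discovered:", sorted(set(archive)))
```

Note that a mutation only earns credit if it changes the end position: many genomes collapse onto the same behaviour and score no novelty, which is exactly the Dan Brown point above.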