If an AI can play Go, can it drive a car?
Richard Foster-Fletcher
Global AI Advisor | Keynote Speaker | Shaping the Future of Work and Responsible Artificial Intelligence
Somebody recently put it to me that, with autonomous vehicles, everything that could happen around the car is predictable.
They said, therefore, that predicting what could happen during a car journey is no different from how AlphaZero predicts the possible futures in Go or Chess.
Could a model that predicts every possible move in a game, and calculates the odds and potential outcomes, predict every possible 'move' whilst driving in the same way? Could one therefore use Machine Learning models similar to AlphaZero to successfully drive a car, and is there any merit in thinking about the problem this way?
Stuart: You can always make an argument that two things which are neither black nor white are the same shade of grey. To some extent, the things that happen around a car are not completely random, and the things that happen in chess are not completely random, so you could argue that both are, to some extent, predictable.
I'm not sure, though, that at this stage putting game-playing models and self-driving models in the same category is useful. If you compare the algorithms we have for playing chess with the algorithms we have for self-driving cars right now, you will see that they are not the same type. The two problems are being addressed with different approaches.
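N.B. To make that contrast concrete, here is a toy sketch in Python. None of these classes or functions come from a real engine or driving stack; they are invented for illustration. A chess engine chooses from a finite, exactly enumerable list of legal moves, whereas a driving system maps noisy, continuous sensor state to a point in a continuous control space.

```python
# Illustrative sketch only: simplified, assumed interfaces, not a
# real chess engine or driving stack.
from dataclasses import dataclass

def chess_actions(legal_moves):
    """Games: the rules hand you a finite, exact list of options."""
    return list(legal_moves)            # e.g. ["e2e4", "g1f3", ...]

@dataclass
class DrivingState:
    speed_mps: float                    # continuous quantity
    heading_error_rad: float            # continuous quantity
    nearby_objects: list                # open-ended, from noisy perception

def driving_action(state: DrivingState):
    """Driving: no move list; the output lives in a continuous space."""
    throttle = max(0.0, min(1.0, (15.0 - state.speed_mps) / 15.0))
    steer = -0.5 * state.heading_error_rad   # toy proportional steering
    return throttle, steer

print(chess_actions(["e2e4", "g1f3"]))
print(driving_action(DrivingState(10.0, 0.05, ["cyclist"])))
```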
N.B. In a recent paper called 'Game Theoretic Planning for Self-Driving Cars in Competitive Scenarios' the authors propose a nonlinear receding horizon game-theoretic planner for autonomous cars in competitive scenarios with other cars. The online planner is specifically formulated for a two-car autonomous racing game in which each car tries to advance along a given track as far as possible with respect to the other car.
The game-theoretic planner iteratively plans trajectories for the two vehicles until they converge. Crucially, the trajectory optimisation includes a sensitivity term that allows the first vehicle to reason about how much the other vehicle will yield to it to avoid collisions. The resulting trajectories for the ego vehicle exhibit rich game strategies such as blocking, faking, and opportunistic overtaking. The performance is validated in experiments with two autonomous cars, and in experiments with a full-scale autonomous car racing against a simulated vehicle.
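N.B. As a rough, heavily simplified illustration of the idea (not the paper's actual formulation: the state is reduced to 1-D progress along the track, the sensitivity term is replaced by a fixed collision penalty, and all names and constants are invented for the sketch), an iterated best-response planner over a short horizon might look like this:

```python
# Toy sketch of receding-horizon, iterated best-response planning
# for two cars. Heavily simplified: 1-D track progress, sampled
# speed profiles, and a fixed collision penalty standing in for
# the paper's sensitivity term. All names and constants are invented.
import numpy as np

HORIZON = 10                          # planning steps
SPEEDS = np.linspace(0.0, 2.0, 5)     # candidate speeds per step
SAFE_GAP = 1.0                        # desired minimum separation
YIELD_WEIGHT = 5.0                    # collision-avoidance strength

def rollout(start, profile):
    """Positions along the track for a given speed profile."""
    return start + np.cumsum(profile)

def cost(profile, start, other_traj):
    """Negative own progress plus a proximity penalty."""
    traj = rollout(start, profile)
    gap = np.abs(traj - other_traj)
    penalty = np.sum(np.maximum(0.0, SAFE_GAP - gap) ** 2)
    return -(traj[-1] - start) + YIELD_WEIGHT * penalty

def best_response(start, other_traj, rng, n_samples=500):
    """Sampled best response to the other car's current plan."""
    best, best_c = None, np.inf
    for _ in range(n_samples):
        profile = rng.choice(SPEEDS, size=HORIZON)
        c = cost(profile, start, other_traj)
        if c < best_c:
            best, best_c = profile, c
    return best

# Iterated best response: each car replans against the other's
# latest trajectory; repeating this drives the pair of plans
# toward an equilibrium (the convergence mentioned above).
rng = np.random.default_rng(0)
pos = np.array([0.0, 0.5])
trajs = [rollout(p, np.full(HORIZON, 1.0)) for p in pos]
for _ in range(10):
    for i in (0, 1):
        trajs[i] = rollout(pos[i], best_response(pos[i], trajs[1 - i], rng))

print("car 0 plan:", np.round(trajs[0], 2))
print("car 1 plan:", np.round(trajs[1], 2))
```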
In chess, for example, you have a number of pieces that can each move in a certain number of ways. There are many options available, but you can only select one at a time, whereas things happen simultaneously in and to a vehicle. Is the difference one of complexity?
Stuart: There's an expression that all models are wrong, but some models are useful. The current game-playing models and self-driving models progress in different ways.
N.B. Game-playing models can run simulated scenarios countless times to devise the best strategies, but self-driving models need real-world data to learn from: building an ML model that faithfully simulates actual driving scenarios is as complex as developing the self-driving model itself. The simulator and the driver are two sides of the same coin.
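A minimal sketch of that structural difference, using invented toy stand-ins (nothing here is a real training API): the game agent can manufacture unlimited experience because the rules themselves are a perfect simulator, while the driving learner is bounded by whatever logs the real world has produced.

```python
# Toy illustration only: invented stand-in "tasks", not real APIs.
import random

def train_by_self_play(n_games=10_000):
    """Game setting: the rules ARE the simulator, so experience
    is effectively free; generate as much as compute allows."""
    wins = {"aggressive": 1, "cautious": 1}       # pseudo-counts
    for _ in range(n_games):
        strategy = max(wins, key=wins.get)        # greedy self-improvement
        # Stand-in 'game': aggressive play wins 60% of the time.
        won = random.random() < (0.6 if strategy == "aggressive" else 0.4)
        wins[strategy] += won
    return wins

def train_from_logs(logged_drives):
    """Driving setting: learning is capped by a fixed, expensive
    corpus of real-world logs; there is no free extra data."""
    brake_speeds = [v for v, action in logged_drives if action == "brake"]
    # e.g. estimate the speed at which human drivers begin braking
    return sum(brake_speeds) / len(brake_speeds)

print(train_by_self_play())
print(train_from_logs([(30, "brake"), (50, "brake"), (40, "cruise")]))
```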
Stuart: But if someone wants to say that it's useful to model them in the same way, and they can demonstrate that it is, then by all means we can consider that.
You can find the full interview with Stuart Armstrong via his episode on the Boundless Podcast.
Stuart talked about why the future matters and what it takes to unite humanity; about applying safety to Artificial General Intelligence and our likelihood of surviving, en masse, the hurdles and crises of the next two centuries; about what it will be like to live on digitally and have a relationship with your as-yet-unborn descendants; and about what we do now that could appall future, more enlightened generations. Stuart predicts our likelihood of experiencing the unseen vistas of the galaxy and paints a picture of a remarkable multi-planetary existence.
Stuart Armstrong is a James Martin Research Fellow at the Future of Humanity Institute, Oxford University.
His research at the FHI centres on the safety and possibilities of Artificial Intelligence (AI), how to define the potential goals of AI and map humanity’s partially defined values into it, and the long-term potential for intelligent life across the reachable universe.
Richard Foster-Fletcher is CEO and Founder of NeuralPath.io, a Strategic AI Consultancy Practice. Formerly with Oracle Corporation, Richard runs the MKAI Meetup (Milton Keynes Artificial Intelligence) and is the host of the Boundless Podcast.
Here on LinkedIn, Richard interviews leading researchers and executives in the fields of futurism, AI and business strategy.
Artificial Intelligence and Data Analytics practitioner
If an AI could play Go, would it even want to drive a car? If it turned out that the best game-playing algorithms required a degree of emotion, of motivation, then it might not desire to drive that car, or even to drive it in the same direction the passenger wanted. Whilst this sounds implausible (and may of course not be true), algorithms which are evolved might even evolve emotions as a means of selecting the best solutions; so it's not impossible that very intelligent AIs could have their own, "artificial", desires.