All You Should Know About the Near Future: AlphaGo Beats Itself, Humans Help Robots, Ride-Sharing Aids Congestion
I publish one of tech's favourite newsletters, Exponential View. This is an excerpt. If you like it, do subscribe.
DeepMind blew my mind again. They announced AlphaGo Zero (AGZ), a new approach to playing Go. First, it is trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data. Second, it uses only the black and white stones from the board as input features. Third, it uses a single neural network, rather than separate policy and value networks. Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves.
In summary, it’s a single neural net, trained on no human data whatsoever; it generates its own training data by playing against itself. (You’ll recall previous versions of AlphaGo were trained on millions of human games.) The results are impressive. Within three hours AGZ played as well as a human beginner. Within a few days of self-play, it became the world’s best Go player.
Additionally, AGZ achieved this using only four of Google’s tensor processing units (TPUs), chips specialised for this type of neural net. The original AlphaGo needed 176 GPUs, a far less optimised technical architecture. AGZ was whipping AlphaGo’s silicon derriere three days after it was instantiated. DeepMind’s blog post is a very clear read.
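The self-play idea at AGZ’s heart is simple enough to sketch in a few lines. Below is a toy illustration, emphatically not DeepMind’s algorithm: tic-tac-toe stands in for Go, and a tabular value lookup stands in for the neural network and tree search. The agent starts from random play, plays against itself, and backs each game’s outcome up through the positions it visited — no human data anywhere.

```python
import random

# Minimal self-play sketch (illustrative only, not AlphaGo Zero itself):
# tic-tac-toe in place of Go, a value table in place of the neural net.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def after(board, move, player):
    """Board string that results from `player` playing at `move`."""
    b = list(board)
    b[move] = player
    return ''.join(b)

def self_play_train(games=20000, alpha=0.2, epsilon=0.1):
    """Learn state values purely from self-play, starting from random play.

    value[s] estimates the win probability for the player who just
    moved into state s (0.5 = unknown / drawish).
    """
    value = {}
    for _ in range(games):
        board, player, history = ' ' * 9, 'X', []
        while True:
            moves = [i for i, c in enumerate(board) if c == ' ']
            if random.random() < epsilon:   # occasionally explore
                move = random.choice(moves)
            else:                           # otherwise exploit learned values
                move = max(moves, key=lambda m: value.get(after(board, m, player), 0.5))
            board = after(board, move, player)
            history.append(board)
            if winner(board) or ' ' not in board:
                break
            player = 'O' if player == 'X' else 'X'
        # Back the outcome up through every position visited, flipping
        # perspective each ply (both "players" share the one value table).
        target = 1.0 if winner(board) else 0.5
        for state in reversed(history):
            v = value.get(state, 0.5)
            value[state] = v + alpha * (target - v)
            target = 1.0 - target
    return value
```

The real system replaces the lookup table with a deep network and the one-ply greedy choice with a Monte Carlo tree search, but the loop — play yourself, record the result, improve the evaluator — is the same.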
One observation made by Pedro Domingos, Professor at the University of Washington:
AlphaGo Zero is great, but hold on: self-play is one of the oldest ideas in ML, and humans take far less than 5 million games to master Go.
Honestly, three days is a pretty quick time to become the world’s best at anything. And playing 5m games of Go is hardly expensive in computational terms, while it would be prohibitively so for a human.
For an absolutely wonderful view on AI learning theory, I strongly recommend this debate between Gary Marcus and Yann LeCun on how much innate machinery AI really needs. Marcus is best known for taking a nativist stance: that we must have some sort of innate machinery in our brains to learn as efficiently as we do, and so general AI systems will need the same. LeCun, who has driven the development of convolutional nets, is associated with a more Lockean view: that data from experience will get you a long way.
Dark factory: how robots and humans work together in factories.
This stunning essay by Sheelah Kolhatkar provides a glimpse into automated workplaces and the dynamics that drive them. For workers:
[a]utomation was bringing greater and greater efficiency, even though, at a certain point, the logic of increasing efficiency would catch up with [them], and [they] wouldn’t be around any longer to witness it. One day, the factory might go dark. In the meantime, [they were] enjoying the advantages of work that involved less work.
Sheelah’s piece is multidimensional, so do read. But one aspect that rings out ominously is what Tim O’Reilly characterises as:
the thrall of an economic theory that says that wages and working conditions are entirely subject to inevitable laws of supply and demand, not recognising the rules and incentives we have created that ruthlessly allocate the benefit of increased productivity to the owners of capital and to consumers in the form of lower prices, but dictate that human workers are a cost to be eliminated.
Related, Nature has a good special on the future of work, including a hint that the ongoing training needed to help workers constantly upskill might come from MOOCs. Which is a pity, because Clarissa Shen, a big cheese at Udacity (one of the instigators of the MOOC), has declared them “dead”.
In other news...
1) Tezos, a cryptocurrency beloved by anarcho-capitalists for its supposed self-governing affordances, facepalmed. Its founders, having raised more than $230m in a rambunctious ICO earlier this year, have fallen out, threatening the entire project. Anna Irrera does a superb job unpicking the story. (Tezos futures have dropped 75%.) Read this for a simple intro to Tezos.
2) Uber and Lyft may add to traffic congestion in cities while moving people away from mass transit, says a fascinating new study. Londoners dodging fleets of Uber Priuses will not be surprised, although more research is needed. The study also finds that ride-hailing has appealed to a much larger segment of the urban population than previous sharing models, like Zipcar. UBS reckons that by 2035 using robotaxis will be half the cost of owning a car, but that today’s ride-hailing is not competitive with car ownership. (h/t Reilly Brennan.) My maths tells me it isn’t cost effective to replace my primary vehicle with an Uber, but Uber has certainly replaced the second car (which I no longer have).
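That back-of-the-envelope maths runs roughly like this. The numbers below are illustrative placeholders, not figures from the study or from UBS — plug in your own:

```python
# Rough annual cost comparison: owning a car vs ride-hailing.
# All figures are assumed for illustration; substitute your own.

def annual_ownership_cost(depreciation=3000, insurance=800, fuel=1200,
                          maintenance=600, parking=400):
    """Total yearly cost of keeping a car on the road."""
    return depreciation + insurance + fuel + maintenance + parking

def annual_ride_hailing_cost(trips_per_week, avg_fare):
    """Yearly spend if every one of those trips is a ride-hail instead."""
    return trips_per_week * 52 * avg_fare

own = annual_ownership_cost()               # assumed primary-car cost per year
primary = annual_ride_hailing_cost(14, 12)  # two trips a day, assumed fare
second = annual_ride_hailing_cost(3, 12)    # occasional second-car usage
```

With these assumed figures, ride-hailing a heavily used primary car costs more than owning it, while ride-hailing a lightly used second car costs less — which is exactly the pattern I’ve landed on.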
3) Yuval Noah Harari calls for creating new models for the post-work economy and political system, warning that we don’t have much time left:
The challenges posed in the twenty-first century by the merger of infotech and biotech are arguably bigger than those thrown up by steam engines, railways, electricity and fossil fuels. Given the immense destructive power of our modern civilization, we cannot afford more failed models, world wars and bloody revolutions. We have to do better this time.
4) A lengthy but worthwhile commentary on what it means to be a citizen in a democracy:
Democracy, instead, requires treating people as citizens – that is, as adults capable of thoughtful decisions and moral actions, rather than as children who need to be manipulated. One way to treat people as citizens is to entrust them with meaningful opportunities to participate in the political process, rather than just as beings who might show up to vote for leaders every few years.