Will quantum computers kill AI as we know it?
Ted Prince
CEO performance prediction, family succession, behaviorally-based investment, behavioral ratings for CEOs, company founder, thought leader, judge for Harvard Innovation Labs
Sigh, I can’t help taking aim at AI. It’s just so fashionable and politically correct. Ask anyone in a leadership position in politics or the private sector what the future is, and they will all tell you seriously it’s all AI, OF COURSE. I have this pressing need to knock down shibboleths, even if they are good ones. So here goes.
Here’s my question. Once we get quantum computers, why will we need AI? Sounds like a dumb question, right? We’ll always need AI, OK? First, we humans are dumb and need help (from the people who make AI, not the AI itself). Second, there’s so much data (Big Data, between us girls) that we need things much smarter than us to sift through it all and see those beautiful underlying patterns.
Here are a couple of straws in the wind for you to divine. You’ve certainly heard of Google Translate, right? It’s a pretty amazing tool. Recently it went through a dramatic transformation in which its translations became significantly better.
The name of the new system is Google Neural Machine Translation (GNMT if you want to air your superiority in Googlish). The improvement is said to have been due to its addition of new types of neural networks, an area in which Google has taken a global lead. If you want to see the popular reaction, see the New York Times for the ultimate in breathless tech worship (“The Great A.I. Awakening”).
But investigate the fine print and something becomes clear that isn’t in the headlines. The neural machine may indeed have some great neural nets, but the secret sauce is its vast database of documents, which it now uses in different ways to find the best translation using good old-fashioned brute-force approaches. Even the NYT article notes that without the huge investment Google made in hardware and TPUs, the breakthrough could not have been made.
And in fact, the neural nets that Google trumpets don’t embody any huge advance in AI; their power is derived not from amazing new techniques but from the fact that the neural nets, instead of operating at a single level only, now operate at multiple levels. Sort of like having 5 blades in your shaver instead of the old-fashioned one or two.
They can do this not because Google made any huge breakthrough but because it developed an incremental improvement to the neural nets and then threw huge amounts of processing power at them, enabling them to use Google’s vast database of translations to find the best translation out of millions. In other words, brute force again.
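The “single level versus multiple levels” distinction is easy to make concrete. Below is a toy Python sketch, using only the standard library; it is of course nothing like Google’s actual architecture, just an illustration that a “deep” network is the same basic layer operation stacked so that each level feeds the next:

```python
import math
import random

def layer(inputs, weights, biases):
    """One fully connected layer with a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def random_layer(n_in, n_out):
    """Random weights for a layer mapping n_in inputs to n_out outputs."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

random.seed(0)
x = [0.5, -0.2, 0.1]  # a made-up 3-feature input

# "Single level": one layer straight from input to output.
w1, b1 = random_layer(3, 2)
shallow = layer(x, w1, b1)

# "Multiple levels": the same operation stacked, each layer feeding the next.
h = x
for n_in, n_out in [(3, 8), (8, 8), (8, 2)]:
    w, b = random_layer(n_in, n_out)
    h = layer(h, w, b)
deep = h

print(len(shallow), len(deep))  # both end in 2 outputs
```

The incremental nature of the change is the point: the deep version contains no new mathematics, just more of the same layers, which is why it takes so much more processing power and data to train.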
Check out Douglas Hofstadter’s take on it (“The Shallowness of Google Translate”) if you want a more rational, less promotional view of the new Google Translate. (Remember Douglas Hofstadter? Author of the acclaimed book “Gödel, Escher, Bach”? So, serious street cred in AI.)
OK, so maybe GNMT is an outlier and other claims of AI doing amazing things are right on the money. So, let’s look at another widely heralded advance in AI, namely Google’s AlphaGo, the program that for the first time beat a Go master. As I am sure you know, Go has long been the paradigm by which to judge the smartness of AI. If it can beat a human master, we’ve hit the jackpot, so to speak, since the old paradigm for AI, chess, was crushed many years ago.
AlphaGo beat its human opponent, Lee Sedol, in March 2016, so there was another outpouring of hyper-tech-ventilation at that time too. Here was the next prodigy to follow the HAL 9000. AlphaGo was yet another milestone in the final losing battle of humans versus computers.
But AlphaGo is not so mysterious. Its main tool, Monte Carlo tree search, is built on an old one: Monte Carlo simulation, a technique invented in the 1940s. So, kind of long in the tooth by modern standards. Sure, it works, but it’s hardly the Second Coming.
From where does AlphaGo get its power? Let’s take it from an expert: “This version of AlphaGo - AlphaGo Lee - used a large set of Go games from the best players in the world during its training process.” In other words, AlphaGo uses a combination of a very old mathematical approach, a couple of multi-level neural nets, and a huge database of games from which to take experience and evaluate the best results. It’s not to be sniffed at, but it’s hardly deus ex machina stuff either.
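The Monte Carlo idea at the heart of this is striking in its simplicity: to judge how good a position is, play many random games from it and count how often you win. Here is a minimal sketch in Python for a deliberately trivial pile game (take 1 or 2 stones per turn; whoever takes the last stone wins); the game is a stand-in of my own choosing, nothing to do with Go itself, but the evaluation principle is the same one Monte Carlo tree search uses:

```python
import random

def random_playout(pile, player):
    """Play random legal moves until the pile is empty.
    The player who takes the last stone wins; returns the winner (0 or 1)."""
    while True:
        take = random.randint(1, min(2, pile))
        pile -= take
        if pile == 0:
            return player
        player = 1 - player

def estimate_win_prob(pile, player=0, trials=5000):
    """Monte Carlo evaluation: fraction of random playouts `player` wins."""
    wins = sum(random_playout(pile, player) == player for _ in range(trials))
    return wins / trials

random.seed(42)
print(estimate_win_prob(1))  # 1.0 -- one stone left, the mover always wins
print(estimate_win_prob(2))  # roughly 0.5 under random play
```

Note that there is no “understanding” of the game anywhere in this code: accuracy comes purely from running enough playouts, which is exactly the brute-force character being described above.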
To put it another way, a lot of what we these days call AI is a combination of old math approaches, tarted up with some multi-level neural nets and vast databases accessed with massive computing resources. That is, the AI we are talking about still uses a lot of very old-fashioned brute force. It works, but it sure isn’t some new form of super-intelligence. In fact, it uses huge amounts of elbow grease rather than smidgens of smarts.
So where am I going with all this? AI isn’t what it’s tarted up to be? Yep, partly.
But here’s the deal. We’re getting close to having quantum computers. Of course, they compute in a radically different way. And the approach is such that, for many classes of problems, the hitherto vexed issue of vastly insufficient processing power and speed disappears, either almost or totally.
When we use digital processing, as in von Neumann machines, processing power and speed are usually at a premium. There are vast areas of modern problems where the relative lack of power of von Neumann machines is the ultimate limitation on what we can achieve.
But quantum computing promises to remove that limitation for many classes of these hitherto insoluble or intractable problems. In the old computing world, we had to use “AI” or whatever to get around the problem of processing power. But with quantum machines this limitation will disappear for many types of problem.
In other words, there are many classes of problem that we can’t tackle now but will be able to tackle in the near future using just brute force. We won’t need AI, or multi-level neural nets, or whatever, for many of these types of problem.
Here’s just one example. We all know that breaking some codes is effectively impossible using von Neumann machines. But we already know that what is impossible for them is theoretically quite possible for quantum machines. Today’s public-key codes, for instance, look breakable by quantum machines running Shor’s algorithm, and even the brute-force search of 128- and 256-bit key spaces gets a quadratic speedup from Grover’s algorithm; that is, no AI required. In fact, it looks like the Chinese (who else, right?) are already racing ahead (“The Race Is On to Protect Data From the Next Leap in Computers. And China Has the Lead.” https://nyti.ms/2DZletq). So, what about other types of problem which are way less challenging?
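To see what “brute force, no AI required” means for codebreaking, here is a toy Python sketch. The “cipher” (hashing the key with the plaintext) and the 16-bit key are inventions of mine purely for illustration; real ciphers and key sizes differ, and the Grover figure at the end is the theoretical quantum query count for searching the same space:

```python
import hashlib

PLAINTEXT = b"attack at dawn"

def encrypt(key: bytes) -> bytes:
    # Toy stand-in for a cipher: the "ciphertext" is SHA-256(key || plaintext),
    # so recovering the key means trying every candidate key.
    return hashlib.sha256(key + PLAINTEXT).digest()

secret_key = (43690).to_bytes(2, "big")   # a 16-bit key: only 65,536 possibilities
ciphertext = encrypt(secret_key)

# Classical brute force: enumerate the entire key space.
found = next(k.to_bytes(2, "big") for k in range(2**16)
             if encrypt(k.to_bytes(2, "big")) == ciphertext)
print(found == secret_key)  # True

# A 16-bit space falls instantly; a 128-bit space would take 2**128 tries.
# Grover's algorithm on a quantum machine needs only about sqrt(N) queries.
for bits in (16, 128):
    print(f"{bits}-bit key: classical ~2^{bits}, Grover ~2^{bits // 2}")
```

Note the nuance the arithmetic exposes: Grover halves the effective key length, which cracks small key spaces wide open, while Shor’s algorithm is what actually devastates public-key schemes like RSA. Either way, the quantum advantage is raw search power, not intelligence.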
I think we all need to be far more skeptical about claims of what is being achieved using so-called “AI” and what isn’t. And I think we need to question how useful AI will be in the future, both in general terms and relative to the upcoming generation of quantum computers.
We might not need AI in its current forms. We might need totally new forms of AI that do not depend on brute force as a crutch to get things done.
That might well put Schrödinger’s cat amongst the AI pigeons.