AI – The hype and the ground truth.

Three years ago, I wrote this:

https://www.dhirubhai.net/pulse/20140811135733-1755074-is-artificial-intelligence-ai-coming-out-of-the-dog-house/

No bragging intended. At the time of the acquisition of DeepMind by Google for a huge amount of money ($400M reportedly), and as one who had been in “AI” since the late 80s and early 90s, I thought that the field might, at last, come out of the dog house where it had been since the “AI winter”. [Note: DeepMind…as in AlphaGo…check it out.] The AI winter was caused by a lot of hype and very few results. Within 6-12 months of several “AI companies” going down in flames, nothing that had “AI” slapped onto its business plan would get funded. And the field went silent for many, many years.

Those of us who had been involved in AI at the time had two things in common: first, we lamented the cruelty of the market (so much for that) and second, we believed that, some day, there would be a resurgence of the technologies we had all believed in and worked on. But it would take enabling factors for that resurgence to take place: a – lots of computing power, and b – the realization that “AI” (and I’ll get to the definition in a second) could tackle problems that had become so big and so pressing that the technologies would be given a second chance. And this time, we'd better deliver!

My post on LinkedIn had three messages in it:

  1. Hello world, maybe “AI” is coming back from the dead!
  2. Yep, there are things it can do that could not be done 20-30 years ago.
  3. Beware of the hype…because hype is what “killed” AI back in our day.

Today (3 years later), there is not a single tech news cycle that does not mention AI. AI this and AI that. We do AI. AI is going to change the world. AI is going to destroy the world. Etc etc etc…And you’ve got a bunch of people trying to essentially “ride the AI wave” for whatever purpose…[Note: there is even one local fellow who has been bestowed the title of “father of deep learning” whereas he was in grad school when my fellow AI nuts and I were struggling along with neural networks and all. Come on man!] Hype is not good. It never has been and never will be.

As a veteran of this uncanny field, let me give you my own assessment.

  1. What in the world is AI? You are simply talking about computer hardware and software. That’s it. Computer-based analysis systems and/or decision systems. News flash: the financial industry has been involved in “AI” for decades (how do you think your mortgage application is processed? Or your insurability profile?). What about them robots? Well, we have been using robots for decades (Note: there is not one car coming out of a manufacturing plant that has not been painted by a robot!). If you listen to the “visionaries” and the “luminaries”, it sounds completely “magical”. It is not. It never has been. And we are decades away from it being so! Bits and bytes. Clever bits and bytes at that. But bits and bytes.
  2. Is AI new? Folks, I have been at this for over 40 years. So “new” is all relative to me. When I see that in the span of 18-24 months, “an entirely new, revolutionary field has just been born”, I am a little bit sceptical. No, it is not new. And the algorithms that are used today have been around for decades. I used some of them and I designed some of them. It is actually so “not new” that very large institutions have been using “AI stuff” for years without you knowing it. For AI to progress, it will take bright minds to go beyond what we know now. By that I mean an entire paradigm shift, not simply tweaking neural networks (the cooking equivalent of improving the recipe for meatloaf)! Nope, bold ideas. Brand new ideas like contextual knowledge, or Siri learning to understand non-verbal cues (as in facial expressions).
  3. What has changed since the AI winter? Very simple: lots and lots and lots of computing power. Back in the late 80s I was one of the first to try using neural networks on a “real” problem (trying to find out if a minor increase in the ocean temperature would cause the cloud cover to change). Good luck with that at the time. I also used probabilistic algorithms, pure linear algebra, parallel processing…but on a good day, it would take 7-10 hours to process my stuff on the most powerful computers on earth (Cray 2, Cray X-MP, etc). To do it with a neural net would have taken days or even weeks! Today, you have the equivalent of many Crays in your iPhone…so yes, what we could not do then can be done today: processing lots of math and trying to make sense of the results. Results we get in minutes rather than in hours or days or weeks.
  4. What is possible now and what is not yet possible? AI is not a unified or homogeneous field. Machine vision is one thing. High-frequency trading is another. In other words, you don’t have some kind of “universal AI engine” out there that you can plug into your applications and, bingo, stuff will happen. Nope. Not the case. It is still a craftsman’s field. One problem at a time. One data set at a time. So let us look at an example of where we are. Face recognition. It’s now in the 90% accuracy range. That is AWESOME. Until you ask yourself this question: “I go to the airport, the entire screening process is AI driven, and in 1 out of 10 cases, the terrorist will not be identified.” How does that sound? Or I jump in my self-driving car, and in 1 out of 10 cases, the car will crash and I will die. Do you like those odds? The current state of the art in AI is simply not reliable enough to handle real-time, mission-critical, life-critical situations. Today, an airplane is 99.999% reliable from an operational standpoint. If you introduce a component that is “only” 90% reliable, you increase the overall system’s failure rate by a factor of roughly ten thousand (see the first sketch after this list). Welcome aboard! Can we get there? Of course we can. Are we there yet? NO, not by a long shot!
  5. “Deep Learning”: what does it mean? In the late 80s I was trying to answer a fairly “simple and complex” question: if the ocean temperature increases by, say, 0.5°C, is there an impact on cloud cover (i.e., the weather)? [Note: hello Harvey, Irma and Maria.] Some folks at UCSD were working on the early incarnations of neural networks and, since I was “method agnostic”, I talked to them. What they did was truly fascinating…except that it would take me days or weeks to get an answer for my VERY LARGE data sets (each image was 4Kx4K, times two for the IR and visible spectrum, times one data dump per day since the satellite was flying over my head daily, times multiple images for a single patch of ocean…and pretty soon you are looking at serious amounts of bytes…in those days!). Research funding being what it was (and it has gotten worse), I toyed around on VERY SMALL data sets and moved on, simply because there was not enough computing power available for me to carry on with neural nets without blowing through my modest supercomputer time budget ($1,000s per hour). That being said, I knew at the time that there was something to it: theoretically interesting but practically unusable.
  6. Neural networks. As it turns out, I spent 10 years of my life in neurosciences and I can tell you that there is nothing “neural” about neural networks. It sounds “cool”, but the billions of neurons we have in our cranium work in a way that is vastly different from “neural networks”. Hype! A cool name.
  7. And when you cannot deliver, then a cool name becomes a liability! It did.
  8. Well, let’s call it by another name: deep learning. Except that you may simply create more hype around a technology that now has a lot of potential…
  9. Learning. Neural networks are simply giant mathematical, statistical models of a particular universe (in my case, it would have been cloud cover over the Pacific Ocean). These models have been around for…decades! It’s statistics. Smart…but not “alien” (see the second sketch after this list).
  10. “You don’t need to program a neural network.” Come on man! Yes YOU DO. It is actually worse than programming. You need to “train” the network to “converge” towards a particular set of solutions when you feed it with specific data. It takes: a – data (lots of it, and with no bias); b – lots of tweaking (a.k.a. programming); and c – the ability to fully describe the universe of solutions you are looking for, which raises significant issues if you (as a person) cannot (in my case: “what are the possible cloud covers over an ocean given specific conditions?”). In other words, the KNOWLEDGE necessary to make the network converge is purely HUMAN knowledge. [I won’t go into the issue of when you “miss” one key attribute of said universe.] The third sketch after this list shows how many of those “training” decisions are human ones.
  11. “The deep learning machine came up with the solution and we don’t understand how.” Nonsense. Hype. BS. A neural network is a mathematical model. For a complex problem, the model is going to be very complex. The fact that you can’t compute the solution in your head or on a piece of paper is the reason you use a computer in the first place. There is no magic to it, because if you could hire, say, a billion people to do the computations, they would end up where the network ends up. That is hype at its worst. It’s just math...and if you don't understand it, then maybe you should look for another job!
  12. A network does not “learn” anything. It is trained to identify “solutions” out of a large set of possible solutions. Understand that a network that has been “trained” to work on cloud cover has ZERO capability to do anything useful on a problem that is merely somewhat similar, without human intervention (called re-training). Let alone a problem that has nothing to do with the original one (say, credit card fraud). Learning is not simply the accumulation of knowledge. It is the ability to apply and derive, ON YOUR OWN, new knowledge in a NEW FIELD, from what you have learned before in a different field. A network trained to recognize cats cannot recognize cars. A network is trained to “converge” on mathematical (mostly probabilistic) solutions to a problem (primarily classification). That’s it. It’s math, statistics, probabilities, Monte Carlo methods! And it’s decades-old math, by the way. Being “old” does not make you “obsolete” (I should know :-) but it’s not Frankenstein voodoo science. [By the way, it will take 100,000s of images for a network to learn to recognize a cat...a human will do the same thing with a few dozen images.]
  13. Input data is key. Garbage in, garbage out. When you feed your network with data, whatever bias may have been built in will influence whatever “answer” the network comes up with (see the fourth sketch after this list). In essence, tweaking the network is easy. What REALLY matters is making sure the data is representative of your search universe! Care to tell me if you can do that, day in and day out, reliably?
  14. As for the “deep”, well, there has never been a shortage of marketing qualifiers for hyping technology. BIG data. CLOUD computing. AUGMENTED reality. In other words, some companies, organizations and people are trying to be “buzzword compliant”. Nothing new under the sun!
  15. AI is taking over the world. News flash: it already has. For many, many years, automated systems (yep, it’s another word for AI) have been part of our lives. It’s not new. Can it be a threat to our way of life? Sure. [Computerized program trading has been around for a long time. It can, and already has, created havoc in the stock markets.] Can AI be a benefit? Absolutely! AI is not a “thing” that we do not have control over. That being said, AI systems can definitely screw up big time. But is that any different from all the human inventions since the day we invented the wheel? (Hello, A-bomb and H-bomb, which many people have no idea how threatening they were, and still are, to our society.) The hype does NOT help! And the people who want to benefit from the hype are just that: profiteers. When we all understand what we are dealing with, we all can make informed decisions. The promoters of the “deep learning” stuff, of “deep dreaming”, of “AI is taking over” are not only unhelpful, they simply treat us like idiots. We know better than that! Care to go back to your news feed of 24 months ago and see how much the word “disruptive” was “shaping the future of the world as we know it”?
  16. The biggest AI problem of them all. OK, we have gigantic amounts of data being investigated by “deep learning” systems. And what is the main problem? ASKING THE MOST MEANINGFUL QUESTION! In other words, if we do not know what question to ask (and only we can actually determine what a “meaningful” question is), then what is the point of the answer we get? So we are pretty much left with ourselves. We have “deep learning” stuff exploring a universe we do not comprehend, spitting out answers that have no real meaning…that sounds awesome, doesn’t it? First things first: LET US LOOK AT THE DATA! And apply our OWN intelligence to defining what we need to know and learn. That’s called HI…as in Human Intelligence!
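
A few of the points above are really just arithmetic, so here are the sketches promised in the list. First, the reliability math from point 4: a minimal sketch in plain Python. The serial-chain assumption (the overall system fails if any one component fails) is mine; the 99.999% and 90% figures come from the text.

```python
# Serial system: every component must work, so overall
# reliability is the product of the component reliabilities.
def serial_reliability(*components: float) -> float:
    total = 1.0
    for r in components:
        total *= r
    return total

aircraft = 0.99999                            # five nines, as in the text
with_ai = serial_reliability(aircraft, 0.90)  # bolt on a 90%-reliable AI stage

print(f"failure rate before: {1 - aircraft:.5%}")  # 0.00100%
print(f"failure rate after:  {1 - with_ai:.5%}")   # ~10.00090%
print(f"degradation factor:  {(1 - with_ai) / (1 - aircraft):,.0f}x")  # ~10,001x
```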
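Second, the claim in points 9 and 11 that a trained network is just arithmetic you could, in principle, do by hand (or with a billion patient helpers). A minimal sketch with numpy; the weights here are made-up numbers standing in for a “trained” model, not anything learned from real data.

```python
import numpy as np

# A "trained" two-layer network is just two matrix multiplies and a
# squashing function. A (very patient) person with a calculator would
# get exactly the same answer.
W1 = np.array([[0.5, -0.2],
               [0.1,  0.8]])        # made-up "learned" weights
b1 = np.array([0.1, -0.1])
W2 = np.array([[1.2], [-0.7]])
b2 = np.array([0.05])

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1 + b1)   # layer 1: linear algebra + nonlinearity
    return hidden @ W2 + b2         # layer 2: more linear algebra

print(forward(np.array([1.0, 2.0])))  # deterministic arithmetic, no magic
```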
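Third, point 10: “training” is a pile of human decisions. A minimal sketch of the smallest possible “network” (a one-layer logistic model) fitted by gradient descent; the data, the labels, the learning rate and the iteration count are all invented here, precisely to show that none of them come from the machine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Human decision 1: the data and its labels (MY knowledge of the universe).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the rule the net must "converge" to

# Human decisions 2-4: model shape, learning rate, number of iterations.
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(500):                      # "training" = iterative curve fitting
    p = 1 / (1 + np.exp(-(X @ w + b)))    # logistic model: pure statistics
    w -= lr * (X.T @ (p - y)) / len(y)    # tweak...
    b -= lr * (p - y).mean()              # ...and tweak again

p = 1 / (1 + np.exp(-(X @ w + b)))        # final model output
print("accuracy on the ONLY universe it knows:", ((p > 0.5) == y).mean())
# Feed it a different universe (point 12) and the output is noise.
```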
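And fourth, point 13’s garbage in, garbage out. A minimal sketch of how a biased collection process poisons the answer before a single line of “network” code even runs; the 50% base rate and the 10% recording rate are invented numbers for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# The true universe: 50% of cases are positive.
population = rng.random(100_000) < 0.5

# A biased collection process: positives only get recorded 10% of the time.
keep = np.where(population, rng.random(100_000) < 0.1, True)
sample = population[keep]

print(f"true positive rate:    {population.mean():.1%}")  # ~50%
print(f"what the data 'says':  {sample.mean():.1%}")      # ~9%
# Any model trained on `sample` inherits this distortion.
```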

Hope this was informative and entertaining, in whatever order. And now watch this, from DARPA :-) and not from the "grumpy old man" you think I am :-)

https://www.youtube.com/watch?v=-O01G3tSYpU&feature=youtu.be

About Philippe. I have had a very fun life so far, and I am not done yet :-). I help high-tech businesses get off the ground and turn around faltering ones. I am the father of a wonderful 26-year-old daughter. I am also a passionate dressage rider.

Collin Li

Replaying the best parts

7y

This reminds me of something I heard a "futurist/technologist" say: the ones closest to the trends always feel disappointed by how slowly the "next big thing" becomes relevant, but in due time it changes the world. Hype cycle, then the heads-down practitioners, and then Amazon is #1 in market cap...

Bob Korzeniowski

Wild Card - draw me for a winning hand | Creative Problem Solver in Many Roles | Manual Software QA | Project Management | Business Analysis | Auditing | Accounting |

7y

AI is an example of the new paradigm technology. https://www.dhirubhai.net/pulse/paradigm-shift-technology-how-affect-your-future-bob/ The article does not talk about the bad philosophy behind it. In addition, when companies cheap out on QA, we get the "racist Microsoft bot" happening.

Wouter Brouwer

Entrepreneur - Fintech

7y

Spot on! Nice article!

Kiril G.

Team Lead Tactical Development

7y

It is somewhat ironic that AI is the 'selling' name of ML to not-so-technical audiences, who subsequently get reasonably disappointed that the (great) progress made does not meet the (unreasonable) expectation of human-like 'I' (in 'AI')...I guess at the present state, ML science is ahead of implementation efforts and capacity (as opposed to space propulsion, for example)
