What is AI (for the rest of us)?

To understand AI, we must share a common language and a set of precise definitions. This is where confusion arises for lay people and imaging professionals alike: what is AI, and how do I know when I am seeing it?

You will certainly know when a skeletonized 7-foot Terminator robot comes into your Radiology department with a plasma weapon and speaks to you in a monosyllabic Austrian accent. That is not where we are!

AI, in its broadest definition, is the ability of a machine to carry out an intelligent task in the face of "real world variability." The machine must achieve an objective using smart algorithms, but it is not required to "learn." This is a subtle but important point: some algorithms and AI decision making can be "hard coded" into the program. Much of the AI we see in imaging is this more basic, hard-coded form, yet it may be considered AI nonetheless.
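The distinction between hard coding and learning can be sketched in a few lines. Below is a minimal, purely hypothetical example of "hard coded AI": a triage rule that flags an imaging study using fixed thresholds chosen by a programmer, not learned from data. The function name and thresholds are invented for illustration and do not come from any real system.

```python
# A sketch of "hard coded AI": the decision logic is written directly
# by a programmer as fixed rules and thresholds. The function name and
# cutoff values are hypothetical, chosen only to illustrate the idea.

def flag_for_priority_read(patient_age: int, finding_size_mm: float) -> bool:
    """Return True if a study should be flagged for urgent review.

    The thresholds below are hard coded: the program never adjusts
    them, no matter how many cases it processes.
    """
    if finding_size_mm >= 30.0:          # any large finding is urgent
        return True
    if patient_age >= 65 and finding_size_mm >= 10.0:
        return True                      # lower threshold for older patients
    return False

print(flag_for_priority_read(70, 12.0))  # True
print(flag_for_priority_read(40, 12.0))  # False
```

However many cases this function sees, its behavior never changes; that fixed quality is what separates hard-coded AI from Machine Learning.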

As we narrow our definition, we realize that machines have a hard time transferring solutions across varying scenarios and are best at solving individual problems. That is the pivotal difference between "hard coded AI" and Machine Learning.

Machine Learning was defined by Arthur Samuel in 1959 as "the ability for a machine to learn without being explicitly programmed." To quote Calum McClelland, "instead of hard coding software routines with specific instructions to accomplish a particular task, machine learning is a way of 'training' an algorithm so that it can learn how."

"Training" involves feeding tremendous amounts of data to the algorithm and allowing the algorithm to adjust itself and improve. The broader goal was articulated in 1956 by John McCarthy: "AI involves machines that can perform tasks that are characteristic of human intelligence."
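The idea of "training" can be sketched with a toy learner: the algorithm adjusts its own parameter to reduce error on example data, rather than having that parameter written in by a programmer. The data, learning rate, and loop counts below are invented for illustration; real systems use vastly larger datasets and models.

```python
# A toy illustration of "training": the algorithm adjusts its own
# parameter (w) to shrink its error on example data. Nobody ever
# types in the answer (w = 2); the machine finds it by iteration.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

w = 0.0              # the model's single adjustable parameter
learning_rate = 0.05

for epoch in range(200):
    for x, y in examples:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x   # nudge w to reduce the error

print(round(w, 3))   # converges to 2.0 -- learned, not programmed
```

The same loop, scaled up to millions of parameters and millions of images, is what "feeding tremendous amounts of data to the algorithm" looks like in practice.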

Now we are approaching a methodology for designing machines that “think”. We inherently understand thinking algorithms because we are “frontal lobe beings.” This is how we teach our children. So how do we get from Machine Learning to Deep Learning?

Deep Learning is based on the structure and function of the frontal lobe of the brain, with its interconnection of millions of neurons. Special computer chips, such as the graphics processors created for computer gaming (NVIDIA), are optimized for this type of processing. Computerized Artificial Neural Networks mimic the biological structure of the brain, creating a layered network of discrete connections between "neurons." Each layer learns to identify a specific feature, such as the curves, angles, edges, contrast, and adjacent structures within an image. Depth is a function of the integration of these multiple layers to create a 3-dimensional neural representation of reality. This summation of layers creates a virtual observed reality based on learned features within the neural network, eventually linked to potential physical actions based on the learned pattern.

An example: a cat approaching an autonomous car. The cat must be recognized, distinguished from a stationary object, and avoided. The task of teaching the machine that this is a "cat" is analogous to teaching a small child to recognize a cat: hundreds or thousands of directed observations, with each image fractionated into small components. The layered Convolutional Neural Network filters segmented images of hundreds of cats and passes them forward, much as a child observes a cat from an apartment window, a back yard, or partially through a crack in a door. The parent asks the child whether that is a cat, a dog, or a giraffe, and the correct answer is reinforced every time. Eventually, the child learns to identify the cat and say "cat" 100% of the time, regardless of the situation. Your 5-year-old just became a deep learning machine. Now imagine training our machine to recognize a tiger, a dog, or a monkey independently. This is more complex, but it is being done.
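The layer-by-layer feature detection described above can be sketched with a single convolution filter, the basic building block of these networks. The tiny image and hand-written filter below are illustrative only; in a real Convolutional Neural Network the filter values are learned from training images, and thousands of filters are stacked in layers.

```python
# One convolutional filter: the elementary operation behind the layered
# edge/curve detection described above. The 5x5 "image" and the 3x3
# kernel values are hand-made for illustration; a real CNN learns its
# kernels from data instead of having them written in.

image = [  # a tiny grayscale image: dark left half, bright right half
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]

vertical_edge = [  # responds strongly where brightness changes left-to-right
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, kernel):
    """Slide the 3x3 kernel over the image, producing a feature map."""
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            total = sum(kernel[i][j] * img[r + i][c + j]
                        for i in range(3) for j in range(3))
            row.append(total)
        out.append(row)
    return out

feature_map = convolve(image, vertical_edge)
print(feature_map[0])  # [3, 3, 0] -- peaks where the dark/light edge sits
```

A deep network stacks many such filters: early layers find edges like this one, and later layers combine those responses into whiskers, ears, and eventually "cat."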

Furthermore, Convolutional Neural Networks have been programmed to recognize not only content (a house, a tree, the ocean) but also stylistic interpretations of art, such as Van Gogh's Starry Night, with secondary painting algorithms capable of distinguishing the real painting from a poor forgery. Try teaching your 5-year-old that trick.

Now for the interesting SciFi stuff. AI can be globally divided into two categories: Narrow and General.

Narrow AI incorporates some aspects of human reasoning and adaptive intelligence without the capacity to "put it all together." Such a machine is capable of tremendous adaptive, focused tasks, but not much more. It can recognize faces, cats, and cars when presented with millions of pictures, with incredible speed and accuracy. These techniques are ideal for image interpretation, fingerprint recognition, facial recognition, and hundreds of other applications.

General AI goes further, incorporating the characteristics of human intelligence: a true ability to combine Artificial Neural Networks and Deep Learning into an integrated AI program capable of continued learning and evolution. Can it evolve into a true sense of self?

Some have further divided AI into a third conceptual category, Artificial Super Intelligence (ASI): AI that doesn't merely mimic human intelligence and behavior but surpasses it. Back to Ex Machina.

I apologize for any oversimplification or inherent errors and welcome comments. I developed this explanation for my hospital administrators and staff, and they seemed to appreciate a simple treatment of a conceptually difficult topic.
