Deep Learning: 3 Key Facts
In recent years, interest in deep learning has permeated most industries. From news outlets using predictive content engines to engage readers, to medical technology advising individuals on health practices based on past behaviour, every field is looking to this learning model to leapfrog its innovation. Unfortunately, many of the field's basic insights are missed in popular media.
To give you a few nuggets that go deeper than the water-cooler talk, here are 3 key deep learning (DL) facts:
1. Deep Learning ≠ Artificial Intelligence
Deep learning has dramatically improved the state of the art in artificial intelligence tasks such as object detection, speech recognition, and machine translation, but it is not AI in and of itself. Its deep architecture is what gives it the potential to solve ever more complicated AI tasks, and researchers keep extending it to new domains such as face recognition and language modelling. Using recurrent neural networks to denoise speech signals, stacked autoencoders to discover clustering patterns in gene expression data, and analysing sentiment from multiple modalities simultaneously are just a sample of the applications currently underway.
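For readers curious what one of these models actually looks like, here is a minimal sketch of a stacked autoencoder in the spirit of the gene-expression example above. It assumes PyTorch, and the layer sizes, learning rate, and random stand-in data are purely illustrative, not a reproduction of any published pipeline:

```python
# Minimal sketch of a stacked autoencoder (assumes PyTorch).
# All sizes and the random data are illustrative placeholders.
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    def __init__(self, n_features=2000, code_size=32):
        super().__init__()
        # Encoder: successive layers compress the input to a small code.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, code_size),
        )
        # Decoder: a mirror of the encoder that reconstructs the input.
        self.decoder = nn.Sequential(
            nn.Linear(code_size, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, n_features),
        )

    def forward(self, x):
        code = self.encoder(x)           # low-dimensional representation
        return self.decoder(code), code

model = StackedAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 2000)                # 64 fake samples, 2000 "genes"
for step in range(100):
    recon, code = model(x)
    loss = loss_fn(recon, x)             # reconstruction error drives learning
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
# The learned `code` vectors can then be clustered (e.g. with k-means)
# to look for structure in the data, as in the gene-expression example.
```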
2. It does not resemble the human brain
Although the ambition to build a system that simulates the human brain triggered the initial development of neural networks, today’s success of deep learning owes more to its deep architecture than to any actual resemblance to grey matter. Deep learning began in connectionism and eventually led to mature shallow neural networks. Today the research community, incentivised by the promise of deep neural networks, is chipping away at the limits of those shallow networks, yet the challenges of deep architectures remain full of unknown unknowns, which means the full extent of their impact is impossible to foresee at the moment.
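To make "deep architecture" concrete, here is a small illustrative comparison, again assuming PyTorch, between a shallow one-hidden-layer network and a deep stack of hidden layers. The widths are arbitrary, chosen only so the two models end up a similar overall size:

```python
# Illustrative contrast of shallow vs. deep architecture (assumes PyTorch).
import torch.nn as nn

# Shallow: a single hidden layer between input and output.
shallow = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Deep: same input and output, but many stacked hidden layers.
# Depth lets each layer build on the features of the one below it,
# which is the property credited above for deep learning's success.
deep = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Both models are the same order of magnitude in parameter count,
# so the difference between them is depth, not raw size.
n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(shallow), n_params(deep))
```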
3. The study of Deep Learning started more than 2000 years ago
Around 300 BC, Aristotle introduced Associationism, widely hailed as the start of humanity’s attempt to understand the brain. Associationism is a learning theory that holds the history of an organism’s experience to be the main sculptor of its cognitive architecture. A basic form of associationism might claim that the frequency with which a person has encountered Xs and Ys together in their environment determines the frequency with which thoughts about Xs and thoughts about Ys will arise together in the future. Over two thousand years later, in 1873, Alexander Bain introduced Neural Groupings as the earliest model of neural networks, based on the insight that any experience, for example the sound of a bell striking the ear, creates a memory trace of the bell and ties it to other bell-like experiences from the past. This creates a neural grouping around the bell experience: for every act of memory, every exercise of bodily aptitude, every habit, recollection, and train of ideas, there is a specific grouping, or coordination, of sensations and movements, by virtue of specific growths in the cell junctions.
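As a playful aside, the associationist frequency claim above can be written as a toy co-occurrence counter. This is a modern illustration of the idea, not Bain’s or Aristotle’s own formulation:

```python
# Toy, modern formalization of the associationist claim: the more often
# two ideas co-occur in experience, the stronger the link between them.
from collections import Counter
from itertools import combinations

link_strength = Counter()   # association weight between pairs of ideas

def experience(ideas):
    """Strengthen the link between every pair of co-occurring ideas."""
    for a, b in combinations(sorted(ideas), 2):
        link_strength[(a, b)] += 1

# Ring a bell near a dog a few times, near food only once.
experience({"bell", "dog"})
experience({"bell", "dog"})
experience({"bell", "food"})

print(link_strength[("bell", "dog")])   # 2 -> stronger association
print(link_strength[("bell", "food")])  # 1 -> weaker association
```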
This is just a sliver of the many golden nuggets worth learning about Deep Learning. If you have a key piece of knowledge about the subject that you wish the general public understood, I’d love to hear from you. Ping me with your insights!