Neat Learning vs Deep Learning
Gopi Krishna Suvanam
Entrepreneur | Author | AI & Decentralization proponent | Alumnus of IIT-M & IIM-A
The AI world used to have two camps: neats and scruffies. The neats wanted AI solutions to be, well, 'neat'. In other words, the solution should be provable, elegant and clear. Search algos, optimization and statistical inference would all count as neat algos. Scruffies believed whatever works is good. Scruffy algos typically use a mix of several models and an abundance of 'hacking'. In the 21st century this classification of neats vs scruffies is rarely used in the AI world. In a way, with statistical modeling gaining significance in the 1990s, the neats have won the war. Not totally, though.
Important machine learning techniques like Bayesian inference, regression models, NLP and partitioning models (trees) can be considered an extension of neat AI. But alongside these there developed another set of algos, the so-called 'deep learning' algos, which started with the neat approach of back-propagation but soon diverged into scruffy territory with tweaks, hacks, unproven heuristics, etc. In other words: the 'whatever works' approach!
Although the neat-vs-scruffy distinction is rarely drawn any more, a similar divide runs deep through the machine learning world. Neat learning models have been delivering fantastic results, from NLP to business analytics. But deep learning models, driven primarily by their success in image processing and gaming, are getting far more hype. In this ML world too, I believe, the neats will eventually win over the scruffies/deeps.
There are several reasons why neat learning would win over deep learning, some of them being:
- Neat learning algorithms are provable, i.e. one can quantify the accuracy of the model with a certain statistical confidence (see the sketch after this list)
- Neat learning algos are not black boxes, so one can look into them and modify them if something fails
- The amount of data required by neat algos is less than that required by deep algos
- The amount of computational power required is also typically lower
- Neat learning can take a domain-specific hypothesis as a starting point, thus reducing the dimensionality of the problem and further strengthening points 3 and 4
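To make the first point concrete, here is a minimal sketch of what 'provable' looks like in practice: an ordinary least-squares fit whose coefficients come with explicit 95% confidence intervals. This is my own illustration in Python, using numpy and statsmodels on synthetic data; the numbers are assumptions for demonstration only.

```python
# A minimal sketch of point 1: an OLS fit whose coefficient estimates
# come with explicit 95% confidence intervals. Data is synthetic and
# purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=200)  # true slope 2, intercept 1

X = sm.add_constant(x)             # design matrix with an intercept column
model = sm.OLS(y, X).fit()         # closed-form, well-understood estimator

print(model.params)                # point estimates: intercept, slope
print(model.conf_int(alpha=0.05))  # 95% confidence interval per coefficient
```

The point is not the fit itself but that the model hands you calibrated uncertainty for free: you know not only the answer but how much to trust it, which a typical deep net does not tell you.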
Some of the great neat learning techniques I am a fan of are:
- Linear models like regression, logistic regression and PCA
- SVM
- Decision trees (both for classification and regression)
- Markov models
- Bayesian networks
- Filtering techniques like Kalman filters (a minimal example follows this list)
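And as a small taste of the last item, here is a self-contained sketch of a scalar Kalman filter. This is my own illustrative Python example; the signal, noise levels and priors are assumptions, not drawn from any real system.

```python
# A minimal, self-contained 1-D Kalman filter tracking a constant
# signal through noisy measurements. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_value = 5.0
measurements = true_value + rng.normal(scale=2.0, size=50)  # noisy sensor

estimate, error_var = 0.0, 1e6    # diffuse prior: we know almost nothing
process_var, meas_var = 1e-4, 4.0  # assumed process and measurement noise

for z in measurements:
    # Predict: the state is (nearly) constant, so only uncertainty grows.
    error_var += process_var
    # Update: blend prediction and measurement via the Kalman gain.
    gain = error_var / (error_var + meas_var)
    estimate += gain * (z - estimate)
    error_var *= (1 - gain)

print(f"estimate: {estimate:.3f}, posterior variance: {error_var:.4f}")
```

Note how the posterior variance shrinks as evidence accumulates: in true neat fashion, the filter carries an explicit measure of its own uncertainty alongside the estimate.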