Neat Learning vs Deep Learning

The AI world used to have two camps: the neats and the scruffies. The neats wanted AI solutions to be, well, 'neat'. In other words, the solution should be provable, elegant and clear. Search algorithms, optimization and statistical inference would all be part of the neat camp. The scruffies believed that whatever works is good. Scruffy algos typically use a mix of several models and an abundance of 'hacking'. In the 21st century this classification of neats vs scruffies is rarely used in the AI world. In a way, with statistical modeling gaining significance in the 1990s, the neats have won the war. Not totally, though.

Important machine learning techniques like Bayesian inference, regression models, NLP and partitioning models (trees) can be considered extensions of neat AI. But alongside these there developed another set of algos, the so-called 'deep learning' algos, which started with the neat approach of back-propagation but soon diverged into scruffy territory with tweaks, hacks, unproven heuristics and so on. In other words: the whatever-works approach!

Although the neat-vs-scruffy distinction is rarely drawn any more, a very similar divide runs through the machine learning world. Neat learning models have been delivering fantastic results, from NLP to business analytics. But, driven primarily by successes in image processing and gaming, deep learning models are gaining much more hype. In this ML world too, I believe, the neats will eventually win over the scruffies/deeps.

There are several reasons why neat learning should win over deep learning, some of them being:

  1. Neat learning algorithms are provable, i.e. one can predict the accuracy of the model with a certain statistical confidence.
  2. Neat learning algos are not black boxes, so someone can look inside them and modify them if something fails.
  3. The amount of data required by neat algos is typically less than that required by deep algos.
  4. The amount of computational power required is also typically less.
  5. Neat learning can take a domain-specific hypothesis as its starting point, reducing the dimensionality of the problem and leading to further improvements on points 3 and 4.
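
Point 1 can be made concrete with a toy example: fitting a simple linear regression and reading off a 95% confidence interval for the fitted slope. This is a minimal sketch using NumPy and SciPy; the synthetic data, seed and true slope of 2.0 are illustrative assumptions of mine, not from the article.

```python
import numpy as np
from scipy import stats

# Synthetic data: y = 2x + 1 + Gaussian noise (illustrative assumption)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, size=x.size)

# Ordinary least squares fit; linregress returns the slope, intercept
# and the standard error of the slope estimate
res = stats.linregress(x, y)

# 95% confidence interval for the slope from the t-distribution
# with n - 2 degrees of freedom
t_crit = stats.t.ppf(0.975, df=x.size - 2)
ci = (res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)
print(f"slope = {res.slope:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

This kind of statement, "the slope lies in this interval with 95% confidence", is exactly what a black-box deep model cannot give you out of the box.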

Some of the great neat learning techniques I am a fan of are:

  1. Linear models like regression, logistic regression and PCA
  2. Support vector machines (SVMs)
  3. Decision trees (both for classification and regression)
  4. Markov models
  5. Bayesian networks
  6. Filtering techniques like Kalman filters
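
As a small illustration of point 2 above, a decision tree (item 3 in the list) can be printed and audited rule by rule. This is a minimal sketch using scikit-learn on the classic iris dataset; the library, dataset and depth limit are my assumptions, not the author's.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Load the classic iris dataset (150 flowers, 4 features, 3 classes)
data = load_iris()
X, y = data.data, data.target

# Fit a deliberately shallow tree so every decision rule stays readable
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Unlike a deep network, the fitted model is a handful of if/else rules
# that a human can inspect and, if something fails, modify
rules = export_text(clf, feature_names=list(data.feature_names))
print(rules)
```

The printed rules are the entire model; an expert can check whether each split makes domain sense, which is what "not a black box" means in practice.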
