Deep Learning and the top 10 Predictions for 2019

Deep Learning is an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. Deep learning is a subset of machine learning in Artificial Intelligence (AI) that has networks capable of learning unsupervised from data that is unstructured or unlabeled.

Deep learning is also known as Deep Neural Learning or a Deep Neural Network.


BREAKING DOWN 'Deep Learning'

Deep Learning has evolved hand-in-hand with the digital era, which has brought about an explosion of data in all forms and from every region of the world. This data, known simply as Big Data, is drawn from sources like social media, internet search engines, e-commerce platforms, online cinemas and more. This enormous amount of data is readily accessible and can be shared through fintech applications like cloud computing. However, the data, which is normally unstructured, is so vast that it could take decades for humans to comprehend it and extract relevant information. Companies realize the incredible potential that can result from unraveling this wealth of information and are increasingly adopting Artificial Intelligence (AI) systems for automated support.

One of the most common AI techniques used for processing Big Data is Machine Learning, a self-adaptive algorithm that produces increasingly better analyses and patterns with experience or with newly added data. If a digital payments company wanted to detect the occurrence of, or potential for, fraud in its system, it could employ machine learning tools for this purpose. The computational algorithm built into a computer model would process all transactions happening on the digital platform, find patterns in the data set and flag any anomaly detected by the pattern.
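As a rough illustration of this kind of pattern-based anomaly flagging, here is a minimal sketch using scikit-learn's IsolationForest; the transaction amounts, the contamination rate and the single-feature setup are assumptions made purely for illustration, not part of any real fraud system.

```python
# Minimal sketch of pattern-based anomaly detection, assuming scikit-learn;
# the transaction data here is made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Ordinary transactions cluster around small amounts; a few outliers are large.
normal = rng.normal(loc=50.0, scale=15.0, size=(1000, 1))
suspicious = rng.normal(loc=900.0, scale=50.0, size=(5, 1))
amounts = np.vstack([normal, suspicious])

# Fit an isolation forest and flag anomalous transactions (-1 = anomaly).
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(amounts)

print("flagged transactions:", amounts[labels == -1].ravel())
```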

Deep learning, a subset of machine learning, utilizes a hierarchical level of artificial neural networks to carry out the process of machine learning. The artificial neural networks are built like the human brain, with neuron nodes connected together like a web. While traditional programs build analysis with data in a linear way, the hierarchical function of deep learning systems enables machines to process data with a nonlinear approach. A traditional approach to detecting fraud or money laundering might rely on the transaction amount alone, while a deep learning nonlinear technique would include time, geographic location, IP address, type of retailer and any other feature that is likely to point to fraudulent activity. The first layer of the neural network processes a raw data input like the amount of the transaction and passes it on to the next layer as output. The second layer processes the previous layer's information by including additional information like the user's IP address and passes on its result. The next layer takes the second layer's information and includes raw data like geographic location, making the machine's pattern even better. This continues across all levels of the neural network.
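To make the layered picture concrete, here is a minimal sketch of a small feed-forward classifier in PyTorch (a framework choice of this sketch, not something the article prescribes) whose input combines several transaction features; the feature names, layer sizes and the single example are all illustrative assumptions.

```python
# Minimal sketch of a multi-feature fraud classifier, assuming PyTorch;
# the features (amount, hour, location code, IP risk score) are illustrative.
import torch
import torch.nn as nn

class FraudNet(nn.Module):
    def __init__(self, n_features: int = 4):
        super().__init__()
        # Each hidden layer combines the previous layer's output nonlinearly,
        # mirroring the "layer passes its result to the next layer" description.
        self.net = nn.Sequential(
            nn.Linear(n_features, 16),
            nn.ReLU(),
            nn.Linear(16, 8),
            nn.ReLU(),
            nn.Linear(8, 1),
            nn.Sigmoid(),  # probability that the transaction is fraudulent
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One made-up transaction: [amount, hour of day, location code, IP risk score]
x = torch.tensor([[250.0, 3.0, 12.0, 0.8]])
print(FraudNet()(x))  # an untrained probability, near 0.5
```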

Now, here are the top 10 predictions for 2019.

1. Slow down in DL Hardware Acceleration

The low-hanging fruit of DL hardware has been picked. Systolic arrays gave the world massive speedups in 2017. We cannot expect major increases in computational power in 2019.

However, we shall see newer architectures from GraphCore and Gyrfalcon that circumvent the power cost of memory transfers and support sparse operations, though changes in DL formulations will be needed to accommodate these new architectures. New hardware research also needs to be performed that is inspired by the nano-intentionality found in biology.
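As a hedged illustration of what "supporting sparse operations" can buy, the snippet below contrasts a dense matrix product with the same product in SciPy's compressed sparse row format; the 99%-sparse weight matrix is an arbitrary assumption standing in for a heavily pruned DL layer.

```python
# Minimal sketch contrasting dense and sparse matrix products, assuming SciPy;
# the ~99%-sparse weight matrix is an arbitrary illustration.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# A weight matrix in which roughly 99% of entries are zero.
dense_w = rng.random((1000, 1000)) * (rng.random((1000, 1000)) < 0.01)
sparse_w = sparse.csr_matrix(dense_w)
x = rng.random((1000, 1))

dense_y = dense_w @ x    # touches every entry, zero or not
sparse_y = sparse_w @ x  # touches only the stored nonzeros

print(np.allclose(dense_y, sparse_y), sparse_w.nnz, "nonzeros stored")
```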

2. Unsupervised Learning has been solved, but it’s not what was expected

The prevailing mindset about Unsupervised Learning is wrong. LeCun's layered-cake analogy is misleading; the relationship between the different kinds of learning should instead place Unsupervised Learning at the bottom, as the least valued and the least difficult.


Why is UL the least valued and the least difficult? Because there is no goal: you can cook up any clustering, and it may or may not work. Ultimately, it boils down to how the higher levels perform when built on the UL embeddings. UL embeddings are essentially data containing a rich set of priors; how those priors are exploited depends on downstream processes that do have objectives. What ELMo and BERT have shown is that we can train UL models that predict (or generate) their own data, and that this serves as a good base for downstream tasks. UL is essentially Supervised Learning with the label already present in the data. In short, UL has been solved, but not in the way most practitioners were expecting. If a network can make good predictions or generate good facsimiles of the original data, then that is all there is to UL.
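A minimal sketch of the "label already present in the data" idea: raw text is turned into (context, target) training pairs for next-word prediction, the kind of self-supervised objective behind ELMo/BERT-style pretraining. The toy corpus and window size below are assumptions for illustration only.

```python
# Minimal sketch of self-supervised labeling: the targets come from the data
# itself (next-word prediction), with no human annotation required.
corpus = "deep learning finds structure in unlabeled data".split()

window = 2  # assumed context size, for illustration only
pairs = [
    (corpus[i:i + window], corpus[i + window])  # (context words, next word)
    for i in range(len(corpus) - window)
]

for context, target in pairs:
    print(context, "->", target)
# e.g. ['deep', 'learning'] -> finds
```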

Everyone thought solving UL would be a major advance because one could use data free of human labeling. Unfortunately, it was a red herring: it has been solved because something that comes for free is very easy to extract. My prediction for UL in 2019 is that researchers will accept this new viewpoint and focus instead on more valuable research (e.g. continual or interventional learning).


3. Meta-Learning will Stagnate

Our understanding of meta-learning (i.e. learning to learn) seems to me as nebulous as our previous understanding of Unsupervised Learning. Meta-learning as it is practiced today is more like transfer learning (i.e. interpolative learning). A more advanced kind of meta-learning would be one that can build and improve its own models; such meta-learning should be able to construct extrapolative and inventive models, and we are nowhere close to achieving that capability. We do have extrapolative learning in the form of self-play, but it is not a meta-level capability (i.e. extrapolative learning to learn). This is a major obstacle to progress, without any clear indication of how it will be overcome.
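To ground the claim that today's "meta-learning" often reduces to transfer learning, here is a minimal transfer-learning sketch, assuming PyTorch: a previously trained backbone is frozen and only a small task-specific head is trained. The backbone here is a random stand-in, not a real pretrained model.

```python
# Minimal transfer-learning sketch, assuming PyTorch; the "backbone" is a
# stand-in for a real pretrained feature extractor.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
for p in backbone.parameters():
    p.requires_grad = False  # freeze the transferred representation

head = nn.Linear(64, 3)      # new task-specific head, trained from scratch
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x, y = torch.randn(8, 32), torch.randint(0, 3, (8,))
logits = head(backbone(x))
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
opt.step()
print(loss.item())
```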

4. Use of Generative Computational Modeling in Science

We are going to develop better control of our generative models. Three classes of generative models have been shown to be effective: Variational Autoencoders (VAEs), GANs and flow-based models. I expect the majority of progress to come in GANs and flow-based models, with minimal progress in VAEs. I also expect to see applications of this in scientific exploration involving complex adaptive systems (e.g. weather, fluid simulation, chemistry and biology).
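For concreteness, here is a minimal sketch of one of the three families named above, a variational autoencoder, assuming PyTorch; the dimensions and the random "observations" are placeholders, not a model of any real scientific system.

```python
# Minimal VAE sketch, assuming PyTorch; dimensions and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=8, z_dim=2):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
x = torch.randn(64, 8)  # toy "observations"
x_hat, mu, logvar = vae(x)
loss = elbo_loss(x, x_hat, mu, logvar)
loss.backward()
opt.step()
print(loss.item())
```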

Progress in this area will have a profound influence on the progress of science.

5. Use of Hybrid Models in Prediction

Deep Learning has continued to show its strength in making predictions about high-dimensional systems. DL, however, is still unable to formulate its own abstract models, and this will remain a fundamental obstacle to explainability and extrapolative prediction. To compensate for these limitations, we shall see hybrid dual-process solutions that incorporate existing models in combination with model-free learning.

I expect more work in model-based RL rather than model-free RL, and I suspect the inefficiency of model-free RL can be mitigated using hand-crafted models. I expect progress in Relational Graph Networks, with impressive results when these graphs are biased by prior hand-crafted models. I also expect advances in prediction capabilities from fusing existing symbolic algorithms with DL inference.
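One hedged way to picture the hand-crafted-plus-learned hybrid: a known, hand-written dynamics rule supplies a first prediction and a small network learns only the residual error. The constant-velocity "known model" and the drag term below are toy assumptions chosen just to show the pattern.

```python
# Minimal hybrid-prediction sketch, assuming PyTorch; the "known model" is a
# toy constant-velocity rule and the network learns only the residual.
import torch
import torch.nn as nn

def known_model(state: torch.Tensor) -> torch.Tensor:
    # Hand-crafted prior: position advances by velocity, velocity unchanged.
    pos, vel = state[..., 0:1], state[..., 1:2]
    return torch.cat([pos + vel, vel], dim=-1)

residual_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))
opt = torch.optim.Adam(residual_net.parameters(), lr=1e-2)

state = torch.randn(128, 2)
# Pretend the true system also has a drag effect the known model ignores.
true_next = known_model(state) - 0.1 * state[..., 1:2].repeat(1, 2)

pred_next = known_model(state) + residual_net(state)  # prior + learned correction
loss = nn.functional.mse_loss(pred_next, true_next)
loss.backward()
opt.step()
print(loss.item())
```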

Industrialization of DL will come not because we've made progress in transfer learning (as I incorrectly predicted in 2017) but rather through the fusion of human-crafted models and DL-trained models.


6. More Methods for Imitation Learning

Imitation does not require extrapolative reasoning, and therefore we shall continue to see considerable progress in imitating all kinds of existing systems. To imitate behavior, a machine only needs to build a descriptive model that mirrors that behavior. This is an easier problem than generative modeling, where unknown generative constraints have to be discovered. Generative models work so well because all they do is imitate data, rather than infer the underlying causal model that generated it.
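A minimal sketch of what "a descriptive model that mirrors the behavior" looks like in code is behavioral cloning: ordinary supervised learning on (state, action) pairs recorded from an existing policy. The "expert" below is a made-up rule, used only to generate demonstration data.

```python
# Minimal behavioral-cloning sketch, assuming PyTorch; the "expert" is a
# made-up rule (action = sign of the first state component).
import torch
import torch.nn as nn

states = torch.randn(256, 4)
expert_actions = (states[:, 0] > 0).long()  # 2 discrete actions, for illustration

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for _ in range(200):  # plain supervised learning on the demonstrations
    opt.zero_grad()
    loss = nn.functional.cross_entropy(policy(states), expert_actions)
    loss.backward()
    opt.step()

print("final imitation loss:", loss.item())
```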

7. More Integration of DL for Design Exploration

We shall see a lot of research in generative models migrating into existing design tools. This will occur first in visual domains and move progressively towards other modalities.

In fact, we might even consider the progress made by AlphaGo and AlphaZero as design exploration. Competitive Go and Chess players have begun to study the explorative strategies introduced by DeepMind's game-playing AI to develop new strategies and tactics that were previously unexplored.

The brute-force capability and scalability available to DL methods will make them brainstorming machines that improve on the designs done by humans. Many DL methods are now being integrated into products from Adobe and AutoDesk. Style2Paints is an excellent example of DL methods integrated into a standard desktop application.

DL will continue to be introduced as components of human workflows. DL networks reduce the cognitive load a person needs to complete tasks in a workflow, and they allow the creation of tools that are more adept at handling the fuzzier and messier details of cognition. These uses fall under the need to reduce information overload, improve recall, extract meaning and speed up decision making.

8. Decline of End-to-end training, more emphasis on Developmental Learning

End-to-end training will show diminishing returns. We will see networks trained in different environments to learn specialized skills, and new methods for stitching these skills together as building blocks for more complex skills. I expect advances in Curriculum Training in 2019, along with more research inspired by human infant development. Training networks to perform complex tasks will involve intricate reward shaping, and we therefore need improved methods for tackling this problem.
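A minimal sketch of the curriculum idea: the same learner is trained on progressively harder versions of a task. The task (recovering a noisy linear map) and the difficulty schedule below are arbitrary assumptions for illustration.

```python
# Minimal curriculum-training sketch, assuming PyTorch; the task and the
# noise schedule are arbitrary illustrations.
import torch
import torch.nn as nn

net = nn.Linear(8, 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
true_w = torch.randn(8, 1)

# Curriculum: start with easy (low-noise) examples, end with hard (high-noise) ones.
for noise_level in [0.0, 0.1, 0.5, 1.0]:
    for _ in range(100):
        x = torch.randn(32, 8)
        y = x @ true_w + noise_level * torch.randn(32, 1)
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()
    print(f"noise {noise_level}: loss {loss.item():.3f}")
```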

9. Richer Embeddings for Natural Language Processing

NLP advanced in 2018 primarily due to advances in Unsupervised Learning approaches that create word embeddings, a continuation of the Word2Vec and GloVe line of work. The 2018 advances in NLP can be attributed to more advanced neural embeddings (ELMo, BERT), which have, surprisingly, improved many downstream NLP tasks across the board simply by substituting these richer embeddings for older ones. Work on Relational Graph Networks can further enhance DL's NLP capabilities.
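A minimal sketch of "substituting richer embeddings": the task model only consumes a vector per token, so swapping a toy lookup table for richer contextual vectors leaves the rest of the pipeline unchanged. The tiny vocabulary and random vectors below are made up and merely stand in for real Word2Vec/GloVe/BERT vectors.

```python
# Minimal embedding-substitution sketch, assuming NumPy; the vocabulary and
# random vectors stand in for real pretrained embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"deep": 0, "learning": 1, "rocks": 2}

# Stand-in embedding table; this is where richer vectors would be substituted.
embeddings = rng.normal(size=(len(vocab), 50))

def sentence_vector(tokens):
    # The task model only consumes this fixed-size vector, so upgrading the
    # embedding table upgrades every task that sits on top of it.
    return embeddings[[vocab[t] for t in tokens]].mean(axis=0)

print(sentence_vector(["deep", "learning"]).shape)  # (50,)
```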

The Transformer network has also proven extremely valuable in NLP, and I expect its continued adoption in other areas. I suspect the dominance of ConvNets will be challenged by Transformer networks. My intuition is that attention is a more universal mechanism for enforcing invariance or covariance than the fixed mechanisms available to ConvNets.
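To make the "attention as a more universal mechanism" intuition concrete, here is a minimal scaled dot-product attention sketch in NumPy, the core operation of the Transformer; the shapes are illustrative only.

```python
# Minimal scaled dot-product attention sketch, assuming NumPy; shapes are illustrative.
import numpy as np

def attention(q, k, v):
    # Each query mixes the values according to content-based weights,
    # rather than the fixed local neighborhood a convolution would use.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(attention(q, k, v).shape)  # (4, 8)
```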

10. Adoption of Cybernetics and System Thinking approaches

The focus will shift toward infrastructure and intelligence augmentation. This requires going beyond the existing machine learning mindset that many researchers have grown up with. Michael Jordan, in his essay “Artificial Intelligence — The Revolution Hasn’t Happened Yet”, remarks that Norbert Wiener’s Cybernetics has “come to dominate the current era”. Cybernetics and Systems Thinking will help us develop more holistic approaches to designing AI systems. After all, successful AI deployments will ultimately be tied to how well they align with the needs of their human users.

Many of the more novel approaches to DL can be traced back to older ideas in Cybernetics. There will be a growing understanding that autonomous AI requires the inclusion of a subjective perspective in its models of the world. Predictive coding, inside-out architecture, embodied learning, just-in-time inference, intrinsic motivation, curiosity, self models, and actionable representations are all related from this perspective.

Wessim Allegue

Sources: https://medium.com/intuitionmachine/10-predictions-for-deep-learning-in-2019-25ebbda91eb, deep-learning.asp, https://towardsdatascience.com/the-sparse-future-of-deep-learning-bce05e8e094a



