AI Research News Update: Issue 1 (Nov 15-21, 2021)

New Research On Unsupervised Deep Learning Shows That The Brain Disentangles Faces Into Semantically Meaningful Factors, Like Age, At The Single Neuron Level

The ventral visual stream is widely known for supporting the perception of faces and objects. Over decades, extracellular single-neuron recordings have defined canonical coding principles at various stages of the processing hierarchy, such as the sensitivity of early visual neurons to oriented outlines and of more anterior ventral stream neurons to complex objects and faces. A sub-network of the inferotemporal cortex dedicated to face processing has received particular attention. Within such face patches, faces appear to be encoded in low-dimensional neural codes, with each neuron encoding an orthogonal axis of variation in face space.

How such representations might emerge from the statistics of visual input is an essential but unresolved question. The active appearance model (AAM), the most successful computational model of face processing, is a largely handcrafted framework, so it cannot answer whether there is a general learning principle that matches AAM in explanatory power while having the potential to generalize beyond faces.

Deep neural networks have recently become prominent computational models of the monkey ventral stream. Unlike AAM, these models are not limited to the domain of faces, and their tuning distributions emerge through data-driven learning. Modern deep networks are trained on multiway object recognition tasks with high-density teaching signals, forming high-dimensional representations that closely match those in biological systems.
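For readers who want a concrete picture of what unsupervised disentangling means in code, here is a minimal beta-VAE-style sketch on random stand-in images. It is a generic illustration of the idea (a small encoder compresses each image to a few latent units, and an up-weighted KL term pressures those units toward independent factors such as age or pose), not the study's actual model or data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic beta-VAE sketch on flattened "face" images (random stand-ins here).
D_IN, D_LATENT, BETA = 64 * 64, 10, 4.0
enc = nn.Sequential(nn.Linear(D_IN, 256), nn.ReLU(), nn.Linear(256, 2 * D_LATENT))
dec = nn.Sequential(nn.Linear(D_LATENT, 256), nn.ReLU(), nn.Linear(256, D_IN))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def loss_fn(x):
    mu, logvar = enc(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterisation
    recon = dec(z)
    recon_loss = F.mse_loss(recon, x, reduction="sum") / x.shape[0]
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon_loss + BETA * kl                                  # beta > 1 encourages disentangling

for _ in range(100):
    x = torch.rand(32, D_IN)          # stand-in for a batch of face images
    loss = loss_fn(x)
    opt.zero_grad(); loss.backward(); opt.step()
```

After training on real face images, one would inspect how individual latent units vary as a single semantic factor (such as age) changes, which is the kind of single-unit analysis the study compares against neural recordings.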

Quick Read: https://www.marktechpost.com/2021/11/21/a-new-research-on-unsupervised-deep-learning-shows-that-the-brain-disentangles-faces-into-semantically-meaningful-factors-like-age-at-the-single-neuron-level/

Uber Research Introduces ‘CRISP’: A Tool To Extract Critical Paths From Numerous Jaeger Traces

Microservices architecture consists of a collection of small, self-contained services. Each service should implement a particular business capability and be self-contained within a bounded context. A bounded context is a natural division inside a firm that provides an explicit boundary within which a domain model can exist.

Uber’s backend is the epitome of such microservice structures. Uber has numerous microservices interacting with each other through remote procedure calls (RPC). In an RPC, a computer program causes a procedure (subroutine) to execute in a different address space without the programmer explicitly specifying the details of the remote interaction.

However, the life of a single request is complex: because these interactions are nested and asynchronous, one top-level call may invoke numerous downstream services. Determining which underlying service contributes most to the end-to-end latency of a top-level request is therefore extremely difficult. Other factors, such as time-to-live and error rates, also fall within the scope of interest.
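To make the critical-path idea concrete, here is a minimal sketch of extracting the latency-critical chain from a toy trace. The Span structure and the greedy backward walk are simplified illustrations of the technique, not CRISP's implementation or the Jaeger data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Span:
    """Hypothetical, simplified span; real Jaeger spans carry far more metadata."""
    name: str
    start: float                      # seconds since the trace began
    end: float
    children: List["Span"] = field(default_factory=list)

def critical_path(span: Span) -> List[str]:
    """Return the chain of spans that determines this span's end-to-end latency.

    Walk backwards from the span's end time, repeatedly picking the child that
    finishes latest before the current cursor; gaps between children are
    attributed to the parent span itself.
    """
    path = [span.name]
    cursor = span.end
    for child in sorted(span.children, key=lambda s: s.end, reverse=True):
        if child.end <= cursor:
            path.extend(critical_path(child))
            cursor = child.start
    return path

# Toy trace: a request fans out to two downstream services.
root = Span("api-gateway", 0.0, 1.0, [
    Span("pricing-svc", 0.05, 0.30),
    Span("routing-svc", 0.10, 0.90, [Span("maps-svc", 0.15, 0.85)]),
])
print(critical_path(root))  # ['api-gateway', 'routing-svc', 'maps-svc']
```

In the toy trace, pricing-svc is entirely overlapped by routing-svc, so only the gateway, routing, and maps chain lands on the critical path; speeding up pricing-svc would not reduce end-to-end latency.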

Quick Read: https://www.marktechpost.com/2021/11/21/uber-research-introduces-crisp-a-tool-to-extract-critical-paths-from-numerous-jaeger-traces/

Google AI Introduces ‘MetNet-2’: A Probabilistic Weather Model Based On Deep Learning

The application of science and technology to forecast atmospheric conditions for a specific location and time is known as weather forecasting. People have tried to guess the weather informally for centuries and methodically since the nineteenth century. Today, weather is forecast with traditional physics-based techniques powered by the world’s largest supercomputers. These methods are constrained by high computational demands, however, and are sensitive to approximations of the physical laws on which they are based.

Deep learning offers a new approach to computing forecasts. It has already been used to tackle a variety of crucial problems, from cancer prevention to improving accessibility, so applying deep learning models to weather prediction could help people on a daily basis. Deep learning models learn to predict weather patterns directly from observed data rather than applying explicit physical laws, and they can compute predictions faster than physics-based techniques. These methods have the potential to boost the frequency, scope, and accuracy of forecasts.

Deep learning algorithms have shown great promise in nowcasting, that is, predicting weather up to 2-6 hours ahead. Previous research has concentrated on applying neural network models directly to weather data, extending neural forecasts from 0 to 8 hours with the MetNet architecture, generating radar continuations up to 90 minutes ahead, and interpreting the weather information these networks learn. Deep learning also has the potential to improve longer-term forecasts.
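As a rough illustration of forecasting "directly from observable data," the toy sketch below trains a small convolutional network to map a few past precipitation frames to a future frame. It uses synthetic tensors and bears no resemblance to MetNet-2's actual architecture, inputs, or training setup.

```python
import torch
import torch.nn as nn

# Toy setup: predict one future "radar" frame from the last 4 frames.
T_IN, H, W = 4, 64, 64
net = nn.Sequential(
    nn.Conv2d(T_IN, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    past = torch.rand(8, T_IN, H, W)     # stand-in for observed radar history
    future = torch.rand(8, 1, H, W)      # stand-in for the observed target frame
    loss = nn.functional.mse_loss(net(past), future)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the sketch is only the training signal: the target is an observed future frame rather than the output of a physics simulation.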

Quick Read: https://www.marktechpost.com/2021/11/20/google-ai-introduces-metnet-2-a-probabilistic-weather-model-based-on-deep-learning/

Imperial College London Researchers Propose A Novel Randomly Connected Neural Network For Self-Supervised Monocular Depth Estimation In Computer Vision

Depth estimation is one of the fundamental problems in computer vision, and it’s essential for a wide range of applications, such as robotic vision or surgical navigation.

Various deep learning-based approaches have been developed in recent times to provide end-to-end solutions for depth and disparity estimation. One such method is self-supervised monocular depth estimation, in which scene depth is determined from a single image. For disparity estimation, the bulk of these models use a U-Net-based design.

Although humans perceive relative depth very easily, the same task has proven quite challenging for machines, partly due to the absence of an optimal architecture. To tackle this issue, increasingly complex architectures are chosen to generate a high-resolution photometric output.
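The core of self-supervised monocular depth training is a photometric reconstruction loss: a source frame is warped into the target view using the predicted depth and relative camera pose, and the model is penalized for the difference between the warped and the real target frame. The sketch below illustrates that generic supervision signal in PyTorch with a plain L1 error (no SSIM or smoothness terms); it is not the randomly connected architecture proposed in the paper.

```python
import torch
import torch.nn.functional as F

def inverse_warp(src_img, depth, pose, K):
    """Warp a source image into the target view.
    src_img: (B,3,H,W), depth: (B,1,H,W) target-view depth,
    pose: (B,4,4) target-to-source transform, K: (B,3,3) intrinsics."""
    B, _, H, W = src_img.shape
    device = src_img.device
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1).expand(B, -1, -1)
    # Back-project target pixels to 3D and move them into the source camera frame.
    cam = torch.inverse(K) @ pix * depth.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_cam = (pose @ cam_h)[:, :3]
    # Project into the source image plane and normalise to [-1, 1] for grid_sample.
    src_pix = K @ src_cam
    src_pix = src_pix[:, :2] / src_pix[:, 2:].clamp(min=1e-6)
    grid = torch.stack([src_pix[:, 0] / (W - 1) * 2 - 1,
                        src_pix[:, 1] / (H - 1) * 2 - 1], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)

def photometric_loss(tgt_img, src_img, depth, pose, K):
    """L1 photometric error between the target frame and the warped source frame."""
    return (tgt_img - inverse_warp(src_img, depth, pose, K)).abs().mean()

# Toy usage with random tensors standing in for real frames and network outputs.
B, H, W = 2, 64, 96
tgt, src = torch.rand(B, 3, H, W), torch.rand(B, 3, H, W)
depth = torch.rand(B, 1, H, W) + 0.1                       # predicted target depth
pose = torch.eye(4).expand(B, 4, 4).clone()                # identity relative pose
K = torch.tensor([[50.0, 0, W / 2], [0, 50.0, H / 2], [0, 0, 1.0]]).expand(B, 3, 3)
print(photometric_loss(tgt, src, depth, pose, K))
```

In a real training loop the depth and pose would come from the networks being trained, so minimizing this loss supervises depth without any ground-truth depth labels.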

Quick Read: https://www.marktechpost.com/2021/11/19/imperial-college-london-researchers-propose-a-novel-randomly-connected-neural-network-for-self-supervised-monocular-depth-estimation-in-computer-vision/

Microsoft AI Open-Sources ‘SynapseML’ For Developing Scalable Machine Learning Pipelines

Microsoft has announced the release of SynapseML, an open-source library that simplifies and speeds up the creation of machine learning (ML) pipelines. SynapseML can be used to build scalable, intelligent systems for challenges such as anomaly detection, computer vision, deep learning, form and face recognition, gradient boosting, microservice orchestration, model interpretability, reinforcement learning and personalization, search and retrieval, speech processing, text analytics, and translation.

SynapseML is a powerful platform for building production-ready distributed machine learning pipelines. It bridges the gap between several existing ML frameworks and Microsoft algorithms to create a single scalable API that works across Python, R, and JVM languages such as Scala and Java.

Building a machine learning pipeline takes more than coding skill. Many developers find that composing tools from different ecosystems requires considerable glue code, and that the underlying frameworks are not designed for the task at hand (in this case, building servers).
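For a flavour of what the unified API looks like in practice, here is a minimal PySpark sketch that trains SynapseML's LightGBM classifier inside a standard Spark ML pipeline. The package coordinates and version number are illustrative assumptions; check the SynapseML documentation for the exact artifact to install.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from synapse.ml.lightgbm import LightGBMClassifier  # shipped with SynapseML

spark = (SparkSession.builder
         .appName("synapseml-sketch")
         # Illustrative coordinates; the exact version may differ from the current release.
         .config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:0.9.4")
         .getOrCreate())

# Toy data: two numeric features and a binary label.
df = spark.createDataFrame(
    [(1.0, 2.0, 0), (2.0, 1.0, 1), (3.0, 4.0, 0), (4.0, 3.0, 1)],
    ["f1", "f2", "label"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="features"),
    LightGBMClassifier(objective="binary", featuresCol="features", labelCol="label"),
])
model = pipeline.fit(df)
model.transform(df).select("label", "prediction").show()
```

Because SynapseML estimators follow the Spark ML interface, they compose with existing Spark stages and scale out with the cluster rather than requiring custom distributed glue code.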

Quick Read: https://www.marktechpost.com/2021/11/18/microsoft-ai-open-sources-synapseml-for-developing-scalable-machine-learning-pipelines/

A Novel Deep Learning Technique That Rebuilds Global Fields Without Using Organized Sensor Data

A prominent challenge physicists and computer scientists face is developing ways to accurately rebuild spatial fields from data gathered by sparse sensors. The process becomes even more challenging when sensors are sparsely positioned in a seemingly random or unstructured fashion.

Traditional linear theory-based methods have proven inefficient at reconstructing global fields for complicated physical systems or processes, especially when only a small amount of sensor data is available or sensors are randomly positioned. As a result, computer scientists have been investigating alternative methods for global field reconstruction, such as deep learning models.

A team of researchers from Japan’s Keio University, the University of California-Los Angeles, and other institutions in the United States has introduced a novel deep learning technique that can accurately rebuild global fields without requiring large amounts of well-organized sensor data.
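As a rough illustration of the problem setting, the sketch below trains a small convolutional network to reconstruct a synthetic 2D field from a handful of randomly placed point sensors, encoded as an observation grid plus a sensor-location mask. This is a generic toy setup, not the architecture proposed by the Keio/UCLA team.

```python
import torch
import torch.nn as nn

H = W = 32
N_SENSORS = 20

def make_example():
    # Synthetic smooth field as ground truth (a sum of sinusoids).
    y, x = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
    field = torch.sin(6.28 * x) * torch.cos(6.28 * y) + 0.3 * torch.sin(12.6 * x * y)
    # Randomly positioned sensors each observe a single pixel of the field.
    idx = torch.randperm(H * W)[:N_SENSORS]
    mask = torch.zeros(H * W)
    mask[idx] = 1.0
    mask = mask.view(1, H, W)
    obs = field.unsqueeze(0) * mask
    # Input: observed values + sensor mask; target: the full field.
    return torch.cat([obs, mask], dim=0), field.unsqueeze(0)

net = nn.Sequential(
    nn.Conv2d(2, 32, 5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, 5, padding=2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    x, target = make_example()
    loss = nn.functional.mse_loss(net(x.unsqueeze(0)), target.unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final reconstruction MSE: {loss.item():.4f}")
```

The key point is that the network never sees the sensors in a fixed, organized layout; each example places them differently, mirroring the unstructured-sensor setting described above.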

Quick Read: https://www.marktechpost.com/2021/11/18/a-novel-deep-learning-technique-that-rebuilds-global-fields-without-using-organized-sensor-data/

Google AI Proposes Multi-Modal Cycle Consistency (MMCC) Method Making Better Future Predictions by Watching Unlabeled Videos

Recent advances in machine learning (ML) and artificial intelligence (AI) are increasingly being used by people worldwide to make decisions in their daily lives. Many studies now focus on developing ML agents that can make acceptable predictions about the future over various timescales, which would help them anticipate changes in the world around them, including the actions of other agents, and plan their next steps. Making good judgments requires accurate future prediction, which in turn requires both capturing important environmental transitions and modeling how changes develop over time.

Previous work in visual observation-based future prediction has been limited by its output format or by a manually defined set of human activities. These are either overly detailed and difficult to forecast or missing crucial information about the richness of the real world. Predicting “someone jumping” does not account for why they are jumping, what they are jumping onto, and so on. Previous models were also designed to make predictions at a fixed offset into the future, which is a limiting assumption because we rarely know when relevant future states will occur.

A new Google study introduces the Multi-Modal Cycle Consistency (MMCC) method, which uses narrated instructional videos to train a strong future prediction model. It is a self-supervised technique developed on a huge unlabeled dataset of diverse human actions. The resulting model operates at a high level of abstraction, can anticipate arbitrarily far into the future, and decides how far to predict based on context.
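The cycle-consistency idea can be sketched in a few lines: a forward model soft-selects a likely future clip embedding, a backward model predicts back from it, and the loss requires the cycle to land on the starting clip. The code below is a conceptual toy on random embeddings, not Google's MMCC implementation; the model names and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 128
forward_model = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
backward_model = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
opt = torch.optim.Adam(
    list(forward_model.parameters()) + list(backward_model.parameters()), lr=1e-3)

def cycle_loss(clips):
    """clips: (T, DIM) embeddings of time-ordered clips from one video."""
    start = clips[0:1]                                   # (1, DIM)
    # Soft attention over candidate future clips instead of a fixed offset.
    fwd_query = forward_model(start)
    attn = F.softmax(fwd_query @ clips[1:].T, dim=-1)    # (1, T-1)
    future = attn @ clips[1:]                            # soft-selected future embedding
    # Predict back from the selected future and score against all clips.
    logits = backward_model(future) @ clips.T            # (1, T)
    target = torch.zeros(1, dtype=torch.long)            # the cycle should return to clip 0
    return F.cross_entropy(logits, target)

for step in range(100):
    clips = torch.randn(8, DIM)        # stand-in for real video/text embeddings
    loss = cycle_loss(clips)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the "future" is chosen by attention rather than at a fixed offset, the same mechanism lets the model decide how far ahead a meaningful future state lies.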

Quick Read: https://www.marktechpost.com/2021/11/17/google-ai-proposes-multi-modal-cycle-consistency-mmcc-method-making-better-future-predictions-by-watching-unlabeled-videos/

NVIDIA Unveils ‘Modulus’: A Framework For Developing Physics-Machine Learning (ML) Models for Digital Twins

NVIDIA has unveiled ‘Modulus’, a new framework for constructing physics-machine learning (ML) models for digital twins. Modulus, formerly known as SimNet, is a platform that allows engineers, scientists, researchers, and students to train neural networks using governing physics equations combined with observed or simulated data.

NVIDIA’s Modulus combines the capabilities of physics and partial differential equations (PDEs) with artificial intelligence (AI) to create more robust models for better analysis. Modulus can help you get started with AI-driven physics challenges or build digital twin models for complicated non-linear, multiphysics systems.

Modulus uses an AI-based technique to combine the advantages of physics with machine learning. Using training data and the governing physics equations, Modulus trains a neural network that encapsulates the physics of the system in a high-fidelity model that can be employed in numerous applications.
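To show what "training a network with governing physics equations" means in general terms, here is a tiny physics-informed training loop for a toy ODE. It uses plain PyTorch and automatic differentiation; it is not the Modulus API, just the generic physics-informed-loss idea that frameworks of this kind build on.

```python
import math
import torch
import torch.nn as nn

# Toy problem: find u(x) with u''(x) = -sin(x) on [0, pi] and u(0) = u(pi) = 0.
# The exact solution is u(x) = sin(x); no labeled data is used, only the equation.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def residual(x):
    """PDE/ODE residual u''(x) + sin(x), which should vanish at the solution."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + torch.sin(x)

for step in range(2000):
    x_int = torch.rand(128, 1) * math.pi               # interior collocation points
    x_bc = torch.tensor([[0.0], [math.pi]])            # boundary points
    loss = residual(x_int).pow(2).mean() + net(x_bc).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After training, net(x) should approximate sin(x) on [0, pi].
```

Observed or simulated data can be folded in by adding a standard supervised term to the same loss, which is the hybrid data-plus-physics training the article describes.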

Quick Read: https://www.marktechpost.com/2021/11/16/nvidia-unveils-modulus-a-framework-for-developing-physics-machine-learning-ml-models-for-digital-twins/

DeepMind Researchers Present The ‘One Pass ImageNet’ (OPIN) Problem To Study The Effectiveness Of Deep Learning In A Streaming Setting

The ImageNet database, first introduced at the Conference on Computer Vision and Pattern Recognition in 2009 and today containing over 14 million tagged images, has become one of the most prominent benchmarks in computer vision. ImageNet is a static dataset, however, whereas real-world data is frequently streamed and arrives at a considerably larger scale. While academics constantly work to increase model accuracy on ImageNet, there has been minimal focus on improving the resource efficiency of ImageNet supervised learning.

Researchers from DeepMind present the One Pass ImageNet (OPIN) problem, designed to study and understand deep learning in a streaming setting with constrained data storage, with the intent of developing systems that can train a model with each example passing through the system only once.

In the OPIN problem, inputs arrive in mini-batches and do not repeat; training is complete once the entire dataset has been seen. Unlike standard ImageNet evaluations, which focus on model accuracy, OPIN considers learning capability under constrained space and compute.
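A minimal sketch of the one-pass protocol looks like the following, with random tensors standing in for ImageNet: every example flows through the optimizer exactly once and there is no second epoch. The real OPIN setting additionally constrains data storage and compute, which this toy omits.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for an image classification stream.
data = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))
loader = DataLoader(data, batch_size=64, shuffle=False)   # one sequential pass

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:          # each mini-batch appears exactly once
    loss = loss_fn(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
# No second epoch: once the stream is exhausted, training is over.
```

Techniques studied in this setting (such as taking several gradient steps per batch or keeping a small replay buffer) all have to respect the same one-pass, bounded-storage budget.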

Quick Read: https://www.marktechpost.com/2021/11/16/deepmind-researchers-present-the-one-pass-imagenet-opin-problem-to-study-the-effectiveness-of-deep-learning-in-a-streaming-setting/

Researchers From Lehigh University Developed An Artificial Neural Network To Detect Symmetry and Structural Similarities In Materials

Vast amounts of unstructured structural and functional images are acquired in the quest for scientific discovery, but only a tiny proportion of this data is carefully examined, and an even smaller fraction is ever published. Extracting greater understanding from costly scientific studies that have already been completed is one way to speed up scientific discovery. Unfortunately, data from scientific experiments are usually only available to the people who created them and are familiar with the experiments and guidelines. The issue is aggravated by the lack of reliable ways to search unstructured image datasets for correlations and insight.

A research team at Lehigh University has developed and efficiently trained an artificial neural network to identify symmetry and structural similarities in materials and to create similarity projections. The researchers used machine learning to train the network to detect symmetry, patterns, and trends, and applied the technology, for the first time, to scan a database of over 25,000 images and correctly classify related elements.

Once a neural network is trained, its output for each image is a vector, a set of numbers that acts as a compact description of the image’s features. These features help classify objects so that a degree of resemblance between images can be measured. The output space may still be large, however, with 512 or more distinct features, so it is compressed into a space that humans can interpret, such as 2D or 3D.
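The projection step described above can be sketched with plain NumPy: high-dimensional feature vectors (random stand-ins here for the network's learned outputs) are reduced to two dimensions with PCA for visualization, and cosine similarity in the original feature space surfaces the most similar images. This is a generic illustration, not the team's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 512))             # one 512-D feature vector per image

# PCA via SVD on mean-centred features: a 2-D "similarity map" for plotting.
centred = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords_2d = centred @ vt[:2].T                      # (5000, 2) projection

# Cosine-similarity search: the most similar images to image 0.
normed = features / np.linalg.norm(features, axis=1, keepdims=True)
sims = normed @ normed[0]
print("nearest neighbours of image 0:", np.argsort(-sims)[1:6])
```

With real embeddings instead of noise, nearby points in the 2-D map and the top cosine-similarity hits correspond to images with similar symmetry or structure.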

Quick Read: https://www.marktechpost.com/2021/11/15/researchers-from-lehigh-university-developed-an-artificial-neural-network-to-detect-symmetry-and-structural-similarities-in-materials/

About: Marktechpost is a California-based Artificial Intelligence News Platform. Since 2019, Marktechpost has been a leading AI news platform providing easy-to-consume, bite-sized updates on machine learning, deep learning, and data science research.

Asif Razzaq: Asif Razzaq is an AI Journalist and Cofounder of Marktechpost, LLC. He is a visionary, entrepreneur, and engineer who aspires to use the power of Artificial Intelligence for good.

Asif's latest venture is the development of an Artificial Intelligence Media Platform (Marktechpost) that will revolutionize how people can find relevant news related to Artificial Intelligence, Data Science and Machine Learning.

Always Open for Collaboration/ Partnership: [email protected]

Share your AI-related research directly via LinkedIn message or email at [email protected]

Feel free to join our 36k+ AI Group on Facebook
