Contextualizing a Breakthrough in Machine Learning

Today Perceive exited stealth and introduced a machine learning processor that runs at about 1/10th of a watt, designed to bring state-of-the-art machine learning inference to smaller, smarter devices. The breakthrough is in its efficiency: it is between 20X and 100X more efficient than today's inference processors, and it can run multiple complex neural networks concurrently without any help from the cloud or powerful data-center processors from NVIDIA or Google. Machine learning training is the complex task of selecting a network and iteratively running inputs paired with outcomes to produce a neural network composed of weighted connections between nodes. Training will remain the province of the massive compute of the cloud and the craft of skilled computer scientists. The resulting networks, however, can now run on edge devices - something in a user's home, on their wrist, or in their hand - which can then choose when and whether to call on cloud services.

Perceive sponsored my new white paper, Smart Inference Devices: The Wave of Perceptive Electronics Powered by Machine Learning, which details the technology and applications of these new, remarkably fast and efficient processors.

The unpublished preface to the paper, reproduced below, provides some context for those who wish to explore our journey from the dawn of programming to intelligent devices. It starts with:

>Hello World.

Explicitly coded, this classic first step in learning a new programming language is how most of us began our coding journey: tell the computer to write “Hello World” on the screen. By comparison, machine learning seems wondrous – computer programs designed to learn and produce output without an explicit understanding of the means by which to produce the answer; programs that learn by observation and execute code generated by machines.
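The contrast can be made concrete with a minimal sketch in plain Python (illustrative only, not Perceive's code): one function whose rule is written by hand, and one whose single weight is discovered from input/outcome pairs by gradient descent.

```python
# Explicit program: the rule is written by hand.
def hello_explicit():
    return "Hello World"

# Learned program: fit y = w * x from (input, outcome) pairs.
# The "rule" (the weight w) is discovered, not written.
def fit_slope(pairs, lr=0.05, steps=200):
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            err = w * x - y      # compare prediction to the observed outcome
            w -= lr * err * x    # nudge the weight to reduce the error
    return w

if __name__ == "__main__":
    print(hello_explicit())                    # explicit output
    w = fit_slope([(1, 2), (2, 4), (3, 6)])    # the data implies y = 2x
    print(round(w, 2))                         # learned weight, close to 2.0
```

A real network has millions of such weights and nonlinear layers, but the loop above is the essence of "learning by observation": no line of code states that the answer is to double the input.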

Machine learning began innocently, without much attempt to structure the learning process. Modern machine learning – deep learning – is a radical departure from this naïve start. In deep learning, networks are explicitly organized to attack a target problem – and even organized to self-organize. Coders pre-architect neural networks for success. Learning networks are designed to self-critique to improve their accuracy. They can be designed to search for the best architecture for a given problem. They can even generate their own training inputs, accelerating the learning process beyond what human-curated data or real-world experience might provide.

The computation required to train and operate the resulting code is tremendous: vast weighted networks of networks that demand highly parallel matrix computation. This workload is clearly suited to a new class of dedicated logic – machine learning processors – capable of executing this radical, multi-dimensional parallelism.
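To see why matrix math dominates, consider a single dense layer of inference, sketched here in plain Python (a toy illustration, not any particular processor's implementation): every output element is an independent multiply-accumulate over the inputs, which is exactly the pattern dedicated hardware parallelizes.

```python
# One dense layer of inference: output = relu(W @ x + b).
# Each output is an independent dot product (multiply-accumulate),
# so all rows can be computed in parallel by dedicated hardware.
def dense_layer(W, b, x):
    out = []
    for row, bias in zip(W, b):
        acc = sum(w * xi for w, xi in zip(row, x)) + bias  # dot product
        out.append(max(0.0, acc))                          # ReLU nonlinearity
    return out

W = [[1.0, -1.0], [0.5, 0.5]]   # toy weights learned during training
b = [0.0, -0.25]                # toy biases
print(dense_layer(W, b, [2.0, 1.0]))   # [1.0, 1.25]
```

A deep network chains hundreds of such layers with far larger matrices, so the multiply-accumulate count runs into the billions per inference – the operation an efficient ML processor must make cheap.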

Solving interesting, real-world problems requires multiple networks running in tandem – interpreting speech, visual data, sensor data, and more, simultaneously. The next generation of processors will need to run multiple networks at once to attack complex real-world challenges. They will need to scale, so that they can solve problems on disconnected devices, on connected devices, and in the cloud. For many devices, they will need to run within tight heat and power budgets. For many applications, they will need to function without network connections and with latency low enough that a device's inner functions can be determined and driven by inference. And the applications are emerging quickly: many problems attempted algorithmically have only been solved successfully and efficiently with machine learning – hand tracking, realistic avatars, identity recognition, night vision, mind control, artistic augmentation – applications emerging to power the next generation of electronic everything.

So if machine learning innovation is cracking the code on building complex networks, how do we make sure that this innovation can be incorporated into practical devices to assist everyday work, life, and play? We will need to rethink how we run inference on deep learning networks. We will need to challenge ourselves to produce the breakthrough code and processing architectures that can transform computationally expensive problems into computationally efficient ones. By making machine learning computationally affordable, we may unlock a revolution in smart devices that can be controlled intuitively, perform accurately and reliably, and integrate seamlessly with vast services in the cloud.

Read the white paper Smart Inference Devices: The Wave of Perceptive Electronics Powered by Machine Learning online at https://www.tiriasresearch.com/research/.
