Artificial Intelligence #43: How do AI chips / neural chips work
Background
For our teaching at Oxford, I track trends on the horizon for AI and Edge computing.
AI chips / neural processors are one of the trends we watch.
In this newsletter, I will highlight why AI chips / neural chips are significant and how they work.
Essentially, the terms AI chips, neural processors, and neural chips refer to processors that are optimised for AI-specific operations at the expense of non-AI-specific operations. For geopolitical and security reasons, AI chips will play an increasing role in the future. Also, more applications such as autonomous vehicles will depend on AI chips. Many of the technologies used in AI chips, such as quantization in deep learning and TinyML, are already used with microcontrollers (and covered in our teaching - Artificial Intelligence - Cloud and Edge implementations). See Pete Warden's blog on why the future of machine learning is tiny for the rationale behind this thinking.
How do AI chips work
The term AI chips is thus broad and covers a range of architectures such as GPUs, FPGAs, and ASICs. As we see below, AI chips include a number of specific architectural features that lend themselves to optimised processing of AI-specific operations such as image classification, machine translation, and object detection.
The biggest feature of AI chips is parallel computing. Deep neural network (DNN) operations lend themselves to parallelization because they are identical and independent of the results of other computations. DNNs need a large number of independent, identical matrix multiplication operations whose products are then summed (multiply-and-accumulate, or MAC, operations), as sketched below.
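To make the idea concrete, here is a minimal sketch in plain NumPy (not any chip's actual API) of why a dense layer parallelises so well: each output value is an independent MAC over the same inputs, so an AI chip can compute all of them at once.

```python
# Minimal sketch: a dense layer is a set of independent
# multiply-and-accumulate (MAC) operations.
import numpy as np

x = np.random.rand(128)        # input activations
W = np.random.rand(64, 128)    # weights of one dense layer

# Each of the 64 output neurons is an independent MAC:
# out[i] = sum_j W[i, j] * x[j]
out_sequential = np.array([np.dot(W[i], x) for i in range(W.shape[0])])

# Because the 64 MACs share no intermediate results, an AI chip can run
# them all in parallel; in NumPy the same computation is a single matmul.
out_parallel = W @ x

assert np.allclose(out_sequential, out_parallel)
```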
In addition to data parallelization, you could also increase the amount of parallelization using other means:
You could connect multiple AI chips in parallel.
You could use model parallelization, where the model is split into multiple parts on which computations are performed in parallel.
You could optimise using low-precision computing, which sacrifices numerical accuracy for speed and efficiency (see the sketch after this list).
You could employ memory optimization: if an AI algorithm's memory access patterns are predictable, AI chips can optimize memory accesses accordingly.
Finally, libraries like TensorFlow and PyTorch have features that can take advantage of AI chip features.
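As a concrete illustration of low-precision computing, here is a minimal sketch of symmetric int8 quantization of float32 weights in NumPy. The scaling scheme shown is a generic illustration, not the exact recipe used by any particular chip or by TensorFlow/PyTorch.

```python
# Minimal sketch of low-precision computing: symmetric int8 quantization.
import numpy as np

w_fp32 = np.random.randn(4, 4).astype(np.float32)

# Map the float range onto signed 8-bit integers via a single scale factor.
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.round(w_fp32 / scale).astype(np.int8)

# The chip stores and multiplies the cheap int8 values, then rescales.
w_dequant = w_int8.astype(np.float32) * scale

max_error = np.abs(w_fp32 - w_dequant).max()
print(f"max quantization error: {max_error:.4f}")  # small relative to the weights
```

The trade-off is exactly the one described above: a small, bounded loss of numerical accuracy in exchange for much cheaper storage and arithmetic.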
Below is a list of machine learning processors (source: WikiChip).
As you can see, there are already many vendors in this field, but it is also a nascent domain, and many other vendors are entering the area, such as Microsoft's Project Brainwave.
We will watch it with interest!
References: AI Chips: What They Are and Why They Matter