New chip architectures for today’s AI

AI is shifting to the hardware realm, specifically in the development of integrated circuits.

Most advances in Artificial Intelligence (AI) have so far been confined to software. Today's AI programs are voracious consumers of data: they sift through it and apply methods such as pattern recognition. For instance, an online retailer like Amazon looks at your history of browsing for a particular product and then "matches" this usage pattern to target advertisements at you through sites like Facebook and Google, enticing you to buy.

This is simple enough, but a similar method sits behind more advanced uses of AI such as self-driving vehicles. As I have written in this column before, human beings pore over hundreds of thousands of hours of video, labelling every small detail, including road signs, traffic lights and distances from other traffic, so that these otherwise unstructured data are labelled accurately enough for the AI software in self-driving cars to analyse and then act on.
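
To make the idea of labelled data concrete, here is a minimal sketch in Python; the frame names and annotations are hypothetical, not drawn from any real self-driving dataset:

# Hypothetical labelled examples: each video frame is paired with the
# annotations a human reviewer attached to it. Supervised learning
# software fits a model to exactly this kind of (input, label) pairing.
labelled_frames = [
    {"frame": "frame_000147", "labels": ["stop sign", "pedestrian"]},
    {"frame": "frame_000148", "labels": ["traffic light: red"]},
    {"frame": "frame_000149", "labels": ["car ahead: 12 m"]},
]

for example in labelled_frames:
    print(example["frame"], "->", ", ".join(example["labels"]))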

Some AI is now actually smart enough to write improvements to its own software code as its "understanding" of the data fed to it increases, suggesting that machines can now "think" for themselves. There has been astonishment, as I have written here before, at the discovery that these black boxes can autonomously develop the capacity to obfuscate the truth.

But all this is still software. The relentless march of Moore's law, which has delivered an explosion in computing performance along with a steep fall in the cost of computing, combined with the internet and the explosion of data it has produced, has allowed the decades-old AI research ideas of neural networks and machine learning to see the light of day.

It seems that AI is now shifting into the hardware realm, specifically in the development of integrated circuits (ICs). I spoke recently with Nagendra Nagaraja and Prashant Trivedi, two of the founders of a deep technology start-up called AlphaICs, who are trying to revolutionize the design of ICs to meet AI’s future needs. Vinod Dham, the reputed designer of some of Intel Corp.’s breakthrough chips, such as the Intel Pentium, is the third founder.

Of the three mainstream hardware platforms (Intel and other CPU chips popular in laptops and servers, ARM chips in mobile devices, and high-performance gaming chips called GPUs, mostly from Nvidia), GPUs seem to have the edge today in AI development. This is because today's CPUs are primarily scalar-based, wherein a single instruction operates on a single piece of data, while GPUs are vector-based, wherein a single instruction operates on a linear array of data called a vector. Nvidia has capitalized on this opportunity; Nagaraja himself was a chip designer at Nvidia.
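
To make the scalar-versus-vector distinction concrete, here is a minimal Python sketch using NumPy, chosen purely for illustration and unrelated to any particular chip's toolchain. The first function handles one piece of data per step, the way a scalar CPU instruction does; the second applies one operation to a whole linear array at once, the way a GPU vector instruction does:

import numpy as np

# Scalar style: one operation handles one piece of data at a time.
def scale_scalar(values, factor):
    result = []
    for v in values:                 # each multiplication is a separate step
        result.append(v * factor)
    return result

# Vector style: one operation is applied to the whole array in a single call.
def scale_vector(values, factor):
    return np.asarray(values) * factor

print(scale_scalar([1, 2, 3], 10))   # [10, 20, 30]
print(scale_vector([1, 2, 3], 10))   # [10 20 30]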

Nagaraja, however, feels that none of these chips has been able to take today's AI from being a net consumer of data to being a net producer of it. The founders want to build chips that allow AI programs to produce data that can drive decisions, rather than continue with today's world of spoon-feeding labelled data into AI platforms. AlphaICs claims to have built a custom hardware platform for "supervised" self-learning agents that deliver "reinforcement" learning today and will provide the foundation for unsupervised learning as AI evolves, in a process the company calls "Real AI".
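
A minimal reinforcement-learning loop in Python illustrates the "producer" idea: the agent generates its own training data (actions and rewards) by interacting with an environment, instead of being handed labelled examples. The environment and reward probabilities here are invented for illustration; this is not AlphaICs code:

import random

q_values = {0: 0.0, 1: 0.0}          # the agent's value estimate for two actions
learning_rate, exploration = 0.1, 0.2

def reward(action):
    # Hypothetical environment: action 1 pays off more often than action 0.
    return 1.0 if random.random() < (0.8 if action == 1 else 0.3) else 0.0

for step in range(1000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < exploration:
        action = random.choice([0, 1])
    else:
        action = max(q_values, key=q_values.get)
    r = reward(action)                               # data produced by acting
    q_values[action] += learning_rate * (r - q_values[action])

print(q_values)   # the estimates converge towards each action's payoff rate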

The AlphaICs Real AI Processor (called RAP™) is based on "agents": groups of interconnected "tensors", mathematical objects analogous to, but more general than, the vectors found in GPUs that I referenced above. Nagaraja claims that today's GPUs do not have the architecture to handle the divergence of threads needed for "reinforcement" learning, while "agents" do.
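
As a rough illustration of how a tensor generalizes a vector (again in NumPy, purely for intuition and not a description of the RAP hardware):

import numpy as np

vector = np.array([1.0, 2.0, 3.0])             # rank 1: a linear array, as in a GPU
matrix = np.array([[1.0, 2.0], [3.0, 4.0]])    # rank 2: rows and columns
tensor = np.zeros((4, 3, 2))                   # rank 3: a stack of matrices

# The same idea extends to any number of dimensions, which is why tensors
# are the natural data structure for modern machine-learning workloads.
print(vector.ndim, matrix.ndim, tensor.ndim)   # prints: 1 2 3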

AlphaICs has also developed a new specialized instruction set called SIMA™ (Single Instruction Multiple Agents) to increase the energy efficiency of its chips. SIMA™ enables multiple agents working asynchronously in groups, across different environments such as mobile devices, data centres and PCs, bringing a large degree of parallelism at the agent level and thereby significantly increasing the rate of AI learning.
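
As a software analogy only (SIMA is a hardware instruction set, and the details below are invented for illustration), agent-level parallelism can be pictured as several independent learning agents running concurrently and reporting their results together:

from concurrent.futures import ThreadPoolExecutor
import random

def run_agent(agent_id, steps=500):
    # Each agent runs its own small learning loop, independent of the others.
    estimate = 0.0
    for _ in range(steps):
        r = 1.0 if random.random() < 0.7 else 0.0   # hypothetical reward signal
        estimate += 0.05 * (r - estimate)
    return agent_id, round(estimate, 2)

# Run four agents concurrently and gather their results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_agent, range(4)))

print(results)   # each agent's estimate converges towards roughly 0.7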

When queried on advances in quantum computing hardware, both gentlemen claimed that it is at least two decades away from practical use and needs temperatures near absolute zero (-273 degrees Celsius) to function.

They believe their agent-based hardware paradigm is much closer to fruition as AI evolves into a net "producer" of data.

Siddharth Pai has led over $20 billion in technology outsourcing transactions. He is now founder of Siana Capital, a venture fund management company focused on deep science and tech in India.

*This article first appeared in print in Mint and online at www.livemint.com

For this and more, see:



JONY KUMAR

Managing Director at Faithful Consultants & Recruiter

6y

You are doing a great job.


Before we reach quantum computing, 3D integration may be the way forward between now and the next 20 years (at least)!

Gary Gibson

Chief Technology Officer at VSBLTY | AI | Data Science | Product Management | Cloud & Edge Computing | HPC | IoT | Mobile | AR/VR

6y

Interesting, they are claiming 30TOPS @ 13W for inference for their RAP processor. AlphaICs was not one of the (many) hardware DL companies I have been watching. I'm curious to find out more info. FYI: https://www.alphaics.ai/

Dayan Graham

Software Developer at Marshall Wace

6y

Gasim G. Tim Green Just what we were talking about yesterday
