Tech giants chart different courses for artificial intelligence

While Google’s Android has a large footprint among mobile phone users, Microsoft still enjoys hegemony on the desktop, where it remains by far the most popular choice for large businesses

Until now, most firms have been using the Graphics Processing Unit (GPU) architecture, originally developed for video games by firms such as Nvidia, to build out their Artificial Intelligence (AI) programmes. Because it works on many pieces of data in parallel, the GPU is far more capable of handling voluminous data than the humble Central Processing Unit (CPU) at the heart of the computers you and I are familiar with.
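To make that contrast concrete, here is a rough sketch in Python with NumPy (my illustration, not the author’s): the explicit loop handles one value at a time, as a single CPU core would, while the vectorized call applies the same operation across the whole array in one data-parallel step, the style of computation that GPUs scale out to thousands of cores.

```python
# Contrast element-at-a-time (CPU-style) with data-parallel (GPU-style) work.
import time
import numpy as np

data = np.random.rand(10_000_000)

start = time.perf_counter()
serial = [x * 2.0 for x in data]   # one element at a time, like a single core
print(f"serial loop: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
vectorized = data * 2.0            # the whole array in one parallel operation
print(f"vectorized:  {time.perf_counter() - start:.2f}s")
```

On a typical machine the vectorized form runs roughly a hundred times faster, and that gap is what dedicated parallel hardware is built to exploit.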

A couple of weeks ago, I wrote in this column about a new hardware chip design for AI, and referenced a start-up firm called AlphaICs, which counts the renowned Vinod Dham among its founders. AlphaICs is trying to redefine the type of chip used for AI applications by designing one of a new class of processors called Tensor Processing Units (TPUs), which allow many more pieces of data to be processed simultaneously on the chip.

Hungry AI programmes are monsters that need to crunch through enormous data stores in order to keep “learning”, and the hope is that this new class of TPU chips, themselves an extension of GPUs, will be sufficient to handle the vast amounts of data flying in from the various devices that connect to the Internet.
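The workload these chips are built for is, at its core, large tensor (multi-dimensional array) arithmetic. A minimal NumPy sketch, with array sizes of my own choosing, shows the kind of operation that dominates neural-network training and that tensor processors execute in bulk:

```python
# One layer of a neural network: multiply a batch of inputs by a weight matrix.
import numpy as np

batch = np.random.rand(512, 1024)    # 512 examples, 1024 features each
weights = np.random.rand(1024, 256)  # learned parameters of one layer

# A single step like this performs over a hundred million multiply-accumulate
# operations; tensor hardware runs huge numbers of them simultaneously.
activations = batch @ weights        # result shape: (512, 256)
print(activations.shape)
```

Training repeats steps like this billions of times over a data store, which is why processing throughput, and not just data volume, decides how fast a model can “learn”.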

The realization that the war in AI is not just about the data, but also about the ability to process it effectively through new hardware, has not been lost on the tech giants. Microsoft, Amazon, Google and Facebook are huge buyers of hardware, and each has experimented with start-ups such as AlphaICs to see whether a new class of chip would be required to handle AI tasks.

Facebook has said in the past that it might try to design new types of chips for its own use. Google realized some years ago that without these advanced chips, it would need to significantly expand the size of its already humongous computer farms. It has hired a number of engineers to design its own TPUs in order to process the ever-increasing amount of data coming at it from Android, its mobile phone operating system. Google also rents these chips out to its cloud customers.

Meanwhile, these large firms have also been using new chip architectures from more traditional CPU makers such as Intel, built around what are called Field-Programmable Gate Arrays, or FPGAs.

Just this past week, Google previewed at its developer conference what its Google Assistant can do when powered with even more AI capability. Its new programme, called Duplex, can believably mimic a human being while making automated phone calls to complete mundane tasks such as scheduling an appointment at a spa or making a reservation at a restaurant.

According to ZDNet, a business technology news website, Microsoft announced at its own annual Build conference last week that it is breaking away from attempts to build new chips and will stick with the FPGA architecture produced by Intel to roll out new AI processing capability for users of its Azure cloud platform, in an initiative it is calling “Project Brainwave”. Microsoft spokespeople have said that Project Brainwave’s FPGA technology is attractive because it can process data quickly, but at a significantly lower cost than the GPU chips commonly used in today’s AI machine-learning projects.

The key to FPGA chips is that, unlike other chips, their logic can be reconfigured on the go. Microsoft’s bet is that machine learning is evolving at warp speed, and that it does not make sense to hard-wire today’s AI ideas into new chips, simply because the hard-wiring will quickly become obsolete. Conveniently, this decision also spares Microsoft the burden of designing and building new chips of its own; chip design and manufacture is a famously expensive endeavour. In Microsoft’s estimate, at least, that spending is best left to traditional hardware players such as Intel.
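Real FPGAs are programmed with hardware description languages and configuration bitstreams, not Python, but a small software analogy, with names entirely of my own invention, captures the trade-off Microsoft is betting on: behaviour lives in a loadable configuration, so the same hardware can run tomorrow’s algorithm.

```python
# A software analogy for reconfigurable hardware: the unit's behaviour
# comes from a loaded configuration, not from fixed wiring.

def relu(x):
    # Today's popular activation function.
    return max(0.0, x)

def swish(x):
    # A newer activation function that a hard-wired chip could not adopt.
    return x / (1.0 + 2.718281828459045 ** -x)

class ReconfigurableUnit:
    """Stands in for an FPGA: re-flash the logic, keep the silicon."""
    def __init__(self, logic):
        self.logic = logic             # load the initial configuration

    def reconfigure(self, new_logic):  # reprogram without new hardware
        self.logic = new_logic

    def process(self, x):
        return self.logic(x)

unit = ReconfigurableUnit(relu)
print(unit.process(-2.0))  # 0.0 under today's logic
unit.reconfigure(swish)    # the algorithm changed; the chip did not
print(unit.process(-2.0))  # about -0.24 under the new logic
```

A hard-wired chip would bake relu in permanently; the reconfigurable unit simply loads the replacement, which is exactly the obsolescence-proofing Microsoft is after.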

While Google’s Android operating system has a large footprint among mobile phone users, Microsoft still enjoys hegemony on the desktop, where it remains by far the most popular choice for large businesses. This move allows it to chart its own course in the AI and cloud worlds.

Sit back. Let the games begin.

Siddharth Pai has led over $20 billion in technology outsourcing transactions. He is the founder of Siana Capital, a venture fund management company focused on deep science and tech in India.

*This article first appeared in print in the Mint and online at www.livemint.com

Laxman P Joshi

Livelihood | Disability | Ekyam Impact || Lend A Hand India | Ex HCL, Airforce

6y

FPGA is definitely better than hard-wired processing in fast-changing technology. However, programming and securing it is a challenge.

Baltazar Ruiz

Cloud Account Executive @ Intel | Customer Success Specialist

6y

The Neural Network Processor L1000 by Intel AI is a great alternative. It was not mentioned in the article, probably because it was published before the NNP-L1000 was launched during #AIDevCon. The link below provides more information on the specifications. https://newsroom.intel.com/editorials/artificial-intelligence-requires-holistic-approach/ #IamIntel


I see only two options for AI: realistic chatbots and Skynet...

Sharad (Jim) Kaushik

Engineering Leader | Product | Cloud

6y

With the acquisition of Altera, Intel has ~40% of the FPGA market share and is well poised if others follow Microsoft's lead in using FPGAs for AI data processing.

Damn, they still don't get it after the Terminator series. God help us.
