Deep tech may stumble on insufficient computing power

Sophisticated algorithms need the kind of processing capacity that draws too much costly energy

It appears that many of the “deep tech” algorithms the world is excited about will run into physical barriers before they reach their true promise. Take Bitcoin. A cryptocurrency based on blockchain technology, it relies on a sophisticated algorithm that grows in complexity, so that only very few new Bitcoin are minted through a digital process called “mining”. For a simple description of Bitcoin and blockchain, you could refer to an earlier Mint column of mine.

Bitcoin’s assurance of validity is achieved by its “proving” algorithm, which is designed to keep increasing in mathematical complexity, and hence in the computing power needed to process it, as more Bitcoin are mined. Individual miners continually do work to assess the validity of each Bitcoin transaction and confirm that it adheres to the cryptocurrency’s rules, earning small amounts of new Bitcoin for their efforts. The problem of getting many miners to agree on the same history of transactions (and thereby validate it) is solved by having those miners race one another to be the first to create a valid “block”.
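To make that race concrete, here is a minimal proof-of-work sketch in Python. It is an illustration, not Bitcoin’s actual protocol (which hashes a structured block header twice with SHA-256 at a vastly higher difficulty), but it shows why each additional unit of difficulty multiplies the hashing work, and therefore the electricity, a miner must expend.

```python
# Minimal proof-of-work sketch (illustrative only; real Bitcoin uses
# double SHA-256 over a structured block header and far higher difficulty).
import hashlib

def mine(block_header: str, difficulty_bits: int):
    """Try nonces until the hash falls below the difficulty target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1  # every failed attempt is computing work (and electricity) spent

if __name__ == "__main__":
    # Each extra difficulty bit roughly doubles the expected number of hashes.
    for bits in (8, 16, 20):
        nonce, digest = mine("previous-hash|transactions", bits)
        print(f"difficulty={bits:2d} bits  nonce={nonce:9,d}  hash={digest[:16]}...")
```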

The machines that perform this work consume huge amounts of energy. According to Digiconomist.net, each transaction uses almost 544 kWh of electrical energy, enough to power the average US household for almost three weeks. The total energy consumption of the Bitcoin network alone is about 64 TWh a year, enough to provide for all the energy needs of Switzerland. The website also tracks the carbon footprint and electronic waste left behind by Bitcoin, both of which are startlingly high. This exploitation of resources is unsustainable in the long run, and directly impacts global warming. At a more mundane level, the costs of mining Bitcoin can outstrip the rewards.
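A quick back-of-the-envelope check of the household comparison, assuming an average US household uses roughly 29 kWh of electricity a day (about 10,600 kWh a year; the exact figure varies by source):

```python
# Rough sanity check of the "almost three weeks" comparison above.
kwh_per_transaction = 544        # Digiconomist figure quoted in the article
household_kwh_per_day = 29       # assumed average US household consumption

days = kwh_per_transaction / household_kwh_per_day
print(f"{days:.1f} days, i.e. about {days / 7:.1f} weeks")   # ~18.8 days, ~2.7 weeks
```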

But cryptocurrencies are not the world’s only hogs of computing power. Many Artificial Intelligence (AI) “deep learning” algorithms, built on neural networks, also place crushing demands on the planet’s digital processing capacity.

A “neural network" attempts to mimic the functioning of the human brain and nervous system in AI learning models. There are many of these. The two most widely used are recursive neural networks, which develop a memory pattern, and convolutional neural networks, which develop spatial reasoning. The first is used for tasks such as language translation, and the second for image processing. These use enormous computing power, as do other AI neural network models that help with “deep learning".

Frenetic research is going into new chip architectures that can handle the ever-increasing complexity of AI models more efficiently. Today’s computers are “binary”, meaning they depend on the two simple states of a transistor bit, which can be either on or off, and thus represent either a 0 or a 1. Newer chips try to achieve efficiency through other architectures, which should ostensibly help binary computers execute these algorithms more efficiently. Many are designed as graphics processing units (GPUs), since these are more capable of dealing with AI’s demands than the central processing units (CPUs) that are the mainstay of most devices.
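To see why GPUs are the chips of choice, consider that neural network training and inference are dominated by large matrix multiplications, exactly the kind of massively parallel arithmetic GPUs are built for. Below is a small sketch (again in PyTorch; the matrix size is arbitrary) that times one such multiplication on whichever device is available.

```python
# Times one large matrix multiplication, the core operation of neural networks.
# Falls back to the CPU if no GPU is available.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
c = a @ b                        # roughly 137 billion floating-point operations
if device == "cuda":
    torch.cuda.synchronize()     # wait for the GPU to finish before stopping the clock
print(f"{device}: 4096 x 4096 matmul took {time.perf_counter() - start:.3f} s")
```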

In a parallel attempt to get beyond binary computing, firms such as D-Wave, Google and IBM are working on a different class of machines called quantum computers, which make use of the so-called “qubit”, with each qubit able to hold 0 and 1 values simultaneously. This enhances computing power. The problem with these, though, is that they are far from seeing widespread adoption. First off, they are not yet sophisticated enough to manage today’s AI models efficiently, and second, they need to be maintained at temperatures close to absolute zero (-273° Celsius). This refrigeration, in turn, uses up enormous amounts of electrical energy.
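A rough way to see why qubits enhance computing power (this is an illustration of the state space, not of how a quantum computer is programmed): describing the joint state of n qubits classically requires 2^n complex amplitudes, so the capacity of a quantum register grows exponentially with its size.

```python
# How quickly the classical description of a qubit register grows.
import numpy as np

for n_qubits in (1, 2, 10, 30, 50):
    amplitudes = 2 ** n_qubits
    gigabytes = amplitudes * 16 / 1e9        # 16 bytes per complex128 amplitude
    print(f"{n_qubits:2d} qubits -> {amplitudes:>16,d} amplitudes "
          f"(~{gigabytes:,.3f} GB to store classically)")

# A uniform superposition over 3 qubits holds all 8 basis states at once.
state = np.full(2 ** 3, 1 / np.sqrt(2 ** 3), dtype=np.complex128)
print("norm check:", np.isclose(np.vdot(state, state).real, 1.0))
```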

Clearly, advances in both binary chip design and quantum computing are not keeping pace with the increasing sophistication of deep tech algorithms.

In a research paper, Neil Thompson of the Massachusetts Institute of Technology and others analyse five widely-used AI application areas and show that advances in each of these fields of use come at a huge cost, since they are reliant on massive increases in computing capability. The authors argue that extrapolating this reliance forward reveals that current progress is rapidly becoming economically, technically and environmentally unsustainable.

Sustained progress in these applications will require changes to their deep learning algorithms and/or moving away from deep learning to other machine learning models that allow greater efficiency in their use of computing capability. The authors further argue that we are currently in an era where improvements in hardware performance are slowing, which means that this shift away from deep neural networks is now all the more urgent.

Thompson et al. argue that the economic, environmental and purely technical costs of providing all this additional computing power will soon constrain deep learning and a range of applications, making the achievement of key milestones impossible if current trajectories hold.

We are designing increasingly sophisticated algorithms, but we don’t yet have computers that are sophisticated enough to match their demands efficiently. Without significant changes in how AI models are built, the usefulness of AI and other forms of deep tech is likely to hit a wall soon.

Siddharth Pai has led over $20 billion in technology transactions. He is the founder of Siana Capital, a venture fund management company focused on deep science and tech in India. These are opinion pieces; the opinions expressed are the author's own and do not represent any entity.

*This article first appeared in print in Mint and online at www.livemint.com

Meharnosh Bara

Supply Chain Operations II Global Logistics & Operations II Global Sourcing II Online Marketing II Warehousing Management II Process Compliance Specialist II Growth Specialist II Electronics Engineer II Problem Solving

4y

Thanks for sharing this insight :)

KHADER BASHA KK EXPORTS

Chief Executive Officer at KK Exports - India

4y

Good To Know

Radhika Kamath

Content Strategist | Author | Digital Marketer | Sales Enablement | Research & Advisory

4y

Very well articulated, Sid. The admission by the MIT researchers is alarming. The pace of compute power reaching its maximum may only get shorter if current trends of investments and advancements in AI/ML are any indication. (White House is the latest to join the bandwagon with funding of $1B into QC/AI R&D). In the meantime, alternatives such as Composable architecture and DNA computing are being explored. While these do hold promise by offering a modular, elastic and shared ‘compute components’ much like the cloud and translating into self-contained systems/apps, they are in infancy stage or at best, early pilots. So, it may eventually boil down to a frenetic race b/w these and the algos!

Viraj Kulkarni

AI-Powered Speech Therapy | Founder @ Iyaso

4y

Gate based quantum computers are still far from becoming practically useful, but I’m more optimistic about quantum annealers. But your point is valid: we’re hitting the upper bounds of computational power that’s available.


Excellent article Sid. Thanks for explaining it in terms we can easily understand.
