The Intersection of AI and Quantum

We are all very lucky to be living in this era. If you were born in the last four to five decades, you have witnessed a path-breaking shift in technology. What was once considered science fiction has become a cherished science. We are sending a signal to the cosmos, a signal that human beings are ready to become a class 2 species, a species that can roam interstellar space and discover new worlds. A few decades back, handheld devices were a dream; the first one, built by Motorola, famously drew inspiration from the communicators of the early Star Trek series.

We are also lucky to have witnessed artificial intelligence come out of the shadows of its winters three times: first in the early 2000s, when statistical techniques fuelled its growth; then in 2012, when GPUs made large-scale computation easily accessible; and finally last year, when ChatGPT showed the world how AI can generate language that sounds human. If anyone thinks this is the pinnacle of the technological revolution, think again, because this is just the tip of the iceberg.


In this Monday technology series, I plan to cover technology broadly; this particular piece covers what AI is progressing into and how quantum computing is on a path to collide with it. It is akin to two galaxies, our own home, the Milky Way, on a collision course with another giant, the Andromeda galaxy (don't worry, this is not going to happen for another 5 billion years).

The hype around ChatGPT has nearly settled now. During this hype, we saw the topic being discussed in every board meeting with bewildered amazement, and companies and tech groups rushing to find use cases within their organizations.

As the dust settles, companies have run loads of experiments. Bigger ones have set up private instances with OpenAI so that their data stays safe, while start-ups and smaller firms have built wrappers on top of open-source generative AI models available through hubs like Hugging Face to accentuate their portfolios.

Few companies have realized that AI in its current form is not just unsustainable but also carbon intensive. Training a single model in a large data center can generate 57 times more carbon dioxide than one person's air trip of around 1,500 km.

As a result, research, and the race to lead in AI, has shifted towards making AI sustainable and more compute-friendly. The other change that is brewing is ensuring AI is responsible. Responsible AI is a broad term, and it cannot easily be catered to by one company or country on behalf of the rest of the world.

Responsibility includes a whole slew of measures and markers to adhere to, from inclusiveness, safety, efficacy, accountability and transparency to explainability. Some are more easily catered to than others. For example, explainability is a difficult ask, especially when the newer AI models are built on deep neural networks with millions of weights, parameters and nodes. Others, like transparency, are easier to handle because you can backtrack through your solution. This is a major area of research, and Makers Lab, the R&D arm of Tech Mahindra, continues to delve into it.

A recent piece of work done with the World Economic Forum also highlights this facet (https://www.weforum.org/publications/chatbots-reset-framework-pilot-projects-using-chatbots-in-healthcare/).

AI is also shifting its nature to ensure models are smaller, hence the move from LLMs (large language models) to SLMs (small language models). Research has found that around 1 to 1.5 billion parameters are good enough for a model to grasp language and its intricacies. It really depends upon the use case, so while you are seeing an increase in investments in AI, a major portion is going into this research.

Mechanisms to compress models, like quantization, sparsity and pruning, along with alternative architectures that are brain-inspired or quantum-inspired (tensor networks), are now being pursued at full swing by companies, research organizations and academia around the world. Makers Lab is also researching these, along with its partners, to provide commercial value to its customers.
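
To make the compression idea concrete, here is a minimal, illustrative sketch using PyTorch's built-in pruning and dynamic-quantization utilities. The toy two-layer network and the 50% sparsity level are arbitrary choices for demonstration, not a description of any production pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy two-layer network standing in for a much larger language model.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 64))

# Pruning: zero out the 50% of weights with the smallest L1 magnitude.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Dynamic quantization: store Linear weights as 8-bit integers instead of
# 32-bit floats, cutting the memory of those layers roughly fourfold.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```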

While the AI world progresses, the world of quantum is showing green shoots. The biggest change coming via quantum is in quantum security, through PQC (post-quantum cryptography) and QKD (quantum key distribution). It is a well-known fact that quantum computing poses a threat to our well-known security algorithms, particularly those that rest on number-theoretic calculations, such as the integer factorization underpinning RSA.

By their inherent ability to explore a vast number of possibilities quickly and simultaneously, quantum techniques have shown in synthetic scenarios that they can break algorithms like RSA, which were hitherto considered safe and unbreakable. There is also a slew of start-ups in this area, both in India and across the world.
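
To see why RSA is exposed, here is a toy sketch with deliberately tiny primes (real keys use 2048-bit moduli). The brute-force loop stands in for the factoring step that Shor's algorithm would make efficient on a large enough quantum computer; everything else about key recovery is simple arithmetic.

```python
# Toy RSA with tiny primes, for illustration only (requires Python 3.8+).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent

cipher = pow(42, e, n)       # encrypt the message 42

# The "attack": factor n, recompute phi, and derive the private key.
# Classically this is exponential in the key size; Shor's algorithm
# makes it polynomial on a sufficiently large quantum computer.
for candidate in range(2, n):
    if n % candidate == 0:
        p2, q2 = candidate, n // candidate
        break
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(cipher, d2, n))    # recovers 42
```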

While security may be considered a horizontal plane, the world of AI is now being shifted by quantum machine learning, a major area for us to pursue. Complex combinatorial problems, which involve a large number of factors and are difficult to solve using classical AI, are now finding solutions within the hybrid world of AI and quantum computing.

Quantum computing is a difficult subject to understand, largely because the theoretical science is non-intuitive. Having said that, quantum computing seen through the lens of mathematics is exceedingly simple, simpler than much of classical mathematics, because it rests on linear algebra at its base.
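
As an illustration of that linear-algebra view (a minimal sketch with NumPy, not a full simulator): a qubit is a unit vector, gates are unitary matrices, and measurement probabilities are squared amplitudes.

```python
import numpy as np

# A qubit is just a length-2 complex vector with unit norm.
zero = np.array([1, 0], dtype=complex)          # |0>

# Gates are unitary matrices; the Hadamard gate creates superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero                                # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared amplitudes (Born rule).
print(np.abs(state) ** 2)                       # [0.5 0.5]

# Two qubits live in the tensor (Kronecker) product space: 4 amplitudes.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(state, zero)              # entangled Bell pair
print(np.abs(bell) ** 2)                        # [0.5 0.  0.  0.5]
```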

Quantum computing is also the most sustainable form of computing, because it is reversible at its origin. Reversibility refers to the fact that we can derive the input from the output by moving backwards, without expending a large amount of energy. Classical computation, like time, flows in one direction and is hence irreversible.
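
A toy illustration of the contrast: classical AND erases information, while the quantum CNOT gate is its own inverse. The CNOT below is modelled as plain classical pseudocode on bits, not a quantum simulation, purely to show the reversibility property.

```python
# Classical AND is irreversible: three distinct inputs map to output 0,
# so the input cannot be recovered from the output, and erasing that
# information has a physical energy cost (Landauer's principle).
for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", a & b)

# CNOT is reversible: applying it twice returns every input unchanged,
# so no information is ever erased.
def cnot(control, target):
    return control, target ^ control

for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)
print("CNOT is its own inverse on all inputs")
```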

Problem statements like the travelling salesman problem, the generation of small molecules in chemistry for drug analysis, fraud analysis, portfolio optimization in banking, network digital twins and network-layout understanding lend themselves to this hybrid structure of classical AI and quantum AI.
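
One common hybrid pattern is to recast such problems as QUBOs (quadratic unconstrained binary optimization). Here is a minimal sketch for a toy portfolio-selection instance; the matrix entries are invented for illustration, and the classical brute-force loop stands in for the quantum annealer or QAOA circuit that would sample low-energy solutions at scale.

```python
import itertools
import numpy as np

# Toy portfolio selection as a QUBO: the diagonal rewards expected
# returns (negative energy), off-diagonal entries penalize picking
# correlated asset pairs. Values are illustrative only.
Q = np.array([[-0.8,  0.3,  0.1],
              [ 0.3, -0.6,  0.2],
              [ 0.1,  0.2, -0.7]])

# Classical brute force over all bitstrings; a quantum annealer or a
# QAOA routine would instead sample low-energy bitstrings from this Q.
best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("selected assets:", best)   # (1, 0, 1): picks the uncorrelated pair
```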

As an organization, we are at the forefront of all of this, driving this change. The R&D arm today is delving into all of the above areas in a bid to create solutions and accelerators for our client partners and account-facing teams. Project Indus and Project Garuda are offshoots of the same. The learning obtained from building an LLM from scratch is lending itself to the multitude of tools and techniques described above.

As I bid you a happy week ahead, I implore you to read about these topics and give your brain cells a tickle.

Mohammad Yasin

Problem Solver | Solutioning | OSS | Digital Transformation | 5G | Decision Intelligence | TM Forum Member

8 months

Well articulated article Nikhil Malhotra. I liked the introduction pitch, the artistic framing of the galaxies. You rightly touch the chord of carbon-intensive AI; I also discovered this problem last year (while writing a whitepaper for a course) and found that Bill Gates had started on it way back in 2015 and achieved an architectural success too, though the plan's execution reached a stalemate due to the geopolitical scenario. My 2 cents: this Earth, part of the Milky Way, needs to unite as one unit to be more fortunate for the 8 billion individuals we are now. Great going, and keep writing over the weekends; that's the most efficient time for writing. Just to share, as a passing of the baton: we, my son and I, write over the weekends as a joint exercise.

Richa Arora

Technical Writing || Information Developer || Data&Analytics || Gen AI Enthusiast || MS Copilot || Certified SMP @ Tech Mahindra || @ HCL Technologies || @ Nucleus Software || @ Mando || @InterGlobe Technologies (Indigo)

8 months

Truly inspired by the insights you shared, sir…

Arvind Tiwary

Multi Disciplinary, Futurist GreenPill:

8 months

The exponential usage of energy and the need for water cooling is an issue far bigger than the noise around Bitcoin. According to a new study by University of California Riverside researchers, roughly two weeks of training for GPT-3 consumed about 700,000 liters of freshwater. Global AI demand is projected to account for 4.2-6.6 billion cubic meters of water withdrawal by 2027, which is more than the total annual water withdrawal of Denmark or half that of the United Kingdom. https://news.ucr.edu/articles/2023/04/28/ai-programs-consume-large-volumes-scarce-water

Asish Gupta

Group Practice Head | GCP Data Architect | Data & Analytics (Communications, Media & Entertainment) - Head of Solutions at Tech Mahindra

8 months

Very insightful - good to know about model compression
