Will Quantum Machine Learning one day make as big waves as ChatGPT?

This post is about a very nice paper published in March 2023 whose authors summarise, in my opinion, rather well the status of a technical field called Quantum Machine Learning (QML). We all know about machine learning (ML): just think of ChatGPT for a moment and how powerful many flavours of ML have become these days.

Regarding classical ML (i.e., with no quantum computing involved), there are still things to improve, and much progress is being made. When leveraging quantum computing for ML, the hope is, for example, to gain speedups and improved prediction capabilities (from better identification of hidden patterns in data) and thereby outperform classical ML methods. The paper I refer to gives various hints about application domains where QML might provide real benefits for businesses in the future. In addition, the paper also lists a good number of challenges that are still to be overcome.

Some of the more important takeaways from this paper are in my view:

Speedups from QML have indeed been observed (great news), however only for rather contrived problems (bad news). Reaching a speedup for data science more generally “is still uncertain even at the theoretical level”. Yes, I do agree. Therefore, more work needs to be done.

Regarding the stage QML is in, the authors provide a nice analogy: “After the invention of the laser, it was called a solution in search of a problem. To some degree, the situation with QML is similar”. Well, at least as of now in April 2023, I do agree as well. That's a bit inconvenient today, but then again, the use of lasers truly exploded in the years following their invention.

Then there is a nice observation regarding what mathematicians call the Hilbert space. My description for the layperson is: If you have (big) data with, say, 5 dimensions, and you want to search for hidden patterns in it, maybe for clustering or classification purposes, then with quantum computing you can project this data into a higher-dimensional space, e.g. one with 256 dimensions. Think of this as the space where the quantum states of a quantum computer live. Why can this be useful?

Let me explain with a small detour. On 14 Oct 2023, there will be an annular solar eclipse visible in parts of North and South America, and today, 20 April 2023, there has been one visible in Australia. In an annular solar eclipse, the moon does not fully cover the sun. So, when you look at the sky, you see a black disc (the moon) and a yellow ring around it (what remains visible of the sun); it gets a bit dark and the birds stop tweeting (usually). Since what we look at is so far away, we can treat it as an image in two dimensions. Now your task: find a hyperplane that separates what belongs to the moon from what belongs to the sun in this image in the sky. Since the image has 2 dimensions, the corresponding hyperplane would be a line. Can you draw a straight line between the yellow ring left over from the sun and the black disc of the moon in the sky? No.

If we now take this (same, real) data and look at it in 3 dimensions, the new task is to find a hyperplane of 2 dimensions that separates the data (i.e. the pieces belonging to the sun from the pieces belonging to the moon). That’s quite easy: just hold a firm (large) sheet of paper between the moon and the sun, and you have solved the problem. The data points belonging to the moon and those belonging to the sun have been linearly separated. That was not possible in 2 dimensions (when staring at the annular eclipse).
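
To make this concrete, here is a minimal sketch in Python (my own illustration, not code from the paper): two classes laid out like the eclipse image, a ‘moon’ disc inside a ‘sun’ ring, cannot be split by a straight line in 2D, but after lifting each point (x, y) to (x, y, x²+y²), a flat plane separates them perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# 'Moon': points inside a disc of radius 1; 'sun': points in a ring of radii 1.5 to 2.
def sample_ring(r_min, r_max, n):
    radius = rng.uniform(r_min, r_max, n)
    angle = rng.uniform(0, 2 * np.pi, n)
    return np.c_[radius * np.cos(angle), radius * np.sin(angle)]

moon_xy = sample_ring(0.0, 1.0, n)  # no straight line in 2D can
sun_xy = sample_ring(1.5, 2.0, n)   # separate these two sets

# Lift each 2D point to 3D with z = x^2 + y^2 (squared distance from the centre).
def lift(xy):
    return np.c_[xy, (xy ** 2).sum(axis=1)]

moon_3d, sun_3d = lift(moon_xy), lift(sun_xy)

# In 3D, the flat plane z = 1.5 (our 'sheet of paper') separates the classes:
# every moon point has z <= 1, every sun point has z >= 2.25.
print(np.all(moon_3d[:, 2] < 1.5))  # True
print(np.all(sun_3d[:, 2] > 1.5))   # True
```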

So, that’s the trick for separating data belonging to two or more classes in the Hilbert space of quantum computing: project the data into a space of higher dimension than the original data.

This is a wonderful idea, as long as the original data belonging to, say, two different classes (the good and the ugly, the genuine financial transactions and the fraudulent ones) gets projected into different ‘corners’ of the huge Hilbert space.

A space of 256 (or more) dimensions has a great many corners and niches to hide in. A bit like: all the good financial transactions end up in the 1st galactic quadrant of the Milky Way, and all the bad transactions end up in the 3rd galactic quadrant, separated by a few light years of distance, in which you can slide a sheet of paper (or many) to linearly separate the good from the ugly. Then it should be easy to do binary classification.

So, the gist of the story is: large spaces can help to distinguish between different classes of data points.
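
For readers who like to see this in code, here is a toy sketch (my own, under simplifying assumptions, not code from the paper) of one common way quantum computers perform such projections: an angle-encoding feature map, where each classical feature sets the rotation of one qubit and the tensor product of all qubits lives in a 2^n-dimensional Hilbert space. Earlier I mentioned projecting data into, e.g., 256 dimensions; with one qubit per feature, 8 features already give you a 256-dimensional state.

```python
import numpy as np

def angle_encode(x):
    """Encode a classical feature vector x into a simulated quantum state:
    one qubit per feature, so the state vector has 2**len(x) entries."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])  # single-qubit state
        state = np.kron(state, qubit)                       # tensor product
    return state

x = np.random.default_rng(1).uniform(0, np.pi, 8)  # 8 classical features
psi = angle_encode(x)
print(psi.shape)                   # (256,): 8 qubits span a 256-dimensional space
print(np.isclose(psi @ psi, 1.0))  # True: a properly normalised state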

Back to the paper: The authors point out that sometimes we have to compare data points, e.g. to measure their similarity. And before I proceed, recall: Hilbert spaces in quantum computing can be very large. And here comes the authors’ nice observation: “Comparing objects in exponentially large Hilbert spaces requires an exponential precision, as their overlap is usually exponentially small.” I think this is a beautiful observation, which translates into a real challenge. What if we add quantum noise? And shot noise?
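
We can even check this numerically with a quick classical simulation (my sketch, not the authors’ code): two randomly chosen states in a 2^n-dimensional Hilbert space have an expected squared overlap of about 1/2^n, so estimating that overlap to useful precision quickly becomes hopeless as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_state(dim):
    """Draw an (approximately Haar-)random complex unit vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

for n_qubits in (4, 8, 12, 16):
    dim = 2 ** n_qubits
    overlaps = [abs(np.vdot(random_state(dim), random_state(dim))) ** 2
                for _ in range(50)]
    # The average squared overlap sits near 1/dim: it halves with every
    # extra qubit, which is exactly the 'exponentially small' effect.
    print(f"{n_qubits:2d} qubits: mean overlap {np.mean(overlaps):.2e}, 1/dim = {1/dim:.2e}")
```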

Anyway, the authors continue and, among other things, ponder where quantum advantage might arise in the future. Here comes a bit of a dampener: when the data is of purely classical origin, as is certainly the case today for most (or shall we say all) ML applications in telecommunications, automation, finance etc., “there is no known exponential advantage”. Bad. But we shall not lose hope, since the authors also state that “it is still reasonable to expect polynomial advantage”, for instance a quadratic speedup compared to classical machine learning. That lightens the mood.
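
To get a feel for what a quadratic speedup would buy, a tiny back-of-the-envelope calculation (my own arithmetic, not from the paper): roughly N steps become roughly √N steps.

```python
import math

# If a classical method needs about N steps, a quadratic quantum speedup
# needs about sqrt(N) steps. Illustrative numbers only.
for N in (10**6, 10**9, 10**12):
    print(f"N = {N:.0e} classical steps -> about {math.isqrt(N):,} quantum steps")
```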

On this positive note, I do recommend this paper for interested readers (great for people with a bit of background in data science and still educational for those curious about ‘this thing called quantum machine learning’).

Link to the good stuff:

https://arxiv.org/pdf/2303.09491.pdf

M. Cerezo et al., “Challenges and Opportunities in Quantum Machine Learning”, arXiv:2303.09491, 16 Mar 2023.

Karan Pinto

Entrepreneur making deep tech useful | Quantum AI, Cybersecurity & Tech Infrastructure

1y

Alexey Melnikov Romi Sumaria good read from Guenter!

Pedro Almenar

Sr Network Development Manager of Inter DC Core Services at Amazon Web Services

1y

Thanks for sharing, Guenter. Very interesting article... The problems where QC can be applied today (with advantages over classical methods) are quite limited, but I agree with you that it is a matter of time. The classification one that you describe looks like a good candidate... There is also the problem of realisation: how to build a quantum computer with enough qubits to be actually useful (at an affordable price), but that's also a parallel race that will yield results quite soon (we get news about improvements every day). A topic to watch closely, certainly.

Marcus Dormanns

Director Product Management & Business Development at COMPRION

1y

Great summary of a very interesting topic. Did you already ask ChatGPT about this?

Jinmi O.

Global TPM @ Meta Infra | Member Board of Trustees, CoFounder @ Akama Fund | Regional Co-Chair Black@ Meta

1y

Great post Guenter Klas you are always ahead of the curve!
