Optimisation Using Quantum Computing – Automated Selection of Best Algorithm
Src: G. Klas

I’m writing about this topic for two reasons: First, the related challenge has been in existence for a long time and good solutions to the challenge would be highly valuable. Second, I now stumbled over a very recent research paper which proposes a solution that seems to work surprisingly well for certain kinds of use cases, and which offers inspiration regarding ways forward in this field.

The challenge

Assume we have a real-world optimisation problem, a use case for instance in logistics or communications network design, where finding an optimal solution would lead to major cost savings. We might aim to solve the problem using a classical optimiser. If we find that the problem size is too big, or that the solution offered is too far from optimal, we might want to turn to quantum computing (today to determine the feasibility of a new approach, in a couple of years to really solve industrial-size problems). And we might want to see whether any algorithm that solves an optimisation problem by leveraging a quantum computer (here called a quantum solver) leads to better or faster results.

There is no lack of quantum solvers today. The challenge is that the best-performing quantum solver unfortunately depends on the nature of the use case we want to solve. So which quantum solver should we select for a brand-new use case? The notion of the best-performing quantum solver needs to be refined further into i) the type of quantum solver and ii) the setting of its configuration parameters, such that the combination of type and associated parameter setting is optimal for the computational problem at hand.

Given a new use case, the challenge is how to guess or predict which quantum optimisation solver will be the optimal, most suitable, best-performing one, and how to set its associated parameters. A trial-and-error approach is most often infeasible; the search space can be large and substantial costs could arise. A further motivation is that, with high likelihood, a domain expert in optimisation (for example, working in logistics optimisation or telco network design) has insufficient know-how about quantum computing and quantum algorithms to tackle the complex decision-making challenge above.

As a side note: A similar issue exists in other areas as well, like quantum machine learning (QML), where the most suitable QML approach often depends on the nature of the data too.

A solution

A paper by Deborah Volpe and co-authors from August 2024 suggests a predictive, machine-learning-based solution to selecting an optimal quantum solver for a new optimisation problem. The starting point is a specific set of available quantum optimisation solvers. Given a new optimisation use case, a predictor module can then be used to predict which solver would be optimal (likely producing the best results) for the optimisation problem at hand. After a particular solver has been automatically identified, the automation framework also selects the most adequate parameter settings for that chosen solver, i.e., it adjusts the parameter values based on the specific optimisation problem size and problem characteristics.

Benefits

In various industry sectors, users of such an automation system can then focus on their domain expertise (outside quantum computing) and don't need to acquire special expertise in understanding complex quantum solvers and quantum computing more generally. The hope is that this kind of design automation broadens the adoption of relevant quantum techniques. This reminds me of Citizen Access to Big Data and the rise of the Citizen Data Scientist (a development that started a couple of years ago). Just that this time, one might more aptly speak of the Citizen Quantum Optimisation Scientist.

Limitations

The use cases addressable by the solution suggested in the paper I refer to are limited to optimisation problems that can be formulated as QUBO problems: Quadratic Unconstrained Binary Optimisation problems. The quantum optimisers (or solvers) considered in the solution are: Quantum Annealer, Quantum Approximate Optimisation Algorithm (QAOA), Variational Quantum Eigensolver (VQE), and Grover Adaptive Search. However, the set of quantum solvers could be easily extended in the future.
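
To make the QUBO formulation concrete, here is a minimal sketch (not taken from the paper) that defines a toy 3-variable QUBO as a coefficient matrix and solves it by brute-force enumeration with NumPy; real instances would of course be far too large for this, which is exactly why quantum solvers are being explored:

```python
import itertools
import numpy as np

# Toy QUBO: minimise x^T Q x over binary vectors x.
# The coefficients are made up purely for illustration.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best_x, best_val = None, float("inf")
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits)
    val = x @ Q @ x          # QUBO objective value for this assignment
    if val < best_val:
        best_x, best_val = x, val

print("optimal assignment:", best_x, "objective:", best_val)
```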

How does the solution work – the main idea

The core of the solution is the automated quantum solver selection. This is achieved by turning the solver selection problem into a classification problem, which in turn is solved using supervised learning (in the case of the paper, with a machine learning (ML) approach called Random Forest). Thus, to create an ML model that will eventually act as a predictor of the best quantum solver for a given input problem, the authors needed a labelled training dataset suitable for supervised learning.

If we imagine a dataset with examples (samples), feature columns, and class labels, then in this case an example/sample would correspond to a real QUBO problem formulation, the features would correspond to different characteristics of such a QUBO formulation (an example feature: the number of non-zero 2nd-order QUBO coefficients in the mathematical formulation of the optimisation problem), and the class labels would consist of the types of quantum solvers (like QAOA and VQE).
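
As an illustration, simple features of this kind can be computed directly from the QUBO coefficient matrix. The following sketch uses a feature set of my own choosing, assuming the QUBO is given as a NumPy matrix; the paper defines its own feature list:

```python
import numpy as np

def qubo_features(Q: np.ndarray) -> dict:
    """Extract simple descriptive features from a QUBO coefficient matrix.
    The feature set here is illustrative, not the one used in the paper."""
    n = Q.shape[0]
    off_diag = Q[~np.eye(n, dtype=bool)]   # 2nd-order (quadratic) coefficients
    linear = np.diag(Q)                     # 1st-order (linear) coefficients
    return {
        "num_variables": n,
        "nonzero_quadratic_terms": int(np.count_nonzero(off_diag)),
        "quadratic_density": np.count_nonzero(off_diag) / max(off_diag.size, 1),
        "linear_term_mean": float(np.mean(linear)),
        "coefficient_range": float(Q.max() - Q.min()),
    }

# Example on the toy QUBO from above
print(qubo_features(np.array([[-1.0, 2.0, 0.0],
                              [ 0.0, -1.0, 2.0],
                              [ 0.0, 0.0, -1.0]])))
```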

The ML model is therefore trained to map features characterising a QUBO problem formulation to a target (optimal) solver. Regarding the dataset for training and evaluation, the authors considered 500 different QUBO problems. This is a remarkably high number of QUBO problems, but they also agree that the dataset should ideally be even larger, which indeed is a challenge. Once the ML model has been trained, it can be deployed for inference (prediction mode) and integrated into a workflow for optimisation. You bring along a valuable new optimisation challenge and invoke a workflow in which the prediction module then predicts the most suitable quantum solver to use.
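
To show what this training and inference step could look like in principle, here is a minimal sketch with scikit-learn's Random Forest, using randomly generated stand-in data instead of the authors' actual 500 labelled QUBO problems and their feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the labelled dataset described above:
# rows = QUBO instances, columns = extracted features,
# labels = best-performing solver found empirically for each instance.
rng = np.random.default_rng(0)
X = rng.random((500, 5))                                        # 500 problems, 5 features (illustrative)
y = rng.choice(["annealer", "QAOA", "VQE", "Grover"], size=500)  # class labels = solver types

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Inference: predict the most suitable solver for a new QUBO's feature vector.
new_problem_features = rng.random((1, 5))
print("suggested solver:", clf.predict(new_problem_features)[0])
```

On real data the features would come from a function such as the qubo_features sketch above, and the labels from actually running each solver on each training problem.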

In addition, for automatically configuring the (hyper)parameters of a selected quantum solver, the authors leverage experimental results about such solvers reported in the literature.
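
For illustration only, such a parameter-configuration step could be as simple as a lookup keyed on solver type and problem size; the concrete values below are hypothetical placeholders, not the settings reported in the literature or in the paper:

```python
def configure_solver(solver: str, num_variables: int) -> dict:
    """Illustrative parameter defaults keyed on solver type and problem size.
    All numbers are hypothetical placeholders for the sake of the sketch."""
    if solver == "QAOA":
        # e.g. deeper circuits for larger problems, up to a cap
        return {"layers": min(3 + num_variables // 20, 10), "optimizer": "COBYLA"}
    if solver == "VQE":
        return {"ansatz_reps": 2 if num_variables < 30 else 4, "optimizer": "SPSA"}
    if solver == "annealer":
        return {"num_reads": 1000, "annealing_time_us": 20}
    if solver == "Grover":
        return {"max_iterations": 3 * num_variables}
    raise ValueError(f"unknown solver: {solver}")

print(configure_solver("QAOA", num_variables=40))
```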

Next steps

The authors suggest that their research demonstrates the feasibility of their suggested approach, which is indeed promising. However, they also point out that the training dataset (the set of QUBO problems mapped to an ideal, best-performing quantum solver) should ideally grow in size.

Moreover, they need to incorporate problems of bigger size into the dataset, as the size of the currently included QUBO problems is strongly constrained by the limits of today's quantum computer simulators and processors. Mind: each QUBO problem in the training dataset needs to be mapped to a class label, which is the type of the best-performing quantum solver. This implies that these QUBO problems in the training dataset need to be solved either using a quantum simulator or a real quantum processor.

They also believe that, in the future, further improvements can be made in smartly predicting the optimal solver parameter settings, instead of deducing such settings from literature results including heuristics.

Takeaways

I find the paper discussed above particularly useful as it is comprehensive, carefully authored, balanced in its presentation and very readable (in other words, recommended for a weekend). There are some very nice sections in it, e.g., one summarising the process of going from an optimisation problem to a quantum solution.

Some time ago, I implemented an optimisation problem myself, formulating the use case as a QUBO problem, specifying it in Python with the help of Pyomo, and finally solving the QUBO problem with a variant of QAOA available in the SDK from Classiq. At the time, I selected QAOA not knowing whether it would be the best solver for my problem at hand, and I didn't have the time to explore alternative solvers. I believe a design automation solution similar to the one suggested by the authors of the paper discussed would in fact be a great asset for the industry. Maybe some good commercial or open-source solution already exists?
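
For readers curious what such a formulation looks like, here is a minimal Pyomo sketch of a toy QUBO; the coefficients are made up, this is not my actual use case, and the hand-off to a quantum solver (e.g. via the Classiq SDK) is not shown:

```python
from pyomo.environ import ConcreteModel, Var, Objective, Binary, minimize

# Toy QUBO: minimise sum_i q_i x_i + sum_{i<j} Q_ij x_i x_j over binary x.
linear = {0: -1.0, 1: -1.0, 2: -1.0}          # 1st-order coefficients
quadratic = {(0, 1): 2.0, (1, 2): 2.0}        # 2nd-order coefficients

m = ConcreteModel()
m.x = Var(range(3), domain=Binary)            # binary decision variables
m.obj = Objective(
    expr=sum(linear[i] * m.x[i] for i in linear)
    + sum(quadratic[i, j] * m.x[i] * m.x[j] for (i, j) in quadratic),
    sense=minimize,
)

m.obj.pprint()  # inspect the objective; solving requires a QUBO-capable backend
```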

An accompanying paper that was published ahead of the one discussed above in June 2024 is also informative and available here.

Finally, the researchers provide the code of the solution on Github as part of the Munich Quantum Toolkit.

Keywords: Quantum Computing, Quantum Optimisation, Design Automation, Automated Quantum Solver Selection

Update 23 Aug 2024

This update aims to address the question whether there are benchmarks or reports of a quantum optimiser outperforming classical methods.

This is a good and more fundamental question, since if quantum methods do not outperform classical methods, there is little need for automating the selection of a quantum optimisation algorithm (which is the main topic of the article above).

I would not be surprised to find some papers here and there claiming that a specific quantum approach to optimisation outperforms a classical reference solution for a particular problem in some metric (e.g., time to solution, time to convergence, approximation ratio, resource consumption). Overall, I think it is too early to make a final, more general judgement, because big optimisation problem sizes cannot yet be tackled with today's limited quantum processors and simulators. Thus, no comparison can be made for many industrial-size challenges. I know a business use case where a modern, classical optimiser runs out of steam at large problem size, which is then a motivation to see whether a quantum solution could successfully address the problem. However, with today's state of quantum processor technology, we cannot yet test this in practice.

The view, e.g., from 2312.02279 (arxiv.org): “While quantum optimization algorithms will not necessarily improve the performance for all problems, they provide additional tools with new properties that can have significant impact and improve performance for some problem instances and thus, improve our overall capabilities in optimization”. Some relief: we may consider this as progress. However, there is also a hint in the paper: “Since quantum computing will not necessarily accelerate all problems, it is crucial to understand exactly where advantages might be found.” In short, more research needs to be done.

That work also recognises the issue of benchmarks (what to compare a new quantum approach to), which is also very prevalent, e.g., in quantum machine learning: “benchmarking problem instances known to be hard for today’s solvers are often (hand-) crafted. Thus, even if a quantum advantage could be shown for one, it would not necessarily generalize to other problem instances.” The issue of benchmarking (versus classical methods) is a serious one: I hope to find a paper in the optimisation context that is as good and honest as the one from Xanadu in the QML context ("Better than classical?"). In my view, the quest towards quantum advantage in optimisation is still on, similar to other application areas outside optimisation.

Some hope is expressed in 2310.03011 (arxiv.org): “We discuss the (limited, but existing) evidence that these approaches [variational algos, adiabatic algo, ... ] could generate significant advantages, as well as the associated caveats.” At the same time the authors say what they have not covered: “work on quantum approaches for approximate combinatorial optimization…. While approximate optimization remains an interesting direction, these quantum algorithms are heuristic and there is a general scarcity of concrete evidence that they will deliver practical advantages.” Certainly true as of 2023 when that document was published, and likely still true today.

Some further hope is expressed by Gurobi: "Even just a 1% improvement in these areas [incl. telco network optimization, renewable energy grid optimisation and others] can translate into millions in annual cost savings for an organization". I think this is true. And it might raise another point: Maybe in this case we shouldn't be too obsessed with focussing on speed (and exponential speed-up) as the most important metric to measure commercial benefits. Outside optimisation, within the context of quantum machine learning, Maria Schuld and Nathan Killoran have dared to raise a similar question in this paper from 2023: "We argue that these challenges call for a critical debate on whether quantum advantage and the narrative of “beating” classical machine learning should continue to dominate the literature the way it does". Maybe a similar observation applies to optimisation.

Gurobi is also right in pointing out current challenges, including the major focus on QUBO problems. But work is under way to go beyond Quadratic Unconstrained Binary Optimisation, as in this paper from Daniel Goldsmith and Joe Day-Evans (Digital Catapult) from June 2024.

Regarding claims in papers that a certain quantum approach outperforms a classical reference method, I suspect Scott Aaronson’s advice to “Read the Fine Print” (as he thinks is necessary for quantum machine learning algorithms) would also be healthy advice for the quantum optimisation field.

Marcin Kamiński

Computer Scientist | Data + Algorithms + Optimization

5 months

Great points, Guenter Klas. I agree that automated algorithm design is a valuable method for improving optimization algorithms. Choosing the best solver is a general issue, not limited to quantum computing -- it involves selecting the most effective heuristic and parameters from a range of options. The paper on quantum QUBO solvers surprisingly omits reference to the classical work "What Works Best When? A Systematic Evaluation of Heuristics for Max-Cut and QUBO." Additionally, there are many ways to generate features and train ML models as predictors. QUBO can be converted to Max-Cut, and we’ve seen good results using GNNs on the cut graph as predictors. If you're interested, please feel free to reach out. https://optimization-online.org/wp-content/uploads/2015/05/4895.pdf

Can you point me to benchmarks or reports of a quantum optimizer outperforming classical methods?

Steven Gans

Building Responsible Intelligence For Every Environment

6 months

Interesting new problem outlined here to investigate which solver to use. This gives me déjà vu from when I worked in the past on the problem space of NAS (neural architecture search). I highly recommend watching this great video as a complement to this article by Guenter Klas: https://youtu.be/kn3ae4mg1i8?si=wqt62yLmTbvT_9jn

Robert J.

Data Scientist

6 months

Great article, Guenter! It definitely piqued my interest to read the article by Volpe et al., thank you!
