Noisy vs. fault-tolerant quantum computing - understanding the needs
Morten Bache
Scientific Director, PhD, DrTechn - Quantum Technologies, Data Science, AI, Gefion AI Supercomputer - at the Novo Nordisk Foundation
It is critical to understand the difference between today’s noisy and tomorrow’s fault-tolerant quantum computers. Solving complex problems of real value, e.g., electronic-structure calculations in large, complex molecules, actually does not require that many qubits (a few hundred of them), so what’s the big deal? Can’t we already get that commercially in current gate-based quantum computers?
Yes, but critically the qubits performing the calculation must be highly insensitive to errors. This is what we mean when we refer to fault-tolerant operation.
This is why it is important to pursue fault-tolerant quantum computers: the quantum processing unit (QPU) would need on the order of millions of physical qubits, encoding a few hundred to 1,000 logical qubits, in order to accurately solve a large, complex, and realistic quantum-chemical problem within a few days.
On today’s noisy quantum computers, in turn, we have to settle for “noisy intermediate-scale quantum” (NISQ) applications: due to decoherence, the qubits lose their superposition state too quickly, which limits the “circuit depth”, i.e., how many sequential logical operations we can perform before errors and noise overwhelm the calculation. On NISQ machines, noise accumulates as the circuit depth increases, limiting the number of operations. Thus, while the next generation of NISQ quantum computers certainly has enough qubits to be of interest (e.g., the 433-qubit IBM Osprey expected later this year, or the 216-qubit photonic Xanadu Borealis), they are not reliable enough to attack problems in quantum chemistry involving large, complex molecules.
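The depth limit can be made concrete with a toy model: if each gate fails independently with probability ε, the chance that a circuit of N gates runs error-free decays exponentially in N. The error rate below is an illustrative assumption, not a measured figure for any specific device:

```python
# Toy model: probability that a circuit finishes error-free decays
# exponentially with gate count (assumes independent, uncorrelated gate errors).
def success_probability(n_gates, gate_error=1e-3):
    return (1 - gate_error) ** n_gates

for n in (100, 1_000, 10_000):
    print(n, round(success_probability(n), 3))
# 100 gates: ~0.905; 1,000 gates: ~0.368; 10,000 gates: essentially zero
```

This is why millions of sequential operations are out of reach without error correction: no plausible physical gate error rate keeps the success probability meaningful at that depth.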
Understanding the scope of fault-tolerant quantum computers
In a recent paper, Goings et al. (a Google, Boehringer Ingelheim, and QSimulate collaboration) studied the cytochrome P450 enzyme and assessed the computational resources needed to determine its electronic structure [1].
The electronic structure of complex chemical compounds like enzymes is exactly the class of hard problems where quantum computers offer a potential new route to an accurate solution: the problem grows exponentially every time an orbital is added to the chemical description, and this scaling is inherently matched by the quantum computer, where each added qubit doubles the number of states that can be represented.
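The doubling argument is easy to quantify: storing the full state vector of n qubits classically takes 2^n complex amplitudes, which exhausts classical memory around 45-50 qubits. A minimal sketch (assuming 16 bytes per complex amplitude, i.e., double precision):

```python
# Each qubit doubles the state-space dimension: n qubits span 2**n amplitudes.
def classical_memory_bytes(n_qubits, bytes_per_amplitude=16):  # complex128
    return (2 ** n_qubits) * bytes_per_amplitude

print(classical_memory_bytes(30) / 2**30)  # 16.0 -> ~16 GiB for 30 qubits
print(classical_memory_bytes(50) / 2**50)  # 16.0 -> ~16 PiB for 50 qubits
```

Twenty extra qubits turn a laptop-sized problem into one a million times beyond the largest supercomputers, whereas the quantum device only needs twenty more qubits.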
The cytochrome P450s (see picture) are a superfamily of enzymes that act as catalysts in steroid-hormone synthesis and drug metabolism, and are found throughout the kingdoms of life (animals, plants, bacteria, etc.) as well as in some viruses.
The premise of the paper
Solution strategy
The paper then lays out a strategy for solving the quantum chemical problem on a fault-tolerant quantum computer.
Background explanation on the resources needed to solve the problem
The paper then turns to estimating what is needed to solve the quantum-chemical problem on a fault-tolerant quantum computer. The computational overhead comes from three sources:
First, we have the factors stemming from the choice of algorithm and its numerical implementation.
Secondly, we have factors stemming from the chemical and dynamical nature of the problem: the computational overhead depends directly on the so-called L1-norm of the Hamiltonian coefficients, which reflects the strength of the interactions driving the system’s dynamics. The larger this norm, the more iterations are required before convergence, which directly affects the execution time.
Finally, and most important for our discussion here, there are factors stemming from the complexity of executing the required quantum logic gates.
Specific resource examples
While more technical details on these aspects can be found in the paper, we here provide a short summary of the resources needed to solve the ground-state energy calculation for the cytochrome P450 enzyme:
In a similar study, Google, together with collaborators from Columbia University, Macquarie University, and the University of Washington, investigated the FeMo cofactor of the nitrogenase enzyme [2]. This so-called FeMoCo molecule is important for understanding the mechanism of biological nitrogen fixation. They found that it could be simulated using about four million physical qubits in under four days of runtime, assuming 1-μs cycle times and physical gate-error rates no worse than 0.1%. That corresponds to roughly 3.5 × 10^11 clock cycles.
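The headline runtime figure is worth sanity-checking with simple arithmetic, using the cycle time and runtime quoted from [2]:

```python
# Sanity check on the FeMoCo estimate [2]: ~4 days of runtime at a
# 1-microsecond cycle time corresponds to a few hundred billion cycles.
cycle_time_s = 1e-6
runtime_s = 4 * 24 * 3600          # 4 days in seconds
total_cycles = runtime_s / cycle_time_s
print(f"{total_cycles:.2e}")       # ~3.5e11 cycles
```

Every one of those hundreds of billions of cycles must keep the logical error rate negligible, which is precisely what fault tolerance buys and NISQ hardware cannot.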
Conclusion
Based on these reflections, there is no doubt that executing such refined quantum-chemical calculations on real problems imposes massive requirements on the quality and scalability of the quantum computer, which must be operated in a fault-tolerant mode for the calculation to converge.
There is currently no path for a NISQ computer to faithfully execute on the order of millions to billions of Toffoli gates. The long-term aim must therefore be to develop a fault-tolerant quantum computer with on the order of millions of physical qubits, supporting on the order of 100-1,000 logical qubits. At the same time, any algorithmic improvements that implement and solve the chemical problems in a smarter, more efficient way will lead to huge reductions in the hardware requirements.
When will we have such a large-scale fault-tolerant quantum computer? We could expect to have a scalable qubit platform
References
[1] J. J. Goings et al., “Reliably assessing the electronic structure of cytochrome P450 on today’s classical computers and tomorrow’s quantum computers”, PNAS 119, e2203533119 (2022), https://doi.org/10.1073/pnas.2203533119, https://arxiv.org/abs/2202.01244
[2] J. Lee et al., “Even more efficient quantum computations of chemistry through tensor hypercontraction”, PRX Quantum 2, 030305 (2021), https://doi.org/10.1103/PRXQuantum.2.030305, https://arxiv.org/abs/2011.03494
Co-founder and CEO, Algorithmiq Ltd
Morten, I both agree and disagree with you :). I agree that algorithms for fault-tolerant quantum computers (FT-QC), such as the one in the paper of Goings et al. (https://lnkd.in/ejR9gsjA), are impossible to run on NISQ devices because of the very large circuit depth. But the computational complexity analysis of that paper refers to such algorithms, which are not meant to be run on near-term devices. I disagree with you that algorithms for FT-QC are the only ones that can be used to prove quantum advantage for chemistry. Complex optimised hybrid algorithms with powerful noise-mitigation strategies will play a key role in the transition towards FT. In fact, the combination of noise mitigation and hardware improvements is already reaching levels sufficient to achieve useful quantum advantage on near-term devices (https://research.ibm.com/blog/gammabar-for-quantum-advantage). The missing ingredient is a scalable readout strategy that removes existing roadblocks on hybrid algorithms and, at the same time, enables new powerful error-mitigation techniques; see our latest papers: arXiv:2208.07817, arXiv:2207.01360.
Head of IT Technology Strategy / CTO at Boehringer Ingelheim | ex-Oracle
Great article :) and thanks for citing our work and explaining it in detail.
Global Quantum Lead in HCLS @ IBM Quantum | PhD | Speaker | Drug Discovery & Pharmaceutical Research | Quantum + AI | T. J. Watson Research Center |
Nice article, Morten. To formulate the fermionic Hamiltonian of a large molecule of industrial importance, we need millions of physical qubits to get good-quality logical qubits.