"Rough computing"
Something captivating is currently unfolding: the most cutting-edge IT is meeting some of the most old-fashioned techniques. We all thought that things like "noise", "errors" and "approximations" had been filed away forever in dusty statistics books. With quantum computing, they are back in the spotlight!
Dealing with noise played a key role in the invention of modern computers: Claude Shannon introduced the concept of communication over a noisy channel in the late forties. At the time it applied to analog channels, but it soon paved the way for digital ones with Hamming's error correction codes in the fifties. Since then, error correction codes have been safeguarding the integrity of every layer of our digital world: compute, network and storage.
Binary error correction is so widely used and so efficient that we don't give it a second thought: we take the (near) limitless accuracy of digital processing for granted, and we tend to forget about noise.
But some key quantum IT breakthroughs of the past decade(s) have come from the rediscovery of noise and its innate properties. Some approaches have embraced noise as an ally, while others have treated it as an adversary. Let’s take a look at these great developments. Welcome to the world of “Rough computing”!
Errors as friends
When NIST shortlisted Post-Quantum Cryptography algorithms to secure the business of tomorrow, it picked three digital signature candidates: two are based on noise (Dilithium and Falcon), one is not (SPHINCS+).
SPHINCS+ offers the lowest performance of the three; the reason it was chosen is that Dilithium’s and Falcon’s robustness relies on the assumed hardness of certain lattice problems, while SPHINCS+ relies on battle-hardened hashing techniques such as SHA. It would serve as a fallback in the unlikely (but not impossible) case where the lattice hardness assumption is proven wrong.
In Dilithium and Falcon, a small amount of noise is sampled and injected into the initial conditions of a lattice problem. It has been formally proven that this pseudo-random perturbation keeps the problem hard if you don’t know some of the initial conditions (typically: if you are an adversary), but makes it easy if you do (if you are a legitimate user holding the secret). So a legitimate user can easily produce a valid post-quantum signature, while an adversary cannot forge one.
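To make this concrete, here is a toy, learning-with-errors-style sketch in Python. It illustrates the general idea only, not the actual Dilithium or Falcon constructions, which use structured lattices and far larger parameters:

```python
# Toy LWE-style illustration (NOT the real Dilithium/Falcon constructions):
# knowing the secret s makes the noisy system easy to check,
# while recovering s from (A, b) alone is believed to be hard at real sizes.
import numpy as np

rng = np.random.default_rng(0)
q, n = 97, 8                              # tiny toy modulus and dimension

s = rng.integers(0, q, size=n)            # secret vector (the "trapdoor")
A = rng.integers(0, q, size=(n, n))       # public random matrix
e = rng.integers(-2, 3, size=n)           # small injected noise
b = (A @ s + e) % q                       # public, noisy "initial conditions"

# Legitimate user: knows s, so checking that b is close to A @ s is trivial.
residual = (b - A @ s) % q
residual = np.where(residual > q // 2, residual - q, residual)  # center mod q
print("residual (should be the small noise e):", residual)

# Adversary: only sees (A, b) and must solve a noisy linear system mod q,
# which at cryptographic sizes is exactly the hard lattice problem above.
```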
The samplers used in Dilithium and Falcon are uniform and Gaussian, respectively.
Most of PQC is rocket science, but the foundations of PQC, the samplers, are just the old distributions that we learnt at school!
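As a small illustration, here is what those two samplers could look like in plain NumPy. This is only a sketch: real implementations sample a proper discrete Gaussian in constant time, with far more care about precision and side channels.

```python
# Sketch of the two noise samplers mentioned above (illustrative only):
# Dilithium samples coefficients uniformly in a small range,
# Falcon samples from a discrete Gaussian. A rounded continuous Gaussian
# is used here as a crude stand-in for the latter.
import numpy as np

rng = np.random.default_rng(42)

def uniform_noise(n, eta=2):
    """Uniform coefficients in [-eta, eta], Dilithium-style."""
    return rng.integers(-eta, eta + 1, size=n)

def gaussian_noise(n, sigma=1.5):
    """Rounded Gaussian coefficients, a stand-in for Falcon's sampler."""
    return np.rint(rng.normal(0.0, sigma, size=n)).astype(int)

print(uniform_noise(10))
print(gaussian_noise(10))
```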
As an aside: the hardness of lattice problems is closely related to a problem called "sphere packing": what is the most efficient way to pack oranges into a box? The question sounds simple, but the formal answer in 3D (the Kepler conjecture) is a nightmare: it was only settled in 1998, because stacked oranges form a lattice and there are several competing configurations. The number of configurations, and the difficulty, climbs steeply as the number of dimensions grows... The above-mentioned error correction codes are very efficient tools for building compact packings in high-dimensional lattices. Some recent breakthroughs in packing optimization wouldn’t have been possible without error correction codes.
Errors as foes
Quantum computing wouldn’t exist without uncertainty. Qubit entanglement is what makes quantum computing so efficient, and entanglement comes from uncertainty.
In quantum computers, qubits are laid out so that each has a fixed number of neighbors: the layout depends on the actual processor. Neighbors can be logically connected or disconnected according to the circuit one wants to set up. A quantum circuit is characterized by a WIDTH (the number of qubits that can be used as "input") and a DEPTH (the number of successive layers of operations applied to neighboring qubits before yielding an "output").
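For readers who want to see those two numbers on a concrete circuit, here is a minimal sketch, assuming Qiskit is available; num_qubits plays the role of WIDTH and depth() counts the gate layers:

```python
# A tiny circuit to make WIDTH and DEPTH concrete (assumes Qiskit is installed).
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)      # WIDTH: 3 qubits
qc.h(0)                     # layer 1
qc.cx(0, 1)                 # layer 2: entangle neighbors 0 and 1
qc.cx(1, 2)                 # layer 3: entangle neighbors 1 and 2

print("width (qubits):", qc.num_qubits)   # -> 3
print("depth (layers):", qc.depth())      # -> 3
```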
One of the greatest challenges of today's quantum technology is to overcome the WIDTH*DEPTH limitation, which is currently quite small: in IBM's quantum utility announcement, published only a fortnight ago, the state-of-the-art Eagle quantum processor handled 127 qubits (WIDTH) and 60 layers of gates (DEPTH), for a total of 2880 CNOT gates.
The limitation comes from the fact that the more qubits and connections you add to the processor, the harder it is for the qubits to maintain their quantum superposition. Quantum effects happen at a micro scale: when a system becomes macro, it loses its micro properties and becomes classical. This is called decoherence. It is driven by tiny imperfections at the hardware level and by unwanted coupling with the environment, which generate noise, and therefore errors.
But decoherence is not the only problem:
Quantum errors also come from the way we simulate physical problems on quantum computers.
Currently, some of the most useful algorithms perform what are called "Hamiltonian simulations": a physical problem is decomposed into a series of smaller problems that are easy to implement as quantum circuits. Due to quantum uncertainty, the order in which we apply the series matters: if we apply circuit 'A' before circuit 'B', we (usually) don't get the same result as if we apply 'B' before 'A'. We say that A and B do not commute (AB != BA).
When we perform a Hamiltonian simulation, we have to make arbitrary choices at every single step of the computation: should we implement 'A' before 'B'? Say we pick 'A' last, and 'C' comes next: should we implement 'C' before or after 'A'? Each decision adds a small quantity of error to the calculation. A workaround is to compute more steps and make each step smaller, but this increases the circuit DEPTH… You know where this takes us: we are back to the original issue.
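Here is a minimal numerical sketch of that trade-off, using the (non-commuting) Pauli X and Z matrices as 'A' and 'B': the split-step product gets closer to the exact evolution as the number of steps grows, but each extra step means extra DEPTH.

```python
# Numerical illustration of the depth/error trade-off in a split-step
# ("Trotterized") Hamiltonian simulation: A and B do not commute, so
# alternating small A and B steps leaves a residual error that shrinks
# as the number of steps n (i.e. the circuit DEPTH) grows.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
A, B, t = X, Z, 1.0                             # X and Z do not commute

exact = expm(-1j * (A + B) * t)                 # ideal evolution

for n in (1, 4, 16, 64):
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    approx = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(approx - exact, 2)
    print(f"steps = {n:3d}  ->  approximation error ~ {err:.4f}")
```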
The tech giants are engaged in a race to deal with quantum errors: some, like Google, seem to bet on quantum error correction codes (descendants of the Hamming codes of the fifties), while others, like IBM, seem to prefer error mitigation, cancelling or extrapolating the noise away with good old statistical techniques.
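As a hedged illustration of the mitigation side, here is a toy zero-noise extrapolation, one common mitigation recipe. The measurement values below are made up for the example; the idea is to amplify the noise on purpose, then extrapolate the measured result back to the zero-noise limit.

```python
# Toy sketch of zero-noise extrapolation (one of several mitigation
# techniques; the "measured" values below are hypothetical, not real data).
import numpy as np

noise_scale = np.array([1.0, 2.0, 3.0])       # deliberately amplified noise
measured    = np.array([0.71, 0.55, 0.42])    # hypothetical noisy expectation values

coeffs = np.polyfit(noise_scale, measured, deg=1)   # simple linear fit
zero_noise_estimate = np.polyval(coeffs, 0.0)       # extrapolate to scale 0
print(f"estimated noiseless value ~ {zero_noise_estimate:.2f}")
```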
This field of research is very active, because the stakes are huge.