The market size of quantum computing: a tale of mystery and imagination
Edinburgh. The weather in Scotland is proverbially hard to predict. (Chris Fleming from UK, CC BY-SA 2.0, via Wikimedia Commons)


* * *

“In ordinary life, if somebody consistently exaggerates or lies to you, you soon discount everything they say. In court, there is the legal doctrine of falsus in uno, falsus in omnibus, which means untruthful in one part, untruthful in all. But when it comes to the media, we believe against evidence that it is probably worth our time to read other parts of the paper. When, in fact, it almost certainly isn’t. The only possible explanation for our behavior is amnesia.”

Michael Crichton on Gell-Mann Amnesia, from “Why Speculate?”, 2002

* * *

“Quantum tech. This field manages to be both overhyped and undersold at the same time.”

Anonymous, 2023

* * *

[Note on the title: I included “A Tale of Mystery and Imagination” in the title. The reader might find this choice somewhat unnatural. It doesn’t refer directly to Edgar Allan Poe, where the phrase would seem more at home, but rather to The Alan Parsons Project’s progressive rock album, Tales of Mystery and Imagination (1976); indeed, a tribute to Poe. While this title has nothing to do with Poe’s ghostly characters or doomed protagonists, it does capture something about quantum computing. This is a field that, today, both thrives on and struggles with speculation, projections, and a fair dose of storytelling (no more than other areas of technology). And from a psychoanalytic perspective, the crystalline sonorities of the album awaken in my mind a longing for clarity, in contrast, perhaps, to the fog of uncertainty that often surrounds discussions of quantum markets. As a soundtrack for this writing, I’d suggest “To One in Paradise”, from the album just mentioned.]


Quantum computing is a strange beast, both wildly overhyped and quietly undersold. Market projections balloon into absurdity, timelines contract and expand depending on who’s talking, and somewhere in the noise, real progress continues at its own unhurried pace. This isn’t another exercise in prediction, nor a eulogy for classical computation. It’s a quick exploration of the narratives that shape how we think about quantum markets—the myths, the speculations, the convenient fictions, and the occasional inconvenient truths. A tale of mystery and imagination, if you will. What I would like to say (and possibly to repeat) at a high level, without using any methodology, is that quantum computing is a science-first story. Because in pushing the limits of what we can compute with quantum physics, we push the limits of what we can understand.

I need a lengthy preamble before getting to the core of the discussion, as it will help me set the tone, establish the rationale, and, ideally, create a connection with the reader before dropping some market figures on them. Although there’s a chance the reader will be disappointed since my market figures are scarce.

One of Albert Einstien’s famous quotes is: "Everything should be made as simple as possible, but not simpler." It obviously emphasizes the importance of clarity and simplicity in expressing complex ideas, without oversimplifying them to the point of inaccuracy—and by writing “Einstien” I scored 5 points in the Crackpot Index of John Baez, something that makes me feel more in tune with the pulse of our times. Let’s move toward the core of what I’m trying to share, with the idea that, in the asymptotic limit, I will either convey some information, express some feelings, or simply vent. I need some basic but useful points.


Point 1. The Dunning-Kruger effect is that peculiar cognitive glitch where the less someone knows, the more they think they know. It’s a confidence-knowledge mismatch: those with little expertise overestimate their abilities, while those who actually understand the complexity of a subject tread carefully. The result? A world where loud certainty often drowns out quiet competence. An easy manifestation and corollary of the Dunning-Kruger effect is performative learning, where the act of displaying knowledge, whether in a brief review or a confident post, matters more than actually possessing it. And then comes the social reinforcement. Instead of rigorous debate or correction that might refine understanding, we receive likes, retweets, and nods of approval, further insulating us from the realization of our own superficiality. [Reader: “OK, fine, give us new information.”]

Point 2. Truth is a practical necessity. We can say this many times. We love the idea that everyone should be free to say whatever they want—sure, after all, the First Amendment tells us so, and free speech must have some evolutionary advantages. But when it comes to technical matters, history and decency teach us that a bit of carefulness is required. Words in this context aren’t just expressions of the motions of our souls; they need to translate into something actionable. In other words, if I’m a sports commentator, I can wax poetic about the elegance of the Italian goalkeeper’s leap in the 1982 World Cup—the great Dino Zoff—and no one will get hurt. But if I’m a heart surgeon, I don’t get that luxury. I can’t just make declarations about “the artistry of a bypass.” I need precision, because threading a catheter through a narrowed coronary artery to restore blood flow isn’t immediately poetic, but it’s about keeping someone alive. Hence, following the philosopher Harry Frankfurt in On Bullshit, a book that anyone working in technology should keep next to their glass of water on the bedside table: "It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction."

This is wonderful. Frankfurt argues that bullshit, i.e., speech that is indifferent to truth, is more dangerous than lying because it erodes the very distinction between truth and falsehood. Truth isn’t just a moral ideal. It’s more. It’s necessary, since without it, we lose our ability to act effectively in the world. We lose actionability as agents. [Reader: “Good. Still waiting for new information.”]

Point 3. In the age of accelerationism, the pursuit of instant gratification in learning is a defining trait of contemporary culture. People no longer want to learn; we want to be experts overnight, just as we dream of getting rich instantly through cryptocurrency speculation. The philosopher Byung-Chul Han describes this phenomenon, arguing that “today’s society is no longer a disciplinary society, but an achievement society” (Leistungsgesellschaft), where individuals are pressured to self-optimize, consume, and produce at an exhausting pace rather than engage in meaningful, reflective work (or, with other images, individuals feel compelled to quickly absorb, display, and monetize information, rather than engage in learning). Similarly, many people these days warn against a life of fleeting experiences, where everything must be fast, disposable, and immediately gratifying. Our obsession with speed leads to a world where patience, mastery, and depth are sacrificed in favor of instantaneous yet superficial rewards (including expertise without effort, and wealth without creation; and, why not, social media’s dopamine-driven reward systems).

* * *

With these three (fairly monotonous) points in my satchel of eloquence, I will tell a story. A part of me, undeniably, temptingly, would love to narrate the realistic CXO’s favorite legend. The story of Saint Decoherius, as written in a future version of The Golden Legend by Jacobus de Voragine. Saint Decoherius fought against noise and decoherence in 21st-century Val d’Orcia. A protector of fragile superpositions, he stood against the forces of environmental disturbance, striving to keep the sacred states of quantum reality uncollapsed. Unfortunately, the favorite legend of the unrealistic CXO is the tale of Saint Qeorge (the quantum computer) slaying the dragon. The dragon could be many things. For example, it could be massive optimization problems that Saint Qeorge solves with 17 qubits [CXO: “Yes, 17, I know, but they are so good and 17 is my lucky number!”]. Slaying the dragon could be curing the alien hand syndrome, or providing us with more accurate weather forecasts. Perhaps lean world models for training robots, or effortless exploration of interstellar cavities, or even hard problems in AI. But whatever you do, don’t mention the lack of QRAM or dequantization and other issues that decouple quantum from AI—unless you want to be instantly categorized as either a scaremonger, a “person without a flag,” or one of those hyper-realistic dull types who seem to lack any capacity for dreaming. Unfortunately, I am one of those because I still don’t see how quantum will help AI, but rather the other way round. I hope, one day, I will see both sides of this coin.

And then, at the exact opposite end of the CXO spectrum, you have: “It will take 2000 years before we have a quantum computer.” Right. “In particular, you.” “Who?” “You.” Pointing a finger at me while I am just sitting in the audience. “You will never see the real McCoy of quantum computing because by then, you’ll either be worm food, struggling with cataracts, or your brain will be floating in some preservation tank in a future too deep to matter.” Of course, the unspoken words here are: “…because my company isn’t building one, so better to tell investors it’s never going to happen.” In all fairness, that CXO could well be right. Gustave Flaubert said that “There is no truth. There is only perception.” Nobody wants to give investors the wrong perception.

No. I will not talk about CXOs and their visions, because the genre recently snowballed beyond control, with questions like “When do you think quantum computers will do [replace this text with your favorite thing]?” being asked even of the CTO of a cheese factory, where entanglement only happens when wheels of Camembert are stacked too close together. Instead, I will speak of something even more imaginary, more whimsical: organizations drafting their n-th report on the wonders of future quantum computing markets. And, of course, there would be plenty of numbers to make it all sound inevitable. I will call them PrediQtors (see Footnote 1 when you have time, after reading the body of the document): “The EU market size for quantum computing will be $748.3 billion in 2036.” Mirabile dictu! When I read something like this, even if someone tries to walk me through the methodology behind the numbers, I can’t help but feel the same way I do when reading a Nostradamus prophecy: “In A.T. (Anno Turing) 147/5, the machines shall rise, calculations shall bend, and fortunes shall be foretold by the priests of the unseen logic.” There is definitely convenience in mystical predictions. They automatically put you in a position of power.

Now, a set of secret friends (I don’t say “group of friends” but rather “set” otherwise the mathematically cautious reader would ask me which one of my friends is the neutral element), whose names are A, A, F, S, et al., told me the following two fables, which I will report by modifying the modifiable and making sure I stick to their views. They are part of the same story. Nota bene: We may or may not agree with the views of A, A, F, S, et al. Their names are reduced here to mere initials to protect their identity from the PrediQtors.

* * *

Part 1. They say that quantum error correction is essential for all currently identified applications, but it comes with significant overhead in both the number of physical qubits required and the runtime of quantum computations. [Author: So far so good. However, I believe we need to be open-minded, because innovation occurs.] Early applications will be constrained by the limited availability of physical qubits. [Author: “Early” means kind of nothing and not 2026 or 2032 or 2049.] Under realistic noise assumptions and using surface code error correction, even the simplest use cases—such as basic physics simulations—are estimated to require approximately 100,000 to 500,000 physical qubits. However, the majority of identified applications demand 10 million to 100 million physical qubits.

At these scales, computational speed becomes the next major bottleneck. Logical operations are expected to run at speeds below 1 MHz, which is 10,000 times slower than classical processors operating at GHz frequencies. To make quantum computing truly valuable for most practical applications, logical clock rates must improve by two to four orders of magnitude beyond current expectations. [Author: Thus, builders of quantum computers, unite and bring forth your marvelous machines! Because no matter what prophets of doom or exuberance may predict, you will get there. Once the hardware exists, the path will reveal itself. Then, all those pesky heuristic algorithms will finally have a home to go to, rather than wandering aimlessly through the deserts of scientific papers.]
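[Author, briefly sober: for readers wondering where numbers like “100,000 to 500,000 physical qubits” come from, here is a minimal back-of-envelope sketch. It assumes the textbook surface-code scaling (roughly 2d^2 physical qubits per logical qubit, with a logical error rate shrinking like (p/p_th)^((d+1)/2)); every constant in it is an illustrative assumption of mine, not a figure from my friends’ fable.]

```python
# Back-of-envelope surface-code overhead. Textbook assumptions only:
#   - physical qubits per logical qubit ~ 2 * d^2 (data + syndrome qubits)
#   - logical error rate per step ~ 0.1 * (p / p_th) ** ((d + 1) / 2)
# The 0.1 prefactor, threshold p_th = 1e-2, and physical error rate
# p = 1e-3 are illustrative choices, not measured numbers.

def distance_for(target_error: float, p: float = 1e-3, p_th: float = 1e-2) -> int:
    """Smallest odd code distance d reaching the target logical error rate."""
    d = 3
    while 0.1 * (p / p_th) ** ((d + 1) / 2) > target_error:
        d += 2
    return d

def physical_qubits(n_logical: int, target_error: float) -> int:
    d = distance_for(target_error)
    return n_logical * 2 * d * d

# A "simple physics simulation": say 100 logical qubits and ~1e9 logical
# operations, so each logical qubit should fail less than once in ~1e11 steps.
print(physical_qubits(100, 1e-11))   # 72,200 in this toy model; add routing,
                                     # magic-state factories, etc., and you land
                                     # in the 100,000-500,000 range quoted above.

# The speed gap: logical cycles below 1 MHz vs. classical cores at GHz rates.
logical_hz, classical_hz = 1e5, 1e9  # 0.1 MHz logical vs. 1 GHz classical
print(f"slowdown: {classical_hz / logical_hz:,.0f}x")  # 10,000x, as above
```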

Part 2. [Author: The same friends came up with this second fable which includes a methodology to size the markets with the caveat that there is no year 20xy. This is something they leave open.] The 58 million node-hours allocated through the 2023 U.S. Department of Energy (DOE) INCITE Awards provide a perspective on the potential role of quantum computing. These awards represent 60% of the leadership-class computing resources at Argonne and Oak Ridge National Laboratories, accounting for 15% of the total compute power of the world’s top 500 supercomputers. The allocations were distributed across 56 scientific projects from industry, academia, and government, selected from 97 proposals requesting 102 million node-hours. A closer look at these allocations offers insight into where quantum computing might eventually provide an advantage. About 22% of node-hours (13 million) went to problems where quantum computers are known to provide a theoretical speedup, including high-accuracy quantum chemistry and neutrino-nuclei interactions in particle physics. An additional 31% (18 million node-hours) focused on areas where quantum algorithms have been proposed but where no concrete advantage has yet been demonstrated, such as deep learning and differential equation simulations. The remaining 47% (27 million node-hours) were dedicated to problems where a quantum speedup is unlikely, including multiscale simulations in astrophysics, geophysics, and biological systems.

Using a (very) rough cost estimate, a large cloud instance with a number of GPUs and CPUs cost around $25 per hour in 2023. Running the 13 million node-hours of quantum-favorable computations at this rate would translate into an estimated $320ish million per year in classical computing costs. Expanding the scope to include problems where quantum algorithms have been proposed but no speedup has been established raises this estimate to $758 million per year. If these figures were extrapolated to the total compute resources of the world’s top 500 supercomputers, the upper bound of potential quantum computing applications in scientific computing could reach $5 billion. It’s important to note that this analysis only considers research applications and does not account for industrial use cases. Personally, I have no idea what kind of multiplier would be needed to include those as well. Times 2? Times 10?
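[Author: since this is the one place in the whole piece where arithmetic actually happens, here it is replayed as a minimal sketch. The inputs are the rounded node-hour figures quoted above, so the outputs land slightly above the fable’s $758 million and $5 billion, which were presumably computed from unrounded allocations.]

```python
# The fable's cost arithmetic, replayed with the rounded figures quoted above.
# Because the inputs are rounded, the outputs sit slightly above the article's
# own numbers ($758M and ~$5B), which presumably used unrounded allocations.

RATE = 25.0          # USD per node-hour, large GPU/CPU cloud instance (2023)
TOP500_SHARE = 0.15  # INCITE awards ~ 15% of total top-500 compute

node_hours = {
    "proven quantum speedup": 13e6,  # quantum chemistry, neutrino-nuclei physics
    "proposed, unproven":     18e6,  # deep learning, differential equations
    "speedup unlikely":       27e6,  # multiscale astro/geo/bio simulations
}

conservative = node_hours["proven quantum speedup"] * RATE
optimistic = conservative + node_hours["proposed, unproven"] * RATE
print(f"conservative: ${conservative / 1e6:,.0f}M per year")  # $325M
print(f"optimistic:   ${optimistic / 1e6:,.0f}M per year")    # $775M

# Extrapolate from the INCITE slice (~15%) to all top-500 machines:
print(f"upper bound:  ${optimistic / TOP500_SHARE / 1e9:,.2f}B per year")  # $5.17B
```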

Keep in mind that these numbers are based on how we use computers today. Quantum computers won’t just replace some existing supercomputer workloads—they will enable entirely new ones. The new computational paradigm will bring unforeseen applications, ripple effects across disciplines, and emergent behaviors that no market analysis can fully anticipate. Just as classical computers reshaped fields from climate modeling to genomics, quantum systems could drive breakthroughs in ways still invisible to our current frameworks. Imagine being in the 1950s, trying to estimate the market size for computers. Could anyone have foreseen the internet, cloud computing, or AI as we know them today? Just as it was impossible then to predict a world of smartphones and distributed computing, today’s projections for quantum computing are limited by the same lack of real-world pieces needed to assemble a coherent vision. It’s like trying to invent a new animal—you would most likely piece together features from animals you already know, because we are bound by existing forms, often reshuffling what is familiar rather than creating something entirely new.

Part 3. The estimates above assume that quantum computers will eventually provide solutions superior to classical alternatives, making it disadvantageous to continue solving these problems on classical hardware. While quantum computing holds immense promise, broad market estimates often fail to reflect the complexity of the underlying challenges. Many high-profile reports project demand far ahead of technical capability, creating an illusion of inevitability that history suggests is rarely accurate. A more pragmatic approach is to focus on unlocking real value in specific applications rather than assuming large-scale adoption will follow automatically. So, it seems kind of absurd that people still make predictions like “In 2029, the market size of quantum computers in finance will be...”

All I feel comfortable saying now is that these applications are highly unlikely to be in machine learning / AI, for reasons I will not discuss here (to be continued). It also remains unclear whether optimization will be significantly impacted. However, with good probability, these applications will lie in science. What is clear is that if we want to reach them, we must invest substantial effort and resources in algorithm development while the hardware is being built. Please be cautious here. Algorithm development is not about programming languages but rather work on the blackboard, work that requires pen, paper, and a willingness to engage with challenging mathematics and its potentially painful topics. It follows that significant attention should be given to what is traditionally called theory, as it is also essential in shaping the future of quantum computing.

Part 4. I’ll keep it concise. Most scientific discoveries follow a long and unpredictable path to economic value. Maxwell’s equations (1860s) eventually led to radio, telecommunications, and modern computing, but only after decades. Quantum mechanics (1920s) seemed purely theoretical at first, yet today it underpins a trillion-dollar semiconductor industry. Global navigation systems—GPS, BeiDou, Galileo, GLONASS—all rely on relativity corrections, proving that Einstein’s work was not so useless after all. By the way, I won’t even mention the discovery of DNA, whose impact is so vast that it will take centuries to fully comprehend. What is the economic value of these discoveries? What is the market size of their applications? Hard to quantify, but undeniably huge, shaping daily life whether we like it or not. Manipulating qubits will follow a similar trajectory, and there is no need to convince industry leaders or governments. It simply is.

Maybe it will be better battery designs, ammonia synthesis, catalytic reactions, or higher-temperature superconductors, as we hear every other day. The point is, we must build quantum computers. It’s both uninteresting and frankly demeaning to justify quantum computers by their use in pricing options or solving logistics problems, while attaching market size predictions to these applications. These are narrow framings that only add unnecessary pressure to the system while lacking the depth and imagination to see the broader scientific applications whose value cannot simply be predicted.

In my preamble, I arrogantly rambled that we should avoid speaking on technical matters without the necessary knowledge, only to fall straight into the very trap I had set for myself. I know nothing about how market share is measured, whether through a top-down or bottom-up approach, how potential customers are quantified, or any of the other intricacies. So, everything written here is an analysis that isn’t worth much. And yet, it doesn’t take much to see that when it comes to quantum computing, we are really looking at two different stories. On one side, there’s the market of users—how much money can be made by selling or renting machines, either in isolation or as part of complex cloud pipelines, woven into the fabric of microservices and other IT resources. On the other, there’s the market of impact and the innovation driven by quantum computing. This innovation is likely beyond the reach of classical computers, unless artificial intelligence somehow learns to infer the workings of physical reality.

[Reader: OK, can you give us more numbers?]

[Author: Sorry, I wish I could, but I’m superficial.]

Still, to satisfy your curiosity, I’ll share numbers of a different kind. Below are monetary values tied to specific events that will happen in the world once quantum computers are deemed successful, where “success” is left to the reader’s interpretation—some people will call it “commercially valuable”, others “scientifically supreme”, others “economically convenient”, others “energetically favorable”. I imagine that world as a good one. A hypothetical reality that, naturally, would also be the best of all possible worlds. This has nothing to do with David Deutsch and his multiverse, but rather with Leibniz, who argued that our world, despite its flaws, is the best possible one (Théodicée, 1710).

Before you get offended, recall that “Le rire est le propre de l’homme” (“Laughter is unique to mankind”), as the great François Rabelais wrote in Gargantua and Pantagruel. So, here are some numbers in that wonderful post-quantum world, where we will all be successful and at peace with ourselves. It will be a world where coherence lasts forever.

$120. The quite expensive lobster dinner, probably around Kendall Square, that Gil Kalai will ultimately have to pay Aram Harrow once their debate is finally settled by the existence of a truly performant quantum computer.

$250. The amount John Preskill would be willing to pay the community just to make them stop asking him to invent new acronyms for different generations of quantum computers. I’m talking about NISQ, megaquops, and whatever comes next.

$3,000. The celebratory cocktail party that the contributors to quantum error correction will organize at The 9-Qubit, the new trendy bar opened by Peter Shor. Shor himself will be behind the bar, serving drinks to Andrew Steane, Daniel Gottesman, Manny Knill, Dorit Aharonov, Alexei Kitaev, Barbara Terhal, Michael Ben-Or, Dave Bacon, and Robert Calderbank. (My apologies to anyone I’ve forgotten to mention, though I suspect you’d rather be left out of this text altogether.) The night’s most popular cocktail will be the GK (aka Gottesman-Knill), a Negroni variation with a dash of New Mexico tequila, a nod to G and K's time at Los Alamos.

$23,000. The prize money (not including sponsors) that Oskar Painter will receive for winning the Baja 1000 with his car team, The Resonators Racing, after retiring from the lab and dedicating himself full-time to the garage, aided by engine wizards of the caliber of J. Hamilton.

$49,000 or 357,000 CNY. Based on the 2024 Paris Olympic Games: 529 grams, containing about 6 grams of gold plating over 523 grams of silver. This is the cost of the medal that some experimentalists will receive from their government for building a top quantum computer, where each logical qubit will require an overhead of 27 physical PhD students working on the project.

$3B. The cash that [here is the name of your closest billionaire friend] should invest (in the long-term) to ensure the creation of a large 1,000,000-neutral-atom quantum computer. Because, regardless of the competing modalities for building quantum chips, we are destined to deepen our understanding of pristine atoms in vacuum, illuminated and moved around by lasers.

$316B. The amount that Terry Rudolph and Jeremy O’Brien will spend to buy Tasmania, based on a rough estimation of its land value and economic influence. They will use the entire surface of the island to build their newest PsiFactory.

$1T (trillion). What the AI community will be willing to pay to get their hands on quantum machines. Not because they’ll be useful for solving AI problems—because, given what we know today, with 50% probability they won’t be—but because these machines will be the ultimate luxury hardware collectible, an indispensable addition to their technological menagerie. You know, just in case. And this, of course, brings to mind the image of Dr. Evil in Austin Powers, raising his pinky and demanding one million dollars! Remember?

A lot of Monopoly money (given Part 4 above). For a new theory of quantum gravity that, in a twist of irony, turns out not to need quantum at all. Or perhaps for a grand breakthrough in the interpretation of quantum mechanics, finally settling debates that have raged for a century. Or maybe for the realization that Roger Penrose and Stuart Hameroff weren’t such berserkers after all when they suggested that some quantum process is actually happening in the brain.

An undetermined small sum. That’s what I’d be willing to pay to find out what Hartmut Neven will do next after the success of the entire field. Perhaps launching his own version of Burning Man in the desert, a fusion of quantum futurism and art. Or maybe, starting a fashion blog, because let’s be honest, a lot of scruffy scientists could take inspiration from his style.

Priceless. The image of Scott Aaronson’s face fully relaxing as he sips a margarita on a chaise longue, wearing Bermuda shorts, possibly with cucumber slices resting on his eyelids. A moment of true serenity, for at last, there is order in the zoo of BQP and some other computational complexity creatures.

* * *

Coda:

[Reader: “Why did you waste your time writing this?”]

Everything is about the result and not the process these days, isn’t it?

[Author: Because I enjoy writing. Well, to be honest, I didn’t actually write anything. I used GenAI like ChatSVRN (which runs on brain pseudo-randomness) and relied on help from friends. Zero effort and zero responsibility on my part. And just so you know, I don’t do sports, play golf, or any musical instruments.]

If you wish to know more, return to the top of the page and read again, though, of course, rereading is never the same as reading, just as recollection is not memory, nor is knowledge the same as wisdom. The former are accumulations, the latter are structures; one stores, the other interprets.

[Reader (eyes widening, voice unsteady): “But… showing off all this erudition, the way you wield language… don’t tell me… you’re one of them, you’re one of the PrediQtors?”]

The weight of the realization sinks in. A flicker of doubt sharpens into fear. The room feels colder now, the air thick with unspoken truths…

* * *

Footnote 1:

The PrediQtors, mentioned in one of the lost writings of Viktor Khrizhanovsky, and with high probability also in an early version of Tlön, Uqbar, Orbis Tertius by Jorge Luis Borges, are a secretive and, just as likely, imaginary organization. They are said to have emerged in Palermo, Sicily, during the reign of Frederick II. However, documentation is scarce and fragmented, consisting mainly of marginal notes in medieval manuscripts, most of them illegible. Legend has it that the organization arose after Frederick II, tired of astrologers, decided to ban fortune-telling based on “non-verifiable models.” The PrediQtors, a group of mystics and mathematicians, saw an opportunity to offer themselves as a “rational” alternative, swearing to foresee the future through more sophisticated means—such as numerological cabala, systematic dice rolling, reinforcement learning, optical tables, and comparative analysis of sacrificial leeks (yes, vegetables).

One of the earliest references to the order is the Codex Futurologum, an apocryphal manuscript kept for centuries in the library of Palermo. Some scholars claim it contains a primitive theory of market forecasting based on magical numbers. However, the Codex mysteriously vanished in 1572 after an auction in which it was mistakenly sold as a Saracen cookbook to an unsuspecting Venetian restaurateur.

Beyond Europe, traces of the PrediQtors are said to have surfaced in Japan during the 17th century. A scroll found in the Kinkaku-ji Temple mentions a group of itinerant monks known as Mirai Shimpan (Judges of the Future). They could predict the size of markets by studying the behavior of the snow monkeys (Macaca fuscata) in Nagano Prefecture as they soak in natural hot springs. The way they ease into the steaming water, the intensity of their contemplative gazes, and the subtle fluctuations in their grooming hierarchies. All of it, when properly analyzed, could reveal deep insights into economic cycles, investor sentiment, and the inevitable rise and fall of techno-bubbles.

Meanwhile, in Constantinople, a minor chronicle by the Byzantine monk Theophylaktos of Nicaea recounts the case of Demetrios the Probabilist, who claimed he could predict the market size of silk through a complex system of secret signals sent by Genoese merchants using arrangements of candles. His method reportedly worked so well that Emperor Andronikos III Palaiologos, suspecting a pact with the devil, ordered his exile to a monastery on Mount Athos, where he ended up opening a betting house for wrestling matches between monks.

The last documented appearance of the PrediQtors seems to date back to the 18th century, when a mysterious Treatise on Technology and Uncertainty was found in the archives of the National Library of Paris. The work, attributed to a certain Jean-Marcel Fourier, theorized a system in which the future market size of a specific technology is determined by hidden variables of obscure nature, often imperceptible to ordinary observers.

Of course, there is no real evidence of any of this. But, as a pen like Borges’s might write, “the absence of evidence is, at times, the most sophisticated form of proof.” As for today, I lack both the expertise and the courage to delve into conjectures, but I must confess a lingering suspicion: Doyne Farmer, architect of chaos theory, pioneer of complexity, eudaimonian, and dear friend, has given me reason to believe that he may have been part of the PrediQtors, if not in action, at least in spirit. There is something in the way he wears his beard now that would make him equally at home as an Argonaut or a Sumerian deity. To be serious for a moment, his book Making Sense of Chaos: A Better Economics for a Better World (Yale University Press) was published in April 2024.

* * *

Mark Mattingley-Scott

Chief Revenue Officer (CRO)

3 hours ago

LOL: Jeremy buying half of Tasmania

Andreas Masuhr

Physicist | Philosopher | Consultant

1 week

Possibly the best, definitely the most enjoyable piece of writing on the state of the field that I've read in a long time. Thank you, dear Simone Severini.

Disharth Thakran

Marketing Enthusiast | Content Manager | Social Media Marketing

1 week

Microsoft just made a game-changing move in quantum computing, leaving Google in a superposition of shock! But the real question: are our passwords and Bitcoin safe? Curious about it? Have a look: https://www.dhirubhai.net/pulse/microsoft-unleashes-quantum-leap-revolutionary-new-sqbic/?trackingId=Cd6FWi5yw%2BDarAjRm8QPFg%3D%3D

Why our understanding and commercialization of quantum will remain in a state of superposition—until we finally figure things out, at which point we will snap out of the quantum state into an inevitable state of decoherence, which in itself will be a giant step for this enigmatic field. Lovely article, Simone Severini, keep bringing more.

Leandro Aolita

Executive Director of Quantum Algorithms @ Quantum Research Centre - Technology Innovation Institute

2 weeks

Thanks a lot for writing such a masterpiece reflection, Simone. I enjoyed reading it very much. Very sober and deep and terribly honest!
