Quantum Computing: A New Era Begins with Google's Breakthrough
Chapter 1: The Quantum Leap
In 2012, theoretical physicist John Preskill posed an intriguing question: “Is controlling large-scale quantum systems merely really, really hard, or is it ridiculously hard?” As of last week, we have a clearer answer: it is merely really, really hard. A paper mistakenly published online revealed that Google has reached what Dr. Preskill called “quantum supremacy.” In just over three minutes, researchers at Google performed a calculation that Summit, the world's most powerful classical supercomputer, would need an estimated 10,000 years to complete.
The leaked paper describes a credible demonstration of quantum supremacy, a landmark event in the field. It splits the history of quantum computing into two eras: a “before,” in which surpassing the best classical computers was merely a hope, and an “after,” in which it is a demonstrated reality.
Section 1.1: Understanding Quantum Supremacy
Google's experiment revolved around “circuit sampling”: running randomly chosen circuits on its machine and checking whether the bitstrings it produced follow the statistical pattern those circuits dictate. This specialized task was chosen deliberately because it is manageable for a quantum computer while still being verifiable, with some difficulty, by a classical one. The experiment's success suggests that quantum computers could eventually tackle longstanding problems of practical significance, including drug discovery, advances in machine learning, and the breaking of the cryptographic codes that safeguard sensitive information.
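The verification step can be made concrete with a small sketch. Google's paper scores its samples with a statistic known as linear cross-entropy benchmarking; the toy Python version below is an illustration only, not Google's code, and uses a made-up probability distribution standing in for a random circuit's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 12                 # toy size; Google used 53
dim = 2 ** n_qubits

# Stand-in "ideal" output distribution of one random circuit. Real random
# circuits produce an exponential ("Porter-Thomas") spread of probabilities.
probs = rng.exponential(size=dim)
probs /= probs.sum()

def xeb(samples):
    """Linear cross-entropy fidelity estimate: ~1 if the samples follow
    the ideal distribution, ~0 if they are uniform noise."""
    return dim * probs[samples].mean() - 1

good = rng.choice(dim, size=50_000, p=probs)   # a faithful sampler
noise = rng.integers(dim, size=50_000)         # uniform random guessing

print(f"faithful sampler: {xeb(good):.3f}")    # close to 1
print(f"uniform noise:    {xeb(noise):.3f}")   # close to 0
```

A faithful sampler preferentially emits the likelier bitstrings, which the estimator detects; a broken or classical-spoofing sampler that outputs noise scores near zero.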
Subsection 1.1.1: The Mechanics of Quantum Computing
Quantum computers leverage three perplexing principles. The first is “superposition,” the concept illustrated by Schrödinger's iconic thought experiment of the cat that is both alive and dead. Unlike classical bits, which must be either one or zero, “qubits” can exist in a combination of both states. Google's quantum processor operates with 53 qubits, allowing it to represent 2^53, roughly nine million billion, superposed states simultaneously.
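For readers who like code, here is a minimal sketch of that picture in Python with NumPy: a qubit is just a pair of complex amplitudes, and the count of amplitudes doubles with every qubit added.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)               # a plain classical "0"
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal mix of 0 and 1

print(np.abs(ket0) ** 2)   # [1. 0.]: always reads 0
print(np.abs(plus) ** 2)   # [0.5 0.5]: even odds of 0 or 1
print(f"{2 ** 53:,}")      # 9,007,199,254,740,992 amplitudes for 53 qubits
```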
The second principle is “entanglement,” a phenomenon that links quantum particles across both time and space. In contrast to classical computers, where each bit operates independently, quantum computers utilize entangled qubits. This allows mathematical operations on superposed and entangled qubits to affect all of them simultaneously, to varying extents.
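A standard way to see entanglement in this picture is the two-qubit “Bell pair,” sketched below in the same state-vector style (a textbook construction, not Google's circuitry): after a Hadamard gate and a controlled-NOT, the two qubits can no longer be described separately.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # flips qubit 2 only when
                 [0, 1, 0, 0],                 # qubit 1 reads 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(np.array([1, 0]), np.array([1, 0]))  # both qubits start at 0
state = CNOT @ (np.kron(H, I2) @ state)              # Hadamard, then CNOT

print(state)  # ~[0.707, 0, 0, 0.707]: only "00" and "11" remain possible
```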
A quantum calculation begins by addressing qubits individually—setting one to be primarily zero, for instance—and then entangling it with a neighboring qubit to a certain degree. Once this step is complete, the rules of physics govern the evolution of the qubits’ states and connections over time. At the conclusion of the process (though not before, as that would disrupt the calculation), the qubits are examined all at once to derive a result.
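Measurement, the final step, can be simulated the same way: the squared magnitudes of the amplitudes give the odds of each outcome, and one outcome is drawn at random. A sketch, continuing with the Bell pair from above:

```python
import numpy as np

rng = np.random.default_rng(1)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # the Bell pair built earlier

probs = np.abs(bell) ** 2                    # [0.5, 0, 0, 0.5]
outcomes = rng.choice(4, size=10, p=probs)   # ten simulated readouts

print([format(o, "02b") for o in outcomes])  # only '00' and '11' ever appear
```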
The challenge lies in maximizing the probability of selecting the correct answer amid a multitude of incorrect ones. This brings us to the third counterintuitive concept. While classical physics dictates that probabilities must be non-negative—like a 30% chance of rain—quantum mechanics introduces “amplitudes,” which can be negative as well. By ensuring that amplitudes representing incorrect answers cancel each other out, while those for the correct answer reinforce, programmers can confidently zero in on the right solution.
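The smallest example of such cancellation uses two Hadamard gates in a row: the first creates an even superposition, but the second sends the qubit's two paths to “one” with amplitudes +1/2 and -1/2, which cancel exactly. A sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1, 0])

print(H @ ket0)        # [0.707, 0.707]: even odds of 0 or 1
print(H @ (H @ ket0))  # [1, 0]: the two paths to "1" have cancelled to zero
```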
Section 1.2: The Challenges Ahead
While textbooks present this explanation, the reality in the lab is considerably more complex. Superpositions and entanglements are extremely fragile. Even slight disruptions from neighboring molecules can compromise these states and taint calculations. Many quantum computer designs necessitate operating in conditions colder than deep space and require extensive supervision from highly educated personnel to maintain stability.
However, no level of expertise or extreme cold can completely eliminate the errors that may arise. The primary challenge facing quantum engineers is identifying and correcting these errors, especially since many practical applications of quantum computing will demand far more qubits than current devices can offer, thus increasing the likelihood of mistakes. This has prompted significant efforts from established companies like IBM, Intel, and Microsoft, as well as newer players like Rigetti, to develop improved, more reliable technologies.
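The underlying idea of error correction, redundancy plus a vote, can be sketched in a few lines. The classical simulation below uses the three-qubit bit-flip repetition code, the simplest textbook scheme; real proposals, such as the surface code, are far more elaborate, and this is not what any of the companies above actually ships.

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(bit):
    return np.array([bit, bit, bit])       # store the bit three times

def noisy(codeword, p=0.05):
    flips = rng.random(3) < p              # each copy flips with probability p
    return codeword ^ flips

def decode(codeword):
    return int(codeword.sum() >= 2)        # majority vote undoes a single flip

trials = [decode(noisy(encode(1))) for _ in range(100_000)]
print(f"logical error rate: {1 - np.mean(trials):.4f}")  # ~0.007, vs 0.05 raw
```

Encoding cuts the error rate roughly sevenfold here, but only because two simultaneous flips are rare; scaling this protection to thousands of qubits is precisely the hard engineering problem.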
Chapter 2: The Future of Quantum Computing
In conjunction with the race to create better machines, there is a parallel endeavor to craft useful quantum algorithms. A notable example is Shor’s algorithm, which enables the rapid factorization of large numbers into their prime components, posing a threat to cryptographers whose work relies on this task being difficult. However, to truly capitalize on quantum computing's potential, additional algorithms will be essential. The development of these algorithms is facilitated by the fact that many proposed applications—such as drug design and materials science—are inherently quantum in nature. This connection explains why these applications have remained challenging until now.
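The number-theoretic core of Shor's algorithm can even be run classically on a toy case. In the sketch below, all of the quantum speed-up would come from the period-finding loop; everything around it is cheap classical arithmetic.

```python
from math import gcd

N, a = 15, 7   # factor N = 15 using the base a = 7

# Find the period r of a**x mod N. This is the step a quantum computer,
# using interference of amplitudes, performs exponentially faster on
# large N; here brute force suffices.
r = 1
while pow(a, r, N) != 1:
    r += 1

print(f"period of {a}^x mod {N}: r = {r}")   # r = 4
half = pow(a, r // 2, N)                     # 7**2 mod 15 = 4
print(gcd(half - 1, N), gcd(half + 1, N))    # 3 5 -> the factors of 15
```

For a cryptographically sized modulus, that loop is hopeless on classical hardware; a quantum computer finds the period efficiently, which is the whole of the threat to today's public-key codes.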
Despite the excitement surrounding quantum computing, many experts express discomfort with the term “quantum supremacy,” as it suggests a definitive milestone that renders decades of classical computing obsolete. Even with the “before” and “after” distinction that Google's paper represents, realizing practical, error-corrected machines will remain a formidable challenge.
Consequently, it is a common belief that quantum computing will not replace classical computing entirely. The practicalities of operating at low temperatures alone will likely limit this technology's widespread adoption. Governments, large corporations, and affluent universities will undoubtedly invest in their own quantum machines, while others may opt to rent time on devices linked to quantum cloud services. Nevertheless, the overall number of quantum computers is expected to be limited.
This situation is reminiscent of early predictions about classical computing. In 1943, Thomas Watson, then head of IBM, is famously, though perhaps apocryphally, said to have declared, “I think there is a world market for maybe five computers.” That prediction underestimated demand by perhaps a billion-fold.