Confirming anything whose defining characteristic is uncertainty is obviously difficult, even when the confirmation involves whether a computer sold two years ago works the way it’s supposed to.
Confirming that the first quantum computer developed and sold for commercial use uses specific quantum phenomena to perform calculations is a pretty complicated matter—especially when one of the phenomena in question is finding the simplest solution to a problem based on changes in probability.
Quantum computers should be exponentially more powerful than standard computers because of the increase in the number of variables that can be considered at once. In standard computing, the smallest piece of information is the bit, which can represent either a one or a zero but must be one or the other. In quantum computing, the smallest unit of information is the quantum bit (qubit), which can represent a one, a zero, both at once, or neither.
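One way to picture the difference is that a qubit's state is described by two amplitudes, one for each basis value, and measuring it yields a one or a zero with probabilities given by those amplitudes. The sketch below is a purely illustrative model (the function name and values are invented for this example), not how any D-Wave hardware represents state:

```python
import math

def measurement_probabilities(alpha: complex, beta: complex):
    """Return (P(0), P(1)) for a qubit state alpha|0> + beta|1>.

    Measurement yields 0 with probability |alpha|^2 and 1 with
    probability |beta|^2; we normalize so the two always sum to 1.
    """
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    total = p0 + p1
    return p0 / total, p1 / total

# A classical bit corresponds to one of the two extremes:
print(measurement_probabilities(1, 0))  # always measures 0

# An equal superposition is "both at once" until measured:
s = 1 / math.sqrt(2)
print(measurement_probabilities(s, s))  # 50/50 chance of each
```

The point of the model is only that a qubit carries a continuum of possible states between "definitely zero" and "definitely one", which is where the extra computational power comes from.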
Manufacturing processors capable of handling a significant number of qubits in the same calculation has proven difficult for most would-be quantum-processor manufacturers. D-Wave’s presentation of its chip as a working 128-qubit processor was a major breakthrough; but competitors, IT analysts and specialists in quantum mechanics have raised questions about how D-Wave’s secret process actually works.
Confirmation of Uncertainty
A team of researchers at USC confirmed June 28 that D-Wave’s 128-qubit processor solved problems using techniques consistent with quantum mechanics rather than classical physics.
The test specifically focused on how the D-Wave processor calculates the simplest solution among a range of options using calculations based on quantum fluctuations—changes in a field based on the appearance and disappearance of energetic particles whose behavior follows patterns outlined in Heisenberg’s Uncertainty Principle, rather than rules of classical physics.
The Uncertainty Principle holds that it’s possible to precisely know only one of two complementary variables about a particle. It is not possible, for example, to know the exact location of a particle while simultaneously knowing its velocity. Algorithms using the process of “quantum annealing” make calculations based on the “tunneling field strength” of a set of options, which boils down to the likelihood that a particle will remain in a particular location (energy state) or tunnel into an adjacent one based on the difference in the level of energy between the two.
Calculations based on non-quantum “simulated annealing” or “thermal annealing” do the same thing, except the likelihood that a particle will move from a location with one level of energy to one with a different level is based on changes in temperature instead. (The difference between the two can be described more precisely according to whether the calculation uses a diabatic or adiabatic process, but both implementation and explanation of those processes come with a much higher “huh?” factor than those already included.)
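The classical, temperature-driven version of the idea can be sketched in a few lines. In simulated annealing, moves to lower-energy states are always accepted, while moves to higher-energy states are accepted with a probability that shrinks as the temperature drops. The toy problem, cooling schedule, and function names below are illustrative assumptions, not a description of D-Wave’s method:

```python
import math
import random

def simulated_annealing(energy, neighbor, start, t_start=10.0, t_end=0.01, steps=5000):
    """Minimize energy() by randomly perturbing the state while cooling.

    Uphill moves are accepted with probability exp(-delta / T), so the
    search can escape local minima early on but settles as T falls.
    """
    state = start
    best = state
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling
        candidate = neighbor(state)
        delta = energy(candidate) - energy(state)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = candidate
        if energy(state) < energy(best):
            best = state
    return best

# Toy problem: a bumpy 1-D landscape with many local minima.
f = lambda x: (x - 2) ** 2 + math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)

random.seed(0)
result = simulated_annealing(f, step, start=-8.0)
print(result)  # a low-energy point near the bottom of the landscape
```

Quantum annealing replaces the temperature-based acceptance rule with tunneling probabilities, which is precisely the distinction the USC team set out to detect.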
Processors designed for quantum computing can also operate using thermal annealing rather than quantum annealing, making it difficult to determine which of the two is at work. That’s why the first two D-Wave quantum computers were sold in 2011, to Google and Lockheed Martin, while confirmation that they use quantum annealing in their calculations rather than simulated annealing was published only Friday in the journal Nature Communications (abstract).
“[Our research] rules out one type of classical model that has been argued as a proper description of the D-Wave machine,” according to the lead author of the USC paper, Daniel Lidar, a professor of electrical engineering, chemistry, and physics. “A lot of people thought that when D-Wave came on the market their machine was just doing that, [but] we ruled that out.”
Doubt about the method D-Wave’s processors use stems from demonstrations the company gave in 2007, in which a 16-qubit processor solved Sudoku puzzles. A number of physicists and computer scientists wondered whether the demo’s process was based on classical physics rather than quantum mechanics, according to Scientific American.
“D-Wave’s technology has been an enigma, in a negative sense,” UC-Davis professor Greg Kuperberg told Wired in 2012.
A previous effort to explain and confirm how the D-Wave machines work, published in May by Amherst College computer science professor Catherine McGeoch, confirmed that the D-Wave machines were extremely fast compared with non-quantum hardware, but didn’t confirm that they were using processes based on quantum physics rather than classical physics. “It’s such a different approach to computation that you have to wrap your head around this new way of doing things in order to decide how to evaluate it,” McGeoch said in the announcement of her findings. “It’s like comparing apples and oranges, or apples and fish.”