**TUESDAY, 20 MAY 2014**

Quantum computers are regularly heralded as the future of computing, harnessing the power of atoms and elementary particles to perform calculations that today’s computers could only dream of. Quite how this remarkable feat is achieved is usually either obscured by jargon such as ‘qubits’, ‘superposition’ and ‘entanglement’ offered with no further explanation, or dismissed as too complicated for a layman. This article aims to explain how quantum computers work, why they’re faster than classical computers, and why they’re not a replacement for them.

Before we can describe how a quantum computer works, we need to understand today’s classical computers. Currently, computers work by manipulating ‘bits’ of data. A bit is something that can take one of two values, commonly written as 0 or 1. For example, a coin can be either heads or tails, or a light-switch can be on or off. In the case of a computer, the values may be a charged or discharged capacitor in memory, or the presence or absence of a groove on a CD. Computers operate on strings of eight bits, termed a ‘byte’. These bytes form instructions sent to the computer’s processor, directing it to perform functions such as adding two numbers, printing to the screen, or writing to memory.
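The counting behind bits and bytes can be sketched in a few lines of Python (my own illustration, not from the article): a byte is eight bits, and n bits can distinguish 2^n different values.

```python
# Toy illustration: a byte is eight bits, and n bits can
# distinguish 2**n different values.

def to_bits(value, width=8):
    """Return the bit string of `value`, e.g. 42 -> '00101010'."""
    return format(value, f'0{width}b')

byte = to_bits(42)
print(byte)        # '00101010'
print(len(byte))   # 8: one byte is eight bits
print(2 ** 8)      # 256: distinct values a single byte can hold
```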

Quantum computers work fundamentally differently. Rather than using a difference in electric voltage to encode the bit values, they use the physical properties of fundamental particles or atoms. As long as there are two different measurable states, a bit value can be stored.

For example, electrons are sub-atomic particles that have a property called spin. This can be pictured as akin to the rotation of a ball about its axis; just as a ball can rotate clockwise or anticlockwise, electrons can have a spin value of 1/2 or -1/2 (termed ‘up’ and ‘down’). The bit value can therefore be assigned by measuring the spin state of the electron. Another example would be the polarisation state of light. Light travels through space as a transverse wave, oscillating perpendicular to its direction of motion. A wave travelling along, say, the z-axis can oscillate along either the x- or the y-axis. We could therefore store the bit value by whether the light is horizontally or vertically polarised.
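The key point is that any two reliably distinguishable states will do; a minimal sketch (the mappings below are my own illustrative assumption):

```python
# Any two distinguishable physical states can encode one bit.
SPIN_TO_BIT = {0.5: 0, -0.5: 1}          # electron spin up / down
POLARISATION_TO_BIT = {'H': 0, 'V': 1}   # horizontal / vertical light

# Decoding a sequence of spin measurements back into bits:
measurements = [0.5, -0.5, -0.5, 0.5]
bits = [SPIN_TO_BIT[m] for m in measurements]
print(bits)   # [0, 1, 1, 0]
```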

Were the analogy to end there, quantum computing would be no more complicated than classical computing. However, in 1930 theoretical physicist Paul Dirac published the first edition of his book The Principles of Quantum Mechanics. In it, he introduced the world to the revolutionary concept of ‘quantum superposition’ that now forms the backbone of quantum mechanics. Dirac was trying to explain the baffling evidence that light acts as both a wave and a particle, the so-called ‘wave-particle duality’. On the one hand, Thomas Young’s double-slit experiment showed that when light was shone through two parallel slits, it diffracted like a classical wave, with the waves from the two slits interfering and cancelling each other where they were out of phase. In contrast, alternative evidence, such as the emission of electrons from a metal surface when it absorbs light above a threshold frequency (the photoelectric effect), showed that light was composed of discrete particles with definite energy and momentum, termed photons. Dirac considered the problem of a beam of light containing a single photon passing through a double slit. For the light to pass through the apparatus and produce an interference pattern, the single photon must partly go through both slits, with each part interfering with the other: it is in superposition! This idea of wave-particle duality and superposition was later shown not to be restricted to light, but to extend universally to all particles. Quantum superposition therefore holds that a physical system exists in all of its possible states simultaneously, each with some probability of being found when measured.
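The rule described above can be sketched numerically (a minimal illustration of my own, not code from the article): a quantum state assigns each outcome an amplitude, the probability of an outcome is the squared magnitude of its amplitude, and measurement picks one outcome at those odds.

```python
import random

# An equal superposition of the two outcomes '0' and '1':
# each has amplitude 1/sqrt(2), so each has probability 1/2.
amplitudes = {'0': 1 / 2**0.5, '1': 1 / 2**0.5}

# Probability of each outcome = |amplitude|**2.
probs = {state: abs(a) ** 2 for state, a in amplitudes.items()}
print(probs)   # roughly {'0': 0.5, '1': 0.5}

# Measurement collapses the superposition to one definite outcome.
outcome = random.choices(list(probs), weights=list(probs.values()))[0]
print(outcome)   # '0' or '1', 50/50
```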

This phenomenon of superposition becomes even more bizarre when you extend it to particles that originate from the same source or are brought to interact together. Such particles do not exist in superposition independently of each other; rather, their quantum states become ‘entangled’, so that their physical properties can only be described relative to each other. For example, a particle with spin 0 may decay into two particles, each with spin 1/2. Because the total spin must remain zero, the pair cannot both be spin up or both be spin down; instead, being entangled, they exist in a superposition of the two anti-correlated possibilities, up/down and down/up, simultaneously. Neither particle therefore has a spin direction of its own; each is defined only as being opposite the other.
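This anti-correlation can be mimicked with a toy simulation (an illustrative sketch of my own; real entanglement is not classical randomness, but the measurement statistics of this particular state come out the same):

```python
import random

def measure_singlet():
    """Simulate measuring both halves of a spin-0 decay pair.

    Spin conservation means only the two anti-correlated outcomes,
    (up, down) and (down, up), ever occur, each with probability 1/2.
    """
    return random.choice([('up', 'down'), ('down', 'up')])

for _ in range(5):
    a, b = measure_singlet()
    print(a, b)   # always opposite spins
```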

If we return now to the quantum computer, quantum mechanics dictates that in addition to the quantum bit (termed a ‘qubit’) having two measurable states (for example, spin up or down), it exists in a superposition of both states at the same time. It therefore has some probability of being both 0 and 1 at once. This means that the amount of information needed to describe each qubit is significantly larger than for its classical equivalent. Consider a computer consisting of two classical bits. To fully describe the state of the system, only the value of each bit is required, so two values suffice. The corresponding quantum computer has its qubits in a superposition of all four possible states at once: the probability of finding the system in state |00> (for example, both electrons spin down) is a; in state |01>, b; in state |10>, c; and in state |11> (both electrons spin up), d. To fully describe this quantum system, four values (a, b, c, d) are required, one for each possible state. This two-qubit system therefore holds double the amount of information of the classical two-bit system, and each additional qubit doubles the number of values again, so the capacity grows exponentially. A system with 10 qubits takes as many values to describe as 1024 classical bits, while 300 qubits hold more information than there are particles in the universe!
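The counting argument above can be checked directly (a sketch of my own): n classical bits need n values to describe, while a general n-qubit superposition needs one value per basis state, i.e. 2^n of them.

```python
def classical_description_size(n_bits):
    """One value per bit."""
    return n_bits

def quantum_description_size(n_qubits):
    """One value per basis state: 2**n of them."""
    return 2 ** n_qubits

for n in (2, 10, 300):
    print(n, classical_description_size(n), quantum_description_size(n))
# n=2:   2 values vs 4
# n=10:  10 values vs 1024
# n=300: 2**300 is about 2e90, more than the roughly 1e80
#        particles estimated to exist in the observable universe
```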

However, accessing the information stored in these qubits is not a simple matter, as superposition doesn’t automatically make any computation faster than on a classical computer. The quantum computer must be designed, and quantum algorithms written specifically, to exploit the probabilities in the entangled qubits’ superposed states and obtain the desired speedup. Otherwise it is nothing more than a very fancy, but expensive, classical computer containing only a few bits. This severely constrains its applicability to specific problems, such as using Shor’s quantum algorithm to factorise integers. While this doesn’t sound very exciting, widely-used public-key cryptography relies on the intractable time it takes to factorise very large numbers to keep messages encrypted. If quantum computers can factorise quickly, these encrypted messages can be easily read. However, to date the largest integer successfully factorised by a quantum computer is just 143. Much money and research is therefore being invested in this field; in 2013 Canadian company D-Wave claimed to have a 512-qubit computer that solved a Travelling Salesman problem over 3,600 times faster than a classical computer. So, while quantum computers will probably not replace personal computers, they are also not just a proof of concept.
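For contrast with Shor’s algorithm, here is the naive classical approach (a sketch of my own, not from the article): trial division finds the factors of 143 instantly, but its running time grows exponentially with the number of digits, which is precisely what keeps public-key encryption secure against classical attack.

```python
def smallest_factor(n):
    """Classical trial division: try divisors up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n   # n is prime

p = smallest_factor(143)
print(p, 143 // p)   # 11 13
```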

*Simon Watson is a postdoctoral researcher at the Wellcome Trust Sanger Institute*