## Quantum computers: Grover’s algorithm

I’ve been meaning to get my head around the idea of a quantum computer for a while now and, since my mathematical energy is currently reduced to clearing out my ETH office, I thought I’d do some reading and find out more. I leafed through my dusty copy of Wikipedia [Wik], picked up Kitaev et al [KSV] from the library and turned to chapter two…

I think the easiest way to illustrate how a quantum computer differs (functionally) from a classical computer is by explaining an algorithm which only makes sense for quantum computers and which really outperforms a classical algorithm for the same task (that is the point of quantum computers, after all). The first algorithm they explain in [KSV] is called Grover’s algorithm and it performs the task of searching a database. Wikipedia [Wik] has a really nice exposition too, but I was initially confused by what they both call a “quantum oracle” (sounds like something from Star Trek TNG). I tried to explicitly avoid that in what I said below.

**TLDR:** Classically you have to look through all $N$ elements of a database until you find the right one (so runtime increases linearly in $N$); Grover’s algorithm has a surprising runtime of order $\sqrt{N}$ to do the same thing, using clever ideas from quantum mechanics.

## The problem

You have a collection of $N$ objects you want to store in a database. The objects can be either black or white. Each object has an identification number between 1 and $N$ and a colour. For simplicity assume that there’s only one white object in the database and you want to find out its identification number $k_0$. A classical computer would work through the objects in the database one by one and check each one to see if it were white. Assume that the probability that the $k^{\text{th}}$ object is white is $1/N$; then the expected runtime for this classical algorithm would be

$$\sum_{k=1}^{N} \frac{k}{N} = \frac{N+1}{2},$$

which is asymptotically linear in $N$ (i.e. if you have lots and lots of objects then quadrupling the number of objects quadruples the runtime).
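The classical baseline is easy to check empirically. Here is a minimal Python sketch (all names are my own invention) that estimates the expected number of checks by repeatedly hiding one white object in a random slot:

```python
import random

random.seed(0)

def classical_search(colours):
    """Scan the database until the white object turns up; return how many checks it took."""
    for checks, colour in enumerate(colours, start=1):
        if colour == "white":
            return checks
    raise ValueError("no white object in the database")

N = 1000
trials = 20000
total = 0
for _ in range(trials):
    colours = ["black"] * N
    colours[random.randrange(N)] = "white"  # the one white object, placed uniformly
    total += classical_search(colours)

print(total / trials)  # empirically close to (N + 1) / 2
```

The average lands near $(N+1)/2$, matching the sum above.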

Grover [Gro] showed that a quantum computer could do better, with a runtime which goes asymptotically like $\sqrt{N}$ (quadrupling the number of objects only doubles the runtime).

## So what is a quantum computer?

Of course I don’t know how to build a physical quantum computer. If I did I’d probably have your bank account details by now. But let me explain how to mathematically model quantum computers: how they manipulate and store data. Classically, you store data as strings of 0s and 1s. On a quantum computer you store data as vectors. A “bit” of a string (i.e. a place where a 0 or 1 could go) is now replaced by a “qubit” – a two-dimensional complex vector space with a basis $|0\rangle, |1\rangle$. Obviously you *could* represent 0s and 1s by pointing in those directions, but you can now point in a whole host of other directions, linear combinations of the vectors $|0\rangle$ and $|1\rangle$. Note that the funny notation $|x\rangle$ is standard in quantum mechanics, was introduced by Dirac and makes it easy to talk about vectors $\langle y|$ in the dual space and the number you get by evaluating $\langle y|$ on $|x\rangle$ is $\langle y|x\rangle$.

A string of bits is replaced by the tensor product of qubits (i.e. of vector spaces). In Dirac’s notation we have a nifty trick for writing tensor products:

$$|x_1\rangle \otimes |x_2\rangle \otimes \cdots \otimes |x_n\rangle = |x_1 x_2 \cdots x_n\rangle.$$

Let’s write $T$ for the tensor product of all our qubits (this is now a $2^n$-dimensional complex vector space) and call it our **state space**. So a string 0011101 can be transformed into a vector $|0011101\rangle$, but vectors can be added in all sorts of ways and we end up with points in our state space which don’t correspond to classical data strings.

Ideally, $T$ would be realised as the Hilbert space of some explicit quantum mechanical system, but remember we’re only interested in how to model it mathematically (which is much easier).
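To make the state space concrete, here is a small numpy sketch (the helper name `ket` is mine) that builds $|0011101\rangle$ as an iterated Kronecker product of qubits:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])  # the basis vector |0>
ket1 = np.array([0.0, 1.0])  # the basis vector |1>

def ket(bits):
    """Build |b1 b2 ... bn> as an iterated Kronecker (tensor) product of qubits."""
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, ket1 if b == "1" else ket0)
    return v

v = ket("0011101")
print(v.shape)  # (128,): the state space of 7 qubits is 2^7-dimensional
```

The resulting vector has a single nonzero entry, reflecting the fact that a classical string picks out one basis direction; generic vectors in $T$ have many nonzero entries.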

## Quantum algorithms and measurements

Instead of getting a Turing machine to work along a classical data string manipulating the 0s and 1s according to some program, at each timestep our quantum computer will apply a unitary transformation to the state space $T$.

After some sequence of unitary transformations has been applied (which we think of as a “quantum algorithm”), we measure the system. Now measurement is a tricky thing in quantum mechanics and I want to avoid that discussion right now. But at least we have to specify what we are measuring. Well, suppose we just had one qubit and we applied a load of unitary transformations to it (just unitary 2-by-2 matrices). If we started off with some vector, say $|0\rangle$, and ended up with another one, say $a|0\rangle + b|1\rangle$, we really want to get back to basics and know if we’ve got a 0 or a 1. But that doesn’t make sense any more! It’s quite possibly a superposition of the two. “Measuring” the 0- or 1-ness of the qubit in a state $a|0\rangle + b|1\rangle$ means:

- we consider the operator which projects the qubit onto the subspace spanned by $|1\rangle$,
- this has eigenvalues 0 and 1 with eigenspaces spanned by $|0\rangle$ and $|1\rangle$ respectively,
- when we measure the system we cause it to collapse into one or the other eigenspace and the number we measure is 0 (with probability $|a|^2$) or 1 (with probability $|b|^2$),
- in order to ensure that all probabilities sum up to 1 (which they surely should!) we need to normalise our states to have norm 1, i.e. $|a|^2 + |b|^2 = 1$.

As I said, I don’t want to justify this – we can treat it as axiomatic or we can go and read Bohm. Either way, we end up getting a probabilistic algorithm (i.e. we have to run the algorithm many times and only afterwards can we guess what $a|0\rangle + b|1\rangle$ was by looking at the probabilities).
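This measurement rule is easy to mimic on a classical computer. Here is a small numpy sketch (names are my own) that samples 0/1 outcomes with the probabilities $|a|^2$ and $|b|^2$ from the bullet points above:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, shots):
    """Sample 0/1 outcomes from a normalised qubit state a|0> + b|1>."""
    probs = np.abs(state) ** 2            # outcome probabilities |a|^2 and |b|^2
    assert np.isclose(probs.sum(), 1.0)   # the state must have norm 1
    return rng.choice([0, 1], size=shots, p=probs)

state = np.array([1.0, 1.0j]) / np.sqrt(2)  # an equal superposition
outcomes = measure(state, shots=10000)
print(outcomes.mean())  # roughly 0.5: each outcome has probability 1/2
```

Note that a single run tells you almost nothing; only the frequencies over many shots recover $|a|^2$ and $|b|^2$, which is exactly why quantum algorithms come out probabilistic.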

## Grover’s algorithm

### State space for the database

First we encode our database into a vector space. Remember our data consists of strings of length $n$ (identification numbers for the $N = 2^n$ objects) and colours (black or white) for each object (yes I know they’re not colours, it’s a figure of speech).

Introduce basis vectors $|k\rangle$ for each identification number $k$ and $|b\rangle$, $|w\rangle$ for the colours. Form the tensor product (state space) so that a general basis element is something like

$$|k\rangle \otimes |c\rangle, \qquad c \in \{b, w\}.$$

### Unitary transformations for the algorithm

Let $U$ be the unitary operator

$$U(|k\rangle \otimes |c\rangle) = \begin{cases} -|k\rangle \otimes |c\rangle & \text{if } c = w, \\ \phantom{-}|k\rangle \otimes |c\rangle & \text{if } c = b, \end{cases}$$

(writing $|k\rangle$ for an arbitrary identification number) and let

$$|\xi\rangle = \frac{1}{\sqrt{N}} \sum_{k=1}^{N} |k\rangle \otimes |c(k)\rangle,$$

where $c(k)$ denotes the colour of object $k$ and the coefficient $\frac{1}{\sqrt{N}}$ is for normalisation (we want all states to have norm one). Finally we define

$$V = 2|\xi\rangle\langle\xi| - 1.$$

The funny notation needs another word of explanation: $|\xi\rangle\langle\xi|$ means the operator sending a vector $|\eta\rangle$ to $\langle\xi|\eta\rangle\,|\xi\rangle$ (which is just a rescaling of $|\xi\rangle$ by the number $\langle\xi|\eta\rangle$).

These (U and V) are the unitary transformations we’ll use in Grover’s algorithm.

Notice that they’re both reflections: $U$ reflects in the hyperplane orthogonal to all white vectors and $V$ acts as $-1$ on the hyperplane orthogonal to $|\xi\rangle$ and preserves the vector $|\xi\rangle$.
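Both reflection properties can be checked numerically. In the numpy sketch below I restrict to the $N$-dimensional span of the database vectors (absorbing the colour factor into a sign on the white vector, as the 2-dimensional analysis below will do anyway); the database size and the white object’s index are made up:

```python
import numpy as np

N, k0 = 16, 7  # database size and the white object's index (both made up)
I = np.eye(N)

# U: reflection that negates the white vector and fixes the orthogonal hyperplane
U = I - 2 * np.outer(I[k0], I[k0])

# |xi>: the normalised sum of all database vectors; V = 2|xi><xi| - 1
xi = np.ones(N) / np.sqrt(N)
V = 2 * np.outer(xi, xi) - I

print(np.allclose(U @ U, I), np.allclose(V @ V, I))  # reflections square to 1
print(np.allclose(V @ xi, xi))                       # V preserves |xi>
```

Squaring to the identity is the hallmark of a reflection, and it also confirms both matrices are unitary (each is real symmetric with $M^2 = 1$, so $M^\top M = 1$).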

### The algorithm

We apply the composite transformation $VU$ to $T$ and specifically look at what happens to the vector $|\xi\rangle$ which we defined above as $\frac{1}{\sqrt{N}} \sum_{k=1}^{N} |k\rangle \otimes |c(k)\rangle$.

What is the operation $VU$? It’s a composition of two reflections and hence it’s a rotation. To understand exactly what’s rotating where, let’s write $|\beta\rangle$ for the projection of $|\xi\rangle$ to the subspace spanned by black vectors (normalising to have norm 1) and restrict attention to the 2-dimensional subspace spanned by $|\beta\rangle$ and the vector $|w_0\rangle = |k_0\rangle \otimes |w\rangle$, corresponding to the unique white vector in our dataset (remember I told you a long time ago that $k_0$ is the identification number of the unique white object!).

Now $|\xi\rangle$ is a linear combination of $|\beta\rangle$ and $|w_0\rangle$ by construction, say

$$|\xi\rangle = \cos(\theta)\,|\beta\rangle + \sin(\theta)\,|w_0\rangle.$$

Moreover, we know that the coefficient of $|w_0\rangle$ in $|\xi\rangle$ is $\frac{1}{\sqrt{N}}$, so

$$\sin(\theta) = \frac{1}{\sqrt{N}}.$$

The two reflections $U$ and $V$ preserve the 2-dimensional subspace spanned by $|\beta\rangle$ and the vector $|w_0\rangle$. Indeed, reflecting using $U$ gives

$$U|\xi\rangle = \cos(\theta)\,|\beta\rangle - \sin(\theta)\,|w_0\rangle$$

and then using $V$ (exercise, very clear when you draw the picture!) gives

$$VU|\xi\rangle = \cos(3\theta)\,|\beta\rangle + \sin(3\theta)\,|w_0\rangle.$$

In other words, $VU$ is a rotation by $2\theta$ towards $|w_0\rangle$. If we rotate $R$ times then the $|w_0\rangle$-component of $(VU)^R|\xi\rangle$ is

$$\sin((2R+1)\theta)$$

and (remembering our discussion about measurement) the square of this quantity

$$\sin^2((2R+1)\theta)$$

has an interpretation as the probability that we get the right identification number for the white object when we measure the identification number in this state (i.e. after rotation). When $N$ is large, $\theta$ is very small and we can pick $R$ to make $\sin^2((2R+1)\theta)$ very close to 1. Indeed, when $N$ is enormous, $\theta \approx \sin(\theta) = \frac{1}{\sqrt{N}}$, so that it’s clear that we need $(2R+1)\theta \approx \frac{\pi}{2}$, i.e.

$$R \approx \frac{\pi}{4}\sqrt{N}.$$

Note: we don’t want to do any more rotations than this, or else we start rotating **away** from $|w_0\rangle$!
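The whole rotation argument can be run numerically. This numpy sketch (again working on the $N$-dimensional span of database vectors, with made-up values of $N$ and $k_0$) iterates $VU$ about $\frac{\pi}{4}\sqrt{N}$ times and reads off the success probability:

```python
import numpy as np

def grover_success_probability(N, k0):
    """Rotate |xi> by (VU)^R and return the probability of then measuring k0."""
    I = np.eye(N)
    U = I - 2 * np.outer(I[k0], I[k0])       # negate the white vector
    xi = np.ones(N) / np.sqrt(N)
    V = 2 * np.outer(xi, xi) - I             # reflect through |xi>
    R = int(round(np.pi * np.sqrt(N) / 4))   # about (pi/4) sqrt(N) rotations
    state = xi
    for _ in range(R):
        state = V @ (U @ state)
    return state[k0] ** 2                    # sin^2((2R+1) theta)

p = grover_success_probability(N=1024, k0=123)
print(p)  # very close to 1 after only ~25 rotations
```

For $N = 1024$ this uses only about 25 applications of $VU$, versus an expected 512 checks classically.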

## Conclusion

We only had to do Grover’s rotation about $\frac{\pi}{4}\sqrt{N}$ times to get the right answer with high probability, which is a big saving – if we quadruple the number of objects in the database then classically we quadruple computing time, but with Grover’s algorithm we only double it!

Of course, we need to repeat the experiment many times to get a probability distribution and pick the identification number which seems to occur with a frequency

$$\sin^2((2R+1)\theta),$$

but we can do this a fixed number of times (independently of $N$) because the probability that we get the right answer is improving with $N$. All in all, we quadratically save computational effort and it’s all thanks to the tricksy use of (purely formal) quantum mechanics.

Of course, it’s not clear to me how you would really implement this algorithm (How would you start the quantum computer in the state $|\xi\rangle$? How would you actually do the unitary transformations?) so we should really treat it as a mathematical abstraction until further notice. However, it’s a simple and surprising illustration of the power of quantum mechanics.

This post was brought to you by the numbers 0 and 1 and the letter Q.

## References

[Gro] L. Grover, “A fast quantum mechanical algorithm for database search”, Proceedings, 28th Annual ACM Symposium on the Theory of Computing (STOC), May 1996, pages 212-219, arXiv: quant-ph/9605043

[KSV] A. Y. Kitaev, A. H. Shen, M. N. Vyalyi, “Classical and Quantum Computation”, Grad. Stud. in Math. Vol. 47 (2002) AMS, Providence

[Wik] Wikipedia (13.08.2012) http://en.wikipedia.org/wiki/Grover%27s_algorithm
