Look around. Take any object nearby and ask yourself: ‘What is this made from?’. Then repeat the question of each answer. My keyboard is made from plastic, which is made from polymers, which are made from smaller molecules, which are made from… Once you get to atoms, then nuclei and electrons, you will be peering inside protons and neutrons. At the level of protons, we are talking about particles as small as 10⁻¹⁵ metres across. That’s 100 billion times smaller than the width of a human hair. Though these particles are minuscule, we must go one step further to reach the current frontier of particle physics. In 1968, experiments at the Stanford Linear Accelerator Center revealed that the proton is constructed from smaller particles that we have since named quarks and gluons. These are considered fundamental particles in our current best model of the building blocks of the universe: the Standard Model. Much has been learnt about quarks and gluons in the past 45 years from Lattice Field Theory, an approach based on replacing the spacetime continuum with a finite grid of points, a domain well suited to a computer. Future developments will depend on the growth of our computational capacity and the realisation of new technologies.

Quarks and gluons behave rather differently to other fundamental particles. If two quarks sit very close to each other, the force between them becomes feeble and they rattle around almost freely; pull them far apart, and an attractive force drags them back together. The interplay of these two properties allows quarks to clump together to form multi-particle objects like the proton and neutron, along with a plethora of other compositions. Unfortunately, it is precisely this dynamic that makes theoretical predictions of these clumps, referred to as hadrons, a challenge for the particle physics community. In 1974, Kenneth G. Wilson of Cornell University published a paper titled ‘Confinement of quarks’ that gave rise to a promising new approach for calculations with quarks inside hadrons. The idea was to replace the continuous space around us with a grid, or lattice, of evenly spaced points extending in the three spatial dimensions as well as a fourth dimension for time. To understand why this is sensible, ask yourself the following question: ‘What is the average temperature in the room you are sitting in?’. The temperature will vary throughout the room, and you can’t possibly hope to measure it at every point in finite time. The best you can do is to sample the temperature at a number of different points, then take an average. You will want many samples for an accurate measurement, which means a large average to calculate. A computer is very good at such calculations, using its finite memory to store and process your finite set of samples. The problem becomes tractable with the help of a computer. This discretisation was Wilson’s idea for quarks and gluons, bringing about powerful predictions and measurements of the properties of hadrons.
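To make the temperature analogy concrete, here is a minimal sketch in Python; the temperature field, room size, and grid size are invented for illustration and are not taken from any real lattice code:

import numpy as np

def temperature(x, y, z):
    # A made-up smooth temperature field across a 3 m cube of room.
    return 20.0 + 0.5 * np.sin(x) + 0.3 * np.cos(y) - 0.2 * z

n = 16                             # lattice points per dimension
xs = np.linspace(0.0, 3.0, n)      # evenly spaced sample positions
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")

samples = temperature(x, y, z)     # one reading per lattice site
print(f"average temperature: {samples.mean():.2f} degrees")

# Doubling n gives a finer lattice and a better estimate,
# but the number of sites (and the cost) grows as n**3.

The closing comment hints at the trouble to come: making the grid finer improves the approximation, but the amount of arithmetic grows rapidly with the number of lattice sites.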

Wilson’s paper gave rise to a new field, though practical barriers meant progress would have to wait for computational advances. To measure the temperature in your room, you would like to take many readings throughout the room; the more samples you take, the more trustworthy your final result becomes. When simulating particles on a lattice, we know from theory that the error in our predictions roughly grows with the mass of the particle: a heavier particle means a larger error. There are six types of quark in the Standard Model, from the almost massless to the very heavy. If we want to put the very heavy quarks on the lattice, we are often forced to use a lattice with points close together in order to keep the error small. Doing so necessarily makes the calculation more computationally expensive. With lighter quarks comes a different issue. Imagine you are in a long corridor with a door at either end. However, these are no ordinary doors; walk through one and, as if by magic, you appear at the other end of the same corridor. In fact, as you open one of the doors and look through it, you’ll see yourself at the far side of the corridor, standing at the door. You can think of this scenario as an infinite line of identical corridors, each containing an identical clone of you, wandering back and forth with no way out. It is convenient for our lattice to behave similarly, so that particles never encounter boundaries that they could bump into, potentially spoiling our calculation. Instead, they have infinitely many clones spaced out evenly. If the clones are far apart, they don’t interact much. Light hadrons can communicate with their clones over larger distances than heavy hadrons, so to suppress these unphysical interactions in lattice calculations involving light hadrons, we have to use big lattices, which means a large computation. In summary, for heavy quarks we need very fine lattices, and for light quarks we need very large lattices. These two considerations amount to very expensive calculations that require powerful computers.
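The corridor picture is what physicists call periodic boundary conditions. Here is a toy Python sketch of the idea; the one-dimensional ‘lattice’ and its field values are invented for illustration, and real lattice calculations are far more elaborate:

import numpy as np

L = 8                        # sites along a toy one-dimensional lattice
rng = np.random.default_rng(0)
phi = rng.random(L)          # some field value stored at each site

def right_neighbour(i):
    # Periodic boundary conditions: the neighbour of the last site
    # wraps around to the first, like the corridor whose doors join.
    return (i + 1) % L

# A nearest-neighbour sum over the lattice; no site ever hits a wall,
# it only ever sees its evenly spaced 'clones' on the far side.
coupling = sum(phi[i] * phi[right_neighbour(i)] for i in range(L))
print(f"nearest-neighbour coupling: {coupling:.3f}")

The wrap-around in right_neighbour is the whole trick: the lattice has no edges, only repetition, just as the corridor has no exit, only clones.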

The push for higher precision with finer and larger lattices is frequently driven by the physics that can be observed in experiment. Colliders such as the LHC at CERN, on the France-Switzerland border, provide the means to study hadrons. Searching for discrepancies between the predictions of our physical model and experimental observation is a key strategy in discovering new physics. As ever more precise experiments are designed, greater accuracy is demanded of lattice practitioners. The most powerful supercomputers in the world are exploited to reveal deep insights into the structure of hadrons.

Wilson’s groundbreaking paper coincided with a boom in supercomputer technology, helping to popularise the approach and inspire ever more far-reaching calculations. In 1981, the first lattice calculation of the mass of a hadron was carried out on the VAX 11/780. Though that machine was powerful for its time, a typical modern-day laptop can complete roughly 50 thousand times more instructions per second. Since the VAX 11/780, an exponential growth in computing power has been observed, a phenomenon fondly referred to as Moore’s Law. Where once Moore’s Law seemed to guarantee legendary computational power in years to come, many believe that we are now beginning to witness its gruesome death. Transistors in modern processors are now so small that there are immediate limitations due to the finite speed of light, disruptive quantum effects of small particles, and even the size of atoms.

The future direction of high-performance computing is uncertain, though there are advances in development that have been touted to bring about a new era. One such technology has been literally staring us in the face. The workhorse behind the display of your computer and phone, known as a graphics processing unit or GPU, is a specialised processor designed to quickly manipulate and display images. An image is just a two-dimensional grid of pixels, not too structurally dissimilar to the lattice we wish to use for our hadron calculations, and there is considerable interest in the lattice community in harnessing this efficiency by coupling GPUs with more conventional computing architectures. More speculatively perhaps, the advantages of quantum computers are being explored in the hope of moving past the practical barriers of classical machines. Real-time simulation of hadron collisions, the holy grail of the lattice approach, may well become a reality with the help of quantum computation. The future is exciting for lattice calculations as new technologies expand and come of age; discoveries of new physics may be just around the corner.

Laurence Cooper is a PhD student in theoretical physics at DAMTP, University of Cambridge.