One of the most fundamental principles in all of modern physics is the ‘Principle of Least Action’. As the name suggests, it roughly translates to “energy is minimised” or, alternatively, “physics is lazy”: the basis of all physics lies in nature being as economical as possible with energy.

To take an example of how we use the principle, consider a soap bubble, or a soap film stretched across a loop of wire. The shape it takes can be predicted using the principle of least action: it is the shape with the lowest energy, and deforming it into any other shape would require more energy. This is why soap surfaces are always smooth, like spheres, and why you can’t make one with sharp edges, like a pyramid or a cube; such a shape would not be the one of least energy.
Furthermore, we can address a question that has puzzled mankind for millennia: why does light travel in straight lines? The principle of least action gives us the answer. In a uniform medium, a straight line is the quickest way to get from one place to another, and for light this path of least time is also the path of least action.

But the principle goes much deeper. It can be used to calculate the equation of motion, which determines how objects move—how everything will proceed from start to end, accounting for objects colliding with each other and all the physical forces at work. Before this principle was established, the laws that govern an object’s evolution had to be written down by hand, relying primarily on observations and good guesses.

One of the most famous examples of an equation of motion is Newton’s 2nd Law, “F=ma”. This says that when a force (F) is applied to an object with a certain mass (m), the object’s acceleration (a) is set by the size of the force. It is a mathematical expression of the obvious statement that things accelerate more if you push them harder and, for the same push, lighter things accelerate more than heavier things.

The principle of least action means that you can derive this equation of motion, and hence can say exactly what will happen without any hocus pocus guesswork. Furthermore, whilst F=ma only works on our human, everyday scale, the principle of least action is so broad it covers all areas of physics—from the way planets and galaxies move around, to the subatomic collisions of electrons and quarks.

Action is similar to energy in many respects, but there is a subtle difference, which is part of the magic of the principle of least action. In physics, we define the total energy of an object as its kinetic energy (due to the object’s motion; faster-moving objects have more kinetic energy) plus its potential energy (due to a force such as gravity; heavier objects have a larger potential energy). In contrast, the action of an object is built from the kinetic energy minus the potential energy, added up over the whole of the object’s journey. Notice the minus sign, which makes the action subtly different from the total energy.
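For readers who like to see this in symbols: writing T for kinetic energy, V for potential energy and t for time (standard textbook shorthand, introduced only for this aside), the two quantities are

    E = T + V, \qquad S = \int (T - V)\, \mathrm{d}t .

The combination T - V inside the integral is known as the Lagrangian, and the integral simply adds it up over the whole of the object’s journey from start to finish.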

When we push a box along the floor, we can weigh it, we can measure its position and speed, but we can’t directly measure its total energy. Instead we calculate the total energy from the above quantities. Similarly, we have to calculate the action, rather than observe it directly. This puts energy and action on an equal footing—the only difference is that we’re more used to the concept of energy, for sociological rather than scientific reasons.

Having defined what the action is, we can now finally define what the principle of least action says: “objects move on paths that minimise the action”.

On the human, day-to-day scale, we can use the principle of least action to derive Newton’s 2nd law, and this in turn tells us everything we need to know. But physics has moved on since the days of Newton. We now know that on the largest scales of black holes, solar systems and galaxies we instead need Einstein’s General Relativity, rather than normal Newtonian gravity (which gives us apples falling from trees). Conversely, on the smallest scales, gravity has little or no effect and Quantum Mechanics comes into play. Other forces must also be considered, such as the electromagnetic and nuclear forces that hold atoms together, which are important at this tiny scale.
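To sketch how that derivation works on everyday scales (using the notation of the earlier aside, and skipping the calculus-of-variations step that does the real work), consider an object of mass m moving along a line, with position x(t) and potential energy V(x). Its action is

    S = \int \left( \tfrac{1}{2} m \dot{x}^{2} - V(x) \right) \mathrm{d}t ,

and requiring this to be as small as possible for the path actually taken leads to

    m \ddot{x} = -\frac{\mathrm{d}V}{\mathrm{d}x} = F ,

which is precisely F = ma, since the force is minus the slope of the potential energy.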

Although these are completely different areas of physics, operating on vastly different scales, remarkably we can use the same principle of least action to derive the appropriate equations of motion in each case, no matter what scale we’re working at. This means we only have to make one simple assumption—the principle of least action—rather than assuming all the equations of motion and putting them in by hand. In fact, the familiar equations of motion turn out to be special cases that physicists of the past arrived at by good guesswork. Philosophically, assuming only the principle of least action is much more satisfying, and it makes physics much simpler.

Chemistry is, at its heart, a study of energy. Nature as we observe it arises from energy minimisation at the atomic and molecular level. As such, in order to predict what happens in a chemical reaction, we simply need to consider the relative energies of all of the chemicals involved: the combination with the lowest total energy will be the products we obtain. The ability to understand these interactions lies behind a plethora of scientific advances, such as the development of new drugs, the design of new and more efficient ways to produce energy, and the modelling of natural processes like the climate.
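In its crudest form, this recipe is simply a comparison of totals. If E denotes the calculated (or measured) energy of each substance, a reaction is energetically favoured when

    \Delta E = \sum E_{\text{products}} - \sum E_{\text{reactants}} < 0 ,

that is, when the products sit lower in energy than the starting materials. (The full picture also involves entropy, temperature and how fast a reaction can actually proceed, part of the wealth of detail described below.)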

What might seem like a very simple fact—that chemical entities, given enough time and energy, will always tend towards the state and structure with the lowest total energy—actually hides a wealth of detail and complexity that has enabled chemistry to advance so far in the last century. Crucially, time and time again, both in nature and in the laboratory, we observe materials and molecules that don’t seem to make sense because they don’t occur in their lowest energy state. Trying to understand these many exceptions is what leads us to advance our knowledge of the universe.

With the advent of more powerful computing, chemistry has begun employing new high-tech tools to help researchers predict more accurately the complex and detailed structures and properties of inorganic materials (roughly speaking, those not built around carbon-hydrogen bonds). The Materials Project (formerly the Materials Genome, before that name was requested by President Obama for a policy initiative) is an exciting new pursuit from the Massachusetts Institute of Technology. Prof. Gerbrand Ceder and his team are building a massive database that aims to catalogue every known inorganic material.

Using computational approaches developed over the past decade, the optimal structure and energy of each material can be determined using a method known as density functional theory (DFT). This technique refines and improves predictions of how a material is structured, based on the concept of energy minimisation. Given a rough starting structure approximated from experimental data, the computer can relax it into a much more detailed picture of how the atoms are arranged. In theory this gives the most accurate available prediction for the actual structure of the substance being studied, which can then be stored and recorded for use in experimental models.
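The calculations behind the Materials Project are vastly more sophisticated, but the core idea, adjust the atomic positions until the computed energy stops decreasing, can be illustrated with a toy model. The sketch below is purely illustrative and is not DFT or Materials Project code: it finds the lowest-energy separation of two atoms interacting through a simple Lennard-Jones pair potential, using a general-purpose numerical minimiser.

```python
# Toy illustration of 'structure relaxation' by energy minimisation.
# NOTE: this is not density functional theory; the quantum-mechanical
# energy is replaced by a simple Lennard-Jones pair potential between
# just two atoms (arbitrary units).
from scipy.optimize import minimize

def lennard_jones_energy(r, epsilon=1.0, sigma=1.0):
    """Energy of two atoms separated by a distance r."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Start from a deliberately poor guess at the bond length...
initial_separation = 2.5

# ...and let a generic minimiser slide downhill in energy.
result = minimize(lambda x: lennard_jones_energy(x[0]), x0=[initial_separation])

print(f"Relaxed separation: {result.x[0]:.3f}  (exact answer: {2 ** (1 / 6):.3f})")
print(f"Minimum energy:     {result.fun:.3f}  (exact answer: -1.000)")
```

In a real DFT calculation the energy being minimised comes from an approximate solution of the quantum mechanics of all the electrons, and the variables are the positions of every atom in the crystal, but the logic is the same: the predicted structure is the one the computer cannot lower in energy any further.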

Once you know the energy of a material, you can actually predict a lot about how that material will behave in various chemical situations. With a little more work, the Materials Project will be able to calculate various thermodynamic and electrical properties of a material, and add these to the database too. Our understanding of chemicals is limited only by our knowledge of their energy and structure. While DFT is still not completely accurate, it is accurate enough to provide a stable theoretical grounding for new experimental investigations. So far, the Materials Project includes over 30,000 materials, complementing other existing collections of chemical structures such as the Inorganic Crystal Structure Database. More are added every day, and the project is constantly updated with new features and results.

A key benefit is that the Materials Project database is open, meaning anyone can access information on materials from the database (try it out at http://www.materialsproject.org/). Interested members of the public can use this fantastic new tool to learn more about the world around them, giving it the potential to join other successful popular science initiatives such as Foldit or Galaxy Zoo. For scientists, full access to the nuts and bolts of the database allows researchers to build on it and run their own sophisticated models, which can help to predict the results of experiments that may be industrially, commercially or medically significant. It can also be used to find the most efficient way to produce new chemicals from easily available substances. One prominent application is working out how to make better lithium-ion batteries for more efficient and longer-lasting energy storage. You can even use the structures of known materials to estimate the properties of a material that hasn’t yet been made.
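For anyone who wants to query the database programmatically, the project also offers a web API with an official Python client library, pymatgen. The snippet below is only a sketch: the exact import path and method names have varied between pymatgen versions, and it assumes you have registered for a free personal API key at materialsproject.org.

```python
# Sketch of fetching one entry from the Materials Project via pymatgen.
# Assumptions: pymatgen is installed, a personal API key has been
# obtained from https://www.materialsproject.org/, and this import path
# matches the installed pymatgen version (older releases used a
# different module path for MPRester).
from pymatgen.ext.matproj import MPRester

API_KEY = "YOUR_API_KEY_HERE"  # replace with your own key

with MPRester(API_KEY) as mpr:
    # "mp-149" is the database identifier for elemental silicon.
    structure = mpr.get_structure_by_material_id("mp-149")
    print(structure.composition.reduced_formula)  # chemical formula
    print(structure.lattice)                      # relaxed unit cell
```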

Whilst it may seem the domain of the physical sciences, energy minimisation has been and continues to be fundamental to the evolution of almost all living things. In the process of natural selection, the mechanism of evolution first described by Charles Darwin, those that survive and successfully pass on their genes to the next generation are the ‘fittest’. Energy is the currency used in this competition for fitness and, if used efficiently, will lead to survival of a species. For most wild animals the supply of energy is limited and highly fought over, so many different methods of energy minimisation have evolved to suit different environments.

Energy saving is practised by even the simplest life forms, such as bacteria, which have especially small genomes. The smaller the genome, the less energy and time are used in copying DNA and producing new bacteria, allowing them to multiply much more rapidly than larger organisms. Evolutionary studies suggest that the first bacteria were highly independent organisms, some of which later evolved to adopt two key ecological strategies of energy conservation, symbiosis and parasitism, whereby they depend upon other living things. Both strategies involve the direct dependence of one organism on another. In symbiosis both partners gain something from the relationship and are generally made stronger; by contrast, a parasite gains from its host but gives it nothing in return, which often weakens, and sometimes even kills, the host.

The genomes of parasitic and symbiotic bacteria are even smaller and more efficient than those of other bacteria. This is because such bacteria tend to acquire deletion mutations, losing parts of their genome very easily, and so the genomes of parasites and symbionts (organisms that live in a symbiotic relationship with another living thing) gradually shrank. The bacteria do not need the genes they lose, because the products of these genes are provided by the host they depend on, which is why parasites and symbionts are able to exist at all. Why waste your own energy duplicating a large genome and producing the molecules you need to survive, if you can get them from your host? In fact, this is how mitochondria and chloroplasts, the energy-processing powerhouses of more complex organisms like plants and animals, are thought to have evolved: through bacteria living and growing in co-operation with one another, an idea known as the endosymbiotic theory.

The endosymbiotic theory was first suggested and described by the Russian botanist Konstantin Mereschkowski in 1905, based on the morphological similarities between chloroplasts (found inside plant cells) and cyanobacteria (free-living blue-green algae), both of which make energy by photosynthesis. However, the theory was not taken seriously until the 1960s, when it was discovered that mitochondria (thought to have evolved from proteobacteria) and chloroplasts possess their own DNA, separate from the genome of the rest of the cell. This very small amount of DNA is not enough for chloroplasts or mitochondria to live alone. It is now thought that over millions of years of co-operative living, the majority of genes were transferred to the host cell, and the symbiotic relationship between bacterial cells gave rise to a new, more complex form of life—eukaryotic cells (the basic units that make up plants, animals and fungi). The eukaryotic cell produces and provides most of the gene products required by the mitochondria and chloroplasts. These ancient bacteria are no longer organisms in their own right, but rather internal cellular structures (organelles) that are functional parts of eukaryotic cells. This is the ultimate achievement when it comes to ‘fitness’—surviving and passing your genome on to the next generation. The cyanobacteria and proteobacteria succeeded through expert energy minimisation. Their genes are now copied and conserved in all complex living things.

These higher organisms are also subject to the same pressures of energy reduction. In the animal kingdom, access to readily available energy, such as food, varies depending on the time of year, the weather, the environment and the number of individuals competing for that energy. Not all the animals in a population will have an equal share of resources, or an equal need for them, so they have to compete. Most of the energy they obtain is needed for maintenance, fighting off disease and basic survival. The rest is used for growth and reproduction, or stored as fat, which aids future survival. Some species have offloaded much of this expenditure onto others. For example, cuckoos have become masters of disguise, able to hide their eggs in the nests of other bird species. The cuckoo eggs avoid detection because they have evolved to mimic the colour and pattern of their favoured hosts’ eggs. Amazingly, if the host fails to detect and reject the cuckoo egg, the cuckoo chick, once hatched, will push the other eggs over the edge of the nest, ensuring that the newborn cuckoo survives in preference to the offspring of the host bird. The bird that made the nest and laid the other eggs will happily feed and defend the cuckoo chick, despite the fact that it often differs widely in appearance from its adopted mother. In this ruthless, parasitic way cuckoo species reduce the energy cost of reproduction by tricking other birds into rearing their young, while the cuckoo parents use their energy on their own survival.

In the ocean, there are several mutualistic symbiotic relationships that reduce energy expenditure in both species. Remora fish swim alongside sharks, eating parasites off the sharks’ bodies and so helping the sharks stay healthy. In return, the fish receive protection from predators and scraps of food when the sharks feed. In this way, the sharks use less energy fighting disease, and the fish save the energy they would otherwise spend defending themselves against predators and finding food. While such relationships are fruitful ways to save energy, the physiology of many animals has also evolved so they can conserve energy on their own.

Keeping our bodies warm in the winter (thermoregulation) is energy-expensive. While we wear thick woolly jumpers and put the heating on, other animals have developed sophisticated ways to save energy on heating. Fat tissue is an ideal body insulator: with less water and blood than other tissues, it conducts heat less easily. Mammals such as whales, seals and polar bears have a thick layer of fat (known as blubber) under their skin to keep warm in the cold polar oceans. In much the same way, most land mammals have a coat of fur that traps air—another very effective insulator.

Thermoregulation is not the only challenge posed by winter; it is also very difficult to find food. To make precious fat stores last the season, many animals go into hibernation, an inactive state with a lower body temperature and reduced metabolism, in which no energy is wasted on moving around, hunting, eating or reproducing. Although mostly associated with mammals in the winter months, a similar summer state called aestivation is practised by a small number of animals, including molluscs, arthropods, reptiles, amphibians and even a couple of mammals, allowing them to conserve water and energy in the heat.

The law of conservation of energy in physics states that the total amount of energy in an isolated system remains constant over time: energy can change form and move around within the system, but it can be neither created nor destroyed. The survival and fitness of a species depends on making the best use of the limited energy available to it. However necessary and obvious this may be, it is fascinating to observe the different mechanisms of energy conservation, evolved by different species, which have allowed them to exist today.

We have seen the power of the principle of least action to reproduce the physics we know about, but physicists are now exploring how changes to these actions can lead to dramatic changes in the theories that follow. This is a way of predicting new and exciting physics that we have never even dreamt of! One thing almost all physicists agree on is that the principle of least action should appear in a fundamental way in any eventual theory of everything. Once we uncover such a theory, we may indeed become masters of the universe, and the key to it will undoubtedly be related to the minimisation of energy.

Hinal Tanna is a 2nd year PhD student in the Department of Oncology

Matt Dunstan is a 2nd year PhD student at the Department of Chemistry

Zac Kenton is a 4th year Undergraduate studying Mathematics