Cambridge University Science Magazine
WHAT IS REALITY?

This question has plagued philosophers for millennia, and now scientists face it too. A cardamom bun is my default order, yet you may find it bitterly herbal. Sitting in my garden with my bun, I am focused on the expanse of clouds above me. I hear the swoop of a bee buzzing past my ear. I am dimly aware of the heat of the sun and the tickle of grass, but I am oblivious to the electromagnetic waves surging from the creatures around me.

Not all of reality is available to us, and that which is – like the taste of cardamom – we disagree on. The trouble is, the world is ‘out there’, and we are ‘in here’, with our brains gatekeeping the exchange. My reality is not yours, and it is certainly not that of a bat, a bee or a boa constrictor. Why? To address this, neuroscientists are decoding the ways that organisms sense and make sense of the information coming from ‘out there’. Now, the question is not ‘what is reality?’ but ‘how do we build our own realities?’

A LIMITED VIEW

We only sense a few select properties of our environment. For us – our little Homo sapiens branch of the evolutionary tree – building a picture of our surroundings is primarily a visual task. The importance of vision for humans, at least in Western cultures, is reflected in our language. We speak of seeing the ‘bigger picture’, blind spots and visionaries.

But for the bat, it is a soundscape; for the hunting platypus, an electric map. All creatures use a different complement of senses to navigate their worlds, some detecting physical properties that we can’t even dream of. These other sensory worlds are just as rich as ours, yet with such an emphasis on vision we can’t shake a sense of pity when imagining those without it. We are utterly unable to disentangle our minds from our experiences, so to us, those rich worlds are dark, empty spaces.

Fifty years ago, as part of his work addressing the mind-body problem, American philosopher Thomas Nagel wrote the seminal paper What Is It Like to Be a Bat? He argued that we all face a fundamental limitation in our perspectives, arising from our select sensory abilities, that prevents us from truly considering what it is like to be another creature.

‘In so far as I can imagine this (which is not very far), [the facts] tell me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task.’

Nagel perfectly illustrates how the boundaries of our sensory experience draw symmetrical boundaries around our consciousness, but we didn’t always recognise the limits of our experience. Another fifty years before Nagel, Baltic German scientist Jakob Johann von Uexküll challenged the prevailing belief that creatures without the human sensory organs must lack a true understanding of the world. Von Uexküll was studying the sensory systems of ‘basic’ organisms like ticks, worms and amoebae. He noted that they were still aware of their environments despite missing the organs we use to sense the world. Ticks, for example, could use the smell of butyric acid oozing from mammalian pores as a cue to drop from their perch as an unsuspecting potential host passed underneath them. Finely tuned temperature sensors would confirm their landing was indeed upon a mammal – no need for eyes or ears. From his observations, von Uexküll proposed the concept of an ‘umwelt’ – the unique way in which the world is experienced by a particular creature.

SENSING OUR WORLD

These ‘umwelten’ arise from the unique combination of sensory hardware each creature has. In humans we tend to think about five classical senses: sight, sound, taste, touch and smell. Aristotle proclaimed these ‘five and only five’ in De Anima, and they seem to have stuck with us, besides the occasional foray into the supernatural with that mysterious ‘sixth sense’. We have far more than five, or even six, though – proprioception (awareness of your body in space), the vestibular system (balance and sense of movement) and all those forms of interoception that tell us we’re hungry, tired and more – but even within those classic five, there is much debate regarding the way we classify them.

We can simplify senses into chemosenses (those involving detection of chemical substances, like taste and smell) and physical senses (those measuring properties of our environment, like pressure, temperature and light).

On the other hand, we can think about each sense as an array of many others. Under the umbrella of touch, we have different types of receptors to detect pressure, itch, temperature, chemical substances, vibrations and tension. Breaking it down this way is where lines begin to blur; how does detecting a chemical on your skin differ from tasting the chemicals of food?

In both cases, nerves – bundles of long, sprawling neurons that fire messages throughout our nervous systems – erupt out onto our body surfaces, exposing themselves to our environment. Many tiny receptors lie at their tips in wait of very particular prey – let’s say capsaicin, the infamous spicy molecule of chillies. When capsaicin binds to its receptor on the exposed neuron, the receptor is forced to change shape. As it bends, a channel opens through which ions flow, activating the neuron and then, like dominoes, the rest of the circuit.

Capsaicin can bind to these receptors on the tongue, where it conjures a spicy taste. On the skin, the exact same receptors exist and give the sensation of a burn. Funnily enough, the very same receptors are also sensitive to high temperatures – hence, once again, the burn. The more faint-hearted among us would agree that the ‘taste’ of chilli is more of a burn, reminding us that it’s the same pathway underlying all these sensations.
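
If it helps to picture the logic, here is a minimal sketch of such a polymodal receptor: a single toy ‘channel’ that opens either when capsaicin is bound or when the temperature passes a threshold, so the downstream neuron reports the same burn either way. The threshold and values are illustrative assumptions, not physiological measurements.

```python
# Toy model: one receptor type that opens for either chemical or thermal input,
# so the neuron downstream cannot tell chilli from heat.
# All values are illustrative assumptions, not measurements.

HEAT_THRESHOLD_C = 43.0  # rough activation temperature often quoted for these channels


def channel_open(capsaicin_bound: bool, temperature_c: float) -> bool:
    """The receptor changes shape (opens) for either trigger."""
    return capsaicin_bound or temperature_c >= HEAT_THRESHOLD_C


def percept(capsaicin_bound: bool, temperature_c: float) -> str:
    """The downstream circuit only sees 'channel open', so both triggers feel like a burn."""
    return "burning!" if channel_open(capsaicin_bound, temperature_c) else "nothing"


print(percept(True, 25.0))   # chilli on the tongue -> burning!
print(percept(False, 50.0))  # hot pan on the skin  -> burning!
print(percept(False, 25.0))  # neither              -> nothing
```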

Clearly there is some untangling to do with the way we categorise and define our sensory biology. But sensing is just the beginning.

FINDING MEANING IN THE NOISE

For many of our fellow creatures, these sensory systems are simple circuits designed to give yes-or-no outputs. This is not because they are behind in evolution, but because, in their niches, these binary responses are sufficient for survival.

Hydra are small creatures related to jellyfish and anemones. You can find them clinging to weeds or rocks at the bottom of a pond, tentacles wafting above them. They have elementary nervous systems, perfectly adapted for purpose. Their tentacles are covered in tiny harpoons and ‘trigger’ hairs. When a creature passes by, causing these hairs to bend, specialised receptors that respond to mechanical pressure activate a local circuit which in turn causes a harpoon laced with a powerful toxin to be launched at their target.

Sensing is the function of hardware. It is measuring the properties of your environment, like the hydra detecting movement nearby; the number of photons (light particles) colliding with your retina at a wavelength of 560 nm; or guaiacol molecules surfing the café air waves to drift into your nostrils.

But none of these measurements have a meaning attached: those 560nm photons don’t tell us that there’s a red light ahead and so we should slow down, and guaiacol alone certainly doesn’t make us think of coffee. This is where the difference between sensing and perception becomes important.

For creatures like the hydra, who live in environments where yes-or-no responses work on average, perception is not necessary. They don’t need to consciously consider their actions; they just act. We rely on this sometimes, too: touch a hot pan and your hand will jerk back before you even realise it. This reflex arc is a very short and simple circuit that passes from skin to muscle via a relay in the spine, with no need for input from the brain. Accordingly, there is no associated conscious decision to pull back. But in complex environments like ours, we more often need to put together several pieces of information to make decisions – this is where the conscious experience, or perception, comes in. If the cone cells in our retina brought us to a halt every time they detected the colour red, we’d soon be in trouble. We need to apply meaning and context to this kind of sensory information to weigh up the costs and benefits of acting on it.

To find meaning in these sensory signals, countless measurements across our sensory systems are organised, integrated and then informed by our past experience to make predictions about our surroundings. Et voilà: now, we have constructed reality.

A ROUGH TRANSLATION

Constructed is the key word. Acknowledging that our reality is not a direct representation of our world is old news; indeed, according to von Uexküll’s ‘umwelt’ theory, we cannot attempt to distinguish the mind and the world because the world is only interpreted through the mind.

What is new are the powerful tools that allow us to investigate how we represent our world, and how the way we experience it may not just differ from species to species, but also from person to person. We can record individual neurons chattering away in their circuits, look at patterns of activity across the brain and even build computer models to test our predictions.

Perception has proved to be one of those sum-greater-than-the-parts phenomena which equally amaze and frustrate researchers. To study a component of the system, it must be isolated, but by isolating it, it may behave differently. This trade-off between mechanistic detail and physiological relevance is omnipresent in science. We try to tackle it by using many different approaches and piecing together all of the findings at the end. So, what have we learned so far?

The raw materials for perception arrive through the sensory systems. As we’ve seen with the example of capsaicin, sensory neurons are primed to detect precise events or conditions using specialised receptors.

To generate a conscious experience, those messages need to reach your cortex (the outer surface of your brain, involved in ‘higher level’ processes like thought, reasoning and memory), but sensory neurons don’t send their messages there directly. It’s a pass-it-on game from neuron to neuron, and this relay is crucial because every connection provides an opportunity to modulate the message or integrate it with others.

ENCRYPTING AND DECIPHERING

Following circuit activation, the first step towards perception is called sensory coding. Different aspects of a sensory event – the intensity, duration, location and so on – are encoded in precise features of neural activity. For example, in the retina, brightness is relatively simple to encode: the more photons that hit the receptor, the greater the activation of the sensory neurons, and so the more signals are fired. By comparing firing frequency against a baseline, an estimate of relative brightness is made. Colour can be calculated from the ratio of activity in pathways branching from blue, green or red receptors. Spatial properties like orientation and location are computed later in the cortex by maps of neurons which correspond to the space around you; by integrating visual signals with information about your own position, each mapped neuron is tuned to represent a particular real-life location relative to you. Isolated signals are meaningless, but in the patterns, our brains find meaning.
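
As a rough illustration of the two coding schemes just described – a firing rate compared against a baseline for brightness, and the balance of activity across colour pathways for hue – here is a small sketch. The baseline rate and the ‘largest pathway wins’ readout are simplifying assumptions; real retinal coding is far more elaborate.

```python
# Toy sensory coding: brightness as firing rate relative to a baseline,
# hue from the balance of activity in 'red', 'green' and 'blue' pathways.
# Numbers are invented for illustration.

BASELINE_RATE_HZ = 10.0  # assumed spontaneous firing rate


def relative_brightness(firing_rate_hz: float) -> float:
    """Estimate brightness by comparing firing frequency against the baseline."""
    return firing_rate_hz / BASELINE_RATE_HZ


def dominant_hue(red_rate: float, green_rate: float, blue_rate: float) -> str:
    """Read colour from the balance of cone-pathway activity (simply the largest here)."""
    rates = {"red": red_rate, "green": green_rate, "blue": blue_rate}
    return max(rates, key=rates.get)


print(relative_brightness(40.0))      # four times baseline -> 'bright'
print(dominant_hue(55.0, 12.0, 8.0))  # the red pathway dominates
```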

Laying aside the differences in sensory hardware that give all creatures their various ‘umwelten’, here lies the first opportunity for divergence in our individual human realities.

Our neurons are very malleable, changing throughout our lifetime – sometimes very quickly and temporarily to adapt to a new situation, and sometimes more slowly as we learn. This means there is no predefined response of a neuron to a given input. You know that sensitivity you feel around an injured area, when just a gentle brush becomes painful? Nearby sensory neurons have reacted to inflammation in the area by holding extra positive ions, meaning only a tiny stimulus will push them over the threshold to fire, and when they do, the signal will be stronger than usual. Essentially, those neurons are shouting instead of chattering, so the sensory coding of the gentle brush is thrown off. This is an extreme case to illustrate the point, but even under normal conditions, it is true that from person to person, or from day to day, we encode sensory signals in slightly different ways, so the messages that reach the cortex are not identical.
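
To make that effect concrete, here is a toy sketch of the sensitisation described above: inflammation nudges the neuron’s resting state closer to its firing threshold and turns up its gain, so the same gentle brush now produces a strong signal. The units and numbers are arbitrary assumptions for illustration.

```python
# Toy sensitisation: an inflamed neuron sits closer to threshold and responds
# more strongly, so the same stimulus is encoded very differently.
# All values are in arbitrary illustrative units.

FIRING_THRESHOLD = 10.0


def response(stimulus: float, resting_level: float, gain: float = 1.0) -> float:
    """Return the signal passed onward (0 if the threshold isn't crossed)."""
    drive = resting_level + stimulus
    return gain * (drive - FIRING_THRESHOLD) if drive >= FIRING_THRESHOLD else 0.0


gentle_brush = 3.0
print(response(gentle_brush, resting_level=5.0))            # healthy skin: 0.0 (no signal)
print(response(gentle_brush, resting_level=9.0, gain=3.0))  # inflamed skin: 6.0 (shouting)
```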

GET IN LANE

As these encrypted messages travel along the information highway towards the cortex, they pass through major intersections like the thalamus (an egg-shaped but walnut-sized structure at the centre of your brain). Here, all our senses converge before reaching the cortex – with the exception of smell, whose pathway is so ancient that it predates the evolution of the thalamus.

This is a heavily traffic-lighted junction. As sensory information arrives, signals are paused, and those encoding the same ‘time stamp’ are given the green light to travel up to the cortex together, giving a unified sensory experience. It would be disorientating to watch someone’s mouth move a second before you hear them speak – this filtering helps to smooth it out.

Your cognitive state matters, too. Huge amounts of sensory information must be blocked here to avoid sensory overload: can you imagine experiencing every detail of your environment all at once, all the time? As well as being utterly overwhelmed, we would lose sight of patterns and meaning. But exactly how much and what gets blocked changes depending on factors like attention, memory and sleepiness. Your personal history essentially trains you to take extra notice of, or to block, certain information: now the major divergences start to occur.

YOUR PAST MAKES YOUR PRESENT

We build mental representations of our world at incredibly high speeds. This is only possible because we don’t build reality brick by brick. As the first trickles of information arrive at their respective destinations in the cortex and the final stages of sensory coding begin, we make a prediction.

To make this possible, there are powerful connections between different parts of the cortex, allowing us to compare the incoming snippets of reality with our past experiences and knowledge, looking for a match. How does this really work?

Memories are particularly important. A memory is formed when the connections between the neurons that were active in a given moment or thought are strengthened by being repeatedly lit up together. Those robust connections mean that if just a segment of the circuit is activated, the rest lights up again too, and so the memory is experienced. As snippets of information come in and activate their associated neurons or circuits, one of those memory circuits may also be activated. Those tiny memories are also all linked with each other – for example, the circuit associated with hearing your name is more commonly activated alongside the circuits associated with being in your home or office, but probably not with any circuits that may be activated by being on a remote island. In the office, then, you are primed to hear your name – that circuit is already partially activated by being at work. This means that a sound remotely similar (perhaps the same number of consonants, or intonation) will trigger the activation of your ‘my name’ circuit, but on that remote island it would not.
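
One classic way to capture this ‘a partial cue reactivates the whole circuit’ idea in code is a Hopfield-style toy network, sketched below. It is an analogy for the mechanism described above, not a claim about how the cortex literally stores your name: Hebbian-style learning strengthens the connections between units that were active together, and a fragment of the pattern is enough to light the whole thing back up.

```python
# Toy pattern completion: store a pattern by strengthening connections between
# co-active units, then recover it from a partial cue. A standard Hopfield-style
# sketch, used here purely as an analogy.

import numpy as np


def store(patterns):
    """Hebbian-style learning: strengthen links between units that fire together."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        p = np.array(p, dtype=float)
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w


def recall(w, cue, steps=5):
    """Let a partial cue reverberate until the stored pattern lights back up."""
    state = np.array(cue, dtype=float)
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1
    return state


my_name = [1, -1, 1, 1, -1, 1]     # the 'hearing my name' circuit (+1 active, -1 silent)
w = store([my_name])
partial_cue = [1, -1, 0, 0, 0, 0]  # a similar-sounding snippet activates only part of it
print(recall(w, partial_cue))      # -> the full pattern [ 1. -1.  1.  1. -1.  1.]
```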

Predicting your surroundings based on the first snippets you sense is sort of like playing an infinite game of ‘twenty questions’ with yourself, narrowing down the possibilities of what an object, event or sound could be based on the context, but at lightning speed, faster than you are even aware of the input.

THERE’S ALWAYS A COMPROMISE

We are constantly making these predictions without any awareness of it. We crave patterns and so sometimes we see them where they don’t exist: faces in inanimate objects; depictions of religious figures burnt into toast; shapes in clouds. Why? Essentially, efficiency and speed.

Complex tasks like perception use huge amounts of energy – the average adult uses a fifth of their energy in the brain. By making predictions, we reduce the brain’s workload and so save ourselves some energy. Since survival is a trade-off between saving resources (such as energy) and acting appropriately (such as spotting a predator and running away), prediction is a sensible strategy. Predicting means we don’t have to gather every morsel of information about our environment before understanding it, allowing us to perceive and act faster without overloading our brains. The cost of occasionally tricking yourself is trivial in comparison. Running away because you mistook fluttering grasses for a lurking predator is a much smaller price to pay than being chomped on by said predator because you loitered too long trying to decide if it was real.

OUR OWN LITTLE WORLDS

Since we have largely fenced, bricked and bulldozed ourselves out of these ‘natural order’ ecosystems, our predictive capacities have been repurposed. This doesn’t mean that our brains have evolved in the relatively short time that modern humans have been around. Rather, it is a reflection of how predictions are rooted in learning and memory, and we have now learned and remembered a different set of cues. We make social predictions using facial expressions and have become extraordinarily good at recognising individual faces. Music, clothing and architecture all evoke different expectations and emotions in us thanks to the predictions we make based on experience – a fact that is ingeniously exploited by film makers and artists everywhere.

While some of our assumptions underlying predictions are universal – like the expectation for light to come from above that determines how we understand objects using shadows – others are more individual and give rise to unique experiences of exactly the same event. Listening to grunge in an underground club or gazing at a masterpiece in the Louvre does not evoke the same joy in us all: while one rocks away, another covers their ears. Our experience, beliefs and knowledge all influence the way that ‘raw’ sensory information is organised in our cortex, giving rise to individual perception.

In this case, we can be sure that we don’t share the same experience – you can easily share that Nirvana is your favourite band, and that you don’t understand the fuss over the Mona Lisa. But what about more fundamental sensory events? Try describing what it’s like to see the colour blue, or hear an E-flat note. These experiences of specific sensory inputs are called qualia, and there’s no way to know whether we share them, or rather just the language to describe them. Perhaps one day, if (or when) our brains are uploaded to the web, we will visit each other’s internal worlds as tourists. For now, we are stuck in our own timelines of experience, and so our unique ‘illusions’ of reality.

WHAT IS REALITY?

The world outside your head is layered with information, some of which we detect and some we don’t. Each type of information, be it electromagnetic, mechanical, chemical or otherwise, paints vivid landscapes which we cannot fully see.

What we do have, however, are the tools to measure them. Science has given us a shared objective reality, a glimpse of a world common to all cultures and creatures. We know that snakes ‘see’ an infra-red map to locate prey among the rocks by their body temperature; that spiders can ‘listen’ to the vibrations of their prey twanging their web strings as they flutter by; and that ants can ‘smell’ invisible chemical trails towards food, laid down by the original explorers returning from that source. We do not see, hear or smell these things ourselves, but we know they are there.

Millions of realities exist, each roughly constructed from the same underlying world. There are worlds built with the bricks and mortar of vibration, touch or taste. Ours is largely assembled from photons, but given the raw materials, there are many ways to construct a house. With a brain as inconceivably complex as ours – its processing power often compared to that of our most powerful supercomputers – there are perhaps infinite ways to construct the house. None may be quite true to the blueprint, but each is our own interpretation, true to us.

Von Uexküll was a daydreamer, though, determined not to be fooled by something as trivial as our personal perspective:

“These different worlds, which are as manifold as the animals themselves, present to all nature lovers new lands of such wealth and beauty that a walk through them is well worth while, even though they unfold not to the physical but only to the spiritual eye.”

We may be stuck in our own little worlds, and the confines of human hardware. But our privilege is to know, and to imagine, those vivid landscapes that we can’t see ourselves.

Ailie McWhinnie is a Neuroscience PhD student based in PDN at Cambridge University. Scientifically she is interested in neuroregeneration and so studies the olfactory system of the brain, where regeneration occurs naturally. She has been a fervent writer and science communicator since her undergrad, previously leading EUSci at Edinburgh University, and now Women in Neuroscience UK's blog. Artwork by Thainá Carias.