THURSDAY, 27 MAY 2021

Computer scientists are remarkably good at not telling anyone what they are doing.
Even when their inventions are changing the fabric of our society, you probably won’t hear about it. For instance, imagine picking up your phone and searching for an image on Google. This is an incredible power that we have forged for ourselves. But how does this actually happen? Where is that picture stored? How does our phone know where to find it?
To make things worse, the internet is constantly changing. There are many twists and turns coming up in this story which will profoundly change how the whole world stays connected, but what chance will anyone have to understand them if we don’t know what we already have?
So, in this spirit, take this article as a “Last week on…” catch-up of sorts in the TV show that is The Internet. And get ready for the rest of the season, because it’s coming thick and fast.
Our journey starts by pressing ‘Search’ on a phone to look up a picture. The goal is to reach Google with a request for the picture. But first, that request has to get off the phone.
Wi-Fi is a system developed by Australian radio astronomers in the 1990s. Every modern phone contains a small antenna which can send and receive messages across a certain range of frequencies. Our phones are constantly “listening” for announcements, also known as “broadcasts”, from a Wireless Access Point (WAP) letting them know that there is a way to access the internet. The WAP listens for replies from phones and talks to each of them over different radio frequencies.
This raises the question: what do these ‘messages’ look like? To operate properly, all of these devices need to agree on the order in which information is sent, bit by bit, so that the recipient knows what it is looking at. The agreed layout of this information is called a ‘protocol’, and a single chunk of it is called a ‘packet’. For example, one of the first things included in a packet is its intended destination, known as its ‘IP address’. Translating memorable domain names like ‘google.com’ into IP addresses is the job of the Domain Name System (DNS), a global hierarchy anchored by 13 ‘root server’ addresses scattered across the globe.
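The idea of an agreed, bit-by-bit layout can be sketched in a few lines of Python. The field layout below is invented purely for illustration (real IP headers carry many more fields, such as a source address and checksum), but the principle is the same: both sides agree, in advance, exactly which bytes mean what.

```python
import socket
import struct

# A toy 'protocol': both sides agree that a packet begins with a
# 4-byte destination IP address, then a 2-byte payload length,
# then the payload itself. (Invented layout for illustration only.)
def build_packet(dest_ip: str, payload: bytes) -> bytes:
    addr = socket.inet_aton(dest_ip)          # "8.8.8.8" becomes 4 raw bytes
    length = struct.pack("!H", len(payload))  # 2 bytes, network byte order
    return addr + length + payload

def parse_packet(packet: bytes):
    addr = socket.inet_ntoa(packet[:4])
    (length,) = struct.unpack("!H", packet[4:6])
    return addr, packet[6:6 + length]

pkt = build_packet("8.8.8.8", b"GET /picture")
assert parse_packet(pkt) == ("8.8.8.8", b"GET /picture")
```

Because both functions follow the same agreed layout, the receiver can reconstruct exactly what the sender meant, which is all a protocol really is.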
Internet Service Providers
Our message then makes its way out to an ‘Internet Service Provider’ (ISP). These companies create sprawling infrastructure networks across entire countries, with the sole task of receiving packets and getting them where they need to go. The inner workings of each ISP are well-kept secrets, but in general, they calculate efficient routes to get packets across their network. Each ISP will also form business relationships with other ISPs to share packets. These relationships are critical to the internet’s success, as otherwise every ISP’s users would be cut off from the rest of the world. It also prevents ISPs from attracting users with ‘exclusive websites’, since all data is shared across all ISPs.
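The route-finding that ISPs perform can be sketched with a classic shortest-path search. The toy network below is made up for illustration, and real routers use distributed protocols such as BGP and OSPF rather than one global computation, but the underlying idea of choosing the cheapest path across the network is the same.

```python
import heapq

def shortest_path(graph, src, dst):
    # graph: {router: [(neighbour, cost), ...]}; Dijkstra's algorithm
    dist = {src: 0}
    prev = {}
    queue = [(0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    # Walk back from the destination to recover the route
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# A fictional network of links, weighted by cost (e.g. latency)
network = {
    "phone":       [("sydney", 1)],
    "sydney":      [("singapore", 5), ("los-angeles", 8)],
    "singapore":   [("data-centre", 4)],
    "los-angeles": [("data-centre", 2)],
    "data-centre": [],
}
print(shortest_path(network, "phone", "data-centre"))
# ['phone', 'sydney', 'singapore', 'data-centre']
```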
Once our message has found its way to an ISP, it hops from country to country to make its way towards a Google data centre. Data centres are huge warehouses full of millions of computers (‘servers’), each handling different requests. These data centres, now more commonly known as ‘the cloud’, store data, process search requests, show you your photos, and more.
Data centres across the world are a grand exercise in engineering, networking and computation. They handle billions of users, trillions of requests, and quadrillions of pieces of data, and require enormous teams of engineers to manage them properly.
There and back again
Once Google has processed our request for a photo, how does it reply? It retraces the initial packet’s steps! Every packet carries a return address, its ‘source IP’, so the routers along the way know exactly where to send the reply. Once this connection is established, Google can finally send the photo to your phone, packet by packet.
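In miniature, a reply is simply a packet whose addresses have been swapped. A sketch, where the dictionary fields are invented for illustration:

```python
def make_reply(request: dict, payload: bytes) -> dict:
    # The reply's destination is the request's source, and vice versa,
    # so it can retrace the route with no extra bookkeeping.
    return {"src": request["dst"], "dst": request["src"], "payload": payload}

request = {"src": "my-phone", "dst": "google.com", "payload": b"GET /picture"}
reply = make_reply(request, b"<photo bytes>")
assert reply["dst"] == "my-phone"
```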
Of course, there is a lot of complexity being omitted here. How is this conversation kept private? How are all of the packets, which assemble to make up the picture, sent in the right order? There’s an even more important question though...
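One of those omitted mechanisms, getting packets back into the right order, can at least be hinted at. In TCP, each chunk of data carries a sequence number; a simplified sketch of the receiver’s job (ignoring acknowledgements and retransmission) looks like this:

```python
def reassemble(packets):
    # packets: (sequence_number, chunk) pairs, possibly arriving out of order.
    # Sorting by sequence number restores the original byte stream.
    return b"".join(chunk for _, chunk in sorted(packets))

arrived = [(2, b"lo, wor"), (1, b"hel"), (3, b"ld")]
assert reassemble(arrived) == b"hello, world"
```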
How did we get here?
The most remarkable mechanism here is not any one invention; it is the nature of the endeavour itself. The internal structure of the internet provides engineers with extraordinary permission to place blind faith in each other in the pursuit of ever-larger and ever-faster systems.
This hyper-collaborative structure is a result of careful planning during the creation of ARPANET, a predecessor to the internet. Connecting two computers through an ‘inter-network’ would require enormous amounts of planning, code and infrastructure. But with so many developers attempting to contribute to this new field, how could there possibly be any agreement? A seminal 1974 paper from Vint Cerf and Bob Kahn provided a solution by imposing deep structure on the way the internet functions. It split the internet into ‘layers’, each with a distinct role, and outlined how each layer would communicate with the next. For instance, one layer was responsible for controlling the order of 1s and 0s being sent down copper cables. Another was responsible for handling errors in unreliable networks.
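The layering idea can be sketched directly: each layer wraps the data handed down from the layer above with its own header, and on the receiving side each layer strips exactly its own header, without needing to understand anything else. The text headers below are invented for illustration; real protocols such as TCP and Ethernet use compact binary headers.

```python
# Sending side: each layer adds its own header to the layer above's data.
def transport_send(data: bytes) -> bytes:
    return b"SEQ1|" + data       # toy transport-layer header

def link_send(segment: bytes) -> bytes:
    return b"FRAME|" + segment   # toy link-layer framing

# Receiving side: each layer removes only the header it understands.
def link_recv(frame: bytes) -> bytes:
    assert frame.startswith(b"FRAME|")
    return frame[len(b"FRAME|"):]

def transport_recv(segment: bytes) -> bytes:
    assert segment.startswith(b"SEQ1|")
    return segment[len(b"SEQ1|"):]

wire = link_send(transport_send(b"hello"))
assert transport_recv(link_recv(wire)) == b"hello"
```

The point of the design is that either layer could be swapped out, say, a faster link layer, without the other noticing, which is exactly the freedom the article describes.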
Cerf and Kahn recognised that these divisions were necessary to ensure that development of the internet was structured and sustainable, even if it restricted the choices available to engineers working in each layer. Communicating in ARPANET prior to this was like addressing your post to “the red house near the station” -- Cerf and Kahn had just invented street numbers, street names, and a whole postal service.
Cerf and Kahn chose not to specify how each layer should be implemented. These gaps have since become battlegrounds between companies, fighting to create the fastest systems for each layer.
It also ensured that advancements made within every layer would be felt across the entire internet. For instance, a researcher testing improvements to Wi-Fi and a developer working on Google’s search algorithm will almost certainly never collaborate during their careers. However, if the Google developer takes a year off while Wi-Fi continues to be refined, they may return to find that their product appears faster to users without their having moved a muscle, simply because phones now connect to Wi-Fi faster! Cerf and Kahn’s proposal demands complete faith between engineers and intertwines their fates, since every success and failure can be felt by billions of people.
Next time on… The Internet
These structures that have been placed on the internet form the basis for a competitive market. However, this organism is at risk. ISPs have lobbied for the end of ‘net neutrality’, which requires all packets over the internet to be treated equally. Without it, tech giants like Google and Netflix could dominate even further by paying for higher bandwidths and faster connections. Governments are also requesting backdoors into encrypted communications over the internet, which introduces new privacy and security risks.
The internet might seem a lot like your favourite TV show, but seasons of character development can sometimes be thrown away in a single moment.
Will The Internet’s next season be even more exciting than the last? It certainly has the foundations for it. But never take it for granted – it could very well be headed for early cancellation.
Charles Jameson is a 3rd year Computer Scientist studying at Queens’ College. Artwork by Zuzanna Stawicka.