THURSDAY, 12 MAY 2022

No knowledge remains absolutely true if we wait long enough. The train is no longer the fastest way to commute, and the quartz watch is no longer the most accurate way to keep time. Throughout our long history, we have had to relearn facts and theories, and this process of discovery has been picking up speed: from evolution to the cause of infectious diseases, many scientific theories have been superseded in the last few hundred years. With the advance of artificial intelligence (AI), the dismantling of old knowledge is like the accelerated breaking off of gigantic glaciers at the poles, and similarly overwhelming in its sheer scale.
Often, rapid paradigm shifts in our understanding of the world are accompanied by identity crises (remember the social outcry back in the 16th century when we discovered that the earth is not at the centre of the universe?). Algorithms are becoming smart enough to eclipse human intelligence and already show signs of superhuman capability in games such as chess and Go. Is this the demise of the dominance of human intelligence? Humans, after all, see themselves as agents of judgement, decision-making, and action: our narrative of history is a drama underpinned by these keywords, which shape our cognitive identities. Resolving an identity crisis is no easy task. Will we be able to reinvent ourselves?
What do you think of as the more defining characteristic of humans — the ability to act or the ability to perceive? Most readers of this article would probably choose the former, thanks to our result-oriented culture, but this question of whether humans are defined by the active life (vita activa) or the contemplative life (vita contemplativa) has been subject to debate for centuries.
Ancient philosophers in many cultures insisted on the superiority of vita contemplativa. Zhuang Zhou (Zhuangzi) in China focused on the contemplative nature of human life: his famous paradox, in which he dreamt of becoming a butterfly but then wondered whether it was instead the butterfly dreaming of becoming him, was his means of approaching human consciousness. The view that human consciousness begins with contemplation was also shared by the celebrated 17th-century French philosopher Descartes: ‘I think, therefore I am’. Contemplation is, after all, the domain of philosophers, so it is no wonder that most agree on its superiority; but, as ever, exceptions exist.
Karl Marx, for instance, flipped the hierarchy to place vita activa on top of all else, claiming that without labour, humans would cease to be humans. Ironically, this idea is shared by many who are embroiled in the workaholic culture of the modern day, driven by a full-blown capitalist, global economy.
Hannah Arendt argued that vita activa is neither superior nor inferior to its counterpart, and that it is better understood in terms of three concepts: labour, work, and action. With the advance of AI, we will soon be freed from repetitive labour, whose sole aim is to meet our biological necessities, and be outcompeted in areas of work that produce longer-lasting entities like architecture. Music and the arts may seem secure, but it is not impossible to imagine a world where machines write better poems and music: today’s natural language processors already produce literary works that a layperson’s eye struggles to tell apart from those of humans. In ancient Greek culture, action belonged solely to free men in the public realm, where they could distinguish themselves through great deeds and great words, whereas subordinated slaves and women were confined to labour and work. Will AI one day take the place of those subordinates so that we can all be like the free men of ancient Greece?
AI will most certainly wipe out a great number of jobs in many arenas that presently depend on human labour. Depending on your philosophical and social outlook, it may be considered either unsettling or emancipating to know that the value of any of us is no longer determined by the works we produce. Is it time that we retreat to the spectators’ stand and sit back to enjoy the unfolding of a story that is now no longer ours?
Machines and algorithms change not only the way we interact with the world but also our perception of it. Already, truth is increasingly defined by the top result of a Google search. Deepfakes, videos generated to show anybody saying anything, make misinformation ever harder to distinguish from fact.
Reality has always been evasive, and humanity has struggled for thousands of years in pursuit of knowledge. So far, our crowning glories are language and the scientific method. Language lends ‘concreteness’ to the abstract, generalised concepts we know as reality by attaching to them small things meaningful only to our individual selves. The scientific method, on the other hand, reduces reality to the fewest, simplest premises that all of us can agree upon, and anything that cannot be deduced from them is not regarded as part of reality.
It is easy to slip into thinking that, because machines are less likely to make mistakes than humans, human endeavour should be relegated to irrelevance. However, the conduct of extraordinary science, which asks questions beyond the current framework, shows that science is inherently a human endeavour and not merely a manifestation of objectivity. For example, the choice of a p-value threshold for judging the credibility of scientific claims is, to some extent, arbitrary and depends on the scientist’s insight. Take another example: unless there is a meaningful prior, maximum likelihood calculations are not forms of proof, and ‘meaningfulness’ is constructed by the brain mostly through language, not numbers!
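The arbitrariness of significance thresholds is easy to see with a toy example (a hypothetical sketch, not drawn from any study mentioned here): the very same data can ‘reject’ or ‘fail to reject’ a hypothesis depending purely on the threshold a scientist chooses.

```python
from math import comb

def binom_p_two_sided(n, k):
    """Exact two-sided p-value for observing k or more heads in n fair coin flips."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 61 heads out of 100 flips: the data are fixed, only the threshold varies
p = binom_p_two_sided(100, 61)
for alpha in (0.05, 0.01):
    verdict = "reject" if p < alpha else "fail to reject"
    print(f"alpha={alpha}: p={p:.3f} -> {verdict} the fair-coin hypothesis")
```

The same p-value (about 0.035) counts as a discovery under the conventional 0.05 threshold but not under the stricter 0.01 one; nothing in the data itself tells us which to use.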
As AI replaces some of our physicians and drivers, new jobs will emerge and disappear ever more quickly. We will have to retrain ourselves constantly as jobs become more volatile, but this is not insurmountable once we set up the right infrastructure for continuous education. Moreover, the element of human touch means that some jobs should remain human: if we consider art an expression of the human experience, we will never completely replace our actors and dancers.
More centrally, as we come to rely on algorithms rather than on ourselves to make decisions (Google Maps, for example, decides for us the best route to our destination), human judgement risks becoming increasingly irrelevant. This is already the case even in courtrooms. Judges in New York (and other jurisdictions, including Wisconsin and California) use a proprietary risk-assessment algorithm called COMPAS to predict how likely a defendant is to re-offend, informing decisions on sentencing. Admittedly, algorithms can sidestep some of the human biases that flaw the judicial system: it has been repeatedly shown that judges are affected by cognitive biases and personal circumstances, which serves as justification for the involvement of AI. However, algorithms have biases of their own: they learn from existing data, so they will inevitably perpetuate historical biases. Hence, the best way forward is to combine the use of algorithms with human judgement.
Limiting the power of algorithms is our responsibility. We should test them and fix them when necessary. More importantly, we should always retain the power to veto a machine’s decision when something looks wrong on the surface.
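A minimal sketch of what ‘retaining the veto’ could look like in software (all names here are hypothetical and illustrative, not any real courtroom system): the algorithm only ever recommends, while a human reviewer makes the final call and can override it.

```python
def algorithmic_recommendation(risk_score, threshold=0.7):
    """Machine step: map a risk score in [0, 1] to a recommendation."""
    return "flag for review" if risk_score >= threshold else "no action"

def final_decision(risk_score, human_override=None):
    """Human-in-the-loop step: the machine proposes, the human disposes.

    If the reviewer supplies an override, it always wins; otherwise the
    machine's recommendation stands.
    """
    recommendation = algorithmic_recommendation(risk_score)
    return human_override if human_override is not None else recommendation

# The reviewer vetoes a machine flag that looks wrong on the surface
print(final_decision(0.85))                              # machine alone
print(final_decision(0.85, human_override="no action"))  # human veto wins
```

The design choice is that the override is unconditional: the machine’s score informs the decision but never forecloses it.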
Machines work with numbers. Therefore, non-numerical entities like justice will always remain in the human remit — that is, until we decide to write an algorithm for its definition. As the human experience changes (perhaps as we someday move out into space), we will be rewriting our personal and social algorithms as well. We will need to use our imagination and creativity, as we have many times throughout history, to find our new place in the new world.
Gladys Poon is a final-year PhD student in oncology at Magdalene College. Artwork by Sumit Sen.