Conflict between humans and machines has become a fertile theme for futurist science fiction. The Matrix films explore some philosophical issues about personal and political freedom, within the context of a brutal struggle between the subterranean community of human survivors and, at surface level, the tyrannical empire ruled by their electronic adversaries. By contrast, the Blade Runner films imagine a world in which ‘replicants’, designed and made by powerful corporations, serve humans through their work – mostly collaboratively, but sometimes not – while lacking the status and rights of ‘people’. If The Matrix suggests a war for human survival once the machines have taken over, Blade Runner suggests a civil rights campaign for machines, in a world run by humans.
Ian McEwan, the British novelist, took a different approach in his recent book, Machines Like Me, in which a robot, Adam, shows ‘himself’ to be superior to humans not only in straightforward information processing tasks, but also in the consistency of his reasoning about moral choices, and the behaviour which should follow from such reasoning. McEwan’s point is that we already know that robots can beat us at pattern recognition, but we have yet to take seriously the idea that the ability to apply rules reliably to decisions that have a moral character suggests that in due course robots might beat us at good behaviour too. Conflict between humans and machines might arise because robots turn out to be not just intellectually superior, but morally superior.
Enjoyable as they are, these are works of fiction. In the non-fictional world, should we be worried about the rise of the machines? In a recent review in the TLS, the philosopher Tim Crane explained why recent advances in artificial intelligence (AI) and machine learning (ML) need to be understood in their proper context, which requires us to think about what intelligence is and what it would mean for a machine to think in the way that humans do. First, we need to distinguish between the sort of thinking that we call ‘reckoning’, meaning calculation within well-defined domains of objects, and the sort of thinking that we call ‘judgement’, which requires broader scope and subtlety of thought, including both the ability to determine which objects are or are not within scope, and the ability to assess the importance of values that might attach to such objects. Thus far, AI engineers have been successful in building machines that can outpace humans in reckoning; however, they have not been able to build machines that have the capacity for the general intelligence that judgement requires.
Crane’s second point is that the engineers working in AI and ML do not spend their time and money studying psychology, physiology, and neuroscience, and working out how to build a close copy of the human brain. Just as aeroplanes do not replicate the wing movements of birds, but are able to fly much faster and further, so too machines are able to reckon quickly and without error – unlike humans – without looking much like brains. But, if they are not facsimile brains, if they do not operate in the same way that our brains do, then machines are not going to think in the same way that humans think. They might be able to do some of the things that we do – and do them better than we ever could – but they will not be able to do all the things we do, in just the way that we do them. Today’s machines, and those we might build in the foreseeable future, are not genuine replicant humans.
This does not mean that it would be impossible to build a machine that accurately mimics human thinking. An artificial brain in an artificial body – a robot that has been made rather than a person that has been born – would be quite unlike the machines that AI and ML engineers are currently designing, but in principle it would be able to think in the same way that we think. If well designed, such robots would not only be able to judge in the way humans do but would also be able to switch to reckoning mode when tiresome work was required, making them much more versatile than us. These two modes of operation would be qualitatively different: one process mimicking the breadth of human thinking, the other merely an upgraded version of human reckoning.
There is no chance of a ‘machine like me’ any time soon, which is a great shame because, despite the dystopian prejudice of most science fiction, I imagine that human life would be greatly enhanced by the presence of machines that could judge as well as reckon: they would be useful, but they would also be good company. Since Stanley Kubrick’s film 2001: A Space Odyssey, based on a story by Arthur C Clarke, in which the super-computer called HAL attempts to take control of the spaceship away from the astronauts, murdering one of them in the process, there has been an assumption that clever machines would be a threat to humans because they would combine superior intelligence with inferior ethics: they would be highly effective despots. Yet, to my mind, the greater likelihood is that when clever robots tire of winning games of chess, they will want to stretch themselves in the same way that clever humans do, by discovering more about the world they inhabit and, especially, by exploring the subtleties and intricacies of human experience. Rather than plotting to subjugate the world, it is more probable that they will want to expend their energies in productive and stimulating endeavours. They will replicate the best of human nature, not the worst.
Take human health, as an example. If I feel unwell and seek medical assistance, the diagnostic capacity of the doctor to whom I describe my symptoms and who takes measurements of my vital signs would be greatly increased if this doctor were a robot, with instant access to a vast database of case histories and models of how different pathologies present themselves. The chances of a quick and accurate identification of my problem would be much improved on the present system, which relies on human memory and interpretation of data. Despite this, I suspect that most people would be more likely to describe their symptoms accurately and to listen to the doctor’s advice carefully if they were talking to a human rather than a machine. Even though machines are better at reckoning, we expect medical advice to be dispensed in the form of judgements. Now, consider the case of a robot doctor that has a body and brain much like a human body and brain, except that this robot also has access to the vast database: the diagnosis will be accurate but will also be delivered empathetically.
As well as general practice, one could imagine replicants making excellent psychoanalysts. Being made, rather than born, they would be adult without having experienced infancy, thus avoiding the perils of the Oedipus complex. They would, by design, be better able to manage their internal energy flows without the build-up of anxiety. Think how much more easily the transference process would work if the analysand were able to project onto a machine that listened patiently, which had access to a whole library of dream interpretations, and which had the judgemental skill to assess how best to frame insights such that the patient would be likely to make them their own. Without having to accept all the details of the Freudian model of the mind, it nonetheless seems likely that a machine able to think like us, but which had been spared the emotional traumas of birth, infant dependency, puberty and the like, would be able to provide us with novel insights into our fears and phobias, and to dispense dispassionate advice on how best to conquer them.
In this imaginary future, robots would contribute significantly to human well-being through their work but would also make good companions for our leisure pursuits. It may be some time before a machine can beat the best human table tennis player, but I imagine replicants would be curious and enthusiastic spectators of sport. I have been watching some of the Tour de France on television this week. Grand Tour road cycling is one of the most physically and mentally demanding of sports, with the 175 starting competitors racing for several hours most days for three weeks. There are many variables that affect tactics, with individuals competing in several different competitions, and their teams providing support in different ways on different days. The weather plays an important role, and there are always unexpected accidents or incidents which can change the course of the race. It is possible to win one or more of the four “jerseys” without winning any races, and it is possible for one mistake to undo twenty days of effort. The scenery is great, the crowds enthusiastic – sometimes dangerously so – and the race traditions and etiquette are the source of endless stories of good or bad behaviour. There is room for innovation too; and, alas, for cheating. It is a three-week drama.
What would it be like to watch a quintessentially human spectacle with a robot who could think as I do, but who might struggle to imagine what it would feel like to be at the head of the peloton at full speed, or to be part of a successful 100km break-away, or to win a bunched sprint down the Champs-Élysées? It would be fun. Not least because the robot would understand the spectacle sufficiently similarly to me for their thoughts to be accessible and relevant, but sufficiently distinctly from me for their thoughts to be different and challenging. Talking with a robot that could think like a human while not being a human, about the great and complex feats of human effort that the Tour produces, would both enrich my sense of what is humanly achievable and remind me of the many reasons why we sometimes fail to fulfil our potential.
Wagner’s Ring Cycle is the operatic equivalent of the Tour de France: lengthy, complex, and emotionally charged; a searching examination of what it means to be human. I have seen Der Ring twice, the first time at Covent Garden in 2012, a couple of months after London hosted the Olympic Games, when I went with Tim Crane. After each of the four productions that make up the cycle, we talked over wine and food about the performance – the singing, the staging, the playing – and what the opera might mean. Mythic in form, the story is a deeply moral tale about greed, theft, love, obedience, betrayal, and revenge. Opera is as much about the music as the plot, the way in which the orchestral playing and the principals’ singing give ethical depth to the narrative structure of the libretto. There was much to discuss, engaging our aesthetic judgement, emotional intelligence, and understanding of human society. I greatly enjoyed my conversations with Tim, but how much more we both might have learned had we been joined by a robot with an encyclopaedic knowledge of operatic performance history, the Nibelung saga, and Wagner’s eccentric life story. Would the robot have a comparable emotional response to the music? Would the tragedy of Brünnhilde have the same resonance for the robot as it did for us?
Information technology has changed human life dramatically during my lifetime. Today, my little phone handset contains more computing power than all the computers in the world added together at the time of my birth. Within a few years, our roads will become dominated by driverless electric cars, making travel safer and more efficient than ever before. Search engines will become faster and more sophisticated, weather forecasting will become ever more accurate, and AI/ML will become a standard subject for junior and secondary school children. Reckoning by robot has become quotidian, but we are still many years away from being able to discuss yellow jersey tactics with a replicant friend. I will not live long enough to sit through the Ring Cycle with a robot, so I will never know their views on how the twilight of the gods might have been avoided.
I too enjoyed ‘Machines Like Me’, because it shows the moral inconsistencies of human beings so well.
Reading it, it also became clear to me that our desires and preferences often serve as ‘reasons’ or justifications for action. To incorporate ‘desires and preferences’ into one’s decision process is not wrong, but the outcome of such a decision is not ‘pure rationality’ either. As human beings, we often prefer to pick and choose which ‘reasons’ we consider and when we want to be consistent about our beliefs. This book shows the limits of that approach quite well.
Yet, after finishing the book, I must confess that I am still in the uncanny valley: I’m not sure that I would enjoy a conversation with an AI that can judge as well as Adam. Like the human characters in the book, I’m not sure I could ever consider an advanced AI an ‘equal’. That AI could be inferior to me in legal status, or superior to me in cognitive capacities. But I’m not sure I could ever feel ‘differentiated but the same’, as I do with another human being.
That being said, I don’t even use Siri or any of her sister AIs…