In an accidentally thought-provoking alignment of editorial strands in this morning’s edition of Today, we had the director of a charity supporting ‘survivors’ of child abuse commenting on the significance of broadcaster Paul Gambaccini’s unfortunate experience of being investigated by police on the basis of clearly manufactured accusations – and a scientist at Sheffield University’s department of robotics discussing his latest project.
I actually became quite distressed by this latter item. A robot ‘child’ has been designed in the laboratory – according to the reporter, unnervingly realistic – and the aim of the project is to give this machine a ‘sense of self’. We heard a child’s voice – electronically produced yet unmistakeably childlike – repeatedly referring to itself in abstract terms as ‘I’.
Whether it was a technological conjuring trick or a real step along the road to independence of thought in machines, I cannot say. The mere idea is enough to raise serious moral questions.
There are still scientists – not only American dentists – who believe that ‘lower orders’ of living creatures have no sense of self, and that this is what sets humans apart from, and above, them. I have always argued that any biologically reproductive creature – even a microscopic spider or mite with evidently the tiniest of brains – that nevertheless demonstrates a reflexive ‘flight’ reaction or other self-protective strategy when it becomes aware of an existential threat can do so only if it possesses a sense of self. Otherwise, how would it know ‘who’ was being threatened?
In this week in which BBC output is obsessed with the state of development of Artificial Intelligence in computers and machines, I am not certain anyone is thinking enough about what it would mean for a machine to become not merely ‘self-referential’, in the sense that it learns by feedback from its experiences; or ‘cognitive’, in the sense that it is in some way aware of its surroundings and can be programmed to recognise objects and communicate with other machines and with humans; or ‘self-replicating’, in that part of its mechanical function enables it to manufacture other machines like itself; but actually to possess a ‘sense of self’ – so that, in terms of Asimov’s famous laws, it can obey the Third Law and not allow itself to come to harm (provided that, in doing so, it does not conflict with the first two laws… etcetera!)
The moral question goes beyond equipping a machine with the ability to recognise when it is coming to harm, and to respond accordingly. The precaution makes obvious commercial sense when dealing with a valuable product. Independently functioning robots can already right themselves when they sense they are becoming unstable in motion and may fall over. But there is a huge difference between a gyroscopic stabilising mechanism and a higher cognitive function alert to a range of possible physical and emotional threats.
The question therefore must be: is there a distinction between having ‘a sense of self’ and having an actual self, of which one may have a sense? Is the one an artificial attribute, a ‘product feature’ if you like; while the other is a step along the road to creating a new life-form – a ‘product benefit’ (or otherwise)?
Instead of designing machines with sufficient cognitive abilities to perform useful functions, why are we trying to go so much further, to create machines so much ‘in our own image’, virtually replicant humans, if not for deep-seated psychological reasons? It appears that it is not enough for humans to make slaves of machines, without making the machines into humans.
Self-awareness is the one attribute of biological life-forms that allows us to feel emotions. Emotions are our responses to the different situations in which we find ourselves placed. They are an evolved development of the basic self-protective instincts shown by our little insect friends. The most primitive elements of our emotional landscape, I suppose, are happy/unhappy: since an emotional reaction of happiness or contentedness allows us to feel we need do nothing more to improve our situation, while an emotion of sadness or discomfort stimulates us to some ameliorative action or attitude. A feeling of ‘No action possible’ encourages us to sink into miserable apathy – depression being the normal response to feelings of disempowerment.
And this is as true for the dog curled up at my feet as I write, happy to be with me in the warm but impatient to go on his morning walk, as it is for me or you. From this basic pairing of happy/unhappy develop all other emotions such as fear, anger, complacency, love, loneliness, conflictedness, impatience, etcetera, as extensions of our need or ability to act, or not to act, for our own benefit. We need hardly extrapolate from this thought to take in Hamlet’s soliloquy.
The literary allusion that immediately sprang to mind when I heard the item on Today, however, was the fable of Pinocchio, the tragic puppet child who comes to life to gratify the desire for companionship of an old man and is ultimately consumed in the fire.
For the Big Question that arises when you grant a machine the power of self-awareness is: what rights and protections do you then offer it, if any? Or are we to have certain types of machines that, like human slaves, might exist purely for our gratification, that allow us to have power over their emotions – as if emotions were simply a utility, to be commodified? As if machines are merely slaves to our whims and desires, regardless of (or possibly because of) what we impart to them of our own humanity and social status?
The idea of an ‘abusable child’ – whether sexually or in any other of the many ways we have learned to abuse one another emotionally and physically – depends for its success on the child being susceptible to the power of the adult. The purpose of abuse is not merely to gain the immediate gratification of the abuser’s desires, but more significantly to experience the emotional responses of the abused: the according of respect, the development of dependence, the granting of authority, the healing of the abuser’s own sense of wounded selfhood through the experiencing of emotions at one remove, that are otherwise distorted or entirely lacking in the abuser themself.
It could equally well be the relationship an abuser develops with a horse, or a dog, a celebrity or a willing adult partner, as with a child; even, in some psychopathologies, with another part of themself. And so why not with a machine? Why not create a machine that returns emotion to its owner, while allowing itself to be kicked and beaten and starved and throttled and spied on, enslaved, bought-and-sold and even sexually abused for the customer’s gratification?
What are the limits to the uses to which we could put such machines? Are there any limits? Would such a machine, if sufficiently realistic, sufficiently submissive, compliant in all regards, pleading for its identity, its very survival, not provide an adequate and legal substitute for the non-acceptable abuse of other humans – thus performing the vital service of removing from us, once and for all, all responsibility for being the ‘survivors’ of abuse; we, on whose emotional output abusers depend for their existence? Should the ‘abusable child’ perhaps be equipped to cry?
Well, my feeling on listening to the machine child who seems to know it exists was that this seemingly calm and rational professor of robotics is an emotional idiot, a fool or a monster – or all three.
Beyond Pinocchio looms Mary Shelley’s patchwork creature; and beyond Frankenstein, Prometheus; and beyond Prometheus is the Biblical god who grants us ‘free will’; and beyond the Biblical god is the serpent of Genesis, who slyly gives us self-awareness but at a terrible cost.
Such myths are intended to make us think more than once before commodifying whatever it is that makes us human, merely in the interests of scientific experiment and eventual commercial gain; the slave trade in automata.