Robots will not take over the world
Expert says artificial intelligence involves some dangerous myths
Published: 2 February 2010 (GMT+10)
This is the pre-publication version which was subsequently revised to appear in Creation 33(1):37.
Artificial intelligence, or “AI”, is an enthralling research field. As computing speeds get ever faster, it has spawned many predictions, from science fiction writers as well as from serious researchers in the field, of a world in which superintelligent machines with consciousness “take over” the planet.
Fans of such predictions will be disappointed by recent statements from a leading AI researcher—Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield. In a recent interview,1 he claimed that these were “fairy tales”, based on the unproven belief that intelligence was “computational”. He says that “there is no evidence that machines will ever overtake us or gain sentience [conscious self-awareness].”
While not claiming any sort of religious belief himself, he says that when he tells others that there is no evidence that our own intelligence is related to the way computers work, they become “almost religious” in their (anti-religious) reactions. They accuse him of saying that the mind must therefore work in “supernatural” ways. But while accepting that the brain is a physical system, he says “it could be a physical system that cannot be recreated by a computer”.
Sharkey uses top-flight chess-playing computer programs as an example of “some very smart things done by humans that are done in dumb ways by machines”. Even though chess requires a great deal of intelligence in pattern recognition and the like, some supercomputers today can overwhelm even the world’s best chess players. But they do this quite differently from humans: it involves no intelligence at all, only the sheer “brute force” of calculation.
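The “brute force” approach Sharkey describes can be illustrated with a minimal sketch. The toy game below (a Nim variant, not chess) and all function names are illustrative, not anything from the interview: the program has no insight or pattern recognition, it simply enumerates every possible line of play.

```python
# Brute-force game search: no understanding, just exhaustive calculation.
# Toy game: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins.

def minimax(stones, my_turn):
    """Exhaustively search all moves; return True if 'I' can force a win
    from this position (my_turn says whose move it is now)."""
    if stones == 0:
        # The previous player took the last stone and won, so whoever
        # is to move now has lost.
        return not my_turn
    outcomes = [minimax(stones - take, not my_turn)
                for take in (1, 2, 3) if take <= stones]
    # On my turn I win if ANY move leads to a win; on the opponent's
    # turn I win only if EVERY reply still leaves me winning.
    return any(outcomes) if my_turn else all(outcomes)

def best_move(stones):
    """Pick a winning move by sheer enumeration, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and minimax(stones - take, my_turn=False):
            return take
    return 1  # no winning move exists; play on anyway
```

Nothing here resembles how a human plays: the machine wins (for example, `best_move(7)` correctly returns 3) purely by checking every branch of the game tree, which is Sharkey’s point.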
AI—the science of illusion
Sharkey comments that many of the AI achievements people get excited over are really using “trick and illusion” to make the machines appear almost alive—such as language programs that “search databases to find conversationally appropriate sentences” or “machines that can recognise emotion and manipulate silicon faces to show empathy.” He says, “If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.”
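The database-searching “trick” Sharkey mentions can be sketched in a few lines. This is a hypothetical illustration (the canned phrases and function names are invented for this sketch): the program merely scores stored replies by word overlap with the user’s input, yet its answers can look conversationally apt.

```python
# A sketch of the "trick and illusion": a chatbot that only searches a
# canned database for a conversationally appropriate sentence.
# It understands nothing; it just counts overlapping words.

REPLIES = {
    "hello there friend": "Hello! Nice to meet you.",
    "how are you today": "I'm fine, thank you for asking.",
    "what is the weather like": "I hear it may rain later.",
}

def reply(user_input):
    """Return the canned reply whose stored prompt shares the most
    words with the user's input."""
    words = set(user_input.lower().split())
    best_key = max(REPLIES, key=lambda k: len(words & set(k.split())))
    return REPLIES[best_key]
```

Asked “how are you”, this returns “I'm fine, thank you for asking.”, which appears empathetic even though no understanding is involved, exactly the kind of illusion Sharkey thinks researchers should be honest about.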
He says that there seems to be a desire in people to see machines as potentially like animals and humans, which leads to a “willing suspension of disbelief”, even among his fellow researchers in the field. He is concerned that this fantasy-led drive could produce a dystopian world in which emotionless robots are widely used as substitutes for humans in roles such as care of the elderly, who need the “love and human contact” that only a real person can provide.
Sharkey’s candour on the subject, while refreshing, is not popular. He used to get calls from reporters asking for comments on the subject, but when he told them that he did not believe that machines were going to take over the world, they were not interested in reporting it or talking to him further.
Undeterred, Sharkey comes right out and states about the many impressive achievements in his field what should perhaps be obvious: “It is the person who designs the algorithms and programs the machine who is intelligent, not the machine itself.” How amazing to contemplate the superintelligence of the One who designed the human brain, capable of such feats, in the first place.