Monday, September 20, 2004

Alice in Virtual Land



A programme called Alice has won the latest round of competitions to produce a software system capable of holding a conversation with a human. The contest draws its inspiration from British mathematical genius Alan Turing's hypothesis - the Turing Test - that if a conversation with a machine fooled a human into believing he/she was talking to another human, then that machine was effectively 'intelligent'. No-one has yet won the Gold or Silver awards, but the Bronze is given out to the best attempt each year.



I've always thought the Turing Test was rather simplistic myself and argued as much in an essay on AI and cognitive psychology back in college. Conversational ability does not prove sentience: George Bush can talk (sort of, not very well, but he does better when prompted - or is he merely repeating a script like a trained parrot and therefore not exhibiting any intelligent behaviour? Discuss), but would we class him as sentient? Besides, it is a long way from mimicking a human skill to having actual AI. Most programmes until reasonably recently have taken the route of adopting a persona that excuses their lapses - ELIZA played a psychotherapist who turns your statements back on you as questions, and later imitators played confused or impaired patients - so when the software is unable to give a convincing response to a question, the failure can be put down to the character rather than the machine. In fact I recall playing a home computer game based on this idea many years ago - back in the old Sinclair ZX Spectrum days, in fact. It was a game called ID - you held long 'conversations' with a personality who had amnesia and possibly other impairments such as verbal aphasia, and tried to ascertain who and what they were and had been. Anyone else remember that one?
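For the curious, the basic pattern-match-and-deflect trick behind that style of chatbot can be sketched in a few lines of Python. This is only a toy illustration - the rules, replies and the respond() function below are invented for the example and are not taken from the actual ELIZA or Alice programmes:

import random
import re

# Each rule pairs a regular expression with canned replies; {0} is filled
# with whatever the first capture group matched. No understanding involved.
RULES = [
    (r"\bI am (.+)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI feel (.+)", ["What makes you feel {0}?"]),
    (r"\b(mother|father|family)\b", ["Tell me more about your family."]),
    (r"\?$", ["Why do you ask?", "What do you think?"]),
]

# When no rule fires, deflect - and let the chosen persona take the blame.
FALLBACKS = ["I see. Please go on.", "I'm sorry, my memory is not what it was."]

def respond(text: str) -> str:
    for pattern, replies in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    while True:
        line = input("> ")
        if line.lower() in {"quit", "bye"}:
            break
        print(respond(line))

Type "I am tired of exams" and it asks why you say you are tired of exams; type anything it has no rule for and it simply changes the subject, which is exactly the evasion the hospital-patient framing is there to excuse.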



Even if we do have software which can talk to us as easily as, say, HAL 9000 and understands natural language input, it is merely another, albeit more sophisticated, form of interface with our machinery. It does not prove intelligence - we need a lot more to argue for that in a machine, not least a sense of self-awareness. Although how you would ever prove that I do not know - I can't really prove I have such a faculty myself (shut up in the back, Descartes, your idea is rather simplistic and proves nothing). And even if we can create a real AI and prove that it is sentient, many people will refuse to believe it for religious reasons or simple stupidity or bigotry. And how will we react to such a creation if we make it? I suspect that will be a huge moral quandary for humanity. If we recognise an AI as sentient then we can no longer class it as a mere device there to serve us, can we? That would be tantamount to creating a new form of slavery. But could we bring ourselves to see an AI as having rights equal to a human's? Would the AI see us as equals? And would it sound like Majel Barrett Roddenberry in Star Trek or HAL 9000?

2 comments:

  1. The whole concept creates an entirely new set of dilemmas - and in my opinion we have way too much to deal with already in relation to human rights. Now to throw a computerized version into the mix? Oy vey! And I'm not even Jewish! But please don't discriminate against me because of that or the phrase I chose to use to show my disdain!
    *smooches*

  2. Heya! It's me again :O I'm not gonna fault you on another post... instead I'm just here to tell you I've added you to my blogroll... guess you'll have lots of traffic soon :O
