Sunday, February 26, 2006

The SF Future and moral dilemmas

Yvonne over on Nemeton had a thought-provoking post on a recent scientific endeavour working to turn science fiction into actual science: teleportation. As she points out, the researchers realised something SF writers and readers have known for a long time - with a teleport or transporter you don't actually break a person down into energy and beam them to another location; what you actually do is create a copy of them at the receiving station. So in essence you have a clone, which may not seem like a big deal to the person involved if, say, the other version of them was created at a receiving station on another world and the two will never come into contact. But what if that copy beams to another location and another copy is made? And if you want to stick with the idea of only one person, what do you do with the original? If that original is destroyed, would it be murder? This may sound like an esoteric moral argument, but consider that the same problems arise with a copy of a person created by cloning technology, a science that is progressing rather faster.

It did get me thinking about other moral dilemmas that come about through new technology in science fiction. For example, Richard Morgan's powerful debut Altered Carbon (and its sequels) has a technology called memory stacks, whereby people have the essence of their memory and personality backed up, meaning that in the event of death they can be transferred into a new body. Richard spins this further with a disparity between rich and poor: the rich not only have a stack but one which backs up via satellite every few hours, in case someone blows their head off and destroys the stack itself (true death), and they also keep a bank of specially cloned bodies in a secure location, ready to be downloaded into, almost like putting on a new Armani suit.

Richard has organised religions almost extinct in this future because they hold that you cannot copy the soul, so a stack is a diabolical invention and believers should shun it. They die off after a normal lifespan and that's it for them, while most other folk may wonder about the same point but decide on a more pragmatic approach - they would rather have the chance to keep on living, so they have a stack and try to get re-sleeved. The main character, Kovacs, is also sent to other worlds by needlecast, which basically transmits your stack to another world to be downloaded into a new sleeve there (it is illegal to have another copy running in another sleeve at the same time), muddying the waters even further: how original is that person? Are they still themselves? Or are they just a digital copy? What if your brain is implanted into a totally different, alien body, as in Paul Chadwick's Concrete? Is it live or is it Memorex, as they used to say…

Which brings us to AI - assuming you could create true Artificial Intelligence, would you treat it as 'alive'? That is to say, would you accord it the same respect, rights and privileges as any human being? Or would you treat it as merely a clever box of tricks? Leaving aside how the AI might think about all of this for a moment, how humans would react to AI and treat it would probably say more about our morality than anything else. Would you deny it rights and therefore treat an intelligent being as a device to be used - essentially a slave? Sounds silly? Well, it's not been unknown for people, even in the last couple of centuries, to argue that it was alright to enslave a group, say Africans, because they didn't count as fully human, therefore there was no moral ambiguity for even a good God-fearing Christian in holding slaves. This is an area that has been covered by a lot of SF - Star Trek: The Next Generation did it well several times and Asimov's I, Robot is an obvious key text - and while we may be a long way from having to face such a moral problem for real, it is still a fascinating intellectual exercise, to say nothing of how it relates to the way humans perceive other humans, let alone artificial beings. And we haven't even touched on aliens yet, of course!

Of course, you can combine AI tech with copies of human minds by having humans downloaded into a computer, which is quite common in SF dealing with the Singularity (where information technology reaches a level at which true AI emerges, everything accelerates at an incredible rate and you end up with a post-human civilisation - see Charlie Stross' Accelerando, for example). If a person decides to have their memory and consciousness uploaded into a computer, would it still count as being them? We're always saying it's what's inside that counts - would we stick by that maxim? Or would we just use a mind-in-a-box as a particularly clever computer to serve our needs? Ken MacLeod had a tale a few years ago where one character uses the uploaded mind of another person almost as her PDA - against the wishes of that mind, who would very much like to get out of this servitude and be uploaded into a new body again, please.

It’s all fiction - for the moment - but it does get the old brain thinking, doesn’t it? Assuming of course that we are all real, we are all conscious and we do all ‘think’ and aren't just subroutines of some vast digital intelligence…

1 comment:

  1. Ian McDonald's River of Gods has intelligent AIs running round all over the place creating mayhem - worth a read.

    Your post about this is better than mine, by the way - but I'm glad to have started you off on such an interesting train of thought!