OCTOBER 28, 2013
MAKING MINDS MORALLY: THE RESEARCH ETHICS OF BRAIN EMULATION

Dr. Anders Sandberg

 

James Martin Research Fellow at the Future of Humanity Institute, Oxford University, and Research Associate at the Oxford Centre for Neuroethics

 

Abstract: This talk will outline some of the ethical considerations that will need to go into any project aiming at creating brain emulations. In the near future the main issue is the ethical treatment of virtual experimental animals under profound uncertainty about their true moral status. A cautious approach suggests using methods similar to existing animal welfare methods, adapted to the peculiarities of software entities. In the mid-term, the case of human emulations raises a number of other ethical challenges, including informed consent, the handling of flawed versions, time-rate rights, vulnerability, and changes to identity and death. Finally, the long-term effects and importance of brain emulation will be discussed: are there ethical reasons that strongly speak against pursuing it at all, or that equally strongly favor a push towards it?

 

Transcript 

 

The reason we're here is that we're interested in questions about the future. We want to get to the future. But that also implies that the future better be a good place. Otherwise there wouldn't be a point in getting there. And that might mean, in turn, that the methods we are going to use in order to get to the future better be good, too. We don't want to end up in a future built on bad methods. We don't want to climb to the heavens on a pile of corpses.

 

So, that's why we're going to bring in some small ethical considerations, just to be a bit proactive. Normally, of course, ethicists tend to be the ones saying, “Oh, you shouldn't be doing that research” – the ethics board that really complicates your research proposal, or the guy interviewed in the newspaper saying “this raises grave questions” and never stating the questions. I think we can be proactive in ethics. There are a lot of things we can figure out beforehand and do something about. Yes, there is a limited amount of brainpower we can devote to ethical consideration; we shouldn't be making up problems just to give ethicists like me job security. But there are interesting issues that we can actually resolve, and in some cases they might turn out to be very simple if we think about them before we embark on the project.


So, my talk, which I'm going to try to shorten a bit, is going to deal partially with questions about the research that is going to lead up to the Avatar project. What should we be thinking about while doing that? Some of the weird implications: what happens when we get more advanced technologies, in terms of our concepts of death and identity; some of the issues that might happen when we get to human emulations; and also the importance of the whole project – should we be thinking about doing this at all, or is this a bad idea we should avoid, or even try to prevent?


I'm not going to be talking too much about what brain emulation is, because Martine has already started with one approach; tomorrow you're going to hear a lot of other approaches. Being a neuroscientist with a particular view on how to do it, I'm not going to be really getting into the details. But the basic concept that I'm interested in: if we can take an individual brain through some process, scan it – maybe by slicing it up, maybe by using some clever quantum mechanical properties – and end up with something that runs in a computer...if we can get software entities like this, what are the ethical implications?

 

The most obvious one is, of course, that we're going to need an awful lot of lab animals. This is sadly true for most biomedical research. A lot of mice are going to come to various bad ends. We have some rather serious animal rights protesters around Oxford, and they typically say, “You researchers, you're using animals inefficiently. At the very least you should reduce the need for lab animals. And you could do that by running computer simulations.” Which is a lovely idea, except that we of course need to get those computer simulations, which means we need to do experiments on animals to figure them out. And even worse, once we have these lovely simulations, how do we know we're right? We would need to specify them well enough so that, for example in drug development, we know that a drug's effects are simulated in the same way they would affect a real animal, and vice versa; and hopefully it also affects humans in the same way.


But the really interesting question, the one that gets my philosopher's nerves tingling, is of course: what is the moral importance of a piece of software? After all, think about what we do with animals, about the sorts of things we shouldn't be doing. If I pinch the tail of a mouse, that's generally regarded as cruel, and it's generally regarded as bad; why is it bad? Well, ethicists have given various answers. One set of theories are the indirect theories. Immanuel Kant is the most famous philosopher to expound them, saying that animals actually don't have any moral weight whatsoever. However, the person who kicks a dog is a cruel and nasty person. He might even be degrading himself.

 

So, it's still a bad thing to be cruel to animals. Then there are other theories saying no, animals actually do have an internal world; maybe not a human world, but they certainly have some forms of feelings. Some animals definitely have projects. Wasps building a nest have a kind of plan going on. It's not terribly flexible, but there is a plan there. The cat setting up an elaborate trick to catch a mouse also has a plan, and probably wants to be fed, and so on (or at least entertained).

 

So, they have a life, and we shouldn't be interfering in it too much. It might not be a human life, but it still has value, at the very least to the animal. And then, of course, we have the moral equality theories, which say no, there is no fundamental difference whatsoever between animals and humans. Jeremy Bentham is famous for saying that what matters is not whether they can think, but whether they can suffer. And the mouse? Pain might feel just as bad to the mouse as it would to me. So, by those theories, of course, we should give serious consideration to the pain we might be causing when we do experiments.


Now, where does this leave us with software? The pictures up here are two pieces of educational software, and I don't think there's any suffering whatsoever going on in them, because the computer and program are essentially just showing pictures. Moving pictures – the picture on the left demonstrates operant conditioning, so you can simulate giving electric shocks to a mouse or a rat depending on what it's doing; but it's just modeling, essentially, the statistical link between stimulus and response. There's probably no consciousness, nothing there. It's just a little symbolic game. The picture on the right is a surgery simulator, where you can dissect a rat or a mouse. Again, it's just pictures. It doesn't correspond to anything like a real mouse, which is a real organism. And people have very different views about whether software can be conscious.

 

Martine was giving an eloquent argument that of course software can be conscious. But I think there are at least some among you who are rather skeptical about it. Sir Roger Penrose certainly doesn't think our normal, classical software is conscious. And outside our walls, I think some have said, “No, software can't possibly be conscious. It's just symbols moving around, at most.” But against them, of course, some of us happen to be functionalists, saying “of course software can be conscious.” And the battle rages on. You can literally fill bookshelves with philosophy books where people give different views on this. There is no real agreement.


And it turns out that sometimes this can get downright creepy. Rodney Cotterill, for example, had a project about understanding consciousness and created the CyberChild, a piece of software with small neural networks corresponding to different brain areas, based on the overall structure of a human brain, linked to a body model that had, among other things, a blood glucose level, how full the bladder was, whether the baby's nappy was dirty; and the idea was that this baby would learn behaviors in order to get fed and cared for. So, this baby was essentially floating in empty space. The only things it could do were flail its arms around, trying to get milk from a bottle if one appeared, and scream. The user could give it milk, or not give it milk. If the simulated blood glucose ever went below a certain level, the simulated child died.
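To make that mechanism concrete, here is a minimal, purely hypothetical sketch – not Cotterill's actual code – of the kind of body-model loop described above: a few homeostatic variables, a tiny behavioral repertoire, and a hard-coded death condition on blood glucose. The class name, numbers, and update rules are all illustrative assumptions.

```python
# A minimal sketch (not Cotterill's actual code) of the kind of body-model
# loop described above: homeostatic variables, a small behavioural
# repertoire, and a hard "death" condition on blood glucose.

import random

class CyberChildSketch:
    def __init__(self):
        self.glucose = 1.0        # normalized blood glucose
        self.bladder = 0.0        # how full the bladder is
        self.nappy_dirty = False
        self.alive = True

    def step(self, milk_offered: bool) -> str:
        """Advance the simulation by one time step and return the behavior."""
        if not self.alive:
            return "dead"
        self.glucose -= 0.05                          # metabolism drains glucose
        self.bladder = min(1.0, self.bladder + 0.02)
        if self.bladder >= 1.0:
            self.bladder, self.nappy_dirty = 0.0, True
        # Behavior: flail towards the bottle if one is offered, otherwise
        # scream whenever a homeostatic variable signals distress.
        if milk_offered:
            self.glucose = min(1.0, self.glucose + 0.3)
            action = "flail towards bottle and feed"
        elif self.glucose < 0.4 or self.nappy_dirty:
            action = "scream"
        else:
            action = "rest"
        if self.glucose <= 0.0:                       # the hard-coded death condition
            self.alive = False
            action = "dead"
        return action

# The user plays the caregiver: offer milk, or don't.
child = CyberChildSketch()
for t in range(30):
    print(t, child.step(milk_offered=random.random() < 0.2))
```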


Now, I don't think there was any consciousness here. There were only 21 neurons in each brain area. But it's still eerily similar to this quote from the German neuroethicist Thomas Metzinger, who, in his book on consciousness, points out very strongly that it would be very bad if we made conscious, suffering software. It would be deeply, deeply, horrendously unethical. He didn't know about the CyberChild, so his is a completely hypothetical example, but he argues that we have a moral obligation not to make conscious software.


Now, the problem is we won't be able to agree on consciousness, because we have very different views. Daniel Dennett has a classic paper, “Why You Can't Make a Computer That Feels Pain.” He argues that we don't understand what pain is, so we can't program it into a robot. But he also reached an interesting conclusion: that, well, in the end we might be able to figure it out and put it in. And it might be a good idea not to kick robots. Of course, sometimes we do kick real robots. I don't know whether BigDog can feel pain, but I wouldn't exactly want to kick a big military robot anyway. That's just stupid.


So, here is my very Swedish solution to this: let's just assume that if you're making a model of a system that attempts to replicate everything inside that system, then it might have the same moral weight as what you're trying to imitate. So, if you have a virtual mouse, and it's a 1:1 map of a real mouse brain, and it's behaving roughly like a mouse does – if you pinch its virtual tail it gives out a little squeal – then you have probably done something bad. Assuming it's conscious. But you can't be certain; it might just be a virtual zombie, something that merely emulates pain. But the safe thing is to assume that, yeah, it might be conscious, and we shouldn't be treating it badly. We should treat it like any other mouse.


This has fun implications, which might be slightly annoying for the Avatar project, but I think it's easy to handle anyway. We actually need to take the book on lab animal ethics – the instructions we get when we do real research on real animals – and apply it to our software. Not to all software. Not to those simulacra and images. But to anything that is actually trying to map the structure. Then, of course, if it doesn't function in the same way as a real animal, we may have a good reason not to treat it the same way. But a virtual mouse? We might actually want to give it virtual painkillers, which can be amazingly good in software, because you can simply comment out the code generating the pain system. It's very clean and easy, and you can even put it back afterwards.
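As an illustration of how reversible such an intervention could be, here is a minimal sketch, assuming (hypothetically) that an emulation is organized into named functional modules. The VirtualMouse class and its module registry are invented for the example; the point is only that switching a pain system off, and back on, is nothing like an irreversible lesion.

```python
# A minimal sketch, under the assumption that an emulation exposes named
# functional modules. "Commenting out" the pain system then means disabling
# a module rather than deleting it, so the change is fully reversible.

class VirtualMouse:
    def __init__(self):
        # Hypothetical module registry; a real emulation would hold large
        # neural-network state here rather than simple on/off flags.
        self.modules = {"nociception": True, "vision": True, "motor": True}

    def set_module(self, name: str, enabled: bool) -> None:
        self.modules[name] = enabled                 # reversible switch

    def pinch_tail(self) -> str:
        if self.modules["nociception"]:
            return "squeal"                          # pain pathway active
        return "no response"                         # "virtual painkiller" in effect

mouse = VirtualMouse()
print(mouse.pinch_tail())                            # -> squeal
mouse.set_module("nociception", False)               # comment the pain system out
print(mouse.pinch_tail())                            # -> no response
mouse.set_module("nociception", True)                # ...and put it back afterwards
```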

 

In a real mouse, removing parts of the brain is irreversible. We might also have to think about the quality of virtual life. This is an increasing problem in lab animal ethics, because we actually want our lab animals to have good lives, which means not-too-small cages and not too boring an environment; and maybe we shouldn't force dogs to live in Second Life – maybe that's too boring. And we might also have to think about euthanasia for software. One of the most interesting parts here, of course, is that in software you can always restore from backup. Death itself becomes extremely strange when we get to brain emulations. Normally we tend to regard death as bad for a bundle of reasons. The suffering part? We can already separate that out using painkillers. But we can also, in this case, stop the experience. We can stop the execution of the software, and restart it. Nothing happens to the software. It doesn't notice anything.
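Here is a minimal sketch of that pause-and-restore point, using ordinary serialization as a stand-in for emulation state (a real emulation state would, of course, be enormously larger than a small dictionary): the process can be checkpointed, stopped for any amount of wall-clock time, and resumed with nothing changed from the software's point of view.

```python
# A minimal sketch of why pausing an emulation is unlike killing an animal:
# the state can be serialized, the process stopped for any amount of
# wall-clock time, and then resumed unchanged. Illustrative only; the
# dictionary stands in for a full emulation state.

import pickle, time

state = {"tick": 0, "memory": ["smelled cheese"]}    # stand-in for emulation state

def run(state, steps):
    for _ in range(steps):
        state["tick"] += 1
    return state

state = run(state, 1000)
with open("backup.pkl", "wb") as f:                  # take a backup
    pickle.dump(state, f)

time.sleep(2)                                        # the "shelf" period: no experience

with open("backup.pkl", "rb") as f:                  # restore from backup
    restored = pickle.load(f)
assert restored == state                             # the emulation cannot notice the gap
state = run(restored, 1000)                          # ...and simply continues
```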

 

So, that's very separate from the cessation of identity, which I think is at the core of real death. But again, you can restore things from backup. And the bodies? Well, many methods of figuring out how the brain works are destructive, so it might be that an emulated organism has first undergone bodily destruction, and then of course we get the emulated version that appears in the software world, which is going to have a virtual body, because that's how neural networks interact with worlds. That body might also accidentally be deleted. Some of you might have been on Second Life when griefers attacked and messed up the avatars. That could happen. But it's also reversible, even if it can be traumatic. Then there is the hardware running the software, and that might break too, and you might restore from backup. So, again, we see that death becomes rather multi-faceted, and only some parts of it are truly irreversible. Those are the parts, of course, that carry the most weight; the others become much less serious.


We can think, of course, about what this means in terms of getting to the human level, and we have a bundle of interesting things there. I don't have time to get deeply into this, so I'll just rush past to show that there is so much fun stuff to think about. The ethical status is actually, funnily enough, relatively easy to figure out. At least we can check whether they claim to be conscious. So, we take an opponent of machine consciousness – let's say the philosopher Searle – we upload him, and ask SIM Searle, “So, do you feel conscious?” And if the software ponders it a bit and says, “Uh, darn, yeah, I do feel conscious!” – at that point we have some interesting evidence. Maybe not that it's actually conscious, but we have simulated the process where somebody does introspection about his consciousness and reaches an unpleasant conclusion... which is already fairly close to how we test for consciousness in each other. And it might, of course, be that the software, or philosophy, will solve this; but if they're eloquent, they might demand a vote anyway, and maybe we actually should give them the vote. It's not entirely obvious why consciousness is essential here. This is, of course, another matter where I expect Stuart Hameroff and me to have a bit of a disagreement. But there are very interesting aspects to this.


Then there is, of course, the problem of who would volunteer for this. It's one thing to get [?]. But in this case you actually need humans who are willing to undergo some rather experimental neuroscience, which might not produce a conscious outcome. And most methods, I believe, have to be destructive. So, it's a one-way ticket.

 

So, the most likely thing is that it's somebody like me who's got a cryonics contract, is signed up for it, and is willing to have his or her brain scanned, and then we'll see what happens. It might also count as a weird kind of suicide if it's a destructive method; the laws would have to change in order to allow that. One interesting implication is that, at least at first, there's going to be a period where somebody who gets uploaded is a non-person from a legal standpoint. Hopefully after they go on CNN and talk about their new life, or new existence, that's going to change.

 

But there is a gap here. There is a real risk that we might end up with people who are moral persons but not legal persons, and we'd better make that gap short. We're also going to have essentially the standard medical ethics problem of “What do we do when the brain is slightly broken?” We're going to get a lot of rather broken brains, and the real problem happens if they're broken, but not so broken that we can say, “Yeah, stop the simulation, erase the file,” but instead, “Yep, this is a person. It's a person in distress which we cannot fix.” One interesting possibility, of course, is that we can stop it – put the hard drive on the shelf and hope one day we can fix him or her. However, it's a bit unclear whether we should attempt to do that, or just pay lip service and say, “Hopefully one day we can do it.”


We also have interesting questions, once you get them up and running, of how fast they are allowed to run. This is going to take a lot of computing power. That's expensive. Do we demand that a virtual self run at the same speed as real time? Or that they're not allowed to run faster? And there are interesting moral implications here, because if they run really fast and have pleasure and pain, you might generate an awful lot of pleasure or pain very quickly. So, you might slip up and, “Oops, 10 years of pain, that was not morally good.” Yeah, we need to think a little bit about that.
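A small back-of-the-envelope helper makes the time-rate worry concrete: at a speedup factor k, each hour of wall-clock time contains k hours of subjective experience, so a brief lapse in oversight can correspond to years of (possibly bad) experience. The speedup and duration figures below are purely illustrative assumptions.

```python
# Subjective time experienced by an emulation running faster than real time.
# At speedup k, one wall-clock hour contains k subjective hours.

def subjective_years(wallclock_hours: float, speedup: float) -> float:
    """Subjective years experienced at `speedup` x real time."""
    subjective_hours = wallclock_hours * speedup
    return subjective_hours / (24 * 365)

# Illustrative assumption: an emulation running 1,000x real time,
# left unattended over a 48-hour weekend.
print(round(subjective_years(wallclock_hours=48, speedup=1000), 1),
      "subjective years")      # -> about 5.5 years
```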

 

Personal identity? I don't even have time to get into this one. Again, it fills entire philosophy libraries. There are wonderful legal issues here, especially since you might have multiple realizability: you might make copies of yourself. Which means contracts get really weird. And “one man, one vote” gets very odd if you can do ballot-box stuffing with yourself. We might need to update, a little bit, the formal rules of democracy.

 

Also, existing as software has a lot of beautiful, wonderful advantages, but it also makes you really vulnerable to the guy who's running the computer. After all, you can be instantly erased, or somebody might tweak you, slightly or completely. And privacy gets really iffy. I mean, if we're worried right now about surveillance, just think about if somebody could run surveillance on your brain, on your direct neural network, maybe even run small bootleg copies of you, testing your reactions. Yep.

 

So, software security is really important if we want a non-dystopian future here. Also, of course, who owns the brain scan? It's actually going to be tricky, because if I'm cryonically frozen, I'm no longer a legal person; then some corporation or university scans my brain, and now they legally own the brain scan. Then maybe later I get to be a real person again – Congress creates some act making me a person – but the ownership of that file? Hm... especially if it turns out to be really profitable. After all, I'm rather good at searching for information. Maybe they copied that, and now Google is using part of my neural network. It's not clear who should be paid, and how much. But, again, this is a legal problem. We can hash it out – probably in court, which is going to be expensive and annoying – but it is a solvable problem; it's not deep, profound philosophy.


So, to end here, I think there is going to be an enormous global impact. I don't have the time to get into this, but copyable human capital is economic plutonium. An associate of our institute, professor Robin Hanson at George Mason University, has done some economic analysis, just first-order sketches of what happens if you have software people, and it looks like, yep, that might be an economic singularity, if nothing else. Of course, it's going to have a big social impact.

 

There are some people who might argue that creating a whole new post-human species this way might also have some ethical problems. I don't have the time to get into that, but I don't think that's true. And then there is the important part. A lot of our research at my institute is about existential risk – the threat of humanity going extinct. And it tends to dominate everything. Essentially, if there is an existential risk you can fix, drop everything you are doing and fix that. It has priority.

 

Now, the problem here is that brain emulation seems to make things both better and worse. On one hand, you might end up with a radically new situation – changing the economy, changing our views of what it means to be an individual, creating new entities that not everybody might agree are even conscious, or even human – and we might upset a lot of things. On the other hand, we get a backup for the species. We get entities who can colonize space relatively easily. And we might be able to replace artificial intelligence that has a problematic motivation system with something better. And, I think most importantly, we might be able to explore various new forms of being. Indeed, a post-human mind-space might contain states which are much more amazing than anything we can currently achieve in our brains.

 

I think we need to use some tricks here. For example, it turns out that speeding things up seems to make things safer, because the first uploads are going to be fairly slow: if we figure out the neuroscience soon, we're going to need a supercomputer just to run a mouse. That gives us plenty of time to think about the consequences. If we wait too long – if we say, “Oh, that Avatar project, that was just blue-sky thinking” – we might end up with a nasty surprise when somebody suddenly succeeds, there is a lot of computing power that can run very fast, very powerful emulations, and we didn't have the time to have a proper ethical debate.

 

So, I think we have a chance of getting to the future in a moral way, and I think it's actually going to be really good, but we need to think about it ahead. Thank you.

 

Acknowledgements

 

Our thanks go to our volunteers Giulio Prisco, Kim Solez, Chris Smedley, Philip Wilson, Xing Chen, as well as anonymous volunteers, for their help with the transcription of Congress presentations.
