Should human enhancement be a moral imperative? An interview with Julian Savulescu

April 16th, 2018 | Philosophy of Ethics, Science and Society

by Meghan Winsby

Julian Savulescu is Director of the Uehiro Centre for Practical Ethics at the University of Oxford, Professor of Practical Ethics, and Fellow of St Cross College. He is editor-in-chief of the Journal of Medical Ethics, and the author of over 250 publications, including 2012’s Unfit for the Future: The Need for Moral Enhancement (with Ingmar Persson). 2011’s Enhancing Human Capacities (co-edited with Ruud ter Meulen and Guy Kahane) reviews and thoroughly evaluates the ethical and policy implications of the latest in human enhancement technologies. Professor Savulescu visited the Rotman Institute of Philosophy from March 14-15, 2018.

Prior to his talk, The Science and Ethics of Human Enhancement, Professor Savulescu sat down with Rotman member Meghan Winsby for the following interview.

MW: Your talk today is titled “The Science and Ethics of Human Enhancement.” What, on your view, counts as enhancement?

JS: Most people understand human enhancement to be the enhancement of some kind of function within the normal range. So, typically, there is a distinction between disease and health, and between treatment and enhancement. For example, intellectual disability is defined as an IQ that is less than two standard deviations below the mean—so less than 70, where the average is 100. So any increase in somebody’s IQ below 70 would count as treatment. If somebody started with an IQ greater than 70, and we increased it, this would count as enhancement. This leads bioethicists like Frances Kamm, for example, to distinguish between two different categories of functional enhancements. One category contains increases within the normal range—so in our example increasing somebody’s IQ from 80 to 120—and the other contains increases outside of what human beings could ordinarily achieve, like increasing somebody’s IQ from 150 to 250.

In my view we can also think of another kind of enhancement, and I have called this the welfarist definition of enhancement. So this is more of a normative definition rather than just talking about functional enhancement. According to the welfarist definition of enhancement, some change in our biology or psychology in a given set of social or natural circumstances that tends to increase our well-being counts as enhancement. On this view, whether some functional increase or decrease counts as a welfarist enhancement depends on its impact on welfare, or well-being. So, often you hear people like John Harris saying that if something is an enhancement then it must be good. What he means by that is if something is increasing your well-being then it must be good. But, obviously, whether increasing or decreasing some function is good depends on the circumstances, our values, and our concept of wellbeing.

MW: Are you optimistic that human enhancement will help us lead overall happier lives?

JS: It depends on what you mean by happiness. Some people use happiness to mean eudaimonia or flourishing, which essentially means well-being. So, on that view, impulse control and the ability to delay gratification will be an enhancement that promotes happiness in this broad sense. But I think of happiness in a narrower, sort of hedonic sense of feeling pleasurable mental states. And there’s good evidence that people’s level of happiness is set biologically—they have a hedonic set point. So people tend to return to their set level of happiness regardless of how bad or good their circumstances may be.

There’s no reason why you couldn’t change people’s hedonic set points. I would rather be happy, more of the time. Now, of course, if you just became oblivious to everything else, and you were just pressing a happy button to experience euphoric mental states, we want to say that wouldn’t be a good life. But I think if you think about the happiest day of your life, and you could have that sort of happiness more often, why not? And I think happiness, contentment, and these sorts of positive mental states are going to be something that is within our grasp of modifying. Of course drugs like Prozac are starting to sort of get us there, but I think in the future we will have drugs that are better than alcohol, which is a neurotoxin and a hepatotoxin, changing how happy we feel. People won’t be taking backyard, bikie-produced ecstasy drugs. There will be ways of changing our mental states to enable us to be happy.

I think people worry that we will just become zombies, just taking these drugs, and that is a real worry. But seeing my kids on social media and the internet, I think we’ve already reached that zombie-like existence and it’s going to be a challenge—whether it’s using the pharmacology of the future or the computer—to enable people to use it in a way that enables them to have a truly good life. I think you’ve got to have some objective conception of well-being that says there are certain things that are good for people—real human relationships, for example, or a variety of human relationships and not just virtual ones. So I think that we can lead happier lives and the challenge is not just to be happy, but to have a good life overall. I think happiness is only one part of a good life.

MW: What are some of the ethical issues associated with the use of enhancement technologies?

JS: When people are concerned about functional enhancements rather than welfarist enhancements, they’re concerned about a number of different issues. The first one is obviously safety. I think this is an uninteresting ethical issue, because if it’s not safe or involves risks, then this is just the familiar issue of balancing risks versus benefits. But people tend to say that enhancement is special because when you’re thinking about risks, you’re often thinking about health risks. The idea is that you’re trading health for some other non-health-related benefit. And people often say we shouldn’t be trading any kind of health risk for other improvements, and I think that’s a mistake. This is to hold health above all other values. Now, I think health is an instrumental good, but we risk health in itself all the time. People risk health when they try to climb Mount Everest or when they drive a car. So there are issues around safety, but I think that they’re no different to the issues we face in everyday life when we decide whether to drink alcohol for social or personal benefits versus the risk it has to health. Alcohol consumption in this context is a form of enhancement.

The second issue that many people are concerned about is inequality. The charge is that where some people are getting an advantage that other people aren’t, it’s unfair. I think that’s a valid concern and the classic instantiation of this is the film Gattaca, where we’ve created a two-tiered society of the genetically privileged and the genetically oppressed, who are discriminated against. But although I think this is a concern that we have to address, I don’t think it’s an overwhelming objection to enhancement. There is natural inequality. It’s not the case that we all start out equal, and then enhancement suddenly disturbs this fair, natural state. We start off very unequal. IQ is a sort of example; some people are born with very low IQ, some with very high. And the same is true for every human characteristic from empathy to self-control or athletic ability. Everyone is different.

MW: So we may not all start out equal, but there’s a distribution of advantages that would be skewed, right?

JS: Right, so what people who are on the sort of left of the debate fear is that the rich will buy these advantages and they will skew the distribution curve by getting even greater advantage. And that’s certainly true, that could happen. But we could conceivably treat enhancement like health care here in Canada, for example. Impulse control—particularly the ability to delay gratification—for example, has been shown to be one of the biggest determinants of your academic success, your social success, and your economic success. So, if that’s important, and even people at the low end of normal but who don’t have diseases are disadvantaged, you could treat that as we treat healthcare and make it freely available. So Canada could make enhancements of self-control available to the worst off, and that would reduce inequality. So whether inequality is reduced or increased depends on how we distribute this technology. And it’s not inevitable that it has to be expensive, it’s not inevitable that it has to be made available only to the best off in society. I think that’s the challenge of enhancement, to use it to both promote people’s wellbeing and also promote the kind of society we want to live in. I personally think we can do better than the status quo. We probably won’t be living in a Utopia, but we might have much better than we have now.

MW: Inequality seems to be a really common objection. The idea that these technologies and interventions will likely be available only to the best off seems reasonable when we look at the current global inequality of access even to basic health care. Are you sympathetic at all to the worry that the enhancement of people will serve to enhance the social inequality between them as well?

JS: Yeah, I’m worried about that. But I think there are three responses to that challenge. One is that there is no reason to treat biological enhancements differently to any other enhancements—technological, for example. So we don’t think that we should limit the development of the internet, computers, smart phones, and so on because the rich will have the best versions of that technology. These technologies are vastly more powerful—I mean artificial intelligence is going to be hugely more powerful than nearly all biological enhancements—so if that argument applied it should apply across the board. Second, I do think there is a concern about these and all forms of technological enhancements increasing the divide, and that we should look at ensuring there’s a safety net—not just for medicines, as Canada has—but for technological opportunity and biological opportunity. And the last thing I think, just for context, is that you have to take a long view. So we’re not talking about next year. If you look at our history so far, enhancements—initially reading and writing—were only available to a tiny fraction of humanity. And now, at least in developed countries, they’re available to everyone. Mobile phones were initially available only to the very wealthy. Now they’ve revolutionized business infrastructure in Africa.

So there is a diffusion of technology, and what starts out divisive tends to diffuse and improve people’s lives. I think we can facilitate that, and we should facilitate it. But this idea that everything has to be available to everyone, immediately, just flies in the face of reality. For example, I think patents are not ideal, but patents exist to stimulate innovation for a period of time. There will be initial inequality, but after that period those technologies can become widely available. So if you’re thinking 500 years ahead, the kinds of things we’re concerned about now will be sort of obsolete. I think the challenge is to balance that, and there is an unhealthy obsession with inequality. Inequality is bad, but equality is not the only value. Some level of inequality should be tolerated for the sake of other values. People often don’t want that, but rather want absolute equality, and that’s just not achievable, and nor is it desirable. I think the place of equality is important in this debate, and there is interesting work to be done for sure.

MW: Are there other important ethical worries?

JS: Another issue that, for example, Michael Sandel is very concerned about is a certain attitude that enhancement engenders. First of all, the worry goes, a society in which enhancement was widely available would undermine solidarity. This is because those who didn’t choose to be enhanced would then be responsible for their own disadvantage. We would take up a different attitude toward the worst off, where we wouldn’t feel we have to care about them as we do now. Now—where we’re all victims of chance and misfortune—we create insurance schemes and social welfare to unite us against this threat of bad luck. Sandel and others worry that we would no longer feel it a matter of justice to structure our institutions so as to protect against natural misfortune, as this misfortune would be optional. I think this is a concern but again, not overwhelming. I think there will always be natural misfortune, and enhancement will never guarantee people a perfect life. Enhancement is only going to change the probabilities, it’s not going to create a certainty. So I think we still need to enhance solidarity and indeed enhancement could be used for those sorts of attitudinal changes through various forms of moral enhancement, which I’ve talked about.

Another concern that people like Sandel have is that rather than accepting our lives and our capacities as gifts, we will start to think of ourselves as the masters of ourselves and our destiny. Without enhancement we accept our strengths and limitations, we are open to the unbidden and to chance and we try to fashion society in such a way as to enable people to have the best opportunity in an uncertain world. When we adopt the attitude Sandel is worried about, there is a danger of instrumentalizing ourselves—using ourselves or our children as a means to achieving certain goals. I think that’s a danger, but I also think life is about trying to improve yourself. We try to improve ourselves with education, with diet, with psychological techniques and enhancement technologies are just another part of that whole pattern or approach of trying to enable ourselves to have good lives and to allow society to flourish.

A different kind of moral worry is that enhancement renders people’s achievements inauthentic. So you get this in the doping debate. People say it’s not really the person’s achievement, it’s the drugs, or the pharmaceutical company or the corporation that produced the device that they used. Again, this could be the case with extreme forms of enhancement, but you have to think about the specific cases under consideration. Nobody thinks that the computer that enables me to manipulate text and information and do things much more quickly than I did when I started in academia now robs me of my achievements. They say, well, that enables you to do more, but they’re still your achievements. If you’ve put in sufficient work, it’s sufficiently yours. So, we’re constantly integrating ourselves with technology, and it’s true technology could dominate us. If my computer started telling me what to write, and started writing my papers for me, then we have a legitimate concern—then it’s not really you, it’s the technology.

MW: It seems this kind of worry brings us into the free will/determinism debate, where we ask questions like ‘was it me? My computer? My genes?’

JS: And that’s another thing. There’s no doubt that enhancement can undermine our authenticity. It can undermine our free will. But it can also enhance it! To give you an example: my account of free will is not that we are completely free and able to—through a process of uncaused causation—affect the world. My view of freedom is that you’re free when you’re able to set your own rules according to your values. So actually, being free requires delaying gratification, constraining yourself in a world where given your human limitations you’re prone to deviate from what you most value. What people do is set pre-commitment contracts, they say, I’m going to get rid of butter from the fridge or I’m not going to have any alcohol in the house. So, that’s freedom when you do that. At that point, although you’re constraining your options, you’ve done it! You have achieved a certain outcome that you value.

Now, one way in which a substance could enhance our free will is by improving impulse control. Ritalin, for example, does this and enables you to delay gratification. Another way is through motivational enhancement. Drugs like Ritalin, and also Modafinil, will do this. One hypothesis about how Modafinil works to enhance cognitive performance is by improving task engagement and task enjoyment. So you enjoy doing what you want more. So, that is a way of enabling you to put aside distracting temptations or other cues. Now, does that rob you of free will? If it’s a part of an intentional, value-driven project where you’re still exercising large amounts of effort, but you use Modafinil strategically to aid you in this, it’s no different to training that enables you to perform effortlessly or enter a flow state in some sporting event. In these cases when you’re actually doing it you’re simply enjoying it. You’re not experiencing the difficulty of exerting huge amounts of effort and control, but that’s because of the prior training and the prior effort. So, provided you had that background effort as part of a value-laden project, I think that enhancement substances could enable you as a human being to be, in a certain sense, more free.

So I think the answer when you get any of these objections is that whether enhancement is good or bad depends on the specific enhancement that you’re talking about. So, for example, a drug that enabled you to become more empathetic, and stopped you being violent, wouldn’t be undermining your free will, unless you really wanted to be violent. A device that just changed your desires to be violent would undermine your free will. So, they’re quite different things. Even though people want to have this very blunt discussion about enhancement, there are so many subtleties. On the outer edges, in my view, there are clearly dangers but there are also clearly great benefits.

MW: It’s interesting that you bring up increasing empathy, particularly in light of your support for moral bioenhancement. Recently people like Peter Singer and Paul Bloom have argued against empathy’s positive role in moral decision-making. What are your thoughts?

JS: You have to make a decision whether we need more empathy or less empathy in society. I think my general view on this is that the current—or natural—state is highly unlikely to be optimum for what our values are for our lives or for society. So if you look at a recent study—there was a large meta-analysis of psychological research that used a standard measure of empathy for the last 30 or 40 years, and looked at how empathy has been changing over time in the US. It’s been decreasing, especially since the year 2000. Now you might say, if you’re Peter Singer or Paul Bloom, well that’s a good thing. That’s social progress! Or you might say it’s a bad thing, and we ought to do something about it. The point is we have to make a decision, and then if it’s in our power to increase it or decrease it, then we should increase it or decrease it. It’s not that you say whatever happens is good, that’s an inhuman approach to things.

In our paper, The Moral Importance of Reflective Empathy, Ingmar Persson and I argue that empathy is not solely good, but it’s an important motivational kick-starter. So if you want people to change their behaviour, harnessing empathy is the most likely, or most important, way of getting them to act. You can know something is important, you can morally reason that you should be doing that thing, but what kicks you into action is empathy. So I think we should be enhancing empathy to a degree, but in conjunction with cognitive enhancements. So again, it’s not just that we need more empathy. It’s possible to have too much empathy, and then you’re just paralyzed. At the moment we’re in a medieval stage of thinking about which modules need to be put together, and how. But I think the challenge of this whole area is to try to decide how much empathy, in what way, in combination with what other cognitive capacities, to make the sorts of beings that we want—and the fact is, we can. The point is that on an evolutionary view, evolution just hasn’t given us all we need. It’s given us some tools, and they’re varied and limited. We’re in the position now where we can change that.

MW: You’ve said that not only is human enhancement permissible, it is our moral imperative. Can you elaborate on this? What’s the argument?

JS: There are a number of ways in which enhancement could be a moral imperative. First of all, when you’re talking about welfarist enhancement—enhancing people’s well-being—you’ve got the same moral imperative to develop human enhancements as you do to develop treatments for disease. We should be developing treatments for disease. Why? Because disease undermines people’s well-being. But it’s not just disease—it’s not just being two standard deviations below the mean—that is bad for people. It might be that being one standard deviation below the mean is bad. Or it might even be being average, given the way society has evolved, or according to our own values. Consider that 100% of people age. People get deafer as they get older, and they lose their sexual potency and lose their memory. That’s completely normal. But even though 100% of people experience it, it’s still not conducive to well-being! And if you can change it then you should change it, in my view, according to its risk/benefit profile and our set of values. So that’s one argument—that it promotes human well-being.

Here’s another one. We should all agree that we want people to be more moral. Well, a part of our moral behaviour is determined by our biology. We’re moral animals and we can change aspects of our biology that will make it more likely that we will behave morally. So for example, we’re all implicitly racist. This is programmed into our biology because we evolved in small groups of 150 on the African savannah, and morality evolved to facilitate cooperation within small groups, not with globalized societies. So we’re distrustful of out-groups. One of my PhD students, Sylvia Terbeck, ran a study of Propranolol, a beta blocker, which showed that taking the drug reduced implicit racism. So that’s not to say that everyone should be on Propranolol, but it shows that changing our biology can affect our beliefs and our attitudes.

I believe we’ll be using this sort of knowledge in the future to create more tailored educational programs that harness physiological changes with educational paradigms. One of the goals will be to educate people into a sort of secular morality. You might say that sounds crazy, but you don’t want to try to teach people who are sleep-deprived. That’s just not going to be effective. Likewise there might be other physiological changes we’re able to bring about by diet, or by transcranial electrical stimulation, or through the use of future pharmaceuticals, that enable people to learn more effectively. This holds not just for mathematics, but also for moral attitudes and moral behaviour.

MW: What do you think are the ethical limits on human enhancement? In the context of prenatal genetic enhancement, for example, should any parent be permitted or encouraged to design their ideal child according to their own preferences and values?

JS: I think you do need limits, and I think there ought to be limits around safety. But again, the claim that something has to be perfectly safe in order for it to be ethical is just ludicrous and out of touch with any sort of reality. The risks have to be reasonable and commensurate with other risks that we expose children to. So we need to have quite a high bar for safety, but there is always going to be some risk when you’re trying to do anything in the world. Nothing is perfectly safe. So safety is one. And then the second is that it needs to be aimed at something that is plausibly good. So it needs to be something that promotes the child’s well-being, and based on a plausible conception of well-being. You can’t just say, “well, I think my child will be better off with one leg rather than two.”

MW: What about a deaf couple, who would like a child who can be fully a part of her parents’ community?

JS: So I’ve written a lot on this. I think deafness is a good example of how functional performance, and the value of functional performance, depends on context. So, I think deafness is a disability. In the world as it is and is likely to be, deafness represents a disadvantage. Hearing people have the ability to communicate both through sign and speech, and there will inevitably be various auditory cues, warnings and modes of communication that are inaccessible to those living with deafness. So in the world the way it is, even under conditions of justice, deafness is likely to leave people with some disadvantage. Now I don’t think it’s as great as people suggest. So I think that if we actually improved the social context—have it so that everyone must learn to sign in school, for example—this would make people’s lives with deafness better, but it still wouldn’t remove entirely the disadvantage. However, if the world were extremely noisy—if you had to work in an extremely noisy factory all the time—being deaf would be an advantage. But this is a highly restricted context and isn’t how the world is and is likely to be. I think deafness is a disability, and I think it’s wrong to deafen a child. So if two deaf parents see it as an enhancement to make their child deaf, perhaps for social reasons, I still think it would be wrong to deafen that child. At that point, we would need to say that you can’t give that child drugs to make them deaf or cut the auditory nerves.

A real life version of this is if deaf parents have a child who is deaf, a cochlear implant is available as a treatment, and they refuse the cochlear implant. On my view there is overwhelming reason to say we need to give that child that advantage. The child can always remove the cochlear implant when they’re older, but they can’t have the implant later because the advantages of learning to speak and hear are only fully realized when these abilities are accessible quite early in life.

So I think there will be limits. We’ll have to make judgements about well-being, about disability, about the limits on what a good life is.

And then the other kind of restriction would be I think we can’t change things that would impose a significant harm to others. So, creating children who are highly aggressive or violent or psychopathic, would be something where we should say no. So, likewise, I think if there were enhancements against psychopathy and sociopathy and these harmful traits, we should employ them. And a lot of the psychiatry around personality disorders is a form of enhancement aimed at improving social integration and reducing antisocial tendencies.

MW: This might be more a risk/safety concern, but do you worry at all about unintended consequences of a species-wide enhancement or engineering effort for human beings—a loss of diversity, or a whole new evolutionary path?

JS: So one of the objections people give is that we need diversity. It’s true that throughout human history, the species has survived in part due to our genetic diversity. So one example is there is a group of people who are naturally immune to HIV. So what would have happened through most of human history is you have an epidemic of HIV, then most people die off, and then a few people who happen to be different manage to survive and repopulate the species. Genetic diversity has in fact been an extremely beneficial feature of our evolutionary history. But is that how we survive HIV today? We’re not sitting waiting for these genetically privileged people to repopulate us. We’re developing drugs to treat the disease and we’re developing strategies to prevent its spread. So this reliance on brute diversity as a means of protecting ourselves or enabling progress I think is just out-dated.

The world is not the African savannah now. And indeed if we wanted diversity for its own sake we could engineer massive diversity through manipulation. So if diversity is a good thing, we should have more of it! Maybe we should be engineering lots of diversity. But I think we need the amount of diversity and the kind of diversity that is fit for those of our values that are morally defensible. This needs to take into account the likely trajectories of the world while maintaining the flexibility of changing course if something disastrous happens. So, 1% of the population are psychopaths. That’s 70 million people! Is that a good thing? Well through most of human history having a psychopath in your group might have helped your group survive against another group, but it’s not at all a good thing that 70 million people can make biological weapons that will wipe out humanity. We don’t need psychopaths when we’ve got weapons of mass destruction. It’s not at all a good thing that there are psychopaths in charge of countries and corporations.

Where diversity is interesting is in the context of discussions about neurotypical children and autism spectrum disorders. I think that really severe autism is just a bad thing. But as you travel along the spectrum to conditions like Asperger Syndrome, there may be advantages and disadvantages. Here I think we just have to say that ethics is not black and white. It’s black, white, and grey, and there are going to be a lot of cases we just can’t judge overall good or bad. I think we ought to be protecting space for that, although just because we don’t know whether it’s better to be neurotypical or have Asperger Syndrome, doesn’t mean we can’t come out and say that severe autism is bad. Again, people think we’ll have this really neat, cardinal ordering of everything where we can pick out precisely what is good and what will be enhanced and what won’t be. I think it’s going to be much rougher than that.

MW: As our knowledge grows, human beings will take—and are taking—an increasingly intentional role in our own evolution. Should we have any specific goals in mind? Do you think we could arrive at a consensus on an ideal for our species?

JS: Yeah, so I’ve used this term ‘rational evolution’ here. Rational evolution steps beyond Darwinian evolution, where the focus was survival to the point of reproducing. The goals there were survival and reproduction. But today I think our goals should be two: well-being and some sort of moral ordering of society. Our choices should reflect our pursuit of those goals, and I think the biggest questions that we face as a society—whether in relation to biological enhancement, or to our educational institutions or immigration policies—are basically what is a good life, and how do we balance individual interests against those of society as a whole? How much should we promote some conception of justice at the cost of some aspects of individual well-being? I think we can agree on that.

I personally think well-being has three aspects. Happiness, desire-fulfilment or autonomy, and certain objectively valuable activities that characterize our species like having social relationships, love, friendship, being creative and original, gaining knowledge, being able to affect the world, and so on. The intersection of those three values is where we find well-being. And you have to answer this question because if you’re trying to work out whether your country is doing well or your citizens are doing well, GDP is just an outdated measure of what’s good in life. So I think I like enhancement because it brings right into your face what the most important questions in ethics are. These are not questions that are unique to the bio-enhancement debate. They’re central to bioethics and practical ethics more broadly.