Monday, 10 April 2017

Psychopaths and Moral Enhancement

 

Michael Cook questions whether psychopaths should be morally bio-enhanced. This posting will examine his question, and in attempting to answer it I will address several related questions. A psychopath might be roughly defined as someone who lacks feelings for others and has no remorse about any of his actions, past or present. A psychopath is someone who, even if he understands moral requirements, does not accept these requirements. In this posting it will be assumed that moral bio-enhancement should be focussed on this non-acceptance. The first related question I want to address is whether a psychopath's non-acceptance of moral norms is a form of disability. Secondly, I will consider whether any moral bio-enhancement of psychopaths should be mandatory; I will argue it shouldn't. Thirdly, I will consider whether we have a moral duty to offer moral bio-enhancement to someone convicted of some crime due to his non-acceptance of moral norms; I will argue we do. Lastly, I will suggest that if it is permissible to offer moral bio-enhancement to psychopaths, there is no reason not to permit moral bio-enhancement more generally.

Let us accept that if someone suffers from a disability and we can mitigate the effects of his disability, then we have a prima facie duty to do so, provided the costs associated with doing so are not too onerous. Let us also accept that some form of safe moral bio-enhancement becomes possible; such safe enhancement is unavailable at the present time. It appears to follow in such circumstances, provided that a psychopath's failure to accept moral norms is a form of disability, that we have a prima facie duty to mitigate the effects of this disability. It further appears to follow that if we can only mitigate this disability by bio-enhancement, we have a duty to do so provided such enhancement is safe. Is a psychopath's non-acceptance of moral norms a disability? Most psychopaths are able to understand moral requirements, so their failure to act in accordance with these requirements is not caused by an inability to understand moral norms. It appears to follow that a psychopath's non-acceptance of moral norms is not a disability. This appearance is too simplistic. Let us accept that most psychopaths can understand moral norms even if they don't accept them. Perhaps this lack of acceptance is due to an inability to feel the force of moral norms, and this inability to feel should be classed as a disability. It follows that a psychopath's failure to accept moral norms might be regarded as a disability.

Does this moral disability matter? I will now argue that whether it matters depends on the context. It has been suggested that some CEOs of large companies have psychopathic tendencies. Having psychopathic tendencies might be seen as enhancing by a CEO, whilst the same tendencies might be seen as a disability by someone if they lead to his being imprisoned for some crime. I argued above that if someone suffers from a disability and we can mitigate its effects, then we have a moral duty to do so, provided the costs associated with doing so are not too onerous. It follows that if a psychopath lives in circumstances in which his condition might be classed as a disability, he should be morally bio-enhanced. This enhancement should only take place subject to the provision that the means used are safe and the costs involved aren't too onerous.

The above conclusion needs some clarification. A psychopath who is the CEO of a large company might not want to be morally enhanced, even if his condition disables him in some social contexts. I would suggest that we only have a duty to offer moral enhancement to psychopaths. It might be objected that my suggestion is too weak. My objector might point out that some psychopaths damage society and other people. He might proceed to argue that for such people moral enhancement should be mandatory rather than voluntary, due to the need to protect society. I accept that we need to protect people and society from psychopaths, but I do not accept that we must do so by means of mandatory biomedical moral enhancement. We can protect society from those psychopaths who harm it by restricting their freedom. Let us assume there is a safe biomedical form of enhancement which prevents psychopaths from committing crimes due to their condition. My objector might now argue that mandatory moral bio-enhancement is both a cheaper and a more humane way of treating psychopaths who have committed crimes than detention, and that it would be better for both psychopaths and society.

I would reject such an argument, which could easily be extended to include paedophiles. Let us accept that most psychopaths retain their autonomy. Unfortunately, whilst exercising their autonomy some psychopaths damage society. My objector wants to limit the damage done to society by removing some of a psychopath's capacity for autonomy. Is it possible to remove some of someone's capacity for autonomy? We can of course restrict the exercise of someone's autonomy, but this is not the same as removing some of his capacity for autonomous action. I would suggest that we should limit the damage a psychopath does to society by limiting his ability to exercise his autonomy rather than by modifying his capacity for autonomous action. Some might question whether there is a meaningful difference between these two approaches. I now want to argue there is. If someone's ability to make autonomous decisions is modified, then he is changed as a person. If someone's ability to exercise his autonomy is restricted, then he is not changed as a person, even though the exercise of his will is frustrated. Does the difference between changing someone as a person and frustrating his will matter? If we change someone as a person, we are treating him simply as a thing. We are treating him in much the same way as something we own and can do with as we please. Psychopaths may differ from most of us, but they are still human beings and should be treated as such; they should not be treated in the same way as something we own, or as an animal. If we frustrate a psychopath's will by detaining him, we are not treating him as something we own but merely protecting ourselves. We are still accepting him as a person, albeit a damaged person. In the light of the above I would suggest that the mandatory moral bio-enhancement of psychopaths would be wrong. I would also suggest that psychopaths should be offered voluntary moral bio-enhancement.
It seems probable most psychopaths would accept such enhancement on a voluntary basis if the alternative might be compulsory detention. Accepting the above would mean that we are still respecting the autonomy of those psychopaths who need to be detained.

I have argued that we should offer voluntary moral bio-enhancement to psychopaths, but it is feasible that exactly the same form of enhancement might be offered to people in general. Prima facie such an enhancement would not be regarded as correcting some disability. It might then be argued that because such enhancement is not correcting any disability, it cannot be argued by analogy that a more general moral bio-enhancement is desirable. I would reject such an argument because I don't believe the prima facie assumption stands up to close examination. Ingmar Persson and Julian Savulescu suggest we are unfit to face the future because our morality has not developed enough to permit us to cope with technological progress (1). What exactly does unfit mean? I would suggest being unfit means we are unable to counter some of the dangers created by our technology. If we are unable to do something in some circumstances, then we have an inability; in these circumstances we have a disability. It is conceivable that prior to our most recent technological advances our morality was fit for purpose. It might be argued that our morality remains fit for purpose but that these advances have made it difficult for us to accept the full implications of our moral norms, disabling us in much the same way psychopaths are disabled. It follows that the prima facie assumption, that a more general moral enhancement by biomedical means should not be regarded as correcting some disability, is unsound. It might be concluded that if technological changes make our morality unfit for our purposes by morally disabling people, then it can be argued by analogy that more general moral enhancement by biomedical means is desirable. It might be objected that this conclusion is not the only option available in these circumstances; we might try to change our current circumstances.
My objector might suggest that instead of a more general moral enhancement we should reject our most recent technological advances and seek to return to circumstances in which we accept the norms of our evolved morality. Such a suggestion seems impractical for two reasons. First, once the genie is out of the bottle it is hard to put it back in. Secondly, I am doubtful that our morality was ever fit for purpose once we ceased being hunter-gatherers.

We live in a dangerous world. Provided safe moral bio-enhancement becomes available, should such enhancement be mandatory? In the light of the dangers we face such an option seems an attractive one, but I would somewhat reluctantly reject it. Mandatory moral bio-enhancement would damage our autonomy. Our autonomy forms the basis of our being moral agents, and damaging our agency would also damage our moral systems. If safe moral bio-enhancement becomes available, then it should be encouraged, perhaps subsidised, but it should remain voluntary.


  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.



Tuesday, 14 March 2017

Automation, Work and Education


Our world is becoming increasingly automated and this increase appears to be having an effect on the number of jobs available. It is possible that in the future automation might not only lead to a decrease in the number of existing jobs but also create an increasing number of different jobs. A second possibility is that automation will mostly lead to a decrease in the number of jobs. In this posting I want to examine some of the consequences this second possibility has for work and education.

Pessimists might argue that a widespread loss of jobs will lead to widespread hardship and poverty. I believe such a pessimistic outcome is unlikely because such an outcome would threaten the survival of both the state and the market economy. In this situation both the state and the markets would have reasons to introduce some form of universal basic income, UBI. According to Tim Dunlop UBI means,

“A basic income, on the other hand, is the idea that everyone should be paid a minimum monthly income that allows them to meet their basic economic needs.” (1)

It is important to note that UBI in response to increasing unemployment caused by automation is not some attempt to reform the benefits system, but rather an attempt to counter an existential threat which might be posed to the state by this unemployment. It might be speculated that UBI might not just be useful in combating the effects of unemployment but might also be necessary for the continuation of capitalism. In an age of large-scale automation, capitalism might survive without workers, but it seems doubtful that it could survive without consumers. In the rest of this posting I am going to assume that if automation causes widespread job losses in any state, that state will introduce some form of UBI in order to counter this existential threat. I will further assume that UBI will be large enough to permit people to live in moderate comfort.

Some might think that automation and UBI will lead to some golden age. In the ancient world the upper classes in Greek and Roman society led a life of leisure in which most of the work was done by slaves. It might be argued by analogy that automation might introduce a golden age in which we live a life of leisure, with most work either becoming automated or done by robots. I believe such a golden age is an illusion for two reasons. First, upper-class Greeks and Romans may have led happier lives than their slaves, but there is no evidence that they led happier lives than people living now. The ancient golden age, at least for some, appears to be an illusion, and so any argument by analogy fails. Secondly, if we live in a world in which all the work is automated or done by robots, we might suffer from the unbearable lightness of simply being. We might feel our world has lost all purpose and that we simply exist. We might become bored. Limited boredom might encourage us to take steps to alleviate our boredom, but prolonged boredom is harmful. According to Harry Frankfurt, boredom is not some innocuous discomfort but something that threatens our psychic survival (2). I have previously argued that a world whose inhabitants are bored and feel they are simply existing is a dangerous world, see riots and the unbearable lightness of simply being. It is possible that even if automation frees people from work, and the resultant widespread loss of jobs does not lead to widespread hardship and poverty, it might still lead to people's lives being degraded rather than to some golden age.

The above pessimistic scenario seems to be a realistic possibility, and I now want to examine what might be done to counter its negative effects. Prior to my examination I want to consider what we mean by work. Work might be roughly defined as making an effort for some economic reward or hope of such a reward. However, such a definition is at best an incomplete one. I have suggested previously that someone might work in her garden purely for the pleasure it brings her, without any thought of economic reward. Hannah Arendt suggested there is a difference between work and labour. According to Arendt, labour is what we do in the normal process of living in order to survive. For Arendt, work might be simply defined as any human activity which is not driven by our need to survive. Arendt's definitions are interesting but also seem to me to be incomplete: on such a definition dancing would count as work, yet dancing is not working. Intuitively work requires some effort. Work might now be defined as any human activity requiring effort which is not driven by our need to survive. Such a refined definition also seems incomplete. If I am running away from a bull I might make a great effort, but I'm not working. Work might now be defined as any human activity which matters to us, requiring effort, which is not driven by our need to survive. I believe Arendt's insight is important and I will use it to define two different ways of working. It might be better to label labouring as 'working for' something we need to survive. 'Working for' something has mostly instrumental value. Work defined as a human activity which matters to us, requiring effort, which is not driven by our need to survive might be labelled 'working at'. 'Working at' has mostly intrinsic value.

Let us now examine the possible effects of increasing automation bearing in mind these two definitions of work. Let us assume that automation might decrease or even eliminate our need to 'work for' things, to work instrumentally. Does this decrease matter? I would suggest it does matter to someone if she doesn't 'work at' something. In such a situation it seems highly probable that such a person might suffer from the unbearable lightness of simply being. She might feel her world has lost all purpose and that she's simply existing. It follows that we have some reason to fear the effects of increasing automation.

Assuming we aren’t Luddites and don’t want to or can’t stop the progress of automation what steps should we take to mitigate some of the worst effects of not ‘working for’ anything? First, if automation greatly decreases our need to ‘work for’ we would need to refocus our education system. At the present time at lot of education focusses on equipping people for jobs, to ‘work for’. Let us assume people no longer need to ‘work for’ and that a purely hedonistic lifestyle also leads to a lightness of simply being. In such a situation ‘working at’ something might help counter someone’s sense of simply existing due to her ceasing to ‘work for’ something. In this situation education should focus on enabling people to ‘work at’. In order to do so science education remains important because we need to understand how the world we live in works. But we also need to simply understand how to live in such a world and to enable us to do so education should place greater emphasis on the humanities.

I have argued that in a highly automated age people need to become better at 'working at' something. All work can be good or bad, and this includes 'working at'. Someone might 'work at' doing crosswords all day; I would suggest this is not good work. If 'working at' is to replace 'working for', it must be good work. Samuel Clark argues that one element of good work is that it requires some skill. According to Clark,

“the development of a skill requires: (1) a complex object and (2) a self-directed and sometimes self-conscious relation to that object.” (3)

I now want to consider each of these requirements. According to Clark, good work involves working at something which must have some complexity; the something we work at must have a complex internal landscape of depth and obstacles (4). He gives as examples of skilled activities music, mathematics, carpentry, philosophy and medicine. Doing crosswords might be a difficult task but it lacks complexity. Clark also argues good work must be self-directed. Let us assume someone is self-directed to work at some complex task purely to mitigate her sense of simply being. I would suggest that such self-direction fails. Why does it fail? It fails because in order to prevent this sense of simply being, someone must work at something that satisfies her. For an activity to satisfy someone, she must care about that activity. Let us accept that Frankfurt is correct when he argues that 'caring about' is a kind of love, because the carer must identify with what she cares about. It might be concluded that good work is doing something complex which the doer 'cares about' or loves. It might then be suggested that provided people can 'work at' something, and that this is good work, this 'working at' might mitigate some of the effects of job losses due to automation.

However, even if we accept the above, difficulties remain. Let us assume that any good work, whether 'working for' or 'working at', requires some skilful action. Let us further assume that a skilful action requires the doer to identify with her actions by 'caring about' or loving them. Unfortunately, 'caring about' or loving is not a matter of choice.

“In this respect, he is not free. On the contrary, he is in the very nature of the case captivated by his beloved and his love. The will of the lover is rigorously constrained. Love is not a matter of choice.” (5)

It further follows that if someone simply chooses to 'work at' something in order to compensate for her loss of 'working for', this 'working at' need not be good work and as a result won't mitigate her sense of boredom. Someone cannot simply choose to do just anything to alleviate her boredom. If she simply chooses, it seems probable her choice will bore her. She must 'care about' what she chooses. If society is to help mitigate the effects of job losses due to automation, then it must create the conditions in which people can come to care about doing complex things. I have suggested above that education might help in this task. W B Yeats is often credited with saying 'education is not the filling of a pail, but rather the lighting of a fire'; perhaps education must fire people's enthusiasms every bit as much as enabling their abilities. Perhaps also we should see learning as a lifelong process. Broadly based lifelong education which fires people's enthusiasms might help create the conditions in which people can 'work at' things, hence mitigating some of the harmful effects of job loss due to automation.


Lastly, there are activities which might mitigate some of the harmful effects of job losses but which have little to do with work. Music and sport would be examples. Of course it is possible to 'work at' music and sport; we have professional sportspersons and musicians, but most people just play at such activities. Play is a light-hearted, pleasant activity done for its own sake. Play is important, especially for children. It might be suggested that some forms of play are a form of good 'working at'. All work is goal-directed and so is some play. Perhaps there is a continuum between work and play, with the importance of the goal varying. Perhaps in an automated age play should become more important to older people too. Activities such as playing sport or music require some infrastructure, and perhaps in an automated age it is even more important that society helps build this infrastructure. At the present time governments foster elite sport. Perhaps this fostering should change direction towards fostering participation rather than funding elite athletes.

  1. Tim Dunlop, Why the Future Is Workless, New South, Kindle edition, locations 1748-1749.
  2. Harry Frankfurt, 2006, The Reasons of Love, Princeton University Press, page 54.
  3. Samuel Clark, 2017, Good Work, Journal of Applied Philosophy 34(1), page 66.
  4. Clark, page 66.
  5. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 135.


Wednesday, 22 February 2017

Sex with Robots


In the near future it seems probable that some people will have sex with robots, see the rise of the love droids. In this posting I will discuss some of the problems this possibility raises. I will divide my discussion into two parts. For the most part my discussion will consider sex with robots which are simply machines, before moving on, much more fancifully, to discussing sex with robots which might be considered as persons.

Let us consider someone having sex with a robot which isn't a person, which is simply a machine. Human beings have created objects to be used for sexual purposes, such as vibrators and other sex toys. If a robot isn't a person, then it might appear that someone having sex with a robot is unproblematic, in much the same way as the use of these artefacts. I now want to argue that this appearance is false. But before making my argument I want to consider the nature of sex. Sex among humans isn't simply a matter of reproduction. Human beings enjoy sex. Neither is this enjoyment a purely mechanical thing. According to Robert Nozick,

“Sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other. Even in masturbatory fantasy, people dwell upon their actions with others; they do not get excited by thinking of themselves whilst masturbating. “(1)

If we accept Nozick's view, what does having sex with a robot really mean to the person having sex? Provided a robot has been supplied with the appropriate genitalia, would someone want to have sex with it? I would suggest that in many cases he would not. Let us assume that a robot has the appropriate genitalia, four legs, one arm and several detachable eyes. I would suggest very few people would want to have sex with such a machine. Nozick argues that even when masturbating someone is imagining having sex with another person, and I would suggest much the same applies to having sex with a robot. If someone has sex with a robot, he would want it to look like a beautiful person because he is imagining having sex with such a person.

What are the implications of accepting the importance of such imagining? First, I would suggest having sex with a robot is just an enhanced form of masturbation. Masturbation isn't wrong because it doesn't harm others. Having sex with any robot which is purely a machine doesn't harm others, and so by analogy also isn't wrong. Indeed, in some circumstances masturbation might be an acceptable choice for those who are physically or emotionally incapacitated, and perhaps also for those who are incarcerated. However, even if we accept the above, masturbation isn't ideal and neither would be sex with a robot. Someone having imaginary sex with a person is having inferior sex, because what he desires is real sex.

I have argued that the first reason why someone might want to have sex with a robot is that he cannot have sex with another person, and that there is nothing wrong with his actions. Anyone having sex with a robot knows he cannot harm the robot. This gives rise to a second reason why someone might want to have sex with a robot. Someone might know that the type of sexual activity he wants to indulge in might be harmful to another human being, and because he knows he cannot harm a robot he prefers to indulge in this activity with a robot. Does acting on such a preference matter? After all, he isn't harming anyone else. Kant argued we shouldn't be cruel to animals as this might make us cruel to human beings. Might it be, then, that if someone engages in such sexual activity with a robot, this activity might make him more likely to engage in harmful sexual acts with other human beings? At present there is no conclusive evidence to support Kant's argument that if someone is cruel to animals this cruelty makes him more likely to be cruel to other people. If this is so, it seems doubtful that someone engaging in such sexual activity with a robot would become more likely to engage in it with another human being. This is an empirical question and cannot be settled by philosophical analysis. However, someone engaging in sex with a robot in a way which would be harmful to a human being might harm himself. I have previously argued that for the users of pornography there is a split between fantasy and reality, see wooler.scottus. I further argued that in the case of sexual practices which might harm others, the maintenance of the split between fantasy and reality is absolutely essential. I have argued above that someone having sex with a robot imagines he is having sex with a person. It follows that for someone engaging in sex with a robot, in a way which might harm another human being, the maintenance of the split between fantasy and reality is also essential. I further argued that if someone uses pornography this split threatens the unity of his will, which is damaging to his identity. It follows that someone engaging in sex with a robot, in a way which would be harmful to a human being, might harm himself by damaging his identity.

Some people assume that at some time in the future some robots might become persons. I am extremely sceptical about this possibility, but nonetheless I will now consider some of the problems of someone having sex with such a robot. However, before I do so I will question whether anyone would want sex with such a robot. Let us accept that Nozick is correct in his assertion that "sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other." How do we perceive the connection to a robot which is also a person? I suggested above that a robot can take many forms. Would anyone want to have sex with a robot with four legs, one arm, several detachable eyes and appropriate genitalia, even if it could be considered as a person? Persons are partly defined by the actions they are capable of enacting, and these actions are partly defined by their bodies' capabilities. Robots can have very different bodies from us. A robot with a different body structure might be capable of very different actions to us; such a robot, even if it is considered as a person, might be a very different sort of person to the sort we are. The same might also be true of a robot with a similar structure which is constructed from different materials. If someone or something is very different to us, then the connection between us and that someone or something becomes tenuous. Would someone want to have sex with a robot with which he had only a tenuous connection? I doubt it. Of course someone might want to have sex with such a robot provided it looked like a beautiful human being. But if this is so, isn't he really imagining having sex with a person? The problems associated with having sex with a robot which is purely a machine once again become relevant.

In conclusion, I have argued that someone would not harm others by having sex with a robot and that his actions would not be morally wrong. However, I argued that whilst it might not be wrong to have sex with a robot which is purely a machine, it might nonetheless be damaging to the user's identity, in much the same way as pornography, by splitting his character. Lastly, I questioned whether anyone would really want to have sex with a robot which might be considered as a person.

  1. Robert Nozick, 1989, The Examined Life, Touchstone, page 61.
