Tuesday, 2 May 2017

Widespread Moral Enhancement


In my last posting I examined whether we should morally bio-enhance psychopaths. I concluded that we should encourage such enhancement. Ingmar Persson and Julian Savulescu argue that there is a need for a much more widespread moral enhancement in order to counter the existential dangers our modern world poses (1). They argue that because our morality developed in small communities it is unsuitable for combatting these dangers. I accept that there is a need for such enhancement. In this posting I want to examine how widespread such enhancement needs to be in order to be effective and how such enhancement might be implemented.

Some might argue that if we change our society by becoming more tolerant then we will naturally morally enhance the members of society. If someone lives in a brutal society then she is more likely to act in a brutal manner, whilst if she lives in a tolerant society her toleration is likely to increase. Steven Pinker argues that this is already happening (2). I believe society can change people, even enhance people, but this change is extremely slow. The existential dangers we face are pressing, and moral enhancement achieved by creating a more tolerant society might well be too slow to combat them.

Persson and Savulescu favour moral bio-enhancement. According to them provided such enhancement is proven to be safe then,

“some children should be subjected to moral bio-enhancement, just as they are now subjected to traditional moral education.” (3)

What exactly do Persson and Savulescu mean by moral bio-enhancement? They argue that moral bio-enhancement should seek to increase our dispositions for altruism and justice. It should do so by making,

 “men in general more moral by bio-medical methods through making them more like the men who are more like women in respect of sympathy and aggression, but without the tendency to social forms of aggression.” (4)

Such bio-enhancement is aimed at changing our dispositions in respect of empathy or sympathy but does not seek to change our cognitive abilities. Let us accept that such enhancement is safe. I now want to examine two questions regarding this form of enhancement. First, is it likely to be effective? Secondly, should such enhancement be mandatory or voluntary?

If we simply enhance our disposition for empathy, is such an enhancement likely to combat the dangers facing us? Some have argued that enhancing someone’s empathy simply increases the degree of empathy she feels but doesn’t expand the domain of her empathy. Paul Bloom questions the benefits of empathy by suggesting that increasing people’s empathy is more likely to increase tension between different groups than diminish it (5). If we accept that Bloom is correct, then we have reason to believe moral bio-enhancement based solely on enhancing our capacity for empathy would not be very effective. However, I believe there are reasons why dual enhancement, involving both our capacity for empathy and our cognitive abilities, might be more effective, see moral character enhancement. It seems possible that if we enhance our cognitive abilities whilst at the same time enhancing our capacity for empathy, such dual enhancement might lead to a broadening of the domain of our moral concern. Bloom holds that it is useful to compare empathy with anger.

“Both are universal responses that emerge in childhood. Both are social, mainly geared toward other people, distinguishing them from emotions such as fear and disgust, which are often elicited by inanimate beings and experiences. Most of all, they are both moral, in that they connect to judgments of right and wrong.” (6)

Judgments are based on the way we view some situation, and the way we view some situation depends to some degree on our cognitive abilities. It follows that if judgments are similar in some way to empathy, then empathy might also depend to some degree on our cognitive abilities. In the light of the above I would suggest that, provided it can be shown that cognitive enhancement enlarges the domain of our empathy, any moral bio-enhancement should be dual enhancement.

Let us accept that dual moral bio-enhancement is desirable and that the means of such enhancement are safe. In these circumstances should such enhancement be mandatory or voluntary? In my previous posting I argued that any moral bio-enhancement of psychopaths should be voluntary in order to respect their autonomy. I will now argue the same is true of more widespread moral bio-enhancement. It might be objected that the need to counter the threats posed by climate change and nuclear Armageddon should trump respecting autonomy. Indeed, my objector might point out that if we don’t deal with these existential threats there will be few people left whose autonomy we can respect. In response I would suggest that there is no need to make moral bio-enhancement mandatory in order to counter these threats. It has been assumed that such enhancement has been thoroughly tested and proved to be both safe and effective. In these circumstances it might appear that any decision about becoming morally bio-enhanced is simply a no-brainer. Surely we all want to be good people? In response my objector might point out that vaccines have been thoroughly tested and proved to be both safe and effective, and in spite of this some people refuse to have their children vaccinated even though they desire that their children enjoy good health. She might then argue by analogy that much the same would apply to any moral bio-enhancement. I am prepared to accept that my objector is correct in her assessment that some people would not voluntarily morally bio-enhance themselves. However, I will now argue that her analogy is unsound. For any vaccination programme to be effective a high percentage of the population needs to be vaccinated. For moral bio-enhancement to be effective in countering existential threats, I would suggest that in a democracy only a majority of people need take such enhancement. A majority is all that is needed to enact legislation to counter these threats. I would further suggest that, provided moral bio-enhancement is proven to be safe and effective, a majority of people would take it. It follows that even if a substantial minority refuse to take such enhancement there is no need for it to be mandatory.

My objector might now raise a further objection. She might argue that the cost of such enhancement might deter a majority of people from taking it. If the costs of any bio-enhancement are high, then I am prepared to accept my objector’s objection, but I am doubtful whether in practice such costs would be high. If the majority of the population take such enhancement, then these large numbers should lower the costs. However, let us assume I am wrong and that the costs would be high. Let us accept that a civilised society has a duty to protect both itself and its citizens from anarchy and possible destruction. It follows that if society faces anarchy and destruction due to existential threats which could be averted by moral bio-enhancement, were the costs of such enhancement lower, then society should subsidise or freely provide moral bio-enhancement. In addition, such enhancement would carry further benefits for society. If someone is morally bio-enhanced, then it seems probable that she will be less likely to commit crime. More fancifully, moral bio-enhancement might reduce the threat of terrorism. Reduced crime would be a saving for society. It follows that society has financial incentives to encourage moral bio-enhancement. In the light of the above it seems improbable that cost is going to prevent the majority of people taking moral bio-enhancement provided it is safe.

In the above it has been assumed that moral bio-enhancement is safe. This assumption may be false because all drugs have some side effects. In these circumstances we would still be faced with existential threats and a morality which seems incapable of addressing them. There is, however, a further alternative we might consider. Perhaps we might use algorithms to guide our decision making in response to these threats. It might be objected that the use of algorithms threatens our autonomy. In response I would argue that whether this threat is meaningful depends on how we use any such algorithms. I am not suggesting we simply use algorithms to make these difficult decisions for us, but rather that they might assist us in making moral decisions. Such assistance should be interactive, and the algorithms in question might evolve in response to our interactions. I have dealt with algorithm-assisted moral decision making at greater length in a previous posting. Perhaps using algorithms in such a way does not threaten our autonomy.

  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.
  2. Steven Pinker, 2011, The Better Angels of Our Nature, Viking.
  3. Persson & Savulescu, page 113.
  4. Persson & Savulescu, page 112.
  5. Paul Bloom, 2016, Against Empathy: The Case for Rational Compassion, Random House, pages 207-208.
  6. Bloom, page 207.



Monday, 10 April 2017

Psychopaths and Moral Enhancement

 

Michael Cook questions whether psychopaths should be morally bio-enhanced. This posting will examine his question, and in attempting to answer it I will address several related questions. A psychopath might be roughly defined as someone who lacks feelings for others and has no remorse about any of his actions, past or present. A psychopath is someone who, even if he understands moral requirements, does not accept these requirements. In this posting it will be assumed that moral bio-enhancement should be focussed on this non-acceptance. The first related question I want to address is whether a psychopath’s non-acceptance of moral norms is a form of disability. Secondly I will consider whether any moral bio-enhancement of psychopaths should be mandatory; I will argue it shouldn’t. Thirdly I will consider whether we have a moral duty to offer moral bio-enhancement to someone convicted of some crime due to his non-acceptance of moral norms; I will argue we do. Lastly I will suggest that if it is permissible to offer moral bio-enhancement to psychopaths then there is no reason not to permit moral bio-enhancement more generally.

Let us accept that if someone suffers from a disability and we can mitigate the effects of his disability then we have a prima facie duty to do so, provided the costs of so doing are not too onerous. Let us also accept that some form of safe moral bio-enhancement becomes possible; such safe enhancement is unavailable at the present time. It appears to follow in such circumstances that, provided a psychopath’s failure to accept moral norms is a form of disability, we have a prima facie duty to mitigate the effects of this disability. It further appears to follow that if we can only mitigate this disability by bio-enhancement then we have a duty to do so, provided such enhancement is safe. Is a psychopath’s non-acceptance of moral norms a disability? Most psychopaths are able to understand moral requirements, and so their failure to act in accordance with these requirements is not caused by an inability to understand moral norms. It appears to follow that a psychopath’s non-acceptance of moral norms is not a disability. This appearance is too simplistic. Let us accept that most psychopaths can understand moral norms even if they don’t accept these norms. Perhaps this lack of acceptance might be due to an inability to feel the force of moral norms, and this inability to feel should be classed as a disability. It follows that a psychopath’s failure to accept moral norms might be regarded as a disability.

Does this moral disability matter? I will now argue that whether it matters depends on the context. It has been suggested that some CEOs of large companies have psychopathic tendencies. Having psychopathic tendencies might be seen as enhancing by a CEO, whilst the same tendencies might be seen as a disability by someone if they lead to his being imprisoned for some crime. I argued above that if someone suffers from a disability and we can mitigate the effects of his disability then we have a moral duty to do so, provided the costs of doing so are not too onerous. It follows that if a psychopath lives in circumstances in which his condition might be classed as a disability he should be morally bio-enhanced. This enhancement should only take place provided the means used are safe and the costs involved aren’t too onerous.

The above conclusion needs some clarification. A psychopath who is the CEO of a large company might not want to be morally enhanced even if his condition disables him in some social contexts. I would suggest that we only have a duty to offer moral enhancement to psychopaths. It might be objected that my suggestion is too weak. My objector might point out that some psychopaths damage society and other people. He might proceed to argue that for such people moral enhancement should be mandatory rather than voluntary, due to the need to protect society. I accept that we need to protect people and society from psychopaths, but I do not accept that we must do so by means of mandatory moral bio-enhancement. We can protect society from those psychopaths who harm it by restricting their freedom. Let us assume there is a safe bio-medical form of enhancement which prevents psychopaths from committing crimes due to their condition. My objector might now argue that mandatory moral bio-enhancement is both a cheaper and a more humane way of treating psychopaths who have committed crimes than detention. Mandatory moral bio-enhancement would be better for both psychopaths and society.

I would reject such an argument, which could easily be extended to include paedophiles. Let us accept that most psychopaths retain their autonomy. Unfortunately, whilst exercising their autonomy some psychopaths damage society. My objector wants to limit the damage done to society by removing some of a psychopath’s capacity for autonomy. Is it possible to remove some of someone’s capacity for autonomy? We can of course restrict the exercise of someone’s autonomy, but this is not the same as removing some of his capacity for autonomous action. I would suggest that we should limit the damage psychopaths do to society by limiting their ability to exercise their autonomy rather than by modifying their capacity for autonomous action. Some might question whether there is a meaningful difference between these two approaches. I now want to argue there is. If someone’s ability to make autonomous decisions is modified, then he is changed as a person. If someone’s ability to exercise his autonomy is restricted, then he is not changed as a person even though the exercise of his will is frustrated. Does the difference between changing someone as a person and frustrating his will matter? If we change someone as a person, we are treating him simply as a thing. We are treating him in much the same way as something we own and can do with as we please. Psychopaths may differ from most of us, but they are still human beings and should be treated as such; they should not be treated in the same way as something we own or as an animal. If we frustrate a psychopath’s will by detaining him, we are not treating him as something we own but merely protecting ourselves. We are still accepting him as a person, albeit a damaged person. In the light of the above I would suggest that the mandatory moral bio-enhancement of psychopaths would be wrong, and that psychopaths should instead be offered voluntary moral bio-enhancement. It seems probable that most psychopaths would accept such enhancement on a voluntary basis if the alternative might be compulsory detention. Accepting the above would mean that we are still respecting the autonomy of those psychopaths who need to be detained.

I have argued that we should offer voluntary moral bio-enhancement to psychopaths, but it is feasible that exactly the same form of enhancement might be offered to people in general. Prima facie such an enhancement would not be regarded as correcting some disability. It might then be argued that because such enhancement is not correcting any disability it cannot be argued by analogy that a more general moral bio-enhancement is desirable. I would reject such an argument because I don’t believe the prima facie assumption stands up to close examination. Ingmar Persson and Julian Savulescu suggest we are unfit for the future because our morality has not developed enough to permit us to cope with technological progress (1). What exactly does unfit mean? I would suggest that being unfit means we are unable to counter some of the dangers created by our technology. If we are unable to do something in some circumstances, then we have an inability, and in these circumstances we have a disability. It is conceivable that prior to our most recent technological advances our morality was fit for purpose. It might be argued that our morality remains fit for purpose but that these advances have made it difficult for us to accept the full implications of our moral norms, disabling us in much the same way psychopaths are disabled. It follows that the prima facie assumption, that a more general moral enhancement by bio-medical means should not be regarded as correcting some disability, is unsound. It might be concluded that if technological changes make our morality unfit for our purposes by morally disabling people, then it can be argued by analogy that a more general moral enhancement by bio-medical means is desirable. It might be objected that this conclusion is not the only option available in these circumstances; we might try to change our current circumstances. My objector might suggest that instead of a more general moral enhancement we should reject our most recent technological advances and seek to return to circumstances in which we accept the norms of our evolved morality. Such a suggestion seems impractical for two reasons. First, once the genie is out of the bottle it is hard to put it back in. Secondly, I am doubtful whether our morality was ever fit for purpose once we ceased being hunter-gatherers.

We live in a dangerous world; provided safe moral bio-enhancement becomes available, should such enhancement be mandatory? In the light of the dangers we face such an option seems attractive, but I would somewhat reluctantly reject it. Mandatory moral bio-enhancement would damage our autonomy. Our autonomy forms the basis of our being moral agents, and damaging our agency would also damage our moral systems. If safe moral bio-enhancement becomes available, then it should be encouraged, perhaps subsidised, but it should remain voluntary.


  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.



Tuesday, 14 March 2017

Automation, Work and Education


Our world is becoming increasingly automated and this increase appears to be having an effect on the number of jobs available. It is possible that in the future automation might not only lead to a decrease in the number of existing jobs but also create an increasing number of different jobs. A second possibility is that automation will mostly lead to a decrease in the number of jobs. In this posting I want to examine some of the consequences this second possibility has for work and education.

Pessimists might argue that a widespread loss of jobs will lead to widespread hardship and poverty. I believe such a pessimistic outcome is unlikely, because it would threaten the survival of both the state and the market economy. In this situation both the state and the markets would have reasons to introduce some form of universal basic income, UBI. According to Tim Dunlop,

“A basic income, on the other hand, is the idea that everyone should be paid a minimum monthly income that allows them to meet their basic economic needs.” (1)

It is important to note that UBI in response to increasing unemployment caused by automation is not some attempt to reform the benefits system but rather an attempt to counter an existential threat which this unemployment might pose to the state. It might be speculated that UBI might not just be useful in combating the effects of unemployment but might also be necessary for the continuation of capitalism. In an age of large-scale automation, capitalism might survive without workers, but it seems doubtful it could survive without consumers. In the rest of this posting I am going to assume that if automation causes widespread job losses in any state then that state will introduce some form of UBI in order to counter this existential threat. I will further assume that UBI will be large enough to permit people to live in moderate comfort.

Some might think that automation and UBI will lead to some golden age. In the ancient world the upper classes in Greek and Roman society led a life of leisure in which most of the work was done by slaves. It might be argued by analogy that automation might introduce a golden age in which we live a life of leisure, with most work either becoming automated or done by robots. I believe such a golden age is an illusion for two reasons. First, upper-class Greeks and Romans may have led happier lives than their slaves, but there is no evidence that they led happier lives than people living now. The ancient golden age, at least for some, appears to be an illusion, and so any argument by analogy fails. Secondly, if we live in a world in which all the work is automated or done by robots we might suffer from the unbearable lightness of simply being. We might feel our world has lost all purpose and that we simply exist. We might become bored. Limited boredom might encourage us to take steps to alleviate it, but prolonged boredom is harmful. According to Harry Frankfurt boredom is not some innocuous discomfort but something that threatens our psychic survival (2). I have previously argued that a world whose inhabitants are bored and feel they are simply existing is a dangerous world, see riots and the unbearable lightness of simply being. It is possible that even if automation frees people from work without the resultant job losses leading to widespread hardship and poverty, it might degrade people’s lives rather than usher in a golden age.

The above pessimistic scenario seems to be a realistic possibility, and I now want to examine what might be done to counter its negative effects. Prior to my examination I want to consider what we mean by work. Work might be roughly defined as making an effort for some economic reward or the hope of such a reward. However, such a definition is at best incomplete. I have suggested previously that someone might work in her garden purely for the pleasure it brings her, without any thought of economic reward. Hannah Arendt suggested there is a difference between work and labour. According to Arendt, labour is what we do in the normal process of living in order to survive. For Arendt, work might be simply defined as any human activity which is not driven by our need to survive. Arendt’s definitions are interesting but also seem incomplete to me; dancing is not working. Intuitively work requires some effort. Work might now be defined as any human activity requiring effort which is not driven by our need to survive. Such a refined definition also seems incomplete. If I am running away from a bull I might make a great effort, but I’m not working. Work might now be defined as any human activity which matters to us, requires effort and is not driven by our need to survive. I believe Arendt’s insight is important and I will use it to define two different ways of working. It might be better to label labouring as ‘working for’ something we need to survive. ‘Working for’ something has mostly instrumental value. Work defined as a human activity which matters to us, requires effort and is not driven by our need to survive might be labelled ‘working at’. ‘Working at’ has mostly intrinsic value.

Let us now examine the possible effects of increasing automation bearing in mind these two definitions of work. Let us assume that automation might decrease or even eliminate our need to ‘work for’ things, to work instrumentally. Does this decrease matter? I would suggest it does matter to someone if she doesn’t ‘work at’ something. In such a situation it seems highly probable that such a person might suffer from the unbearable sense of simply being. She might feel her world has lost all purpose and that she’s simply existing. It follows we have some reason to fear the effects of increasing automation.

Assuming we aren’t Luddites and don’t want to, or can’t, stop the progress of automation, what steps should we take to mitigate some of the worst effects of not ‘working for’ anything? First, if automation greatly decreases our need to ‘work for’, we would need to refocus our education system. At the present time a lot of education focusses on equipping people for jobs, to ‘work for’. Let us assume people no longer need to ‘work for’ and that a purely hedonistic lifestyle also leads to a lightness of simply being. In such a situation ‘working at’ something might help counter someone’s sense of simply existing due to her ceasing to ‘work for’ something. In this situation education should focus on enabling people to ‘work at’. In order to do so, science education remains important because we need to understand how the world we live in works. But we also need to understand how to live in such a world, and to enable us to do so education should place greater emphasis on the humanities.

I have argued that in a highly automated age people need to become better at ‘working at’ something. All work can be good or bad, and this includes ‘working at’. Someone might ‘work at’ doing crosswords all day; I would suggest this is not good work. If ‘working at’ is to replace ‘working for’ it must be good work. Samuel Clark suggests one element of good work is that it requires some skill. According to Clark,

“the development of a skill requires: (1) a complex object and (2) a self-directed and sometimes self-conscious relation to that object.” (3)

I now want to consider each of these requirements. According to Clark, good work involves working at something which must have some complexity; the something we work at must have a complex internal landscape of depth and obstacles (4). He gives as examples of skilled activities music, mathematics, carpentry, philosophy and medicine. Doing crosswords might be a difficult task but it lacks complexity. Clark also argues good work must be self-directed. Let us assume someone is self-directed to work at some complex task purely to mitigate her sense of simply being. I would suggest that such self-direction fails. Why does it fail? It fails because in order to prevent this sense of simply being someone must work at something that satisfies her, and for an activity to satisfy someone she must care about that activity. Let us accept that Frankfurt is correct when he argues ‘caring about’ is a kind of love, because the carer must identify with what she cares about. It might be concluded that good work is doing something complex which the doer ‘cares about’ or loves. It might then be suggested that, provided people can ‘work at’ something and this is good work, such ‘working at’ might mitigate some of the effects of job losses due to automation.

However, even if we accept the above, difficulties remain. Let us assume any good work, whether ‘working for’ or ‘working at’, requires some skilful action. Let us further assume a skilful action requires that the doer must identify with her actions by ‘caring about’ or loving them. Unfortunately, ‘caring about’ or loving is not a matter of choice.

“In this respect, he is not free. On the contrary, he is in the very nature of the case captivated by his beloved and his love. The will of the lover is rigorously constrained. Love is not a matter of choice.” (5)

It follows that if someone simply chooses to ‘work at’ something in order to compensate for her loss of ‘working for’, this ‘working at’ need not be good work and as a result won’t mitigate her sense of boredom. Someone cannot simply choose to do just anything to alleviate her boredom. If she simply chooses, it seems probable her choice will bore her. She must ‘care about’ what she chooses. If society is to help mitigate the effects of job losses due to automation, then it must create the conditions in which people can come to care about doing complex things. I have suggested above that education might help in this task. W B Yeats said ‘education is not the filling of a pail, but rather the lighting of a fire’; perhaps education must fire people’s enthusiasms every bit as much as enable their abilities. Perhaps also we should see learning as a lifelong process. Broadly based lifelong education which fires people’s enthusiasms might help create the conditions in which people can ‘work at’ things, hence mitigating some of the harmful effects of job loss due to automation.


Lastly there are activities which might mitigate some of the harmful effects of job loss which have little to do with work. Music and sport would be examples of such things. Of course it is possible to ‘work at’ music and sport, we have professional sportspersons and musicians, but most people just play at such activities. Play is a light-hearted pleasant activity done for its own sake. Play is important, especially for children. It might be suggested that some forms of play are a form of good ‘working at’. All work is goal directed and so is some play; perhaps there is a continuum between work and play, with the importance of the goal varying. Perhaps in an automated age play should become more important to older people also. Activities such as playing sport or music require some infrastructure, and perhaps in an automated age it is even more important that society helps build this infrastructure. At the present time governments foster elite sport. Perhaps this fostering should change direction, fostering participation rather than funding elite athletes.

  1. Tim Dunlop, Why the Future Is Workless, New South, Kindle Locations 1748-1749.
  2. Harry Frankfurt, 2006, The Reasons of Love, Princeton University Press, page 54.
  3. Samuel Clark, 2017, Good Work, Journal of Applied Philosophy 34(1), page 66.
  4. Clark, page 66.
  5. Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 135.


Wednesday, 22 February 2017

Sex with Robots


In the near future it seems probable that some people will have sex with robots, see the rise of the love droids. In this posting I will discuss some of the problems this possibility raises. I will divide my discussion into two parts. For the most part I will consider sex with robots which are simply machines, before moving on, much more fancifully, to discussing sex with robots which might be considered persons.

Let us consider someone having sex with a robot which isn’t a person but simply a machine. Human beings have created objects to be used for sexual purposes, such as vibrators and other sex toys. If a robot isn’t a person, then it might appear that someone having sex with a robot is unproblematic in much the same way as the use of these artefacts. I now want to argue that this appearance is false. But before making my argument I want to consider the nature of sex. Sex among humans isn’t simply a matter of reproduction; human beings enjoy sex. Neither is this enjoyment a purely mechanical thing. According to Robert Nozick,

“Sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other. Even in masturbatory fantasy, people dwell upon their actions with others; they do not get excited by thinking of themselves whilst masturbating.” (1)

If we accept Nozick’s view, what does having sex with a robot really mean to the person having sex? Provided a robot has been supplied with the appropriate genitalia, would someone want to have sex with it? I would suggest that in many cases he would not. Let us assume that a robot has the appropriate genitalia, four legs, one arm and several detachable eyes. I would suggest very few people would want to have sex with such a machine. Nozick argues that even when masturbating someone is imagining having sex with another person, and I would suggest much the same applies to having sex with a robot. If someone has sex with a robot, he would want it to look like a beautiful person because he is imagining having sex with such a person.

What are the implications of accepting the importance of such imagining? First, I would suggest having sex with a robot is just an enhanced form of masturbation. Masturbation isn’t wrong, since it doesn’t harm others. Having sex with a robot which is purely a machine doesn’t harm others and so by analogy also isn’t wrong. Indeed, in some circumstances masturbation might be an acceptable choice for those who are physically or emotionally incapacitated, and perhaps also for those who are incarcerated. However, even if we accept the above, masturbation isn’t ideal and neither would be sex with a robot. Someone having imaginary sex with a person is having inferior sex, because what he really desires is real sex.

I have argued that the first reason why someone might want to have sex with a robot is that he cannot have sex with another person, and that there is nothing wrong with his actions. Anyone having sex with a robot knows he cannot harm the robot. This gives rise to a second reason why someone might want to have sex with a robot. Someone might know that the type of sexual activity he wants to indulge in might be harmful to another human being, and because he knows he cannot harm a robot he prefers to indulge in this activity with a robot. Does acting on such a preference matter, for after all he isn’t harming anyone else? Kant argued we shouldn’t be cruel to animals as this might make us cruel to human beings. Might it be then that if someone engages in such sexual activity with a robot, this activity makes him more likely to engage in harmful sexual acts with other human beings? At present there is no conclusive evidence to support Kant’s argument that if someone is cruel to animals this cruelty makes him more likely to be cruel to other people. If this is so, it seems doubtful that someone engaging in such sexual activity with a robot would become more likely to do so with another human being. This is an empirical question and cannot be settled by philosophical analysis. However, someone engaging in sex with a robot in a way which would be harmful to a human being might harm himself. I have previously argued that for the users of pornography there is a split between fantasy and reality, see wooler.scottus. I further argued that in the case of sexual practices which might harm others the maintenance of the split between fantasy and reality is absolutely essential. I have argued above that someone having sex with a robot imagines he is having sex with a person. It follows that for someone engaging in sex with a robot, in a way which might harm another human being, the maintenance of the split between fantasy and reality is also essential. I further argued that if someone uses pornography this split threatens the unity of his will, which is damaging to his identity. It follows that someone engaging in sex with a robot, in a way which would be harmful to a human being, might harm himself by damaging his identity.

Some people assume that at some time in the future some robots might become persons. I am extremely sceptical about this possibility, but nonetheless I will now consider some of the problems of someone having sex with such a robot. However, before I do so I will question whether anyone would want sex with such a robot. Let us accept Nozick is correct in his assertion that “sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other.” How do we perceive the connection to a robot which is also a person? I suggested above that a robot can take many forms. Would anyone want to have sex with a robot with four legs, one arm, several detachable eyes and the appropriate genitalia, even if it could be considered as a person? Persons are partly defined by the actions they are capable of enacting, and these actions are partly defined by their bodies’ capabilities. Robots can have very different bodies from us. A robot with a different body structure might be capable of very different actions from us; such a robot, even if it is considered as a person, might be a very different sort of person to the sort we are. The same might also be true of a robot with a similar structure which is constructed from different materials. If someone or something is very different to us then the connection between us and that someone or something becomes tenuous. Would someone want to have sex with a robot with which he had only a tenuous connection? I doubt it. Of course someone might want to have sex with such a robot provided it looked like a beautiful human being. But if this is so isn’t he really imagining having sex with a person, and the problems associated with having sex with a robot which is purely a machine once again become relevant.

In conclusion, I have argued that someone would not harm others by having sex with a robot and that his actions would not be morally wrong. However, I argued that whilst it might not be wrong to have sex with a robot which is purely a machine, it might nonetheless be damaging to the user’s identity, in much the same way as pornography, by splitting his character. Lastly, I questioned whether anyone would really want to have sex with a robot which might be considered as a person.

  1. Robert Nozick, 1989, The Examined Life, Touchstone, page 61.

Monday, 23 January 2017

Robots and Persons




In an interesting post John Danaher asks if someone can be friends with a robot, see philosophicaldisquisitions. He argues virtue friendship might be possible with a robot. Virtue friendship involves two entities sharing values and beliefs which benefit them. Let us accept that any entity which is capable of having values and beliefs can be regarded as a person. Perhaps one of the great apes might be regarded as a person, but can the same be said of a robot? Does it make sense to say a robot might have rights or can be regarded as a person? In what follows I will limit my discussion to robots, but my discussion could equally well be applied to some advanced system of AI or algorithms. At present the actions of a robot have some purpose, but this purpose doesn’t have any meaning which is independent of human beings. At present the actions of a robot have no more meaning independent of us than the action of the wind in sculpting a sand dune. In the future it is conceivable that this situation might change, but I am somewhat sceptical and believe at the present time there is no need to worry about granting rights to robots akin to human rights. In this posting I will argue that the nature of belief means worrying about robot personhood is both premature and unnecessary.

How should we regard the actions of a robot if it has no beliefs? What are the differences between the wind sculpting a sand dune and the actions of a robot? One difference is that even if both the wind and a robot don’t have beliefs, a robot’s actions are nonetheless in accordance with someone’s beliefs, those of its designer or programmer. But does this difference matter? A refrigerator is acting in accordance with our belief that it will keep our food cold. If we don’t want to grant personhood to refrigerators, why should we do so for robots? Perhaps then we might implant some beliefs into robots and after some time such robots might acquire their own emergent beliefs. Perhaps such robots should be regarded as persons. Implanting such beliefs will not be easy and may well be impossible. However, I see no reason, even if such implantation is possible, why we should regard such a robot as some sort of person. If a person has some belief, then this belief causes him to behave in certain ways. How do we implant a belief in a robot? We instruct the robot how to behave in certain circumstances. In this situation the robot of course behaves in accordance with the implanted belief, but the primary cause of this behaviour is not the implanted belief but rather a belief of those who carried out the implantation. A robot in this situation cannot be said to be behaving authentically. I can see no reason why we should attribute personhood to a robot which uses implanted beliefs as outlined above.

At this point it might be objected that even if a robot shouldn’t be considered as a person it might still be of moral concern. According to Peter Singer, what matters for something to matter morally is not that it can think but that it can feel. Animals can feel and should be of moral concern. Present-day robots can’t and shouldn’t. Present-day robots are made of inorganic materials such as steel and silicon. However, it might be possible to construct a robot partly from biological material, see Mail Online. If such a robot could feel then it should be of moral concern, but this doesn’t mean we should consider it as a person; frogs can feel and should be of moral concern but they aren't persons. Nonetheless I would suggest that the ability to feel is a necessary precursor for believing, which is in turn a precursor for personhood.

For the sake of argument let us assume that it is possible to create a robot in which the primary cause of its behaviour is its implanted or emergent beliefs. What can be said about this robot’s beliefs? When such a robot decides to act, the primary cause of the action is its internal beliefs; it is acting in a manner which might be regarded as authentic. How might such a robot’s beliefs and actions be connected? Perhaps they are linked by Kant’s hypothetical imperative. The hypothetical imperative states,

“Whoever wills an end also wills (insofar as reason has decisive influence on his actions) the indispensably necessary means to it that are within his power.” (1)

Some might suggest that having a set of beliefs and accepting Kant’s hypothetical imperative are necessary conditions for personhood; some might even regard them as sufficient conditions. They might further suggest that any robot meeting these conditions should be regarded as a candidate for personhood. Of course it might be possible to design a robot which conforms to the hypothetical imperative, but conforming is not the same as accepting. Let us accept that anyone or anything that can be regarded as a person must have some beliefs and must accept rather than merely conform to the hypothetical imperative.

What does it mean for someone to accept the hypothetical imperative? Firstly, he must believe it is true; the hypothetical imperative is one of his beliefs. Someone might believe that he is made up of atoms, but this belief doesn’t require any action when action is possible. The hypothetical imperative is different because it connects willed ends with action. Can the hypothetical imperative be used to explain why a robot should act on its beliefs, be they implanted by others or emergent? Kant seems to believe that the hypothetical imperative can be based on reason. I will argue that reason can only give us reason to act in conjunction with our caring about something, and that the hypothetical imperative only makes sense if an agent views beliefs in a particular way. What does it mean to will an end? I would suggest that if someone wills an end he must care about that end. If someone doesn’t care about or value some end, then he has no reason to pursue that end. What then does it mean to care about something? According to Frankfurt, if someone cares about something he becomes vulnerable when that thing is diminished and is benefited when it is enhanced. (2) People by nature can suffer and feel joy; robots can’t. It is worth noting that animals can also suffer and feel joy, making them like people with rights rather than like robots. The above raises an interesting question: must any entity which is capable of being conscious, robot or animal, be able to suffer and feel joy? If we accept the above, then the ends we will must be things we care about. Moreover, if we care about ends then we must value them. It follows that if the hypothetical imperative is to give us cause to act on any belief, that belief must be of value to us. It follows that the hypothetical imperative can only be used to explain why a robot should act on its beliefs provided such a robot values those beliefs, which requires it to become vulnerable. A right is of no use to any entity for which the implementation of the right doesn't matter, an entity which isn't vulnerable to the right not being implemented.

I have argued that any belief which causes us to act must be of value to us, and that if we find something valuable we are vulnerable to the fate of the thing we find valuable. What then does it mean to be vulnerable? To be vulnerable to something means that we can be harmed. Usually we are vulnerable to those things we care about in a psychological sense. Frankfurt appears to believe that we don’t of necessity become vulnerable to the diminishment of the things we value by suffering negative affect. He might argue we can become dissatisfied and seek to alleviate our dissatisfaction without suffering any negative affect. I am reluctant to accept that becoming vulnerable can be satisfactorily explained by becoming dissatisfied without any negative affect. It seems to me being dissatisfied must involve some desire to change things, and that this desire must involve some negative affect. I would argue that being vulnerable to those things we value involves psychological harm and that this harm must involve negative affect.


Let us accept that in order to be a person at all someone or something must accept and act on the hypothetical imperative. Let us also accept that the hypothetical imperative only gives someone or something reason to act on some belief provided that someone or something values that belief. Let us still further accept that to value something someone or something must care about what they value, and that caring about of necessity must include some affect. People feel affect and so are candidates for personhood. It is hard to see how silicon-based machines or algorithms can feel any affect, positive or negative. It follows that it is hard to see why silicon-based machines or algorithms should be considered as candidates for personhood. It appears the nature of belief means any worries concerning robot personhood, when the robot’s intelligence is silicon-based, are unnecessary. Returning to my starting point, it would appear that it is acceptable for young children to have imaginary friends but that it is delusional for adults to believe they have robotic friends. However, I will end on a note of caution. We don’t fully understand consciousness, so we don’t fully understand what sort of entity is capable of holding beliefs and values. It follows we cannot categorically rule out a silicon machine becoming conscious. Perhaps also it might become possible to build some machine not entirely based on silicon which does become conscious.

  1. Immanuel Kant, 1785, Groundwork of the Metaphysics of Morals.
  2. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.




Monday, 21 November 2016

Cryonic Preservation and Physician Assisted Suicide


Recently a terminally ill teenager won the right to have her body preserved by cryonics in the hope that she might live again at some future date. Such a hope comforted her. The case was really a case of whether she had the right to determine how her body was disposed of when she died, see bbc news. Cryonic preservation is not a form of treatment at the present time; it is simply a form of bodily disposal tinged with hope and as a result does not presently appear to pose any major ethical problems. However, let us imagine a scenario in the future in which cures become available for some diseases which are currently terminal and those preserved by cryonics can be brought back to life. This scenario raises ethical concerns and in this posting I want to examine these concerns.

At the present time cryonic preservation might be defined as the preservation by freezing of a dead body in the hope that a cure for the disease that body died from becomes available in the future and that the body can then be resuscitated and cured. The case of the teenager above was an example of this type of cryonic preservation. However, an alternative form of cryonic preservation seems possible. Let us consider someone suffering from a terminal illness who finds his present life not worth living. He might consider physician assisted suicide (PAS); such an option is available in several countries and some states in the USA. On reflection he might think a better option might be open to him. He wants his body frozen whilst he is still alive and preserved in the hope that a cure for the disease he suffers from becomes available in the future and that he may then be resuscitated and cured. For the rest of this post cryonic preservation will be referred to as CP. These alternative types of CP will be defined as follows.
  • Type 1 CP will be defined as the preservation by freezing of a dead body in the hope that a cure for the disease that body died from becomes available in the future and that the body may then be resuscitated and cured.
  • Type 2 CP will be defined as the preservation by freezing of someone’s body whilst he is alive in the hope that a cure for the disease he suffers from becomes available in the future and that he may then be resuscitated and cured.

Type 1 CP is extremely fanciful because not only do cures need to be found and bodies unfrozen but dead bodies also need to be brought back to life. I will deal with type 2 CP first because there is a more realistic opportunity of it being realised. There seems to be some possibility that in the future it might become possible to freeze and preserve someone’s living body and unfreeze it after a substantial period of time, permitting him to resume his life. Such a scenario remains fanciful but is by no means impossible. Let us assume studies have frozen and stored large living animals and that after a substantial period of time these have been thawed, permitting them to resume their lives. I am assuming here that studies on rats or mice might not be applicable to humans. Let us assume someone is aware of this fact and learns he is starting to suffer from Alzheimer’s disease. I have previously argued that it would not be irrational or wrong for such a person to commit suicide if he so desired, and moreover that it would not be wrong to help him do so, see alzheimers and suicide. In this scenario it might be argued by analogy that it would be neither irrational nor wrong for such a person to choose type 2 CP. Indeed, it might be argued, as I have suggested above, that it is a more rational option than suicide. I now want to examine the ethical concerns raised by type 2 CP.

If we preserve someone using type 2 CP are we doing something which is wrong? To answer this question, we must first ask what we are doing if we preserve someone using type 2 CP. The company providing the service is simply storing a living body and this seems to raise no ethical concerns. But what are the doctors who prepare the body for storage doing? It is uncertain whether a cure might be found for Alzheimer’s. In this scenario how we describe these doctors’ actions in preparing him for preservation raises different ethical concerns. Are they killing him now, preserving him to die in the future or helping him commit suicide? The first possibility is clearly wrong. Is the second possibility also wrong? Delaying death is an accepted part of medical practice. But would it be right to delay death if there is no conscious life between the start of the delaying process and death? Intuitively, though such an action harms no one and might not be wrong, it seems pointless. If we accept physician assisted suicide, then we must accept the third option isn’t wrong. In the case of the teenager what was being decided was how a body might be disposed of. Let us now consider a variation of this case. Let us assume that a relatively young person, who is competent to give informed consent and is suffering from terminal cancer, wishes to have his body preserved by type 2 CP. He wants his body preserved whilst he is still alive. Let us also assume that using type 2 CP it is possible to preserve a body for a hundred years and then resuscitate it. It seems possible that a cure, or at least an ability to manage cancer, might well come about in the next hundred years. In this case if his doctors prepare his body for type 2 CP, what are they doing? In this scenario it seems wrong to say they are killing him, delaying his death or helping him commit suicide.
If the techniques involved in type 2 CP are proved to be safe and a cure for cancer is a genuine possibility, then I would suggest his doctors are treating his disease; they are treating him as a patient and there are no ethical objections to doctors treating patients. It might be objected that doctors only treat patients so they can recover from their illnesses. In response I would point out that doctors treat patients when they manage cancers they can’t cure by providing palliative care. In the light of the above it might be concluded that it would not be wrong for doctors to prepare a relatively young competent person for type 2 CP provided he or those close to him could pay for the service. This conclusion is subject to two conditions. First, there must be a reasonable prospect that the condition he suffers from will become curable or manageable in the future; secondly, it must have proved possible to freeze and store large living animals and, after a substantial period of time, to unfreeze them permitting them to resume their lives.

Let us accept the above conclusion that it is not wrong to provide type 2 CP to relatively young competent patients when there is a realistic possibility that their illness can be cured in the future. But would it be wrong to provide such treatment to older or even incompetent patients? I will deal with someone who is incompetent first. If someone is incompetent to give consent to treatment then a surrogate decision maker, such as the courts or his parents, acts on what is in his best interests. Intuitively it might be objected that deciding to use type 2 CP is not deciding about treatment. However, I have argued that for a relatively young competent person type 2 CP can be seen as a form of treatment. Moreover, I would suggest treatment doesn’t become non-treatment simply because someone is unable to give competent consent. Accepting the above raises many practical problems. Should type 2 CP be carried out if someone who is incompetent resists such treatment? I would suggest it should not, but will not offer any arguments here to support my suggestion. Should type 2 CP be carried out if someone is incompetent but prepared to accept such treatment? I would suggest it should. If we believe it shouldn’t, then mustn’t we also believe the lives of the incompetent have less value than those of the competent, whilst at the same time remembering that young children are incompetent? Moreover, if we accept the above aren’t we encouraging eugenics by the back door? It might be concluded that it would not be wrong to provide type 2 CP to relatively young incompetent patients, provided they are prepared to accept treatment and those close to them are prepared to pay for the service. This conclusion is subject to the same conditions required for relatively young competent patients outlined above.

Is it wrong to offer type 2 CP to older persons? It seems to me that in a world of infinite resources it would not be. Resources in this scenario are not a problem, and it would appear that if someone believes it would be wrong to offer type 2 CP to older persons then it should be up to him to justify his belief. It can again be concluded that it would not be wrong to offer type 2 CP to older persons, subject to the same conditions outlined in the other two cases.

I now want to consider a different question. If type 2 CP could be regarded as treatment, would we have a duty to provide this treatment? This question is at the moment completely hypothetical. However, if studies froze and stored large living animals and then after a substantial period of time thawed them, permitting them to resume their lives, then this question would cease to be a hypothetical one. Indeed, if there was also the possibility of several new cures for previously incurable diseases, an answer to this question becomes important. Usually whether someone should be offered treatment depends on the quality-adjusted life years, QALYs, expected from the treatment in question. It might be concluded that it would not be wrong to offer type 2 CP to older persons when the number of expected QALYs is similar to the expected QALYs offered by other accepted treatments, subject to two provisions. First, the number of expected QALYs should not include the years spent in a frozen state. Secondly, it is possible that the freezing process might reduce the number of QALYs and this should be taken into account.

I have argued that it would not be wrong to provide type 2 CP to people who can finance this service themselves. I have also argued that it is possible that in the future type 2 CP might be regarded as treatment. It seems that the same arguments I used regarding the permissibility of type 2 CP can be applied to type 1 CP. However, whilst in some circumstances type 2 CP might be seen as a form of treatment, it is difficult to see how type 1 CP might be regarded as a form of treatment. Philosophy should not only be concerned with what should be permitted but also with what helps people flourish; philosophy should make recommendations about how to live. Let us assume one or both types of CP prove to be effective. Should we recommend that someone facing terminal or life changing illness try CP? Several reasons might be advanced as to why we should not. First, a long suspension might mean they awake to an alien world, making it hard for them to cope. Secondly, a long suspension might mean they awaken to find their friends, spouse and even children have died. Whether someone would want to undergo CP would depend not only on their imagined future but also on their current circumstances. A single lonely person might find CP attractive whilst someone whose life centres on family might not. The young might find CP more attractive than the old because CP offers them the possibility of a longer life extension. Personally, as a relatively old man, I do not find the idea of CP attractive; however, returning to our starting point, if I was fourteen I might well do so.


Tuesday, 8 November 2016

Nussbaum, Transitional Anger and Unconditional Forgiveness




Charles Griswold argues that forgiveness is a kind of transaction and that as a result there are certain conditions attached to the transaction, conditions one must fulfil in order truly to forgive (1). In response it might be pointed out that conditional love is inferior to unconditional love. It might then be argued by analogy that conditional, transactional forgiveness is inferior to unconditional forgiveness. In this posting I will argue that this analogy doesn’t hold and that transactional forgiveness is morally more desirable than unconditional forgiveness because of the message it sends to the offender.

Martha Nussbaum rejects the idea of transactional forgiveness as suggested by Griswold and goes further by arguing that there are also problems with unconditional forgiveness. The problem with all sorts of forgiveness, according to Nussbaum, is that forgiveness is essentially backward looking and attached to ideas of payback. She argues that rather than forgiving we should engage with offenders in a spirit of active love (2). In response to such arguments Griswold suggests that for a victim simply to give unconditional forgiveness means she lacks self-respect and that others will also fail to respect her. Intuitively, if someone who has been wronged holds no resentment whilst the offender exhibits no remorse, or indeed continues offending, then the victim lacks self-respect. Intuitively it also seems morally wrong, not just hard, for someone who has been sexually assaulted to unconditionally forgive her assailant.

In this posting I don’t want to examine a lack of respect. Instead I want to examine two different objections to unconditional forgiveness. First, I will argue that in some circumstances unconditional forgiveness means the victim, far from having too little self-respect, actually has too much and is overly proud. Secondly, I will argue that unconditional forgiveness by the victim harms the offender. Let it be accepted that all forgiveness, whether unconditional or transactional, means letting go of resentment. Intuitively this appears to be true, for it seems impossible to believe a victim truly forgives her transgressor if she still bears resentment towards him. For the sake of argument let us assume Sue has been morally harmed by John and that she has unconditionally forgiven him. Because Sue’s forgiveness is unconditional, it is possible that John might remain quite happy with the fact that he has morally harmed Sue and would be fully prepared to do so again.

Let us examine Sue’s motives in unconditionally forgiving John. According to Nussbaum sometimes,

“the person who purports to forgive unconditionally may assume the moral high ground in a superior and condescending way.” (3)

If we accept Nussbaum’s view, then it is possible that Sue’s underlying motive in unconditionally forgiving John is to feel good in a superior way. Such a motive displays a certain moral arrogance and does not justify unconditional forgiveness. However, let us assume that Sue’s motive is not to feel superior but simply a desire to act in a moral manner.
Let us examine this assumption. I now want to present two arguments why even in this context Sue’s unconditional forgiveness might be flawed. Both arguments will be based on Sue’s focus. Firstly, I will argue that by unconditionally forgiving John to satisfy her desire to act in a moral manner Sue might still be exhibiting an excessive moral pride. Before proceeding I must make it clear I am not attacking limited moral pride; indeed I believe that some limited moral pride is a good thing. How then can Sue exhibit excessive moral pride by unconditionally forgiving John? It seems possible to me that Sue’s motives for forgiving John might have nothing actually to do with John. Let us assume Sue’s unconditional forgiveness is due to her focus on acting morally and isn’t a case of moral grandstanding. Her focus might still be flawed because it is too narrow. If Sue focusses exclusively on her own behaviour, focusses on herself, then she seems to be exhibiting excessive pride. Nussbaum for instance might argue such a limited focus is unhealthy because it contains a narcissistic element. It follows that if an excessive pride underlies Sue’s unconditional forgiveness then her motive for this forgiveness is flawed; indeed it might be argued that by excessively cherishing herself she damages herself. However, it does not automatically follow, just because Sue’s motivation is flawed, that her unconditional forgiveness of John cannot be justified by other reasons.

Let us assume Sue’s motive for her unconditional forgiveness is simply focussed on acting morally and has nothing to do with excessive pride. This brings us to the second of my two arguments. I want to argue that whilst Sue’s simple desire to act morally is admirable, the way she enacts this desire is flawed. I will base my argument once again on Sue’s narrow focus. In order to act in a truly moral way people must consider all moral agents and not just a select few; a morality restricted to a select few is a partial morality. Any non-partial system of morality must include those who harm us. I would suggest that Sue’s narrow focus on unconditionally forgiving John means she fails to genuinely consider his moral needs. Sue is only considering herself morally and disregarding the moral needs of John. By withdrawing her resentment Sue is withdrawing something that might help John become a better person. Resentment at wrongdoing is not simply something the victim feels; resentment also sends a signal to the offender that he is causing moral harm. It seems to me that by unconditionally forgiving John Sue denies John this signal, which might have helped him become a better person. Agnes Callard makes a similar argument with respect to revenge when she argues that “revenge is how we hold one another morally responsible” (4). It follows that Sue’s unconditional forgiveness of John, whilst admirable in some ways, is nonetheless flawed because she either ignores John’s moral needs or is mistaken about what will help him become a better person.

I have argued that conditional forgiveness is superior to unconditional forgiveness; however, some might argue that my conclusion is unsound. They might point out that unconditional forgiveness seems to set an excellent example of how to love others, and that this reason for supporting unconditional forgiveness outweighs the reasons against it I have advanced above. In response I would argue that the recognition of others as moral agents is of even more fundamental importance to morality than any possible demonstration of love. Without this basic recognition no system of morality can even get started. In my example it seems to me that if Sue unconditionally forgives John then she is acting in the way she believes is best for John, and by so doing she is failing to recognise him as a fully moral agent.

Does accepting that unconditional forgiveness might be harmful mean we must accept the type of transactional forgiveness favoured by Griswold? Nussbaum sets out the long list of conditions Griswold requires for transactional forgiveness to take place (5). She argues that going through such a process is a humiliating one smacking of payback, and I am inclined to agree. Griswold’s transactional forgiveness makes sense if we accept a traditional view of anger which includes payback. However, Nussbaum argues that ideas of anger involving payback don’t make sense. According to Nussbaum, once we see that traditional anger doesn’t make sense we can transmute it into action. Traditional anger,

“quickly puts itself out of business, in that even the residual focus on punishing the offender is soon seen as part of a set of projects for improving both offenders and society.” (6)

I am again inclined to agree with Nussbaum that anger should be transmuted into something useful. I am inclined to agree because, like Michael Brady, I believe that emotions, including anger, act in a way analogous to alarms, focussing our attention on the need to do something (7). Alarms are meant to be attended to; an unattended car alarm is annoying, and unattended anger can be damaging. However, even if unattended anger is harmful this doesn’t mean anger itself is harmful. Unattended alarms are annoying but alarms are useful. Unattended anger may be harmful but anger is useful; anger draws attention to the need to do something. According to Nussbaum, transmuted anger “focuses on future welfare from the start. Saying, ‘Something should be done about this’” (8). If we accept that anger should be attended to, be transmuted, then it seems to me that Griswold’s transactional idea of forgiveness is in trouble, because the transactions involve payback, which seems to me to be related to un-transmuted anger.

If we forgive someone and we do not adopt Griswold’s ideas on transactional forgiveness, are we forced, somewhat reluctantly, to conclude that our forgiveness should be unconditional? I don’t believe we are. What does it mean to forgive? If we define forgiveness simply as relinquishing anger and its associated desire for revenge, then a commitment to transitional anger also means a commitment to unconditional forgiveness. It would mean that even if John remains quite happy with the fact that he has morally harmed Sue and remains prepared to do so again, then provided Sue transmutes her anger she forgives him unconditionally. However, forgiving someone might also be defined as the normalisation of relations between the forgiver and the forgiven. Transmuting anger in this context doesn’t simply mean moving on. Transitional anger means looking to the future, moving on, but it also means looking back to the past; past wrongdoing cannot be ignored, after all it is the reason why we must look to the future. This approach doesn’t of necessity involve a formal transactional process involving payback. It does however mean that certain minimum conditions not involving payback must be met. Relations cannot be normalised if a wrongdoer disputes the facts or the wrongness of his action. In this situation victims are entitled to protect themselves by withholding trust. Trust is an essential part of normal human relations; if someone is always wary of another, their relationship cannot be said to be a normal one. Protecting oneself need not involve payback. It follows that forgiveness requires the wrongdoer to accept responsibility for the act and acknowledge its wrongness before normal relations can be restored. It further follows that if someone accepts transitional anger, her acceptance does not commit her to unconditional forgiveness, which might harm the wrongdoer.

1.    Charles Griswold, 2007, Forgiveness, Cambridge University Press.
2.    Martha Nussbaum, 2016, Anger and Forgiveness, Oxford University Press, Chapter 3.
3.    Nussbaum, chapter 3.
4.    Agnes Callard, 2020, On Anger, Boston Review Forum, page 15.
5.    The list of Griswold’s conditions as outlined by Nussbaum:
·       Acknowledge she was the responsible agent. Repudiate her deed (by acknowledging its wrongness) and herself as its cause.
·       Express regret to the injured at having caused this particular injury to her.
·       Commit to becoming a better sort of person who does not commit injury, and show this commitment through deeds as well as words.
·       Show that she understands, from the injured person’s perspective, the damage done by the injury. Offer a narrative accounting for how she came to do the wrong, how the wrongdoing does not express the totality of her person, and how she became worthy of approbation.
 
6.    Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 51.
7.    Michael S. Brady, 2013, Emotional Insight: The Epistemic Role of Emotional Experience, Oxford University Press.
8.    Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 54.

