Tuesday, 14 March 2017

Automation, Work and Education


Our world is becoming increasingly automated, and this increase appears to be affecting the number of jobs available. It is possible that in the future automation will not only eliminate some existing jobs but also create an increasing number of different ones. A second possibility is that automation will mostly lead to a decrease in the number of jobs. In this posting I want to examine some of the consequences this second possibility has for work and education.

Pessimists might argue that a widespread loss of jobs will lead to widespread hardship and poverty. I believe such a pessimistic outcome is unlikely because it would threaten the survival of both the state and the market economy. In this situation both the state and the markets would have reasons to introduce some form of universal basic income (UBI). According to Tim Dunlop, UBI means,

“A basic income, on the other hand, is the idea that everyone should be paid a minimum monthly income that allows them to meet their basic economic needs.” (1)

It is important to note that UBI in response to increasing unemployment caused by automation is not some attempt to reform the benefits system but rather an attempt to counter an existential threat which this unemployment might pose to the state. It might be speculated that UBI might not just be useful in combating the effects of unemployment but might also be necessary for the continuation of capitalism. In an age of large scale automation, capitalism might survive without workers, but it seems doubtful that it could survive without consumers. In the rest of this posting I am going to assume that if automation causes widespread job losses in any state, that state will introduce some form of UBI in order to counter this existential threat. I will further assume that UBI will be large enough to permit people to live in moderate comfort.

Some might think that automation and UBI will lead to some golden age. In the ancient world the upper classes in Greek and Roman society led a life of leisure in which most of the work was done by slaves. It might be argued by analogy that automation might introduce a golden age in which we live a life of leisure, with most work either automated or done by robots. I believe such a golden age is an illusion for two reasons. First, upper class Greeks and Romans may have led happier lives than their slaves, but there is no evidence that they led happier lives than people living now. The ancient golden age, at least for some, appears to be an illusion, and so any argument by analogy fails. Secondly, if we live in a world in which all the work is automated or done by robots, we might suffer from the unbearable lightness of simply being. We might feel our world has lost all purpose and that we simply exist. We might become bored. Limited boredom might encourage us to take steps to alleviate it, but prolonged boredom is harmful. According to Harry Frankfurt boredom is not some innocuous discomfort but something that threatens our psychic survival. (2) I have previously argued that a world whose inhabitants are bored and feel they are simply existing is a dangerous world, see riots and the unbearable lightness of simply being. It is possible that even if automation frees people from work, and the resultant widespread loss of jobs does not lead to widespread hardship and poverty, it might still lead to people’s lives being degraded rather than to some golden age.

The above pessimistic scenario seems to be a realistic possibility, and I now want to examine what might be done to counter its negative effects. Prior to my examination I want to consider what we mean by work. Work might be roughly defined as making an effort for some economic reward, or the hope of such a reward. However, such a definition is at best an incomplete one. I have suggested previously that someone might work in her garden purely for the pleasure it brings her, without any thought of economic reward. Hannah Arendt suggested there is a difference between work and labour. According to Arendt, labour is what we do in the normal process of living in order to survive. For Arendt, work might be simply defined as any human activity which is not driven by our need to survive. Arendt’s definitions are interesting but also seem to me to be incomplete: dancing is not driven by our need to survive, yet dancing is not working. Intuitively work requires some effort. Work might now be defined as any human activity requiring effort which is not driven by our need to survive. Such a refined definition also seems incomplete. If I am running away from a bull I might make a great effort, but I’m not working. Work might now be defined as any human activity which matters to us, requires effort and is not driven by our need to survive. I believe Arendt’s insight is important and I will use it to define two different ways of working. It might be better to label labouring as ‘working for’ something we need to survive. ‘Working for’ something has mostly instrumental value. Work defined as a human activity which matters to us, requires effort and is not driven by our need to survive might be labelled ‘working at’. ‘Working at’ has mostly intrinsic value.

Let us now examine the possible effects of increasing automation, bearing in mind these two definitions of work. Let us assume that automation might decrease or even eliminate our need to ‘work for’ things, to work instrumentally. Does this decrease matter? I would suggest it matters to someone if she doesn’t ‘work at’ something. In such a situation it seems highly probable that such a person might suffer from the unbearable lightness of simply being. She might feel her world has lost all purpose and that she’s simply existing. It follows we have some reason to fear the effects of increasing automation.

Assuming we aren’t Luddites and don’t want to, or can’t, stop the progress of automation, what steps should we take to mitigate some of the worst effects of not ‘working for’ anything? First, if automation greatly decreases our need to ‘work for’, we would need to refocus our education system. At the present time a lot of education focuses on equipping people for jobs, to ‘work for’. Let us assume people no longer need to ‘work for’ and that a purely hedonistic lifestyle also leads to a lightness of simply being. In such a situation ‘working at’ something might help counter someone’s sense of simply existing due to her ceasing to ‘work for’ something. In this situation education should focus on enabling people to ‘work at’. In order to do so, science education remains important because we need to understand how the world we live in works. But we also need to understand how to live in such a world, and to enable us to do so education should place greater emphasis on the humanities.

I have argued that in a highly automated age people need to become better at ‘working at’ something. Any work can be good or bad, and this includes ‘working at’. Someone might ‘work at’ doing crosswords all day; I would suggest this is not good work. If ‘working at’ is to replace ‘working for’, it must be good work. Samuel Clark suggests that one element of good work is that it requires some skill. According to Clark,

“the development of a skill requires: (1) a complex object and (2) a self-directed and sometimes self-conscious relation to that object.” (3)

I now want to consider each of these requirements. According to Clark, good work involves working at something which must have some complexity; the something we work at must have a complex internal landscape of depth and obstacles (4). He gives as examples of skilled activities music, mathematics, carpentry, philosophy and medicine. Doing crosswords might be a difficult task but it lacks complexity. Clark also argues good work must be self-directed. Let us assume someone is self-directed to work at some complex task purely to mitigate her sense of simply being. I would suggest that such self-direction fails. Why does it fail? It fails because in order to prevent this sense of simply being someone must work at something that satisfies her. For an activity to satisfy someone she must care about that activity. Let us accept that Frankfurt is correct when he argues ‘caring about’ is a kind of love because the carer must identify with what she cares about. It might be concluded that good work is doing something complex which the doer ‘cares about’ or loves. It might then be suggested that, provided people can ‘work at’ something and this is good work, this ‘working at’ might mitigate some of the effects of job losses due to automation.

However, even if we accept the above, difficulties remain. Let us assume any good work, whether ‘working for’ or ‘working at’, requires some skilful action. Let us further assume a skilful action requires that the doer must identify with her actions by ‘caring about’ or loving them. Unfortunately, ‘caring about’ or loving is not a matter of choice.

“In this respect, he is not free. On the contrary, he is in the very nature of the case captivated by his beloved and his love. The will of the lover is rigorously constrained. Love is not a matter of choice.” (5)

It follows further that if someone simply chooses to ‘work at’ something in order to compensate for her loss of ‘working for’, this ‘working at’ need not be good work and as a result won’t mitigate her sense of boredom. Someone cannot simply choose to do just anything to alleviate her boredom. If she simply chooses, it seems probable her choice will bore her. She must ‘care about’ what she chooses. If society is to help mitigate the effects of job losses due to automation, then it must create the conditions in which people can come to care about doing complex things. I have suggested above that education might help in this task. W. B. Yeats said ‘education is not the filling of a pail, but rather the lighting of a fire’; perhaps education must fire people’s enthusiasms every bit as much as enable their abilities. Perhaps also we should see learning as a lifelong process. Broadly based lifelong education which fires people’s enthusiasms might help create the conditions in which people can ‘work at’ things, hence mitigating some of the harmful effects of job loss due to automation.


Lastly, there are activities which might mitigate some of the harmful effects of job losses but which have little to do with work. Music and sport would be examples of such things. Of course it is possible to ‘work at’ music and sport, we have professional sportspersons and musicians, but most people just play at such activities. Play is a light hearted pleasant activity done for its own sake. Play is important, especially for children. It might be suggested that some forms of play are a form of good ‘working at’. All work is goal directed and so is some play. Perhaps there is a continuum between work and play, with the importance of the goal varying. Perhaps in an automated age play should become more important to older people also. Activities such as playing sport or music require some infrastructure, and perhaps in an automated age it is even more important that society helps build this infrastructure. At the present time governments foster elite sport. Perhaps this fostering should change direction towards fostering participation rather than funding elite athletes.

  1. Tim Dunlop, Why the Future Is Workless, New South, Kindle edition, locations 1748-1749.
  2. Harry Frankfurt, 2006, The Reasons of Love, Princeton University Press, page 54.
  3. Samuel Clark, 2017, Good Work, Journal of Applied Philosophy 34(1), page 66.
  4. Clark, page 66.
  5. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 135.


Wednesday, 22 February 2017

Sex with Robots


In the near future it seems probable that some people will have sex with robots, see the rise of the love droids. In this posting I will discuss some of the problems this possibility raises. I will divide my discussion into two parts. For the most part my discussion will consider sex with robots which are simply machines, before moving on, much more fancifully, to discussing sex with robots which might be considered as persons.

Let us consider someone having sex with a robot which isn’t a person but simply a machine. Human beings have created objects to be used for sexual purposes, such as vibrators and other sex toys. If a robot isn’t a person, then it might appear that someone having sex with a robot is unproblematic in much the same way as the use of these artefacts. I now want to argue that this appearance is false. But before making my argument I want to consider the nature of sex. Sex among humans isn’t simply a matter of reproduction. Human beings enjoy sex. Neither is this enjoyment a purely mechanical thing. According to Robert Nozick,

“Sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other. Even in masturbatory fantasy, people dwell upon their actions with others; they do not get excited by thinking of themselves whilst masturbating.” (1)

If we accept Nozick’s view, what does having sex with a robot really mean to the person having sex? Provided a robot has been supplied with the appropriate genitalia, would someone want to have sex with it? I would suggest that in many cases he would not. Let us assume that a robot has the appropriate genitalia, four legs, one arm and several detachable eyes. I would suggest very few people would want to have sex with such a machine. Nozick argues that even when masturbating someone is imagining having sex with another person, and I would suggest much the same applies to having sex with a robot. If someone has sex with a robot, he would want it to look like a beautiful person because he is imagining having sex with such a person.

What are the implications of accepting the importance of such imagining? First, I would suggest having sex with a robot is just an enhanced form of masturbation. Masturbation isn’t wrong because it doesn’t harm others. Having sex with any robot which is purely a machine doesn’t harm others, and so by analogy also isn’t wrong. Indeed, in some circumstances masturbation might be an acceptable choice for those who are physically or emotionally incapacitated, and perhaps also for those who are incarcerated. However, even if we accept the above, masturbation isn’t ideal and neither would be sex with a robot. Someone having imaginary sex with a person is having inferior sex because what he desires is real sex.

I have argued that the first reason why someone might want to have sex with a robot is that he cannot have sex with another person, and that there is nothing wrong with his actions. Anyone having sex with a robot knows he cannot harm the robot. This gives rise to a second reason why someone might want to have sex with a robot. Someone might know that the type of sexual activity he wants to indulge in might be harmful to another human being, and because he knows he cannot harm a robot he prefers to indulge in this activity with a robot. Does acting on such a preference matter? After all, he isn’t harming anyone else. Kant argued we shouldn’t be cruel to animals as this might make us cruel to human beings. Might it be, then, that if someone engages in such sexual activity with a robot, this activity might make him more likely to engage in harmful sexual acts with other human beings? At present there is no conclusive evidence to support Kant’s argument that if someone is cruel to animals this cruelty makes him more likely to be cruel to other people. If this is so, it seems doubtful that someone engaging in such sexual activity with a robot would become more likely to engage in it with another human being. This is an empirical question and cannot be settled by philosophical analysis. However, someone engaging in sex with a robot which would be harmful to a human being might harm himself. I have previously argued that for the users of pornography there is a split between fantasy and reality, see wooler.scottus. I further argued that in the case of sexual practices which might harm others, the maintenance of the split between fantasy and reality is absolutely essential. I have argued above that someone having sex with a robot imagines he is having sex with a person. It follows that for someone engaging in sex with a robot, which might harm another human being, the maintenance of the split between fantasy and reality is also essential.
I further argued that if someone uses pornography, this split threatens the unity of his will, which is damaging to his identity. It follows that someone engaging in sex with a robot, which would be harmful to a human being, might harm himself by damaging his identity.

Some people assume that at some time in the future some robots might become persons. I am extremely sceptical about this possibility, but nonetheless I will now consider some of the problems of someone having sex with such a robot. However, before I do so I will question whether anyone would want sex with such a robot. Let us accept Nozick is correct in his assertion that “sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other.” How do we perceive the connection to a robot which is also a person? I suggested above that a robot can take many forms. Would anyone want to have sex with a robot with four legs, one arm, several detachable eyes and appropriate genitalia, even if it could be considered as a person? Persons are partly defined by the actions they are capable of enacting, and these actions are partly defined by their bodies’ capabilities. Robots can have very different bodies from us. A robot with a different body structure might be capable of very different actions to us; such a robot, even if it is considered as a person, might be a very different sort of person to the sort we are. The same might also be true of a robot with a similar structure which is constructed from different materials. If someone or something is very different to us, then the connection between us and that someone or something becomes tenuous. Would someone want to have sex with any robot with which he had only a tenuous connection? I doubt it. Of course someone might want to have sex with such a robot provided it looked like a beautiful human being. But if this is so, isn’t he really imagining having sex with a person? If so, the problems associated with having sex with a robot which is purely a machine once again become relevant.

In conclusion, I have argued that someone would not harm others by having sex with a robot and that his actions would not be morally wrong. However, I argued that whilst it might not be wrong to have sex with a robot which is purely a machine, it might nonetheless be damaging to the user’s identity, in much the same way as pornography, by splitting his character. Lastly, I questioned whether anyone would really want to have sex with any robot which might be considered as a person.

  1. Robert Nozick, 1989, The Examined Life, Touchstone, page 61.

Monday, 23 January 2017

Robots and Persons




In an interesting post John Danaher asks if someone can be friends with a robot, see philosophicaldisquisitions. He argues virtue friendship might be possible with a robot. Virtue friendship involves two entities sharing values and beliefs which benefit them. Let us accept that any entity which is capable of having values and beliefs can be regarded as a person. Perhaps one of the great apes might be regarded as a person, but can the same be said of a robot? Does it make sense to say a robot might have rights or can be regarded as a person? In what follows I will limit my discussion to robots, but my discussion could equally well be applied to some advanced system of AI or algorithms. At present the actions of a robot have some purpose, but this purpose doesn’t have any meaning which is independent of human beings. At present the actions of a robot have no more meaning which is independent of us than the action of the wind in sculpting a sand dune. In the future it is conceivable that this situation might change, but I am somewhat sceptical and believe at the present time there is no need to worry about granting rights to robots akin to human rights. In this posting I will argue that the nature of belief means worrying about robot personhood is both premature and unnecessary.

How should we regard the actions of a robot if it has no beliefs? What are the differences between the wind sculpting a sand dune and the actions of a robot? One difference is that even if both the wind and a robot lack beliefs, a robot’s actions are nonetheless in accordance with someone’s beliefs, those of its designer or programmer. But does this difference matter? A refrigerator is acting in accordance with our belief that it will keep our food cold. If we don’t want to grant personhood to refrigerators, why should we do so for robots? Perhaps then we might implant some beliefs into robots, and after some time such robots might acquire their own emergent beliefs. Perhaps such robots should be regarded as persons. Implanting such beliefs will not be easy and may well be impossible. However, I see no reason, even if such implantation is possible, why we should regard such a robot as some sort of person. If a person has some belief, then this belief causes him to behave in certain ways. How do we implant a belief in a robot? We instruct the robot how to behave in certain circumstances. In this situation, of course, the robot behaves in accordance with the implanted belief, but the primary cause of this behaviour is not the implanted belief but rather a belief of those who carried out the implantation. A robot in this situation cannot be said to be behaving authentically. I can see no reason why we should attribute personhood to a robot which uses implanted beliefs as outlined above.

At this point it might be objected that even if a robot shouldn’t be considered as a person it might be of moral concern. According to Peter Singer, what matters for something to matter morally is not that it can think but that it can feel. Animals can feel and should be of moral concern. Present day robots can’t and shouldn’t. Present day robots are made of inorganic materials such as steel and silicon. However, it might be possible to construct a robot partly from biological material, see Mail Online. If such a robot could feel then it should be of moral concern, but this doesn’t mean we should consider it as a person; frogs can feel and should be of moral concern, but they aren’t persons. Nonetheless I would suggest that the ability to feel is a necessary precursor for believing, which is a precursor for personhood.

For the sake of argument let us assume that it is possible to create a robot whose behaviour is primarily caused by its implanted or emergent beliefs. What can be said about this robot’s beliefs? When such a robot decides to act, the primary cause of the action is its internal beliefs; it is acting in a manner which might be regarded as authentic. How might such a robot’s beliefs and actions be connected? Perhaps they are linked by Kant’s hypothetical imperative. The hypothetical imperative states,

“Whoever wills an end also wills (insofar as reason has decisive influence on his actions) the indispensably necessary means to it that are within his power.” (1)

Some might suggest that having a set of beliefs and accepting Kant’s hypothetical imperative are necessary conditions for personhood, some might even regard them as sufficient conditions. They might further suggest that any robot meeting these conditions should be regarded as a candidate for personhood. Of course it might be possible to design a robot which conforms to the hypothetical imperative, but conforming is not the same as accepting. Let us accept anyone or anything that can be regarded as person must have some beliefs and must accept rather than conform to the hypothetical imperative.

What does it mean for someone to accept the hypothetical imperative? Firstly, he must believe it is true; the hypothetical imperative is one of his beliefs. Someone might believe that he is made up of atoms, but this belief doesn’t require any action when action is possible. The hypothetical imperative is different because it connects willed ends with action. Can the hypothetical imperative be used to explain why a robot should act on its beliefs, be they implanted by others or emergent? Kant seems to believe that the hypothetical imperative can be based on reason. I will argue that reason can only give us reason to act in conjunction with our caring about something, and that the hypothetical imperative only makes sense if an agent views beliefs in a particular way. What does it mean to will an end? I would suggest that if someone wills an end, he must care about that end. If someone doesn’t care about or value some end, then he has no reason to pursue that end. What then does it mean to care about something? According to Frankfurt, if someone cares about something he becomes vulnerable when that thing is diminished and benefits when it is enhanced. (2) People by nature can suffer and feel joy; robots can’t. It is worth noting animals can also suffer and feel joy, making them like people with rights rather than like robots. The above raises an interesting question: must any entity which is capable of being conscious, robot or animal, be able to suffer and feel joy? If we accept the above, then the ends we will must be things we care about. Moreover, if we care about ends then we must value them. It follows that if the hypothetical imperative is to give us cause to act on any belief, that belief must be of value to us. It follows that the hypothetical imperative can only be used to explain why a robot should act on its beliefs provided such a robot values those beliefs, which requires it becoming vulnerable.
A right is of no use to any entity for which the implementation of that right doesn’t matter, that is, an entity which isn’t vulnerable to the right not being implemented.

I have argued that any belief which causes us to act must be of value to us, and that if we find something valuable we are vulnerable to the fate of the thing we find valuable. What then does it mean to be vulnerable? To be vulnerable to something means that we can be harmed. Usually we are vulnerable to those things we care about in a psychological sense. Frankfurt appears to believe that we don’t of necessity become vulnerable to the diminishment of the things we value by suffering negative affect. He might argue we can become dissatisfied and seek to alleviate our dissatisfaction without suffering any negative affect. I am reluctant to accept that becoming vulnerable can be satisfactorily explained by becoming dissatisfied without any negative affect. It seems to me being dissatisfied must involve some desire to change things, and that this desire must involve some negative affect. I would argue being vulnerable to those things we value involves psychological harm and that this harm must involve negative affect.


Let us accept that in order to be a person at all someone or something must accept and act on the hypothetical imperative. Let us also accept that the hypothetical imperative only gives someone or something reason to act on some belief provided that someone or something values that belief. Let us still further accept that to value something someone or something must care about what they value, and that caring about of necessity must include some affect. People feel affect and so are candidates for personhood. It is hard to see how silicon based machines or algorithms can feel any affect, positive or negative. It follows that it is hard to see why silicon based machines or algorithms should be considered as candidates for personhood. It appears the nature of belief means any worries concerning robot personhood, when the robot’s intelligence is silicon based, are unnecessary. Returning to my starting point, it would appear that it is acceptable for young children to have imaginary friends but that it is delusional for adults to believe they have robotic friends. However, I will end on a note of caution. We don’t fully understand consciousness, so we don’t fully understand what sort of entity is capable of holding beliefs and values. It follows we cannot categorically rule out a silicon machine becoming conscious. Perhaps also it might become possible to build some machine not entirely based on silicon which does become conscious.

  1. Immanuel Kant, 1785, Groundwork of the Metaphysics of Morals.
  2. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.




Monday, 21 November 2016

Cryonic Preservation and Physician Assisted Suicide


Recently a terminally ill teenager won the right to have her body preserved by cryonics in the hope that she might live again at some future date. Such a hope comforted her. The case was really a case of whether she had the right to determine how her body was disposed of when she died, see bbc news. Cryonic preservation is not a form of treatment at the present time; it is simply a form of bodily disposal tinged with hope, and as a result does not presently appear to pose any major ethical problems. However, let us imagine a scenario in the future when cures become available for some diseases which are currently terminal and those preserved by cryonics can be brought back to life. This scenario raises ethical concerns, and in this posting I want to examine them.

At the present time cryonic preservation might be defined as the preservation by freezing of a dead body in the hope that a cure for the disease that body died from becomes available in the future, and that the body can then be resuscitated and cured. The case of the teenager above was an example of this type of cryonic preservation. However, an alternative form of cryonic preservation seems possible. Let us consider someone suffering from a terminal illness who finds his present life not worth living. He might consider physician assisted suicide (PAS); such an option is available in several countries and some states in the USA. On reflection he might think a better option is open to him. He wants his body frozen whilst he is still alive and preserved in the hope that a cure for the disease he suffers from becomes available in the future, so that he may be resuscitated and cured. For the rest of this post cryonic preservation will be referred to as CP. These alternative types of CP will be defined as follows.
  • Type 1 CP will be defined as the preservation by freezing of a dead body in the hope that a cure for the disease the person died from becomes available in the future, so that the body may then be resuscitated and cured.
  • Type 2 CP will be defined as the preservation by freezing of someone’s body whilst he is alive, in the hope that a cure for the disease he suffers from becomes available in the future, so that he may then be resuscitated and cured.

Type 1 CP is extremely fanciful because not only do cures need to be found and bodies unfrozen, but dead bodies also need to be brought back to life. I will deal with type 2 CP first because there is a more realistic opportunity of it being realised. There seems to be some possibility that in the future it might become possible to freeze and preserve someone’s living body and unfreeze it after a substantial period of time, permitting him to resume his life. Such a scenario remains fanciful but is by no means impossible. Let us assume studies have frozen and stored large living animals and that, after a substantial period of time, these have been thawed, permitting them to resume their lives. I am assuming here that studies on rats or mice might not be applicable to humans. Let us assume someone is aware of this fact and learns he is starting to suffer from Alzheimer’s disease. I have previously argued it would not be irrational or wrong for such a person to commit suicide if he so desired, and moreover that it would not be wrong to help him do so, see alzheimers and suicide. In this scenario it might be argued by analogy that it would be neither irrational nor wrong for such a person to choose type 2 CP. Indeed, it might be argued, as I have suggested above, that it is a more rational option than suicide. I now want to examine the ethical concerns raised by type 2 CP.

If we preserve someone using type 2 CP are we doing something which is wrong? To answer this question, we must first ask what we are doing if we preserve someone using type 2 CP. The company providing the service is simply storing a living body and this seems to raise no ethical concerns. But what are the doctors who prepare the body for storage doing? It is uncertain whether a cure might be found for Alzheimer’s. In this scenario how we describe these doctors’ actions in preparing him for preservation raises different ethical concerns. Are they killing him now, preserving him to die in the future or helping him commit suicide? The first possibility is clearly wrong. Is the second possibility also wrong? Delaying death is an accepted part of medical practice. But would it be right to delay death if there is no conscious life between the start of the delaying process and death? Intuitively, though such an action harms no one and might not be wrong, it seems pointless. If we accept physician assisted suicide, then we must accept the third option isn’t wrong. In the case of the teenager what was being decided was how a body might be disposed of. Let us now consider a variation of this case. Let us assume that a relatively young person, who is competent to give informed consent and is suffering from terminal cancer, wishes to have his body preserved by type 2 CP. He wants his body preserved whilst he is still alive. Let us also assume that using type 2 CP it is possible to preserve a body for a hundred years and then resuscitate it. It seems possible that a cure, or at least an ability to manage cancer, might well come about in the next hundred years. In this case if his doctors prepare his body for type 2 CP, what are they doing? In this scenario it seems wrong to say they are killing him, delaying his death or helping him commit suicide.
If the techniques involved in type 2 CP are proved to be safe and a cure for cancer is a genuine possibility, then I would suggest his doctors are treating his disease; they are treating him as a patient, and there are no ethical objections to doctors treating patients. It might be objected that doctors only treat patients so they can recover from their illnesses. In response I would point out that doctors treat patients when they manage cancers they can’t cure by providing palliative care. Let us assume it is possible to preserve a body for a hundred years and then resuscitate it. In the light of the above it might be concluded that it would not be wrong for doctors to prepare a relatively young competent person for type 2 CP provided he or those close to him could pay for the service. The above conclusion is subject to two conditions. First, there must be a reasonable prospect that the condition he suffers from will become curable or manageable in the future, and secondly, it must have proved possible to freeze and store large living animals and, after a substantial period of time, to unfreeze them, permitting them to resume their lives.

Let us accept the above conclusion that it is not wrong to provide type 2 CP to relatively young competent patients when there is a realistic possibility that their illness can be cured in the future. But would it be wrong to provide such treatment to older or even incompetent patients? I will deal with someone who is incompetent first. If someone is incompetent to give consent to treatment then a surrogate decision maker, such as the courts or his parents, acts on what is in his best interests. Intuitively it might be objected that deciding to use type 2 CP is not deciding about treatment. However, I have argued that for a relatively young competent person type 2 CP can be seen as a form of treatment. Moreover, I would suggest treatment doesn’t become non-treatment simply because someone is unable to give competent consent. Accepting the above raises many practical problems. Should type 2 CP be carried out if someone who is incompetent resists such treatment? I would suggest it should not, but will not offer any arguments here to support my suggestion. Should type 2 CP be carried out if someone who is incompetent is prepared to accept such treatment? I would suggest it should. If we believe it shouldn’t, then mustn’t we also believe the lives of the incompetent have less value than those of the competent, whilst at the same time remembering that young children are incompetent? Moreover, if we accept the above aren’t we encouraging eugenics by the backdoor? It might be concluded that it would not be wrong to provide type 2 CP to relatively young incompetent patients, provided they are prepared to accept treatment and those close to them are prepared to pay for the service. This conclusion is subject to the same conditions required for the relatively young competent patients outlined above.

Is it wrong to offer type 2 CP to older persons? It seems to me that in a world of infinite resources it would not be. Resources in this scenario are not a problem, and it would appear that if someone believes it would be wrong to offer type 2 CP to older persons then it should be up to him to justify his belief. It can again be concluded that it would not be wrong to offer type 2 CP to older persons, subject to the same conditions outlined in the other two cases.

I now want to consider a different question. If type 2 CP could be regarded as treatment, would we have a duty to provide this treatment? This question is at the moment completely hypothetical. However, if studies froze and stored large living animals and then, after a substantial period of time, thawed them, permitting them to resume their lives, then this question would cease to be a hypothetical one. Indeed, if there was also the possibility of several new cures for previously incurable diseases, an answer to this question becomes important. Usually whether someone should be offered treatment depends on the quality-adjusted life years, QALYs, expected from the treatment in question. It might be concluded that it would not be wrong to offer type 2 CP to older persons when the number of expected QALYs is similar to the expected QALYs offered by other accepted treatments, subject to two provisos. First, the number of expected QALYs should not include the years spent in a frozen state. Secondly, it is possible that the freezing process might reduce the number of QALYs and this should be taken into account.
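The two provisos above can be made concrete with a toy expected-QALY calculation. The sketch below is purely illustrative: the function, its parameters and all the numbers are my own assumptions, not figures from any official QALY methodology or from the argument itself.

```python
# Hypothetical sketch of an expected-QALY comparison for type 2 CP.
# All figures are invented for illustration only.

def expected_qalys(life_years, quality, probability=1.0, frozen_penalty=0.0):
    """Expected quality-adjusted life years from a treatment.

    life_years: conscious years the treatment is expected to add
                (years spent frozen are excluded, per the first proviso)
    quality: average quality weight per year (0 = death, 1 = full health)
    probability: assumed chance the treatment succeeds at all
    frozen_penalty: fractional QALY reduction caused by the freezing
                    process itself (the second proviso)
    """
    return probability * life_years * quality * (1.0 - frozen_penalty)


# Invented comparison for a terminally ill patient:
palliative = expected_qalys(life_years=1.0, quality=0.5)   # ≈ 0.5 QALYs
cp = expected_qalys(life_years=20.0, quality=0.8,
                    probability=0.05, frozen_penalty=0.1)  # ≈ 0.72 QALYs

print(f"palliative care: {palliative:.2f} expected QALYs")
print(f"type 2 CP:       {cp:.2f} expected QALYs")
```

On these assumed numbers type 2 CP would compare favourably with palliative care, but the comparison is clearly very sensitive to the assumed probability of successful resuscitation.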

I have argued that it would not be wrong to provide type 2 CP to people who can finance this service themselves. I have also argued that it is possible that in the future type 2 CP might be regarded as treatment. It seems that the same arguments I used regarding the permissibility of type 2 CP can be applied to type 1 CP. However, whilst in some circumstances type 2 CP might be seen as a form of treatment, it is difficult to see how type 1 CP might be regarded as one. Philosophy should not only be concerned with what should be permitted but also with what helps people flourish; philosophy should make recommendations about how to live. Let us assume one or both types of CP prove to be effective. Should we recommend that someone facing terminal or life changing illness try CP? Several reasons might be advanced as to why we should not. First, a long suspension might mean they awake to an alien world, making it hard for them to cope. Secondly, a long suspension might mean they awaken to find their friends, spouse and even children have died. Whether someone would want to undergo CP would depend not only on their imagined future but also on their current circumstances. A single lonely person might find CP attractive whilst someone whose life centres on family might not. The young might find CP more attractive than the old because CP offers them the possibility of a longer life extension. Personally, as a relatively old man, I do not find the idea of CP attractive; however, returning to our starting point, if I were fourteen I might well do so.


Tuesday, 8 November 2016

Nussbaum, Transitional Anger and Unconditional Forgiveness




Charles Griswold argues that forgiveness is a kind of transaction and that, as a result, certain conditions are attached to the transaction, meaning that one cannot truly forgive without fulfilling them (1). In response it might be pointed out that conditional love is inferior to unconditional love. It might then be argued by analogy that conditional, transactional, forgiveness is inferior to unconditional forgiveness. In this posting I will argue that this analogy doesn’t hold and that transactional forgiveness is morally more desirable than unconditional forgiveness because of the message it sends to the offender.

Martha Nussbaum rejects the idea of transactional forgiveness as suggested by Griswold and goes further by arguing that there are also problems with unconditional forgiveness. The problem with all sorts of forgiveness, according to Nussbaum, is that it is essentially backward looking and attached to ideas of payback. She argues that rather than forgiving we should engage with offenders in a spirit of active love (2). In response to such arguments Griswold suggests that for a victim simply to give unconditional forgiveness means she lacks self-respect and that others will also fail to respect her. Intuitively, if someone who has been wronged holds no resentment whilst the offender exhibits no remorse, or indeed continues offending, then the victim lacks self-respect. Intuitively it also seems morally wrong, not just hard, for someone who has been sexually assaulted to unconditionally forgive her assailant.

In this posting I don’t want to examine a lack of respect. Instead I want to examine two different objections to unconditional forgiveness. First, I will argue that in some circumstances unconditional forgiveness means the victim, far from having too little self-respect, actually has too much and is over proud. Secondly, I will argue that unconditional forgiveness by the victim harms the offender. Let it be accepted that all forgiveness, whether unconditional or transactional, means letting go of resentment. Intuitively this appears to be true, for it seems impossible to believe a victim truly forgives her transgressor if she still bears resentment towards him. For the sake of argument let us assume Sue has been morally harmed by John and that she has unconditionally forgiven him. In this context, because Sue’s forgiveness is unconditional, it is possible that John might remain quite happy with the fact that he has morally harmed Sue and would be fully prepared to do so again.

Let us examine Sue’s motives in unconditionally forgiving John. According to Nussbaum sometimes,

“the person who purports to forgive unconditionally may assume the moral high ground in a superior and condescending way.” (3)

If we accept Nussbaum’s view, then it is possible that Sue’s underlying motive in unconditionally forgiving John is to feel good in a superior way. Such a motive displays a certain moral arrogance and does not justify unconditional forgiveness. However, let us assume that Sue’s motive is not to feel superior but simply a desire to act in a moral manner.
Let us examine the above assumption. I now want to present two arguments why even in this context Sue’s unconditional forgiveness might be flawed. Both arguments will be based on Sue’s focus. Firstly, I will argue that by unconditionally forgiving John to satisfy her desire to act in a moral manner, Sue might still be exhibiting an excessive moral pride. Before proceeding I must make it clear that I am not attacking limited moral pride; indeed, I believe that some limited moral pride is a good thing. How then can Sue exhibit excessive moral pride by unconditionally forgiving John? It seems possible to me that Sue’s motives for forgiving John might have nothing actually to do with John. Let us assume Sue’s unconditional forgiveness is due to her focus on acting morally and isn’t a case of moral grandstanding. Her focus might nonetheless be flawed because it is too narrow, focussing exclusively on her own behaviour. If Sue focusses exclusively on her own behaviour, focusses on herself, then she seems to be exhibiting excessive pride. Nussbaum, for instance, might argue such a limited focus is unhealthy because it contains a narcissistic element. It follows that if an excessive pride underlies Sue’s unconditional forgiveness then her motive for this forgiveness is flawed; indeed, it might be argued that by excessively cherishing herself she damages herself. However, it does not automatically follow that her unconditional forgiveness of John cannot be justified by other reasons just because her motivation is flawed.

Let us assume Sue’s motive for her unconditional forgiveness is simply focussed on acting morally and has nothing to do with excessive pride. This brings us to the second of my two arguments. I want to argue that whilst Sue’s simple desire to act morally is admirable, the way she enacts this desire is flawed. I will base my argument once again on Sue’s narrow focus. In order to act in a truly moral way people must consider all moral agents and not just a select few; a morality restricted to particular people is only a partial morality. Any non-partial system of morality must include those who harm us. I would suggest that Sue’s narrow focus on unconditionally forgiving John means she fails to genuinely consider his moral needs. Sue is only considering herself morally and disregarding the moral needs of John. By withdrawing her resentment Sue is withdrawing something that might help John become a better person. Resentment at wrongdoing is not simply something the victim feels; resentment also sends a signal to the offender that he is causing moral harm. It seems to me that by unconditionally forgiving John, Sue is denying John this signal which might help him become a better person. Agnes Callard makes a similar argument with respect to revenge when she argues that “revenge is how we hold one another morally responsible” (4). It follows that Sue’s unconditional forgiveness of John, whilst admirable in some ways, is nonetheless flawed because she ignores John’s moral needs or is mistaken about what will help John become a better person.

I have argued that conditional forgiveness is superior to unconditional forgiveness; however, some might argue that my conclusion is unsound. They might point out that unconditional forgiveness seems to set an excellent example of how to love others and that this reason for supporting unconditional forgiveness outweighs the reasons against it I have advanced above. In response I would argue that the recognition of others as moral agents is of even more fundamental importance to morality than any possible demonstration of love. Without this basic recognition no system of morality can even get started. In my example it seems to me that if Sue unconditionally forgives John then she is acting in a way she believes is best for John, and by so doing she is failing to recognise him as a fully moral agent.

Does accepting that unconditional forgiveness might be harmful mean we must accept the type of transactional forgiveness favoured by Griswold? Nussbaum sets out the long list of conditions Griswold requires for transactional forgiveness to take place (5). She argues that going through such a process is a humiliating one smacking of payback; I am inclined to agree. Griswold’s transactional forgiveness makes sense if we accept a traditional view of anger which includes payback. However, Nussbaum argues that ideas of anger involving payback don’t make sense. According to Nussbaum, once we see that traditional anger doesn’t make sense we can transmute it into action. Traditional anger,

“quickly puts itself out of business, in that even the residual focus on punishing the offender is soon seen as part of a set of projects for improving both offenders and society.” (6)

I am again inclined to agree with Nussbaum that anger should be transmuted into something useful. I am inclined to agree because I believe, like Michael Brady, that emotions, including anger, act in a way analogous to alarms, focussing our attention on the need to do something (7). Alarms are meant to be attended to; an unattended car alarm is annoying, and unattended anger can be damaging. However, even if unattended anger is harmful this doesn’t mean anger itself is harmful. Unattended alarms are annoying but alarms are useful. Unattended anger may be harmful but anger is useful: anger draws attention to the need to do something. According to Nussbaum anger should focus “on future welfare from the start. Saying ‘Something should be done about this’”. (8) If we accept that anger should be attended to, be transmuted, then it seems to me Griswold’s transactional idea of forgiveness is in trouble because the transactions involve payback, which seems to me to be related to un-transmuted anger.

If we forgive someone and we do not adopt Griswold’s ideas on transactional forgiveness, are we forced somewhat reluctantly to conclude that our forgiveness should be unconditional? I don’t believe we are. What does it mean to forgive? If we define forgiveness simply as relinquishing anger and its associated desire for revenge, then a commitment to transitional anger also means a commitment to unconditional forgiveness. It would mean that even if John remains quite happy with the fact that he has morally harmed Sue, and remains prepared to do so again, Sue forgives him unconditionally once she transmutes her anger. However, forgiving someone might also be defined as the normalisation of relations between the forgiver and the forgiven. Transmuting anger in this context doesn’t simply mean moving on. Transitional anger means looking to the future, but it also means looking back to the past; past wrongdoing cannot be ignored, after all it is the reason why we must look to the future. This approach doesn’t of necessity involve a formal transactional process involving payback. It does however mean that certain minimum conditions not involving payback must be met. Relations cannot be normalised if a wrongdoer disputes the facts or the wrongness of his action. In this situation victims are entitled to protect themselves by withholding trust. Trust is an essential part of normal human relations; if someone is always wary of another their relationship cannot be said to be a normal one. Protecting oneself doesn’t need to involve payback. It follows that forgiveness requires the wrongdoer to accept responsibility for the act and acknowledge its wrongness for normal relations to be restored. It further follows that if someone accepts transitional anger, her acceptance does not commit her to unconditional forgiveness, which might harm the wrongdoer.

1.    Charles Griswold, 2007, Forgiveness, Cambridge University Press.
2.    Martha Nussbaum, 2016, Anger and Forgiveness, Oxford University Press, Chapter 3.
3.    Nussbaum, chapter 3.
4.    Agnes Callard, 2020, On Anger, Boston Review Forum, page 15.
5.    List of Griswold’s conditions as outlined by Nussbaum.
·       Acknowledge she was the responsible agent. Repudiate her deed (by acknowledging its wrongness) and herself as the cause.
·       Express regret to the injured at having caused this particular injury to her.
·       Commit to becoming a better sort of person who does not commit injury, and show this commitment through deeds as well as words.
·       Show how she understands, from the injured person’s perspective, the damage done by the injury. Offer a narrative account of how she came to do the wrong, how the wrongdoing does not express the totality of the person and how she became worthy of approbation.
 
6.    Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 51.
7.    Michael S. Brady, 2013, Emotional Insight: The Epistemic Role of Emotional Experience, Oxford University Press.
8.    Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 54.



Friday, 28 October 2016

Montgomery and the Information needed for Valid Informed Consent


In the light of the Montgomery case the Royal College of Surgeons has warned the NHS that failure to fully implement informed consent rules opens the way to more litigation. In this case the court held that doctors must ensure patients are fully aware of any and all risks that an individual patient, not mainstream medical practice, might consider significant. This judgement appears contrary to the judgement given in the Bolam case, which held that the information necessary for consent to be considered valid was the information most doctors would consider necessary. In other words, the medical profession could act in a paternalistic manner with regard to the amount of information it provided.

I have some sympathy for the Montgomery decision because I believe that patients rather than their doctors should decide how much information they need to make informed consent decisions. Nonetheless I also believe there is now a clear danger of some patients being over informed, making it hard for them to make informed consent decisions. There is a difference between doctors simply acting paternalistically and doctors acting paternalistically when asked to do so. In this posting I will argue that it is possible to over inform some patients and that it is possible to give adequate consent on limited information.

In making my argument I will use an example provided by Steve Clarke.

“Consider the case of ‘Squeamish John’. Squeamish John cannot bear to hear the details of medical procedures; hearing these make him feel weak at the knees and dramatically diminishes his capacity to make sensible decisions. Nevertheless he does not wish to abrogate responsibility for his decision about whether or not to undergo an operation. Squeamish John wishes to participate in a restricted informed consent process in order to make his decision. He wishes to make a decision based only on the disclosure of the risks and benefits of the operation couched in cold, impersonal, statistical language. He does not wish to have any significant details of the procedure described to him.” (1).

In addition, let us assume John is lying in a hospital bed suffering from type 2 diabetes and needing a leg amputation. Let us assume John gives consent in the restricted way outlined by Clarke and that he regains consciousness minus one leg. Intuitively such a situation seems very wrong. Nonetheless I would argue it is possible for John to make an autonomous decision based on restricted information. John is making a decision to trust his doctor, and it is possible to make an autonomous decision based solely on trusting the advice of another. If I trust the advice of my lawyer or financial advisor I am making an autonomous decision I can identify myself with. Are then doctors any less trustworthy than lawyers or financial advisors? It seems obvious to me that they are not. Does then the context in which informed consent takes place differ from other contexts, such as the law and finance, in respect of an agent’s ability to make autonomous decisions? Once again I would suggest it does not. It follows that if squeamish John is permitted to make a decision in the way he prefers it would be an autonomous decision.

However, let us put questions of autonomy to one side. Let us assume John’s doctors follow the Royal College of Surgeons’ advice and do not allow him to make his informed decision based on restricted information. Let us further assume John refuses to listen to or read most of the information they provide. In this situation it seems to me that John’s doctors have two available options. First, they might discharge him simply because he won’t listen, knowing that his discharge will probably lead to his death. In less life-threatening cases than that of John this would be the probable outcome. It seems wrong to condemn someone to die simply because he won’t listen. Secondly, they might decide his refusal to listen to all the details of his projected operation makes him incompetent to give consent. This decision would have to be validated by the courts. Let us assume this decision is validated and a decision on John’s treatment is made by a surrogate. This surrogate should make a decision based on John’s best interests, and his best interests will be decided by his doctors. The outcome would be identical to that reached if it was accepted that John could make a valid informed consent decision based on trusting his doctors.

I accept my example is an extreme one, but I believe it nonetheless raises some interesting questions. First, the Montgomery case seems to show that informed consent is not simply based on respect for autonomy. I have argued it is perfectly possible to make an autonomous decision based on trusting the advice of another person. Some patients at a very stressful time might not want to make extremely complicated decisions and would prefer to make simple autonomous decisions. The Montgomery decision seems to deny them the possibility of making such decisions. The Montgomery judgement seems to require that much more information be provided in order to make a valid informed consent decision as opposed to an autonomous decision. Secondly, if doctors must ensure that patients are fully aware of any and all risks involved in their procedure, must doctors ensure patients understand these risks, including the probabilities involved, or only that they have the capacity to understand them? Must doctors ensure that their patients listen intently to or fully read the information provided? Lastly, even if doctors can be sure that patients understand the information, should they also insist patients actually use it when making decisions? In conclusion I believe there are problems connected to the Montgomery case’s requirement that patients must be aware of any and all risks involved in their procedures. I also believe patients must have the possibility of becoming aware of any and all risks involved in their procedures, but that this awareness should be driven by patients’ needs. It seems this awareness is at the moment being driven by fear of litigation rather than any genuine concern for patients’ real needs.


  1. Steve Clarke, 2001, Informed Consent in Medicine in Comparison with Consent in Other Areas of Human Activity, The Southern Journal of Philosophy, 39, page 177.

Tuesday, 4 October 2016

A Duty to permit Assisted Suicide?



In previous postings I have argued that we should accept that terminally ill people have a right to die and that we should respect that right by accepting assisted suicide. My arguments were based on respecting autonomy, and of course respecting autonomy involves duties. However, in this posting I want to focus more directly on duties. I will argue that we have a duty not to cause terminally ill people who are suffering to continue existing against their will. We have a duty not to force innocent people to endure pain in order to protect the vulnerable; surely the vulnerable can be protected in better ways. My argument will be based on the premise that we have a duty not to bring into existence any being which would find its life not worth living.


Let us accept the above premise without argument. I now want to suggest that the duty not to bring into existence any being we think would not find its life worth living is analogous to a proposed duty not to cause any being to continue to exist against its will if its life is not worth living. Accepting this analogy would have implications for using animals in medical research, but in the following discussion I will limit my argument to assisted suicide. It might be objected that my suggested analogy fails for two reasons. First, it might be objected that by refusing to grant the right to assisted suicide to these people we do not cause them to lead lives which are not worth living. Secondly, it might be objected that even if some people do experience lives which are not worth living, this would be better rectified by changing the conditions of those lives rather than by making assisted suicide available to such people. I will deal with each of these objections in turn.

Let us accept that we have a duty not to bring into existence any being we think would find its life not worth living. We have a duty not to cause the existence of such lives. My objector might accept this premise. We shouldn’t enslave or torture people, for instance. But he might argue that we don’t cause terminally ill patients or prisoners serving life sentences to lead lives not worth living, and as a result my analogy fails. The cause of their misfortune is disease or past crimes. He might then proceed further by suggesting that even if we are a partial cause of the type of lives some people live, a partial cause doesn’t give rise to a duty. Let us accept that my objector does accept that he has a duty not to cause a child to come into existence who wouldn’t have a life worth living. Let us assume this child wouldn’t have a life worth living due to some genetic defect. It follows that anyone who permits such a child to come into existence is only a partial cause of the child not having a life worth living. It would appear my objector must accept either that our partial causation of some event can incur duties or that there is nothing wrong with causing a child to exist when he will not have a life worth living due to genetic defects. In the light of the above example my initial premise might be amended as follows. We have a duty not to be the partial cause of the existence of any being which wouldn’t have a life worth living. If someone accepts my amended premise, then it might be argued by analogy that we also have a duty not to be the partial cause of someone continuing to live a life he doesn’t find worth living.


At this point my objector might raise a second objection to my analogy. He might point out that in my amended premise we only have a binary option of causing or not causing existence. He might further point out that for both those suffering from terminal illnesses and prisoners serving life sentences other options are available. For terminally ill patients we could improve palliative care, and for prisoners serving life sentences we might improve penal conditions. I accept my objector’s point and accept that, provided other options are available which would allow both of these categories of people to live lives they would find worth living, my analogy fails. I also accept that improvements in palliative care and prison conditions are desirable and should be carried out. However, I do not accept that such improvements always mean we are not the cause of making someone live a life he finds not worth living. Simply removing pain from a terminally ill patient’s life doesn’t mean he has a life worth living. We can remove all pain from someone by putting him in an induced coma for the rest of his life. Would such a patient really be alive? I would argue that if someone is unconscious and will never regain consciousness he is in a state equivalent to being dead; he is certainly not living any sort of life at all. Whether it is possible to remove almost all the pain from all conscious terminally ill patients, so that pain by itself doesn’t mean they don’t have lives worth living, is an empirical question. Personally I doubt whether this will be possible in all cases, but I will not pursue the point here. However, even if we could reduce pain to acceptable levels for all terminally ill patients, it does not follow that they have lives which they believe are worth living. A life worth living is not just a question of having a relatively pain free conscious existence. Is simply existing really living?
A very limited lifespan together with vastly impaired capabilities might well mean some such people find their lives lacking all meaning, find their lives not worth living. I would suggest that anyone who suggests otherwise might be accused of epistemic arrogance. It follows that even if palliative care was much improved there would still be some terminally ill patients living lives which they would find not worth living. It might also be argued that much improved prison conditions don’t automatically mean prisoners serving life sentences always find their lives worth living. Some such prisoners might suffer from remorse which makes their lives not worth living. Indeed, better penal conditions might increase such prisoners’ propensity to suffer remorse. Other such prisoners might find the impossibility of freedom makes their lives meaningless, not worth living. It again follows that improved penal conditions would not mean all prisoners serving life sentences would have lives they considered to be worth living.

In the light of the above it appears that, if we accept the premise that we have a duty not to bring into existence any being which would find its life not worth living, then we also have a duty not to cause people to continue to exist if they have lives not worth living. It follows that we should permit assisted suicide to those suffering from terminal illness. Accepting the above might also mean that some patients with a terminal diagnosis who find their lives worth living might better enjoy those lives if they had the reassurance that, should their lives become unbearable, they could be helped to end them, removing their worries about how those lives might end.
