Wednesday, 7 June 2017

Autonomy and Beneficence


In this posting I want to investigate what we mean by autonomy and the relationship between autonomy and beneficence. I will first examine two different accounts of autonomy, briefly outlining the differences between a content neutral account of autonomy and a substantive one. I will then raise some difficulties with accepting a substantive account of autonomy. Next I will examine the relationship between a content neutral account of autonomy and acting beneficently. I will conclude that preference should be given to respecting autonomy over acting beneficently when these two values clash. I will then consider what specifically makes rape and slavery so wrong in order to support my conclusion. Lastly I will examine the implications of accepting this conclusion for the doctrine of informed consent, the age at which someone should be able to vote and the right of the disabled to make their own decisions.

John Stuart Mill set out the only way in which power can be rightfully exercised over another.

“The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant.” (1)

Mill’s principle can be used to provide a definition of a content neutral account of autonomy. An autonomous person should always be free to exercise his will, provided this exercise doesn’t harm others. Such a definition of autonomy might be classed as a primitive or basic account. In what follows a basic account of autonomy will refer to this Millian account. It might be suggested that advances in technology and medicine mean that such a basic account is an outdated one, and that these advances mean a more substantial account of autonomy is required: a substantive account. A substantive account of autonomy places some constraints on what an autonomous person can autonomously choose even when his choices don’t harm others. For someone’s decision to be autonomous it must accord with certain accepted norms in addition to the norm of not harming others.

I now want to present two arguments against adopting a substantive account of autonomy. My first argument will suggest that adopting a substantive account of autonomy makes autonomy a superfluous concept. Let us consider a substantive account of autonomy in which someone’s decisions are only accepted as autonomous ones provided they would be regarded as reasonable by most reasonable people. It might be objected that it would be hard to define which people are reasonable and what such reasonable people might find to be reasonable. For the sake of argument let us ignore this objection. Why does autonomy matter? It matters because it differentiates between those decisions we should always respect and others. Let us accept that an autonomous choice should be a reasonable one which would be accepted by most reasonable people. In this situation it seems that to talk of respect for autonomy becomes mere rhetoric; the concept of autonomy is doing nothing useful. Someone wanting to know which decisions he must respect, by deciding if they are autonomous decisions, must first know which decisions would be accepted by most reasonable people. I would argue that this is all he needs to know in order to respect someone’s autonomy. In this situation we simply don’t need the concept of autonomy. I would suggest the same argument can be applied to any other norms which might be used in any other substantive account of autonomy, such as someone’s best interests, acceptability to society or giving no offence to religious standards. It appears to follow that those who advocate the need for a more sophisticated account of autonomy, a substantive one, make the concept of autonomy a redundant one.

I have argued above that respecting someone’s autonomy cannot simply mean acting to further his best interests because if it did the idea of respecting autonomy would do no useful work. I might act in my dog’s best interests but this doesn’t mean I respect him or believe he is an autonomous dog. Caring about someone or something doesn’t seem to be the same as respecting autonomy. What then is meant by respect? Respecting someone doesn’t simply mean saying nice things about him, for if this were so there would seem to be no difference between respect and flattery. I would suggest respecting someone involves admiration and that admiration is linked to valuing. For instance, if I respect someone I might do so because I admire his honesty, determination or his ability to make good decisions. I can admire someone because he is a certain kind of person, but what qualities do I admire and value if I admire someone simply as a person? What is a person? Christine Korsgaard argues that a person is not identical to the human being the person supervenes on. She suggests that,

“When you deliberately decide what sort of effects you will bring about in the world, you are deliberately deciding what sort of cause you will be. And that means you are deciding who you are.” (2)

It would be difficult to call anyone who was unable to make any decisions a person. However, whilst the ability to make decisions is a necessary condition for personhood it isn’t a sufficient one. Someone who makes all his decisions randomly or based on mere whims might be regarded as a wanton according to Frankfurt (3). To be a person someone must have the capacity to make decisions based on what he cares about or values. What does the above tell us about respecting someone simply as a person? It would seem that if respect involves admiration then respect for a person involves admiration for a creature which can make his own decisions based on what he cares about. I have suggested that respect, admiration and valuing are linked. I would now suggest that respecting a person requires accepting his decisions. If we don’t do so, our supposed admiration and valuing of him as the kind of creature who can make his own decisions based on what he values becomes mere rhetoric.

A defender of a substantive account of autonomy might object to the above and argue that we can respect someone’s autonomy by respecting him as the sort of creature that can make some of his own decisions. He might proceed to argue we need only accept those of his decisions which don’t harm his best interests. He might suggest that by doing so we are still respecting his autonomy, just according it less importance. I would reject such a suggestion and will now present two arguments to support my rejection.

Firstly, let us assume that we can respect someone’s autonomy by only accepting those of his decisions which are in his best interests. It might then be argued that provided we do so we are still respecting him as a person. But if we do so are we respecting someone simply as a person or as a particular kind of person? I can respect someone simply as a person whilst at the same time failing to respect him as a particular kind of person because, for instance, I believe he is a bad parent. It seems if we respect someone’s autonomy by only accepting those of his decisions which are in his best interests, we are respecting a particular kind of person. We are only respecting those persons who make good decisions. Alternatively, we might only respect someone when he makes good decisions. Does this matter? Let us assume we only respect the autonomy of those people who make good decisions and that we should adopt a beneficent attitude to those who don’t. I have argued above that what defines someone simply as a person is his ability to make his own decisions, to shape his life. It appears if we only respect the autonomy of people who make good decisions that we fail to recognise some people simply as persons. Next let us assume that we only respect someone’s autonomy when he makes what we regard as good decisions. If we do so I can employ the argument used above and question whether respect for autonomy really does any useful work. It follows if we respect people’s autonomy by only accepting those of their decisions which we think are in their best interests that either we won’t be respecting some people simply as persons or we will only be respecting people as part time persons.

I now want to argue that if we respect someone’s autonomy by only accepting those of his decisions which we believe to be in his best interests, we aren’t acting in a fully caring way. It might be objected that we only act in this way because we really do care about people. In response I would suggest that in this situation, because we decide what is in someone’s best interests, we might be accused of epistemic arrogance. However, let us lay this suggestion aside and assume that respecting someone’s autonomy in this way doesn’t mean we are exhibiting epistemic arrogance. I would still suggest that this form of caring is a deficient one. I accept that if we act in such a way we are acting sympathetically, but I would argue we aren’t acting empathically. True empathic care means we must care about what someone cares about rather than simply what we believe to be in his best interests. Someone might suggest sympathetic caring is as good as empathic caring. I would reject such a suggestion. I can care about someone or something sympathetically simply because I want him to flourish. This is the way someone might care for a dog he loves. Empathic caring isn’t so simple. If I care about someone empathically I must care about what he cares about in addition to what I believe is in his best interests. Empathic caring is a more complicated way of caring than caring based on sympathy. However, because something is more complicated doesn’t automatically mean it is better. People don’t want to be cared for in the same way as dogs. But why? Surely it’s good to be loved, cherished and beneficently cared for? People don’t want to be cared for in the same way as pets because they value being recognised as persons, and this requires recognising them as the kind of creatures who can decide their own future. It follows that if we care about people as persons we must care about what they care about, and this requires caring about them in an empathic way.

Even if the above is accepted an objector might argue that whilst, if we care about someone empathically, we must always care about what he cares about, in some situations we should give priority to acting beneficently. This argument supposes a particular concept of beneficence. This concept holds that to act beneficently is to act in someone’s best interests. It also holds that to act beneficently towards someone doesn’t always mean acting in what he perceives to be in his best interests. This means we must act in accordance with some accepted standard, perhaps a standard that most reasonable people would accept. But if we act beneficently in this way who are we acting beneficently towards? We are certainly acting beneficently towards human beings, but are we acting beneficently towards persons? We are acting as if someone can be a part time person. It might be objected that there can’t be such a thing as a part time person. I find this objection unconvincing. Children can make some decisions for themselves whilst their parents make others in their best interests. Children might be regarded as part time persons. Nonetheless I would suggest that most adults don’t want to be part time persons; being a person is central to them. Perhaps this is one reason why children want to grow up. Being a person is central to most people’s interests. Can we be said to be acting truly beneficently towards someone if we are prepared to ignore what he perceives to be central to his interests? I would suggest we can’t. If we accept the above, it follows that acting truly beneficently requires acting in accordance with someone’s perceived best interests and not what we perceive to be in his best interests. It further follows that if we act in a way that serves someone’s best interests, as we see them, we are acting in a caring way, but our caring, even if well intentioned, is an incomplete form of caring.

At this point a further objection might be raised. It might be suggested that I’m presenting a misleading view of substantive autonomy. A substantive account of autonomy might be better defined as an account that places some constraints on what an autonomous person can choose, even when his choices don’t harm others, in some limited circumstances, whilst in all other circumstances we should respect his choices. My objector might agree that in the past a basic account of autonomy was sufficient to protect our freedoms. He might now suggest that technological progress and modern medicine mean we need a more sophisticated account of autonomy and that a substantive account satisfies this need. I have questioned above whether any substantive account of autonomy is actually an account of autonomy. I would suggest that any such proposed account is in reality an account of how to balance respecting and caring about someone: how to balance respecting autonomy and acting beneficently. My objector might suggest that it is perfectly legitimate to balance these two. In response I would argue that whilst someone might well have a legitimate aim of respecting autonomy and acting beneficently when these two values don’t clash, any clash of these values depends on a particular account of beneficence. To act beneficently according to this account is to act in someone’s best interests, and this doesn’t always mean acting in what he perceives to be in his best interests. I have argued above that acting in this way is a deficient and incomplete form of beneficence.

I have argued that we should reject a substantive account of autonomy for two main reasons. Firstly, if we adopt a substantive account of autonomy this account makes itself redundant. Secondly, if we adopt such an account we are not acting in a truly beneficent or caring way. Accepting the above means we must always accept someone’s basic autonomous decisions. It also means we cannot ignore such decisions or coerce someone into changing such a decision. Accepting the above also means we must sometimes accept bad decisions. Autonomous decisions needn’t be good decisions. In such cases we should attempt to persuade the decision maker to change his decision when it is unwise; however, if our persuasion fails we must be prepared to accept the decision.

I now want to consider what’s wrong with slavery. It might be argued that the wrongness of slavery is self-evident: slaves are abused and cruelly treated. However, R M Hare (4) used a thought experiment to show this need not always apply. He imagined an island called Juba which was ruled by a benevolent elite for the good of all, with no abuse or cruel punishments. He also imagined an island called Camaica on which everyone was free but all lived in abject poverty. He speculated that some free citizens of Camaica might prefer to be slaves on Juba. If we accept such a situation is possible, even if unlikely, and we believe slavery is wrong, what reasons can we advance for this wrongness? What is wrong is that the slaves on Juba are not regarded as the kind of creatures who can determine their own future, and this harms them because, as I have argued above, for any person the ability to determine his own future is central to his interests. However, I will now argue that the concept of autonomy violated is our basic concept of autonomy. Is it conceivable that the substantive autonomy of the slaves on Juba could be respected? A substantive account of autonomy might allow a slave’s decisions to be accepted as autonomous ones and respected provided they are in his best interests, whilst any decisions a slave makes which aren’t in his best interests aren’t regarded as autonomous ones. If we accept a substantive account of autonomy, then the autonomy of the slaves on Juba would be respected. The slaves on Juba would be treated as children or part time persons. Wasn’t colonialism a bit like this? Beneficent colonialism resembled Hare’s imaginary Juba and treated the people colonised as children or only part time persons.

Let us now explore the wrongness of rape using a thought experiment similar to that of Hare. Let us consider a gentle rapist and a compliant victim. The physical harms caused by such a gentle rape are minimal; nonetheless the crime doesn’t seem to be a minor one to us. What reasons can be advanced for the seriousness of a physically gentle rape? It might be pointed out that the harm lies not in the violence inflicted but in the threat of violence, the violation of bodily integrity or both of these harms. I accept these points. Let us consider the violation of bodily integrity first. The simple fact that the victim’s body was penetrated is irrelevant; this could occur during consensual intercourse. What matters is that her body was penetrated against her will, and this involves failing to respect her autonomy. Let us now consider the threat of physical harm causing psychological harm. Let us assume the victim is aware that she will not be harmed provided she complies. She complies and is raped. She isn’t physically harmed and, because she complied, she had no reason to fear physical harm, so any psychological harm is not due to fear of being physically harmed. In spite of this I would argue psychological harm occurs. It occurs because she isn’t seen as the kind of creature who has a right to decide what to do with her own body; she isn’t considered as a person; her basic autonomy isn’t respected.

If we accept a non-substantive or basic account of autonomy as the only meaningful account of autonomy, what implications does this have for the doctrine of informed consent? Is the doctrine of informed consent based on respect for autonomy? If the doctrine of informed consent is based on a substantive account of autonomy, then I would suggest the doctrine isn’t actually based on respecting autonomy, for the reasons given above. In this situation the doctrine of informed consent is concerned with balancing acting beneficently and respecting autonomy. The concern is to stop people making bad decisions rather than to respect autonomous ones. This balancing act assumes beneficent care means acting in someone’s best interests as seen from a particular vantage point, perhaps what most reasonable people would consider to be in someone’s best interests. I have argued above that such a concept of acting beneficently is an incomplete one and that true beneficence requires always accepting basic autonomous decisions. Autonomous decisions don’t have to be good decisions. However autonomous decisions are not made randomly or based on mere whims. Autonomous decisions are based on what the decision makers care about, on what matters to them.

Accepting the above has important implications and I will now briefly examine three of these. First, we might divorce the doctrine of informed consent from respecting autonomy and simply say that the doctrine is concerned with furthering patients’ best interests. This would be an honest approach. However, if we do so, when we ask patients for their consent are we really asking for consent or for acquiescence? Alternatively, we might accept that the doctrine of informed consent is based on respect for basic autonomy. If we do so it seems to me that a patient can make an autonomous decision simply to trust his doctor’s advice; after all we trust lawyers, accountants and other professionals all the time. It also seems that the information needed to make a basic autonomous decision is less than that currently supplied when taking informed consent. This might have more to do with a fear of litigation than with a misguided concept of autonomy, see montgomery and the information needed for informed consent. The information required for informed consent should be patient driven and be determined by how much information he needs and wants to make an autonomous decision. Secondly, democracy depends on voters’ ability to make an autonomous decision. If we accept a basic concept of autonomy, then perhaps the voting age should be lowered, perhaps to the age needed to give sexual consent. Lastly, the United Nations convention on the rights of persons with disabilities wants more people with cognitive and psychosocial disabilities to make their own decisions, see United Nations. Let us accept that an autonomous person has the right to make his own decisions. It follows that how many people with cognitive and psychosocial disabilities should be able to make these decisions depends on the concept of autonomy employed. If, as I have suggested, a basic concept is employed then more disabled people should be able to make their own decisions, as an autonomous decision is not the same as a good decision. The emphasis should be on helping such people make good decisions rather than making good decisions on their behalf.



  1. John Stuart Mill, 1974, On Liberty, Penguin, page 69.
  2. Christine Korsgaard, 2009, Self-Constitution, Oxford University Press, page 1.
  3. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 114.
  4. R. M. Hare, 1978, What is Wrong with Slavery, Philosophy and Public Affairs 8.
  5. Matthew Burch, 2017, Autonomy, Respect and the Rights of Persons with Disabilities in Crisis, Journal of Applied Philosophy, Vol 34(3).

Tuesday, 2 May 2017

Widespread Moral Enhancement


In my last posting I examined whether we should morally bio-enhance psychopaths. I concluded that we should encourage such enhancement. Ingmar Persson and Julian Savulescu argue that there is a need for a much more widespread moral enhancement in order to counter the existential dangers our modern world poses (1). They argue that because our morality developed in small communities it is unsuitable for combatting these dangers. I accept that there is a need for such enhancement. In this posting I want to examine how widespread such enhancement needs to be in order to be effective and how such enhancement might be implemented.

Some might argue that if we change our society by becoming more tolerant then we will naturally morally enhance the members of society. If someone lives in a brutal society then she is more likely to act in a brutal manner, whilst if she lives in a tolerant society her toleration is likely to increase. Steven Pinker argues that this is already happening (2). I believe society can change people, enhance people, but that this change is extremely slow. The existential dangers we face are pressing, and moral enhancement by creating a more tolerant society seems likely to be too slow to combat these dangers.

Persson and Savulescu favour moral bio-enhancement. According to them, provided such enhancement is proven to be safe, then,

“some children should be subjected to moral bio-enhancement, just as they are now subjected to traditional moral education.” (3)

What exactly do Persson and Savulescu mean by moral bio-enhancement? They argue that moral bio-enhancement should seek to increase our dispositions for altruism and justice. They argue it should do so by making,

 “men in general more moral by bio-medical methods through making them more like the men who are more like women in respect of sympathy and aggression, but without the tendency to social forms of aggression.” (4)

Such bio-enhancement is aimed at changing our dispositions in respect of empathy or sympathy but does not seek to change our cognitive abilities. Let us accept that such enhancement is safe. I now want to examine two questions regarding this form of enhancement: first, is it likely to be effective, and secondly, should such enhancement be mandatory or voluntary?

If we simply enhance our disposition for empathy is such an enhancement likely to combat the dangers facing us? Some have argued that enhancing someone’s empathy simply increases the degree of empathy she feels, but doesn’t expand the domain of her empathy. Paul Bloom questions the benefits of empathy by suggesting that increasing people’s empathy is more likely to increase tension between different groups than diminish it (5). If we accept Bloom is correct then we have reason to believe moral bio-enhancement based solely on enhancing our capacity for empathy would not be very effective. However, I believe there are reasons why dual enhancement involving both our capacity for empathy and our cognitive abilities might be more effective, see moral character enhancement. It seems possible that if we enhance our cognitive abilities whilst at the same time enhancing our capacity for empathy, such dual enhancement might lead to a broadening of the domain of our moral concern. Bloom holds that it is useful to compare empathy with anger.

“Both are universal responses that emerge in childhood. Both are social, mainly geared toward other people, distinguishing them from emotions such as fear and disgust, which are often elicited by inanimate beings and experiences. Most of all, they are both moral, in that they connect to judgments of right and wrong.” (6)

Judgments are based on the way we view some situation, and the way we view some situation depends to some degree on our cognitive abilities. It follows that if judgments are similar in some way to empathy, empathy might also depend to some degree on our cognitive abilities. In the light of the above it might be sensible to also enhance our cognitive abilities if we are going to enhance our capacity for empathy. I would therefore suggest that provided it can be shown that cognitive enhancement enlarges the domain of our empathy, any moral bio-enhancement should be dual enhancement.

Let us accept that dual moral bio-enhancement is desirable and that the means of such enhancement are safe. In these circumstances should such enhancement be mandatory or voluntary? In my previous posting I argued that any moral bio-enhancement of psychopaths should be voluntary in order to respect their autonomy. I will now argue the same is true of more widespread moral bio-enhancement. It might be objected that the need to counter the threats posed by climate change and nuclear armageddon should trump respecting autonomy. Indeed, my objector might point out that if we don’t deal with these existential threats there will be few people left whose autonomy we should respect. In response I would suggest that there is no need to make moral bio-enhancement mandatory in order to counter these threats. It has been assumed that such enhancement has been thoroughly tested and proved to be both safe and effective. In these circumstances it might appear that any decision about becoming morally bio-enhanced is simply a no brainer. Surely we all want to be good people? In response my objector might point out that vaccines have been thoroughly tested and proved to be both safe and effective, and in spite of this some people refuse to have their children vaccinated even though they desire that their children enjoy good health. She might then argue by analogy that much the same would apply to any moral bio-enhancement. I am prepared to accept that my objector is correct in her assessment that some people would not voluntarily morally bio-enhance themselves. However, I will now argue that her analogy is unsound. For any vaccination program to be effective a high percentage of the population needs to be vaccinated. For moral bio-enhancement to be effective in countering existential threats, I would suggest that in a democracy only a majority of people need take such enhancement: a majority is all that is needed to enact legislation to counter these threats. I would further suggest that provided moral bio-enhancement is proven to be safe and effective a majority of people would take it. It follows that even if a substantial minority refuse to take such enhancement there is no need for it to be mandatory.

My objector might now raise a further objection. She might argue that the cost of such enhancement might deter a majority of people from taking it. If the costs of any bio-enhancement are high then I am prepared to accept my objector’s objection, but I am doubtful whether in practice such costs would be high. If the majority of the population take such enhancement, then these large numbers should lower the costs. However, let us assume I am wrong and that the costs would be high. Let us accept that civilised society has a duty to protect both itself and its citizens from anarchy and possible destruction. It follows that if society faces anarchy and destruction due to existential threats which could be avoided by moral bio-enhancement, were the costs of such enhancement lower, then society should subsidise or freely provide moral bio-enhancement. In addition, such enhancement would carry further benefits for society. If someone is morally bio-enhanced, then it seems probable that she will be less likely to commit crime. More fancifully, moral bio-enhancement might reduce the threat of terrorism. Reduced crime would be a saving for society. It follows that society has financial incentives to encourage moral bio-enhancement. In the light of the above it seems improbable that cost is going to prevent the majority of people taking moral bio-enhancement provided it is safe.

In the above it has been assumed that moral bio-enhancement is safe. This assumption may be false because all drugs have some side effects. In these circumstances we would still be faced with existential threats and a morality which seems incapable of addressing these threats. There is, however, a further alternative we might consider. Perhaps we might use algorithms to guide our decision making in response to these threats. It might be objected that the use of algorithms threatens our autonomy. In response I would argue that whether this threat is meaningful depends on how we use any such algorithms. I am not suggesting we simply use algorithms to make these difficult decisions for us but rather to guide our decision making. I am suggesting that we might possibly use algorithms to assist us in making moral decisions. Such assistance should be interactive and the algorithms in question might evolve in response to our interactions, as the sketch below illustrates. I have dealt with algorithmic assisted moral decision making at greater length in a previous posting. Perhaps using algorithms in such a way does not threaten our autonomy.
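To make the distinction between guiding and deciding concrete, here is a minimal sketch in Python of what interactive, advisory decision making might look like. It is only an illustration under my own assumptions: the weighted-considerations model and all the names in it are hypothetical, not a real system or a worked-out moral theory. The point is simply that the algorithm ranks options and offers guidance, the human always makes the decision, and the algorithm’s weights evolve in response to those decisions.

# A hypothetical, minimal sketch of algorithm-assisted moral decision making.
# The algorithm only advises; the human decides; the advisor evolves in
# response to the human's decisions. All names and numbers are illustrative.

class MoralAdvisor:
    def __init__(self, weights):
        # weights: how much importance the advisor currently attaches to each
        # moral consideration, e.g. "harm_avoided" or "fairness"
        self.weights = dict(weights)

    def advise(self, options):
        # options: {option_name: {consideration: score}}
        # Returns option names ranked by weighted score, as guidance only.
        def total(scores):
            return sum(self.weights.get(c, 0.0) * s for c, s in scores.items())
        return sorted(options, key=lambda name: total(options[name]), reverse=True)

    def learn(self, options, chosen):
        # Nudge the weights towards the considerations that favoured the
        # human's actual choice, so the advisor evolves with its user.
        for consideration, score in options[chosen].items():
            self.weights[consideration] = self.weights.get(consideration, 0.0) + 0.1 * score

advisor = MoralAdvisor({"harm_avoided": 1.0, "fairness": 0.5})
options = {
    "act": {"harm_avoided": 0.8, "fairness": 0.6},
    "refrain": {"harm_avoided": 0.1, "fairness": 0.2},
}
print(advisor.advise(options))        # guidance, not a verdict
advisor.learn(options, chosen="act")  # the human's decision updates the advisor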

  1. Ingmar Persson & Julian Savulescu, 2012, UNFIT FOR THE FUTURE, Oxford University Press.
  2. Stephen Pinker, 2011, The Better Angels of our Nature, Viking.
  3. Persson & Savulescu, page 113.
  4. Persson & Savelescu, page 112.
  5. Bloom, Paul. Against Empathy: The Case for Rational Compassion (pp. 207-208). Random House.
  6. Bloom, page 207.



Monday, 10 April 2017

Psychopaths and Moral Enhancement

 

Michael Cook questions whether psychopaths should be morally bio-enhanced. This posting will examine his question, and in attempting to answer it I will address several related questions. A psychopath might be roughly defined as someone who lacks feelings for others and has no remorse about any of his actions, past or present. A psychopath is someone who, even if he understands moral requirements, does not accept these requirements. In this posting it will be assumed that moral bio-enhancement should be focussed on this non-acceptance. The first related question I want to address is whether a psychopath’s non-acceptance of moral norms is a form of disability. Secondly I will consider whether any moral bio-enhancement of psychopaths should be mandatory; I will argue it shouldn’t. Thirdly I will consider whether we have a moral duty to offer moral bio-enhancement to someone convicted of some crime due to his non-acceptance of moral norms; I will argue we do. Lastly I will suggest that if it is permissible to offer moral bio-enhancement to psychopaths there is no reason not to permit moral bio-enhancement more generally.

Let us accept that if someone suffers from a disability and we can mitigate the effects of his disability, we have a prima facie duty to do so provided the costs associated with so doing are not too onerous. Let us also accept that some form of safe moral bio-enhancement becomes possible; such safe enhancement is unavailable at the present time. It appears to follow in such circumstances, provided that a psychopath’s failure to accept moral norms is a form of disability, that we have a prima facie duty to mitigate the effects of this disability. It further appears to follow that if we can only mitigate this disability by bio-enhancement, we have a duty to do so provided such enhancement is safe. Is a psychopath’s non-acceptance of moral norms a disability? Most psychopaths are able to understand moral requirements, so their failure to act in accordance with these requirements is not caused by an inability to understand moral norms. It appears to follow that a psychopath’s non-acceptance of moral norms is not a disability. This appearance is too simplistic. Let us accept that most psychopaths can understand moral norms even if they don’t accept these norms. This lack of acceptance might be due to an inability to feel the force of moral norms, and this inability to feel should perhaps be classed as a disability. It follows that a psychopath’s failure to accept moral norms might be regarded as a disability.

Does this moral disability matter? I will now argue that whether it matters depends on the context. It has been suggested that some CEOs of large companies have psychopathic tendencies. Having psychopathic tendencies might be seen as enhancing by a CEO whilst the same tendencies might be seen as a disability by someone if they lead to him being imprisoned for some crime. I argued above that if someone suffers from a disability and we can mitigate the effects of his disability, we have a moral duty to do so, provided the costs associated with doing so are not too onerous. It follows that if a psychopath lives in circumstances in which his condition might be classed as a disability he should be morally bio-enhanced. This enhancement should only take place subject to the provision that the means used are safe and the costs involved aren’t too onerous.

The above conclusion needs some clarification. A psychopath who is the CEO of a large company might not want to be morally enhanced even if his condition disables him in some social contexts. I would suggest that we only have a duty to offer moral enhancement to psychopaths. It might be objected that my suggestion is too weak. My objector might point out that some psychopaths damage society and other people. He might proceed to argue that for such people moral enhancement should be mandatory rather than voluntary due to the need to protect society. I accept that we need to protect people and society from psychopaths but I do not accept we must do so by means of mandatory biomedical moral enhancement. We can protect society from those psychopaths who harm it by restricting their freedom. Let us assume there is a safe bio-medical form of enhancement which prevents psychopaths from committing crimes due to their condition. My objector might now argue that mandatory moral bio-enhancement is both a cheaper and a more humane way of treating psychopaths who have committed crimes than detention. Mandatory moral bio-enhancement would be better for both psychopaths and society.

I would reject such an argument, which could easily be extended to include paedophiles. Let us accept most psychopaths retain their autonomy. Unfortunately, whilst exercising their autonomy some psychopaths damage society. My objector wants to limit the damage done to society by removing some of a psychopath’s capacity for autonomy. Is it possible to remove some of someone’s capacity for autonomy? We can of course restrict the exercise of someone’s autonomy, but this is not the same as removing some of his capacity for autonomous action. I would suggest that we should limit the damage a psychopath does to society by limiting his ability to exercise his autonomy rather than by modifying his capacity for autonomous action. Some might question whether there is a meaningful difference between these two approaches. I now want to argue there is. If someone’s ability to make autonomous decisions is modified, then he is changed as a person. If someone’s ability to exercise his autonomy is removed, then he is not changed as a person even though the exercise of his will is frustrated. Does the difference between changing someone as a person and frustrating his will matter? If we change someone as a person we are treating him simply as a thing. We are treating him in much the same way as something we own and can do with as we please. Psychopaths may differ from most of us but they are still human beings and should be treated as such; they should not be treated in the same way as something we own or as an animal. If we frustrate a psychopath’s will by detaining him, we are not treating him as something we own but merely protecting ourselves. We are still accepting him as a person, albeit a damaged person. In the light of the above I would suggest that the mandatory moral bio-enhancement of psychopaths would be wrong. I would also suggest that psychopaths should be offered voluntary moral bio-enhancement. It seems probable most psychopaths would accept such enhancement on a voluntary basis if the alternative might be compulsory detention. Accepting the above would mean that we are still respecting the autonomy of those psychopaths who need to be detained.

I have argued that we should offer voluntary moral bio-enhancement to psychopaths, but it is feasible that exactly the same form of enhancement might be offered to people in general. Prima facie such an enhancement would not be regarded as correcting some disability. It might then be argued that because such enhancement is not correcting any disability, it cannot be argued by analogy that a more general moral bio-enhancement is desirable. I would reject such an argument because I don’t believe the prima facie assumption stands up to close examination. Ingmar Persson and Julian Savulescu suggest we are unfit to face the future as our morality has not developed enough to permit us to cope with technological progress (1). What exactly does unfit mean? I would suggest being unfit means we are unable to counter some of the dangers created by our technology. If we are unable to do something in some circumstances, then we have an inability; in these circumstances we have a disability. It is conceivable that prior to our most recent technological advances our morality was fit for purpose. It might be argued our morality remains fit for purpose but that these advances have made it difficult for us to accept the full implications of our moral norms, disabling us in much the same way psychopaths are disabled. It follows that the prima facie assumption that a more general moral enhancement by bio-medical means should not be regarded as correcting some disability is unsound. It might be concluded that if technological changes make our morality unfit for our purposes by morally disabling people, then it can be argued by analogy that more general moral enhancement by bio-medical means is desirable. It might be objected that this conclusion is not the only option available in these circumstances; we might try to change our current circumstances. My objector might suggest that instead of a more general moral enhancement we should reject our most recent technological advances and seek to return to circumstances in which we accept the norms of our evolved morality. Such a suggestion seems impractical for two reasons. First, once the genie is out of the bottle it is hard to put it back in. Secondly, I am doubtful our morality was ever fit for purpose once we ceased being hunter gatherers.

We live in a dangerous world; provided safe moral bio-enhancement becomes available, should such enhancement be mandatory? In the light of the dangers we face such an option seems an attractive one, but I would somewhat reluctantly reject it. Mandatory moral bio-enhancement would damage our autonomy. Our autonomy forms the basis of our being moral agents, and damaging our agency would also damage our moral systems. If safe moral bio-enhancement becomes available, then it should be encouraged, perhaps subsidised, but it should remain voluntary.


  1. Ingmar Persson & Julian Savulescu, 2012, UNFIT FOR THE FUTURE, Oxford University Press.



Tuesday, 14 March 2017

Automation, Work and Education


Our world is becoming increasingly automated and this increase appears to be having an effect on the number of jobs available. It is possible that in the future automation might not only lead to a decrease in the number of existing jobs but also create an increasing number of different jobs. A second possibility is that automation will mostly lead to a decrease in the number of jobs. In this posting I want to examine some of the consequences this second possibility has for work and education.

Pessimists might argue that a widespread loss of jobs will lead to widespread hardship and poverty. I believe such a pessimistic outcome is unlikely because it would threaten the survival of both the state and the market economy. In this situation both the state and the markets would have reasons to introduce some form of universal basic income (UBI). According to Tim Dunlop, UBI means,

“A basic income, on the other hand, is the idea that everyone should be paid a minimum monthly income that allows them to meet their basic economic needs.” (1)

It is important to note that UBI in response to increasing unemployment caused by automation is not an attempt to reform the benefits system but rather an attempt to counter an existential threat which this unemployment might pose to the state. It might be speculated that UBI might not just be useful in combating the effects of unemployment but might also be necessary for the continuation of capitalism. In an age of large scale automation, capitalism might survive without workers but it seems doubtful it could survive without consumers. In the rest of this posting I am going to assume that if automation causes widespread job losses in any state, that state will introduce some form of UBI in order to counter this existential threat. I will further assume that UBI will be large enough to permit people to live in moderate comfort.

Some might think that automation and UBI will lead to some golden age. In the ancient world the upper classes in Greek and Roman society led a life of leisure in which most of the work was done by slaves. It might be argued by analogy that automation might introduce a golden age in which we live a life of leisure with most work either automated or done by robots. I believe such a golden age is an illusion for two reasons. First, upper class Greeks and Romans may have led happier lives than their slaves but there is no evidence that they led happier lives than people living now. The ancient golden age, at least for some, appears to be an illusion and so any argument by analogy fails. Secondly, if we live in a world in which all the work is automated or done by robots we might suffer from the unbearable lightness of simply being. We might feel our world has lost all purpose and that we simply exist. We might become bored. Limited boredom might encourage us to take steps to alleviate our boredom but prolonged boredom is harmful. According to Harry Frankfurt boredom is not some innocuous discomfort but something that threatens our psychic survival (2). I have previously argued that a world whose inhabitants are bored and feel they are simply existing is a dangerous world, see riots and the unbearable lightness of simply being. It is possible that even if automation frees people from work without leading to widespread hardship and poverty, it might lead to people’s lives being degraded rather than to some golden age.

The above pessimistic scenario seems to be a realistic possibility and I now want to examine what might be done to counter its negative effects. Prior to my examination I want to consider what we mean by work. Work might be roughly defined as making an effort for some economic reward or the hope of such a reward. However, such a definition is at best an incomplete one. I have suggested previously that someone might work in her garden purely for the pleasure it brings her without any thought of economic reward. Hannah Arendt suggested there is a difference between work and labour. According to Arendt labour is what we do in the normal process of living in order to survive, whilst work might be simply defined as any human activity which is not driven by our need to survive. Arendt’s definitions are interesting but also seem incomplete ones to me; dancing is not working. Intuitively work requires some effort. Work might now be defined as any human activity requiring effort which is not driven by our need to survive. Such a refined definition also seems an incomplete one. If I am running away from a bull I might make a great effort but I’m not working. Work might now be defined as any human activity which matters to us, requiring effort, which is not driven by our need to survive. I believe Arendt’s insight is important and I will use it to define two different ways of working. It might be better to label labouring as ‘working for’ something we need to survive. ‘Working for’ something has mostly instrumental value. Work defined as a human activity which matters to us, requiring effort, which is not driven by our need to survive might be labelled ‘working at’. ‘Working at’ has mostly intrinsic value.

Let us now examine the possible effects of increasing automation bearing in mind these two definitions of work. Let us assume that automation might decrease or even eliminate our need to ‘work for’ things, to work instrumentally. Does this decrease matter? I would suggest it matters to someone if she doesn’t ‘work at’ something. In such a situation it seems highly probable that such a person might suffer from the unbearable lightness of simply being. She might feel her world has lost all purpose and that she’s simply existing. It follows we have some reason to fear the effects of increasing automation.

Assuming we aren’t Luddites and don’t want to or can’t stop the progress of automation, what steps should we take to mitigate some of the worst effects of not ‘working for’ anything? First, if automation greatly decreases our need to ‘work for’ we would need to refocus our education system. At the present time a lot of education focusses on equipping people for jobs, to ‘work for’. Let us assume people no longer need to ‘work for’ and that a purely hedonistic lifestyle also leads to a lightness of simply being. In such a situation ‘working at’ something might help counter someone’s sense of simply existing due to her ceasing to ‘work for’ something. In this situation education should focus on enabling people to ‘work at’. In order to do so science education remains important because we need to understand how the world we live in works. But we also need to understand how to live in such a world, and to enable us to do so education should place greater emphasis on the humanities.

I have argued that in a highly automated age people need to become better at ‘working at’ something. All work can be good or bad and this includes ‘working at’. Someone might ‘work at’ doing crosswords all day; I would suggest this is not good work. If ‘working at’ is to replace ‘working for’ it must be good work. Samuel Clark suggests one element of good work is that it requires some skill. According to Clark,

“the development of a skill requires: (1) a complex object and (2) a self-directed and sometimes self-conscious relation to that object.” (3)

I now want to consider each of these requirements. According to Clark good work involves working at something which must have some complexity; the something we work at must have a complex internal landscape of depth and obstacles (4). He gives as examples of skilled activities music, mathematics, carpentry, philosophy and medicine. Doing crosswords might be a difficult task but it lacks complexity. Clark also argues good work must be self-directed. Let us assume someone is self-directed to work at some complex task purely to mitigate her sense of simply being. I would suggest that such self-direction fails. Why does it fail? It fails because in order to prevent this sense of simply being someone must work at something that satisfies her, and for an activity to satisfy someone she must care about that activity. Let us accept that Frankfurt is correct when he argues ‘caring about’ is a kind of love because the carer must identify with what she cares about. It might be concluded that good work is doing something complex which the doer ‘cares about’ or loves. It might then be suggested that provided people can ‘work at’ something, and that this is good work, this ‘working at’ might mitigate some of the effects of job losses due to automation.

However, even if we accept the above, difficulties remain. Let us assume any good work, either ‘working for’ or ‘working at’, requires some skilful action. Let us further assume a skilful action requires that the doer must identify with her actions by ‘caring about’ or loving them. Unfortunately, ‘caring about’ or loving is not a matter of choice.

“In this respect, he is not free. On the contrary, he is in the very nature of the case captivated by his beloved and his love. The will of the lover is rigorously constrained. Love is not a matter of choice.” (5)

It follows that if someone simply chooses to ‘work at’ something in order to compensate for her loss of ‘working for’, this ‘working at’ need not be good work and as a result won’t mitigate her sense of boredom. Someone cannot simply choose to do anything to alleviate her boredom. If she simply chooses, it seems probable her choice will bore her. She must ‘care about’ what she chooses. If society is to help mitigate the effects of job losses due to automation, then it must create the conditions in which people can come to care about doing complex things. I have suggested above that education might help in this task. W B Yeats said ‘education is not the filling of a pail, but rather the lighting of a fire’; perhaps education must fire peoples’ enthusiasms every bit as much as enabling their abilities. Perhaps also we should see learning as a lifelong process. Broadly based lifelong education which fires peoples’ enthusiasms might help create the conditions in which people can ‘work at’ things, hence mitigating some of the harmful effects of job loss due to automation.


Lastly there are activities which might mitigate some of the harmful effects of job losses which have little to do with work. Music and sport would be examples of such things. Of course it is possible to ‘work at’ music and sport, we have professional sportspersons and musicians, but most people just play at such activities. Play is a light hearted pleasant activity done for its own sake. Play is important, especially for children. It might be suggested that some forms of play are a form of good ‘working at’. All work is goal directed and so is some play. Perhaps there is a continuum between work and play with the importance of the goal varying. Perhaps in an automated age play should also become more important to older people. Activities such as playing sport or music require some infrastructure and perhaps in an automated age it is even more important that society helps build this infrastructure. At the present time governments foster elite sport. Perhaps this fostering should change direction towards fostering participation rather than funding elite athletes.

  1. Tim Dunlop, Why the Future is Workless, New South, Kindle locations 1748-1749.
  2. Harry Frankfurt, 2006, The Reasons of Love, Princeton University Press, page 54.
  3. Samuel Clark, 2017, Good Work, Journal of Applied Philosophy 34(1), page 66.
  4. Clark, page 66.
  5. Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 135.


Wednesday, 22 February 2017

Sex with Robots


In the near future it seems probable that some people will have sex with robots, see the rise of the love droids. In this posting I will discuss some of the problems this possibility raises. I will divide my discussion into two parts. For the most part my discussion will consider sex with robots which are simply machines before moving on, much more fancifully, to discussing sex with robots which might be considered as persons.

Let us consider someone having sex with a robot which isn’t a person but is simply a machine. Human beings have created objects to be used for sexual purposes, such as vibrators and other sex toys. If a robot isn’t a person, then it might appear that someone having sex with a robot is unproblematic in much the same way as the use of these artefacts. I now want to argue that this appearance is false. But before making my argument I want to consider the nature of sex. Sex among humans isn’t simply a matter of reproduction: human beings enjoy sex. Neither is this enjoyment a purely mechanical thing. According to Robert Nozick,

“Sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other. Even in masturbatory fantasy, people dwell upon their actions with others; they do not get excited by thinking of themselves whilst masturbating.” (1)

If we accept Nozick’s view, what does having sex with a robot really mean to the person having sex? Provided a robot has been supplied with the appropriate genitalia, would someone want to have sex with it? I would suggest that in many cases he would not. Let us assume that a robot has the appropriate genitalia, four legs, one arm and several detachable eyes. I would suggest very few people would want to have sex with such a machine. Nozick argues that even when masturbating someone is imagining having sex with another person, and I would suggest much the same applies to having sex with a robot. If someone has sex with a robot, he would want it to look like a beautiful person because he is imagining having sex with such a person.

What are the implications of accepting the importance of such imagining? First, I would suggest having sex with a robot is just an enhanced form of masturbation. Masturbation isn’t wrong because it doesn’t harm others. Having sex with any robot which is purely a machine doesn’t harm others and so by analogy also isn’t wrong. Indeed, in some circumstances masturbation might be an acceptable choice for those who are physically or emotionally incapacitated and perhaps also for those who are incarcerated. However, even if we accept the above, masturbation isn’t ideal and neither would be sex with a robot. Someone having imaginary sex with a person is having inferior sex because what he desires is real sex.

I have argued that the first reason why someone might want to have sex with a robot is that he cannot have sex with another person, and that there is nothing wrong with his actions. Anyone having sex with a robot knows he cannot harm the robot. This gives rise to a second reason why someone might want to have sex with a robot. Someone might know that the type of sexual activity he wants to indulge in might be harmful to another human being, and because he knows he cannot harm a robot he prefers to indulge in this activity with a robot. Does acting on such a preference matter, for after all he isn’t harming anyone else? Kant argued we shouldn’t be cruel to animals as this might make us cruel to human beings. Might it be, then, that if someone engages in such sexual activity with a robot this activity might make him more likely to engage in harmful sexual acts with other human beings? At present there is no conclusive evidence to support Kant’s argument that if someone is cruel to animals this cruelty makes him more likely to be cruel to other people. If this is so, it seems doubtful that someone engaging in such sexual activity with a robot would become more likely to do so with another human being. This is an empirical question and cannot be settled by philosophical analysis. However, someone engaging in sex with a robot which would be harmful to a human being might harm himself. I have previously argued that for the users of pornography there is a split between fantasy and reality, see wooler.scottus. I further argued that in the case of sexual practices which might harm others the maintenance of the split between fantasy and reality is absolutely essential. I have argued above that someone having sex with a robot imagines he is having sex with a person. It follows for someone engaging in sex with a robot, which might harm another human being, that the maintenance of the split between fantasy and reality is also essential. I further argued that if someone uses pornography this split threatens the unity of his will, which is damaging to his identity. It follows that someone engaging in sex with a robot which would be harmful to a human being might harm himself by damaging his identity.

Some people assume that at some time in the future some robots might become persons. I am extremely sceptical about this possibility but nonetheless I will now consider some of the problems of someone having sex with such a robot. However, before I do so I will question whether anyone would want sex with such a robot. Let us accept Nozick is correct in his assertion that “sex is not simply a matter of frictional force. The excitement comes largely in how we interpret the situation and how we perceive the connection to the other.” How do we perceive the connection to a robot which is also a person? I suggested above that a robot can take many forms. Would anyone want to have sex with a robot with four legs, one arm, several detachable eyes and appropriate genitalia, even if it could be considered as a person? Persons are partly defined by the actions they are capable of enacting, and these actions are partly defined by their bodies’ capabilities. Robots can have very different bodies from us. A robot with a different body structure might be capable of very different actions to us; such a robot, even if it is considered as a person, might be a very different sort of person to the sort we are. The same might also be true of a robot with a similar structure which is constructed from different materials. If someone or something is very different to us then the connection between us and that someone or something becomes tenuous. Would someone want to have sex with a robot with which he had only a tenuous connection? I doubt it. Of course someone might want to have sex with such a robot provided it looked like a beautiful human being. But if this is so isn’t he really imagining having sex with a person, and the problems associated with having sex with a robot which is purely a machine once again become relevant.

In conclusion, I have argued that someone would not harm others by having sex with a robot and that his actions would not be morally wrong. However, I argued that whilst it might not be wrong to have sex with a robot which is purely a machine, it might nonetheless be damaging to the user’s identity, in much the same way as pornography, by splitting his character. Lastly, I questioned whether anyone would really want to have sex with a robot which might be considered as a person.

  1. Robert Nozick, 1989, The Examined Life, Touchstone, page 61.

Monday, 23 January 2017

Robots and Persons




In an interesting post John Danaher asks if someone can be friends with a robot, see philosophicaldisquisitions . He argues virtue friendship might be possible with a robot. Virtue friendship involves two entities sharing values and beliefs which benefit them. Let us accept that any entity which is capable of having values and beliefs can be regarded as a person. Perhaps one of the great apes might be regarded as a person, but can the same be said of a robot? Does it make sense to say a robot might have rights or can be regarded as a person? In what follows I will limit my discussion to robots, but my discussion could equally well be applied to some advanced system of AI or algorithms. At present the actions of a robot have some purpose, but this purpose doesn’t have any meaning which is independent of human beings. At present the actions of a robot have no more meaning which is independent of us than the actions of the wind in sculpting a sand dune. In the future it is conceivable that this situation might change, but I am somewhat sceptical and believe at the present time there is no need to worry about granting rights to robots akin to human rights. In this posting I will argue that the nature of belief means worrying about robot personhood is both premature and unnecessary.

How should we regard the actions of a robot if it has no beliefs? What are the differences between the wind sculpting a sand dune and the actions of a robot? One difference is that even though neither the wind nor a robot has beliefs, a robot’s actions are nonetheless in accordance with someone’s beliefs, those of its designer or programmer. But does this difference matter? A refrigerator is acting in accordance with our belief that it will keep our food cold. If we don’t want to grant personhood to refrigerators, why should we do so for robots? Perhaps then we might implant some beliefs into robots, and after some time such robots might acquire their own emergent beliefs. Perhaps such robots should be regarded as persons. Implanting such beliefs will not be easy and may well be impossible. However, I see no reason, even if such implantation is possible, why we should regard such a robot as some sort of person. If a person has some belief, then this belief causes him to behave in certain ways. How do we implant a belief in a robot? We instruct the robot how to behave in certain circumstances. In this situation the robot of course behaves in accordance with the implanted belief, but the primary cause of this behaviour is not the implanted belief but rather a belief of those who carried out the implantation. A robot in this situation cannot be said to be behaving authentically. I can see no reason why we should attribute personhood to a robot which uses implanted beliefs as outlined above.
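A minimal sketch may make this point concrete; the scenario and every name in it are my own invention for illustration, not a claim about how any actual robot is built. The robot below behaves as if it believed a dirty floor should be cleaned, but the rule producing this behaviour was written by its programmer, so the primary cause of the behaviour is the programmer’s belief rather than the robot’s.

```python
# A toy "implanted belief". The robot behaves as if it believed that a
# dirty floor should be cleaned, but the rule producing this behaviour
# was written by a programmer; the primary cause of the behaviour is
# the programmer's belief, not anything the robot holds itself.
# (The scenario and names are invented purely for illustration.)

class CleaningRobot:
    def act(self, floor_is_dirty: bool) -> str:
        # The robot's behaviour accords with the "implanted belief"
        # that dirty floors should be cleaned, but this conditional is
        # simply an instruction its programmer wrote: the robot cannot
        # be said to hold the belief authentically.
        if floor_is_dirty:
            return "cleaning the floor"
        return "standing idle"

robot = CleaningRobot()
print(robot.act(floor_is_dirty=True))   # cleaning the floor
print(robot.act(floor_is_dirty=False))  # standing idle
```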

At this point it might be objected that even if a robot shouldn’t be considered as a person it might still be of moral concern. According to Peter Singer, what matters for something to matter morally is not that it can think but that it can feel. Animals can feel and so should be of moral concern. Present day robots can’t feel and so shouldn’t be. Present day robots are made of inorganic materials such as steel and silicon. However, it might be possible to construct a robot partly from biological material, see Mail Online. If such a robot could feel then it should be of moral concern, but this doesn’t mean we should consider it as a person; frogs can feel and should be of moral concern but they aren’t persons. Nonetheless I would suggest that the ability to feel is a necessary precursor for believing, which is in turn a precursor for personhood.

For the sake of argument let us assume that it is possible to create a robot for which the primary cause of its behaviour is its implanted or emergent beliefs. What can be said about this robot’s beliefs? When such a robot decides to act the primary cause of the action is its internal beliefs; it is acting in a manner which might be regarded as authentic. How might such a robot’s beliefs and actions be connected? Perhaps they are linked by Kant’s hypothetical imperative. The hypothetical imperative states,

“Whoever wills an end also wills (insofar as reason has decisive influence on his actions) the indispensably necessary means to it that are within his power.” (1)

Some might suggest that having a set of beliefs and accepting Kant’s hypothetical imperative are necessary conditions for personhood; some might even regard them as sufficient conditions. They might further suggest that any robot meeting these conditions should be regarded as a candidate for personhood. Of course it might be possible to design a robot which conforms to the hypothetical imperative, but conforming is not the same as accepting. Let us accept that anyone or anything that can be regarded as a person must have some beliefs and must accept, rather than merely conform to, the hypothetical imperative.

What does it mean for someone to accept the hypothetical imperative? Firstly, he must believe it is true; the hypothetical imperative is one of his beliefs. Someone might believe that he is made up of atoms, but this belief doesn’t require any action even when action is possible. The hypothetical imperative is different because it connects willed ends with action. Can the hypothetical imperative be used to explain why a robot should act on its beliefs, be they implanted by others or emergent? Kant seems to believe that the hypothetical imperative can be based on reason. I will argue reason can only give us reason to act in conjunction with our caring about something. I will now argue the hypothetical imperative only makes sense if an agent views beliefs in a particular way. What does it mean to will an end? I would suggest that if someone wills an end he must care about that end. If someone doesn’t care about or value some end, then he has no reason to pursue that end. What then does it mean to care about something? According to Frankfurt, if someone cares about something he becomes vulnerable when that thing is diminished and is benefited when it is enhanced. (2) People by nature can suffer and feel joy; robots can’t. It is worth noting that animals can also suffer and feel joy, making them like people with rights rather than like robots. The above raises an interesting question: must any entity which is capable of being conscious, robot or animal, be able to suffer and feel joy? If we accept the above then the ends we will must be things we care about. Moreover, if we care about ends then we must value them. It follows that if the hypothetical imperative is to give us cause to act on any belief, that belief must be of value to us. It follows that the hypothetical imperative can only be used to explain why a robot should act on its beliefs provided such a robot values those beliefs, which requires it becoming vulnerable. A right is of no use to any entity for which the implementation of that right doesn’t matter, an entity which isn’t vulnerable to the right not being implemented.

I have argued that any belief which causes us to act must be of value to us, and that if we find something valuable we are vulnerable to the fate of the thing we find valuable. What then does it mean to be vulnerable? To be vulnerable to something means that we can be harmed. Usually we are vulnerable to those things we care about in a psychological sense. Frankfurt appears to believe that we don’t of necessity become vulnerable to the diminishment of the things we value by suffering negative affect. He might argue we can become dissatisfied and seek to alleviate our dissatisfaction without suffering any negative affect. I am reluctant to accept that becoming vulnerable can be satisfactorily explained by becoming dissatisfied without any negative affect. It seems to me being dissatisfied must involve some desire to change things, and that this desire must involve some negative affect. I would argue that being vulnerable to those things we value involves psychological harm and that this harm must involve negative affect.


Let us accept that in order to be a person at all someone or something must accept and act on the hypothetical imperative. Let us also accept that the hypothetical imperative only gives someone or something reason to act on some belief provided that someone or something values that belief. Let us still further accept that to value something someone or something must care about what they value, and that caring of necessity must include some affect. People feel affect and so are candidates for personhood. It is hard to see how silicon based machines or algorithms can feel any affect, positive or negative. It follows that it is hard to see why silicon based machines or algorithms should be considered as candidates for personhood. It appears the nature of belief means any worries concerning robot personhood, when the robot’s intelligence is silicon based, are unnecessary. Returning to my starting point, it would appear that it is acceptable for young children to have imaginary friends but that it is delusional for adults to believe they have robotic friends. However, I will end on a note of caution. We don’t fully understand consciousness, so we don’t fully understand what sort of entity is capable of holding beliefs and values. It follows we cannot categorically rule out a silicon machine becoming conscious. Perhaps also it might become possible to build some machine not entirely based on silicon which does become conscious.

  1. Immanuel Kant, 1785, Groundwork of the Metaphysics of Morals.
  2. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.




Monday, 21 November 2016

Cryonic Preservation and Physician Assisted Suicide


Recently a terminally ill teenager won the right to have her body preserved by cryonics in the hope that she might live again at some future date. Such a hope comforted her. The case was really a case of whether she had the right to determine how her body was disposed of when she died, see bbc news . Cryonic preservation is not a form of treatment at the present time; it is simply a form of bodily disposal tinged with hope and as a result does not presently appear to pose any major ethical problems. However, let us imagine a scenario in the future in which cures become available for some diseases which are currently terminal and those preserved by cryonics can be brought back to life. This scenario raises ethical concerns, and in this posting I want to examine these concerns.

At the present time cryonic preservation might be defined as the preservation by freezing of a dead body in the hope that a cure for the disease that body died from becomes available in the future, and that the body can then be resuscitated and cured. The case of the teenager above was an example of this type of cryonic preservation. However, an alternative form of cryonic preservation seems possible. Let us consider someone suffering from a terminal illness who finds his present life not worth living. He might consider physician assisted suicide (PAS); such an option is available in several countries and some states in the USA. On reflection he might think a better option is open to him. He wants his body frozen whilst he is still alive and preserved in the hope that a cure for the disease he suffers from becomes available in the future, so that he may be resuscitated and cured. For the rest of this post cryonic preservation will be referred to as CP. These alternative types of CP will be defined as follows.
  • Type 1 CP will be defined as the preservation by freezing of a dead body in the hope that a cure for the disease that body died from becomes available in the future, so that the body may be resuscitated and cured.
  • Type 2 CP will be defined as the preservation by freezing of someone’s body whilst he is alive in the hope that a cure for the disease he suffers from becomes available in the future, so that he may be resuscitated and cured.

Type 1 CP is extremely fanciful because not only do cures need to be found and bodies unfrozen, but dead bodies also need to be brought back to life. I will deal with type 2 CP first because there is a more realistic prospect of its being realised. There seems to be some possibility that in the future it might become possible to freeze and preserve someone’s living body and unfreeze it after a substantial period of time, permitting him to resume his life. Such a scenario remains fanciful but is by no means impossible. Let us assume studies have frozen and stored large living animals and, after a substantial period of time, have thawed them permitting them to resume their lives. I am assuming here that studies on rats or mice might not be applicable to humans. Let us assume someone is aware of this fact and learns he is starting to suffer from Alzheimer’s disease. I have previously argued it would not be irrational or wrong for such a person to commit suicide if he so desired, and moreover that it would not be wrong to help him do so, see alzheimers and suicide . In this scenario it might be argued by analogy that it would be neither irrational nor wrong for such a person to choose type 2 CP. Indeed, it might be argued, as I have suggested above, that it is a more rational option than suicide. I now want to examine the ethical concerns raised by type 2 CP.

If we preserve someone using type 2 CP are we doing something which is wrong? To answer this question, we must first ask what we are doing if we preserve someone using type 2 CP. The company providing the service is simply storing a living body and this seems to raise no ethical concerns. But what are the doctors who prepare the body for storage doing? It is uncertain whether a cure might be found for Alzheimer’s. In this scenario how we describe these doctors’ actions in preparing him for preservation raises different ethical concerns. Are they killing him now, preserving him to die in the future or helping him commit suicide? The first possibility is clearly wrong. Is the second possibility also wrong? Delaying death is an accepted part of medical practice. But would it be right to delay death if there is no conscious life between the start of the delaying process and death? Intuitively, though such an action harms no one and might not be wrong, it seems pointless. If we accept physician assisted suicide, then we must accept the third option isn’t wrong.

In the case of the teenager what was being decided was how a body might be disposed of. Let us now consider a variation of this case. Let us assume that a relatively young person, who is competent to give informed consent and is suffering from terminal cancer, wishes to have his body preserved by type 2 CP. He wants his body preserved whilst he is still alive. Let us also assume that using type 2 CP it is possible to preserve a body for a hundred years and then resuscitate it. It seems possible that a cure, or at least an ability to manage cancer, might well come about in the next hundred years. In this case if his doctors prepare his body for type 2 CP, what are they doing? In this scenario it seems wrong to say they are killing him, delaying his death or helping him commit suicide. If the techniques involved in type 2 CP are proved to be safe and a cure for cancer is a genuine possibility, then I would suggest his doctors are treating his disease; they are treating him as a patient, and there are no ethical objections to doctors treating patients. It might be objected that doctors only treat patients so they can recover from their illnesses. In response I would point out that doctors treat patients when they manage cancers they can’t cure by providing palliative care. In the light of the above it might be concluded that it would not be wrong for doctors to prepare a relatively young competent person for type 2 CP, provided he or those close to him could pay for the service. This conclusion is subject to two conditions. First, there must be a reasonable prospect that the condition he suffers from will become curable or manageable in the future; secondly, it must have proved possible to freeze and store large living animals and, after a substantial period of time, to unfreeze them permitting them to resume their lives.

Let us accept the above conclusion that it is not wrong to provide type 2 CP to relatively young competent patients when there is a realistic possibility that their illness can be cured in the future. But would it be wrong to provide such treatment to older or even incompetent patients? I will deal with someone who is incompetent first. If someone is incompetent to give consent to treatment then a surrogate decision maker, such as the courts or his parents, acts on what is in his best interests. Intuitively it might be objected that deciding to use type 2 CP is not deciding about treatment. However, I have argued that for a relatively young competent person type 2 CP can be seen as a form of treatment. Moreover, I would suggest treatment doesn’t become non-treatment simply because someone is unable to give competent consent. Accepting the above raises many practical problems. Should type 2 CP be carried out if someone who is incompetent resists such treatment? I would suggest it should not, but will not offer any arguments here to support my suggestion. Should type 2 CP be carried out on someone who is incompetent but prepared to accept such treatment? I would suggest it should. If we believe it shouldn’t, then mustn’t we also believe the lives of the incompetent have less value than those of the competent, whilst at the same time remembering that young children are incompetent? Moreover, if we accept that the incompetent should not receive such treatment, aren’t we encouraging eugenics by the backdoor? It might be concluded that it would not be wrong to provide type 2 CP to relatively young incompetent patients, provided they are prepared to accept treatment and those close to them are prepared to pay for the service. This conclusion is subject to the same conditions required for relatively young competent patients outlined above.

Is it wrong to offer type 2 CP to older persons? It seems to me that in a world of infinite resources it would not be. Resources in this scenario are not a problem, and it would appear that if someone believes it would be wrong to offer type 2 CP to older persons then it should be up to him to justify his belief. It can again be concluded that it would not be wrong to offer type 2 CP to older persons, subject to the same conditions outlined in the other two cases.

I now want to consider a different question. If type 2 CP could be regarded as treatment, would we have a duty to provide this treatment? This question is at the moment completely hypothetical. However, if studies froze and stored large living animals and then, after a substantial period of time, thawed them permitting them to resume their lives, then this question would cease to be a hypothetical one. Indeed, if there was also the possibility of several new cures for previously incurable diseases, an answer to this question becomes important. Usually whether someone should be offered treatment depends on the quality adjusted life years, QALYs, expected from the treatment in question. It might be concluded that it would not be wrong to offer type 2 CP to older persons when the number of expected QALYs is similar to the expected QALYs offered by other accepted treatments, subject to two provisions. First, the number of expected QALYs should not include the years spent in a frozen state. Secondly, it is possible that the freezing process might reduce the number of QALYs and this should be taken into account.
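As a rough sketch of how such a calculation might run, consider the following; every figure, including the probability of a successful cure and resuscitation and the quality weights, is invented purely for illustration and has no clinical basis.

```python
# Hypothetical comparison of expected QALYs from type 2 CP and from
# palliative care. Every figure below is invented for illustration
# and has no clinical basis.

def expected_qalys(conscious_years: float, quality_weight: float,
                   p_success: float = 1.0) -> float:
    """Expected quality adjusted life years from a treatment.

    conscious_years -- conscious years of life the treatment may add;
                       years spent frozen are deliberately excluded
    quality_weight  -- average quality of those years (0 dead, 1 full health)
    p_success       -- probability the treatment succeeds at all
    """
    return conscious_years * quality_weight * p_success

# Palliative care: say two further years at a quality weight of 0.4.
palliative = expected_qalys(conscious_years=2, quality_weight=0.4)

# Type 2 CP: say a 0.3 chance that a cure is found and resuscitation
# succeeds, giving 30 further years at a quality weight of 0.7, the
# weight reduced to allow for possible damage from the freezing
# process. The hundred frozen years contribute nothing.
cp_type_2 = expected_qalys(conscious_years=30, quality_weight=0.7,
                           p_success=0.3)

print(f"Palliative care: {palliative:.1f} expected QALYs")  # 0.8
print(f"Type 2 CP:       {cp_type_2:.1f} expected QALYs")   # 6.3
```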

I have argued that it would not be wrong to provide type 2 CP to people who can finance this service themselves. I have also argued that it is possible that in the future type 2 CP might be regarded as treatment. It seems that the same arguments I used regarding type 2 CP can be applied to type 1 CP concerning its permissibility. However, whilst in some circumstances type 2 CP might be seen as a form of treatment, it is difficult to see how type 1 CP might be so regarded. Philosophy should not only be concerned with what should be permitted but also with what helps people flourish; philosophy should make recommendations about how to live. Let us assume one or both types of CP prove to be effective. Should we recommend that someone facing terminal or life changing illness try CP? Several reasons might be advanced as to why we should not. First, a long suspension might mean they awake to an alien world, making it hard for them to cope. Secondly, a long suspension might mean they awaken to find their friends, spouse and even children have died. Whether someone would want to undergo CP would depend not only on their imagined future but also on their current circumstances. A single lonely person might find CP attractive whilst someone whose life centres on family might not. The young might find CP more attractive than the old because CP offers them the possibility of a longer life extension. Personally, as a relatively old man, I do not find the idea of CP attractive; however, returning to our starting point, if I were fourteen I might well do so.

