Wednesday, 29 June 2016

Outsourcing Ethical Decision Making and Authenticity



In a previous posting I questioned whether algorithmic assisted moral decision making (AAMD) is possible. Let us assume for the sake of argument that it is. Using such a system might be considered an example of algorithmically outsourcing our moral decision making. According to John Danaher, such outsourcing means taking away the cognitive and emotional burden associated with certain activities. Intuitively, outsourced moral decisions are inauthentic decisions. In this posting I will argue that, under certain conditions, ethical decisions outsourced to an AAMD system could be authentic ones.

Before proceeding I must make it clear what I mean by algorithmic assisted moral decision making, outsourcing and authenticity. Any moral decision made simply by an algorithm is not an authentic decision. In my previous posting I suggested that when initialising an AAMD system we should first use a top-down approach and install simple human values, such as avoiding harm. Once initialised, however, such a system should be fine-tuned by the user from the bottom up by adding her personal weights to the installed values. This primitive system might then be further modified from the bottom up using two feedback loops. Firstly, the user of the system must inform it whether she accepts any proposed decision. If she accepts the proposed decision, then this decision can form a basis for similar future decisions, in much the same way as legal judgements set precedents for further judgements. If she doesn't accept a particular decision, then the system must make clear to her the weights it attached to the values used in making this decision and in any previous decisions it employed. The user might then further refine the system either by altering these weights or by highlighting differences between the current decision and any previous decisions the system employed. According to Danaher outsourcing can take two forms. Cognitive outsourcing means someone using a device to perform a cognitive task that she would otherwise have to perform herself. Affective outsourcing means someone using a device to perform an affective task that she would otherwise have to perform herself. I will assume here that an authentic decision is one that the decision maker identifies herself with or cares about.
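The design just described can be sketched in code. What follows is a minimal illustration, not a description of any existing system: the class name, the example values ("avoid_harm", "honesty") and the weights are all hypothetical, chosen only to show how top-down installed values, bottom-up user weights, and the two feedback loops might fit together.

```python
from dataclasses import dataclass, field

@dataclass
class AAMDSystem:
    """Sketch of a bottom-up tunable moral decision aid.

    Initialised top-down with named values (e.g. 'avoid_harm'),
    then fine-tuned bottom-up by the user's personal weights and
    by a precedent database of previously accepted decisions.
    """
    weights: dict                                   # value name -> user-assigned weight
    precedents: list = field(default_factory=list)  # decisions the user has accepted

    def propose(self, option_scores: dict) -> tuple:
        """Score each option by the weighted sum of its value scores
        and propose the highest-scoring one, along with the weights
        used, so a rejecting user can see and revise them."""
        def total(option):
            return sum(self.weights[v] * s
                       for v, s in option_scores[option].items())
        best = max(option_scores, key=total)
        return best, {v: self.weights[v] for v in option_scores[best]}

    def feedback(self, decision, accepted: bool, new_weights: dict = None):
        """First loop: an accepted decision becomes a precedent for
        similar future decisions. Second loop: on rejection the user
        may alter the weights attached to the installed values."""
        if accepted:
            self.precedents.append(decision)
        elif new_weights:
            self.weights.update(new_weights)

# Hypothetical example: two installed values, user weights attached bottom-up.
system = AAMDSystem(weights={"avoid_harm": 0.7, "honesty": 0.3})
options = {
    "tell_white_lie": {"avoid_harm": 0.9, "honesty": 0.1},
    "tell_truth":     {"avoid_harm": 0.4, "honesty": 1.0},
}
choice, used_weights = system.propose(options)
system.feedback(choice, accepted=True)  # accepted decisions become precedents
```

With the illustrative weights above, the white lie scores 0.7 × 0.9 + 0.3 × 0.1 = 0.66 against 0.58 for truth-telling, so the system proposes it; had the user rejected it, she could pass revised weights back through `feedback`, which is the bottom-up tuning the posting describes.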

According to Danaher taking responsibility for certain outcomes is an important social and personal virtue. Further, someone only takes responsibility for certain decisions if she voluntarily wills the chosen outcomes of these decisions. Authenticity is an important social and personal virtue. Getting an app to automatically send flowers to one's partner on her birthday doesn't seem to be an authentic action, because the sender doesn't cause the action. However, here I am only interested in outsourcing our ethical decisions: does outsourcing such decisions damage their authenticity?

I will now argue that the answer to the above question depends not on outsourcing per se, but on the manner of the outsourcing. Let us assume that in the future there exists a computer which makes decisions based on a set of values built into it by a committee of philosophers. Let us consider someone who outsources her moral decisions to this computer. I would suggest that if she implements a moral decision made in this way then her decision is an inauthentic one. It is hard to see how someone in this situation could either identify with the decision or consider herself responsible for the outcome. Let us now consider someone who outsources her moral decision making to an AAMD system which is finely tuned by the user as outlined above; are her decisions also inauthentic? I would suggest someone who makes a moral decision in this way is acting authentically, because she can identify with her decision. She is able to identify with the system's decisions because, once initialised, the system is built from the bottom up. Her weights are attached to the incorporated values and her past decisions are built into its database.

I suggested that someone who uses such a system must accept or reject its decisions. It might be objected that a user who simply accepts the system's decisions without reflection is not acting authentically. In response I would point out that in virtue ethics someone can simply act and still be regarded as acting authentically. My objector might respond by pointing out that Christine Korsgaard pictures the simply virtuous human as a sort of Good Dog (1). Perhaps someone who simply accepts the results of an AAMD system might also be pictured as behaving as a good dog, with the system replacing the dog's owner. Surely such a person cannot be regarded as acting authentically. In response I would suggest what matters is that the agent identifies herself with the system's decision. To identify with a decision someone has to be satisfied with that decision. What does it mean to be satisfied with a decision? According to Frankfurt satisfaction entails,

“an absence of restlessness or resistance. A satisfied person may be willing to accept a change in his condition, but he has no active interest in bringing about a change.” (2)

I'm not sure that an absence of restlessness or resistance towards a decision is sufficient to guarantee its authenticity. I would suggest authentic decisions are ones that flow from our true self. I have argued in a previous posting that our true self is defined by what we are proud or ashamed of. Let us consider someone who accepts the recommendation of an AAMD system without feeling any shame: is her acceptance an authentic one, or simply not an inauthentic one? I have also argued previously that there are two types of shame. Type one shame is anxiety about social disqualification. Type two shame is someone's anxiety about harming the things she cares about, loves and identifies with. Let us accept that someone must feel type two shame when she acts in a way which harms the things she cares about, loves and identifies with. In the above situation, if someone simply accepts the recommendation of an AAMD system without feeling any type two shame, then she is acting in accordance with what she loves and identifies with and is acting authentically.

What conclusions can be drawn from the above? If someone outsources some of her moral decision making to a computer, she may not be acting authentically. However, if she outsources such decision making to an AAMD system designed using a bottom-up approach as outlined above, it is at least conceivable that she is acting authentically.

  1. Christine Korsgaard, 2009, Self-Constitution, Oxford University Press, page 3.
  2. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 103.

Wednesday, 25 May 2016

Cosmetic Surgery, Enhancement and the Aims of Medicine

  
Jessica Laimann wonders whether we should prohibit breast implants (1). She proceeds to argue that we shouldn't prohibit breast implant surgery, but then suggests we might compensate individuals who decide not to have such surgery. She seems to be uneasy with the idea that breast implant surgery could be a legitimate aim of medicine. I agree with Laimann that we shouldn't prohibit breast surgery, and would suggest that the skills of medical practitioners might be better employed elsewhere. However, there is a difference between what could be a legitimate aim of medicine and what we should prohibit. Let us assume that in the future medical practitioners can satisfy all the now commonly accepted aims of medicine; in these circumstances, could breast implant surgery become a legitimate aim of medicine? Could human enhancement become a legitimate aim of medicine? In this posting I want to examine these questions.

In order to examine these questions, I must first examine what the aims of medicine should be. The aims I am concerned with are not a list of specific aims, such as repairing heart valves or treating cancer, but aims common to all medical procedures. It might be suggested that the aim of all medicine is obvious: to make people better. But what do we mean by better? William Mayo expressed the traditionally held view that "the aim of medicine is to prevent disease and prolong life, the ideal of medicine is to eliminate the need of a physician." Mayo's definition might be extended to include the treatment of injury and disability. According to the traditional view, medicine makes us better by treating disease, injury and disability and by prolonging life. If we accept this definition then cosmetic surgery, assisted reproduction and any enhancement, with the possible exception of life extension, wouldn't be things that make us better. A slightly different definition of the aims of medicine is given by Silver.

“The proper ends of medicine are to use medical skills and training to maintain or improve the position of the person involved, subject to her autonomous consent.” (2)

If we accept Silver’s definition then cosmetic surgery and some forms of enhancement might be considered as making us better. How can we decide which of the above definitions to accept? Let us accept that medicine is a caring profession. Let us also accept that medical practitioners should exercise their skills to serve those interests of patients which can only be served by medical means.

Unfortunately accepting the above doesn't automatically help us decide which of these different aims of medicine to accept. Firstly, what is better for a patient might simply be defined as her medical interests as defined by her doctors. Secondly, what is better for her might be partly defined by what she sees to be her interests, her subjective interests. Let us accept that doctors should respect a patient's autonomy. I have previously argued that a purely Millian account of autonomy is an incomplete account. I argued that a more complete account means that respecting someone's autonomy sometimes requires acting beneficently towards her by attempting to satisfy her desires, provided doing so does not harm her on balance and does not cause one significant inconvenience. Autonomy and some forms of beneficence are linked. If the above argument is accepted, then it seems to me that we should accept that a patient's interests must include her subjective interests, provided her general health interests can be easily satisfied. Such satisfaction is difficult now but might be more easily achieved in the future. If we accept the above, it might be concluded that we should accept Silver's definition; however, such a conclusion would be premature.

Let us assume that breast implants might be in the subjective interests of some individuals. However, it does not automatically follow that breast implantation surgery should be a legitimate aim of medicine. Breast implantation might damage society by sending a damaging picture of what it means to be a woman to both some men and vulnerable young women. In this situation, should we give greater weight to the interests of individual women or to the interests of society? I now want to argue that the above is a false dichotomy and that by respecting individual rights we benefit rather than damage society. Let us accept that breast implantation does some damage to society by projecting a damaged picture of what it means to be a woman. I now want to argue that a ban on breast implantation surgery would cause even greater damage to society. If we fail to respect the right of individuals to make their own decisions, then we fail to see them as the kind of people who can make their own decisions. This failure has two bad consequences: first, we fail to truly respect those individuals, and secondly, we might be accused of moral arrogance. Even more important, implicit in this failure is the belief that society should shape its members' decisions. I believe such a belief is dangerous because it is too simplistic. Let us accept that when individual members of a society make decisions, those decisions are partly shaped by the society they live in. However, society both shapes and is shaped by the decisions of its individual members. A flourishing society resembles a living entity that evolves and changes over time. This change is in part shaped by the decisions of its individual members. In order for this shaping to take place, such a society must be prepared to accept these decisions. Mill makes much the same point when he suggests that the human race is damaged by silencing the expression of an opinion.

What conclusions can be drawn from the above? Firstly, that Silver is right and that the aim of medicine should be to use medical skills to maintain or improve the position of the person involved, subject to her autonomous consent. Let us also accept that in achieving this aim, precedence should be given to maintaining rather than improving the position of the person involved if resources are scarce. Secondly, provided resources aren't scarce, cosmetic surgery and assisted reproduction can and should be a legitimate aim of medical practice. Lastly, the above suggests that we have some reason to accept that other forms of enhancement, for those who autonomously desire enhancement, should be a legitimate aim of medical practice unless compelling reasons can be advanced as to why such enhancement causes greater damage to society than the satisfaction of these autonomous desires.

  

  1. Jessica Laimann, 2015, Should We Prohibit Breast Implants?, Journal of Practical Ethics 3(2).
  2. M. Silver, 2003, Lethal Injection, Autonomy and the Proper Ends of Medicine, Bioethics 17(2).

Wednesday, 27 April 2016

Diversity and Editing Our Children’s Genes


I have recently been reading 'Should you edit your children's genes?' by Erika Check Hayden in Nature. Hayden is not concerned with editing genes which might enhance a child's cognitive abilities or physical prowess, but rather with editing genes for specific diseases or conditions. Such editing might be achieved by using CRISPR to edit embryos. In this posting I want to consider two related arguments, both based on diversity, which Hayden outlines against adopting such a policy. In my discussion I will assume without argument that these diseases and conditions harm those who experience them to some degree, even if this degree is small. Some might object to this assumption; for instance, some deaf people do not see their deafness as a disability, and some deaf parents would even prefer to have deaf children.

Until recently disabled people were often treated badly, but changing attitudes, at least in the Western world, have improved their lives. It might be argued that these changing attitudes have not only benefited disabled people but have also benefited all of us by creating a more caring society. At some time in life disability is likely to directly affect most people, because we are prone to experience sickness, accidents and age-related decline. Let us accept without argument that a more caring society which cares for the disabled benefits us all. It might then be argued that if we try to eliminate various disabilities, we might inadvertently damage us all by creating a less caring society.

The above argument seems to depend on the premise that a more diverse society is a more caring society. I want to challenge this premise. Let us imagine we now start using CRISPR to edit embryos. The motive for doing so is a caring one: we want to reduce disability, which I have assumed above harms those disabled to some degree. Let us now imagine that by 2050 we have eliminated many current disabilities and in doing so have created a less diverse society. At this point someone suggests that, in order to create a more diverse society, we now use CRISPR, or some future technology, to create some disabled embryos. The purpose of doing so would be to create more diversity, and hence more caring, by deliberately creating disabled children. Let us assume that these disabled children would of course still have meaningful lives they wanted to live. A similar society is satirised by Kurt Vonnegut in his short story 'Harrison Bergeron'. It seems to me that any future society would find such a course of action totally abhorrent. Such a society's policy of rejecting the use of CRISPR to produce disabled children would be in total opposition to the policy of a society which rejects using CRISPR to reduce disability. Why should some future society find such a policy abhorrent? I would suggest it would do so because it cares about harming its members. It follows that any future society which rejects such a policy would be a caring one, even if it was slightly less diverse.


At this point an objector might accept that whilst such a society would remain a caring one, it might also be a less caring one than a society which contained greater diversity. He might then suggest we should care about increasing caring: a consequentialist account of caring, in which more caring is better. Unfortunately for my objector, the above seems to commit him to the abhorrent conclusion that in some circumstances it would be right to use CRISPR, or some future technology, to create some disabled embryos, subject to the proviso that any resultant children would be able to live meaningful lives they wanted to live, in order to increase diversity and hence increase caring. My objector is using the term 'caring' in two different ways. Firstly, 'caring' means something is important, it matters to him; secondly, the 'caring' that is important to him means sympathy or empathy. I will now argue it makes no sense to attempt to 'care about', in the first sense, maximising 'caring about', in the second sense. Let us examine what 'caring' in the second sense involves. It must involve some empathic concern for others. If someone 'cares about', in the first sense, increasing caring, in the second sense, then he is not exhibiting this empathic concern if he is prepared to alter some perfectly healthy embryos to produce disabled children. By 'caring about', in the first sense, 'caring about', in the second sense, he is failing to 'care about', in the second sense. He believes something is important but doesn't act as if it is important. Such a course of action is nonsensical.

Let us accept that if we edit our children's genes we might create a less diverse society, but that doing so doesn't harm society in general by making it an uncaring one. I now want to address a second but related argument, again based on diversity, against editing our children's genes. It might be argued that even if a less diverse society doesn't harm all of us, it nonetheless might care less for those who remain disabled. A society with fewer disabled people in it might care less for disabled people because it is less able to cope with their needs. Such a society might fail to cope adequately with their needs for two reasons. Firstly, it might allocate fewer resources to the needs of the disabled, and secondly, it might fail to understand these needs as well as a more diverse one. Let us examine the first of these reasons. Prima facie, a society with a lower proportion of disabled people in it should have more resources to devote to disabled people than a similar society with a greater proportion. It might be accepted that such a society has greater resources but argued that it might still be less responsive to the needs of the disabled. It might be less responsive because the lower number of disabled people means their voice carries less weight. I find this argument unconvincing. Let us accept that in such a society the disabled can still express their needs. Let us also accept that such a society remains a caring one. I can see no reason why such a society should be unresponsive to the expressed needs of the disabled. Now let us examine whether a society with fewer disabled people in it would be less able to understand their needs. I accept that it is possible that such a society might understand the lives of the disabled less well than a society which contains a greater proportion of disabled people with a stronger voice. However, understanding the lives of the disabled is not the same as responding to their needs.
In any advanced society, if the needs of the disabled can be expressed they can be acted on. If such a society remains a caring one, then the expressed needs of the disabled should be acted on. It is also possible that in the future automation might mean members of such a society have more time to try to understand those who are disabled, even if the disabled form a lower proportion of that society. It follows that even if members of such a society don't understand the lives of the disabled as fully as members of a more diverse society, there is time for dialogue to better understand these needs.


I have argued that any argument against editing our children's genes based on disability benefiting our society through increased diversity is unsound. Accepting my argument does not of course mean we should edit our children's genes, as there may be other, stronger arguments against doing so.

Wednesday, 30 March 2016

Factitious Virtue


In this posting I want to consider Mark Alfano's idea of factitious virtue; I will only consider factitious moral virtue (1). In recent years the whole idea that human beings can possess virtues has come under sustained attack from moral psychologists, and many would now question whether virtue ethics has any real future. Moral psychologists argue that all that really matters when we make moral decisions are the situations we find ourselves in, and not any supposed virtues we might possess. However, if all that matters are the situations people find themselves in when making such decisions, then everyone should act in a similar fashion in a similar situation. Clearly this isn't true. People's characters vary. It is conceivable that someone's character is partly shaped by her moral behaviour; being a trustworthy person is part of that person's character. Perhaps virtue is hard and limited to a few people, or perhaps most people only have limited virtue. In this posting I will argue that if virtue matters, it does so in two specific domains. I will then consider whether Alfano's factitious virtue can be considered a virtue in the traditional sense. Lastly I will consider whether factitious virtue matters.

Let us consider the way we make moral decisions. When making important moral decisions with wide-scale implications, virtue ethics is not really useful. Some might disagree with the above. When making important moral decisions we don't simply do what a virtuous person would do, we think. We think of the consequences, or perhaps we question whether any decision we make could be made into a universal law. When making important decisions, such as those concerning the consequences of global warming or whether terminally ill patients should have the right to assisted suicide, thinking about consequences or universal laws seems to be a better way forward than simply asking what a virtuous person would do. I will not consider here whether we should employ consequentialist or deontological methods. It might be thought, in the light of the above, that I believe virtue should play no part in moral decision making. Such a thought would be premature. Not all moral decisions are of wide-scale importance; for instance, a daughter might have to decide whether to help her aged mother go shopping or spend an enjoyable afternoon by herself in her garden on a beautiful summer's day. Such decisions are not made after careful consideration but rather by simply deciding, deciding in accordance with our virtues, provided of course this is possible. It follows that there is a possible place for the virtues in making some moral decisions; Bernard Williams would have classed such decisions as ethical decisions (see the Internet Encyclopaedia of Philosophy). Virtue would be useful in the domain of making everyday moral or ethical decisions, provided virtue is possible. I now want to argue that virtue might also matter in a second domain. Alfano suggests that,

“People enjoy acting in accordance with their own self-concepts, even those aspects of self-concept that are evaluatively neutral; they’re averse to acting contrary to their self-concepts, especially the evaluatively positive aspects of their self-concepts.” (2)

I'm not sure Alfano is totally correct when he suggests people enjoy acting in accordance with their self-concepts. I would suggest people are satisfied with acting in accordance with their self-concepts and hence have no reason to act otherwise. I would however agree with Alfano that people do act in accordance with their self-concepts. The daughter in the example I used above makes her decision based on her self-concept. She might consider herself to be a caring person and as a result take her mother shopping. It follows that if we partly define ourselves by the virtues we possess, then virtue matters in the domain of self-definition.

Let us accept that there is a possible domain for virtue in moral decision making. I would suggest that this is not a trivial domain, because most of the moral decisions we make are everyday ones and our concept of self matters. I would further suggest that we have evolved a capacity to make everyday moral decisions and find it hard to transcend this capacity. However, even if there is a possible domain for the virtues in making moral decisions, this possibility by itself doesn't mean the virtues exist. A lot of psychological research seems to point to the situation someone finds herself in when making moral decisions being much more important than any supposed virtue she might possess. In 1972 Alice Isen and Paula Levin conducted a famous experiment which showed that participants who found a dime in a payphone were much more likely to aid someone needing help (3). Many other studies have replicated Isen and Levin's finding that what really matters when making a moral decision is the context the decision is made in, rather than any supposed virtue the decision maker possesses. Let us accept for the sake of argument that virtue is weak or rare in most people and hence not a useful concept as far as most people are concerned.

In the light of the situationist challenge Alfano argues that the idea of factitious virtue is useful. What exactly is factitious virtue? Alfano suggests that factitious virtue is a kind of self-fulfilling prophecy. He gives us an example of a self-fulfilling prophecy.

"Were United States Federal Reserve Chairman Ben Bernanke to announce … on a Sunday evening that the stock market would collapse the next day, people would react by selling their portfolios, leading indeed to a stock market crash." (4)

A factitious virtue is analogous to a self-fulfilling prophecy. Alfano argues that if you label someone as having a virtue, she comes to act as if she possesses that virtue: she has factitious virtue.

“Virtue labelling causes factitious virtue, in which people behave in accordance with virtue not because they possess the trait in question but because that trait has been attributed to them.” (5)

For labelling to be effective it should be made in public and be believable to the person labelled. Let us return to my previous example. Telling the daughter that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling her in public that she is a caring person when she has just helped someone to cross the road would be a case of virtue labelling, and would mean that she would be more likely to help her mother with her shopping.

Let us examine the status of a factitious virtue. The question naturally arises: is factitious virtue a real virtue? Alfano uses an analogy between a placebo and factitious virtue to explain how factitious virtue works. If someone believes that a placebo will help her, then her belief is a self-fulfilling one. In the same way, if someone believes she has a virtue due to labelling, then she has factitious virtue. But a placebo isn't a drug, and it might be argued by analogy that factitious virtue is not a real virtue. What do we mean by a virtue? According to the Cambridge Online Dictionary, virtue is "a good moral quality in a person or the general quality of being morally good." If we accept this definition then factitious virtue is a real virtue in a narrow sense, because it induces a good quality in a person, and the argument by analogy fails; however, labelling does not seem to induce the more global quality of being morally good.

I now want to examine whether factitious virtue is a real virtue in the broader sense of being connected to being a morally good person. Factitious virtue differs from the more traditional virtues in the way it is acquired; does this difference in acquisition mean factitious virtue is not a real virtue? Julia Annas argues that we acquire the virtues by learning (6). Learning requires some skill. If someone acquires the factitious virtue of caring by means of labelling, then her acquisition need not involve any skill. It follows, provided Annas is correct, that factitious virtue is not a real virtue. Annas further argues that we cannot acquire a moral virtue in isolation; for instance, someone cannot learn to be caring without also learning to be just. Perhaps we can acquire non-moral virtues such as courage in isolation. It follows that if someone acquires one moral virtue then in doing so she must acquire others, because there is some unity of the moral virtues, and this leads her to being a morally good person. Beneficence is a moral virtue, and someone might become more beneficent by being labelled as caring. However, acquiring the factitious virtue of caring by labelling doesn't require that someone acquire any other moral virtues. It again follows, provided Annas is correct, that factitious virtue is not a real virtue in the broader sense. Nonetheless, factitious virtue remains a real virtue in the narrow sense, because it induces a good quality in a person.

I now want to consider two objections to regarding factitious virtue as a real virtue in even the narrow sense. Firstly, it might be argued that any real virtue must be stable over time and that once labelling ceases a factitious virtue slowly decays. Michael Bishop argues that positive causal networks (PCNs) are self-sustaining (7). A PCN is a cluster of mental states which sustain each other in a cyclical way. For instance, confidence and optimism might aid someone to be more successful, and her success in turn boosts her confidence and optimism. Bishop argues that successful relationships, positive affect and healthy relationship skills and patterns form such a network (8). Healthy relationship skills include trusting, being responsive to someone's needs and offering support. Healthy relationship skills involve caring, and so it is possible that caring is part of a self-sustaining network. It follows that if the factitious virtue of caring is induced in someone, then once induced this factitious virtue has some stability. Whether such a possibility exists for other factitious virtues is not a question for philosophy but for empirical research. It would appear that at least one important factitious virtue, that of caring, might be stable over time, and that this might be true of others.

Secondly, it might be argued that a virtue is not something we simply accept, not something induced in us in the way a virus might induce a disease. It might be argued that unless we autonomously accept some virtue, it isn't a real virtue. I accept this argument. It might then be further argued that because we don't autonomously accept a factitious virtue, factitious virtues aren't really virtues. I would reject this further argument. There is a difference between autonomously accepting something and making an autonomous decision. What does it mean to autonomously accept something? I would suggest it means identifying oneself with the thing one accepts; it means caring about something. This caring means someone "makes himself vulnerable to losses and susceptible to benefits depending upon whether what he cares about is diminished or enhanced", according to Frankfurt (9). It might be suggested that if a factitious virtue is induced in us, then there is no need for us to identify with that virtue. I now want to argue that this suggestion is unsound. According to Frankfurt, what someone loves, cares about or identifies with is defined by her motivational structures.

“That a person cares about something or that he loves something has less to do with how things make him feel, or his opinions about them, than the more or less stable motivational structures that shape his preferences and guide his conduct.” (10)
Frankfurt also believes our motivational structures are defined by what we are satisfied with, what we passively accept (11). To autonomously accept something means we are satisfied with our acceptance and experience no resistance to, or restlessness with, that acceptance. Let us return to factitious virtue. Labelling, if it is to be effective, must be done in the right circumstances: it must be public and believable to the person labelled. In my previous example, telling the daughter in question that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling the daughter in public that she is a caring person when she has just helped someone to cross the road would be, and she would be unlikely to resist such labelling. If we accept the above analysis of autonomous acceptance, then the daughter autonomously accepts the factitious virtue. I would also suggest that a lack of resistance or restlessness towards accepting what they are being taught is the way in which traditional virtue ethicists see children as coming to autonomously accept the virtues. It follows that we autonomously accept factitious virtues in much the same way as we accept real virtues.

Does factitious virtue matter? Let us accept without argument that the world would be a better place if people acted virtuously. Let us also accept that factitious virtues act in much the same way as real virtues, at least for a period. It follows that factitious virtues can make the world a better place for a period even if they are relatively short lived. It would also appear that because the factitious virtue of caring has some stability, it can improve the world in a more lasting way. Intuitively a more caring world is a better world. However, it might be argued that our intuitions are unsound: factitious virtue might indeed make people more caring, but only by making them care more for those already close to them to the detriment of others. In response, I would first point out that not all ethical decisions are best made by considering what a virtuous person would do; some are best made using consequentialist or deontological considerations. Secondly, it might be feasible to extend the domain of factitious caring by well-considered labelling. Labelling someone as caring for strangers in the right circumstances might extend this domain. Accepting the above means accepting that the factitious virtue of caring might well improve the world in a lasting way, and that factitious virtue matters.

  1. Mark Alfano, 2013, Character as Moral Fiction, Cambridge University Press.
  2. Alfano, 4.1.
  3. Alice Isen & Paula Levin, 1972, The effect of feeling good on helping: cookies and kindness, Journal of Personality and Social Psychology, 21(3), 384-388.
  4. Alfano, 4.2.2.
  5. Alfano, 4.3.1.
  6. Julia Annas, 2011, Intelligent Virtue, Oxford University Press, page 84.
  7. Michael Bishop, 2015, The Good Life, Oxford University Press, chapter 3.
  8. Bishop, page 75.
  9. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.
  10. Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 129.
  11. Frankfurt, 1999, Necessity, Volition, and Love, page 103.

Monday, 7 March 2016

Algorithmic Assisted Moral Decision Making


Albert Barqué-Duran wonders whether humanity wants computers making moral decisions. In the face of the coronavirus outbreak, when we are faced with difficult and complex moral decisions, it might be suggested that we need all the help we can get. However, is it even right to outsource any of our moral decisions? If we can’t outsource at least part of our moral decision-making, then the whole idea of applied philosophy has very shaky foundations. I believe applied philosophy remains a meaningful discipline, even if some of its practitioners seem to over-elaborate in an attempt to justify both their position and the discipline itself. Let us assume outsourcing some moral decisions can be justified; a doctor might for instance trust a bioethicist. If it can be right to outsource some moral decision-making to experts in some fields, could it also be right to outsource some moral decision-making to algorithms or machines? In this posting I want to consider this question.

What is a moral decision? It isn’t simply a decision with ethical implications, which computers can already make; it is a decision based on moral considerations. It is feasible that in the future computers might make moral decisions completely by themselves, or might aid people to make such decisions in much the same way as computers now aid people to design things. In this posting I want to examine whether humanity might want computers to make moral decisions. I will consider three reasons why it shouldn’t. Firstly, it might be argued that only human beings should make moral decisions. Secondly, it might be argued that even if computers can make moral decisions, some of these decisions would not be in the interests of human beings. Lastly, it might be argued that computers might make decisions which human beings don’t understand.

First let us assume that some computers could make moral decisions independently of human beings. A driverless car would not possess such a computer, as it makes decisions based on parameters given to it by human beings and in doing so acts instrumentally to serve purely human ends. Personally I am extremely doubtful whether a computer which can acquire the capacity to make moral decisions independently of human beings is feasible in the foreseeable future. Nonetheless such a computer remains a possibility, because human beings have evolved such a capacity. If we accept that such a computer is at least possible, do any of the three reasons above justify our fears about it making moral decisions? Firstly, should only human beings make moral decisions? If any sentient creature has the same or better cognitive capacities than ourselves, then it should have the same capacity as we do to make moral decisions. We seem quite happy with the idea that aliens could make moral decisions, and in fiction at least we seem quite happy about computers or robots doing so. Prima facie, provided a machine can become sentient, it should be able and allowed to make moral decisions. Secondly, might such a computer make decisions which are not in the interests of human beings? It might, but I would suggest what really matters is that it takes human interests into account. Let us accept that the domain of creatures we could possibly feel sympathy for defines the domain of creatures that merit our moral concern; this includes animals but not plants. If a computer tries to make moral decisions without some form of sympathy, then it might mistakenly believe it is making moral decisions about rocks, shoes and even clouds. Once again I would reiterate that at present a computer which feels sympathy is an extremely fanciful proposition. Let us accept that a computer that cannot feel sympathy cannot make moral decisions independently of human beings.
Let us assume a computer capable of feeling sympathy is possible. Let us also assume that such a computer will have the cognitive powers to understand the world at least as well as we do. It follows that such a computer might make some decisions which are not in human interests, but that it should always consider human interests; surely this is all we can ask for. Thirdly, might we not be able to understand some of the moral decisions made by any computer capable of making moral decisions independently of human beings? John Danaher divides this opacity into three forms: intentional, illiterate and intrinsic algorithmic opacity. I have argued above that any computer which can make moral decisions must be capable of feeling a form of sympathy; because of this I will assume intentional opacity should not be a problem. Illiterate opacity means some of us might not understand how a computer reaches its decision, but does this matter as long as we understand the decision is a genuine moral decision which takes human interests into account? Lastly, intrinsic opacity means there may be a mismatch between how humans and computers capable of making moral decisions understand the world. Understanding whether such a mismatch is possible is fundamental to our understanding of morality itself. Can any system of morality be detached from affect, and can any system of morality be completely alien to us? I have tried to cast some doubt on this possibility above by considering the domain of moral concern. If my doubts are justified, then any mismatch in moral understanding cannot be very large.

Let us accept that even if computers capable of making moral decisions independently of human beings are possible, this possibility will only come into existence in the future, probably the far distant future. Currently there is interest in driverless cars having ethical control systems, and we already have computer aided design. It is then at least conceivable that we might develop a system of computer aided moral decision-making. In practice it would be the software in the computer which would aid in making any decisions, so the rest of this posting will be concerned with algorithmic aided moral decision making. Giubilini and Savulescu consider an artificial moral advisor which they label an AMA, see the artificial moral advisor. In what follows AMA will refer to algorithmic aided moral decision making. Such a system might emerge from a form of collective intelligence involving people in general, experts and machines.

Before considering whether we should trust an AMA system, I want to consider whether we need such a system. Persson and Savulescu argue that we are unfit to make complicated moral decisions in the future and that there is a need for moral enhancement (1). Perhaps such enhancement might be achieved by pharmacological means, but it is also possible our moral unfitness might be addressed by an AMA system which nudges us towards improved moral decision making. Of course we must be prepared to trust such a system. Indeed, AMA might be a preferable option to enhancing some emotions, because enhancing emotions might damage our cognitive abilities (2) and boosting altruism might lead to increased ethnocentrism and parochialism, see practical ethics.

Let us accept that there is a need for AMA provided it is feasible. Before questioning whether we should trust an AMA system I need to sketch some possible features of one. Firstly, when initialising such a system a top down process would seem preferable, because if we use one we can at least be confident the rules it tries to interpret are moral rules. At present any AMA system using virtue ethics would seem to be unfeasible. Secondly, we must consider whether the rules we build into such a system should be deontological or consequentialist. An AMA system using deontological rules might be feasible, but because computers are good at handling large quantities of data it might be preferable to initially install a consequentialist system of ethics. In the remainder of this posting I will only consider AMA based on a consequentialist system of ethics; as this is only a sketch I will not consider the exact form of consequentialism employed. Thirdly, any AMA system operating on a consequentialist basis must have a set of values. Once again when initialising the system we should use a top down approach and install human values. Initially these would be fairly primitive values such as avoiding harm. In order to initialise the AMA system we must specify what we mean by harm. Perhaps the easiest way would be to define harm as suffering by some creature or the loss of some capacity by that creature. We might specify the sort of creature which can be harmed by a list of sentient creatures. Next we might simply specify suffering as pain. Lastly we would have to specify a list of capacities for each creature on our list of sentient creatures.
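As a purely illustrative sketch of the harm specification just described, the top down initialisation might be encoded as follows. All names, capacities and structures here are my own assumptions for illustration, not part of any existing AMA system:

```python
from dataclasses import dataclass

# Hypothetical encoding of the harm specification sketched above:
# harm is suffering (specified here simply as pain) or the loss of a
# capacity, and only creatures on an explicit list of sentient
# creatures can be harmed.

@dataclass
class SentientCreature:
    name: str
    capacities: list  # capacities whose loss would count as harm

# The list of sentient creatures and their capacities is installed
# top down when the system is initialised (entries are illustrative).
SENTIENT_CREATURES = [
    SentientCreature("human", ["life", "mobility", "cognition"]),
    SentientCreature("dog", ["life", "mobility"]),
]

def is_harm(creature_name: str, effect: str) -> bool:
    """An effect counts as harm only if it is pain, or the loss of a
    listed capacity, suffered by a creature on the sentient list."""
    for creature in SENTIENT_CREATURES:
        if creature.name == creature_name:
            return effect == "pain" or effect in creature.capacities
    return False  # rocks, shoes and clouds cannot be harmed
```

On this sketch `is_harm("human", "pain")` is true while `is_harm("rock", "pain")` is false, capturing the point made earlier that the domain of moral concern excludes non-sentient things.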

At this juncture we have an extremely primitive system of AMA. This system could be regarded as a universal system which, in the same circumstances, would always make the same decision. A private morality is of course nonsense, but nonetheless we must identify with our moral decisions, and a universal morality might be hard to identify with. At this point the user of such a system might modify it by adding weights to the built-in values. For instance, a user might give a greater weight to avoiding harm and acting beneficently and a lesser weight to respecting autonomy in situations in which these values clash. We now have a primitive system the user might identify with. This primitive system might be further modified from the bottom up by the use of two feedback loops. Firstly, the user of a system must inform the system whether she accepts any proposed decision. If the user accepts the proposed decision, then this decision can be made a basis for similar decisions in much the same way as legal judgements set precedents for further judgements. If the user doesn’t accept a particular decision, then the system must make clear to the user the weights attached to the values it used in making this decision and any previous decisions used. The user might then refine the system either by altering the weights attached to the values involved and/or by feeding into the system how the circumstances of the current decision differ from the circumstances of the past decisions used. Lastly, in this joined up age, the system’s user might permit the system to use the weights and decisions made by other systems belonging to people she trusts or respects. Employing such a system might be seen as employing a system of collective intelligence which uses both humans and algorithms in making moral decisions.
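The weighted values and the two feedback loops described above can be sketched minimally in code. This is a hypothetical illustration only: the value names, the 0-to-1 scoring convention and the whole API are my own assumptions, not a real AMA implementation:

```python
class AMASketch:
    """Hypothetical algorithmic moral advisor: weighted consequentialist
    scoring plus the two user feedback loops described in the text."""

    def __init__(self, weights):
        self.weights = dict(weights)   # user's weights on the built-in values
        self.precedents = []           # accepted decisions act like precedents

    def score(self, option):
        # option: mapping from value name to how well it is served (0..1)
        return sum(w * option.get(value, 0.0)
                   for value, w in self.weights.items())

    def propose(self, options):
        # First loop: an option matching an accepted precedent is reused,
        # much as a legal judgement sets a precedent for similar cases.
        for name, option in options.items():
            if option in self.precedents:
                return name
        # Otherwise propose the option with the highest weighted score.
        return max(options, key=lambda name: self.score(options[name]))

    def give_feedback(self, option, accepted, new_weights=None):
        # Accepted decisions become precedents; on rejection the weights
        # are exposed (returned) and the user may revise them.
        if accepted:
            self.precedents.append(option)
        elif new_weights:
            self.weights.update(new_weights)
        return self.weights

# A user who weights avoiding harm above respecting autonomy:
ama = AMASketch({"avoid_harm": 0.7, "respect_autonomy": 0.3})
options = {
    "intervene":  {"avoid_harm": 0.9, "respect_autonomy": 0.2},
    "stand_back": {"avoid_harm": 0.3, "respect_autonomy": 0.9},
}
proposal = ama.propose(options)  # "intervene" under these weights
```

If the user rejects the proposal, `give_feedback` lets her shift the weights towards autonomy, after which the same options yield a different proposal; if she accepts, the decision joins the precedent list and is reused in similar cases.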

I now want to return to my original question: should we trust the decisions such an AMA system makes? I want to consider the three reasons outlined above as to why we shouldn’t, and I will conclude that each of them appears to be unsound in this context. Firstly, an objector might suggest that if someone relies on such a system she isn’t really making moral decisions. I suggested a moral decision is a decision based on ethical considerations. The inbuilt values of the system are moral values, so it would appear any decision made by the system is based on ethical considerations. However, my objector might now suggest that if someone makes a moral decision she must identify herself with that decision. Even if we accept this suggestion, because the system relies on moral values weighted by the user, any decision it makes must be based on values she identifies herself with. Secondly, my objector might suggest that such a system does not serve human interests. The system sketched above is a consequentialist system and it might make decisions which aren’t in the user’s self-interest; however, because the values built into it are human values, the system should always act in humans’ interests. It might of course make bad decisions when trying to serve those interests, but then so do humans themselves. Lastly, my objector might return to Danaher’s opacity question and suggest that the user of such a system might fail to understand why the system made a particular decision. I would suggest that because the system has feedback loops built into it this shouldn’t occur. I would further point out that because it is always the user who implements the decision, and not the system, the user retains a veto over the system.

This examination has been extremely speculative. It seems to me that whether we would want such computers to make moral decisions depends on the background circumstances. All moral decisions are made against some background. Sometimes this background is objective and sometimes it includes subjective elements. For instance, someone’s decision to have an abortion contains subjective elements, and someone is unlikely to use AMA to help in making such a decision. The covid-19 outbreak creates a number of moral questions for doctors treating covid patients with limited resources, questions which need to be answered against a mainly objective background. Such decisions are more amenable to AMA and perhaps now would be a good time to start designing such systems for emergencies.


  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.
  2. Christoph Bublitz, 2016, Moral Enhancement and Mental Freedom, Journal of Applied Philosophy, 33(1), page 91.

Monday, 22 February 2016

Traditional and Nussbaum's Transitional Anger


The world is an angry place and it seems this anger is increasing, see for instance why are Americans so angry? Anger is a basic emotion which goes back into our evolutionary past and is one of the basic emotions everyone seems to recognise according to Paul Ekman’s studies. In the past anger must have served some useful purpose, but is anger still useful in today’s culture? Martha Nussbaum defines two types of anger. Traditional anger, which

“involves, conceptually, a wish for things to go badly, somehow, for the offender in a way that is envisaged, somehow, however vaguely, as a payback for the offense” (1)

Nussbaum also defines transitional anger as follows.

“There are many cases in which one gets standardly angry first, thinking about some type of payback, and then, in a cooler moment, heads for the Transition.” (2)

According to Nussbaum traditional anger can transmute into transitional anger, which

“quickly puts itself out of business, in that even the residual focus on punishing the offender is soon seen as part of a set of projects for improving both offenders and society.” (3)

The type of anger given to us by evolution appears to be traditional anger, but we are no longer hunter gatherers, so perhaps traditional anger no longer serves its original purpose and we should always transmute it into transitional anger. Perhaps if such a transmutation is possible our society might become less angry. In this posting I will argue that whilst in most situations we should transmute our anger into useful action, there are some situations in which it is right to maintain anger.

Stoics such as Seneca argued that anger is a dangerous emotion, a type of temporary madness, which ought to be eliminated or controlled. But emotions aren’t simply somatic responses. According to Michael Brady, emotions act as a kind of mental alarm. They do so by facilitating,

“reassessment through the capture and consumption of attention; emotions enable us to gain a “true and Stable” evaluative judgement.” (4)

Alarms need attending to and this requires action; anger requires attention and action rather than simply control or elimination. Simply controlling anger leads to resentment, which is bad both for individuals and for society. Traditional and transitional anger lead to different sorts of actions, and I will examine the appropriateness of these different actions in our society.

If we accept that anger is a kind of warning about some harm, then this explains why being angry makes no sense in some situations. If someone becomes angry when diagnosed with cancer, her anger does not act as a useful warning. Anger isn’t just a general warning about any situation; it’s a warning about social situations. Anger should be a call to action connected to some wrongdoing. Anger as traditionally envisioned has a target and a focus. The target is the person or institution which inflicted the wrongdoing and the focus is the wrongdoing itself. This wrongdoing causes harm, and the call for action created by anger seeks to address this harm. Traditional and transitional anger seek to address it in different ways. Traditional anger seeks to mend the harm done by making the offender suffer, changing the status of the offender. Nussbaum regards this as a kind of magical thinking rooted in our past. Traditional anger, so conceived, is not only rooted in our past but deals with past harm; the harm is done, and making the offender suffer does not appear to mitigate it. Nussbaum argues we should reject such a concept and replace it with the concept of transitional anger. Transitional anger doesn’t focus on status; instead it focuses on future welfare from the start by trying to mitigate the harm involved. I agree with Nussbaum that our concept of anger sometimes needs updating, but will argue that traditional anger still has an important part to play.

Traditional anger is concerned with the difference in status between the target and the victim.  Concern for this difference in status can lead to non-productive behaviour. Nussbaum gives an excellent example.

“People in academic life who love to diss scholars who have criticized them and who believe that this does them some good, have to be focusing only on reputation and status, since it’s obvious that injuring someone else’s reputation does not make one’s own work better than it was before, or correct whatever flaws the other person has found in it.” (5)

Nussbaum’s example clearly shows that concern with differences in status can lead to rather silly behaviour when anger is concerned only with past and present wrongs. If traditional anger is focussed only on past and present harms, then perhaps we should always transmute it into transitional anger, provided of course we are the sort of creatures capable of carrying out such a transmutation. Greg Caruso believes empirical evidence suggests that the strike back emotion plays an important role in our moral responsibility beliefs and practices, making such a transmutation difficult, see psychology today. However, anger is sometimes focussed on the future; indeed, if anger acts as some kind of alarm requiring action, then its very nature means it must contain a forward looking element.

Let us accept that anger should trigger a forward looking element. Even if we accept the above, it doesn’t automatically mean we should always try to transmute traditional anger into transitional anger. Nussbaum herself suggests that transitional anger,

“focuses on future welfare from the start. Saying ‘Something should be done about this.’” (6)

Let us now accept that transitional anger is forward looking, seeking to alleviate the harm which caused the anger.

I now want to argue that the nature of the harm involved should determine whether traditional or transitional anger should be the appropriate response. Nussbaum uses a case of rape as an example.

“Offender O has raped Angela’s close friend Rebecca on the campus where both Angela and Rebecca are students. Angela has true beliefs about what has occurred, about how seriously damaging it is, and about the wrongful intentions involved: O, she knows, is mentally competent, understood the wrongfulness of his act.” (7)

Angela is justifiably angry, but Nussbaum nonetheless suggests that she should try to transmute her raw traditional anger into transitional anger.

“Angela is likely to take a mental turn toward a different set of future-directed attitudes. Insofar as she really wants to help Rebecca and women in Rebecca’s position … helping Rebecca get on with her life, but also setting up help groups, trying to publicize the problem of campus rape and to urge the authorities to deal with it better.” (8)

Let us assume O was Rebecca’s boyfriend, sees that he acted wrongfully, is remorseful, and is no more likely to rape someone in the future than anyone else. With these caveats in place, punishing O will not lessen the harm done to Rebecca, and I am inclined to agree with Nussbaum that Angela would be right to transmute her traditional anger into transitional anger. Such a transmutation might prove difficult due to anger’s usefulness in our evolutionary past, and even if it could be achieved such a case should still involve justice. In these circumstances I would suggest that the appropriate justice would be restorative justice.

Not all harms are physical; some involve intimidation and others a failure of recognition. In what follows I will argue that maintaining our anger, rather than transmuting it, is a more appropriate response to both of these harms. In some sports a bad tackle by player A might injure player B. The physical injury inflicted by A cannot be undone by B causing A to suffer, but if A’s purpose was to intimidate B, then B’s retaliation causing A to suffer might well target A’s intimidation. Maintaining traditional anger would be more appropriate in this situation than transitional anger. A wife’s abuse by her husband in order to intimidate her might also be better addressed by maintaining traditional anger, provided of course this is possible. Intimidation, whilst a serious problem, is not a widespread one. A failure to recognise the rights of others is more widespread. This failure might be due to inconsideration or a lack of attention, or it might even be intentional. Let us reconsider Nussbaum’s example. Let us assume O doesn’t recognise the wrongfulness of his actions and also doesn’t recognise that women merit the same status as men. In this scenario it seems to me that maintaining traditional anger would be a more appropriate response than transitional anger. I accept that the harm done to Rebecca cannot be undone by making O suffer; nonetheless O’s continuing failure to recognise women as having the same rights as men might be targeted by making O suffer, might be addressed by traditional anger. It appears that in cases in which anger is generated by a lack of recognition, raw traditional anger ought to be the appropriate response. Anger in this situation must still be transmuted into action appropriate to gaining this recognition, and this action might justifiably include inflicting harm on the offender in order to achieve it. I believe this appearance needs to be qualified.
At the beginning of this post I remarked that people appear to be getting angrier; perhaps this is because our society is not very good at recognising individuals. Useful anger must be effective anger. I would suggest that targeting society as a whole using traditional anger is not useful, and that it would be better to employ transitional anger. The boundary between offenders who should be targeted by traditional anger and those who should be targeted by transitional anger is hard to define. Clearly society as a whole should be targeted by transitional anger and some individuals by traditional anger, but what about corporations and other organisations?


What conclusions can be drawn from the above? Dylan Thomas asks us not to “go gentle into that good night” but to “rage, rage against the dying of the light.” If anger is an alarm, then rage at those things we can do nothing about is inappropriate. Anger, if it is to be a useful emotion, must be capable of being transmuted into something else. It follows that in situations in which a transmutation of any sort is impossible, anger is not a useful emotion and should be avoided, provided this is possible. Secondly, there are some situations in which the focus of anger is not ongoing, and transitional anger seems the right sort of anger to employ, once again provided this is possible. In such situations the infliction of harm on the wrongdoer seems pointless. Retributive justice might require some harm, but I am considering anger in isolation from justice; I have suggested above that in such situations restorative rather than retributive justice would be more appropriate. According to Nussbaum, in such situations anger should be transmuted into actions aimed at a set of projects for improving both offenders and society. Lastly, there are situations in which the focus of anger is intimidation or a failure of recognition; in these situations traditional anger ought to be employed and the infliction of harm on the wrongdoer might be appropriate. Even here the aim of anger mustn’t be payback but recognition, and the anger employed should be transmuted into actions appropriate to achieving this recognition.


  1. Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 46.
  2. Nussbaum, page 53.
  3. Nussbaum, page 51.
  4. Michael Brady, 2013, Emotional Insight: The Epistemic Role of Emotional Experience, Oxford University Press, page 147.
  5. Nussbaum, page 49.
  6. Nussbaum, page 54.
  7. Nussbaum, page 46.
  8. Nussbaum, page 49.

Tuesday, 2 February 2016

Terminally ill patients and the right to try new untested drugs


In the United States nearly half of the states have passed a “right to try” law, which attempts to give terminally ill patients access to experimental drugs. Some scientists and health policy experts believe such laws can be harmful by causing false hopes and even suffering. Rebecca Dresser argues that states should not implement such laws due to the dashed hopes, misery, and lost opportunities which can follow from resorting to unproven measures, see hastings centre report. For instance, someone might lose the opportunity to spend his last days with his family in a futile attempt to extend his life. In this posting I want to examine the right of terminally ill patients to try experimental drugs which have not been fully tested. My comments here will only apply to experimental drugs, but I would suggest that they could equally apply to all experimental treatments, such as the use of Crispr gene editing tools. In what follows, experimental drugs will refer to new drugs which have not yet been fully tested. Of course, pharmaceutical companies must be willing to supply these drugs. I am only examining the right of patients to try experimental drugs which pharmaceutical companies are willing to supply, and not patients’ rights to demand these drugs. In practice pharmaceutical companies might be unwilling to supply such drugs because of a fear of litigation; I will return to this point at the end of my posting.

I fully accept Dresser is correct when she asserts that experimental drugs might cause dashed hopes, misery, and lost opportunities. Untested drugs can cause harm. It is this harm that forms the basis for not allowing terminally ill patients access to these drugs. I now want to examine in more detail the harm that access to experimental drugs might cause to the patients who take them. I will then examine how access to these drugs might harm future patients by distorting drug trials.

How might access to experimental drugs harm the patients who take them? Firstly, they might further limit a patient’s already limited lifespan. Secondly, they might cause a patient greater physical suffering. Lastly, they might cause him psychological suffering by falsely raising hopes and then dashing those hopes if they fail. I will now examine each of these three possible harms in turn. Previously I have argued that terminally ill patients, and those suffering from Alzheimer’s disease and other degenerative conditions, should have a right to assisted suicide, see alzheimers and suicide. If terminally ill patients have a right to end their lives, it seems to follow that the fact that experimental drugs might possibly shorten someone’s life does not give us a reason to prohibit the taking of such drugs. It might be objected that someone taking a drug to end his life and someone taking an experimental drug to extend his life have diametrically opposed ends. However, even if this is true, a patient taking a drug to try to extend his life should be aware that it might do the opposite. Provided a patient is reasonably competent and aware that such a drug might shorten his life, it should be up to him to decide whether he is prepared to accept the risk of shortening his life in order to have the possibility of extending it. It might now be objected that by providing experimental drugs to someone we are not acting in a caring way, we are not acting beneficently. In response I would argue that the opposite holds, and that if we prohibit the use of these drugs we are caring for patients rather than caring about them. Caring for differs from caring about. If I care for a dog I must consider what is in its best interests. If I care about a person I must care both about what is in his best interests and about what he thinks is in his best interests. Failure to do so is a failure to see him as the sort of creature who can decide about his own future, and displays moral arrogance.
I have argued elsewhere that if I care about someone in a truly empathic way I must care about what he cares about, rather than simply what I think might be in his best interests, see woolerscottus. It appears to follow that competent patients should not be prohibited from taking experimental drugs which might shorten their lives, provided they are aware of this fact. After all, smoking shortens many smokers’ lives, but because we respect autonomy smoking remains permissible.

It might be objected that the above argument is unsound because terminally ill patients are often not the sort of creatures who can really make decisions about their own future. As it stands this objection fails, since the terminally ill can make some decisions about their treatment. For instance, it is perfectly acceptable for a patient to choose to forgo some life-extending treatment in order to have a better quality of life with his family. The objection can, however, be modified. It might be argued that terminally ill patients are not good at making decisions about their future, or lack of it. This might be caused by stress, a disposition towards false or exaggerated optimism, or an inability to understand probabilities. In response I would point out that it is not only the terminally ill but also the public and some doctors who are not very good at understanding probability, see Helping doctors and patients make sense of health statistics. Nonetheless false optimism remains, and this false optimism might distort a terminally ill patient’s decision-making capacity. What exactly is meant by false optimism? Is it just a failure to understand probability, or is it someone assigning different values, different weights, to the things he finds to be important? No decisions are made without reference to these weights, our values, and it follows that changing our values might change the decisions we make without any alteration to the probability of certain events occurring. What might appear to us as false optimism might simply be someone giving different weights to what he finds important. I would argue that we must accept that the terminally ill have a right to determine their own values and assign their own weights to the things they find pertinent to their decision-making, for two reasons. Firstly, we should recognise that the terminally ill remain the sort of creatures who can and should make decisions about their own future. Secondly, most of us are in a state of epistemic ignorance about what it means to experience terminal illness, and if we criticise the values of the terminally ill we are guilty of epistemic arrogance. It would appear that if we accept that the terminally ill are the kind of creatures who can make decisions about their own future, then the fact that experimental drugs might shorten their lives does not give us reason to prohibit them from using such drugs.

Patients who take experimental drugs might cause themselves physical harm. The first principle of medical ethics is to do no harm, non-maleficence, so it might be argued that the prescription of such drugs by medical practitioners should be prohibited. This argument is unsound. Chemotherapy harms patients, but this harm is offset by its benefits. Let us accept that an experimental drug might harm a patient but that it might also benefit him; indeed, such drugs are only tested because it is believed that they might benefit patients. The argument might be modified. It might now be argued that the prescription of experimental drugs by medical practitioners should be prohibited until such a time as the benefits of taking these drugs can be shown to offset any harm they cause. However, this modified argument is also unsound. Chemotherapy does not always benefit patients, and so does not always offset the harm it causes; if we accepted the argument we should also have to prohibit chemotherapy, which is nonsensical. The argument might be modified still further. It might now be argued that the prescription of experimental drugs by medical practitioners should be prohibited until such a time as the benefits of taking such drugs can be shown to offset any harm they cause in the majority of cases. This further modified argument is really an argument about how much risk patients should be exposed to.

The reason we don’t want to expose patients to excessive risk is that we care about them. However, we don’t prohibit paragliding, even though we care about those who participate. Who should determine what risks are acceptable? The same argument I employed above, showing that patients have the right to risk shortening their lives for some limited chance of life extension provided they understand the risks involved, can be used to deal with the risk of patients physically harming themselves. I would suggest that if we prohibit the use of experimental drugs which might harm patients but which might also benefit them, then once again we are caring for patients rather than caring about them. I argued above that we should care about people in a different way to the way we care for dogs. Failure to do so is a failure to see patients as the sort of creatures who can make decisions about their own future, and displays moral arrogance. If patients understand the risks involved, it should be up to them to decide whether they are prepared to accept those risks. The last way the use of experimental drugs might harm patients is by causing psychological suffering, raising false hopes and then dashing those hopes if the drugs fail. I believe the above argument can once again be applied in this context, and I will not repeat it. To summarise, it would seem that possible harm to actual patients is not a reason to prohibit access to experimental drugs, provided patients are aware of this possible harm.

Even if we accept the above somewhat tentative conclusion, it doesn’t follow that we don’t still have a reason to prohibit terminally ill patients’ access to experimental drugs. Future patients might be harmed if drugs end up not being fully tested. Drug trials are expensive, and if pharmaceutical companies can rely on data obtained by using a drug on terminally ill patients then they might be reluctant to finance fully fledged trials. This might lead to two problems. Firstly, some drugs which appear not to harm terminally ill patients might harm other patients; the long-term effects of a drug which extends a patient’s life in the short term might not become apparent. Secondly, some drugs which do not appear to have any effect on terminally ill patients might be effective on less seriously ill patients, and such drugs might never become available to future patients. Can these two problems be solved?

Regulation might solve the first problem. Experimental drugs might be used on terminally ill patients if they desire them, but their use should not be permitted on other patients until after a full clinical trial. It might appear that, because there are fewer terminally ill patients than other patients, pharmaceutical companies would continue to conduct full clinical trials of experimental drugs. However, this appearance might be deceptive. Pharmaceutical companies might try to stretch the definition of a terminally ill patient so as to continue using some drugs without their ever having to undergo a full trial. This problem might be overcome by regulatory authorities insisting that experimental drugs are only used on those who are genuinely terminally ill. Applied philosophers might aid them in this task by better defining what is meant by terminal illness. The well-known physicist Stephen Hawking has motor neurone disease and it is probable that this disease will kill him, but at present he would not be classed as terminally ill. Terminal illness should be defined by how long someone will probably live rather than by the probability that his illness will kill him. Perhaps someone should not be considered terminally ill unless it is probable that he has less than six months to live. Let us consider the second problem. Might some pharmaceutical companies be tempted not to fully trial drugs which might benefit some patients, on the basis of incomplete evidence gathered from their use on terminally ill patients? Once again regulation might solve this problem. I would suggest that, provided terminal illness is defined tightly enough, this problem shouldn’t arise. A tight definition of terminal illness means fewer terminally ill patients for pharmaceutical companies to test drugs on, forcing them to conduct full clinical trials.
To summarise once again, it appears that harm to future patients does not give us reason to prohibit access to experimental drugs for the terminally ill, provided that terminal illness is tightly defined.

Lastly, at the beginning of this post I suggested that in practice pharmaceutical companies might be unwilling to supply experimental drugs for fear of litigation. It should be possible to overcome this fear by requiring patients to sign a comprehensive consent form making it clear not only that there are risks involved but also that these risks include as yet unknown ones.

The above discussion leads to the rather tentative conclusion that the terminally ill should not be prohibited from trying experimental drugs subject to certain safeguards. These are:
  1. Terminal illness must be clearly and tightly defined. Philosophy can play an important part in doing this.
  2. No drugs which have not been fully tested should be used on non-terminally ill patients except for the purpose of testing.
  3. Any terminally ill patient taking an experimental drug must sign a comprehensive consent form in the same way patients taking part in trials do. This form must make it clear that they are prepared to accept as yet unknown risks.
