Tuesday, 2 February 2016

Terminally ill patients and the right to try new untested drugs


In the United States nearly half of the states have passed a “right to try” law, which attempts to give terminally ill patients access to experimental drugs. Some scientists and health policy experts believe such laws can be harmful by raising false hopes and even causing suffering. Rebecca Dresser argues that states should not implement such laws because of the dashed hopes, misery, and lost opportunities which can follow from resorting to unproven measures, see hastings centre report . For instance, someone might lose the opportunity to spend his last days with his family in a futile attempt to extend his life. In this posting I want to examine the right of terminally ill patients to try experimental drugs which have not been fully tested. In what follows experimental drugs will refer to new drugs which have not yet been fully tested. Of course pharmaceutical companies must be willing to supply these drugs. I am only examining the right of patients to try experimental drugs which pharmaceutical companies are willing to supply, and not patients’ rights to demand these drugs. In practice pharmaceutical companies might be unwilling to supply such drugs because of a fear of litigation; I will return to this point at the end of my posting.

I accept that Dresser is correct when she asserts that experimental drugs might cause dashed hopes, misery, and lost opportunities. Untested drugs can cause harm, and it is this harm that forms the basis of the case for not allowing terminally ill patients access to these drugs. I now want to examine in more detail the harm that access to experimental drugs might cause to the patients who take them. I will then examine how access to these drugs might harm future patients by distorting drug trials.

How might access to experimental drugs harm the patients who take them? Firstly, they might further shorten a patient’s already limited lifespan. Secondly, they might cause a patient greater physical suffering. Lastly, they might cause him psychological suffering by falsely raising hopes and then dashing those hopes if they fail. I will now examine each of these three possible harms in turn. Previously I have argued that terminally ill patients, those suffering from Alzheimer’s disease and other degenerative conditions, should have a right to assisted suicide, see alzheimers and suicide . If terminally ill patients have a right to end their lives, it seems to follow that the fact that experimental drugs might possibly shorten someone’s life does not give us a reason to prohibit the taking of such drugs. It might be objected that someone taking a drug to end his life and someone taking an experimental drug to extend his life have diametrically opposed ends. However, even if this is true, a patient taking a drug to try to extend his life should be aware that it might do the opposite. Provided a patient is reasonably competent and aware that such a drug might shorten his life, it should be up to him to decide whether he is prepared to accept the risk of shortening his life in order to have the possibility of extending it. It might now be objected that by providing experimental drugs to someone we are not acting in a caring way; we are not acting beneficently. In response I would argue the opposite holds: if we prohibit the use of these drugs we are merely caring for patients rather than caring about them. Caring for differs from caring about. If I care for a dog I must attend to what is in its best interests. If I care about a person I must care both about what is in his best interests and about what he thinks is in his best interests. Failure to do so is a failure to see him as the sort of creature who can decide about his own future and displays moral arrogance.
I have argued elsewhere that if I care about someone in a truly empathic way I must care about what he cares about, rather than simply what I think might be in his best interests, see woolerscottus . It appears to follow that competent patients should not be prohibited from taking experimental drugs which might shorten their lives, provided they are aware of this fact.

It might be objected that the above argument is unsound because terminally ill patients are often not the sort of creatures who can really make decisions about their own future. The objection as it stands fails, as the terminally ill can make some decisions about their treatment. For instance, it is perfectly acceptable for a patient to choose to forgo some life-extending treatment in order to have a better quality of life with his family. The objection can, however, be modified. It might be argued that terminally ill patients are not good at making decisions about their future, or lack of it. This might be caused by stress, a disposition towards false or exaggerated optimism, or an inability to understand probabilities. In response I would point out that it is not only the terminally ill but also the public and some doctors who are not very good at understanding probability, see Helping doctors and patients make sense of health statistics . Nonetheless false optimism remains, and this false optimism might distort a terminally ill patient’s decision-making capacity. What exactly is meant by false optimism? Is it just a failure to understand probability, or is it someone assigning different values, weights, to the things he finds to be important? No decisions are made without reference to these weights, our values, and it follows that changing our values might change the decisions we make without any alteration to the probability of certain events occurring. What might appear to us as false optimism might be someone giving different weights to what he finds to be important. I would argue that we must accept that the terminally ill have a right to determine their own values and assign their own weights to the things they find pertinent to their decision-making, for two reasons. Firstly, we should recognise that the terminally ill remain the sort of creatures who can and should make decisions about their own future.
Secondly, most of us are in a state of epistemic ignorance about what it means to experience terminal illness, and if we criticise the values of the terminally ill we are guilty of epistemic arrogance. It would appear that if we accept that the terminally ill are the kind of creatures who can make decisions about their own future, then the fact that experimental drugs might shorten their lives does not give us reason to prohibit them from using such drugs.

Patients who take experimental drugs might cause themselves physical harm. The first principle in medical ethics is to do no harm, non-maleficence, so it might be argued that the prescription of such drugs by medical practitioners should be prohibited. This argument is unsound. Chemotherapy harms patients, but this harm is offset by its benefits. Let us accept that an experimental drug might harm a patient but that it might also benefit him. Indeed, such drugs are only tested because it is believed that they might benefit patients. The argument might be modified. It might now be argued that the prescription of experimental drugs by medical practitioners should be prohibited until such a time as the benefits of taking these drugs can be shown to offset any harm they cause. However, this argument is also unsound. Chemotherapy does not always benefit patients and so does not always offset any harm it causes. If we accept the above argument then we should prohibit chemotherapy; such a suggestion is nonsensical. However, the modified argument might be still further modified. It might now be argued that the prescription of experimental drugs by medical practitioners should be prohibited until such a time as the benefits of taking such drugs can be shown to offset any harm they cause in the majority of cases. This further modified argument is really about how much risk patients should be exposed to.

The reason we don’t want to expose patients to excessive risk is that we care about them. However, we don’t prohibit paragliding even though we care about those who participate. Who should determine what risks are acceptable? I can use the same argument I employed above, showing that patients have the right to risk shortening their lives for some limited chance of life extension provided they understand the risk involved, to deal with the risk of patients harming themselves. I would suggest that if we prohibit the use of experimental drugs which might harm patients but also might benefit them, then once again we are caring for patients rather than caring about them. I argued above that we should care about people in a different way to the way we care for dogs. Failure to do so is a failure to see patients as the sort of creatures who can make decisions about their own future and displays moral arrogance. If patients understand the risks involved, it should be up to them to decide whether they are prepared to accept these risks. The last way the use of experimental drugs might harm patients is by causing psychological suffering, raising false hopes and then dashing these hopes if the drugs fail. I believe the above argument can once again be applied in this context, and I will not repeat it. To summarise, it would seem that possible harm to actual patients is not a reason to prohibit access to experimental drugs, provided patients are aware of this possible harm.

Even if we accept the above somewhat tentative conclusion, it doesn’t follow that we don’t still have a reason to prohibit terminally ill patients’ access to experimental drugs. Future patients might be harmed because the effectiveness of drugs might not be fully tested in the future. Drug trials are expensive, and if pharmaceutical companies can rely on data obtained by using a drug on terminally ill patients then they might be reluctant to finance fully fledged trials. Relying on such data might lead to two problems. Firstly, some drugs which appear not to harm terminally ill patients might harm other patients. The long-term effects of a drug which extends a patient’s life in the short term might not become apparent. Secondly, some drugs which do not appear to have any effect on terminally ill patients might be effective on less seriously ill patients. Such drugs might not become available to future patients. Can these two problems be solved?

Regulation might solve the first problem. Experimental drugs might be used on terminally ill patients if they desire them, but their use should not be permitted on other patients until after a full clinical trial. It might appear that, because there are fewer terminally ill patients than other patients, pharmaceutical companies would continue to conduct full clinical trials on experimental drugs. However this appearance might be deceptive. Pharmaceutical companies might try to extend the definition of a terminally ill patient so as to continue using some drugs without their ever having to undergo a full trial. This problem might be overcome by regulatory authorities insisting that experimental drugs are used only on those who are terminally ill. Applied philosophers might aid them in this task by better defining what is meant by terminal illness. The well-known physicist Stephen Hawking has motor neurone disease and it is probable that this disease will kill him, but at present he would not be classed as terminally ill. Terminal illness should be defined by how long someone will probably live rather than by the probability that his illness will kill him. Perhaps someone should not be considered to have a terminal illness unless it is probable that he has less than six months to live. Let us consider the second problem. Might some pharmaceutical companies be tempted not to fully trial some drugs which might benefit some patients, on the basis of incomplete evidence gathered from their use on terminally ill patients? Once again regulation might solve this problem. I would suggest that provided terminal illness is defined tightly enough this problem shouldn’t arise. A tight definition of terminal illness means fewer terminally ill patients for pharmaceutical companies to test drugs on, forcing them to conduct full clinical trials.
To summarise once again it appears harm to future patients does not give us reason to prohibit access to experimental drugs for the terminally ill provided that terminal illness is tightly defined.

Lastly at the beginning of this post I suggested that in practice pharmaceutical companies might be unwilling to supply experimental drugs due to a fear of litigation. It should be possible to overcome this fear if patients are required to sign a comprehensive consent form making it clear not only that there are risks involved but also that these risks include as yet unknown risks.

The above discussion leads to the rather tentative conclusion that the terminally ill should not be prohibited from trying experimental drugs subject to certain safeguards. These are:
  1. Terminal illness must be clearly and tightly defined. Philosophy can play an important part in doing this.
  2. No drugs which have not been fully tested should be used on non-terminally ill patients except for the purpose of testing.
  3. Any terminally ill patient taking an experimental drug must sign a comprehensive consent form in the same way patients taking part in trials do. This form must make it clear that they are prepared to accept as yet unknown risks.

Friday, 8 January 2016

Driverless Cars and Applied Philosophy


Google has developed a driverless car, and major car makers such as Ford and VW are showing an interest in doing the same. It is reported that up to ten million such cars might be on the road by 2020, see businessinsider . I am somewhat doubtful about such a figure, but nonetheless we are going to get driverless cars, and their coming raises some ethical issues. According to Eric Schwitzgebel,

“determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess”,

see driverless cars . Clearly driverless cars need collision-avoiding software. Intuitively Schwitzgebel seems to be correct when he argues that part of this software should be the concern of practical ethics. For instance, driverless cars might be programmed not to protect their passengers if protecting them would harm a large number of pedestrians. In this posting I want to examine three questions. Firstly, is Schwitzgebel correct when he argues that part of this software should be the concern of practical ethics? Secondly, is such software actually possible? Lastly, if it isn’t possible to design software which can make moral decisions, should we nonetheless permit driverless cars on our roads?

What does Schwitzgebel mean when he says that the software of a driverless car should be the concern of practical ethics? In this posting I will assume he means that some rules should be built into a driverless car’s software about what to do in a dangerous situation which involves some moral considerations. Does a driverless car need such software? It is by no means certain it does. Consider a driver whose car, due to unforeseen circumstances, will collide with either a young pregnant mother or an old man. Does she make a decision about what to do based partly on applied philosophy? I suggest she does not. Of course her emotions might kick in, causing her to avoid the pregnant mother. It might then be argued that if drivers don’t, or can’t, make decisions based on applied philosophy, there is no reason why driverless cars should do so. I accept of course that driverless cars should be as safe as possible for their passengers, other road users and pedestrians.

How might the above argument be challenged? Firstly, it might be objected that my example is chosen to mislead and that in other situations, when the circumstances are much clearer, people do in fact make decisions roughly based on applied philosophy. For instance, a driver faced with the choice of hitting a concrete stanchion and killing himself or running into a queue of schoolchildren waiting at a bus stop might choose to hit the stanchion for moral reasons. I accept that in some extreme circumstances drivers might make moral decisions. Someone might object that such moral decisions are based on people’s emotions rather than on the application of applied philosophy. Applying philosophy takes time, and time is not available in a collision situation; if more time were available a driver might well take some action obviating the need for any moral input. Moreover, I would suggest that in real life this second example is just as misleading as the first. A car crashing into a queue might kill one or two people, but it is unlikely to kill a very large number. It seems to me that only a large number of victims might enable a driver to make a clear moral decision quickly. I have argued that drivers don’t usually make moral decisions when making collision decisions, and rarely if ever do so by applying philosophy. Does this mean driverless cars do not need a controlling system that takes account of moral considerations? I would suggest it doesn’t. Let us assume drivers should take into account moral considerations in collision situations provided this is possible. It appears to follow that driverless cars should have a controlling system that takes account of moral considerations in collision situations, provided this is possible.

Designing systems that enable driverless cars to make decisions which include moral considerations will be difficult. Perhaps then, rather than designing such systems, it might be better to make driverless cars avoid the circumstances in which the need to make moral decisions arises. Cars and pedestrians don’t mix, so perhaps it might be safer to limit driverless cars to motorways and other major roads. Doing so might have the additional benefit that it might prove easier to design driverless cars to avoid dangerous circumstances, obviating the necessity of making decisions based on applied philosophy, than to add software to make such decisions. Unfortunately such a course of action, whilst desirable, would seem to be impractical unless the way people use cars changes radically. People want cars to take them home, to work, to go shopping and their children to school. Satisfying these wants means mixing cars and pedestrians. Cars that don’t satisfy these wants would be unwanted. It would appear that, even though it is very hard to do, an attempt should be made to program the collision-avoiding software of driverless cars to take moral considerations into account, provided this is possible.

I have argued that the collision-avoiding software of a driverless car should include moral considerations provided this is possible. Let us turn to the second question I posed: is such software possible? I have argued that in an emergency situation in which people have to make moral judgements, they do so quickly based on their emotions. Cars don’t have emotions, so it follows that the basis of any system for making moral decisions in driverless cars will be different from that used by drivers and will be based on a set of rules. What sort of rules? Schwitzgebel argues that the rule of protecting a driverless car’s occupants at all costs is too simplistic. I would question whether such a rule is a moral rule at all. Might a strictly utilitarian rule of maximising the lives saved in a crash situation be adopted? Schwitzgebel points out that such a rule would unfairly disregard personal accountability. For instance, what if a drunken pedestrian steps in front of a car? Isn’t he accountable for his actions? If so, shouldn’t his accountability be taken into account when assessing the consequences of any decision about the oncoming collision? Could a driver spot that a pedestrian was drunk in an emergency situation? At present a driverless car’s software certainly couldn’t. It follows that any rules used by driverless cars must be primitive rules which don’t fully represent our own understanding of moral rules. It seems possible that, if we are prepared to accept some primitive rules built into a driverless car’s software, such software can make some primitive moral decisions.

Let us consider my last question. If the rules involving moral decisions which are built into a driverless car’s software must, at least for the present, be rather primitive, should we permit the use of such cars? I will now argue that we should. Firstly, I have argued that drivers don’t, or only very rarely, make moral decisions in collision situations. There is no legal requirement that drivers should make such decisions, and I can see no reason why a higher standard should be applied to driverless cars. Indeed, driverless cars might be safer. Drivers can get drunk and speed. Driverless cars can’t get drunk, and their software can control their speed. Secondly, morality evolves, and the moral rules governing driverless cars might also evolve as the technology develops. Perhaps a starting point might be Schwitzgebel’s over-simple rule of protecting a driverless car’s occupants at all costs. Such a rule seems to me to reflect the equal status of driverless and driven cars. Next, perhaps another simple rule such as maximising lives saved should be adopted. Schwitzgebel argues such a rule fails to account for accountability. I agree with Schwitzgebel. However, an imperfect rule is preferable to no rule at all, and I see no reason why this rule shouldn’t be adopted. Of course purchasers of driverless cars which incorporate this rule should be made aware of the rule and accept it. Perhaps purchasers need to be informed of the rules involved in much the same way patients are informed about the risks of surgery. Better rules should be incorporated as the technology and its associated software become available.

To summarise: firstly, it seems to me that provided driverless cars reach the same safety standards as driven cars we should be prepared to accept them, even if their collision-avoiding software does not have any rules taking moral considerations into account. Secondly, attempts should be made to develop software that does take into account some moral considerations. The rules embedded in this software might be rather primitive, but primitive rules would be better than unachievable ones. Lastly, as technology develops these rules might be further developed to become less primitive.


Sunday, 29 November 2015

Terrorism, Love and Delusion


In this posting I want to look at terrorism. As a philosopher rather than a psychologist I won’t consider the means by which potential terrorists might become radicalised; instead I will consider one of the conditions which might make some people susceptible to radicalisation. Terrorists are sometimes seen as idealists, albeit with warped ideals. I will argue that ideals are vital to us as persons and that if someone lacks ideals, this lack creates a condition in which she becomes susceptible to radicalisation.

Usually the ideals that are important to a terrorist are grand political ideals. However, I’m interested in the time before she acquires such grand ideals; I’m more interested in the mundane ideals that shape people’s everyday lives. I want to link ideals, mundane or otherwise, to what someone loves. I will assume, as Harry Frankfurt does, that someone who loves nothing at all has no ideals (1). An ideal is something someone finds hard to betray, and as a result it limits her will. Love also limits the will. Love need not be grand romantic love but can simply be seen as ‘caring about’ something in a way that limits the carer’s will. I would suggest that if someone loves something, this something forms a sort of ideal for her, as she must try to ensure the thing she loves is benefited and not harmed. If this wasn’t so she would remain indifferent to her supposed beloved rather than loving it. It is impossible for someone to be indifferent to her ideals. However, accepting the above doesn’t mean that ideals have to be grand ideals; indeed someone’s ideals can be quite modest.

I now want to argue that ideals, as defined by what we love above, are essential to us as persons. According to Frankfurt someone without ideals

“can make whatever decision he likes and shape his will as he pleases. This does not mean that his will is free. It only means that his will is anarchic, moved by mere impulse and inclination. For a person without ideals, there are no volitional laws he has bound himself to respect and to which he unconditionally submits. He has no inviolable boundaries. Thus he is amorphous with no fixed shape or identity.” (2)
Let us accept that ideals are essential to us as persons. I would suggest that someone without ideals has a sense of simply being, and I would further suggest that this sense of simply being is one that most people would find unbearable. According to Christine Korsgaard, human beings by their very nature are condemned to choosing (3). Someone without ideals has no basis on which to choose and, as Frankfurt points out, is ruled by impulse and inclination. It seems the combination of the need to choose, even if that choice is an unconscious one, and the lack of a basis for that choice is what makes simply being, simply existing, unbearable.
If one accepts the above, then the need to love something, to have ideals, expresses a quite primitive urge for psychic survival. I would suggest that in some cases this need to love something creates the conditions which make some people vulnerable to radicalisation. Of course this need to love something might be met in other ways, perhaps even in ways as mundane as keeping a pet. However, the young, perhaps especially young men, want to feel important, and perhaps this feeling causes them to prefer grand rather than mundane means of satisfying this need. In some cases the combination of the need to love and the need to feel important makes some people especially vulnerable to radicalisation.
I now want to argue that choosing to be a terrorist in order to satisfy the primitive urge to love something is a form of self-delusion. It is a self-delusion due to the nature of love. Love is not simply a matter of choosing to love. According to Frankfurt, “love is a concern for the well-being or flourishing of a beloved object – a concern that is more or less volitionally constrained so that it is not a matter of entirely free choice or under full voluntary control, and that is more or less disinterested.” (4) Now if we accept Frankfurt’s position, then when someone chooses to become a terrorist in order to satisfy her urge to love something she is deluding herself. Love is not a matter of choice, and it is impossible for someone to choose to love in order to satisfy this need. Of course such decisions about choosing to love are usually unconscious ones, but nonetheless they are still decisions and as a result remain self-deluding.
It might be objected that I am exaggerating the importance of the need to love and underestimating the need to feel important. I will now argue that even if this is so, which I don’t accept, some of the same considerations apply. To terrorists the feeling of importance is connected to violent action. Terrorists want to be considered heroes by some people. I have previously defined a hero as someone who chooses to recognisably benefit someone else or society in ways most people could not; in addition her actions must be beyond the call of duty and must involve some real sacrifice on her part, see Hobbs and Heroes . Now what motivates a true hero is a need to benefit someone else or society, not a need to be seen as a hero. Someone who pushes a person into a river in order then to rescue him certainly isn’t a hero. Someone might choose to become a hero, but if the motivation for her actions is a desire to be seen as a hero then she is deluding herself about her actions, even if this desire is an unconscious one, because no real sacrifice is involved. Indeed, it is even possible to argue that someone who resists her desire to be seen as heroic might be better seen as a hero, even if a minor one.
Let us accept that one of the conditions which makes some people susceptible to radicalisation is a sense of simply being, simply existing, due to a lack of ideals. Other conditions may play a part, but what might be done to alleviate this condition? Unfortunately there seem to be no easy or quick solutions, because real ideals must be acquired rather than given. In spite of these difficulties I will offer some rather tentative solutions. Firstly, good parenting: good parenting should always involve love. Some deprived and inarticulate parents find it hard to give or to express their love even if they are excellent parents in other ways. Some parenting skills can be taught but loving can’t. It follows that we should encourage social conditions conducive to the emergence of love. Perhaps also we should actively encourage policies that promote happiness, see action for happiness . Secondly, education must be more broadly based. Education should not only be focussed on the skills valued by employers but also on the skills that help all pupils to flourish. For instance, the skills needed in sport and music should not be considered to be on the educational periphery. Education should be broad enough that all have the opportunity to acquire skills to enable them to be good at something, rather than just skills that are good for employment. Even if ISIS can be defeated by other means or collapses due to its inherently stupid doctrines, the solutions outlined above would remain useful in building a more cohesive society.


  1. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 114.
  2. Frankfurt, page 114.
  3. Christine Korsgaard, 2009, Self-Constitution, Oxford University Press, page 1.
  4. Frankfurt, page 165.



Wednesday, 11 November 2015

Autonomy and Beneficence Revisited


I have previously argued that if someone asks me to buy him cigarettes and I would not be significantly inconvenienced, then I have reason to do so. I assumed that he was an adult fully aware of the dangers of smoking. I am a non-smoker and believe smoking is harmful. However, I also believe in giving precedence to respecting autonomy over acting beneficently. Recently a posting by Michael Cook in bioedge has caused me to question my position. Cook considers the case of a North Carolina woman called Jewel Shuping. Ms Shuping wasn’t born blind but was convinced that she was meant to be blind. According to her doctors she had Body Integrity Identity Disorder. A psychologist gave her some counselling and, after this failed, gave her some eye-numbing drops before washing her pupils with drain cleaner. Cook asks: was the psychologist right to destroy his patient’s eyesight even if she freely requested him to do so and was happy with the result of this treatment? The case of Shuping is an extreme one; however, let us assume I am a carer for someone who becomes housebound and unable to buy the cigarettes he had previously enjoyed. Let us further assume that I buy these for him for a number of years and that eventually he develops lung cancer. In this situation am I partly to blame for his condition, or have I only been respecting his autonomy? In this posting I want to examine the way in which we should respect someone’s autonomy. This examination is important because, as Cook points out, it has wider implications for informed consent in difficult contexts such as gender reassignment surgery and euthanasia.

Why did I argue that, if it didn’t inconvenience me, I should buy a smoker a packet of cigarettes when he asked me, provided he was an adult and fully aware of the dangers involved? I argued that by doing so I was respecting his autonomy. Most people would object that my buying someone cigarettes has nothing to do with respecting autonomy. Respecting someone’s autonomy, to most people, simply means not interfering with someone doing something he cares about, provided that by so doing he doesn’t harm others. If this is all it means to respect autonomy, then respecting a smoker’s autonomy gives me no reason to buy him cigarettes when he asks me to do so. Let us accept that informed consent is based on respect for patient autonomy. It then also follows that Shuping’s informed consent gave her psychologist no reason to acquiesce to her wishes. He might of course have thought he was acting beneficently.

I now want to argue that the account of autonomy outlined above is an incomplete one. I will argue that a more complete account means that someone’s autonomous wishes must carry some weight for me. Let us suppose someone asks me to do him a favour and that doing so would not significantly inconvenience me. If I respect him I must feel it would be better to satisfy these wishes, provided by doing so I do no harm. If this were not so I would be indifferent towards him. Being indifferent to someone is not compatible with showing respect. At this point it might be argued that satisfying someone’s wishes has more to do with acting beneficently towards him than respecting his autonomy. However I would reject such an argument. I can act beneficently towards my dog by satisfying his needs but this doesn’t mean I respect him. I may of course love my dog, but love differs from respect. Respecting someone as a person means accepting him as the sort of creature that can determine his own future. Respecting someone as a person means accepting that what he determines to be his wishes must have some sort of weight for me. If I see someone as the sort of creature who can determine his own future but give no weight to his wishes then I am indifferent towards him rather than respectful. It does not of course automatically follow from giving weight to his wishes that I have to satisfy them. Doing so might harm others or cause me significant inconvenience. However it does follow that if I respect someone as a person and can satisfy those of his wishes which do no harm to others without any significant inconvenience, then I have reason to do so. It further follows that a more complete account of autonomy requires satisfying someone’s autonomous wishes, provided these wishes do no harm to others and satisfying them causes me no significant inconvenience.

Let us accept this more complete account of autonomy. If we accept that informed consent is based on respect for autonomy then I would suggest Shuping’s psychologist did have reason to acquiesce to her demands. It might be objected that even if Shuping’s desire did have some weight for him, her psychologist should not have acted as he did due to the harm caused. Cook poses the question,

“Was the psychologist right to destroy his patient’s eyesight if she freely requested it, was happy with the treatment, and was living in psychological torment because she could see.”

Let us assume that Shuping would have been satisfied if the psychologist had blinded her but that he didn’t do so. Perhaps he believed his refusal to act was in her best interests. However if he did this he might be accused of epistemic arrogance. Moreover he might be accused of failing to respect her autonomy because he would be failing to see her as the sort of creature who could make her own decisions. If the above is accepted then respecting someone else’s autonomy requires that the ‘doing no harm’ condition should be replaced by ‘doing no harm on balance’. At this point it might be objected that such a concept of autonomy is far too demanding, as people cannot always decide what on balance does no harm, and that we should retain the simpler condition of doing no harm.

I now want to argue we should accept the condition of ‘doing no harm on balance’. Let us assume that embedded within our thicker account of respecting autonomy is the simpler Millian account. Let us assume our smoker makes an autonomous decision to buy cigarettes. It follows that if I respect his autonomy then, according to the Millian account, I should not act to stop him buying cigarettes by hiding his wallet. Now let us assume that he has broken his leg and that it would not inconvenience me to buy him the cigarettes. However I believe the cigarettes will cause him harm and refuse. In both scenarios I can prevent this harm: by refusing to buy cigarettes when he has broken his leg and by hiding his wallet when he hasn’t. In both of these scenarios the outcome doesn’t change. If I hide someone’s wallet then I am acting to block him from exercising his autonomy, and if I refuse to buy him cigarettes I am omitting to act. A discussion of autonomy is an unusual place for the acts/omissions controversy to arise. Does the difference between acts and omissions apply in this context? Indeed is there any real difference between acts and omissions in practical deliberation? See Julian Savulescu’s posting in practicalethics. In both of the above scenarios we are aware of the effects of our choice of behaviour. Christine Korsgaard argues that “choosing not to act makes not acting a kind of acting, makes it something that you do.” (1) I would suggest that, provided Korsgaard is correct, then whether someone chooses to act or chooses to omit to act there is no meaningful difference between acts and omissions. It is still possible that acts and omissions might differ provided one’s actions are choices one is fully conscious of and one’s omissions are unconscious choices. However is such a difference one between acts and omissions, or a difference between degrees of consciousness concerning our behaviour?
The above suggests to me that when it comes to respecting autonomy there is no meaningful difference between acts and omissions. It follows that if I believe smoking will harm the smoker, yet refrain from hiding his wallet while refusing to buy him cigarettes, I am acting inconsistently.


What conclusions can be drawn from the above? Firstly, a purely Millian account of autonomy is an incomplete account. A more complete account means that respecting someone’s autonomy requires that one must sometimes act beneficently towards him, provided doing so does not harm him on balance and does not cause significant inconvenience. Autonomy and some forms of beneficence are linked. Of course I accept that someone might have other reasons to act beneficently which are independent of respecting autonomy. Secondly, it follows that I should buy the smoker his cigarettes. Lastly, it would seem Shuping’s psychologist acted correctly. I am somewhat reluctant to accept this conclusion. Perhaps in cases in which the stakes are so high there must be some doubt as to whether one is in fact causing no harm on balance, and the precautionary principle should be applied. Nonetheless, in spite of my reluctance, I am forced to conclude that provided he was sure he was causing no harm on balance, Shuping’s psychologist was acting correctly.

  1. Christine Korsgaard, 2009, Self-Constitution, Oxford University Press, page 1.


Tuesday, 27 October 2015

Emerging AI and Existential Threats


AI is much in the news recently. Google’s chairman Eric Schmidt believes AI is starting to make real progress whilst others such as Nick Bostrom believe AI might pose an existential danger to humanity (1). In this posting I want first to question whether any real progress is in fact being made and secondly to examine the potential dangers involved. Before proceeding I must make it clear I don’t deny AI is feasible, for after all human beings have evolved intelligence. If intelligence can evolve due to natural selection then it seems feasible that it can be created by artificial means; however, I believe this will be harder to achieve than many people seem to believe.
At present computing power is rising fast and algorithms are increasing in complexity, leading to optimism about the emergence of real AI. However it seems to me that larger, faster computers and more complex algorithms alone are unlikely to lead to real AI. I will argue genuine intelligence requires a will, and as yet no progress has been made towards endowing a potential AI with a will. Famously Hume argued that reason is the slave of the passions. Reason according to Hume is purely instrumental. It might be thought that better computers and better algorithms ought to reason better at the very least. I would question whether they can reason at all, because I would suggest that reason cannot be separated from the will. In the present situation it seems to me that better computers and better algorithms only mean they are better instruments to serve our will; they don’t reason at all. The output of some computer program may indeed have some form, but this form doesn’t have any meaning which is independent of us. The form of its output alone has no more meaning than that of a sand dune sculpted by the wind. However sophisticated computers or algorithms become, if the interpretation of their output depends on human beings then they don’t have any genuine intelligence, and as a result I believe it is misleading to attribute AI to such computers or algorithms. Real AI in this posting will mean computers, algorithms or robots which have genuine intelligence. Genuine intelligence requires reasoning independently of human beings, and this reasoning involves having a will.
Let us accept that if some supposed AI doesn’t have a will then it doesn’t have any genuine intelligence. What then does it mean to have a will? According to Harry Frankfurt,
“The formation of a person’s will is most fundamentally a matter of his coming to care about certain things, and of his coming to care about some of them more than others.” (2)
For something to have a will it must be capable of ‘caring about’ or loving something. If computers, algorithms or robots are mere instruments or tools, in much the same way as a hammer is, then they don’t have any will and real AI is no more than a dream. How might we give a potential AI a will, or create the conditions from which a potential AI will acquire an emergent will? Before trying to answer this question I want to consider one further question. If something has a will must we regard it as a person? Let us assume Frankfurt is correct in believing that for something to have a will it must be capable of ‘caring about’ something. Frankfurt doubts whether something
“to whom its own condition and activities do not matter in the slightest [can] properly be regarded as a person at all. Perhaps nothing that is entirely indifferent to itself is really a person, regardless of how intelligent or emotional or in other respects similar to persons it may be. There could not be a person of no importance to himself.” (3)
Accepting the above means that having a will is essential to being a person. It also suggests that if something has a will it might be regarded as a person. This suggestion has moral implications for AI. Clearly when we switch off our computers we are not committing murder; however, if we switched off a computer or terminated an algorithm which had acquired a will, we would be. I will not follow this implication further here.

Let us return to the question as to whether it is possible to seed a potential AI with a will or create the conditions in which it might acquire one. If we accept Frankfurt’s position then for something to have a will it must satisfy three conditions.
  1. It must be able to ‘care about’ some things and care about some of them more than others.
  2. It must ‘care about’ itself.
  3. In order to ‘care about’ it must be aware of itself and other things.

Before being able to satisfy conditions 1 and 2, a potential AI must first satisfy condition 3. If we program a potential AI to be aware of itself and other things, it seems possible we are only programming the AI to mimic awareness. For this reason it might be preferable to try to create the conditions from which a potential AI might acquire an emergent awareness of itself and other things. How might we set about achieving this? The first step must be to give a potential AI a map of the world it will operate in. Initially it need not understand this map, only be able to use it to react to the world. Secondly it must be able to use its interactions with the world to refine this map. If intelligence is to be real then the world it operates in must be our world, and the map it creates by refinement must resemble our world. Robots interact more meaningfully with our world than computers do, so perhaps real AI will emerge from robots or robot swarms connected to computers. However it seems to me that creating a map of the things in our world will not be enough for a potential AI to acquire emergent awareness. For any awareness to emerge it must learn to differentiate how different things in that world react to its actions. Firstly it must learn what it can and cannot change by physical action. Secondly, and more importantly, it must learn to pick out from amongst those things it cannot change by physical action the things it can sometimes change by simply changing its own state. A potential AI must learn which things are aware of the potential AI’s states, and perhaps by doing so become aware of itself, satisfying the third of the conditions above. Meeting this condition might facilitate the meeting of the first two conditions.
For the sake of argument let us assume a potential AI can acquire a will and in the process become a real AI. This might be done by the rather speculative process I sketched above. Bostrom believes AI might be an existential threat to humanity. I am somewhat doubtful whether a real AI would pose such a threat. Any so-called intelligent machine which doesn’t have a will is an instrument and does not in itself pose an existential threat to us. Of course the way we use it may threaten us, but the cause of the threat lies in ourselves, in much the same way as with nuclear weapons. However I do believe the change from a potential AI to a real AI by acquiring a will does pose such a threat. Hume argued it wasn’t “contrary to reason to prefer the destruction of the whole world to the scratching of my finger.” It certainly seems possible that a potential AI with an emerging will might behave in this way. It might have a will equivalent to that of a very young child whilst at the same time possessing immense powers, possibly the power to destroy humanity. Any parent with a young child who throws a tantrum because he can’t get his own way will appreciate how an emerging AI with immense powers and an emergent will might pose an existential threat.
How might we address such a threat? Alan Turing proposed his Turing test for intelligence. Perhaps we need a refinement of his test to test for good will; such a refinement might be called the Humean test. Firstly such a test must test for a good will and secondly, but much more importantly, it must test whether any emergent AI might in any possible circumstances consider the destruction of humanity. Creating such a test will not be easy and it will be difficult to deal with the problem of deceit. Moreover it is worth noting that some people, such as Hitler and Pol Pot, might not have passed such a test. Nonetheless if an emerging AI is not to pose a threat to humanity the development of such a test is vital, and any potential AI which is not purely an instrument and cannot pass the test should be destroyed, even if this involves killing a proto-person.

  1. Nick Bostrom, 2014, Superintelligence, Oxford University Press.
  2. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 91.
  3. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 90.

Friday, 18 September 2015

Do Same Sex couples have a greater right to Fertility Treatment?


Emily McTernan “argues that states have greater reason to provide fertility treatment for same sex couples than for heterosexual couples” (1). She bases her argument on the premise that greater access to fertility treatment for same sex couples will encourage a diversity of ways of life and that this diversity is a social good. In this posting I will argue that she is mistaken and that same sex couples do not have a greater right to fertility treatment.

In what follows I will restrict my discussion of fertility treatment to IVF. McTernan argues that IVF cannot be justified as an element of general healthcare. Healthcare, she assumes, should be concerned with disease, and infertility is not normally a disease. She defines disease as an adverse deviation from normal species functioning, where this deviation is a deviation from what is statistically normal given someone’s age and sex. Of course there are exceptions. A woman with a specific problem such as blocked fallopian tubes has a disease and, using the above definition, has a right to fertility treatment, a right to IVF. However for most couples, especially if the woman is older, infertility is not a deviation from what is statistically normal, and as a result most couples do not have a right to IVF based on a right to healthcare. I agree with McTernan.

Accepting the above of course doesn’t automatically mean people don’t have some right to IVF or that the state shouldn’t provide IVF. After all the state provides such things as libraries, parks and sports fields. Accepting the above only means that the state’s provision of IVF should compete with its provision of libraries, parks and other things which help its citizens flourish. Let us accept that the state should provide some funding for IVF commensurate with the other demands it faces. McTernan argues that within this provision same sex couples should be prioritised in order to encourage a diversity of ways of life.

I now want to argue that McTernan’s argument is unsound and that in allocating IVF we shouldn’t prioritise same sex couples. Firstly I will argue that McTernan’s reason for such prioritisation is unsound, and secondly I will present an argument against any such prioritisation. McTernan believes that we should encourage diversity in ways of life. Offering priority in access to IVF to gay couples might increase diversity in child rearing. However if diversity in ways of life is an unqualified good then perhaps the state should reform the law on bigamy and even encourage polygamous marriage, as by doing so it would encourage different sorts of relationships. Few people would support such a reform, but even if such a reform could be justified other examples could be imagined to show that not all diversity in ways of life is good. Nonetheless let us accept that diversity is sometimes desirable, is a qualified good. It follows diversity in child rearing might be such a qualified good and hence should be encouraged. What exactly does McTernan want to increase diversity in? Does she want to increase diversity in child rearing or simply diversity in relationships? Child rearing involves loving, safeguarding, nurturing and guidance. I don’t believe McTernan wants to change these basics. It follows she wants to increase diversity in relationships. However the state could also encourage a gay lifestyle, in order to increase diversity in relationships, by tax incentives. Few would support such a proposal. Such a proposal seems to be mistaken, for surely the amount of diversity in sexual orientation in a society should be determined by people’s natural inclinations rather than by government policy. Accepting the above means, of course, that if sexual orientation in a society should be determined by people’s natural inclinations rather than by the state, then the state has reason to permit gay marriage. Accepting the above also means the state has no reason to prioritise access to IVF for gay couples.

Gay couples cannot have children unaided by anyone else. It might be suggested that this fact means gay couples should be prioritised in accessing IVF. However even if gay couples cannot have children unaided by others IVF is not the only option open to them if they want to have children. Both male and female same sex couples might be able to adopt a child. Male couples might also use surrogacy and this need not involve IVF. Female couples can use AID. It seems to me the fact that gay couples cannot have children unaided does not mean they should be prioritised in accessing IVF.

I now want to argue there is a second reason why gay couples should not be given greater priority in accessing IVF. My argument is based on fairness. Let us assume that gay couples are given greater priority in accessing IVF. It might then be objected that such prioritisation is unfair. Fairness requires that everybody’s needs are considered. It does not follow of course that everybody’s needs should be satisfied equally. However it does require that if some people’s needs aren’t satisfied equally then some reason can be given for this. Let us assume that people have a need to have children who are genetically related to them. Let us consider a gay and a heterosexual couple, both of whom are unable to conceive children without IVF. Both couples have the same need. Fairness requires that if the needs of these couples are satisfied unequally then some reason can be given for this unequal satisfaction. The needs of both couples are identical: to have children they are genetically related to. If the needs of both couples are the same then any reason given for unequal treatment must depend on the outcomes for any children so conceived or on some benefit to society. The outcomes for any children depend on the parenting skills of the couples involved. Perhaps, for instance, either gay or heterosexual couples make better parents. However there seems no evidence to support such a reason. Perhaps then society might benefit from unequal satisfaction. It is difficult to see how society might benefit except through the promotion of greater diversity, but I have argued above that whilst society must permit greater diversity it should not try to alter the natural diversity occurring within it. In conclusion it would seem that the encouragement of a diversity of ways of life does not give us a reason to prioritise IVF for gay couples over heterosexual couples. It further seems that fairness requires that all couples are given equal priority.


  1. Emily McTernan, 2015, Should Fertility Treatment be State Funded?, Journal of Applied Philosophy, 32(3), page 237.



Wednesday, 2 September 2015

The Philosophy of Rudeness


In this posting I want to examine rudeness. It might be thought that rudeness is of minor concern to society and hence not of any great philosophical interest. I believe rudeness should be of greater concern to society. For instance, consider the former Chief Constable of Northumbria Police who resigned over alleged rudeness to senior colleagues, see the guardian. It also seems possible that rude and aggressive behaviour (rudeness and aggression seem to be linked) might make teaching more difficult. Lastly it appears that someone’s creativity and willingness to help others might be damaged by rudeness, see the psychologist. It follows there are some reasons why rudeness should be of concern to society. It further follows that if philosophy can say anything meaningful about rudeness then rudeness should be of philosophical concern.

What do we mean by rudeness? Rudeness might be defined as a lack of manners or being discourteous. In what follows I won’t deal with etiquette and will mainly focus on someone being discourteous. What then do we mean when we say someone acts discourteously? One can’t be discourteous to oneself; discourteousness applies to relationships. Someone acts discourteously in his relationships if he focusses solely on his needs and wishes without considering the needs, views and wishes of others. Such a definition of discourteousness seems to be too broad. For instance someone might not consider the needs, views and wishes of others due to ignorance. Rudeness, acting discourteously, might be better defined as knowingly not considering the needs, views and wishes of others. It might be objected this definition remains too broad, as there is a difference between acting selfishly and acting rudely. My objector might then suggest that real rudeness means someone not only failing to consider the needs, views and wishes of others but also making explicit his lack of consideration, and perhaps even his contempt for them. In response, in what follows I will argue that knowing selfishness is a form of rudeness. I would further respond that my objector is really pointing to more explicit rudeness rather than proposing a different concept.

Before proceeding let us be clear what the above definition entails. It must include a lack of consideration for the views and wishes of another and not just his needs. If only needs were involved I could be rude to my dog by not considering his need for exercise. However the above definition remains inadequate. For instance I could ignore my sleeping partner’s needs, views and wishes but my lack of consideration would not be a case of rudeness. Let us modify our definition of rudeness: rudeness might be defined as someone knowingly not considering the needs, views and wishes of another, with the other being aware of this inconsideration at the time.

Accepting the above definition means that rudeness and morality are linked. However differences remain between acting rudely and acting immorally. Morality very roughly consists of someone considering the needs of others and acting to meet these needs provided he judges or feels action is appropriate. Acting rudely only involves a lack of consideration. It follows rude behaviour need not necessarily be immoral behaviour, but rudeness is on the road to immoral behaviour. Let us consider an example. Suppose I knowingly fail to consider ways to get my partner to work when her car has broken down, and she is aware of my lack of consideration. Clearly I have acted rudely. However whether I have also acted immorally depends on the circumstances. If I had an important doctor’s appointment then I have acted rudely but not in an immoral manner. However if I only want to sleep a bit longer, and a little less sleep would not harm me, and I fail to run my partner to work, then I have acted both rudely and in a slightly immoral way. It is also true that behaving in an immoral way towards someone need not be rude behaviour. I can behave in an immoral way when the subject of my bad behaviour is unaware of my behaviour. For instance a charming sociopath might use his charm to further his own ends without consideration of someone’s needs; he may be acting immorally but he is not acting rudely.

I now want to examine the causes of the lack of consideration which seems to be an essential element of rudeness. Firstly someone might attach great importance to his self. Secondly he may lack empathy. This second cause might explain why it appears men on average display greater rudeness than women. In what follows a lack of consideration refers to a knowing lack of consideration when those who are not considered are aware of this lack. Someone’s needs will refer to his needs, views and wishes.

The first cause I wish to examine is when someone overvalues his self-importance. Such a person when deciding how to act focusses solely on his own needs. If someone focusses on his own needs and these needs don’t affect others then he is acting prudently rather than rudely. However if someone focusses on his own needs without any consideration of the needs of others, when his needs affect their needs, he acts rudely. If someone always bases his actions on his own self-importance then I would suggest he fails to see others as of equal importance. But his failure has an additional element: he fails to recognise something essential about his own nature, his nature as a social animal. Such a failure damages both the relationships which help foster society and him personally.

The second important cause of rudeness is a lack of empathy. I must make it clear that by empathy I mean associative rather than projective empathy. A sociopath can project himself into the minds of others and understand their feelings. He might use this understanding to experience pleasure in the pain of others. Associative empathy means someone experiences the feelings of others. It seems to me a rude person might have projective empathy but lack associative empathy. I should make it clear at this point that I don’t believe only having projective empathy necessarily makes someone a sociopath. It makes him indifferent. It also gives him one of the tools a sociopath needs. I would suggest a lack of associative empathy damages someone as a person, as he lacks an essential element in the makeup of social animals.

I have argued that whilst rudeness is not always immoral it is on the road to immorality. I further argued that rudeness damages a rude person’s status as a social animal. I would suggest that for most people being a social animal is a good. It follows rudeness damages people. At the beginning of this posting I gave three examples which pointed to rudeness damaging society. What then can be done to combat rudeness? Firstly, society should become less accepting of rudeness. What is entailed in being less accepting? Less acceptance means not being indifferent to rudeness but pointing out to rude people that their rudeness damages them as social animals. However less acceptance should simply mean less acceptance and not slip into aggressively challenging rudeness, which might itself become a form of rudeness. Secondly, we must become more prepared to accept that other people are the same sort of creatures as ourselves. We must respect the autonomy of others. This means we must give priority to respecting someone’s autonomy before acting beneficently towards him. Indeed acting to satisfy our perception of someone’s needs instead of attempting to satisfy his expressed needs might be seen as a form of rudeness, see woolerscottus. Respecting autonomy means we must be tolerant of persons and their views. However this toleration should not extend to their attitude towards others if this attitude is a rude one. Sometimes we must be prepared simply to accept that our views and those of others differ and do no more, see practicalethics. Thirdly, I have argued that a lack of associative empathy is one of the root causes of rudeness. It follows we might combat rudeness by addressing this lack. Unfortunately doing so is not easy; it can’t be done by simply increasing awareness or cognition. Michael Slote argues that parental love helps a child develop associative empathy (1), but even if combatting rudeness by increasing parental love is possible it will be a slow process.



  1. Michael Slote, 2014, A Sentimentalist Theory of the Mind, Oxford University Press, pages 128-134.