Wednesday, 14 September 2016

Happiness and Consumerism



It is commonly asserted that people aren’t as happy as they used to be due to the consumerist culture we live in. Usually very little evidence is produced to support this assertion. In this posting I will attempt to remedy this situation by providing an argument to support the above assertion. I will argue that our culture restricts our ideas about what makes us happy and that this restriction reduces the amount of happiness we experience. Previously I have argued that the way in which we are happy changes as we age and mature, and I will use this argument as a starting point to support the current one. My argument rests on two important premises, both of which I will support by argument. Firstly, I will argue that our actual level of happiness depends, at least to some degree, on our ideas about what will make us happy. Secondly, I will argue that our culture helps define those ideas. I will conclude that we should try to broaden the focus of our culture, particularly with regard to the way we work and the way we are educated.

Before proceeding with my argument I want to introduce two differing concepts of happiness. Firstly, there are hedonistic concepts of happiness such as that outlined by Fred Feldman. Feldman believes someone is happy now “if when we consider all the propositions with which she is currently intrinsically attitudinally (dis)pleased and we then consider the degree to which she is (dis)pleased with these propositions and find the sum to be positive” (1). This is a definition of momentary happiness, but in this posting my concern will be with happy persons rather than momentary happiness. Feldman believes a happy person is simply one who over time is pleased to a greater degree than she is displeased. A different concept is that of Daniel Haybron. According to Haybron,

“To be happy then, is for one’s emotional condition to be broadly positive – involving stances of attunement, engagement and endorsement – with negative central affective states and mood propensities only to a minor extent.” (2)

 There is some overlap between these concepts but Haybron argues that happiness,

“has two components: a person’s central affective states and second, her mood propensity …. What brings these states together, I would suggest is their dispositionality.” (3)

I have previously argued that a disposition to be happy is an essential element of being a happy person and will briefly repeat my argument here, see Feldman, Haybron and happy-dispositions. There is a difference between a happy person and a person who is happy. It seems to me that Feldman and other hedonists are interested in people who are happy rather than happy persons. A person who is happy is simply a person who is currently happy. The fact that a person is currently happy by itself gives me little reason to assume she will be happy tomorrow. I may of course believe she will be happy tomorrow because I know that tomorrow will be her birthday, but the fact she is currently happy, by itself, gives me little reason to predict her future happiness. However, if I believe someone possesses a happy disposition then I normally expect her to be happy tomorrow. For this reason, I believe Haybron better defines what it is to be a happy person and I will use his definition unless stated otherwise.
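Before moving on, it may help to state Feldman’s summing idea, quoted above, a little more compactly. The notation below is mine rather than Feldman’s and is offered only as a rough sketch of the hedonist account, not as his own formulation.

H(S, t) = \sum_{p \in P_{S,t}} d^{+}(S, p, t) - \sum_{q \in Q_{S,t}} d^{-}(S, q, t)

Here P_{S,t} is the set of propositions with which S is intrinsically attitudinally pleased at time t, Q_{S,t} the set with which she is displeased, and d^{+} and d^{-} the degrees of that pleasure and displeasure. On this sketch S is momentarily happy at t just in case H(S, t) > 0, and a Feldman-style happy person is one for whom H(S, t) summed over a period of her life comes out positive.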

If we accept Haybron’s definition, then it seems to me that the relative importance of the various elements within his definition changes as we age, see does our concept of happiness change as we age. I will briefly outline my argument. Let us recall that someone is happy if her emotional condition is broadly positive and that this involves her in general being attuned to, engaged with and endorsing her emotional condition. Haybron ranks the importance of attunement, engagement and endorsement in that order in relation to happiness. Haybron connects endorsement to feelings of joy or sadness (4). I suggested that endorsement involves being satisfied with one’s condition rather than any large-scale feelings of joy or sadness. I did, however, suggest that being satisfied involves some minimal positive emotion, perhaps slight joy. If the above is accepted, then being satisfied is an essential element of being happy. A further argument can be advanced as to why being satisfied is an essential element of happiness. Martin Seligman believes achievement is an essential element of happiness (5). It seems to me that achievement is usually linked to being satisfied. If we accept that Seligman is correct and achievement is an important element of being happy, then it follows that so is being satisfied. Lastly, I argued that if we accept that satisfaction is an essential element of being happy then the way we are happy changes as we age, because younger people give greater weight to hedonistic pleasures whilst older people give greater weight to being satisfied. It would appear the way in which we are happy changes as we age.

Let us accept that the way in which we are happy changes as we age. I now want to argue that our ideas about what will make us happy affect the level of happiness we actually experience. Some might question whether an idea about what will make us happy is needed if we are to be happy. They might suggest people are just happy or unhappy and don’t need any ideas about what will make them happy. To support this suggestion, they might point out that animals and infants can be happy without any idea of what will make them happy. They might proceed to argue that, apart from philosophers, most people are simply happy or unhappy. However, unlike animals or infants we aren’t simply happy, we actively pursue happiness. A pursuit is impossible without some goal. The pursuit of happiness implies that we must have some ideas about what will make us happy. Let us accept that people must have some ideas, however vague, about the things which will make them happy. However, even if we accept the above it doesn’t automatically mean our ideas about the things that will make us happy are related to the level of happiness we experience. We might be mistaken about what will make us happy. It might be suggested that such mistakes are of little importance because we naturally pursue the things that make us happy. I would reject such a suggestion. Let us accept that there can be a mismatch between the things we think will make us happy and the things that actually make us happy. Having mistaken ideas of what will make us happy can damage our actual happiness. We can pursue things that don’t really make us happy at the expense of not pursuing things that actually make us happy. Examples are easy to find. For instance, someone who desires meaning in his life but pursues a hedonistic lifestyle because he believes living such a life will make him happy. Or perhaps someone who pursues a stoic way of life and rejects the demands of love. It follows that our ideas of what will make us happy can affect the actual level of happiness we experience.

I now want to argue that the culture someone lives in affects her idea of what will make her happy. Clearly someone’s culture affects the things that make her happy. For instance, some cultures value wealth whilst others value honour more than wealth. It might be argued that this difference is only a difference in what makes us happy but not in the way we are actually happy. For instance, someone might be a gourmet who values good food whilst someone else might be a libertine who values having sexual intercourse as often as possible. Two different sorts of things make these people happy, but both of them have the same underlying idea about the way they will be happy. It might be concluded that our basic idea about the way we will be happy doesn’t change even if its focus does. I now want to argue that such a conclusion would be mistaken because in certain cases the things we value help determine the way we enjoy them. Let us consider someone who values honour. Haybron hints that if someone is happy there is a link between her happiness and the self (6). I believe Haybron is correct and that there is indeed a connection between some forms of happiness and the self. Clearly this is not the case with hedonistic happiness. To enjoy a good meal or sexual intercourse no one needs a sense of self. This is not true of someone who values, ‘cares about’ or loves her honour. Valuing honour is connected to her identity, her sense of self, see some of my previous postings. I would suggest such a person will be happy when she acts honourably and that her happiness depends on her satisfaction with acting as she believes she should. I would further suggest her satisfaction is linked to her sense of self by her cognition. The way she is happy is very different to the way someone is happy when enjoying a good meal or having sexual intercourse. Let us accept that people can be happy in different ways and that the pursuit of different ways of being happy requires different ideas about happiness. Let us also accept that what we value determines the way we enjoy it, the way in which we are happy. Different cultures value different things. Some of the things we value are determined by the culture we live in. It follows that culture helps to determine the way in which we are happy.

No culture is completely homogenous and our culture certainly isn’t. However, I now want to argue that a certain dominant idea within our current culture fosters ideas about what will make us happy which damage our actual happiness. In the western world our culture is dominated by the idea of the consumer. Advertising suggests we will be happier if we have the latest car, a large modern house, shinier hair, brighter teeth, etc. Advertising suggests we will be happier if we have certain things, if we are consumers. Western culture sees us as consumers just as much as it sees us as citizens. The idea of consumerism is widespread, even extending into education. In schools pupils are encouraged to learn in order to get good jobs rather than to enjoy learning. In education more generally, courses are increasingly designed with employment in mind rather than with broadening students’ horizons. Education, in words often attributed to Yeats, is becoming a matter of filling pails rather than lighting fires. I argued above that culture helps determine the way we are happy. A culture with a dominant consumerist ethos supports a hedonistic idea of happiness such as that of Feldman. I further argued that an account such as Feldman’s offers an incomplete concept of happiness because it offers an inadequate account of what it means to be a happy person. Lastly, I argued that our ideas about what will make us happy affect the actual happiness we experience. It follows that someone holding an incomplete idea of what will make her happy might experience less actual happiness than if she had a more complete idea.

I now want to discuss four ways in which our overly consumerist culture damages our happiness by fostering an incomplete idea of happiness. First, I have argued above that our consumerist culture fosters a hedonistic idea of happiness and that such an account of happiness is incomplete. Let us recall that according to Haybron someone is happy if his emotional state is positive and he is attuned to, engaged with and endorses that state to some degree. I have suggested endorsement is linked to satisfaction. Someone might be satisfied with eating a chocolate cake, with some state of affairs or with past achievements. Being satisfied with eating a chocolate cake does not involve any cognitive abilities. However, if someone is satisfied with some state of affairs or past achievements she engages some of her cognitive abilities. A hedonistic account of happiness does not directly involve our cognitive abilities. It follows that if culture fosters a hedonistic idea of happiness, this fostering might limit some people’s ideas about happiness by diminishing their desire to pursue some of the things which might add to their happiness, by limiting their desire to pursue things that satisfy them. Secondly, I have argued that as people age the weights attached to the various elements which contribute to their happiness change. A culture which fosters a mostly hedonistic idea of happiness impedes that change and as a result damages the happiness they experience. I have outlined this argument above and will not repeat it here. Again it follows that an overly consumerist society might limit our overall happiness, especially for older people. Thirdly, I would argue that our consumerist culture encourages an attitude to work which limits the happiness we experience. Let us accept that some work can give our life meaning and that this meaning increases our happiness. There are two different definitions of work. Firstly, we might define work simply as labour undertaken for some economic reward or the hope of such a reward; let us call this working for something. Such work is instrumental and has no intrinsic value. Secondly, someone might work at something. For instance, she might work at playing some musical instrument simply because she enjoys it. Someone playing a musical instrument might become fully immersed in her music, losing any feeling of reflective self-consciousness. According to Mihaly Csikszentmihalyi, when someone is in such a flow state she experiences positive emotions. These emotions contribute to her happiness. Our consumerist culture encourages working for something at the expense of working at something and by so doing limits our ability to experience the positive emotions generated by flow. It again follows that an overly consumerist society damages our overall happiness. Lastly, our consumerist culture emphasizes that consuming things makes us happy. I don’t deny consumption might make us happy for a while. A consumerist culture places emphasis on momentary happiness. It seeks to make people happy, which in itself is laudable, but it is much less concerned with happy people, and this lack of concern also limits our happiness. At this point I must make it clear that when I speak about happy people I am concerned with people who have a disposition to be happy rather than people who are simply experiencing positive emotions. Our consumerist culture limits happiness because momentary happiness is fragile, whilst the happiness experienced by happy people is more robust. It again follows that an overly consumerist society damages our overall happiness.

What conclusions can be drawn from the above? First, our consumerist society damages our happiness and we should seek to broaden the focus of society. Our attempts to broaden this focus should concentrate on work and education. This expanded focus might be particularly important if automation leads to people working less. If work provides some meaning in life, then it is important to change society’s focus from ‘working for’ to ‘working at’, see work, automation and happiness. An overly consumerist society might find such a change difficult. Secondly, if it is hard to broaden society’s focus it becomes especially important to have an accurate idea of what makes us happy; our concept of happiness matters.

  1. Fred Feldman, 2010, What is this thing called Happiness? Oxford University Press, page 29.
  2. Daniel Haybron, 2008, The Pursuit of Unhappiness, Oxford University Press, page 147.
  3. Haybron, page 138.
  4. Haybron, page 113.
  5. Martin Seligman, 2011, Flourish, Nicholas Brealey Publishing, Chapter 1.
  6. Haybron, page 130.

Tuesday, 16 August 2016

Sport, Motivational Enhancement and Authenticity

 

Heather Dyke, writing in The Conversation, examines why doping in sport is wrong. In a previous posting I argued that doping in sport is wrong for three main reasons, see sport performance and enhancing drugs. Firstly, I believe there should be a difference between sport and simple spectacle and that the use of performance enhancing drugs by sportspersons erodes this difference. Secondly, I argued that permitting performance enhancing drugs simply moves the goalposts. If we don’t permit the use of all drugs, including dangerous ones, we will still have to test whether any drugs used are permitted ones. Lastly, I argued that what we admire about sport is linked to the determination and effort required of sportspersons and that the use of performance enhancing drugs weakens this link. Determination and effort are linked to motivation, to character. I have previously argued that it would not be wrong to enhance our motivation, see effectiveness enhancement. It would appear that I hold two conflicting positions with regard to doping in sport. In this posting I want to examine this conflict.

Let me start my examination by making it clear the sort of doping I am opposed to. I believe any drug which enhances an athlete’s body damages sport for the three reasons outlined above. If some mediocre athlete could transform himself into an Olympic champion in a matter of weeks by taking some drug which vastly physically enhanced him, would we really admire him? I would suggest we would not, because we feel sporting excellence should require some effort. Now let us consider a second mediocre athlete who transforms himself into an Olympic champion over a number of years by taking some drug which enhances his motivation. By transforming his motivation, he trains more determinedly and makes greater effort when training. This second athlete raises three interesting questions. Firstly, is there any real difference in a sporting context between an athlete taking a drug to enhance himself physically and one taking a drug to enhance himself mentally? Secondly, would we admire such an athlete? Lastly, is the enhancement of someone’s motivation compatible with the ethos of sport?

I will now attempt to answer each of the above questions in turn. Is there any real difference in a sporting context between an athlete taking a drug to enhance himself physically and one taking a drug to enhance himself mentally? Clearly there is a difference in this case, because an athlete who enhances himself physically with the use of drugs need make no effort to achieve his enhancement, whilst a second athlete who physically enhances himself by enhancing his motivation must still train hard. Does this difference matter? The answer to this additional question is connected to our second original question. What do we admire about sportspeople? I would suggest we admire their dedication to the effort required for their sport, we admire their motivation for sport, we admire part of their character. Of course, it follows we need not admire all of a sportsperson’s character. Let us accept that we admire a sportsperson’s motivation, effort and dedication. The question now is: would we admire his motivation, effort and dedication if these were artificially enhanced?

It might be argued that if we obtain certain goods easily, without any real determination, we devalue determination in general. Let us assume it is possible to artificially enhance our motivation by making us more determined. Let us accept that if an athlete enhances himself physically by the use of drugs, gene therapy or blood doping, he devalues the importance of motivation. Does the same apply if he enhances his motivation artificially? I would suggest it does not. There is an important difference between the enhancement of effectiveness and the enhancement of motivation. Enhancing our effectiveness devalues our motivation, whilst it is hard to see how enhancing our motivation could possibly devalue motivation. Accepting the above means it might be possible to admire an athlete who artificially enhances his motivation whilst at the same time failing to admire an athlete who simply enhances himself physically.

At this point someone might object that whilst someone who enhances his motivation does not devalue his motivation, he nonetheless devalues himself as a person. He does so by making himself less authentic. My objector might then argue someone shouldn’t enhance his motivation because being authentic is something we value. In response I would point out that the things which make us authentic aren’t fixed from birth; babies aren’t authentic. People seek to change themselves by enhancing themselves through training or learning. I can see no reason why people changing themselves by these means render themselves inauthentic. I would suggest someone’s authenticity depends on him seeking goals he identifies with rather than on the means he chooses to seek these goals. Someone’s authenticity is determined by what he loves or cares about. I would further suggest that a truly authentic person must always choose those means which are most effective in promoting the goals he identifies himself with. It follows that if these means include enhancing his motivation, this enhancement isn’t inauthentic. Indeed, it appears that if someone doesn’t use the most effective means to promote those goals he identifies with, his authenticity is weakened. Sometimes those most effective means might include motivational enhancement, and it follows that if someone does not use motivational enhancement his authenticity is weakened.

What conclusions can be drawn from the above? Firstly, physical enhancement by artificial means devalues sport. Secondly, motivational enhancement by artificial means does not seem to conflict with the ethos of sport, provided it is accepted that this ethos is connected to the sportsperson’s character. I accept some people might be reluctant to accept this second conclusion and might believe I am wrong to separate so completely the goals someone identifies with and the means he uses to achieve those goals.


Wednesday, 29 June 2016

Outsourcing Ethical Decision Making and Authenticity



In a previous posting I questioned whether algorithmic assisted moral decision making (AAMD) is possible. Let us assume for the sake of argument that AAMD is possible. Using such a system might be considered an example of algorithmic outsourcing of our moral decision making. Such outsourcing, according to John Danaher, means taking away the cognitive and emotional burden associated with certain activities, see Danaher. Intuitively, outsourced moral decisions are inauthentic decisions. In this posting I will argue that under certain conditions outsourced ethical decisions using AAMD could be authentic ones.

Before proceeding I must make it clear what I mean by algorithmic assisted moral decision making, outsourcing and authenticity. Any moral decision simply made by an algorithm is not an authentic decision. In my previous posting I suggested that when initialising an AAMD system we should first use a top down approach and install simple human values such as avoiding harm. However, once initialised such a system should be fine-tuned by the user from the bottom up by adding her personal weights to the installed values. This primitive system might then be further modified from the bottom up using two feedback loops. Firstly, the user of a system must inform the system whether she accepts any proposed decision. If the user accepts the proposed decision, then this decision can form a basis for similar future decisions in much the same way as legal judgements set precedents for further judgements. If the user doesn’t accept a particular decision, then the system must make clear to the user the weights which are attached to the values it used in making this decision and any previous decisions it employed. The user might then further refine the system either by altering these weights or by highlighting differences between the current decision and any previous decisions the system employed. According to Danaher outsourcing can take two forms. Cognitive outsourcing means someone using a device to perform cognitive tasks that she would otherwise have to perform herself. Affective outsourcing means someone using a device to perform an affective task that she would otherwise have to perform herself. I will assume here that an authentic decision is a decision that the decision maker identifies herself with or cares about.
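The initialisation and feedback loops just described can be made a little more concrete with a small sketch. The following Python toy is my own illustration rather than any existing system; the value names, weights, scoring rule and data structures are all assumptions, and the only point is to show a top down installation of values followed by bottom up fine-tuning through the two feedback loops.

class AAMDSystem:
    def __init__(self, installed_values):
        # Top down initialisation: simple human values such as avoiding harm.
        self.weights = {value: 1.0 for value in installed_values}
        self.precedents = []  # accepted decisions, used rather like legal precedents

    def set_weight(self, value, weight):
        # Bottom up fine-tuning: the user attaches her personal weights to the values.
        self.weights[value] = weight

    def propose(self, options):
        # Each option scores each value between -1 and 1; propose the highest weighted sum.
        def score(option):
            return sum(self.weights[v] * option["scores"].get(v, 0.0) for v in self.weights)
        return max(options, key=score)

    def feedback(self, option, accepted):
        if accepted:
            # First loop: an accepted decision becomes a precedent for similar future decisions.
            self.precedents.append(option)
            return None
        # Second loop: expose the weights and precedents relied upon, so the user
        # can adjust the weights or highlight relevant differences.
        return {"weights": dict(self.weights), "precedents": list(self.precedents)}


system = AAMDSystem(["avoid_harm", "honesty", "fairness"])
system.set_weight("avoid_harm", 2.0)  # this user cares most about avoiding harm

options = [
    {"name": "tell a comforting lie", "scores": {"avoid_harm": 0.5, "honesty": -1.0}},
    {"name": "tell the truth gently", "scores": {"avoid_harm": 0.2, "honesty": 1.0}},
]
proposal = system.propose(options)
system.feedback(proposal, accepted=True)  # or accepted=False to inspect weights and precedents

On this sketch the weights and precedents belong to the user, which is where, as I argue below, the question of authenticity gets its grip.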

According to Danaher, taking responsibility for certain outcomes is an important social and personal virtue. Further, someone only takes responsibility for certain decisions if he voluntarily wills the chosen outcomes of these decisions. Authenticity is an important social and personal virtue. Getting an app to automatically send flowers to someone’s partner on her birthday doesn’t seem to be an authentic action because the sender doesn’t cause the action. However, here I am only interested in outsourcing our ethical decisions: does outsourcing such decisions damage their authenticity?

I will now argue that the answer to the above question depends not on outsourcing per se, but on the manner of the outsourcing. Let us assume that in the future there exists a computer which makes decisions based on a set of values built into it by a committee of philosophers. Let us consider someone who outsources her moral decisions to this computer. I would suggest that if she implements a moral decision made in this way her decision is an inauthentic one. It is hard to see how someone in this situation could either identify with the decision or consider herself to be responsible for the outcome. Let us now consider someone who outsources her moral decision making to an AAMD system which is finely tuned by the user as outlined above; are her decisions also inauthentic? I would suggest someone who makes a moral decision in this way is acting authentically because she can identify with her decision. She is able to identify with the system’s decisions because, once initialised, the system is built from the bottom up. Her weights are attached to the incorporated values and her past decisions are built into its database.

I suggested that someone who uses such a system must accept or reject its decisions. Someone might object that someone who simply accepts the system’s decisions without reflection is not acting authentically. In response I would point out that in virtue ethics someone can simply act and still be regarded as acting authentically. My objector might respond by pointing out that Christine Korsgaard pictures the simply virtuous human as a sort of Good Dog (1). Perhaps someone who simply accepts the results of an AAMD system might also be pictured as behaving as a good dog, with the system replacing the dog’s owner. Surely such a person cannot be regarded as acting authentically. In response I would suggest what matters is that the agent identifies herself with the system’s decision. To identify with a decision someone has to be satisfied with that decision. What does it mean to be satisfied with a decision? According to Frankfurt satisfaction entails,

“an absence of restlessness or resistance. A satisfied person may be willing to accept a change in his condition, but he has no active interest in bringing about a change.” (2)

I’m not sure that an absence of restlessness or resistance with a decision is sufficient to guarantee its authenticity. I would suggest authentic decisions are ones that flow from our true self. I have argued our true self is defined by what we are proud or ashamed of, see true selves do they exist. Let us consider someone who accepts the recommendation of an AAMD system without feeling any shame: is her acceptance an authentic one or simply not an inauthentic one? I have argued that there are two types of shame. Type one shame is anxiety about social disqualification. Type two shame is someone’s anxiety about harming the things she cares about, loves and identifies with. Let us accept that someone must feel type two shame when she acts in a way which harms the things she cares about, loves and identifies with. In the above situation, if someone simply accepts the recommendation of an AAMD system without feeling any type two shame then she is acting in accordance with what she loves and identifies with and is acting authentically.

What conclusions can be drawn from the above? If someone outsources some of her moral decision making to a computer, she may not be acting authentically. However, if she outsources such decision making to an AAMD system designed using a bottom up approach as outlined above, it is at least conceivable that she is acting authentically.

  1. Christine Korsgaard, 2009, Self-Constitution, Oxford University Press, page 3.
  2. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 103.

Wednesday, 25 May 2016

Cosmetic Surgery, Enhancement and the Aims of Medicine

  
Jessica Laimann wonders whether we should prohibit breast implants (1). She proceeds to argue that we shouldn’t prohibit breast implant surgery but then suggests we might compensate individuals who decide not to have such surgery. She seems to be uneasy with the idea that breast implant surgery could be a legitimate aim of medicine. I agree with Laimann that we shouldn’t prohibit breast implant surgery and would suggest that the skills of medical practitioners might be better employed elsewhere. However, there is a difference between what could be a legitimate aim of medicine and what we should prohibit. Let us assume that in the future medical practitioners can satisfy all the now commonly accepted aims of medicine; in these circumstances could breast implant surgery become a legitimate aim of medicine? Could human enhancement become a legitimate aim of medicine? In this posting I want to examine these questions.

In order to examine these questions, I must first examine what the aims of medicine should be. The aims I am concerned with are not a list of specific aims, such as repairing heart valves or treating cancer, but aims common to all medical procedures. It might be suggested that the aim of all medicine is obvious: to make people better. But what do we mean by better? William Mayo expressed the traditionally held view that “the aim of medicine is to prevent disease and prolong life, the ideal of medicine is to eliminate the need of a physician.” Mayo’s definition might be extended to include the treatment of injury and disability. According to the traditional view medicine makes us better by the treatment of disease, injury and disability and by the prolongation of life. If we accept this definition then cosmetic surgery, assisted reproduction and any enhancement, with the possible exception of life extension, wouldn’t be things that make us better. A slightly different definition of the aims of medicine is given by Silver.

“The proper ends of medicine are to use medical skills and training to maintain or improve the position of the person involved, subject to her autonomous consent.” (2)

If we accept Silver’s definition then cosmetic surgery and some forms of enhancement might be considered as making us better. How can we decide which of the above definitions to accept? Let us accept that medicine is a caring profession. Let us also accept that medical practitioners should exercise their skills to serve those interests of patients which can only be served by medical means.

Unfortunately accepting the above doesn’t automatically help us in deciding which of these different aims of medicine to accept. Firstly, what is better for a patient might simply be defined as her medical interests as defined by her doctors. Secondly, what is better for her might be partly defined by what she sees to be her interests, her subjective interests. Let us accept that doctors should respect a patient’s autonomy. I have previously argued that a purely Millian account of autonomy is an incomplete account, see autonomy and beneficence revisited. I argued that a more complete account means that respecting someone’s autonomy requires us sometimes to act beneficently towards her by attempting to satisfy her desires, provided doing so does not harm her on balance and does not cause us significant inconvenience. Autonomy and some forms of beneficence are linked. If the above argument is accepted, then it seems to me that we should accept that a patient’s interests must include her subjective interests provided her general health interests can be easily satisfied. Such satisfaction is difficult now but might be more easily achieved in the future. If we accept the above it might be concluded that we should accept Silver’s definition; however, such a conclusion would be premature.

Let us assume that breast implants might be in the subjective interests of some individuals. However, it does not automatically follow that breast implantation surgery should be a legitimate aim of medicine. Breast implantation might damage society by sending a damaging picture of what it means to be a woman to both some men and vulnerable young women. In this situation should we give greater weight to the interests of individual women or to the interests of society? I now want to argue that the above is a false dichotomy and that by respecting individual rights we benefit rather than damage society. Let us accept that breast implantation does some damage to society by projecting a damaging picture of what it means to be a woman. I now want to argue that a ban on breast implantation surgery would cause even greater damage to society. If we fail to respect the right of individuals to make their own decisions, then we fail to see them as the kind of people who can make their own decisions. This failure has two bad consequences: first, we fail to truly respect those individuals and secondly, we might be accused of moral arrogance. Even more important, implicit in this failure is the belief that society should shape its members’ decisions. I believe such a belief is dangerous because it is too simplistic. Let us accept that when individual members of a society make decisions those decisions are partly shaped by the society they live in. However, society both shapes and is shaped by the decisions of its individual members. A flourishing society resembles a living entity that evolves and changes over time. This change is in part shaped by the decisions of its individual members. In order for this shaping to take place such a society must be prepared to accept these decisions. Mill makes much the same point when he suggests that the human race is damaged by silencing the expression of an opinion.

What conclusions can be drawn from the above? Firstly, Silver is right and the aim of medicine should be to use medical skills to maintain or improve the position of the person involved, subject to her autonomous consent. Let us also accept that in achieving this aim precedence should be given to maintaining rather than improving the position of the person involved if resources are scarce. Secondly, provided resources aren’t scarce, cosmetic surgery and assisted reproduction can and should be a legitimate aim of medical practice. Lastly, the above suggests that we have some reason to accept that other forms of enhancement, for those who autonomously desire enhancement, should be a legitimate aim of medical practice unless compelling reasons can be advanced as to why such enhancement causes greater damage to society than the satisfaction of these autonomous desires.

  

  1. Jessica Laimann, 2015, Should we Prohibit Breast Implants? Journal of Practical Ethics, 3(2).
  2. M. Silver, 2003, Lethal injection, autonomy and the proper ends of medicine, Bioethics, 17(2).

Wednesday, 27 April 2016

Diversity and Editing Our Children’s Genes


I have recently been reading ‘Should you edit your children’s genes?’ by Erika Check Hayden in Nature. Hayden is not concerned with editing genes which might enhance a child’s cognitive abilities or physical prowess, but rather with editing genes associated with specific diseases or conditions. Such editing might be achieved by using CRISPR to edit embryos. In this posting I want to consider two related arguments, both based on diversity, which Hayden outlines against adopting such a policy. In my discussion I will assume without any argument that these diseases and conditions harm those who experience them to some degree, even if this degree is small. Some might object to this assumption; for instance, some deaf people do not see their deafness as a disability and some deaf parents would even prefer to have deaf children.

Until recently disabled people were often treated badly, but changing attitudes, at least in the Western world, have improved their lives. It might be argued that these changing attitudes have not only benefitted disabled people but have also benefitted all of us by creating a more caring society. At some time in life disability is likely to directly affect most people because we are prone to experience sickness, accidents and age-related decline. Let us accept without argument that a more caring society which cares for the disabled benefits us all. It might then be argued that if we try to eliminate various disabilities we might inadvertently damage us all by creating a less caring society.

The above argument seems to depend on the premise that a more diverse society is a more caring society. I want to challenge this premise. Let us imagine we now start using CRISPR to edit embryos. The motive for doing so is a caring one: we want to reduce disability, which I have assumed above harms those affected to some degree. Let us now imagine that by 2050 we have eliminated many current disabilities and that by doing so we have created a less diverse society. At this point someone suggests that in order to create a more diverse society we now use CRISPR, or some future technology, to create some disabled embryos. The purpose of doing so would be to create more diversity, and hence more caring, by deliberately creating disabled children. Let us assume that these disabled children would of course still have meaningful lives they wanted to live. A similar society is satirised by Kurt Vonnegut in his short story ‘Harrison Bergeron’. It seems to me that any future society would find such a course of action totally abhorrent. It would seem that such a society’s policy of rejecting the use of CRISPR to produce disabled children is in total opposition to the policy of a society which rejects using CRISPR to reduce disability. Why should some future society find such a policy abhorrent? I would suggest it would do so because it cares about harming its members. It follows that any future society which rejects such a policy would be a caring one even if it were slightly less diverse.


At this point an objector might accept that whilst such a society would remain a caring one, it might also be a less caring one than a society which contained greater diversity. He might then suggest we should care about increasing caring. This is a consequentialist account of caring: more caring is better. Unfortunately for my objector the above seems to commit him to the abhorrent conclusion that in some circumstances it would be right to use CRISPR, or some future technology, to create some disabled embryos, subject to the proviso that any resultant children would be able to live meaningful lives they wanted to live, in order to increase diversity and hence increase caring. My objector is using the term ‘caring’ in two different ways. Firstly, ‘caring’ means something is important, it matters to him; secondly, the ‘caring’ that is important to him means sympathy or empathy. I will now argue it makes no sense to attempt to ‘care about’, in the first sense, maximising ‘caring about’, in the second sense. Let us examine what ‘caring’ in the second sense involves. It must involve some empathic concern for others. If someone ‘cares about’, in the first sense, increasing caring, in the second sense, then he is not exhibiting this empathic concern if he is prepared to alter some perfectly healthy embryos to produce disabled children. By ‘caring about’, in the first sense, ‘caring about’, in the second sense, he is failing to ‘care about’, in the second sense. He believes something is important but doesn’t act as if it is important. Such a course of action is nonsensical.

Let us accept that if we edit our children’s genes we might create a less diverse society, but that doing so doesn’t harm society in general by making it an uncaring one. I now want to address a second but related argument, again based on diversity, against editing our children’s genes. It might be argued that even if a less diverse society doesn’t harm all of us, it nonetheless might care less for those who remain disabled. A society with fewer disabled people in it might care less for disabled people because it is less able to cope with their needs. Such a society might fail to cope adequately with their needs for two reasons. Firstly, such a society might allocate fewer resources to the needs of the disabled, and secondly it might fail to understand these needs as well as a more diverse one. Let us examine the first of these reasons. Prima facie a society with a lower proportion of disabled people in it should have more resources to devote to disabled people than a similar one with a greater proportion. It might be accepted that such a society has greater resources but argued that it might still be less responsive to the disabled’s needs. It might be less responsive because the lower number of disabled people means their voice carries less weight. I find this argument unconvincing. Let us accept that in such a society the disabled can still express their needs. Let us also accept that such a society remains a caring one. I can see no reason why such a society should be unresponsive to the expressed needs of the disabled. Now let us examine whether a society with fewer disabled people in it would be less able to understand their needs. I accept that it is possible that such a society might understand the lives of the disabled less well than a society which contains a greater proportion of disabled people with a stronger voice. However, understanding the lives of the disabled is not the same as responding to their needs. In any advanced society, if the needs of the disabled can be expressed they can be acted on. If such a society remains a caring one then the expressed needs of the disabled should be acted on. It is also possible that in the future automation might mean members of such a society have more time to try to understand those who are disabled, even if the disabled form a lower proportion of that society. It follows that even if members of such a society don’t understand the lives of the disabled as fully as members of a more diverse society, there is time for dialogue to better understand these needs.


I have argued that any argument against editing our children’s genes based on disability benefitting our society through increased diversity is unsound. Accepting my argument does not of course mean we should edit our children’s genes, as there may be other, stronger arguments against doing so.

Wednesday, 30 March 2016

Factitious Virtue


In this posting I want to consider Mark Alfano’s idea of factitious virtue; I will only consider factitious moral virtue (1). In recent years the whole idea that human beings can possess virtues has come under sustained attack from moral psychologists, and many would now question whether virtue ethics has any real future. Moral psychologists argue that all that really matters when we make moral decisions are the situations we find ourselves in and not any supposed virtues we might possess. However, if all that matters are the situations people find themselves in when making such decisions, then everyone should act in a similar fashion in a similar situation. Clearly this isn’t true. People’s characters vary. It is conceivable that someone’s character is partly shaped by her moral behaviour; being a trustworthy person is part of that person’s character. Perhaps virtue is hard and limited to a few people, or perhaps most people only have limited virtue. In this posting I will argue that if virtue matters it does so in two specific domains. I will then consider whether Alfano’s factitious virtue can be considered a virtue in the traditional sense. Lastly, I will consider whether factitious virtue matters.

Let us consider the way we make moral decisions. When making important moral decisions with wide scale implications virtue ethics is not really useful. Some might disagree with the above. When making important moral decisions we don’t simply do what a virtuous person would do, we think. We think of the consequences, or perhaps we question whether any decision we make could be made into a universal law. When making important decisions, such as those concerning the consequences of global warming or whether terminally ill patients should have the right to assisted suicide, thinking about consequences or universal laws seems to be a better way forward than simply asking what a virtuous person would do. I will not consider whether we should employ consequentialist or deontological methods here. It might be thought in the light of the above that I believe virtue should play no part in moral decision making. Such a thought would be premature. Not all moral decisions are of wide scale importance; for instance, a daughter might have to decide whether to help her aged mother to go shopping or spend an enjoyable afternoon by herself in her garden on a beautiful summer’s day. Such decisions are not made after careful consideration but rather by simply deciding, deciding in accordance with our virtues, provided of course this is possible. It follows there is a possible place for the virtues in making some moral decisions; Bernard Williams would have classed such decisions as ethical decisions, see Internet Encyclopaedia of Philosophy. Virtue would be useful in the domain of making everyday moral or ethical decisions provided virtue is possible. I now want to argue that virtue might also matter in a second domain. Alfano suggests that,

“People enjoy acting in accordance with their own self-concepts, even those aspects of self-concept that are evaluatively neutral; they’re averse to acting contrary to their self-concepts, especially the evaluatively positive aspects of their self-concepts.” (2)

I’m not sure Alfano is totally correct when he suggests people enjoy acting in accordance with their self-concepts. I would suggest people are satisfied with acting in accordance with their self-concept and hence have no reason to act otherwise. I would however agree with Alfano that people do act in accordance with their self-concepts. The daughter in the example I used above makes her decision based on her self-concept. She might consider herself to be a caring person and as a result takes her mother shopping. It follows that if we partly define ourselves by the virtues we possess, then virtue matters in the domain of self-definition.

Let us accept that there is a possible domain for virtue in moral decision making. I would suggest that this is not a trivial domain because most of the moral decisions we make are everyday ones and our concept of self matters. I would further suggest that we have evolved a capacity to make everyday moral decisions and find it hard to transcend this capacity. However, even if there is a possible domain for the virtues in making moral decisions, this possibility by itself doesn’t mean the virtues exist. A lot of psychological research seems to point to the situation someone finds herself in when making moral decisions being much more important than any supposed virtue she might possess. In 1972 Alice Isen and Paula Levin conducted a famous experiment which showed that participants who found a dime in a payphone were much more likely to aid someone needing help (3). Many other studies have replicated Isen and Levin’s finding that what really matters when making a moral decision is the context the decision is made in rather than any supposed virtue the decision maker possesses. Let us accept for the sake of argument that virtue is weak or rare in most people and hence not a useful concept as far as most people are concerned.

In the light of the situationist challenge Alfano argues that the idea of factitious virtue is useful. What exactly is factitious virtue? Alfano suggests that factitious virtue is a kind of self-fulfilling prophecy. He gives us an example of a self-fulfilling prophecy.

“Were United States Federal Reserve Chairman Ben Bernanke to announce …. on a Sunday evening that the stock market would collapse the next day, people would react by selling their portfolios, leading indeed to a stock market crash.” (4)

A factitious virtue is analogous to a self-fulfilling prophecy. Alfano argues that if you label someone as having a virtue she comes to act as if she possesses that virtue; she has factitious virtue.

“Virtue labelling causes factitious virtue, in which people behave in accordance with virtue not because they possess the trait in question but because that trait has been attributed to them.” (5)

For labelling to be effective it should be made in public and believable to the person labelled. Let us return to my previous example. Telling the daughter in my example that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling the daughter in public that she is a caring person when she has just helped someone to cross the road would be a case of virtue labelling and would mean that she would be more likely to help her mother with her shopping.

Let us examine the status of a factitious virtue. The question naturally arises: is factitious virtue a real virtue? Alfano uses an analogy between a placebo and factitious virtue to explain how factitious virtue works. If someone believes that a placebo will help her then her belief is a self-fulfilling one. In the same way, if someone believes she has a virtue due to labelling then she has factitious virtue. But a placebo isn’t a drug, and it might be argued by analogy that factitious virtue is not a real virtue. What do we mean by a virtue? According to the Cambridge Online Dictionary virtue is “a good moral quality in a person or the general quality of being morally good.” If we accept this definition then factitious virtue is a real virtue in a narrow sense because it induces a good quality in a person, and the argument by analogy fails; however, labelling does not seem to induce the more global quality of someone being morally good.

I now want to examine whether factitious virtue is a real virtue in the broader sense of being connected to being a morally good person. Factitious virtue differs from the more traditional virtues in the way it is acquired; does this difference in acquisition mean factitious virtue is not a real virtue? Julia Annas argues we acquire the virtues by learning (6). Learning requires some skill. If someone acquires a factitious virtue of caring by means of labelling then her acquisition need not involve any skill. It follows, provided Annas is correct, that factitious virtue is not a real virtue. Annas further argues we cannot acquire a moral virtue in isolation; for instance, someone cannot learn to be caring without also learning to be just. Perhaps we can acquire non-moral virtues such as courage in isolation. It follows that if someone acquires one moral virtue then in doing so she must acquire others, because there is some unity of the moral virtues, and this leads her to being a morally good person. Beneficence is a moral virtue and someone might become more beneficent by being labelled as caring. However, acquiring the factitious virtue of caring by labelling doesn’t require that someone acquires any other moral virtues. It again follows, provided Annas is correct, that factitious virtue is not a real virtue in the broader sense. Nonetheless factitious virtue remains a real virtue in the narrow sense because it induces a good quality in a person.

I now want to consider two objections to regarding factitious virtue as a real virtue in even the narrow sense. Firstly, it might be argued that any real virtue must be stable over time and that once labelling ceases a factitious virtue slowly decays. Michael Bishop argues that positive causal networks (PCNs) are self-sustaining (7). A PCN is a cluster of mental states which sustain each other in a cyclical way. For instance, confidence and optimism might aid someone to be more successful, and her success in turn boosts her confidence and optimism. Bishop argues that successful relationships, positive affect and healthy relationship skills/patterns form such a network (8). Healthy relationship skills include trusting, being responsive to someone’s needs and offering support. Healthy relationship skills involve caring, and so it is possible that caring is part of a self-sustaining network. It follows that if the factitious virtue of caring is induced in someone, then once induced this factitious virtue has some stability. Whether such a possibility exists for other factitious virtues is not a question for philosophy but for empirical research. It would appear that at least one important factitious virtue, that of caring, might be stable over time and that this might be true of others.
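Bishop’s notion of a self-sustaining network can be illustrated with a toy model. The sketch below is my own and makes no claim to represent Bishop’s formal treatment; the state names, parameters and update rule are simply assumptions chosen to show how mutual reinforcement can preserve a one-off boost, such as that produced by virtue labelling.

def step(states, coupling=0.2, decay=0.2, baseline=0.3):
    # Each state drifts back towards its baseline but is lifted by the average
    # level of the other states, the cyclical support a PCN is meant to provide.
    new = {}
    for name, level in states.items():
        others = [v for k, v in states.items() if k != name]
        support = sum(others) / len(others)
        new[name] = level + decay * (baseline - level) + coupling * (support - baseline)
    return new

# Start at baseline, then give caring behaviour a one-off boost from labelling.
states = {"caring_behaviour": 0.3, "positive_affect": 0.3, "relationship_quality": 0.3}
states["caring_behaviour"] = 0.9

for _ in range(30):
    states = step(states)

print({k: round(v, 2) for k, v in states.items()})
# With coupling equal to decay, as here, the whole network settles around 0.5,
# above the 0.3 baseline; with weaker coupling the boost slowly fades back.

Nothing hangs on these particular numbers; the point is only that, as Bishop suggests, a cluster of mutually supporting states need not fall straight back to where it started once the external prompt is removed.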

Secondly, it might be argued that a virtue is not something we simply accept, not something induced in us in the way a virus might induce a disease. It might be argued that unless we autonomously accept some virtue it isn’t a real virtue. I accept this argument. It might then be further argued that because we don’t autonomously accept a factitious virtue, factitious virtues aren’t really virtues. I would reject this further argument. There is a difference between autonomously accepting something and making an autonomous decision. What does it mean to autonomously accept something? I would suggest it means identifying oneself with the thing one accepts. It means caring about something. This caring about means someone “makes himself vulnerable to losses and susceptible to benefits depending upon whether what he cares about is diminished or enhanced”, according to Frankfurt (9). It might be suggested that if a factitious virtue is induced in us there is no need for us to identify with that virtue. I now want to argue that this suggestion is unsound. According to Frankfurt, what someone loves, ‘cares about’ or identifies with is defined by her motivational structures.

“That a person cares about something or that he loves something has less to do with how things make him feel, or his opinions about them, than the more or less stable motivational structures that shape his preferences and guide his conduct.” (10)

Frankfurt also believes our motivational structures are defined by what we are satisfied with and passively accept (11). To autonomously accept something means we are satisfied with our acceptance and experience no resistance to, or restlessness with, that acceptance. Let us return to factitious virtue. Labelling, if it is to be effective, must be done in the right circumstances: it must be public and believable to the person labelled. In my previous example, telling the daughter in question that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling the daughter in public that she is a caring person when she has just helped someone to cross the road would be a case of virtue labelling, and she would be unlikely to resist such labelling. If we accept the above analysis of autonomous acceptance then the daughter autonomously accepts the factitious virtue. I would also suggest that a lack of resistance or restlessness towards accepting what children are being taught is the way in which traditional virtue ethicists see them as coming to autonomously accept the virtues they are being taught. It follows that we autonomously accept factitious virtues in much the same way we accept real virtues.

Does factitious virtue matter? Let us accept without argument that the world would be a better place if people acted virtuously. Let us also accept that factitious virtues act in much the same way as real virtues, at least for a period. It follows that factitious virtues can make the world a better place for a period, even if these virtues are relatively short lived. It would also appear that because the factitious virtue of caring has some stability it can improve the world in a more lasting way. Intuitively a more caring world is a better world. However, it might be argued that our intuitions are unsound. Factitious virtue might indeed make people more caring but only by making them care more for those already close to them to the detriment of others. In response to the above argument I would first point out that not all ethical decisions are best made by considering what a virtuous person would do. Some ethical decisions are best made using consequentialist or deontological considerations. Secondly, it might be feasible to extend the domain of factitious caring by well-considered labelling. Labelling someone as caring for strangers in the right circumstances might extend this domain. Accepting the above means accepting that the factitious virtue of caring might well improve the world in a more lasting way and that the factitious virtue of caring matters.

  1. Mark Alfano, 2013, Character as Moral Fiction, Cambridge University Press.
  2. Alfano, 4.1
  3. Alice Isen & Paula Levin, 1972, The effect of feeling good on helping: cookies and kindness, Journal of Personality and Social Psychology, 21(3), 384-388.
  4. Alfano, 4.2.2
  5. Alfano, 4.3.1
  6. Julia Annas, 2011, Intelligent Virtue, Oxford University Press, page 84.
  7. Michael Bishop, 2015, The Good Life, Oxford University Press, chapter 3.
  8. Bishop, page 75.
  9. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.
  10. Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 129.
  11. Frankfurt, 1999, Necessity, Volition, and Love, page 103.

Monday, 7 March 2016

Algorithmic Assisted Moral Decision Making


Albert Barqué-Duran wonders whether humanity wants computers making moral decisions. In the face of the coronavirus outbreak, when we are faced with difficult and complex moral decisions, it might be suggested that we need all the help we can get. However, is it even right to outsource any of our moral decisions? If we can’t outsource at least part of our moral decision-making, then the whole idea of applied philosophy has very shaky foundations. I believe applied philosophy remains a meaningful discipline, even if some of its practitioners seem to over-elaborate some things in an attempt to justify both their position and the discipline itself. Let us assume outsourcing some moral decisions can be justified. A doctor might, for instance, trust a bioethicist. If it can be right to outsource some moral decision-making to experts in some fields, could it also be right to outsource some moral decision-making to algorithms or machines? In this posting I want to consider this question.

What is a moral decision? It isn’t simply a decision with ethical implications, which computers can already make; it is a decision based on moral considerations. It is feasible that in the future computers might make moral decisions completely by themselves, or computers might aid people to make such decisions in much the same way as computers now aid people to design things. In this posting I want to examine whether humanity might want computers to make moral decisions. I will consider three reasons why it shouldn’t. Firstly, it might be argued that only human beings should make moral decisions. Secondly, it might be argued that even if computers can make moral decisions, some of these decisions would not be in the interests of human beings. Lastly, it might be argued that computers might make decisions which human beings don’t understand.

First let us assume that some computers could make moral decisions independently of human beings. A driverless car would not possess such a computer, as it makes decisions based on parameters given to it by human beings and in doing so acts instrumentally to serve purely human ends. Personally I am extremely doubtful whether a computer which can acquire the capacity to make moral decisions independently of human beings is feasible in the foreseeable future. Nonetheless such a computer remains a possibility, because human beings have evolved such a capacity. If we accept that such a computer is at least a possibility, do any of the three reasons above justify our fears about it making moral decisions?

Firstly, should only human beings make moral decisions? If any sentient creature has the same or better cognitive capacities than ourselves, then it should have the same capacity as we do to make moral decisions. We seem quite happy with the idea that aliens can make moral decisions. Prima facie, provided a machine can become sentient, it should be able, and be allowed, to make moral decisions. In fiction at least we seem to be quite happy about computers or robots making moral decisions.

Secondly, might such a computer make decisions which are not in the interests of human beings? It might, but I would suggest what really matters is that it takes human interests into account. Let us accept that the domain of creatures we could possibly feel sympathy for defines the domain of creatures that merits our moral concern; this domain includes animals but not plants. If a computer tries to make moral decisions without some form of sympathy, then it might mistakenly believe it is making moral decisions about rocks, shoes and even clouds. Once again I would reiterate that at present a computer which feels sympathy is an extremely fanciful proposition. Let us accept that a computer that cannot feel sympathy cannot make moral decisions independently of human beings. Let us assume a computer capable of feeling sympathy is possible. Let us also assume that such a computer will have the cognitive powers to understand the world at least as well as we do. It follows that such a computer might make some decisions which are not in human interests but that it should always consider human interests; surely this is all we can ask for.

Thirdly, might we not be able to understand some of the moral decisions made by any computer capable of making moral decisions independently of human beings? John Danaher divides this opacity into three forms: intentional, illiterate and intrinsic algorithmic opacity. I have argued above that any computer which can make moral decisions must be capable of feeling a form of sympathy; because of this I will assume intentional opacity should not be a problem. Illiterate opacity means some of us might not understand how a computer reaches its decision, but does this matter as long as we understand the decision is a genuine moral decision which takes human interests into account? Lastly, intrinsic opacity means there may be a mismatch between how humans and computers capable of making moral decisions understand the world. Understanding whether such a mismatch is possible is fundamental to our understanding of morality itself. Can any system of morality be detached from affect, and can any system of morality be completely alien to us? I have tried to cast some doubt on this possibility above by considering the domain of moral concern. If my doubts are justified, then this suggests that any mismatch in moral understanding cannot be very large.

Let us accept that even if computers which can make moral decisions independently of human beings are possible, they will only come into existence in the future, probably the far distant future. Currently there is interest in driverless cars having ethical control systems, and we already have computer aided design. It is then at least conceivable that we might develop a system of computer aided moral decision-making. In practice it would be the software in the computer which would aid in making any decisions, so the rest of this posting will be concerned with algorithmically aided moral decision making. Giubilini and Savulescu consider an artificial moral advisor which they label an AMA, see the artificial moral advisor. In what follows AMA will refer to algorithmically aided moral decision making. Such a system might emerge from a form of collective intelligence involving people in general, experts and machines.

Before I consider whether we should trust an AMA system, I want to consider whether we need such a system. Persson and Savulescu argue we are unfit to make complicated moral decisions in the future and that there is a need for moral enhancement (1). Perhaps such enhancement might be achieved by pharmacological means, but it is also possible our moral unfitness might be addressed by an AMA system which nudges us towards improved moral decision making. Of course we must be prepared to trust such a system. Indeed, AMA might be a preferable option to enhancing some emotions because of the possibility that enhancing emotions might damage our cognitive abilities (2) and boosting altruism might lead to increased ethnocentrism and parochialism, see practical ethics.

Let us accept there is a need for AMA, provided that it is feasible. Before questioning whether we should trust an AMA system I need to sketch some possible features of AMA. Firstly, when initialising such a system a top down process would seem to be preferable, because if we do so we can at least be confident the rules it tries to interpret are moral rules. At present any AMA system using virtue ethics would seem to be unfeasible. Secondly, we must consider whether the rules we build into such a system should be deontological or consequentialist. An AMA system using deontological rules might be feasible, but because computers are good at handling large quantities of data it might be preferable if initially we installed a consequentialist system of ethics. In the remainder of this posting I will only consider AMA based on a consequentialist system of ethics. As this is only a sketch I will not consider the exact form of consequentialist ethics employed. Thirdly, any AMA system operating on a consequentialist system must have a set of values. Once again, when initialising the system, we should use a top down approach and install human values. Initially these values would be fairly primitive values such as avoiding harm. In order to initialise the AMA system we must specify what we mean by harm. Perhaps the easiest way to specify harm would be to define it as suffering by some creature or loss of some capacity by that creature. We might specify the sort of creature which can be harmed by a list of sentient creatures. Next we might simply specify suffering as pain. Lastly we would have to specify a list of capacities for each sentient creature on our list. A rough illustration of such a primitive specification is sketched below.
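To make this concrete, here is a minimal sketch of how such a primitive, top down specification of harm might look in code. It is purely illustrative: the list of sentient creatures, their capacities and the scoring rule are assumptions made for this example, not features of any existing AMA system.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: harm is specified as pain (suffering) plus the
# loss of capacities, and only creatures on a fixed list of sentient
# creatures can be harmed. All names and values are hypothetical.

@dataclass
class SentientCreature:
    name: str
    capacities: list[str] = field(default_factory=list)

# Hypothetical list of sentient creatures and their capacities.
SENTIENT_CREATURES = {
    "human": SentientCreature("human", ["movement", "communication", "planning"]),
    "dog": SentientCreature("dog", ["movement", "play"]),
}

def harm_score(creature_name: str, pain: float, lost_capacities: list[str]) -> float:
    """Return a crude harm score: pain plus one unit per capacity lost.

    Creatures not on the sentient list cannot be harmed, so rocks, shoes
    and clouds score zero by definition.
    """
    creature = SENTIENT_CREATURES.get(creature_name)
    if creature is None:
        return 0.0
    lost = [c for c in lost_capacities if c in creature.capacities]
    return pain + len(lost)

# Example: an action causing a human mild pain and the loss of movement.
print(harm_score("human", pain=0.3, lost_capacities=["movement"]))  # 1.3
print(harm_score("rock", pain=5.0, lost_capacities=["movement"]))   # 0.0
```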

At this juncture we have an extremely primitive system of AMA. This system could be regarded as a universal system and in the same circumstances would always make the same decision. A private morality is of course nonsense, but nonetheless we must identify with our moral decisions, and a universal morality might be hard to identify with. At this point the user of such a system might modify it by adding weights to the built in values. For instance, a user might give a greater weight to avoiding harm, acting beneficently, and a lesser weight to respecting autonomy in situations in which these two values clash. At this point we have a primitive system the user might identify with. This primitive system might now be further modified, in a more bottom up way, by the use of two feedback loops. Firstly, the user of the system must inform the system whether she accepts any proposed decision. If the user accepts the proposed decision, then this decision can be made a basis for similar decisions in much the same way as legal judgements set precedents for further judgements. If the user doesn’t accept a particular decision, then the system must make clear to the user the weights attached to the values it used in making this decision and in any previous decisions used. The user might then refine the system either by altering the weights attached to the values involved and/or by feeding into the system how the circumstances of the current decision differ from those of the past decisions used. Lastly, in this joined up age, the system’s user might permit the system to use the weights attached to values, and the decisions made, by other systems belonging to people she trusts or respects. Employing such a system might be seen as employing a system of collective intelligence which uses both humans and algorithms in making moral decisions. The user weighting and the first feedback loop are sketched below.
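Continuing the illustrative sketch above, the snippet below shows one possible shape for the user-adjustable value weights and the first feedback loop: accepted decisions are stored as precedents, while rejected decisions prompt the system to reveal the weights it used so the user can revise them. The value names, the weighted-sum scoring rule and the data structures are assumptions made for the example, not a specification of a real AMA system.

```python
# Continuing the illustrative sketch: user-adjustable weights over built-in
# values, a weighted-sum scoring rule, and the first feedback loop in which
# accepted decisions become precedents and rejected decisions expose the
# weights used. All names and numbers are hypothetical.

VALUE_WEIGHTS = {
    "avoid_harm": 2.0,        # this user weights avoiding harm more heavily
    "respect_autonomy": 1.0,  # than respecting autonomy when the two clash
}

PRECEDENTS: list[dict] = []   # accepted decisions, reusable for similar cases

def score_option(option: dict) -> float:
    """Weighted sum of how well an option serves each built-in value.

    `option` maps value names to scores in [0, 1], for example
    {"avoid_harm": 0.9, "respect_autonomy": 0.2}.
    """
    return sum(VALUE_WEIGHTS[value] * option.get(value, 0.0) for value in VALUE_WEIGHTS)

def propose(options: list[dict]) -> dict:
    """Propose the option with the highest weighted score."""
    return max(options, key=score_option)

def feedback(option: dict, accepted: bool) -> None:
    """First feedback loop: store accepted decisions as precedents; on
    rejection, show the weights so the user can revise VALUE_WEIGHTS."""
    if accepted:
        PRECEDENTS.append(option)
    else:
        print("Decision rejected; weights used were:", VALUE_WEIGHTS)

# Example use: choose between two hypothetical options and record the verdict.
chosen = propose([
    {"avoid_harm": 0.9, "respect_autonomy": 0.2},
    {"avoid_harm": 0.4, "respect_autonomy": 0.9},
])
feedback(chosen, accepted=True)
```

The point of the sketch is only that the weights belong to the user and are revised by her, which is what allows her to identify with the decisions the system proposes.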

I now want to return to my original question: should we trust the decisions such an AMA system makes? I want to consider the three reasons outlined above as to why we shouldn’t trust such a system. I will conclude that each of these reasons appears to be unsound in this context. Firstly, an objector might suggest that if someone relies on such a system she isn’t really making moral decisions. I suggested a moral decision is a decision based on moral considerations. The inbuilt values of the system are moral values, so it would appear any decision made by the system is based on moral considerations. However, my objector might now suggest that if someone makes a moral decision she must identify herself with that decision. It would appear that, even if we accept my objector’s suggestion, because the system relies on moral values built into it by the user, any decision it makes must be based on values she identifies herself with. Secondly, my objector might suggest that such a system does not serve human interests. The system sketched above is a consequentialist system and it might make decisions which aren’t in the user’s self-interest; however, because the values built into it are human values, the system should always act in humans’ interests. It might of course make bad decisions when trying to serve those interests, but then so do humans themselves. Lastly, my objector might return to Danaher’s opacity question and suggest that the user of such a system might fail to understand why the system made a particular decision. I would suggest that because the system has feedback loops built into it this shouldn’t occur. I would further point out that because it is always the user who implements the decision, and not the system, the user retains a veto over the system.

This examination has been extremely speculative. It seems to me that whether we would want such computers to make moral decisions depends on the background circumstances. All moral decisions are made against some background. Sometimes this background is objective and sometimes it includes subjective elements. For instance, someone’s decision to have an abortion contains subjective elements, and someone is unlikely to use AMA to help in making such a decision. The covid-19 outbreak raises a number of moral questions for doctors treating covid patients with limited resources, and these decisions need to be made against a mainly objective background. Such decisions are more amenable to AMA, and perhaps now would be a good time to start designing such systems for emergencies.


  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.
  2. Christoph Bublitz, 2016, Moral Enhancement and Mental Freedom, Journal of Applied Philosophy, 33(1), page 91.
