Wednesday, 30 March 2016

Factitious Virtue


In this posting I want to consider Mark Alfano’s idea of factitious virtue; I will only consider factitious moral virtue (1). In recent years the whole idea that human beings can possess virtues has come under sustained attack from moral psychologists, and many would now question whether virtue ethics has any real future. Moral psychologists argue that all that really matters when we make moral decisions are the situations we find ourselves in, not any supposed virtues we might possess. However, if all that matters are the situations people find themselves in when making such decisions, then everyone should act in a similar fashion in a similar situation. Clearly this isn’t true: people’s characters vary. It is conceivable that someone’s character is partly shaped by her moral behaviour; being a trustworthy person is part of that person’s character. Perhaps virtue is hard and limited to a few people, or perhaps most people only have limited virtue. In this posting I will argue that if virtue matters, it does so in two specific domains. I will then consider whether Alfano’s factitious virtue can be considered a virtue in the traditional sense. Lastly I will consider whether factitious virtue matters.

Let us consider the way we make moral decisions. When making important moral decisions with wide-scale implications, virtue ethics is not really useful. Some might disagree with this. When making important moral decisions we don’t simply do what a virtuous person would do; we think. We think of the consequences, or perhaps we question whether any decision we make could be made into a universal law. When making important decisions, such as those concerning the consequences of global warming or whether terminally ill patients should have the right to assisted suicide, thinking about consequences or universal laws seems a better way forward than simply asking what a virtuous person would do. I will not consider whether we should employ consequentialist or deontological methods here. It might be thought in the light of the above that I believe virtue should play no part in moral decision making. Such a thought would be premature. Not all moral decisions are of wide-scale importance; for instance, a daughter might have to decide whether to help her aged mother go shopping or spend an enjoyable afternoon by herself in her garden on a beautiful summer’s day. Such decisions are not made after careful consideration but rather by simply deciding, deciding in accordance with our virtues, provided of course this is possible. It follows there is a possible place for the virtues in making some moral decisions; Bernard Williams would have classed such decisions as ethical decisions, see the Internet Encyclopaedia of Philosophy. Virtue would be useful in the domain of making everyday moral or ethical decisions, provided virtue is possible. I now want to argue that virtue might also matter in a second domain. Alfano suggests that,

“People enjoy acting in accordance with their own self-concepts, even those aspects of self-concept that are evaluatively neutral; they’re averse to acting contrary to their self-concepts, especially the evaluatively positive aspects of their self-concepts.” (2)

I’m not sure Alfano is totally correct when he suggests people enjoy acting in accordance with their self-concepts. I would suggest people are satisfied with acting in accordance with their self-concepts and hence have no reason to act otherwise. I would however agree with Alfano that people do act in accordance with their self-concepts. The daughter in the example I used above makes her decision based on her self-concept. She might consider herself to be a caring person and as a result take her mother shopping. It follows that if we partly define ourselves by the virtues we possess, virtue matters in the domain of self-definition.

Let us accept that there is a possible domain for virtue in moral decision making. I would suggest that this is not a trivial domain, because most of the moral decisions we make are everyday ones and our concept of self matters. I would further suggest that we have evolved a capacity to make everyday moral decisions and find it hard to transcend this capacity. However, even if there is a possible domain for the virtues in making moral decisions, this possibility by itself doesn’t mean the virtues exist. A lot of psychological research seems to point to the situation someone finds herself in when making moral decisions being much more important than any supposed virtue she might possess. In 1972 Alice Isen and Paula Levin conducted a famous experiment which showed participants who found a dime in a payphone were much more likely to aid someone needing help (3). Many other studies have replicated Isen and Levin’s finding that what really matters when making a moral decision is the context the decision is made in rather than any supposed virtue the decision maker possesses. Let us accept for the sake of argument that virtue is weak or rare in most people and hence not a useful concept as far as most people are concerned.

In the light of the situationist challenge Alfano argues that the idea of factitious virtue is useful. What exactly is factitious virtue? Alfano suggests that factitious virtue is a kind of self-fulfilling prophecy. He gives us an example of a self-fulfilling prophecy.

“Were United States Federal Reserve Chairman Ben Bernanke to announce … on a Sunday evening that the stock market would collapse the next day, people would react by selling their portfolios, leading indeed to a stock market crash.” (4)

A factitious virtue is analogous to a self-fulfilling prophecy. Alfano argues that if you label someone as having a virtue, she comes to act as if she possesses that virtue; she has factitious virtue.

“Virtue labelling causes factitious virtue, in which people behave in accordance with virtue not because they possess the trait in question but because that trait has been attributed to them.” (5)

For labelling to be effective it should be made in public and be believable to the person labelled. Let us return to my previous example. Telling the daughter that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling the daughter in public that she is a caring person when she has just helped someone to cross the road would be a case of virtue labelling, and would mean that she would be more likely to help her mother with her shopping.

Let us examine the status of a factitious virtue. The question naturally arises: is factitious virtue a real virtue? Alfano uses an analogy between a placebo and factitious virtue to explain how factitious virtue works. If someone believes that a placebo will help her, then her belief is a self-fulfilling one. In the same way, if someone believes she has a virtue due to labelling, then she has factitious virtue. But a placebo isn’t a drug, and it might be argued by analogy that factitious virtue is not a real virtue. What do we mean by a virtue? According to the Cambridge Online Dictionary, virtue is “a good moral quality in a person or the general quality of being morally good.” If we accept this definition, then factitious virtue is a real virtue in a narrow sense, because it induces a good quality in a person, and the argument by analogy fails. However, labelling does not seem to induce the more global quality of someone being morally good.

I now want to examine whether factitious virtue is a real virtue in the broader sense of being connected to being a morally good person. Factitious virtue differs from the more traditional virtues in the way it is acquired; does this difference in acquisition mean factitious virtue is not a real virtue? Julia Annas argues we acquire the virtues by learning (6). Learning requires some skill. If someone acquires a factitious virtue of caring by means of labelling, then her acquisition need not involve any skill. It follows, provided Annas is correct, that factitious virtue is not a real virtue. Annas further argues we cannot acquire a moral virtue in isolation; for instance, someone cannot learn to be caring without also learning to be just. Perhaps we can acquire non-moral virtues such as courage in isolation. It follows that if someone acquires one moral virtue, in doing so she must acquire others, because there is some unity of the moral virtues, and this leads her to being a morally good person. Beneficence is a moral virtue, and someone might become more beneficent by being labelled as caring. Yet acquiring the factitious virtue of caring by labelling doesn’t require that someone acquire any other moral virtues. It again follows, provided Annas is correct, that factitious virtue is not a real virtue in the broader sense. However, factitious virtue remains a real virtue in the narrow sense because it induces a good quality in a person.

I now want to consider two objections to regarding factitious virtue as a real virtue in even the narrow sense. Firstly, it might be argued that any real virtue must be stable over time and that once labelling ceases a factitious virtue slowly decays. Michael Bishop argues that positive causal networks (PCNs) are self-sustaining (7). A PCN is a cluster of mental states which sustain each other in a cyclical way. For instance, confidence and optimism might aid someone to be more successful, and her success in turn boosts her confidence and optimism. Bishop argues that successful relationships, positive affect and healthy relationship skills/patterns form such a network (8). Healthy relationship skills include trusting, being responsive to someone’s needs and offering support. Healthy relationship skills involve caring, and so it is possible that caring is part of a self-sustaining network. It follows that if the factitious virtue of caring is induced in someone, then once induced this factitious virtue has some stability. Whether such a possibility exists for other factitious virtues is not a question for philosophy but for empirical research. It would appear that at least one important factitious virtue, that of caring, might be stable over time, and that this might be true of others.

Secondly, it might be argued that a virtue is not something we simply accept, not something induced in us in the way a virus might induce a disease. It might be argued that unless we autonomously accept some virtue, it isn’t a real virtue. I accept this argument. It might then be further argued that because we don’t autonomously accept a factitious virtue, factitious virtues aren’t really virtues. I would reject this further argument. There is a difference between autonomously accepting something and making an autonomous decision. What does it mean to autonomously accept something? I would suggest it means identifying oneself with the thing one accepts. It means caring about something. This caring about means someone “makes himself vulnerable to losses and susceptible to benefits depending upon whether what he cares about is diminished or enhanced”, according to Frankfurt (9). It might be suggested that if a factitious virtue is induced in us, there is no need for us to identify with that virtue. I now want to argue that this suggestion is unsound. According to Frankfurt, what someone loves, ‘cares about’ or identifies with is defined by her motivational structures.

“That a person cares about something or that he loves something has less to do with how things make him feel, or his opinions about them, than the more or less stable motivational structures that shape his preferences and guide his conduct.” (10)

Frankfurt also believes our motivational structures are defined by what we are satisfied with, passively accept (11). To autonomously accept something means we are satisfied with our acceptance and experience no resistance to or restlessness with that acceptance. Let us return to factitious virtue. Labelling, if it is to be effective, must be done in the right circumstances: it must be public and believable to the person labelled. In my previous example, telling the daughter in question that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling the daughter in public that she is a caring person when she has just helped someone to cross the road would be a case of virtue labelling, and she would be unlikely to resist such labelling. If we accept the above analysis of autonomous acceptance, then the daughter autonomously accepts the factitious virtue. I would also suggest that a lack of resistance or restlessness towards accepting what children are being taught is the way in which traditional virtue ethicists see them as coming to autonomously accept the virtues they are being taught. It follows that we autonomously accept factitious virtues in much the same way we accept real virtues.

Does factitious virtue matter? Let us accept without argument that the world would be a better place if people acted virtuously. Let us also accept that factitious virtues act in much the same way as real virtues, at least for a period. It follows factitious virtues can make the world a better place for a period, even if these virtues are relatively short lived. It would also appear that because the factitious virtue of caring has some stability, it can improve the world in a more lasting way. Intuitively a more caring world is a better world. However, it might be argued that our intuitions are unsound. Factitious virtue might indeed make people more caring, but only by making them care more for those already close to them to the detriment of others. In response, I would first point out that not all ethical decisions are best made by considering what a virtuous person would do; some are best made using consequentialist or deontological considerations. Secondly, it might be feasible to extend the domain of factitious caring by well-considered labelling. Labelling someone as caring for strangers in the right circumstances might extend this domain. Accepting the above means accepting that the factitious virtue of caring might well improve the world in a more lasting way and that the factitious virtue of caring matters.

  1. Mark Alfano, 2013, Character as Moral Fiction, Cambridge University Press.
  2. Alfano, 4.1
  3. Alice Isen & Paula Levin, 1972, The effect of feeling good on helping: Cookies and kindness, Journal of Personality and Social Psychology, 21, 384-388.
  4. Alfano, 4.2.2
  5. Alfano, 4.3.1
  6. Julia Annas, 2011, Intelligent Virtue, Oxford University Press, page 84.
  7. Michael Bishop, 2015, The Good Life, Oxford University Press, chapter 3.
  8. Bishop, page 75.
  9. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.
  10. Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 129.
  11. Frankfurt, 1999, Necessity, Volition, and Love, page 103.

Monday, 7 March 2016

Algorithmic Assisted Moral Decision Making


Albert Barqué-Duran wonders whether humanity wants computers making moral decisions. In the face of the coronavirus outbreak, when we are faced by difficult and complex moral decisions, it might be suggested that we need all the help we can get. However, is it even right to outsource any of our moral decisions? If we can’t outsource at least part of our moral decision-making, then the whole idea of applied philosophy has very shaky foundations. I believe applied philosophy remains a meaningful discipline, even if some of its practitioners seem to over-elaborate some things in an attempt to justify both their position and the discipline itself. Let us assume outsourcing some moral decisions can be justified; a doctor might for instance trust a bioethicist. If we assume that it can be right to outsource some moral decision-making to experts in some fields, could it also be right to outsource some moral decision-making to algorithms or machines? In this posting I want to consider this question.

What is a moral decision? It isn’t simply a decision with ethical implications, which computers can already make; it is a decision based on moral considerations. It is feasible that in the future computers might make moral decisions completely by themselves, or computers might aid people to make such decisions in much the same way as computers now aid people to design things. In this posting I want to examine whether humanity might want computers to make moral decisions. I will consider three reasons why it shouldn’t. Firstly, it might be argued that only human beings should make moral decisions. Secondly, it might be argued that even if computers can make moral decisions, some of these decisions would not be in the interests of human beings. Lastly, it might be argued that computers might make decisions which human beings don’t understand.

First let us assume that some computers could make moral decisions independently of human beings. A driverless car would not possess such a computer, as it makes decisions based on parameters given to it by human beings and in doing so acts instrumentally to serve purely human ends. Personally I am extremely doubtful whether a computer which can acquire the capacity to make moral decisions independently of human beings is feasible in the foreseeable future. Nonetheless such a computer remains a possibility, because human beings have evolved such a capacity. If we accept that such a computer is at least feasible, do any of the three reasons above justify our fears about it making moral decisions? Firstly, should only human beings make moral decisions? If any sentient creature has the same or better cognitive capacities than ours, then it should have the same capacity as we do to make moral decisions. We seem quite happy with the idea that aliens can make moral decisions. Prima facie, provided a machine can become sentient, it should be able and allowed to make moral decisions. In fiction at least we seem to be quite happy about computers or robots making moral decisions. Secondly, might such a computer make decisions which are not in the interests of human beings? It might, but I would suggest what really matters is that it takes human interests into account. Let us accept that the domain of creatures we could possibly feel sympathy for defines the domain of creatures that merits our moral concern; this includes animals but not plants. If a computer tries to make moral decisions without some form of sympathy, then it might mistakenly believe it is making moral decisions about rocks, shoes and even clouds. Once again I would reiterate that at present a computer which feels sympathy is an extremely fanciful proposition. Let us accept that a computer that cannot feel sympathy cannot make moral decisions independently of human beings.
Let us assume a computer capable of feeling sympathy is possible. Let us also assume that such a computer will also have the cognitive powers to understand the world at least as well as we do. It follows such a computer might make some decisions which are not in human interests, but it should always consider human interests; surely this is all we can ask for. Thirdly, might we not be able to understand some of the moral decisions made by any computer capable of making moral decisions independently of human beings? John Danaher divides this opacity into three forms: intentional, illiterate and intrinsic algorithmic opacity. I have argued above that any computer which can make moral decisions must be capable of feeling a form of sympathy; because of this I will assume intentional opacity should not be a problem. Illiterate opacity means some of us might not understand how a computer reaches its decision, but does this matter as long as we understand the decision is a genuine moral decision which takes human interests into account? Lastly, intrinsic opacity means there may be a mismatch between how humans and computers capable of making moral decisions understand the world. Understanding whether such a mismatch is possible is fundamental to our understanding of morality itself. Can any system of morality be detached from affect, and can any system of morality be completely alien to us? I have tried to cast some doubt on this possibility above by considering the domain of moral concern. If my doubts are justified, then this suggests that any mismatch in moral understanding cannot be very large.

Let us accept that even if computers capable of making moral decisions independently of human beings are possible, this possibility will only be realised in the future, probably the far distant future. Currently there is interest in driverless cars having ethical control systems, and we have computer aided design. It is then at least conceivable that we might develop a system of computer aided moral decision-making. In practice it would be the software in the computer which would aid in making any decisions, so the rest of this posting will be concerned with algorithmic aided moral decision making. Giubilini and Savulescu consider an artificial moral advisor which they label an AMA, see the artificial moral advisor. In what follows AMA will refer to algorithmic aided moral decision making. Such a system might emerge from a form of collective intelligence involving people in general, experts and machines.

Before considering whether we should trust an AMA system, I want to consider whether we need such a system. Persson and Savulescu argue we are unfit to make complicated moral decisions in the future and that there is a need for moral enhancement (1). Perhaps such enhancement might be achieved by pharmacological means, but it is also possible our moral unfitness might be addressed by an AMA system which nudges us towards improved moral decision making. Of course we must be prepared to trust such a system. Indeed, AMA might be a preferable option to enhancing some emotions, because enhancing emotions might damage our cognitive abilities (2), and boosting altruism might lead to increased ethnocentrism and parochialism, see practical ethics.

Let us accept there is a need for AMA, provided that it is feasible. Before questioning whether we should trust an AMA system I need to sketch some possible features of AMA. Firstly, when initialising such a system a top down process would seem preferable, because if we use one we can at least be confident the rules the system tries to interpret are moral rules. At present any AMA system using virtue ethics would seem to be unfeasible. Secondly, we must consider whether the rules we build into such a system should be deontological or consequentialist. An AMA system using deontological rules might be feasible, but because computers are good at handling large quantities of data it might be preferable if initially we installed a consequentialist system of ethics. In the remainder of this posting I will only consider AMA based on a consequentialist system of ethics. As this is only a sketch I will not consider the exact form of consequentialist ethics employed. Thirdly, any AMA system operating on a consequentialist system must have a set of values. Once again, when initialising the system we should use a top down approach and install human values. Initially these would be fairly primitive values such as avoiding harm. In order to initialise the AMA system we must specify what we mean by harm. Perhaps the easiest way would be to define harm as suffering by some creature or loss of some capacity by that creature. We might specify the sort of creature which can be harmed by a list of sentient creatures. Next we might simply specify suffering as pain. Lastly we would have to specify a list of capacities for each creature on our list of sentient creatures.
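The top down specification of harm just described might be sketched in code. This is only an illustration of the idea, not a specification of any real AMA system: the creature names, the capacity lists and the rule that harm is pain or loss of a listed capacity are all my own illustrative assumptions.

```python
# Hypothetical sketch of the top-down initialisation described above:
# harm is specified as suffering (pain) or the loss of a capacity,
# over an explicit list of sentient creatures. All names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class Creature:
    name: str
    capacities: tuple  # capacities whose loss counts as harm


# The installed list of creatures which can be harmed.
SENTIENT_CREATURES = [
    Creature("human", ("life", "mobility", "autonomy")),
    Creature("dog", ("life", "mobility")),
]

SENTIENT_NAMES = {c.name for c in SENTIENT_CREATURES}


def is_harm(creature, in_pain=False, lost_capacity=None):
    """Harm = pain, or loss of one of the creature's listed capacities."""
    if creature.name not in SENTIENT_NAMES:
        return False  # rocks, shoes and clouds cannot be harmed
    return bool(in_pain or (lost_capacity in creature.capacities))


human = SENTIENT_CREATURES[0]
print(is_harm(human, lost_capacity="mobility"))          # True
print(is_harm(Creature("rock", ()), in_pain=True))       # False
```

The point of the sketch is only that a primitive top down value such as "avoid harm" forces us to make these specifications explicit before the system can be initialised.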

At this juncture we have an extremely primitive system of AMA. This system could be regarded as a universal system which in the same circumstances would always make the same decision. A private morality is of course nonsense, but nonetheless we must identify with our moral decisions, and a universal morality might be hard to identify with. At this point the user of such a system might modify it by adding weights to the built-in values. For instance, a user might give a greater weight to avoiding harm and acting beneficently, and a lesser weight to respecting autonomy, in situations in which these values clash. At this point we have a primitive system the user might identify with. This primitive system might now be further modified by use in a more bottom up way by means of two feedback loops. Firstly, the user of a system must inform the system whether she accepts any proposed decision. If the user accepts the proposed decision, then this decision can be made a basis for similar decisions, in much the same way as legal judgements set precedents for further judgements. If the user doesn’t accept a particular decision, then the system must make clear to the user the weights attached to the values it used in making this decision and any previous decisions used. The user might then refine the system either by altering the weights attached to the values involved and/or by feeding into the system how the circumstances of the current decision differ from the circumstances of past decisions used. Lastly, in this joined up age, the system’s user might permit the system to use the weights attached to values and the decisions made by other systems belonging to people she trusts or respects. Employing such a system might be seen as employing a system of collective intelligence which uses both humans and algorithms in making moral decisions.
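The weighted values and the first feedback loop described above might be sketched as follows. Again, everything here is a hypothetical illustration: the value names, the weights, the weighted-sum scoring rule and the option data are my own assumptions, not part of any proposed AMA design.

```python
# Hypothetical sketch of the weighted-value AMA described above.
# Value names, weights and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class SimpleAMA:
    # User-adjustable weights over the built-in human values.
    weights: dict = field(default_factory=lambda: {
        "avoid_harm": 2.0,        # greater weight, as in the example above
        "beneficence": 2.0,
        "respect_autonomy": 1.0,  # lesser weight when values clash
    })
    precedents: list = field(default_factory=list)

    def score(self, option):
        """Weighted sum of how well an option serves each value (0..1)."""
        return sum(self.weights[v] * option["values"].get(v, 0.0)
                   for v in self.weights)

    def propose(self, options):
        """Propose the option with the highest weighted score."""
        return max(options, key=self.score)

    def feedback(self, option, accepted, new_weights=None):
        """First feedback loop: accepted decisions become precedents;
        rejected ones let the user revise the weights."""
        if accepted:
            self.precedents.append(option)
        elif new_weights:
            self.weights.update(new_weights)


# Example: two options scored against the built-in values.
ama = SimpleAMA()
options = [
    {"name": "treat_patient_A",
     "values": {"avoid_harm": 0.9, "respect_autonomy": 0.2}},
    {"name": "treat_patient_B",
     "values": {"avoid_harm": 0.4, "respect_autonomy": 0.9}},
]
choice = ama.propose(options)
print(choice["name"])  # avoid_harm carries more weight, so A is proposed
```

A user who rejected the proposal could call `feedback` with revised weights, shifting the system towards values she identifies with; sharing weights and precedents between trusted users would give the collective-intelligence variant mentioned above.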

I now want to return to my original question: should we trust the decisions such an AMA system makes? I want to consider the three reasons outlined above as to why we shouldn’t trust such a system. I will conclude each of these reasons appears to be unsound in this context. Firstly, an objector might suggest that if someone relies on such a system she isn’t really making moral decisions. I suggested a moral decision is a decision based on moral considerations. The inbuilt values of the system are moral values, so it would appear any decision made by the system is based on moral considerations. However, my objector might now suggest that if someone makes a moral decision she must identify herself with that decision. It would appear that, even if we accept my objector’s suggestion, because the system relies on moral values built into it by the user, any decision it makes must be based on values she identifies herself with. Secondly, my objector might suggest that such a system does not serve human interests. The system sketched above is a consequentialist system and it might make decisions which aren’t in the user’s self-interest; however, because the values built into it are human values, the system should always act in humans’ interests. It might of course make bad decisions when trying to serve those interests, but then so do humans themselves. Lastly, my objector might return to Danaher’s opacity question and suggest that the user of such a system might fail to understand why the system made a particular decision. I would suggest that because the system has feedback loops built into it this shouldn’t occur. I would further point out that because it is always the user who implements the decision and not the system, the user retains a veto over the system.

This examination has been extremely speculative. It seems to me that whether we would want such computers to make moral decisions depends on the background circumstances. All moral decisions are made against some background. Sometimes this background is objective and sometimes it includes subjective elements. For instance, someone’s decision to have an abortion contains subjective elements, and someone is unlikely to use AMA to help in making such a decision. The covid-19 outbreak creates a number of moral questions for doctors treating covid patients with limited resources, questions which need to be answered against a mainly objective background. Such decisions are more amenable to AMA, and perhaps now would be a good time to start designing such systems for emergencies.


  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.
  2. Christoph Bublitz, 2016, Moral Enhancement and Mental Freedom, Journal of Applied Philosophy, 33(1), page 91.
