Wednesday 30 March 2016

Factitious Virtue


In this posting I want to consider Mark Alfano’s idea of factitious virtue; I will only consider factitious moral virtue (1). In recent years the whole idea that human beings can possess virtues has come under sustained attack from moral psychologists, and many would now question whether virtue ethics has any real future. Moral psychologists argue that all that really matters when we make moral decisions are the situations we find ourselves in, not any supposed virtues we might possess. However, if all that matters are the situations people find themselves in when making such decisions, then everyone should act in a similar fashion in a similar situation. Clearly this isn’t true. People’s characters vary. It is conceivable that someone’s character is partly shaped by her moral behaviour; being a trustworthy person is part of that person’s character. Perhaps virtue is hard and limited to a few people, or perhaps most people only have limited virtue. In this posting I will argue that if virtue matters, it does so in two specific domains. I will then consider whether Alfano’s factitious virtue can be considered a virtue in the traditional sense. Lastly I will consider whether factitious virtue matters.

Let us consider the way we make moral decisions. When making important moral decisions with wide-scale implications virtue ethics is not really useful. Some might disagree. When making important moral decisions we don’t simply do what a virtuous person would do, we think. We think of the consequences, or perhaps we question whether any decision we make could be made into a universal law. When making important decisions, such as those concerning the consequences of global warming or whether terminally ill patients should have the right to assisted suicide, thinking about consequences or universal laws seems to be a better way forward than simply asking what a virtuous person would do. I will not consider whether we should employ consequentialist or deontological methods here. It might be thought in the light of the above that I believe virtue should play no part in moral decision making. Such a thought would be premature. Not all moral decisions are of wide-scale importance; for instance, a daughter might have to decide whether to help her aged mother go shopping or spend an enjoyable afternoon by herself in her garden on a beautiful summer’s day. Such decisions are not made after careful consideration but rather by simply deciding, deciding in accordance with our virtues, provided of course this is possible. It follows there is a possible place for the virtues in making some moral decisions; Bernard Williams would have classed such decisions as ethical decisions, see the Internet Encyclopaedia of Philosophy. Virtue would be useful in the domain of making everyday moral or ethical decisions provided virtue is possible. I now want to argue that virtue might also matter in a second domain. Alfano suggests that,

“People enjoy acting in accordance with their own self-concepts, even those aspects of self-concept that are evaluatively neutral; they’re averse to acting contrary to their self-concepts, especially the evaluatively positive aspects of their self-concepts.” (2)

I’m not sure Alfano is totally correct when he suggests people enjoy acting in accordance with their self-concepts. I would suggest people are satisfied with acting in accordance with their self-concepts and hence have no reason to act otherwise. I would however agree with Alfano that people do act in accordance with their self-concepts. The daughter in the example I used above makes her decision based on her self-concept. She might consider herself to be a caring person and as a result take her mother shopping. It follows that if we partly define ourselves by the virtues we possess, then virtue matters in the domain of self-definition.

Let us accept that there is a possible domain for virtue in moral decision making. I would suggest that this is not a trivial domain, because most of the moral decisions we make are everyday ones and our concept of self matters. I would further suggest that we have evolved a capacity to make everyday moral decisions and find it hard to transcend this capacity. However, even if there is a possible domain for the virtues in making moral decisions, this possibility by itself doesn’t mean the virtues exist. A lot of psychological research seems to point to the situation someone finds herself in when making moral decisions being much more important than any supposed virtue she might possess. In 1972 Alice Isen and Paula Levin conducted a famous experiment which showed participants who found a dime in a payphone were much more likely to aid someone needing help (3). Many other studies have replicated Isen and Levin’s finding that what really matters when making a moral decision is the context the decision is made in rather than any supposed virtue the decision maker possesses. Let us accept for the sake of argument that virtue is weak or rare in most people and hence not a useful concept as far as most people are concerned.

In the light of the situationist challenge Alfano argues that the idea of factitious virtue is useful. What exactly is factitious virtue? Alfano suggests that factitious virtue is a kind of self-fulfilling prophecy. He gives us an example of a self-fulfilling prophecy.

“Were United States Federal Reserve Chairman Ben Bernanke to announce … on a Sunday evening that the stock market would collapse the next day, people would react by selling their portfolios, leading indeed to a stock market crash.” (4)

A factitious virtue is analogous to a self-fulfilling prophecy. Alfano argues that if you label someone as having a virtue, she comes to act as if she possesses that virtue; she has factitious virtue.

“Virtue labelling causes factitious virtue, in which people behave in accordance with virtue not because they possess the trait in question but because that trait has been attributed to them.” (5)

For labelling to be effective it should be made in public and be believable to the person labelled. Let us return to my previous example. Telling the daughter in my example that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling the daughter in public that she is a caring person when she has just helped someone to cross the road would be a case of virtue labelling and would mean that she would be more likely to help her mother with her shopping.

Let us examine the status of a factitious virtue. The question naturally arises: is factitious virtue a real virtue? Alfano uses an analogy between a placebo and factitious virtue to explain how factitious virtue works. If someone believes that a placebo will help her then her belief is a self-fulfilling one. In the same way, if someone believes she has a virtue due to labelling, then she has factitious virtue. But a placebo isn’t a drug, and it might be argued by analogy that factitious virtue is not a real virtue. What do we mean by a virtue? According to the Cambridge Online Dictionary virtue is “a good moral quality in a person or the general quality of being morally good.” If we accept this definition then factitious virtue is a real virtue in a narrow sense, because it induces a good quality in a person, and the argument by analogy fails; however, labelling does not seem to induce the more global quality of being morally good.

I now want to examine whether factitious virtue is a real virtue in the broader sense of being connected to being a morally good person. Factitious virtue differs from the more traditional virtues in the way it is acquired; does this difference in acquisition mean factitious virtue is not a real virtue? Julia Annas argues we acquire the virtues by learning (6). Learning requires some skill. If someone acquires a factitious virtue of caring by means of labelling then her acquisition need not involve any skill. It follows, provided Annas is correct, that factitious virtue is not a real virtue. Annas further argues we cannot acquire a moral virtue in isolation; for instance, someone cannot learn to be caring without also learning to be just. Perhaps we can acquire non-moral virtues such as courage in isolation. It follows that if someone acquires one moral virtue then in doing so she must acquire others, because there is some unity of the moral virtues, and this leads her to being a morally good person. Beneficence is a moral virtue and someone might become more beneficent by being labelled as caring. Acquiring the factitious virtue of caring by labelling doesn’t require that someone acquires any other moral virtues. It again follows, provided Annas is correct, that factitious virtue is not a real virtue in the broader sense. However, factitious virtue remains a real virtue in the narrow sense because it induces a good quality in a person.

I now want to consider two objections to regarding factitious virtue as a real virtue in even the narrow sense. Firstly, it might be argued that any real virtue must be stable over time and that once labelling ceases a factitious virtue slowly decays. Michael Bishop argues that positive causal networks (PCNs) are self-sustaining (7). A PCN is a cluster of mental states which sustain each other in a cyclical way. For instance, confidence and optimism might aid someone to be more successful, and her success in turn boosts her confidence and optimism. Bishop argues that successful relationships, positive affect and healthy relationship skills/patterns form such a network (8). Healthy relationship skills include trusting, being responsive to someone’s needs and offering support. Healthy relationship skills involve caring and so it is possible that caring is part of a self-sustaining network. It follows that if the factitious virtue of caring is induced in someone, then once induced this factitious virtue has some stability. Whether such a possibility exists for other factitious virtues is not a question for philosophy but for empirical research. It would appear that at least one important factitious virtue, that of caring, might be stable over time and that this might be true of others.

Secondly, it might be argued that a virtue is not something we simply accept, not something induced in us in the same way a virus might induce a disease. It might be argued that unless we autonomously accept some virtue, it isn’t a real virtue. I accept this argument. It might then be further argued that because we don’t autonomously accept a factitious virtue, factitious virtues aren’t really virtues. I would reject this further argument. There is a difference between autonomously accepting something and making an autonomous decision. What does it mean to autonomously accept something? I would suggest it means identifying oneself with the thing one accepts. It means caring about something. This caring about means someone “makes himself vulnerable to losses and susceptible to benefits depending upon whether what he cares about is diminished or enhanced”, according to Frankfurt (9). It might be suggested that if a factitious virtue is induced in us then there is no need for us to identify with that virtue. I now want to argue that this suggestion is unsound. According to Frankfurt, what someone loves, ‘cares about’ or identifies with is defined by her motivational structures.

“That a person cares about something or that he loves something has less to do with how things make him feel, or his opinions about them, than the more or less stable motivational structures that shape his preferences and guide his conduct.” (10)

Frankfurt also believes our motivational structures are defined by what we are satisfied with, what we passively accept (11). To autonomously accept something means we are satisfied with our acceptance and experience no resistance to, or restlessness with, that acceptance. Let us return to factitious virtue. Labelling, if it is to be effective, must be done in the right circumstances. Labelling must be public and believable to the person labelled. In my previous example, telling the daughter in question that she is a caring person when she has just parked in a disabled bay would not be a case of virtue labelling. Telling the daughter in public that she is a caring person when she has just helped someone to cross the road would be a case of virtue labelling, and she would be unlikely to resist such labelling. If we accept the above analysis of autonomous acceptance then the daughter autonomously accepts the factitious virtue. I would also suggest that a lack of resistance or restlessness in accepting what they are being taught is the way in which traditional virtue ethicists see children as coming to autonomously accept the virtues they are being taught. It follows that we autonomously accept factitious virtues in much the same way we accept real virtues.

Does factitious virtue matter? Let us accept without argument that the world would be a better place if people acted virtuously. Let us also accept that factitious virtues act in much the same way as real virtues, at least for a period. It follows factitious virtues can make the world a better place for a period even if these virtues are relatively short lived. It would also appear that because the factitious virtue of caring has some stability it can improve the world in a more lasting way. Intuitively a more caring world is a better world. However, it might be argued that our intuitions are unsound. Factitious virtue might indeed make people more caring, but only by making them care more for those already close to them to the detriment of others. In response to this argument I would firstly point out that not all ethical decisions are best made by considering what a virtuous person would do. Some ethical decisions are best made using consequentialist or deontological considerations. Secondly, it might be feasible to extend the domain of factitious caring by well-considered labelling. Labelling someone as caring for strangers in the right circumstances might extend this domain. Accepting the above means accepting that the factitious virtue of caring might well improve the world in a more lasting way and that the factitious virtue of caring matters.

  1. Mark Alfano, 2013, Character as Moral Fiction, Cambridge University Press.
  2. Alfano, 4.1
  3. Alice Isen & Paula Levin, 1972, The effect of feeling good on helping: cookies and kindness, Journal of Personality and Social Psychology, 21(3), 384-388.
  4. Alfano, 4.2.2
  5. Alfano, 4.3.1
  6. Julia Annas, 2011, Intelligent Virtue, Oxford University Press, page 84.
  7. Michael Bishop, 2015, The Good Life, Oxford University Press, chapter 3.
  8. Bishop, page 75.
  9. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.
  10. Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 129.
  11. Frankfurt, 1999, Necessity, Volition, and Love, page 103.

Monday 7 March 2016

Algorithmic Assisted Moral Decision Making


Albert Barqué-Duran wonders whether humanity wants computers making moral decisions. In the face of the coronavirus outbreak, when we are faced with difficult and complex moral decisions, it might be suggested that we need all the help we can get. However, is it even right to outsource any of our moral decisions? If we can’t outsource at least part of our moral decision-making, then the whole idea of applied philosophy has very shaky foundations. I believe applied philosophy remains a meaningful discipline, even if some of its practitioners seem to over-elaborate some things in an attempt to justify both their position and the discipline itself. Let us assume outsourcing some moral decisions can be justified. A doctor might for instance trust a bioethicist. If we assume that it can be right to outsource some moral decision-making to experts in some fields, could it also be right to outsource some moral decision-making to algorithms or machines? In this posting I want to consider this question.

What is a moral decision? It isn’t simply a decision with ethical implications, which computers can already make; it is a decision based on moral considerations. It is feasible that in the future computers might make moral decisions completely by themselves, or computers might aid people to make such decisions in much the same way as computers now aid people to design things. In this posting I want to examine whether humanity might want computers to make moral decisions. I will consider three reasons why it shouldn’t. Firstly, it might be argued that only human beings should make moral decisions. Secondly, it might be argued that even if computers can make moral decisions, some of these decisions would not be in the interests of human beings. Lastly, it might be argued that computers might make decisions which human beings don’t understand.

First let us assume that some computers could make moral decisions independently of human beings. A driverless car would not possess such a computer, as it makes decisions based on parameters given to it by human beings and in doing so acts instrumentally to serve purely human ends. Personally I am extremely doubtful whether a computer which can acquire the capacity to make moral decisions independently of human beings is feasible in the foreseeable future. Nonetheless such a computer remains a feasibility, because human beings have evolved such a capacity. If we accept that such a computer is at least a feasibility, do any of the three reasons above justify our fears about it making moral decisions?

Firstly, should only human beings make moral decisions? If any sentient creature has the same or better cognitive capacities as ourselves then it should have the same capacity as we do to make moral decisions. We seem quite happy with the idea that aliens can make moral decisions. Prima facie, provided a machine can become sentient, it should be able and allowed to make moral decisions. In fiction at least we seem to be quite happy about computers or robots making moral decisions.

Secondly, might such a computer make decisions which are not in the interests of human beings? It might, but I would suggest what really matters is that it takes human interests into account. Let us accept that the domain of creatures we could possibly feel sympathy for defines the domain of creatures that merits our moral concern; this includes animals but not plants. If a computer tries to make moral decisions without some form of sympathy then it might mistakenly believe it is making moral decisions about rocks, shoes and even clouds. Once again I would reiterate that at present a computer which feels sympathy is an extremely fanciful proposition. Let us accept that a computer that cannot feel sympathy cannot make moral decisions independently of human beings. Let us assume a computer capable of feeling sympathy is possible. Let us also assume that such a computer will have the cognitive powers to understand the world at least as well as we do. It follows such a computer might make some decisions which are not in human interests, but it should always consider human interests; surely this is all we can ask for.

Thirdly, might we not be able to understand some of the moral decisions made by any computer capable of making moral decisions independently of human beings? John Danaher divides this opacity into three forms: intentional, illiterate and intrinsic algorithmic opacity. I have argued above that any computer which can make moral decisions must be capable of feeling a form of sympathy; because of this I will assume intentional opacity should not be a problem. Illiterate opacity means some of us might not understand how a computer reaches its decision, but does this matter as long as we understand the decision is a genuine moral decision which takes human interests into account? Lastly, intrinsic opacity means there may be a mismatch between how humans and computers capable of making moral decisions understand the world. Understanding whether such a mismatch is possible is fundamental to our understanding of morality itself. Can any system of morality be detached from affect, and can any system of morality be completely alien to us? I have tried to cast some doubt on this possibility above by considering the domain of moral concern. If my doubts are justified, then this suggests that any mismatch in moral understanding cannot be very large.

Let us accept that even if computers which can make moral decisions independently of human beings are possible, this possibility will only come into existence in the future, probably the far distant future. Currently there is interest in driverless cars having ethical control systems, and we have computer aided design. It is then at least conceivable that we might develop a system of computer aided moral decision-making. In practice it would be the software in the computer which would aid in making any decisions, so the rest of this posting will be concerned with algorithmic aided moral decision making. Giubilini and Savulescu consider an artificial moral advisor which they label an AMA, see the artificial moral advisor. In what follows AMA will refer to algorithmic aided moral decision making. Such a system might emerge from a form of collective intelligence involving people in general, experts and machines.

Before considering whether we should trust an AMA system I want to consider whether we need such a system. Persson and Savulescu argue we are unfit to make the complicated moral decisions of the future and that there is a need for moral enhancement (1). Perhaps such enhancement might be achieved by pharmacological means, but it is also possible our moral unfitness might be addressed by an AMA system which nudges us towards improved moral decision making. Of course we must be prepared to trust such a system. Indeed, AMA might be a preferable option to enhancing some emotions, because of the possibility that enhancing emotions might damage our cognitive abilities (2) and that boosting altruism might lead to increased ethnocentrism and parochialism, see practical ethics.

Let us accept there is a need for AMA provided that it is feasible. Before questioning whether we should trust an AMA system I need to sketch some possible features of AMA. Firstly, when initialising such a system a top down process would seem to be preferable, because if we do so we can at least be confident the rules it tries to interpret are moral rules. At present any AMA system using virtue ethics would seem to be unfeasible. Secondly, we must consider whether the rules we build into such a system should be deontological or consequentialist. An AMA system using deontological rules might be feasible, but because computers are good at handling large quantities of data it might be preferable if initially we installed a consequentialist system of ethics. In the remainder of this posting I will only consider AMA based on a consequentialist system of ethics. As this is only a sketch I will not consider the exact form of consequentialist ethics employed. Thirdly, any AMA system operating on a consequentialist system must have a set of values. Once again, when initialising the system we should use a top down approach and install human values. Initially these values would be fairly primitive, such as avoiding harm. In order to initialise the AMA system we must specify what we mean by harm. Perhaps the easiest way to specify harm would be to define it as suffering by some creature or loss of some capacity by that creature. We might specify the sort of creature which can be harmed by a list of sentient creatures. Next we might simply specify suffering as pain. Lastly we would have to specify a list of capacities for each creature on our list of sentient creatures.
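To make this top down specification concrete, here is a minimal sketch in Python of how it might be represented. Everything in it, the creature list, the capacities, and the reduction of suffering to pain, is my own hypothetical rendering of the sketch above, not any existing system.

```python
from dataclasses import dataclass

# A hypothetical top-down specification of harm, as sketched above:
# harm = suffering (specified simply as pain) or loss of a capacity,
# and only creatures on an explicit list of sentient creatures can be harmed.

@dataclass
class SentientCreature:
    kind: str         # e.g. "human", "dog"
    capacities: list  # capacities whose loss counts as harm

@dataclass
class Outcome:
    creature: SentientCreature
    pain_caused: float     # suffering, specified here simply as pain
    capacities_lost: list  # capacities the outcome would remove

def harm(outcome: Outcome) -> float:
    """Score an outcome's harm: pain plus one unit per capacity lost."""
    lost = sum(1.0 for c in outcome.capacities_lost
               if c in outcome.creature.capacities)
    return outcome.pain_caused + lost

# Initialising the system top-down with a primitive list of sentient creatures.
SENTIENT_CREATURES = [
    SentientCreature("human", ["movement", "communication", "autonomy"]),
    SentientCreature("dog", ["movement"]),
]
```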

At this juncture we have an extremely primitive system of AMA. This system could be regarded as a universal system which in the same circumstances would always make the same decision. A private morality is of course nonsense, but nonetheless we must identify with our moral decisions, and a universal morality might be hard to identify with. At this point the user of such a system might modify it by adding weights to the built-in values. For instance, a user might give a greater weight to avoiding harm and acting beneficently, and a lesser weight to respecting autonomy, in situations in which these values clash. At this point we have a primitive system the user might identify with. This primitive system might now be further modified through use, in a more bottom up way, by means of two feedback loops. Firstly, the user of a system must inform the system whether she accepts any proposed decision. If the user accepts the proposed decision, then this decision can be made a basis for similar decisions in much the same way as legal judgements set precedents for further judgements. If the user doesn’t accept a particular decision, then the system must make clear to the user the weights which are attached to the values it used in making this decision and any previous decisions used. The user might then refine the system either by altering the weights attached to the values involved and/or by feeding into the system how the circumstances of the current decision differ from the circumstances of the past decisions used. Lastly, in this joined-up age, the system’s user might permit the system to use the weights attached to values, and the decisions made, by other systems belonging to people she trusts or respects. Employing such a system might be seen as employing a system of collective intelligence which uses both humans and algorithms in making moral decisions.
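Purely as an illustration, the user-added weights and the first feedback loop described above might look something like the following sketch. The value names, weights and update rule are all hypothetical choices of mine, not part of Giubilini and Savulescu’s proposal.

```python
# A minimal sketch of a weighted consequentialist AMA with one feedback loop.
# Each option rates how well it serves each built-in value (0 to 1); the
# user's weights express which values matter more to her when they clash.

weights = {"avoid_harm": 2.0, "beneficence": 1.0, "autonomy": 1.0}

def score(option):
    """Weighted sum over the built-in values; higher scores are proposed first."""
    return sum(weights[v] * option.get(v, 0.0) for v in weights)

def recommend(options):
    """Propose the option with the highest weighted score."""
    return max(options, key=lambda name: score(options[name]))

def feedback(option, accepted, rate=0.1):
    """First feedback loop: a rejected recommendation nudges down the weights
    of the values that drove it; an accepted one could instead be stored as a
    precedent for similar future decisions."""
    if not accepted:
        for v in weights:
            weights[v] -= rate * option.get(v, 0.0)

# Example: harm-avoidance and autonomy clash for this user.
options = {
    "override_refusal": {"avoid_harm": 0.9, "autonomy": 0.1},
    "respect_refusal":  {"avoid_harm": 0.3, "autonomy": 1.0},
}
print(recommend(options))  # -> "override_refusal" (score 1.9 vs 1.6)
```

The second feedback loop, sharing weights and precedents with the systems of people the user trusts, would amount to importing or averaging other users’ `weights` dictionaries; I omit it here for brevity.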

I now want to return to my original question: should we trust the decisions such an AMA system makes? I want to consider the three reasons outlined above as to why we shouldn’t trust such a system. I will conclude that each of these reasons appears to be unsound in this context. Firstly, an objector might suggest that if someone relies on such a system she isn’t really making moral decisions. I suggested a moral decision is a decision based on moral considerations. The inbuilt values of the system are moral values, so it would appear any decision made by the system is based on moral considerations. However, my objector might now suggest that if someone makes a moral decision she must identify herself with that decision. It would appear, even if we accept my objector’s suggestion, that because the system relies on moral values built into it by the user, any decision it makes must be based on values she identifies herself with. Secondly, my objector might suggest that such a system does not serve human interests. The system sketched above is a consequentialist system and it might make decisions which aren’t in the user’s self-interest; however, because the values built into it are human values, the system should always act in humans’ interests. It might of course make bad decisions when trying to serve those interests, but then so do humans themselves. Lastly, my objector might return to Danaher’s opacity question and suggest that the user of such a system might fail to understand why the system made a particular decision. I would suggest that because the system has feedback loops built into it this shouldn’t occur. I would further point out that because it is always the user who implements the decision, and not the system, the user retains a veto over the system.

This examination has been extremely speculative. It seems to me that whether we would want such computers to make moral decisions depends on the background circumstances. All moral decisions are made against some background. Sometimes this background is objective and sometimes it includes subjective elements. For instance, someone’s decision to have an abortion contains subjective elements, and someone is unlikely to use AMA to help in making such a decision. The covid-19 outbreak creates a number of moral questions for doctors treating covid patients with limited resources, questions which need to be answered against a mainly objective background. Such decisions are more amenable to AMA and perhaps now would be a good time to start designing such systems for emergencies.


  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.
  2. Christoph Bublitz, 2016, Moral Enhancement and Mental Freedom, Journal of Applied Philosophy, 33(1), page 91.

Monday 22 February 2016

Traditional and Nussbaum's Transitional Anger


The world is an angry place and it seems this anger is increasing, see for instance why are Americans so angry? Anger goes back into our evolutionary past and is one of the basic emotions everyone seems to recognise according to Paul Ekman’s studies. In the past anger must have served some useful purpose, but is anger still useful in today’s culture? Martha Nussbaum distinguishes two types of anger. Traditional anger, which,

“involves, conceptually, a wish for things to go badly, somehow, for the offender in a way that is envisaged, somehow, however vaguely, as a payback for the offense” (1)

Nussbaum also defines transitional anger as follows.

“There are many cases in which one gets standardly angry first, thinking about some type of payback, and then, in a cooler moment, heads for the Transition.” (2)

According to Nussbaum traditional anger can transmute into transitional anger, which,

“quickly puts itself out of business, in that even the residual focus on punishing the offender is soon seen as part of a set of projects for improving both offenders and society.” (3)

The type of anger given to us by evolution appears to be traditional anger, but we are no longer hunter-gatherers, so perhaps traditional anger no longer serves its original purpose and we should always transmute it into transitional anger. Perhaps if such a transmutation is possible our society might become less angry. In this posting I will argue that whilst in most situations we should transmute our anger into useful action, there are some situations in which it is right to maintain anger.

Stoics such as Seneca argued that anger is a dangerous emotion, a type of temporary madness, and ought to be eliminated or controlled. But emotions aren’t simply somatic responses. According to Michael Brady emotions act as a kind of mental alarm. They do so by facilitating,

“reassessment through the capture and consumption of attention; emotions enable us to gain a ‘true and stable’ evaluative judgement.” (4)

Alarms need attending to and this requires action; anger requires attention and action rather than simply control or elimination. Simply controlling anger leads to resentment, which is bad both for individuals and society. Traditional and transitional anger lead to different sorts of actions and I will examine the appropriateness of these different actions in our society.

If we accept anger is a kind of warning about some harm then this explains why being angry makes no sense in some situations. For instance, if someone becomes angry when diagnosed with cancer, her anger does not act as a warning. But anger isn’t just a general warning about any situation, it’s a warning about social situations. Anger should be a call to action connected to some wrongdoing. Anger as traditionally envisioned has a target and a focus. The target is the person or institution which inflicted the wrongdoing and the focus is the wrongdoing itself. This wrongdoing causes harm and the call for action created by anger seeks to address this harm. Traditional anger and transitional anger seek to address this harm in different ways. Traditional anger seeks to mend the harm done by making the offender suffer, changing the status of the offender. Nussbaum regards this as a kind of magical thinking rooted in our past. Traditional anger, so conceived, is not only rooted in our past but deals with past harm; however, the harm is done and making the offender suffer does not appear to mitigate it. Nussbaum argues we should reject such a concept and replace it with the concept of transitional anger. Transitional anger doesn’t focus on status; instead it focuses on future welfare from the start by trying to mitigate the harm involved. I agree with Nussbaum that our concept of anger sometimes needs updating but will argue that traditional anger still has an important part to play.

Traditional anger is concerned with the difference in status between the target and the victim. Concern for this difference in status can lead to non-productive behaviour. Nussbaum gives an excellent example.

“People in academic life who love to diss scholars who have criticized them and who believe that this does them some good, have to be focusing only on reputation and status, since it’s obvious that injuring someone else’s reputation does not make one’s own work better than it was before, or correct whatever flaws the other person has found in it.” (5)

Nussbaum’s example clearly shows concern with differences in status can lead to rather silly behaviour, provided anger is only concerned with past and present wrongs. If traditional anger is only focussed on past and present harms then perhaps we should always transmute our traditional anger into transitional anger, provided of course we are the sort of creatures capable of carrying out such a transmutation. Greg Caruso believes empirical evidence suggests that the strike-back emotion plays an important role in our moral responsibility beliefs and practices, making such a transmutation difficult, see psychology today. However, anger is sometimes focussed on the future; indeed, if anger acts as some kind of alarm requiring action then its very nature means it must contain a forward-looking element.

Let us accept that anger should trigger a forward-looking element. However, even if we accept the above it doesn’t automatically mean we should always try to transmute traditional anger into transitional anger. Nussbaum herself suggests that transitional anger,

“focuses on future welfare from the start. Saying ‘Something should be done about this.’” (6)

Let us now accept that transitional anger is forward looking, seeking to alleviate the harm which caused the anger.

I now want to argue that the nature of the harm involved should determine whether traditional or transitional anger should be the appropriate response. Nussbaum uses a case of rape as an example.

“Offender O has raped Angela’s close friend Rebecca on the campus where both Angela and Rebecca are students. Angela has true beliefs about what has occurred, about how seriously damaging it is, and about the wrongful intentions involved: O, she knows, is mentally competent, understood the wrongfulness of his act.” (7)

Angela is justifiably angry, but Nussbaum suggests nonetheless that she should try to transmute her raw traditional anger into transitional anger.

“Angela is likely to take a mental turn toward a different set of future-directed attitudes. Insofar as she really wants to help Rebecca and women in Rebecca’s position … helping Rebecca get on with her life, but also setting up help groups, trying to publicize the problem of campus rape and to urge the authorities to deal with it better.” (8)

Let us assume O was Rebecca’s boyfriend, sees he acted wrongfully, is remorseful and is no more likely to rape someone in the future than anyone else. With these caveats in place, punishing O will not lessen the harm done to Rebecca, and I am inclined to agree with Nussbaum that Angela would be right to transmute her traditional anger into transitional anger. Such a transmutation might prove difficult due to anger’s usefulness in our evolutionary past, and even if it could be achieved such a case should still involve justice. In these circumstances I would suggest the justice involved should be restorative justice.

Some harms are not physical; some involve intimidation and others a failure of recognition. In what follows I will argue that maintaining our anger, rather than transmuting it, is a more appropriate response to both of these harms. In some sports a bad tackle by player A might injure player B. The physical injury inflicted by A cannot be undone by B causing A to suffer, but if A’s purpose was to intimidate B then B’s retaliation, causing A to suffer, might well target A’s intimidation. Maintaining traditional anger would be more appropriate in this situation than transitional anger. A wife’s abuse by her husband in order to intimidate her might also be better addressed by maintaining traditional anger, provided of course this is possible.

Intimidation, whilst a serious problem, is not a widespread one. A failure to recognise the rights of others is a more widespread problem. This failure might be due to inconsideration or a lack of attention, or it might even be intentional. Let us reconsider Nussbaum’s example. Let us assume O doesn’t recognise the wrongfulness of his actions and also doesn’t recognise that women merit the same status as men. In this scenario it seems to me that maintaining traditional anger would be a more appropriate response than transitional anger. I accept that the harm done to Rebecca cannot be undone by making O suffer; nonetheless O’s continuing failure to recognise women as having the same rights as men might be targeted by making O suffer, might be addressed by traditional anger. It appears that in cases in which anger is generated by a lack of recognition, raw traditional anger ought to be the appropriate response. Anger in this situation must still be transmuted into action appropriate to gaining this recognition, and this action might justifiably include inflicting harm on the offender in order to achieve it. I believe this appearance needs to be qualified. At the beginning of this post I remarked people appear to be getting angrier; perhaps this is because our society is not very good at recognising individuals. Useful anger must be effective anger. I would suggest targeting society using traditional anger is not useful and it would be better to employ transitional anger. The boundary between offenders who should be targeted by traditional anger and those who should be targeted by transitional anger is hard to define. Clearly society as a whole should be targeted by transitional anger and some individuals by traditional anger, but what about corporations and other organisations?


What conclusions can be drawn from the above? Dylan Thomas asks us not to “go gentle into that good night” but to “rage, rage against the dying of the light.” If anger is an alarm then rage, anger, at those things we can do nothing about is inappropriate. Anger, if it is to be a useful emotion, must be capable of being transmuted into something else. It follows that in situations in which a transmutation of any sort is impossible, anger is not a useful emotion and should be avoided, provided this is possible. Secondly, there are some situations in which the focus of anger is not ongoing, and transitional anger seems the right sort of anger to employ, once again provided this is possible. In such situations the infliction of harm on the wrongdoer seems to be pointless. Retributive justice might require some harm, but I am considering anger in isolation from justice. I have suggested above that in such situations restorative rather than retributive justice would be more appropriate. According to Nussbaum, in such situations anger should be transmuted into actions aimed at a set of projects for improving both offenders and society. Lastly, there are situations in which the focus of anger is intimidation or a failure of recognition; in these situations traditional anger ought to be employed and the infliction of harm on the wrongdoer might be appropriate. Here the aim of anger mustn’t be payback but recognition, and the anger employed should be transmuted into actions appropriate to achieving this recognition.


  1. Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 46.
  2. Nussbaum, page 53.
  3. Nussbaum, page 51.
  4. Michael Brady, 2013, Emotional Insight: The Epistemic Role of Emotional Experience, Oxford University Press, page 147.
  5. Nussbaum, page 49.
  6. Nussbaum, page 54.
  7. Nussbaum, page 46.
  8. Nussbaum, page 49.

Tuesday 2 February 2016

Terminally ill patients and the right to try new untested drugs


In the United States nearly half of the states have passed “right to try” laws, which attempt to give terminally ill patients access to experimental drugs. Some scientists and health policy experts believe such laws can be harmful by causing false hopes and even suffering. Rebecca Dresser argues that states should not implement such laws due to the dashed hopes, misery, and lost opportunities which can follow from resorting to unproven measures, see hastings centre report. For instance, someone might lose the opportunity to spend his last days with his family in a futile attempt to extend his life. In this posting I want to examine the right of terminally ill patients to try experimental drugs which have not been fully tested. My comments here will only apply to experimental drugs, but I would suggest that they could equally apply to all experimental treatments, such as the use of Crispr gene editing tools. In what follows experimental drugs will refer to new drugs which have not yet been fully tested. Of course pharmaceutical companies must be willing to supply these drugs. I am only examining the right of patients to try experimental drugs which pharmaceutical companies are willing to supply, and not patients’ rights to demand these drugs. In practice pharmaceutical companies might be unwilling to supply such drugs because of a fear of litigation; I will return to this point at the end of my posting.

I fully accept Dresser is correct when she asserts that experimental drugs might cause dashed hopes, misery, and lost opportunities. Untested drugs can cause harm. It is this harm that forms the basis for not allowing terminally ill patients access to these drugs. I now want to examine in more detail the harm that access to experimental drugs might cause to the patients who take them. I will then examine how access to these drugs might harm future patients by distorting drug trials.

How might access to experimental drugs harm the patients who take them? Firstly, they might further limit a patient’s already limited lifespan. Secondly, they might cause a patient greater physical suffering. Lastly, they might cause him psychological suffering by falsely raising hopes and then dashing these hopes if they fail. I will now examine each of these three possible harms in turn. Previously I have argued that terminally ill patients, those suffering from Alzheimer’s disease and other degenerative conditions, should have a right to assisted suicide, see alzheimers and suicide. If terminally ill patients have a right to end their lives, it seems to follow that the fact that experimental drugs might possibly shorten someone’s life does not give us a reason to prohibit the taking of such drugs. It might be objected that someone taking a drug to end his life and someone taking an experimental drug to extend his life have diametrically opposed ends. However, even if this is true, a patient taking a drug to try and extend his life should be aware that it might do the opposite. Provided a patient is reasonably competent and aware that such a drug might shorten his life, it should be up to him to decide if he is prepared to accept the risk of shortening his life in order to have the possibility of extending it. It might now be objected that by providing experimental drugs to someone we are not acting in a caring way, we are not acting beneficently. In response I would argue the opposite holds, and that if we prohibit the use of these drugs we are caring for patients rather than caring about them. Caring for differs from caring about. If I care for a dog I must care about what is in its best interests. If I care about a person I must care about what is in his best interests and what he thinks is in his best interests. Failure to do so is a failure to see him as the sort of creature who can decide about his own future and displays moral arrogance. I have argued elsewhere that if I care about someone in a truly empathic way I must care about what he cares about, rather than simply what I think might be in his best interests, see woolerscottus. It appears to follow that competent patients should not be prohibited from taking experimental drugs which might shorten their lives, provided they are aware of this fact. After all, smoking shortens many smokers’ lives, but because we respect autonomy smoking is permissible.

It might be objected that the above argument is unsound, as often terminally ill patients are not the sort of creatures who can really make decisions about their own future. This objection as it stands is unsound, as the terminally ill can make some decisions about their treatment. For instance, it is perfectly acceptable for a patient to choose to forgo some life-extending treatment in order to have a better quality of life with his family. The objection can be modified. It might be argued that terminally ill patients are not good at making decisions about their future, or lack of it. This might be caused by stress, a disposition for false or exaggerated optimism and an inability to understand probabilities. In response I would point out that it is not only the terminally ill but the public and some doctors who are not very good at understanding probability, see Helping doctors and patients make sense of health statistics. Nonetheless false optimism remains, and this false optimism might distort a terminally ill patient’s decision-making capacity. What exactly is meant by false optimism? Is it just a failure to understand probability, or is it someone assigning different values, weights, to the things he finds to be important? No decisions are made without reference to these weights, our values, and it follows changing our values might change the decisions we make without any alteration to the probability of certain events occurring. What might appear to us as false optimism might be someone giving different weights to what he finds to be important. I would argue we must accept that the terminally ill have a right to determine their own values and assign their own weights to things they find pertinent to their decision-making, for two reasons. Firstly, we should recognise that the terminally ill remain the sort of creatures who can and should make decisions about their own future. Secondly, most of us are in a state of epistemic ignorance about what it means to experience terminal illness, and if we criticise the values of the terminally ill we are guilty of epistemic arrogance. It would appear that if we accept that the terminally ill are the kind of creatures who can make decisions about their own future, the fact that experimental drugs might shorten their lives does not give us reason to prohibit them from using such drugs.

Patients who take experimental drugs might cause themselves physical harm. The first principle in medical ethics is to do no harm, non-maleficence, so it might be argued that the prescription of such drugs by medical practitioners should be prohibited. This argument is unsound. Chemotherapy harms patients but this harm is offset by its benefits. Let us accept that an experimental drug might harm a patient but that it might also benefit him. Indeed, such drugs are only tested because it is believed that they might benefit patients. The argument might be modified. It might now be argued that the prescription of experimental drugs by medical practitioners should be prohibited until such a time as the benefits of taking these drugs can be shown to offset any harm they cause. However, this argument is also unsound. Chemotherapy does not always benefit patients and so does not always offset any harm it causes. If we accept the argument then we should prohibit chemotherapy; such a suggestion is nonsensical. However, the modified argument might be still further modified. It might now be argued that the prescription of experimental drugs by medical practitioners should be prohibited until such a time as the benefits of taking such drugs can be shown to offset any harm they cause in the majority of cases. This further modified argument is about how much risk patients should be exposed to.

The reason we don’t want to expose patients to excessive risk is because we care about them. However, we don’t prohibit paragliding because we care about those who participate. Who should determine what risks are acceptable? I can use the same argument I employed above, showing that patients have the right to risk shortening their lives if there is some limited chance of life extension provided they understand the risk involved, to deal with the risk of patients harming themselves. I would suggest that if we prohibit the use of experimental drugs which might harm patients but also might benefit them, then once again we are caring for patients rather than caring about them. I argued above that we should care about people in a different way to the way we care for dogs. Failure to do so is a failure to see patients as the sort of creatures who can make decisions about their own future and displays moral arrogance. If patients understand the risks involved, it should be up to them to decide if they are prepared to accept these risks. The last way the use of experimental drugs might harm patients is by causing psychological suffering, raising false hopes and dashing these hopes if the drugs fail. I believe in this context the above argument can once again be applied and I will not repeat it. To summarise, it would seem that possible harm to actual patients is not a reason to prohibit access to experimental drugs provided patients are aware of this possible harm.

Even if we accept the above somewhat tentative conclusion, it doesn’t follow that we don’t still have a reason to prohibit terminally ill patients’ access to experimental drugs. Future patients might be harmed because the effectiveness of drugs might not be fully tested in the future. Drug trials are expensive, and if pharmaceutical companies can rely on data obtained by using a drug on terminally ill patients then they might be reluctant to finance fully fledged trials. This might lead to two problems. Firstly, some drugs which appear not to harm terminally ill patients might harm other patients. The long term effects of a drug which extends a patient’s life in the short term might not become apparent. Secondly, some drugs which do not appear to have any effect on terminally ill patients might be effective on less seriously ill patients. Such drugs might not become available to future patients. Can these two problems be solved?

Regulation might solve the first problem. Experimental drugs might be used on terminally ill patients if they desire them, but their use should not be permitted on other patients until after a full clinical trial. It might appear that because there are fewer terminally ill patients than other patients, pharmaceutical companies would continue to conduct full clinical trials on experimental drugs. However, this appearance might be unsound. Pharmaceutical companies might try to extend the definition of a terminally ill patient so as to continue using some drugs without their ever having to undergo a full trial. This problem might be overcome by regulatory authorities insisting that experimental drugs are only used on those who are terminally ill. Applied philosophers might aid them in this task by better defining what is meant by terminal illness. The well-known physicist Stephen Hawking has motor neurone disease and it is probable that this disease will kill him, but at present he would not be classed as terminally ill. Terminal illness should be defined by how long someone will probably live rather than the probability that his illness will kill him. Perhaps someone should not be considered to have a terminal illness unless it is probable that he has less than six months to live. Let us consider the second problem. Might some pharmaceutical companies be tempted not to fully trial some drugs which might benefit some patients, on the basis of incomplete evidence gathered from their use on terminally ill patients? Once again regulation might solve this problem. I would suggest that provided terminal illness is defined tightly enough this problem shouldn’t arise. A tight definition of terminal illness means fewer terminally ill patients for pharmaceutical companies to test drugs on, forcing them to conduct full clinical trials. To summarise once again, it appears harm to future patients does not give us reason to prohibit access to experimental drugs for the terminally ill provided that terminal illness is tightly defined.

Lastly at the beginning of this post I suggested that in practice pharmaceutical companies might be unwilling to supply experimental drugs due to a fear of litigation. It should be possible to overcome this fear if patients are required to sign a comprehensive consent form making it clear not only that there are risks involved but also that these risks include as yet unknown risks.

The above discussion leads to the rather tentative conclusion that the terminally ill should not be prohibited from trying experimental drugs subject to certain safeguards. These are,
  1. Terminal illness must be clearly and tightly defined. Philosophy can play an important part in doing this.
  2. No drugs which have not been fully tested should be used on non-terminally ill patients except for the purpose of testing.
  3. Any terminally ill patient taking an experimental drug must sign a comprehensive consent form in the same way patients taking part in trials do. This form must make it clear that they are prepared to accept as yet unknown risks.

Friday 8 January 2016

Driverless Cars and Applied Philosophy




Google has developed an autonomous car, and major car makers such as Ford and VW are showing an interest in doing the same. It is reported that up to ten million such cars might be on the road by 2020, see businessinsider. I am somewhat doubtful about such a figure, but nonetheless autonomous cars are coming and their coming raises some ethical issues. According to Eric Schwitzgebel,

“determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess”,

see autonomous cars. Clearly autonomous cars need collision-avoiding software. Intuitively Schwitzgebel seems to be correct when he argues that an ability to address moral concerns should be built into this software. For instance, autonomous cars might be programmed not to protect their passengers if by doing so a large number of pedestrians would be harmed. In this posting I want to examine three questions. Firstly, is Schwitzgebel correct when he argues that an ability to address moral concerns should be built into the software? Secondly, is such software actually possible? Lastly, if it isn’t possible to design software which can make moral decisions, should we nonetheless permit autonomous cars on our roads?

What does Schwitzgebel mean when he says that the software of an autonomous car should be able to address moral concerns? In this posting I will assume he means some rules should be built into a car’s software about what to do in situations which involve some moral considerations. Does an autonomous car need such software? It is by no means certain it does. Consider a driver whose car will collide with either a young pregnant mother or an old man due to unforeseen circumstances. Does she make a decision about what to do based partly on philosophy? I suggest she does not. Of course her emotions might kick in, causing her to avoid the pregnant mother. It might then be concluded that if drivers don’t, or can’t, make a decision based on philosophy, there is no reason why autonomous cars should do so. Of course autonomous cars should be as safe as possible for their passengers, other road users and pedestrians. The above leads to two tentative conclusions. Firstly, provided autonomous cars are as safe as drivers, it would seem that there is no ethical reason against their introduction: autonomous cars do no more harm than drivers. Secondly, provided autonomous cars are even slightly safer than drivers, it would appear that there is an ethical reason for their introduction: autonomous cars do less harm than drivers. Of course issues concerning responsibility remain.


What objections can be raised against accepting the above conclusions? Firstly it might be objected that my example is chosen to mislead and that in other situations, when the circumstances are much clearer, people do in fact make decisions roughly based on applied philosophy. For instance a driver faced with the choice of hitting a concrete stanchion and killing himself or running into a queue of schoolchildren waiting at a bus stop might choose to hit the stanchion for moral reasons. I accept that in some extreme circumstances drivers might make a moral decision. However such a decision might be based on the driver's emotions rather than the application of applied philosophy. Applying philosophy takes time, and time may not be available in a collision situation. Moreover I would suggest that in real life situations this second example is just as misleading as the first. A car crashing into a queue might kill one or two people but it is unlikely to kill a very large number, and it seems to me that only a large number of victims might enable a driver to make a clear moral decision quickly. I have argued that drivers don't usually make moral decisions when making collision decisions and rarely if ever do so by applying philosophy. Does this mean autonomous cars do not need a controlling system that takes account of moral considerations? The above seems to suggest that they don't. However let us assume drivers should take into account moral considerations in collision situations provided this is possible. It follows that autonomous cars should have a controlling system that takes account of moral considerations in collision situations, provided this is possible. However if this isn't possible and autonomous cars are at least as safe as drivers, the inability to make moral decisions shouldn't prevent the introduction of autonomous cars.

Designing systems that enable autonomous cars to make decisions which include moral considerations will be difficult. Perhaps then, rather than designing such systems, it might be better to make autonomous cars avoid the circumstances in which the need to make moral decisions arises. Cars and pedestrians don't mix, so perhaps it might be safer to limit autonomous cars to motorways and other major roads. Doing so might have the additional benefit that it might prove easier to design autonomous cars to avoid dangerous circumstances in which they might need to make moral decisions than to make such decisions. Unfortunately such a course of action, whilst desirable, would seem to be impractical unless the way people use cars changes radically. People want cars to take them home, to work, to go shopping and to take their children to school. Satisfying these wants means mixing cars and pedestrians. Cars that don't satisfy these wants would be unwanted. It would appear that, even if it is very hard to do, an attempt should be made to program the collision-avoiding software of autonomous cars to enable them to take into account moral considerations, provided this is possible.

I have argued that Schwitzgebel is correct in his assertion that the collision-avoiding software of an autonomous car should include moral considerations provided this is possible. Let us turn to the second question I posed: is such software possible? I have argued that in an emergency situation in which people have to make moral judgements they do so quickly, based on their emotions. Cars don't have emotions, so it follows that the basis of any software system for making moral decisions in autonomous cars will be different from that used by drivers and based on a set of rules. What sort of rules? Schwitzgebel argues that the rule of protecting an autonomous car's occupants at all costs is too simplistic. I would question whether such a rule is indeed a moral rule at all. Might a strictly utilitarian rule of maximising the lives saved in a crash situation be adopted? Schwitzgebel points out such a rule would unfairly disregard personal accountability. For instance, what if a drunken pedestrian steps in front of a car? Isn't he accountable for his actions? If so, shouldn't his accountability be taken into account when assessing the consequences of any decision about the oncoming collision? Could a driver spot that a pedestrian was drunk in an emergency situation? I would suggest he couldn't. At present an autonomous car's software certainly couldn't. It follows that any rules used by autonomous cars must be primitive rules which don't fully represent our own understanding of moral rules. It seems possible that if we are prepared to accept some primitive rules built into an autonomous car's software then it might be possible for such a car's software to make some primitive moral decisions.
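To make the idea of such a primitive rule concrete, here is a minimal sketch in Python of the sort of rule I have in mind. Everything in it is my own illustration: the names (Manoeuvre, choose_manoeuvre) and the harm estimates are hypothetical and describe no real manufacturer's software.

```python
# A minimal, hypothetical sketch of a 'primitive' moral rule for
# collision-avoiding software. All names and numbers are illustrative;
# no real vehicle's software is described here.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    expected_occupant_harm: float    # crude estimate of harm to the car's occupants
    expected_pedestrian_harm: float  # crude estimate of harm to pedestrians

def choose_manoeuvre(options: list[Manoeuvre]) -> Manoeuvre:
    # A crude utilitarian rule: pick the option with the least total
    # expected harm. Note what it cannot do: it has no way of weighing
    # personal accountability, such as a pedestrian being drunk,
    # because the car cannot perceive such facts.
    return min(options, key=lambda m: m.expected_occupant_harm
                                      + m.expected_pedestrian_harm)

options = [
    Manoeuvre("brake hard", 0.3, 0.5),
    Manoeuvre("swerve into stanchion", 0.9, 0.0),
    Manoeuvre("carry straight on", 0.1, 2.0),
]
print(choose_manoeuvre(options).name)  # prints 'brake hard'
```

Such a rule is primitive in exactly the sense used above: the whole of morality is compressed into one line of arithmetic over rough estimates, with no room for accountability or context.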

Let us consider my last question. If the rules involving moral decisions which are built into an autonomous car's software must, at least for the present, be rather primitive rules, should we permit the use of such cars? I will now argue we should. Firstly, I have argued that drivers don't, or only very rarely, make moral decisions in collision situations. There is no legal requirement that drivers should make such decisions and I can see no reason why a higher standard should be applied to autonomous cars. Indeed autonomous cars might be safer. Drivers can get drunk, tired and speed; autonomous cars can't get drunk or tired and their software can control their speed. Let us accept that any moral rules built into the software of an autonomous car must be concerned with its safe use. Let us also accept that being safe involves avoiding harm. Now let us consider an autonomous car with the primitive rule of protecting its occupants at all costs. This car is safe for its occupants and avoids harming them. I would suggest we should not permit the use of autonomous cars using such a simple rule: it's only safe for some. It might appear that the introduction of autonomous cars would only be acceptable if their software makes them safe for the public at large and avoids harming them. Achieving this would be difficult. However the requirement might be amended: the introduction of autonomous cars would only be acceptable if their software makes them as safe for the public as driven cars. I have argued above that drivers only have the time to make very limited moral decisions. It should be possible to create software for autonomous cars which can make the same sorts of moral decisions as drivers do. Indeed it might be harder to create software which recognises roadside features such as pedestrians than one which handles such limited moral concerns.
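The difference between the harm-minimising rule sketched earlier and the rule of protecting occupants at all costs can be shown with a single hypothetical parameter. This variant reuses the Manoeuvre type and options list from the sketch above; again the names and numbers are mine, not anyone's actual system.

```python
# Hypothetical variant of the earlier sketch. A single weight turns the
# equal-harm rule into 'protect the occupants at all costs': the larger
# the weight, the less pedestrian harm counts relative to occupant harm.
def choose_weighted(options: list[Manoeuvre],
                    occupant_weight: float = 1.0) -> Manoeuvre:
    # occupant_weight = 1.0 treats occupants and pedestrians equally;
    # a very large weight approximates 'protect occupants at all costs',
    # the rule argued above to be safe only for some.
    return min(options,
               key=lambda m: occupant_weight * m.expected_occupant_harm
                             + m.expected_pedestrian_harm)

print(choose_weighted(options, occupant_weight=1.0).name)    # 'brake hard'
print(choose_weighted(options, occupant_weight=100.0).name)  # 'carry straight on'
```

With the weight set very high the car carries straight on into the pedestrians because that option harms its occupants least, which illustrates why a car programmed this way is only safe for some.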


What conclusions can be drawn from the above? Firstly, provided autonomous cars are as safe as drivers, their use should be permissible. Secondly, if autonomous cars are even slightly safer than drivers, there would appear to be an ethical reason for their introduction. Problems with accountability and insurance remain, but these problems don't seem insurmountable.


Sunday 29 November 2015

Terrorism, Love and Delusion


In this posting I want to examine terrorism. As a philosopher rather than a psychologist I won't examine the means by which potential terrorists might become radicalised; instead I will examine one of the conditions which might make some people susceptible to radicalisation. Terrorists are sometimes seen as idealists, albeit with warped ideals. I will argue that ideals are vital to us as persons and that if someone lacks ideals this lack creates a condition in which she becomes susceptible to radicalisation.

Usually the ideals that are important to a terrorist are grand political ideals. However I'm interested in the time before she acquires such grand ideals; I'm more interested in the mundane ideals that shape people's everyday lives. I want to link ideals, mundane or otherwise, to what someone loves. I will assume, as Harry Frankfurt does, that someone who loves nothing at all has no ideals (1). An ideal is something someone finds hard to betray and as a result it limits her will. Love also limits the will. Love need not be grand romantic love but can sometimes simply be seen as ‘caring about’ something in a way that limits the carer’s will. I would suggest that if someone loves something, this something forms a sort of ideal for her, as she must try to ensure the thing she loves is benefited and not harmed. If this wasn’t so she would remain indifferent to her supposed beloved rather than loving it. It is impossible for someone to be indifferent to her ideals. However accepting the above doesn’t mean that ideals have to be grand ideals; indeed someone’s ideals can be quite modest.
I now want to argue that ideals, defined above in terms of what we love, are essential to us as persons. According to Frankfurt, someone without ideals,

“can make whatever decision he likes and shape his will as he pleases. This does not mean that his will is free. It only means that his will is anarchic, moved by mere impulse and inclination. For a person without ideals, there are no volitional laws he has bound himself to respect and to which he unconditionally submits. He has no inviolable boundaries. Thus he is amorphous with no fixed shape or identity.” (2)
Let us accept that ideals are essential to us as persons. I would suggest that someone without ideals has a sense of simply being, and I would further suggest that this sense of simply being, simply existing, is one that most people would find unbearable. According to Christine Korsgaard human beings by their very nature are condemned to choosing (3). Someone without ideals has no basis on which to choose and, as Frankfurt points out, is ruled by impulse and inclination. It seems the combination of the need to choose, even if that choice is an unconscious one, and the lack of a basis for that choice is what makes simply being, simply existing, unbearable.
If one accepts the above then the need to love something, to have ideals, expresses a quite primitive urge for psychic survival. I would suggest that in some cases this need to love something creates the conditions which make some people vulnerable to radicalisation. Of course this need to love something might be met in other ways, perhaps even in ways as mundane as keeping a pet. However the young, perhaps especially young men, want to feel important, and perhaps this feeling causes them to prefer grand rather than mundane means of satisfying this need. In some cases the combination of the need to love and the need to feel important creates the conditions in which some people become especially vulnerable to radicalisation.
I now want to argue that choosing to be a terrorist in order to satisfy the primitive urge to love something is a form of self-delusion. It is a self-delusion due to the nature of love. Love is not simply a matter of choosing to love. According to Frankfurt, “love is a concern for the well-being or flourishing of a beloved object – a concern that is more or less volitionally constrained so that it is not a matter of entirely free choice or under full voluntary control, and that is more or less disinterested.” (4) Now if we accept Frankfurt’s position then when someone chooses to become a terrorist in order to satisfy her urge to love something she is deluding herself for two reasons. Firstly, love is not a matter of choice and it is impossible for someone to choose to love in order to satisfy this need. Secondly, she is not really choosing a cause because she cares passionately about it; rather she is choosing in order to satisfy her need to love something. She is choosing to relieve her unbearable feeling of just existing.
It might be objected that I am exaggerating the importance of the need to love and underestimating the need to feel important. I will now argue that even if this is so, which I don’t accept, some of the same considerations apply. To terrorists the feeling of importance is connected to violent action. Terrorists want to be considered as heroes by some people. I have previously defined a hero as someone who chooses to recognisably benefit someone else or society in ways most people could not; in addition her actions must be beyond the call of duty and must involve some real sacrifice on her part, see Hobbs and Heroes. Now what motivates a true hero is a need to benefit someone else or society, not a need to be seen as a hero. Someone who pushes a person into a river just in order to rescue them certainly isn’t a hero. Someone might choose to become a hero, but if the motivation for her actions is a desire to be a hero, even if this desire is an unconscious one, then she is deluding herself about her actions because no real sacrifice is involved. Indeed it is even possible to argue that someone who resists her desire to be seen as heroic might be better seen as a hero, even if a minor one.

Let us accept that it is important to understand how people become radicalised and the conditions which make this radicalisation possible. One of the conditions which makes some people susceptible to radicalisation is a sense of simply being, simply existing, due to a lack of ideals. Other conditions may play a part, but what might be done to alleviate this lack of ideals? Unfortunately there seem to be no easy or quick solutions because real ideals must be acquired rather than given. In spite of these difficulties I will offer some rather tentative solutions. Firstly, good parenting; good parenting should always involve love. Some deprived and inarticulate parents find it hard to give or to express their love even if they are excellent parents in other ways. Some parenting skills can be taught but loving can’t. It follows that we should encourage social conditions conducive to the emergence of love. Perhaps also we should actively encourage policies that promote happiness, see action for happiness. Secondly, education must be more broadly based. Education should not only be focussed on the skills valued by employers but also on the skills that help all pupils to flourish. For instance the skills needed in sport and music should not be considered to be on the educational periphery. Education should be broad enough so that all have the opportunity to acquire skills that enable them to be good at something rather than just skills that are good for employment. Even if terrorism can be defeated by other means or collapses due to its inherently stupid doctrines, the solutions outlined above would remain useful in building a more cohesive society.

  1. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 114.
  2. Frankfurt, page 114.
  3. Christine Korsgaard, 2009, Self-Constitution, Oxford University Press, page 1.
  4. Frankfurt, page 165.


