Tuesday, 24 May 2011

Roboethics and Autonomy


In this posting I want to examine roboethics and responsibility by considering autonomy. The idea of roboethics seems to have originated with Asimov's three laws of robotics, which he proposed in 1942. Here I will consider a revised draft of these laws based on an EPSRC/AHRC initiative in roboethics; see Alan Winfield. These revised laws are:
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed (with responsibility?).
I agree that humans rather than robots are responsible agents. I also believe robots such as those working on car production lines are just machines, like washing machines. However, I am going to argue that advances in robotics are likely to smear the attribution of responsibility for events that happen as a result of a robot's actions.
Let it be assumed all possible robots are used in a similar way to those working on car production lines. If a machine causes damage due to a design fault then the manufacturer is at fault; for instance, Toyota admitted responsibility for faults with some of its cars. If however a machine causes damage due to the way it is operated then the user is responsible. A driver of a Toyota car who crashes due to excessive speed is solely responsible for the crash provided there are no mechanical faults or other extenuating factors. The legal responsibility attached to the use of robots should be the same as the legal responsibility attached to the use of any other machine, such as a car, washing machine or lawn mower. I would suggest that accepting my assumption means accepting there is no real need for roboethics. Roboethics could then be safely confined to the realm of sci-fi and would be of no practical importance.

However I believe my assumption is false. Robots are indeed instruments we use to attain our goals, as is any machine. But the operations of a robot need not be the same as those of any other machine. If I use a knife or a washing machine, the operation of the knife or washing machine is completely determined by me. The goals of a robot are completely determined, but the way it operates need not be. For instance a driverless train may be classed as a robot if it has no human on board, even to act as a controller in an emergency, such as those on the Copenhagen Metro. A computer program that automatically trades on the stock market might also be classed as a (ro)bot, provided we think the terms instrument and machine are interchangeable in some contexts. If it is accepted that (ro)bots can operate in ways that are not completely determined by the designer or operator, then there is indeed a need for roboethics.

Let us consider a (ro)bot that independently trades in stocks for a firm of stockbrokers. Let it be assumed for the sake of simplicity that the goal set by the roboticist who designed this machine is only to maximise profits for the stockholders. Let it be further assumed this machine buys stocks in one company for only some of its stockholders, inflating the share price. It then sells the shares other stockholders hold in this company at this inflated price. This sale causes the price to return to its former level. Let it be assumed such a strategy maximises profits for stockholders in general, but that some stockholders gain at the expense of others. The actions the machine takes are unethical. Clearly this machine contravenes the fourth law of robotics as set out above. This example raises two questions. Firstly, what can be done to prevent the contravention occurring again? Secondly, who is responsible for doing this?

It seems to me there are two possible ways of ensuring the actions a machine takes are ethical. Firstly, the goals of the machine must be adequately specified in order to prevent it producing unethical actions. Secondly, if the machine can operate in a way not completely determined by the designer or operator, then the machine must have some inbuilt standards against which it can check the morality of any of its proposed actions. Let us assume the goals of a machine can be adequately specified so as to prevent it producing immoral actions. Let us consider this assumption in conjunction with my example of a stock trading machine. The second law of robotics states that humans rather than robots are responsible agents. In my example I would worry that even though humans are responsible for the machine's goals, it is by no means clear who should be held accountable for any failings; responsibility becomes smeared. In my example the machine's designer is unlikely to be fully aware of all the possibilities of stock trading, whilst the machine's owners may be unaware of how to build goals into a machine. Someone might object that my worry is illusory and that the stockbrokers involved must be able to fully set out the machine's goals to the designer. However I have other important worries. Firstly, I worry whether, as the tasks machines are able to do become more complex, it is possible to specify the goals of a machine so completely that none of its actions can be considered immoral. Secondly, I worry that specifying only the ends a designer sets a machine implies that whatever means it takes to achieve those ends is justified. Because of these worries I would suggest that, instead of attempting to prevent a machine producing immoral actions by adequately framing the machine's goals, it would be preferable to have a machine with some inbuilt standards against which it can check the morality of any of its proposed actions.
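The second approach can be sketched in outline. The fragment below is only a minimal illustration, not an implementation of any real trading system; all names, and the single moral standard used, are hypothetical. The point is architectural: every action the machine proposes is passed through an inbuilt check before it is carried out, rather than relying on the goal specification alone to rule out immoral actions.

```python
from dataclasses import dataclass


@dataclass
class Trade:
    client: str
    company: str
    side: str                       # "buy" or "sell"
    expected_client_gain: float     # profit expected for the instructing client
    expected_harm_to_others: float  # expected loss imposed on the firm's other clients


def violates_standards(trade: Trade) -> bool:
    # One hypothetical inbuilt standard: no trade may profit one client
    # at the expense of the firm's other clients, whatever the aggregate gain.
    return trade.expected_harm_to_others > 0


def execute_if_ethical(trade: Trade) -> str:
    # Every proposed action is checked against the standards before execution.
    if violates_standards(trade):
        return "rejected"
    return "executed"
```

On this design the share-inflating strategy described above would be stopped at the checking stage, even though it satisfies the profit-maximising goal the designer set.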

Someone might object that if my suggestion is adopted the second law of robotics is contravened. She might point out I am proposing giving autonomy, albeit limited, to robots, and with that autonomy comes responsibility. I will now argue my suggestion means giving some form of self-government to robots but it does not mean giving them even limited autonomy. Self-government in this context simply means having the capacity to independently decide on and follow some course of action. A child can be self-governing in a domain specified by her parents. A robot can also be self-governing in some limited domain specified by its operators, for example a planetary rover. However self-government as defined above does not mean being autonomous. I have argued repeatedly in this blog that autonomy requires that an agent must 'care about' or love something. It seems extremely doubtful to me whether robots will ever 'care about' or love anything. It is certain that present-day robots cannot care. It follows it must be doubtful whether robots can be autonomous. It further follows that even if a robot has some limited self-government it cannot be held responsible for its actions, if responsibility requires autonomy. It still further follows that my suggestion, that (ro)bots have inbuilt standards against which they can check the morality of any proposed actions, does not contravene the second law of robotics.

Nevertheless my suggestion does have worrying implications for responsibility. Only humans can be held responsible for the actions of (ro)bots, but adopting my suggestion would mean this responsibility becomes smeared. At present the military is showing great interest in military robots, see (1). Let it be assumed such a robot kills a large number of non-combatants in error. Let it be further assumed this machine had built into its software a set of moral standards. The outcry would be enormous and there would be demands that someone should be held responsible. Clearly, as I have argued, the robot cannot be held responsible. Who then should be responsible: the operator, the software designer or the ethicist who drew up the moral standards? I would argue no one is clearly responsible, as responsibility has become smeared among a large number of people due to the way the machine operates. Because of this smearing of responsibility it might be argued no (ro)bots should ever be made which operate with an inbuilt set of moral standards. However I would argue this smearing of responsibility is not a problem peculiar to robotics but is an inevitable consequence of the complexity of modern life. Let us assume a pilot in a plane over Libya fires a missile, killing a large number of non-combatants in error. Once again, who is responsible: the pilot, his commanders, or the systems designers? Once again responsibility appears to become smeared. If it is accepted this smearing of responsibility is inevitable due to the complexity of modern life, rather than being peculiar to robotics, then I see no reason why (ro)bots should not be built with an inbuilt set of moral standards, provided this is done with great care. I would further suggest that both philosophy and the law need to seriously consider the consequences of this smearing of responsibility.

  1. Wendell Wallach and Colin Allen, 2009, Moral Machines, Oxford University Press, page 20.

Tuesday, 26 April 2011

Medical Ethics and Thoroughgoing Autonomy

In a posting on Practical Ethics, Charles Foster defends dignity against "the allegation that dignity is hopelessly amorphous; feel-good philosophical window-dressing; the name we give to whatever principle gives us the answer to a bioethical conundrum that we think is right". He suggests this allegation usually comes from "the thoroughgoing autonomists – people who think that autonomy is the only principle we need". He further suggests "there aren't many of them in academic ethics, but there are lots of them in the ranks of the professional guideline drafters"; see http://blog.practicalethics.ox.ac.uk/2011/03/autonomy-amorphous-or-just-impossible/#more-1320. Foster believes "we need urgently to disown the monolithic fundamentalism of pop-ethics, and embrace a truly liberal pluralism that listens respectfully to the voices of many principles. Proper pluralism isn't incoherence." In this posting I want to defend a limited thoroughgoing autonomist's position.

Before making my defence I must make clear the position I am defending. I do not believe autonomy is the only principle relevant to medical ethics. Even if some form of medical ethics could be based solely on patient autonomy, which I doubt, the ethos of medicine depends on the notion of caring. I accept beneficence is of major importance to medical ethics. Foster characterizes the form of autonomy he is attacking as an icy, unattractive, Millian, absolutist version of autonomy. I do not believe such a form of autonomy should form the basis of most medical ethics. Firstly I will argue a more user-friendly concept of autonomy should be applied in medical ethics. I will then argue this concept of autonomy is not subject to the same problems as a more absolutist version. Lastly I will defend the position that respecting the autonomous wishes of a patient should always be given precedence over acting beneficently towards him.

I argued in my posting of 03/09/10 that an autonomous decision is simply a decision with which the agent identifies himself and which he "cares about"; in this context I am using "cares about" in the same way as Harry Frankfurt, see for instance (1999, Necessity, Volition, and Love, Cambridge University Press). An autonomous decision is one the agent is wholehearted about. Frankfurt argues a wholehearted decision is one with which the agent is satisfied. He defines satisfaction as an absence of restlessness or any desire for change. If we accept this concept of autonomy then it has consequences for the autonomous decisions patients make. I argued in my posting of 01/07/08 that this form of autonomy is closely connected to the idea of satisficing. It follows patients can make autonomous decisions which are sub-optimal. But don't some patients, such as Jehovah's Witnesses, already make sub-optimal decisions in practice? Moreover a patient may also make an autonomous decision simply to trust his medical team to do what they believe is best for him. In reality I would suggest this is what most patients implicitly do when making a consent decision. This after all is how many of us explicitly make decisions outside a medical context. We simply trust lawyers or financial advisors, for instance, without others questioning our autonomy. Are doctors less trustworthy than lawyers or financial advisors?

Foster believes a Millian absolutist version of autonomy causes problems for medical ethics. I agree with Foster. But does a concept of autonomy based on "caring about" cause the same problems? In what follows I will argue it does not. Firstly, Foster seems to assume autonomy requires that agents are able to give an unequivocal answer to the question, what do you want? He then suggests it is unusual to meet someone who is so well integrated as to be able to do so. I agree many people have difficulty giving an unequivocal answer to this question. But how is this difficulty connected to medical ethics? It seems to me Foster must believe that informed consent requires that a patient is able to give an unequivocal answer to what he wants. In medical practice a patient is seldom, if ever, asked what treatment he wants. Rather the question usually posed is simply this: "we believe this treatment is in your best interests, do you give your consent?" The validity of this consent is of course dependent on the patient being given adequate information about the relevant details of the proposed treatment. If we conceive autonomy as "caring about" linked to satisfaction, then autonomous decisions are not linked to someone being able to give an unequivocal answer to the question, what do you want? Autonomy is simply linked to an absence of any desire on the part of a patient to change his decision. Unequivocal answers are only required when a patient doesn't want some treatment. The above suggests that if autonomy is conceived as "caring about", Foster's worry, that medical practice and respecting autonomy are incompatible because patients cannot always give unequivocal answers when giving informed consent, is not justified.

Secondly, Foster worries that the giving of informed consent in clinical practice is linked to the giving of informed consent in clinical trials. Foster states informed consent requires that a patient in a consultation with a surgeon about his osteoarthritic hip talks in much the same way as a subject in a clinical trial would talk with the trial's co-ordinator. I once again agree. However I believe that if an autonomous decision depends on an agent's satisfaction with this decision, there is no good reason, based on respect for autonomy, why the above linkage should not be broken. For instance I suggested above a patient may make an autonomous decision simply to trust his medical team to do what they believe is in his best interests, because doctors are no less trustworthy than lawyers or financial advisors. Different agents may need different amounts of information to make a decision that satisfies them. Moreover an agent might need different amounts of information to make a decision that satisfies him in different situations. For instance if I am going to have my blood pressure taken, all I need to know is that I am going to have my blood pressure taken. If I am going to have an operation on my osteoarthritic hip I may need information about the benefits and risks involved. I may only need to understand the risks involved in very broad terms, as the pain in my hip means I will discount these to some degree. If however I am consenting to take part in a clinical trial I need to be better informed about any risks involved, as I have no factors which will make me discount these risks. In the light of the above I see no reason, based on respect for autonomy, why the information needed to give consent to treatment should be comparable to the information needed to consent to take part in a clinical trial.

Lastly Foster mentions an important paper by Agledahl, Forde and Wifstad (Journal of Medical Ethics 2011; 37), see http://jme.bmj.com/content/37/4/212.full?sid=7a5d7b0f-c5e1-4291-838e-b0d9414fc1d2. Agledahl, Forde and Wifstad state "patients' right to autonomous choice is upheld as an ideal although the options of both the patients and the doctors are very limited" and then rightly point out that "in the healthcare setting, choices are often neither explicit nor available." The implication seems to be that the authors believe a lack of choice means concern for patient autonomy is basically a sham. Let it be accepted that all competent patients have some choice. All competent patients can consent or refuse to consent to treatment. I would suggest that if autonomy is based on "caring about" linked to satisfaction, then in a clinical setting concern for autonomy is not a sham. It is not a sham because a doctor does not have to offer a patient an array of options out of concern for his autonomy. One option is all that is needed. Indeed, accepting a concept of autonomy based on "caring about" might mean that in a clinical setting too many choices actually erode a patient's autonomy. Frankfurt argues,
'For if the restrictions on the choices that a person is in a position to make are relaxed too far, he may become, to a greater or lesser degree, disorientated with respect to where his interests and preferences lie.' (1999, page 109).
In the light of the above I would suggest the fact patients have limited options does not mean it is impossible to respect patient autonomy in practice.

It might be argued the concept of autonomy I have outlined above is an amorphous concept offering little practical guidance. If this is so, it might be asked how I am going to defend a thoroughgoing autonomist's position. Foster argues we need to embrace a truly liberal pluralism that listens respectfully to the voices of many principles. It seems to me the soundness of his position depends on what he means by pluralism. I believe that if pluralism means some sort of competition between different moral goods, the result would be incoherence. I would suggest the only way to avoid this incoherence is to give priority to some moral goods. This prioritization does not imply we must be able to weigh moral goods, but it does imply we must be able to rank them. I will now argue this prioritization means giving precedence to respecting autonomy over acting beneficently. If I am going to act beneficently towards someone I must care for him. The basis of my care may be sympathy or empathy. If my beneficence is based on sympathy it seems clear to me that I may act in what I conceive to be his best interests and override his autonomy. If however my beneficence is based on empathy this option is not open to me. If I feel empathy for someone I must focus on what he cares about rather than what I think might be in his best interests. It follows that if I want to act beneficently towards someone, and my beneficence is based on empathic concern rather than sympathy, and I believe his best interests clash with his autonomy, I should nevertheless give precedence to respecting his autonomy over acting in those interests. Beneficence based on empathy automatically gives precedence to respecting autonomous decisions. It follows that if beneficence is based on empathy it is possible to defend a thoroughgoing autonomist's position. Agledahl, Forde and Wifstad seem partially to support my position because they believe "the right to refuse treatment is fundamental and important".
If autonomy is based on "caring about" and beneficent care is based on empathy, then a limited thoroughgoing autonomist's position means doctors need not concern themselves too much with providing choices but must respect all autonomous decisions. However an objector might point out I have provided no reason why beneficence should be based on empathy rather than sympathy. We can act beneficently towards animals. Clearly such beneficence is based on sympathy. I would suggest we cannot feel empathy for animals due to epistemic ignorance. I would further suggest we can act more beneficently towards people if our concern is empathic rather than sympathetic. It follows good beneficent care is based on empathic concern. Accepting my suggestions means medical ethics should be prepared to accept a limited thoroughgoing autonomist's position. Indeed I would argue such a position concurs very well with the practice of medicine, with the exception that accepting it would mean accepting a fully autonomous decision by a patient simply to trust his clinicians as to which is the best course of treatment for him.

Thursday, 7 April 2011

Brooks, Brudner and Justice

In my previous posting I suggested that any society which insists on compulsory beneficence, or the compulsory acceptance of beneficent care, is not a truly flourishing society. I further suggested that the only requirement necessary for a flourishing society is that each should be free to do as seems best for herself provided by so doing she does not harm others. This might be classed as the Millian position. In this posting I want to examine what this suggestion means for the basis of law. My examination will consider the position of Alan Brudner (2009, Punishment and Freedom: A Liberal Theory of Penal Justice, Oxford) and a critique of that position by Thom Brooks (2011, Autonomy, Freedom, and Punishment, Jurisprudence, forthcoming, available at SSRN: http://ssrn.com/abstract=1791538).

Brudner believes we should punish others when they damage or threaten to damage our autonomy rather than because of any harm they cause us. Of course if we damage or threaten someone's autonomy we harm them. In the light of the above I would suggest Brudner's position might be described as follows: we should punish others only because of the harm they do to our autonomy and not because of any other harm. It is this re-described position I will consider in this posting, even though it is by no means certain Brudner would entirely agree with my re-description. I have suggested the only requirement necessary for a truly flourishing society is that each of us should be free to do as she sees best provided by so doing she does not harm others. In the light of this suggestion it might appear to follow that I believe the basis of the law should be harm to others rather than being restricted to harm to an individual's autonomy. This appearance is deceptive. Any harm to an autonomous agent always also harms her autonomy. If I harm an autonomous agent I do something to her which she would not do to herself. My actions prevent the agent from making a choice she would identify herself with or which satisfies her. My actions damage her autonomy. It follows that all actions which harm autonomous agents also harm their autonomy.

Brooks believes Brudner's position is unconvincing in some contexts. He points out "that there may be many actions we may want criminalized (e.g., traffic offences) that would not be clearly included" in any criminalization as defined by Brudner. Brooks also points out that suicide, which ends the victim's autonomy, and moderate alcohol use, which temporarily affects the user's autonomy (both of which we would intuitively not want criminalized), might well be criminalized provided criminalization is based on harm to an agent's autonomy. Someone might argue some traffic offences, such as speeding, have the potential to damage the autonomy of another and hence their criminalization can be justified on Brudner's account. She might then proceed to point out it is possible to respect autonomy in different ways: firstly by respecting someone's autonomous decisions, and secondly by protecting her capacity to make autonomous decisions. Let us assume a patient suffers from some incurable disease and makes an autonomous decision to commit suicide. If criminalization is justified by the damage done by not respecting autonomous decisions, rather than by harm done to an agent's capacity to make autonomous decisions, then suicide and the moderate use of alcohol should not be criminalized. If criminalization is justified by damage to an agent's capacity to make autonomous decisions, then these actions should be prohibited. How do we choose between respecting autonomous decisions and protecting someone's capacity to make these decisions? I have argued in previous postings that respect for autonomy is intimately tied to respect for persons, and it is impossible to respect persons if one doesn't respect their decisions; see for instance http://woolerscottus.blogspot.com/2008_08_01_archive.html.
If my arguments are accepted, then it follows that whilst we must both respect someone's autonomous decisions and protect her capacity to make these decisions, when these two ways of respecting autonomy are incompatible we should give precedence to respecting autonomous decisions. It further follows that Brudner's justification of criminalization may be able to account for the cases Brooks raises.

Brooks states that he strongly agrees with Brudner's focus on offering a political, not a moral, theory of punishment. I'm by no means sure this focus on a purely political basis is possible. It seems to me that if we accept Brudner's position, that the legality of some action depends on whether it harms or threatens to harm our autonomy, we must also implicitly accept that the domain of legal responsibility is the same as the domain of legal concern. Accepting the above means we must accept that only autonomous agents can be held legally responsible, and also that the domain of legal concern extends only as far as these agents. Accepting that the domain of legal concern extends only as far as autonomous agents seems problematic. Clearly children are not fully autonomous agents, and equally clearly a child is of legal concern even though she cannot be held legally responsible for her actions. In the case of children this problem might be overcome in the same way as the traffic offences above. Brudner's position might be slightly modified so that the legality of some action depends on whether it harms or threatens to harm our potential autonomy. However even if this modification is accepted a problem remains. Let us assume that animals cannot be autonomous. Clearly if a farmer or pet owner lets her animals starve to death the fate of these animals should be of legal concern. Even more clearly, severely handicapped children and old people suffering from dementia should be of legal concern even though these people will never be autonomous.

In the light of the above it seems to me that the legality of an action cannot be based solely on political concerns about autonomy. Let it be accepted that the only legal limitations a truly flourishing political society may impose on an agent's autonomous decisions must be to limit the harm her decisions do to others. However, what counts as harm sometimes depends on moral concerns completely unconnected to concerns about autonomy. Depending on the moral interpretation of what counts as harm, the constraint harm places on the unbridled exercise of autonomy may vary greatly.

Thursday, 10 March 2011

Why a Truly Flourishing Society must give Priority to respecting Autonomous Decisions over Beneficent Care

In my previous posting I concluded that the only belief all must share in a flourishing society is that each of us should be free to do as seems best to himself provided by so doing he does not harm others. Some might find such a conclusion absurd. Surely, they would argue, a truly flourishing society must foster compassion and aid its less fortunate members. Intuitively, if someone is about to commit suicide we should rush to stop him. However if my conclusion is accepted it casts doubt on this intuition. Prima facie my detractors appear to be right. Nonetheless in this posting I want to defend my conclusion.

Before defending my position I must make it clearer. Firstly, this freedom should be restricted to sane adults. I believe any adult should be presumed sane unless he can be shown to be insane. In addition I believe the grounds of insanity cannot be based on the outcome of someone's decision. The fact we believe someone to have made a bad or irrational decision is not grounds for ascribing even temporary insanity to him. I do accept that the enormity of some decisions might render someone temporarily insane due to some connected factors, fear for instance. Nevertheless I believe any decision about someone's sanity must be based on such factors and not the actual outcome of his decision. In the light of the above, how should we react if we see someone about to commit suicide? In such a situation speed is usually essential and we do not have time to assess whether there are any factors that render our victim temporarily insane. For this reason we should usually act to save him. However if we are not under the pressure of time and are aware that our potential suicide is subject to no factors that would render him insane, then we should not try to save him, even if he would have been grateful later if we had. For instance I would argue we should not attempt to stop someone who suffers from a chronic, but not necessarily terminal, illness from committing suicide. The above does not mean we should not care about the victim; I do. However I believe the freedom of a sane adult to do as he sees fit must be given priority over acting beneficently towards others.

My reason for believing we must accept all the decisions of most adults, even if we feel these decisions may be harmful to them, provided these decisions won't harm others, is to respect autonomy. In what follows a freely made decision refers to a freely made decision that may be harmful to the agent but won't harm others. I would argue that if we value autonomy we should accept the freely made decisions of others in order to protect this autonomy. My objector might immediately raise two objections to the above. He might agree with me that we should protect our autonomy, and that this might be achieved by accepting all our freely made decisions, but he might then suggest this could be done more effectively by accepting only autonomous decisions. I believe his suggestion to be impractical. In everyday life it is simple to recognise a freely made decision. It is not simple to recognise an autonomous decision. Let us assume we respect all freely made decisions which will not harm the agent or others. But let us also assume that in a situation in which we think the agent's decision may harm him, we will only respect his decision if it is autonomous. In this situation, if we are to act in a caring way, be sympathetic or compassionate towards someone, must we first go through something like the informed consent process to protect his autonomy? Caring is a disposition people have. This disposition is natural, though I believe it can also be cultivated. The basis of caring is not a cognitive activity, and applying an informed consent process in a caring context seems inappropriate. However my objector might raise a much more serious objection. He might agree autonomy should be protected but argue that I am wrong always to give priority to respecting autonomous decisions over caring. He might point out some autonomous decisions are trivial compared to someone's real needs. Surely in such situations, he might argue, priority should be given to caring for someone over respecting his autonomous decision.

In order to answer this second objection we must ask why we value autonomy. It seems to me we should respect all autonomous decisions because we value respecting people. People are distinct persons. I would suggest that if we fail to give precedence to respecting the autonomous decisions of others over acting beneficently towards them, we fail to respect them as the kind of creatures who can make their own decisions; we fail to respect them as persons. My objector might argue that failing to respect all autonomous decisions whilst respecting most of them does not mean we fail to respect others as the kind of creatures who can make their own decisions. He might further argue that those decisions which are not respected, for beneficent reasons, do not usually flow from the agent's real self. In reply to this second argument I would simply follow Berlin in pointing out that what is important to us is not some idealised real self but our empirical or actual self (1969, Four Essays on Liberty, The Clarendon Press, page 132). I would also point out that failing to respect all autonomous decisions whilst respecting most of them means we only respect other people as the kind of creatures who can make most, or even only some, of their own decisions. I am by no means sure this is showing true respect for persons. Nevertheless I am prepared to concede, for the sake of argument, that respecting a person might mean respecting most of his decisions and acting in a caring way towards him, rather than respecting all the decisions he makes.

I will now consider a second reason why we should respect autonomous decisions. Decisions made at random, or which are coerced in some way, are not moral decisions. For any moral decision the agent must feel the decision is his own; he cannot be detached, or even semi-detached, from it. It follows that it is basic to the idea of morality that a moral agent identifies with his decision. I have argued that any decision an agent cares about, identifies with or is satisfied with is an autonomous decision, see some of my previous postings or Frankfurt (1988, The Importance of What We Care About, Cambridge University Press). It follows that if we fail to respect someone's autonomous decision we narrow the domain of moral agents. Of course morality is not just concerned with moral agents; children and animals are of moral concern even though animals and young children are not moral agents. Kant believed the heart of morality depends upon the ability of agents to make autonomous choices. I believe any narrowing of the domain of moral agency damages morality in general. Any system of morality in which the number of creatures of moral concern remains constant but in which the number of moral agents decreases is a weakened system. Any system of morality in which moral agents are only part-time agents, because some of their freely made decisions are constrained, is a damaged system. In the light of the above I would suggest that any system of morality which gives precedence to acting beneficently over respecting all autonomous decisions makes moral agents only part-time agents and is, as a result, a damaged system.

Let it be accepted that any flourishing society must be a moral society. I have argued that any moral society must be based on respect for autonomy; we cannot have a truly flourishing society without universal acceptance of this fact. Unfortunately some people in a flourishing society might not accept the need to care for others; equally unfortunately, some people might reject beneficent care. We may deplore these facts but nonetheless, if we want to live in a flourishing society, we must be prepared to accept them. It follows that caring about others cannot be a necessary requirement of a truly flourishing society, though caring is of course desirable. Indeed a society that insists on compulsory caring for others, or the compulsory acceptance of beneficent care, is not a truly flourishing society. It further follows that the only necessary requirement for a flourishing society is that each should be free to do as seems best to himself, provided that by so doing he does not harm others.

Friday, 25 February 2011

David Cameron and Social Integration

In my last posting I suggested there is no such thing as a fully integrated society. I further suggested we would be better employed in considering how people function in our society rather than bothering about how well they are integrated into it. In a speech to the Munich Security Conference on 11/02/11 David Cameron argued,
“We have allowed the weakening of our collective identity. Under the doctrine of state multiculturalism, we have encouraged different cultures to live separate lives, apart from each other and apart from the mainstream. We’ve failed to provide a vision of society to which they feel they want to belong. We’ve even tolerated these segregated communities behaving in ways that run completely counter to our values.” He suggested we have encouraged this separation by supporting organisations which believe in separation rather than integration, see http://www.number10.gov.uk/news/speeches-and-transcripts/2011/02/pms-speech-at-munich-security-conference-60293 . In this posting I want once more to consider the idea of a fully integrated society. I also want to consider which beliefs groups within a society must share in order to function adequately in a flourishing society.

What exactly is meant by a fully integrated society? Is it one in which people share a set of common beliefs or is it one in which people have only a set of some common beliefs? Let us assume a fully integrated society is one in which people share a set of common beliefs. Mill famously argued,
“Mankind are greater gainers by suffering each other to live as seems good to themselves, than by compelling each to live as seems good to the rest.” (Mill, 1859, On Liberty, quoted from Pelican Books, 1974, page 72.)
If it is accepted that we can replace the word “mankind” by “society” then I would suggest Mill is arguing that a fully integrated society is an impoverished society. I believe we should accept Mill’s argument for two reasons. Firstly, if we were to pressurise some members of society to live as the majority see fit, by sharing the majority’s beliefs, we would diminish these people’s happiness and probably also diminish the overall happiness present in society. Secondly, I would suggest any such society becomes closed to new ideas. Any society closed to new ideas becomes stagnant, and a stagnant society is an impoverished society. It follows that if people aim for a fully integrated society, as defined above, they are creating an impoverished society. In what follows it will be assumed that people want a flourishing society. The question now becomes: can we really think of a society in which people only share some common beliefs as a fully integrated society?

An objector might reply that the above question only arises because I am talking about integration in totally the wrong way. She might suggest that we should not be considering a fully integrated society but simply an integrated one. She might further suggest that when considering integration we should consider whether people feel at home in that society or alienated from it. Intuitively this last suggestion seems to carry a lot of weight. Nevertheless I am reluctant to accept it, for two reasons. Firstly, a society which is not fully integrated, but in which we consider whether people feel at home or alienated, would appear to be one in which people only share a set of some common beliefs. Intuitively any society in which people only share some common beliefs does not appear to be an integrated society. If our intuitions clash we have grounds not to trust these intuitions.

Secondly, how do we judge whether someone is at home in a society or alienated from it? Perhaps we should simply ask them; we might conduct surveys to answer the question. But surveys are expensive to carry out and it is by no means certain that someone’s answers to questions about integration would always reflect how she actually acts in society. Let us assume that if someone identifies with the society she lives in then she must be reasonably well integrated into that society. I believe that if someone identifies with something she must be wholehearted about, or satisfied with, what she identifies with. This is a common theme of this blog, see some of my previous postings or Frankfurt (1988, The Importance of What We Care About, Cambridge University Press, chapter 12). If I am correct, then if someone identifies with the society she lives in she must be satisfied with that society. I would now suggest that whether someone is satisfied with the society she lives in depends on how well she functions in it. If someone is unable to function in a society it would seem hard for her to be satisfied with that society. Conversely, if someone functions well in a society it is hard to see what grounds she has to be dissatisfied with it. Rhetoric about integration into society seems meaningless if we have no means of gauging this integration. In the light of the above we can gauge how well a group is integrated into society by considering how its members function in society. In this light, however, the question whether we can really think of a society in which people only share some common beliefs as a fully integrated society seems irrelevant. It follows that rhetoric about integration becomes superfluous and we should simply concern ourselves with how different groups function in our society.

If any society is to function it must have some common beliefs, and if any community is to function within that society it must share these basic beliefs. Of course some community might theoretically thrive in a society without sharing any of its beliefs, but such a community would be parasitic on that society and would not function within it. The question now naturally arises: what common beliefs must people share in a flourishing society? The answer according to David Cameron is,
“Freedom of speech, freedom of worship, democracy, the rule of law, equal rights regardless of race, sex or sexuality.”
I believe David Cameron is right and that any flourishing society must share these values. At this point I am going to assume that in the context of our discussion the words “belief” and “value” are interchangeable. My objector might claim that even if David Cameron correctly identifies the values our society holds, I am wrong to suggest that any flourishing society must share them. She might point out that China is a flourishing society. I would disagree: whilst I accept that a non-democratic country such as China may flourish economically, I do not accept that such countries are genuinely flourishing; China, for instance, has changed little politically since the Tiananmen Square massacre. Even if we accept David Cameron’s view as the right one we must nonetheless remain aware that the domain of shared values is limited. Mill argued that,
“The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise, or even right.” (Page 68)
If we accept Mill’s view, as I do, then we may not compel others to do things even if we think it is right that they should do so. For example some people believe Muslim women should be banned from wearing the burka in western society; Nicolas Sarkozy in France, for instance. I personally do not approve of the burka but nonetheless believe that, provided Muslim women freely choose to wear it, they should be perfectly free to do so, except in certain circumstances in which personal recognition matters; passport control for instance. Accepting this freedom does not mean we must remain passive in respect of the wearing of the burka. Following Mill we may of course criticise, reason with and attempt to persuade Muslim women not to wear the burka, but we may not use compulsion, which rules out legislation. It follows that the only basic belief all must share in a flourishing society is that each of us should be free to do as seems best for herself provided that by so doing she does not harm others. I believe the values David Cameron mentioned come down to this basic belief. Unfortunately for David Cameron it follows that we should accept partially integrated communities which behave in ways that run counter to many of our other values. Fortunately for him, however, this acceptance does not mean we should encourage such communities which already exist, or the immigration of further such communities. Indeed I would argue we should help and persuade such existing communities to function more fully in our society. Nevertheless we must be prepared to accept these communities provided they do not restrict our freedom, or the freedom of some of their own members, and provided their actions do not harm others. In conclusion it seems to me that rhetoric about how well some ethnic communities integrate into our society is irrelevant; all that matters is that these communities function in society and do not prevent others from functioning in it.

Thursday, 10 February 2011

Soldiers and Beta Blockers

A large number of soldiers returning from active service in Iraq and Afghanistan suffer from PTSD (post-traumatic stress disorder) and the military is interested in using beta blockers to help alleviate this disorder. Beta blockers are drugs commonly used to treat some heart conditions such as angina. Elisa Hurley is concerned that the use of these drugs may have at least one bad consequence (1). In this posting I want to examine whether we should share her concern.

Before I start my examination I must briefly summarise Hurley’s argument. Let it be accepted that post-battle beta blockers help prevent the formation of painful emotional memories (PEM) in soldiers. Clearly soldiers are required to kill enemy combatants in battle. Equally clearly civilians are required not to kill others; if they do so they may be charged with murder. Hurley suggests this killing in battle separates soldiers from the normal moral community. Some might reject her suggestion but I will accept it here for the sake of argument. She proceeds to argue that after battle soldiers need to be integrated back into the moral community. She further argues that coming to terms with PEM is necessary for this re-integration. She then concludes that a bad consequence of beta blockers preventing the formation of PEM is that they also prevent this re-integration. Hurley’s argument, it seems to me, depends on two implicit assumptions: firstly, that any normal moral community depends on emotions to some degree; secondly, that a moral community must be integrated. I will question these assumptions.

Not all moral systems have an affective component. Some believe that morality is objective and that our moral behaviour should be based on norms. Clearly if we accept a non-affective moral system PEM are not necessary for a soldier to re-integrate successfully into such a community. However in practice I would argue that the problems autistic people and sociopaths have in forming moral judgements strongly suggest that morality must include an affective element, see for instance Nichols (2). In what follows I will accept the first of Hurley’s implicit assumptions.

I will now turn to the second of Hurley’s implicit assumptions. It seems clear to me that if Hurley assumes a soldier can be reintegrated into a moral community she must also implicitly assume that this community is in some way integrated. I will now give three reasons why I find the idea of such a fully integrated moral society improbable. Firstly, I would simply point out that we live in a multicultural society and that some of the norms people live by differ; I of course accept that in any society people must share some norms. Secondly, I would argue a fully integrated moral society might become something akin to an exclusive club. For instance such a society might exclude some people, such as schizophrenics, from membership. Intuitively, provided schizophrenics take drugs to control their condition, they ought to be full members of moral society. This intuition is supported in practice: society holds schizophrenics responsible for their actions provided their schizophrenia is controlled. A further example might be provided by convicted prisoners, who by their actions don’t seem to be fully integrated into moral society but who, I would argue, should nevertheless be regarded as members of moral society to some degree. These examples suggest that we live in a moral society which is not fully integrated. Lastly, I would argue that to talk too much of integration with respect to any moral society which includes an affective element would be a mistake, because we experience emotions to varying degrees. For example Michael Slote believes it might be possible to base our moral society on empathic concern for others (3). In such a society women might be better at dealing with moral problems because of their greater capacity for empathy. It seems to me that such a society would not be fully integrated, though it should of course be fully inclusive. For these reasons I would reject Hurley’s second implicit assumption that we live in an integrated moral society. I would suggest we would do better to consider people’s ability to be members of, and to function in, a moral society rather than their integration into it.

Hurley posed the question: does the taking of beta blockers damage a soldier’s reintegration into society after battle? I have suggested above that there is no such thing as a fully integrated moral society. I have further suggested that we would do better to consider the ability of people to function in a moral society than their integration into it. If my suggestions are accepted then Hurley’s question needs to be refined. It might be refined as follows: does the prescription of beta blockers to soldiers affect their ability to function in society? It seems clear many ex-soldiers find it hard to function in our everyday society. According to James Treadwell, a lecturer in criminology at the University of Birmingham, statistics suggest that between 3% and 10% of the British prison population are ex-forces personnel, making the forces the largest occupational background claimed by prisoners, see the Howard League for Penal Reform. This situation might be partly explained by soldiers witnessing, or being party to, traumatic events while in the services and later developing post-traumatic stress disorder.

At one time most schizophrenics found it hard to function in society and many were confined to asylums. More recently, advances in drugs have allowed most schizophrenics to function in society. The taking of these drugs does not damage a schizophrenic’s ability to function in society; indeed it makes it possible. Let it be accepted that PEM prevent soldiers from fully functioning in society. Let it be further accepted that beta blockers dampen a soldier’s PEM after battle and that this helps prevent PTSD. It might then be argued, by analogy, that beta blockers do not damage soldiers’ ability to function in society but instead enhance it. For these reasons it might be thought that I am in favour of soldiers taking beta blockers provided these prevent PTSD. In practice I share Hurley’s concern about the use of these drugs.

My concern is not about the successful re-integration of soldiers back into society after battle but rather about the integration of a soldier’s life with his sense of self. Consider a non-swimmer who through no fault of his own fails to rescue a child from a swollen river. Let us assume the child drowns and this person is traumatised by memories of her screams. Let us further assume there is a drug which would erase all memories of this incident from this person’s mind and hence eliminate his trauma. Some might argue there is no problem here and that the trauma victim should take the drug. I am not so confident that there is no problem. If we accept there is no problem in the above case then perhaps we should also accept there would be no problem if we took the same drug every night while we sleep, to erase all painful memories of the day before. Such a situation would be similar to that which occurs in the film “Eternal Sunshine of the Spotless Mind”. In such a scenario some past events in a person’s life appear to have little effect on his sense of self; the person loses some important connections to his personal history. I would suggest any disconnection between someone’s personal history and his sense of self is damaging, for at least two reasons. Firstly, anyone whose sense of self rests on only a selective view of his history seems to have a diminished sense of self, and I would suggest such a diminished sense of self is damaging to the individual concerned. Secondly, the idea of forgiveness can play an important part in our lives. For somebody to be forgiven he must accept responsibility for his actions. However if drugs dull or pervert his memories of his actions it is hard to see how he can genuinely accept such responsibility. The idea of forgiveness is particularly important in the context of war. After a war has ended there is often a need for a soldier to become reconciled with his former enemies, and it seems to me reconciliation is impossible without accurate recollection. For the above reasons I would suggest that the taking of beta blockers to dull a soldier’s painful memories post battle is damaging.

An objector might claim that the damage done by PTSD to a soldier’s ability to function in society may well outweigh any damage to his sense of self or need for reconciliation. He might then use this claim to conclude that the use of beta blockers post battle is acceptable. I would reject such a claim. However even if the objector’s claim is accepted I don’t think his conclusion automatically follows. It is clear that schizophrenics who take drugs to control their condition successfully can function in society; indeed in most cases it seems probable that taking these drugs is the only way they can function in society. But the situation is different with regard to soldiers. Soldiers can be treated in different ways to relieve PTSD, cognitive therapy for instance. I would suggest, provided it is accepted that beta blockers damage the connection between a soldier’s sense of self and his history, that these drugs should not be used to treat PTSD.

  1. Elisa Hurley, 2010, Combat Trauma and the Moral Risks of Memory Manipulating Drugs, Journal of Applied Philosophy, 27(3).
  2. Shaun Nichols, 2004, Sentimental Rules, Oxford University Press.
  3. Michael Slote, 2007, The Ethics of Care and Empathy, Routledge.

Monday, 24 January 2011

What’s Wrong with Addiction to Video Games

In a posting on addiction Bennett Foddy points out that whilst we universally regard addiction to tobacco as bad, we are more ambivalent with regard to the badness of addiction to video games, see http://blog.practicalethics.ox.ac.uk/ . In this posting I want to examine what’s wrong with addiction to video games. Before I carry out this examination it is necessary to understand the different types of harm caused by addiction.

If we are to understand the harm of addiction we must have a definition of addiction which is both useful and captures our intuitions. Smoking is clearly addictive, and smoking is harmful because it damages our health. But overeating might also be regarded as harmful if it leads to obesity which damages our health, yet we don’t regard all overeating as an addiction. I am of course not denying there are some cases of overeating which might be regarded as addiction. It follows that physiological harm cannot be used in isolation to define addiction. Addiction might be defined as someone not having control over doing, taking or using something, to the point that it may be harmful, see www.nhs.uk/conditions/addictions .

Is the above definition satisfactory? Let us assume someone is greedy and, because of his greed, becomes obese, damaging his health. Intuitively such a person need not be addicted to food; being a glutton is not the same as being an addict. However if we were to use the above definition a glutton would be classed as an addict. A glutton lacks control over food because he is greedy, not because he is compelled. Let us compare the case of a glutton with that of a smoker. In the case of a smoker, as opposed to a glutton, his lack of control is due to compulsion: a compulsion caused by nicotine. In the light of the above my initial definition of addiction might be modified as follows. Addiction is not having control, due to some form of compulsion, over doing, taking or using something, to the point that it may be harmful. It is important at this point to be clear that not all compulsive behaviour is a case of addiction. A mother may feel compelled to love her child, she may feel she can do no other, but nonetheless intuitively we would not regard her as addicted to either her child or love. It is of course possible for some people to become addicted to something that resembles love. However I believe it is impossible to become addicted to love, for reasons I will give later.

Let us accept the above definition of addiction. There are two elements to this definition: firstly the harm caused by the addiction and secondly the agent’s lack of control due to compulsion. I will examine the harm element first. The harm caused by addiction might be physiological or psychological. I will now argue that physiological harm is not part of the harm peculiar to addiction. Let us once again consider our mother who feels naturally compelled to love her child. Let us assume this mother is a single mother who works long hours to enable her to care for her child to the best of her ability. As a result of these long hours she becomes overtired and harms her physiological health. As I have pointed out, intuitively this mother is not an addict. Next let us consider two patients with damaged livers. Let us assume the physiological harm, the damage to the liver, is identical in both cases, but that in the first case this damage is caused by disease and in the second by alcohol addiction. It seems to me the harm caused to the alcoholic’s liver is not a peculiar type of harm connected to addiction; viruses may cause identical damage to someone’s liver as that caused by alcohol abuse. We should of course try to eradicate addictions that cause physiological harm, just as we should try to eradicate diseases which cause harm, but the peculiar harm of addiction does not seem to be captured by the nature of any physiological harm.

I will now consider two forms of non-physiological harm that might be particular to addiction. Firstly a virtue ethicist might suggest that addiction damages someone’s ability to act as a moral agent. Traditionally the cardinal virtues are wisdom, justice, fortitude and temperance. Let us accept that an addict is not a temperate person. It follows, provided you accept virtue ethics, that someone’s addiction harms him by affecting his ability to act as a moral agent. However, even if one accepts virtue ethics, it does not seem to me that a lack of temperance is a harm peculiar to addicts. After all someone may be a temperate person before he suffers a stroke and an intemperate one afterwards.

I now want to consider a second non-physiological harm that might be particular to addiction. I will argue that addiction harms the addict by harming his status as an autonomous agent. Before making this argument I must make clear what I mean by autonomy. Autonomy is not simply the ability to choose. A wanton may be free to choose whatsoever he wants but his will is anarchic, moved by mere impulse and inclination, see Frankfurt (1999, Necessity, Volition and Love, Cambridge University Press). Intuitively someone whose will is moved simply by impulse and inclination is non-autonomous, because autonomy involves self-government. Someone might argue that the exercise of autonomy involves an agent freely making rational choices rather than simply being free to choose. Adopting this definition means that because an addict’s choices are compelled by his addiction he cannot freely make rational choices; it then follows such an agent is unable to exercise his autonomy. Before we can decide whether we should accept either this definition or the conclusion that follows from it we must be sure what precisely is meant by “rational” and “freely”. Firstly, does rational mean logical, or does it simply mean the agent chooses what seems appropriate to him? I would suggest being autonomous means an agent must be free to choose what seems appropriate to him. Secondly, does the freedom to choose involve freedom from both external and internal compulsions? I would suggest in this case being autonomous need only involve being free from external compulsions. For instance a devout Christian might feel compelled to profess his faith even if he is free from all external compulsions, but few would regard his profession as non-autonomous. In the light of the above an autonomous decision might be more precisely defined as one which is free from external compulsions and which feels appropriate to the agent. Clearly if this definition of autonomy is accepted it means any external compulsion, such as drug addiction, damages an agent’s ability to make autonomous choices.

Accepting the above definition means addiction damages an agent’s autonomy. Someone might now suggest that this definition is incomplete because it does not cover all forms of addiction. He might point out that the above definition appears to exclude as addicts some gamblers, compulsive consumers of pornography and many others. He might further point out that such intuitive addictions are caused by internal compulsions. I fully accept his point that some internal compulsions cause addiction. But I would reject his suggestion that the above definition is incomplete, by arguing that any agent would see such internal compulsions as inappropriate. A lover may feel compelled to love his beloved. However love is not an addiction because the lover identifies with his beloved and is satisfied by his compulsion; in other words he finds his love appropriate. On the other hand a compulsive consumer of pornography may feel compelled to consume pornography but is unlikely to identify himself totally with this consumption or be satisfied with it. He is, in other words, unlikely to feel his consumption is appropriate. It follows that addictions caused by internal compulsions with which the agent fails to identify damage his ability to make autonomous choices. It does not follow that my above definition of autonomy is incomplete. The damage addictions cause to an agent’s ability to make autonomous choices may vary. In some circumstances mild addiction may do very little damage to someone’s status as an autonomous agent. In others his addiction may mean he is unable to make decisions he identifies with and which satisfy him. In these circumstances he may suffer psychological harm and in extreme cases his sense of identity may be damaged.

I am now in a position to answer the question posed at the beginning of this posting: what is wrong with addiction to video games? I have argued that the harm done by addiction may be physiological or psychological. The physiological damage of smoking is large and well documented. I have argued that the psychological damage done to us by addiction is damage done to our autonomy. In the light of this I would suggest the psychological harm caused by addiction to tobacco is minimal. Smokers may prefer not to be smokers but in all other respects they can exercise their autonomy in much the same way as non-smokers. The harm done by addiction to video games is different. The physiological damage done by addiction to video games would appear to be minimal in contrast to the damage smoking causes. However the psychological damage done to games addicts may be much larger than the minimal psychological damage caused by smoking. Games addicts may prefer to play these games less, just as smokers may prefer not to smoke. However, unlike smoking, which causes minimal damage to the smoker’s autonomy, the games addict’s ability to make autonomous decisions may also be limited by the time taken up in playing these games. In addition some young children who become addicted to video games appear to become aggressive, and this too may hinder their personal development and ability to make autonomous decisions, see for instance www.rcgd.isr.umich.edu/aggr/articles/... . Four conclusions follow from the above discussion. Firstly, addiction to tobacco and addiction to video games cause different types of harm. Secondly, the harm peculiar to addiction is harm to the agent as an autonomous agent. Thirdly, the harm caused by addictive video games, though different, may be every bit as serious as that caused by smoking, perhaps even more serious. Lastly, far from celebrating the addictiveness of certain games we should see this addictiveness as potentially very harmful.
