Monday, 27 June 2011

Knobe, Erler and our ‘True Self’

In this posting I want to consider Alexandre Erler's comments in Practical Ethics on an article by Joshua Knobe in the New York Times concerning personal identity, see http://blog.practicalethics.ox.ac.uk/2011/06/what-is-my-true-self/#more-1548. Knobe uses the example of the evangelical preacher Mark Pierpont, who encourages homosexuals to seek a “cure” for their sexual orientation. Pierpont is himself homosexual and continues to battle his own homosexual urges. Knobe uses him as an example to question what is meant by someone’s ‘true self’. He first presents two common concepts of self. The first is an unreflective one in which our true self is determined by our nature, our urges. The second, advocated by philosophers such as David DeGrazia, holds that our true self is defined by our commitments, values, and endorsements. Knobe suggests both of these concepts are challenged by the case of Pierpont. He further suggests that people regard the traits they value in someone as part of that person’s true self. In this posting I want to question whether the concept of someone’s true self, as opposed to simply her self, is a useful one.

To whom might the concept of a true self be useful? Firstly it might be useful to someone deciding what to do. When making some important decision she might ask what her true self would do in this situation. She might ask herself whether she is acting authentically; whether she is being true to her own self. Secondly another person may question what someone else’s true self is, before ascribing praise or blame and predicting what that person will do. Intuitively it seems to me these two uses of the idea of a true self are distinct and that a different concept might be useful in each case. I will deal with the second case first.

Knobe suggests that others regard the traits they value in someone as reflecting her ‘true self’. I would suggest any such concept of ‘true self’ should not help them ascribe praise or blame, and it is not very useful in predicting the actions of others. Praise and blame seem to be more naturally connected to someone’s autonomy than to her ‘true self’. Of course I accept that someone’s ‘true self’ may be connected to her autonomy. Nonetheless I believe the ascription of praise and blame depends directly on someone’s autonomy and that there is no need to involve the concept of a ‘true self’. It is plausible that understanding a person’s values might be useful in predicting her actions. However if our concept of ‘true self’ is limited to the values we approve of then this concept will be useful in predicting what we think someone should do rather than what she actually will do. For these reasons Knobe’s suggested concept of someone’s ‘true self’ does not seem to me to be a useful one when applied to others. Erler suggests that our ‘true self’ might be a composite of the two concepts Knobe introduced initially: our natural idea of a ‘true self’ and the idea of a ‘true self’ defined by our values and commitments. Is Erler’s suggested composite concept, then, any more useful than Knobe’s in this context? Once again I would suggest the ascription of praise and blame should be concerned with the idea of someone’s autonomy rather than her ‘true self’. Erler’s suggested concept may well be more useful in predicting someone’s actions, but I wonder whether there is any useful difference in this particular context between the concepts of self and ‘true self’.

Is any concept of a ‘true self’ useful, then, to someone making a decision? Most ordinary decisions are largely unreflective; we just act without questioning our motives too much. However for some big decisions, such as whether to pursue a particular career or start a family, much more reflection is usually involved. Perhaps in these cases we might question what our ‘true self’ would do; what the authentic thing to do is. It seems clear to me that if someone accepts her ‘true self’ is defined only by the traits others value in her, and she acts in accordance with this concept, then she isn’t acting authentically. Knobe’s suggestion about the true self is not useful in this context either. Let us return to the second concept of ‘true self’ initially introduced by Knobe. I would argue our deeply held commitments, values, and endorsements are what we ‘care about’. According to Harry Frankfurt, if someone ‘cares about’ something she identifies herself with what she cares about and makes herself vulnerable to losses and susceptible to benefits depending on whether what she cares about is diminished or enhanced, see (1988, The Importance of What We Care About, Cambridge University Press, page 83). Prima facie this concept of ‘true self’ seems to be a useful one for someone to employ when making some big decision, and there is no need for Erler’s composite definition. Prior to making that decision she asks herself what she really ‘cares about’. It would further seem that when Pierpont encourages homosexuals to seek a “cure” for their sexual orientation he is acting in accordance with this concept of his ‘true self’. He is acting authentically.

I believe it is certainly true that people act with respect to what they ‘care about’. This concept of ‘true self’ determines their actions. However even if it is accepted that one’s ‘true self’ determines one’s actions, it does not automatically follow that one’s true self is useful in deciding what to do. What someone intends to do and what he ‘cares about’ need not be identical. Frankfurt argues someone might be unable to carry out her intentions. She might discover, when the chips are down, that she simply cannot bring herself to pursue the course of action which she has decided upon (page 85). For instance a single mother might intend to have her baby adopted. She might believe he would have a better life if this were done. She might further believe this is what she ‘cares about’. However when the time comes she finds she cannot go through with the adoption; it is not what she really ‘cares about’. The reason for the mother’s false belief lies in the way Frankfurt links ‘caring about’ to wholeheartedness and satisfaction. Frankfurt defines satisfaction as an absence of restlessness or resistance. A satisfied person may be willing to accept a change in her condition, but she has no active interest in bringing about a change (1999, Necessity, Volition, and Love, Cambridge University Press, page 103). Our mother may not be able to accurately predict what will satisfy her prior to actually having to hand over her child for adoption. It follows that the concept of our true self defined by our deeply held commitments, values, and endorsements may not be as useful in making big decisions as I assumed above. It might of course be that I am wrong to equate our deeply held commitments, values, and endorsements with Frankfurt’s ideas on ‘caring about’, or that I have misinterpreted Frankfurt’s ideas on wholeheartedness and satisfaction.
Nevertheless it seems to me that in making a big decision someone should consider what she ‘cares about’ in conjunction with her nature and urges. It follows that Erler’s composite idea of a ‘true self’ may be more useful than a concept based purely on ‘caring about’ when making major decisions.

Wednesday, 1 June 2011

Sexually Coercive Offers

James Rocha asks what is wrong with the following coercive offer.

“Hal is Vera’s supervisor at food services company which is expanding into the global market. The company decides to staff its international offices with workers from the US. Hal must send one of his employees to the new Paris or Bucharest office. Vera, while happy to accept a new foreign assignment with much higher pay, would much prefer Paris. Unfortunately the company has randomly assigned her to Bucharest. Hal, knowing the contents and strength of Vera’s preferences, offers to change her to Paris in exchange for sex. If Vera refuses, she will simply be assigned to Bucharest, which has the benefit not only of higher pay, but gets her away from Hal” (2011, The Sexual Harassment Coercive Offer, Journal of Applied Philosophy, 28(2)).

Rocha connects the wrongness of Hal’s action to his disrespect for Vera’s autonomy. He states it is possible to respect an agent’s autonomy whilst changing her actions to a more preferable autonomous action (page 206). I have argued in this blog that autonomy is not simply about choices. Autonomy concerns what the agent cares about or values. I have also argued respecting autonomy means simply respecting the choices autonomous agents make, provided these choices do not harm others. Rocha argues Hal disrespects Vera’s autonomy by inserting influence over her sexuality standards (page 210). Caring about something means you identify yourself with that thing, and this identification must have persistence. It seems to me that provided Vera cares about her sexual standards Hal’s offer is unlikely to influence these standards. It follows that provided Hal accepts any decision Vera makes he does not disrupt her autonomous decision making.

Nevertheless there seems to be something morally wrong about Hal’s offer. Before continuing to consider disrespect for autonomy I will briefly point out two other possible sources of this wrongness. Firstly Hal seems to have no natural sympathy or empathy for Vera. Slote defines a morally wrong action as one that reflects, exhibits or expresses an absence (or lack) of a fully developed empathic concern for (caring about) others on behalf of the agent. If Hal felt empathic concern for Vera he might offer her the Paris posting unconditionally. Secondly a virtue ethicist might point out Hal’s proposed offer is simply not one a virtuous man would make.

Rocha argues what is disrespectful in connection with autonomy in Hal’s offer is that it seeks to alter Vera’s ends. I would agree with Rocha that Hal’s offer is disrespectful to Vera’s autonomy but would argue this disrespect is not primarily connected with Hal seeking to alter Vera’s ends; it is connected to Hal failing to see Vera as an end in herself. Hal sees Vera primarily as a means to his own sexual gratification. We respect autonomy because it has value. Autonomy has both instrumental and intrinsic value. According to Dworkin,

“there is a value connected with being self-determining that is not a matter either of bringing about good results or the pleasure of the process itself. This is the intrinsic desirability of exercising the capacity for self-determination. We desire to be recognized as the kind of creature capable of determining our own destiny.” (1988, The Theory and Practice of Autonomy, Cambridge University Press, page 112)

I have argued Hal’s offer is unlikely to disrupt Vera’s decision making. It follows that in practice Hal’s offer is unlikely to disrupt the instrumental value of Vera’s autonomy. Perhaps then Hal’s offer does not disrespect the instrumental value of Vera’s autonomy. However, by making his offer, Hal fails to see Vera as the kind of creature who can fully determine her own future. Hal’s offer means he disrespects the intrinsic value of Vera’s autonomy and as a consequence fails to truly respect her as a person. I would suggest we value being truly respected as a person more than we desire good options to choose from. My suggestion is open to empirical investigation. However provided my suggestion is correct, the real harm Hal’s offer does to Vera’s autonomy is that he fails to respect its intrinsic value.

Tuesday, 24 May 2011

Roboethics and Autonomy


In this posting I want to examine roboethics and responsibility. I will do this by considering autonomy. The idea of roboethics seems to have originated with Asimov’s three laws of robotics, which he proposed in 1942. Here I will consider a revised draft of these laws based on an EPSRC/AHRC initiative in roboethics, see Alan Winfield. These revised laws are:
1.      Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2.      Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws and fundamental rights & freedoms, including privacy.
3.      Robots are products. They should be designed using processes which assure their safety and security.
4.      Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5.      The person with legal responsibility for a robot should be attributed (with responsibility?).
I agree that humans rather than robots are responsible agents. I also believe robots such as those working on car production lines are just machines like washing machines. However I am going to argue that advances in robotics are likely to smear the attribution of responsibility for events that happen due to a robot’s actions.
Let it be assumed all possible robots are used in a similar way to those working on car production lines. If a machine causes damage due to a design fault then the manufacturer is at fault; for instance Toyota admitted responsibility for faults with some of its cars. If however a machine causes damage due to the way it is operated then the user is responsible. A driver of a Toyota car who crashes due to excessive speed is solely responsible for the crash provided there are no mechanical faults or other extenuating factors. The legal responsibility attached to the use of robots should be the same as the legal responsibility attached to the use of any other machine such as cars, washing machines or lawn mowers. I would suggest accepting my assumption means accepting there is no real need for roboethics. Roboethics can be safely confined to the realm of sci-fi and is of no practical importance.

However I believe my assumption is false. Robots are indeed instruments we use to obtain our goals, as is any machine. But the operations of a robot need not be the same as those of any other machine. If I use a knife or washing machine the operation of the knife or washing machine is completely determined by me. The goals of a robot are completely determined but the way it operates need not be. For instance a driverless train may be classed as a robot if it has no human on board even to act as a controller in an emergency, such as those on the Copenhagen Metro. A computer program that automatically trades on the stock market might also be classed as a (ro)bot provided we think the terms instrument and machine are interchangeable in some contexts. If it is accepted that (ro)bots can operate in ways that are not completely determined by the designer or operator then there is indeed a need for roboethics.

Let us consider a (ro)bot that independently trades in stocks for a firm of stockbrokers. Let it be assumed for the sake of simplicity that the goal set by the roboticist who designed this machine is only to maximise profits for the stockholders. Let it be further assumed this machine buys stocks in one company for only some of its stockholders, inflating the share price. It then sells the shares other stockholders hold in this company at this inflated price. This sale causes the price to return to its former level. Let it be assumed such a strategy maximises profits for stockholders in general, but that some stockholders gain at the expense of others. The actions the machine takes are unethical. Clearly this machine contravenes the 4th law of robotics as set out above. This example raises two questions. Firstly, what can be done to prevent the contravention occurring again? Secondly, who is responsible for doing so?

It seems to me there are two possible ways of ensuring the actions a machine takes are ethical. Firstly the goals of the machine must be adequately specified in order to prevent it producing unethical actions. Secondly, if the machine can operate in a way not completely determined by the designer or operator, then the machine must have some inbuilt standards against which it can check the morality of any of its proposed actions. Let us assume the goals of a machine can be adequately specified so as to prevent it producing immoral actions. Let us consider this assumption in conjunction with my example of a stock trading machine. The 2nd law of robotics states that humans rather than robots are responsible agents. In my example I would worry that even though humans are responsible for the machine’s goals it is by no means clear who should be held accountable for any failings; responsibility becomes smeared. In my example the machine’s designer is unlikely to be fully aware of all the possibilities of stock trading whilst the machine’s owners may be unaware of how to build goals into a machine. Someone might object that my worry is illusory and that the stockbrokers involved must be able to fully set out the machine’s goals to the designer. However I have other important worries. Firstly I worry whether, as the tasks machines are able to do become more complex, it is possible to completely specify the goals of a machine so that none of its actions can be considered immoral. Secondly I worry that specifying only the ends a designer sets a machine implies that whatever means it takes to achieve these ends is justified. Because of these worries I would suggest that instead of attempting to prevent a machine producing immoral actions by adequately framing the machine’s goals, it would be preferable to have a machine with some inbuilt standards against which it can check the morality of any of its proposed actions.
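The second approach can be illustrated with a small sketch. The following Python is purely hypothetical: the class names, the single standard, and the trading example are all invented for illustration, and a real trading system would of course be far more complex. The point is only to show the architecture being proposed: proposed actions are filtered through a set of inbuilt standards before execution, rather than relying solely on a well-specified goal.

```python
# A hypothetical sketch of a machine with inbuilt moral standards.
# All names here are invented for illustration; this is not a real trading API.

from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class ProposedTrade:
    stock: str
    action: str                      # "buy" or "sell"
    beneficiaries: Set[str]          # clients who gain from this trade
    losers: Set[str] = field(default_factory=set)  # clients who lose by it

# A moral standard is simply a predicate over proposed actions:
# it returns True if the action is permissible.
MoralStandard = Callable[[ProposedTrade], bool]

def no_client_harmed(trade: ProposedTrade) -> bool:
    """Forbid trades that profit some clients at other clients' expense."""
    return not trade.losers

STANDARDS: List[MoralStandard] = [no_client_harmed]

def execute_if_permissible(trade: ProposedTrade) -> bool:
    """Execute the trade only if every inbuilt standard permits it."""
    if all(standard(trade) for standard in STANDARDS):
        # place_order(trade)  # actual execution elided in this sketch
        return True
    return False

# The pump-and-sell strategy from the example is blocked; an ordinary
# trade that harms no client is allowed.
pump = ProposedTrade("ACME", "sell", beneficiaries={"A"}, losers={"B"})
fair = ProposedTrade("ACME", "buy", beneficiaries={"A", "B"})
assert execute_if_permissible(pump) is False
assert execute_if_permissible(fair) is True
```

Note that in this sketch the goal (maximise profit) and the standards are separate: the machine may pursue its goal however it likes, but every candidate action must pass the standards first. This separation is exactly what distinguishes the second approach from trying to build all the ethics into the goal specification itself.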

Someone might object that if my suggestion is adopted the 2nd law of robotics is contravened. She might point out I am proposing giving autonomy, albeit limited, to robots, and that with autonomy comes responsibility. I will now argue my suggestion means giving some form of self-government to robots but it does not mean giving them even limited autonomy. Self-government in this context simply means having the capacity to independently decide on and follow some course of action. A child can be self-governing in a domain specified by her parents. A robot can also be self-governing in some limited domain specified by its operators, for example a planetary rover. However self-government as defined above does not mean being autonomous. I have argued repeatedly in this blog that autonomy requires that an agent must ‘care about’ or love something. It seems extremely doubtful to me whether robots will ever ‘care about’ or love anything. It is certain present-day robots cannot care. It follows it must be doubtful whether robots can be autonomous. It further follows that even if a robot has some limited self-government it cannot be held responsible for its actions if responsibility requires autonomy. It still further follows that my suggestion, that (ro)bots have inbuilt standards against which they can check the morality of any proposed actions, does not contravene the 2nd law of robotics.

Nevertheless my suggestion does have worrying implications for responsibility. Only humans can be held responsible for the actions of (ro)bots, but adopting my suggestion would mean this responsibility becomes smeared. At present the military is showing great interest in military robots, see (1). Let it be assumed such a robot kills a large number of non-combatants in error. Let it be further assumed this machine had built into its software a set of moral standards. The outcry would be enormous and there would be demands that someone should be held responsible. Clearly, as I have argued, the robot cannot be held responsible. Who then should be responsible: the operator, the software designer or the ethicist who drew up the moral standards? I would argue no one is clearly responsible, as responsibility has become smeared among a large number of people due to the way the machine operates. Because of this smearing of responsibility it might be argued no (ro)bots should ever be made which operate with an inbuilt set of moral standards. However I would argue this smearing of responsibility is not a problem peculiar to robotics but is an inevitable consequence of the complexity of modern life. Let us assume a pilot in a plane over Libya fires a missile killing a large number of non-combatants in error. Once again who is responsible: the pilot, his commanders, or the systems designers? Once again responsibility appears to become smeared. If it is accepted this smearing of responsibility is inevitable due to the complexity of modern life rather than being peculiar to robotics then I see no reason why (ro)bots should not be built with an inbuilt set of moral standards provided this is done with great care. I would further suggest that both philosophy and the law need to seriously consider the consequences of this smearing of responsibility.

  1. Wendell Wallach, Colin Allen, 2009, Moral Machines, Oxford University Press, page 20.

Tuesday, 26 April 2011

Medical Ethics and Thoroughgoing Autonomy

In a posting on Practical Ethics Charles Foster defends dignity against “the allegation that dignity is hopelessly amorphous; feel-good philosophical window-dressing; the name we give to whatever principle gives us the answer to a bioethical conundrum that we think is right”. He suggests this allegation usually comes from “the thoroughgoing autonomists – people who think that autonomy is the only principle we need”. He further suggests “there aren’t many of them in academic ethics, but there are lots of them in the ranks of the professional guideline drafters”, see http://blog.practicalethics.ox.ac.uk/2011/03/autonomy-amorphous-or-just-impossible/#more-1320 . Foster believes “we need urgently to disown the monolithic fundamentalism of pop-ethics, and embrace a truly liberal pluralism that listens respectfully to the voices of many principles. Proper pluralism isn’t incoherence.” In this posting I want to defend a limited thoroughgoing autonomist’s position.

Before making my defence I must make clear the position I am defending. I do not believe autonomy is the only principle relevant to medical ethics. Even if some form of medical ethics could be based solely on patient autonomy, which I doubt, the ethos of medicine depends on the notion of caring. I accept beneficence is of major importance to medical ethics. Foster characterizes the form of autonomy he is attacking as an icy, unattractive, Millian, absolutist version of autonomy. I do not believe such a form of autonomy should form the basis of most medical ethics. Firstly I will argue a more user-friendly concept of autonomy should be applied in medical ethics. I will then argue this concept of autonomy is not subject to the same problems as a more absolutist version. Lastly I will defend the position that respecting the autonomous wishes of a patient should always be given precedence over acting beneficently towards him.

I argued in my posting of 03/09/10 that an autonomous decision is simply a decision with which the agent identifies himself and which he “cares about”; in this context I am using “cares about” in the same way as Harry Frankfurt, see for instance (1999, Necessity, Volition, and Love, Cambridge University Press). An autonomous decision is one the agent is wholehearted about. Frankfurt argues a wholehearted decision is one with which the agent is satisfied. He defines satisfaction as an absence of restlessness or any desire for change. If we accept this concept of autonomy then it has consequences for the autonomous decisions patients make. I argued in my posting of 01/07/08 that this form of autonomy is closely connected to the idea of satisficing. It follows patients can make autonomous decisions which are sub-optimal. But don’t some patients, such as Jehovah’s Witnesses, already make sub-optimal decisions in practice? Moreover a patient may also make an autonomous decision simply to trust his medical team to do what they believe is best for him. In reality I would suggest this is what most patients implicitly do when making a consent decision. This after all is how many of us explicitly make decisions outside a medical context. We simply trust lawyers or financial advisors, for instance, without others questioning our autonomy. Are doctors less trustworthy than lawyers or financial advisors?

Foster believes a Millian absolutist version of autonomy causes problems for medical ethics. I agree with Foster. But does a concept of autonomy based on “caring about” cause the same problems? In what follows I will argue it does not. Firstly Foster seems to assume autonomy requires that agents are able to give an unequivocal answer to the question, what do you want? He then suggests it is unusual to meet someone who is so well integrated as to be able to do so. I agree many people have difficulty giving an unequivocal answer to the question, what do you want? But how is this difficulty connected to medical ethics? It seems to me Foster must believe that informed consent requires that a patient is able to give an unequivocal answer to what he wants. In medical practice a patient is seldom, if ever, asked what treatment he wants. Rather the question usually posed is simply this: “we believe this treatment is in your best interests; do you give your consent?” The validity of this consent is of course dependent on the patient being given adequate information about the relevant details of the proposed treatment. If we conceive autonomy as “caring about” linked to satisfaction then autonomous decisions are likewise not linked to someone being able to give an unequivocal answer to the question, what do you want? Autonomy is simply linked to an absence of any desire on the part of a patient to change his decision. Unequivocal answers are only required when a patient doesn’t want some treatment. The above suggests that if autonomy is conceived as “caring about”, Foster’s worry that medical practice and respecting autonomy are incompatible, because patients cannot always give unequivocal answers when giving informed consent, is not justified.

Secondly Foster worries that the giving of informed consent in clinical practice is linked to the giving of informed consent in clinical trials. Foster states informed consent requires that a patient in a consultation with a surgeon about his osteoarthritic hip talks in much the same way as a subject in a clinical trial would talk with the trial’s co-ordinator. I once again agree. However I believe that if an autonomous decision depends on an agent’s satisfaction with this decision there is no good reason based on respect for autonomy why the above linkage should not be broken. For instance I suggested above a patient may make an autonomous decision simply to trust his medical team to do what they believe is in his best interests, because doctors are no less trustworthy than lawyers or financial advisors. Different agents may need different amounts of information to make a decision that satisfies them. Moreover an agent might need different amounts of information to make a decision that satisfies him in different situations. For instance if I am going to have my blood pressure taken all I need to know is that I am going to have my blood pressure taken. If I am going to have an operation for my osteoarthritic hip I may need information about the benefits and risks involved. I may only need to understand the risks involved in very broad terms, as the pain in my hip means I will discount these to some degree. If however I am consenting to take part in a clinical trial I need to be better informed about any risks involved, as I have no factors which will make me discount these risks. In the light of the above I see no reason based on respect for autonomy why the information needed to give consent to treatment should be comparable to the information needed to consent to take part in a clinical trial.

Lastly Foster mentions an important paper by Agledahl, Forde and Wifstad (Journal of Medical Ethics 2011; 37) see http://jme.bmj.com/content/37/4/212.full?sid=7a5d7b0f-c5e1-4291-838e-b0d9414fc1d2 . Agledahl, Forde and Wifstad state “patients' right to autonomous choice is upheld as an ideal although the options of both the patients and the doctors are very limited” and then they rightly point out that “in the healthcare setting, choices are often neither explicit nor available.” The implication of the above seems to be that the authors believe a lack of choice means concern for patient autonomy is basically a sham. Let it be accepted all competent patients have some choice. All competent patients can consent or refuse to consent to treatment. I would suggest if autonomy is based on “caring about” linked to satisfaction that in a clinical setting concern for autonomy is not a sham. It is not a sham because a doctor does not have to offer a patient an array of options out of concern for his autonomy. One option is all that is needed. Indeed accepting a concept of autonomy based on “caring about” might mean in a clinical setting too many choices actually erode a patient’s autonomy. Frankfurt argues,
‘For if the restrictions on the choices that a person is in a position to make are relaxed too far, he may become, to a greater or lesser degree, disorientated with respect to where his interests and preferences lie.’ (1999, page 109).
In the light of the above I would suggest the fact patients have limited options does not mean it is impossible to respect patient autonomy in practice.

It might be argued the concept of autonomy I have outlined above is an amorphous concept offering little practical guidance. If this is so it might be asked how I am going to defend a thoroughgoing autonomist’s position. Foster argues we need to embrace a truly liberal pluralism that listens respectfully to the voices of many principles. It seems to me the soundness of his position depends on what he means by pluralism. I believe if pluralism means some sort of competition between different moral goods the result would be incoherence. I would suggest the only way to avoid this incoherence is to give priority to some moral goods. This prioritization does not imply we must be able to weigh moral goods, but it does imply we must be able to rank them. I will now argue this prioritization means giving precedence to respecting autonomy over acting beneficently. If I am going to act beneficently towards someone I must care for him. The basis of my care may be sympathy or empathy. If my beneficence is based on sympathy it seems clear to me that I may act in what I conceive to be his best interests and override his autonomy. If however my beneficence is based on empathy this option is not open to me. If I feel empathy for someone I must focus on what he cares about rather than what I think might be in his best interests. It follows that if I want to act beneficently towards someone, and my beneficence is based on empathic concern rather than sympathy, and I believe his best interests clash with his autonomy, I should nevertheless give precedence to respecting his autonomy over acting in these interests. Beneficence based on empathy automatically gives precedence to respecting autonomous decisions. It follows that if beneficence is based on empathy it is possible to defend a thoroughgoing autonomist’s position. Agledahl, Forde and Wifstad seem to partially support my position because they believe “the right to refuse treatment is fundamental and important”.
If autonomy is based on “caring about” and beneficent care is based on empathy, then a limited thoroughgoing autonomist’s position means doctors need not concern themselves too much with providing choices but must respect all autonomous decisions. However an objector might point out I have provided no reason as to why beneficence should be based on empathy rather than sympathy. We can act beneficently towards animals. Clearly such beneficence is based on sympathy. I would suggest we cannot feel empathy for animals due to epistemic ignorance. I would further suggest we can act more beneficently towards people if our concern is empathic rather than sympathetic. It follows that good beneficent care is based on empathic concern. Accepting my suggestions means medical ethics should be prepared to accept a limited thoroughgoing autonomist’s position. Indeed I would argue such a position concurs very well with the practice of medicine, with the exception that accepting such a position would mean accepting a fully autonomous decision by a patient simply to trust his clinicians as to which is the best course of treatment for him.

Thursday, 7 April 2011

Brooks, Brudner and Justice

In my previous posting I suggested that any society which insists on compulsory beneficence or the compulsory acceptance of beneficent care is not a truly flourishing society. I further suggested that the only requirement necessary for a flourishing society is that each should be free to do as seems best for herself provided by so doing she does not harm others. This might be classed as the Millian position. In this posting I want to examine what this suggestion means for the basis of law. My examination will consider the position of Alan Brudner, (2009, Punishment and Freedom: A Liberal Theory of Penal Justice, Oxford) and a critique of that position by Thom Brooks (2011, Autonomy, Freedom, and Punishment, Jurisprudence, Forthcoming available at SSRN: http://ssrn.com/abstract=1791538 ).

Brudner believes we should punish others when they damage or threaten to damage our autonomy rather than because of any harm they cause us. Of course if we damage or threaten someone’s autonomy we harm her. In the light of the above I would suggest Brudner’s position might be described as follows. We should punish others only because of the harm they do to our autonomy and not because of any other harm. It is this re-described position I will consider in this posting even though it is by no means certain Brudner would entirely agree with my re-description. I have suggested the only requirement necessary for a truly flourishing society is that each of us should be free to do as she sees best provided by so doing she does not harm others. In the light of this suggestion it might appear to follow that I believe the basis of the law should be harm to others rather than being restricted to harm to an individual’s autonomy. This appearance is deceptive. Any harm to an autonomous agent also harms her autonomy. If I harm an autonomous agent I do something to her which she would not do to herself. My actions prevent the agent from making a choice she would identify herself with or which satisfies her. My actions damage her autonomy. It follows that all actions that harm autonomous agents also harm their autonomy.

Brooks believes Brudner’s position is unconvincing in some contexts. He points out “that there may be many actions we may want criminalized (e.g., traffic offences) that would not be clearly included” in any criminalization as defined by Brudner. Brooks also points out that suicide, which ends the victim’s autonomy, and moderate alcohol use, which temporarily affects the user’s autonomy, neither of which we would intuitively want to be criminalized, might well be criminalized if criminalization is based on harm to an agent’s autonomy. Someone might argue some traffic offences such as speeding have the potential to damage the autonomy of another and hence their criminalization can be justified on Brudner’s account. She might then proceed to point out that it is possible to respect autonomy in different ways. Firstly it is possible to respect someone’s autonomous decisions and secondly to protect her capacity to make autonomous decisions. Let us assume a patient suffers from some incurable disease and makes an autonomous decision to commit suicide. If criminalization is justified by the damage done by not respecting autonomous decisions, rather than by harm done to an agent’s capacity to make autonomous decisions, then suicide and the moderate use of alcohol should not be criminalized. If criminalization is justified by damage to an agent’s capacity to make autonomous decisions then these actions should be prohibited. How do we choose between respecting autonomous decisions and protecting someone’s capacity to make these decisions? I have argued in previous postings that respect for autonomy is intimately tied to respect for persons and that it is impossible to respect persons if one doesn’t respect their decisions, see for instance http://woolerscottus.blogspot.com/2008_08_01_archive.html .
If my arguments are accepted then it follows that whilst we must both respect someone’s autonomous decisions and protect her capacity to make these decisions, when these two ways of respecting autonomy are incompatible we should give precedence to respecting autonomous decisions. It further follows that Brudner’s justification of criminalization may be able to account for the cases Brooks raises.

Brooks states that he strongly agrees with Brudner’s focus on offering a political, not a moral, theory of punishment. I’m by no means sure this focus on a purely political basis is possible. It seems to me if we accept Brudner’s position, that the legality of some action depends on whether it harms or threatens to harm our autonomy, we must also implicitly accept that the domain of legal responsibility is the same as the domain of legal concern. Accepting the above means we must accept that only autonomous agents can be held legally responsible and also that the domain of legal concern only extends as far as these agents. Accepting that the domain of legal concern only extends as far as autonomous agents seems problematic. Clearly children are not fully autonomous agents and equally clearly a child is of legal concern even though she cannot be held to be legally responsible for her actions. In the case of children this problem might be overcome in the same way as the traffic offences above. Brudner’s position might be slightly modified so that the legality of some action depends on whether it harms or threatens to harm our potential autonomy. However even if this modification is accepted a problem remains. Let us assume that animals cannot be autonomous. Clearly if a farmer or pet owner lets her animals starve to death the fate of these animals should be of legal concern. Even more clearly severely handicapped children and old people suffering from dementia should be of legal concern even though these people will never be autonomous.

In the light of the above it seems to me that the legality of an action cannot be based solely on political concerns about autonomy. Let it be accepted that the only legal limitations a truly flourishing political society may impose on an agent’s autonomous decisions must be to limit the harm her decisions do to others. However what counts as harm sometimes depends on moral concerns completely unconnected to concerns about autonomy. Depending on the moral interpretation of what counts as harm, the constraint harm places on the unbridled exercise of autonomy may vary greatly.

Thursday, 10 March 2011

Why a Truly Flourishing Society must give Priority to respecting Autonomous Decisions over Beneficent Care

In my previous posting I concluded that the only belief all must share in a flourishing society is that each of us should be free to do as seems best to himself provided by so doing he does not harm others. Some might find such a conclusion absurd. Surely, they would argue, a truly flourishing society must foster compassion and aid its less fortunate members. Intuitively if someone is about to commit suicide we should rush to stop him. However if my conclusion is accepted it casts doubt on this intuition. Prima facie my detractors appear to be right. Nonetheless in this posting I want to defend my conclusion.

Before defending my position I must make it clearer. Firstly this freedom should be restricted to sane adults. I believe any adult should be presumed to be sane unless he can be shown to be insane. In addition I believe the grounds of insanity cannot be based on the outcome of someone’s decision. The fact we believe someone to have made a bad or irrational decision is not grounds for ascribing even temporary insanity to him. I do accept that the enormity of some decisions might render someone temporarily insane due to some connected factors; fear for instance. Nevertheless I believe any decision about someone’s sanity must be based on such factors and not the actual outcome of his decision. In the light of the above how should we react if we see someone about to commit suicide? In such a situation speed is usually essential and we do not have time to assess whether there are any factors that render our victim temporarily insane. For this reason we should usually act to save him. However if we are not under the pressure of time and are aware that our potential suicide is subject to no factors that would render him insane, then we should not try to save him even if he would have been grateful later had we done so. For instance I would argue we should not attempt to stop someone who suffers from a chronic, but not necessarily terminal, illness from committing suicide. The above does not mean we should not care about the victim; I do. However I believe the freedom of a sane adult to do as he sees fit must be given priority over acting beneficently towards others.

My reason for believing we must accept all the decisions of most adults, even if we feel these decisions may be harmful to them, provided these decisions won’t harm others, is to respect autonomy. In what follows a freely made decision refers to a freely made decision that may be harmful to the agent but won’t harm others. I would argue if we value autonomy we should accept the freely made decisions of others in order to protect this autonomy. My objector might immediately raise two objections to the above. He might agree with me that we should protect our autonomy and that this might be achieved by accepting all our freely made decisions, but he might then suggest this could be done more effectively by accepting only autonomous decisions. I believe his suggestion to be impractical. In everyday life it is simple to recognise a freely made decision. It is not simple to recognise an autonomous decision. Let us assume we respect all freely made decisions which will not harm the agent or others. But let us also assume that in a situation in which we think the agent’s decision may harm him we will only respect his decision if it is autonomous. In this situation, if we are to act in a caring, sympathetic or compassionate way towards someone, must we first go through something like the informed consent process to protect his autonomy? Caring is a disposition people have. This disposition is natural though I believe it can also be cultivated. The basis of caring is not a cognitive activity and applying an informed consent process in a caring context seems inappropriate. However my objector might raise a much more serious objection. He might agree autonomy should be protected but argue that I am wrong always to give priority to respecting autonomous decisions over caring. He might point out some autonomous decisions are trivial compared to someone’s real needs.
Surely in such situations, he might argue, priority should be given to caring for someone over respecting his autonomous decision.

In order to answer this second objection we must ask why we value autonomy. It seems to me we should respect all autonomous decisions because we value respecting people. People are distinct persons. I would suggest that if we fail to give precedence to respecting the autonomous decisions of others over acting beneficently towards them, we fail to respect them as the kind of creatures who can make their own decisions; we fail to respect them as persons. My objector might argue that failing to respect all autonomous decisions whilst respecting most autonomous decisions does not mean we fail to respect others as the kind of creatures who can make their own decisions. He might further argue that those decisions not respected for beneficent reasons do not usually flow from the agent’s real self. In reply to his second argument I would simply follow Berlin by pointing out that what is important to us is not some idealised real self but our empirical or actual selves (1969, Four Essays on Liberty, The Clarendon Press, page 132). I would point out that failing to respect all autonomous decisions whilst respecting most autonomous decisions means we only respect other people as the kind of creatures who can make most, or even only some, of their own decisions. I’m by no means sure this is showing true respect for persons. Nevertheless I am prepared to concede for the sake of argument that it remains possible that respecting a person might mean respecting most of his decisions and acting in a caring way towards him rather than respecting all the decisions he makes.

I will now consider a second reason why we should respect autonomous decisions. Decisions made at random or which are coerced in some way are not moral decisions. For all moral decisions the agent must feel the decision is his own. He cannot be detached or even semi-detached from his decision. It follows that it is basic to the idea of morality that a moral agent identifies with his decision. I have argued that any decision an agent cares about, identifies with or is satisfied with is an autonomous decision, see some of my previous postings or Frankfurt (1988, The Importance of What We Care About, Cambridge University Press). It follows that if we fail to respect someone’s autonomous decision we narrow the domain of moral agents. Of course morality is not just concerned with moral agents. Children and animals are of moral concern even though animals and young children are not moral agents. Kant believed the heart of morality depends upon the ability of agents to make autonomous choices. I believe any narrowing of the domain of moral agency damages morality in general. Any system of morality in which the number of creatures of moral concern remains constant but in which the number of moral agents decreases is a weakened system. Any system of morality in which moral agents are only part-time agents, because some of their freely made decisions are constrained, is a damaged system. In the light of the above I would suggest any system of morality which gives precedence to acting beneficently over respecting all autonomous decisions makes moral agents only part-time agents and as a result is a damaged system.

Let it be accepted that any flourishing society must be a moral society. I have argued any moral society must be based on respect for autonomy; we cannot have a truly flourishing society without universal acceptance of this fact. Unfortunately some people in a flourishing society might not accept the need to care for others. Equally unfortunately some people might reject beneficent care. We may deplore these facts but nonetheless if we want to live in a flourishing society we must be prepared to accept them. It follows that caring about others cannot be a necessary requirement of a truly flourishing society, though caring is of course desirable. Indeed a society that insists on compulsory caring for others or the compulsory acceptance of beneficent care is not a truly flourishing society. It further follows that the only requirement necessary for a flourishing society is that each should be free to do as seems best to himself provided by so doing he does not harm others.

Friday, 25 February 2011

David Cameron and Social Integration

In my last posting I suggested there is no such thing as a fully integrated society. I further suggested we would be better employed in considering how people function in our society rather than bothering about how well they are integrated into it. In a speech to the Munich Security Conference on 11/02/11 David Cameron argued,
“We have allowed the weakening of our collective identity. Under the doctrine of state multiculturalism, we have encouraged different cultures to live separate lives, apart from each other and apart from the mainstream. We’ve failed to provide a vision of society to which they feel they want to belong. We’ve even tolerated these segregated communities behaving in ways that run completely counter to our values.” He suggested we have brought about this separation by encouraging organisations which believe in separation rather than integration, see http://www.number10.gov.uk/news/speeches-and-transcripts/2011/02/pms-speech-at-munich-security-conference-60293 . In this posting I want once more to consider the idea of a fully integrated society. I also want to consider which beliefs groups within a society must share in order to function adequately in a flourishing society.

What exactly is meant by a fully integrated society? Is it one in which people share a full set of common beliefs or is it one in which people share only some common beliefs? Let us assume a fully integrated society is one in which people share a full set of common beliefs. Mill famously argued,
“Mankind are greater gainers by suffering each other to live as seems good to themselves, than by compelling each to live as seems good to the rest.” (Mill, 1859, On Liberty, quote from Pelican Books 1974 page 72.)
If it is accepted we can replace the word “mankind” by “society” then I would suggest Mill is arguing that a fully integrated society is an impoverished society. I believe we should accept Mill’s argument for two reasons. Firstly if we were to pressurise some members of society to live as the majority see fit by sharing the majority’s beliefs we would diminish these people’s happiness and probably also diminish the overall happiness present in society. Secondly I would suggest any such society becomes closed to new ideas. Any society closed to new ideas becomes a stagnant society and a stagnant society is an impoverished society. It follows that if people aim for a fully integrated society as defined above we must accept that they are creating an impoverished society. In what follows it will be assumed people want a flourishing society. The question now becomes: can we really think of a society in which people share only some common beliefs as a fully integrated society?

An objector might respond that the above question only arises because I am talking about integration in totally the wrong way. She might suggest that we should not be considering a fully integrated society but rather simply an integrated one. She might further suggest that when considering integration we should consider whether people feel at home in that society or alienated from it. Intuitively her last suggestion seems to carry a lot of weight. Nevertheless I am reluctant to accept it for two reasons. Firstly if we consider whether people feel at home in, or alienated from, a society which is not fully integrated, it would appear this society is one in which people share only some common beliefs. Intuitively any society in which people share only some common beliefs does not appear to be an integrated society. If our intuitions clash we have grounds not to trust these intuitions.

Secondly how do we judge whether someone is at home in a society or alienated from it? Perhaps we should simply ask them. We might conduct surveys to answer the question. But surveys are expensive to carry out and it is by no means certain that someone’s answers to questions about integration would always reflect how she actually acts in society. Let us assume that if someone identifies with the society she lives in she must be reasonably well integrated into that society. I believe if someone identifies with something this means she must be wholehearted about or satisfied with what she identifies with. This is a common theme of this blog, see some of my previous postings or (Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, chapter 12.) If I am correct, then if someone identifies with the society she lives in she must be satisfied with that society. I would now suggest that whether someone is satisfied with the society she lives in depends on how well she functions in it. If someone is unable to function in a society it would seem to be hard for her to be satisfied with that society. Conversely if someone functions well in a society it is hard to see what grounds she has to be dissatisfied with that society. Rhetoric about integration into society seems meaningless if we have no means of gauging this integration. In the light of the above we can gauge how well a group is integrated into society by considering how its members function in society. However the question as to whether we can really think of a society in which people share only some common beliefs as a fully integrated society now seems irrelevant. It follows that rhetoric about integration becomes superfluous and we should simply concern ourselves with how different groups function in our society.

If any society is to function it must have some common beliefs. If any community is to function within that society it must share these basic beliefs. Of course some community might theoretically thrive in a society without sharing any of its beliefs, but such a community would be parasitic on that society and would not function within it. The question now naturally arises: what common beliefs must people share in a flourishing society? The answer according to David Cameron is,
“Freedom of speech, freedom of worship, democracy, the rule of law, equal rights regardless of race, sex or sexuality.”
I believe David Cameron is right and that any flourishing society must share these values. At this point I am going to assume that in the context of our discussion the words “belief” and “value” are interchangeable. My objector might claim at this point that even if David Cameron is correct that these are the values our society holds, I am wrong to suggest that any flourishing society must share them. She might point out that China is a flourishing society. I would disagree, and whilst I accept a non-democratic country such as China may flourish economically I do not accept such countries are genuinely flourishing; China for instance has changed little politically since the Tiananmen Square massacre. Even if we accept David Cameron’s view as the right one we must nonetheless remain aware that the domain of shared values is limited. Mill argued that,
“The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise, or even right.” (Page 68)
If we accept Mill’s view, as I do, then we may not compel others to do things even if we think it is right that they should do so. For example some people believe Muslim women should be banned from wearing the burka in western society; Nicolas Sarkozy in France for instance. I personally do not approve of the burka but nonetheless believe that provided Muslim women freely choose to wear it they should be perfectly free to do so, except in certain circumstances in which personal recognition matters; passport control for instance. Accepting this freedom does not mean we must remain passive in respect of the wearing of the burka. Following Mill we may of course criticise, reason with and attempt to persuade Muslim women not to wear the burka, but we may not use compulsion, which rules out legislation. It follows that the only basic belief all must share in a flourishing society is that each of us should be free to do as seems best for herself provided that by so doing she does not harm others. I believe the values David Cameron mentioned come down to this basic belief. Unfortunately for David Cameron it follows that we should accept partially integrated communities which behave in ways that run counter to many of our other values. Fortunately for him however this acceptance does not mean we should encourage such communities which already exist, or the immigration of further such communities. Indeed I would argue we should help and persuade such existing communities to function more fully in our society. Nevertheless we must be prepared to accept these communities provided they do not restrict our freedom or the freedom of their own members, and provided their actions do not harm others. In conclusion it seems to me that rhetoric about how well some ethnic communities integrate into our society is irrelevant and all that matters is that these communities function in society and do not prevent others from functioning.
