Tuesday 25 March 2008

Practical Rationality

In chapter 7 of “The Ethics of Care and Empathy” Slote questions whether practical rationality is a meaningful concept. In this posting I will deal directly with this question rather than commenting on Slote’s arguments.
Hume famously argued that it is not contrary to reason to prefer an acknowledged lesser good to an acknowledged greater good (A Treatise of Human Nature, Oxford University Press, 1978, originally 1739-40, page 416). It is important to be clear that Hume is referring to practical reason and not to theoretical rationality. Theoretical rationality is concerned with the truth or falsity of statements. Practical rationality, roughly speaking, is concerned with someone making decisions about what would make her life go well. If Hume’s statement is accepted it is hard to see why practical reason should even exist. This follows because by definition practical rationality exists to be useful, and if Hume is correct it is hard to see how any sort of rationality could be useful in helping agents decide which course of action to follow. In this posting I will argue these difficulties fade away when practical rationality is not considered in isolation but in connection with the concept of autonomy.
Lionesses often co-operate when hunting. They do this by forming an ambush to capture their prey. Such behaviour appears to be both practical and to follow a rational course. Perhaps the behaviour of the lionesses is rational, but any reasoning involved is unconscious and would appear to be non-reflective. Intuitively, even if the lionesses’ behaviour is rational, any reasoning involved does not appear to be an instance of practical rationality. This suggests practical rationality can only be applied by creatures capable of conscious reasoning. Let us consider a conscious robot, assuming such a thing is possible, consciously following a rational course of action. Would such a creature be using practical rationality? In science fiction robots are often thought of as excellent examples of rational creatures because their thoughts are uncontaminated by emotions. However, even in the vacuum of outer space practical reason does not exist in isolation. Psychologists usually define practical rationality as whatever kind of thinking best helps people to achieve their goals. It would seem practical reason can only be useful to creatures with conscious goals. It appears to follow that the question as to whether a conscious robot can use practical reason depends on whether such a robot can have conscious goals. It might be further argued this depends on whether the conscious robot is autonomous. It seems to me most questions about the status of more advanced robots and computers should not be tackled by considering their rationality alone, but rather by considering whether such entities can ever be regarded as autonomous.
Autonomy is often thought of as simply someone’s second-order capacity to reflect on her desires and to accept or change these desires in the light of her goals and values (Dworkin (1988) The Theory and Practice of Autonomy, Cambridge University Press, page 20). Such a simple definition concurs with our intuitions. The trouble with such an intuitive definition is that there seems to be very little to differentiate autonomy, so defined, from an individual’s capacity for practical reason. Practical reason might be defined as autonomy lite. Moreover, an autonomous action might simply be defined as the product of extra careful practical reasoning. It would appear that if the above simple definition of autonomy is adopted, the question as to whether a conscious robot is capable of using practical reason cannot depend on whether the robot in question is autonomous. This appearance depends on the conclusion that it is impossible to differentiate between autonomy and practical reason in a meaningful way. I suggested above that intuitively an autonomous agent must reflect on her decisions. I would also suggest that intuitively an autonomous agent must identify in some way with her decisions. If it is accepted that an autonomous agent must identify with her decisions, then it might be possible to differentiate between autonomy and practical reason. What is meant by an agent identifying with her decisions? The fact that an agent reflects long and hard about a decision does not by itself guarantee she identifies with her decision. Indeed, the fact that an agent thinks long and hard about her decision may be a sign that she is unable to identify with it. It would seem hard for an agent to identify herself with any decision she doesn’t care about at all. In such circumstances the agent might as well make a random choice. It can be concluded that if an agent identifies with her decision then she must care about her decision.
It is important to be clear about what is meant by “caring about a decision” in the context of an agent identifying with her decision. The fact that an agent cares a great deal about some decision she faces does not mean she identifies with the actual decision she makes. For instance, a young woman may have both academic abilities and tennis prowess. Furthermore, she may be faced with a decision about which of these two capacities she should develop. She may care a great deal about this decision but still be unable to make a clear cut choice. Let it be accepted without argument that a clear cut choice is one the agent identifies with. If the above is accepted, then caring about a decision means an agent makes a choice she identifies with and cares about. The question now naturally arises: what is meant by a choice one cares about? I concur with Frankfurt (1988, The Importance of What We Care About, page 83) in believing that in this context “caring about a choice” means someone becomes vulnerable to the losses and susceptible to the benefits dependent on whether what she has chosen is diminished or enhanced. Let it be assumed that in the above example the young woman chooses to concentrate on her tennis career. Let it be further assumed our potential tennis star will be pleased if she is successful in her career and her choice is accepted by those close to her. Furthermore, she will be hurt if her career is unsuccessful and her choice is not accepted. Let it be still further assumed our potential star also takes pleasure in her academic abilities, and that if she decides to follow a tennis career then she will feel damaged by being unable to develop these abilities in a way that would satisfy her. In these circumstances it must be questioned whether our potential star can fully identify with her decision to concentrate on her tennis career.
It seems to follow that our potential star can only identify with her decision to follow a tennis career provided she cares more about this career than about her academic aspirations. Accepting the above leads to the conclusion that if an agent is to identify with something she must care about it more than something else it conflicts with. A further question now arises: how do we know an agent clearly cares about something in preference to another thing she also cares about? It might be argued that if someone chooses X in preference to Y, is satisfied with her decision and exhibits no restlessness to change her decision, then she can be said to have made a choice she cares about. It follows that if our potential tennis star is satisfied with her decision and shows no restlessness to change this decision, she clearly cares about her tennis career and identifies with her choice. Near the beginning of this posting I suggested the answer to the question as to whether a conscious robot can use practical reason depends on whether such a robot is autonomous. In the light of the above discussion, the question as to whether a robot is autonomous depends on whether a robot can care about or love. The question as to whether robots might care about or love, and the consequences of their doing so, is not a purely intellectual exercise; these ideas were explored in a light-hearted way in Spielberg’s 2001 film AI.
It seems possible for a non-autonomous mother to go out binge drinking, which she doesn’t care about, in preference to caring for her child, whom she loves a great deal. Such a decision has nothing to do with practical rationality, for without goals and values the agent cares about, practical rationality can serve no useful purpose. It follows that a non-autonomous agent may indeed choose something she cares very little about in preference to something she cares for greatly. Intuitively such a decision appears to be an incompetent one; however, in the light of the previous discussion the reason it appears incompetent is not simply that it is irrational but rather that it is non-autonomous. Accepting the above means it can be concluded that it is impossible for an autonomous agent to prefer a good she acknowledges is lesser to a good she acknowledges is greater. To understand this conclusion, consider an autonomous agent choosing between two conflicting options she cares about. If this agent could choose the option she cares about less rather than the option she cares about more, then because she is autonomous she could do so without any restlessness to change her decision. Such a situation seems inconceivable. I have argued practical rationality is of no use to non-autonomous agents. This does not mean practical rationality does not exist or is of no use to any agents. At the beginning of my posting I suggested that if Hume’s statement is accepted it is hard to see why practical reason should even exist. This suggestion seems baseless when practical rationality is considered in conjunction with autonomous agents: so considered, practical rationality is useful in helping agents decide which course of action to follow in order to achieve their goals.

Historic wrongdoing, Slavery, Compensation and Apology

      Recently the Trevelyan family said it is apologising for its ancestors’ role in slavery in the Caribbean, see The Observer. King Ch...