Tuesday, 24 May 2011

Roboethics and Autonomy


In this posting I want to examine roboethics and responsibility. I will do this by considering autonomy. The idea of roboethics seems to have originated with Asimov’s three laws of robotics, which he proposed in 1942. Here I will consider a revised draft of these laws based on an EPSRC/AHRC initiative in roboethics; see Alan Winfield. These revised laws are:
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws and fundamental rights & freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed (with responsibility?).
I agree that humans rather than robots are responsible agents. I also believe robots such as those working on car production lines are just machines, like washing machines. However, I am going to argue that advances in robotics are likely to smear the attribution of responsibility for events that happen due to a robot’s actions.
Let it be assumed all possible robots are used in a similar way to those working on car production lines. If a machine causes damage due to a design fault then the manufacturer is at fault; for instance, Toyota admitted responsibility for faults with some of its cars. If, however, a machine causes damage due to the way it is operated then the user is responsible. A driver of a Toyota car who crashes due to excessive speed is solely responsible for the crash provided there are no mechanical faults or other extenuating factors. The legal responsibility attached to the use of robots should be the same as the legal responsibility attached to the use of any other machine such as cars, washing machines or lawn mowers. I would suggest that accepting my assumption means accepting there is no real need for roboethics. Roboethics can be safely confined to the realm of sci-fi and is of no practical importance.

However I believe my assumption is false. Robots are indeed instruments we use to obtain our goals, as is any machine. But the operations of a robot need not be the same as those of any other machine. If I use a knife or washing machine the operation of the knife or washing machine is completely determined by me. The goals of a robot are completely determined, but the way it operates need not be. For instance a driverless train may be classed as a robot if it has no human on board, even to act as a controller in an emergency, such as those on the Copenhagen Metro. A computer program that automatically trades on the stock market might also be classed as a (ro)bot, provided we think the terms instrument and machine are interchangeable in some contexts. If it is accepted that (ro)bots can operate in ways that are not completely determined by the designer or operator then there is indeed a need for roboethics.

Let us consider a (ro)bot that independently trades in stocks for a firm of stockbrokers. Let it be assumed for the sake of simplicity that the goal set by the roboticist who designed this machine is only to maximise profits for the stockholders. Let it be further assumed this machine buys stocks in one company for only some of its stockholders, inflating the share price. It then sells the shares other stockholders hold in this company at this inflated price. This sale causes the price to return to its former level. Let it be assumed such a strategy maximises profits for stockholders in general, but that some stockholders gain at the expense of others. The actions the machine takes are unethical. Clearly this machine contravenes the 4th law of robotics as set out above. This example raises two questions: firstly, what can be done to prevent the contravention occurring again, and secondly, who is responsible for it?
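A toy sketch may make the distributional point concrete. The following Python snippet, with entirely hypothetical numbers and names, simulates the buy-to-inflate then sell-at-the-peak pattern described above; it is an illustration of the scenario only, not a model of any real trading system, and in this simplified closed version the gain to one group is exactly the other group’s loss (the posting additionally assumes the strategy raises profits overall).

```python
# Toy illustration (hypothetical numbers, not a real trading model) of the scenario above:
# the machine buys for group A, inflating the price, then sells group B's holdings at the peak.

def run_toy_scenario(base_price=10.0, price_impact=2.0, shares=100):
    """Return the outcome for each client group after the price falls back to its former level."""
    inflated_price = base_price + price_impact

    # Group A buys at the inflated price its own purchase created; the price then falls back.
    group_a_outcome = (base_price - inflated_price) * shares   # a loss
    # Group B sells its existing holding at the inflated price before the fall.
    group_b_outcome = (inflated_price - base_price) * shares   # a gain

    return group_a_outcome, group_b_outcome

if __name__ == "__main__":
    a_result, b_result = run_toy_scenario()
    print(f"Group A: {a_result:+.2f}  Group B: {b_result:+.2f}")
```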

It seems to me there are two possible ways of ensuring the actions a machine takes are ethical. Firstly, the goals of the machine must be adequately specified in order to prevent it producing unethical actions. Secondly, if the machine can operate in a way not completely determined by the designer or operator then the machine must have some inbuilt standards against which it can check the morality of any of its proposed actions. Let us assume the goals of a machine can be adequately specified so as to prevent it producing immoral actions. Let us consider this assumption in conjunction with my example of a stock trading machine. The 2nd law of robotics states that humans rather than robots are responsible agents. In my example I would worry that even though humans are responsible for the machine’s goals, it is by no means clear who should be held accountable for any failings; responsibility becomes smeared. In my example the machine’s designer is unlikely to be fully aware of all the possibilities of stock trading, whilst the machine’s owners may be unaware of how to build goals into a machine. Someone might object that my worry is illusory and that the stockbrokers involved must be able to fully set out the machine’s goals to the designer. However I have other important worries. Firstly, I worry whether, as the tasks machines are able to perform become more complex, it remains possible to specify the goals of a machine so completely that none of its actions can be considered immoral. Secondly, I worry about assuming that the ends a designer sets for a machine justify whatever means it takes to achieve those ends. Because of these worries I would suggest that, instead of attempting to prevent a machine producing immoral actions by adequately framing the machine’s goals, it would be preferable to have a machine with some inbuilt standards against which it can check the morality of any of its proposed actions.
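As a rough illustration of this second approach, the Python sketch below (all names and rules are hypothetical, chosen only to fit the stock trading example above) separates the machine’s goal, which generates proposed actions, from a set of inbuilt standards that every proposal must pass before it is carried out.

```python
# A minimal sketch of the second approach: the machine screens each proposed action
# against a set of inbuilt moral standards before acting. All names are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ProposedTrade:
    client: str
    action: str          # e.g. "buy" or "sell"
    benefits: List[str]  # clients expected to gain from this action
    harms: List[str]     # clients expected to lose because of this action

# A moral standard is simply a predicate over a proposed action: True means "permissible".
MoralStandard = Callable[[ProposedTrade], bool]

def no_client_harmed_for_another(trade: ProposedTrade) -> bool:
    """Reject trades whose expected gains for some clients depend on losses to others."""
    return not (trade.benefits and trade.harms)

@dataclass
class TradingMachine:
    standards: List[MoralStandard] = field(default_factory=list)

    def execute(self, trade: ProposedTrade) -> bool:
        # The goal (maximise profit) still drives which trades get proposed,
        # but every proposal must pass the inbuilt standards before it is acted on.
        if all(standard(trade) for standard in self.standards):
            print(f"Executing {trade.action} for {trade.client}")
            return True
        print(f"Blocked {trade.action} for {trade.client}: fails an inbuilt standard")
        return False

machine = TradingMachine(standards=[no_client_harmed_for_another])
machine.execute(ProposedTrade("client_b", "sell", benefits=["client_b"], harms=["client_a"]))
```

On this picture the standards act as a filter on proposed actions rather than as a reformulation of the goal, which is why drawing them up raises the questions about responsibility discussed below.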

Someone might object that if my suggestion is adopted then the 2nd law of robotics is contravened. She might point out that I am proposing giving autonomy, albeit limited, to robots, and that with that autonomy comes responsibility. I will now argue that my suggestion means giving some form of self-government to robots but it does not mean giving them even limited autonomy. Self-government in this context simply means having the capacity to independently decide on and follow some course of action. A child can be self-governing in a domain specified by her parents. A robot can also be self-governing in some limited domain specified by its operators, for example a planetary rover. However self-government as defined above does not mean being autonomous. I have argued repeatedly in this blog that autonomy requires that an agent must ‘care about’ or love something. It seems extremely doubtful to me whether robots will ever ‘care about’ or love anything. It is certain that present-day robots cannot care. It follows that it must be doubtful whether robots can be autonomous. It further follows that even if a robot has some limited self-government it cannot be held responsible for its actions if responsibility requires autonomy. It still further follows that my suggestion, that (ro)bots have inbuilt standards against which they can check the morality of any proposed action, does not contravene the 2nd law of robotics.

Nevertheless my suggestion does have worrying implications for responsibility. Only humans can be held responsible for the actions of (ro)bots, but adopting my suggestion would mean this responsibility becomes smeared. At present the military is showing great interest in military robots, see (1). Let it be assumed such a robot kills a large number of non-combatants in error. Let it be further assumed this machine had a set of moral standards built into its software. The outcry would be enormous and there would be demands that someone should be held responsible. Clearly, as I have argued, the robot cannot be held responsible. Who then should be responsible: the operator, the software designer or the ethicist who drew up the moral standards? I would argue no one is clearly responsible, as responsibility has become smeared among a large number of people due to the way the machine operates. Because of this smearing of responsibility it might be argued that no (ro)bots should ever be made which operate with an inbuilt set of moral standards. However I would argue this smearing of responsibility is not a problem peculiar to robotics but is an inevitable consequence of the complexity of modern life. Let us assume a pilot in a plane over Libya fires a missile, killing a large number of non-combatants in error. Once again who is responsible: the pilot, his commanders, or the systems designers? Once again responsibility appears to become smeared. If it is accepted this smearing of responsibility is inevitable due to the complexity of modern life, rather than being peculiar to robotics, then I see no reason why (ro)bots should not be built with an inbuilt set of moral standards provided this is done with great care. I would further suggest that both Philosophy and the Law need to seriously consider the consequences of this smearing of responsibility.

  1. Wendell Wallach and Colin Allen, 2009, Moral Machines, Oxford University Press, page 20.
