Monday, 7 March 2016

Algorithmic Assisted Moral Decision Making


Albert Barqué-Duran wonders whether humanity wants computers making moral decisions. In the face of the coronavirus outbreak, when we are faced with difficult and complex moral decisions, it might be suggested that we need all the help we can get. However, is it even right to outsource any of our moral decisions? If we can't outsource at least part of our moral decision-making, then the whole idea of applied philosophy has very shaky foundations. I believe applied philosophy remains a meaningful discipline, even if some of its practitioners seem to over-elaborate some things in an attempt to justify both their position and the discipline itself. Let us assume outsourcing some moral decisions can be justified; a doctor might, for instance, trust a bioethicist. If it can be right to outsource some moral decision-making to experts in some fields, could it also be right to outsource some moral decision-making to algorithms or machines? In this posting I want to consider this question.

What is a moral decision? It isn't simply a decision with ethical implications, which computers can already make; it is a decision based on moral considerations. It is feasible that in the future computers might make moral decisions completely by themselves, or might aid people to make such decisions in much the same way as computers now aid people to design things. In this posting I want to examine whether humanity might want computers to make moral decisions. I will consider three reasons why it shouldn't. Firstly, it might be argued that only human beings should make moral decisions. Secondly, it might be argued that even if computers can make moral decisions, some of these decisions would not be in the interests of human beings. Lastly, it might be argued that computers might make decisions which human beings don't understand.

First let us assume that some computers could make moral decisions independently of human beings. A driverless car would not possess such a computer, as it makes decisions based on parameters given to it by human beings and in doing so acts instrumentally to serve purely human ends. Personally I am extremely doubtful whether a computer which can acquire the capacity to make moral decisions independently of human beings is feasible in the foreseeable future. Nonetheless such a computer remains a possibility, because human beings have evolved such a capacity. If we accept that such a computer is at least a possibility, do any of the three reasons above justify our fears about it making moral decisions?

Firstly, should only human beings make moral decisions? If any sentient creature has cognitive capacities the same as or better than our own, then it should have the same capacity as we do to make moral decisions. We seem quite happy with the idea that aliens can make moral decisions. Prima facie, provided a machine can become sentient, it should be able and allowed to make moral decisions; in fiction at least we seem to be quite happy about computers or robots doing so.

Secondly, might such a computer make decisions which are not in the interests of human beings? It might, but I would suggest what really matters is that it takes human interests into account. Let us accept that the domain of creatures we could possibly feel sympathy for defines the domain of creatures that merits our moral concern; this includes animals but not plants. If a computer tries to make moral decisions without some form of sympathy, then it might mistakenly believe it is making moral decisions about rocks, shoes and even clouds. Once again I would reiterate that at present a computer which feels sympathy is an extremely fanciful proposition. Let us accept that a computer that cannot feel sympathy cannot make moral decisions independently of human beings. Let us assume a computer capable of feeling sympathy is possible, and that such a computer will also have the cognitive powers to understand the world at least as well as we do. It follows that such a computer might make some decisions which are not in human interests, but that it should always consider human interests; surely this is all we can ask for.

Thirdly, might we not be able to understand some of the moral decisions made by any computer capable of making moral decisions independently of human beings? John Danaher divides this opacity into three forms: intentional, illiterate and intrinsic algorithmic opacity. I have argued above that any computer which can make moral decisions must be capable of feeling a form of sympathy, so I will assume intentional opacity should not be a problem. Illiterate opacity means some of us might not understand how a computer reaches its decision, but does this matter as long as we understand the decision is a genuine moral decision which takes human interests into account? Lastly, intrinsic opacity means there may be a mismatch between how humans and computers capable of making moral decisions understand the world. Understanding whether such a mismatch is possible is fundamental to our understanding of morality itself. Can any system of morality be detached from affect, and can any system of morality be completely alien to us? I have tried to cast some doubt on this possibility above by considering the domain of moral concern. If my doubts are justified, then this suggests that any mismatch in moral understanding cannot be very large.

Let us accept that even if computers capable of making moral decisions independently of human beings are possible, they will only come into existence in the future, probably the far distant future. Currently there is interest in driverless cars having ethical control systems, and we already have computer aided design. It is then at least conceivable that we might develop a system of computer aided moral decision-making. In practice it would be the software in the computer which would aid in making any decisions, so the rest of this posting will be concerned with algorithmic aided moral decision making. Giubilini and Savulescu consider an artificial moral advisor, which they label an AMA; see the artificial moral advisor. In what follows AMA will refer to algorithmic aided moral decision making. Such a system might emerge from a form of collective intelligence involving people in general, experts and machines.

Before considering whether we should trust an AMA system, I want to consider whether we need such a system. Persson and Savulescu argue we are unfit to make the complicated moral decisions we will face in the future and that there is a need for moral enhancement (1). Perhaps such enhancement might be achieved by pharmacological means, but it is also possible our moral unfitness might be addressed by an AMA system which nudges us towards improved moral decision making. Of course we must be prepared to trust such a system. Indeed, AMA might be a preferable option to enhancing some emotions, because enhancing emotions might damage our cognitive abilities (2) and boosting altruism might lead to increased ethnocentrism and parochialism, see practical ethics.

Let us accept there is a need for AMA, provided that it is feasible. Before questioning whether we should trust an AMA system I need to sketch some possible features of such a system. Firstly, when initialising the system a top down process would seem to be preferable, because then we can at least be confident the rules it tries to interpret are moral rules; at present any AMA system using virtue ethics would seem to be unfeasible. Secondly, we must consider whether the rules we build into such a system should be deontological or consequentialist. An AMA system using deontological rules might be feasible, but because computers are good at handling large quantities of data it might be preferable if initially we installed a consequentialist system of ethics. In the remainder of this posting I will only consider AMA based on a consequentialist system of ethics; as this is only a sketch I will not consider the exact form of consequentialism employed. Thirdly, any AMA system operating on a consequentialist basis must have a set of values. Once again when initialising the system we should use a top down approach and install human values. Initially these would be fairly primitive values such as avoiding harm. In order to initialise the AMA system we must specify what we mean by harm. Perhaps the easiest way would be to define harm as suffering by some creature or loss of some capacity by that creature. We might specify the sort of creature which can be harmed by a list of sentient creatures, specify suffering simply as pain, and lastly specify a list of capacities for each creature on our list of sentient creatures. A minimal sketch of such a specification follows.
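To make the top down initialisation above a little more concrete, it might be represented along the following lines. This is only an illustrative sketch in Python; the creature list, the capacities and the crude harm tally are my own assumptions, not features of any actual AMA system.

```python
# A minimal, hypothetical sketch of the top-down value specification described above.
# All names and values are illustrative assumptions, not part of a real system.

from dataclasses import dataclass, field

# The creatures the system may regard as capable of being harmed.
SENTIENT_CREATURES = ["human", "dog", "pig", "chicken"]

# The capacities whose loss counts as harm, listed per creature.
CAPACITIES = {
    "human": ["life", "mobility", "cognition", "autonomy"],
    "dog": ["life", "mobility"],
    "pig": ["life", "mobility"],
    "chicken": ["life", "mobility"],
}

@dataclass
class Harm:
    """Harm is specified as pain suffered by, or a capacity lost by, a sentient creature."""
    creature: str
    pain: float = 0.0                                   # suffering, specified simply as pain
    lost_capacities: list = field(default_factory=list)

    def is_valid(self) -> bool:
        # Only creatures on the sentience list can be harmed.
        return self.creature in SENTIENT_CREATURES

def total_harm(harms: list[Harm]) -> float:
    """A crude consequentialist tally: sum pain plus one unit per capacity lost."""
    return sum(h.pain + len(h.lost_capacities) for h in harms if h.is_valid())
```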

At this juncture we have an extremely primitive system of AMA. This system could be regarded as a universal system which, in the same circumstances, would always make the same decision. A private morality is of course nonsense, but nonetheless we must identify with our moral decisions, and a universal morality might be hard to identify with. At this point the user of such a system might modify it by adding weights to the built-in values. For instance, a user might give greater weight to avoiding harm and acting beneficently, and lesser weight to respecting autonomy, in situations in which these values clash. We now have a primitive system the user might identify with. This system might be further modified in a more bottom up way by the use of two feedback loops. Firstly, the user of a system must inform the system whether she accepts any proposed decision. If the user accepts the proposed decision, then this decision can be made a basis for similar decisions, in much the same way as legal judgements set precedents for further judgements. If the user doesn't accept a particular decision, then the system must make clear to the user the weights which are attached to the values it used in making this decision and any previous decisions it relied on. The user might then refine the system either by altering the weights attached to the values involved and/or by feeding into the system how the circumstances of the current decision differ from those of the past decisions used. Lastly, in this joined-up age, the system's user might permit the system to use the weights attached to values, and the decisions made, by other systems belonging to people she trusts or respects. Employing such a system might be seen as employing a system of collective intelligence which uses both humans and algorithms in making moral decisions. A sketch of these feedback loops follows.
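The weighting and the two feedback loops might look something like the sketch below. Again this is only an assumption-laden illustration: the value names, weights and scoring rule are hypothetical, and a real AMA system would be far more sophisticated.

```python
# An illustrative sketch of user-weighted values plus the two feedback loops described above.

class AMASystem:
    def __init__(self, weights=None):
        # Built-in human values, initially weighted equally (the top-down starting point).
        self.weights = weights or {"avoid_harm": 1.0, "beneficence": 1.0, "autonomy": 1.0}
        self.precedents = []  # accepted decisions, used like legal precedents

    def propose(self, options):
        """Score each option by its weighted value profile and propose the best.
        Each option is (label, {value_name: degree to which the option serves that value})."""
        return max(options, key=lambda o: sum(self.weights[v] * s for v, s in o[1].items()))

    def feedback(self, decision, accepted, new_weights=None):
        """First loop: accepted decisions become precedents.
        Second loop: rejected decisions expose the weights, which the user may revise."""
        if accepted:
            self.precedents.append(decision)
        else:
            print("Weights used:", self.weights)
            if new_weights:
                self.weights.update(new_weights)

    def adopt_trusted_weights(self, other):
        """Borrow the weights of a trusted user's system (a crude collective-intelligence step)."""
        self.weights = dict(other.weights)

# Example use: the user gives greater weight to avoiding harm than to respecting autonomy.
ama = AMASystem({"avoid_harm": 2.0, "beneficence": 1.5, "autonomy": 1.0})
choice = ama.propose([
    ("treat now", {"avoid_harm": 0.9, "beneficence": 0.8, "autonomy": 0.2}),
    ("wait for consent", {"avoid_harm": 0.4, "beneficence": 0.3, "autonomy": 0.9}),
])
ama.feedback(choice, accepted=True)  # the accepted decision becomes a precedent
```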

I now want to return to my original question: should we trust the decisions such an AMA system makes? I want to consider the three reasons outlined above as to why we shouldn't trust such a system. I will conclude that each of these reasons appears to be unsound in this context. Firstly, an objector might suggest that if someone relies on such a system she isn't really making moral decisions. I suggested a moral decision is a decision based on moral considerations. The inbuilt values of the system are moral values, so it would appear any decision made by the system is based on such considerations. However, my objector might now suggest that if someone makes a moral decision she must identify herself with that decision. It would appear that, even if we accept my objector's suggestion, because the system relies on moral values built into it by the user, any decision it makes must be based on values she identifies herself with. Secondly, my objector might suggest that such a system does not serve human interests. The system sketched above is a consequentialist system and it might make decisions which aren't in the user's self-interest; however, because the values built into it are human values, the system should always act in humans' interests. It might of course make bad decisions when trying to serve those interests, but then so do humans themselves. Lastly, my objector might return to Danaher's opacity question and suggest that the user of such a system might fail to understand why the system made a particular decision. I would suggest that because the system has feedback loops built into it this shouldn't occur. I would further point out that because it is always the user who implements the decision, and not the system, the user retains a veto over the system.

This examination has been extremely speculative. It seems to me that whether we would want such computers to make moral decisions depends on the background circumstances. All moral decisions are made against some background. Sometimes this background is objective and sometimes it includes subjective elements. For instance, someone's decision to have an abortion contains subjective elements, and someone is unlikely to use AMA to help in making such a decision. The covid-19 outbreak creates a number of moral questions for doctors treating covid patients with limited resources, decisions which need to be made against a mainly objective background. Such decisions are more amenable to AMA, and perhaps now would be a good time to start designing such systems for emergencies.


  1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.
  2. Christoph Bublitz, 2016, Moral Enhancement and Mental Freedom, Journal of Applied Philosophy, 33(1), page 91.

