Google has developed an autonomous car and major car makers
such as Ford and VW are showing an interest in doing the same. It is reported
that up to ten million such cars might be on the road by 2020, see businessinsider.
I am somewhat doubtful about such a figure, but nonetheless autonomous cars
are coming, and their coming raises some ethical issues. According to Eric
Schwitzgebel
“determining how a car will steer in a risky situation is a
moral decision, programming the collision-avoiding software of an autonomous
vehicle is an act of applied ethics. We should bring the programming choices
into the open, for passengers and the public to see and assess”; see autonomous cars.
Clearly autonomous cars need collision-avoiding software. Intuitively
Schwitzgebel seems to be correct when he argues that an ability to address
moral concerns should be built into this software. For instance autonomous cars
might be programmed not to protect their passengers if by so doing a large number
of pedestrians would be harmed. In this posting I want to examine three questions.
Firstly, is Schwitzgebel correct when he argues that an ability to address moral
concerns should be built into the software? Secondly, is such software actually
possible? Lastly, if it isn’t possible to design software which can make moral
decisions, should we nonetheless permit autonomous cars on our roads?
What does Schwitzgebel mean when he says that the software
of an autonomous car should be able to address moral concerns? In this posting
I will assume he means some rules should be built into a car’s software about
what to do in situations which involve some moral considerations. Does an autonomous
car need such software? It is by no means certain it does. Consider a driver
whose car will collide with either a young pregnant mother or an old man due to
unforeseen circumstances. Does she make a decision about what to do based partly
on philosophy? I suggest she does not. Of course her emotions might kick in
causing her to avoid the pregnant mother. It might then be concluded that if drivers
don’t, or can’t, make a decision based on philosophy, there is no reason
why autonomous cars should do so. Of course autonomous cars should be as safe
as possible for their passengers, other road users and pedestrians. The above
leads to two tentative conclusions. Firstly, provided autonomous cars are as
safe as drivers, it would seem that there is no ethical reason
against their introduction: if autonomous cars are as safe as drivers
then they do no more harm than drivers. Secondly, provided autonomous
cars are even slightly safer than
drivers, it would appear that there is an ethical reason for their
introduction: autonomous cars would then do less harm than drivers. Of course issues
concerning responsibility remain.
What objections can be raised about accepting the above
conclusion? Firstly it might be objected that my example is chosen to mislead
and that in other situations, when the circumstances are much clearer, people
do in fact make decisions roughly based on applied philosophy. For instance a driver
faced with the choice of hitting a concrete stanchion and killing
himself or running into a queue of schoolchildren waiting at a bus stop might
choose to hit the stanchion for moral reasons. I accept that in some extreme
circumstances drivers might make a moral decision. However such a decision
might be based on the driver’s emotions rather than the application of
philosophy. Applying philosophy takes time and may not be a viable
option in a collision situation. Moreover I would suggest that in real-life
situations this second example is just as misleading as the first. A car
crashing into a queue might kill one or two people but it is unlikely to kill a
very large number. It seems to me that only a large number of victims might
enable a driver to make a clear moral decision quickly. I have argued that drivers
don’t usually make moral decisions in collision situations and rarely,
if ever, do so by applying philosophy. Does this mean autonomous cars do not
need a controlling system that takes account of moral considerations? The above
seems to suggest that they don’t. However let us assume drivers should take
moral considerations into account in collision situations provided this is
possible. It follows that autonomous cars should have a controlling
system that takes account of moral considerations in collision situations,
provided this is possible. However if this isn't possible, and autonomous cars are at least as safe
as drivers, the inability to make moral decisions shouldn't prevent the
introduction of autonomous cars.
Designing systems that enable autonomous cars to make decisions
which include moral considerations will be difficult. Perhaps then rather than
designing such systems it might be better to make autonomous cars avoid the
circumstances in which the need to make moral decisions arises. Cars and
pedestrians don’t mix so perhaps it might be safer to limit autonomous cars to
motorways and other major roads. Doing so might have the additional benefit
that it might prove easier to design autonomous cars to avoid dangerous
circumstances in which they might need to make moral decisions rather than to make
such decisions. Unfortunately such a course of action, whilst desirable, would
seem to be impractical unless the way people use cars changes radically. People
want cars to take them home, to work, to go shopping and to take their children to
school. Satisfying these wants means mixing cars and pedestrians. Cars that
don’t satisfy these wants would be unwanted. It would appear that, even if it is
very hard to do, an attempt should be made to program the collision-avoiding software of
autonomous cars to enable them to take into account moral considerations,
provided this is possible.
I have argued that Schwitzgebel is correct in his assertion
that the collision-avoiding software of an autonomous car should include moral
considerations provided this is possible. Let us turn to the second question I
posed: is such software possible? I have argued that in an emergency situation
in which people have to make moral judgements they do so quickly, based on
their emotions. Cars don’t have emotions, so it follows that the basis of any
software system for making moral decisions in autonomous cars will be different
from that used by drivers and must instead be a set of rules. What sort of rules? Schwitzgebel
argues that the rule of protecting an autonomous car’s occupants at all costs is
too simplistic. I would question whether such a rule is indeed a moral rule at
all. Might a strictly utilitarian rule of maximising the lives saved in a crash
situation be adopted? Schwitzgebel points out such a rule would unfairly
disregard personal accountability. For instance what if a drunken pedestrian
steps in front of a car? Isn’t he accountable for his actions? If so shouldn’t his
accountability be taken into account when assessing the consequences of any
decision about the oncoming collision? Could a driver spot that a pedestrian
was drunk in an emergency situation? I would suggest he couldn’t. At present an autonomous
car’s software certainly couldn’t. It follows that any rules used by autonomous cars
must be primitive rules which don’t fully represent our own understanding of
moral rules. It seems possible that if we are prepared to accept some primitive
rules built into an autonomous car’s software then it might be possible for such
a car’s software to make some primitive moral decisions.
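To make the contrast between these primitive rules a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the scenario, harm estimates and function names are my own assumptions rather than anyone’s actual collision-avoiding software, and it shows only that the occupant-protection rule and a crude utilitarian rule can each be stated in a few lines, while neither captures considerations such as accountability.

```python
# A hypothetical sketch of two "primitive rules" for a collision situation.
# The scenario, names and harm estimates are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class CandidateAction:
    name: str
    occupants_harmed: int    # estimated harm to the car's occupants
    pedestrians_harmed: int  # estimated harm to pedestrians

def protect_occupants_rule(actions):
    """The 'protect occupants at all costs' rule: minimise harm to occupants only."""
    return min(actions, key=lambda a: a.occupants_harmed)

def simple_utilitarian_rule(actions):
    """A strictly utilitarian rule: minimise total harm, ignoring accountability."""
    return min(actions, key=lambda a: a.occupants_harmed + a.pedestrians_harmed)

# Illustrative collision scenario: swerve into a stanchion or continue towards pedestrians.
options = [
    CandidateAction("swerve into stanchion", occupants_harmed=1, pedestrians_harmed=0),
    CandidateAction("continue ahead", occupants_harmed=0, pedestrians_harmed=3),
]

print(protect_occupants_rule(options).name)   # continue ahead
print(simple_utilitarian_rule(options).name)  # swerve into stanchion
```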
Let us consider my last question. If the rules involving
moral decisions which are built into an autonomous car’s software must, at least
for the present, be rather primitive, should we permit the use of such cars?
I will now argue we should. Firstly, I have argued that drivers don’t, or only very
rarely, make moral decisions in collision situations. There is no legal requirement
that drivers should make such decisions and I can see no reason why a
higher standard should be applied to autonomous cars. Indeed autonomous cars
might be safer. Drivers can get drunk, tired and speed. Autonomous cars can’t
get drunk or tired and their software can control their speed. Let us accept
that any moral rules built into the software of an autonomous car must be
concerned with its safe use. Let us also accept that being safe involves
avoiding harm. Now let us consider an autonomous car with the primitive rule of
protecting its occupants at all costs. This car is safe for its occupants and
avoids harming them. I would suggest we should not permit the use of autonomous
cars using such a simple rule: it is only safe for some. It might appear that
the introduction of autonomous cars would only be acceptable if their software
makes them safe for the public at large and avoids harming them. Achieving the
above would be difficult. However the above might be amended: the introduction
of autonomous cars would only be acceptable if their software makes them as safe
to the public as driven cars. I have
argued above that drivers only have the time to make very limited moral
decisions. It should be possible to create software for autonomous cars which
can make the same sorts of moral decisions as drivers do. Indeed it might be
harder to create software which recognises roadside features such as
pedestrians than software which addresses such moral concerns.
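The amended condition is, at bottom, a simple comparison of harm rates. The sketch below merely states that test explicitly; the function name and the figures are hypothetical assumptions used for illustration, not real accident statistics.

```python
# Hypothetical sketch of the amended acceptability test: autonomous cars are
# acceptable only if they harm the public at no greater a rate than driven cars.
# The rates below are made-up illustrative figures, not real statistics.

def acceptable_to_introduce(autonomous_harm_rate: float, driven_harm_rate: float) -> bool:
    """True if autonomous cars are at least as safe for the public as driven cars."""
    return autonomous_harm_rate <= driven_harm_rate

# e.g. harms to the public per million miles travelled (illustrative numbers)
print(acceptable_to_introduce(autonomous_harm_rate=0.8, driven_harm_rate=1.0))  # True
```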
What conclusions can be drawn from the above? Firstly, provided
autonomous cars are as safe as drivers, their use should be permissible.
Secondly, provided autonomous cars are even slightly safer than drivers, it would appear that there is an
ethical reason for their introduction. Problems with accountability and
insurance remain, but these problems don’t seem insurmountable.