In an interesting post on philosophicaldisquisitions, John Danaher asks whether someone can be friends with a robot. He argues that virtue friendship with a robot might be possible. Virtue friendship involves two entities sharing values and beliefs which benefit them. Let us
accept that any entity which is capable of having values and beliefs can be
regarded as a person. Perhaps one of the great apes might be regarded as a
person but can the same be said of a robot? Does it make sense to say a robot
might have rights or can be regarded as a person? In what follows I will limit my discussion to robots, but what I say could equally well be applied to some advanced AI system or set of algorithms. At present the actions of a robot have some purpose, but this purpose doesn't have any meaning which is independent of human beings; a robot's actions have no more meaning independent of us than the action of the wind in sculpting a sand dune. In the future it is conceivable that this situation might change, but I am somewhat sceptical and believe that at present there is no need to worry about granting rights to robots akin to human rights. In this posting I will argue that the nature of belief means worrying about robot personhood is both premature and unnecessary.
How should we regard the actions of a robot if it has no
beliefs? What are the differences between the wind sculpting a sand dune and
the actions of a robot? One difference is that even if neither the wind nor a robot has beliefs, a robot's actions are nonetheless in accordance with someone's beliefs, namely those of its designer or programmer. But does this difference
matter? A refrigerator is acting in accordance with our belief that it will
keep our food cold. If we don’t want to grant personhood to refrigerators, why
should we do so for robots? Perhaps then we might implant some beliefs into
robots and after some time such robots might acquire their own emergent
beliefs. Perhaps such robots should be regarded as persons. Implanting such
beliefs will not be easy and may well be impossible. However, I see no reason, even
if such implantation is possible, why we should regard such a robot as some
sort of person. If a person has some belief, then this belief causes him to
behave in certain ways. How do we implant a belief in a robot? We instruct the
robot how to behave in certain circumstances. In this situation the robot of course behaves in accordance with the implanted belief, but the primary cause of this behaviour is not the implanted belief but rather a belief of those who carried out the implantation. A robot in this situation cannot be said to be behaving authentically, and I can see no reason why we should attribute personhood to a robot which uses implanted beliefs as outlined above.
At this point it might be objected that even if a robot
shouldn’t be considered as a person it might be of moral concern. According to
Peter Singer, what matters morally is not whether something can think but whether it can feel. Animals can feel and should be of moral concern; present-day robots can't and shouldn't. Present-day robots are made of inorganic materials such as steel and silicon. However, it might be possible to construct a robot partly from biological material, see Mail Online. If such a robot could feel then it should be of moral concern, but this doesn't mean we should consider it a person; frogs can feel and should be of moral concern, but they aren't persons. Nonetheless I would suggest that the ability to feel is a necessary precursor of believing, which in turn is a precursor of personhood.
For the sake of argument let us assume that it is possible to create a robot for which the primary cause of its behaviour is its implanted or emergent beliefs. What can be said about this robot's beliefs? When such a robot decides to act, the primary cause of the action is its internal beliefs; it is acting in a manner which might be regarded as authentic. How might such a
robot’s beliefs and actions be connected? Perhaps they are linked by Kant’s hypothetical
imperative. The hypothetical imperative
states,
“Whoever wills an end also wills (insofar as reason has decisive influence on his actions) the indispensably necessary means to it that are within his power.” (1)
Some might suggest that having a set of beliefs and accepting Kant's hypothetical imperative are necessary conditions for personhood; some might even regard them as sufficient conditions. They might
further suggest that any robot meeting these conditions should be regarded as a
candidate for personhood. Of course it might be possible to design a robot
which conforms to the hypothetical imperative, but conforming is not the same
as accepting. Let us accept that anyone or anything that can be regarded as a person must have some beliefs and must accept, rather than merely conform to, the hypothetical imperative.
What does it mean for someone to accept the hypothetical
imperative? Firstly, he must believe it is true; the hypothetical imperative is one of his beliefs. Someone might believe that he is made up of atoms, but this belief doesn't require any action even when action is possible. The hypothetical imperative is different because it connects willed ends with action. Can the
hypothetical imperative be used to explain why a robot should act on its
beliefs, be they implanted by others or emergent? Kant seems to believe that
the hypothetical imperative can be based on reason. I will argue that reason can only give us reason to act in conjunction with our caring about something, and that the hypothetical imperative only makes sense if an agent views beliefs in a particular way. What does it mean to will an end? I would suggest that if someone wills an end, he must care about that end. If someone doesn't care
about or value some end, then he has no reason to pursue that end. What then
does it mean to care about something? According to Frankfurt, if someone cares about something he becomes vulnerable when that thing is diminished and benefits when it is enhanced. (2) People by nature can suffer and feel joy; robots can't. It is worth noting that animals can also suffer and feel joy, making them more like people with rights than like robots. The above raises an interesting
question. Must any entity which is capable of being conscious, robot or animal,
be able to suffer and feel joy? If we accept the above, then the ends we will must be things we care about. Moreover, if we care about ends then we must value them. It follows that if the hypothetical imperative is to give us cause to act on any belief, that belief must be of value to us. The hypothetical imperative can therefore only be used to explain why a robot should act on its beliefs provided such a robot values those beliefs, which requires it becoming vulnerable. A right is of no use to any entity for which the implementation of the right doesn't matter, that is, an entity which isn't vulnerable to the right not being implemented.
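The structure of this argument can be set out more explicitly. Here is a minimal schematic sketch in my own shorthand, not notation drawn from Kant or Frankfurt: let W(e) mean an agent wills end e, C(e) that the agent cares about e, V(e) that the agent values e, and Vul(e) that the agent is vulnerable to e's fate.

```latex
% A schematic rendering of the argument above.
% The predicates W, C, V and Vul are my own shorthand, not the sources' notation:
% W(e): the agent wills end e;   C(e): the agent cares about end e;
% V(e): the agent values end e;  Vul(e): the agent is vulnerable to e's fate.
\[
W(e) \rightarrow C(e), \qquad
C(e) \rightarrow V(e), \qquad
V(e) \rightarrow \mathit{Vul}(e)
\]
\[
\therefore\quad W(e) \rightarrow \mathit{Vul}(e)
\]
```

On this reading, an entity that cannot be vulnerable cannot genuinely will ends at all, which is why the hypothetical imperative gets no grip on it.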
I have argued that any belief which causes us to act must be of
value to us and that if we find something valuable we are vulnerable to the
fate of the thing we find valuable. What then does it mean to be vulnerable? To
be vulnerable to something means that we can be harmed. Usually we are vulnerable to those things we care about in a psychological sense. Frankfurt appears to
believe that we don’t of necessity become vulnerable to the diminishment of the
things we value by suffering negative affect. He might argue we can become
dissatisfied and seek to alleviate our dissatisfaction without suffering any
negative affect. I am reluctant to accept that becoming vulnerable can be satisfactorily explained by becoming dissatisfied without any negative affect. It seems to me that being dissatisfied must involve some desire to change things, and that this desire must involve some negative affect. I would argue that being vulnerable to those things we value involves psychological harm, and that this harm must involve negative affect.
Let us accept that in order to be a person at all someone
or something must accept and act on the hypothetical imperative. Let us also
accept that the hypothetical imperative only gives someone or something reason to act on some belief provided that someone or something values that belief. Let us still further accept that to value something, someone or something must care about what they value, and that caring of necessity must include some affect. People feel affect and so are candidates for
personhood. It is hard to see how silicon-based machines or algorithms can feel any affect, positive or negative. It follows that it is hard to see why silicon-based machines or algorithms should be considered candidates for personhood. It appears the nature of belief means any worries concerning robot personhood, when the robot's intelligence is silicon based, are unnecessary. Returning to my starting point, it would appear that it is acceptable for young children to have imaginary friends but delusional for adults to believe they have robotic friends. However, I will end on a note of caution. We don't fully
understand consciousness so we don’t fully understand what sort of entity is
capable of holding beliefs and values. It follows we cannot categorically rule
out a silicon machine becoming conscious. Perhaps also it might become possible
to build some machine not entirely based on silicon which does become
conscious.
- Immanuel Kant, 1785, Groundwork of the Metaphysics of Morals.
- Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 83.