Monday, 30 January 2023

Engaging with Robots

 

In an interesting paper Sven Nyholm considers some of the implications of controlling robots. I use the idea of control to ask a different question about robots and AI more generally; in what follows the term robot will refer to both robots and AI in general. Using the word control hints at agency. We talk of driving cars and operating machinery, but of controlling ourselves, others, crowds and even our pets. If the things we control have agency, then what and how we control them matters morally. Self-control is a virtue and controlling another competent adult a vice. Who we control matters for humans, and what we are controlling matters for robots. The question this posting will consider is how we can know the nature of what we are controlling when considering robots.

 

If a robot is simply a tool then using a robot is an extension of our agency, and when controlling a robot we are controlling our own agency. In this context controlling a robot raises questions as to how well we can understand our own agency, which I won’t consider here. Can a robot have some sort of agency? I will assume here that for something to possess agency it must have some consciousness, though some might disagree. If we accept the above, then when considering controlling robots it is important to consider whether a robot can be conscious.

 

I am somewhat pessimistic about the chances that a robot might be conscious, and I certainly believe that none of today’s robots are. However, most animals are material creatures and appear to be conscious, so it seems possible that we might be able to construct other conscious material creatures such as robots. At the present time we don’t understand what makes us conscious any better than Descartes did. It is possible that there are some things we can never know, as shown by Turing’s halting problem; we shall return to Turing later. The nature of consciousness might be one of these things. However, most people seem to believe we might someday be capable of understanding the nature of consciousness. If we can understand the nature of consciousness then it seems probable that we could create conscious robots, and that my pessimism about doing so is unjustified. Of course such robots might be constructed from materials other than metal and silicon. If at some future date we ask how we should control robots, then we should first ask whether we are trying to control a conscious or an unconscious robot. This question matters morally. For instance, how we should control and treat a sex robot would depend on whether it was conscious or not. Accepting this of course doesn’t mean how someone treats an unconscious sex robot doesn’t matter morally, see Inner Virtue. The rest of this posting will consider how we might tell whether a robot is conscious or not.
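
To make that aside concrete, here is a minimal sketch in Python of Turing’s diagonal argument. It is an illustration only: the decider halts is a hypothetical function, and the point of the sketch is precisely that no such function can exist.

# Suppose, for contradiction, that halts(program, argument) returns True
# whenever program(argument) halts and False whenever it runs forever.

def troublemaker(halts):
    def g(x):
        # Do the opposite of whatever the decider predicts for g(g).
        if halts(g, g):
            while True:   # predicted to halt, so loop forever
                pass
        return None       # predicted to loop, so halt at once
    return g

# Feeding g to itself is contradictory: halts(g, g) == True implies that
# g(g) loops forever, while halts(g, g) == False implies that g(g) halts.
# So no halting decider can exist, and some questions are unanswerable.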

 

Because we don’t have a theory of consciousness it is difficult to design a test to decide if someone or something is conscious. If someone is in a coma we might simply prod her, but this is unlikely to work with robots because of their construction. At this point someone might question whether we need such a test. He might point out that by the time such a test is needed we might have a viable theory of consciousness. In this scenario, if we desire a conscious robot we simply build it in accordance with the theory. I accept my objector’s point but in turn point out that we don’t have such a theory yet. Perhaps one way of building such a theory would be to attempt to build a conscious robot, and in this scenario we would need a test for robot consciousness. Because of these difficulties we might fall back on the Turing test. This basically involves having a conversation through a terminal with a robot, and if we cannot tell the difference between this conversation and one with a person then we should assume that the robot is conscious. If we are going to apply the test then we must assume that we can have a conversation with a robot. But must we accept this assumption? First, someone with locked-in syndrome suggests the possibility that a robot might be conscious but unable to communicate. However, because we design and build robots I find such a scenario unlikely and won’t pursue it here. There is another reason why a robot might be conscious yet unable to converse. According to Wittgenstein, if a lion could speak we couldn’t understand him because of our different ways of living. The same argument might be applied to robots. Robots might use language we can’t understand or inhabit a radically different world. In response I would suggest that the worlds robots and we live in touch, and that any creatures that live in worlds that touch can communicate to some degree, see If a Lion Could Speak. Let us accept that we might use the Turing test as a test for robot consciousness. Unfortunately this isn’t a very reliable test, as some unconscious bots seem able to pass it. How might we sharpen it up?
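
As a minimal sketch of the protocol just described, and assuming three purely hypothetical callables (human_reply, machine_reply and judge, none of which come from Turing’s paper), the imitation game might be set out in Python as follows.

import random

def turing_test(questions, human_reply, machine_reply, judge):
    """One round of the imitation game: the judge interrogates a hidden
    entity through a terminal and then guesses what it was talking to."""
    hidden = random.choice([human_reply, machine_reply])  # concealed from judge
    transcript = [(q, hidden(q)) for q in questions]
    guess = judge(transcript)  # the judge returns "human" or "machine"
    actual = "machine" if hidden is machine_reply else "human"
    return guess == actual     # True if the judge was not fooled

On the reading above, a machine that systematically fools the judge would be presumed conscious; the weakness is that the judge is a fallible human, which is where the amendments below come in.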

 

In the Turing test a person tries to decide if he is conversing with a machine or another person using a terminal, and if he cannot then he assumes that he is talking to a person. Unfortunately human beings have a tendency to anthropomorphise things; for instance some might say it would be wrong to kick a robot dog. It follows that human beings might not be best placed to decide if a conversation is between two conscious creatures. Recently Edward Tian created an app that detects essays written by AI. This suggests that the Turing test might be amended as follows. A person would still have a conversation with another, but the decision as to whether this conversation is between two conscious entities would be made by an AI program. I have previously argued that general or genuine AI must have a will, see Emerging AI. Secondly, the Turing test might be amended so the conversation with the unknown entity focusses on the will. To have a will any entity must care about something, and perhaps the conversation should focus on what the unknown entity cares about. At this point it might be objected that what a robot cares about might be implanted, and any will resulting from this implant isn’t genuine. In response I would argue that whilst a creature is defined by what it cares about, whether it is conscious or not is determined by its reaction to this. Any creature in which what it cares about has been implanted isn’t authentic, but it can still have a will and be conscious. The Turing test might be amended so it probes what the unknown entity cares about and how it reacts to this in different situations. This amendment is interesting, but I doubt it would be useful at the present time in assessing consciousness; it could, however, be a useful tool for investigating the nature of consciousness. Lastly, it might be suggested that a Turing test might not only be assessed by AI but also be conducted by some AI system. Perhaps AI might find better questions than a human being. I would be reluctant to accept this suggestion, for it might mean accepting an attribution of consciousness without any means of ascertaining the accuracy of the attribution.
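
To make these two amendments concrete, here is a hedged sketch in Python. The probe questions and the function classify_transcript are hypothetical placeholders; the latter stands in for a detector in the spirit of Tian’s app rather than for any real tool.

CARE_PROBES = [
    "What do you care about most, and why?",
    "How would you react if you permanently lost it?",
    "What would you sacrifice to protect it?",
]

def amended_turing_test(reply, classify_transcript):
    """Probe what an unknown entity cares about and how it reacts to
    threats to it, then let an AI classifier rather than a human judge
    assess the transcript."""
    transcript = [(q, reply(q)) for q in CARE_PROBES]
    # classify_transcript returns a verdict such as "conscious-like" or
    # "machine-like"; building such a classifier is, of course, the hard part.
    return classify_transcript(transcript)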

 

What conclusions can be drawn from the above? Whilst how we collaborate with robots is an interesting question, it raises two equally interesting further questions: first, what are we collaborating with, and secondly, how can we ascertain this? Both these questions may be illusory, probably are illusory, but if they are we need to be able to show why. Trying to answer these questions is important for another reason: it might shed some light on the nature of consciousness. It also highlights the problems of using the Turing test to answer these questions, because if we accept that animals are conscious we cannot use a Turing test to ascertain this.

 
