
Thursday, 19 October 2017

If a Lion could Speak

According to Wittgenstein, “if a lion could speak, we could not understand him.” (1) It is by no means clear what Wittgenstein meant by this remark, and I will suggest two possible explanations. Firstly, there might exist some languages which cannot be translated into any other. Secondly, some minds might be so radically different from ours that we couldn’t conceive the thoughts in those minds. It might appear that whilst the soundness of these propositions might be of academic interest, they are of no practical importance. I would suggest that this appearance is mistaken. Suppose that some advanced AI, robots or even aliens could speak: could we understand them? The answer to this question might help support, or provide some evidence against, Bostrom’s orthogonality thesis. Recently Facebook abandoned an experiment after two artificially intelligent programs appeared to be chatting to each other in a strange language only they understood, see the Independent. Stephen Hawking believes that if we are ever contacted by aliens we should think very carefully before replying, due to the dangers involved. I am extremely dubious about whether we will ever be contacted by aliens, but the possibility exists as long as we are unsure of how life evolved in the universe. The first possible danger posed by our inability to communicate with aliens formed the subject matter of the 2016 film Arrival: might powerful minds with which we cannot communicate pose a threat to us? Besides this possibility, there also exists the possibility that alien minds might be so radically different from ours that they might consider us of no importance, or even consider us dangerous. This second possibility might also be posed by some advanced form of AI. On a more practical level, if Wittgenstein is correct then Turing tests are pointless, for we might be unable to converse with fully intelligent conscious entities.

Let us accept that language can be roughly defined as a system of communicating information. However, there is an important difference between language processors and language users. If language is simply a system of communication, then ‘computer languages’ such as Java, C and Python are languages in much the same way as are English, Mandarin and the sign languages used by the deaf. I would suggest that however fast a computer runs, or however much information it can handle, if this is all it can do then it cannot be said to be a language user. What does it mean to be a language user? I would suggest that for some entity to be considered a language user, this entity must determine the use it puts language to. At the present time computers, robots and AI don’t determine how the information they process is used, and as a result they aren’t language users. It follows that at the present time any dangers posed by computers, robots or AI are due to our misuse or misunderstanding of them rather than some imagined purpose such entities might acquire. It might be objected that accepting my suggestion means that because animals don’t determine the use the language they use is put to, they also aren’t real language users. It would appear to follow that chimpanzees and clever crows which appear to communicate with us are really language processors, in much the same way as computers, rather than users. I would argue this objection is unsound. Animals might simply use language, but the use their language is put to, unlike the use of computers, is determined by the animals’ needs and wants. Accepting the above means accepting that certain animals are primitive language users. The rest of this posting will only be concerned with language used by language users as defined above.


Let us consider the possibility that we might be unable to understand the language of aliens or some advanced form of AI. It is possible that any AI, however advanced, must remain a language processor rather than a language user. Nonetheless, because we are uncertain as to how we became language users, the possibility of some advanced AI becoming a user cannot be completely ruled out. Let us now consider whether some language might be untranslatable into any other. By untranslatable I don’t mean some language which is difficult to translate, but rather some language which is impossible to translate. Of course we may not capture all the nuances of some language in translation, but is there any language that cannot be translated at least to some degree? In order to answer this question, we must ask another: what is meant by a language? Let us accept that language is a system of communicating information among language users as defined above. Information about what? Information must include knowledge of things in the world shared by the language users. The world of language users must be a world of things. These things might include physical objects, descriptions of behaviour in the world and emotions, among others. If any world was a completely undifferentiated one with no distinct things existing in it, there could be no speech and no need for language users.

Our original question might now be reframed. Is it possible for the users of one language to talk about a set of things totally distinct from those the users of another language talk about? This would only be possible if the world of one set of language users was totally separate from that of another set. This might be possible if language defines the world we live in; alternatively, language might simply help us make sense of the world we live in. Let us assume for the sake of argument that lions could talk. Would this talk define a lion’s world, or help lions make sense of the world they live in? I would suggest that language must touch the world rather than define it, and that this world is shared by all of us to some degree. I don’t believe Wittgenstein would agree. It follows that if lions could talk they would talk about some things common to our world. For instance, they might talk about being hot or cold, hunger or being content. It follows that if lions could speak we should be able to understand them, even if the translation proved to be difficult in practice and we couldn’t understand all the nuances of their language. However, would the same be true for some more fanciful language users such as advanced AI, robots or aliens? I would suggest the same argument can be applied: all language users share the same universe to some degree, and it is impossible for the users of one language to talk about a set of things totally distinct from those of another. Because language must touch the world, any two sets of language users must talk about some of the same things. It follows that we should be able to partly translate the language of any language users who share our world, even if this might prove to be difficult in practice.

I have argued that we should be able to partly translate any language in our universe, even if this might prove to be difficult in practice. This argument presumes that all language users share the same universe and hence some common understandings. Lions and human beings all understand what is meant by trees, sleep and hunger, but only humans understand what is meant by a galaxy. The above appears to suggest that there is a hierarchy of understanding, and that some things can only be understood once a creature has understood some more basic things. It also seems to suggest that there is a hierarchy of languages, with simple ones only touching the more basic things in the world whilst more complex languages are able to touch a wider domain. In the light of the above it seems possible that aliens or some advanced AI might be able to talk about things we are unable to understand. Is it possible that our inability to fully understand the language of such entities might pose an existential threat to us?

Our failure to understand such entities means that we cannot completely discount the above possibility; however, I will now suggest that we have some reasons to believe such a threat is unlikely to be posed by aliens. Language use is not simply a cognitive exercise. Any communication between entities that don’t have a will is not language use but language processing; language users must have a will. For something to have a will means it must care about something. If something cared about nothing, then it would have no basis for its decisions: all its decisions would be equally good, meaning decisions could be made at random. The domain of our moral concern has expanded over time. Slavery is now unthinkable, women in the western world are considered of equal worth to men, and our moral concern extends to animals; all this is very different to the ancient world. What has caused this increase in the domain of our moral concern? I would suggest this increase is due to an increase in our projective empathy. This increase is not simply driven by an increase in our ability to feel emotion. It is driven by our ability to see others as sharing with us some features of the world. Slaves can have a will even if the exercise of this will is restricted; animals can also feel pain and feel restricted. This ability is due to an increase in our knowledge of the world rather than any increase in either cognitive ability or empathy. In the light of the above I would suggest that any aliens are unlikely to pose an existential threat to us. Language users must have a will. Having a will means caring about something. It seems probable that any aliens which might threaten us would have an advanced basis of knowledge; without such a basis it is difficult to see either how they would contact us or how they might threaten us. If some entity has an ability to care and an advanced basis of knowledge, then it seems probable that it will have a wide domain of moral concern and that we would be included in that domain. I have argued above that if aliens ever contact us we should be able to partly understand them. In the light of the above it seems that any failure on our part to fully understand possible aliens would not pose an existential threat to us.

Does the above apply to advanced AI or robots? If such entities don’t have a will, then any threat posed by them would be due to our failure to understand how they function, or to a failure to set them clear goals. The possibility exists that we might create some bio-hazard by failing to fully understand what we are doing; the threat posed by advanced AI or robots without a will is similar. However, provided we are extremely careful in how we set the goals of such entities, this hazard can be minimised. I am extremely doubtful whether advanced AI or robots can acquire a will; nonetheless, because we don’t fully understand how consciousness originated, such a possibility cannot be completely ruled out. I have argued that it is unlikely that our inability to understand any possible aliens would pose an existential threat to us; however, I would suggest any failure to fully understand some advanced AI which is in the process of acquiring a will might pose such a threat. The threat might be due to an emerging primitive will being akin to that of a child. Perhaps the fact that some such emerging entity has only a primitive will might mean that while it wouldn’t prefer the destruction of the whole world to the scratching of its metaphorical finger, it might prefer the destruction of humanity to refraining from such scratching. It follows that if the possibility exists that advanced AI or robots can acquire a will, we should take seriously the possibility that, as this starts happening, such emerging entities might well pose an existential threat to us. Any failure on our part to fully understand such entities would compound the threat. Perhaps if such entities can fully acquire a will the threat will recede.

  1. Ludwig Wittgenstein, Philosophical Investigations, 1953, Blackwell, page 223.
Afterthoughts
Assuming that we can understand a lion, AI or alien to some degree, the question arises: what sort of things might we understand? Goals, intentions or reasons? Perhaps even if we understand the goals and intentions of some advanced AI we might be unable to understand its reasons. But we don't understand all our own reasons; reasons run out, according to Wittgenstein. The question becomes how many reasons we need to understand, and how many we can understand.

Tuesday, 25 May 2010

Aliens and Stephen Hawking


Stephen Hawking recently stated on the Discovery Channel that alien life forms probably exist somewhere in the Universe and that we should try to avoid contact with them. He suggested that if aliens come into contact with us they might be liable to act as the first Europeans acted on discovering America. He further suggested that if they are anything like us they are likely to be aggressive, and either exterminate us or pillage our resources. However, it seems to me there is a major objection to Hawking’s suggestion. His suggestion seems to depend on the assumption that any alien morality will be much the same as that of the first colonisers of the Americas. I find such an assumption hard to accept because it seems to me that, however slowly it lags behind science, there is some moral progress. As evidence of this progress I would argue modern Europeans would not behave in a similar way to their compatriots of Columbus’ time on discovering a further new world. To support this argument I need only draw attention to the fact that most Europeans in Columbus’ time were quite happy to deal in African slaves. The idea of moral progress is dealt with by Guy Kahane in ‘What intelligent alien life can tell us about morality’. In what follows I will assume moral progress is real. I will argue that moral progress consists, at least in part, in the expansion of the domain of our moral concern. I will further argue that any advanced aliens are likely to share this expansion.

Shaun Nichols uses investigations into child development and moral pathology to conclude that all morality, including utilitarianism, includes an affective element (1). In what follows I am going to assume Nichols’ view is correct, for it seems to me highly improbable that any grouping consisting mainly of sociopaths could possibly form a stable or moral society. I am also going to assume that the domain of creatures we feel sympathy for defines the domain of our moral concern.

I have assumed above that some natural sympathy is necessary for any system of morality. Initially it seems safe to assume this sympathy was limited to the people we were close to, our family for instance. With time the domain of our sympathy has expanded to include our tribe, our country, people who share our culture, and more recently animals. How can we explain this expansion in the domain of our sympathy? Firstly, it might be explained by physiological changes which increase our capacity to feel empathy. However, I would be very doubtful about accepting any such explanation, the reason for my doubts being that the expansion that has occurred seems too rapid to be explained in purely evolutionary terms. A second explanation might depend on a change in our understanding of other people or creatures. But how could a change in our understanding of others lead to a change in our capacity to feel sympathy, given that, it might be argued, we simply feel emotions? I would suggest the emotions we experience depend on both our physiological and psychological states. I would further suggest our psychological state depends to some extent on the beliefs we hold, our understanding. It seems clear sympathy is generated in response to some particular situation. I would still further suggest we naturally feel sympathy for some creature in some particular situation if we believe, we understand, the creature to be capable of experiencing the situation in much the same way we would. Accepting the above means a change in our understanding of other people or creatures might alter the domain of our sympathy. I have assumed the domain of our sympathy defines the domain of our moral concern. It follows that a change in our understanding can alter the domain of our moral concern. It further follows that if we come to see some creatures as experiencing some situation as we do, when we previously believed they did not, then there is an expansion in the domain of creatures we believe merit moral concern.

Some might doubt, even if they accept that the domain of morality naturally expands, that we have nothing to fear from alien contact. I will now examine some of these doubts. Firstly, some might question whether aliens need have a system of morality at all. Accepting this objection would mean that even if it is agreed the domain of morality naturally expands, this fact is irrelevant to any of our considerations about what to do in the case of alien contact. However, it seems inconceivable that any group of creatures could expand throughout the universe without some form of co-operation among themselves, and such co-operation would be a form of morality. An objector might suggest that aliens might possess only an alien form of morality. Aliens, they might argue, just don’t have an affective form of morality like us. I find this suggestion difficult to accept. First, I find it difficult to imagine how any group of creatures who don’t care about anything could possibly want to expand throughout the universe. And secondly, as I will now argue, I believe any creatures that care about each other must have an affective system of morality.

I concur with Frankfurt’s belief that if some creature cares about something, it must identify itself with what it cares about and as a result make itself vulnerable to any losses connected to this caring (2). It is important to be clear that Frankfurt does not connect this vulnerability to the emotions. Frankfurt connects this vulnerability directly to an absence of satisfaction with a state of affairs connected to whatever the creature identifies with. Frankfurt further holds that an absence of satisfaction with the state of affairs of whatever the creature cares about is sufficient to motivate it to act. I would agree with Frankfurt that an absence of satisfaction, or dissatisfaction, motivates us to act. However, unlike Frankfurt, I would argue that an absence of satisfaction or dissatisfaction about the affairs of something we care about naturally leads to certain emotions, albeit faint emotions. I would suggest it is these emotions that give us reason to act. Accepting this suggestion means any advanced alien must care about something, and this caring means it must have some sort of emotions. It might be objected that the fact that an alien has emotions is not a sufficient condition for it being a moral creature, or for that matter even being capable of being a moral creature. After all, sociopaths do have some emotions, but these are not the sort of emotions needed for morality. Sociopaths live in our society, our civilization, and I am doubtful whether a civilization of only sociopaths is possible. If we accept the above, then any civilization, alien or not, might contain sociopaths but it can’t be a civilization of sociopaths. It follows that any aliens capable of expanding throughout the universe must have some form of morality, and that this morality must have an affective basis grounded in sympathy.

I have argued that aliens capable of travelling across space must have some sort of civilization, and this means that they must feel some sort of sympathy. The fact that aliens feel sympathy does not of course by itself guarantee they will feel any sympathy towards us. Perhaps they might only feel sympathy towards other aliens. I suggested above that we naturally feel sympathy for some creature in some particular situation provided we believe, we understand, the creature to be capable of experiencing the situation in much the same way as we would. It appears to follow that any aliens will only naturally care about other aliens and closely related species which they believe to experience the world in much the same way as they do. I have argued above that if we come to understand others as experiencing the world as we do, our domain of sympathy naturally expands. I have also argued above that aliens must “care about”, love, something, as I believe persons must also do. It therefore seems probable that if aliens come to understand us as at least partly experiencing the world as they do, by “caring about”, the domain of their sympathy must naturally expand to include some sympathy towards us. In the light of the above it might be concluded that Hawking’s suggestion that aliens are likely to be aggressive, and either exterminate us or pillage our resources, is highly improbable.

It might be objected that the way Europeans conquered and colonised the Americas is evidence that the above conclusion is unsound. It might be pointed out that these Europeans had a natural empathy together with a reasonable understanding of the world, yet they still behaved dreadfully towards the Native Americans. I accept these Europeans had a natural sympathy, but would argue their understanding of others did not encourage an expansion of the domain of their sympathy as far as ours. I have assumed that moral progress, the expansion of our natural sympathy, is real. However, even if the expansion of natural sympathy is real, it is still feasible that aliens might contact us at an early stage in this expansion and behave as the European colonisers did in the Americas. Hawking believes the following scary scenario is possible,

“We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet. I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonise whatever planets they can reach.”

This scenario is indeed a theoretical possibility; the film Independence Day depicts such a scenario. I would argue that in practice such a scenario is extremely unlikely. It is hard to see how creatures capable of the understanding needed to build massive ships, ships able to cross the vast distances of interstellar space, could fail to understand other caring creatures sufficiently to permit a natural expansion in the domain of creatures they feel some sympathy for. A more likely scenario seems to be one in which advanced aliens are at worst indifferent towards us, as depicted in Arthur C. Clarke’s Rendezvous with Rama. If we accept the above, then we have little reason to fear aliens, even if some caution is advisable. Perhaps the real reason we feel threatened by aliens is a fear of feeling inferior. Lastly, it seems to me many of my comments here apply equally to any emergent superintelligence.


  1. Shaun Nichols, Sentimental Rules, 2004, Oxford University Press.
  2. Harry Frankfurt, The Importance of What We Care About, 1988, Cambridge University Press, page 83.
