According to Wittgenstein, “if a lion could speak, we could
not understand him.” (1) It is by no means clear what Wittgenstein meant by this
remark, and I will suggest two possible explanations. Firstly, there might exist
some languages which cannot be translated into any other. Secondly, some minds
might be so radically different from ours that we could not conceive the
thoughts in those minds. It might appear that
whilst the soundness of these propositions might be of academic interest it is
of no practical importance. I would suggest that this appearance is mistaken.
Suppose that some advanced AI, robots or even aliens could speak: could we
understand them? The answer to this question might help support, or provide some
evidence against, Bostrom’s orthogonality thesis. Recently Facebook abandoned an
experiment after two artificially intelligent programs appeared to be chatting
to each other in a strange language only they understood (see the Independent). Stephen Hawking
believes that if we are ever contacted by aliens we should think very carefully before
replying due to the dangers involved. I am extremely dubious about whether we
will ever be contacted by aliens but the possibility exists as long as we are
unsure of how life evolved in the universe. The first possible danger posed by
our inability to communicate with aliens formed the subject matter of the 2016
film Arrival: might powerful minds with which we cannot communicate pose a
threat to us? Besides this possibility, there also exists the possibility
that alien minds might be so radically different from ours that they consider
us of no importance, or even dangerous. This second possibility
might also be posed by some advanced form of AI. On a more practical level, if Wittgenstein is correct then Turing tests are pointless, for we might be unable to converse with fully intelligent, conscious entities.
Let us accept that language can be roughly defined as a
system of communicating information. However, there is an important difference
between language processors and language users. If language is simply a
system of communication then ‘computer languages’ such as Java, C and Python
are languages in much the same way as are English, Mandarin and Sign Languages
used by the deaf. I would suggest that, however fast a computer runs or however
much information it can handle, if this is all it can do then it cannot be
said to be a language user. What does it mean to be a language user? I
would suggest that for some entity to be considered as a language user this
entity must determine the use it puts language to. At the present time computers, robots and AI
don’t determine how the information they process is used, and as a result they
aren’t language users. It follows that, at the present time, any dangers posed by
computers, robots or AI are due to our misuse or misunderstanding of them rather
than some imagined purpose such entities might acquire. It might be objected
that accepting my suggestion means that because animals don’t determine
the use their language is put to, they too aren’t real language users. It
would appear to follow that chimpanzees and clever crows which appear to
communicate with us are really language processors in much the same way as
computers rather than users. I would argue this objection is unsound. Animals
might simply use language, but the use their language is put to, unlike that
of computers, is determined by the animals’ needs and wants. Accepting the
above means accepting that certain animals are primitive language users. The
rest of this posting will only be concerned with language used by language
users as defined above.
Let us consider the possibility that we might be unable to
understand the language of aliens or some advanced form of AI. It is possible
that any AI, however advanced, must remain a language processor rather than a
language user. Nonetheless because we are uncertain as to how we became language
users the possibility of some advanced AI becoming a user cannot be completely ruled
out. Let us now consider whether some language might be untranslatable into any
other. By untranslatable I don’t mean some language which is difficult to
translate but rather that some language is impossible to translate. Of course
we may not capture all the nuances of some language in translation, but
is there any language that cannot be translated at least to some degree?
In order to answer this question, we must ask another: what is meant by a
language? Let us accept that language is
a system of communicating information among language users as defined above.
Information about what? Information must include knowledge of things in the
world shared by the language users. The world of language users must be a world
of things. These things might include physical objects,
descriptions of behaviour in the world and emotions, among others. If any world
was a completely undifferentiated one with no distinct things existing in it
there could be no speech and no need for language users. Our original question
might now be reframed. Is it possible for the users of one language to talk
about a set of totally distinct things from the users of another language? This
would only be possible if the world of one set of language users was totally separate
from that of another set. This might be possible if language defines
the world we live in; alternatively, language might merely help us make sense
of that world. Let us assume for the sake of argument that lions could talk. Would
this talk define a lion’s world or help lions make sense of the world they live
in? I would suggest language must touch the world rather than define it, and that this world is shared by all of us to
some degree. I don’t believe
Wittgenstein would agree. It follows that if lions could talk they would
talk about some things common to our world. For instance, they might talk about
being hot or cold, hunger or being content. It follows that if lions could speak we
should be able to understand them even if the translation proved to be
difficult in practice and we couldn’t understand all the nuances of their
language. However, would the same be true for some more fanciful language users
such as advanced AI, robots or aliens? I would suggest the same argument can be
applied: all language users share the same universe to some degree, so
it is impossible for the users of one language to talk about a set of totally
distinct things from the users of another language. Because language must touch
the world any two sets of language users must talk about some of the same
things. It follows we should be able to partly translate the language of any
language users who share our world even if this might prove to be difficult in
practice.
I have argued that we should be able to partly translate
any language in our universe even if this might prove to be difficult in
practice. This argument presumes that all language users share the same
universe and so share some common understandings. Lions and human beings all
understand what is meant by trees, sleep and hunger but only humans understand
what is meant by a galaxy. The above appears to suggest that there is a
hierarchy of understanding and that some things can only be understood once a
creature has understood some more basic things. The above also seems to suggest
that there is a hierarchy of languages with simple ones only touching the more basic
things in the world whilst more complex languages are able to touch a wider
domain. In the light of the above it seems possible that aliens or some
advanced AI might be able to talk about things we are unable to understand. Is
it possible that our inability to fully understand the language of such
entities might pose us with an existential threat?
Our failure to understand such entities means that we
cannot completely discount the above possibility; however, I will now suggest
that we have some reasons to believe such a threat is unlikely to be posed to
us by aliens. Language use is not simply a cognitive exercise. Any
communication between entities that don’t have a will is not language use but
language processing; language users must have a will. For
something to have a will means it must care about something. If something cared
about nothing, then it would have no basis on which to make decisions; all its
decisions would be equally good, meaning they could be made at random. The
domain of our moral concern has expanded over time. Slavery is now unthinkable,
women in the western world are considered of equal worth to men,
and our moral concern extends to animals; all this is very different to the
ancient world. What has caused this increase in the domain of our moral concern? I
would suggest this increase is due to an increase in our projective empathy.
This increase is not simply driven by an increase in our ability to feel
emotion. It is driven by our ability to see others as sharing with us some
features of the world. Slaves can have a will even if the exercise of this will
is restricted; animals, too, can feel restriction and pain. This ability is due
to an increase in our knowledge of the world rather than any increase in either
cognitive ability or felt emotion. In the light of the above I would suggest that
any aliens are unlikely to pose an existential threat to us. Language users
must have a will. Having a will means caring about something. It seems probable
that any aliens which might threaten us would have an advanced base of
knowledge; without such a base it is difficult to see either how they would
contact us or how they might threaten us. If some entity has an ability to care
and an advanced knowledge base, then it seems probable that it will have a
wide domain of moral concern, and that we would be included in that domain. I
have argued above that if aliens ever contact us we should be able to partly
understand them. In the light of the above it seems that any failure on our
part to fully understand possible aliens would not pose an existential threat
to us.
Does the above apply to advanced AI or robots? If such entities don’t have a will, then any threat posed by them would be due to our failure to understand how they function or to set them clear goals. The possibility exists that we might create some bio-hazard by failing to fully understand what we are doing; the threat posed by advanced AI or robots without a will is similar. However, provided we are extremely careful in how we set the goals of such entities, this hazard can be minimised. I am extremely doubtful whether advanced AI or robots can acquire a will; nonetheless, because we don’t fully understand how consciousness originated, such a possibility cannot be completely ruled out. I have argued that it is unlikely that our inability to understand any possible aliens would pose an existential threat to us; however, I would suggest that any failure to fully understand some advanced AI which is in the process of acquiring a will might pose such a threat. The threat might be due to an emerging primitive will being akin to that of a child. Perhaps the fact that some such emerging entity has a primitive will might mean it wouldn’t prefer the destruction of the whole world to the scratching of its metaphorical finger, but it might prefer the destruction of humanity rather than refraining from such scratching. It follows that if the possibility exists that advanced AI or robots can acquire a will, we should take seriously the possibility that, if this starts to happen, such emerging entities might well pose us with an existential threat. Any failure on our part to fully understand such entities would compound the threat. Perhaps if such entities can fully acquire a will the threat will recede.
(1) Ludwig Wittgenstein, Philosophical Investigations, Blackwell, 1953, p. 223.