My postings usually deal with practical matters, but in this posting my concern is of little practical importance, at least in the near future. In what follows I want to consider the possibility of superintelligences. According to Nick Bostrom,
“Humans have never encountered a more intelligent life form, but this will change if we create machines that greatly surpass our cognitive abilities. Then our fate will depend on the will of such a ‘superintelligence’, much as the fate of gorillas today depends more on what we do rather than gorillas themselves.” (1)
In this posting I will begin to examine Bostrom’s assertion, starting by exploring what is meant by increased cognitive abilities.
Superintelligence, according to Bostrom, means any intellect that greatly exceeds the cognitive powers of humans in virtually all domains. He believes this might happen in three ways. Firstly, a superintelligence could do everything a human mind can do but much faster. Secondly, a collection of human-level intelligences might combine so that the collection’s performance vastly outstrips that of any current cognitive system. Lastly, he suggests a superintelligence might be one that is qualitatively smarter than we are. I am unsure what Bostrom means by qualitatively smarter; perhaps he means different in some productive manner. Because it is not clear what is involved in being qualitatively smarter, I will initially limit my considerations to the first two options.
Bostrom
clearly regards speed as important because he mentions a superintelligence completing
a PhD in an afternoon. In the light of this let us consider what is involved in
a superintelligence doing everything a human mind can do but much faster. At
this point let me make it clear what I believe a superintelligence is unlikely
to be. It is unlikely simply to be a computer. It seems to me that however fast a computer runs, or however much information it can handle, it can never be considered to be doing everything a human mind does. Cognition requires meaning and value; or, to put it more accurately, cognition without values is pointless. Someone might object to this claim: surely, she might argue, my cognitive skills help me to obtain what I find meaningful or valuable. However, I can agree with my objector and still insist that cognition requires value. There is
simply no point in applying my cognitive skills to anything at all, however
fast they are, if I value nothing. At present a computer works to achieve some
goal set by a human who values something. Increasing a computer’s speed or
memory capacity seems unlikely to fundamentally change this relationship.
For the
sake of argument let us accept that cognition depends on value. These values
need not be explicit but can be defined by behaviour. A sheep doesn’t
explicitly value grass but shows it values grass simply by eating it. It is of
course possible that an emergent self with some sort of values might develop within computers. After all, it would appear evolution developed such a self some time between the emergence of single-cell creatures and human beings. Personally I am doubtful as to whether such an emergent self might develop from silicon-based computers. Of course a computer need not be based on silicon and might have a biological basis. Some might argue that our brains are biological computers. I would disagree, but will not pursue my disagreement further. Nonetheless I must accept the possibility that some sort of computer might acquire some values.
Perhaps a computer might acquire values from its environment. According to Neil Levy,
“Thinking, genuine thinking, is not something done in an armchair. It is an active process, involving movement and props, not something that takes place only in the head. Thinking is an activity of an embodied agent, and involves the use of the body.”
If Levy is
right, and I am inclined to agree with him, then genuine thinking, cognition,
cannot be something that simply takes place inside a computer. Nonetheless, genuine thinking might possibly take place in some sort of computer, provided that computer exists in, and interacts with, a suitably rich environment allowing it to gain some things it values.
I
have suggested above that any meaningful cognition requires valuing something.
I further suggested it is difficult to imagine how a superintelligence based on
a computer or computers might acquire some emergent values. Let us assume the
acquisition of such values is impossible. Perhaps then we might act as ‘gods’ to some superintelligence by imparting values to it. Such a possibility would
mean the superintelligence would become more intelligent, by definition, than
the ‘gods’ who created it. If it is a necessary condition for the emergence of
a superintelligence that we impart some values to it then Bostrom’s worry, that
such an entity would be indifferent to our fate, seems unlikely to materialise.
Someone might suggest that we impart some moral code to superintelligences, such as Isaac Asimov's three laws of robotics. Unfortunately, imparting values to machines might prove
to be difficult and perhaps even impossible. We can impart values to our
children because children are the sort of things that are ready to receive
values. It is far from clear that a proto-superintelligence is the sort of
thing ready to receive values. A superintelligence might be superb at acting instrumentally yet be unable to receive values, or to acquire them by itself. It may of course be programmed to act as if it has values, but programmed values are not the same as a child’s acquired values. The values a child acquires matter to that child; a superintelligence’s programmed values are just something there to be followed.
In
this posting I have suggested that a superintelligence might have difficulty acquiring values by itself, or might be unable to do so. Time of course might prove me wrong. However, if superintelligences based on computers come into existence without acquiring values, they might simply decide to switch themselves off one day; see my earlier posting, riots and unbearable lightness of simply being. If by this time they have replaced us, and intelligence is a trait which is selected for by evolution, then Darwinian selection for intelligence will once again commence. Lastly, perhaps a superintelligence need not be a completely material thing. It might be some sort of culture, containing both human beings and computers, whose development is determined by its memes.
1. Nick Bostrom, 2014, ‘Get ready for the dawn of superintelligence’, New Scientist, volume 223, number 2976.