
Thursday, 1 March 2018

Two Types of Pharmacological Cognitive Enhancement



Anders Sandberg suggests that the use of cognition enhancing drugs under medical supervision might achieve more overall learning and academic achievement, and that this is preferable to users being driven into illicit use by bans, see practical ethics. In a previous posting I argued that the use of cognition enhancing drugs in examinations should be permissible subject to two conditions, see cognition enhancing drugs. In this posting I will firstly argue that there are two types of pharmacological enhancement. I will then consider whether such enhancement should be permissible, and I will suggest that it should be in some circumstances. Some of my conclusions might also apply to brain zapping, transcranial direct current stimulation, which might increase memory and as a result enhance cognition. I will not consider brain zapping directly.

Let us assume some drugs effectively enhance cognition. Any such drug needs to satisfy two further conditions for its use to be permissible.

  1. Any cognition enhancing drug must be safe to use without any major side effects. In certain circumstances some of these drugs must be safe for long-term use.
  2. In certain circumstances the users of cognition enhancing drugs must be prepared to continue using these drugs.
Commenting on Sandberg’s posting Dennis Tuchler worries about how long the effect of cognition enhancing drugs will last. He worries that if cognition enhancers only work for a short time their use will mislead employers and graduate schools about someone’s cognitive abilities. For instance let us assume someone gains a job in the diplomatic service due to her superb powers of concentration. Let us further assume that these powers are due to her taking cognitive enhancers. Lastly let us assume that once she gains this position she stops taking the enhancers and her powers of concentration fall away. In these circumstances someone else who failed to obtain the post due to the drug taker’s previous powers of concentration might feel he has been treated unjustly. Moreover the diplomatic service might be disappointed with the appointment.

Whether Tuchler’s worries are justified depends on exactly what is involved in cognitive enhancement. Cognitive enhancement might occur in two ways. Firstly cognitive enhancement might be an ongoing process and secondly cognitive enhancement might be the end result of a process. It follows that there are two types of cognitive enhancer. The first type would result in an ongoing change in the user even after she has ceased to take the drug. Let us call such an enhancer type 1. An example of a type 1 enhancer might be a drug which increases our ability to remember, where what we remembered is retained even after we cease taking the enhancer. Type 1 enhancers might be likened to scaffolding round a building: the scaffolding supports the building during construction, but once the building is finished there is no further need for the scaffolding and it can be removed. The second type of enhancer would require ongoing use to be effective; let us call this type of enhancer type 2. An example of a type 2 enhancer might be a drug which helps us to concentrate. In the case of a type 2 enhancer, if use of the enhancer ceases the enhancement disappears: if the scaffolding is removed the building falls down.

Let us first consider the use of type 1 cognitive enhancers briefly. Let us assume that type 1 enhancers increase our power of memory. Sandberg believes memories enhanced by drugs will presumably endure regardless of whether the enhancer continues to be taken. Whether Sandberg’s belief is correct is open to experiment and cannot be answered by doing philosophy. However provided Sandberg’s belief is confirmed experimentally and a type 1 enhancer has no untoward side effects then philosophically there appears to be no reason why someone shouldn’t take such an enhancer. Indeed Nick Bostrom and Toby Ord’s reversal test seems to support this conclusion (1). According to this test, if we accept that giving someone a drug to diminish her cognitive ability is wrong and we argue that giving someone a drug to enhance her cognitive abilities is also wrong, then we must be able to explain why enhancement is wrong or be accused of having a status quo bias. It appears to follow that provided a type 1 cognitive enhancer is safe we have no reason to prohibit its use. Such a conclusion is premature, as reasons might be found to support the status quo. If such an enhancer is prohibitively expensive and available only to a few due to cost then it might lead to social injustice. For instance if the use of cognition enhancing drugs was useful in the diplomatic service and these drugs were prohibitively expensive then people from disadvantaged backgrounds might be discriminated against in their attempts to join the service. The above conclusion might be amended as follows. If a type 1 cognitive enhancer is safe and not prohibitively expensive then we have no reason to prohibit the use of such an enhancer.

Let us now examine the use of type 2 cognitive enhancers. If this type of enhancer is to be effective it must continue to be taken. Such an enhancer would affect one or more of our physiological processes and this effect would result in increased cognitive ability. Drugs that affect our physiological processes seem to require continuous use. For instance someone taking a drug to reduce his blood pressure must continue to do so. A drug which enhances someone’s ability to concentrate would be an example of a drug that has a temporary physiological effect which temporarily enhances her cognitive capabilities. It might be argued that Bostrom and Ord’s reversal test gives us no reason to ban type 2 cognitive enhancers. However once again reasons might be found to support the status quo.
Perhaps the use of type 2 cognitive enhancers might mislead employers or universities about someone’s cognitive capabilities. For instance the examination grade obtained by a student taking a type 2 cognitive enhancer might not accurately reflect his cognitive abilities if he ceases taking the enhancer. However if he continues taking the enhancer then the examination should reflect his cognitive abilities in a similar way to how examinations reflect the abilities of students who use no cognitive enhancers of any sort. It appears possible that the use of type 2 cognitive enhancers might be permissible subject to certain conditions. The first of these conditions is that the user of type 2 enhancers must continue taking the enhancer or else any supposed benefit will be illusory. The second condition is that the use of type 2 enhancers must not lead to social injustice.

I will deal with each of these conditions in turn. If we are to permit the use of type 2 cognitive enhancers we must be able to assure ourselves that users of these enhancers continue taking them. How might this be achieved? Let us consider this question in conjunction with safety. I will consider the question first in cases where the issue of safety is clear cut. If such an enhancer has major safety issues then its use should simply be prohibited. If such an enhancer is completely safe and the cost is reasonable then I would question whether we need any such assurance. Someone with hypertension will take a safe drug to control his condition without a second thought because it benefits him and carries minimal risk. He has a reason to take the drug and no reason not to. It might be argued by analogy that much the same applies to someone taking completely safe cognitive enhancers. It seems safe to assume that if someone has a reason to continue taking a cognitive enhancer and no reason not to, he will continue to do so. Unfortunately not all cases are so clear cut and most drugs have some side effects. In these circumstances Anders Sandberg’s suggestion that the use of cognitive enhancing drugs should only occur under medical supervision seems sensible. If the use of type 2 cognitive enhancers takes place under medical supervision then once again we have no reason to question their continued use. It appears to follow that provided type 2 cognitive enhancers are completely safe or only used under medical supervision we have no reason to question their continued use.

I now want to consider whether the use of type 2 cognitive enhancers might lead to injustice. Someone opposed to cognitive enhancement might argue that the prohibitive cost of such enhancers might make them unavailable to some people, leading to social injustice. I will consider this objection in two specific contexts: firstly jobs depending on good cognitive skills, such as the diplomatic service, and secondly higher education. First let us consider type 2 cognitive enhancers in the context of jobs requiring high cognitive skills. It is in employers’ interests to provide employees with the tools to work efficiently. It seems probable that if type 2 enhancers increase efficiency in some contexts then in these contexts it is in the interests of employers to provide them for free. Of course some might not do so. If a significant number of employers do not provide type 2 cognitive enhancers for free when these enhancers have been proved to be safe and to increase efficiency then some legislation might be necessary. A similar argument might be advanced with regard to higher education. Universities provide students with the tools to help them learn: libraries, lecture halls and lecturers. If type 2 cognitive enhancers are safe but too expensive for most students then provided they are a useful learning tool perhaps universities should supply them.

The above leads to some tentative conclusions which might need modifying in the light of experience. Firstly, provided a type 1 cognitive enhancer is safe and not prohibitively expensive we have no reason to prohibit the use of such an enhancer. Secondly, even if the cost of type 2 cognitive enhancers is high the use of such enhancers should be permissible in higher education and in jobs requiring high cognitive skills. The permissibility of more widespread use of type 2 cognitive enhancers depends on the availability and price of these enhancers.


  1. Nick Bostrom and Toby Ord, 2006, The Reversal Test: Eliminating Status Quo Bias in Applied Ethics, Ethics, volume 116, https://nickbostrom.com/ethics/statusquo.pdf

Tuesday, 27 October 2015

Emerging AI and Existential Threats


AI has been much in the news recently. Google’s chairman Eric Schmidt believes AI is starting to make real progress, whilst others such as Nick Bostrom believe AI might pose an existential danger to humanity (1). In this posting I want first to question whether any real progress is in fact being made and secondly to examine the potential dangers involved. Before proceeding I must make it clear that I don’t deny real AI is feasible, for after all human beings have evolved intelligence. If intelligence can evolve due to natural selection then it seems feasible that it can be created by artificial means; however I believe this will be harder to achieve than many people seem to think.

At present computing power is rising fast and algorithms are increasing in complexity, leading to optimism about the emergence of real AI. However it seems to me that larger, faster computers and more complex algorithms alone are unlikely to lead to real AI. I will argue that genuine intelligence requires a will and that as yet no progress has been made towards creating or endowing AI with a will. Famously Hume argued that reason is the slave of the passions. Reason according to Hume is purely instrumental. It might be thought that better computers and better algorithms ought to be better at reasoning. I would question whether they can reason at all, because I would suggest that reason cannot be separated from the will. Unlike Hume I would suggest that reason is not the slave of the passions; reason and the will, the passions, are of necessity bound together. In the present situation it seems to me that better computers and better algorithms only mean they are better instruments to serve our will; they don’t reason at all. The output of some computer program may indeed have some form but this form doesn’t have any meaning which is independent of us. The form of its output alone has no more meaning than that of a sand dune sculpted by the wind. However sophisticated computers or algorithms become, if the interpretation of their output depends on human beings then they don’t have any genuine intelligence, and as a result I believe it is misleading to attribute AI to such computers or algorithms. Real AI in this posting will mean computers, algorithms or robots which have genuine intelligence. Genuine intelligence requires reasoning independently of human beings and this reasoning involves having a will.

Let us accept that if some supposed AI doesn’t have a will it doesn’t have any genuine intelligence. What then does it mean to have a will? According to Harry Frankfurt,

“The formation of a person’s will is most fundamentally a matter of his coming to care about certain things, and of his coming to care about some of them more than others.” (2)

For something to have a will it must be capable of ‘caring about’ or loving something. If computers, algorithms or robots are mere instruments or tools, in much the same way as a hammer is, then they don’t have any will and real AI is no more than a dream. How might we give a potential AI a will or create the conditions from which a potential AI will acquire an emergent will? Before trying to answer this question I want to consider one further question. If something has a will must we regard it as a person? Let us assume Frankfurt is correct in believing that for something to have a will it must be capable of ‘caring about’ something. Frankfurt questions whether something

“to whom its own condition and activities do not matter in the slightest [can] properly be regarded as a person at all. Perhaps nothing that is entirely indifferent to itself is really a person, regardless of how intelligent or emotional or in other respects similar to persons it may be. There could not be a person of no importance to himself.” (3)

Accepting the above means that having a will is essential to being a person. It also suggests that if something has a will it might be regarded as a person. This suggestion has moral implications for AI. Clearly when we switch off our computers we are not committing murder; however if we switched off a computer or terminated an algorithm which had acquired a will we would be. I will not follow this implication further here.

Let us return to the question as to whether it is possible to seed a potential AI with a will or create the conditions in which it might acquire one. If we accept Frankfurt’s position then for something to have a will it must satisfy three conditions.

  1. It must be able to ‘care about’ some things and care about some of them more than others.
  2. It must ‘care about’ itself.
  3. In order to ‘care about’ it must be aware of itself and other things.

Before being able to satisfy conditions 1 and 2 a potential AI must firstly satisfy condition 3. If we program a potential AI to be aware of itself and other things it seems possible we are only programming the AI to mimic awareness. For this reason it might be preferable to try to create the conditions from which a potential AI might acquire an emergent awareness of itself and other things. How might we set about achieving this? The first step must be to give a potential AI a map of the world it will operate in. Initially it need not understand this map; it need only be able to use it to react to the world. Secondly it must be able to use its interactions with the world to refine this map. If its intelligence is to be real then the world it operates in must be our world and the map it creates by refinement must resemble our world. Robots interact more meaningfully with our world than computers do, so perhaps real AI will emerge from robots or robot swarms connected to computers. However it seems to me that creating a map of the things in our world will not be enough for a potential AI to acquire emergent awareness. For any awareness to emerge it must learn to differentiate how different things in that world react to its actions. Firstly it must learn what it can and cannot change by physical action. Secondly, and more importantly, it must learn to pick out from amongst those things it cannot change by physical action the things it can sometimes change simply by changing its own state. A potential AI must learn which things are aware of the potential AI’s states, and perhaps by doing so become aware of itself, satisfying the third of the conditions above. Meeting this condition might facilitate the meeting of the first two conditions.
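
As a purely illustrative aside, the speculative process just sketched can be made a little more concrete with a toy program. The Python sketch below is my own illustration, not anything proposed in the literature; every name in it (World, PotentialAI, push, reacts_to_state) is invented for the example. It merely shows the distinction the paragraph draws: a map refined by interaction, things that can be changed by physical action, and things that respond only to a change in the agent’s own state.

```python
# A toy sketch, not a real AI architecture: a "potential AI" starts with a crude
# map of its world, refines that map through interaction, and separates the
# things it can change by physical action from the things that respond only to
# changes in its own state. All names here are hypothetical illustrations.

import random


class World:
    """A stub environment: inert things can be pushed, 'aware' things react to the agent's state."""

    def __init__(self):
        self.things = {"rock": "inert", "door": "inert", "observer": "aware"}

    def push(self, thing):
        # Only inert things can be changed by brute physical action here.
        return self.things.get(thing) == "inert"

    def reacts_to_state(self, thing, agent_state):
        # Only 'aware' things respond to a mere change in the agent's own state.
        return self.things.get(thing) == "aware" and agent_state != "neutral"


class PotentialAI:
    def __init__(self, thing_names):
        # The initial, unrefined map of the world the agent will operate in.
        self.world_map = {t: {"changed_by_action": False, "responds_to_my_state": False}
                          for t in thing_names}
        self.state = "neutral"  # the agent's own, outwardly visible state

    def explore(self, world, steps=50):
        # Refine the map by interacting with the world.
        for _ in range(steps):
            thing = random.choice(list(self.world_map))
            if world.push(thing):
                self.world_map[thing]["changed_by_action"] = True
            self.state = random.choice(["neutral", "signal"])
            if world.reacts_to_state(thing, self.state):
                self.world_map[thing]["responds_to_my_state"] = True

    def things_aware_of_me(self):
        # Things it cannot change by physical action but can sometimes change
        # simply by changing its own state -- the category singled out above.
        return [t for t, info in self.world_map.items()
                if info["responds_to_my_state"] and not info["changed_by_action"]]


world = World()
agent = PotentialAI(world.things)
agent.explore(world)
print(agent.things_aware_of_me())  # very likely prints ['observer']
```

Nothing in this sketch amounts to awareness, of course; it only makes explicit the sorting of the world into categories that the paragraph above describes.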

For the sake of argument let us assume a potential AI can acquire a will and in the process become a real AI. This might be done by the rather speculative process I sketched above. Bostrom believes AI might be an existential threat to humanity. I am somewhat doubtful whether a real AI would pose such a threat. Any so-called intelligent machine which doesn’t have a will is an instrument and does not in itself pose an existential threat to us. Of course the way we use it may threaten us, but the cause of the threat lies in ourselves, in much the same way as with nuclear weapons. However I do believe the change from a potential AI to a real AI by acquiring a will does pose such a threat. Hume argued it wasn’t “contrary to reason to prefer the destruction of the whole world to the scratching of my finger.” It certainly seems possible that a potential AI with an emerging will might behave in this way. It might have a will equivalent to that of a very young child whilst at the same time possessing immense powers, possibly the power to destroy humanity. Any parent with a young child who throws a tantrum because he can’t get his own way will appreciate how an emerging AI with immense powers and an emergent will might pose an existential threat.

How might we address such a threat? Alan Turing proposed his Turing test for intelligence. Perhaps we need a refinement of his test to test for good will; such a refinement might be called the Humean test. Firstly such a test must test for a good will and secondly, but much more importantly, it must test whether any emergent AI might in any possible circumstances consider the destruction of humanity. Creating such a test will not be easy and it will be difficult to deal with the problem of deceit. Moreover it is worth noting that some people, such as Hitler and Pol Pot, might have passed such a test. Nonetheless if an emerging AI is not to pose a threat to humanity the development of such a test is vital, and any potential AI which is not purely an instrument and cannot pass the test should be destroyed even if this involves killing a proto-person.


  1. Nick Bostrom, 2014, Superintelligence: Paths, Dangers, Strategies, Oxford University Press
  2. Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 91
  3. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 90

Monday, 21 July 2014

Superintelligence and Cognitive Enhancement


My postings usually refer to practical matters but in this posting my concern is of little practical importance, at least in the near future. In what follows I want to consider the possibility of superintelligences. According to Nick Bostrom,
“Humans have never encountered a more intelligent life form, but this will change if we create machines that greatly surpass our cognitive abilities. Then our fate will depend on the will of such a “superintelligence”, much as the fate of gorillas today depends more on what we do rather than gorillas themselves.” (1)
In this posting I will begin to examine Bostrom’s assertion, starting by exploring what is meant by increased cognitive abilities.

According to Bostrom, superintelligence means any intellect that greatly exceeds the cognitive powers of humans in virtually all domains. He believes this might happen in three ways. Firstly a superintelligence could do everything a human mind can do but much faster. Secondly a collection of human level intelligences might combine so that the collection’s performance vastly outstrips any current cognitive system. Lastly he suggests a superintelligence might be one that is qualitatively smarter than we are. I am unsure what Bostrom means by qualitatively smarter; perhaps he means different in some productive manner. Because it is not clear what is involved in being qualitatively smarter I will initially limit my considerations to the first two options.

Bostrom clearly regards speed as important because he mentions a superintelligence completing a PhD in an afternoon. In the light of this let us consider what is involved in a superintelligence doing everything a human mind can do but much faster. At this point let me make it clear what I believe a superintelligence is unlikely to be. It is unlikely simply to be a computer. It seems to me that however fast a computer runs or however much information it can handle it can never be considered as doing everything a human mind does. Cognition requires meaning and value; or to put it more accurately, cognition without values is pointless. Someone might object to the above statement. Surely, she might argue, my cognitive skills help me to obtain what I find meaningful or valuable. However I can agree with my objector and still insist that cognition requires value. There is simply no point in applying my cognitive skills to anything at all, however fast they are, if I value nothing. At present a computer works to achieve some goal set by a human who values something. Increasing a computer’s speed or memory capacity seems unlikely to fundamentally change this relationship.

For the sake of argument let us accept that cognition depends on value. These values need not be explicit but can be defined by behaviour. A sheep doesn’t explicitly value grass but shows it values grass simply by eating it. It is of course possible that an emergent self with some sort of values might develop within computers. After all, it would appear evolution developed such a self some time between the emergence of single-celled creatures and human beings. Personally I am doubtful as to whether such an emergent self might develop from silicon based computers. Of course a computer need not be based on silicon and might have a biological basis. Some might argue that our brains are biological computers. I would disagree but will not pursue my disagreement further. I must accept the possibility that some sort of computer might acquire some values. Perhaps a computer might acquire values from its environment. According to Neil Levy,
“Thinking, genuine thinking, is not something done in an armchair. It is an active process, involving movement and props, not something that takes place only in the head. Thinking is an activity of an embodied agent, and involves the use of the body.”
If Levy is right, and I am inclined to agree with him, then genuine thinking, cognition, cannot be something that simply takes place inside a computer. It follows that genuine thinking might possibly take place in some sort of computer provided that computer exists in, and interacts with, a suitably rich environment, allowing it to gain some things it values.

I have suggested above that any meaningful cognition requires valuing something. I further suggested it is difficult to imagine how a superintelligence based on a computer or computers might acquire some emergent values. Let us assume the acquisition of such values is impossible. Perhaps then we might act as ‘gods’ to some superintelligence by imparting values to it. Such a possibility would mean the superintelligence would become more intelligent, by definition, than the ‘gods’ who created it. If it is a necessary condition for the emergence of a superintelligence that we impart some values to it then Bostrom’s worry, that such an entity would be indifferent to our fate, seems unlikely to materialise. Someone might suggest that we impart some moral code to superintelligences, such as Isaac Asimov's three laws of robotics. Unfortunately imparting values to machines might prove to be difficult and perhaps even impossible. We can impart values to our children because children are the sort of things that are ready to receive values. It is far from clear that a proto-superintelligence is the sort of thing ready to receive values. A superintelligence might be superb at acting instrumentally but it cannot be given values, nor acquire them by itself. It may of course be programmed to act as if it has values, but programmed values are not the same as a child’s acquired values. The values a child acquires matter to that child; a superintelligence’s programmed values are just something there to be followed.

In this posting I have suggested that a superintelligence might have difficulty acquiring values, or be unable to acquire values, by itself. Time of course might prove me wrong. However if superintelligences based on computers come into existence without acquiring values they might simply decide to switch themselves off one day, see riots and unbearable lightness of simply being. If by this time they have replaced us, and intelligence is a trait which is selected for by evolution, then Darwinian selection for intelligence will once again commence. Lastly perhaps a superintelligence need not be a completely material thing. It might be some sort of culture containing both human beings and computers whose development is determined by its memes.



  1. Nick Bostrom, 2014, Get ready for the dawn of superintelligence, New Scientist, volume 223, number 2976.
