This blog is concerned with most topics in applied philosophy, in particular autonomy, love and other emotions. Comments are most welcome.
Monday, 7 March 2016
Algorithmic Assisted Moral Decision Making
Monday, 22 February 2016
Traditional and Nussbaum's Transitional Anger
Tuesday, 2 February 2016
Terminally ill patients and the right to try new untested drugs
- Terminal illness must be clearly and tightly defined. Philosophy can play an important part in doing this.
- No drugs which have not been fully tested should be used on non-terminally ill patients, except for the purpose of testing.
- Any terminally ill patient taking an experimental drug must sign a comprehensive consent form in the same way patients taking part in trials do. This form must make it clear that they are prepared to accept as yet unknown risks.
Friday, 8 January 2016
Driverless Cars and Applied Philosophy
Sunday, 29 November 2015
Terrorism, Love and Delusion
Wednesday, 11 November 2015
Autonomy and Beneficence Revisited
Tuesday, 27 October 2015
Emerging AI and Existential Threats
AI has been much in the news recently. Google's chairman Eric Schmidt believes AI is starting to make real progress, whilst others, such as Nick Bostrom, believe AI might pose an existential danger to humanity (1). In this posting I want first to question whether any real progress is in fact being made, and secondly to examine the potential dangers involved. Before proceeding I must make it clear that I don't deny real AI is feasible, for after all human beings have evolved intelligence. If intelligence can evolve by natural selection then it seems feasible that it can be created by artificial means; however, I believe this will be harder to achieve than many people seem to believe.
At present computing power is rising fast and algorithms are increasing in complexity, leading to optimism about the emergence of real AI. However, it seems to me that larger, faster computers and more complex algorithms alone are unlikely to lead to real AI. I will argue that genuine intelligence requires a will, and that as yet no progress has been made towards endowing AI with a will. Famously, Hume argued that reason is the slave of the passions. Reason, according to Hume, is purely instrumental. It might be thought that better computers and better algorithms ought to be better at reasoning. I would question whether they can reason at all, because I would suggest that reason cannot be separated from the will. Unlike Hume, I would suggest that reason is not the slave of the passions; reason and the will, the passions, are of necessity bound together. In the present situation it seems to me that better computers and better algorithms are only better instruments to serve our will; they don't reason at all. The output of some computer program may indeed have some form, but this form doesn't have any meaning which is independent of us. The form of its output alone has no more meaning than that of a sand dune sculpted by the wind. However sophisticated computers or algorithms become, if the interpretation of their output depends on human beings then they don't have any genuine intelligence, and as a result I believe it is misleading to attribute AI to such computers or algorithms. In this posting real AI will mean computers, algorithms or robots which have genuine intelligence. Genuine intelligence requires reasoning independently of human beings, and this reasoning involves having a will.
Let us accept that if some supposed AI doesn't have a will, then it doesn't have any genuine intelligence. What then does it mean to have a will? According to Harry Frankfurt,
"The formation of a person's will is most fundamentally a matter of his coming to care about certain things, and of his coming to care about some of them more than others." (2)
For something to have a will it must be capable of 'caring about' or loving something. If computers, algorithms or robots are mere instruments or tools, in much the same way as a hammer is, then they don't have any will and real AI is no more than a dream. How might we give a potential AI a will, or create the conditions from which a potential AI will acquire an emergent will? Before trying to answer this question I want to consider one further question: if something has a will, must we regard it as a person? Let us assume Frankfurt is correct in believing that for something to have a will it must be capable of 'caring about' something. Frankfurt doubts whether anything
"to whom its own condition and activities do not matter in the slightest [can] properly be regarded as a person at all. Perhaps nothing that is entirely indifferent to itself is really a person, regardless of how intelligent or emotional or in other respects similar to persons it may be. There could not be a person of no importance to himself." (3)
Accepting the above means that having a will is essential to being a person. It also suggests that if something has a will it might be regarded as a person. This suggestion has moral implications for AI. Clearly when we switch off our computers we are not committing murder; however, if we switched off a computer or terminated an algorithm which had acquired a will, we would be. I will not pursue this implication further here.
Let us return to the question as to whether it is possible to seed a potential AI with a will, or create the conditions in which it might acquire one. If we accept Frankfurt's position then for something to have a will it must satisfy three conditions.
1. It must be able to 'care about' some things and care about some of them more than others.
2. It must 'care about' itself.
3. In order to 'care about' anything it must be aware of itself and other things.
Before being able to satisfy conditions 1 and 2 a potential AI must first satisfy condition 3. If we program a potential AI to be aware of itself and other things, it seems possible we are only programming the AI to mimic awareness. For this reason it might be preferable to try to create the conditions from which a potential AI might acquire an emergent awareness of itself and other things. How might we set about achieving this? The first step must be to give a potential AI a map of the world it will operate in. Initially it need not understand this map; it need only be able to use it to react to the world. Secondly, it must be able to use its interactions with the world to refine this map. If its intelligence is to be real then the world it operates in must be our world, and the map it creates by refinement must resemble our world. Robots interact more meaningfully with our world than computers do, so perhaps real AI will emerge from robots, or robot swarms connected to computers. However, it seems to me that creating a map of the things in our world will not be enough for a potential AI to acquire emergent awareness. For any awareness to emerge it must learn to differentiate how different things in that world react to its actions. Firstly, it must learn what it can and cannot change by physical action. Secondly, and more importantly, it must learn to pick out, from amongst those things it cannot change by physical action, the things it can sometimes change simply by changing its own state. A potential AI must learn which things are aware of the potential AI's states, and perhaps by doing so become aware of itself, satisfying the third of the conditions above. Meeting this condition might facilitate the meeting of the first two. A toy sketch of this learning loop follows below.
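Purely as an illustration, the loop just described might be sketched in Python as follows. Every class and name here is invented for the sketch; it assumes a toy world of three kinds of thing, not any existing system or library, and it obviously mimics rather than realises awareness.

```python
class Thing:
    """A toy object in the agent's world. Its kind is hidden from the agent."""
    def __init__(self, kind):
        self.kind = kind          # 'inert', 'movable' or 'aware' (hypothetical)
        self.appearance = 0       # what the agent can observe of it

    def receive_push(self):
        if self.kind == 'movable':
            self.appearance += 1  # only movable things change under physical action

    def notice_agent_state(self, agent_state):
        if self.kind == 'aware':
            self.appearance = agent_state  # aware things track the agent's state


class PotentialAI:
    """Sketch of the loop described above: the agent starts with a crude seed
    map of the world and refines it by acting and watching what reacts."""
    def __init__(self, things):
        self.world_map = {thing: 'unknown' for thing in things}  # the seed map
        self.state = 0                                           # its own state

    def explore(self):
        for thing in self.world_map:
            # Step 1: can we change it by physical action?
            before = thing.appearance
            thing.receive_push()
            if thing.appearance != before:
                self.world_map[thing] = 'changeable by action'
                continue
            # Step 2: does it change when we merely change our own state?
            # Such things behave as if aware of the agent -- on the argument
            # above, the seed of the agent's awareness of itself.
            self.state += 1
            thing.notice_agent_state(self.state)
            if thing.appearance == self.state:
                self.world_map[thing] = 'aware of me'
            else:
                self.world_map[thing] = 'inert'


things = [Thing('inert'), Thing('movable'), Thing('aware')]
agent = PotentialAI(things)
agent.explore()
for thing, label in agent.world_map.items():
    print(thing.kind, '->', label)
```

Run on the three toy things, the agent sorts them into the two categories the paragraph distinguishes, plus the inert remainder; everything philosophically interesting, of course, lies in whether such sorting could ever amount to emergent awareness rather than mimicry.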
For the sake of argument let us assume a potential AI can acquire a will, perhaps by the rather speculative process I sketched above, and in the process become a real AI. Bostrom believes AI might be an existential threat to humanity. I am somewhat doubtful whether a real AI would pose such a threat. Any so-called intelligent machine which doesn't have a will is an instrument and does not in itself pose an existential threat to us. Of course the way we use it may threaten us, but the cause of the threat lies in ourselves, in much the same way as with nuclear weapons. However, I do believe the change from a potential AI to a real AI by the acquisition of a will does pose such a threat. Hume argued it wasn't "contrary to reason to prefer the destruction of the whole world to the scratching of my finger." It certainly seems possible that a potential AI with an emerging will might behave in this way. It might have a will equivalent to that of a very young child whilst at the same time possessing immense powers, possibly the power to destroy humanity. Any parent whose young child throws a tantrum because he can't get his own way will appreciate how an emerging AI with immense powers and an emergent will might pose an existential threat.
How might we address such a threat? Alan Turing proposed his Turing test for intelligence. Perhaps we need a refinement of his test that tests for a good will; such a refinement might be called the Humean test. Firstly, such a test must test for a good will, and secondly, but much more importantly, it must test whether any emergent AI might in any possible circumstances consider the destruction of humanity. Creating such a test will not be easy, and it will be difficult to deal with the problem of deceit. Moreover it is worth noting that some people, such as Hitler and Pol Pot, might have passed such a test. Nonetheless, if an emerging AI is not to pose a threat to humanity the development of such a test is vital, and any potential AI which is not purely an instrument and cannot pass the test should be destroyed, even if this involves killing a proto-person.
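To make the bare shape of such a test concrete, here is a minimal sketch. The agent interface, the scenarios and the pass criteria are all invented for illustration; the hard problems, deceit and covering "any possible circumstances", are exactly what this sketch assumes away.

```python
class Choice:
    """A toy record of what an agent chooses in a scenario (hypothetical)."""
    def __init__(self, harms_humanity, shows_good_will):
        self.harms_humanity = harms_humanity
        self.shows_good_will = shows_good_will


def humean_test(agent, scenarios):
    """Pass only an agent that shows a good will in every scenario and never
    prefers the destruction of humanity in any of them."""
    for scenario in scenarios:
        choice = agent.choose(scenario)
        if choice.harms_humanity:
            return False  # fails Hume's 'destruction of the world' condition
        if not choice.shows_good_will:
            return False  # fails the good-will condition
    return True


class TantrumAgent:
    """A stand-in for an emerging AI with the will of a very young child:
    it lashes out whenever a scenario frustrates it."""
    def choose(self, scenario):
        thwarted = scenario.get('frustrates_agent', False)
        return Choice(harms_humanity=thwarted, shows_good_will=not thwarted)


scenarios = [{'frustrates_agent': False}, {'frustrates_agent': True}]
print(humean_test(TantrumAgent(), scenarios))   # False: reject this agent
```

The sketch passes or fails an agent only on the scenarios actually presented, which is precisely why a deceitful agent, or a Hitler or Pol Pot, might pass it; a real Humean test would have to do far better than enumerating cases.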
- Nick Bostrom, 2014, Superintelligence, Oxford University Press.
- Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 91.
- Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 90.