This blog is concerned with topics in applied philosophy. In particular it is concerned with autonomy, love and other emotions. Comments are most welcome.
Friday, 8 January 2016
Driverless Cars and Applied Philosophy
Sunday, 29 November 2015
Terrorism, Love and Delusion
Wednesday, 11 November 2015
Autonomy and Beneficence Revisited
Tuesday, 27 October 2015
Emerging AI and Existential Threats
AI is much in the news recently. Google's chairman Eric Schmidt believes AI is starting to make real progress, whilst others such as Nick Bostrom believe AI might pose an existential danger to humanity (1). In this posting I want first to question whether any real progress is in fact being made and secondly to examine the potential dangers involved. Before proceeding I must make it clear that I don't deny real AI is feasible; after all, human beings have evolved intelligence. If intelligence can evolve by natural selection then it seems feasible that it can be created by artificial means. However, I believe this will be harder to achieve than many people suppose.
At present computing power is rising fast and algorithms are increasing in complexity, leading to optimism about the emergence of real AI. However, it seems to me that larger, faster computers and more complex algorithms alone are unlikely to lead to real AI. I will argue that genuine intelligence requires a will, and as yet no progress has been made towards endowing AI with a will. Famously, Hume argued that reason is the slave of the passions. Reason, according to Hume, is purely instrumental. It might be thought that better computers and better algorithms ought to be better at reasoning. I would question whether they can reason at all, because I would suggest that reason cannot be separated from the will. Unlike Hume, I would suggest that reason is not the slave of the passions; reason and the will, the passions, are of necessity bound together. In the present situation it seems to me that better computers and better algorithms only mean better instruments to serve our will; they don't reason at all. The output of some computer program may indeed have some form, but this form doesn't have any meaning which is independent of us. The form of its output alone has no more meaning than that of a sand dune sculpted by the wind. However sophisticated computers or algorithms become, if the interpretation of their output depends on human beings then they don't have any genuine intelligence, and as a result I believe it is misleading to attribute AI to such computers or algorithms. Real AI in this posting will mean computers, algorithms or robots which have genuine intelligence. Genuine intelligence requires reasoning independently of human beings, and this reasoning involves having a will.
Let us accept that if some supposed AI doesn't have a will then it doesn't have any genuine intelligence. What then does it mean to have a will? According to Harry Frankfurt,
“The formation of a person’s
will is most fundamentally a matter of his coming to care about certain things,
and of his coming to care about some of them more than others.” (2)
For something to have a will it
must be capable of ‘caring about’ or loving something. If computers, algorithms
or robots are mere instruments or tools, in much the same way as a hammer is,
then they don’t have any will and real AI is no more than a dream. How might we
give a potential AI a will or create the conditions from which a potential AI
will acquire an emergent will? Before trying to answer this question I want to
consider one further question. If something has a will must we regard it as a person? Let us assume Frankfurt is correct in believing that for something to have a will it must be capable of 'caring about' something. Frankfurt questions whether something
"to whom its own condition and activities do not matter in the slightest [can] properly be regarded as a person at all. Perhaps nothing that is entirely indifferent to itself is really a person, regardless of how intelligent or emotional or in other respects similar to persons it may be. There could not be a person of no importance to himself." (3)
Accepting the above means that having a will is essential to being a person. It also suggests that if something has a will it might be regarded as a person. This suggestion has moral implications for AI. Clearly when we switch off our computers we are not committing murder; however, if we switched off a computer or terminated an algorithm which had acquired a will, we would be. I will not pursue this implication further here.
Let us return to the question as to whether it is possible to seed a potential AI with a will or create the conditions in which it might acquire one. If we accept Frankfurt's position then for something to have a will it must satisfy three conditions.
1. It must be able to 'care about' some things and care about some of them more than others.
2. It must 'care about' itself.
3. In order to 'care about' it must be aware of itself and other things.
Before being able to satisfy conditions 1 and 2 a potential AI must first satisfy condition 3. If we program a potential AI to be aware of itself and other things, it seems possible we are only programming the AI to mimic awareness. For this reason it might be preferable to try to create the conditions from which a potential AI might acquire an emergent awareness of itself and other things. How might we set about achieving this? The first step must be to give a potential AI a map of the world it will operate in. Initially it need not understand this map and need only be able to use it to react to the world. Secondly it must be able to use its interactions with the world to refine this map. If its intelligence is to be real then the world it operates in must be our world and the map it creates by refinement must resemble our world. Robots interact more meaningfully with our world than computers do, so perhaps real AI will emerge from robots or robot swarms connected to computers. However, it seems to me that creating a map of the things in our world will not be enough for a potential AI to acquire emergent awareness. For any awareness to emerge it must learn to differentiate how different things in that world react to its actions. Firstly it must learn what it can and cannot change by physical action. Secondly, and more importantly, it must learn to pick out, from amongst those things it cannot change by physical action, the things it can sometimes change simply by changing its own state. A potential AI must learn which things are aware of the potential AI's states, and perhaps by doing so become aware of itself, satisfying the third of the conditions above. Meeting this condition might facilitate the meeting of the first two.
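The speculative process sketched above can be illustrated with a toy model. Everything in the sketch below is hypothetical, a deliberately crude illustration of the three steps (start with a map it merely reacts with, refine it by acting on the world, then pick out the things that respond to mere changes of internal state), not a claim about how real AI would be built.

```python
# A toy illustration, not a real AI technique: every class and name here
# is hypothetical. The "world" contains a rock, which yields to physical
# action, and an observer, which reacts only to the agent's own state.

class ToyWorld:
    def __init__(self):
        self.things = {"rock": 0, "observer": 0}
        self.agent_state = 0

    def observe(self, thing):
        if thing == "observer":
            # an observer's behaviour reflects the agent's internal state
            return self.things[thing] + self.agent_state
        return self.things[thing]

    def apply(self, thing, action):
        if thing == "rock":  # only the rock yields to physical action
            self.things[thing] += action


class PotentialAI:
    def __init__(self, world):
        self.world = world
        self.world_map = {}      # step 1: a map it reacts with, not understands
        self.responsive = set()  # things that seem aware of the agent

    def act_and_refine(self, thing, action):
        """Step 2: act physically and refine the map from the reaction.
        Returns whether physical action could change the thing."""
        before = self.world.observe(thing)
        self.world.apply(thing, action)
        after = self.world.observe(thing)
        self.world_map[thing] = after
        return before != after

    def probe_awareness(self, thing):
        """Step 3: change nothing but the agent's own state and see
        whether the thing changes too, a first hint of mutual awareness."""
        before = self.world.observe(thing)
        self.world.agent_state += 1
        after = self.world.observe(thing)
        if before != after:
            self.responsive.add(thing)
```

Run against this toy world, physical action changes the rock but not the observer, while only the observer shows up under `probe_awareness`; that separation is the differentiation the paragraph above describes.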
For the sake of argument let us assume a potential AI can acquire a will and in the process become a real AI. This might happen by the rather speculative process I sketched above. Bostrom believes AI might be an existential threat to humanity. I am somewhat doubtful whether a real AI would pose such a threat. Any so-called intelligent machine which doesn't have a will is an instrument and does not in itself pose an existential threat to us. Of course the way we use it may threaten us, but the cause of the threat lies in ourselves, in much the same way as with nuclear weapons. However, I do believe the change from a potential AI to a real AI by acquiring a will does pose such a threat. Hume argued it wasn't "contrary to reason to prefer the destruction of the whole world to the scratching of my finger." It certainly seems possible that a potential AI with an emerging will might behave in this way. It might have a will equivalent to that of a very young child whilst at the same time possessing immense powers, possibly the power to destroy humanity. Any parent whose young child throws a tantrum because he can't get his own way will appreciate how an emerging AI with immense powers and an emergent will might pose an existential threat.
How might we address such a threat? Alan Turing proposed his Turing test for intelligence. Perhaps we need a refinement of his test to test for good will; such a refinement might be called the Humean test. Firstly such a test must test for a good will and secondly, but much more importantly, it must test whether any emergent AI might in any possible circumstances consider the destruction of humanity. Creating such a test will not be easy, and it will be difficult to deal with the problem of deceit. Moreover it is worth noting that some people, such as Hitler and Pol Pot, might have passed such a test. Nonetheless, if an emerging AI is not to pose a threat to humanity the development of such a test is vital, and any potential AI which is not purely an instrument and cannot pass the test should be destroyed, even if this involves killing a proto-person.
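The two-stage shape of such a test can be shown with a deliberately simple sketch. Every name below is a hypothetical placeholder; in particular, the predicate standing in for "would consider destroying humanity" assumes away precisely the hard parts, reliable detection and deceit, which the paragraph above flags as unsolved.

```python
# A deliberately simple sketch of the two-stage 'Humean test' proposed
# above. The inputs are hypothetical placeholders: nothing here is a
# real method for detecting good will, and deceit is ignored entirely.

def humean_test(has_good_will, would_destroy_humanity, scenarios):
    """Stage 1: the candidate must display a good will at all.
    Stage 2 (more important): in no offered circumstance may it
    consider the destruction of humanity."""
    if not has_good_will:
        return False
    # fail if ANY possible circumstance tempts it to destroy humanity
    return not any(would_destroy_humanity(s) for s in scenarios)


# A candidate that lashes out when thwarted, like a child's tantrum,
# fails stage 2 even though it passes stage 1.
tantrum = lambda scenario: scenario == "thwarted"
```

As the sketch makes plain, the verdict is only as good as the scenarios offered and the honesty of the candidate's answers, which is why deceit is the central difficulty.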
- Nick Bostrom, 2014, Superintelligence, Oxford University Press.
- Harry Frankfurt, 1988, The Importance of What We Care About, Cambridge University Press, page 91.
- Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 90.
Friday, 18 September 2015
Do Same Sex couples have a greater right to Fertility Treatment?
Wednesday, 2 September 2015
The Philosophy of Rudeness
In this posting I want to examine rudeness. It might be thought that rudeness is of minor concern to society and hence not of any great philosophical interest. In the age of Trump and Brexit I believe rudeness should be of far greater concern to society. For instance, consider the former Chief Constable of Northumbria Police who resigned over alleged rudeness to senior colleagues, see the Guardian. It also seems possible that rude and aggressive behaviour, for rudeness and aggression seem to be linked, might make teaching more difficult. Lastly it appears that someone's creativity and willingness to help others might be damaged by rudeness, see the Psychologist. It follows that there are some reasons why rudeness should be of concern to society. I would suggest that any civilised society must pay attention to the views of all members of that society. Rudeness involves an inattention to the views of others and as a result damages discussion by shifting the focus of our attention from the issues involved to the tone of the discussion. Rudeness means we talk at each other rather than engage in meaningful discussion. In the light of the above I would suggest that any society which accepts a high degree of rudeness is not a civilised society, even if it possesses advanced technology. In this posting I want to consider a different aspect of rudeness: is rudeness linked to immorality? Is rudeness a vice?
What do we mean by rudeness? Rudeness might be defined as a lack of manners or being discourteous. In what follows I won't deal with etiquette and will mainly focus on someone being discourteous. What then do we mean when we say someone acts discourteously? One can't be discourteous to oneself; discourteousness applies to relationships. Someone acts discourteously in his relationships if he focusses solely on his own needs and wishes without considering the needs, views and wishes of others. Such a definition of discourteousness seems too broad. For instance, someone might not consider the needs, views and wishes of others due to ignorance. Rudeness, acting discourteously, might be better defined as knowingly not considering the needs, views and wishes of others. It might be objected that this definition remains too broad, as there is a difference between acting selfishly and acting rudely. My objector might then proceed to suggest that real rudeness means someone not only failing to consider the needs, views and wishes of others but also making explicit his lack of consideration, and perhaps even his contempt for them. In response to my objector, in what follows I will argue that knowing selfishness is a form of rudeness. I would further respond that my objector is really pointing to a more extreme form of rudeness, which might be better defined as a type of arrogance, rather than proposing a different concept. Of course it is possible that a more basic form of rudeness might foster arrogance.
Before proceeding let us be clear what the above definition entails. It must include a lack of consideration for the views and wishes of another and not just his needs. If only needs were involved I could be rude to my dog by not considering his need for exercise. However, the above definition remains inadequate. For instance, I could ignore my sleeping partner's needs, views and wishes, but my lack of consideration would not be a case of rudeness. Let us modify our definition of rudeness: rudeness might be defined as someone knowingly not considering the needs, views and wishes of another whilst the other is aware of this inconsideration.
Accepting the above definition means having a joke at someone else's expense is not being rude, for the joke to be effective one must consider the views of the other. More importantly, accepting the above means that rudeness and morality are linked. Rudeness need not be linked to consequentialism or deontology, but there seems to be a link with virtue ethics. However, differences remain between acting rudely and acting immorally. Morality, very roughly, consists of someone considering the needs of others and acting to meet these needs provided he judges or feels action is appropriate. Acting rudely only involves a lack of consideration. It follows that rude behaviour need not necessarily be immoral behaviour, but rudeness is on the road to immoral behaviour and might be regarded as a minor vice. Let us consider an example. Suppose I knowingly fail to consider ways to get my partner to work when her car has broken down, and she is aware of my lack of consideration. Clearly I have acted rudely. However, whether I have also acted immorally depends on the circumstances. If I had an important doctor's appointment then I have acted rudely but not in an immoral manner. However, if I only wanted to sleep a bit longer, and a little less sleep would not harm me, and I fail to run my partner to work, then I have acted both rudely and in a slightly immoral way. It is also true that behaving in an immoral way towards someone need not be rude behaviour. I can behave in an immoral way when the subject of my bad behaviour is unaware of my behaviour. For instance, a charming sociopath might use his charm to further his own ends without consideration of someone's needs; he may be acting immorally but he is not acting rudely.
I now want to examine the causes of the lack of consideration which seems to be an essential element of rudeness. Firstly, someone might attach great importance to his self. Secondly, he may lack empathy. This second reason might explain why it appears that on average men display greater rudeness than women. In what follows a lack of consideration refers to a knowing lack of consideration when those who are not considered are aware of this lack. Someone's needs will refer to his needs, views and wishes.
The first cause I wish to examine is someone overvaluing his self-importance. Some of the endemic rudeness on Twitter might be partly due to this overvaluation. Such a person, when deciding how to act, focusses solely on his own needs. If someone focusses on his own needs and these needs don't affect others then he is acting prudently rather than rudely. However, if someone focusses on his own needs without any consideration of the needs of others and he makes others aware of his inconsideration, then he acts rudely. If someone always bases his actions on his own self-importance then I would suggest he fails to see others as of equal importance. But his failure has an additional element: he fails to recognise something essential about his own nature, his nature as a social animal. Such a failure damages both the relationships which help foster society and him personally. Such a failure also damages the discourse which fosters society. Rudeness means people talk at each other rather than to each other, as exemplified by many of the replies on Twitter.
The second important cause of rudeness is a lack of empathy. I must make it clear that by empathy I mean associative rather than projective empathy. A sociopath can project himself into the minds of others and understand their feelings. He might use this understanding to experience pleasure in the pain of others. Associative empathy means someone experiences the feelings of others. It seems to me a rude person might have projective empathy but not associative empathy. I should make it clear at this point that I don't believe only having projective empathy necessarily makes someone into a sociopath. It makes him indifferent. It also gives him one of the tools a sociopath needs. I would suggest a lack of associative empathy damages someone as a person, as he lacks an essential element in the makeup of social animals.
I have argued that even if rudeness is not always immoral, it is on the road to immorality. I further argued that rudeness damages a rude person's status as a social animal. I would suggest that for most people being a social animal is a good. It follows that rudeness damages most people and should be regarded as a vice. Rudeness might also be regarded as an epistemic vice, a way of behaving which makes the acquisition of knowledge difficult, due to its close relationship with arrogance. At the beginning of this posting I gave three examples which pointed to rudeness damaging society. What then can be done to combat rudeness? One thing that might be done is for society to become less accepting of rudeness. What is entailed in being less accepting? Less acceptance means not being indifferent to rudeness but pointing out to rude people that their rudeness damages them as social animals. However, less acceptance should simply mean less acceptance and not slip into aggressively challenging rudeness, which might itself become a form of rudeness. Perhaps we should ask someone who is rude to us whether he really meant to be rude. Ask if his sexist remark was really intended or simply bullshit. If such a strategy fails we should ask why he holds such beliefs and try to make him justify them, rather than trying to confront his beliefs directly. Secondly, we must become more prepared to accept that other people are the same sort of creatures as ourselves. We must respect the autonomy of others. This means we must give priority to respecting someone's autonomy before acting beneficently towards him. Indeed, acting to satisfy our perception of someone's needs instead of attempting to satisfy his expressed needs might be seen as a form of rudeness, see woolerscottus. Respecting autonomy means we must be tolerant of persons and their views. However, this toleration should not extend to their attitude towards others if this attitude is a rude one. Sometimes we must be prepared simply to accept that our views and those of others differ and do no more, see practicalethics. Thirdly, I have argued that a lack of associative empathy is one of the root causes of rudeness. It follows that we might combat rudeness by addressing this lack. Unfortunately doing so is not easy; it can't be done simply by increasing awareness or cognition. Michael Slote argues that parental love helps a child develop associative empathy (1), but even if combatting rudeness by increasing parental love is possible it will be a slow process.
- Michael Slote, 2014, A Sentimentalist Theory of the Mind, Oxford University Press, pages 128-134.
Wednesday, 29 July 2015
Work, Automation and Happiness
In a posting on philosophical disquisitions John Danaher wonders whether work makes us happy. Happiness matters to us, so this is an important question. Moreover, as Danaher points out, increasing automation might mean that there will be less work in the future, which adds further importance to the question. In this posting I will argue that work can make us happier but that this depends on what we mean by work. Hannah Arendt makes a distinction between labour and work. According to Arendt we labour to meet our basic biological needs. In this posting I won't be concerned with this basic idea of labour but with the broader concept of work. Perhaps we might try to define work simply as making an effort for some economic reward or the hope of such a reward. Perhaps some people are lucky and enjoy such work, but for many people work so defined is simply a chore which takes up time they could use to enjoy themselves in other ways. Work for many people is simply a job. They work for money to enable them to do the things they really want to; work is instrumental in allowing them to do these things. However, we don't have to define work in this way. A stay-at-home mum works. Someone else might work in his garden simply because doing so brings him pleasure. Work, so defined, has intrinsic value. It would seem all work involves effort. However, we might make an effort for something, in which case work has instrumental value, or we might make an effort at doing something, in which case work has intrinsic value. It follows that work can be defined in two ways: either as making an effort for something, working for, or as making an effort at doing something, working at.
Let us now consider the first definition of work, work defined as making an effort for something. Let us assume that the goods we seek by work could be delivered by automation. Let us further assume that these goods could be shared reasonably equitably. Perhaps in the future the state might introduce a universal basic income (UBI) which would be large enough to allow people to obtain the goods which their income from work previously provided. A guaranteed UBI might only work provided the goods people seek are not subject to excessive inflation. If people want ever bigger cars, houses and ever more exotic holidays a guaranteed UBI might prove insufficient to deliver the goods they seek; it should be noted that in such a context work also might provide insufficient funds for these goods. Such a guaranteed UBI is highly speculative, but for the sake of argument let us assume such a guarantee is both affordable by some future state and can deliver the goods people seek from work. In this situation it might be suggested that, because the things people value can be delivered without work and 'work for something' has no intrinsic value, working would not contribute to people's happiness.
In his posting Danaher considers one argument as to why we should reject the above suggestion. The argument was made initially by Nicholas Carr (1) and depends on three premises. Firstly, it is assumed that the 'flow' state is an important part of human well-being. The idea of flow was made popular by Mihaly Csikszentmihalyi. When someone is in a flow state she is performing an activity in which she is fully immersed, losing any feeling of reflective self-consciousness, and she has an energised focus. This state leads to positive emotions, making someone happy whilst in the state. Secondly, it is assumed that people are bad judges of what will get them into such 'flow' states. Thirdly, it is assumed that working for something sometimes gives people a flow state. It appears to follow that working for something is desirable not only because it delivers the means to achieve the goods we seek but also because it sometimes gives people a flow state which increases their happiness. It appears further to follow that vastly increased automation, leading to large-scale unemployment, would be a bad thing because it would lead to a decrease in many people's happiness, even if they still obtained the goods they had previously obtained by working, because they would experience a decrease in flow states. Other arguments could be made as to why work might contribute to someone's happiness; for instance, the workplace might be conducive to friendship. However, in what follows I will only consider Danaher's argument.
I now want to argue that the above appearances are false. I am prepared to accept the first two premises of the above argument: flow is an important element of human wellbeing, and people generally aren't very good at judging what gets them into a flow state. I am also prepared to accept that some work can sometimes deliver a flow state. When I'm writing I occasionally enter into a flow state, and perhaps someone who is fully engaged in playing some sport might do likewise. In these circumstances someone is working at something which she believes has intrinsic value. Can someone enter into a flow state if she is working for something in a purely instrumental way in a low-skilled job? Let us assume someone works at a job she finds completely uninteresting solely to support her family. In these circumstances achieving flow is not part of her goal. Nonetheless it might be suggested that even in this scenario such a person might sometimes enter into a flow state, meaning her work has some intrinsic value even if she isn't consciously aware of this value. It appears conceivable that in these circumstances working for something has both instrumental and intrinsic value.
Let it be assumed that in some circumstances, when the goods we seek are available without working for them, the instrumental value of work vanishes. Nonetheless, in the light of the above, it might be suggested that even in these circumstances work retains some intrinsic value. Let us accept that work only has intrinsic value when we work at something we care about. In addition, if we work at something we care about it seems highly probable that this work will provide some flow. However, I now want to argue that the suggestion that, if work has no instrumental value and we work at something we don't care about or even dislike, such work might nonetheless retain some intrinsic value, is unsound. Purposeless work is unlikely to provide us with any flow. Let us accept that if we work in a completely aimless fashion at something we don't care about, such work will not result in a flow state. Let us also accept that if work is to provide flow, this work must be goal-orientated and this goal must be something we care about. For instance, someone might work to provide for the children she cares about. Let us now assume that the state provides a basic income so she doesn't have to work to support her children and satisfy her other needs. Let us further assume she continues to work and that her sole goal is to obtain a flow state in order to increase her happiness. All the things she cares about can be provided by automation, and she finds the work she undertakes dreary. Nonetheless she persists in working with the goal of achieving flow in order to increase her happiness. I will now argue by analogy that such work would not result in a flow state. I would suggest that just as we cannot choose to be in love, love is constrained, so we cannot just choose to be in a flow state. Love just comes to us, and similarly a flow state only comes to us when we work at what we love or care about. Accepting this suggestion means that if automation removes the need to work for the goods we care about, then continuing to work solely to obtain some flow is impossible.
However, even if the above is accepted, it might be argued that working still retains some value. Some people might find, if they have no work, that they have an unbearable sense of simply being, simply existing. It seems probable such a state would make them unhappy. Work doesn't simply have value because any resultant flow state makes people happy; work also has value because it helps to stop people becoming unhappy. It appears to follow that if automation removes the need for work then it should be resisted. However, if we accept the above argument it seems we must also accept that someone might work at a boring repetitive job in order not to be bored. Such an implication seems nonsensical. Nonetheless it remains true that if automation removes the need to work for something it can also lead to boredom and a resultant decrease in happiness. Such a scenario is both possible and important. In response I would argue that automation requires a broader focus in education. Automation might mean education should focus less on educating people to work for something and more on educating them so they are enabled to work at something they love or care about. Increasing automation might lead to an increased importance of the humanities. Universities and schools might need to give greater emphasis to the humanities and life-long learning. However, caution is needed when considering changes in education; we mustn't be over-elitist, music, crafts and sport all matter.