This blog is concerned with most topics in applied philosophy, in particular with autonomy, love and other emotions. Comments are most welcome.
Tuesday, 2 May 2017
Widespread Moral Enhancement
Monday, 10 April 2017
Psychopaths and Moral Enhancement
Michael Cook questions whether psychopaths
should be morally bio-enhanced. This posting will examine his
question, and in attempting to answer it I will address several related
questions. A psychopath might be roughly defined as someone who
lacks feelings for others and has no remorse about any of his actions, past or
present. A psychopath is someone who, even if he understands moral
requirements, does not accept those requirements. In this posting it will be
assumed that moral bio-enhancement should be focussed on this non-acceptance. The
first related question I want to address is whether a psychopath’s non-acceptance
of moral norms is a form of disability. Secondly, I will consider
whether any moral bio-enhancement of psychopaths should be mandatory; I will
argue it shouldn’t. Thirdly, I will consider whether we have a moral duty to
offer moral bio-enhancement to someone convicted of a crime due to his non-acceptance
of moral norms; I will argue we do. Lastly, I will suggest that if it is
permissible to offer moral bio-enhancement to psychopaths, there is no reason
not to permit moral bio-enhancement more generally.
Let us accept that if someone suffers from a disability and
we can mitigate its effects, then we have a prima facie duty
to do so, provided the costs of doing so are not too onerous. Let
us also accept that some form of safe moral bio-enhancement becomes possible;
no such safe enhancement is available at present. It appears to follow in
such circumstances, provided that a psychopath’s failure to accept moral norms
is a form of disability, that we have a prima facie duty to mitigate the effects
of this disability. It further appears to follow that if we can only mitigate
this disability by bio-enhancement, then we have a duty to do so, provided such
enhancement is safe. Is a psychopath’s non-acceptance of moral norms a disability?
Most psychopaths are able to understand moral requirements, so their failure
to act in accordance with these requirements is not caused by an inability to understand
moral norms. It might appear to follow that a psychopath’s non-acceptance of
moral norms is not a disability. This appearance is too simplistic. Let us
accept that most psychopaths can understand moral norms even if they don’t
accept them. Perhaps this lack of acceptance is due to an
inability to feel the force of moral norms, and this inability to feel
should be classed as a disability. It follows that a psychopath’s failure to
accept moral norms might be regarded as a disability.
Does this moral disability matter? I will now argue that whether
it matters depends on the context. It has been suggested that some CEOs of
large companies have psychopathic tendencies. Having psychopathic tendencies
might be seen as enhancing by a CEO, whilst the same tendencies might be seen as
a disability by someone whom they lead to being imprisoned for some crime. I
argued above that if someone suffers from a disability and we can mitigate
its effects, then we have a moral duty to do so, provided the
costs of doing so are not too onerous. It follows that if a psychopath
lives in circumstances in which his condition might be classed as a disability, he
should be morally bio-enhanced. This enhancement should only take place subject
to the proviso that the means used are safe and the costs involved aren’t too
onerous.
The above conclusion needs some clarification. A psychopath
who is the CEO of a large company might not want to be morally enhanced even if
his condition disables him in some social contexts. I would suggest that we
only have a duty to offer moral enhancement to psychopaths. It might be
objected that my suggestion is too weak. My objector might point out that some
psychopaths damage society and other people. He might proceed to argue that for
such people moral enhancement should be mandatory rather than voluntary, due to
the need to protect society. I accept that we need to protect people and
society from psychopaths, but I do not accept that we must do so by means of mandatory
biomedical moral enhancement. We can protect society from those psychopaths who
harm it by restricting their freedom. Let us assume there is a safe biomedical
form of enhancement which prevents psychopaths from committing crimes due to
their condition. My objector might now argue that mandatory moral
bio-enhancement is both a cheaper and a more humane way of treating psychopaths
who have committed crimes than detention, and that it would
be better for both psychopaths and society.
I would reject such an argument, which could easily be
extended to include paedophiles. Let us accept that most psychopaths retain their
autonomy. Unfortunately, whilst exercising their autonomy some psychopaths damage
society. My objector wants to limit the damage done to society by removing some
of a psychopath’s capacity for autonomy. Is it possible to remove some of
someone’s capacity for autonomy? We can of course restrict the exercise of
someone’s autonomy, but this is not the same as removing some of his capacity
for autonomous action. I would suggest that we should limit the damage a psychopath
does to society by limiting his ability to exercise his autonomy rather than by
modifying his capacity for autonomous action. Some might question whether there
is a meaningful difference between these two approaches. I now want to argue
there is. If someone’s ability to make autonomous decisions is modified, then
he is changed as a person. If someone’s ability to exercise his autonomy is
removed, then he is not changed as a person, even though the exercise of his
will is frustrated. Does the difference between changing someone as a person
and frustrating his will matter? If we change someone as a person, we are treating
him simply as a thing. We are treating him in much the same way as something we
own and can do with as we please. Psychopaths may differ from most of us,
but they are still human beings and should be treated as such; they should not
be treated in the same way as something we own, or as an animal.
If we frustrate a psychopath’s will by detaining him, we
are not treating him as something we own but merely protecting ourselves. We
are still accepting him as a person, albeit a damaged one. In the light of
the above I would suggest that the mandatory moral bio-enhancement of
psychopaths would be wrong. I would also suggest that psychopaths should be
offered voluntary moral bio-enhancement. It seems probable that most psychopaths
would accept such enhancement on a voluntary basis if the alternative might be
compulsory detention. Accepting the above would mean that we are still respecting
the autonomy of those psychopaths who need to be detained.
I have argued that we should offer voluntary moral
bio-enhancement to psychopaths, but it is feasible that exactly the same form
of enhancement might be offered to people in general. Prima facie, such an enhancement would not be
regarded as correcting some disability. It might then be argued that because
such enhancement is not correcting any disability, it cannot be argued by
analogy that a more general moral bio-enhancement is desirable. I would reject
such an argument because I don’t believe the prima facie assumption stands up to close
examination. Ingmar Persson and Julian Savulescu suggest that we are unfit to face
the future because our morality has not developed enough to permit us to cope with technological
progress (1). What exactly does unfit mean? I would suggest that being unfit means
we are unable to counter some of the dangers created by our technology. If we
are unable to do something in some circumstances, then we have an inability; in
these circumstances we have a disability. It is conceivable that prior to our
most recent technological advances our morality was fit for purpose. It might
be argued that our morality remains fit for purpose but that these advances have
made it difficult for us to accept the full implications of our moral norms,
disabling us in much the same way psychopaths are disabled. It follows that the
prima facie assumption that a more general moral enhancement by biomedical
means should not be regarded as correcting some disability is unsound. It might
be concluded that if technological changes make our morality unfit for our
purposes by morally disabling people, then it can be argued by analogy that a more
general moral enhancement by biomedical means is desirable. It might be
objected that this conclusion is not the only option available in these
circumstances; we might try to change our current circumstances. My objector
might suggest that instead of a more general moral enhancement we should reject
our most recent technological advances and seek to return to circumstances in
which we accept the norms of our evolved morality. Such a suggestion seems
impractical for two reasons. First, once the genie is out of the bottle it is
hard to put it back in. Secondly, I am doubtful whether our morality was ever fit for
purpose once we ceased being hunter-gatherers.
We live in a dangerous
world; provided safe moral bio-enhancement becomes available, should such
enhancement be mandatory? In the light of the dangers we face such an option
seems an attractive one, but I would somewhat reluctantly reject it.
Mandatory moral bio-enhancement would damage our autonomy. Our autonomy forms
the basis of our being moral agents, and damaging our agency would also damage
our moral systems. If safe moral bio-enhancement becomes available, then it
should be encouraged, perhaps subsidised, but it should remain voluntary.
- 1. Ingmar Persson & Julian Savulescu, 2012, Unfit for the Future, Oxford University Press.
Tuesday, 14 March 2017
Automation, Work and Education
Wednesday, 22 February 2017
Sex with Robots
In the near future it seems probable that some people will
have sex with robots; see the rise of the love
droids. In this posting I will discuss some of the problems this possibility
raises. I will divide my discussion into two parts. For the most part my
discussion will consider sex with robots which are simply machines, before
moving on, much more fancifully, to discussing sex with robots which might
be considered as persons.
Let us consider someone having sex with a robot which isn’t
a person and is simply a machine. Human beings have created objects to be used for
sexual purposes, such as vibrators and other sex toys. If a robot isn’t a person,
then it might appear that someone having sex with a robot is unproblematic in
much the same way as the use of these artefacts. I now want to argue that
this appearance is false. But before making my argument I want to consider the
nature of sex. Sex among humans isn’t simply a matter of reproduction. Human
beings enjoy sex. Neither is this enjoyment a purely mechanical thing.
According to Robert Nozick,
“Sex is not simply a matter of frictional force. The
excitement comes largely in how we interpret the situation and how we perceive
the connection to the other. Even in masturbatory fantasy, people dwell upon
their actions with others; they do not get excited by thinking of themselves
whilst masturbating.” (1)
If we accept Nozick’s view, what does having sex with a
robot really mean to the person having it? Provided a robot has been supplied with
the appropriate genitalia, would someone want to have sex with it? I would
suggest that in many cases he would not. Let us assume that a robot has the
appropriate genitalia, four legs, one arm and several detachable eyes. I would
suggest very few people would want to have sex with such a machine. Nozick
argues that even when masturbating someone is imagining having sex with another person,
and I would suggest much the same applies to having sex with a robot. If
someone has sex with a robot, he would want it to look like a beautiful person
because he is imagining having sex with such a person.
What are the implications of accepting the importance of such
imagining? First, I would suggest having sex with a robot is just an enhanced
form of masturbation. Masturbation isn’t wrong because it doesn’t harm others.
Having sex with a robot which is purely a machine doesn’t harm others either, and so
by analogy also isn’t wrong. Indeed, in some circumstances masturbation might
be an acceptable choice for those who are physically or emotionally
incapacitated, and perhaps also for those who are incarcerated. However, even if
we accept the above, masturbation isn’t ideal and neither would be sex with a
robot. Someone having imaginary sex with a person is having inferior sex
because what he desires is real sex.
I have argued that the first reason why someone might want
to have sex with a robot is that he cannot have sex with another person, and
that there is nothing wrong with his actions. Anyone having sex with a robot
knows he cannot harm the robot. This gives rise to a second reason why someone
might want to have sex with a robot. Someone might know that the type of sexual
activity he wants to indulge in might be harmful to another human being, and
because he knows he cannot harm a robot he prefers to indulge in this activity
with a robot. Does acting on such a preference matter, given that after all he isn’t
harming anyone else? Kant argued we shouldn’t be cruel to animals as this might
make us cruel to human beings. Might it be, then, that if someone engages in such sexual
activity with a robot, this activity makes him more likely to engage
in harmful sexual acts with other human beings? At present there is no conclusive evidence to
support Kant’s argument that if someone is cruel to animals, this cruelty
makes him more likely to be cruel to other people. If this is so, it seems
doubtful that someone who engages in such sexual activity with a robot would
thereby become more likely to do so with another human being. This is
an empirical question and cannot be settled by philosophical analysis. However,
someone engaging in sex with a robot which would be harmful to a human being
might harm himself. I have previously argued that for the users of pornography
there is a split between fantasy and reality, see wooler.scottus.
I further argued, in the case of sexual practices which might harm others, that
the maintenance of the split between fantasy and reality is absolutely
essential. I have argued above that someone having sex with a robot imagines he
is having sex with a person. It follows that for someone engaging in sex with a
robot, in a way which might harm another human being, the maintenance of the split
between fantasy and reality is also essential. I further argued that if someone
uses pornography, this split threatens the unity of his will, which is
damaging to his identity. It follows that someone engaging in sex with a robot,
in a way which would be harmful to a human being, might harm himself by damaging his
identity.
Some people assume that at some time in the future some robots
might become persons. I am extremely sceptical about this possibility, but
nonetheless I will now consider some of the problems of someone having sex with
such a robot. However, before I do so I will question whether anyone would want
sex with such a robot. Let us accept that Nozick is correct in his assertion that
“sex is not simply a matter of frictional force. The excitement comes largely
in how we interpret the situation and how we perceive the connection to the
other.” How do we perceive the connection to a robot which is also a person? I
suggested above that a robot can take many forms. Would anyone want to have sex
with a robot with four legs, one arm, several detachable eyes and appropriate
genitalia, even if it could be considered as a person? Persons are partly
defined by the actions they are capable of enacting, and these actions are
partly defined by their bodies’ capabilities. Robots can have very different
bodies from us. A robot with a different body structure might be capable of
very different actions to us; such a robot, even if it is considered as a person,
might be a very different sort of person to the sort we are. The same might also
be true of a robot with a similar structure which is constructed from different
materials. If someone or something is very different to us, then the connection
between us and that someone or something becomes tenuous. Would someone want to
have sex with a robot with which he had only a tenuous connection? I doubt it. Of
course someone might want to have sex with such a robot provided it looked like
a beautiful human being. But if this is so, isn’t he really imagining having sex
with a person, so that the problems associated with having sex with a robot which is
purely a machine once again become relevant?
In conclusion, I have argued that someone would not harm others by having sex with a robot and that his actions would not be morally wrong. However, I argued that whilst it might not be wrong to have sex with a robot which is purely a machine, it might nonetheless be damaging to the user’s identity, in much the same way as pornography, by splitting his character. Lastly, I questioned whether anyone would really want to have sex with a robot which might be considered as a person.
- 1. Robert Nozick, 1989, The Examined Life, Touchstone, page 61.
Monday, 23 January 2017
Robots and Persons
Monday, 21 November 2016
Cryonic Preservation and Physician Assisted Suicide
- Type 1 CP will be defined as the preservation by freezing of a dead body, in the hope that a cure for the disease from which that person died becomes available in the future and that he may then be resuscitated and cured.
- Type 2 CP will be defined as the preservation by freezing of someone’s body whilst he is alive, in the hope that a cure for the disease he suffers from becomes available in the future and that he may then be resuscitated and cured.
Tuesday, 8 November 2016
Nussbaum, Transitional Anger and Unconditional Forgiveness
2. Martha Nussbaum, 2016, Anger and Forgiveness, Oxford University Press, Chapter 3.
3. Nussbaum, chapter 3.
5. List of Griswold’s conditions as outlined by Nussbaum.
· Acknowledge she was the responsible agent. Repudiate her deed (by acknowledging its wrongness) and herself as the cause.
· Express regret to the injured at having caused this particular injury to her.
· Commit to becoming a better sort of person who does not commit injury, and show this commitment through deeds as well as words.
· Show how she understands, from the injured person’s perspective, the damage done by the injury. Offer a narrative accounting for how she came to do the wrong, how the wrongdoing does not express the totality of the person, and how she became worthy of approbation.
6. Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 51.
7. Michael S. Brady, 2013, Emotional Insight: The Epistemic Role of Emotional Experience, Oxford University Press.
8. Martha C. Nussbaum, 2015, Transitional Anger, Journal of the American Philosophical Association, page 54.