Showing posts with label Virtual Virtues. Show all posts

Thursday, 15 May 2014

Virtual Morality


Mike LaBossiere asks whether we should behave morally towards virtual beings. He wonders “whether or not we can have moral obligations to such beings. Or, to put it another way, is it possible for there to be virtually virtuous acts regarding such virtual entities or not”, see talkingphilosophy. Specifically, LaBossiere considers acting virtuously towards Dogmeat in the violent video game Fallout 3. He believes we should act virtuously in such games. Intuitively such a position seems ridiculous, for how can we possibly harm a string of ones and zeros? Nonetheless, in what follows I will argue that LaBossiere is basically correct: whilst we have no duties to virtual beings, we should nonetheless treat them in a moral fashion.

Let us accept that we cannot harm Dogmeat but that we might harm others or ourselves by playing Fallout 3. LaBossiere bases his argument for virtually virtuous acts on Kant’s argument for treating animals well. Kant argued that as animals are not rational beings we have no duties to them. However, he held that someone “must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men.” It might then be argued by analogy that someone who enjoys violent video games might be more prone to violence in real life. It should be noted that this argument draws on Kant’s insight rather than on Kantian morality, because it is basically a consequentialist one. There appears to be some evidence that people who are cruel to animals are also cruel to people, but unfortunately for the argument by analogy there is little conclusive evidence that people who play violent video games are more violent in real life.

I will now argue that if we don’t act morally towards virtual beings we damage ourselves. We damage our character by splitting it: we act morally in one domain and without any moral scruples in another. It might be objected that the virtual domain simply isn’t of moral concern, because violence in a virtual world causes no destruction in the real world. Nonetheless, we cannot explain violence in the virtual world without invoking our concept of real violence; one of the main differences between playing chess and playing Fallout 3 depends on that concept. I would suggest that we cannot explain acting morally towards virtual beings without appealing to our moral sentiments, and these sentiments are real sentiments, not virtual ones.

I will argue that our underlying moral sentiments are identical in the real and virtual worlds. Let us assume someone behaves in the virtual world in a way that we would regard as wrong in the real world. It seems to me that in the virtual world he overrides or ignores his natural moral sentiments, such as empathy. It might be objected that acting in the virtual world doesn’t merely involve no virtual moral sentiments; it involves no moral sentiments at all. However, I would argue that the ideas of rescuing, loving or punishing someone in the virtual world are nonsensical without reference to our natural moral sentiments. If my argument is accepted, then someone who acts badly in the virtual world must override or ignore these sentiments. It seems to me that there are two main dangers associated with doing so.

Firstly, someone who overrides these sentiments in the virtual world might find herself inadvertently overriding them in the real world, and such overriding in the real world might cause harm to others. I will not pursue this danger further, as the evidence that such games cause harm is, as noted above, inconclusive. Let us assume instead that someone is able to override these sentiments in the virtual world but does not override them in the real world. It might then be suggested that this overriding in the virtual world does no harm. I will now argue that it might still be harmful; it might harm the agent herself.

I will pursue an argument I have previously used with respect to pornography, see pornography and the corrosion of character. Frankfurt argues that,
“the health of the will is to be unified and in this sense wholehearted. A person is volitionally robust when he is wholehearted in his higher order attitudes and inclinations, in his preferences and decisions, and in other movements of the will.” (1).
Now if someone behaves in the virtual world in a way we would regard as wrong, whilst behaving morally in the real world, she splits the way she responds to her natural moral sentiments across these different worlds. I would suggest this split threatens the unity of her will, the health of her will, and as a result damages her identity, her character.

Two objections might be raised to the above. Firstly, it might be objected that acting badly in video games whilst acting well in real life, even if inconsistent, does not split the player’s will. Secondly, my objector might argue that even if the player’s will is split, her reason should allow her to manage this split without any harm to her character. Let us consider the first objection. My objector might argue that someone’s will is determined by what she cares about and finds important. He might then suggest that playing video games, like a liking for ice cream, is something someone finds enjoyable but not something she cares about or finds important. In response I would suggest that if someone continually buys ice cream, her liking for ice cream plays a part, albeit a small one, in the creation of her identity. Likewise, if someone continually plays video games, these games form part of her identity to some degree. One of my grandsons suffers from Asperger’s syndrome and the playing of video games is definitely part of his identity. My objector might now point out that many things matter to us besides moral constraints. He might proceed to argue that reason may allow the player to manage this split: reason might allow someone to see that it is appropriate to disregard her moral sentiments in some situations and inappropriate in others. If my objector is correct, then because we are able to manage this split it will not corrode our character. I am not sure this split can easily be managed, because we do not always apply reason; I am, however, prepared to accept the possibility.

I will now present a second argument as to why acting badly towards virtual creatures might corrode our character. Let us accept that for someone to be a person of any sort she must care about something. Let us also accept that a person must have some values; I have argued this means she must care about what she cares about. I would suggest such meta-caring must involve feelings of pride and shame, see Helm (2). I would then suggest that how someone treats Dogmeat must involve, to some small degree, feelings of pride and shame. If she treats Dogmeat badly she will feel some shame. This shame is not anxiety about social disqualification; it is anxiety about harming the things she cares about, in this case her meta-cares, see two types of shame. Prima facie this sort of shame corrodes someone’s character.


It might be thought that the playing of violent video games and the possible splitting of character is of no practical importance. However, I believe this violence and splitting of character matters. Consumers of pornography might also harm their character by splitting it. Moreover, soldiers kill in war but not at home; this killing leads to a similar splitting of character to that described above and may lead to moral injuries, see aeon.

  1. Harry Frankfurt, 1999, Necessity, Volition, and Love, Cambridge University Press, page 100.
  2. Bennett Helm, 2010, Love, Friendship, & the Self, Oxford University Press, page 128.
