Reading C. Daniel Batson’s ‘Moral Masquerades: Experimental exploration of the nature of moral motivation’ Phenomenology and the Cognitive Sciences (2008) 7, pp. 51-66.
This is a very interesting study of moral choices and of what its author, C. Daniel Batson, refers to as ‘moral hypocrisy’. The abstract is here:
‘Why do people act morally—when they do? Moral philosophers and psychologists often assume that acting morally in the absence of incentives or sanctions is a product of a desire to uphold one or another moral principle (e.g., fairness). This form of motivation might be called moral integrity because the goal is to actually be moral. In a series of experiments designed to explore the nature of moral motivation, colleagues and I have found little evidence of moral integrity. We have found considerable evidence of a different form of moral motivation, moral hypocrisy. The goal of moral hypocrisy is to appear moral yet, if possible, avoid the cost of being moral. To fully reach the goal of moral hypocrisy requires self-deception, and we have found evidence of that as well. Strengthening moral integrity is difficult. Even effects of moral perspective taking—imagining yourself in the place of the other (as recommended by the Golden Rule)—appear limited, further contributing to the moral masquerade.’
Batson created a study in which participants are asked to choose which of two tasks to assign to themselves and which to assign to another participant they will never meet (who is in fact fictitious). They are left alone to make the decision; they are not watched. For each correct answer on one of the tasks, the positive one, the participant earns an entry in a raffle for a $30 prize. The other task is dull and carries no prize. The participant is then asked to rate the morality of their choice. In the next stage, participants are provided with a coin which they may, if they wish, flip to make the decision. This is then replaced by a coin with a sticky note on each side identifying which participant (themselves or the fictitious other) is assigned which task, which again they are free to use or not use. There are various other stages of less interest here. In the final stage, a mirror is introduced, and participants make their decision in front of it.
You can see the results here:
Batson’s ‘moral hypocrisy’ refers to what he sees as a desire to appear moral, to others and to oneself, without bearing the costs of actually acting morally, in circumstances where acting in discordance with our morals carries no penalty. He draws on evolutionary psychology, and on the story of the Ring of Gyges put to Socrates by Glaucon in Plato’s Republic, in which a shepherd, wearing a ring that makes him invisible, kills the king and commandeers the queen and the throne. Batson contends that, given half the chance, we’d just as soon act immorally. Moreover, and this is where the hypocrisy enters, when acting immorally or amorally we’d prefer to see our actions as moral, regardless of whether they are. Why would we care whether we see our own actions as moral or not? Why the self-deception? He notes that ‘Most people are adept at moral rationalization, at justifying to themselves—if not to others—why a situation that benefits them or those they care about does not violate their principles. […] Why storing one’s nuclear waste in someone’s backyard or using a disproportionate amount of the earth’s natural resources is fair. Why the killing of innocents by one’s own side is a regrettable necessity, whereas such killing by the other side is an atrocity.’ (52)
Batson provides something of an answer to these questions via the economist Robert Frank and the biologist Robert Trivers. Frank argued that although it might seem to make economic sense to cheat while appearing committed to moral principles and trustworthiness, our ability, perhaps developed in some kind of evolutionary arms race, to identify freeloaders and expose them to others means it makes more sense to actually commit to morals and be trustworthy. Trivers adds the idea that if I can deceive myself into believing I’m acting morally, I have a greater chance of deceiving others. (53) The most effective strategy, then, is not simply to act morally: we’re better off acting morally only when we’re likely to be found out and penalised for not doing so, and, if and when we do act immorally, we’re best off believing we’re acting morally, which makes it less likely that others will catch us out.
Without going into figures (see table), what Batson found is that initially most people assigned themselves the positive task, and they rated themselves pretty poorly for doing so. Of those who then chose to use the coin, most still ended up assigning themselves the positive task. Interestingly, however, they gave themselves a higher moral rating for doing so, for going to the effort of using the coin, even though many of them either ignored its verdict or flipped again until assigned the positive task. The sticky notes didn’t change this very much. Again interestingly, when the participants made the decision with the coin in front of a mirror, the rate at which they selected the positive task fell to 50%, in accord with chance. Somehow, the mirror convinced them to act morally.
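As an aside, the claim that a 50% rate is ‘in accord with chance’ is the kind of thing an exact binomial test checks: does the observed number of self-serving assignments differ from what a fair coin would produce? The sketch below uses hypothetical counts (18 of 20 participants taking the positive task), not Batson’s actual data, purely to illustrate the calculation.

```python
from math import comb

def binom_pvalue_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: probability, under chance level p,
    of any outcome no more likely than the observed count k out of n."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Sum the probabilities of all outcomes at least as extreme (i.e. no
    # more probable) than the observed one; the tiny tolerance guards
    # against float rounding when p != 0.5.
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# Hypothetical counts, not Batson's data: 18 of 20 assign themselves
# the positive task. A fair coin would very rarely produce this.
print(round(binom_pvalue_two_sided(18, 20), 4))  # → 0.0004

# 10 of 20, by contrast, is exactly what chance predicts.
print(round(binom_pvalue_two_sided(10, 20), 4))  # → 1.0
```

A tiny p-value says the assignments were not left to the coin; a rate near 50%, as in the mirror condition, is indistinguishable from genuinely honouring the flip.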
The coin and the mirror in this experiment make for useful analogies. The coin is a kind of deferral of morals, of guilt or responsibility. By going through the motions of using a coin, even when they ignore its verdict, participants are able to believe they’ve done the right thing. This says something about the danger of cultural norms and institutions, belief systems, and governmental and inter-governmental bodies that can create a similar deferral of responsibility. Would a citizen believe their country is more moral for being a signatory to, for example, the Convention Relating to the Status of Refugees than a country that is not, regardless of how extensive their country’s attempts to deny asylum claims are? Are we so satisfied with the infrastructure, the rules, that we needn’t follow them?
And then the mirror. The mirror is the witness, and this is interesting for all kinds of reasons, but I’m concerned mostly with the dehumanising that happened during the Holocaust, the vile, and still largely denied, treatment of Australian Aboriginals, and what continues to happen in asymmetrical warfare, from Vietnam to Iraq and Afghanistan to Israel–Palestine, where a powerful state has to justify what it is doing to a powerless one. Is this dehumanising the removal of the mirror, the witness? While there are clearly those who are able to act immorally with an audience, and some who even relish the opportunity, for others it may be far easier to sleep at night once the mirror is smashed and the witness vilified and made to seem less than human. There is also the element of justifying one’s actions to a wider community to consider, however.
Finally, then: is it possible for someone to hold themselves to the same standards to which they hold others?