Why we cheat

Hank Rothgerber on appearing moral while avoiding the costs of actually being moral.

11 October 2023

My high school yearbook quote from 1988 reads: 'Scandal has already smeared baseball, football, and basketball. The only sports we can still trust are chess contests and marble tournaments.' – New York Times. Why had I picked out this particular comment?

Partly it was a protest against the jock culture prevalent at my school, which led most of my peers to favour classic rock lyrics for their own yearbook entries. And partly it was identity signalling: I was an aspiring chess master with an interest in ethics, a curiosity cultivated as a student in a social justice class that masqueraded as a religion requirement. I had also taken the first high school philosophy class offered in my state earlier in my graduating year, and along with some friends had formed a philosophy club that we thought terribly cool.

I don't follow marble tournaments, but the chess allusion in my yearbook quote now sounds quaint, given the recent controversy between former world champion Magnus Carlsen and American GM Hans Niemann. In an unprecedented and shocking gesture, Carlsen resigned in protest in a tournament after one move against his American opponent, whom Carlsen suspected of prior cheating. Although the allegations that Niemann cheated in a face-to-face tournament game against Carlsen (or anyone else for that matter) haven't been substantiated, Niemann has admitted to cheating in online games. 

You might attribute Niemann's unprincipled behaviour to the financial incentives of winning online events, or to the long-term benefit to his brand of gaining fame from online success. But there's a conundrum.

Niemann isn't the only person who cheats at internet chess. In fact, the popular website chess.com suspends about 500 accounts a day for using chess engines, and predicted it would have closed one million accounts by mid-2023. The vast majority of these are not titled players competing for cash prizes; they are enthusiasts competing against the likes of me. I did eventually become a U.S. national chess master, and I receive almost weekly updates from the site informing me that one of my recent opponents has been suspected of cheating.

Before this fuels any further negative stereotypes about chess players, similar questions can be asked of aficionados in other domains. Consider the curious case of Wordle, an online puzzle game owned by the New York Times, in which you try to guess a concealed five-letter word in as few tries as possible after receiving feedback on each guess.

On the surface, Wordlers seem a respectable crowd interested in testing their sleuthing skills against friends and family. And yet, one study found that 14 per cent of Wordle players admitted to cheating. Online searches for Wordle answers increased dramatically from December 2021 to February 2022, mapping the rise in the game's popularity.  

Why would people cheat at trivial online games not tied to any financial incentives or other tangible rewards? Do they believe they are better chess players or puzzlers, after beating hapless human players or a computer? Does cheating simply make people feel good?

The stories we tell ourselves

My professional interest in studying right from wrong took a big hit when I floundered in a college ethics course – the finer points of utilitarianism vs. deontology did not hold my interest as much as I had expected. I nevertheless became a psychologist drawn to ethics, but from a cognitive behavioural perspective. How is it that we can do things that are morally questionable and still convince ourselves that we are good?

Much of my work has examined how people navigate through what is called the meat paradox – that individuals morally care for animals and wish them no harm yet simultaneously eat them as food. The experience of the meat paradox is one of cognitive dissonance – we are pulled in two directions because when we eat meat a value or belief we hold (the importance of treating animals humanely) is contradicted by our behaviour.

Although some change their diet, many meat eaters resort to psychological strategies that allow them to keep eating meat. Most of these are rationalisations and justifications that explain away the troubling behaviour. I have identified no fewer than fourteen such strategies produced by the meat-motivated mind. We are quite adept at using motivated reasoning and telling ourselves stories that make us appear moral, and much of this can be applied to what seems like more straightforward dishonesty, like cheating at chess or Wordle.

In general terms, cheating brings us personal gain but at the risk of damaging our individual reputation. There is abundant social science evidence that we approach decisions laden with opportunities for cheating as moral hypocrites – we are motivated to appear moral, while if possible, avoiding the costs of actually being moral. 

According to the pioneering moral psychologist C. Daniel Batson, moral hypocrisy is not simply about persuading others of our morality. It is internally driven, guiding us to convince ourselves that we are good, honourable people while still reaping the rewards of being selfish. From this lens, life is a constant struggle between seeking selfish opportunities and maintaining a desirable self-image as moral.  

Convincing ourselves that what we are about to do or have done is moral when it really isn't requires self-deception: the ability to fool ourselves (and maybe others) about some threatening, self-implicating reality that we would rather not accept. Self-deception may emerge in the most private endeavours, as when a person ignores signs that their partner may be unfaithful. Or in more public, professional spheres, as when politicians and physicians believe that contributions from influence groups don't affect their judgments, or researchers tell themselves that funding from private sources doesn't impact their work.

So, do people in part cheat at games like chess or Wordle because they actually dupe themselves into believing that their positive performance reflects their aptitude, and that praise from others is deserved? In one experiment by Zoe Chance and colleagues, participants who had scored higher on a math quiz because they were given an opportunity to view the answer key at the bottom of the page greatly overestimated their future performance on a math test in which it was clear the answer key would not be available. They started believing their exaggerated performance was a reflection of their true skill. While fooling others, we come to believe our own dishonesty!

When the stakes get higher

Cheating may also be so common in trivial contests such as online chess or Wordle precisely because the stakes seem so low. People can act dishonestly and not consider themselves cheaters – it feels like such inconsequential behaviour, without any real victims. Consider the 'victimless' crime of sharing passwords for streaming services, committed by approximately 73 million subscribers worldwide. The behaviour is so normalised it hardly seems troublesome – it's like letting a neighbour borrow your lawn mower. Yet password sharing costs subscription services $2.3 billion in annual lost membership revenue, with Netflix alone losing $790 million.

In psychological studies, what we can do as researchers is vary how high the stakes are. Along with several colleagues, Duke University Professor Dan Ariely has created a procedure for measuring how dishonest people are when it comes to reporting performance linked with monetary rewards. 

In the typical study, participants receive a sheet of paper with a series of matrices on it. Each matrix contains 12 numbers, such as 1.69, 4.67 and 5.82. Given five minutes, the task is to find the two numbers within each matrix that add up to ten. Every successful find is rewarded with $0.50 at the end of the experiment.

When time has expired, control participants hand their answers to the experimenter, who checks their work and pays them the appropriate sum. In the experimental condition, wiggle room is introduced. Instead of handing one's answer to the experimenter, participants are asked to count their own number of correct answers, shred their work, and report their score to the experimenter.

Although it is impossible in this method to know whether any single individual is cheating, researchers can infer how rampant cheating is for the entire group. Because participants are randomly assigned to experimental and control groups, there is no reason to think that one group as a whole should be better at the matrix task. Therefore, greater reported performance for the experimental group indicates cheating, and the bigger the discrepancy between the groups' overall scores, the more cheating has occurred.  

It turns out that cheating is common in this paradigm. On average, members of the control group identified about four successful solutions during the allotted five minutes. Members of the experimental group reported solving about six matrices correctly – a 50 per cent increase.
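The logic of that group-level inference can be sketched in a few lines of simulation. All the parameters below are hypothetical, chosen only to mirror the averages reported above (roughly four genuine solutions, six claimed):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_group(n, true_mean, inflation):
    """Simulate self-reported matrix scores for n participants.

    true_mean: average number of matrices genuinely solved.
    inflation: extra matrices claimed when worksheets are shredded and
               scores are self-reported (0 for the control group).
    """
    return [max(0, round(random.gauss(true_mean, 1.5))) + inflation
            for _ in range(n)]

control = simulate_group(100, true_mean=4, inflation=0)   # work is checked
shredder = simulate_group(100, true_mean=4, inflation=2)  # self-reported

# Random assignment means any gap in the group averages reflects cheating,
# even though no individual cheater can be identified.
gap = sum(shredder) / len(shredder) - sum(control) / len(control)
print(f"group-level gap: {gap:.1f} matrices")
```

Because both simulated groups share the same true ability, the roughly two-matrix gap in reported averages is recoverable even though every individual report stays private – which is exactly the inference the paradigm licenses.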

The stakes weren't that high – only $0.50 an answer. Maybe it just wasn't worth cheating. But if you think cheating would increase as the rewards do, you'd be wrong, at least based on another Ariely study. When participants were paid $10 for every correct answer, cheating was actually slightly lower. Perhaps some people were worried about getting caught, especially when the stakes were higher.

But cheating rates held constant, even when the experimenter was literally blind and when participants could pay themselves from an unattended pile of money. Despite these measures, perhaps people were worried about attracting suspicion if they reported solving too many problems. But even when falsely told the average was eight correct problems, people still reported solving an average of six correct in the experimental condition. Why? Is there something magical about the number six? 

The answer speaks to the power of bounded self-deception as a prerequisite for unscrupulous behaviour. For a task where people legitimately identify four correct answers on average, reporting six doesn't seem too large a jump to violate our self-standards. It's just fudging it a little. That is, we will cheat a little bit to gain material rewards, but not so much that it violates our self-image as reasonably honest. Going too far would make something inside us feel rotten.  

This explanation seems to jibe with data from real-world situations where individuals can take a product from a stand and leave payment behind. In these circumstances, the Dutch pay 20 per cent less than the list price for candy bars (according to a 2002 study by Marco Haan and Peter Kooreman), and Americans leave 12 per cent less than the list price for bagels and doughnuts (that finding from Steven Levitt).

In one study, 18 per cent of golfers said they would take a mulligan, or do-over, on their first tee shot, and 8 per cent admitted that they would move their ball four inches with their club to improve their shot. In all of these cases, the dishonesty is limited enough that people seem to enjoy the best of both worlds – good feelings about themselves and material gain.

Flipping unfair

A clever series of experiments led by Daniel Batson also establishes the lengths people will go to pursue self-interest under the veil of moral integrity. In the first study, undergraduate students faced an ethical dilemma regarding procedural fairness. They were assigned to distribute two tasks between themselves and another student participant. One of the tasks was attractive, where every correct answer earned a raffle ticket for a $30 gift certificate. The other was unattractive, described as dull and boring and offering no opportunity for financial gain. 

So, what did the students do? Did they keep the good task for themselves or give it to their unnamed partner? Eighty per cent of the students maximised their material gain and chose the attractive task for themselves, even though most later revealed that they didn't think it was morally correct to do so. So, when the hard choice is between being overly generous or selfish, most people choose to maximise personal gain.  But in many situations, we have additional options.

For example, in another study led by Batson, participants faced the same decision but with an additional twist: They were told that most students believed that giving everyone an equal chance by flipping a coin the experimenters provided was the fairest way to assign the tasks, although the choice was entirely up to them. Would you allow yourself to be at the mercy of the coin flip, or do something else?  

Because they were alone, the students had an additional choice that afforded them some moral wiggle room: they could simply flip the coin and not follow it. Nearly everyone agreed that the coin flip was the most moral way to assign tasks, yet only about half chose to flip it. Of those not flipping the coin, 90 per cent gave themselves the desirable task – straight-up self-interest, with no attempt to camouflage it. But although the coin flippers may seem more principled, it is not quite so straightforward: 90 per cent of those flipping the coin also chose the attractive task for themselves, clearly higher than the 50 per cent expected from a random coin flip.
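How unlikely is that 90 per cent under an honestly followed coin? A quick binomial calculation makes the point. Batson's papers report proportions rather than exact cell sizes, so the sample size below is purely illustrative:

```python
from math import comb

# If flippers honestly followed a fair coin, self-assignments of the
# attractive task should follow a Binomial(n, 0.5) distribution.
n = 20                      # hypothetical number of coin flippers
observed = round(0.9 * n)   # 90 per cent took the good task themselves

# Probability of at least this many self-assignments by chance alone
p = sum(comb(n, k) for k in range(observed, n + 1)) / 2 ** n
print(f"P(>= {observed}/{n} under a fair coin) = {p:.5f}")
```

Even with this modest hypothetical sample, the chance of 90 per cent or more self-assignments from a fair, honestly followed coin is about two in ten thousand – the flips were clearly not being obeyed.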

What's interesting is that these tricksters who flipped the coin and gave themselves the desirable task anyway rated the morality of their actions considerably higher than those who assigned roles without flipping. So, they convinced themselves that they had acted morally even though many of them didn't abide by the outcome of the coin flip. This is the essence of moral hypocrisy: give me a chance to convince myself that my selfish behaviour really wasn't selfish, and I'll take it.

You may wonder if the higher rating among those who flipped the coin before assigning tasks was simply a product of those who honestly won the flip. After all, about half of those who flipped the coin would have won the toss and could legitimately have assigned themselves the attractive task. To make this distinction, Batson colour-coded the labels on the sides of the coins and secretly observed the coin flippers.

Those who flipped the coin and lost, or who manipulated it to win (for example, by flipping until they won), yet still assigned themselves the favourable task, rated their morality as moderately higher than those who didn't flip the coin at all. Their sham use of the coin flip allowed them to convince themselves that they had acted morally. Bear in mind, they thought everything was private, so the show was to persuade themselves, not anyone else.

People, then, can convince themselves that they are moral even when rigging or ignoring a fair outcome. In this case, the participants may have reasoned that they had fairly flipped the coin without continuing with the thought that they had fiddled with the result. We're flexible at the mental gymnastics self-deception requires. 

The persuasive self

What would you do in a 'dictator game', where one person in the experiment gets to decide how to allocate a sum of money between themselves and another participant?  Based on what we know about wanting to appear honest or moral, we can understand why most dictators don't keep all the money for themselves. Instead, they give around 20 per cent on average to their partner. Like a wealthy person donating a portion of their income to charity to appease their guilt, the 20 per cent is a tax that we pay to achieve feelings of being fair.

But even this act isn't as selfless as it seems. It turns out that when we give people an opportunity to convince themselves that keeping more money still makes them fair, they do so.

For example, in a study by Serhiy Kandul and Olexander Nikolaychuk, being told that they were the dictator because their guess of a random number was closer than the other participant's enabled dictators to justify keeping more money than usual. It is not hard to understand how these dynamics would motivate the wealthy in society to believe that their wealth is deserved or obtained through hard work and merit. Much effort then goes into making the poor believe the door of advancement is open for those with requisite character. It's no exaggeration to note that the social order depends on our ability to self-deceive.

Sometimes our deservedness is ambiguous, as in the case where we bring a positive credential to the table but so does our partner. In one study by Van Avermaet, the situation was manipulated so that sometimes participants believed they had either worked longer (effort) or answered more questionnaires (productivity) than their partner, while their partner had the opposite claim. Dictators valued whichever principle benefited themselves. They gave themselves more money when they had shown greater effort but less productivity than their partner, and gave themselves more money when they were more productive but had put forth less effort. 

Such findings clearly show how flexible and opportunistic our evaluations are. What's more important, is your child doing well in school, getting along with others and having a lot of friends, or showing an accomplished skill, like being a star athlete or musician? Your answer is probably determined by which best describes your child!

In another study, led by Shaul Shalvi, participants rolled a die three times but were asked to report only the result of the first roll – and something curious happened. They reported rolling a higher number – which resulted in a higher reward – than those who rolled the die only once.

Although each roll was seen only by the participant, there is no reason to believe that rolling a die three separate times would make a higher number appear on the first roll. Instead, rolling three times helped convince participants that it was okay to report a higher number, probably because they had rolled such a number on the second or third try.
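A bit of arithmetic shows how much room those extra rolls create. If a participant quietly reported the highest of the three rolls rather than the first, the expected report would rise from 3.5 to almost 5 – an upper bound on what this justification could produce:

```python
from fractions import Fraction

# Expected value of a single fair die roll: (1+2+...+6)/6 = 3.5
e_one = sum(Fraction(k, 6) for k in range(1, 7))

# Expected value of the *highest* of three rolls, using
# P(max = k) = (k/6)^3 - ((k-1)/6)^3
e_max = sum(k * (Fraction(k, 6) ** 3 - Fraction(k - 1, 6) ** 3)
            for k in range(1, 7))

print(f"one roll: {float(e_one):.2f}, best of three: {float(e_max):.2f}")
# one roll: 3.50, best of three: 4.96
```

The observed inflation in Shalvi's data was smaller than this ceiling, consistent with the bounded-cheating pattern above: people helped themselves to a better number, but not the best one every time.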

As Dale Miller eloquently summarises, 'People's desire to see themselves as fair exerts a powerful and constraining force on the pursuit of their self-interest but only to the extent that they are unable to convince themselves of the inherent fairness of their self-interested actions.' In other words, if we can justify keeping more than our fair share, we will. And the self, it seems, is very persuasive.

Be wary of moral outrage

The recognition that so many of us are moral hypocrites may make readers appalled at human behaviour. But be careful about expressing moral outrage. Recent work that I have conducted with Daniel Rosenfeld suggests that moral outrage may in part be driven by a desire to bolster a threatened moral identity. When our meat-eating participants read a passage reminding them of the harms associated with their diet, they expressed greater moral outrage at SeaWorld Park for animal abuse violations.

In a separate experiment, when we allowed meat eaters to express moral outrage at factory farm owners and operators, they reported feeling less guilt and rated their own moral character higher. Those who get angry at moral transgressors, then, may not simply have pro-social motivations. They may be trying to compensate for some perceived moral inadequacy.  

New research by Mengchen Dong and colleagues also suggests that we should be wary of those who claim to have a strong moral character. They asked participants to evaluate various moral transgressions and to imagine that the moral failing was either their own or that of a co-worker.

Among participants who weren't especially concerned about managing their reputation, those claiming high moral character placed harsher blame on themselves than on their co-workers. Among those who did worry about their reputation, however, espousing high moral character went with harsher blame for others than for themselves. That is, despite claiming to be morally principled, they were more lenient towards their own moral shortcomings.

The lesson of all this? From chess to Wordle to earning money to justifying our selfish behaviour, we are all prone to being moral shysters. Be especially dubious of those who strongly signal that they are not.

Hank Rothgerber, PhD, is a Professor of Psychology at Bellarmine University. His research focuses on motivated reasoning, especially as it pertains to moral issues.

Key sources

Ariely, D. (2012). The (honest) truth about dishonesty: How we lie to everyone—especially ourselves. New York: Harper.
Batson, C.D. (2016). What's wrong with morality?: a social-psychological perspective. Oxford: Oxford University Press.
Batson, C.D., Kobrynowicz, D., Dinnerstein, J.L., Kampf, H.C. & Wilson, A.D. (1997). In a very different voice: Unmasking moral hypocrisy. Journal of Personality and Social Psychology, 72(6), 1335–1348.
Batson, C.D., Thompson, E.R., Seuferling, G., Whitney, H. & Strongman, J.A. (1999). Moral hypocrisy: Appearing moral to oneself without being so. Journal of Personality and Social Psychology, 77(3), 525–537.
Camerer, C.F. (2003). Behavioural studies of strategic thinking in games. Trends in Cognitive Sciences, 7(5), 225–231.
Dong, M., Kupfer, T. R., Yuan, S. & van Prooijen, J.W. (2023). Being good to look good: Self‐reported moral character predicts moral double standards among reputation‐seeking individuals. British Journal of Psychology, 114(1), 244-261.
Kandul, S. & Nikolaychuk, O. (2017). I deserve more! An experimental analysis of illusory ownership in dictator games (No. 17-12). IRENE Working Paper.
Miller, D.T. (2020). The Power of Identity Claims: How We Value and Defend the Self. New York: Routledge.
Rothgerber, H. (2020). Meat-related cognitive dissonance: A conceptual framework for understanding how meat eaters reduce negative arousal from eating animals. Appetite, 146, 1-16.  
Rothgerber, H. & Rosenfeld, D.L. (2021). Meat‐related cognitive dissonance: The social psychology of eating animals. Social and Personality Psychology Compass, 15(5), e12592.  
Rothgerber, H., Rosenfeld, D. L., Keiffer, S., et al. (2022). Motivated Moral Outrage Among Meat-Eaters. Social Psychological and Personality Science, 13(5), 916-926.  
Van Avermaet, E. (1974). Equity: A theoretical and experimental analysis. Unpublished doctoral dissertation, University of California, Santa Barbara.