*[Ahem... some of Ariely's data from studies on honesty has come under suspicion of being faked. And one of the studies here, on the Ten Commandments, has not replicated. You can read about this here.]*

Dan Ariely, a behavioral economist at Duke University, has published a book for popular audiences called *The (Honest) Truth About Dishonesty*. Some of the research from his book was recently summarized in the Wall Street Journal.

Dr. Ariely and his colleagues have concluded from their research that almost everybody has the capacity to cheat and lie--just a little. Their research investigates the situational forces that make cheating more or less likely. But how do you study cheating in a laboratory? Ariely and his colleagues have creatively operationalized cheating using a math-test technique. Here's how he describes it:

Test subjects (usually college students) are given a sheet of paper containing a series of 20 different matrices...and are told to find in each of the matrices two numbers that add up to 10. They have five minutes to solve as many of the matrices as possible, and they get paid based on how many they solve correctly. When we want to make it possible for subjects to cheat on the matrix task, we introduce what we call the "shredder condition." The subjects are told to count their correct answers on their own and then put their work sheets through a paper shredder at the back of the room. They then tell us how many matrices they solved correctly and get paid accordingly.

...In the control condition, it turns out that most people can solve about four matrices in five minutes. But in the shredder condition, something funny happens: Everyone suddenly and miraculously gets a little smarter. Participants in the shredder condition claim to solve an average of six matrices—two more than in the control condition. This overall increase results not from a few individuals who claim to solve a lot more matrices but from lots of people who cheat just by a little.
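To make the quoted pattern concrete, here is a small simulation of that aggregate result. All of the numbers below (the ability distribution, the inflation amounts) are invented for illustration; they are not Ariely's data. The point is that a two-matrix jump in the average can come from many people each inflating their report by a little, with almost no one claiming the maximum:

```python
import random

random.seed(1)
N = 10_000   # simulated participants per condition
TASKS = 20   # matrices on the sheet

def true_score():
    # Assume ability clusters around 4 solved matrices (illustrative).
    return max(0, min(TASKS, round(random.gauss(4, 1.5))))

# Control condition: reports equal true scores.
control = [true_score() for _ in range(N)]

# Shredder condition: most people inflate "just by a little";
# these inflation amounts are assumptions, averaging +2.
shredder = [min(TASKS, true_score() + random.choice([0, 1, 2, 2, 3, 4]))
            for _ in range(N)]

print(sum(control) / N)    # average claimed: about 4 matrices
print(sum(shredder) / N)   # average claimed: about 6 matrices
# Almost no simulated participant claims all 20 -- the mean rises
# because "lots of people cheat just by a little."
print(shredder.count(TASKS))
```

Note that the condition means alone cannot distinguish "many small cheaters" from "a few big cheaters"; it is the shape of the distribution of claims that supports the quote's interpretation.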

Like most programmatic research, Ariely's is theory-driven. Specifically, the theory states that people are motivated to *rationalize* cheating to themselves: they try to balance the gains of cheating against their own self-respect. Importantly, the theory also states what is NOT happening: people are not driven to cheat simply because they compute the gains and losses of cheating; they don't rationally weigh how much money they will gain against how likely they are to be caught.

Read this description of one of their studies to see how the data support the rationalization theory, and not the gain-loss theory:

Would putting more money on the line make people cheat more? We tried varying the amount that we paid for a solved matrix, from 50 cents to $10, but more money did not lead to more cheating. In fact, the amount of cheating was slightly lower when we promised our participants the highest amount for each correct answer. (Why? I suspect that at $10 per solved matrix, it was harder for participants to cheat and still feel good about their own sense of integrity.)
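The contrast between the two theories can be put in simple arithmetic terms. Under a pure gain-loss account, the worksheets are shredded, so the chance of being caught is essentially zero, and the expected payoff from inflating one's score grows linearly with the per-matrix rate. The function below is a hypothetical sketch of that calculation (the names and parameters are mine, not anything from Ariely's studies):

```python
def expected_gain(extra_claims, rate_per_matrix, p_caught=0.0, penalty=0.0):
    """Expected monetary gain from claiming extra solved matrices
    under a purely 'rational' gain-loss model (hypothetical sketch)."""
    return extra_claims * rate_per_matrix * (1 - p_caught) - p_caught * penalty

# With shredded worksheets, p_caught is ~0, so this model predicts
# cheating should become MORE attractive as the rate rises:
for rate in (0.50, 2.00, 10.00):
    print(f"${rate:.2f}/matrix -> expected gain from +2 claims: "
          f"${expected_gain(2, rate):.2f}")
```

The gain-loss model predicts the $10 condition should produce the most cheating; the data showed, if anything, slightly less. That is the mismatch the question below asks you to explain.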

a) How does this result support the "rationalization" theory rather than the "rational gain-loss" theory?

Ariely's laboratory has conducted a number of studies to find out which situational forces are likely to affect people's cheating--either to make it worse or to make it better.

b) For each example below, identify the manipulated variable and the measured variable.

c) What causal statement can each study make? Explain how the experimental nature of each of the studies helps support the causal claim.

Example i.

One thing that increased cheating in our experiments was making the prospect of a monetary payoff more "distant," in psychological terms. In one variation of the matrix task, we tempted students to cheat for tokens (which would immediately be traded in for cash). Subjects in this token condition cheated twice as much as those lying directly for money.

Example ii.

Another thing that boosted cheating: Having another student in the room who was clearly cheating. In this version of the matrix task, we had an acting student named David get up about a minute into the experiment (the participants in the study didn't know he was an actor) and implausibly claim that he had solved all the matrices. Watching this mini-Madoff clearly cheat—and waltz away with a wad of cash—the remaining students claimed they had solved double the number of matrices as the control group. Cheating, it seems, is infectious.

Example iii.

We took a group of 450 participants, split them into two groups and set them loose on our usual matrix task. We asked half of them to recall the Ten Commandments and the other half to recall 10 books that they had read in high school. Among the group who recalled the 10 books, we saw the typical widespread but moderate cheating. But in the group that was asked to recall the Ten Commandments, we observed no cheating whatsoever.

**Suggested answers**

Example i:

b) The manipulated variable was whether people could earn tokens or actual dollars for solving math problems. The measured variable was how many math problems people said they solved.

c) The causal claim they can make is that "having people earn tokens rather than real dollars causes people to cheat more on this math task."

They can support this causal claim because they have covariance (people in the token condition claimed to solve twice as many problems as those in the cash condition). They also have temporal precedence (the token/cash condition was administered before subjects had the chance to cheat). And we might assume internal validity, too. People were probably randomly assigned to the token and cash conditions; both groups of people were solving the same math problems in the same settings, with the same absolute value assigned to the payoff. The only thing that differed was the form the payoff took (cash or token). This causal claim seems justified!
