Deception is one of the sexier topics in communication research, and people seem to have a love-hate relationship with the topic. Being duped is usually undesirable and something to be avoided, as is being labeled a liar. People everywhere teach their children that lying is bad. Yet despite moral and ethical prohibitions against lying, deception is something most people do at least occasionally. In most cultures, little white lies, polite exaggerations, and other minor forms of deception are routine, normative, and mandated by politeness. Furthermore, learning to lie well seems to be a part of normal human development and is an essential part of being socially skilled. In extreme circumstances, lying is necessary for survival. However, deception can also be used for exploitation, manipulation, swindling, fraud, and other antisocial activities. In spite of its unsavory connotation, people have a morbid fascination with the topic, as can be seen in the popularity of books such as A Treasury of Deception: Liars, Misleaders, Hoodwinkers, and the Extraordinary True Stories of History’s Greatest Hoaxes, Fakes and Frauds (Farquhar, 2005).
The social science of deception is also fascinating. Not only is it a sexy topic, but research on it continues to yield surprising and counterintuitive findings. Communication research, like other social scientific research, is sometimes criticized as merely documenting the obvious. Such a critique does not apply to work on deception, which frequently contradicts conventional wisdom and common sense. Simply put, things are often not what they seem in the realm of deception.
This research paper focuses on human-to-human deception, with an emphasis on the communicative aspects of deception. Deception is defined, and the communication research on deception is summarized.
An Introduction to Deception
Defining Deception
Deception is usually defined as intentionally, or at least knowingly, misleading another person. Deception involves purposely getting someone to believe something that the deceiver knows to be false. There are several implications to defining deception in this way.
For a start, truth and deception are not polar opposites, and deception and falsehoods are far from synonymous. For example, a so-called honest mistake, that is, saying something that one incorrectly believes to be true, is not deception. Likewise, saying something known to be false is not deceptive if it is said in such a way that the hearer should know it is false; sarcasm is an obvious example. Neither of these cases involves intent to mislead. However, saying something that is literally true in a sarcastic way so that the listener infers something false can be deceptive. In short, what is literally true can be deceptive, and saying something false need not be a lie.
Following this line of thought, useful distinctions can be made between actual deceptions, deceptive attempts, messages perceived as deceptive, and messages that are functionally deceptive. Actual deception is meant to deceive and achieves this end. The target person is misled by design. In deceptive attempts, someone tries to deceive, and there is deceptive intent, but the target is not actually misled. This situation may be thought of as failed deception. In perceived deception, the target person thinks that someone was trying to deceive him or her even though there may or may not be deceptive intent. Finally, messages that are functionally deceptive mislead others regardless of the intent or perceived intent. Functionally deceptive messages lead to the same outcome as deception without getting into people’s heads to ascertain intent. So honest mistakes can be perceived as deception, functionally deceptive, or both.
A related implication is that message intent, message function or impact, and message features need to be distinguished because these things do not map perfectly onto one another. Someone can say something that is objectively false, omit information, or change the subject in a manner that is likely to, or intended to, deceive. The objective truth or falsity of messages may or may not actually function as deception, and such messages may or may not be perceived as deception. In short, the combination of speaker intent and message consequence defines deception, not the objective qualities of messages or information dimensions such as truth or falsity or degree of omission.
Development of Deception in Childhood
Because deception involves knowingly or intentionally misleading another person, deception requires both the mental ability to think about others’ thoughts and the ability to use communication to affect others’ thoughts. For example, telling a falsehood will not function as deception if the listener knows the truth. Saying that the sun comes up in the north every morning or that one’s best friend is 10 feet tall or some other ludicrous thing is unlikely to fool people. In short, deception requires consciously misleading others, and to do so effectively, one needs some idea of what they already know and do not know. The mental ability to understand that another’s thoughts are different from one’s own thoughts and to think about what others might be thinking is called theory of mind. Theory of mind develops in most human children between the ages of 3 and 5, and with this cognitive development comes the ability to deceive. Before this age, children can say things that are false, but they do not grasp the concept of deception. By age 5, however, most children can spontaneously lie to achieve goals when the truth is problematic (Peskin, 1992). Learning when to deceive, when not to, and the social ramifications of deception continues to develop throughout childhood, and the ability to deceive well is typically well learned by adolescence.
Types of Deception
While an outright lie (saying something that is false) may be the most obvious way we deceive and the first type of deception to come to mind, it is not the only way people deceive others, or even the most common. False information can be mixed in with the truth to create deception. Other types of deception besides falsehoods include omission, evasion, and equivocation. Deceiving with omission simply involves selectively withholding information, and this is probably the easiest and most common form of deception. Evasion involves actively steering a conversation away from the withheld information, while equivocation involves ambiguous language open to multiple and erroneous interpretations. While falsification, omission, evasion, and equivocation are not the only ways to deceive, they are the most common.
These four ways to deceive correspond to Grice’s (1989) communication maxims of quality, quantity, relevance, and manner. Grice’s maxims are reasonable presumptions that people rely on to make sense of and understand others’ communication. Information manipulation theory (IMT; McCornack, 1992) suggests that this correspondence between Grice’s maxims and the common types of deception is not mere coincidence. Instead, deception works by exploiting the presumptions that guide everyday nondeceptive discourse. Deception happens when people covertly violate one or more of Grice’s four maxims. People are misled because they presume that others are following the maxims that guide communication when they are not. These presumptions lead to a useful way to categorize the different types of deception, with falsification corresponding to Grice’s maxim of quality, omission to quantity, evasion to relevance, and equivocation to manner. Numerous studies have shown that, as specified by IMT, messages that covertly violate Grice’s four maxims are perceived as deceptive. Preliminary findings suggest that the conclusions extend across cultures: successful replications of IMT have been done in Japan, Korea, and Hong Kong.
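To make the IMT correspondence concrete, the sketch below encodes the mapping between the four common deception types and Grice’s maxims as a simple lookup table. It is purely illustrative; the function and variable names are hypothetical and not part of IMT itself.

```python
# Illustrative only: the IMT correspondence between common deception types and
# Grice's maxims, expressed as a lookup table. Names here are hypothetical.
GRICE_MAXIM_FOR = {
    "falsification": "quality",   # presenting information believed to be false
    "omission": "quantity",       # withholding relevant information
    "evasion": "relevance",       # steering talk away from the concealed topic
    "equivocation": "manner",     # strategic ambiguity or lack of clarity
}

def covertly_violated_maxim(deception_type: str) -> str:
    """Return the Gricean maxim that a given deception type covertly violates."""
    return GRICE_MAXIM_FOR[deception_type]

for deception_type, maxim in GRICE_MAXIM_FOR.items():
    print(f"{deception_type:13s} -> covert violation of the maxim of {maxim}")
```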
The four main ways people deceive others are by providing false information, passively omitting or hiding information, actively evading or diverting attention away from the concealed information, and using a strategic lack of clarity, which we might call equivocation or obfuscation. None of these are mutually exclusive, and any or all can be used in combination. Also, these are not all-or-nothing categories. People can mix true and false information, disclose all but a small, crucial bit of information, and so on. Considerable shading is possible and perhaps even typical. Also, people do all these things all the time without meaning to be deceptive. These count as deception only when consciously used to mislead.
Prevalence of Deception
Because deception, by definition, involves deceptive intent, many if not most messages that depart from the whole truth and nothing but the truth fall well short of deception. Most communication, for example, is of necessity truncated. If someone asks how you are, a simple “fine” will usually suffice, and a fully disclosive answer is typically inappropriate. After all, Grice’s (1989) maxim of quantity applies to “too much information” as well as to too little. As a consequence, studies assessing the prevalence of “information control” are not necessarily informative about the frequency of deception. Omission, evasion, and inaccuracies are no doubt commonplace in conversation, but their prevalence is a different issue from the prevalence of deception.
While it is not possible to randomly sample deception, at least two studies provide an idea of how often people lie in everyday life. DePaulo, Kashy, Kirkendol, Wyer, and Epstein (1996) had 70 college students and 70 nonstudents keep a lying diary for 1 week. Over the week, the vast majority of respondents (95%) reported at least one lie, and on average, college students reported two lies per day, while nonstudents reported a single lie per day. For the students, a lie was told in 30% of all interactions, and 20% of conversations for the nonstudents contained a lie. These findings suggest that lying is not only an everyday occurrence but also relatively infrequent compared with nondeceptive communication. If one considers the sheer amount of communication we engage in during the course of a day, one to two deceptive messages is proportionally small.
More recently, Serota and Levine (2008) assessed the prevalence of deception in American life with a different methodology. They asked a nationally representative sample of 1,000 individuals (stratified by age, sex, education, income, and region of the country) if they had lied in the past 24 hours. The mean number of lies per day was 1.65, a value similar to that reported above. The distribution, however, was highly skewed. A total of 60% of respondents reported no lies in the past 24 hours, but a few of those who did lie reported as many as 54 lies. These findings suggest that many people may not lie each and every day, although it is a strong probability that almost everyone lies sometimes. More interestingly, these findings suggest that most lies may be told by relatively few, very prolific liars. Together with the earlier findings, the conclusion seems to be that lying is prevalent in that we are likely to encounter lies on a daily basis, but it is infrequent in comparison with everyday honest communication. In other words, most people lie sometimes, but most people are more honest than not.
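To see how these summary figures can coexist, in which the mean is 1.65 lies per day yet 60% of people report no lies at all, consider the toy simulation below. The distributional assumptions are chosen only to roughly reproduce the reported numbers; this is not Serota and Levine’s data or model.

```python
import numpy as np

# Toy illustration with assumed parameters (not the actual survey data): a
# highly skewed distribution in which most people tell no lies on a given day,
# while a few prolific liars tell many, still yields a mean near 1.65 lies/day.
rng = np.random.default_rng(0)
n_respondents = 1000

lied_today = rng.random(n_respondents) < 0.40          # ~60% report zero lies
# Heavy-tailed counts for those who lied; mean chosen so the overall mean is ~1.65
lie_counts = np.where(lied_today, rng.geometric(p=1 / 4.1, size=n_respondents), 0)

print("mean lies per day:", round(lie_counts.mean(), 2))          # about 1.65
print("share reporting zero lies:", round((lie_counts == 0).mean(), 2))
print("maximum reported by one person:", int(lie_counts.max()))   # a few prolific liars
```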
Deception Motives
Despite widespread social and moral prohibitions against deception, deception nevertheless occurs. The question that naturally arises is why do people deceive others?
The answer is that people lie for a reason. Since deception is, by definition, intentional, this must be the case. But knowing what leads people to form deceptive intent tells us something about what happens when people deceive and why they do so. People are prone to deceive others when the truth poses some obstacle to goal obtainment. In the absence of some psychopathology, people usually do not deceive when the truth works just fine. In short, most people follow the maxim “Do not lie if you do not have to” most of the time.
This maxim is consistent with what the noted philosopher and ethicist Sissela Bok (1999) has labeled the “Principle of Veracity.” According to Bok, there exists a moral asymmetry between truth and deception. The telling of the truth requires no justification; deceit does. Honesty and trust provide a necessary foundation for human relations and symbolic exchange. Violating these requires ethical justification, whereas adherence does not.
Early work on deception motives sought to classify them in a variety of ways. Categories of deception motives included things such as saving face, maintaining relationships, exploiting others for personal gain or profit, and avoiding conflict. These more specific motives can be grouped according to who benefits from the deception: deception can be motivated by self-benefit, by benefit to another, or by benefit to the relationship. However, none of the goals achieved through deception are unique to deception. That is, the various category systems listing the motives for deception do not differ from the more general social motivations that also guide nondeceptive behavior. For example, consider face goals. The goal of a face-maintaining message is not to deceive per se but to manage self and others’ face needs, and these ends can be accomplished through both honest and deceptive means. Similarly, virtually all instrumental and relational goals can, depending on the situation, be achieved through both honest and deceptive actions. Thus, deception is best thought of as a possible tactic, strategy, or means for goal attainment rather than a desired end in itself.
The probability of using deceptive rather than honest means for goal attainment is likely conditional on situational features and constraints, not on the nature or type of the goal pursued. According to Bok’s (1999) Principle of Veracity, the moral culpability associated with deception creates an initial imbalance in the assessment of deceptive and truthful alternatives, and adopting deceptive means requires justification that is not necessary for truthful means. So while deception is in almost everyone’s social repertoire, it is generally employed as a tactical or strategic option of last resort or path of least resistance. People will not be deceptive when the truth is sufficient, efficient, and effective for goal attainment. It is only when the truth poses an obstacle to goal obtainment, regardless of what that goal might be, that people entertain the possibility of being deceptive. That is, people are deceptive only when truthful alternatives are more effortful or less efficacious.
Recent research is consistent with the idea that people deceive when the truth makes honest communication difficult. For example, imagine that a friend invites you over for dinner. If you think that the dinner is delicious and delightful, then when asked, providing an honest answer is easy. If, however, you thought that the meal was truly awful, you might lie about how bad it was. DePaulo and Bell (1996) observed people discussing artwork with artists. When they liked a painting that the artist painted, they said so. However, when they disliked a painting, they tried to downplay that. Similarly, Levine, Kim, and Hammel (2008) conducted a series of studies where people were asked what they would do in situations where either the truth worked just fine or honesty might make goal attainment difficult. People were honest nearly 100% of the time when the situation did not require deception. In situations where the truth was a problem, however, people were deceptive about 60% of the time. So people are not always deceptive when they have a motive, but they are almost never deceptive when they do not have a motive, and the motives that guide both honest and deceptive communication seem to be the same.
Deception Research Methods
The majority of research studies investigating deception seek either to identify behaviors that distinguish truths from lies or to assess people’s ability to distinguish truths and lies. The former set is called cue studies, and the latter is called detection studies. Both sets of studies typically use experimental designs, and both require obtaining collections of truths and lies that can either be coded for behavioral differences or be judged for veracity. The important methodological considerations in these types of studies include issues of ground truth, sanctioning, and stakes.
First, in deception research, ground truth needs to be known with absolute certainty. Ground truth means that the researcher must know which messages are honest and which are deceptive. Second, unsanctioned lies are usually preferred. Sanctioned lies are made in response to researcher instructions, that is, deceivers are told to lie, whereas unsanctioned lies are ones in which the message sources decide for themselves whether to lie or not. Although most previous deception research has involved sanctioned lies, unsanctioned lies are desirable for reasons of ecological validity and diagnostic utility. For similar reasons, relatively high stakes are usually preferred. The stakes refer to the consequences for the liar if the deception is uncovered. High-stakes lies are presumed to be more arousing, and behavioral differences are more likely to be apparent in high-stakes situations (DePaulo et al., 2003). In contrast, whether the study is interactive or mediated, or which particular type of response scaling is used, seems to make no difference.
While most deception research involves laboratory experiments, other methods are used as well. Surveys with hypothetical situations, retrospective accounts, and diary studies have all added to the literature.
Theories of Deception
Much of the current thinking about deception has evolved from Paul Ekman’s (2001) idea of “leakage.” The idea is that (1) there are emotional consequences of deception, (2) emotions are conveyed nonverbally, and (3) emotional expression is not entirely under conscious control. According to this view, compared with an honest message source, deceivers are likely to experience strong emotions such as guilt and fear of detection. Emotions are largely communicated nonverbally, especially through facial expressions but also through the voice and body. Deceivers, of course, try to control their behavioral displays so as not to give themselves away, but nonverbal cues to deception leak out anyway, often through channels that are thought to be more difficult to control. Leakage thus refers to inadvertent, unintentional behaviors that stem from deception and give away the liar.
According to Ekman (2001), emotional leakage can often be seen in microfacial expressions. Microfacial expressions are momentary signs of emotion that flash briefly on people’s faces. The microfacial expressions are hard to see, but according to Ekman, if they can be spotted, they often give away a liar.
The leakage idea was expanded by Zuckerman, DePaulo, and Rosenthal’s (1981) four-factor theory. The four-factor framework specifies four internal states that differentiate truths and lies: emotions, arousal, cognitive effort, and overcontrol. Because, relative to truth tellers, liars are more likely to experience arousal, emotions such as fear and guilt, cognitive effort, and overcontrol of nonverbal displays, and because each of these internal states is thought to be associated with specific nonverbal behaviors (e.g., increased cognitive effort leads to longer response latencies), clues to deception are leaked nonverbally.
The most recent iteration of this thinking is reflected in interpersonal deception theory (IDT; Buller & Burgoon, 1996). According to IDT, liars strategically present themselves as honest but nonstrategically leak deception cues. Message receivers pick up on these cues and become suspicious. Liars, in turn, pick up on leaked suspicion and strategically adapt, and so do receivers. Net accuracy depends on the liar’s encoding skill relative to the receiver’s decoding skill and on how the interaction progresses over time.
Deception Cues
Nonverbal Cues
Most past and current deception theory holds that, relative to truth telling, deception provokes arousal; leads to emotional responses such as guilt and fear of getting caught; is cognitively effortful; and prompts liars to monitor their performances more closely than honest people do. These factors should lead to systematic differences in nonverbal behavior that distinguish deceivers from their honest counterparts. Behaviors that actually differentiate truth tellers and liars are called authentic deception cues.
These authentic cues can be distinguished from stereotypical deception cues and decoded-as-deception cues. Stereotypical deception cues are those behaviors that people believe signal deception. So if we did a poll and asked people, “How can you spot a liar?” the answers would reflect stereotypical cues. Now, imagine we had people watch others and we asked them if they thought the people were lying. Then, we could look at what the people who were believed were doing differently than those who were seen as deceptive. Those behaviors that differentiate honest-looking people from those seen as deceptive are labeled decoded-as-deception cues.
Interestingly, research indicates that stereotypical deception cues and decoded-as-deception cues are not always the same. This indicates that people are often not aware of which cues they are using to assess honesty and deceptiveness. Furthermore, research shows that neither stereotypical deception cues nor decoded-as-deception cues map nicely onto authentic deception cues. That is, what people think liars do, what liars actually do, and what people use to infer deception lead to three different lists that lack strong correspondence.
A noteworthy example is the case of eye gaze. Most readers have probably heard the expression that a liar won’t look you in the eye. Perhaps the reader has even asked someone to look him or her in the eye to be sure that the person was being honest. This belief is surprisingly widespread. Bond and colleagues (2006b) asked people in 75 different countries how to spot a liar. The “liars won’t look you in the eye” belief was, by far, the most common answer worldwide. People everywhere believe this. But decades of nonverbal cue research have shown this to be absolutely false (DePaulo et al., 2003). There is no link at all between eye gaze and actual deception. Eye gaze is a stereotype that has no basis in reality, and whether or not someone looks you in the eye has no diagnostic utility.
Research shows that although people believe that deception is signaled by nonverbal behavior and that people use nonverbal behaviors to form opinions about the honesty of others, no surefire nonverbal deception indicator exists. DePaulo and colleagues (2003), in the most extensive review to date, summarized the results of 116 different studies of 158 different deception cues. The vast majority of nonverbal cues were unrelated to deception, and of the few that were different, the differences were small. On average, deceivers did exhibit more vocal tension, a higher pitch, more fidgeting, fewer gestures, and less facial pleasantness than truth tellers. Again, the differences were small. Thus, research to date has failed to find nonverbal behaviors that have much diagnostic utility.
Verbal and Linguistic Cues
Most deception involves the use of words, so it is reasonable to look for verbal differences between honest and deceptive messages. Most deception involves some element of falsehood, omission, evasion, or equivocation. The telling of deceptive falsehoods and the strategic omission of information, however, are difficult to spot unless one knows either the speaker’s true motivation or what the truth really is. The problem is that when we are being deceived, we typically know neither.
DePaulo and colleagues’ (2003) cue research finds that there are no surefire verbal behaviors that always signal deceit. However, compared with nonverbal behaviors, verbal behaviors are more diagnostic. Lies, relative to truths, tend to provide fewer details, and they are less logical, less plausible, and less verbally immediate. While the nonverbal differences between truths and lies might be characterized as small effects, these verbal differences are moderate to large. Thus, research suggests that one way to spot a lie is simply to apply common sense. Does what is said make sense? If little information is provided and what is said does not sound reasonable, it may well not be true. The catch, of course, is that some lies are quite plausible, well constructed, and compelling. Nevertheless, critically listening to content can help spot poorly crafted lies.
Statement validity analysis and reality monitoring approaches presume that truthful and deceptive accounts will systematically differ because of differences between true memories and fabricated stories. For example, the language used to describe an authentic memory should be higher in imagery, emotional connotation, and contextual information than that describing an imagined event. Consistent with these views, several studies report statistically significant differences in language usage that differentiate truthful and deceptive messages.
More recently, computer-based linguistic software has been used to examine differences between honest and deceptive language. So far, every study has reported that linguistic differences between truths and lies exist, but none of the findings have replicated from one study to the next. So while verbal and linguistic analysis seems more promising than nonverbal cues, research is far from conclusive.
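As a rough illustration of what such computer-based analysis involves, the sketch below counts a few surface-level linguistic features of the kind examined in this literature. The specific feature set is an assumption chosen for demonstration and is not the feature list of any particular published tool.

```python
import re

# Illustrative sketch only: count a few surface-level linguistic features of the
# sort examined in computer-based analyses of deceptive language. The feature
# set is assumed for demonstration and does not reproduce any published tool.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIONS = {"not", "no", "never", "cannot"}

def surface_features(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {
        "word_count": len(words),
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        "negation_rate": sum(w in NEGATIONS for w in words) / total,
    }

print(surface_features("I did not see anyone near the office that evening."))
```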
Deception Detection Accuracy
Research finds that people are not very good at detecting deception. In fact, the finding that people are statistically significantly, but only slightly, better than chance at detecting deception is perhaps one of the most reliable and well-documented findings in all of social science. Meta-analysis of more than 200 separate lie detection experiments finds that people are, on average, about 54% accurate when they have a 50-50 chance of being right (Bond & DePaulo, 2006a). The results of most individual studies fall within ±10% of this across-study average (i.e., between 45% and 65%). Not surprisingly, this finding has become very widely accepted among deception researchers.
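The stability of this finding across studies is easy to appreciate with a quick simulation. Assuming a true accuracy of 54% and an arbitrary study size of 200 truth/lie judgments (an assumption made purely for illustration), binomial sampling variability alone keeps nearly all simulated study-level results inside the 45% to 65% band.

```python
import numpy as np

# Toy illustration with an assumed per-study sample size: if true accuracy is
# 54%, binomial sampling noise alone scatters study-level results narrowly
# around that average.
rng = np.random.default_rng(1)
true_accuracy, judgments_per_study, n_studies = 0.54, 200, 500

study_accuracy = rng.binomial(judgments_per_study, true_accuracy, n_studies) / judgments_per_study
in_band = (study_accuracy >= 0.45) & (study_accuracy <= 0.65)

print("mean accuracy across simulated studies:", round(float(study_accuracy.mean()), 3))
print("share of studies between 45% and 65%:", round(float(in_band.mean()), 3))
```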
More recently, Bond and DePaulo (2008) looked at the variance in accuracy judgments rather than just average accuracy levels. This research divided accuracy scores into four components: demeanor, truth bias, transparency, and ability. Demeanor is the tendency of a person being judged to appear honest (or deceptive), independent of whether or not he or she is lying. Variance in demeanor means that some people are simply more believable than others. Truth bias is the tendency to believe others whether or not they are telling the truth. Individual variance in truth bias means that some people are more gullible and others more skeptical. Transparency refers to how good or poor a liar someone is: transparent liars tend to leak the fact that they are lying and are therefore relatively easier to detect. Finally, ability is an individual difference in skill at telling whether someone is lying or not. Thus, demeanor and transparency reflect sources of sender variance, while truth bias and ability reflect variance in message receivers. Furthermore, demeanor and truth bias reflect sources of bias, that is, tendencies to be believed (or to believe) independent of actual honesty, whereas transparency and ability reflect variance in the ability to discriminate correctly between honest and deceptive messages.
Bond and DePaulo (2008) found that variance in demeanor is huge, both in an absolute sense and relative to other sources of variation. Some people are just much more believable than are others, and this aura of believability has a large impact on how people perceive them. There are also substantial individual differences in truth bias, with these differences being much smaller than the variance in demeanor but much larger than the other two factors. So some people are more gullible than are others, while others are more suspicious. Research on generalized communicative suspicion (GCS; Levine & McCornack, 1991) assesses this factor as a communication trait. Finally, the variance in transparency is much larger than the variance in ability. Individual differences in ability contribute little (maybe only ±1% or 2% in overall accuracy). Thus, variance in believability and accuracy stems more from the target person (the person being judged who is lying or not) than the person judging the message, and the variance in bias swamps variance in ability. This explains why accuracy values across studies are so stable. The lack of individual differences in ability leads to small standard errors and stable findings.
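A toy simulation can make this decomposition concrete. In the sketch below, the relative variance magnitudes are assumptions chosen only to mirror the qualitative ordering described above (demeanor largest, then truth bias, then transparency, with ability smallest), and the additive “believability” model is likewise an illustrative assumption, not Bond and DePaulo’s statistical model.

```python
import numpy as np

# Toy model with assumed variance magnitudes mirroring the ordering described
# above: demeanor > truth bias > transparency > ability.
rng = np.random.default_rng(2)
n_senders, n_judges = 200, 200

demeanor = rng.normal(0.0, 1.0, n_senders)       # how believable a sender looks, lying or not
transparency = rng.normal(0.2, 0.3, n_senders)   # how much a sender's lies give themselves away
truth_bias = rng.normal(0.5, 0.5, n_judges)      # a judge's general tendency to believe
ability = rng.normal(0.0, 0.1, n_judges)         # a judge's skill at picking up leaked cues

is_lie = rng.random(n_senders) < 0.5
# Believability of sender i to judge j; lying subtracts leaked cues weighted by judge skill
believability = (demeanor[:, None] + truth_bias[None, :]
                 - is_lie[:, None] * (transparency[:, None] + ability[None, :]))
judged_honest = believability > 0

believed_rate_by_sender = judged_honest.mean(axis=1)
believed_rate_by_judge = judged_honest.mean(axis=0)
accuracy = np.where(is_lie[:, None], ~judged_honest, judged_honest).mean()

print("spread (SD) in being believed, across senders:", round(float(believed_rate_by_sender.std()), 3))
print("spread (SD) in believing, across judges:      ", round(float(believed_rate_by_judge.std()), 3))
print("overall accuracy in the toy model:            ", round(float(accuracy), 3))
```

Under these assumed values, the spread in how often senders are believed is noticeably larger than the spread in how often judges believe, and overall accuracy lands near the familiar mid-50s, echoing the pattern described above.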
Factors Affecting Accuracy
The slightly better-than-chance accuracy finding is very consistent, so not surprisingly there are few variables that affect accuracy. Those variables that do affect accuracy tend to have a relatively small impact. For example, nonverbal training improves accuracy only slightly, on average leading only to a 4% improvement in raw accuracy (Frank & Feeley, 2003). Variables that have little consistent impact on accuracy include source expertise/occupation, source-receiver relationship, extent of interaction, question asking, and whether honesty values are scaled or dichotomous. Common sense might suggest that the better we know someone, the better we will be able to tell when they are lying. This is false. Relationship closeness has little impact on deception detection accuracy (McCornack & Parks, 1986). Or, as another example, if we ask probing questions, one would think that accuracy might improve. Again, research suggests that this is false. Research finds that asking questions, or even hearing another person questioned, compared with a lack of questioning, makes no difference in the ability to distinguish deceptive from honest answers (Levine & McCornack, 2001). Instead, both knowing the other person and hearing probing questions answered make people more likely to believe the answer, regardless of actual honesty.
Reasons for (In)Accuracy
There are several reasons why people tend to be inaccurate lie catchers. First, there do not appear to be any strong, cross-situation behavioral cues that would make high accuracy possible. Although statistically reliable cues to deception are observed across studies, these are too inconsistent to be of much use in detecting specific instances of deception (Levine, Feeley, McCornack, Harms, & Hughes, 2005). Second, people pay attention to cues that lack diagnostic utility. For example, there is a widely held, cross-cultural belief that liars do not look other people in the eye, yet truth tellers and liars do not differ in eye behavior, and eye gaze has no diagnostic utility. Third, typical research designs exclude much potentially useful information. Research indicates that when people do detect lies, it is often done well after the fact and on the basis of information gained other than at-the-time verbal and nonverbal behavior (Park, Levine, McCornack, Morrison, & Ferrara, 2002). Instead, detection is often based on inconsistencies with prior knowledge, information from third parties, and physical evidence. Such information is not available in most deception detection experiments. Fourth, people are overconfident in their ability to detect deception. People tend to think that they can detect others’ lies, but confidence is not related to actual accuracy (DePaulo, Charlton, Cooper, Lindsay, & Muhlenbruck, 1997). Finally, people are often truth biased and often fail even to consider the possibility of deceit (Levine, Park, & McCornack, 1999).
Truth Bias
Another reliable finding from the accuracy literature is truth bias. Truth bias is the tendency to judge messages as honest, independent of actual message veracity (Levine et al., 1999). Although there are individual differences in truth bias, it also exhibits a strong situational component. Research finds that truth bias tends to be stronger when people are interacting with others they know and trust; it is stronger in face-to-face interaction than when the communication is mediated; and it is weakened by situational factors that increase suspicion.
Importantly, however, truth bias is reliably observable and has substantial impact across both individuals and situations. That is, truth bias varies in degree from person to person and situation to situation, but despite these differences, most people are truth biased most of the time.
There are at least three important reasons behind the strength and persistence of truth bias. First, truth bias stems, in part, from innate, hardwired cognitive systems that govern how we process incoming information (Gilbert, 1991). Belief is a mental default, and while people can reject information as false, doing so requires additional cognitive resources and processes subsequent to comprehension. A system that questioned everything would be less efficient, and thus there is likely an evolutionary basis for truth bias. Second, communication requires truth bias. If we questioned the veracity of everything others told us, communication could not function. Making sense out of what others say requires a presumption that they are cooperating in the communication (Grice, 1989). Finally, humans are social, and getting along with others requires some degree of trust, coordination, and consideration. People give others considerable leeway so that social interaction is not disrupted.
The “veracity effect” (Levine et al., 1999) is an important implication of truth bias. The veracity effect refers to the finding that accuracy for truthful messages is usually higher than accuracy for lies. This follows from truth bias. Because people are truth biased, they are more accurate for truths than for lies, and therefore source veracity affects detection accuracy. Consistent with the veracity effect, when accuracy is calculated separately for truths and lies, truth accuracy tends to be well above 50%, while lie accuracy is often below 50%. This also means that overall accuracy depends on the proportion of truths and lies being judged. The 54% detection accuracy finding applies only to experiments with equal numbers of truths and lies. As the proportion of messages that are honest increases, so does accuracy, but accuracy declines predictably when most messages are deceptive (Levine, Kim, Park, & Hughes, 2006).
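The base-rate dependence can be expressed as a simple weighted average, in the spirit of the probability model formalized by Park and Levine (2001): overall accuracy equals the truth base rate times truth accuracy plus the lie base rate times lie accuracy. The specific truth and lie accuracy values below are assumptions chosen for illustration (truth accuracy well above 50%, lie accuracy below it).

```python
# Overall accuracy as a linear function of the truth base rate. The specific
# truth and lie accuracies are assumed for illustration only.
truth_accuracy, lie_accuracy = 0.70, 0.38

def overall_accuracy(truth_base_rate: float) -> float:
    """Weighted average of truth accuracy and lie accuracy by message base rate."""
    return truth_base_rate * truth_accuracy + (1 - truth_base_rate) * lie_accuracy

for base_rate in (0.00, 0.25, 0.50, 0.75, 1.00):
    print(f"P(truth) = {base_rate:.2f} -> expected overall accuracy = {overall_accuracy(base_rate):.2f}")
```

With these illustrative values, accuracy is 54% at an even truth/lie split and rises or falls linearly as the proportion of honest messages changes, which is the pattern described above.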
How People Really Detect Lies
As Park, Levine, McCornack, Morrison, and Ferrara (2002) point out in their study “How People Really Detect Lies,” all that judges in most deception detection experiments have to go on is the at-the-time verbal and nonverbal behavior of the message sources. Outside the research lab, however, people can check the facts, talk to others, and so forth. Consequently, lies outside the deception lab are most often detected well after the fact and by discovery methods other than verbal and nonverbal source behaviors at the time of deception. Park and colleagues (2002) simply asked participants to recall a lie that they had detected and to describe what happened, how they found out that the person was lying, and how much time had elapsed between the telling of the lie and its detection. Only 2% of the recalled lies were caught at the time of the telling on the basis of source verbal and nonverbal behaviors. Most were detected after the fact, often much later, and discovery methods often included information from others, physical evidence, and later confessions.
Future Directions
There are several directions for future research on deception. First, almost all research on the topic has been done either in North America or in Europe. Very little comparative or indigenous research exists outside Western cultures. Remedying this is probably the most pressing issue for deception research.
Second, deception research desperately needs new and better theory. Current leakage-based theories have produced limited yield, and alternatives are needed.
Third, deception research needs to study deception from a more interactive perspective. Previous cue research shows little in the way of universal deception cues, and detection research finds meager accuracy. However, there may exist strategies that a questioner can use to prompt deception tells. For example, the behavioral analysis interview is an investigative technique developed and taught by John E. Reid and Associates, Inc. It is a nonaccusatory interview that tries to bait potential suspects into providing incriminating information. Future research should shift the focus from passive observation of cues to strategic veracity assessment.
Finally, recent brain-scanning technologies hold potential for detecting deception. Although less than perfect, the polygraph remains, at present, the most accurate deception detection device. New technologies such as functional magnetic resonance imaging (fMRI) will increasingly be the focus of investigation in the future.
Conclusion
A substantial and intriguing literature exists on topics related to deceptive communication. Deception occurs when someone knowingly misleads another person or persons. Lying is discouraged in every human culture, yet most people lie at least once a week if not daily. Thus, deception is a common occurrence, although it is infrequent in relation to honest communication. People deceive others when the truth proves problematic, and most people learn to do this by age 5.
Research has failed to uncover any reliable, diagnostically useful set of behaviors that can be used to distinguish truths from lies. Deception theory predicts that people unintentionally leak such deception cues, but the findings of research looking for cues fail to replicate from study to study. Deception cue research has focused heavily on nonverbal behavior, but recent findings suggest that research on verbal behaviors holds more promise.
Research looking at people’s ability to detect deception yields much more consistent findings. People are significantly, but only slightly, better than chance at detecting deception. When the chance level is 50%, people average 54% accuracy, with the results of most studies falling within ±10% of that figure. Despite this poor performance, people believe that they can tell when others lie to them; that is, people are overly confident in their deception detection abilities. Moreover, while some people are much better liars than others, there is comparatively little variance in detection ability. Finally, people are almost always truth biased. They tend to believe others independent of whether the person is honest or not. As a result, people are usually correct in believing honest others but tend to mistake lies for truths. This is probably just as well, because the tendency to believe what others say allows communication to function and is thus highly adaptive.
Bibliography:
- Bok, S. (1999). Lying: Moral choice in public and private life. New York: Vintage Books.
- Bond, C. F., Jr., & DePaulo, B. M. (2006a). Accuracy of deception judgments. Personality and Social Psychology Review, 10, 214–234.
- Bond, C. F., & Global Deception Research Team. (2006b). A world of lies. Journal of Cross-Cultural Psychology, 37, 60–74.
- Bond, C. F., Jr., & DePaulo, B. M. (2008). Individual difference in judging deception: Accuracy and bias. Psychological Bulletin, 134, 477–492.
- Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6, 203–242.
- DePaulo, B. M., & Bell, K. L. (1996). Truth and investment: Lies are told to those who care. Journal of Personality and Social Psychology, 71, 703–716.
- DePaulo, B. M., Charlton, K., Cooper, H., Lindsay, J. J., & Muhlenbruck, L. (1997). The accuracy-confidence correlation in the detection of deception. Personality and Social Psychology Review, 1, 346–357.
- DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979–995.
- DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129, 74–118.
- Ekman, P. (2001). Telling lies. New York: W. W. Norton.
- Farquhar, M. (2005). A treasury of deception: Liars, misleaders, hoodwinkers, and the extraordinary true stories of history’s greatest hoaxes, fakes and frauds. New York: Penguin.
- Frank, M. G., & Feeley, T. H. (2003). To catch a liar: Challenges for research in lie detection training. Journal of Applied Communication Research, 31, 58–75.
- Gilbert, D. T. (1991). How mental systems believe. American Psychologist, 46, 107–119.
- Grice, P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
- Inbau, F. E., Reid, J. E., Buckley, J. P., & Jayne, B. P. (2001). Criminal interrogations and confessions. Gaithersburg, MD: Aspen.
- Levine, T. R., Feeley, T., McCornack, S. A., Harms, C., & Hughes, M. (2005). Testing the effects of nonverbal training on deception detection accuracy with the inclusion of a bogus training control group. Western Journal of Communication, 69, 203–218.
- Levine, T. R., Kim, R. K., & Hammel, L. (2008). People lie for a reason: An experimental test of the Principle of Veracity. Unpublished manuscript, Michigan State University.
- Levine, T. R., Kim, R. K., Park, H. S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine’s probability model. Communication Monographs, 73, 243–260.
- Levine, T. R., & McCornack, S. A. (1991). The dark side of trust: Conceptualizing and measuring types of communicative suspicion. Communication Quarterly, 39, 325–340.
- Levine, T. R., & McCornack, S. A. (2001). Behavioral adaption, confidence, and heuristic-based explanations of the probing effect. Human Communication Research, 27, 471–502.
- Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66, 125–144.
- McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59, 1–16.
- McCornack, S. A. (1997). The generation of deceptive messages: Laying the groundwork for a viable theory of interpersonal deception. In J. O. Greene (Ed.), Message production (pp. 91–126). Mahwah, NJ: LEA.
- McCornack, S. A., & Levine, T. R. (1990). When lovers become leery: The relationship between suspicion and accuracy in detecting deception. Communication Monographs, 57, 219–230.
- McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development: The other side of trust. In M. L. McLaughlin (Ed.), Communication yearbook 9 (pp. 377–389). Beverly Hills, CA: Sage.
- Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection experiments. Communication Monographs, 68, 201–210.
- Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrara, M. (2002). How people really detect lies. Communication Monographs, 69, 144–157.
- Peskin, J. (1992). Ruse and representation: On children’s ability to conceal information. Developmental Psychology, 28, 84–89.
- Serota, K. B., & Levine, T. R. (2008). The prevalence of deception in American life. Unpublished manuscript, Michigan State University.
- Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1–59). New York: Academic Press.