Review: “His Lips Are Moving: Pinocchio Effect and Other Lexical Indicators of Political Deceptions”

Near the tail end of November 2015, somebody shared with me a link to an NPR story about a new study looking at the lexical features of political lying, “Researchers Examine How to Spot a Lying Politician”:

SHANKAR VEDANTAM, BYLINE: Well, Steve, this is research by social scientists who are doing what social scientists do, which is they look for patterns in human behavior. Michael Braun, Lyn Van Swol, and Lisa Vang at Millikin University and the University of Wisconsin at Madison, they recently analyzed several hundred claims made by politicians that had been fact-checked by PolitiFact. This is a group that tries to assess…

INSKEEP: Sure.

VEDANTAM: …Whether political claims are accurate or inaccurate. They find that politicians with inaccurate or untruthful statements as judged by PolitiFact are more likely to use lengthier sentences both in prepared speeches and in less scripted settings. So there’s an old joke that says, Steve, do you know how a politician is lying? His lips are moving.


This story had me hooked. PolitiFact consistently denies that it passes any judgment on whether the subjects of its fact checks knowingly tell lies. How had academics found a way past that, using PolitiFact’s ratings to help evaluate differences between liars and truth-tellers?

We contacted the lead author, Michael T. Braun, who graciously shared a copy of the study.

His Lips Are Moving: Pinocchio Effect and Other Lexical Indicators of Political Deceptions

We’ll use HLAM in future references to the paper, authored by Michael T. Braun, Lyn M. Van Swol, and Lisa Vang.

Here’s the bulk of the abstract:

Using the software program LIWC (Linguistic Inquiry and Word Count), this study used political statements classified as truths and lies by website Politifact.com and examined lexical differences between statement type (lie or truth) and the setting (interactive or scripted) in which the statement was given. In interactive settings (where statements given are prompted by questioning), politicians used shorter sentences, simpler words, and fewer causation statements than statements given in scripted settings (where statements are prepared in advance). Lying politicians used more words and negation statements than truth-tellers in both interactive and scripted settings. In interactive settings, politicians used more first-person pronouns.


The HLAM researchers used political claims rated as truths or lies by PolitiFact to study lexical differences between the two types of statement.
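For readers unfamiliar with LIWC, here is a minimal Python sketch of the kind of category counting it performs. The word lists below are hypothetical stand-ins of our own devising; LIWC’s actual dictionaries are proprietary and far larger, and LIWC reports most categories as a percentage of total words.

import re

# Hypothetical mini-dictionaries, standing in for LIWC's much larger proprietary lists.
NEGATIONS = {"no", "not", "never", "none", "cannot", "don't", "won't"}
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}

def lexical_profile(statement):
    """Count total words, negations, and first-person pronouns in a statement."""
    words = re.findall(r"[a-z']+", statement.lower())
    total = len(words)
    def pct(category):
        # LIWC reports most categories as a percentage of total words.
        return 100.0 * sum(word in category for word in words) / total if total else 0.0
    return {
        "word_count": total,
        "negations_pct": pct(NEGATIONS),
        "first_person_pct": pct(FIRST_PERSON),
    }

print(lexical_profile("We never said we would not support the bill."))
# {'word_count': 9, 'negations_pct': 22.22..., 'first_person_pct': 22.22...}

HLAM’s comparison amounts to computing features like these for statements PolitiFact rated true and statements it rated false, then testing for differences between the two groups.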

In this review, we will keep our focus on our first concern: Is it legitimate to use PolitiFact’s judgments to classify statements as lies?

What Are “Lies”?

The term “lie” carries two primary meanings in popular usage. First, it denotes willful deception. Second, it may refer to simple untruth without any intent to deceive. The difference between the two types of lies carries great significance for studies like HLAM. For that reason, researchers ordinarily try to keep a clear distinction between the two types of lies.

Columbia University researcher Julia Hirschberg and her colleagues published the following guide in 2008:

What is Deception?
• Deliberate choice to mislead
– Without prior notification
– To gain some advantage or to avoid some penalty
• Deception is Not:
– Self-deception, delusion, pathological behavior
– Theater
– Falsehoods due to ignorance/error


HLAM routinely encourages confusion between the two types of lies. The abstract says PolitiFact distinguishes between lies and truths, but PolitiFact only claims to (relatively) distinguish false from true. HLAM intertwines “lie” with “false” as it introduces its methodology:

To make use of political lies in the context of lexical analysis, this study uses LIWC to analyze lexical factors in statements of public figures the website Politifact.com had classified as true and false. Using Politifact.com offers several advantages that help build on previous research and theory on deception. First, all statements are freely given by the source; this is different from much past laboratory research on deception in which participants are instructed to lie (for review, see DePaulo et al., 2003). Second, Politifact’s extensive work to categorize the statements, as well as their close link with the news media, suggests an objective view on which statements are true and which are false.


Note that the above paragraph backtracks on the description of PolitiFact’s classifications. In the abstract, PolitiFact distinguished between lies and truths. That changes to distinguishing between true and false. Yet the researchers say the collected data will allow them to build on earlier research on “deception.”

We fault HLAM for its nearly nonexistent effort to clearly define the term most important to the research. If the statements PolitiFact calls “false” are not also deceptive, the research loses credibility in describing the lexical features of willfully deceptive political speech.

We’ll pick up on this issue after addressing the reliability of PolitiFact’s ratings.

Objective PolitiFact?

More than once HLAM refers to PolitiFact’s ratings as “objective.” We’ve already quoted HLAM’s most detailed support of that description, and we’ll repeat it here:

Politifact’s extensive work to categorize the statements, as well as their close link with the news media, suggests an objective view on which statements are true and which are false.


We disagree that doing extensive work to classify statements argues in any substantive way for the objectivity of the results of that work. Obtaining objective results depends on a clear system of classification and consistency in applying that system. We likewise bring a skeptical eye to the notion that a “close link with the news media” supports the objectivity of the ratings.

Fortunately, HLAM provides a short list of examples (Table 1, Page 9) of PolitiFact’s true and false ratings. We have the opportunity to test, at least in a limited way, the reliability of the ratings.

Table 1 offers six examples of true and false ratings. We think PolitiFact rated two of them incorrectly.

John McCain/True

John McCain: “The Japanese were tried and convicted and hung for war crimes committed against American POWs. Among those charges for which they were convicted was waterboarding.”

PolitiFact’s rating: True

It’s true that some Japanese were tried and convicted for crimes committed against American POWs. But the “water cure” as administered by the Japanese was usually, if not always, literal drowning instead of simulated drowning. More importantly, there was no “charge” of waterboarding. The charge was failing to treat POWs humanely according to the laws of war. Convicted Japanese defendants whose charges included specifications of the “water cure” were typically accused of other mistreatment also, such as beating POWs and forcing them to work.

McCain’s statement has enough error to move it out of the “True” category and perhaps all the way to “False.”

Joe Biden/False

Joe Biden: “John doesn’t have a record in the Senate. John’s only passed four bills. They’re all about post offices.”

PolitiFact’s rating: False

Biden’s statement is at least partly hyperbole (“doesn’t have a record in the Senate”). The fact that Biden leads with hyperbole offers enough reason to consider the entire statement as hyperbole. Are hyperbolic statements false? Literally speaking, yes. But such statements tend to communicate something true without deceiving the audience.

PolitiFact’s “False” ruling departs from its customary practice of crediting a claimant’s underlying point.

We think these two examples from HLAM’s Table 1 provide enough evidence to question PolitiFact’s objectivity and reliability. Getting two wrong from a set of six examples does the opposite of inspiring confidence.

HLAM makes no attempt to discuss problems with its definition of “lie” until reaching its section on “Contributions, Limitations, and Future Research”:

These politicians, like many of their most fervent supporters, may be “true believers.” Thus, alleging, for example, that the Affordable Care Act requires citizen healthcare decisions to go before a “death panel” may not register as a lie to the person who makes the claim. Although some previous research has found that self-deception has similar lexical qualities as deception (Pennebaker, 2011), others (Schober & Glick, 2011) have identified ways in which self-deceptive speech may differ. For example, Schober and Glick (2011) suggest that self-deception may be defensive speech and found that people using defensive speech had a higher word count, used more exclusive words, used more negations, and had more repetition. One limitation was that we could not distinguish between deliberate and self-deception to examine lexical characteristics of self-deception.


HLAM’s section on limitations acknowledges that some researchers find liars use more words while others find liars use fewer words. If the person making a claim does not recognize the claim as a lie, that negates every theory of lexical difference that depends on a motivation to conceal the truth. In other words, people who think they are telling the truth have no reason to hide their deceptions.

What use is research on intentional lying if it uses statements the claimant does not view as lies?

We looked for an explanation among the limitations the HLAM authors described. But HLAM never addresses the problem directly:

Our political actors may intend to be truthful in interactive settings, find the truth to be problematic for their message or audience, insert some degree of deception, and then return to truthful information; this can make lies and truths difficult to demarcate and reveals a limitation in current knowledge about how widespread lexical differences between lies and truths may be. Do the differences manifest themselves in the buildup to the lie, in the lie itself, or in sentences after the lie?


Or, political actors may intend to be truthful in both settings, speak inaccurately (or accurately!), and have PolitiFact rate the results as false. What then? HLAM does not address the problem.

What theory would explain differences between false and true claims where in either case the claimants believe they are telling the truth?

HLAM, it seems to us, constructs its research on a thoroughly squishy foundation. HLAM fails to establish a consistent definition of “lie,” leading to problems of ambiguity throughout the paper. HLAM offers no scientifically supportable reason to view PolitiFact’s ratings as objective. On the contrary, its list of six examples offers reason to mistrust PolitiFact’s ratings.

We contacted HLAM’s lead author, Michael T. Braun, to try to resolve our questions about the paper. The results of our inquiry did not allay our concerns.

Proceed to Page 2 for that part of the story.
