Working paper doesn’t work

A critical review of “Lying in Politics: Evidence from the US,” a working paper by Alessandro Bucciol and Luca Zarri

Researchers Alessandro Bucciol (assistant professor of econometrics) and Luca Zarri (associate professor of economics), both of the University of Verona, wanted to examine the tendency of politicians to lie.  But they were confronted with the problem of objectively assessing whether and how often politicians exercised their prerogative to prevaricate.  The growing popularity of political fact checking offered the researchers hope.  What if fact checkers objectively measured the degree of untruth in political statements?  That vista of opportunity drew the researchers forward as they developed a working paper (henceforth Lying) exploring statistics they culled from mainstream fact checker PolitiFact:

Our data source, the website PolitiFact.com, on a daily basis selects claims that are objectively verifiable and relevant in journalistic terms. In addition, a person reading or hearing each of the chosen claims should wonder: “Is that true?” The consequence of this selection is that our sample is representative of questionable (that is, “not patently  true”) but unambiguously provable claims, although in general it is likely to oversample  false claims and more hotly debated topics. Moreover, our data capture a constant flow of claims by a large number of Democratic and Republican politicians within a long sample period (almost six years), that is a type of data which it would not have been possible to obtain through alternative fact-checking sources.


Our critique begins and ends with the researchers’ decision to use PolitiFact data as the foundation for their research.  We trust that Bucciol and Zarri counted the numbers and performed their other calculations correctly, and we sympathize with their view of the importance of an informed electorate to the proper function of a democratic republic.  But we deeply question the data on which the rest of Lying relies.

Considering the source

Lying presents a credulous account of PolitiFact’s history and practices.  We’ll address in turn the researchers’ stated reasons for believing PolitiFact’s data serve as an acceptable foundation for their conclusions, considering the descriptions of PolitiFact found in section III of Lying.

Objective and impartial?

The researchers accept the idea that PolitiFact is objective and impartial:

The  growingly important  presence  of independent, objective referees is crucial for the health of a modern democracy relying on appropriate checks and balances. Our data mainly come from the national section of the website PolitiFact.com (www.politifact.com), created and managed since 2007 by the independent newspaper Tampa Bay Times published in Florida. The website serves as an impartial watchdog of  political news, checking and reporting the accuracy of claims  publicly made in the media  (talks,  newspapers, television,  etc.) by elected officials at all levels of government, candidates, leaders of political parties, political activists as well as lobbyists, talk show hosts, economists and columnists.


We’re not accustomed to seeing PolitiFact described as objective and impartial in professional literature at any level; PolitiFact doesn’t even ordinarily describe itself that way.  But PolitiFact makes an exception when pitching itself to potential advertisers.  A later mention in this section of Lying of PolitiFact’s audience demographics hints that the researchers used PolitiFact’s outreach to advertisers as a source: Lying contains the same rundown of PolitiFact’s audience that PolitiFact pitches to advertisers.

While it’s absurd on its face to think PolitiFact’s staff is politically impartial, it’s more important to note that PolitiFact’s grading system makes it effectively impossible to assign the ratings objectively.  PolitiFact’s definitions for its “Truth-O-Meter” ratings are as slippery as any politician’s speech.

PolitiFact rates objectively verifiable claims?

Lying says PolitiFact rates objectively verifiable claims.  That’s not always true.  In fact, PolitiFact’s “Principles of PolitiFact and the Truth-O-Meter” page explains what it does if a person makes a claim PolitiFact cannot verify:

People who make factual claims are accountable for their words and should be able to provide evidence to back them up. We will try to verify their statements, but we believe the burden of proof is on the person making the statement.


In practice, this means if a politician makes a claim that PolitiFact can’t verify, the politician receives a “False” rating.  Obviously PolitiFact would need no such burden of proof standard if it rated only verifiable claims.

A representative sample?

Lying says PolitiFact’s editorial selections result in a representative sample of questionable political claims.  We detect no reasonable argument in Lying supporting this position.

There’s no good reason to assume PolitiFact journalists are unbiased, and no good reason to assume the journalists’ bias has no effect on which stories they choose to cover or how they choose to cover them.

More data over a longer time period?

Lying says PolitiFact serves as its ideal data source because it has rated more statements and people over a longer period than other fact checkers.  We grant that PolitiFact rates more statements, but even Lying concedes, via footnote, that Factcheck.org has a longer record of fact checking than PolitiFact.

Factcheck.org’s archive page may have misled the researchers into underestimating the extent of that fact checker’s record before 2007.

Isolated claims?

Lying says PolitiFact consistently follows a rigorous procedure for its fact checks, including isolating individual claims.  In fact, PolitiFact often rates compound claims and publishes a “Truth-O-Meter” rating based on averaging the truth value for two or more claims.  Lying does not describe whether an attempt is made to account for this wrinkle in the data.

Assessments always based on official statistics?

Lying says PolitiFact reporters always base their truth assessments on official statistics.  We see no foundation for that claim.  We tested it by picking a page of 2010 stories from PolitiFact’s archives and found three in a row (Michael Steele, Howard Dean, John Boehner) where no official statistic was available to back the ruling.  All three statements were arguably expressions of opinion.

Bucciol responds

We sent the researchers a preliminary version of our objections to using PolitiFact data to study the truth-telling of politicians.

Alessandro Bucciol graciously offered a response to our concerns:

We chose to use PolitiFact data because, so far, we view it as the main independent fact-checker for the US, also due to the fact that it holds a larger record of public claims than other fact-checkers.

Before writing the paper, we had an extensive discussion with Bill Adair. We, and the PolitiFact staff as well, are aware that their fact-checking may have some problems. However, we believe that our results are not biased for at least three reasons: 1) the nature of the sample, that is representative of interesting claims, as they are “not patently true”, at first glance, 2) the method of analysis, as we do not make a point distinction between the different degrees of truthiness, and 3) the availability of a very large number of claims for a long period: any existing error at the single claim level should become irrelevant when we look at the whole dataset.


We’ll address Bucciol’s three items in order, noting that he may have other reasons aside from these.

Representative sample

The opinion of PolitiFact journalists that a statement is not patently true, by itself, serves as a very poor check on selection bias.  Ideological bias, particularly among like-minded journalists, will tend to influence which statements are viewed as questionable.

In our original message to the researchers, we mentioned that Bill Adair has said the organization tries to choose an equal number of claims by Democrats and Republicans.  That dynamic directly undercuts a party-blind attempt to fact check political statements.  If the journalists see one party getting more attention, they may accept less questionable items from the other party in their attempt to even out the numbers.  We don’t see Lying accounting for this at all.
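
To illustrate the dynamic, here’s a minimal sketch built entirely on assumptions of our own (invented “questionability” scores and a strict equal-quota rule, not PolitiFact’s actual editorial process): when one party’s pool of candidate claims skews more questionable, an equal quota forces the fact checker to reach for less questionable claims from the other party.

    # Illustrative sketch only (Python).  The questionability scores and the
    # equal-quota selection rule are our own assumptions, not a description of
    # PolitiFact's actual editorial process.
    import random

    random.seed(1)

    # Hypothetical pools of candidate claims, scored 0-1 for "questionability".
    # Assume party A's pool skews more questionable than party B's.
    pool_a = [random.betavariate(4, 2) for _ in range(500)]
    pool_b = [random.betavariate(2, 4) for _ in range(500)]

    def select_balanced(pool_a, pool_b, quota=100):
        """Pick the 'quota' most questionable claims from each party separately."""
        return (sorted(pool_a, reverse=True)[:quota],
                sorted(pool_b, reverse=True)[:quota])

    picks_a, picks_b = select_balanced(pool_a, pool_b)
    print("avg questionability, party A picks:", sum(picks_a) / len(picks_a))
    print("avg questionability, party B picks:", sum(picks_b) / len(picks_b))
    # With equal quotas, the bar for selecting party B's claims ends up lower
    # than the bar for party A's, even though no single decision looks partisan.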

Method of analysis

The researchers refine the PolitiFact data by creating two “dummy variables.”  They group claims rated “Pants on Fire” with those rated “False” to get a group they label “black lies.”  They group claims rated “Half True” and below into a group combining “black lies” with “grey lies.”  They use these groupings to produce their conclusions about political lies.
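
To make the grouping concrete, here’s a minimal sketch of how such dummy variables might be coded; the sample claims and field names are our own illustrative assumptions, not the researchers’ actual code or data.

    # Illustrative sketch only (Python).  The sample claims and field names are
    # our own assumptions, not the researchers' actual code or data.
    RATINGS = ["True", "Mostly True", "Half True",
               "Mostly False", "False", "Pants on Fire"]

    claims = [
        {"speaker": "A", "rating": "Mostly True"},
        {"speaker": "B", "rating": "Half True"},
        {"speaker": "C", "rating": "Pants on Fire"},
    ]

    for claim in claims:
        # Dummy 1 -- "black lie": rated "False" or "Pants on Fire"
        claim["black_lie"] = int(claim["rating"] in ("False", "Pants on Fire"))
        # Dummy 2 -- "black or grey lie": rated "Half True" or worse
        claim["black_or_grey_lie"] = int(
            RATINGS.index(claim["rating"]) >= RATINGS.index("Half True"))

    print(claims)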

We acknowledge that the method in Lying smooths over some of the problems in the PolitiFact data: obviously, if PolitiFact made a mistake by classifying a statement as “Mostly False” instead of “Half True,” the mistake will not affect the analysis in Lying at all, since both ratings fall within the same grouping.

Unfortunately, the research method only justifies the paper’s conclusions if the number of errors in the data is sufficiently low and/or largely confined to mistakes weeded out by using the dummy variables.  We see nothing in the paper to suggest that the researchers have attempted to verify or quantify the reliability of PolitiFact’s ratings.

Single claim errors irrelevant given the size of the sample?

As we note above, the researchers’ method does not justify the research conclusions if enough errors alter the black and grey lie groups in Lying.  What if 20 percent of PolitiFact’s ratings contain errors of that kind?  How did the researchers assess the risk?  Again, we find nothing in Lying addressing a key question.
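
The question matters because errors only wash out if they’re random.  Here’s a back-of-the-envelope sketch, built entirely on assumptions of our own (an even spread of 1,000 claims across the six ratings and a 20 percent one-step error rate), showing that random errors barely move the “black or grey lie” share while systematic ones shift it regardless of sample size.

    # Back-of-the-envelope sketch (Python).  The even baseline distribution and
    # the 20 percent one-step error rate are our own assumptions, not estimates
    # of PolitiFact's actual error rate.
    import random

    random.seed(0)
    SCALE = ["True", "Mostly True", "Half True",
             "Mostly False", "False", "Pants on Fire"]
    LIE_SET = set(SCALE[2:])  # "Half True" and worse ("black or grey lies")

    def lie_share(ratings):
        return sum(r in LIE_SET for r in ratings) / len(ratings)

    def shift(rating, step):
        """Move a rating one step along the scale, staying within range."""
        i = min(max(SCALE.index(rating) + step, 0), len(SCALE) - 1)
        return SCALE[i]

    # Assumed baseline: 1,000 claims spread evenly across the six ratings.
    claims = [SCALE[i % 6] for i in range(1000)]
    print("baseline 'black or grey' share:", lie_share(claims))

    # Scenario A: purely random errors -- 20% of ratings off by one step in
    # either direction.  These largely cancel, as Bucciol's argument assumes.
    random_errors = [shift(r, random.choice([-1, 1])) if random.random() < 0.20
                     else r for r in claims]
    print("random one-step errors:        ", lie_share(random_errors))

    # Scenario B: systematic errors -- 20% of ratings pushed one step harsher.
    # These do not cancel, however large the sample grows.
    harsher_errors = [shift(r, +1) if random.random() < 0.20 else r for r in claims]
    print("systematic one-step errors:    ", lie_share(harsher_errors))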

We’ve encouraged the researchers to send us the list of PolitiFact stories making up their data set along with an estimate of the number of errors that would significantly affect their conclusions.

We’re betting it would be easy to find enough likely categorization errors to jeopardize the findings in Lying without even looking at the problem of selection bias.  Consider what former Cleveland Plain Dealer editor John Kroll recently wrote:

Even if one could parse out the differences, the Truth-O-Meter mixed apples and oranges. Its ratings are a combination of both whether a statement is true and whether it was misleading. Where the balance between those two values was struck in picking a rating was crucial. And as far as I could tell, looking at PolitiFact ratings from the national site as well as local ones, the final choices were coin flips. Much-debated coin flips conducted by honest journalists trying to be fair — but coin flips, nonetheless.


Kroll was in a position to know.  The Cleveland Plain Dealer was PolitiFact’s partner in running PolitiFact Ohio.

Do we want to base scholarly conclusions about political lies on data that amounts to a pile of coin flips?  Let the answer ever be “no.”

Conclusion

Bucciol and Zarri pursue a worthy aim.  The success of representative democracy depends largely on the ability of voters to make rational choices when voting.  Political rhetoric makes up part of that equation, so it’s a ripe area for study.  But any serious consideration of Lying must await a successful and public effort by the researchers to ensure the quality of its data set.  The working paper version of Lying leaves that work entirely to later versions of the paper, and we believe the effort will prove fruitless if attempted.


Correction June 14, 2015: We consistently misspelled Alessandro Bucciol’s name as “Brucciol” throughout this article until correcting each misspelling on this date. Our sincere apologies to Mr. Bucciol and our readers for the misspelling.

2 Comments

  1. counterirritant

    Feynman has been gone for 25 years now and nothing much has changed.
    http://youtu.be/IaO69CF5mbY

    1. Bryan W. White (Post author)

      Thanks for reading and commenting.

      I’m inclined to give economists a bit more credit than social scientists for using the scientific method correctly, present case excluded.

      The Feynman video you shared should stay relevant for my next review of a scholarly work featuring PolitiFact: “The Epistemology of Fact-checking” by Uscinski and Butler.

      Based on the abstract, I’m not expecting things to go well for PolitiFact.

      http://www.tandfonline.com/doi/full/10.1080/08913811.2013.843872#preview

