Review of fact-checking study: ‘Making a Difference?’

A critical review of Michelle A. Amazeen’s review of fact checking in 2012

October of 2013 provided us with more than one research paper on fact checking from the New America Foundation.

“Making a Difference?: A Critical Assessment of Fact-checking in 2012” [Updated URL, Aug. 23, 2019] by Michelle A. Amazeen proves at once less ambitious and more successful than the paper by Brendan Nyhan and Jason Reifler we reviewed last month.  Nyhan and Reifler tried to show whether a threat of fact checking, with consequences spelled out, affected politicians’ behavior.  Amazeen simply takes stock of fact checking for 2012 and offers her observations.

Spot on, more or less

Amazeen does a nice job of providing an overview of the three major fact checkers, FactCheck.org, the Washington Post Fact Checker and PolitiFact.  She notes the difficulty of rating the effectiveness of fact checks, the pitfalls of using a rating scale and the selection bias problem.

Amazeen provides valuable interview material from important people associated with the fact checking industry.  That includes two of our favorite critics, Mark Hemingway and Jason Hart.

Amazeen appropriately applies a pin to Annenberg Fact Check’s balloon of a study purporting to show that fact checks make readers more informed:

While these results suggest that fact-checking is making a difference, the study cannot definitively rule out the possibility that more informed voters seek out fact-checking websites.

We made the same observation while evaluating an article by Annenberg’s Brooks Jackson, “Firefighters, fact-checking and American Journalism.”

Amazeen concludes the paper with a fair set of “best practices” suggestions.

That’s the bulk of the paper.  But we found a few things to criticize.

“We’re not going to let our campaign be dictated by fact-checkers.”

The media, sometimes including the fact checkers, have tried to discredit politicians, and the Romney campaign in particular, by wielding the “We’re not going to let our campaign be dictated by fact checkers” line from Romney pollster Neil Newhouse.  The quotation was taken out of context, as many have shown.  Amazeen mentions the statement’s role in focusing attention on the efficacy of fact checking.  The statement did play that role, but only after President Obama and the media wrenched it out of context.

Amazeen helps cement the false media narrative by failing to supply the missing context.

Citing Nyhan and Reifler

Studies by political scientists Nyhan and Reifler, such as those supposedly supporting a “backfire effect” from fact checking, deserve the same skepticism Amazeen applies to the Annenberg study.  When a fact check makes a group of people believe “misinformation” even more strongly, the researchers call it the “backfire effect.”  But the “backfire effect” occurs with bad fact checking, and when researchers fail to recognize bad fact checking, it shows their difficulty in removing themselves from their own experiment.

“Apology Tour” debunked?

Amazeen uses PolitiFact editor Angie Drobnic Holan’s example of a campaign repeating a debunked claim:

A prominent example Holan offered was Romney talking about Obama apologizing. “I think all the fact-checkers debunked that,” said Holan. “And because it was in his book in 2010, we debunked it very early. And yet, he continued to say it. He said it in the third debate. That was just something he wasn’t going to stop saying, no matter what the fact-checkers said.”

We debunked the fact-checkers’ opinion of Romney’s “apology tour” claim in an October 2012 fact check, showing the weakness of their cases and using descriptions of political apologies from professional journals to support Romney’s phrasing.  The three mainstream fact checkers have yet to issue retractions or corrections.

The Romney Jeep ad fact check fail

Amazeen recounts the fact checkers’ harsh treatment of the Romney campaign’s Jeep ad.  The fact checkers panned the ad for “giving the misleading impression that Jeep was moving its US manufacturing to China.”  Among a host of problems, the fact checkers failed to tell readers that some U.S. Jeep manufacturing would, in fact, move to China.  The fact checkers that provide graphic ratings gave the ad their worst marks.  PolitiFact’s “Lie of the Year” for 2012 was at least partially true.  That’s normal for PolitiFact.  Yet in Amazeen’s paper we see mention only of a “Lie of the Year” controversy over the Democratic Party talking point that Republicans voted to end Medicare.

Biased fact checkers?

Amazeen says critics look to two studies to support claims of fact checker bias.  As one of the editors of the blog “PolitiFact Bias,” I know a thing or two about what evidence those critics use.  Amazeen’s treatment of the issue disappoints.

Eric Ostermeier of “Smart Politics” and the University of Minnesota did a year-long study looking at how the national PolitiFact distributed its ratings.  For ratings associated with party figures, Republicans drew most of the harshest ones.

George Mason University later did a similar study, also finding PolitiFact gave Republicans the greater helping of its lowest ratings.

The researchers in both cases say PolitiFact portrays Republicans as worse liars than Democrats.  Amazeen notes:

The problem with these critiques is that a discrepancy in rated statements does not prove bias. There is no reason to assume that the number of misleading claims found by fact-checkers would be equal between the parties at any given time.

Neither study attempts to prove bias.  That’s a problem for critics who use the studies as proof of bias, but on the other hand both studies strongly suggest that PolitiFact’s story selection bias results in harsher ratings for Republicans.  And there are plenty of reasons to expect that outcome, including the liberal lean of journalists as a group.
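
To illustrate why a discrepancy in rated statements can’t by itself distinguish biased rating from biased story selection, here is a minimal toy simulation. It’s our illustration, not a model from either study, and every number in it is a hypothetical placeholder: a perfectly even-handed rater still produces lopsided totals when the editor oversamples one party’s false claims.

```python
import random

random.seed(42)

# Toy model: both parties make the same mix of claims,
# half true (1) and half false (0).
CLAIM_POOL = [1] * 50 + [0] * 50

def select_stories(pool, false_preference):
    """Editorial story selection: sample 30 claims to check,
    weighting false claims by `false_preference` (1.0 = no skew)."""
    weights = [1.0 if claim else false_preference for claim in pool]
    return random.choices(pool, weights=weights, k=30)

# The rater is accurate and even-handed; only the selection differs.
# Hypothetical skew: false Republican claims are three times as
# likely to be chosen for checking.
dem_checked = select_stories(CLAIM_POOL, false_preference=1.0)
gop_checked = select_stories(CLAIM_POOL, false_preference=3.0)

print("False ratings, Democrats:  ", dem_checked.count(0))
print("False ratings, Republicans:", gop_checked.count(0))
```

An observed gap in false ratings is consistent with either explanation, which is why the raw totals alone settle nothing.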

PolitiFact Bias hosts (via Google Docs) a study I did measuring PolitiFact’s bias in applying its “Pants on Fire” rating.  It’s not easy to explain away the data, and it’s a better proof of PolitiFact’s ideological bias than studies that measure the numbers and proportions of false ratings.
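
For readers curious about the arithmetic behind that kind of comparison, here is a minimal sketch of the proportion at issue. The function name and all counts are our hypothetical placeholders, not figures or code from the study:

```python
# Share of "Pants on Fire" ratings among all false-tier ratings
# ("False" plus "Pants on Fire") for one party's figures.
# All counts are hypothetical placeholders, not data from the study.

def pants_on_fire_share(pants_on_fire: int, plain_false: int) -> float:
    total = pants_on_fire + plain_false
    if total == 0:
        raise ValueError("No false-tier ratings to compare.")
    return pants_on_fire / total

gop = pants_on_fire_share(pants_on_fire=30, plain_false=70)  # 0.30
dem = pants_on_fire_share(pants_on_fire=10, plain_false=60)  # ~0.14

print(f"GOP 'Pants on Fire' share: {gop:.2f}")
print(f"Dem 'Pants on Fire' share: {dem:.2f}")
```

Since the line between “False” and “Pants on Fire” is a judgment call rather than an objective criterion, a persistent gap between the parties’ shares is harder to explain away as mere story selection.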

Amazeen’s conclusions

“They should … weigh the controversy created by ratings systems”

Doubtless each of the three major fact checkers has weighed the controversy.  Amazeen covers FactCheck.org’s principled stand against using a rating system in her paper.  The Washington Post Fact Checker and PolitiFact have also weighed the controversy and consider any increase in popular appeal achieved through gimmickry worth the cost in education.

Amazeen’s best practices

As noted above, we find Amazeen’s recommendations for best practices generally agreeable, not coincidentally because we implemented nearly all of them over a year before Amazeen published her critique.

“1. Evaluate knowable facts”

We don’t follow this one strictly, because we recognize that politicians can connect misleading rhetoric to unknowable claims.  We deal with this issue by not issuing a truth rating where real doubt exists.

“2. Use reputable sources that are widely recognized as credible”

We don’t have this stated as a principle, but it should go without saying that readers respond to sources they feel they can trust.

“3. Provide necessary context for readers”

Zebra Fact Check stands second to none in contextualizing the statements we check.  We’ve implemented groundbreaking transparency by making even our full interviews with expert sources (questions and answers) available to readers.

“4. Integrate fact-checking into coverage and avoid circulating false claims”
“5. Separate fact-checkers from beat reporters.”

We don’t offer news reporting as such, but our commentary content dovetails with our fact check content.

“6. Admit mistakes”

We announce major corrections on our homepage in the same format as a fresh story (Update Nov. 24, 2014:  Changes made to the site earlier this year leave only fact check material appearing on the home page. Major story corrections now have their own tab on the site menu). Clarifications and minor corrections, other than minor typos or formatting changes, are noted at the bottom of a story, between its conclusion and the reference material.

We think we walk the walk on corrections better than the big boys at FactCheck.org and PolitiFact, and we’ll discuss that in a future article.

Wrapping up

Overall, we think Amazeen crafted a pretty good overview of fact checking.  Yes, the paper perpetuates some fact-checker-crafted legends, a few of which I mentioned above.  And Amazeen doesn’t yet seem to have noticed the discrepancies between PolitiFact’s practices and its principles, but despite that she lands her conclusions and recommendations on mostly solid ground.

Update Sept. 30, 2014:

Updated URL for the .pdf file of Amazeen’s review paper.

2 Comments

  1. counterirritant

    One difference between Kessler and PolitiFact is that Kessler admits the Pinocchios are “subjective” and a “gimmick” while PolitiFact tries to convey the idea that their ratings follow some sort of standards.

    Amazeen’s major problem is that the effects of cognitive biases are willingly accepted for readers but dismissed for writers. That’s not what the science tells us, but it does demonstrate a cognitive bias of her own.

    1. Bryan W. White (Post author)

      Thanks for commenting. You’re right that Kessler has a more responsible take on the rating gimmick than PolitiFact. Though FactCheck.org beats both of them on that point by avoiding the gimmick.

      “Amazeen’s major problem is that the effects of cognitive biases are willingly accepted for readers but dismissed for writers.”

      That’s nearly always the problem with these academic assessments of fact checking. But I think her conclusions largely transcended the bias problem. Her set of recommendations was good even if she has an unrealistic view of, for example, PolitiFact’s conformity to those standards.

