May 2018 Q&A with Alexios Mantzarlis

ZEBRA FACT CHECK

From the start, the rating systems used by PolitiFact and the Washington Post Fact Checker have drawn criticism over their inherent subjectivity. At the same time, such systems draw strong support from members of the fact-checking community (a 2016 Duke Reporters' Lab article estimates that a majority of fact-checking organizations use rating systems).

PolitiFact’s founder, Bill Adair, reportedly participated in a debate with Full Fact’s Will Moy at the 2014 Global Fact-Checking Summit over the use of a rating system. We were not able to locate the details of Adair’s argument*.

What is the best comprehensive argument in favor of using sliding scale rating systems?

While you may not count yourself as the one to make this argument, we're asking the question in part to encourage the IFCN to see the importance of clearly justifying the practices adopted by the community of fact-checkers. Somebody should develop the strongest possible version of the argument and make it public.


ALEXIOS MANTZARLIS

Thanks for this important question.

Let me open by saying that Pagella Politica, the Italian fact-checking website I edited before joining the International Fact-Checking Network, uses ratings. My co-founders and I were very clear about the limitations of this instrument. Our methodology explained that ratings don’t reflect all the nuances that might be present in the full article and that the distinction between “bordering” ratings can be hard to pin down. Similarly, we explicitly warned that each politician’s collection of ratings is not a statistically relevant indicator of their credibility but a semi-serious guide to the truthfulness of the small sample of claims verified.

I still stand by those caveats.

At the same time, I think that the fact-checking endeavor does require that a verdict be reached; that a reader be informed of what the best available facts mean for the accuracy of the statement being scrutinized. Merely listing a bunch of facts related to the one being assessed isn't fact-checking; it's fact dumping. And it leaves the audience more confused and less informed. That, to me, is the strongest case for ratings: they are a key to interpreting a long and often data-heavy piece of journalism.

I also think ratings have a psychological effect on the fact-checker writing the piece. In my previous job, I remember that if it looked like a fact check would get a harsh rating, I felt an even higher burden of responsibility about its accuracy. When fact-checking is conducted as a public service — as I think most of it genuinely is — it takes very seriously the job of calling out someone's words as false. Not that other forms of journalism don't have this burden, but ratings call for an additional layer of rigor in the analysis.

Finally, ratings are meant to help accomplish fact-checking's mission of reducing the spread of public falsehoods. Put perhaps too succinctly: receiving "Pinocchios" is embarrassing. And public figures don't want to be embarrassed. If a rating makes it more likely that a false talking point gets dropped, then I think that's a good reason to use them (though I'd definitely welcome more research on this).

Ratings, of course, come in many different forms. After launching Pagella Politica, I discovered and became jealous of a few "non-linear" scales, such as those used by Chequeado and El Sabueso, that include the option of an "unsupported" rating. That indicates there is no evidence to back something up — but none to disprove it either. The ABC Fact Check in Australia uses three buckets ("in the red", "in the green" and "in between") within which it assigns different short summations of its verdict.

When I teach fact-checking to journalists and other professionals, I make clear that fact checks can, but don't have to, carry a rating. The fundamental element is an adjudicated conclusion on the relative truthfulness of the statement being examined. That can come in various forms, a linear scale being only the most popular one.

A fact-checking project that chooses to adopt ratings must explain them to readers — and we are going to be spelling out that requirement in the new iteration of our code of principles application process (though it is currently implicit in the requirement about transparent methodology). I’m also very glad, for instance, to see Full Fact ask its readers for thoughts on its own system of “conclusions” (which is not a linear ratings scale) and look forward to reading what they learn. The IFCN too is going to be conducting research on how different headlines impact reader understanding over the course of 2018 and I hope that those lessons will provide pointers on how and when to use ratings.
