Popular mainstream fact checker PolitiFact published an article on Nov. 14, 2022 flatly proclaiming Donald Trump the least accurate politician it has ever covered. The claim creates a problem for PolitiFact, for an objective proof of “least accurate” presents an exceptionally tall order. If PolitiFact lacks that proof, then its article amounts to a derogatory opinion about a politician. That’s something allegedly forbidden to the “nonpartisan” fact-checking class. PolitiFact has assured its readers that it is unbiased. Is the judgment of Trump, minus objective proof, supposed to count as an unbiased opinion?
Put plainly, there exists no easy way of objectively determining a most or least accurate politician, and PolitiFact’s subjective “Truth-O-Meter” rating system will never reasonably serve that purpose. Zebra Fact Check will not try to objectively identify the least accurate politician PolitiFact has ever covered. We see no way forward for that fact check without an involved and expensive scientific approach. Instead, we will show that PolitiFact’s stance is untenable.
The Horns of a Dilemma
PolitiFact’s article places it on the horns of a dilemma. PolitiFact has assured readers its system makes it “not biased.” But if PolitiFact publishes a statement identifying a “least accurate” politician without objective backing evidence, then that statement counts as opinion at best. We rightly find “unbiased opinion” on oxymoron lists.
We find PolitiFact has no objective evidence supporting its claim.
PolitiFact’s Supporting Evidence
We say PolitiFact failed to make its case. We take seriously our responsibility to prove the point. PolitiFact offered a summary of its argument, more or less, in its “If Your Time Is Short” summary leading the article.
“PolitiFact has fact-checked Donald Trump 960 times since 2011.”
The 960 fact checks PolitiFact mentions stand, we believe, as the current record number of fact checks. But even though fact checkers have a well-known bias toward publishing fact checks that find fault, having the most fact checks serves as no kind of convincing evidence that an entity was the least accurate. Nine hundred sixty “True” ratings, for example, ought to stand as the most accurate record if nobody was fact checked more.
The first point offers no real help in making PolitiFact’s case.
“Around 75% of Trump’s claims have landed on our rating scale’s bottom half: Mostly False, False or Pants on Fire!”
Even if we were to foolishly assume the objectivity of “Truth-O-Meter” ratings, we would have two problems with this line of evidence.
First, it looks arbitrary on its face. Why count the percentage of the bottom three ratings instead of the bottom two, or even the bottom four? What makes three so special?
Second, other politicians in the PolitiFact “database” scored higher percentages than Trump’s 75%, at the time of PolitiFact’s article, by the same criterion. We did not attempt an exhaustive list:
- Mark Finchem, five ratings, 100% “False” or worse
- Blake Masters, three ratings, 100% “Mostly False” or worse
- Lauren Boebert, three ratings, 100% “False” or worse
- Kevin McCarthy, 24 ratings, 79% “Mostly False” or worse
- Michele Bachmann, 62 ratings, 76% “Mostly False” or worse
- Louie Gohmert, 14 ratings, 86% “Mostly False” or worse
- Ben Carson, 30 ratings, 80% “Mostly False” or worse
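To make the comparison above concrete, here is a minimal sketch of the “bottom half of the scale” percentage PolitiFact cites. The rating counts in the example are invented for illustration only; they are not drawn from PolitiFact’s database.

```python
# Sketch of the "bottom three ratings" percentage. Rating data below is
# invented for demonstration purposes, not taken from PolitiFact.
from collections import Counter

BOTTOM_THREE = {"Mostly False", "False", "Pants on Fire!"}

def bottom_three_share(ratings):
    """Percentage of a politician's ratings in the scale's bottom half."""
    counts = Counter(ratings)
    total = sum(counts.values())
    bottom = sum(n for rating, n in counts.items() if rating in BOTTOM_THREE)
    return 100 * bottom / total

# Invented record: 24 ratings, 19 of them in the bottom three categories.
sample = ["Mostly False"] * 6 + ["False"] * 13 + ["Half True"] * 5
print(round(bottom_three_share(sample)))  # 79
```

The point of the sketch: the statistic depends entirely on where one draws the cutoff, which is exactly the arbitrariness we noted above.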
What makes Trump less accurate than the above politicians? PolitiFact’s second point offers no convincing support for PolitiFact’s argument.
“Trump’s recently fact-checked statements were false claims about election fraud.”
Is a politician less accurate with a “Truth-O-Meter” record chronologically back-loaded with false ratings? Why is that less accurate than a record chronologically front-loaded with false ratings? Or did we misunderstand? Is it that election fraud falsehoods count as less accurate than other types of falsehoods? We do not understand how this point does anything to assist PolitiFact’s argument.
PolitiFact threw in a few other factoids perhaps intended to help its case.
“False is his most common rating”
Again, other politicians in the PolitiFact “database” have “False” as their most common rating. Trump’s “False” ratings accounted for 37% of his record when PolitiFact published its article.
- Mark Finchem, most common rating “False” at 60%
- Blake Masters, most common rating “False” at 66%
- Lauren Boebert, most common rating “False” at 66%
- Kevin McCarthy, most common rating “False” at 54%
- Michele Bachmann, most common rating “False” at 37%
- Louie Gohmert, most common rating “False” at 50%
- Ben Carson, most common rating “False” at 40%
What about Trump’s 37% “False” ratings makes him less accurate than the politicians above?
PolitiFact’s arguments don’t make sense.
Three “Lie of the Year” Awards
PolitiFact’s “Lie of the Year” award, its editors’ choice of the year’s most significant falsehood, lacks any objective foundation. Trump was associated with PolitiFact’s choice three times. That’s interesting, but it doesn’t count as objective evidence Trump qualifies as the least accurate politician.
“Trump is known for repeating debunked claims.”
After alleging Trump is known for repeating debunked claims, PolitiFact supported its claim with two examples. Does that give us reason to regard Trump as the least accurate politician PolitiFact has ever covered? Is repeating a debunked claim less accurate than stating a novel falsehood? We don’t see how, and we suspect not.
The Missing Elephant
There’s a criterion curiously missing from PolitiFact’s argument. When people, including reckless scholars, use fact checker ratings to compare the accuracy of politicians, their go-to stat is the grade average: the mean rating across a politician’s record. It’s just one more measure by which our list of competitors would tend to outdo Trump’s inaccuracy.
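A grade average works by mapping each rating to a number and taking the mean. The 0-to-5 mapping below is our own assumption for illustration; PolitiFact publishes no official numeric scale, and the records are invented.

```python
# Sketch of a "grade average" over Truth-O-Meter ratings. The 0-5 numeric
# mapping is our assumption, and both records below are invented examples.
SCORES = {
    "True": 5, "Mostly True": 4, "Half True": 3,
    "Mostly False": 2, "False": 1, "Pants on Fire!": 0,
}

def grade_average(ratings):
    """Mean numeric score of a politician's rating record."""
    return sum(SCORES[r] for r in ratings) / len(ratings)

# A short record dominated by "False" averages lower (less accurate) than a
# longer record with milder ratings, regardless of total fact-check counts.
record_a = ["False"] * 4 + ["Pants on Fire!"]        # average 0.8
record_b = ["Mostly False"] * 6 + ["Half True"] * 2  # average 2.25
print(grade_average(record_a) < grade_average(record_b))  # True
```

Unlike a raw count of fact checks, this measure at least normalizes for how often a politician was checked, which is why it is the comparison PolitiFact conspicuously avoided.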
Fix the Problem?
Zebra Fact Check suggested PolitiFact either provide objective support for its claim or else make clear to readers it had offered a mere opinion that Trump was the least accurate politician it had covered. Either option might address the problem with the article.
How did PolitiFact respond?
We received PolitiFact’s standard email response, assuring us “We read all of your messages, but we don’t reply to them all.” To date, PolitiFact has made no effort to either improve its argument or inform readers that its claim about Trump was just an opinion.
The International Fact-Checking Network to the Rescue?
Dissatisfied with PolitiFact’s response, Zebra Fact Check filed a formal complaint about PolitiFact with the International Fact-Checking Network. Our complaint alleged that PolitiFact had failed either its commitment to non-partisanship or else its commitment to open and honest corrections.
We reasoned that neither the offering of political opinions nor the failure to correct an error could pass the IFCN’s stringent criteria for its fact-checking signatories.
The IFCN acted quickly. In less than a week it had credited Bryan White (me) with the resolution of the problem. The IFCN marked the problem as resolved despite PolitiFact making no move to fix the problem. The complaint remains marked as “not approved,” meaning the IFCN will not forward it to an external assessor when considering the renewal of PolitiFact’s application for continued status as an approved IFCN signatory.
Of note, the Poynter Institute owns the International Fact-Checking Network and PolitiFact.
Something Rotten at the Poynter Institute
When a fact-checker publishes unsupported political opinion as fact, we count that as an unacceptable state of affairs.
When a fact-checker accountability organization turns a blind eye to a fact checker publishing unsupported opinion as fact, we count that as an unacceptable state of affairs.
When the fact checker and the fact-checker accountability organization share the same roof, we call that an unacceptable state of affairs.