When Fact Checkers Can’t See Error


Two examples from the past several weeks help solidify our impression that the new journalistic genre of “fact-checking” needs a basic upgrade. In both examples, key principals in the fact-checking industry could not see an error staring them in the face.

 

PolitiFact, Louis Jacobson

On Sept. 12, 2018, PolitiFact published a fact check of MSNBC host Joe Scarborough through its “PunditFact” division. Scarborough said the GOP under Trump had, in one year, added more debt than America had incurred during its first 200 years.

That fact check, by veteran PolitiFact fact-checker Louis Jacobson, committed a simple and significant error. In testing Scarborough’s claim by percentage of GDP, a method an expert PolitiFact cited described as more telling, PolitiFact substituted cumulative debt in 2017 for debt added in 2017. Below an embedded chart showing debt as a percentage of GDP by year from 1940 through 2023, PolitiFact said this:

The chart does show that, when looking at a percentage of GDP, Scarborough is correct in his comparison. Debt as a percentage of GDP in 2017 was far higher (almost 77 percent) than it was in 1976 (about 27 percent).

But Scarborough did not claim that the total debt in 2017 was greater than the total debt in 1976. He said that added debt in one year (PolitiFact chose 2017) was greater than the total debt in 1976.

It’s a huge error to substitute total debt for added debt. Added debt in 2017 makes up only a fraction of the total U.S. debt in 2017. PolitiFact’s mistake made the core of its fact check false.
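To make the distinction concrete, here is a minimal sketch of the two comparisons. The cumulative GDP-share figures come from the chart PolitiFact embedded; the figure for one year’s added debt is a placeholder of our own, included purely to show that the two comparisons are different propositions:

```python
# Minimal sketch of the two comparisons -- not PolitiFact's methodology.
# The cumulative-debt shares (about 77% in 2017, about 27% in 1976) come from
# the chart PolitiFact cited; the added-debt share is a hypothetical placeholder.

total_debt_share_2017 = 77.0   # cumulative debt as a share of GDP, 2017 (from the chart)
total_debt_share_1976 = 27.0   # cumulative debt as a share of GDP, 1976 (from the chart)
added_debt_share_2017 = 3.5    # one year's ADDED debt as a share of GDP (illustrative only)

# What PolitiFact actually tested: total 2017 debt vs. total 1976 debt.
politifact_comparison = total_debt_share_2017 > total_debt_share_1976   # True

# What Scarborough actually claimed: debt ADDED in one year vs. total 1976 debt.
scarborough_claim = added_debt_share_2017 > total_debt_share_1976       # False with these numbers

print(politifact_comparison, scarborough_claim)  # True False
```

With these inputs the two comparisons give opposite answers, which is the point: confirming the first tells a reader nothing about the second.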

It Gets Worse: Ignoring Warnings About an Error

If there’s one thing worse than making a big and obvious error in a fact check, it’s failing to heed warnings about that mistake.

We saw the error the same day PolitiFact published it and used Twitter to alert the author, Jacobson, to the problem.

We published a summary of the problem at the PolitiFact Bias blog, also on the same day. Hours later, still on Sept. 12, we sent an email message to PolitiFact pointing out the error and asking for a correction.

Each day we used Twitter to publicize the problem, often tagging PolitiFact with the @ symbol. As of 2 p.m. on Sept. 19, 2018, the error remained intact. PolitiFact had made no noticeable change to the story and offered no explanation for its behavior. We did receive an automated email assuring us that every word we had sent would get a reading.

We also emailed Jacobson directly on Sept. 19, 2018, mentioning that we were preparing a story about the error and PolitiFact’s response. We told Jacobson we would welcome his on-the-record comments. We will update this item if we receive any response from him.

PolitiFact Makes Charitable Interpretation Difficult

Zebra Fact Check strongly advocates the principle of charitable interpretation. Put simply, we try to give the benefit of the doubt by putting the best face reasonably possible on the cases we evaluate. In PolitiFact’s case we see no fewer than three possible explanations for its failure to fix its error.

  1. PolitiFact has not read the messages pointing out its error
  2. PolitiFact recognizes the error but sees no need to fix it promptly
  3. PolitiFact read at least one message pointing out its error but does not agree that it erred

We would view No. 1 as the most charitable option if not for the fact that PolitiFact sent out an automated email assuring us that our message would be read. A long delay in reading our message would imply that PolitiFact sends out misleading messages.

We mark the second option as wholly uncharitable, for PolitiFact affirms in its statement of principles that it fixes errors promptly.

That leaves us with the third option: PolitiFact knows of the complaint but does not agree that it erred.

The third option means concluding that PolitiFact fails to see an obvious and, in our view, easy-to-understand error. Charitable interpretation would ordinarily steer us away from this conclusion, but we see no better option.

Tech companies like Facebook and Google rely on organizations like PolitiFact to help improve the accuracy of information on the Web. It counts as a big problem when one of those fact-checking services cannot recognize its own simple and substantial error.

We could take solace in the fact that the International Fact-Checking Network was created to help hold fact-checking organizations to account. Except that topic leads us to part two of our story.

 

The International Fact-Checking Network, Daniel Funke

Daniel Funke joined the International Fact-Checking Network at the Poynter Institute in September 2017. The not-for-profit Poynter Institute directly owns PolitiFact. Poynter describes Funke as a “fact-checking reporter.”

On Aug. 10, 2018, IFCN at Poynter published a story about a language audit commissioned by PolitiFact. The audit looked for evidence of politically biased language. It reported finding no significant evidence of biased language and offered some recommendations for future approaches that might better detect such language.

Funke’s story about that report led with a false headline (“Is PolitiFact biased? This content analysis says no”). It also framed the report as a ready answer PolitiFact could offer to critics accusing PolitiFact of bias (quoting the first three paragraphs of Funke’s story):

It’s a critique that PolitiFact has long been accustomed to hearing.

“PolitiFact is engaging in a great deal of selection bias,” The Weekly Standard wrote in 2011. “’Fact Checkers’ Overwhelmingly Target Right-Wing Pols and Pundits” reads an April 2017 headline from NewsBusters, a site whose goal is to expose and combat “liberal media bias.” There’s even an entire blog dedicated to showing the ways in which PolitiFact is biased.

The fact-checking project, which Poynter owns, has rebuffed those accusations, pointing to its transparent methodology and funding (as well as its membership in the International Fact-Checking Network) as proof that it doesn’t have a political persuasion. And now, PolitiFact has an academic study to back it up.

Funke’s framing colored the rest of the article as well:

So what does that say about PolitiFact’s alleged bias? Not much, according to the researchers.

“This part of our analysis finds no obvious differences in the language that is used to describe individuals of each party in a way that shows any indication of bias or differential treatment,” they wrote.

In their external analysis, the researchers didn’t reach any different conclusions.

Note that Funke mentions “alleged bias” before quoting from the report, as though the findings from the report address those allegations. But so far as we are aware, no persistent critic of PolitiFact said PolitiFact’s bias stems from biased language. Critics mainly charge PolitiFact with story selection bias and inconsistent grading standards. A language audit addresses neither of those charges, Funke’s framing notwithstanding.

Funke flirted with adding balance to his reporting near the end of his story by quoting PolitiFact Executive Director Aaron Sharockman. Sharockman cautioned against giving the study more weight than it deserves.

And Funke used his last paragraph to quote one of the researchers. The quotation contradicted the title of Funke’s article:

But in a follow-up email to Poynter, Noah Smith, one of the report’s co-authors, added a caveat to the findings.

“This could be because there’s really nothing to find, or because our tools aren’t powerful enough to find what’s there,” he said.

Smith said PolitiFact may have used biased language the research tools were not able to detect. Funke said the research found PolitiFact is not biased.

Is it that hard to see the difference?

It Gets Worse: No Acknowledgement of Error

We contacted the IFCN and Funke directly about Funke’s inaccuracy and false framing, and even contacted the Poynter Institute while waiting for a response.

Funke’s reply was disappointing.

Funke said, “The headline makes clear that the content analysis’ main conclusion is that PolitiFact’s articles contain no substantial evidence of biased treatment of either political party.”

Funke’s defense of his headline is no more true than the headline. The headline made nothing clear about the actual findings of the study, instead sending the false message that the study answered “no” to the question of whether PolitiFact is biased. We succeeded in getting Poynter to change another headline about the same study. Poynter adopted the exact language we recommended: “PolitiFact language audit finds no bias toward either U.S. political party.”

In a follow-up email, we asked Funke if he agreed that the title of his article is literally false.

“No,” Funke said. “The conclusion of the analysis (while tentative, like all research) was that PolitiFact’s content exhibits no discernible political bias.”

We assume that Funke still does not realize why his summary of the study’s findings is about as false as the headline. If a text exhibits no discernible political bias, then no amount of searching will ever find it (thanks to the definition of “discernible”). But a searcher’s admission of not finding political bias with one type of search does not close the door on other attempts that may find discernible bias. Funke’s key caveat, “while tentative, like all research,” was simply missing from the title.
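A toy example may make the gap plainer. This is our own illustration, not the researchers’ method: one detector coming back empty does not mean every detector would.

```python
# Toy illustration (ours, not the study's method): "this audit found no bias"
# is a claim about one detector; "no discernible bias" is a claim about all of them.

corpus = "a stand-in string for the body of PolitiFact fact checks"

def language_audit(text):
    """Toy stand-in for a language audit: flag one loaded word."""
    return "disgraceful" in text

def some_other_audit(text):
    """A different toy detector that looks for something the first one ignores."""
    return "stand-in" in text

found_by_language_audit = language_audit(corpus)   # False: this audit detects nothing
found_by_other_audit = some_other_audit(corpus)    # True: another audit still finds a signal

# The study licenses only the first kind of result. "No discernible bias" would
# require every possible audit to come back False, which no single search can establish.
print(found_by_language_audit, found_by_other_audit)  # False True
```

The headline asserts something like the stronger, universal statement; the study supports only the weaker one.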

We also asked Funke if he agreed that his article framed the language audit as an effective answer to those accusing PolitiFact of bias.

He said “No,” but suggested that a person could read that into what he wrote.

Find the meat of that email conversation (minus pleasantries) here.

 

What These Two Cases Show

Both of our examples stem from logic problems. In PolitiFact’s case, it erred in identifying the logic of a math problem. In Funke’s case, he failed to distinguish between logically distinct propositions, making a type of mistake we train scientists to avoid.

These two cases, then, illustrate and reinforce journalists’ reputation for weak math and science skills. We suggest that the strongest fact-checking organizations ought to draw staff not only from across polarized political groups but also from writers or editors with firm backgrounds in math, science and logic.

What Else These Two Cases Show

We chose these two examples because they show fact-checking writers resisting or ignoring needed correction. These two cases help show that we cannot trust journalists to police themselves, and especially not in cases where organizations represent relatively homogeneous political views and/or a general weakness in math or science.

We can expect fact-checking to improve by encouraging ideological and educational diversity. Likewise, those who serve to hold fact-checkers accountable will achieve more robust results with purposely diverse politics and education.
