Email exchange with reporter Daniel Funke of the International Fact-Checking Network

This is a formatted version of an email exchange that took place between Aug. 13, 2018, and Sept. 5, 2018. We have condensed the conversation to focus on the topic, omitting pleasantries and material not clearly offered “on the record.”


ZEBRA FACT CHECK (Aug. 13, 2018)

Your August 10, 2018 article on the PolitiFact language inventory study misleads in ways totally unacceptable for an organization associated with fact-checking.

The title of the article greatly exaggerates the meaning of the study. Lack of evidence is not necessarily evidence of lack. The study did not find a “no” answer to the question of whether PolitiFact is biased. It found a “no” answer on whether one set of biased-language indicators showed a bias at PolitiFact. Understanding the difference is essential to understanding how science works. Science does not verify, as such. It falsifies hypotheses, and what it fails to falsify gains in credence.

The introductory paragraphs magnify the error, presenting the language inventory as a reply supporting PolitiFact’s supposed rebuffs of accusations of bias*. The apples of the language inventory do not address the oranges of the bias critics attach to PolitiFact.

The quotations of Aaron Sharockman and Noah Smith provided the most accurate assessments of the meaning of the language inventory, yet they make up the buried lede near the end of your article:

Sharockman (bold emphasis added):

“It certainly reinforces how we try to act and carry ourselves. So of course we’re pleased to see no red flags,” said executive director Aaron Sharockman in a message. “But as a fact-checker who has scrutinized plenty of research, we also shouldn’t give this study more weight than it’s worth. This is one look at the language and words we use to write our fact checks.”

So much for not giving the study more weight than it’s worth.

Smith (bold emphasis added):

“This could be because there’s really nothing to find, or because our tools aren’t powerful enough to find what’s there,” he said.

Smith said, in effect, that the study did not rule out bias because the tools may not have been powerful enough to detect it. And of course that applies only to the particular type of bias a language inventory examines. Smith contradicted the title of your article.

Counterexample: 

When PolitiFact Florida states it is true that Rep. Bill McCollum (R) repeatedly voted to raise his own pay, and later PolitiFact Ohio, while clearing Democrat Sherrod Brown of a similar charge, states that members of Congress are legally prevented from voting to raise their own pay, PolitiFact has used a type of biased language that this language inventory would never find in a million years. The same goes for PolitiFact’s defense of the ACA’s slowing of Medicare cost growth (it’s not a “cut” because spending increases year by year) followed by its shift when Republicans slowed the spending growth of Medicaid: “(W)e don’t find it unreasonable for Schumer to call cumulative reductions to Medicare and Medicaid spending in the hundreds of billions of dollars ‘cuts.’”

And I could do this all day because I’ve been doing it for years. The language inventory does not undercut this type of evidence.

Daniel, you and the IFCN should disavow the incorrect statements and implications in the article and issue a correction. Without undue delay.

Thanks for reading.

*Including the specific wording of those rebuffs would certainly have increased my enjoyment of the article, by the way. In my experience, PolitiFact likes to answer the charge of bias rarely, obliquely and without addressing specifics. Anything departing from that pattern would pique my interest.


DANIEL FUNKE (Aug. 15, 2018)

We appreciate your feedback about my article on PolitiFact’s content analysis. Might I address a couple of your chief complaints in bullet points:
  • The headline makes clear that the content analysis’ main conclusion is that PolitiFact’s articles contain no substantial evidence of biased treatment of either political party. As with every study, there are limitations and I note a couple of those in the article. You can also understand why it’s impossible to fit every caveat about an academic study into a headline.
  • The top of my story isn’t explicitly rebuffing those accusations of bias. I wanted to set up why a casual reader would care about this in the first place, and provide a few examples of past criticism.
  • By including both Noah and Aaron’s quotes, as well as some examples of how the study is limited, I think we fairly represent both the study and its conclusions. I would also note that no other fact-checking organization (to my knowledge) has conducted a study like this before.


ZEBRA FACT CHECK (Aug. 15, 2018)

I’ll reply briefly (or that was my intent when I started!) to each of the points in your reply.

  • The headline makes clear that the content analysis’ main conclusion is that PolitiFact’s articles contain no substantial evidence of biased treatment of either political party. As with every study, there are limitations and I note a couple of those in the article. You can also understand why it’s impossible to fit every caveat about an academic study into a headline.

Yes, the headline makes clear that the content analysis’ main conclusion is that PolitiFact’s articles contain no substantial evidence of biased treatment of either political party. But that’s false. The study’s main conclusion was that the methods it employed to measure bias in PolitiFact’s articles found no such evidence. This is an important distinction that your article ignores (except for the quotations near the end by Sharockman (barely!) and Smith). Let me rephrase that. It’s a critical distinction that your article ignores.

As I wrote in analogy, it’s like responding to reports that X is in Cuba by sending a research team to look for evidence of X in Havana. The researchers reported seeing no evidence of X in Havana. It does not follow that the research effort showed that X is not in Cuba. Printing a headline to the effect that the researchers found no substantial evidence of X in Cuba is misleading.

Yes, it’s impossible to fit every detail of a study into a headline. But it’s important to end up with a headline that isn’t misleading OR to promptly correct any misleading impression in the following story. Instead, the article strongly reinforces the misleading impression with its opening paragraphs and overall does almost nothing to correct it. The most significant antidote to the misinformation is the quotation of Smith.

  • The top of my story isn’t explicitly rebuffing those accusations of bias. I wanted to set up why a casual reader would care about this in the first place, and provide a few examples of past criticism.

Agreed that the first few paragraphs do not explicitly say the language audit debunks the accusations of bias. But they clearly imply it by stating that PolitiFact has an academic study supporting its rebuffs. That plus the title of the article establishes your journalistic framing. As I pointed out, the rebuffs do not specifically address the accusations. None of them do (transparency of method, transparency of donors, IFCN verification, language inventory). And that information is conspicuous in its absence from your story. The quotation of Smith is the only thing that hints at it. And that’s somewhat buried near the end, wouldn’t you agree?

  • By including both Noah and Aaron’s quotes, as well as some examples of how the study is limited, I think we fairly represent both the study and its conclusions. I would also note that no other fact-checking organization (to my knowledge) has conducted a study like this before.

The quotations of Sharockman and Smith were buried, and the reader may well have wondered what Sharockman was talking about. “But this article I’m reading says the study says the answer to the question of whether PolitiFact is biased is ‘no.’ Huh.” Maybe Sharockman was just being modest. The article gives no content to his caveat.

Examples of how the study is limited? You have to be talking about something more than this (again from very near the end of the article):

There are some limitations. Because of the imbalance of articles about Republicans and Democrats, the text classification could only predict with 58 percent accuracy a party from a set of words. Researchers also did not run a more in-depth sentiment analysis on PolitiFact’s content, partly due to the high cost and partly because of the potential for human annotators to respond subjectively to fact check ratings.

What red flag is clearly communicated in that paragraph? The 58 percent predictive accuracy only supports the lack-of-evidence angle, right? The expense of expanded approaches and the concern that they would introduce subjectivity don’t temper the findings of the existing study, do they? What other relevant paragraphs am I missing? What specific limitations were mentioned that temper the overall framing of the story?

What’s missing is an explicit admission that the study does very little (if anything) to counter any of the accusations of bias. The buried quotation of Smith is the closest thing you’ve got. When you skip explaining that the language audit doesn’t get PolitiFact off the hook for accusations about selection bias or biased grading (not to mention partisan patterns of mistaken reporting), you leave readers with a false impression.

And while I’ve got your attention, you took that line from TWS out of context (bold emphasis added showing what you used):

You can believe that Republicans lie more than three times as often as Democrats. Or you can believe that, at a minimum, PolitiFact is engaging in a great deal of selection bias, to say nothing of pushing tendentious arguments of its own.

TWS mentioned two options. You cut it down to one. Not a problem? TWS was really just summarizing the Smart Politics study by Dr. Eric Ostermeier. Ostermeier didn’t level a flat-out accusation of selection bias. He called for a closer examination of the issue because of the framing PolitiFact was giving its fact checks.


ZEBRA FACT CHECK (Aug. 29, 2018)

You indicated you believe you have little more to say about your article on the PolitiFact language inventory unless I wished to ask some specific questions, and that you would be happy to answer them. I’ve designed the questions to allow you great liberty in achieving either brevity or thoroughness.

Q1 Do you agree that the title of your article (“Is PolitiFact biased? This content analysis says no”) is literally false?

Q2 Do you agree that your article contains multiple features that frame the PolitiFact language inventory as an effective answer to critics who say PolitiFact is biased?


DANIEL FUNKE (Sept. 4, 2018)

Here are some answers to your questions:

Q1 Do you agree that the title of your article (“Is PolitiFact biased? This content analysis says no”) is literally false?

No. The conclusion of the analysis (while tentative, like all research) was that PolitiFact’s content exhibits no discernible political bias.

Q2 Do you agree that your article contains multiple features that frame the PolitiFact language inventory as an effective answer to critics who say PolitiFact is biased?

No. I believe it could be read that way if you’re looking for it, but my intention was to provide background on how past articles have framed PolitiFact’s alleged bias.
