The ‘Truth-O-Meter’ Trap

“Lord knows the decision about a Truth-O-Meter rating is entirely subjective.”

–Bill Adair, 2016

“It’s not as subjective as some people think.”

–Bill Adair, 2022

When we first encountered PolitiFact in late 2007, we immediately noticed the apparent subjectivity of its trademarked “Truth-O-Meter.” We soon realized its aggregated ratings could easily mislead people, and that because the ratings were opinions, they offered a potential means of objectively measuring PolitiFact’s bias.

PolitiFact’s “Truth-O-Meter,” developed by founding editor Bill Adair, rates claims on a six-point scale ranging from “True” to “Pants on Fire.” The definitions of the ratings mix seemingly objective factors like “accurate” and “not accurate” with ambiguous factors like “needs clarification or additional information,” “leaves out important details or takes things out of context” and “ignores critical facts that would give a different impression.”

What secret process does PolitiFact use to objectively draw lines between those apparently overlapping criteria?

Fifteen years later we still do not know. The contents of PolitiFact’s legendary and proprietary “Truth-O-Meter Owner’s Manual” remain secret. But we do have various statements from PolitiFact staffers describing the subjectivity of “Truth-O-Meter” ratings. Perhaps the most notable of those come from Adair himself.

How Subjective Is It?

As our epigraphs show, Adair at different times has said the “Truth-O-Meter” is “entirely subjective” and “not as subjective as some people think.” With the latter statement Adair apparently has in mind those who believed his earlier statement.

The second epigraph comes from an article Adair wrote commemorating 15 years of his “Truth-O-Meter.” Adair’s attempt to explain why people overestimate the subjectivity of the “Truth-O-Meter” helps show that he and PolitiFact haven’t come to grips with the fact that their popular gimmick doubles as an albatross.

Not As Subjective As Some People Think Because?

Adair explains:

(W)hile it relies on the judgment of the journalists, it’s not as subjective as some people think. Each fact-check is thoroughly researched and documented, and PolitiFact has a detailed methodology for its ratings. 

The “detailed methodology” offers no assurance of objectivity in the ratings. Adair’s linked reference steers his audience to PolitiFact’s statement of principles, to which we have already linked in this article. The six “Truth-O-Meter” categories offer ambiguous descriptions, followed by a relatively brief additional set of instructions such as asking “Is the statement literally true?” and “How have we handled similar statements in the past?” The principles fail to snuff out subjectivity. PolitiFact may rate a true statement false or give separate cases of essentially the same claim different ratings.

As an example of the first, see Mark Hemingway’s story in The Weekly Standard about Mitt Romney’s accurate recitation of numbers ruled “Mostly False.” PolitiFact can also find literally false statements “Half True.”

As an example of the second, PolitiFact flubbed the test of consistency by rating two very similar gender pay gap claims at opposite ends of its “Truth-O-Meter” spectrum.

President Obama said “Women (are) paid 77 cents on the dollar for doing the same work as men.” He received a “Mostly False” rating. Oregon Democrat Brad Avakian said “In Oregon, women earn an average of 79 cents for every dollar that men earn for doing the same job.” Avakian received a “Mostly True” rating. But Obama and Avakian made nearly identical errors on labor statistics for the U.S. and Oregon, respectively. Avakian’s error was arguably worse, given that he wrongly called a median figure an “average.”

Adair points his readers to PolitiFact’s facade of objectivity. Thorough research does not lead inevitably to objectivity, particularly when filtered through ambiguous definitions and criteria. Thorough study of ice cream will never make the subjective preference for chocolate an objective preference.

We have nothing from Adair to refute his claim that “Truth-O-Meter” ratings are “entirely subjective.”

The Trap

Adair and PolitiFact fashioned a trap of their own making, and it now enmeshes them.

The “Truth-O-Meter” serves an unintended function. As with the gender wage gap example above, it marks fact checker inconsistency and exposes the lack of objectivity in fact-checking. Jeff D and I noticed it early, even before we teamed up to publish the PolitiFact Bias blog. The University of Minnesota’s Eric Ostermeier sniffed it out in 2011, when he questioned whether PolitiFact’s process avoided the error of selection bias. The University of Miami’s Joseph E. Uscinski and Ryden Butler sifted through PolitiFact’s methodological problems in 2013 in a paper called “The Epistemology of Fact Checking.”

Sliding-scale fact check rating systems like Adair’s “Truth-O-Meter” obligate the fact checkers to express opinions. Researchers may use those opinions to detect bias. At the PolitiFact Bias blog, I conducted a study of PolitiFact’s subjective choice between “False” and “Pants on Fire” ratings. Under PolitiFact’s subjective system, “False” ratings represent inaccurate claims and “Pants on Fire” ratings represent ridiculous inaccurate claims. What’s the objective difference between a ridiculous and a non-ridiculous inaccurate claim? PolitiFact has never had an answer to share. Without one, PolitiFact can’t justify the fact that claims it regards as false are 32 percent more likely to receive “Pants on Fire” ratings when they come from Republicans than when they come from Democrats over its entire history, and 336 percent more likely in 2022 as of December 13.
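For readers who want the arithmetic, here is a minimal sketch of that comparison in Python. The counts are hypothetical placeholders chosen to reproduce the 32 percent figure, not PolitiFact’s actual tallies; the assumed measure is each party’s “Pants on Fire” share of its claims rated “False” or worse.

```python
# Minimal sketch of the "Pants on Fire" bias measure described above.
# The counts are hypothetical placeholders, not PolitiFact's actual data.

def pof_rate(false_count: int, pants_on_fire_count: int) -> float:
    """Share of false-or-worse ratings judged 'Pants on Fire.'"""
    return pants_on_fire_count / (false_count + pants_on_fire_count)

# Hypothetical per-party tallies of "False" and "Pants on Fire" ratings,
# chosen so the ratio works out to the 32 percent disparity cited above.
republican_rate = pof_rate(false_count=92, pants_on_fire_count=33)  # 0.264
democrat_rate = pof_rate(false_count=80, pants_on_fire_count=20)    # 0.200

# A ratio of 1.32 means false claims from Republicans were 32 percent
# more likely than those from Democrats to be labeled "Pants on Fire."
print(f"PoF ratio: {republican_rate / democrat_rate:.2f}")  # -> 1.32
```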

PolitiFact’s “database” of fact checks represents a gold mine for media bias researchers.

The “Truth-O-Meter” Also Traps the Audience

The “Truth-O-Meter” does double duty as a trap. It traps its wielders by revealing their bias. It also traps PolitiFact’s audience by encouraging prejudice.

How It Works

PolitiFact aggregates its fact checks to produce what it has called “report cards” for politicians and other entities. It actively promotes these to its audience. And it intends the report cards to give readers a sense of the comparative truthfulness of politicians.

Why should readers trust “report card” grades based on subjective ratings of biased sets of statements?

They shouldn’t. But presenting the biased results in graphic form plays to readers’ cognitive biases and produces distrust of politicians who receive poor grades.
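As an illustration only, here is a minimal sketch of the aggregation step, assuming a “report card” is simply a tally of “Truth-O-Meter” ratings. The sample ratings are invented; the point is that the tally inherits whatever story selection bias and rating subjectivity shaped the individual fact checks.

```python
from collections import Counter

# Invented list of "Truth-O-Meter" ratings for one hypothetical politician.
# Which claims get checked (selection) and how each is rated (subjectivity)
# both shape this input before any aggregation occurs.
ratings = [
    "True", "Mostly True", "Half True", "Half True",
    "Mostly False", "False", "False", "Pants on Fire",
]

# The "report card" is just a count of ratings per category; it can be
# no more objective than the individual judgments feeding it.
report_card = Counter(ratings)
for rating, count in report_card.most_common():
    print(f"{rating}: {count}")
```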

A paper published Aug. 15, 2019, “Counting the Pinocchios: The effect of summary fact-checking data on perceived accuracy and favorability of politicians,” found that aggregated PolitiFact ratings effectively reduced trust in poorly rated politicians:

This study … compares the effects of summary fact-checking data and individual fact-check ratings on views of politicians who make misleading claims. Consistent with our preregistered hypotheses, summary fact-checking data reduced perceptions of politicians’ accuracy and favorability more than exposure to a negative individual fact-check rating did.

Passing Opinion Off as Objective Fact-Checking

PolitiFact, then, has found a subtle means of loading its fact checks with misleading and routinely partisan editorial content. At the same time, PolitiFact misinforms its audience with the assurance that it is not biased.

On top of that, PolitiFact declares in a pop-up appeal for donor support that “Our only agenda is to publish the truth so you can be an informed participant in democracy.” PolitiFact’s persistent choice to mislead its readers with the “Truth-O-Meter” trap belies that claim.
