Updater
February 28, 2025, in technology

What Use is Fact-Checking?

Fact-checking has been in the news since Meta's decision to discontinue the practice on its social media. But what exactly are the purposes of fact-checking and what are the alternatives?

Meta’s recent decision to abandon fact-checking in favor of “community notes” has been widely dismissed as politically motivated, but the efficacy of fact-checking has long been in dispute, and some critics argue that it amounts to censorship. We take a look at the real value of fact-checking.

In the world of journalism, fact-checking is an integral piece of the accuracy puzzle. PEN America sees it as fundamental to building trust, defining it as “a verification method used to ensure that something is true, and when made by a news organization, it is a commitment made to its readers – a claim that any statement it has made in a story can be backed up.” Best practices require that each statement be confirmed by one primary source, or by two secondary sources when a primary source is not available.

In the realm of social media, on the other hand, where everyone from influencers to politicians to your neighbor can make and share claims that may be dubious at best, what constitutes fact-checking is more nebulous. (More on the ins and outs of that later.) In Meta's particular case, while users can still report outright hate speech, spam, or other kinds of content that go against Meta’s guidelines, professional fact-checkers will no longer be adding their expertise to the efforts against misinformation online.

Does social media fact-checking work?

Back in 2016, Facebook began the practice of fact-checking, paying independent groups such as Reuters Fact Check, Australian Associated Press, Agence France-Presse, and PolitiFact to verify some stories and articles. According to Scientific American, a 2019 study of 20,000 people found a “significantly positive overall influence on political beliefs.” However, the same research found that the more polarized the issue, the less effective fact-checking was.

On Facebook, content that was ruled false was tagged with a warning and shown to fewer users. There seemed to be an additional cooling effect on false claims, as people were more likely to ignore flagged content. But in recent months Facebook has claimed that the fact-checkers themselves were biased, at least insofar as they fact-checked conservative claims more often than those of liberals. Of course, this may simply be due to the greater incidence of dubious material among conservative posters.

“It’s largely because the conservative misinformation is the stuff that is being spread more,” Jay Van Bavel, a psychologist at New York University in New York City, told Scientific American. “When one party, at least in the United States, is spreading most of the misinformation, it’s going to look like fact-checks are biased because they’re getting called out way more.”

While claims of censorship have plagued social media fact-checking from the beginning, they have, ironically, been tough to prove. The head of the International Fact-Checking Network, Angie Drobnic Holan, pointed out in a statement, “Fact-checking journalism has never censored or removed posts; it’s added information and context to controversial claims, and it’s debunked hoax content and conspiracy theories. The fact-checkers used by Meta follow a Code of Principles requiring nonpartisanship and transparency.” Still, Meta’s CEO Mark Zuckerberg seems to have always been reluctant to place any restrictions on the content his platform purveys, and he’s getting his way, at least for now.

Replacing fact-checking with community notes

Meta is replacing fact-checking with the community notes format pioneered by X (formerly Twitter). This model puts the onus of fact-checking on other social media users who can add context and caveats to posts. Back in 2022, PCMag reported on the Twitter program known as Birdwatch: “Users can apply online to become part of the Birdwatch team; once approved, Birdwatchers can propose notes that add context to a tweet. The wider community then rates those notes.”
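How does a community notes system actually decide which notes readers see? X has described its ranking as a “bridging” approach: a note is published only when it is rated helpful by contributors who have tended to disagree with one another in past ratings, rather than by a simple majority. The sketch below is a deliberately simplified illustration of that idea, not X’s or Meta’s actual algorithm; the camps, thresholds, and function names are invented for the example.

```python
# Simplified, illustrative sketch of a community-notes-style rating gate.
# This is NOT X's actual Community Notes algorithm; the camps, thresholds,
# and names here are hypothetical and chosen only to show the idea of
# requiring agreement across raters who usually disagree.

from collections import defaultdict

MIN_RATINGS = 5          # hypothetical minimum number of ratings before scoring
MIN_HELPFUL_RATIO = 0.7  # hypothetical helpfulness threshold per rater "camp"

def note_status(ratings):
    """Decide whether a proposed note should be shown.

    `ratings` is a list of (rater_camp, is_helpful) tuples, where
    `rater_camp` is a coarse label for the rater's past rating tendency.
    The note is shown only if every camp that rated it clears the
    helpfulness threshold -- a crude stand-in for "bridging".
    """
    if len(ratings) < MIN_RATINGS:
        return "needs more ratings"

    by_camp = defaultdict(list)
    for camp, is_helpful in ratings:
        by_camp[camp].append(is_helpful)

    # Require broad agreement: each camp's helpful ratio must clear the bar.
    for votes in by_camp.values():
        if sum(votes) / len(votes) < MIN_HELPFUL_RATIO:
            return "not shown"
    return "shown"

# A note rated helpful only by one like-minded camp is held back,
# despite a high overall helpful ratio.
ratings = [("camp_a", True)] * 6 + [("camp_b", False)] * 2
print(note_status(ratings))   # not shown

# A note rated helpful across both camps is published.
ratings = [("camp_a", True)] * 4 + [("camp_b", True)] * 2
print(note_status(ratings))   # shown
```

Even in this toy version, the design choice is clear: a note endorsed only by a single like-minded group never reaches readers, so the quality of the output depends entirely on who the raters are and how broadly they represent the platform’s users.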

It’s easy to see the potential flaw in that plan, especially in light of what has happened to X under Elon Musk’s leadership. Millions of users have abandoned the increasingly toxic ship, leaving the proverbial fox to guard the hen house. So, who is providing the notes, and who is voting on their helpfulness?

The EU steps in

More importantly, community notes are currently under investigation by the European Union. According to a press release, “The European Commission has opened formal proceedings to assess whether X may have breached the Digital Services Act (DSA) in areas linked to risk management, content moderation, dark patterns, advertising transparency and data access for researchers.”

The investigation is ongoing, with X required to provide information about its algorithms in the run-up to the recent German elections.

In the meantime, industry watchdogs are worried about what this all means for misinformation online. According to The Conversation, in 2023 Meta displayed warnings on more than 9.2 million pieces of content on Facebook and over 510,000 posts on Instagram in Australia alone.

Going forward, social media users would be wise to apply an extra dose of skepticism to any story that does not come from a reputable news source with fact-checkers of its own.

Interested?

Find out more about Eidosmedia products and technology.

GET IN TOUCH