Meta is scrapping the fact-checking program it ran with trusted third-party partners and replacing it with a community-driven system similar to X’s Community Notes, CEO Mark Zuckerberg said Tuesday.
The change will affect Facebook and Instagram, two of the largest social media platforms in the world, each boasting billions of users, as well as Threads.
"We're gonna get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms," Zuckerberg said in a video. "More specifically, here's what we're going to do. First, we're going to get rid of fact checkers and replace them with community notes similar to X, starting in the U.S."
Zuckerberg pointed to the election as a major influence on the company's decision, and criticized "governments and legacy media" for allegedly pushing "to censor more and more."
"The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech," he said.
"So we're gonna get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms."
He also said the systems the company had created to moderate its platforms were making too many mistakes, adding that the company would continue to aggressively moderate content related to drugs, terrorism and child exploitation.
"We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes," Zuckerberg said. "Even if they accidentally censor just 1% of posts, that's millions of people, and we've reached a point where it's just too many mistakes and too much censorship."
The change comes as Meta and other social media companies have in recent years reversed course on content moderation, driven in part by the politicization of moderation decisions and programs. Republicans have long criticized Meta’s fact-checking system, and fact-checking in general, as unfair and as favoring Democrats.
X’s Community Notes system, which owner Elon Musk used to replace the platform’s previous efforts around misinformation, has been celebrated by conservatives, and it has allowed for a mixture of fact-checking, trolling and other community-driven behavior.
Meta’s initial fact-checking system, launched on Facebook in 2016, worked by running information on its platforms through third-party fact-checkers certified by the International Fact-Checking Network and the European Fact-Checking Standards Network. The program included more than 90 organizations that fact-checked posts in more than 60 languages. In the United States, participants have included groups like PolitiFact and FactCheck.org.
In a news release, Meta wrote that it identified posts that might be promoting misinformation based on how people responded to certain pieces of content and how quickly posts spread. Independent fact-checkers could also flag posts with possible misinformation on their own. Flagged posts would then be shown lower in feeds while they awaited review.
The independent fact-checkers would then verify the accuracy of the flagged content and assign it a “content rating,” labeling it “False,” “Altered,” “Partly False,” “Missing Context,” “Satire” or “True” and adding notices to the posts.
Those fact-checking measures applied to posts on Facebook and expanded to include Instagram in 2019 and Threads last year. Fact-checkers could review content including “ads, articles, photos, videos, Reels, audio and text-only posts.”
Under the system, Meta noted, fact-checkers did not have the ability to remove content; content was removed only if it violated the company’s community standards, a determination made by Meta itself.