In a significant policy overhaul, Meta's CEO Mark Zuckerberg emphasized a return to principles of free expression, while facing backlash from anti-hate speech campaigners who view the move as a dangerous political maneuver.
Meta Embraces User-Based Moderation, Ditches Independent Fact Checkers
Meta has announced the elimination of independent fact checkers from Facebook and Instagram, shifting towards a user-driven "community notes" approach that allows for peer evaluation of post accuracy.
Meta, the parent company of Facebook and Instagram, is pivoting away from using independent fact checkers in favor of a largely user-driven system called "community notes." CEO Mark Zuckerberg made the announcement via a video shared alongside a blog post on Tuesday, claiming that third-party moderators had demonstrated political bias and reinforcing the company's commitment to free expression.
Zuckerberg's decision follows ongoing criticisms from Republican figures including US President-elect Donald Trump, who have labeled Meta's previous fact-checking practices as censorship of conservative viewpoints. Trump expressed approval of Meta's new direction, noting their progress and suggesting the changes could be in response to past tensions between them.
Joel Kaplan, who will take over from Sir Nick Clegg as Meta's global affairs chief, commented that while previous moderation efforts were well-meaning, they had inadvertently stifled free speech. This has prompted concerns from anti-hate speech groups who argue that the decision may prioritize political allegiances over the need to manage disinformation.
Historically, Meta's fact-checking process involved third-party verification of disputed posts, with misleading content labeled as such and its visibility in user feeds reduced. Under the new approach, community notes will instead allow users with differing perspectives to add context to contested posts. The system mirrors a similar feature on X (formerly Twitter), introduced after Elon Musk's acquisition of the platform.
While the changes will be implemented first in the United States, Meta indicated that there are no immediate plans to alter the fact-checking system in the UK or the EU. The company also confirmed that strict moderation will continue for content related to self-harm and eating disorders, in order to protect vulnerable users.
Critics such as Chris Morris of the fact-checking organization Full Fact described the change as disappointing, arguing that it could complicate efforts to address harmful content on the platform. Kaplan acknowledged the difficult trade-off between censorship and effective content moderation, admitting that the new approach might mean less harmful content is detected.
Meta's blog post framed the moderation changes as part of a broader cultural shift towards prioritizing free speech, particularly in the lead-up to Trump's inauguration. The decision was reportedly communicated to Trump's team before its public announcement, signaling the company's intention to move closer to the incoming administration.
As the digital landscape increasingly intertwines with political agendas, experts such as Kate Klonick have noted a radical swing in social media content governance, away from trust and safety and towards less stringent moderation. These developments signal an evolving political narrative within the tech industry, as companies reassess their roles in content regulation amid shifting political climates.