Whistleblowers Reveal Alarming Trends in Meta and TikTok Algorithms
Social media giants made decisions that allowed more harmful content onto people's feeds after internal research showed that outrage drove engagement, whistleblowers have told the BBC.
More than a dozen insiders described how the companies put user safety at risk on critical issues such as violence and sexual blackmail in their race for user attention.
One Meta engineer said they were directed to permit borderline harmful content, ranging from misogyny to conspiracy theories, to stay competitive with TikTok, citing corporate pressure linked to declines in the company's stock price.
A TikTok employee described internal protocols under which content involving political figures was prioritized over user complaints about harmful posts involving minors.
Together, the whistleblowers describe an algorithmic landscape in which engagement metrics trumped user safety, marking a disturbing shift in content moderation practices.
'Delete TikTok'
One member of TikTok's safety team described a troubling reality in which excessive workloads compromised the effectiveness of content moderation, particularly for vulnerable users.
Internal dashboards, they said, showed cases involving political figures being prioritized over serious cases affecting minors.
The whistleblowers also warned that discriminatory and violent content has become normalized among users, with young people reporting that prevalent online narratives have left them desensitized to real-world violence.
'Do whatever we can to catch up'
When Meta's Instagram launched its Reels feature, whistleblowers say, the rapid rollout came at the expense of adequate safety measures, reflecting a clear preference for maximizing viewership at a significant cost to user welfare.
Their testimonies point to a broader industry dilemma in which financial incentives conflict with ethical obligations, raising concerns about the dangers these algorithms pose to impressionable minds.
Meta firmly denied any willful amplification of harmful content for profit, pointing to its ongoing efforts to improve user safety.
TikTok, meanwhile, disputed the claim that it prioritizes political content over child safety, highlighting enhanced protections for young users.
Ultimately, the whistleblowers' revelations compel a critical reassessment of how leading social media platforms balance engagement and user safety amidst rapid algorithmic development.