Meta and TikTok let harmful content rise after evidence that outrage drove engagement, say whistleblowers

Social media giants made decisions that allowed more harmful content onto people's feeds after internal research into their algorithms showed how outrage fueled engagement, whistleblowers told the BBC.

More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.

An engineer at Meta described how senior management instructed teams to allow more borderline harmful content into users' feeds to compete with TikTok. "They sort of told us that it's because the stock price is down," the engineer said. Meanwhile, a TikTok employee revealed that moderation reports involving political figures were prioritized over reports of harmful posts featuring children.

The whistleblowers describe a decision-making process in which the pressure to retain user engagement often overrode safety concerns. As a result, they say, the platforms have become breeding grounds for harmful content, with internal studies finding higher rates of bullying and hate speech on Meta's newer features than on older ones.

In response to these claims, Meta and TikTok denied intentionally promoting harmful content. The whistleblowers' accounts nonetheless raise serious questions about the ethics of algorithms designed to maximize user engagement.