Whistleblowers Allege TikTok and Meta Prioritized Engagement Over User Safety
Former employees claim social media companies knowingly allowed harmful content to boost algorithmic engagement.
Whistleblowers have alleged that TikTok and Meta deliberately allowed more harmful content on users' feeds as part of an intensifying competition to maximize algorithmic engagement, according to testimony heard by the BBC.
The former employees claim both companies understood their recommendation algorithms were driven by content that provoked strong emotional reactions, including outrage, but chose to prioritize user engagement over safety considerations.
According to the whistleblowers, this approach formed part of what they called an "algorithm arms race" between major social media platforms, in which companies competed to keep users on their apps for longer by serving increasingly provocative content.
The allegations suggest that executives at both companies were aware their algorithmic systems amplified divisive or harmful material because such content generated higher levels of user interaction and time spent on the platforms.
Neither TikTok nor Meta has publicly responded to these specific allegations. Both companies have previously said they employ content moderation systems and safety measures to reduce harmful content on their platforms.
The claims add to ongoing scrutiny of how major social media companies design their recommendation algorithms and balance user engagement with content safety policies.