X Announces Suspension Policy for Unlabeled AI Content Depicting Armed Conflict
Social media platform X will suspend creators from its revenue-sharing program for posting unlabeled AI-generated content showing armed conflict.

Social media platform X announced it will suspend creators from its revenue-sharing program for posting unlabeled artificial-intelligence-generated content depicting armed conflict, according to the company's updated policy guidelines.
Creators who violate the policy will face a three-month suspension from the revenue-sharing program. Repeat offenders will be permanently banned from the monetization program.
The policy appears to address growing concerns about AI-generated misinformation in conflict zones. Recent incidents have shown how AI tools can be used to create fake satellite imagery and other deceptive content related to warfare, potentially spreading false information about ongoing conflicts.
The announcement comes amid broader scrutiny of AI-generated content on social media platforms and its potential to mislead users about real-world events. Platform policies around synthetic media have evolved as AI tools become more sophisticated and accessible to everyday users.
X has not specified how it will detect unlabeled AI-generated content, nor has it described an appeals process for suspended creators. The company also has not indicated whether the policy applies only to its revenue-sharing program or extends to other platform features.