YouTube Expands AI Deepfake Detection Tool to Politicians and Journalists
YouTube is extending its AI-powered deepfake detection feature to politicians, government officials, and journalists in a pilot program.

YouTube announced Tuesday that it is expanding access to its artificial intelligence-powered deepfake detection tool to include politicians, government officials, and journalists in a new pilot program.
The likeness detection feature, which identifies AI-generated content that mimics real people without their consent, was previously available to millions of content creators on the platform. The expanded access will let public figures monitor and flag unauthorized deepfakes of themselves for potential removal.
The tool represents YouTube's effort to address growing concerns about synthetic media and its potential impact on public discourse, particularly as AI-generated content becomes increasingly sophisticated and harder to detect with the naked eye.
The pilot program specifically targets individuals at higher risk of being impersonated through deepfake technology, including political candidates, elected officials, and members of the press. Because these groups are prominent public voices, they are more likely targets for malicious deepfake content.
YouTube has not disclosed the size of the pilot group or provided a timeline for broader rollout of the feature. The company's existing deepfake detection system relies on machine learning algorithms to identify synthetic content that uses someone's likeness without permission.