Child Safety Groups Urge YouTube to Address AI-Generated Content for Kids
Over 200 advocacy groups and experts called on YouTube to better regulate artificial intelligence-generated videos on its platform, citing concerns about child development.
More than 200 organizations and individual experts have signed a letter urging YouTube to implement stricter controls on AI-generated content aimed at children, warning of harms to child development and online safety.
The letter, sent Wednesday to YouTube CEO Neal Mohan and Google CEO Sundar Pichai, was organized by the children's advocacy group Fairplay and signed by 135 organizations, including the American Federation of Teachers and the American Counseling Association, along with roughly 100 individual experts such as educators and child psychiatrists.
The advocacy groups argue that AI-generated videos, which they term "AI slop," harm children's development by distorting their sense of reality and overwhelming their learning processes. They specifically cite concerns about fast-paced content with bright colors, lively music and clickbait titles designed to capture young viewers' attention and extend screen time.
Fairplay is calling on YouTube to clearly label all AI-generated content and to ban it entirely from YouTube Kids. The group also wants restrictions on recommending AI-generated videos to users under 18, as well as parental controls that block such content even when children search for it.
YouTube spokesperson Boot Bullwinkle responded that the platform maintains "high standards for content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels." The company currently requires creators to disclose when realistic content is made with AI but does not mandate disclosure for clearly unrealistic animated content.
The campaign follows recent legal developments, including a California jury verdict finding YouTube liable for designing its platform to hook young users without regard for their well-being. In a January blog post, Mohan had identified "managing AI slop" as a company priority for 2025, saying YouTube was building systems to combat low-quality, repetitive content.