On Tuesday, YouTube announced it will soon implement stricter measures on realistic AI-generated content hosted by the service. “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” the company wrote in a statement. The changes will roll out over the coming months and into next year.
The move by YouTube comes as part of a series of efforts by the platform to address challenges posed by generative AI in content creation, including deepfakes, voice cloning, and disinformation. When creators upload content, YouTube will provide new options to indicate if the content includes realistic AI-generated or AI-altered material. “For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do,” YouTube writes.
In the detailed announcement, Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management at YouTube, explained that the policy update aims to maintain a positive ecosystem in the face of generative AI. “We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube,” they write. “We have long-standing policies that prohibit technically manipulated content that misleads viewers … However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created.”
YouTube will also introduce a new labeling system on the platform that will inform viewers about the nature of the content they are watching. For instance, a new label will be added to the description panel and video player for content that has been altered or is synthetic, especially when discussing sensitive topics like “elections, ongoing conflicts and public health crises, or public officials,” the company says.
Additionally, content created by YouTube's own generative AI products, such as the AI-powered video creator Dream Screen, will be automatically labeled as altered or synthetic. The company shared three mock-ups of what these labels may look like, though the designs may change over time.
Creators who fail to disclose their use of AI may be subject to penalties, including content removal or suspension from the YouTube Partner Program. Further, YouTube plans to deploy AI-powered content moderation tools aimed at improving the speed and accuracy of identifying and handling content that violates the new rules.
Response to deepfake and artist imitation concerns
YouTube also announced plans to allow individuals to request the removal of AI-generated content that simulates identifiable individuals, including their faces or voices, such as deepfakes, through a privacy request process. “Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests,” they write. “This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.”
Along those lines, YouTube will also introduce a process for artists and music publishers to request the removal of AI-generated music that mimics an artist's unique singing or rapping voice. As with privacy requests, YouTube will weigh whether the content is part of news reporting, analysis, or critique of the synthetic vocals when evaluating potential takedowns, the company says.
With the needs of parody, fair use, and political commentary in mind, YouTube says it is attempting to balance new applications of AI with its community safety efforts. “We’re still at the beginning of our journey to unlock new forms of innovation and creativity on YouTube with generative AI,” they write. “We’ll work hand-in-hand with creators, artists, and others across the creative industries to build a future that benefits us all.”