Google’s recent announcement about bringing artificial intelligence (AI) to its platforms marks a significant shift in how we interact with content and how sensitive material is moderated in the digital age. The changes, aimed at providing a more refined and secure experience, are designed to address emerging problems surrounding generative AI and its implications. In this article, we look at the new YouTube features introduced as part of these changes.
One of the most striking changes is the introduction of real-time conversations with AI within YouTube. This feature, exclusive to Premium users, lets viewers chat with the YouTube assistant to get additional information about the video they are watching. The experiment aims to improve the user experience, but for now it is limited to Premium subscribers in the United States on Android devices during December 2023 and January 2024, as part of a package of tests.
In addition, YouTube has added a new “Topics” button that displays AI-categorized comments in the comments section of long videos, a way to organize discussion and improve interaction between creators and viewers. Grouping comments by topic will make it easier for creators to manage conversations, although for now the feature is only available to Premium users on a select set of videos with extensive comment sections.
However, one of the most important changes is the new labeling policy for AI-generated content. Starting next year, YouTube will require creators to disclose when realistic content has been generated with AI. This is especially crucial in sensitive contexts such as elections or ongoing conflicts. Failure to comply could result in penalties such as content removal or demonetization.
The complexity lies in defining what counts as “realistic” content and how removal policies will be applied. YouTube will accept removal requests for content that simulates identifiable individuals, but it will weigh factors such as parody, satire, and whether the person in question is a public figure. These are crucial considerations for legitimate use cases, and there are currently no specific laws governing this area.
Likewise, the special protection granted to the music industry, which can request removal of content that imitates an artist’s voice, raises questions about creative freedom and musical criticism on the platform. The discrepancy between the rules applied to the music industry and those applied to general content fuels concerns about whether the policies will be enforced equitably and what that means for content creators.
This complex landscape poses significant challenges for both YouTube and content creators. While the platform is attempting to establish a framework for regulating the use of generative AI, ambiguous definitions and the lack of reliable tools for accurately detecting AI-generated content could lead to controversy and legal challenges in the future.
We will be watching how these changes evolve and how YouTube addresses emerging challenges to ensure a fair and safe environment for all users and creators on its platform.