Media giant YouTube is moving to address the recently viral wave of “synthetic singing” — AI deepfakes of a singer’s voice. The company has built a new tool designed to protect creators and combat unauthorised AI-generated content, one which, funnily enough, also runs on artificial intelligence.
The software, scheduled for release in 2025, will identify and manage “soundalike” vocals: synthetic singing that mimics a real artist’s voice by training on recordings of that singer. The technology is expected to be particularly valuable to rightsholders who want additional protection for their artists’ voices.
Amjad Hanif, YouTube’s VP of product management for creator products, said the new technology will be available directly through YouTube’s Content ID system. While further details have yet to be made public, the company says eligible partners will be able “to automatically detect and manage AI-generated content that simulates their singing voices”.
Tools like this one are emerging as concerns grow over the increasing prevalence of “deepfake” content and the potential for misuse of AI technology. YouTube’s efforts align with a broader industry trend, as media and streaming giants race to develop solutions for managing AI-generated content.
[H/T] Digital Music News
*Cover image: rafapress