December 2, 2025
YouTube's new AI deepfake tracking tool is alarming experts and creators
A YouTube tool that uses creators' biometrics to remove AI-generated videos also allows Google to train its AI models on that sensitive data, experts told CNBC.


A YouTube tool that uses creators’ biometrics to help them remove AI-generated videos that exploit their likeness also allows Google to train its artificial intelligence models on that sensitive data, experts told CNBC.

In response to concerns from intellectual property experts, YouTube told CNBC that Google has never used creators’ biometric data to train AI models and that it is reviewing the language in the tool’s sign-up form to avoid confusion. The company said, however, that it will not change the underlying policy.

The discrepancy highlights a broader divide inside Alphabet, where Google is aggressively expanding its AI efforts while YouTube works to maintain trust with creators and rights holders who depend on the platform for their businesses.

YouTube is expanding its “likeness detection,” a tool the company introduced in October that flags when a creator’s face is used without their permission in deepfakes, the term used to describe fake videos created using AI. The feature is being expanded to millions of creators in the YouTube Partner Program as AI-manipulated content becomes more prevalent throughout social media.

The tool scans videos uploaded across YouTube to identify where a creator’s face may have been altered or generated by artificial intelligence. Creators can then decide whether to request the video’s removal, but to use the tool, YouTube requires that creators upload a government ID and a biometric video of their face. Biometrics are the measurement of physical characteristics to verify a person’s identity.
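
YouTube has not disclosed how the matching works internally, but likeness-detection systems of this kind are commonly built on face embeddings: the verified reference footage is reduced to numeric vectors, and faces found in newly uploaded videos are compared against them. The sketch below illustrates only that general approach; it is not YouTube’s implementation, and the embedding size, threshold and function names are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_likeness(reference_embeddings, candidate_embeddings,
                           threshold: float = 0.8) -> bool:
    """Return True if any face embedding from an uploaded video is close
    enough to the creator's verified reference embeddings to surface the
    video for review. In a real pipeline the embeddings would come from a
    face-recognition model run over video frames; the 0.8 threshold is a
    made-up value that trades off missed matches against false alarms."""
    return any(
        cosine_similarity(cand, ref) >= threshold
        for cand in candidate_embeddings
        for ref in reference_embeddings
    )

# Toy demo with synthetic 128-dimensional embeddings.
rng = np.random.default_rng(0)
reference = rng.normal(size=(3, 128))  # stands in for the verified reference video
uploads = [
    reference[0] + rng.normal(scale=0.1, size=128),  # near-duplicate of the creator's face
    rng.normal(size=128),                            # unrelated face
]
print(flag_possible_likeness(reference, uploads))    # True -> surfaced to the creator
```

One design point in the sketch is consistent with how YouTube describes the tool: a match does not trigger automatic removal, it only surfaces the video to the creator, who then decides whether to request a takedown.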

Experts say that by tying the tool to Google’s privacy policy, YouTube has left the door open for future misuse of creators’ biometrics. The policy states that public content, including biometric information, can be used “to help train Google’s AI models and build products and features.”

“Likeness detection is a completely optional feature, but does require a visual reference to work,” YouTube spokesperson Jack Malon said in a statement to CNBC. “Our approach to that data is not changing. As our Help Center has stated since the launch, the data provided for the likeness detection tool is only used for identity verification purposes and to power this specific safety feature.”

YouTube told CNBC it is “considering ways to make the in-product language clearer.” The company has not said what specific changes to the wording will be made or when they will take effect.

Experts remain cautious, saying they raised concerns about the policy with YouTube months ago.

“As Google races to compete in AI and training data becomes strategic gold, creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves,” said Dan Neely, CEO of Vermillio, which helps individuals protect their likeness from being misused and also facilitates secure licensing of authorized content. “Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back.”

Vermillio and Loti are third-party companies that work with creators, celebrities and media companies to monitor and enforce likeness rights across the internet. As AI video generation has advanced, demand for their services from IP rights holders has grown.

Loti CEO Luke Arrigoni said the risks of YouTube’s current biometric policy “are enormous.”

“Because the release currently allows someone to be able to attach that name to the actual biometrics of the face, they could create something more synthetic that looks like that person,” Arrigoni said.

Neely and Arrigoni both said they would not currently recommend that any of their clients sign up for likeness detection on YouTube.

YouTube’s head of creator product, Amjad Hanif, said YouTube built its likeness detection tool to operate “at the scale of YouTube,” where hundreds of hours of new footage are posted every minute. The tool is set to be made available to the more than 3 million creators in the YouTube Partner Program by the end of January, Hanif said.

“We do well when creators do well,” Hanif told CNBC. “We’re here as stewards and supporters of the creator ecosystem, and so we are investing in tools to support them on that journey.”

The rollout comes as AI-generated video tools rapidly improve in quality and accessibility, raising new concerns for creators whose likeness and voice are central to their business.

YouTube creator Mikhail Varshavski, a physician who goes by Doctor Mike on the video platform, said he uses the service’s likeness detection tool to review dozens of AI-manipulated videos a week.

Varshavski has been on YouTube for nearly a decade and has amassed more than 14 million subscribers on the platform. He makes videos reacting to TV medical dramas, answering questions on health fads and debunking myths. He relies on his credibility as a board-certified physician to inform his viewers.

Rapid advances in AI have made it easier for bad actors to copy his face and voice in deepfake videos that could give his viewers misleading medical advice, Varshavski said.

He first encountered a deepfake of himself on TikTok, where an AI-generated doppelgänger promoted a “miracle” supplement.

“It obviously freaked me out, because I’ve spent over a decade investing in garnering the audience’s trust and telling them the truth and helping them make good health-care decisions,” he said. “To see someone use my likeness in order to trick someone into buying something they don’t need or that can potentially hurt them, scared everything about me in that situation.”

AI video generation tools like Google’s Veo 3 and OpenAI’s Sora have made it significantly easier to create deepfakes of celebrities and creators like Varshavski. That’s because their likeness is frequently featured in the datasets used by tech companies to train their AI models.

Veo 3 is trained on a subset of the more than 20 billion videos uploaded to YouTube, CNBC reported in July. That could include several hundred hours of video from Varshavski.

Deepfakes have “become more widespread and proliferative,” Varshavski said. “I’ve seen full-on channels created weaponizing these types of AI deep fakes, whether it was for tricking people to buy a product or strictly to bully someone.”

At the moment, creators have no way to monetize unauthorized use of their likeness. That stands in contrast to YouTube’s Content ID system for copyrighted material, which offers revenue sharing and is typically used by companies holding large copyright catalogs. YouTube’s Hanif said the company is exploring how a similar model could work for AI-generated likeness use in the future.

Earlier this year, YouTube gave creators the option to permit third-party AI companies to train on their videos. Hanif said that millions of creators have opted into that program, with no promise of compensation.

Hanif said his team is still working to improve the product’s accuracy and that early testing has been successful, though he did not provide accuracy metrics.

As for takedown activity across the platform, Hanif said that remains low largely because many creators choose not to delete flagged videos.

“They’ll be happy to know that it’s there, but not really feel like it merits taking down,” Hanif said. “By and far the most common action is to say, ‘I’ve looked at it, but I’m OK with it.'”

Agents and rights advocates told CNBC that low takedown numbers are more likely due to confusion and a lack of awareness than to comfort with AI content.