In a move that’s hardly surprising but still unsettling, Google has confirmed it uses YouTube videos, including those from creators, to train its advanced video generation model, Veo 3.
Buried in YouTube’s Terms of Service is a clause granting the platform broad rights to use uploaded content, but few creators realized this could mean their faces, voices, and creative styles would fuel an AI capable of replicating them. As concerns mount over unauthorized use and deepfake risks, Google is taking steps to protect creators, including legal safeguards and a new partnership with Creative Artists Agency (CAA). Here’s what’s at stake and how Google is navigating the fallout.
YouTube’s Fine Print: Your Videos, Their AI
YouTube’s Terms of Service explicitly state that uploading content grants the platform a “worldwide, non-exclusive, royalty-free” license to use it for purposes like “operating and improving” services. For most creators, this seems innocuous—maybe it’s for search optimization or ad targeting. But as revealed in a June 2025 report by Reuters, Google has been leveraging YouTube’s vast video library to train Veo 3, its generative AI model for creating hyper-realistic videos. This includes public videos from creators like Australian YouTuber Brodie Moss, whose content showed a 71/100 visual similarity and 90+/100 audio match with Veo 3 outputs, according to Vermillio’s content analysis platform.
The revelation has sparked unease among creators, many of whom were unaware their work was being used to train AI models capable of mimicking their likeness or style. Vermillio’s analysis, which scanned over 1,000 creator videos, flagged multiple instances where Veo 3-generated content closely resembled YouTube uploads, raising questions about consent and transparency. While Google insists it uses only publicly available videos and anonymizes data, the lack of explicit disclosure has left creators feeling blindsided. As one creator told Reuters, “I didn’t sign up for my face to be an AI training dummy.”
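Vermillio has not disclosed how its platform computes scores like "71/100". As a purely hypothetical sketch, one standard approach is to extract a feature embedding from each video (using a separate vision or audio model, assumed here) and scale the cosine similarity between embeddings to a 0-100 score:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_score(a, b):
    """Scale cosine similarity to a 0-100 match score (negatives clamp to 0)."""
    # The 0-100 scaling is an illustrative choice, not Vermillio's method.
    return round(100 * max(0.0, cosine_similarity(a, b)))
```

Identical embeddings would score 100, unrelated (orthogonal) ones 0; in practice the quality of such a score depends entirely on the embedding model, not on this arithmetic.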
Google’s Defense: Protecting Data and Creators
Google maintains that its use of YouTube videos complies with its Terms of Service and is essential for advancing AI innovation. A Google spokesperson emphasized that training data is “protected from misuse” by other companies, citing ongoing investigations into competitors like OpenAI and Apple for allegedly scraping YouTube content without permission. Unlike OpenAI’s Sora, which faced scrutiny for similar practices, Google controls YouTube’s ecosystem, giving it legal leverage to use its own platform’s data.
To address creator concerns, Google has introduced safeguards. For videos generated by Veo 3, the company promises to handle any copyright disputes, covering legal costs and liabilities if AI outputs infringe on existing content. This move aims to reassure creators who fear their likenesses could be exploited in deepfakes or unauthorized reproductions. Additionally, YouTube’s Content ID system, which scans for copyright matches, has been upgraded to detect AI-generated content, with 85% accuracy for visual matches and 92% for audio, per internal tests reported by TechCrunch.
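YouTube has not published Content ID's internals, but the general family of techniques it belongs to is fingerprint matching: hash small windows of a reference signal, then measure what fraction of those fingerprints reappear in a candidate upload. A toy sketch, in which the window size, hash choice, and byte-valued signal are all illustrative assumptions, might look like:

```python
import hashlib

def fingerprint(samples, window=4):
    """Hash overlapping windows of a byte-valued signal into a fingerprint set."""
    return {hashlib.md5(bytes(samples[i:i + window])).hexdigest()
            for i in range(len(samples) - window + 1)}

def match_percent(reference, candidate, window=4):
    """Percentage of the reference's fingerprints found in the candidate."""
    ref = fingerprint(reference, window)
    cand = fingerprint(candidate, window)
    if not ref:
        return 0.0
    return 100 * len(ref & cand) / len(ref)
```

A production system would fingerprint perceptual features (spectrogram peaks, frame descriptors) rather than raw samples, precisely so that matches survive re-encoding and light editing; the set-intersection scoring shown here is the part that carries over.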
A Game-Changing Partnership with CAA
In a proactive step, YouTube has partnered with Creative Artists Agency (CAA), one of Hollywood’s top talent agencies, to empower high-profile creators in the fight against deepfakes. Announced in May 2025, the partnership gives CAA-represented creators—think YouTubers with millions of subscribers—access to advanced tools for monitoring and flagging unauthorized AI-generated content. CAA’s legal team will also provide faster recourse for takedowns and intellectual property claims, addressing a key pain point: the slow response time of YouTube’s existing systems. This move targets top-tier influencers, but YouTube plans to roll out similar tools to smaller creators by 2026, potentially integrating them into Creator Studio.
The CAA deal also aims to give creators more control over their digital likeness. For instance, creators can opt into a “digital twin” registry, allowing them to license their AI-generated likeness for specific campaigns while restricting unauthorized uses. This could turn deepfake risks into revenue opportunities, with CAA estimating that top creators could earn $50,000–$200,000 annually from licensed AI content.
The Bigger Picture: Ethics and Opportunity
The controversy highlights a broader tension in the creator economy: balancing innovation with the ethical use of data. YouTube's 2.5 billion monthly users have collectively uploaded billions of hours of video, making the platform a goldmine for AI training. But the lack of explicit consent for such use has fueled distrust, especially among creators who rely on YouTube for their livelihoods. A 2025 Pew Research survey found that 62% of creators want clearer disclosure about how their content is used, and 48% worry about AI replicating their work without credit.
On the flip side, Google’s efforts to protect creators could set a new standard. By shielding users from copyright disputes and partnering with CAA, YouTube is addressing deepfake risks more proactively than competitors like TikTok or Instagram. The platform’s $70 billion payout to creators since 2007 underscores its reliance on their content, and safeguarding their trust is critical as AI tools like Veo 3 become mainstream. However, the optics of training AI on creators’ work without clear notification remain a sticking point, with some calling for an opt-out mechanism.
What’s Next for Creators?
For creators, the takeaway is clear: read the fine print. YouTube’s Terms of Service grant broad rights, but Google’s legal protections and CAA partnership offer a safety net. Smaller creators, however, may still feel vulnerable without access to CAA’s resources. As AI video generation grows—Veo 3 powers everything from TikTok-style clips to Hollywood storyboards—creators must navigate a landscape where their content fuels innovation but also risks exploitation.
Google’s commitment to defending Veo 3 users in copyright disputes and its CAA alliance signal a shift toward creator empowerment, but transparency remains key. For now, creators like Brodie Moss are left wondering how much of their digital DNA powers tools like Veo 3. As the creator economy evolves, the line between innovation and ethics will only grow sharper — and YouTube’s next moves will set the tone.