04.06.2025 12:02

Veo 3 Misused to Generate Fakes About Riots, Conflicts, and Election Fraud


On June 4, 2025, alarming reports surfaced about the misuse of Veo 3, Google’s advanced AI video generation tool, to create hyper-realistic fake content.

Originally developed to revolutionize video production, Veo 3 is now being exploited to fabricate videos depicting riots, armed conflicts, and election fraud, as well as spread false news about celebrity deaths and enable cryptocurrency scams.

Despite Google’s efforts to implement safeguards like invisible watermarks and the SynthID Detector, experts warn that these measures may not be enough to curb the growing threat of social instability fueled by AI-generated misinformation.


The Rise of AI-Generated Fakes

Veo 3, developed by Google DeepMind, is a state-of-the-art text-to-video AI model that produces short, high-quality, photorealistic clips of roughly eight seconds, complete with natively generated audio and dialogue. Launched to assist creators in filmmaking and advertising, the tool can generate lifelike scenes from simple prompts in a range of cinematic styles.
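
For context, generating a clip takes only a few lines of code. Below is a minimal sketch using Google's google-genai Python SDK; the model identifier "veo-3.0-generate-preview" and the long-running-operation polling pattern follow the SDK's documented video-generation interface at the time of writing and may differ in current releases.

```python
# Minimal sketch of prompt-driven video generation with Google's
# google-genai SDK (pip install google-genai). Model name and call shape
# follow the SDK's documented interface and may have changed since.
import time
from google import genai

client = genai.Client()  # reads API key / Vertex AI settings from the environment

operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",
    prompt="A rainy city street at dusk, cinematic handheld shot",
)

# Video generation is asynchronous: poll the operation until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

generated = operation.response.generated_videos[0]
client.files.download(file=generated.video)
generated.video.save("clip.mp4")
```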

However, its accessibility — available via platforms like Google Cloud’s Vertex AI — has made it a double-edged sword. Malicious users have begun exploiting Veo 3 to create deceptive content that spreads rapidly across social media.

Reports indicate that fake videos depicting violent riots and armed clashes, staged in both real and invented locations, have surfaced on platforms like Telegram, where rapid resharing amplifies the fabrications.

Similarly, AI-generated videos falsely portraying election fraud — such as manipulated footage of ballot tampering — have been used to sow distrust in democratic processes, particularly in the wake of recent global elections.

Additionally, users have fabricated death announcements of high-profile figures, causing emotional distress and confusion among the public.


Beyond Misinformation: Cryptocurrency Scams

The misuse of Veo 3 extends to financial fraud, with scammers leveraging the tool to create fake endorsements or promotional videos for cryptocurrency schemes. These videos often feature AI-generated personas or deepfaked celebrities promoting fraudulent tokens or investment opportunities.

For example, a fabricated video of a well-known tech billionaire endorsing a fake crypto project reportedly led to significant financial losses for unsuspecting investors. The lifelike quality of Veo 3-generated content makes it difficult for the average user to discern fact from fiction, amplifying the potential for harm.


Visible Flaws, Invisible Threats

Despite the sophistication of Veo 3, the generated videos often contain noticeable inconsistencies. These include unnatural movements, distorted facial features, or environmental anomalies — like mismatched lighting or objects that defy physics.

However, analysts warn that even with these flaws, the content can still be convincing enough to deceive viewers, especially when shared in emotionally charged contexts like political unrest or breaking news.

A study by the Center for Countering Digital Hate (CCDH) found that 70% of participants failed to identify AI-generated videos as fake when viewed in isolation, highlighting the risk of widespread belief in fabricated narratives.

The broader implications are dire. Experts, including those from the Oxford Internet Institute, caution that such misinformation can exacerbate social tensions, fuel political polarization, and even incite real-world violence.

“AI-generated fakes don’t need to be perfect to cause harm,” said Dr. Sarah Thompson, a disinformation researcher. “In moments of crisis, people are more likely to react emotionally than critically, and these videos can act as a spark for unrest.”


Google’s Response: Safeguards Under Scrutiny

Google has acknowledged the misuse of Veo 3 and is taking steps to mitigate the risks. The company embeds invisible watermarks into every video Veo 3 generates, encoding an imperceptible signal directly into the content that marks it as AI-created. Additionally, Google has deployed SynthID Detector, a tool designed to identify AI-generated media by checking for these watermarks.

SynthID, which also supports detection of AI-generated images and audio, is available to researchers and select partners to help trace the origin of deceptive content.
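
Google has not published SynthID's algorithm, but the basic embed-then-detect idea behind imperceptible watermarking can be shown with a deliberately simplified toy: hide a secret-keyed bit pattern in the least significant bits of pixel data, then test for that pattern later. The sketch below is purely illustrative and bears no relation to SynthID's actual, far more robust design.

```python
# Toy illustration of imperceptible watermarking. This is NOT SynthID's
# algorithm (which is unpublished and far more robust); it only shows the
# embed-then-detect idea using least-significant-bit (LSB) encoding.
import numpy as np

SECRET_SEED = 42  # shared secret between the watermarker and the detector

def keyed_pattern(shape):
    """Pseudorandom 0/1 pattern derived from the secret seed."""
    return np.random.default_rng(SECRET_SEED).integers(0, 2, size=shape, dtype=np.uint8)

def embed_watermark(frame):
    """Overwrite each pixel's least significant bit with the keyed pattern."""
    return (frame & 0xFE) | keyed_pattern(frame.shape)

def detect_watermark(frame):
    """Fraction of LSBs matching the keyed pattern: ~1.0 marked, ~0.5 unmarked."""
    return float(np.mean((frame & 1) == keyed_pattern(frame.shape)))

frame = np.random.default_rng(7).integers(0, 256, size=(720, 1280, 3), dtype=np.uint8)
print(detect_watermark(embed_watermark(frame)))  # ~1.0: watermark detected
print(detect_watermark(frame))                   # ~0.5: chance level
```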

Google also enforces strict usage policies for Veo 3, requiring users to agree not to create harmful or misleading content. The company employs automated systems and human reviewers to monitor outputs, suspending accounts that violate these terms. However, these measures have limitations.

Invisible watermarks can be stripped or altered by determined bad actors, and SynthID’s effectiveness diminishes if videos are re-encoded or heavily edited — a common practice in misinformation campaigns.
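
That fragility is easy to demonstrate with the toy scheme above. Simulating lossy re-encoding by quantizing pixel values, a crude stand-in for what video codecs do, erases the least-significant-bit mark completely; robust schemes like SynthID are designed to survive such transformations far better, but heavy editing still degrades them.

```python
# Continuing the toy LSB watermark: simulate lossy re-encoding by snapping
# pixel values to a coarser grid (a crude stand-in for codec quantization)
# and show that detection falls back to chance level.
import numpy as np

SECRET_SEED = 42  # same shared secret as in the embedding sketch

def keyed_pattern(shape):
    return np.random.default_rng(SECRET_SEED).integers(0, 2, size=shape, dtype=np.uint8)

def detect_watermark(frame):
    return float(np.mean((frame & 1) == keyed_pattern(frame.shape)))

def simulate_reencode(frame, step=4):
    """Quantize pixels to multiples of `step`, discarding the low-order bits."""
    return ((frame.astype(np.int32) // step) * step).astype(np.uint8)

frame = np.random.default_rng(7).integers(0, 256, size=(720, 1280, 3), dtype=np.uint8)
marked = (frame & 0xFE) | keyed_pattern(frame.shape)  # watermark as before

print(detect_watermark(marked))                     # ~1.0 before re-encoding
print(detect_watermark(simulate_reencode(marked)))  # ~0.5: the mark is gone
```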

Experts like Dr. Alan Chen from MIT’s Media Lab have expressed skepticism about the reliability of these safeguards, noting that “watermarking is a cat-and-mouse game. As detection tools improve, so do evasion techniques.”


The Bigger Picture: A Call for Regulation

The misuse of Veo 3 underscores a broader challenge facing AI development: balancing innovation with responsibility. While Google has taken steps to address the issue, critics argue that more proactive measures are needed.

Some advocate for stricter access controls, such as limiting Veo 3 to verified professionals rather than making it broadly available through Vertex AI. Others call for global regulation of AI-generated content, including mandatory labeling and public awareness campaigns to educate users about identifying fakes.

The European Union’s AI Act, whose obligations phase in through 2026, may provide a framework for addressing these risks: among other provisions, it requires providers of generative systems to label synthetic content in a machine-readable way. In the U.S., the Federal Trade Commission (FTC) has begun investigating AI-driven scams, including those involving cryptocurrency, but legislative action remains slow.

Meanwhile, platforms like X and Telegram, where much of this content spreads, face pressure to improve their moderation practices, though enforcement remains inconsistent.



What Lies Ahead

The misuse of Veo 3 for creating fakes about riots, election fraud, and celebrity deaths is a stark reminder of AI’s dual-use potential.

While the technology holds immense promise for creative industries, its ability to generate convincing misinformation poses unprecedented risks to social stability.

Google’s efforts to implement safeguards like watermarks and SynthID are a step in the right direction, but their limitations highlight the need for broader collaboration between tech companies, governments, and researchers to combat AI-driven deception.

As Veo 3 and similar tools become more accessible, the challenge of distinguishing reality from fabrication will only grow.

For now, users are urged to approach viral videos with skepticism, especially those depicting sensational events, and to rely on verified news sources.

The stakes are high: unchecked misuse of AI like Veo 3 could erode trust in media, destabilize societies, and undermine the very foundations of truth in the digital age.

