Enthusiasts Build OSINT Tool to Profile YouTube Commenters, Raising Privacy Concerns

A new open-source intelligence (OSINT) tool called YouTube-Tools has emerged, claiming to have scraped a staggering 20 billion comments from YouTube videos spanning 2005 to 2025, left by 1.4 billion unique users.

While the tool’s creators say it’s designed for law enforcement, its accessibility (priced at just $20 a month with minimal sign-up barriers) has sparked significant privacy concerns.
How It Works

In one test run, the tool linked a commenter to Italy based on language patterns and cultural references in their posts. The developer, who has previously built similar tools for platforms such as Twitch and League of Legends, says the AI summaries are meant to help investigators by highlighting “points of interest” across potentially thousands of comments, without replacing manual investigation.
The service’s scale is unprecedented: 20 billion comments from 1.4 billion users, amassed over two decades. This massive dataset, combined with AI analysis, turns what might seem like innocuous public comments into a goldmine for profiling. For law enforcement in countries like Portugal and Belgium, where the tool is already in use, it could be a powerful resource.
But the lack of oversight and the ease of access (anyone with a credit card and an email address can sign up) mean it’s also ripe for misuse.
Privacy Risks and Ethical Questions

There is little to stop malicious actors from using it for targeted harassment, a concern amplified by reports that harassment-focused online communities have already experimented with the developer’s other tools.
YouTube’s policies explicitly prohibit unauthorized scraping, stating that data can only be scraped in accordance with its robots.txt file or with prior written permission.
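Compliance with a site’s robots.txt can be checked programmatically. The sketch below uses Python’s standard-library `urllib.robotparser` against a hypothetical excerpt (the rules shown are illustrative assumptions, not YouTube’s actual file, which lives at https://www.youtube.com/robots.txt and may differ):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt excerpt for illustration only --
# the real file served by YouTube may contain different rules.
SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /comment
Disallow: /get_video
Allow: /
"""

parser = RobotFileParser()
# parse() accepts the file's contents as an iterable of lines
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# A generic crawler ("*") checking whether specific paths may be fetched
print(parser.can_fetch("*", "https://www.youtube.com/comment"))  # disallowed under this sample
print(parser.can_fetch("*", "https://www.youtube.com/watch"))    # allowed under this sample
```

In practice a crawler would call `parser.set_url(...)` and `parser.read()` to fetch the live file; the point is that robots.txt compliance is mechanically checkable, which makes scraping in violation of it a deliberate choice rather than an oversight.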
Yet, as of May 29, 2025, YouTube has not publicly addressed whether it’s taking action against YouTube-Tools. This lack of enforcement raises questions about how platforms protect user data in an era where AI can turn public information into deeply personal profiles.

A Broader Trend

While the developers of YouTube-Tools claim to have revoked access for users with “illegitimate purposes,” such as one who signed up with a temporary email address, these measures seem insufficient to prevent abuse.
The tool’s $20 monthly fee and lack of rigorous vetting make it accessible to virtually anyone, not just the law enforcement agencies it claims to target.
If you told someone in 2010 that their YouTube comments could one day be used to predict where they live or what they believe, they’d likely dismiss it as science fiction. Today, it’s a stark reality.
As AI continues to blur the line between public data and private life, tools like YouTube-Tools force us to rethink what “public” really means, and whether platforms like YouTube are doing enough to protect their users from the unintended consequences of their digital footprint.