Law enforcement officers in New Mexico used an AI-generated image of a fake teenage girl to attract pedophiles, a lawsuit filed by the state reveals.
The suit was filed last week in New Mexico against the social media app Snapchat, which prosecutors say has failed to "protect children from sextortion, sexual exploitation, and harm."
As flagged by Ars Technica, the filing reveals that part of the cops' "undercover investigation" involved the state's Department of Justice officials setting up a "decoy Snapchat account for a 14-year-old named Heather."
As "Heather," the officers "found and exchanged messages" with accounts belonging to obvious pedophiles, with disturbing usernames like "child.rape" and "pedo_lover10," according to the filing. Historically, as Ars notes, police conducting similar investigations would use images of younger-looking adult women — often police officers — to convince child predators they were speaking to a real teenage girl.
But in this case, cops used an AI-generated image of a sexualized 14-year-old to persuade the perpetrators that Heather was, in fact, the real deal.
According to the officers, the tactic worked: fooled by the AI-generated photo, many of the accounts the officers interacted with allegedly attempted to goad "Heather" into sharing explicit sexual images or child sexual abuse material (CSAM).
But while this investigation was successful in revealing disturbing dark realities of Snapchat's algorithms, as Ars notes, the officers' use of AI raises new ethical questions. For example, AI-generated CSAM is already on the rise, so should the government really be making more of it, even if it's fake?
"Of course, it would be ethically concerning if the government were to create deepfake AI child sexual abuse material (CSAM)," the lawyer Carrie Goldberg, who famously represented several victims of sex abuse by Harvey Weinstein, told Ars, "because those images are illegal, and we don't want more CSAM in circulation."
C(AI)tch-22
There are also ethical questions regarding the AI training datasets the cops' efforts leaned on.
To generate fake images of children, an AI model has to be trained on photos of real kids. It's hard to argue that a child can give their full consent for their image to be used for AI training in the first place — a question made all the more serious when AI is being used to generate sexualized or otherwise harmful images of them.
Elsewhere, on a practical level, Goldberg warned Ars that using AI-made photos of fake kids could provide useful kindling for entrapment defenses by perpetrators.
All in all, the investigators' use of AI represents a catch-22 for law enforcement. On one hand, according to the lawsuit, predators took the bait. But if the goal is to protect actual kids, regurgitating images of real children into sexualized, AI-generated images of fake ones feels like a far cry from total protection.