AI-Generated Child Abuse Imagery Raises Concerns for Law Enforcement and Child Advocates

A child psychiatrist who altered a first-day-of-school photo posted on Facebook to make a group of girls appear nude. A U.S. Army soldier accused of creating images depicting the sexual abuse of children he knew. A software engineer charged with generating hyper-realistic sexually explicit images of children.

Law enforcement agencies across the U.S. are cracking down on artificial intelligence-generated child sexual abuse material, from manipulated photos of real children to graphic depictions of computer-generated minors. Justice Department officials say they are aggressively prosecuting offenders who exploit AI tools, while states race to ensure that "deepfakes" and other disturbing images of children can be prosecuted under their own laws.

Existing federal laws already apply to such content, and the Justice Department recently brought what it described as the first federal case involving purely AI-generated imagery, meaning the children depicted are virtual rather than real. In August, federal officials also arrested a soldier stationed in Alaska accused of running innocent photos of real children he knew through an AI chatbot to render them sexually explicit.

The prosecutions come as child advocates work urgently to curb misuse of the technology and head off a flood of disturbing images that could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and locate exploited children who do not exist.

Meanwhile, lawmakers are passing a flurry of legislation to ensure local prosecutors can bring charges under state law for AI-generated "deepfakes" and other sexually explicit images of children. According to the National Center for Missing &amp; Exploited Children, more than a dozen states have enacted laws this year banning digitally created or altered child sexual abuse imagery.

District Attorney Erik Nasarenko spearheaded a bill, signed by Gov. Gavin Newsom last month, that bans AI-generated child sexual abuse material. Nasarenko said that until the new law, prosecutors had to prove such content depicted a real child, which prevented his office from bringing charges in eight cases between last December and mid-September.

Law enforcement officials say AI-generated child sexual abuse images can also be used to groom children. And children can be seriously harmed by sexually explicit images of themselves even if they are never physically abused.

Experts say offenders favor open-source AI models that users can download to their own computers and train or modify to generate explicit depictions of children. Officials say abusers on the dark web trade techniques for manipulating AI tools to produce such content.

Major tech companies, including Google, OpenAI, and Stability AI, have partnered with the anti-child-exploitation organization Thorn to combat the spread of child sexual abuse images.

However, experts argue that more should have been done to prevent misuse before the technology became widely available. In recent court filings, a Justice Department prosecutor noted that companies' efforts to make future AI tools harder to abuse "will do little to prevent" offenders from running earlier versions of the models on their own computers "without detection."

In 2002, the Supreme Court struck down a federal ban on virtual child sexual abuse material. However, a federal law passed the following year bans depictions of children engaged in sexually explicit conduct that are deemed "obscene." The Justice Department says that law has been used to charge cartoon depictions of child sexual abuse, and it specifies that there is no requirement "that the minor depicted actually exist."

In May, the Justice Department charged a Wisconsin software engineer with using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct. He was caught after sending some of them to a 15-year-old boy via direct message on Instagram. The man's lawyer, who is seeking to have the charges dismissed on First Amendment grounds, declined to comment in an email to the AP.

The Justice Department is prosecuting "deepfakes," in which a real child's photo is digitally altered to make it sexually explicit, under the federal "child pornography" law. Last year, a North Carolina child psychiatrist was convicted on federal charges for using an AI application to digitally "undress" girls posing for a first-day-of-school picture in a decades-old photo shared on Facebook.
