(CNN) – “It is enough to have a human form to be a victim.” This is how attorney Carrie Goldberg describes the risk of deepfake pornography in the era of artificial intelligence (AI).
Although revenge porn, or the non-consensual sharing of sexual images, has been around almost as long as the internet, the proliferation of AI tools means that anyone can be subject to this form of harassment, even if they have never taken or sent a nude photo. AI tools can now superimpose a person’s face onto a naked body or manipulate existing photos to make it appear as if a person is not wearing clothes.
In the last year, the targets of non-consensual AI-generated pornographic images have ranged from prominent women such as Taylor Swift and Representative Alexandria Ocasio-Cortez to high school girls.
For someone who discovers that they, or their children, have been subjected to deepfake pornography, the experience is often frightening and overwhelming, said Goldberg, who runs the New York-based firm CA Goldberg Law, which represents victims of sexual crimes and online harassment. “Especially if they are young and don’t know how to deal with it, and the internet is a big, huge, nebulous place,” she explains.
But there are steps victims of this form of harassment can take to protect themselves and places to turn for help, Goldberg said in an interview on CNN’s new tech podcast, Terms of Service with Clare Duffy.
Goldberg explained that for people who are targeted by AI-generated sexual images, the first step – even if it is counterintuitive – should be to take a screenshot.
“The knee-jerk reaction is to remove them from the internet as soon as possible,” says Goldberg. “But if you want to have the option of criminally reporting it, you need the evidence.”
Next, they can search for the forms that platforms like Google, Meta and Snapchat offer to request the removal of explicit images. Non-profit organizations such as StopNCII.org and Take It Down can also help facilitate the removal of those images across multiple platforms at once, although not all sites cooperate with the groups.
A bipartisan group of senators sent an open letter in August asking nearly a dozen technology companies, including X and Discord, to join the programs.
The fight against non-consensual explicit images and deepfakes has received rare bipartisan support. A group of teenagers and parents affected by AI pornography testified at a hearing on Capitol Hill, where Republican Senator Ted Cruz introduced a bill – supported by Democratic Senator Amy Klobuchar and others – that would criminalize the publication of such images and require social media platforms to remove them upon notice from victims.
But for now, victims have to navigate a patchwork of state laws. In some places, there are no criminal laws against creating or sharing explicit deepfakes of adults. (AI-generated sexual images of children often fall under child sexual abuse material laws.)
“My proactive advice is actually for potential criminals: don’t be a scumbag and try to steal someone’s image and use it to humiliate them,” says Goldberg. “There’s not much victims can do about it… We can never be completely safe in a digital society, but it’s up to each of us not to be complete idiots.”