Girl using a laptop – UNSPLASH/CC/ANNIE SPRATT
Aug. 28 (Portaltic/EP) –
Minors also create and share their own explicit sexual images and use generative artificial intelligence (AI) tools to create realistic nude photographs of people their age without consent, a fact that only increases the risks of sexual abuse of children and adolescents in the digital environment, and which requires a broad understanding in order to effectively protect them.
Generative AI technology is new, but behaviors such as harassment, bullying, and abuse are not, and when combined with it they can cause harm that extends beyond the digital realm, with consequences that cannot be minimized, especially when minors are involved.
This conclusion is included in the report ‘Perspectives of young people on online safety, 2023’, carried out by the American organization Thorn, which is focused on creating technology to prevent child sexual abuse, in collaboration with the consulting firm BSG.
The research focuses on children between 9 and 17 years old in the United States and their behavior in relation to technology, but its conclusions reflect a reality that can be extrapolated to other societies with similar digital services and behaviors.
They start from the fact that minors use social networks, regardless of the restrictions that these digital services establish. YouTube (98%), TikTok (84%), Roblox (80%), Minecraft (78%) and Fortnite (73%) are the most used by minors, who also turn to other services focused on adults, such as dating apps, in search of romantic or sexual experiences.
According to the data from this research, collected between November 3 and December 1, 2023, 17 percent of minors reported having used a dating app, such as Tinder, Grindr or Hinge, and almost the same percentage (16%) a pornography website. This type of activity is up to three times more likely among adolescents than among children under 12 years of age.
Internet access is not without risk: 59 percent reported having had harmful experiences online, and 35 percent reported having had sexual interactions, whether with other minors under 18 (28%) or with people they believed to be adults (28%).
These types of interactions were most likely on apps like Omegle (36%), Kik (23%), Snapchat (23%), Telegram (22%), Instagram (20%) and Marco Polo (20%).
SEXTORTION AND DEEPFAKES
One of the risks minors face is sextortion, that is, the threat of publishing an explicit image of the minor unless they do what the aggressor demands (enter into a relationship, perform a sexual act, provide explicit photographs of friends or siblings, etc.). According to the report, one in 17 minors has been a victim of sextortion.
Even without any threat, minors also share their own explicit videos and photographs, in which they appear naked, sometimes with another person. One in four sees this practice as normal. Of those who have shared such images, the majority (83%) have done so with someone they know ‘offline’, but 46 percent have sent them to exclusively digital acquaintances.
Explicit images are also shared without the consent of the person depicted in them: 7 percent admit to having forwarded other people’s images, while 19 percent have seen images forwarded by others without consent. 38 percent have blamed the victim for the circulation of the images.
Images created by minors also extend to those generated by AI tools with realistic results, known as ‘deepfakes’. Although the majority of minors do not believe that their peers have used these tools to create explicit images of other children, the report states that 11 percent know of cases in which they have.
“While the motivation behind these events is more likely to be driven by adolescent misbehavior than intent to sexually abuse, the resulting harms to victims are real and should not be minimized in attempts to evade responsibility,” the authors of the study said.
They also point to generative AI, noting that “it is critical that we speak proactively and directly about the harms caused by deepfakes and reinforce a clear understanding of what behavior is unacceptable in our communities and schools, regardless of the technology being used.”
The aim of this study is to obtain data that will reinforce prevention strategies and to “inspire” further research. “It is essential that we understand the full and nuanced risks that children face online in order to develop systems that can effectively protect them,” they conclude.