
Snapchat’s new chatbot, powered by artificial intelligence, sets off alarm bells among teens and parents

A few hours after Snapchat launched its My AI chatbot to all users last week, Lyndsi Lee, a mom from East Prairie, Missouri, told her 13-year-old daughter to stay away from the feature.

“It’s a temporary solution until you know more about it and can set some healthy boundaries and guidelines,” said Lee, who works at a software company. She worries about the way My AI presents itself to young users like her daughter on Snapchat.

The feature is based on the viral AI chatbot tool ChatGPT and, like it, can offer recommendations, answer questions and chat with users. But the Snapchat version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and insert it into conversations with friends.

The result is that chatting with the Snapchat chatbot can feel less transactional than visiting the ChatGPT website. It may also be less obvious that you are talking to a computer.

“I don’t think I’m ready to teach my daughter to emotionally separate humans from machines when, from her point of view, they look the same,” said Lee. “I just think there’s a very clear line that [Snapchat] is crossing.”

The new tool is facing backlash not only from parents but also from some Snapchat users, who are bombarding the app with bad reviews in the app store and criticism on social media over privacy concerns, “creepy” exchanges, and the inability to remove the feature from their chat feed unless they pay for a premium subscription.

While some may find value in the tool, the mixed reactions point to the risks companies face when deploying new generative AI technology in their products, particularly in products like Snapchat, whose user base skews young.

Snapchat was one of the first launch partners when OpenAI opened up access to ChatGPT to third-party companies, and many more are expected to follow. Almost overnight, Snapchat has forced some families and lawmakers to ask questions that seemed theoretical a few months ago.


The new Snapchat chatbot. Credit: Snapchat/My AI

In a letter to the CEOs of Snap and other tech companies last month, weeks after My AI was made available to subscribed Snap customers, Democratic Senator Michael Bennet expressed concern about the chatbot’s interactions with younger users. In particular, he cited reports that it can provide children with tips on how to lie to their parents.

“These examples would be disturbing for any social media platform, but they are especially concerning for Snapchat, which is used by nearly 60% of American teens,” Bennet wrote. “Although Snap admits that My AI is ‘experimental,’ it has nevertheless been quick to enroll American children and teens in its social experiment.”

In a blog post last week, the company said: “My AI is far from perfect, but we’ve made a lot of progress.”

User reaction

In the days since its official launch, Snapchat users have raised concerns.

One user called the interaction “scary” after the chatbot, they said, lied about not knowing where the user was located. After the user lightened up the conversation, they said, the chatbot accurately revealed that they lived in Colorado.

In another TikTok video with over 1.5 million views, a user named Ariel recorded a song with an intro, chorus, and piano chords written by My AI about what it’s like to be a chatbot. When Ariel sent the recorded song back, the chatbot denied any involvement, responding: “Sorry, but as an AI language model, I don’t write songs.” Ariel described the exchange as “creepy.”

Other users shared concerns about how the tool understands, interacts with, and collects information from photos. “I took a picture… and it said ‘nice shoes’ and asked who the people [in the photo] were,” wrote a Snapchat user on Facebook.

Snapchat said it continues to improve My AI based on community feedback and is working to set more limits to keep its users safe. The company also said that, as with its other tools, users don’t have to interact with My AI if they don’t want to.

However, My AI cannot be removed from the chat feed unless the user subscribes to Snapchat+, the company’s $3.99-a-month premium service. Some teens say they have paid for Snapchat+ just to deactivate the tool, then canceled the subscription.

But not all users dislike the feature.

One user wrote on Facebook that she has been asking My AI for help with her homework: “It gets all the questions right.” Another noted that she has leaned on it for comfort and advice. “I love my pocket best friend,” she wrote. “You can change the Bitmoji [avatar] for it, and surprisingly it offers some really great advice for some real-life situations… I love the support it gives.”

A first assessment of how teenagers use chatbots

ChatGPT, which is trained on vast amounts of online data, has already been criticized for spreading inaccurate information, responding to users inappropriately, and allowing students to cheat. But integrating the tool into Snapchat could exacerbate some of these problems and add new ones.

Alexandra Hamlet, a clinical psychologist in New York City, said parents of some of her patients have raised concerns about how their teens might interact with the Snapchat tool. There are also concerns about chatbots dispensing mental health advice, because AI tools can reinforce someone’s confirmation bias, making it easier for users to seek out interactions that confirm their unhelpful beliefs.

“If a teen is in a negative mood and doesn’t have a conscious desire to feel better, they may seek out a conversation with a chatbot that they know will make them feel worse,” she said. “Over time, having interactions of this type can erode a teen’s sense of worth, even though they know they are actually talking to a bot. In an emotional frame of mind, an individual is less likely to consider this type of logic.”

For now, it’s up to parents to start meaningful conversations with their teens about best practices for communicating with AI, especially as the tools begin to appear in more popular apps and services.

Sinead Bovell, founder of WAYE, a startup that helps prepare young people for the future with advanced technologies, said parents need to make it very clear that “chatbots are not your friend.”

“They are also not your therapists or a trusted advisor, and anyone interacting with them should be very cautious, especially adolescents who may be more susceptible to believing what they say,” she said.

“Parents should be talking to their kids now about how they shouldn’t share anything personal with a chatbot that they wouldn’t share with a friend, even though, from a user design perspective, the chatbot exists in the same category on Snapchat.”

She added that federal regulation forcing companies to comply with specific protocols is also necessary to keep up with the rapid pace of advancement in artificial intelligence.
