Instagram on Tuesday announced its biggest attempt yet to protect young users from the dangers on its platform, rolling out new “teen accounts” settings that will automatically make millions of teen accounts private and restrict the type of content those users can see on the app.
The change in the way Instagram allows teens to use its platform comes nearly three years after the explosive “Facebook Papers” brought attention to the risks the platform poses to young users.
The new restrictions are also designed to force teens to adopt parental supervision through the app. Instagram will automatically apply the new “teen accounts” settings to all users under 18. After the update, users ages 16 and 17 will be able to manually change the app’s settings to their preferences, but users ages 13 to 15 will need to get parental approval for any changes.
The new “teen accounts” settings build on more than 30 parental monitoring and wellness tools that parent company Meta has rolled out over the past few years, such as reminders to “take a break” and restrictions on “age-inappropriate” content and posts about eating disorders. Despite those updates, the company has continued to face criticism for placing too much responsibility on parents and, in some cases, on teens themselves. Its parental monitoring tools, for example, relied on teens telling their parents they were on the app.
Pressure on Meta to do more to protect teens increased again after Arturo Bejar, a former Facebook employee turned whistleblower, told a Senate subcommittee hearing in November that Meta’s top executives, including CEO Mark Zuckerberg, had for years ignored warnings about harms to teens on its platforms.
Court documents from recent lawsuits against the company have also alleged that Zuckerberg repeatedly thwarted teen wellness initiatives, that Meta knowingly refused to close accounts belonging to children under 13, and that the company has enabled accounts of child predators.
At a Senate hearing in January, Zuckerberg apologized to families who said their children had been harmed by social media.
Meta says the latest changes are intended to “address parents’ biggest concerns: who teens are talking to online, what content they’re viewing, and whether their time is being well spent.”
The “teen accounts” update means that accounts of users under 18, both new and existing, will automatically be set to private and placed on the strictest messaging settings. The overhaul will allow teen users to receive messages only from people they are already connected to. Instagram will also limit who can tag teens in photos or mention them in comments, restricting both to people they follow.
Additionally, teens will be placed under Instagram’s most restrictive content control settings. The change limits the types of “sensitive” content teens can see on their Explore page and in Reels, such as posts promoting cosmetic procedures.
Instagram had already begun implementing this strategy in a more limited way earlier this year.
Teen users will also receive time limit reminders urging them to exit after spending one hour on the app each day. And the app will default to “sleep mode,” silencing notifications and sending automatic replies to direct messages between 10 p.m. and 7 a.m.
Instagram plans to apply the changes to all teen accounts in a number of countries, including the United States, starting next week.
The app will also add new features to its parental monitoring tool, allowing parents to see which accounts their teen has recently messaged, set total daily time limits for teens’ Instagram use, block teens from using Instagram at night or during other specific time periods, and see the topics their teen is viewing content about.
The changes are expected to roll out to all teen accounts in the US, UK, Canada and Australia within the next 60 days, before rolling out to other countries later this year and next.
But the effectiveness of some of the changes may be hampered by a simple truth: Meta has no way of knowing for sure whether the person monitoring a teen’s account is actually a parent rather than, say, an older friend. Meta does not formally verify parents; instead, a spokesperson said, it relies on signals such as an adult user’s birth date and how many other accounts they already monitor to decide whether that adult should be allowed to supervise a teen’s account.
Meta has also long faced criticism for not doing more to prevent teens from lying about their age when creating a new account to bypass security restrictions.
The company says it is deploying artificial intelligence technology that will aim to identify teen accounts that may have incorrectly listed an adult birth date.
Meta says the new features were developed in consultation with its Safety Advisory Board, comprised of independent online safety experts and organizations, and a group of youth advisors, as well as feedback from other teens, parents and government officials.