July 23, 2025
Meta updates safety features for teens, removes more than 600,000 accounts linked to predatory behavior
Meta said it removed more than 600,000 accounts that were linked to predatory behavior on Instagram and Facebook.


Meta on Wednesday introduced new safety features for teen users, including enhanced direct messaging protections to prevent “exploitative content.”

Teens will now see more information about who they’re chatting with, such as when the Instagram account was created, along with safety tips to help them spot potential scammers. Teens will also be able to block and report accounts in a single action.

“In June alone, [teens] blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice,” the company said in a release.

The changes are part of a broader push by Meta to protect teens and children on its platforms amid mounting scrutiny from policymakers, who have accused the company of failing to shield young users from sexual exploitation.

Meta said it removed nearly 135,000 Instagram accounts earlier this year that were sexualizing children on the platform. The removed accounts were found to be leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children.

The takedown also included 500,000 Instagram and Facebook accounts linked to those original profiles.


Meta is now automatically placing teen accounts, as well as adult-managed accounts that represent children, into the strictest message and comment settings, which filter out offensive messages and limit contact from unknown accounts.

Users must be at least 13 to use Instagram, but adults can run accounts representing younger children as long as the account bio makes clear that an adult manages the account.

Several state attorneys general recently accused Meta of implementing addictive features across its family of apps that have detrimental effects on children’s mental health.

Meta announced last week that it had removed about 10 million profiles that impersonated large content producers during the first half of 2025, part of the company’s effort to combat “spammy content.”

Congress has renewed efforts to regulate social media platforms with a focus on child safety. The Kids Online Safety Act, which stalled in 2024, was reintroduced in May.

The measure would require social media platforms to have a “duty of care” to prevent their products from harming children.

New Mexico sued Snapchat in September, alleging the app created an environment where “predators can easily target children through sextortion schemes.”