New Delhi: Meta, the parent company of Instagram, unveiled a suite of enhanced safety features on Wednesday aimed at protecting teen users and adult-managed accounts featuring children, as part of its ongoing efforts to combat predatory behaviour and improve platform safety.
The announcement comes amid heightened scrutiny of social media’s impact on young users and follows the removal of over 635,000 accounts linked to exploitative activity.
The new measures target Instagram’s teen accounts and extend protections to accounts run by adults, such as parents or talent managers, that primarily showcase children under 13.
These accounts, often featuring family vlogs or child influencers, must clearly state in their bio that they are adult-managed, as Instagram’s policy prohibits children under 13 from operating their own accounts.
For teen users, Instagram is rolling out improved direct messaging (DM) safeguards to prevent “exploitative content.” Teens will now see detailed safety notices when messaging, including information about the account they’re interacting with, such as its creation date, to help identify potential scammers. A new combined “block and report” button allows teens to take swift action against suspicious accounts with a single tap. Meta reported that in June 2025 alone, teens blocked 1 million accounts and reported another 1 million after viewing safety notices, demonstrating the effectiveness of these prompts.
Teen accounts, which have been private by default since 2024, already restrict direct messages to those from accounts the user follows or is connected to. Meta is also using artificial intelligence to detect users who misrepresent their age; accounts identified as belonging to users under 13 are automatically converted to teen accounts with stricter settings.
For adult-managed accounts featuring children, Meta is implementing protections similar to those for teen accounts. These accounts, often run by parents or talent managers, will now default to Instagram’s strictest messaging settings, limiting contact from unknown users, and activate Hidden Words to filter out offensive comments. A notification will appear at the top of these accounts’ feeds to inform managers of the updated safety settings. Additionally, Meta will restrict access to these profiles for users previously blocked by teen accounts, reducing the risk of inappropriate interactions.
This move addresses concerns about predatory behaviour targeting child-focused accounts, which have been a growing issue. Earlier this year, Meta removed 135,000 Instagram accounts for leaving sexualised comments or requesting explicit images from adult-managed child profiles, alongside 500,000 related accounts on Instagram and Facebook.
The company’s crackdown on predatory accounts follows a broader effort to curb harmful content, including the removal of 10 million profiles impersonating large content creators in the first half of 2025. Meta emphasised its commitment to child safety, stating, “We’re not just improving our tools—we’re cracking down on bad actors at scale.”
The new features build on recent safety initiatives, such as location notices in DMs to flag accounts messaging from different countries and a nudity protection tool that blurs potentially explicit images. As concerns about child exploitation and mental health grow, Meta’s proactive measures aim to restore trust and align with regulatory expectations.
Adam Mosseri, head of Instagram, underscored the importance of these updates, noting that they reflect Meta’s ongoing commitment to creating a safer environment for young users. While the company faces challenges in balancing user engagement with safety, these enhancements signal a step toward addressing the evolving risks of social media for teens and children.