Instagram's tools designed to protect teenagers from harmful content are failing to stop them from seeing suicide and self-harm posts, a study has claimed.

Researchers also said the social media platform, owned by Meta, encouraged children 'to post content that received highly sexualised comments from adults'.

The testing, conducted by child safety groups and cyber researchers, found 30 out of 47 safety tools for teens on Instagram were 'substantially ineffective or no longer exist'.

Meta has disputed the research and its findings, claiming its protections have resulted in teens seeing less harmful content on the platform.

According to a spokesperson from Meta, 'This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today.'

They emphasized that 'Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls.'

The company introduced teen accounts on Instagram in 2024, aiming to strengthen protections for young users and give parents greater oversight, and extended them to Facebook and Messenger in 2025.

The US research centre Cybersecurity for Democracy found significant issues with the tools after setting up fake teen accounts. Of the 47 safety tools analysed, only eight were found to be functioning effectively, meaning teens were still exposed to content that violated Instagram's own guidelines.

Examples of problematic content included posts detailing 'demeaning sexual acts' and autocomplete suggestions for searches related to suicide and eating disorders.

Andy Burrows, the CEO of the Molly Rose Foundation, which advocates for online safety laws, said the findings reflected a corporate culture at Meta that prioritises engagement over safety.

The researchers also found children who appeared to be under the age of 13 posting videos, and said the platform's algorithm incentivised risky behaviour.

Despite repeated commitments from Meta to improve child safety, experts such as Dr Laura Edelson argue that the existing tools still have significant shortcomings, describing them as a 'PR stunt' rather than a genuine effort to mitigate safety risks.