Instagram's Reels algorithm serves up sexualized content of children: report

Wall Street Journal test accounts that followed only young gymnasts, cheerleaders and youth influencers received recommendations that included ‘risqué footage of children’


Instagram’s Reels, which recommends an endless stream of videos to social media users, is also serving up sexually suggestive content to accounts that follow preteens, according to a new investigation.

The Wall Street Journal found that test accounts that followed only young gymnasts, cheerleaders and teen and preteen influencers received recommendations from Instagram’s algorithm that included “risqué footage of children” and overtly sexual adult videos, served among ads for American brands.


The Journal set up the accounts after observing that “thousands of followers of such young people’s accounts often include large numbers of adult men.” Many of those men, the paper alleged, “also had demonstrated interest in sex content related to both children and adults.”

The newspaper reported that in the stream of recommended videos, a test Instagram account came across an ad for the dating app Bumble. That ad appeared between a video of a person stroking a life-sized latex doll and a video of what the paper calls a “young girl” flashing her midriff to the camera. Another test resulted in a video of a man on a bed with a 10-year-old girl, according to the caption, followed by a Pizza Hut commercial. An ad for Disneyland appeared beside a video described as “an adult acting out having sex with her father.”

The Journal lists multiple companies whose ads appear beside what it calls “inappropriate content.” Those include Disney, Walmart, the dating company Match Group, and Hims, a telehealth company that sells sexual dysfunction medication in addition to offering therapy and hair loss services. The Wall Street Journal’s ads also ran alongside this inappropriate content.


When the Journal contacted several of these companies, they said they would conduct outside audits of their advertising and take the matter up with Instagram’s parent company.

“We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.

Meta, the company that owns Instagram and Facebook, said the tests produced a manufactured experience that does not reflect what the platform’s billions of users see. The company declined, however, to comment on why its algorithms served up a stream of videos of children, sexual content and advertisements.

In October, Meta launched new brand standards that give advertisers more control over where their ads appear. The company also said Instagram removes or reduces the prominence of four million videos monthly.

“Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions,” said Samantha Stetson, a Meta vice president who handles relations with the advertising industry.


A June investigation by the Wall Street Journal found that Meta’s algorithms connect what it alleges are “large communities of users interested in pedophilic content.” Since then, Meta has launched a task force that has expanded its automated systems for detecting suspicious users and takes down tens of thousands of accounts each month.

Jonathan Stray, senior scientist for the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said that “niche content provides a much stronger signal than general interest content.”

The tests, experts said, showed that Meta’s algorithms have learned that some users who follow preteen girls will also want to engage with videos sexualizing children, and therefore suggest that content to them.

The Journal reported that current and former Meta employees said the algorithms’ tendency to aggregate child sexualization content was a known internal problem, and that it was flagged as a risk when the Reels video product launched on Instagram to compete with TikTok.

Both Reels and TikTok use algorithms to show videos that a viewer is believed to be interested in based on their past behaviour, as opposed to only showing content from accounts they follow.


Another set of tests, done by the Canadian Centre for Child Protection, found that Instagram served up videos of children that are known child sexual abuse material.

Lianna McDonald, executive director of the Canadian centre, said social media has changed online child sexual abuse. Children are sexualized on these platforms, generally without nudity, and such posts help recruit more people into private forums with illicit content.

“Time and time again, we’ve seen recommendation algorithms drive users to discover and then spiral inside of these online child exploitation communities,” McDonald said.

