
A New Report Documents How Easily Children Can Access Graphic Images of the War

Violent, distressing imagery related to the conflict between Hamas and Israel, including graphic posts showing dead children and adults, is easily accessible to young users on platforms such as Instagram, researchers have found.

The Institute for Strategic Dialogue, a research group that studies online platforms, created accounts on Instagram, TikTok and Snapchat under the guise of British 13-year-olds. Within a 48-hour period from Oct. 14 through 16, the researchers said, they found more than 300 problematic posts. More than 78 percent of the posts were on Instagram, and about 5 percent were on Snapchat. The figures were released in a report on Wednesday.

The researchers said they switched on Instagram’s Sensitive Content Control feature and TikTok’s Restricted Mode — which are meant to shield young users from potentially risky material — before running their searches.

Despite policies and features meant to protect young people, who are increasingly online, the researchers found that grisly content was not difficult to find: 16.9 percent of the posts that surfaced when searching for the “Gaza” hashtag on Instagram were graphic or violent, compared with 3 percent on TikTok and 1.5 percent on Snapchat. TikTok’s search function was sometimes automatically populated with phrases like “Gaza dead children” and “dead woman Gaza,” the researchers found.

“In times of conflict, where misinformation and disinformation run rampant, it becomes even more critical to safeguard young people from the potential emotional impact of such material, and provide the support necessary to process and contextualize this type of content,” Isabelle Frances-Wright, an author of the report, said in an emailed statement.

Meta, which owns Instagram, addressed its efforts to balance safety and speech in a blog post about the war on Friday. It noted that it had established a special operations center with expert monitors working in Hebrew and Arabic, who removed or flagged more than 795,000 pieces of harmful content in the first three days of the conflict. The company also said that Instagram allows users to control how much sensitive content is recommended to them.

In its own blog post last weekend, TikTok said it had also opened a command center and added more Arabic- and Hebrew-speaking moderators, removing more than 500,000 videos and closing 8,000 livestreams since Hamas’s attack on Oct. 7. The platform said it was automatically detecting and removing graphic and violent content, placing opt-in screens over disturbing images and adding restrictions to its livestreaming function amid the hostage situation.

Snapchat’s parent company, Snap, said in a statement that it was “continuing to rigorously monitor” the platform and “determining any additional measures needed to mitigate harmful content.” The platform does not have an open newsfeed or livestreaming abilities, which helps keep harmful content from going viral, the company said.

Amid a flood of posts about the war, some schools have urged parents to delete their children’s online accounts to shield them from Hamas’s attempts at psychological warfare. (Hamas accounts have been blocked by platforms like Instagram and TikTok but remain active on Telegram.) The chief executive of the parental monitoring app BrightCanary told USA Today that online searches for hostages among users ages 9 to 13 had surged 2,800 percent in recent days.

Thierry Breton, an official with the European Commission who works on issues such as disinformation and digital regulation, sent letters last week urging TikTok, Meta and X, the platform formerly known as Twitter, to mitigate a surge of false and violent images from the conflict in the Middle East.
