Every child surveyed by the media regulator had viewed violent content online.
Research from the media watchdog reveals that violent online content is now “inescapable” for UK children, with many encountering it during their primary school years.
Each British child interviewed in the Ofcom study had viewed violent content online, ranging from videos of local school and street altercations shared in group chats to explicit and extreme graphic violence, including gang-related material.
Children were aware that even more extreme content existed in deeper parts of the web but said they had not actively searched for it, the report stated.
These results led the NSPCC to criticize tech platforms for neglecting their responsibility to protect young users.
Rani Govender, the NSPCC’s senior policy officer for child safety online, said: “It’s alarming that children are indicating that encountering violent content unintentionally has become a regular occurrence in their online experiences. It’s unacceptable that algorithms persist in promoting harmful content, which we are aware can result in severe mental and emotional repercussions for young individuals.”
The study, carried out by the Family, Kids and Youth agency, forms part of Ofcom’s groundwork for its enhanced responsibilities under the Online Safety Act, enacted last year. This legislation empowers the regulator to take action against social networks that fail to safeguard their users, particularly children.
Gill Whitehead, Ofcom’s director of online safety, emphasized, “Children should not perceive seriously harmful content – such as material depicting violence or advocating self-injury – as an inevitable or unavoidable aspect of their online lives. Today’s research conveys a strong message to tech companies that the time to act is now, ensuring they are prepared to fulfill their child protection obligations under the new online safety laws. Later this spring, we will seek input on our expectations for the industry to ensure children can access a safer online experience appropriate for their age.”
The children and young people surveyed by Ofcom named nearly every major tech company, with Snapchat and Meta’s apps Instagram and WhatsApp mentioned most frequently.
“Children described how there were private, often anonymous, accounts solely dedicated to sharing violent content – typically involving local school and street altercations,” the report states. “Almost all of the children in this study who engaged with these accounts reported finding them on either Instagram or Snapchat.”
Children were often reluctant to report violent content. In private conversations, they worried that reporting it would mark them out as “informants”, potentially leading to embarrassment or retaliation from peers. They also lacked confidence that platforms would impose meaningful consequences on those sharing violent content.
The emergence of powerful algorithm-driven feeds, such as those on TikTok and Instagram, added another layer of complexity. Children widely believed that engaging with violent content, even in the act of reporting it, could increase the likelihood of it appearing in their recommendations.
Professionals involved in the study expressed concerns about the impact of violent content on children’s mental health. A separate report, released on Thursday by the children’s commissioner for England, revealed that more than 250,000 children and young people were awaiting mental health support after being referred to NHS services, meaning one in every 50 children in England was on the waiting list. Among those who accessed support, the average waiting time was 35 days, but in the past year nearly 40,000 children endured waits of more than two years.
In response, a spokesperson from Snapchat stated, “There is absolutely no tolerance for violent content or threatening behavior on Snapchat. When such content is discovered, we act swiftly to remove it and take appropriate action against the responsible account.”