- Rachel de Souza, Children’s Commissioner for England
Throughout my time as Children’s Commissioner, I have heard from a million children and young people about their hopes, ambitions, and concerns. An issue which frequently comes up in these conversations is how they can spend time online safely and protect themselves from distressing or harmful content. Children are digital pioneers, and the adults in charge of online platforms should put the protection of children before profit.
As a former teacher and headteacher, I have witnessed the spike in children’s time spent online during the past 20 years. Research conducted by Ofcom in 2021 found that by the age of eight, a child would typically spend 2 hours and 45 minutes online a day. The figure rises to more than four hours a day by ages 11 to 12.1 Just this week, my own nationally representative poll of children aged eight to 15 backed these findings: 25% of children spent two to three hours a day using an internet-enabled device such as a computer, smartphone, tablet, or gaming console, and 23% spent more than four hours a day on such a device.2
In my four years as Children’s Commissioner, I have been deeply shocked to hear how the online world is harming children and young people. Children have told me that they frequently see harmful and illegal content online (including pornography and content promoting suicide, self-harm, and eating disorders), as well as abusive content, anonymous trolling, and hate content (including racism and sexism).3
Girls as young as nine have told my team about strangers asking for their home address online. In a room of 15 and 16 year olds, three quarters had been sent a video of someone being beheaded.4 The average age a child first sees pornography is just under 13, with 10% of children and young people aged 16 to 21 saying they had seen online pornography by the age of nine.5
My research found that children are frequently exposed to a range of inappropriate and harmful content.4 Around half of the children I surveyed told me that they had seen online content that they felt was inappropriate or that made them feel worried or upset. Boys were more likely to have seen harmful content. Children will always be curious, but most told me that they rarely sought out this content. Instead, it was promoted and offered to them by complex recommendation algorithms designed to capture and retain their attention.4
In recent months there has been much debate about the role of mobile phones in children’s lives, and many calls to ban them, particularly in schools. I wanted to know what is actually happening, so I used my statutory powers to conduct the largest survey of schools and colleges on the mobile phone policies in place today.
My landmark research, with responses from 19 000 schools and colleges, representing nearly 90% of schools in England, has provided comprehensive evidence on mobile phone policies in schools. The findings are clear: most schools (99.8% of primary and 90.0% of secondary) already have policies in place that limit or restrict the use of mobile phones during the school day.2 Despite these policies, schools told me in the same survey that they remain deeply concerned about children’s online safety. The hours that children spend on screens each day are not being spent during school hours. If we want to protect children, we need to turn our attention and our energy to keeping them safe online when they are not under the rules of their teachers.
Outside school, children are being left to explore the internet unsupervised and unprotected. Parents often believe that parental controls, combined with in-app features such as reporting and blocking, will keep children safe online. But new platforms are constantly emerging and existing ones constantly changing, so even the most tech-savvy parents are still learning how to navigate the online world that their children experience.
We cannot rely solely on these functions to keep children safe. Because content is so easily shared, children can still be exposed to harmful or distressing material by friends, social media groups, or messaging apps, even with parental controls in place. This underlines the importance of regulation in keeping children safe, and the need for safety measures to be built into platforms and accounts by default.
Tech companies are keen to promote their platforms’ safety features to reassure parents, but young people still often see harmful content online, whether they look for it or not. Keeping children safe online is paramount. Social media and the online world are an integral part of young lives today. This is a digital native generation who have never known a world without social media, smartphones, and instant, 24 hour communication.
The online world has brought many benefits. Children and young people tell me how they can connect and learn from one another. However, it is increasingly clear that the online world, where so much of childhood is now spent, is not designed with children’s safety, wellbeing, and best interests in mind.
My young ambassadors have highlighted the need for better protections online. They have joined me to speak to tech companies about the inappropriate and harmful content children have seen on social media platforms, how easy it is to access it, and how difficult it is to report it.
Children’s experiences highlight the need for better regulation that acknowledges their concerns and acts on their views. Ofcom recently announced guidance, under the broader Online Safety Act, that requires robust age verification measures to prevent children from accessing harmful online content, particularly pornography.6
This means that services hosting or allowing access to pornography will have to introduce effective age assurance, such as photo identification matching, facial age estimation, or credit card checks to verify whether a user is an adult. Further evidence is needed to check that these measures have been adopted and are working effectively.
Ofcom’s age checks are expected to be in place by July 2025.6 The measures are welcome, but their implementation must be robust, and Ofcom has a duty to hold tech companies accountable for enforcing these protections so that children are kept safe online.
The Online Safety Act has the potential to keep children safe.7 Now, it needs to be implemented in a way that keeps pace with an evolving online world. Ofcom is rolling out its implementation of the Act—platforms will be required to carry out risk assessments and put safety measures in place. The tech companies will need to assess the function of their platforms, the algorithms that recommend content, and even their company governance for any risks they might present to children. We cannot continue to think of the internet in terms of adult spaces and children’s spaces—it is a shared space. We must recognise the potential risks of harm to younger users. I have been clear in my discussions with politicians, Ofcom, and social media and tech companies that children’s online safety must be their top priority.
We have the chance to make important steps towards protecting children online. Every incoming measure in the Act is an opportunity to safeguard children, and each measure must be robust and ambitious.
Footnotes
Competing interests: None declared.
Provenance and peer review: Commissioned, not externally peer reviewed.