
The UK’s Online Safety Act was supposed to mark a turning point for child protection online. Platforms are now legally bound to reduce harmful content, improve reporting systems, and use stronger age checks to keep children away from dangerous material. But new research from Internet Matters suggests many families are still waiting to see meaningful change.
The UK-based nonprofit reveals a complicated reality: while parents and children are noticing some improvements, harmful content remains widespread, and many safety measures are still easy to bypass – sometimes with a trick as simple as a drawn-on mustache.
The Online Safety Act 2023 targeted social media companies, search engines, and online platforms operating in the UK. Under the law, companies must identify risks to children, reduce exposure to harmful content, and provide safer online experiences.
The legislation also gave regulator Ofcom the authority to investigate non-compliant platforms and impose significant penalties. Platforms can now face fines of up to 10% of global revenue if they fail to meet safety obligations.
Key requirements include carrying out risk assessments to identify dangers to children, applying stronger age checks, improving reporting systems, and reducing children's exposure to harmful content.
For many parents, the OSA represents long-overdue action after years of children encountering harmful material online.
Internet Matters surveyed families shortly after the Act’s child safety protections came into force. The report found that many parents and children have already noticed visible changes across online platforms.
The research also shows that visible changes do not necessarily translate into safer experiences.
Despite the new rules, children continue to encounter harmful material online at concerning levels – even when logged in as young children. The study highlights that exposure to this content remains an ongoing problem.
Internet Matters concluded that the Online Safety Act has “not delivered the step change needed” to significantly improve children’s online wellbeing.
One major concern involves age verification systems. While they have become more common, many children find them easy to bypass using fake birthdates, VPNs, or alternative accounts. Parents expressed frustration that platforms often place too much responsibility on families instead of building safer systems by default.
Here are some key findings, in the words of parents and children themselves:
'I'm definitely supportive [of new rules] but what worries me is how effective they can really be.' – Mum of boy, 16
'Lots of my friends on TikTok have age restrictions on their profile, so they can't message people or share videos with them.' – Girl, 14
'I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.' – Mum of boy, 12
'I think it’s good [age verification] so people not the right age can’t get onto like, gambling stuff.' – Girl, 13
'A blanket ban on social media is going to be a lot more effective than the sorts of things they're trying to do at the moment.' – Dad of girl, 11
'I have helped my son get around them. It was to play a game, and I knew the game, and I was happy and confident that I was fine with him playing it.' – Mum of non-binary child, 13
'I know WhatsApp's quite safe because you have to properly enter their number, and if it's just a random person messaging you usually there's a really quick block button.' – Boy, 13
'On WhatsApp and Snapchat, they have these group chats with kids from other schools my child won't even know, and they can say some horrible, nasty things.' – Dad of girl, 11
'I definitely say I spend a lot of time on my phone. I'm on it at 3AM on a school night.' – Girl, 16
The Online Safety Act is still in its early implementation phase, and Ofcom continues rolling out guidance and enforcement measures. More reviews, compliance checks, and child safety assessments are expected throughout 2026 and beyond.
But regulators face a serious challenge: internet platforms evolve far faster than legislation.
New AI tools, encrypted messaging systems, anonymous social apps, and algorithmic recommendation models are constantly reshaping online experiences for young users. Safety policies that work today may be outdated tomorrow.
Meanwhile, platforms must balance child protection efforts against privacy concerns, freedom of expression debates, and technical limitations around age verification.
That tension means the Online Safety Act is unlikely to be the final word on online safety regulation.
Internet Matters co-CEO Rachel Huggins said in the report:
Within this context, parents continue to shoulder much of the responsibility for keeping children safe in an increasingly complex digital environment. While families recognise their role, they are clear that they cannot do this alone. Stronger action is needed from both government and industry to ensure that children can only access online services appropriate for their age and stage and where safety is built in from the outset, rather than added in response to harm. The UK Government is currently consulting on what more can be done to keep children safe in a digital world, offering a timely opportunity for positive change. Although views may differ on what action is needed, the goal is clear: to support children to thrive online while delivering meaningful protection from harm.
Even with stronger regulations, parents still play a critical role in helping children navigate digital spaces safely. Some practical steps include:
Keep conversations open
Children are more likely to report harmful experiences when they feel they can discuss their online encounters without fear of punishment.
Review privacy and safety settings
Many apps now offer improved parental controls, screen time tools, and content filtering options.
Teach critical thinking
Help children recognize manipulation tactics, misinformation, online scams, and algorithm-driven recommendations.
Watch for emotional changes
Sudden anxiety, withdrawal, secrecy, or mood changes can sometimes indicate harmful online experiences.
Use layered protection
Security software, parental controls, and safe browsing tools can add safeguards beyond platform-level protections. Bitdefender Family Security plans offer parental controls alongside robust threat protection, helping families guard against risky content and unsafe sites.
You may also like to read:
Safer Messaging for Kids: How to Set Up a Parent-Managed WhatsApp Account for Your Child
Discord Delays New Age Verification Rollout After Backlash
How Kids Bypass Age Verification Online and What Families Can Do About It
Filip has 17 years of experience in technology journalism. In recent years, he has focused on cybersecurity in his role as a Security Analyst at Bitdefender.