The Online Safety Act Is Changing the Internet for Kids — But Families Say It’s Still Not Enough

Filip TRUȚĂ

May 06, 2026


The UK’s Online Safety Act was supposed to mark a turning point for child protection online. Platforms are now legally bound to reduce harmful content, improve reporting systems, and use stronger age checks to keep children away from dangerous material. But new research from Internet Matters suggests many families are still waiting to see meaningful change.

The UK-based nonprofit's findings reveal a complicated reality: while parents and children are noticing some improvements, harmful content remains widespread, and many safety measures are still easy to bypass – sometimes with nothing more than a drawn-on moustache.

Key takeaways

  • Families are beginning to notice new online safety features introduced under the UK Online Safety Act.
  • Children still encounter harmful content at alarming rates despite new regulations.
  • Many age-verification systems are viewed as ineffective or easy to circumvent.
  • Parents want stronger enforcement and clearer accountability for platforms.

Changes brought by the Online Safety Act

The Online Safety Act 2023 targeted social media companies, search engines, and online platforms operating in the UK. Under the law, companies must identify risks to children, reduce exposure to harmful content, and provide safer online experiences.

The legislation also gave the regulator, Ofcom, the authority to investigate non-compliant platforms and impose significant penalties. Platforms that fail to meet their safety obligations can face fines of up to 10% of global annual revenue or £18 million, whichever is greater.

Key requirements include:

  • Stronger moderation of harmful content
  • Better reporting and complaint systems
  • Age-assurance mechanisms for adult or dangerous content
  • Protections against harmful material
  • Greater transparency around platform safety measures

For many parents, the OSA represents long-overdue action after years of children encountering harmful material online.

Families are seeing some positive changes

Internet Matters surveyed families shortly after the Act’s child safety protections came into force. The report found that many parents and children have already noticed visible changes across online platforms.

  • 68% of children and 67% of parents reported seeing more safety features such as ways to report and filter content
  • Some 53% of children say they were recently asked to verify their age on platforms
  • 39% of parents and 42% of children say being online has become safer recently
  • 54% of children report that the content they have seen online recently is more child friendly

Where the OSA falls short

The research also shows that visible changes do not necessarily translate into safer experiences.

  • 46% of children think that age checks are easy to bypass.
  • A third (32%) of children have used methods such as entering a fake birthday, or even drawing on facial hair, to bypass age checks.
  • 26% of parents have allowed their child to bypass age checks.
  • Children continue to encounter harm online, with 49% saying they have experienced harm online in the past month.
  • Parents and children remain worried about how much time children spend online and the rise of AI-generated content.

Harmful content is still out there

Despite the new rules, children continue to encounter harmful material online at concerning levels – even when signed in with accounts registered to young children. The study highlights ongoing exposure to:

  • Violent or disturbing content
  • Bullying and harassment
  • Misogynistic material
  • Self-harm and eating disorder content
  • Dangerous online challenges
  • Explicit material

Internet Matters concluded that the Online Safety Act has “not delivered the step change needed” to significantly improve children’s online wellbeing.

One major concern involves age verification systems. While they have become more common, many children find them easy to bypass using fake birthdates, VPNs, or alternative accounts. Parents expressed frustration that platforms often place too much responsibility on families instead of building safer systems by default.

Here are some key findings, in the words of parents and children themselves:

'I'm definitely supportive [of new rules] but what worries me is how effective they can really be.' – Mum of boy, 16
'Lots of my friends on TikTok have age restrictions on their profile, so they can't message people or share videos with them.' – Girl, 14
'I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old.' – Mum of boy, 12
'I think it’s good [age verification] so people not the right age can’t get onto like, gambling stuff.' – Girl, 13
'A blanket ban on social media is going to be a lot more effective than the sorts of things they're trying to do at the moment.' – Dad of girl, 11
'I have helped my son get around them. It was to play a game, and I knew the game, and I was happy and confident that I was fine with him playing it.' – Mum of non-binary child, 13
'I know WhatsApp's quite safe because you have to properly enter their number, and if it's just a random person messaging you usually there's a really quick block button.' – Boy, 13
'On WhatsApp and Snapchat, they have these group chats with kids from other schools my child won't even know, and they can say some horrible, nasty things.' – Dad of girl, 11
'I definitely say I spend a lot of time on my phone. I'm on it at 3AM on a school night.' – Girl, 16

The challenge of regulating a moving target

The Online Safety Act is still in its early implementation phase, and Ofcom continues rolling out guidance and enforcement measures. More reviews, compliance checks, and child safety assessments are expected throughout 2026 and beyond.

But regulators face a serious challenge: internet platforms evolve far faster than legislation.

New AI tools, encrypted messaging systems, anonymous social apps, and algorithmic recommendation models are constantly reshaping online experiences for young users. Safety policies that work today may be outdated tomorrow.

Meanwhile, platforms must balance child protection efforts against privacy concerns, freedom of expression debates, and technical limitations around age verification.

That tension means the Online Safety Act is unlikely to be the final word on online safety regulation.

Internet Matters CEO Rachel Huggins said in the report:

'Within this context, parents continue to shoulder much of the responsibility for keeping children safe in an increasingly complex digital environment. While families recognise their role, they are clear that they cannot do this alone. Stronger action is needed from both government and industry to ensure that children can only access online services appropriate for their age and stage and where safety is built in from the outset, rather than added in response to harm. The UK Government is currently consulting on what more can be done to keep children safe in a digital world, offering a timely opportunity for positive change. Although views may differ on what action is needed, the goal is clear: to support children to thrive online while delivering meaningful protection from harm.'

What parents can do to protect their kids online

Even with stronger regulations, parents still play a critical role in helping children navigate digital spaces safely. Some practical steps include:

Keep conversations open

Children are more likely to report harmful experiences when they feel they can discuss their online encounters without fear of punishment.

Review privacy and safety settings

Many apps now offer improved parental controls, screen time tools, and content filtering options.

Teach critical thinking

Help children recognize manipulation tactics, misinformation, online scams, and algorithm-driven recommendations.

Watch for emotional changes

Sudden anxiety, withdrawal, secrecy, or mood changes can sometimes indicate harmful online experiences.

Use layered protection

Security software, parental controls, and safe browsing tools can add safeguards beyond platform-level protections. Bitdefender Family Security plans offer parental controls alongside robust threat protection, helping families guard against risky content and unsafe sites.

You may also like to read:

Safer Messaging for Kids: How to Set Up a Parent-Managed WhatsApp Account for Your Child

Discord Delays New Age Verification Rollout After Backlash

How Kids Bypass Age Verification Online and What Families Can Do About It

Author


Filip TRUȚĂ

Filip has 17 years of experience in technology journalism. In recent years, he has focused on cybersecurity in his role as a Security Analyst at Bitdefender.
