
ChatGPT Now Has Parental Controls: What Parents Can Now Do and What They Can’t

Cristina POPOV

January 19, 2026


OpenAI has rolled out a new set of parental safety tools for ChatGPT teen accounts (ages 13 to 18). The update, which began rolling out at the end of September 2025 and is gradually becoming available worldwide, aims to give parents more visibility and control over how their teenagers use the popular AI chatbot.

But while these new guardrails mark an important step toward safer AI use for families, they also raise new questions about privacy, trust, and how far technology should go in monitoring young users.

What’s new in ChatGPT for teens

Parents can now link their ChatGPT account with their teen's account and set a series of safety preferences. The setup is optional and requires consent from both the parent and the teen, a design choice meant to preserve autonomy and privacy.

Once the link is established, parents can:

  • Reduce or block sensitive content in their teen’s chat experience, including violent, sexual, or overly graphic topics, romantic or role-play conversations, and “extreme beauty ideals.”
  • Set quiet hours, blocking ChatGPT access during certain times (for example, from 8 p.m. to 10 a.m.).
  • Turn off chat memory, voice mode, and image generation, and prevent the teen’s data from being used to train future AI models.

These controls can be found under Settings → Parental Controls in the ChatGPT app or web interface. After sending the invitation to their teen, parents can choose how to receive alerts — by email, text, or push notification.

Related: My Child Is Chatting with ChatGPT. Should I Be Worried?

Safety alerts for worrying prompts

One of the most talked-about changes is ChatGPT’s new review and alert process for self-harm or suicide-related prompts. If a teen writes something concerning, the message is automatically flagged and reviewed by a human moderator.

If moderators confirm the concern, OpenAI will attempt to contact the parent by every available channel within a few hours. The alert does not include the teen’s exact words but instead states that their child may have mentioned self-harm or suicidal thoughts, along with suggested conversation strategies from mental-health experts.

If parents can’t be reached and moderators believe the teen is in danger, OpenAI says it may contact law enforcement, though how this process works internationally remains unclear.

The alerts don’t show full chat logs, and teens can see that their account is connected to a parent. They can also choose to unlink at any time, which would disable the parental alerts and restrictions.

Critics say that this opt-in system limits effectiveness for at-risk teens who might refuse to link their accounts. OpenAI acknowledges that “guardrails help, but they’re not foolproof and can be bypassed if someone tries hard enough.”

Related: TikTok and Roblox Just Added New Safety Features—How They Can Help You Protect Your Child Online

How to enable ChatGPT parental controls

If you’re a parent who wants to try the new settings:

  1. Open ChatGPT and go to your profile icon → Settings → Parental Controls.
  2. Send an invitation to your teen’s ChatGPT account via email.
  3. Once your teen accepts, select their profile under “Family Members.”
  4. Adjust the controls: quiet hours, sensitive content filters, data sharing, and feature restrictions.
  5. Choose how to receive safety alerts (email, SMS, or app notification).

For now, these parental controls are available to users aged 13–18, while children under 13 are not allowed to use ChatGPT. OpenAI also says it’s developing an age-verification system to automatically detect younger users and route them into a more protected experience.

Related: Don't Let Your Child Lie About Their Age in Games. Here's Why.

 

A response to growing concerns

This launch comes after a tragic case in the United States, where a family claimed that conversations their teenager had with ChatGPT contributed to his death. The incident sparked a lawsuit and intensified the public debate over how AI should respond when users express distress or suicidal thoughts. In response, OpenAI accelerated its efforts to build stronger protections for young users and clearer guidelines for parents.

Other AI platforms, like Character.ai, have also introduced parental visibility tools, though they currently stop short of content-specific alerts.

Related: Instagram's AI Detects Teens Who Lie About Their Age and Moves Them into Teen Accounts — Here's What That Means

 

What parents should know going forward

These tools don’t replace real conversations or mental-health support — but they can give parents a starting point. The alerts are designed to signal when something might be wrong, not to replace care, empathy, or professional help.

If you or someone you know is struggling with suicidal thoughts, contact 988 (U.S.) for free, 24-hour support, or visit the International Association for Suicide Prevention for helplines worldwide.

Why broader parental protection still matters

ChatGPT’s new parental controls are a meaningful step toward safer AI for families. Parents can finally shape how their teens interact with generative AI — but only if both sides agree to connect.

That’s why having traditional parental controls in place still matters. Not every platform will offer the same level of safety tools, and you can’t rely on each app your child uses to do the job well. Some, like ChatGPT, are just getting started; others may never include parental features at all.

Bitdefender Parental Control helps parents protect children’s devices and online experiences across all platforms, ensuring a safer balance between independence and protection. With it, you can guide healthy internet time, filter inappropriate content, track location safely, and manage what apps your child can use.

Bitdefender Parental Control is part of the Bitdefender Family Plans, designed to protect every member of the household — children, teens, and even grandparents — from online threats, scams, and privacy risks.

Real digital safety doesn’t come from a single app’s settings. It comes from a consistent layer of protection, communication, and care across everything your family does online.

Find out more about your family safety plan here.

Sources: openai.com, nytimes.com, wired.com

Author


Cristina POPOV

Cristina Popov is a Denmark-based content creator and small business owner who has been writing for Bitdefender since 2017, making cybersecurity feel more human and less overwhelming.
