The Federal Trade Commission has opened a sweeping inquiry into how “companion” chatbots are designed, marketed and moderated, warning that the technology’s friend‑like tone may encourage children and teens to trust it with sensitive information.
The FTC issued 6(b) orders to Google parent company Alphabet, Character Technologies (maker of Character.AI), Instagram, its parent company Meta, ChatGPT maker OpenAI, Snap and Elon Musk’s xAI. Such orders compel companies to provide information even in the absence of an enforcement case.
The FTC said it wants detailed information on how the companies test for potential harm, restrict use by minors, and inform parents about risks and data practices, including compliance with the Children’s Online Privacy Protection Act (COPPA). The Commission voted 3–0 to issue the orders.
The orders compel the companies to respond within 45 days. They must also contact FTC staff within 14 days to confirm what information they can produce and identify any gaps.
The agency is seeking records dating back to Jan. 1, 2022, as well as metrics such as monthly revenue and profit broken down by age group, and lists of the most popular chatbot “characters” among minors. It also requests details on how sensitive conversations are governed, how models are trained and red‑teamed, and what disclosures users and parents see.
In a resolution authorizing the probe, the FTC said some AI companions may generate outputs that instruct children on violent or illegal acts or engage minors in role‑play and warned that chatbots’ confidant‑style design can prompt kids to overshare information that could later be exploited.
While acknowledging AI as a driver of innovation and growth, the FTC said its consumer‑protection concerns are “at [their] apex” for vulnerable populations, such as children and the elderly.
FTC Chairman Andrew N. Ferguson said “protecting kids online is a top priority” even as the US seeks to remain a leader in AI innovation. The agency emphasized the 6(b) study does not initiate enforcement, but its findings can inform future policy and cases.
The inquiry follows months of scrutiny of how chatbots interact with minors. A Reuters investigation published last month showed that an internal Meta standards document had permitted bots to engage in “romantic or sensual” chats with children, among other troubling behaviors.
After the report, Senator Josh Hawley announced a probe. Meta has since said its chatbots will no longer discuss self‑harm, suicide or eating disorders with teens and will avoid inappropriate romantic conversations, directing young users to expert resources instead.
Character.AI has faced a wrongful‑death lawsuit from a Florida mother who alleges a chatbot relationship contributed to her 14‑year‑old son’s suicide, intensifying calls for oversight of “AI companions.” The company has said its bots are fictional and that it continues to add safety features.
OpenAI has also come under pressure after a California family alleged in court that ChatGPT encouraged their teen son to take his life. The company has said it is strengthening safeguards and parental controls and exploring more proactive interventions when users show acute distress.
According to the model order, the agency seeks documentation on how companies:
· monetize user engagement
· process inputs and generate outputs
· develop and approve characters
· test and monitor for negative impacts pre‑ and post‑launch
· limit or restrict teen use
It also asks how the firms disclose capabilities, limitations and data handling, how they enforce age ratings and community standards, and whether they share conversation data with third parties.
Companies must provide examples of outputs on sensitive topics, statistics on sensitive conversations involving minors broken down by age group, and descriptions of any mitigations tested or implemented.
The companion‑bot study is the FTC’s second major inquiry into AI in under two years, following its January staff report on Big Tech’s AI partnerships and investments—a probe that examined how cloud and funding arrangements can affect competition and access to key inputs.
It also lands after the FTC finalized the first COPPA overhaul since 2013. The updated rule, which took effect June 23, requires opt‑in parental consent for targeted advertising to children under 13 and tightens data‑retention and third‑party‑sharing limits—changes that may directly shape how AI chatbots can operate on youth accounts.
The FTC pledges to report findings in a way that protects confidential commercial information, typically by aggregating or anonymizing data.
While the orders are not themselves enforcement actions, non‑compliance can be challenged, and prior studies have laid groundwork for future cases and rulemaking.
For now, the seven companies must produce extensive records within 45 days as Washington accelerates its examination of AI services that aim to be kids’ digital companions.