
Privacy groups are urging regulators to halt Meta’s plan to use your AI conversations for advertising. The move highlights a broader trend: Big Tech is quietly turning your words into AI training data.
Under the proposal, starting Dec. 16, Meta would harvest interactions between users and its suite of AI chatbots across Facebook, Instagram, and WhatsApp. These exchanges — meant to mimic casual, friendly conversations — would become training data for Meta’s algorithms and a new source of behavioral insight to fine-tune the ads users see.
Meta’s move has triggered a major backlash from privacy advocates.
A coalition of more than 30 digital rights and civil liberties organizations — including the Electronic Privacy Information Center (EPIC), Public Citizen, and the Center for Digital Democracy — has called on the U.S. Federal Trade Commission (FTC) to block Meta’s plan to use chatbot conversations for advertising and content personalization.
In its letter, the coalition urges the FTC to intervene before the Dec. 16 rollout takes effect.
While traditional data collection involves tracking what you like, follow, or buy, AI chat data goes deeper. It is conversational, contextual, and often emotional. It may reveal mental health struggles, relationship issues, or financial worries.
That’s the kind of content advertisers dream of accessing — but it’s also the kind of information users never expect to be commercialized.
As the coalition warned, allowing this data to fuel ad targeting would normalize a dangerous erosion of privacy under the guise of AI innovation.
“Without FTC intervention […] Meta’s actions will normalize invasive AI data practices across the industry, further undermining consumer privacy and protection,” reads the press release published by epic.org.
Privacy experts warn that this marks a fundamental shift in how personal data is monetized. Chatbots are designed to feel human and intimate. Users may share thoughts, frustrations, and details they’d never post publicly. Turning those chats into data points for ad targeting could open a new chapter in AI-driven surveillance marketing.
“The FTC has a sordid history of letting Meta off the hook, and this is where it’s gotten us: industrial-scale privacy abuses brought to you by a chatbot that pretends to be your friend,” said John Davisson, Director of Litigation for EPIC. “[…] It’s time to get serious about reining in Meta.”
“Chatbot surveillance for ad targeting is not a distant threat — it is happening now,” said Katharina Kopp, Deputy Director, Center for Digital Democracy (CDD). “Meta’s move will accelerate a race in which other companies are already implementing similarly invasive and manipulative practices, embedding commercial surveillance deeper into every aspect of our lives.”
The FTC has previously clashed with Meta over privacy violations and deceptive practices, so the watchdogs hope the agency will intervene before the feature launches. But history suggests users shouldn’t rely solely on regulators to protect their personal data.
Meta isn’t the only tech giant expanding its data collection repertoire for the AI era. LinkedIn recently began using user profile data and public posts to train its generative AI systems and improve advertising performance — a policy users can reject, but only through a manual opt-out.
Read: LinkedIn gives you until Monday to stop AI from training on your profile
These moves fit a clear pattern: AI enrichment through data expansion. As companies race to develop smarter algorithms and personalized experiences, they’re feeding their models with every scrap of content users create — posts, messages, images, and now even private-seeming conversations.
For the average consumer, that means AI training is no longer something that happens elsewhere. It’s happening on your screen, in your chat, and around your digital identity — often without clear consent.
You can’t stop Big Tech from pursuing AI enrichment, but you can limit how much of your data becomes training fodder: review each platform’s privacy and AI settings, use opt-outs where they’re offered, and avoid sharing sensitive personal details in AI chats.
Meta’s chatbot data plan is more than a policy tweak — it’s a preview of how Big Tech envisions the future of AI: deeply integrated, endlessly hungry for human input, and monetized at every turn.
The social network frames the move as “personalization,” but the reality is that conversations will become commodities, mined for patterns that make ads more persuasive and platforms more profitable.
You may also want to read:
YouTube’s New AI Tool Fights Deepfakes — But Creators Still Need Real Protection
Tinder Rolls Out ‘Face Check’ to More US States – Here’s What It Means for You
Filip has 15 years of experience in technology journalism. In recent years, he has turned his focus to cybersecurity in his role as Information Security Analyst at Bitdefender.