Anthropic Shifts Privacy Stance, Lets Users Share Data for AI Training

Vlad CONSTANTINESCU

September 01, 2025


Individual Claude users can now opt in to having their chats improve Anthropic’s AI models.

Policy pivot on data usage

Anthropic, the AI startup known for championing user privacy, is revising its stance on data collection. Since launching Claude, the company has set itself apart by promising not to use consumer conversations to train its systems. That policy is now changing: individual users on Claude’s Free, Pro, and Max plans will be asked whether they wish to share their data to help refine the chatbot.

In an update to its consumer terms and privacy policy, Anthropic explained that the move aims to make models more effective and secure. Unlike industry peers that have historically collected user data by default, Anthropic is framing this as a voluntary contribution.

Who is affected by the change

The adjustment applies only to individual consumer plans, leaving commercial customers untouched. Claude for Work, Claude Gov, Claude for Education, and API usage through Amazon Bedrock and Google Cloud’s Vertex AI will keep the existing privacy protections.

This means enterprise and institutional customers remain insulated from the new data practices. For everyday users, however, deciding whether to contribute conversations will now be part of signing up for or continuing to use the service.

How to manage data preferences

Existing users will be prompted with an on-screen toggle labeled “You can help improve Claude.” Accepting allows future chats and coding sessions to be used for training, while declining keeps the previous data retention policy in place. Either way, users can revisit their privacy settings later and reverse the decision.

The deadline for setting preferences is Sept. 28, after which continuing to use Claude requires making a choice. Notably, opting in allows Anthropic to retain contributed data for up to five years, compared with the 30-day retention window for those who opt out. Deleted conversations will not be used for training under any circumstances.

Training data collection, an industry trend

Anthropic emphasized that collected data will never be sold to third parties and will be filtered to reduce exposure of sensitive information. Still, the update points to an undeniable trend in the AI industry: companies are increasingly seeking user data to fuel improvements.

Anthropic’s decision comes after the company’s Threat Intelligence report revealed that Claude’s AI-powered coding tool, Claude Code, has been exploited in sweeping cyber-extortion campaigns. A criminal operation dubbed GTG-2002 weaponized the AI to conduct reconnaissance, network penetration, credential harvesting and ransom negotiation across at least 17 organizations in healthcare, government and emergency services.
