Anthropic Claude’s New File Feature Raises Security Red Flags

Vlad CONSTANTINESCU

September 11, 2025

Promo
Protect all your devices, without slowing them down.
Free 30-day trial
Anthropic Claude’s New File Feature Raises Security Red Flags

Anthropic warns that its latest productivity tool could expose sensitive user data to threat actors.

Claude taps into file creation

A major update to Anthropic’s Claude AI platform allows the chatbot to create and edit Word documents, Excel spreadsheets, PowerPoint slides, PDFs and other common business documents. The tool, available through Claude’s website and desktop apps for Windows and macOS, promises faster, more flexible content generation.

The feature is initially limited to Claude Max, Team and Enterprise users, with the Pro version to roll out in the coming weeks. To test it, subscribers must enable “Upgraded file creation and analysis” under the experimental settings tab.

Security concerns from Anthropic itself

Anthropic has acknowledged that the feature is not without risk. The tool requires some internet access to fetch code libraries, and threat actors could exploit this through tactics like prompt injection. Such attacks may trick Claude into running arbitrary code or pulling sensitive information from connected sources.

The AI operates in a sandboxed environment with restricted network access. However, security researchers note that even controlled environments can sometimes be manipulated to leak data. Anthropic itself cautions that the feature “may put your data at risk.”

Mitigation measures and user burden

To counter these threats, Anthropic has implemented a variety of safeguards for individuals:

  • The ability to freely toggle the feature on and off
  • Monitoring Claude’s actions in real time while using the feature (highly recommended)
  • Reviewing and auditing completed tasks
  • Restricting the duration of sandbox sessions
  • Capping network and storage resources to limit the potential damage of any exploit
  • Implementing a detection mechanism for prompt injections that stops execution upon detecting them

Still, much of the responsibility lies with users. Anthropic recommends stopping Claude immediately if it behaves unexpectedly and reporting suspicious activity. Critics argue that this leaves individuals and organizations to police their own use of the tool rather than benefiting from stronger built-in safeguards.

Proceeding with caution

Anthropic says it has conducted extensive red-teaming and ongoing security testing but advises organizations to evaluate the feature against their own standards before adopting it. For home users, the best defense may be the simplest: avoid feeding Claude personal or confidential information altogether.

These warnings also come in light of broader debates over Anthropic’s handling of security and privacy. The company recently shifted its stance to allow sharing of users’ conversations for AI training, raising questions about how the platform may be storing and reusing sensitive data. Additionally, researchers identified a global extortion campaign in which cybercriminals weaponized Claude’s coding capabilities to perform reconnaissance, data theft and even ransom negotiations.

tags


Author


Vlad CONSTANTINESCU

Vlad's love for technology and writing created rich soil for his interest in cybersecurity to sprout into a full-on passion. Before becoming a Security Analyst, he covered tech and security topics.

View all posts

You might also like

Bookmarks


loader