Anthropic warns that its latest productivity tool could expose sensitive user data to threat actors.
A major update to Anthropic’s Claude AI platform allows the chatbot to create and edit Word documents, Excel spreadsheets, PowerPoint slides, PDFs and other common business documents. The tool, available through Claude’s website and desktop apps for Windows and macOS, promises faster, more flexible content generation.
The feature is initially limited to Claude Max, Team and Enterprise users, with the Pro version to roll out in the coming weeks. To test it, subscribers must enable “Upgraded file creation and analysis” under the experimental settings tab.
Anthropic has acknowledged that the feature is not without risk. The tool requires some internet access to fetch code libraries, and threat actors could exploit this through tactics like prompt injection. Such attacks may trick Claude into running arbitrary code or pulling sensitive information from connected sources.
The AI operates in a sandboxed environment with restricted network access. However, security researchers note that even controlled environments can sometimes be manipulated to leak data. Anthropic itself cautions that the feature “may put your data at risk.”
To counter these threats, Anthropic has implemented a variety of safeguards for individuals:
Still, much of the responsibility lies with users. Anthropic recommends stopping Claude immediately if it behaves unexpectedly and reporting suspicious activity. Critics argue that this leaves individuals and organizations to police their own use of the tool rather than benefiting from stronger built-in safeguards.
Anthropic says it has conducted extensive red-teaming and ongoing security testing but advises organizations to evaluate the feature against their own standards before adopting it. For home users, the best defense may be the simplest: avoid feeding Claude personal or confidential information altogether.
These warnings also come in light of broader debates over Anthropic’s handling of security and privacy. The company recently shifted its stance to allow sharing of users’ conversations for AI training, raising questions about how the platform may be storing and reusing sensitive data. Additionally, researchers identified a global extortion campaign in which cybercriminals weaponized Claude’s coding capabilities to perform reconnaissance, data theft and even ransom negotiations.
tags
Vlad's love for technology and writing created rich soil for his interest in cybersecurity to sprout into a full-on passion. Before becoming a Security Analyst, he covered tech and security topics.
View all postsMay 16, 2025