Anthropic Asks Users to Choose – Contribute Chats or Say No
Anthropic is revising its data policies and requiring Claude users to decide by September 28 whether their chats can be used to train its AI models. Until now, conversations were deleted within 30 days, or retained for up to two years in special cases. Under the new policy, data may be kept for five years unless users opt out. Enterprise clients are unaffected by the update.
The company frames the change as a benefit: users who allow their data to be used help Claude detect harmful content more accurately and improve its reasoning and coding abilities. At the same time, the move gives Anthropic access to a massive dataset of real-world interactions, which is essential for staying competitive with OpenAI, Google, and other AI leaders.
The policy shift highlights the growing tension between innovation and user privacy in the AI sector. OpenAI faces a similar challenge in an ongoing court case that requires it to retain ChatGPT data indefinitely. Many users may not realize the extent of the change, raising concerns about whether consent to AI data use is truly informed.
Anthropic’s rollout presents new users with a consent screen at signup, while existing users see a pop-up with a large “Accept” button and a smaller toggle for training permissions that is enabled by default. Privacy experts warn this design may lead to inadvertent consent, underscoring the difficulty of balancing AI progress with user privacy obligations.