Privacy vs. Progress: Anthropic Users Must Choose Chat Sharing or Opt Out

Claude users must decide by September 28, 2025, whether Anthropic may use their conversations to train its AI models. Previously, consumer chat data was automatically deleted within 30 days, or retained for up to two years if flagged for policy violations. Under the new policy, data from users who do not opt out may be kept for five years; business and API customers are unaffected.
Anthropic frames the change as user-centric, saying that shared conversations will help improve model safety and strengthen Claude's reasoning and coding abilities. The underlying goal, however, is access to the real-world conversational data needed to keep Claude competitive with OpenAI and Google; such datasets are essential for refining model performance.
The policy update also underscores industry-wide pressures. OpenAI, for instance, is under a court order arising from ongoing litigation to retain all ChatGPT interactions indefinitely, including deleted chats. Many users remain unaware of such changes and may consent without understanding them, raising broader concerns about informed consent in AI.
New users are presented with the choice at signup, while existing users see a pop-up dominated by a large “Accept” button, with the data-sharing toggle shown in smaller print and switched on by default. Privacy experts warn that this design nudges users into agreeing inadvertently, underscoring the ongoing tension between AI progress and privacy protection.