GuruFocus

Anthropic Gives Claude Users More Control On Data Use

Less than 1 minute read

Anthropic, the AI startup backed by Amazon AMZN and Google GOOG, is giving Claude users more say in how their chats are used. Starting now, users can choose whether conversations on Claude Free, Pro and Max are fed into future model training.

Current users have until Sept. 28, 2025, to opt out, while new signups will pick their preference right at onboarding. Importantly, only new or resumed chats count; older conversations with no further activity will stay untouched.

Anthropic is also stretching its data retention period to five years for those who allow training, saying user participation helps sharpen Claude's coding, analysis and reasoning. "You'll also help us improve safety, making harmful-content detection more accurate and less likely to flag harmless conversations," the company told users.

The update doesn't apply to business and government contracts or to API access through Amazon Bedrock and Google's Vertex AI. It also mirrors what competitors like OpenAI's ChatGPT, Google Gemini and Meta already do, showing Anthropic wants to strike a balance between user trust and building smarter models.