Anthropic to use private Claude chats for AI training — opt-out required
Summary: Anthropic (the US AI company behind Claude) plans to use private chat logs from the Claude app and private programming/coding sessions of consumer users to train its models. Users who do not want their content used must actively opt out. The company also intends to retain user data for up to five years by default unless the user objects.

English translation of the original (German):

"Anthropic plans from now on to also use chat transcripts from the Claude app and programming sessions from private customers to train its models. Those who do not want personal content to be used in the development of the systems must actively object. At the same time, the company…