Even if you opt out of training, two significant categories of your content — rated conversations and safety-flagged content — are permanently available for Anthropic's training, which limits the effectiveness of the opt-out.
By default, Anthropic uses your conversations with Claude to train its AI models. Even if you opt out, any response you rate with a thumbs up or down, and any message flagged for safety review, can still be used for training. US users are bound by mandatory arbitration and cannot participate in class action lawsuits against Anthropic, which significantly limits legal remedies. You can opt out of conversation training in your account settings on Claude.ai.
How other platforms handle this
Globally Shared Content means Content you shared with everyone on BeReal. In exchange for using our Services, you grant us a worldwide, non-exclusive, royalty-free, sublicensable license to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute the content you share ...
To the extent necessary to provide the Services to you and others, to protect you and the Services, and to improve Microsoft products and services, you grant to Microsoft a worldwide and royalty-free intellectual property license to use Your Content, for example, to make copies of, retain, transmit,...
By making creations available on Patreon or otherwise posting on Patreon, you grant us a royalty-free, perpetual, irrevocable, non-exclusive, sublicensable, worldwide license covering your creation or what you post in all formats and channels now known or later developed anywhere in the world to use...
This clause could change without notice.
Get alerted when Anthropic Claude updates this policy — with plain-language summaries and severity ratings.
We read the privacy policies and terms of service of 38 AI platforms. Here is what they say about training, retention, arbitration, and liability.
Anthropic Claude has updated this policy before. Get alerted on the next change.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Anthropic Claude.