By default, everything you type to Claude and every response you receive can be used to train Anthropic's AI. You can turn this off in your account settings, but Anthropic can still use your conversations for training if it decides to flag them for safety review.
Your conversation content, including potentially sensitive personal information you share with Claude, can be used to train Anthropic's AI models by default. Even after you opt out, Anthropic may still use conversations it flags for safety review, so the opt-out is not a complete guarantee that your data will never be used for training.
The opt-out mechanism has a significant exception that Anthropic controls unilaterally: any conversation it flags for safety review remains eligible for training regardless of your preference, which substantially limits the practical value of the opt-out.
REGULATORY FRAMEWORK: This provision implicates GDPR Art. 6(1)(f) (legitimate interests as a lawful basis for training), Art. 21 (right to object to legitimate-interest processing), and Art. 22 (automated decision-making). Under the CCPA/CPRA, Cal. Civ. Code §1798.120 (right to opt out of sale/sharing) and §1798.105 (right to deletion), the carve-out for safety-flagged data may constitute an impermissible limitation. Brazil's LGPD Art. 18 (data subject rights) and South Korea's PIPA Art. 37 (right to suspend processing) are also engaged. The EU AI Act (Regulation 2024/1689) imposes transparency obligations on providers of general-purpose AI models regarding their training data, which is relevant to Anthropic's model development practices. Primary enforcement authorities: EDPB/national DPAs (EU), CPPA (California), ANPD (Brazil), PIPC (South Korea).