We may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings. Even if you opt out, we will use Inputs and Outputs for model improvement when: (1) your conversations are flagged for safety review, to improve our ability to detect harmful content, enforce our policies, or advance AI safety research, or (2) you have explicitly reported the materials to us (for example, via our feedback mechanisms).
Because of the safety-review exception, opting out does not fully shield your conversations from being used in AI training, a meaningful limitation that may not be obvious to most users.
Anthropic collects your conversation content (prompts and AI responses), device identifiers, browsing behavior, and any personal data you include in messages to Claude, and may use this data to train its AI models. Even users who opt out of model training should be aware that conversations flagged for safety or policy review can still be used for training without their consent. You can opt out of having your conversations used for model training in your Claude.ai account settings, or submit a data deletion request by emailing privacy@anthropic.com.