Anthropic collects your conversation inputs and outputs, device data, and usage information, and may use that data to train its AI models unless you opt out. You can opt out of having your conversations used for model training by adjusting your account settings at claude.ai. However, even after you opt out, Anthropic retains the right to use your conversations for training if they are flagged for safety review. This creates a significant exception to the opt-out right: any conversation Anthropic deems relevant to safety can still be retained and used for training, reducing the practical value of the opt-out.