These rights give you meaningful control over your personal data held by Anthropic, though some requests, particularly corrections to AI model outputs, may be technically impossible to fulfill.
Anthropic collects your conversation inputs and outputs, device data, and usage information, and may use that data to train its AI models unless you opt out. You can opt out of having your conversations used for model training by adjusting your account settings at claude.ai. Even after you opt out, however, Anthropic retains the right to use your conversations for training if they are flagged for safety review.