Enterprise users and people who access Claude through third-party apps may have weaker direct privacy rights against Anthropic, since in those cases the employer or the app developer is the responsible party.
Anthropic collects your conversation inputs and outputs, device data, and usage information, and may use that data to train its AI models unless you opt out. You can opt out of model training by adjusting your account settings at claude.ai. Even after you opt out, however, Anthropic retains the right to use your conversations for training if they are flagged for safety review.