Your conversation data and account information could be disclosed to law enforcement or regulators without your knowledge if Anthropic determines that disclosure is legally required or necessary for safety or fraud prevention.
Anthropic collects your conversation inputs and outputs, device data, and usage information, and may use that data to train its AI models unless you opt out. You can opt out of having your conversations used for model training by adjusting your account settings at claude.ai. Even after you opt out, however, Anthropic retains the right to use conversations for training if they are flagged for safety review.