Anthropic collects your conversation content (prompts and outputs), device identifiers, usage patterns, and payment information when you use Claude.ai, and by default uses your conversations to train its AI models. Even if you opt out of model training, conversations flagged for safety review may still be used, which is a significant limitation on the opt-out right. To opt out of training, go to your account settings on Claude.ai. You can also delete individual conversations: they are removed from your history immediately and purged from Anthropic's back-end systems within 30 days.
How other platforms handle this
By linking and sharing an account to a shared view, you authorize Wealthfront to share with your co-owner, for the duration such account is rightfully linked, certain personal information collected from you, including your name, the name of the linked financial institution, the name of your linked a...
Your Personal Data may be transferred to and stored by us in the United States and by our affiliates and third-parties listed in Section 6 above. Therefore, your Personal Data may be processed and stored outside your country or jurisdiction, including in places that are not subject to an adequacy de...
We process your personal information based on our legitimate interests, including to operate, improve, and secure our payment network; to detect, prevent, and investigate fraud, security incidents, and other potentially illegal or prohibited activities; and to conduct research, analytics, and report...
If Anthropic is sold, merged, or goes through bankruptcy, your conversation history and personal data could end up in the hands of an entirely different company with potentially different privacy practices.