This is Anthropic's privacy policy, explaining what data Anthropic collects when you use Claude and its other products, how it uses that data, and what rights you have over it. Anthropic may use your conversations with Claude to train its AI models, but you can opt out of this in your account settings. You can exercise your rights to access, delete, or correct your personal data by contacting privacy@anthropic.com.
Anthropic's Privacy Policy (effective January 12, 2026) governs the collection, use, disclosure, and processing of personal data when users interact with Anthropic-controlled services including Claude.ai and Claude Team plan. The policy distinguishes between Anthropic's role as data controller (consumer-facing services) and data processor (enterprise deployments), with the policy applying only to the former. Key provisions include the use of user inputs and outputs for model training unless opted out, data subject rights (access, deletion, correction, objection, portability), and disclosures to affiliates, service providers, and pursuant to legal requirements. Regional supplements apply for EU/EEA, UK, Switzerland, California, Canada, Brazil, and South Korea residents, incorporating GDPR, UK GDPR, CCPA, and equivalent frameworks. Notably, Anthropic retains the right to use inputs/outputs for training even after opt-out when content is flagged for safety review.