This is Anthropic's privacy policy explaining what data Anthropic collects when you use Claude and its other products, how that data is used, and what rights you have over it. Anthropic may use your conversations with Claude to train its AI models, but you can opt out of this in your account settings. You can exercise your rights to access, delete, or correct your personal data by contacting privacy@anthropic.com.
Anthropic's Privacy Policy (effective January 12, 2026) governs the collection, use, disclosure, and processing of personal data when users interact with Anthropic-controlled services including Claude.ai and Claude Team plan. The policy distinguishes between Anthropic's role as data controller (consumer-facing services) and data processor (enterprise deployments), with the policy applying only to the former. Key provisions include the use of user inputs and outputs for model training unless opted out, data subject rights (access, deletion, correction, objection, portability), and disclosures to affiliates, service providers, and pursuant to legal requirements. Regional supplements apply for EU/EEA, UK, Switzerland, California, Canada, Brazil, and South Korea residents, incorporating GDPR, UK GDPR, CCPA, and equivalent frameworks. Notably, Anthropic retains the right to use inputs/outputs for training even after opt-out when content is flagged for safety review.
This policy engages GDPR (EU/EEA and UK), CCPA/CPRA (California), LGPD (Brazil), PIPA (South Korea), and Canadian privacy law (PIPEDA/provincial equivalents), with regional supplemental disclosures for each jurisdiction. Legal and compliance teams should note the dual controller/processor distinction.