Do Not Compromise Elections or Democratic Processes
As AI becomes a significant tool for political communication and persuasion, this provision establishes clear boundaries against AI-enabled election interference — a growing area of regulatory focus globally.
Anthropic's Usage Policy affects all users by setting clear boundaries on how Claude can be used, with real consequences for violations, including throttling, suspension, or permanent termination of access. Because a dedicated Safeguards Team actively monitors for violations, user inputs may be reviewed, and CSAM-related violations are reported to law enforcement. You can report harmful, biased, or inaccurate AI outputs directly to usersafety@anthropic.com or via the thumbs-down feedback button in Anthropic's products.