Users must follow OpenAI's separate Usage Policies, which prohibit harmful, illegal, and otherwise policy-violating uses, and are personally responsible for ensuring their use of ChatGPT and other OpenAI services complies with the law. Using OpenAI's tools to generate illegal content, engage in deception, or violate third-party rights can result in account termination and expose the user to legal liability.
Violation of OpenAI's Usage Policies can result in account suspension or termination, and users bear personal legal responsibility for how they use AI-generated outputs.
REGULATORY FRAMEWORK: Incorporating the Usage Policies by reference creates a multi-document contractual obligation that users may never have reviewed. FTC Act Section 5 reaches deceptive practices facilitated by AI tools. EU AI Act Articles 52 and 53 impose transparency and prohibited-use obligations. The Computer Fraud and Abuse Act (18 U.S.C. § 1030) and equivalent state laws may apply to policy-violating uses, and the Copyright Act applies to AI-generated content that infringes third-party works.