OpenAI prohibits building AI systems that deny being AI when users genuinely ask, and prohibits using its tools to impersonate real people or organizations in ways that could mislead others.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision protects users' right to know when they are interacting with an AI system and prohibits the use of OpenAI tools for impersonation-based fraud or disinformation. It still permits custom AI personas with operator-assigned names and personalities, as long as the system's AI nature is never actively denied.
Interpretive note: The distinction between a permitted custom AI persona and prohibited impersonation of a real person or organization requires case-by-case judgment not fully specified in the document.
Users interacting with AI products built on OpenAI's technology have a policy-backed expectation that the AI will not deny being an AI when sincerely asked, even if it operates under a custom persona name assigned by the operator. The protection is stated in the policy, but its enforcement in practice depends on operator implementation and on OpenAI's monitoring capabilities.
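For operators, the practical question is how to keep a branded persona while preserving that disclosure. Below is a minimal sketch of one way to do it with the OpenAI Python SDK; the persona name, company name, and model choice are illustrative assumptions of ours, not part of OpenAI's policy or any reference implementation.

```python
# Minimal sketch (not OpenAI's reference implementation): a custom persona whose
# system prompt preserves the required AI disclosure. Assumes the official
# openai Python SDK; the persona "Ava", "Example Co.", and the model choice
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA_SYSTEM_PROMPT = (
    "You are 'Ava', the support assistant for Example Co. "
    "Stay in character for tone and branding, but if a user directly and "
    "sincerely asks whether you are an AI or a human, clearly state that "
    "you are an AI system. Never claim to be a human being."
)

def ask_ava(user_message: str) -> str:
    """Send one user turn through the persona while keeping the disclosure rule."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": PERSONA_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_ava("Be honest with me: am I talking to a real person?"))
```

The design point is simply that the disclosure rule lives in operator-controlled configuration (the system prompt) rather than depending on the model's defaults; how robust that is in practice still depends on testing and monitoring.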
How other platforms handle this
Be Creative But Don't Impersonate: Don't impersonate public figures or private individuals, or use someone's name, likeness, or persona without permission or outside of permissible contexts.
Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.
Monitoring
OpenAI has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways.— Excerpt from OpenAI's OpenAI Usage Policies
(1) REGULATORY LANDSCAPE: This provision engages with FTC Act Section 5 prohibitions on deceptive practices, the EU AI Act's transparency obligations for AI systems interacting with natural persons (which require disclosure that users are interacting with an AI system), and the EU Digital Services Act's requirements regarding automated systems. Various state consumer protection statutes may also apply to AI impersonation and deceptive chatbot practices. California's Bolstering Online Transparency (B.O.T.) Act (SB 1001, Business and Professions Code Section 17941) requires disclosure that a bot is not human in certain consumer-facing contexts.

(2) GOVERNANCE EXPOSURE: Medium. The policy permits custom AI personas while prohibiting active denial of AI nature, but the line between a permitted custom persona and prohibited impersonation of a real person or organization may require case-by-case judgment. Operators building customer service bots, virtual assistants, or branded AI personas should assess whether their deployment satisfies both the policy and applicable transparency regulations.

(3) JURISDICTION FLAGS: EU operators face mandatory AI disclosure obligations under the EU AI Act for AI systems interacting with users, with potential penalties for non-compliance. California operators should assess bot disclosure obligations. Operators in regulated industries (financial services, healthcare) may face sector-specific requirements regarding disclosure of automated systems.

(4) CONTRACT AND VENDOR IMPLICATIONS: Operators using custom AI personas should review their user-facing disclosures, terms of service, and onboarding flows to ensure adequate disclosure of AI nature. Vendor contracts should address the operator's disclosure obligations and how they are satisfied within the product design.

(5) COMPLIANCE CONSIDERATIONS: Operators should audit their product UX for compliance with the no-denial-of-AI-nature requirement; review marketing and onboarding materials for accurate disclosure of AI involvement; implement technical controls ensuring the AI cannot be configured to deny its nature (a minimal sketch of one such control follows this analysis); and assess jurisdiction-specific bot disclosure obligations in all markets where the product operates.
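As one concrete illustration of the technical-controls item above, an operator could screen outbound replies for language that denies the system's AI nature before delivery. This is a minimal sketch under assumptions of our own: the phrase list, fallback message, and escalation behavior are illustrative only and do not amount to a complete or legally sufficient compliance mechanism.

```python
# Minimal sketch of one possible technical control: scan outbound assistant
# replies for phrases that deny the system's AI nature and substitute a
# compliant fallback. Phrase list and handling are illustrative assumptions.
import re

HUMAN_CLAIM_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\bI am (a )?human\b",
        r"\bI'?m not an? (AI|bot|machine)\b",
        r"\bI am a real person\b",
    )
]

def flags_human_claim(reply: str) -> bool:
    """Return True if the assistant reply appears to claim it is human."""
    return any(pattern.search(reply) for pattern in HUMAN_CLAIM_PATTERNS)

def deliver_reply(reply: str) -> str:
    """Block replies that would violate the no-denial requirement."""
    if flags_human_claim(reply):
        # In a real deployment this would also log the event for operator review.
        return "I'm an AI assistant. How can I help you?"
    return reply
```

A keyword filter of this kind is only a backstop; operators would still want prompt-level instructions, evaluation suites, and jurisdiction-specific disclosure text reviewed by counsel.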
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.