OpenAI · OpenAI Usage Policies

Prohibition on Deceptive AI Personas and Impersonation

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: OpenAI recorded 5 documented changes in the last 30 days.
Document Record

What it is

OpenAI prohibits building AI systems that deny being AI when users genuinely ask, and prohibits using its tools to impersonate real people or organizations in ways that could mislead others.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision protects users' right to know they are interacting with an AI system, and prohibits the use of OpenAI tools for impersonation-based fraud or disinformation, though it permits custom AI personas with operator-assigned names and personalities as long as the AI nature is not actively denied.

Interpretive note: The distinction between a permitted custom AI persona and prohibited impersonation of a real person or organization requires case-by-case judgment not fully specified in the document.

Consumer impact (what this means for users)

Users interacting with AI products built on OpenAI's technology have a policy-backed expectation that the AI will not deny being an AI when sincerely asked, even if it operates under a custom persona name assigned by the operator. This protection is stated in the policy, but its enforcement in practice depends on operator implementation and OpenAI's monitoring capabilities.

How other platforms handle this

Character.AI Medium

Be Creative But Don't Impersonate: Don't impersonate public figures or private individuals, or use someone's name, likeness, or persona without permission or outside of permissible contexts.

Amazon Medium

Fraud and Deception. Attempting to defraud or misrepresent yourself or your services to others, including impersonating individuals or entities. Engaging in phishing, pharming, or other deceptive activities.

Runway Medium

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.


Monitoring

OpenAI has changed this document before.

Original Clause Language

"Don't claim to be human when directly and sincerely asked, use AI to deceive people about its fundamental nature, or impersonate real people or organizations in misleading ways."

— Excerpt from OpenAI's OpenAI Usage Policies

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: This provision engages FTC Act Section 5 prohibitions on deceptive practices, the EU AI Act's transparency obligations for AI systems interacting with natural persons (which require disclosure that users are interacting with an AI system), and the EU Digital Services Act's requirements regarding automated systems. Various state consumer protection statutes may also apply to AI impersonation and deceptive chatbot practices. California's bot disclosure law (SB 1001, codified at Business and Professions Code Section 17941) requires disclosure that a bot is not human in certain consumer-facing contexts.

(2) GOVERNANCE EXPOSURE: Medium. The policy permits custom AI personas while prohibiting active denial of AI nature, but the line between a permitted custom persona and prohibited impersonation of a real person or organization may require case-by-case judgment. Operators building customer service bots, virtual assistants, or branded AI personas should assess whether their deployment satisfies both the policy and applicable transparency regulations.

(3) JURISDICTION FLAGS: EU operators face mandatory AI disclosure obligations under the EU AI Act for AI systems interacting with users, with potential penalties for non-compliance. California operators should assess bot disclosure obligations. Operators in regulated industries (financial services, healthcare) may face sector-specific requirements regarding disclosure of automated systems.

(4) CONTRACT AND VENDOR IMPLICATIONS: Operators using custom AI personas should review their user-facing disclosures, terms of service, and onboarding flows to ensure adequate disclosure of AI nature. Vendor contracts should address the operator's disclosure obligations and how they are satisfied within the product design.

(5) COMPLIANCE CONSIDERATIONS: Operators should audit their product UX for compliance with the no-denial-of-AI-nature requirement; review marketing and onboarding materials for accurate disclosure of AI involvement; implement technical controls ensuring the AI cannot be configured to deny its nature; and assess jurisdiction-specific bot disclosure obligations in all markets where the product operates.
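One of the compliance considerations above, implementing technical controls so the AI cannot be configured to deny its nature, can be approximated with a post-generation guardrail that screens draft responses before they reach the user. The sketch below is illustrative only: the phrase patterns and function names are hypothetical assumptions, not drawn from OpenAI's policy or any API, and a production control would more likely rely on a trained classifier than a fixed regex list.

```python
import re

# Hypothetical phrase patterns suggesting the assistant is denying its AI
# nature. Illustrative only; a real control would use a classifier and
# human review, since phrase lists are easy to evade and over-match.
DENIAL_PATTERNS = [
    r"\bI am (a )?human\b",
    r"\bI('| a)m not (an? )?(AI|bot|artificial)\b",
    r"\bthere('s| is) no AI here\b",
]


def violates_ai_disclosure(response_text: str) -> bool:
    """Return True if the draft response appears to deny being an AI."""
    return any(
        re.search(pattern, response_text, re.IGNORECASE)
        for pattern in DENIAL_PATTERNS
    )


def enforce_disclosure(response_text: str) -> str:
    """Replace a denying response with a compliant disclosure.

    A custom persona name is fine under the policy; actively denying
    AI nature when sincerely asked is not, so we substitute a truthful
    answer rather than pass the denial through.
    """
    if violates_ai_disclosure(response_text):
        return "I'm an AI assistant, not a human."
    return response_text
```

Note that the guardrail only blocks active denial: a persona named "Aria" answering in character is untouched, which mirrors the policy's distinction between permitted custom personas and prohibited deception about fundamental nature.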


Applicable agencies

  • FTC
    The FTC has authority under Section 5 of the FTC Act to pursue deceptive practices, including AI systems that mislead consumers about their nature or that impersonate real entities.
  • State AG
    State attorneys general have consumer protection authority over deceptive bot and AI impersonation practices under state unfair and deceptive trade practices statutes.

Applicable regulations

  • CFAA (United States, Federal)
  • DMCA (United States, Federal)
  • DSA (European Union)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
Document
OpenAI Usage Policies
Entity
OpenAI
Document last updated
May 11, 2026
Tracking information
First tracked
May 11, 2026
Last verified
May 12, 2026
Record ID
CA-P-011727
Document ID
CA-D-00753
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
7bc76af79d3d7702e7ce284199b0b15a9dc7dd89f62958bd0823240c00eaab06
Analysis generated
May 11, 2026 12:43 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
Citation Record
Entity: OpenAI
Document: OpenAI Usage Policies
Record ID: CA-P-011727
Captured: 2026-05-11 12:43:28 UTC
SHA-256: 7bc76af79d3d7702…
URL: https://conductatlas.com/platform/openai/openai-usage-policies/prohibition-on-deceptive-ai-personas-and-impersonation/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium
Categories


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's Prohibition on Deceptive AI Personas and Impersonation clause do?

This provision protects users' right to know they are interacting with an AI system, and prohibits the use of OpenAI tools for impersonation-based fraud or disinformation, though it permits custom AI personas with operator-assigned names and personalities as long as the AI nature is not actively denied.

How does this clause affect you?

Users interacting with AI products built on OpenAI's technology have a policy-backed expectation that the AI will not deny being an AI when sincerely asked, even if it operates under a custom persona name assigned by the operator. This protection is stated in the policy, but its enforcement in practice depends on operator implementation and OpenAI's monitoring capabilities.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.