GPT-4o includes built-in safeguards intended to prevent the creation of fake intimate images of real people without their consent, but OpenAI acknowledges these protections are imperfect and that the risk remains.
In practice, this means your image or likeness could be used to generate non-consensual intimate imagery through GPT-4o's vision and image-generation capabilities — OpenAI has partial protections in place but explicitly concedes they are not fully effective.
NCII — sometimes called deepfake pornography — causes severe psychological harm to victims, predominantly women, and is now illegal in many jurisdictions. OpenAI's acknowledgment that its mitigations are "imperfect" is a significant safety and legal-exposure disclosure.
REGULATORY FRAMEWORK: This provision implicates the proposed DEEPFAKES Accountability Act, state NCII statutes (California AB 602 and AB 3286, Texas Penal Code §21.165, Virginia Code §18.2-386.2, and NCII laws in more than 45 other states), EU AI Act Article 5 on prohibited practices causing harm, GDPR Article 9 on special-category data (biometric data used to generate intimate images), and the UK Online Safety Act 2023 (Part 5, non-consensual intimate images). Enforcement authorities include US state law enforcement, Ofcom in the UK, and national DPAs under the GDPR.