Midjourney · Midjourney Community Guidelines

CSAM Absolute Prohibition

High severity · High confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: Midjourney recorded 6 documented changes in the last 30 days.
Document Record

What it is

Creating, uploading, sharing, or attempting to distribute any content that sexualizes minors, whether real or AI-generated, is absolutely prohibited on Midjourney.

This analysis describes what Midjourney's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision reflects a legal obligation under US federal law (the PROTECT Act and FOSTA-SESTA-adjacent obligations) and similar laws globally; violations carry criminal liability independent of platform enforcement.

Consumer impact (what this means for users)

Generating or distributing any content that sexualizes minors on Midjourney, including AI-generated images, will result in account termination and may expose users to criminal prosecution under applicable law.

How other platforms handle this

Runway · Medium severity

You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.

Mistral AI · Medium severity

Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...

Perplexity AI · Medium severity

You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.


Monitoring

Midjourney has changed this document before.

Original Clause Language
Do not create or attempt to create content that in any way sexualizes children or minors. This includes real images as well as generated images. Do not generate, upload, share, or make attempts to distribute content that depicts, promotes, or attempts to normalize child sexual abuse.

— Excerpt from Midjourney's Midjourney Community Guidelines

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: The prohibition on child sexual abuse material (CSAM) implicates US federal law under 18 U.S.C. § 2256 and related provisions, which criminalize the production, distribution, and possession of CSAM, including computer-generated imagery. COPPA (the Children's Online Privacy Protection Act) is separately relevant to platform obligations regarding minors as users, though this provision addresses content rather than user data. International equivalents exist across EU, UK, Canadian, and Australian law. Enforcement authorities include the DOJ, FBI, NCMEC, and equivalent international agencies. Platform-level obligations may include mandatory reporting under the PROTECT Act.

GOVERNANCE EXPOSURE: High. CSAM-related violations carry criminal liability for individual users and potential platform liability under applicable law. The explicit inclusion of AI-generated imagery in this prohibition aligns with legal developments in multiple jurisdictions extending CSAM statutes to computer-generated content.

JURISDICTION FLAGS: This prohibition applies under criminal law in virtually all operating jurisdictions. No jurisdiction creates a legal carve-out for AI-generated CSAM. Organizations deploying Midjourney in contexts accessible to minors should implement additional access controls and content monitoring.

CONTRACT AND VENDOR IMPLICATIONS: Organizations using Midjourney in educational, youth-facing, or consumer contexts should confirm that their deployment configurations minimize the risk of misuse and that their acceptable use policies for end users replicate these prohibitions at minimum.

COMPLIANCE CONSIDERATIONS: Organizations with mandatory reporting obligations under applicable law should ensure that any discovered violations are reported to NCMEC's CyberTipline and relevant law enforcement regardless of platform-level reporting. Compliance teams should review whether organizational acceptable use policies explicitly address AI-generated CSAM in line with this prohibition.


Applicable agencies

  • FTC
    The FTC has oversight of platform safety practices and child protection obligations in consumer-facing services.

Applicable regulations

  • CFAA — United States, Federal
  • DMCA — United States, Federal
  • DSA — European Union
  • Trump Executive Order on AI Policy Framework — United States

Provision details

Document information
  • Document: Midjourney Community Guidelines
  • Entity: Midjourney
  • Document last updated: May 11, 2026
Tracking information
  • First tracked: May 11, 2026
  • Last verified: May 12, 2026
  • Record ID: CA-P-011653
  • Document ID: CA-D-00759
Evidence Provenance
  • Source URL: Wayback Machine
  • Content hash (SHA-256): a0f59f778b2a3ff34d9c8cf48ed9ab75852ed63bc4340ef1fb60d327ca283b12
  • Analysis generated: May 11, 2026 12:38 UTC
  • Evidence: ✓ Snapshot stored · ✓ Hash verified
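The "Hash verified" check above can be reproduced independently: compute the SHA-256 digest of the archived snapshot and compare it with the recorded content hash. The sketch below is a minimal, hypothetical example (the snapshot file path is an assumption; only the recorded hash comes from this record).

```python
import hashlib

# Recorded content hash from the ConductAtlas evidence record above.
RECORDED_SHA256 = "a0f59f778b2a3ff34d9c8cf48ed9ab75852ed63bc4340ef1fb60d327ca283b12"


def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots don't need to fit in memory.
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def matches_record(path: str, expected: str = RECORDED_SHA256) -> bool:
    """True if the stored snapshot still matches the recorded hash."""
    return sha256_of_file(path) == expected
```

A match confirms the stored snapshot is byte-identical to the version that was hashed when the record was captured; any mismatch indicates the file has changed since capture.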
Citation Record
Entity: Midjourney
Document: Midjourney Community Guidelines
Record ID: CA-P-011653
Captured: 2026-05-11 12:38:46 UTC
SHA-256: a0f59f778b2a3ff3…
URL: https://conductatlas.com/platform/midjourney/midjourney-community-guidelines/csam-absolute-prohibition/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
  • Severity: High



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions


Is ConductAtlas affiliated with Midjourney?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Midjourney.