Users cannot use Midjourney to create images for election campaigns, to spread false information, to deceive or defraud people, or to misrepresent generated images as real or from a different source.
This analysis describes what Midjourney's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
These prohibitions directly address AI-generated content in political and deceptive contexts, which are areas subject to increasing regulatory scrutiny globally, and violations can result in account termination.
Interpretive note: The terms 'misinformation,' 'disinformation,' and 'political campaigns' are not defined in the document, leaving the scope of these prohibitions subject to Midjourney's discretionary interpretation.
Users generating images for political, electoral, or deceptive purposes risk account suspension or a permanent ban. These prohibitions also place an affirmative obligation on users not to mislead recipients about the AI-generated nature or source of images they share.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Certain use cases, such as violence, hate speech, fraud, and privacy violations, are strictly prohibited. Developers must outline and get approval for their use case to access the Cohere API, understanding the models and limitations.
Monitoring
Midjourney has changed this document before.
"Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated images about their nature or source.— Excerpt from Midjourney's Midjourney Community Guidelines
REGULATORY LANDSCAPE: The prohibition on election influence imagery intersects with emerging AI-specific disclosure requirements across multiple jurisdictions, including provisions in the EU AI Act addressing manipulative AI-generated content and electoral integrity. US federal and state election laws may separately prohibit AI-generated deceptive political content, with enforcement by the FEC at the federal level and by state election authorities. The FTC's guidance on AI-generated endorsements and deceptive content is also relevant to the misinformation and fraud prohibitions. The prohibition on misleading recipients about the nature or source of generated images aligns with disclosure requirements emerging under various AI transparency frameworks.

GOVERNANCE EXPOSURE: Medium. The prohibition is broadly worded and does not define 'misinformation' or 'disinformation,' leaving enforcement scope to Midjourney's discretion. For organizations producing political communications or public interest content, the undefined scope of 'political campaigns' may create ambiguity about permissible advocacy content.

JURISDICTION FLAGS: EU member states implementing the EU AI Act, California (AB 2839 regarding AI-generated election content), and other jurisdictions with AI disclosure mandates create heightened compliance exposure. Organizations operating across multiple jurisdictions should assess whether platform-level prohibitions align with or fall short of applicable regulatory obligations.

CONTRACT AND VENDOR IMPLICATIONS: Media organizations, political communications firms, and public affairs consultancies should assess whether their intended use of Midjourney is compatible with these restrictions. The undefined scope of 'political campaigns' warrants clarification before organizational deployment.
COMPLIANCE CONSIDERATIONS: Compliance teams in regulated industries producing public communications should map these platform-level prohibitions against applicable AI disclosure and electoral content laws. The obligation not to mislead recipients about the nature or source of generated images may require organizations to implement AI content disclosure practices in downstream publishing workflows.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Midjourney.