You are not allowed to claim that AI-generated content produced by the Cerebras service was created by a human.
This analysis describes what Cerebras's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision directly addresses AI transparency obligations and, if violated, could expose users to regulatory risk under emerging AI disclosure requirements and FTC guidance on deceptive AI use.
Interpretive note: The term 'represent' is not defined in the document, and whether it covers implicit or contextual misrepresentation (such as publishing AI-generated content without disclosure) is open to interpretation.
Users and businesses who use Cerebras to generate content must not present that content as human-created, which has implications for marketing, journalism, customer service, and any context where human authorship is material.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Customer will not, and will not permit any other person (including any End User) to: ... (d) attempt to reverse engineer, decompile, or otherwise attempt to discover the source code or underlying components (e.g., algorithms, weights, or systems) of the Mistral AI Products, including using the Outpu...
You may not use the Services to attempt to circumvent, disable, or otherwise interfere with safety-related features of the Services, including features that prevent or restrict the generation of certain types of content.
Monitoring
Cerebras has changed this document before.
"You shall not (i) represent that Output was human-generated..."
Excerpt from the Cerebras Terms of Service
REGULATORY LANDSCAPE: This provision aligns with FTC guidance on disclosure of AI-generated content and the principle that representing AI outputs as human-created may constitute a deceptive practice. The EU AI Act imposes transparency requirements on AI systems that interact with natural persons, requiring disclosure that the interaction is with an AI. This contractual prohibition on misrepresenting outputs as human-generated is consistent with, but does not substitute for, those regulatory obligations.

GOVERNANCE EXPOSURE: Medium. The prohibition is clear in direction, but the document defines neither 'represent' nor what constitutes adequate disclosure. Implicit misrepresentation (e.g., publishing AI-generated copy on a website without disclosure) may or may not be captured by this clause, depending on interpretation. The regulatory risk for violations of this provision lies primarily with the user, not Cerebras, since Cerebras's terms attempt to shift this obligation downstream.

JURISDICTION FLAGS: Mandatory AI disclosure requirements exist or are emerging in the EU (EU AI Act), in California (SB 942, the California AI Transparency Act, which applies to large AI systems), and in other jurisdictions. Compliance with this contractual provision may be necessary but not sufficient to satisfy applicable legal disclosure obligations, which may impose affirmative requirements beyond simply not misrepresenting outputs.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise customers who use AI-generated content in customer-facing products, advertising, or regulated communications should review this provision alongside applicable disclosure laws. Marketing and legal teams should establish internal guidelines for labeling AI-generated content that satisfy both this contractual requirement and applicable regulatory mandates. Failure to comply could trigger both contractual breach and regulatory exposure.
COMPLIANCE CONSIDERATIONS: Organizations should implement content labeling or disclosure policies for AI-generated material produced via Cerebras. This is particularly relevant for regulated industries (financial services, healthcare, legal) where AI-generated content may be subject to additional disclosure obligations. Legal teams should ensure that downstream customer and content policies reflect this contractual requirement.
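A content-labeling policy like the one described above can be enforced mechanically at the point of publication. The sketch below is a minimal illustration only, not legal guidance: the function name, disclosure wording, and placement are all hypothetical assumptions, and the disclosure text an organization actually needs depends on applicable law and internal policy.

```python
# Minimal sketch: attach an AI-disclosure label to generated copy before it
# is published, so output is never presented as human-created. All names and
# the disclosure wording are hypothetical examples, not legal requirements.

AI_DISCLOSURE = "This content was generated with AI assistance."

def label_ai_content(text: str, disclosure: str = AI_DISCLOSURE) -> str:
    """Append a disclosure line to AI-generated text prior to publication."""
    return f"{text}\n\n{disclosure}"

# Example: copy produced by a generative model, labeled before release.
draft = "Introducing our new product line, built for speed and reliability."
published = label_ai_content(draft)
print(published)
```

A pipeline step like this cannot decide *whether* disclosure is legally adequate in a given context; it only guarantees that no labeled channel ships AI-generated text without one.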
Built from archived source documents, structured governance mappings, and historical version tracking.
Is ConductAtlas affiliated with Cerebras?
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cerebras.