You cannot use Vercel's AI tools to create false or misleading content, impersonate people, infringe copyrights, spread disinformation, or discriminate against protected groups.
This analysis describes what Vercel AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision establishes specific behavioral obligations for users of Vercel's AI features, covering content generation, impersonation, disinformation, and discrimination, and creates compliance obligations that interact with both existing law and emerging AI-specific regulation.
Interpretive note: The prohibition on content that violates any applicable law or regulation is jurisdictionally indeterminate and requires account holders to independently assess legal compliance across all jurisdictions where their applications operate.
Users and developers deploying Vercel's AI features are prohibited from generating deceptive, discriminatory, or disinformation content, and from impersonating individuals or organizations, which directly governs the types of AI-powered applications that can be lawfully built and hosted on the platform.
How other platforms handle this
You may not use Runway's tools to create content that promotes, glorifies, or facilitates acts of terrorism, mass violence, or genocide, or that could be used to provide material support to individuals or organizations engaged in such activities.
Do not generate images for political campaigns or to try to influence the outcome of an election. Do not generate images to spread misinformation or disinformation. Do not generate images to attempt to or to actually deceive or defraud anyone. Do not intentionally mislead recipients of generated ima...
Monitoring
Vercel AI has changed this document before.
"You may not use Vercel's AI features to: generate content that is intended to deceive, manipulate, or defraud users; generate content that violates any applicable law or regulation; generate content that infringes on any third party's intellectual property rights; use AI to produce or distribute disinformation or fake news; use AI to impersonate individuals or organizations without their consent; or use AI in ways that discriminate against individuals based on protected characteristics." — Excerpt from Vercel AI's Acceptable Use Policy
REGULATORY LANDSCAPE: This provision directly engages the EU AI Act, which prohibits certain AI practices including subliminal manipulation, exploitation of vulnerabilities, and social scoring by public authorities. It also interacts with the FTC Act's prohibition on unfair and deceptive practices, the Lanham Act (impersonation and false designation of origin), and emerging US state AI disclosure and deepfake laws. For EU customers, the AI Act's transparency obligations for AI-generated content and its prohibition on certain high-risk AI applications create a parallel regulatory layer that this contractual provision does not fully map to.

GOVERNANCE EXPOSURE: High. The prohibition on generating content that violates any applicable law or regulation is broad and shifts significant interpretive and compliance burden onto account holders, who must assess legal compliance across the jurisdictions in which their AI-powered applications operate. The anti-discrimination provision engages multiple enforcement frameworks, including Title VII, the Fair Housing Act, and EU non-discrimination directives, depending on the application's context.

JURISDICTION FLAGS: EU and EEA customers face the most significant exposure given the EU AI Act's specific and enforceable requirements for AI system deployers. California customers should assess compliance with California's deepfake laws and the California Consumer Privacy Act's requirements around automated decision-making. Illinois customers deploying AI in employment or housing contexts face additional exposure under the Illinois Human Rights Act.

CONTRACT AND VENDOR IMPLICATIONS: Organizations building AI-powered products on Vercel should ensure their product design, content moderation, and user agreement frameworks address each of the prohibited categories listed in this provision. Vendor assessments should verify that any third-party AI models integrated through Vercel's platform are evaluated for risks of generating prohibited content types.

COMPLIANCE CONSIDERATIONS: Legal teams should map the prohibited AI conduct categories in this provision against their organization's existing AI governance policies, particularly around content moderation, bias testing, and impersonation prevention. Organizations subject to the EU AI Act should assess whether this contractual prohibition aligns with their deployer obligations under that regulation, and whether additional technical or organizational measures are required beyond contractual compliance with Vercel's AUP.
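The mapping exercise described above can be approached programmatically. The sketch below is illustrative only and not part of any Vercel API: the category identifiers and the `GovernancePolicy` shape are assumptions made for this example, with the six categories drawn from the AUP excerpt quoted above.

```typescript
// The six prohibited-conduct categories quoted from Vercel's AI AUP
// (identifiers are our own shorthand, not official terms).
const AUP_CATEGORIES = [
  "deceptive-content",
  "unlawful-content",
  "ip-infringement",
  "disinformation",
  "impersonation",
  "discrimination",
] as const;

type AupCategory = (typeof AUP_CATEGORIES)[number];

// Hypothetical shape for an organization's internal AI governance policy.
interface GovernancePolicy {
  coveredCategories: AupCategory[];
}

// Return the AUP categories the internal policy does not yet address.
function findCoverageGaps(policy: GovernancePolicy): AupCategory[] {
  return AUP_CATEGORIES.filter(
    (category) => !policy.coveredCategories.includes(category),
  );
}

// Example: a policy covering moderation and bias testing still leaves
// gaps around impersonation, IP infringement, and lawful-content review.
const policy: GovernancePolicy = {
  coveredCategories: ["deceptive-content", "disinformation", "discrimination"],
};

console.log(findCoverageGaps(policy));
```

A gap report like this is a starting point for the legal-team mapping exercise, not a compliance determination; each category still requires substantive review against the jurisdictions flagged above.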
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Vercel AI.