Vercel AI · Vercel AI Acceptable Use Policy · View original document ↗

AI Disclosure Obligation

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

If you use Vercel's AI features to generate content that your users interact with, you must tell those users that the content is AI-generated whenever the law requires you to do so.

This analysis describes what Vercel AI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

This provision places a contractual disclosure obligation on account holders that mirrors and cross-references emerging legal requirements for AI-generated content transparency, creating a dual compliance obligation under both Vercel's AUP and applicable law.

Interpretive note: The obligation is triggered by 'applicable law,' which varies by jurisdiction and requires account holders to independently assess disclosure requirements across all markets where their applications operate; the document provides no guidance on which specific laws apply.

Consumer impact (what this means for users)

Developers deploying AI-powered applications on Vercel are required to disclose AI-generated content to end users where legally mandated, which engages disclosure requirements under the EU AI Act, emerging US state AI laws, and any other applicable jurisdiction-specific regulations.

How other platforms handle this

Microsoft · Medium

Microsoft commits to transparency about when users are interacting with AI systems, including disclosure of AI-generated content, notification when AI is being used in consequential contexts, and provision of meaningful information about AI system capabilities and limitations to enable informed user...

Mistral AI · Medium

Training Datasets. In some cases, we access datasets provided by third parties for our model training purposes. These datasets may include personal data (even if such third parties and Mistral AI use good practices to filter out such personal data), proprietary data, or public data. [...] Data publi...

Apple · Medium

Apps using AI-generated content must clearly indicate when content is AI-generated. Apps must not use AI-generated content to deceive or mislead users. Developers must disclose in their privacy nutrition labels if their app uses AI to generate content that could be mistaken for real people or events...


Monitoring

Vercel AI has changed this document before.

Original Clause Language

"You must disclose to your users when they are interacting with AI-generated content where such disclosure is required by applicable law."

— Excerpt from Vercel AI's Vercel AI Acceptable Use Policy

ConductAtlas Analysis

Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision directly engages the EU AI Act's transparency obligations for providers and deployers of AI systems, including requirements to disclose AI-generated content under Articles 50 and 52 of the EU AI Act. It also interacts with emerging US state laws, including California's AB 2602 and other deepfake and AI disclosure statutes, as well as the FTC's guidance on AI-generated endorsements and deceptive practices. The provision's scope depends entirely on which jurisdiction's 'applicable law' governs the specific deployment, creating significant interpretive complexity for globally deployed applications.

GOVERNANCE EXPOSURE: Medium. The provision is clear in its obligation but entirely dependent on account holders correctly identifying when disclosure is legally required across all applicable jurisdictions. Organizations without active AI governance programs or legal monitoring for AI disclosure law developments face heightened risk of inadvertent non-disclosure.

JURISDICTION FLAGS: EU and EEA customers face the most immediate and specific exposure under the EU AI Act's transparency requirements, which are directly enforceable. California customers should assess compliance with applicable California AI and deepfake disclosure statutes. Organizations deploying AI in consumer-facing applications across multiple jurisdictions should conduct a jurisdiction-by-jurisdiction review of AI disclosure obligations.

CONTRACT AND VENDOR IMPLICATIONS: Procurement and legal teams should ensure that their AI governance policies and product design specifications address disclosure obligations for AI-generated content in all markets where their Vercel-hosted applications operate. This may require updating user-facing interfaces, terms of service, and privacy notices to incorporate AI-generated content disclosures.
COMPLIANCE CONSIDERATIONS: Compliance teams should implement a monitoring process for AI disclosure law developments in all jurisdictions where their applications operate, and establish a review cycle for updating disclosure mechanisms in Vercel-hosted AI applications. Legal teams should assess whether existing product disclosures satisfy both the EU AI Act requirements and any applicable US state disclosure obligations.
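In product terms, the review cycle described above often reduces to a per-market disclosure policy that legal maintains and the application consults before rendering AI output. The sketch below illustrates that pattern; the names, the policy table, and its entries are hypothetical and are not part of any Vercel API.

```typescript
// Illustrative sketch only: all names and policy entries here are hypothetical.
// A legal/compliance review populates a per-market disclosure flag; the app
// then prepends an approved notice to AI-generated content where required.

interface DisclosurePolicy {
  requiresDisclosure: boolean; // outcome of jurisdiction-specific legal review
  notice: string;              // user-facing wording approved by legal
}

// Placeholder policy table keyed by market code.
const policies: Record<string, DisclosurePolicy> = {
  "EU":    { requiresDisclosure: true, notice: "This content was generated by AI." },
  "US-CA": { requiresDisclosure: true, notice: "This content was generated by AI." },
};

// Prepend the disclosure notice when the market's policy requires it;
// pass content through unchanged for markets with no recorded obligation.
function labelAiContent(market: string, content: string): string {
  const policy = policies[market];
  if (!policy?.requiresDisclosure) return content;
  return `${policy.notice}\n\n${content}`;
}
```

Keeping the policy table separate from rendering code means a legal update to disclosure wording or scope does not require touching application logic.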


Applicable agencies

  • FTC
    The FTC has issued guidance on AI-generated content disclosure and endorsements and has enforcement authority over deceptive practices where AI-generated content is not disclosed to consumers.
  • State AG
    State attorneys general have enforcement authority over state-level AI disclosure and deepfake statutes, which vary by jurisdiction and are directly implicated by this provision.

Applicable regulations

  • California AB 2013 AI Training Data Transparency (US-CA)
  • Colorado AI Act (US-CO)
  • EU AI Act - High Risk Provisions (EU)
  • Trump Executive Order on AI Policy Framework (US)

Provision details

Document information
  Document: Vercel AI Acceptable Use Policy
  Entity: Vercel AI
  Document last updated: May 12, 2026

Tracking information
  First tracked: May 12, 2026
  Last verified: May 12, 2026
  Record ID: CA-P-011817
  Document ID: CA-D-00795

Evidence Provenance
  Source URL: Wayback Machine
  Content hash (SHA-256): 0730c1d755c16df96dd0393e7c4bb6d3d176980d12fede128df88e5ffc5dfb0a
  Analysis generated: May 12, 2026 15:18 UTC
  Evidence: ✓ Snapshot stored · ✓ Hash verified
Citation Record
Entity: Vercel AI
Document: Vercel AI Acceptable Use Policy
Record ID: CA-P-011817
Captured: 2026-05-12 15:18:17 UTC
SHA-256: 0730c1d755c16df9…
URL: https://conductatlas.com/platform/vercel-ai/vercel-ai-acceptable-use-policy/ai-disclosure-obligation/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
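The published SHA-256 hash in the record above can be re-checked independently: hash the archived snapshot bytes and compare the hex digest against the recorded value. A minimal sketch using Node's built-in crypto module follows; the snapshot path in the usage helper is hypothetical.

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Compute the SHA-256 hex digest of arbitrary bytes.
function sha256Hex(data: Buffer | string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Hypothetical usage: compare an archived snapshot file against the
// published content hash from the evidence record.
function verifySnapshot(snapshotPath: string, expectedHash: string): boolean {
  return sha256Hex(readFileSync(snapshotPath)) === expectedHash;
}
```

A matching digest shows the stored snapshot is byte-identical to the document the analysis was generated from; any edit to the snapshot changes the digest.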
Classification
  Severity: Medium



Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

Is ConductAtlas affiliated with Vercel AI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Vercel AI.