When you use AI21's products, the text you type into the AI and the responses you receive may be saved and used to improve AI21's AI models. Enterprise API customers have separate contractual protections that may limit this use.
This analysis describes what AI21 Labs's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
Your prompts may contain sensitive, personal, or confidential information, and understanding that this content could be used for model training is important for deciding what to share with AI21's products.
Interpretive note: The exact scope of model training use and the conditions under which it applies to specific product tiers are not fully detailed in the available document excerpt, creating interpretive uncertainty.
Consumer product users' prompts and AI-generated outputs may be retained and used for model training and improvement, while enterprise API customers are governed by a separate data processing agreement that may restrict this use. Users who inadvertently share sensitive information in prompts should be aware that this content is collected and potentially used beyond the immediate interaction.
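One practical mitigation, given that consumer-tier prompts may be retained for training, is to scrub obvious identifiers before a prompt ever leaves your environment. The sketch below is purely illustrative and is not part of any AI21 SDK; the patterns and placeholder labels are assumptions, and regex-based scrubbing catches only common formats, not all sensitive content.

```python
import re

# Hypothetical pre-submission scrubber: masks common identifier patterns
# before a prompt is sent to a third-party AI service, reducing what
# could be retained and used for model training.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com or 555-123-4567 about the contract."
print(scrub_prompt(prompt))
# → Contact [EMAIL REDACTED] or [PHONE REDACTED] about the contract.
```

A filter like this does not substitute for the contractual protections of an enterprise data processing agreement; it only narrows accidental exposure through consumer-facing interfaces.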
How other platforms handle this
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Monitoring
AI21 Labs has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"When you use our Services, we collect the prompts you submit and the outputs generated by our AI models. We may use this information to improve our Services, including training and fine-tuning our AI models. If you are an enterprise customer using our API, your data is handled pursuant to the applicable data processing agreement between AI21 and your organization."
— Excerpt from AI21 Labs's Privacy Policy
REGULATORY LANDSCAPE

This provision engages GDPR Article 5 (purpose limitation and data minimisation), Article 6 (lawful basis for processing), and Article 13 (transparency obligations). The use of prompt data for model training may require a clear lawful basis, and if legitimate interests is asserted, a balancing test is required. CCPA and CPRA also apply to California residents' prompt data as personal information. The EU AI Act may impose additional obligations on high-risk AI system training data practices.

GOVERNANCE EXPOSURE

High. The use of user-submitted prompts for model training creates significant compliance exposure because users may submit sensitive, confidential, or special-category data (health information, financial details, legal matters) without realizing it could be retained and used for training. The policy's distinction between consumer and enterprise users is operationally important but may not be consistently communicated at the point of data entry.

JURISDICTION FLAGS

EU and EEA users are protected by GDPR purpose limitation requirements, which may constrain the use of prompts for model training without explicit consent or a clearly documented legitimate interest assessment. California residents have CCPA rights to know and opt out of certain uses. UK GDPR applies equivalent restrictions for UK users. Enterprise customers in regulated sectors (healthcare, legal, financial) face heightened exposure if employees submit sector-specific data via consumer-facing interfaces rather than enterprise API endpoints.

CONTRACT AND VENDOR IMPLICATIONS

Enterprise procurement teams should request and review the applicable data processing agreement to confirm that prompt data is not used for model training without explicit authorization. The policy's reference to a separate enterprise agreement creates a material distinction that should be reflected in vendor assessment documentation. B2B contracts should specify which product tier is being used and which data handling regime applies.

COMPLIANCE CONSIDERATIONS

Legal teams should evaluate whether the current consent or notice mechanism adequately discloses model training use at the point of prompt submission. Data mapping exercises should distinguish between consumer and API data flows. If the organization operates in a regulated sector, a data protection impact assessment (DPIA) may be warranted before deploying AI21 consumer products for business use.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by AI21 Labs.