Cursor will not use your code inputs or the AI suggestions it generates to train AI models unless you actively choose to allow it. You can manage this setting within the Cursor application.
This analysis describes what Cursor's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
This provision establishes that the default position is no use of user content for AI training, which is a contractually explicit opt-in framework rather than a passive opt-out arrangement.
Under this provision, users' code inputs and AI-generated suggestions are not used to train AI models by default; users who have not explicitly consented retain this protection and can verify or manage their consent status within the Service settings.
How other platforms handle this
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi...
Users under 18 years old interact with an age-appropriate model specifically designed to reduce the likelihood of exposure to sensitive or suggestive content. Our under-18 model has additional and more conservative classifiers than the model for our adult users so we can enforce our content policies...
Monitoring
Cursor has changed this document before.
"ANYSPHERE WILL NOT USE CONTENT TO TRAIN, OR ALLOW ANY THIRD PARTY TO TRAIN, ANY AI MODELS, UNLESS YOU'VE EXPLICITLY AGREED TO THE USE OF CONTENT FOR TRAINING. You can find instructions in the Service for how to manage your preferences regarding the use of Inputs and Suggestions for training." — Excerpt from Cursor's Terms of Service
REGULATORY LANDSCAPE: This provision engages GDPR Article 6 (lawful basis for processing) and Article 9 where code may contain special category data, as well as CCPA obligations regarding use of personal information beyond the disclosed purpose. The FTC has issued guidance on AI transparency and data use disclosures relevant to this commitment. Where Anysphere acts as a data processor for enterprise customers, GDPR Article 28 may require this commitment to be formalized in a Data Processing Agreement.

GOVERNANCE EXPOSURE: Medium. The provision creates a contractual commitment against using Content for AI training absent explicit user consent. Compliance exposure arises if platform technical controls do not fully implement this commitment, or if the distinction between "Content" and "Usage Data" (Section 5.4) creates operational ambiguity about what data may still be processed for model-adjacent purposes such as analytics or service improvement.

JURISDICTION FLAGS: EU/EEA organizations face heightened exposure if this commitment is not backed by a formal Data Processing Agreement, as GDPR requires documented processing instructions. California-based enterprise users may also evaluate this under CCPA service-provider limitations on secondary use of personal information.

CONTRACT AND VENDOR IMPLICATIONS: Enterprise procurement teams should request confirmation that this opt-in commitment is technically enforced at the infrastructure level, not solely contractual. Vendor assessments should include verification of audit mechanisms that demonstrate compliance with this restriction, particularly for any subprocessors Anysphere engages.

COMPLIANCE CONSIDERATIONS: Legal teams should review whether the platform's in-app consent mechanism for training opt-in constitutes valid, freely given, specific, and informed consent under GDPR. Data mapping exercises should clearly distinguish Content (Inputs and Suggestions) from Usage Data (Section 5.4), as only Content is subject to the training opt-in restriction; Usage Data may still be processed and disclosed to third parties in aggregated or de-identified form.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cursor.