Cursor · Cursor Privacy Policy · View original document ↗

AI Training Data Opt-In (Inputs and Suggestions)

Medium severity · High confidence · Explicit document language · Unique · 0 of 325 platforms
Document Record

What it is

Cursor states it does not use the code or text you type (Inputs) or the AI responses you receive (Suggestions) to train its models by default. This protection has three exceptions: if content is flagged for security review, if you report it as feedback, or if you explicitly opt in.

This analysis describes what Cursor's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

Under the security review exception, Inputs flagged for Terms of Service enforcement may be analyzed by Anysphere. This conditional pathway applies even without the user's explicit opt-in to training use.

Consumer impact (what this means for users)

The policy states your submitted code and AI-generated responses are excluded from model training by default, but content flagged for security review or reported as Feedback may be used for analysis. Users can manage their training preferences through in-Service settings.

What you can do

⚠️ These actions may provide transparency or partial mitigation but may not fully address the underlying issue. Effectiveness varies by jurisdiction and individual circumstances.
  • Opt Out of Training Use
    Open the Cursor application, navigate to Settings, and locate the preferences section for Inputs and Suggestions training use. Confirm your opt-out preference is selected.

How other platforms handle this

Writer Medium

Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.

Ideogram Medium

We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.

Roblox Medium

We are simplifying our Terms of Use, including clarifications around the use of AI tools, and their data use. We have moved the terms that describe AI Features, which were previously written for a Creator audience and located under the AI-Based Tools Supplemental Terms and Disclaimer, into the User ...

See all platforms with this clause type →

Monitoring

Cursor has changed this document before.

Original Clause Language (Document Record)

"We do not use Inputs or Suggestions to train our models, or permit third parties to use them for training, unless: (1) they are flagged for security review (in which case we may analyze them to improve our ability to detect and enforce our Terms of Service), (2) you explicitly report them to us (for example, as Feedback), or (3) you've explicitly agreed to their use for such training purposes. You can find instructions in the Service on how to manage your preferences regarding the use of Inputs and Suggestions for training."

— Excerpt from Cursor's Cursor Privacy Policy

Institutional analysis (Compliance & governance intelligence)

(1) REGULATORY LANDSCAPE: This provision engages GDPR principles of purpose limitation and data minimization (Articles 5 and 6) for EEA users, as well as CCPA provisions governing use of personal data. The opt-in structure for training use aligns with consent-based processing requirements under GDPR Article 6(1)(a). The FTC has authority over deceptive practices related to data use representations for US users.

(2) GOVERNANCE EXPOSURE: Medium. The provision creates a clearly defined opt-in default for training use, which reduces exposure for standard use cases. However, the security review exception introduces a processing pathway under which Inputs may be analyzed without user consent to training use. Organizations handling proprietary or sensitive source code should evaluate whether this exception is compatible with their confidentiality obligations and data classification policies.

(3) JURISDICTION FLAGS: EEA and UK users may evaluate whether the security review exception constitutes a separately identified legal basis under GDPR, given that it is not framed as consent. California users should note the policy's alignment with CCPA restrictions on use of personal data beyond disclosed purposes. Organizations in regulated industries (financial services, healthcare, legal) face heightened exposure if sensitive code or data is submitted as Inputs and subsequently falls within the security review exception.

(4) CONTRACT AND VENDOR IMPLICATIONS: Enterprise procurement teams should confirm whether customer agreements with Anysphere provide contractual limitations on the security review exception or additional protections for submitted code. The provision does not assert that flagged Inputs are retained or used for training, but the language authorizes analysis for enforcement purposes, which should be addressed in DPA negotiations.

(5) COMPLIANCE CONSIDERATIONS: Compliance teams should map internal data classification categories against the types of Inputs users submit to Cursor, and assess whether any categories of sensitive code could trigger the security review pathway. Consent mechanism reviews should confirm that the opt-in for training use is implemented as a genuine affirmative action rather than a pre-checked setting. Employee notification obligations may apply in jurisdictions requiring disclosure of workplace monitoring if administrators can access Input history.


Applicable agencies

  • FTC
    The FTC has authority over deceptive or unfair data use practices; the training opt-in representation is a material disclosure subject to FTC oversight.
    File a complaint →

Applicable regulations

EU AI Act
European Union
California AB 2013 AI Training Data Transparency
US-CA
Colorado AI Act
US-CO
EU AI Act - High Risk Provisions
EU
GDPR
European Union
Texas AI Act
Texas, USA
Trump Executive Order on AI Policy Framework
US

Provision details

Document information
Document
Cursor Privacy Policy
Entity
Cursor
Document last updated
May 5, 2026
Tracking information
First tracked
May 7, 2026
Last verified
May 12, 2026
Record ID
CA-P-011599
Document ID
CA-D-00452
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
1e5849a4a5fbaa739f760d04f8a003ee1ec366c9f4216cb1cb0ea9b8cf9d01f3
Analysis generated
May 7, 2026 17:01 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
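The "Hash verified" claim above can be checked independently: recompute the SHA-256 digest of the archived snapshot file and compare it with the recorded content hash. A minimal sketch (the snapshot path and helper name here are hypothetical, not part of ConductAtlas's tooling):

```python
import hashlib

def verify_snapshot(path: str, expected_hex: str) -> bool:
    """Recompute a snapshot file's SHA-256 digest and compare it
    against the recorded content hash (case-insensitive hex)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large snapshots don't load fully into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()
```

A match confirms the stored snapshot is byte-identical to the document that was hashed at capture time; any edit to the file, however small, produces a different digest.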
Citation Record
Entity: Cursor
Document: Cursor Privacy Policy
Record ID: CA-P-011599
Captured: 2026-05-07 17:01:07 UTC
SHA-256: 1e5849a4a5fbaa73…
URL: https://conductatlas.com/platform/cursor/cursor-privacy-policy/ai-training-data-opt-in-inputs-and-suggestions/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium

Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does Cursor's AI Training Data Opt-In (Inputs and Suggestions) clause do?

By default, the clause excludes your Inputs (the code and text you type) and the AI Suggestions you receive from model training. It carves out three exceptions: content flagged for security review, content you explicitly report as Feedback, and content you have explicitly opted in to training use. Under the security review exception, flagged Inputs may be analyzed by Anysphere for Terms of Service enforcement even without your consent to training use.

How does this clause affect you?

The policy states your submitted code and AI-generated responses are excluded from model training by default, but content flagged for security review or reported as Feedback may be used for analysis. Users can manage their training preferences through in-Service settings.

Is ConductAtlas affiliated with Cursor?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cursor.