This analysis describes what Cursor's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
For developers, the code you write and the AI suggestions you receive may contain proprietary or sensitive information; this clause limits how that content can be used for model training.
Interpretive note: The document does not define the threshold or process for a 'security review' flag, leaving the scope of that exception ambiguous.
The policy states that Cursor collects code Inputs you submit and AI-generated Suggestions, along with IP addresses, device information, log data, and usage and browsing activity within the Service. For users on workplace or enterprise accounts, the policy states that account-related information (including email address and account status) may be disclosed to the employer organization, and that administrators may access and manage the user's service activity. You can manage whether Inputs and Suggestions are used for model training through the settings available within the Service.
How other platforms handle this
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
We may use the content you provide to us, including prompts and generated images, to train and improve our AI models and services.
Training Datasets. In some cases, we access datasets provided by third parties for our model training purposes. These datasets may include personal data (even if such third parties and Mistral AI use good practices to filter out such personal data), proprietary data, or public data. [...] Data publi...
Monitoring
Cursor has changed this document before.
"We do not use Inputs or Suggestions to train our models, or permit third parties to use them for training, unless: (1) they are flagged for security review (in which case we may analyze them to improve our ability to detect and enforce our Terms of Service), (2) you explicitly report them to us (for example, as Feedback), or (3) you've explicitly agreed to their use for such training purposes. You can find instructions in the Service on how to manage your preferences regarding the use of Inputs and Suggestions for training." — Excerpt from Cursor's Privacy Policy
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Cursor.