Whatnot may use your personal data, including your activity and content on the platform, to train artificial intelligence and machine learning systems.
This analysis describes what Whatnot's agreement states, permits, or reserves; it does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
Your purchases, messages, viewing history, and other activity may be used to build and refine AI systems, and the policy does not specify how long this training data may be retained or for what purposes it may be used.
Interpretive note: The provision does not specify which categories of personal data are used for AI training, making it difficult to assess the full scope of this use without additional technical documentation.
User-generated content and behavioral data may be used to train AI models, which could involve processing at scale with limited transparency about how inferences derived from that training are used downstream.
How other platforms handle this
We use Personal Data to detect and prevent fraud, and to develop and improve our fraud detection models and other machine learning systems. This may include using transaction data, device information, and other Personal Data to train and refine our systems.
We use your personal data to develop, train, and improve our artificial intelligence and machine learning models. This includes using your transaction data, behavioral data, and interaction data to enhance our fraud detection, credit assessment, and personalization capabilities. We take steps to pro...
Writer does not use Customer Data to train its AI models without explicit customer permission. Customer Data means the data, content, and information that customers and their end users submit to or through the Services.
Monitoring
Whatnot has changed this document before.
"We may use the information we collect to develop, train, and improve our AI and machine learning models and systems, including to personalize your experience on the platform, to improve our recommendations, and to develop new features and services." — Excerpt from Whatnot's Privacy Policy
REGULATORY LANDSCAPE: This provision may require evaluation under GDPR Articles 13 and 14 regarding transparency of data use purposes, and Article 22 on automated decision-making and profiling if AI outputs are used to make decisions affecting users. The EU AI Act may impose additional obligations depending on the risk classification of AI systems trained on user data. The FTC has issued guidance on AI and data practices. UK ICO guidance on AI and data protection is also relevant for UK users.

GOVERNANCE EXPOSURE: Medium. The provision is broadly worded and does not specify categories of data used for AI training, retention periods for training datasets, or the types of AI systems being developed. This lack of specificity may create exposure under GDPR transparency requirements and emerging AI governance frameworks.

JURISDICTION FLAGS: EU and UK users face heightened exposure given GDPR's strict transparency and purpose limitation requirements. California's CPRA may require disclosure of AI-related data uses in the privacy notice. Illinois BIPA could be relevant if AI systems process biometric data derived from user content.

CONTRACT AND VENDOR IMPLICATIONS: If AI training is conducted by or with third-party vendors, data processing agreements must address the use of personal data for model training, including restrictions on vendor use of the data for their own model development. This is an active area of regulatory scrutiny.

COMPLIANCE CONSIDERATIONS: Legal teams should assess whether the current privacy notice provides sufficient specificity about AI training data uses to satisfy GDPR transparency requirements, and whether a data protection impact assessment is required. The policy should be reviewed to confirm that AI training uses are consistent with the original purposes for which data was collected, or that appropriate consent or legitimate interest assessments have been conducted.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Whatnot.