Stripe uses your transaction history, device data, and other personal information to train its fraud detection algorithms and other AI/machine learning systems.
Your transaction data and device information feed Stripe's machine-learning fraud models. Automated systems trained on your behavior may therefore decide whether future transactions, including those of other consumers, are flagged as fraudulent, and you have limited ability to contest those decisions.
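To make the mechanism concrete, here is a purely hypothetical sketch of how a trained fraud model turns transaction and device features into an automated allow/block decision. Nothing here reflects Stripe's actual models; the feature names, weights, and threshold are all invented for illustration.

```python
# Hypothetical illustration of automated fraud scoring.
# Feature names, weights, and the 0.6 threshold are invented;
# this does not describe Stripe's actual system.

def fraud_score(txn: dict) -> float:
    """Toy linear risk model over transaction/device features."""
    score = 0.0
    score += 0.4 if txn.get("new_device") else 0.0        # unfamiliar device
    score += 0.3 if txn.get("country_mismatch") else 0.0  # IP vs. card country
    score += min(txn.get("amount", 0) / 10_000, 0.3)      # large amounts add risk
    return score

def decide(txn: dict, threshold: float = 0.6) -> str:
    """Automated decision with no human in the loop -- the pattern
    GDPR Art. 22 regulates when effects are legally significant."""
    return "blocked" if fraud_score(txn) >= threshold else "allowed"

print(decide({"new_device": True, "country_mismatch": True, "amount": 500}))
```

The point of the sketch is that the decision is fully automated: once the score crosses a threshold, the transaction is blocked with no human review, which is exactly the kind of processing the regulatory framework below addresses.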
Your personal payment behavior is used as training data for Stripe's commercial AI systems, which may affect how future transactions by you or others are assessed, with limited transparency about how these models operate.
REGULATORY FRAMEWORK: GDPR Art. 22 restricts automated decision-making that produces legal or similarly significant effects; such processing is permitted only where it is necessary for a contract, authorized by Union or Member State law, or based on the data subject's explicit consent (Art. 22(2)(a)-(c)), and affected individuals retain the right to obtain human intervention and contest the decision (Art. 22(3)). GDPR Art. 5(1)(b) (purpose limitation) requires that reuse of data for ML training be compatible with the purpose for which the data was originally collected. The EU AI Act (Regulation (EU) 2024/1689) classifies certain credit- and risk-scoring AI systems as high-risk, imposing transparency and conformity-assessment obligations, though Annex III carves out AI systems used to detect financial fraud. FTC Act Section 5 applies to deceptive or unfair AI practices.