PayPal uses your personal data, including your transaction history, account information, and browsing behavior, to train its AI systems. It also relies on automated decision-making tools that may determine whether you are flagged for fraud or approved for financial products, in some cases without a human reviewing your case.
Your financial and behavioral data is being used to train AI models, and automated systems are making consequential decisions about you without human review, which can affect your access to services and your financial standing.
1. REGULATORY FRAMEWORK: This provision implicates GDPR Art. 22 (automated individual decision-making, including profiling), which grants individuals the right not to be subject to solely automated decisions with legal or similarly significant effects and requires disclosure of meaningful information about the logic involved. The EU AI Act (Regulation (EU) 2024/1689) classifies credit-scoring AI as high-risk under Annex III, triggering conformity assessments and transparency obligations; note that AI systems used to detect financial fraud are expressly excluded from that Annex III category. The CCPA/CPRA does not yet provide an explicit opt-out right for automated decision-making, though the California Privacy Protection Agency (CPPA) has proposed regulations addressing it. FTC Act Section 5 applies to deceptive or unfair AI practices.