Salesforce publicly commits to building trustworthy AI products with human oversight, including autonomous AI agents, as part of its ethical and humane use framework.
Organizations using Salesforce AI products — including Agentforce autonomous agents — should understand that Salesforce's AI trust commitments are policy statements, not contractual warranties; the binding AI-related obligations are those set out in their master subscription agreement and any AI-specific addenda.
As Salesforce deploys autonomous AI agents within its products, the ethical use commitments and trust frameworks referenced here have direct implications for customers who rely on AI-driven automation in their business operations.
Regulatory framework: Salesforce's AI products implicate the EU AI Act (Regulation 2024/1689), which classifies certain AI systems as high-risk — for example, those used in employment, credit, or public-services contexts — and subjects them to mandatory transparency, human oversight, and conformity assessment requirements. Section 5 of the FTC Act applies to deceptive AI practices, and the NIST AI Risk Management Framework (AI RMF 1.0) is a relevant voluntary standard. State AI laws (Colorado SB 205, Illinois, Texas HB 4664) are also emerging.
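To make the EU AI Act classification concrete, the following is a minimal, purely illustrative sketch of how a compliance team might triage AI use cases against the high-risk contexts named above. The function name, category set, and obligation strings are assumptions for illustration only; actual classification turns on legal analysis of the Regulation's Annex III, not a lookup table.

```python
# Illustrative sketch only — NOT a legal determination. The high-risk
# contexts below are the examples named in the text (employment, credit,
# public services); Annex III of Regulation 2024/1689 is broader.

HIGH_RISK_CONTEXTS = {"employment", "credit", "public_services"}

def triage_risk_tier(use_case_context: str) -> str:
    """Rough first-pass triage of an AI use case by deployment context."""
    if use_case_context in HIGH_RISK_CONTEXTS:
        # High-risk systems carry transparency, human oversight, and
        # conformity assessment obligations under the EU AI Act.
        return "high-risk: transparency, human oversight, conformity assessment"
    # Anything else still needs review against the full Annex III list.
    return "not in sampled high-risk list: full legal review required"

print(triage_risk_tier("employment"))
print(triage_risk_tier("marketing_copy"))
```

A real compliance workflow would route both outcomes to counsel; the sketch only shows why deployment context, not the underlying model, drives the risk tier.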