D&B uses AI systems to generate scores, ratings, and analytics about businesses and individuals, and has obtained a third-party certification (TRUSTe Responsible AI) attesting to responsible AI practices as of 2024.
This analysis describes what Dun & Bradstreet's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology.
AI-generated scores and ratings produced by D&B may influence credit decisions, business risk assessments, and professional due diligence about individuals, making the governance of these systems material to both individuals and the organizations that rely on D&B data.
Interpretive note: The specific AI systems in scope, their risk classifications, and the precise scope of the TRUSTe audit are disclosed only on a linked sub-page not reproduced in this document, limiting the ability to fully assess this provision.
AI-generated outputs from D&B, such as creditworthiness scores or risk ratings, may affect how businesses assess your organization or your professional standing. The TRUSTe certification provides third-party attestation to responsible AI practices, but the specific AI systems and their scope are detailed only on a linked sub-page not fully reproduced in this document.
How other platforms handle this
ISO/IEC 42001:2023
When you use AI features of the Services, you acknowledge that your inputs may be processed by third-party AI providers. ClickUp may use anonymized and aggregated data derived from your use of the Services to improve and train AI models and features.
We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization).
Monitoring
Dun & Bradstreet has changed this document before.
Receive same-day alerts, structured change summaries, and monitoring for up to 10 platforms.
"Some of the systems we use to process data are AI Systems. We aggregate data, combine, and generate data, including scores, ratings, and other analytics. TRUSTe Responsible AI Certification (2024)"
— Excerpt from Dun & Bradstreet's D&B Privacy Policy
REGULATORY LANDSCAPE: The use of AI to generate scores and ratings implicates the EU AI Act, particularly provisions relating to high-risk AI systems used in creditworthiness assessment and risk classification (Annex III). The FTC's guidance on AI and algorithmic accountability is also relevant for U.S. operations. State-level AI governance frameworks, including the Colorado AI Act (SB 205), which addresses consequential decisions made by algorithmic systems, may apply depending on use case and jurisdiction.

GOVERNANCE EXPOSURE: Medium. The TRUSTe Responsible AI Certification (2024) provides a baseline assurance signal, but the certification standard's specific requirements and audit scope are not detailed in this document. Organizations using D&B AI-generated scores for consequential decisions (credit, hiring, risk) should assess whether those use cases require additional human review or disclosure obligations under applicable law.

JURISDICTION FLAGS: EU operations are most exposed given the EU AI Act's explicit requirements for high-risk AI systems, including documentation, human oversight, and transparency obligations that may apply to credit risk or business scoring systems. Colorado's AI Act and analogous state-level proposals in the U.S. create additional compliance surface area for AI-driven decisioning that affects Colorado consumers or businesses.

CONTRACT AND VENDOR IMPLICATIONS: Organizations licensing D&B AI-generated scores for use in automated decisioning should assess whether their vendor agreements include representations about model governance, bias testing, and explainability. The existence of a TRUSTe Responsible AI Certification may be cited in vendor due diligence, but procurement teams should request the specific scope and findings of the certification audit.
COMPLIANCE CONSIDERATIONS: Compliance teams should review D&B's linked AI Systems sub-page for detailed disclosures about which AI systems are in scope, what data they process, and what governance controls are in place. Teams operating in the EU should assess whether D&B's AI systems qualify as high-risk under the EU AI Act and whether contractual obligations on D&B as an AI system provider are adequately addressed.
Full compliance analysis
Regulatory citations, enforcement risk, and due diligence action items.
Free: track 1 platform + weekly digest. Watcher: 10 platforms + same-day alerts. No credit card required.
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Professional Governance Intelligence
Need to monitor specific governance provisions?
Professional includes provision-level monitoring, governance timelines, regulatory mapping, and audit-ready analysis.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Dun & Bradstreet.