Dun & Bradstreet · D&B Privacy Policy

AI Systems Use and TRUSTe Responsible AI Certification

Medium severity · Medium confidence · Explicit document language · Unique (0 of 325 platforms)
Document Record

What it is

D&B uses AI systems to generate scores, ratings, and analytics about businesses and individuals, and has obtained a third-party certification (TRUSTe Responsible AI) attesting to responsible AI practices as of 2024.

This analysis describes what Dun & Bradstreet's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

AI-generated scores and ratings produced by D&B may influence credit decisions, business risk assessments, and professional due diligence about individuals, making the governance of these systems material to both individuals and the organizations that rely on D&B data.

Interpretive note: The specific AI systems in scope, their risk classifications, and the precise scope of the TRUSTe audit are disclosed only on a linked sub-page not reproduced in this document, limiting the ability to fully assess this provision.

Consumer impact (what this means for users)

AI-generated outputs from D&B, such as creditworthiness scores or risk ratings, may affect how businesses assess your organization or your professional standing. The TRUSTe certification provides third-party attestation to responsible AI practices, but the specific AI systems and their scope are detailed only on a linked sub-page not fully reproduced in this document.

How other platforms handle this

GitHub — Medium

ISO/IEC 42001:2023

ClickUp — Medium

"When you use AI features of the Services, you acknowledge that your inputs may be processed by third-party AI providers. ClickUp may use anonymized and aggregated data derived from your use of the Services to improve and train AI models and features."

Windsurf — Medium

"We may leverage OpenAI models independent of user selection for processing other tasks (e.g. for summarization). We may leverage Anthropic models independent of user selection for processing other tasks (e.g. for summarization). We may leverage these models independent of user selection for processi..."


Monitoring

Dun & Bradstreet has changed this document before.

Original Clause Language

"Some of the systems we use to process data are AI Systems. We aggregate data, combine, and generate data, including scores, ratings, and other analytics."

TRUSTe Responsible AI Certification (2024)

— Excerpt from Dun & Bradstreet's D&B Privacy Policy


Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: The use of AI to generate scores and ratings implicates the EU AI Act, particularly provisions relating to high-risk AI systems used in creditworthiness assessment and risk classification (Annex III). The FTC's guidance on AI and algorithmic accountability is also relevant for U.S. operations. State-level AI governance frameworks, including the Colorado AI Act (SB 205), which addresses consequential decisions made by algorithmic systems, may apply depending on use case and jurisdiction.

GOVERNANCE EXPOSURE: Medium. The TRUSTe Responsible AI Certification (2024) provides a baseline assurance signal, but the certification standard's specific requirements and audit scope are not detailed in this document. Organizations using D&B AI-generated scores for consequential decisions (credit, hiring, risk) should assess whether those use cases require additional human review or disclosure obligations under applicable law.

JURISDICTION FLAGS: EU operations are most exposed given the EU AI Act's explicit requirements for high-risk AI systems, including documentation, human oversight, and transparency obligations that may apply to credit risk or business scoring systems. Colorado's AI Act and analogous state-level proposals in the U.S. create additional compliance surface area for AI-driven decisioning that affects Colorado consumers or businesses.

CONTRACT AND VENDOR IMPLICATIONS: Organizations licensing D&B AI-generated scores for use in automated decisioning should assess whether their vendor agreements include representations about model governance, bias testing, and explainability. The existence of a TRUSTe Responsible AI Certification may be cited in vendor due diligence, but procurement teams should request the specific scope and findings of the certification audit.
COMPLIANCE CONSIDERATIONS: Compliance teams should review D&B's linked AI Systems sub-page for detailed disclosures about which AI systems are in scope, what data they process, and what governance controls are in place. Teams operating in the EU should assess whether D&B's AI systems qualify as high-risk under the EU AI Act and whether contractual obligations on D&B as an AI system provider are adequately addressed.


Applicable agencies

  • FTC
    The FTC has issued guidance on AI and algorithmic accountability and has authority over unfair or deceptive practices in AI-driven data processing by data brokers operating in the U.S.

Applicable regulations

GDPR
European Union

Provision details

Document information
Document
D&B Privacy Policy
Entity
Dun & Bradstreet
Document last updated
May 5, 2026
Tracking information
First tracked
May 7, 2026
Last verified
May 10, 2026
Record ID
CA-P-007990
Document ID
CA-D-00722
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
d8b56bc5d2b8bea4b35bf727a3c9d12d285801ea1c487d138b87ed807ca66d3d
Analysis generated
May 7, 2026 15:50 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
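The hash verification noted above can be reproduced with standard tooling: recompute the SHA-256 digest of a local copy of the archived document and compare it to the recorded hash. A minimal sketch, assuming you have saved the snapshot locally (the filename `snapshot.html` is hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Recorded hash from the provenance record above.
RECORDED = "d8b56bc5d2b8bea4b35bf727a3c9d12d285801ea1c487d138b87ed807ca66d3d"

# "snapshot.html" is a hypothetical local copy of the archived document:
# assert sha256_of("snapshot.html") == RECORDED
```

Any byte-level change to the snapshot (including whitespace or encoding differences) yields a different digest, so a match is only meaningful against the exact archived bytes.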
Citation Record
Entity: Dun & Bradstreet
Document: D&B Privacy Policy
Record ID: CA-P-007990
Captured: 2026-05-07 15:50:32 UTC
SHA-256: d8b56bc5d2b8bea4…
URL: https://conductatlas.com/platform/dun-bradstreet/db-privacy-policy/ai-systems-use-and-truste-responsible-ai-certification/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
Classification
Severity
Medium
Categories


Frequently Asked Questions

What does Dun & Bradstreet's AI Systems Use and TRUSTe Responsible AI Certification clause do?

The clause discloses that some of the systems D&B uses to process data are AI Systems that aggregate, combine, and generate data, including scores, ratings, and other analytics, and it cites a TRUSTe Responsible AI Certification obtained in 2024. These outputs may influence credit decisions, business risk assessments, and professional due diligence, making the governance of these systems material to both individuals and the organizations that rely on D&B data.

How does this clause affect you?

AI-generated outputs from D&B, such as creditworthiness scores or risk ratings, may affect how businesses assess your organization or your professional standing. The TRUSTe certification provides third-party attestation to responsible AI practices, but the specific AI systems and their scope are detailed only on a linked sub-page not fully reproduced in this document.

Is ConductAtlas affiliated with Dun & Bradstreet?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by Dun & Bradstreet.