
Preparedness Framework for Catastrophic Risk

Medium severity · Medium confidence · Explicit document language · Unique · 0 of 325 platforms
Recent governance activity: OpenAI recorded 5 documented changes in the last 30 days.
Document Record

What it is

OpenAI states it uses an internal framework to assess whether its most powerful AI models pose catastrophic risks before releasing them to the public.

This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances. Read our methodology

ConductAtlas Analysis

Why it matters (compliance & governance perspective)

The framework governs whether high-capability AI models are deployed at all, which directly affects what AI capabilities users and developers can access and under what safety conditions those systems are made available.

Interpretive note: The document describes the framework's existence and general purpose but does not specify evaluation criteria, thresholds, or governance override procedures, making operational assessment uncertain.

Consumer impact (what this means for users)

The Preparedness Framework, as described, sets the safety evaluation threshold for AI model deployment. Users of OpenAI products interact with models that have passed these internal evaluations, but the specific criteria and results are determined internally by OpenAI and, per this document, are not independently verified by a third party.

Cross-platform context

See how other platforms handle Preparedness Framework for Catastrophic Risk and similar clauses.


Monitoring

OpenAI has changed this document before.

Original Clause Language

"We have developed a Preparedness Framework to evaluate, forecast, and protect against catastrophic risks from frontier AI models. It creates a structured process for model evaluations ('evals') and establishes safety baselines that must be met before models can be deployed or developed further."

— Excerpt from OpenAI's OpenAI Safety Standards


Institutional analysis (Compliance & governance intelligence)

REGULATORY LANDSCAPE: This provision engages with the EU AI Act's requirements for risk assessments and conformity evaluations for high-risk and general-purpose AI systems, as well as the US Executive Order on AI's provisions regarding safety evaluations for frontier models. The FTC and EU AI Office are the most relevant enforcement authorities. The document's description of the framework is voluntary and self-reported; it does not constitute a regulatory filing or compliance certification under any cited framework.

GOVERNANCE EXPOSURE: Medium. The Preparedness Framework is described at a high level without specifying evaluation methodologies, scoring thresholds, or governance structures for overriding safety findings. Organizations deploying OpenAI models in regulated sectors may face scrutiny regarding whether this internal framework satisfies their own regulatory due diligence obligations.

JURISDICTION FLAGS: EU/EEA organizations face the highest exposure given EU AI Act requirements for GPAI model providers to publish summaries of training data, conduct adversarial testing, and maintain technical documentation. US federal contractors and entities subject to sector-specific AI guidance face secondary exposure. The voluntary nature of the framework means it may not satisfy mandatory transparency requirements in these jurisdictions.

CONTRACT AND VENDOR IMPLICATIONS: Procurement teams assessing OpenAI as a vendor for high-risk AI applications should request access to Preparedness Framework evaluation summaries or system cards rather than relying on this public page. The document does not assert audit rights for customers or third parties, and no independent verification mechanism is described.

COMPLIANCE CONSIDERATIONS: Compliance teams should evaluate whether reliance on OpenAI's self-reported Preparedness Framework satisfies their organization's AI risk management obligations under applicable regulation. Organizations subject to EU AI Act Article 55 obligations for GPAI models should map this framework against required technical documentation and systemic risk assessment requirements.


Applicable agencies

  • FTC
    The FTC has authority over unfair or deceptive practices related to AI safety claims and consumer protection implications of AI deployment decisions.

Provision details

Document information
Document
OpenAI Safety Standards
Entity
OpenAI
Document last updated
May 12, 2026
Tracking information
First tracked
May 12, 2026
Last verified
May 12, 2026
Record ID
CA-P-011956
Document ID
CA-D-00822
Evidence Provenance
Source URL
Wayback Machine
Content hash (SHA-256)
46e71f573cc43a08729a6d0f09664a16c71e3f8e5fb577e6a1437e692885647e
Analysis generated
May 12, 2026 16:33 UTC
Methodology
Evidence
✓ Snapshot stored   ✓ Hash verified
Citation Record
Entity: OpenAI
Document: OpenAI Safety Standards
Record ID: CA-P-011956
Captured: 2026-05-12 16:33:49 UTC
SHA-256: 46e71f573cc43a08…
URL: https://conductatlas.com/platform/openai/openai-safety-standards/preparedness-framework-for-catastrophic-risk/
Accessed: May 13, 2026
Permanent archival reference. Stable identifier suitable for legal filings, compliance documentation, and research citation.
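The provenance record above pairs the stored snapshot with a SHA-256 content hash so that a citation can be re-verified later. As a minimal sketch of that check, the code below recomputes a file's digest and compares it with the recorded value. The snapshot path is hypothetical, and the assumption that the hash covers the raw captured bytes (rather than, say, normalized text) is mine; ConductAtlas does not publish its exact hashing procedure.

```python
import hashlib
from pathlib import Path

# Recorded content hash from this record (CA-P-011956).
RECORDED_SHA256 = "46e71f573cc43a08729a6d0f09664a16c71e3f8e5fb577e6a1437e692885647e"


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_snapshot(path: Path, expected: str = RECORDED_SHA256) -> bool:
    """Return True if the stored snapshot's digest matches the recorded hash."""
    return sha256_of(path) == expected
```

A match proves only that the local copy is byte-identical to whatever was originally hashed; it says nothing about whether that capture faithfully reflects the live page at capture time.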
Classification
Severity
Medium


Built from archived source documents, structured governance mappings, and historical version tracking.

Frequently Asked Questions

What does OpenAI's Preparedness Framework for Catastrophic Risk clause do?

The framework governs whether high-capability AI models are deployed at all, which directly affects what AI capabilities users and developers can access and under what safety conditions those systems are made available.

How does this clause affect you?

The Preparedness Framework, as described, sets the safety evaluation threshold for AI model deployment. Users of OpenAI products interact with models that have passed these internal evaluations, but the specific criteria and results are determined internally by OpenAI and, per this document, are not independently verified by a third party.

Is ConductAtlas affiliated with OpenAI?

No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.