OpenAI states it uses an internal framework to assess whether its most powerful AI models pose catastrophic risks before releasing them to the public.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
The framework governs whether high-capability AI models are deployed at all, which directly affects what AI capabilities users and developers can access and under what safety conditions those systems are made available.
Interpretive note: The document describes the framework's existence and general purpose but does not specify evaluation criteria, thresholds, or governance override procedures, making operational assessment uncertain.
As described, the Preparedness Framework sets the safety evaluation threshold for AI model deployment. Users of OpenAI products interact with models that have passed these internal evaluations, but the specific criteria and results are determined internally by OpenAI; the document describes no independent third-party verification.
"We have developed a Preparedness Framework to evaluate, forecast, and protect against catastrophic risks from frontier AI models. It creates a structured process for model evaluations ('evals') and establishes safety baselines that must be met before models can be deployed or developed further."
— Excerpt from OpenAI's Safety Standards
REGULATORY LANDSCAPE: This provision engages with the EU AI Act's requirements for risk assessments and conformity evaluations for high-risk and general-purpose AI systems, as well as the US Executive Order on AI's provisions regarding safety evaluations for frontier models. The FTC and EU AI Office are the most relevant enforcement authorities. The document's description of the framework is voluntary and self-reported; it does not constitute a regulatory filing or compliance certification under any cited framework.

GOVERNANCE EXPOSURE: Medium. The Preparedness Framework is described at a high level without specifying evaluation methodologies, scoring thresholds, or governance structures for overriding safety findings. Organizations deploying OpenAI models in regulated sectors may face scrutiny regarding whether this internal framework satisfies their own regulatory due diligence obligations.

JURISDICTION FLAGS: EU/EEA organizations face the highest exposure given EU AI Act requirements for GPAI model providers to publish summaries of training data, conduct adversarial testing, and maintain technical documentation. US federal contractors and entities subject to sector-specific AI guidance face secondary exposure. Because the framework is voluntary, it may not satisfy mandatory transparency requirements in these jurisdictions.

CONTRACT AND VENDOR IMPLICATIONS: Procurement teams assessing OpenAI as a vendor for high-risk AI applications should request access to Preparedness Framework evaluation summaries or system cards rather than relying on this public page. The document does not assert audit rights for customers or third parties, and no independent verification mechanism is described.

COMPLIANCE CONSIDERATIONS: Compliance teams should evaluate whether reliance on OpenAI's self-reported Preparedness Framework satisfies their organization's AI risk management obligations under applicable regulation.
Organizations subject to EU AI Act Article 55 obligations for GPAI models should map this framework against required technical documentation and systemic risk assessment requirements.
Is ConductAtlas affiliated with OpenAI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.