OpenAI states it has dedicated a team and 20% of its computing resources to researching how to ensure that AI systems far more capable than current models remain aligned with human values and subject to human oversight.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This commitment describes how OpenAI allocates internal resources toward long-term safety research for AI systems that do not yet exist; it signals the organization's assessment of risk timelines and its stated prioritization of alignment research.
Interpretive note: The document states the 20% compute commitment but does not define how compute is measured, over what period the commitment applies, or what verification exists.
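As a purely illustrative sketch of why that measurement gap matters (all figures below are hypothetical and not drawn from the document), the same "20%" claim can describe quite different allocations depending on whether compute is counted as reserved capacity or as realized usage, and over what period:

```python
# Hypothetical illustration of why "20% of compute" is ambiguous without a
# stated measurement basis. All numbers are invented for demonstration only.

fleet_gpus = 10_000                    # hypothetical total accelerators
hours_per_year = 24 * 365

# Basis 1: share of reserved capacity (a fixed carve-out of the fleet).
reserved_for_alignment = 2_000         # hypothetical dedicated GPUs
share_by_capacity = reserved_for_alignment / fleet_gpus

# Basis 2: share of realized GPU-hours over a year, where the dedicated
# pool sits partly idle while the rest of the fleet runs near capacity.
alignment_utilization = 0.5            # hypothetical utilization of the pool
other_utilization = 0.9
alignment_hours = reserved_for_alignment * hours_per_year * alignment_utilization
other_hours = (fleet_gpus - reserved_for_alignment) * hours_per_year * other_utilization
share_by_usage = alignment_hours / (alignment_hours + other_hours)

print(f"Share by reserved capacity: {share_by_capacity:.1%}")   # 20.0%
print(f"Share by realized GPU-hours: {share_by_usage:.1%}")     # ~12.2%
```

Under these invented assumptions, a "20%" carve-out of capacity corresponds to roughly 12% of actual usage, which is the kind of ambiguity the interpretive note flags: without a stated basis, period, and verification mechanism, the commitment cannot be independently checked.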
This provision describes a research investment commitment rather than a consumer-facing right or obligation; it does not directly affect current users' data, fees, or legal rights, but it represents OpenAI's stated position on the safety trajectory of systems that may power future products that users access.
"We believe that superintelligence could arrive within the decade. Solving the technical problems around superintelligence alignment is one of the most important tasks of our time. We've established a Superalignment team to focus on this challenge, committing 20% of our compute to solving it.— Excerpt from OpenAI's OpenAI Safety Standards
REGULATORY LANDSCAPE: Long-term AI alignment research of the type described here is not currently subject to specific mandatory regulatory frameworks in most jurisdictions, though it intersects with the EU AI Act's requirements for systemic-risk assessments of general-purpose AI (GPAI) models with significant capabilities. The document does not specify how this research output translates into regulatory compliance deliverables.

GOVERNANCE EXPOSURE: Low. This is a strategic research commitment rather than an operational provision with near-term compliance implications. However, organizations with long-horizon AI governance programs may wish to track whether stated research commitments translate into published methodologies or safety standards.

JURISDICTION FLAGS: No immediate jurisdiction-specific exposure. Future AI regulation in the EU, UK, and US may impose mandatory safety research or reporting requirements that would make commitments of this type subject to regulatory scrutiny.

CONTRACT AND VENDOR IMPLICATIONS: This provision does not create contractual obligations for customers or partners. It is a voluntary corporate commitment with no stated verification or reporting mechanism accessible to external parties.

COMPLIANCE CONSIDERATIONS: Governance teams tracking AI vendor commitments should note the 20% compute allocation claim; if this commitment is relevant to procurement decisions, teams should seek documentation of how it is measured and reported, as no such mechanism is described in this document.
Is ConductAtlas affiliated with OpenAI? No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.