Businesses and developers who build products using GPT-4o are responsible for adding their own safety measures on top of OpenAI's, because OpenAI's built-in protections are designed as a starting point, not a complete solution for every application.
This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
This provision establishes that safety responsibility is shared between OpenAI and API operators, and that operators who deploy GPT-4o in sensitive contexts cannot rely solely on OpenAI's mitigations to address use-case-specific risks.
Interpretive note: The precise contractual language allocating responsibility between OpenAI and API operators appears in the OpenAI API terms of service rather than in the system card itself; this analysis relies on the system card's characterization of OpenAI's mitigations as a baseline.
Consumers using applications built on GPT-4o by third-party developers should be aware that the level of safety protection they experience depends not only on OpenAI's baseline restrictions but also on what additional safeguards the specific application operator has implemented.
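To make the operator's role concrete, here is a minimal, hypothetical sketch in Python of what one operator-side safeguard might look like: a wrapper that screens both the user's input and the model's output through a moderation check before anything is returned. The function names, messages, and stub implementations are illustrative assumptions, not a pattern prescribed by OpenAI; in a real deployment the `generate` and `moderate` callables might wrap OpenAI's Chat Completions and Moderation APIs plus use-case-specific rules.

```python
from typing import Callable

# Hypothetical operator-side guardrail (illustrative only): screen both the
# user's input and the model's reply with a moderation check before returning
# anything downstream. `generate` and `moderate` are injected so the sketch
# stays self-contained and testable without network calls.
def guarded_completion(
    user_input: str,
    generate: Callable[[str], str],
    moderate: Callable[[str], bool],  # returns True if the text is flagged
) -> str:
    if moderate(user_input):
        return "Sorry, that request can't be processed."
    reply = generate(user_input)
    if moderate(reply):
        return "Sorry, a safe response could not be produced."
    return reply

# Stub implementations for demonstration (no API calls made):
blocklist = {"dangerous"}
flag = lambda text: any(word in text.lower() for word in blocklist)
echo = lambda prompt: f"Echo: {prompt}"

print(guarded_completion("hello", echo, flag))      # passes both checks
print(guarded_completion("dangerous", echo, flag))  # blocked at the input stage
```

The point of the sketch is the layering: the model provider's built-in mitigations sit inside `generate`, while the operator's own checks wrap around it, which is the division of responsibility the system card describes.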
"The system card states that operators deploying GPT-4o through the API are responsible for implementing appropriate safeguards in their specific deployment contexts, and that OpenAI's mitigations represent a baseline rather than a comprehensive solution for all possible use cases."
— Excerpt from OpenAI's GPT-4o System Card (PDF)
REGULATORY LANDSCAPE: The allocation of safety responsibility between model providers and downstream operators is an active area of regulatory development under the EU AI Act, which distinguishes the obligations of providers from those of deployers of AI systems. The FTC has signaled interest in how AI liability is allocated across the deployment chain. This provision's characterization of OpenAI's mitigations as a baseline interacts with deployer obligations under the EU AI Act.

GOVERNANCE EXPOSURE: Medium. The explicit framing of OpenAI's mitigations as a baseline creates documented evidence that operators cannot treat them as sufficient for all contexts, which may be relevant in regulatory investigations or litigation involving harms caused by GPT-4o deployments.

JURISDICTION FLAGS: EU operators face the most clearly defined deployer obligations under the EU AI Act. US operators should monitor FTC guidance on AI liability allocation. Healthcare and financial services operators face sector-specific obligations regardless of how the model provider allocates responsibility.

CONTRACT AND VENDOR IMPLICATIONS: API agreements should be reviewed to understand how liability is allocated between OpenAI and operators for harms caused by GPT-4o deployments. Operators should document their own risk assessments and safeguards as evidence of due diligence.

COMPLIANCE CONSIDERATIONS: Compliance teams at organizations deploying GPT-4o should maintain records of their own risk assessments, safeguard implementations, and any deviations from OpenAI's recommended deployment guidelines, to demonstrate independent due diligence in the event of regulatory inquiry.
ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.