This analysis describes what OpenAI's agreement states, permits, or reserves. It does not constitute a legal determination about enforceability. Regulatory applicability and practical outcomes may vary by jurisdiction, enforcement context, and individual circumstances.
As AI agents become more capable of taking real-world actions, the consequences of model errors or misuse become more significant and harder to reverse, and this provision acknowledges that current safety measures are not sufficient to guarantee safe autonomous operation.
The document discloses that GPT-4o can process real-time audio and visual inputs, and that OpenAI identified and applied mitigations against risks including unauthorized speaker identification from voice inputs, generation of voices resembling real people without consent, and inference of emotional states from audio. Consumers interacting with GPT-4o through ChatGPT or through third-party applications built on the API may encounter these capabilities, depending on how operators configure the model. You can review OpenAI's usage policies and the system card at openai.com to understand which behaviors have been restricted and which residual risks OpenAI has acknowledged.
How other platforms handle this
Investing in industry-leading approaches to advance safety and security research and benchmarks, pioneering technical solutions to address risks, and sharing our learnings with the ecosystem.
For information on how we process personal data through "profiling" and "automated decision-making", please see our FAQ.
Our Additional Use Case Guidelines apply to certain other use cases, including consumer-facing chatbots, products serving minors, agentic use, and Model Context Protocol servers.
OpenAI has changed this document before.
"In agentic contexts, GPT-4o must apply particularly careful judgment about when to proceed versus when to pause and verify with the operator or user, since mistakes may be difficult to reverse, and could have downstream consequences within the same pipeline. We advise operators and users to follow the principle of minimal footprint where possible."
Excerpt from OpenAI's GPT-4o System Card (PDF)
How 10 AI platforms describe the use of user data for model training, improvement, and development, based on archived governance provisions.
Built from archived source documents, structured governance mappings, and historical version tracking.
No. ConductAtlas is an independent monitoring service. We are not affiliated with, endorsed by, or sponsored by OpenAI.