The EU Artificial Intelligence Act is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system, with obligations scaled to the level of risk a system poses to health, safety, and fundamental rights.
The Act categorizes AI systems into four risk tiers. Unacceptable-risk systems are banned outright; examples include social scoring and real-time remote biometric identification in publicly accessible spaces, the latter subject to narrow law-enforcement exceptions. High-risk systems must meet strict requirements, including conformity assessments, data governance, transparency, and human oversight. Limited-risk systems carry transparency obligations: users must be informed that they are interacting with AI. Minimal-risk systems face no specific obligations beyond voluntary codes of conduct.
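The four-tier structure above can be sketched as a simple lookup. This is an illustrative sketch only: the tier names follow the Act, but the mapping of tiers to one-line obligation summaries is a simplification, and real classification of a system depends on the Act's detailed provisions and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict pre-market and ongoing requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Condensed, non-exhaustive summaries of the obligations per tier,
# paraphrased from the description above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned outright",
    RiskTier.HIGH: "conformity assessment, data governance, transparency, human oversight",
    RiskTier.LIMITED: "inform users they are interacting with AI",
    RiskTier.MINIMAL: "voluntary codes of conduct only",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the condensed obligation summary for a risk tier."""
    return OBLIGATIONS[tier]
```

For example, `obligations_for(RiskTier.UNACCEPTABLE)` returns `"banned outright"`. An enum keeps the tiers closed and exhaustive, mirroring the Act's fixed four-level taxonomy.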
General-purpose AI (GPAI) models, including large language models, face specific obligations covering technical documentation, transparency, and copyright compliance; models designated as posing systemic risk must additionally undergo adversarial testing and report serious incidents. Penalties for non-compliance can reach EUR 35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious violations.