In a rapidly evolving technological landscape, the European Union is making a historic move: the introduction of the AI Act (I'll link you to the text). This new regulatory framework, expected to be voted on in 2024, aims to set new global standards of safety and transparency for AI developers, including OpenAI and others. By seeking to balance innovation with fundamental rights, the AI Act could mark a significant turning point in the regulation of AI worldwide.
What does the AI Act represent?
The AI Act, as mentioned, marks a turning point in EU technology policy: it introduces a comprehensive framework for the safety and transparency of artificial intelligence.
After more than 36 hours of actual negotiating time, EU officials finalized a set of guidelines that are currently the most stringent in the world. This act positions Europe as an example for other nations in the field of AI regulation.
What are the key points of the new AI Act?
The core of the AI Act is the classification of AI tools and applications into tiered "risk categories". Systems with the highest level of risk face the most intense regulatory scrutiny. These include autonomous vehicles, critical-infrastructure tools, medical devices, and biometric identification systems.
These "high-risk" systems will require fundamental-rights impact assessments, will be subject to strict transparency requirements, and will need to be registered in a public EU database.
Prohibitions and sanctions
In addition to defining risk categories, the AI Act outright prohibits certain uses of artificial intelligence. Among these are real-time facial recognition (a restriction pursued for some time), emotion recognition, and "social credit" scoring systems.
Major US technology companies (such as OpenAI and Google) that operate "general-purpose AI systems" will have to comply with new standards imposed by the EU. These include informing EU authorities about how their models are trained and adopting policies to comply with EU copyright law.
Tech companies that violate these rules could face significant fines, ranging from 1.5% to 7% of their total turnover. Is that a lot? A little? Enough to deter, or not? Questions destined to remain unanswered, for now.
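To get a rough feel for what that percentage band means in practice, here is a minimal back-of-the-envelope sketch. The turnover figure is purely hypothetical, chosen for illustration; the 1.5%–7% range is the one stated above.

```python
# Rough illustration of the fine band described above:
# penalties between 1.5% and 7% of a company's total turnover.

def fine_range(annual_turnover_eur: float) -> tuple[float, float]:
    """Return the (minimum, maximum) fine implied by the 1.5%-7% band."""
    return annual_turnover_eur * 0.015, annual_turnover_eur * 0.07

# Hypothetical company with 10 billion euros in annual turnover.
low, high = fine_range(10_000_000_000)
print(f"Fine range: EUR {low:,.0f} to EUR {high:,.0f}")
```

For a hypothetical company of that size, the band would span from 150 million to 700 million euros, which gives a sense of scale when weighing whether the sanctions are dissuasive.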
Where doubts arise
Despite widespread support, the AI Act has raised concerns among European privacy experts. Some argue that the framework places too little emphasis on fundamental human rights, in contrast to earlier approaches such as the GDPR. There is also concern that the risk-based approach may not fully capture the future impact of seemingly low-risk AI tools.
Pending the final vote in 2024 (and perhaps some amendments), the impact of this regulation on the AI landscape remains a matter of speculation. The rapid evolution of AI poses significant challenges to its enforcement and long-term effectiveness.
The AI Act nevertheless represents an ambitious and laudable attempt by the European Union to navigate the complex balance between technological innovation and the protection of fundamental rights. Although the path ahead is still uncertain, the AI Act could serve as a model for the global regulation of artificial intelligence, significantly shaping the future of both the technology and society.