In Brussels, the atmosphere is as tense as on election day. In the corridors of the Berlaymont, people speak in hushed tones: some say "simplification," others "surrender." The AI Act dossier returns to the table with a proposal for a pause: a year of respite for high-risk AI developers, time to "adapt without blocking the market." In reality, this is the first crack in the European regulatory wall.
AI Act: Europe slows down to avoid being left behind
The AI Act was born as Europe's answer to Silicon Valley: a framework of rules designed to regulate algorithms and generative models with the rigor of an industrial regulation. Now the Commission proposes to grant "high-risk" artificial intelligence producers a twelve-month grace period, postponing sanctions until 2027. The official motivation is pragmatic: "to avoid hindering innovation." Underlying this, however, is the fear of losing competitiveness with respect to the United States and, above all, China.
The decision comes at a delicate political moment. Pressure from Big Tech and the new American administration is strong. The Union, which had promised the ethical “gold standard” of AI, risks becoming a field of compromise. Brussels talks about simplification, but the effect is that of easing the brakes just as the global race for artificial intelligence is accelerating.
The price of caution
The paradox is clear: Europe fears it's too slow and decides to slow down even further. In the draft circulated in November, the Commission provides that companies already operating on the market can "adapt their practices without interrupting service." In exchange, the enforcement of fines for violating transparency rules would be postponed by one year. It's a compromise that sounds like an implicit favor to major global players, from Meta to Google, ready to celebrate a more lenient 2026.
Behind the bureaucratic formulas ("simplification package," "process harmonization"), we glimpse the face of a Europe tired of its own rigidity. After years of negotiations, technical discussions, and ethical proclamations, the world's first AI law risks becoming a handbook of exceptions. And perhaps this is the true sign that the continent, in its attempt to govern technology, is ending up overtaken by its pace.
A diplomacy algorithm
The Commission refuses to admit it, but the fear of a clash with Washington is real. After the provisional trade agreement in August, a regulatory tightening could have reignited tensions over tariffs and technology supplies. Better, then, a technical delay that doubles as a political truce. Meanwhile, European companies are demanding clarity: compliance costs remain high, guidelines are still vague, and the AI Office, the new coordinating body, has yet to demonstrate its effectiveness.
Meanwhile, the debate shifts to the cultural level: how much control is sustainable without blocking research? The AI Act was created to defend citizens' rights, but it risks penalizing local startups more than global giants. It's a fragile balance, reflecting a Europe often caught between protecting values and fearing economic loss.
AI Act, the silence of the machines
Big Tech is watching patiently. A one-year delay gives these companies room to consolidate market positions, optimize models, and build more effective lobbying. Every additional month without sanctions is worth millions in research and competitive advantage. The very firms Brussels wanted to rein in now have more freedom to maneuver: a quiet victory, disguised as a technical compromise.
Perhaps this is the real crux: the AI Act was created to tame speed. Now it seems to be succumbing to its allure. After all, even an algorithm obeys the rules as long as no one changes them.
Europe, in its attempt to shape the future, is discovering that even bureaucracy can be a form of artificial intelligence: it learns from mistakes, but always one step behind those who make them.