Have you ever wondered how far we are from an artificial intelligence that surpasses human capabilities in almost every area? According to Ben Buchanan, a former White House AI adviser, it could take just two or three years. Let me translate: artificial general intelligence will emerge during Donald Trump's term. Mind you, this revelation does not come from a Silicon Valley guru with commercial interests, but from someone who has had access to the most confidential information on the progress of artificial intelligence. Accelerationism, the philosophy that holds we must race toward the technological future without excessive regulatory brakes, has become the official doctrine of the Trump administration, with potentially revolutionary consequences for the entire world.
The New American Doctrine and the “5 Big Impacts” of Accelerationism
When we talk about accelerationism in the context of artificial intelligence, we are no longer discussing an abstract theory but a concrete political strategy. The Trump administration, with figures such as Elon Musk, Marc Andreessen, and JD Vance at the helm, has taken a radically different path from the one traced by Biden. It is no longer a question of balancing innovation and caution, but of pushing the accelerator to the floor, convinced that the real threat is not the absence of rules but the risk of being left behind in the global race for AGI.
Buchanan discusses this in a long interview with Ezra Klein in the New York Times: even within the White House (far, perhaps, from the commercial pressures of private laboratories) the data clearly indicated that general artificial intelligence systems would arrive much sooner than expected, probably during Trump's second term.
This circumstance will produce five gigantic impacts on the planet's geopolitical order. The first impact, evident since the new US president took office, is a real earthquake in technology policy, with shock waves spreading throughout the world. These are not simple adjustments, but a fundamental rethinking of the relationship between State, innovation, and security.
A Clearer View of US-China Competition
The second way in which accelerationism is transforming global politics is the redefinition of US-China competition in terms of technological dominance. It is no longer just about weapons, diplomatic influence, or economic power: the real battle is being fought on the terrain of artificial intelligence.
Export controls (especially on technology) implemented by the Biden administration, and likely to be intensified in the near future by Trump, are the economic weapon with which the United States seeks to maintain its technological advantage. Continued American leadership in AI is framed as essential to national security: not a preference, but a strategic imperative.
This vision is perfectly summarized by Buchanan's reference to Kennedy's speech on the space race: "For space science, like nuclear science and all technology, has no conscience of its own. Whether it will become a force for good or ill depends on man. And only if the United States occupies a position of pre-eminence can we help decide whether this new ocean will be a sea of peace or a new terrifying theater of war."
The question is no longer whether China can become a threat, but what it would mean for the world if Beijing were to get to AGI first. Accelerationists have turned this competition into an existential race, where whoever comes in second risks losing everything.
Rethinking Technology Regulation
The third impact concerns regulation. JD Vance's intervention at the AI summit in Paris is emblematic: "I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity." A sentence that sounds like a manifesto of state accelerationism, and a vision that completely overturns the traditional approach to technological regulation.
If for decades the mantra was "rules first, innovation later," the philosophy is now the opposite: "innovate quickly, regulate later." When Vance says that limiting the development of AI now would mean crippling one of the most promising technologies we have seen in generations, he is drawing a clear line against the European approach, which is much more cautious and focused on preventive regulation.
I want to point out that this also represents a break with some initiatives of the Biden administration, which, while not particularly restrictive, had nevertheless created a framework for voluntary safety commitments and testing. The new approach seems much more radical: innovate at all costs, addressing problems only when (and if) they arise.
This philosophy extends to relations with Europe: Vance has made it clear that the US will not follow the European regulatory approach and may even respond with retaliatory measures if EU rules penalize American AI companies.
Redefining the public-private relationship
The fourth way in which accelerationism is transforming global politics concerns, as I anticipated before, the relationship between the State and technology companies. If historically the great technological innovations (from nuclear energy to the Internet) have been driven by government investments, AI represents the first, truly radical break with this model.
As Buchanan emphasized: "This is the first revolutionary technology that is not funded by the Department of Defense." An unprecedented situation, which creates a new balance of power between government and private companies, with the latter holding unprecedented control over the development of a potentially transformative technology.
The accelerationist approach seems to embrace this new reality, seeing technology companies not as entities to be regulated but as strategic partners in the global race to AI. It is no coincidence that the Trump administration has brought to power figures closely linked to Silicon Valley, such as Musk and Andreessen, and that its "deregulation" policy seems designed to give American companies the maximum competitive advantage.
This new public-private relationship represents a significant break from the American tradition of government oversight of strategic technologies, with potentially enormous implications for the governance of AI (and beyond) globally.
Work and the economy in the age of accelerationism
The fifth impact, as you will have guessed, concerns the labor market and the economy. AI will have a disruptive effect on employment, and there are still no concrete answers to this imminent challenge: those offered so far are insufficient.
The accelerationist approach seems to be: “Let's run with this revolution and we'll find solutions to problems when they arise.” But the implications for the labor market are potentially enormous. Entire professional categories (from marketers to programmers) could see a drastic reduction in labor demand in a very short time.
What worries me is the lack of planning. Buchanan admits that during the Biden administration the issue was discussed mainly as an “intellectual exercise” rather than as a preparation for concrete policies. And now, with an administration even more oriented towards technological acceleration, the risk is that the social impact of AI will simply be ignored until it becomes a concrete crisis.
Accelerationism is transforming labor politics by pushing it into uncharted territory: what happens when cognitive automation arrives at a speed that traditional systems of reskilling and adaptation are simply too slow to match?
Accelerationism and the Restructuring of the State
One of the most interesting aspects to emerge from Buchanan's interview concerns the way in which accelerationism is redefining the very structure of the American state. Klein reports conversations with figures close to the Trump administration, such as Elon Musk, who see the current "restructuring" of government (which many criticize as a "dismantling") as an opportunity to build a civil service better suited to the AI era.
The idea is that traditional bureaucracies are simply too slow and rigid to fully exploit the potential of AI. Thus, the “creative destruction” of the state apparatus could be seen not as an attack on the state, but as an attempt to make it more efficient in the age of cognitive automation.
I am skeptical about some specific aspects of this vision, but I agree that the U.S. federal government, like most governments around the world today, is too slow to modernize its technology, too slow to coordinate its various agencies, and too slow to radically change the way things are done.
The accelerationist vision of the State changes the very conception of public administration: no longer a deliberately slow and thoughtful apparatus, but an entity that must move at the speed of technological innovation.
Conflicts over AI safety
While the Biden administration created an AI Safety Institute, there are clear signs that the Trump administration may tone down those concerns in favor of faster innovation.
The fundamental debate is between those who see safety as a condition for innovation and those who see it as an obstacle. It recalls the early days of the railways: there were countless accidents, crashes, and deaths. Then safety standards and technologies like block signaling and air brakes came along, and everything improved.
The accelerationist approach, on the other hand, seems to favor a "break things and then fix them" model, a significant break from traditional government caution. This could lead to faster innovation, but also to incidents and problems that could erode public confidence in the technology.
The ambition to rationalize the machinery of the state is fine, but I do not forget (and we must not forget) that advanced AI systems could also be used to strengthen mechanisms of surveillance and social control, making that control more pervasive than ever (in the West as in the East).
Towards an accelerationist future
The “five big impacts” in which accelerationism is transforming global politics change the very vision of the future. The idea that in the next two or three years we could develop artificial intelligences capable of surpassing human capabilities in almost every cognitive domain represents a fundamental paradigm shift.
As Buchanan states: "Today is the worst AI will ever be. It's only going to get better." This dizzying prospect is redefining political priorities, resource allocations, and even geopolitical visions on a global scale.
Accelerationism is no longer just a technological philosophy but is becoming a force shaping the global political future. If its proponents are right, we are entering an era of unprecedented transformation, driven by a technology we don't fully understand but that could redefine every aspect of human society.
This is why the accelerationist position is that the potential benefits of AI are so enormous that delaying its development due to safety concerns is itself a greater risk.
The unanswered riddle
There are countless industries—from drug discovery to education—that could be positively transformed by advanced AI. The real “bottleneck” may not be the technology itself, but our ability to adapt real-world institutions and processes to harness its benefits.
Because that's the point. Despite the imminence of this technological revolution, we still don't have concrete answers to many of the most pressing questions it raises. How do we handle the labor market earthquake? How do we balance innovation and safety? How do we maintain democratic control over increasingly autonomous and powerful systems?
Accelerationism is pushing us into uncharted territory at unprecedented speed. It may be harder to correct course once we're launched; yet there is also the risk that too much caution could cause us to miss transformative opportunities.
What is certain is that accelerationism is no longer a theory on the fringes of political debate, but a philosophy that is actively shaping some of the most important decisions of our time. Whether or not one shares this vision, understanding its implications (perhaps putting aside the flags of supporters or detractors of Trump, Musk, Vance and co.) is essential for anyone who wants to participate in the debate on the future of artificial intelligence and human society.