Google has presented something that, at least according to them, is not just a model. It is an active ingredient: AlphaEvolve, a system capable of optimizing algorithms, computational resources, mathematical formulas, and even itself. The name says it all: it doesn't just execute. It evolves. Or so they claim.
The stated goal is simple: accelerate the development of artificial intelligence, remove bottlenecks, find shortcuts in problems that, until yesterday, were considered… difficult. Or impossible. But what is striking is not so much the ambition. It is the mechanics.
AlphaEvolve: new data centers without building a single one
One of the first results attributed to AlphaEvolve is the recovery of 0.7% of Google's global computing power, achieved simply by optimizing internal server management. In practice, it's as if entire data centers had been added without pouring a single gram of concrete.
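To get a sense of scale, a back-of-the-envelope sketch helps. The fleet and data-center sizes below are hypothetical placeholders (Google does not publish them); only the 0.7% figure comes from the announcement.

```python
# Back-of-the-envelope: what reclaiming 0.7% of a server fleet would mean.
# Fleet and data-center sizes are hypothetical placeholders, not Google figures.
FLEET_SERVERS = 2_500_000          # hypothetical global fleet size
RECOVERED_FRACTION = 0.007         # the 0.7% attributed to AlphaEvolve
SERVERS_PER_DATACENTER = 10_000    # hypothetical size of one facility

recovered_servers = FLEET_SERVERS * RECOVERED_FRACTION
equivalent_datacenters = recovered_servers / SERVERS_PER_DATACENTER

print(f"Servers' worth of capacity recovered: {recovered_servers:,.0f}")
print(f"Equivalent to roughly {equivalent_datacenters:.1f} hypothetical data centers")
```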
The same logic applies to models. AlphaEvolve has delivered a speedup of more than 30% on FlashAttention kernels, reduced training times, and improved a calculation that no one had managed to optimize further in 56 years: matrix multiplication.
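The 56-year reference points back to Strassen's 1969 algorithm, which showed that two 2×2 matrices can be multiplied with 7 scalar multiplications instead of 8; AlphaEvolve is credited with improving on the long-standing record for 4×4 complex matrices. As context, here is the classic Strassen step in a minimal Python sketch, not AlphaEvolve's new algorithm.

```python
# Strassen's 1969 trick: multiply two 2x2 matrices with 7 multiplications
# instead of the naive 8. Applied recursively to blocks, it gives an
# O(n^2.807) matrix multiplication algorithm.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4,           m1 - m2 + m3 + m6],
    ]

# Quick check against the naive definition.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert strassen_2x2(A, B) == naive
```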
300-Year-Old Problems, Solved in Days
On the scientific front, AlphaEvolve has begun to move into territories where, until now, only solitary mathematicians and endless blackboards had ventured. It has solved problems that had been open for three centuries and reached a 75% success rate on 50 mathematical problems still active in the academic community. Among them, a new lower bound for the “kissing number” in 11 dimensions: 593 tangent spheres.
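The kissing-number claim has a concrete check behind it: the centers of unit spheres tangent to a central unit sphere, read as unit direction vectors, must be pairwise separated by at least 60 degrees, i.e. have dot products of at most 1/2. A minimal verifier for that condition might look like the sketch below; the example is a toy 2D configuration, not the 593-vector construction in 11 dimensions.

```python
import itertools
import math

def is_kissing_configuration(directions, tol=1e-9):
    """Check that unit vectors (sphere-centre directions) are pairwise
    separated by at least 60 degrees, i.e. every dot product is <= 1/2."""
    for u in directions:
        norm = math.sqrt(sum(x * x for x in u))
        if abs(norm - 1.0) > tol:
            return False  # every centre must sit on the unit sphere
    for u, v in itertools.combinations(directions, 2):
        if sum(a * b for a, b in zip(u, v)) > 0.5 + tol:
            return False  # these two spheres would overlap
    return True

# Toy example: the 6 tangent circles around a circle in 2 dimensions.
hexagon = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
assert is_kissing_configuration(hexagon)
```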
These are not sports records. They are changes to the very structure of knowledge. And they don't come from a group of scholars under pressure. They come from a model that improves itself as it works.

AlphaEvolve inaugurates (this time for real) the phase in which we stop understanding what is happening
AlphaEvolve creates a loop: it optimizes a system, which produces a better model, which in turn makes AlphaEvolve itself more efficient. In jargon, it's called recursive self-improvement. In the real world, it's the beginning of a phase, already glimpsed with chip design, where models improve too fast to be analyzed in detail.
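Stripped to its skeleton, that loop looks like classic evolutionary search: propose variants, score them with an automated evaluator, keep the best as parents for the next round. The sketch below is a generic version of that pattern under those assumptions, with a simple `mutate` function standing in for an LLM that rewrites code; it is not DeepMind's implementation.

```python
import random

def evolve(initial_candidate, evaluate, mutate,
           generations=50, population_size=20, keep=5):
    """Generic evolutionary loop: propose variants, score them automatically,
    and keep the top performers as parents for the next generation."""
    population = [initial_candidate]
    for _ in range(generations):
        # Propose new variants from the current parents
        # (in AlphaEvolve's description, an LLM plays the role of `mutate`).
        offspring = [mutate(random.choice(population)) for _ in range(population_size)]
        # Score everything with the automated evaluator and keep the best.
        population = sorted(population + offspring, key=evaluate, reverse=True)[:keep]
    return population[0]

# Toy usage: "optimize" a number towards 42 with random perturbations.
best = evolve(
    initial_candidate=0.0,
    evaluate=lambda x: -abs(x - 42),
    mutate=lambda x: x + random.uniform(-5, 5),
)
print(f"Best candidate found: {best:.2f}")
```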
Progress is no longer linear. It is compound. Google is already talking about a trillion-fold expansion of computing power. If the numbers hold, this could accelerate to the point where it leaves no trail behind.
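The linear-versus-compound distinction is plain arithmetic: if each optimization round yields a gain that is applied to the already-improved system, the gains multiply instead of adding. The figures below are invented purely for illustration.

```python
# Hypothetical illustration of linear vs compound improvement.
# A 5% gain per round is an invented figure, not a claim about AlphaEvolve.
ROUNDS = 20
GAIN_PER_ROUND = 0.05

linear = 1 + ROUNDS * GAIN_PER_ROUND        # each gain measured against the original baseline
compound = (1 + GAIN_PER_ROUND) ** ROUNDS   # each gain applied to the improved system

print(f"Linear after {ROUNDS} rounds:   {linear:.2f}x")   # 2.00x
print(f"Compound after {ROUNDS} rounds: {compound:.2f}x") # ~2.65x
```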
A structural, and not very visible, advantage
While others work on interfaces, chatbots, and voice assistants, Google is pushing the underwater part of the iceberg: deep optimization, invisible systems, architectures that improve other architectures.
According to the data presented, by 2027 over 50% of new algorithms will be designed with AI support, with human engineers focused not on implementation but on problem formulation.
Bottom line: humans ask the questions, machines get busy.
It's a race being played out at the bottom of the ecosystem. But that's where the pace is decided. And if AlphaEvolve works as they say, Google could end up with not just an advantage, but a lever.