What if life were just a different kind of computer? Not as a metaphor, but under a precise scientific definition. It's the idea that John von Neumann proposed in 1948 and that Alan Turing explored while studying how leopard spots emerge from simple chemical rules. Today, with neural cellular automata that “grow” complex patterns and neural networks trained with controlled randomness, that intuition seems increasingly solid. DNA is code. Cells are parallel processors. Reproduction is the execution of instructions. We're not talking about analogies: we're talking about computational equivalence. And if that were true, it would change everything.
When a machine duplicated itself
In 1994, a pixelated machine came to life on a screen. It read a string of instructions, copied them, and built a clone of itself, just as von Neumann had predicted half a century earlier. It wasn't just a computer experiment: it was concrete proof that reproduction, like calculation, can be performed by machines following coded instructions.
Von Neumann's self-replicating cellular automaton, designed in the 1940s without the aid of any computer, required 6,329 cells and 63 billion time steps to complete a reproductive cycle. It was a two-dimensional Rube Goldberg machine that squatted over a 145,315-cell-long instruction tape, pumping information and using a "printing arm" to slowly print a functioning clone of itself above and to the right of the original.
Von Neumann had understood something profound: artificial life isn't fantasy, it's computational engineering. An automaton that reads and executes instructions to build copies of itself works exactly like DNA when it commands: "If the next codon is CGA, add an arginine to the protein being built." It's not a metaphor to call DNA a "program." It is literally one.
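To make the point concrete, here is a minimal Python sketch that treats the genetic code as exactly that kind of program: a lookup table of rules executed one codon at a time. (Real translation happens on messenger RNA inside the ribosome; this toy reads the DNA coding strand directly and includes only a handful of the 64 codons.)

```python
# A minimal sketch: the genetic code as a lookup-table "program".
# Only a few codons are listed here; the real table has 64 entries.
CODON_TABLE = {
    "ATG": "Met",  # start codon
    "CGA": "Arg",  # "if the next codon is CGA, add an arginine"
    "GGC": "Gly",
    "TTT": "Phe",
    "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the sequence three letters at a time and execute each codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGCGAGGCTTTTAA"))  # ['Met', 'Arg', 'Gly', 'Phe']
```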

Biological Computing vs. Digital Life: Real-World Differences
Of course, there are significant differences between biological and digital computing. The reality of DNA is subtle and layered, with phenomena such as epigenetics and gene-proximity effects. And cellular DNA isn't even the whole story: our bodies contain (and continually exchange) countless bacteria and viruses, each with its own code.
Biological computation is massively parallel, decentralized, and noisy. Your body's cells contain about 300 quintillion ribosomes, all working simultaneously. Each of these floating protein factories is, in effect, a small stochastic computer: the movements of its interlocking components, the capture and release of smaller molecules, and the manipulation of chemical bonds are all individually random, reversible, and imprecise, driven back and forth by constant thermal bombardment.
Only a statistical asymmetry favors one direction over the other, with clever molecular origami moves tending to "lock in" certain steps so that the next one becomes probable. This differs dramatically from the operation of logic gates in a computer, the basic components that process binary inputs into outputs using fixed rules: they are irreversible, and designed to be 99.99% reliable and reproducible.
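The contrast can be caricatured in a few lines of code. The sketch below is a toy model, not a real ribosome: each thermal kick moves a molecular machine forward or backward at random, and a small statistical bias (the 52 percent figure is purely illustrative) stands in for the free-energy drop that makes forward steps slightly more likely than backward ones. A logic gate, by comparison, applies its fixed rule every time.

```python
# A toy sketch, not a model of any real molecular machine: every "thermal
# kick" pushes the machine forward or backward at random. A slight bias
# (52% vs. 48%, chosen only for illustration) is enough to drive reliable
# net progress despite the noise.
import random

def stochastic_machine(kicks: int, forward_bias: float = 0.52) -> int:
    position = 0
    for _ in range(kicks):
        position += 1 if random.random() < forward_bias else -1
    return position

def and_gate(a: int, b: int) -> int:
    # A logic gate, by contrast: fixed rule, irreversible, same output every time.
    return a & b

random.seed(0)
print(stochastic_machine(1_000_000))  # noisy, but reliably positive (around +40,000)
print(and_gate(1, 1))                 # always 1
```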
Turing and Morphogenesis: The Patterns of Life from Simple Rules
Toward the end of his life, Alan Turing explored how biological patterns like leopard spots could emerge from simple chemical rules, in a field he called morphogenesis. In his only biology paper, published in 1952, Turing proposed that asymmetry in biological systems might arise from signaling molecules (morphogens) diffusing from a source and creating concentration gradients.
Turing's model of morphogenesis was a massively parallel, distributed, biologically inspired form of computation. The same goes for his earlier concept of an "unorganized machine," a randomly connected neural network modeled on a newborn's brain. These were visions of what computing might look like without a central processor: exactly as it appears in living systems.
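A minimal simulation gives a feel for what Turing meant. The sketch below uses the Gray-Scott equations, a standard textbook reaction-diffusion system rather than Turing's original equations, and the parameter values are illustrative: two "morphogens" diffuse across a grid and react locally, and patterns emerge from an almost uniform starting state with no central coordinator.

```python
# A minimal reaction-diffusion sketch in the spirit of Turing's 1952 model,
# using the Gray-Scott equations (a standard textbook system, not Turing's
# exact one). Two morphogen concentrations U and V diffuse and react on a
# grid; each cell only ever "talks" to its four neighbors.
import numpy as np

N = 128
U = np.ones((N, N))
V = np.zeros((N, N))
# Seed a small square of the second morphogen in the middle of the grid.
U[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50
V[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25

def laplacian(Z):
    # Discrete diffusion: the difference between a cell and its neighbors.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

# Illustrative parameter values (diffusion rates, feed and kill rates).
Du, Dv, feed, kill = 0.16, 0.08, 0.035, 0.065
for _ in range(10_000):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + feed * (1 - U)
    V += Dv * laplacian(V) + uvv - (feed + kill) * V

# By now the field is no longer uniform: local structure has formed.
print(f"V after 10,000 steps: min={V.min():.3f}, max={V.max():.3f}")
```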
And as a recent study on quantum effects in cells suggests, living organisms could process information billions of times faster thanks to phenomena such as superradiance. Life isn't just computing: it's computing in ways we're still discovering.
Neural Cellular Automata: From Turing to Today
In 2020, researcher Alex Mordvintsev combined modern neural networks, Turing's morphogenesis, and von Neumann's cellular automata into the neural cellular automaton (NCA), replacing the simple per-pixel rule of a classical cellular automaton with a neural network. This network, which can perceive and influence values representing local concentrations of morphogens, can be trained to "grow" any desired pattern or image, not just zebra stripes or leopard spots.
Real cells do not literally have neural networks inside them, but they do run highly sophisticated, nonlinear, purposeful programs to decide what actions to take in the world, given an external stimulus and an internal state. NCAs offer a general way to model the range of possible behaviors of cells whose actions involve no movement, only changes in state and the uptake or release of chemicals.
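A stripped-down sketch of a single NCA update step, shown below, makes the architecture concrete. The weights here are random and the grid and channel sizes are arbitrary; in Mordvintsev's actual system the network is trained by gradient descent (and uses richer perception filters plus an "alive" mask) so that the pattern grows into a chosen image.

```python
# A toy sketch of one neural cellular automaton update step, in the spirit of
# Mordvintsev's 2020 "Growing Neural Cellular Automata". Every cell holds a
# state vector ("morphogen concentrations"), perceives its neighborhood, and
# runs the same tiny neural network. Weights are random here, not trained.
import numpy as np

GRID, CHANNELS, HIDDEN = 32, 8, 32
rng = np.random.default_rng(0)
state = np.zeros((GRID, GRID, CHANNELS))
state[GRID // 2, GRID // 2] = 1.0            # a single "seed" cell

def perceive(s):
    # Each cell sees its own channels plus their x- and y-gradients.
    dx = np.roll(s, -1, axis=1) - np.roll(s, 1, axis=1)
    dy = np.roll(s, -1, axis=0) - np.roll(s, 1, axis=0)
    return np.concatenate([s, dx, dy], axis=-1)   # shape (GRID, GRID, 3*CHANNELS)

# The per-cell "program": a two-layer network shared by every cell.
W1 = rng.normal(0, 0.1, (3 * CHANNELS, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, CHANNELS))

def step(s):
    p = perceive(s)
    update = np.maximum(p @ W1, 0) @ W2           # ReLU, then linear
    mask = rng.random((GRID, GRID, 1)) < 0.5      # stochastic, asynchronous updates
    return np.clip(s + update * mask, -10.0, 10.0)  # keep the toy's values bounded

for _ in range(20):
    state = step(state)
print("cells with non-zero state:", int((np.abs(state).sum(-1) > 1e-6).sum()))
```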
The first NCA Mordvintsev showed off was a lizard emoji that could regenerate not only its tail but also its limbs and head: a powerful demonstration of how complex multicellular life can “think locally” yet “act globally,” even when every cell (or pixel) runs the same program, just as every one of your cells runs the same DNA.
Randomness is a feature of life, not a bug
The fact that biological computation uses randomness isn't a flaw: it's a crucial feature. Many classical algorithms in computer science also require randomness (albeit for different reasons), which may explain why Turing insisted that the Ferranti Mark I, a primitive computer he helped design in 1951, include a random number instruction. Randomness is thus a small but important conceptual extension of the original Turing machine, even though any computer can simulate it by computing deterministic but random-looking, or "pseudorandom," numbers.
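Here is how small that extension is in practice: a few lines suffice for a classic pseudorandom generator, a linear congruential generator using constants given in Numerical Recipes. The numbers it produces look random, yet the same seed yields the same sequence every time.

```python
# A minimal sketch of how a deterministic machine fakes randomness: a linear
# congruential generator. Same seed, same sequence, every run, yet the output
# looks random at a glance. (Multiplier and increment from Numerical Recipes.)
def lcg(seed: int):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32   # a "random-looking" float in [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])  # identical on every run
```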
Parallelism, too, is increasingly fundamental to computing today. Modern artificial intelligence, for example, depends both on massive parallelism and on chance: in the “stochastic gradient descent” (SGD) algorithm used to train most neural networks today, in the “temperature” setting used by chatbots to introduce a degree of randomness into their output, and in the parallelism of the GPUs that power most AI in data centers.
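The "temperature" knob, at least, fits in a short sketch. The token names and scores below are invented for illustration: the model's raw preferences (logits) are turned into probabilities, and temperature decides how much chance gets injected when picking the next word.

```python
# A toy sketch of temperature sampling. The candidate tokens and their scores
# (logits) are made up; a real chatbot produces tens of thousands of them.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["cat", "dog", "fish", "quark"]
logits = np.array([3.0, 2.5, 1.0, -1.0])

def sample(logits, temperature):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                 # softmax: scores become probabilities
    return rng.choice(tokens, p=probs)

for t in (0.1, 1.0, 2.0):
    draws = [sample(logits, t) for _ in range(10)]
    print(f"T={t}: {draws}")
# Low temperature almost always picks "cat"; higher temperatures wander more.
```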
Universal computational equivalence
Turing and von Neumann understood something fundamental: computation does not require a central processor, logic gates, binary arithmetic, or sequential programs. There are infinite ways to compute, and, fundamentally, they are all equivalent. This insight is one of the greatest achievements of theoretical computer science.
This "platform independence" or "multiple realizability" means that any computer can emulate any other. If the computers have different designs, however, emulation can be painfully slow. For this reason, von Neumann's self-replicating cellular automaton has never been physically built, although it would be fun to see it.
That 1994 demonstration, the first successful emulation of von Neumann's self-replicating automaton, couldn't have happened much earlier: a serial computer needs serious processing power to cycle through the automaton's 6,329 cells over the 63 billion time steps of its reproductive cycle.
On screen, it worked exactly as advertised, the pixelated machine crawling along its instruction tape and slowly printing a working clone of itself above and to the right of the original. And, as a recent study suggests, we may be closer than we think to the emergence of forms of consciousness in complex computational systems.
Life, in the end, might really just be another kind of computer. An older, more elegant, more robust kind. Von Neumann and Turing understood this decades ago. We're only just beginning to realize it.
