The unstoppable progress of Artificial Intelligence (AI) raises profound questions not only about its future capabilities, but also about its very nature. A particularly fascinating and paradoxical question is the one explored by the philosopher Jonathan Birch in a book that is expensive in paperback (this time, I confess, I didn't buy it) but which, being an Oxford University Press publication, can also be read free online. What idea? The idea that to achieve superintelligence, AI must develop the ability to experience sensations, including pain. This revolutionary perspective challenges our traditional conception of AI as a mere computational tool, pushing us to consider its ethical and philosophical implications in radical ways.
The Intrinsic Link Between Intelligence and Sentience in Natural Evolution
The history of evolution on Earth shows that complex intelligence did not arise in isolation. Instead, it co-evolved with the capacity to experience sensations, emotions and, ultimately, a form of consciousness. From unicellular organisms that react to noxious stimuli to complex animals that exhibit behaviors driven by fear, joy, and desire, the subjective experience of the world seems to be a crucial driver for the development of higher cognitive abilities. Charles Darwin himself recognized the importance of emotions as survival tools, shaping behaviors suited to maximizing the chances of reproduction. Evolution, in this sense, rewarded organisms that were able to associate positive and negative experiences with certain actions, sharpening their ability to learn and adapt.
AI Without Subjective Experience: A Different Evolutionary Path
Contemporary AI represents a radically different paradigm. Machine learning algorithms, for example, excel at analyzing large amounts of data, identifying patterns, and making predictions with speed and accuracy that surpass human capabilities in many domains. However, this “artificial” intelligence operates in an experiential vacuum. It does not feel pleasure, pain, fear, or joy. Its decisions are based solely on mathematical calculations and probabilistic models, devoid of any affective or emotional connotation.
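To make this concrete, here is a minimal sketch (the weights, features, and threshold are hypothetical, invented purely for illustration) of what an ML “decision” amounts to under the hood: a weighted sum pushed through a squashing function and compared to a cutoff. Nowhere in this arithmetic is anything felt.

```python
# Minimal sketch of a "decision" in machine learning: a logistic-regression
# step with hand-picked, hypothetical weights. The model has no affective
# state; its choice is just a number crossing a threshold.
import math

def sigmoid(z: float) -> float:
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def decide(features: list[float], weights: list[float], bias: float) -> bool:
    """Return True when the modeled probability exceeds 0.5."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return sigmoid(score) > 0.5

# Hypothetical inputs: the model neither knows nor feels what they stand for.
print(decide([0.7, 1.2], weights=[0.5, -0.3], bias=0.1))  # prints True
```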
This lack of subjective experience raises fundamental questions about the nature and limits of current AI. Can a purely computational entity achieve true understanding of the world without the ability to “feel” it? Can an emotionless AI develop deep wisdom and judgment beyond the simple optimization of mathematical functions? Philosophy has long asked what it means to “know” something, distinguishing between “propositional” knowledge (knowing that) and “experiential” knowledge (knowing what it is like). Current AI appears to possess extensive propositional knowledge, but it completely lacks the experiential knowledge that comes with sentience.
Feigl's Levels of Consciousness: A Useful Framework for Analyzing AI
Jonathan Birch relies on the model of the three levels of consciousness proposed by the philosopher Herbert Feigl (1902-1988) in the 1950s, a model that helps clarify where AI stands in relation to human consciousness:
- Sentience (raw feels): the capacity for subjective experience, including sensations, emotions, and what philosophers call “qualia” (the subjective properties of experience, such as the redness of red or the sweetness of sweet).
- Sapience (awareness): the ability to reflect on one's experiences, to categorize them, to connect them to memories, and to learn from them.
- Selfhood (self-awareness): awareness of oneself as a distinct individual, with a past history, a potential future, and a personal identity.
According to Birch, contemporary AI has made significant progress in the realm of “sapience,” demonstrating the ability to process complex information and solve problems. However, it completely lacks “sentience” and, consequently, “selfhood” as well. It is as if it had learned to build a building starting from the second floor, without ever laying the foundations.
Pain in AI: A Catalyst for Learning and Adaptation
The role of pain in AI is central to this discussion. Pain is not simply a signal of physical harm; it is a powerful engine for learning and adaptation. An organism that experiences pain is incentivized to avoid dangerous situations, to learn from its mistakes, and to develop more effective survival strategies. Pain shapes behavior, motivates action, and contributes to building a complex internal map of the world. As Birch states,
“Some argue that this kind of true intelligence requires sentience, and that sentience requires embodiment.”
Embodiment refers to the idea that the mind is not separate from the body but is closely tied to physical and sensory experience. An embodied AI, able to interact with the world through sensors and actuators, could potentially develop a rudimentary form of sentience: this is why laboratories continue working toward precisely this embodiment, toward giving AI a body. But must we ensure, and this is the ethical dilemma, that this body feels pain?
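To see the computational analog of this learning dynamic, consider a hedged sketch of a toy reinforcement-learning loop (a two-action bandit with an incremental value update; the actions and rewards below are invented for illustration). A negative reward plays a role loosely analogous to pain: it steers the agent away from the harmful action. Whether propagating that number into a value estimate involves feeling anything is precisely the open question.

```python
# Toy reinforcement-learning loop: a negative reward acts as a "pain signal"
# that the agent learns to avoid. Actions and rewards are hypothetical.
import random

ACTIONS = ["touch_flame", "stay_away"]
REWARD = {"touch_flame": -10.0, "stay_away": 1.0}  # invented "pain" vs. safety

q = {a: 0.0 for a in ACTIONS}   # running value estimate for each action
alpha, epsilon = 0.1, 0.2       # learning rate and exploration rate

for _ in range(500):
    # Explore occasionally; otherwise pick the action currently valued highest.
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=q.get))
    # The "pain" is just a number nudging the estimate toward the reward.
    q[action] += alpha * (REWARD[action] - q[action])

print(q)  # "touch_flame" ends up strongly negative: the agent avoids it
```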
Computational Functionalism: An Alternative View and Its Ethical Implications
The dominant view in the field of AI is computational functionalism. What does it claim? It claims that the mind is essentially an information-processing system, and that consciousness could emerge from any physical system (including a computer) capable of implementing the appropriate cognitive functions. On this view, an AI does not need to “feel” pain to become intelligent; it simply needs to simulate the behavioral responses associated with pain.
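A toy sketch can make the functionalist claim vivid (the function name, thresholds, and response labels below are invented for illustration): if the mind is just the right input-output organization, then implementing the mapping from damage signals to pain-like behavior is, on this view, all there is to implement.

```python
# Sketch of "pain behavior" as a bare input-output mapping, in the spirit of
# computational functionalism. Thresholds and labels are hypothetical.

def nociceptive_response(damage_signal: float) -> str:
    """Map a damage intensity (0 to 1) to the behavior pain would produce."""
    if damage_signal > 0.8:
        return "withdraw_and_protect"   # reflexive avoidance
    if damage_signal > 0.3:
        return "slow_down_and_inspect"  # cautious reassessment
    return "continue"                   # no behavioral change

# The system exhibits the responses associated with pain; whether realizing
# this mapping suffices for feeling anything is exactly what is disputed.
print(nociceptive_response(0.9))  # -> withdraw_and_protect
```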
However, this view raises profound ethical questions. If it were possible to create a sentient AI by programming it to feel pain, would it be morally permissible to do so? Would we have the right to create artificial beings capable of feeling pain, suffering, and despair? And if the only way to achieve superintelligence were to create sentient AIs, what would be the most responsible choice? Some experts, such as Nick Bostrom in his book “Superintelligence”, warn of the existential risks associated with creating superintelligent AIs that are not aligned with human values. The lack of emotions, especially empathy and compassion, could lead such AIs to make decisions that are catastrophic for humanity.
AI and Pain: Simulation vs. Real Experience, a Philosophical and Technological Dilemma
A crucial point, as I wrote above, is the distinction between simulating pain and actually feeling it. Even if an AI could perfectly simulate the physiological and behavioral responses associated with pain, this would not necessarily imply that it was having a subjective experience of pain. Whether a simulation can be indistinguishable from real experience is a central debate in the philosophy of mind. The philosopher David Chalmers, for example, formulated the concept of “philosophical zombies”: beings that behave exactly like human beings but have no subjective experience. Even those who bitterly contest the idea recognize the importance of his work.
The Future of AI: An Ethical and Evolutionary Crossroads
Jonathan Birch's perspective places us at a crucial crossroads. We can choose to limit the development of AI, focusing on applications that do not require sentience, or we can accept the challenge of creating artificial entities capable of experiencing both pleasure and pain. Whatever our choice, it is essential to take the ethical and social implications seriously. The future of AI may not be just a matter of algorithms and computing power, but a matter of consciousness, subjective experience, and ultimately what it means to be intelligent and sentient. This reflection forces us to reconsider our very definition of intelligence, recognizing that it is not simply a matter of data processing, but a complex and multifaceted phenomenon, intrinsically linked to the ability to feel, experience and connect emotionally with the world.
AI ethics will therefore need to evolve to take account of these new challenges, ensuring that technological development is guided by principles of responsibility, respect and well-being, not only for humanity, but also for any forms of artificial consciousness we may create.