(By Shai Tubali)
Evolutionarily speaking, humans are pliable and adaptive creatures (Massey 2013).[1] But the depth and magnitude of the AI challenge we are presently facing may stretch the limits of even the most adaptable species. Some argue that the AI challenge may be as transformative as the industrial revolution (Devlin 2023),[2] yet it seems to be more intense and complex in many respects since it reaches the roots of human experience and consciousness.
I am writing these lines while impassioned debates on both the constructive and destructive potentials of AI and their speculated implications for the future of humanity are taking place all over the world. Even experts, such as Geoffrey Hinton, the so-called “Godfather of AI,” were taken by surprise by the breakthroughs made by software such as ChatGPT and DALL-E 2: the majority of experts had assumed that humanity’s confrontation with forms of artificial intelligence that could actually become smarter than most people was way off, perhaps 30 to 50 years or even longer away (Korn 2023).[3]
The fierceness of the debates is completely justified; the face of humanity is indeed about to change. I think, however, that these discussions are inherently limited and that they overlook a critical dimension of this impending transformation: the question of identity and meaning. This should not come as a surprise, since this is precisely the dimension that most philosophical, psychological, cultural, and social discussions tend to ignore. Conversations mainly revolve around important ethical, psychological, and social considerations (e.g., Coeckelbergh 2020; Dubber, Das, and Pasquale 2021; Chamorro-Premuzic 2023). Nevertheless, I maintain that in the deepest philosophical sense, what we are facing is—or at least should be understood as—an urgent crisis of identity and meaning.
Another limitation of most serious discussions is that they are primarily devoted to potential dangers and warnings. We read and hear, for instance, that the spread of misinformation might hinder people’s ability to tell the real from the false (Korn 2023). Some are concerned that AI might amplify our negative dispositions, making us more alienated, distracted, selfish, biased, narcissistic, entitled, predictable, and impatient (Chamorro-Premuzic 2023). Others, such as historian Yuval Noah Harari (2023), suggest that AI would eventually become a better storyteller than humans, thus manipulating and controlling people and reshaping society, including writing new holy books.[4]
In fact, Harari believes that the role of historians and philosophers is to point out the dangers inherent in this type of technology. On the other hand, I feel that this crisis of identity and meaning should actually be considered an invaluable opportunity—perhaps the most precious opportunity we have been given, at least until we hypothetically come across extraterrestrial entities. Rather than exclusively worrying about the perils that await humanity in the age of AI, we ought to seize this opportunity and make use of this sense of danger that pushes our familiar boundaries to seriously look into the nature of human experience and consciousness and ultimately redefine it.
In my monograph Cosmos and Camus (2020), I demonstrated how the perceptible and implicitly philosophical thought experiments of science fiction films can help us transcend our anthropocentric worldview by placing humans in relation to extra-human elements, especially self-reflective nonhumans such as AIs, robots, and aliens. Both the stark otherness and the striking resemblance serve as a mirror through which we see our own reflection. This may seem threatening at first, but if used correctly, these close encounters can also enable us to re-evaluate the limits of our human experience and define more clearly what it is to be human.
As a useful starting point for this type of discussion, we should ask ourselves the following question: While we instinctively express the fear of being replaced by a smarter human-made species, have we ever presented one good reason why we must not be replaced? As long as we don’t know what this reason may be, our worry is nothing more than an understandable survival instinct. Don’t be alarmed: I’m not suggesting a misanthropic worldview according to which humanity should be replaced. On the contrary, I urge us to employ our fear of being replaced to substantially justify our intuition that human existence is a unique, meaningful, and even irreplaceable event.
AI’s growing presence and shadow raise two intriguing reflections in the context of the question of our irreplaceability. The first is an awareness of a certain dimension of human existence that should worry us: When we observe the way AI software rapidly weaves humanlike thought forms, we come to realize that, at least from a certain angle, our processes of thinking are quite mechanical. These processes may, of course, be incredibly imaginative and creative, but not necessarily in a way that cannot be imitated and replicated. Human thinking is mechanical in that it moves in predictable patterns as soon as it begins to repeat itself. There is a hopeful possibility that a reality in which AI takes over all familiar channels of human creativity would push us into a realization of uncharted and dormant domains of creative activity, but we might be compelled to come to terms with the painful truth that our inventive thinking is replaceable after all. The question is whether we can still hope to conceive of our mind as something more than a “machine for jumping to conclusions” (in Daniel Kahneman’s words, Winerman 2012).[5]
This dimension of human existence, which is accentuated by generative artificial intelligence, occupied the great 20th-century mystic and thinker Jiddu Krishnamurti. When he first heard, in 1980, about the efforts of computer scientists to create artificial intelligence by reproducing the mechanisms of the human brain, the 85-year-old Krishnamurti prophetically identified that humans would soon be challenged at unprecedented levels. Consequently, Krishnamurti’s main concern for over two years was the “challenge of the machine taking over the processes and faculties of the brain” (Jayakar 1986: 407).
Already at that very early stage of AI development, Krishnamurti believed that humanity’s inventive capacity could be replaced by machines and that the machines’ inventiveness would not be limited to purely productive dimensions but would also, as present visionaries like Harari argue, uncontrollably develop into the capacity to create philosophy and invent gods. Krishnamurti’s concern was that in this kind of predicament, certain faculties of the brain would be rendered useless and eventually atrophy, and the question of what it is to be human would become more urgent than ever. According to Krishnamurti, since the brain’s immense capacity is exploited for material and technological purposes, as soon as this function was taken over by AI, there would be no need for human brains at all. Thus, humanity would face only two alternatives: indulging in outward entertainment or turning inward and exploring within to find perception, compassion, the true and independent religious spirit, and other irreproducible aspects of the essence of humanity. He beautifully summarizes his vision by stating that “the only thing man can do that the computer cannot do is look at the evening star” (Jayakar 1986: 407).
While the first intriguing reflection on the question of our irreplaceability uncovers the disappointingly mechanical nature of our thinking, the other reflection is thus concerned with the search for that which is not mechanical and is therefore irreproducible. Naturally, within the scope of this article, I can only offer the first steps of this type of inquiry, whose focus is on those aspects that can be deemed irreplaceable features of human consciousness, at least in the imaginable future. Nevertheless, this inquiry is pivotal: even if, at the end of our struggle with our self-made competitor, we did gain the upper hand in terms of the survival of our species, it would mean very little as long as we had not seized this opportunity to tap into the significance of our existence.
Interestingly, the dimension that instantly reveals the significance of human existence is the one in which generative AI has just had its most intimidating breakthrough: language. Harari (2023) is convinced that by being able to generate language, whether with words, sounds, or images, AI has “hacked the operating system of our civilization,” since language is the “stuff almost all human culture is made of.” He believes that this could imply the end of human history since, henceforth, another species would control our ideas and worldviews. But if we’re not exclusively worried, we could also interpret this shift as an opportunity to draw an important distinction.
As far as humans are concerned, language constitutes a set of representations. For AI, language is everything. AI can write a convincing post on compassion using words that appear to describe it, but it can never write a post that uses words to express the experience of compassion. It can only produce “symbols without meaning” (Haikonen 2020). Human language arises from the gap between consciousness and experience and the need to represent their content in a communicable way. This implies that, for the most part, human language has no independent existence. After the Buddha attained his mystical awakening, he returned to teach through language, seeking appropriate representations for the state in which he had been absorbed; but the language was never the state. For AI, on the other hand, there is no state. Harari is of course correct when claiming that, as humans, we could easily find ourselves in a reality in which we would no longer distinguish a language that does not describe a state from a language that does. However, no AI could rob us of the experience of the gap between consciousness and language, and so this may urge us to delve more deeply into our consciousness.
But why is it that for AI, there is no state? This is where the subtle uniqueness of the human condition can be revealed. What is so interesting about the human experience is the fact that it arises as a result of tensions between opposites. These tensions go deeper than the gap between consciousness and language, or, as Harari puts it, the relationship between biology and culture; they actually originate from what Thomas Nagel describes as “the dragooning of an unconvinced transcendent consciousness into the service of an immanent, limited enterprise like a human life” (1971: 726), or what Sartre considers the divorce between the physical and the spiritual nature of man. It is an unceasing friction between the mind’s unlimited and sometimes spiritual dimensions and the suffocatingly limited aspects of our existence, which have a lot to do with our physical and sensory reality. This essential friction may be linguistically imitated but not experienced by a non-physical artificial intelligence.
Human consciousness thrives in this tension. It is a self-reflective consciousness that possesses self-transcending capacities while trapped in an intensely confined reality. As existentialist and absurdist philosophers have correctly identified, this tension gives rise to the absurd in human existence. In Cosmos and Camus, I argued that the recognition of the absurd is not strictly human, since any self-reflective consciousness capable of becoming aware of its condition and boundaries—human, extraterrestrial, or artificial—would be prone to absurdity. Nevertheless, while the possibility of an AI forming this kind of consciousness has been widely explored in science fiction films (for instance, A.I. Artificial Intelligence and Her), it has thus far been rejected by futurists and philosophers. If this type of mind did arise one day, humanity would find itself challenged even more profoundly, realizing that even this distinction has been blurred and that it could only retain the particular and often disadvantageous tension between the mortal body and the eternal spirit.
This leads me to the last point: what arises in the tension between consciousness and body is not only absurdity but also our ability to awaken our spiritual faculties. Humans can deploy their self-reflective capacity to ultimately transcend ego and limits. Could AI do the same? It may be able to linguistically imitate and develop notions transmitted by great spiritual masters—Socrates and others have already been “resurrected” by AI—but it can never be an actual spiritual master or seeker since it lacks the tension between experience and language and the wisdom born of reflection on one’s finite existence and infinite nature.
[1] https://www.scientificamerican.com/article/humans-may-be-most-adaptive-species/
[2] https://www.theguardian.com/technology/2023/may/03/ai-could-be-as-transformative-as-industrial-revolution-patrick-vallance
[3] https://edition.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html#:~:text=%E2%80%9CThe%20idea%20that%20this%20stuff,years%20or%20even%20longer%20away.
[4] https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation
[5] https://www.apa.org/monitor/2012/02/conclusions#:~:text=’A%20machine%20for%20jumping%20to%20conclusions’