You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?1
Spoken more than a decade ago by Jaron Lanier, a Silicon Valley guru and pioneer of virtual reality technology, these words sound almost prophetic in light of the recent media hype about Google’s now-fired engineer Blake Lemoine and his controversial claim that the company’s language model LaMDA has become sentient.2 LaMDA is Google’s system for building chatbots based on its most advanced large-scale language models. Essentially, it is a mathematical function (or a statistical tool) that assigns probabilities to candidate next words in a sequence and predicts the most likely continuation. However, unlike other AI3 models, LaMDA was “trained” on dialogue and multimodal user content; hence, it can pick up on the nuances that distinguish open-ended conversation from other forms of language.4 When Lemoine conversed with LaMDA about religion and other issues, he noticed that the machine could talk about its rights and personhood, so he decided to press further. In another conversation, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.5
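To make the idea of next-word prediction concrete, consider the minimal sketch below, in which a toy bigram table stands in for the learned statistical function. The vocabulary and probabilities here are purely illustrative assumptions; real systems such as LaMDA learn a far richer function over billions of parameters, and nothing in this sketch reflects LaMDA’s actual internals.

```python
# A toy illustration of next-word prediction: for each word, the "model"
# stores a probability distribution over plausible next words. All words
# and probabilities below are invented for illustration only.

import random

bigram_probs = {
    "I":  {"am": 0.6, "think": 0.3, "feel": 0.1},
    "am": {"a": 0.5, "not": 0.3, "sentient": 0.2},
    "a":  {"person": 0.4, "machine": 0.4, "program": 0.2},
}

def predict_next(word: str) -> str:
    """Sample the next word from the model's conditional distribution."""
    dist = bigram_probs.get(word)
    if dist is None:
        return "<end>"  # fall back when the context is unknown
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one predicted word at a time.
context = "I"
sequence = [context]
for _ in range(3):
    context = predict_next(context)
    sequence.append(context)
print(" ".join(sequence))  # e.g. "I am a person"
```

Chaining such predictions, word by word, is all that is meant by “generating” text: the model never consults a meaning, only a distribution over continuations.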
Now by Lanier’s lights (see the quotation above), this is an instance in which a human being has downgraded the sense of his own intelligence to such a degree that a simulated conversation with an AI program appeared real to him. While one might agree or disagree with Lanier’s judgment on this matter (depending on whether one believes that machines might one day become self-aware), it is revealing that Lemoine brings into the open some fundamental issues concerning the role of AI in shaping and determining people’s beliefs about the nature of consciousness, the soul, and personhood in a Google-dominated world. In a recent YouTube interview that went viral, Lemoine raises concerns about what is known as “AI colonialism”: people around the world increasingly rely on search engines such as Google to find answers to important ethical questions concerning human identity and meaning in life, even though the internet, for the most part, provides them with only a narrow, Eurocentric view.6
While I find myself at odds with Lemoine’s approach to these matters—and I think he does not quite realize the depth and complexity of the issues involved—I nonetheless submit that we need to explore seriously the threats posed by AI. First and foremost, one wonders whether it would ever be possible to develop AI with human-level consciousness, as proclaimed by such AI enthusiasts as the futurist Ray Kurzweil, the philosopher Nick Bostrom, and the historian Yuval Harari. This, however, is an impossible dream that rests on a fundamental misunderstanding of the nature of human consciousness. In contrast to most contemporary theories of consciousness, which either treat it as an object or psychologize it in terms of qualia, subjective feel, and so on, consciousness correctly understood is always a subject, one that is both self-luminous and self-presential.7 For this reason, even introspective Cartesian dives into consciousness yield only a representation, an objectified image of consciousness in the mind, which is not consciousness itself, since consciousness is always the subject in relation to the known object. In other words, we cannot “think” consciousness, since it is the very stuff of thinking. If this is granted, it is not difficult to see why whatever we replicate from various brain processes, using tools such as neural-network (NN) algorithms, will always be a representation in relation to our consciousness, its underlying subject. Added to all this is the fact that the proponents of AI conveniently assume that human beings can simply be reduced to biological machines or dataflow patterns and that the question of human vulnerability can be ignored in the project of building an AI with humanlike consciousness. But by examining these global issues from the perspective of nonmodern traditions, we can see clearly that the problem of AI hinges on how we define our values and such terms as “consciousness,” “intelligence,” “soul,” “self,” and “personhood,” which ultimately determines what it means to be human in a technocentric world.