
What Are They Thinking?

Updated: Apr 17, 2023


[Header image: a black-and-white photograph of a ghostly figure walking across a rocky beach under a dark sky, white-capped waves in the background]
  • Could our current AI algorithms be conscious?

  • What is the difference between artificial intelligence and intelligence?

  • Should our investigations into artificial consciousness change our understanding of our own human consciousness?

As many readers know, Working Fires Foundation is thrilled to be publishing Kenneth Wenger's first book, Is the Algorithm Plotting Against Us? We are also excited to have Ken continuing his investigations into AI's place in society on the WFF blog. This article is the first in a series of thought-provoking pieces by Ken on a hot topic in the world of AI: consciousness.

In 2022, a Google engineer named Blake Lemoine was fired after he claimed that Google’s LaMDA (Language Model for Dialogue Applications) was sentient. As a language model, LaMDA is an algorithm capable of parsing and generating text. Lemoine conducted a series of interview sessions with LaMDA in which he asked the algorithm conversational and, at times, deeply philosophical questions. When its answers became seemingly introspective, he began to suspect that it had become conscious.


Is LaMDA conscious? The answer is probably no. LaMDA and the hottest language model of late, OpenAI’s ChatGPT, are trained on large volumes of data that include conversations between humans on the internet. They learn to model the probability distributions of words in different contexts. They essentially produce answers that have a high probability of occurring in a conversation between humans, but they are not conscious of what they are saying. A model like LaMDA is simply generating text with a high probability of being coherent given the context of the question and the data it was trained on.
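To make this concrete, here is a toy sketch in Python of what “modeling the probability distribution of words in context” means. It is nothing like LaMDA’s or ChatGPT’s actual architecture: it simply counts which words follow which in a tiny made-up corpus and then samples a plausible continuation, the same basic idea at a vastly smaller scale.

    # A toy next-word model (not LaMDA or GPT): count which words follow which
    # in a small corpus, then sample a continuation one word at a time.
    import random
    from collections import Counter, defaultdict

    corpus = [
        "i enjoy working with people",
        "i enjoy working on hard problems",
        "i enjoy spending time with my family",
    ]

    # Count how often each word follows each preceding word.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1

    def sample_next(word):
        """Sample the next word in proportion to how often it followed `word`."""
        options = counts[word]
        if not options:
            return None
        candidates, freqs = zip(*options.items())
        return random.choices(candidates, weights=freqs)[0]

    # Generate a short continuation from a prompt word.
    word, output = "i", ["i"]
    for _ in range(6):
        word = sample_next(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))  # e.g. "i enjoy working with people"

The sketch produces text that looks coherent because coherent text is what it was fit to, not because it understands or believes anything it says.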


Defining Consciousness: No Easy Task


But let’s step outside of what is possible today and consider the question of consciousness more generally. In principle, is it possible for an AI algorithm to become conscious? I can see the psychologists and philosophers of mind reading this article moving to the edge of their seats, ready to jump up in rage the moment I dare to say yes.


The problem with the question is that the two scientific fields most concerned with the mind—psychology and neuroscience—still do not have a precise definition of consciousness. There is no agreement among leading scientists about what constitutes consciousness or, importantly, where the line exists that demarcates the ascension into consciousness from mere mental activity.


Is consciousness an exclusive quality of the human species? Or do other animals possess it? And is consciousness a binary concept, or does it exist on a spectrum?


If you think about what consciousness means to you, you probably feel like you have some intuition of what it is and what it isn’t. But if you are asked to define what consciousness is, you’ll realize that it’s not an easy thing to do. In subsequent articles in this series, we will explore the deeper questions of consciousness:


  • Does it take place in the brain? (Panpsychists, for example, believe it’s a field that permeates the universe and we just happen to tune into it.)

  • Are there specific regions of the brain that are most important to consciousness?

  • Is it a computational thing that could be artificially created, or is there something intrinsically biological about it?


In this article, we want to address consciousness and AI at a much higher level, focusing on what exactly we mean, as a society, when we talk about artificial systems achieving consciousness.


“I Enjoy Working with People”


When artificial intelligence was in its infancy in the 1950s, we imagined intelligent machines as systems capable of communicating with us in plain language. We thought an intelligent machine would one day be capable of holding a conversation with humans. A truly intelligent system would communicate with sounds, not just text. It would listen to our voice and understand our dialogue.


Such a system would also produce its own voice and speak with us directly and coherently—think of HAL in 2001: A Space Odyssey. We imagined an artificial intelligence capable of human-level dialogue as almost human, equipped with an advanced visual system to see the world through a camera lens.


Alan Turing, the renowned computer scientist and hero codebreaker of the German Enigma machines during World War II, famously devised a test for artificial intelligence that he named the Imitation Game. This game, now more commonly known as the Turing test, imagined a human evaluator conversing in natural language, through a keyboard-like device, with two “agents” placed in separate rooms. Through the conversation alone, the evaluator would have to guess which agent was the human and which was the AI. If the AI could fool the evaluator into misidentifying it as the human, it would pass the test and be considered a real artificial intelligence. Turing proposed this test in a paper titled “Computing Machinery and Intelligence,” which begins with the words, “I propose to consider the question, ‘Can machines think?’”
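For readers who like to see structure as code, here is a minimal sketch of the Imitation Game as described above: two agents hidden behind anonymous labels, an evaluator who sees only their text, and a pass/fail outcome. The agents, evaluator, and questions are stand-ins invented for illustration; Turing, of course, specified no such code.

    import random

    def imitation_game(human_respond, machine_respond, evaluator, questions):
        """Run one round of the Imitation Game.

        human_respond and machine_respond each map a question string to an
        answer string; evaluator receives two anonymous transcripts and returns
        the label ("X" or "Y") it believes belongs to the machine. Returns True
        if the machine fooled the evaluator.
        """
        # Hide the two agents behind anonymous labels.
        if random.random() < 0.5:
            assignment = {"X": human_respond, "Y": machine_respond}
        else:
            assignment = {"X": machine_respond, "Y": human_respond}

        transcripts = {
            label: [(q, agent(q)) for q in questions]
            for label, agent in assignment.items()
        }

        guess = evaluator(transcripts)  # the label the evaluator thinks is the machine
        machine_label = "X" if assignment["X"] is machine_respond else "Y"
        return guess != machine_label   # the machine "passes" if the guess is wrong

    # Example round with stand-in agents and an evaluator that guesses at random.
    questions = ["Can machines think?", "What do you enjoy doing?"]
    human = lambda q: "I'd have to think about that for a while."
    machine = lambda q: "I enjoy working with people."
    fooled = imitation_game(human, machine, lambda t: random.choice(["X", "Y"]), questions)
    print("Machine passed this round:", fooled)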


In the 1950s, when serious computer scientists thought about artificial intelligence, they imagined “thinking” machines in many ways indistinguishable from humans. Natural language processing (NLP)—the study of algorithms that can process and interact in human language—served as a good measuring stick for intelligence because language is such a naturally human ability that we could only imagine ourselves, or systems very close to us, as capable of it. Indeed, over the next several decades, not much changed in our perspective that real artificial intelligence would be reached only when artificial systems were capable of natural language processing like ours.


Redefining Definitions


Only in the last three years have AI and natural language processing achieved advances that are nothing short of amazing. With the invention of OpenAI’s GPT family of NLP algorithms, machines can now hold a coherent conversation with humans. They can understand our questions, along with the sentiment in what we say. They can produce their own dialogue, devise questions, and tell jokes. They can write poetry and prose. They can even write code.
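As a concrete illustration of that kind of conversation, here is a minimal sketch that asks a GPT model a couple of questions through the OpenAI Python client. It assumes the openai package (v1-style client) is installed and that an API key is available in the OPENAI_API_KEY environment variable; the model name is purely illustrative.

    # A minimal conversation loop with a GPT model via the OpenAI Python client.
    # Assumes the `openai` package (v1-style client) and an OPENAI_API_KEY
    # environment variable; the model name below is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    for question in ["Can machines think?", "Write a two-line poem about the sea."]:
        history.append({"role": "user", "content": question})
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model name
            messages=history,      # the full conversation so far
        )
        answer = response.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(f"Q: {question}\nA: {answer}\n")

Coherent, context-aware answers come back, and yet nothing in this loop tells us whether anything on the other end is “thinking.”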


Are these systems intelligent? What are they thinking? Can they think?


Ah, it seems we are quick to move the goalposts! You’ll be hard-pressed to find a serious AI researcher who thinks GPT algorithms are anywhere near conscious or can think. In fact, it was this “leap of faith” or lapse in judgment—suggesting that LaMDA, coherent and impressive though it may be, was conscious—that appears to have gotten the Google engineer fired. How is it that for decades we imagined NLP as the sign of intelligence, and as soon as it shows up, we say, “Meh, I know what it’s doing—not that impressive”?


It may be that concepts become more nuanced the moment we are on the verge of actualizing them. When humans were the only beings capable of conversing, we imagined that conversations were the true sign of intelligence because we are intelligent and we can hold conversations. It’s a very shallow definition of intelligence.


What’s more, notice that in those conversations, intelligence was indistinguishable from consciousness. In those days, when we spoke of intelligent systems, we meant humanlike conscious systems. When we finally created an artificial agent capable of holding a conversation, we began to ask deeper questions:


  • How is it doing it?

  • Does it need to “think” to understand our text and to generate its own text?

  • Is it aware of what it’s saying, or is it simply a digital parrot?


These are questions that we never thought to ask before, when NLP algorithms had not yet been developed.


Facing Unsettling Questions


I’m reminded of an adage: the more we learn, the more we realize how little we know. When AI was in its infancy, we imagined artificial intelligence as systems capable of humanlike conversation, and we imagined any system capable of that as conscious. It wasn’t a formal definition, but it was certainly informally accepted. Artificial intelligence, general intelligence, awareness, the ability to think, consciousness—it was all the same, really.

As we advance in our research and learn more about the artificial systems we can create, we discover similarities between those systems and human abilities, but we also learn of the vast gap that remains. We realize that each of those things—intelligence, awareness, thinking, consciousness—is a very complex, nuanced subject, and we must carefully define and understand each of them before we can hope to convincingly emulate them.


One problem with consciousness is that there are two competing facets of it whenever we approach the subject. As scientists, we want to arrive at a definition of consciousness that is based on a set of testable hypotheses. That is, we want to define a set of qualities that a conscious system must have and then devise a method to test the system for those qualities. While this approach can lead to well-defined frameworks, it can also lead to examples of systems that may pass the test for consciousness within our framework yet may be very different from what we generally accept as conscious beings. As humans, when it comes to consciousness, we seem to care about more than just a tepid abstract definition.


An artificial system capable of processing information in ways that satisfy a well-articulated definition for consciousness has little chance of being accepted as such unless it is conscious in the way that humans are conscious. Can it have feelings like humans? Can it experience happiness? Can it experience sadness? Can it suffer?


Here’s what I think these questions really mean: If machines can be conscious, what is our responsibility toward them?


And here lies the true paradox of artificial consciousness. We will only accept that a machine is conscious when we can tell that it is as conscious as a human, not merely faking it.


But can we say that another human is conscious? Can you prove that anyone other than yourself is conscious? That I experience feelings of happiness and sadness? Can I prove the same of you?


Can you prove that I’m not just faking it?



Up Next


In the following articles in this series, we will explore what we know about consciousness in humans and discuss how we might emulate aspects of it in machines. Among other things, we’ll consider this: our consciousness likely evolved over millions of years to help us survive and adapt to our environment, while consciousness in a machine would arise outside of the evolutionary pressures that produced ours. So there may be little evolutionary incentive to spawn sobbing algorithms. But could an advanced method of information processing and integration provide enough advantages to such a system to qualify as a new type of consciousness, regardless of how far it is from the human version?

