Kenneth Wenger

How Are We Thinking?

Updated: Apr 17, 2023


Aerial photo of lighted city at night
Photo by Nastya Dulhiier
  • What are currently the most well-regarded scientific theories about how human consciousness works?

  • How is vision integral to our study of both human and machine consciousness?

  • Is there an empirical path to understanding consciousness that takes a different starting point than traditional philosophical excursions?


The first article in this series, “What Are They Thinking?,” addressed the question of machine consciousness as a primarily semantic problem. In this article, we discuss two theories that attempt to explain how human consciousness works.


If we can establish the mechanisms responsible for our consciousness, perhaps we can hypothesize how a machine might replicate it. While there is still no established theory of consciousness, a few hypotheses address specific aspects of it.


The two we look at below exemplify advancements in our evolving understanding of conscious processes. These theories focus on which areas of the brain are active during conscious events rather than trying to explain why we experience things the way we do.


By taking this approach, perhaps we can track the flow of signals in our brain and start solving the problem from the bottom up.


Mechanism of Consciousness #1: GWT


Bernard Baars first proposed his global workspace theory (GWT) in 1988. This theory proposes that the brain is divided into two computational spaces:

  • A set of distributed and parallel specialized processors attending to sensory and motor information

  • A global workspace of interconnected neurons that send signals to, and receive signals from, distant regions of the brain through long-range connections

More plainly, the brain consists of discrete regions that attend to specific tasks like vision, movement, speech, and so on. These regions are then interconnected through a network of neurons called the global workspace.


In this framework, consciousness is defined as the broadcasting of information through ascending and descending signals in the global workspace. That is, information can be processed by the individual regions (visual cortex, motor cortex, etc.) and remain unconscious.


But there are times when the signal at a single region reaches a level of excitation that prompts it to be broadcast to the global workspace, which interconnects multiple regions. This is the moment when an individual can report a conscious experience.
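As a rough illustration, here is a minimal sketch of this architecture in Python. The processor names, the activation function, and the broadcast threshold are arbitrary choices made for the example; the only point is that local processing stays unconscious until a signal's excitation crosses a threshold and is broadcast to the shared workspace.

```python
import numpy as np

# Minimal sketch of the GWT idea: specialized processors run in parallel,
# and only signals whose excitation crosses a threshold are broadcast to a
# shared global workspace. Names, threshold, and activation are illustrative.

BROADCAST_THRESHOLD = 0.7  # arbitrary excitation level required for broadcast

class SpecializedProcessor:
    def __init__(self, name):
        self.name = name

    def process(self, stimulus):
        # Stand-in for local (unconscious) processing: returns an activation level.
        return float(np.clip(np.tanh(stimulus), 0.0, 1.0))

class GlobalWorkspace:
    def __init__(self):
        self.contents = []  # signals currently broadcast to all regions

    def receive(self, source, activation):
        if activation >= BROADCAST_THRESHOLD:
            # Broadcast: the signal becomes globally available ("conscious" in GWT terms).
            self.contents.append((source, activation))
            return True
        return False  # stays local and goes unreported

processors = [SpecializedProcessor("visual"), SpecializedProcessor("motor")]
workspace = GlobalWorkspace()

for proc, stimulus in zip(processors, [1.5, 0.3]):
    activation = proc.process(stimulus)
    broadcast = workspace.receive(proc.name, activation)
    print(f"{proc.name}: activation={activation:.2f}, broadcast={broadcast}")
```

Running this, the strongly excited "visual" signal is broadcast while the weak "motor" signal stays local, which is the distinction GWT draws between conscious and unconscious processing.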


Purely from an information-processing perspective, this is a remarkable claim. It suggests that consciousness is not simply a property of processing input stimuli. It is a property of combining information from different regions into one shared area.


As different regions interpret information and project it to the global workspace, the signals are transformed and abstracted. Interestingly, we also find transformation and abstraction as properties in artificial neural networks.
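As a rough illustration of that parallel, the snippet below passes a signal through two layers of an artificial network. The dimensions and random weights are arbitrary; the point is only that each layer re-represents its input in a smaller, more abstract form.

```python
import numpy as np

# Illustrative only: a two-layer feedforward pass showing how each layer
# transforms and abstracts the signal it receives.

rng = np.random.default_rng(0)

x = rng.normal(size=8)            # raw "sensory" input
W1 = rng.normal(size=(4, 8))      # first transformation: 8 features -> 4
W2 = rng.normal(size=(2, 4))      # second transformation: 4 features -> 2

h = np.tanh(W1 @ x)               # intermediate, more abstract representation
z = np.tanh(W2 @ h)               # compact representation "projected" onward

print("input dimension:", x.shape[0])
print("abstracted to:", h.shape[0], "then", z.shape[0], "dimensions")
```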


Although our current artificial neural networks are too primitive to exhibit consciousness, perhaps we already have the building blocks of some future conscious system. Perhaps all we need is the right architecture and scale (more on this in the next article).


Mechanism of Consciousness #2: GNW


Stanislas Dehaene and his colleagues further developed these ideas and posited the global neuronal workspace (GNW) hypothesis. Basically, they took the GWT framework into the experimental realm to explain visual consciousness.


The GNW model uses a simple network of interconnected artificial neurons, known as McCulloch-Pitts neurons. Simulations with this network show how signals can be processed by specialized regions and then broadcast to a global workspace. Interestingly, these simulations have also shown that the global workspace is capable of gating different inputs and outputs. That is, a mechanism selects a set of signals that can enter the global workspace while inhibiting a different set competing for access.
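To make the idea of gating concrete, here is a minimal sketch built from McCulloch-Pitts-style threshold units. The wiring below is a simplified, illustrative choice rather than the circuit used in the actual GNW simulations: one signal drives an inhibitory unit that blocks a competing signal from entering the workspace.

```python
# A minimal McCulloch-Pitts-style sketch of gating, assuming binary inputs and
# a hard threshold. Signal B drives an inhibitory unit that denies signal A
# access to the workspace; weights and wiring are illustrative assumptions.

def mcculloch_pitts(inputs, weights, threshold):
    # Classic binary threshold unit: fires (1) if the weighted sum reaches the threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def workspace_access(signal_a, signal_b):
    # Signal B activates an inhibitory interneuron that gates signal A out.
    inhibit_a = mcculloch_pitts([signal_b], [1], threshold=1)
    # Signal A enters the workspace only if it is active and not inhibited.
    a_enters = mcculloch_pitts([signal_a, inhibit_a], [1, -1], threshold=1)
    b_enters = mcculloch_pitts([signal_b], [1], threshold=1)
    return a_enters, b_enters

print(workspace_access(signal_a=1, signal_b=0))  # (1, 0): A wins access
print(workspace_access(signal_a=1, signal_b=1))  # (0, 1): B gates A out
```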


Remarkably, a mechanism capable of gating information is exactly what you would need to explain attention. After all, being aware of something means devoting resources to that thing while denying them to every other competing stimulus at any given moment. It also shows how much we can learn by attempting to understand the elemental properties of a system.


Trying to explain attention and awareness a priori seems like a daunting task, and that was the case for centuries. But when we begin by describing the architecture responsible for signal processing, sometimes the explanation for these more complex concepts emerges naturally. If nothing else, this shows why research is important, even when our experiments seem tangential to the problems we’re trying to solve.


You have to start somewhere.


In the Blink of an Eye


Dehaene and his colleagues used their framework, built using artificial neural networks, to explain a phenomenon known as attentional blink. In an experiment, an individual is presented with two visual stimuli in succession. For example, a letter (or shape) is briefly flashed on a screen; then it is replaced by a different letter, also for a brief period. If the second letter is presented within 500 milliseconds (ms) of the first, the individual will most likely not report seeing the second letter—as if they had blinked during this period.


The researchers showed that, as the signals from stimulus 1 reach the global workspace, their excitation inhibits the signals from stimulus 2 from reaching that space. Therefore, although stimulus 2 is processed by parts of the visual cortex, it does not achieve conscious access. If stimulus 2 is presented more than 500 ms after stimulus 1, its signal is strong enough to overcome stimulus 1 in the global workspace, and attention “switches” to stimulus 2.
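We can capture the timing logic of this account in a toy model. The 500 ms window comes from the experiment; the rest of the sketch is an illustrative simplification of how inhibition in the workspace might decide which stimulus is reported.

```python
# Toy timing model of the attentional blink. While stimulus 1 occupies the
# workspace, it inhibits competitors; a second stimulus arriving inside that
# window is processed locally but never broadcast. Only the 500 ms window
# comes from the experiment; everything else is illustrative.

INHIBITION_WINDOW_MS = 500

def reaches_consciousness(stimulus_1_onset_ms, stimulus_2_onset_ms):
    gap = stimulus_2_onset_ms - stimulus_1_onset_ms
    if gap < INHIBITION_WINDOW_MS:
        return False  # stimulus 2 is processed locally but never broadcast
    return True       # inhibition has decayed; stimulus 2 wins workspace access

for gap in (200, 400, 600, 800):
    seen = reaches_consciousness(0, gap)
    print(f"stimulus 2 at +{gap} ms -> reported: {seen}")
```

Running it, a stimulus arriving 200 or 400 ms after the first goes unreported, while one arriving at 600 or 800 ms is reported, mirroring the attentional blink.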


Now we can see what is perhaps the most important contribution of GNW and GWT. By disentangling consciousness from the phenomenal aspects (why we experience things the way we do), we can instead focus on analyzing activity in different brain regions to explain how conscious and unconscious processes arise.


Defining consciousness in terms of access means (1) understanding how different stimuli give rise to different levels of brain activity and (2) tracing the flow of activity from unconscious to conscious processing. With this approach, we can begin to discover a tangible path to consciousness that may be reproducible in artificial systems.


Starting Somewhere


Any child can tell you their favorite color. But it is difficult to design a machine that a priori likes the color blue and detests the color red.


That’s the problem we face if we try to understand and replicate consciousness at the phenomenal level. But we can monitor the brain activity of a subject as a blue object is presented in their field of view. We can then see where activity begins and how it progresses through the brain until the individual reports seeing the object.


When we do this, we learn how networks of neurons process information at different levels. We can use this knowledge to build artificial systems that simulate this behavior.


Will these systems become conscious in a way that humans accept as conscious? We won’t know until we try. The point is that now we seem to have a starting point.


We can assume that all consciousness is indeed explained by a global workspace that interconnects information processing from different systems. We can assume that, as this information is compressed and rearranged by the global workspace, new meanings and dimensions emerge as concepts that manifest as our conscious experience.


We don’t know if this is true, but we can follow this path and see how far we get.


Up Next


The philosopher David Chalmers describes consciousness through the “hard” and “easy” problems. The hard problem concerns describing why experiences feel the way they do, while the easy problem concerns describing the physical mechanisms of consciousness. In the next article, we will discuss how we could, in principle, develop artificial systems that can replicate the hierarchical level of information processing described by GWT and GNW. But whether these systems can resemble human-level consciousness will depend on the properties that define the hard problem of consciousness emerging from such a framework. After all, we can already program a system to report some random color as its favorite. The question is, can such a predilection emerge on its own?
