The Consciousness Lie

Redbeard
12 min read · Feb 2, 2020
It was all a lie!

There are a few concepts that have been floating around in my brain that I want to tie together:

  1. The idea that consciousness arises from the interaction between different forms of thinking in the two brain hemispheres, as described in the book The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes, which I am currently reading.
  2. The story of Adam and Eve being expelled from the Garden of Eden.
  3. Recent research in AI about how to design neural networks that are more easily interpretable.
  4. The idea (illustrated beautifully in the animated film Bolt) that part of becoming an adult is realizing that a lot of what we learned as a child is a lie.
  5. The arguments Mercedes and I have where she tries to articulate a feeling she has, but ends up saying something that doesn’t quite capture what she wants to say, which sometimes leads me to dismiss her instincts, even though she is probably right.

With these five threads, I am going to prove, incontrovertibly, that consciousness is a lie. Buckle up.

The Bicameral Mind

Let’s start with Jaynes’ theory of the Bicameral Mind. Although I may not agree with all of the details, let me start by saying it is one of the most fascinating theories of consciousness I have ever come across. It was published in 1976, so it isn’t exactly new and doesn’t take into account a lot of things we have learned since then.

But the audacity of the theory is refreshing. Jaynes makes the amazing claim that people were not really conscious before about 3,000 years ago. On the face of it, this seems preposterous. Evolutionarily speaking, the brain probably hasn’t changed that much in the last 3,000 years. It is obvious that prehistoric humans could think, but the critical idea is that thinking is not necessarily the same as consciousness.

Basically, Jaynes argues that rather than engaging in conscious introspection, people used to just hear something like the “voice of God” from one half of the brain (the same place that generates hallucinations), and the other half would obey without question. In other words, the right brain would dictate and the left brain would listen. Due to the absence of back and forth between the two, people who think in this way aren’t really conscious in the way we understand it now.

Only when our logical capacity became developed enough to engage in a conversation with the voice of God did we become conscious. A fascinating review compares this to a change in software architecture, which does not depend on hardware changes.

Recently, Iain McGilchrist has advanced a similar theory in another fascinating book: The Master and His Emissary. I haven’t read that one, but I have listened to a great interview between McGilchrist and Jordan Peterson. I recommend you take a listen.

In any case, the idea that our rational capacity is naturally subjugated to our subconscious mind is a very interesting one. But let’s leave this thread for a moment and move on to the Old Testament.

The Fall of Man

One piece of evidence that Jaynes uses to show that consciousness is a relatively recent development in the software of the human brain is that ancient texts don’t really show any evidence of introspection. Here is a fascinating paragraph from the book:

[The] strange and, I think, spurious idea of a lost innocence takes its mark precisely in the breakdown of the bicameral mind as the first great conscious narratization of mankind. It is the song of the Assyrian psalms, the wail of the Hebrew hymns, the myth of Eden, the fundamental fall from divine favor that is the source and first premise of the world’s great religions. I interpret this hypothetical fall of man to be the groping of newly conscious men to narratize what has happened to them, the loss of divine voices and assurances in a chaos of human directive and selfish privacies.

So I want to dig into this idea a little deeper by looking at the story of the Garden of Eden:

And out of the ground made the Lord God to grow every tree that is pleasant to the sight, and good for food; the tree of life also in the midst of the garden, and the tree of knowledge of good and evil.

Interestingly, the term “good and evil” may simply be a literary technique that refers to everything at all. So I want to take the tree of life as a metaphor for the “faith” part of our mind that is effectively infinite and unknown, and the tree of knowledge to represent the “reason” part of our mind that is abstract and rational.

Taking a bite out of the tree of knowledge and then being expelled from the Garden of Eden represents being separated from God due to developments in our rational capacity. I find it very interesting that the modern symbol of the pinnacle of scientific achievement is an apple with a bite out of it.

So the Fall of Man can be associated with reliance on reason (i.e., the tree of knowledge) to the exclusion of faith. Here is how it went down.

And the serpent said unto the woman, Ye shall not surely die:

For God doth know that in the day ye eat thereof, then your eyes shall be opened, and ye shall be as gods, knowing good and evil.

And when the woman saw that the tree was good for food, and that it was pleasant to the eyes, and a tree to be desired to make one wise, she took of the fruit thereof, and did eat, and gave also unto her husband with her; and he did eat.

And the eyes of them both were opened, and they knew that they were naked; and they sewed fig leaves together, and made themselves aprons.

Note that the development of consciousness is associated with a lie: “Ye shall not surely die.” It turns out that awareness of death is one of the first things that comes along with consciousness.

Advances in Artificial Intelligence

Now I want to take another detour to talk about some recent developments in artificial intelligence. When Jaynes wrote his book, we didn’t have enough experience with AI to start applying its lessons to understanding our own consciousness. Now we do.

I have previously compared the brain to a supercomputer (subconscious) with a graphing calculator (rational) bolted on. One of the big things we have learned from research in AI is that when you build a neural network and let it do its thing, it is hard to figure out what is going on inside. It’s a black box.

A now famous paper called Attention Is All You Need introduced a new type of network architecture that learns to weight, or “attend to,” certain parts of the input more heavily than others. By analyzing where the network pays attention, you can get some glimpse of what is going on inside. However, even attention networks can be hard to analyze, because in practice they have a bunch of independent “heads” doing independent calculations and paying attention to different things.
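To make this a little more concrete, here is a minimal sketch of the scaled dot-product attention from that paper, written in plain NumPy. The tiny dimensions, random weights, and token names are invented for illustration; the point is just that the attention weights are a quantity you can print and inspect, and that with several independent heads there are several such maps to reconcile.

```python
# A minimal sketch (not a real trained model) of scaled dot-product attention.
# Token names and dimensions are made up for illustration; the attention
# weights are the part you can actually inspect, unlike the hidden activations
# of a generic dense network.
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention; returns outputs and the weight matrix."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = ["the", "old", "man", "glared"]            # hypothetical input
d_model, n_heads = 8, 2
x = rng.normal(size=(len(tokens), d_model))

for h in range(n_heads):
    # Each head has its own projections (random here, learned in practice),
    # so each head attends to different things -- which is what makes a
    # many-headed model harder to read than a single attention map.
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    _, w = attention(x @ Wq, x @ Wk, x @ Wv)
    print(f"head {h} attention from 'glared':",
          dict(zip(tokens, np.round(w[3], 2))))
```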

Even more recent research tweaks the architecture so that the different attention heads correspond to logical categories that we want to analyze. For example, if we are doing natural language processing, one attention head may correspond to one grammatical label we want to apply. Then instead of analyzing them all together we can ask more specific questions and get more meaningful answers.

Another recent advance is something called Hierarchical Reinforcement Learning, in which complex tasks are broken down into subtasks. The “goal” of the top level neural network is to learn how to subdivide tasks among different lower level networks.
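Here is a toy sketch of that delegation idea. The environment, the sub-policies, and the selection rule are all invented for illustration (real hierarchical RL systems learn both levels from reward), but it shows the shape of the architecture: a top-level controller that picks a subtask, and lower-level policies that carry it out.

```python
# Toy sketch of hierarchical control: a top-level policy chooses which
# sub-policy ("option") to run, and each sub-policy issues low-level actions.
# The task, option names, and rules are hypothetical; real systems learn them.

def walk_to_door(state):
    return "step_forward" if state["distance_to_door"] > 0 else "stop"

def open_door(state):
    return "push_handle" if not state["door_open"] else "stop"

SUB_POLICIES = {"walk_to_door": walk_to_door, "open_door": open_door}

def top_level_policy(state):
    """High-level decision: which subtask to delegate to right now."""
    return "walk_to_door" if state["distance_to_door"] > 0 else "open_door"

state = {"distance_to_door": 2, "door_open": False}
for _ in range(5):
    option = top_level_policy(state)       # interpretable "plan"
    action = SUB_POLICIES[option](state)   # low-level execution
    print(f"option={option:12s} action={action}")
    if action == "step_forward":
        state["distance_to_door"] -= 1
    elif action == "push_handle":
        state["door_open"] = True
```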

In both cases, there is an interesting interplay between black box neural networks and rational organization (i.e., the categorized attention heads, or the reinforcement learning hierarchy). Researchers are finding that a key to progress in AI is creating architectures that balance the subconscious (e.g., fully connected neural networks) and the conscious (rational categorization or hierarchy).

I believe that the human brain does something similar. Consider the following story:

A young man walks into a bar and orders a beer. An older man sitting next to him turns and glares, saying “I don’t like you.” The young man assesses the situation, gets up, and leaves the bar. When asked why he left, the young man says that the older man had “a look in his eye.”

At the end of the story, the young man created a narrative to explain his actions. He attributes his decision to something in the eye of the other man. He isn’t entirely wrong, but in an important sense his story about what happened is a lie. Like an AI researcher trying to understand the functioning of a neural network, he has looked into the black box of his own decision making process and come up with an extremely oversimplified explanation.

The young man’s sense of threat is influenced by a huge number of variables. He assesses a million little things about what the older man is wearing, the lighting in the bar, the other people around, small muscle movements, etc. But all of this happens subconsciously, and the young man can’t hope to really be aware of it all. Instead, the simple idea of threat bubbles up from his subconscious, and his conscious mind attaches it to something that symbolizes that threat.

This type of narrative (i.e., the look in the eye) captures something important about the moment, but we tend to vastly overestimate the extent to which we understand our own selves. It’s like part of our brain is a modern neural network with an architecture designed to make interpretation possible. But this understandable part is built on top of an even more vast network of neurons into which we have little insight.

The lie is thinking that we understand our own motives. It’s a stretch to say we are telling the truth when we have created an intentional narrative that omits the vast majority of explanatory detail. By the way, I probably thought of this example because lately Alberta has been obsessed with the song Eye of the Tiger.

It Was All a Lie

In the movie Bolt, a young dog lives on a TV set, and the producers of the show do everything they can to create a bubble around him in which he thinks the show is real. They do this to ensure that he is a convincing actor, and he is.

Of course, eventually he becomes separated from his made-up world and has to come to terms with the fact that everything he learned as a pup was a lie. Then he has to learn some new superpowers.

In a sense, after learning that everything he knew was a lie, Bolt responds by learning to lie. The main thing Alberta learned from watching the movie is that whenever she wants something she can do this face, too.

Growing up as a Mormon, I know a fair number of people who reached a certain rational capacity and decided that everything they were taught as children was a lie. Many people become quite bitter about it. But perhaps the bitterness is mitigated a bit by the idea that pretty much every narrative is a lie in some sense.

In fact, the more I watch Alberta grow up, the more I think lies are an important part of the development of consciousness. Jaynes argued that people could think, but they weren’t really conscious until a few thousand years ago.

A similar line of reasoning might lead us to conclude that human children in a modern age aren’t really conscious until they are at least a few years old. I probably don’t have to convince you that it takes a few years before we start developing permanent memories. But have you ever considered that a toddler isn’t even really conscious (in a way similar to the way Jaynes says that pre-civilized humans weren’t really conscious)?

I’m pretty sure that up to a certain age, kids are just pretending to be conscious to get attention. Of course, they do this instinctively. In other words, consciousness is a kind of fake-it-till-you-make-it phenomenon. They keep acting as if they are conscious, and one of the key ways they practice is by starting to tell stories about who they are and how they feel in order to get attention (or food, or toys, etc.). Eventually, the lies themselves result in consciousness, because the constant preparation of intentional narratives about our own selves is what opens our eyes, so to speak.

Arguing with Mercedes

There is a theme that Mercedes and I have noticed in our arguments. She will have some reservations about something I want to do. She doesn’t know how to express it, so she basically just makes something up that she thinks will be convincing to me (mostly, this happens subconsciously). I sense it’s a spurious line of argument, so I get frustrated and dismiss her concern.

When this happens, one key to breaking the impasse is to not get so hung up on the lie. She is trying to express something real, but our conscious brain naturally makes up lies to satisfy our need for narrative. When she presents arguments that seem shallow or unconvincing, I need to recognize that they are a first attempt at communication, and that the thing that comes out first can’t bind the direction of the conversation for all time.

In order to communicate at all, we have to start by telling some lies, and then refine them until we come to a lie we can agree on. When we find the thing that works, it ceases to be a lie and becomes a useful framework for understanding our situation and coming to an agreement.

Perhaps this phenomenon is even more obvious when I talk to Alberta. From the very beginning, Alberta was a very instinctive emotional manipulator. It was clear from early on that as soon as she began talking she started telling us what we wanted to hear (or what she thought would get her what she wanted). When she tells us that I “broke her heart” or I am “the best dad ever” it is pretty obvious that she is doing the verbal equivalent of Bolt’s puppy dog face (which a cat teaches him to employ to get food from strangers).

In fact, a big part of my parenting philosophy involves various kinds of lies. I play tricks on Alberta, I joke with her in ways that keep her guessing about when I am serious, and I encourage her to have fantasies and imagine having various powers. And of course, I don’t get too mad when she tricks me (or manipulates me) in return.

The Lie of Consciousness

In weaving together these threads, I have identified a number of ways in which consciousness is essentially a lie. And just to be clear, I was joking (lying?) about saying I was going to prove anything. The most I can do is get you thinking. So here are a few things to consider:

  1. Our conscious self believes (like the serpent told us) that knowledge and self-awareness will help us overcome death. But all it does is make us fear death.
  2. We are convinced we tell ourselves a true narrative about our own motivations. But really all we have access to is a thin layer of brain architecture that sits on top of a black box that took hundreds of millions of years to evolve and that we can’t really understand.
  3. Children aren’t really conscious, but they instinctively pretend to be conscious in order to get attention.
  4. Children become conscious by developing the ability to spin intentional narratives about themselves and the world. In other words, they become conscious by lying.
  5. Whenever we talk to each other we have to create simple, intentional narratives in order to communicate. The need to create these narratives is one of the things that led to the evolution of a brain capable of consciousness.

After saying all the ways in which consciousness is based on a lie, I want to leave on a happy note: the existence of consciousness implies that we are probably not simulated beings living in a computerized matrix.

Why do I say this? According to the (admittedly somewhat vague) theory of consciousness outlined here, awareness arises out of the interaction of different neural architectures, one of which is clear, rational, and relatively simple, while the other is deep, complex, and opaque.

If our environment were not deep enough to be convincing to our subconscious selves, we would know it immediately. If our subconscious were not substantially deeper than our rational capacity, we would not be conscious. Any environment deep enough to be convincing to a network deep enough to produce consciousness would also be deep enough to be considered “real” by the shallow part of the network that uses language.

In other words, the very concept of a simulation implies that the environment is somehow artificial. But artificial is a relative concept that must be understood by our rational brain. And if the environment is deep enough to seem real to our subconscious brain, then it is deep enough not to be properly considered a simulation by our rational brain.

I know this probably isn’t too satisfying. But amidst all the lies we tell ourselves, it is nice to at least believe that reality itself is real.
