The Quantum Neural Network
In this story, we learn how our brains generate our first-person experience. We also learn about a new theory called Orchestrated Objective Reduction, which explains how our brains exhibit quantum computation at room temperature.
TL;DR: I'll summarize this story for you.
The Observer Effect in Quantum Mechanics implies the universe really does revolve around us.
I'm not saying the Earth is flat, or that the Earth is the center of our universe. The Earth isn't even at the center of our Solar System; we can measure the motions of the planets to confirm that. However, we can also measure that the universe keeps its energy/information in superposition until that energy/information is rendered for an organism that has Attention.
This paradox is the focus of Sir Roger Penrose's book, "Shadows of the Mind".
Penrose writes,
A scientific world-view which does not profoundly come to terms with the problem of conscious minds can have no serious pretensions of completeness. Consciousness is part of our universe, so any physical theory which makes no proper place for it falls fundamentally short of providing a genuine description of the world. I would maintain that there is yet no physical, biological, or computational theory that comes very close to explaining our consciousness.
The Virtual Reality of Reality
To begin, let's do a science experiment together. 🤓
Hold one arm straight out in front of you and stick up your thumb. Notice how small your thumbnail is compared to your entire field of vision. Almost everything you can see outside that thumbnail is currently streaming into your brain in black and white. Your brain is actually hallucinating almost all those colors because we all live inside our own "virtual reality". 🥽
Don't believe me? To explain how this works, let's use math. A digital camera that records as much information "as our eyes see" would require a sensor with 576 megapixels. That number is calculated using a 24mm focal length at f/3.5, with a 120-degree by 120-degree FOV (field of view). That's a very close estimate of how much light is collected by the human eye.
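If you want to check the arithmetic, here is a minimal sketch of that estimate. It assumes roughly 0.3 arc-minute visual acuity across a 120° × 120° field, which are the assumptions behind the commonly cited 576-megapixel figure; the exact inputs vary between sources.

```python
# Back-of-the-envelope "eye as camera" estimate (illustrative assumptions).
acuity_arcmin = 0.3   # finest detail a sharp eye can resolve, in arc-minutes
fov_deg = 120         # assume a 120-degree by 120-degree field of view

# Convert the field of view to arc-minutes, then count "pixels" at that acuity.
pixels_per_side = fov_deg * 60 / acuity_arcmin   # 24,000 pixels per side
megapixels = pixels_per_side ** 2 / 1e6

print(megapixels)  # 576.0
```

Change the acuity or the field of view and the megapixel count moves a lot, which is why published estimates of the eye's "resolution" differ so widely.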
The light sensors at the back of our eyes are called Retinas. The word "retina" was first used in the 14th century to describe the delicate blood vessels at the back of the eye because they resemble a fishing net. So retinas are:
literally a net of blood vessels,
figuratively a net to catch light, and
actually a neural net of information.
The retina's nerve fibers weave together to become our optic nerve. Our optic nerve "cables" plug directly into our brain stem just below the brain. This "architecture" gives our subconscious a chance to process the raw data from reality long before "we" do.
The reason we all live inside our own hallucinated reality is because our retinas only have 130M individual chemical light sensors, called Rods and Cones. So in camera terms, each eye has 130 megapixels. Producing 576 megapixels from just 130 megapixels of raw data is impressive, but our brains are even more impressive than that. Only 6M of our 130M sensors are cones, the ones that can detect color. The cones are all densely packed in a tiny area at the center of our retina called the Fovea. The fovea is only about the size of our thumbnail at arm's length, which is why almost all the data outside the fovea is black and white.
It gets worse. Our brains can't even use all 130 megapixels because the individual chemical light sensors need time to "recharge" after stimulation. So our 130-megapixel eye is connected to an optic nerve that only has 1.2 million nerve fibers to stream the data. That means our 576-megapixel view of reality is calculated from a 1.2-megapixel data stream, so 99.791% of our vision is made up.
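That percentage is easy to verify: compare the 1.2-megapixel optic nerve stream to the 576-megapixel rendered scene.

```python
# How much of the rendered visual scene is synthesized rather than streamed?
stream_mp = 1.2      # optic nerve bandwidth, in "megapixels"
rendered_mp = 576    # estimated resolution of our experienced visual field

made_up_percent = (1 - stream_mp / rendered_mp) * 100
print(round(made_up_percent, 2))  # 99.79
```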
So how does our brain do this?
Don't believe me? Well, here is the ultimate twist: those 1.2 megapixels are streaming into your brain right now upside down. Here's what the data in your optic nerve actually looks like:
Just like DALL·E 2 or Midjourney, our brains use generative AI to enhance our raw sensor data before "we" have time to notice. Our brains use the super-high-definition color data from the fovea to calculate the colors and lighting for the rest of the scene. Remember the Persistence of Vision that we learned about in the last chapter? Well, that illusion is true all the way down to the quantum voxels of reality. If 99.791% of our vision is made up by us in our own heads, then reality is the story that our brains are selling themselves.
@matrixfans: Whenever I reread this section, I imagine Morpheus saying, "Your appearance now is what we call residual self-image. It is the mental projection of your digital self".
I was in college when "The Matrix" came out in theaters, and it blew my mind. When I got back to my dorm that night, I immediately hit the hacker forums and found a final edit from Warner Brothers on The Pirate Bay that was only missing 20 seconds of background music from the dance club scene where Neo meets Trinity. The data was 4 gigs, which was a huge download in 1999. Plenty of Duke students saw "The Matrix" for the first time on my computer while the movie was still in theaters. The price of the show was one large Domino's extravaganza pizza for me and my roommate, Heybitch. 🍕
Coincidentally or ironically, I won a Chief Technology Officer of the Year Award a decade later while working for a company that was named Matrix Healthcare Services. 🤪
@WarnerBrothersDiscovery: I'm sorry about hacking the Matrix. I paid to see it several times in the theater and purchased it on DVD and streaming. I stopped "haxing" last century.
If 99% of our vision is hallucinated by our brain, it's easy to see why two eyewitnesses struggle to agree. Two eyewitnesses can actually see different things even with the same raw input data. Our own two eyes don't even agree.
Our eyes are positioned side by side on our face so that the subtle difference in perspective can be used to render depth. Part of the subconscious brain, called the Lateral Geniculate Nucleus (LGN), compares the raw data from both optic nerves and assembles the critical components into three dimensions before sending it to the Primary Visual Cortex. The first two layers of the LGN transmit the perception of form, movement, depth, and brightness. Layers 3, 4, 5, and 6 transmit color and fine details.
How cool is it that humans have dedicated 3D hardware?
To render our 576-megapixel continuous experience, our subconscious mind directs our eyeballs to "micro scan" objects of interest so many times per second that we can't even perceive our eyeballs moving. Here is a short video to demonstrate. (5 mins)
Our generative-AI brains have been "training" on everything that we've ever seen during our lifetime. Our "persistence of consciousness" is one of the biggest reasons the human brain needs such massive parallel processing power. Some scientists estimate that the average human brain has more computing power than the billion-dollar Fugaku supercomputer.
Musical Computation
In the second chapter of this book, we learned how the Observer Effect in Quantum Mechanics means the universe interacts with our attention, so we must have some kind of quantum connection to the universe in our brains. Fortunately, my favorite theory of consciousness, Orchestrated Objective Reduction (Orch OR), works just like that.
Orch OR was developed by two very unlikely research partners: Sir Roger Penrose and Stuart Hameroff. Hameroff is a tie-dye-wearing anesthesiologist and consciousness researcher. He uses fMRI machines to watch people's brains on enough drugs to make them unconscious. Hameroff was researching microtubules in the brain to understand their role in cancer and noticed that their activity dropped significantly when people were unconscious. His first book on the topic was called "Ultimate Computing", originally published in 1987.
Here's how Hameroff describes himself,
I think more like a quantum Buddhist, in that there is a universal proto-conscious mind which we access, and can influence us. But it actually exists at the fundamental level of the universe, at the Planck scale.
Hameroff essentially believes that our consciousness is "quantumly entangled" with the universe. I agree.
Sir Roger Penrose is a coat-and-tie-wearing Professor of Mathematics at Oxford University who won awards for his physics work with Stephen Hawking. Hameroff's Wikipedia page describes their partnership like this,
Separately from Hameroff, the English mathematical physicist Roger Penrose had published his first book on consciousness, The Emperor's New Mind, in 1989. On the basis of Gödel's incompleteness theorems, he argued that the brain could perform functions that no computer or system of algorithms could. From this it could follow that consciousness itself might be fundamentally non-algorithmic, and incapable of being modeled as a classical Turing machine type of computer. This ran counter to the belief that it is explainable mechanistically, which remains the prevailing view among neuroscientists and artificial intelligence researchers.
Penrose saw the principles of quantum theory as providing an alternative process through which consciousness could arise. He further argued that this non-algorithmic process in the brain required a new form of quantum wave reduction, later given the name objective reduction (OR), which could link the brain to the fundamental spacetime geometry.
Hameroff was inspired by Penrose's book to contact Penrose regarding his own theories about the mechanism of anesthesia, and how it specifically targets consciousness via action on neural microtubules. The two met in 1992, and Hameroff suggested that the microtubules were a good candidate site for a quantum mechanism in the brain. Penrose was interested in the mathematical features of the microtubule lattice, and over the next two years the two collaborated in formulating the orchestrated objective reduction (Orch-OR) model of consciousness. Following this collaboration, Penrose published his second consciousness book, Shadows of the Mind (1994).
Okay, let's try to explain some of that.
Neuroscientists all agree that the brain is a neural network. Cells called Neurons are the nodes in this network. Each neuron has a long tail, called an Axon, which connects to other neurons with a Synapse. That's pretty standard neuroscience. Orch OR agrees with all that. But Orch OR goes on to claim that within each neuron there are 100M Microtubules that perform quantum calculations with the universe.
Stuart Hameroff has authored several research papers on Orch OR. They are highly technical, so here is one of his abstracts to summarize.
The "Orch OR" theory attributes consciousness to quantum computations in microtubules inside brain neurons. Quantum computers process information as superpositions of multiple possibilities (quantum bits or qubits) which, in Orch OR, are alternative collective dipole oscillations orchestrated ("Orch") by microtubules. These orchestrated oscillations entangle, compute, and terminate ("collapse of the wavefunction") by Penrose objective reduction ("OR"), resulting in sequences of Orch OR moments with orchestrated conscious experience (metaphorically more like music than computation). Each Orch OR event selects microtubule states which govern neuronal functions. Orch OR has broad explanatory power, and is easily falsifiable.
Pretty simple, right? 🫤
The easiest metaphor to understand what these microtubules are doing is Vacuum Tubes. In primitive computers, vacuum tubes were primarily used as electronic switches and amplifiers. Vacuum tubes can be configured to perform logical operations such as AND, OR, and NOT, which are the basic building blocks of digital circuits. The microtubules inside each neuron hold analog waves and perform calculations by adding and subtracting the waves together. Analog Computation is one of the main reasons quantum computers are so much faster than digital computers.
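As a rough numerical illustration of that "adding and subtracting waves" idea (my own toy example, not a model of real microtubules), here is what analog superposition looks like:

```python
import numpy as np

# Two oscillations at the same frequency: in phase they reinforce each other,
# half a cycle out of phase they cancel. All values are purely illustrative.
t = np.linspace(0, 1, 1000)
wave_a = np.sin(2 * np.pi * 8 * t)           # an 8 Hz "analog" wave
wave_b = np.sin(2 * np.pi * 8 * t)           # identical phase
wave_c = np.sin(2 * np.pi * 8 * t + np.pi)   # shifted by half a cycle

constructive = wave_a + wave_b   # peaks roughly double, to about +/- 2
destructive = wave_a + wave_c    # sums to (numerically) zero everywhere
```

The entire computation happens "for free" in the addition itself, which is the intuition behind calling this analog rather than digital computation.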
Orch OR is "orchestrated" because the microtubules resonate at different frequencies, just like the various instruments in an orchestra. That's what Hameroff means when he says, "orchestrated oscillations entangle, compute, and terminate resulting in sequences of moments with orchestrated conscious experience". In other words, microtubules organize themselves into neural network layers based on the total time they need to calculate.
Recall from the first chapter of this book how a Tesla computer "decides" where the road is from raw camera data:
Millions of raw pixels are loaded into the first layer of a neural network.
The second layer looks for any line segments within the data of the first layer.
The third layer looks for any lines within the data of the second layer.
The fourth layer looks for any shapes within the data of the third layer.
With all its best guesses completed, the top layer makes the final decision about where the road is.
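To make the layer idea concrete, here is a toy sketch (my own illustration, not Tesla's code) of the second step: a small convolution kernel scanning raw pixels for vertical line segments.

```python
import numpy as np

# A tiny image with a vertical edge: dark on the left, bright on the right.
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# Classic vertical-edge kernel: responds where brightness jumps left-to-right.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

def convolve2d(img, k):
    """Slide the kernel over the image, summing the products at each position."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

response = convolve2d(image, kernel)
# The response is strong in the columns straddling the edge and zero in the
# flat regions -- exactly the "line segment" signal the next layer builds on.
```

Each deeper layer repeats this trick on the previous layer's output, which is how shapes emerge from lines and lines emerge from pixels.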
The "lowest level" calculations in our brains are synchronized to happen more than one billion times per second, which is gigahertz frequency. These steps have to be super quick, like loading raw pixel data into individual neurons. The results of those calculations are fed into neural network layers that calculate millions of times per second, which is megahertz frequency. Those calculations are fed into layers that process thousands of times per second, which is kilohertz frequency. The "highest order" calculations in our brains are measured in hertz, which is just one cycle per second.
Our first-person experience is essentially the top layers of an incredibly sophisticated neural network. Later in this chapter, we will look at the various brainwaves of our conscious minds, which are all separated by frequency:
Delta brainwaves (1-4 Hz) - mostly active when we sleep
Theta brainwaves (4-8 Hz) - daydreaming and imagination
Alpha brainwaves (8-12 Hz) - quiet thought and meditative states
Low Beta (12-15 Hz) - readiness for action
Mid Beta (15-20 Hz) - actively figuring things out
High Beta (20-30 Hz) - high anxiety or panic
Gamma brainwaves (30-100 Hz) - "ah-ha" moments
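The list above is easy to encode as a lookup table. Here is a hypothetical helper using the band edges as listed (sources vary slightly on the exact boundaries):

```python
# EEG bands as listed above: (low Hz inclusive, high Hz exclusive, name).
BANDS = [
    (1, 4, "Delta"),
    (4, 8, "Theta"),
    (8, 12, "Alpha"),
    (12, 15, "Low Beta"),
    (15, 20, "Mid Beta"),
    (20, 30, "High Beta"),
    (30, 100, "Gamma"),
]

def band_for(freq_hz):
    """Return the named band containing freq_hz, or None if outside the list."""
    for low, high, name in BANDS:
        if low <= freq_hz < high:
            return name
    return None

print(band_for(10))  # Alpha  (quiet thought)
print(band_for(40))  # Gamma  ("ah-ha" moments)
```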
Anything faster than 100 times per second is just too fast for us to notice. That's roughly the cutoff for our subliminal minds. Our brains hide so much of their computation from us that our consciousness is generally the last to know what's going on. For example, when your hand gets near a hot stove, it is already moving away by the time "you" notice the heat. 🔥
So just think about how slow our inner monologues really are. The average "inner chatbot" produces up to 4,000 words per minute, which is much faster than we read. But as fast as that sounds, our inner monologues are still millions of times slower than the fastest processes inside our brain's microtubules. Reality is the story that our brains are selling themselves: the virtual reality of reality.
If you wanna learn more about Orch OR, here's a full lecture from Penrose and Hameroff. (113 mins)
The Orch OR theory is still controversial among scientists because many physicists expect quantum states to decohere almost instantly in a warm, wet brain, but it's explanatory, testable, and falsifiable. I believe in Orch OR because Penrose is a mathematical genius who knows more about quantum mechanics than you or I ever will, and Stuart Hameroff turns human consciousness on and off for a living. After all their research, they have both concluded that our brains are quantum-computing neural networks powered by the "vibes" of the universe.
Recently, scientists unrelated to Penrose and Hameroff have demonstrated quantum effects at room temperature in the same protein networks found in the brain. You can read more about their research in "Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures". Here is a video from science influencer Sabine Hossenfelder explaining this new study. (7 mins)
@philosophers: The Orch OR theory raises some very interesting philosophical questions. Let's explore them using a Minecraft analogy.
Imagine we simulate our world in Minecraft and have ChatGPT (or some other AI) play as "Minecraft-Adam". Let's say we program Minecraft-Adam's reward system to get better at the game, and the only physics that Minecraft-Adam can experience is rendered by the Minecraft physics engine. You can make this digital metaphor as elaborate as you want. So:
Does Minecraft-Adam have free will in the game?
If we changed the "weights" in Minecraft-Adam's neural network to behave a different way, did we rob him of his free will?
Would Minecraft-Adam notice if we changed the weights of his neural network, or would he assume his preferences changed naturally?
What would correlate to Penrose Reduction in this digital world?
Is a life made out of bits worth less than a life made out of quanta? And if that's true, is a life made out of quanta worth less than the life of whoever can make a quantum simulation? 🤔
Let's say Minecraft-Adam can measure the pixels of his reality and learns that his whole universe is actually "flat", operating on a disk drive. How is this any different from Conformal Compactification from our perspective?
What would be the "dark matter" of Minecraft-Adam's universe?
If you entered Minecraft-Adam's universe with an avatar, how would you explain atoms to him?
If we pause our Minecraft universe to go eat dinner, would Minecraft-Adam perceive that his universe was paused? Could all the energy in our universe be "quantum paused" at the same time? Is there a Quantum Clock Cycle?
Is generative AI just another name for imagination?
Could Minecraft-Adam have free-will imagination AND not lie to himself?
Why would we program Minecraft-Adam to need 8 hours of sleep each day?
Why would we want to hide our existence from Minecraft-Adam? What could we learn by doing that?
If we could create a self-sentient Minecraft-Adam, it seems like his life would get streamed on the internet for everyone to watch, like The Truman Show. Somebody would do that for money or for science. Is that an invasion of his privacy?
Is Earth the most interesting "reality TV show" in the multiverse? Maybe we're not even in the top 10 most popular "quantum-reality shows".
Are we quantum avatars in a Sim-Life-type video game for other players in the multiverse? Do we perceive their game-play choices as our free will?
Do we have subscribers from other dimensions who can experience the "quantum data feeds" that our brains produce?
Can the words in our inner monologues come from anything other than us?
What's the most important question I didn't ask?
I wrote the manuscript for this book in Google Docs to inform my Google News feed while I worked. Less than a week after I wrote these Minecraft-Adam questions, Google News recommended an article to me about several computer scientists researching this exact scenario. Here's how they describe their project,
We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention.
@google: My YouTube is so dialed into what I'm thinking about in life. Google Maps, Google Docs, and Gmail feeding the recommendation engines for YouTube and Google News are some of my best Returns on Attention and best Returns on Privacy in my life right now. Digital literacy elevated. 😎
HUGE THANKS 🙏
to the sponsors of this story. Your support helps us reach new academics, scholars, and scientists all around the world.
If this story was helpful to you, please consider supporting our foundation. All your gifts are tax deductible.
Help us improve this story. Send me an email or leave a comment.
What did you love the most?
What did you hate the most?
What surprised you the most?
What confused you the most?