The Quantum Neural Network

In this story, we learn how our brains generate our first-person experience. We also learn about a theory called Orchestrated Objective Reduction, which proposes that our brains perform quantum computation at room temperature. 😎


TLDR: I'll summarize this story for you.


The Observer Effect in Quantum Mechanics implies the universe really does revolve around us.

I'm not saying the Earth is flat, or that the Earth is the center of our universe. The Earth isn't even at the center of our Solar System—we can measure the motions of the planets to confirm that. However, we can also measure that the universe keeps its energy/information in superposition until that energy/information is rendered for an organism that has Attention.

This paradox is the focus of Sir Roger Penrose's book, "Shadows of the Mind".

Penrose writes,

A scientific world-view which does not profoundly come to terms with the problem of conscious minds can have no serious pretensions of completeness. Consciousness is part of our universe, so any physical theory which makes no proper place for it falls fundamentally short of providing a genuine description of the world. I would maintain that there is yet no physical, biological, or computational theory that comes very close to explaining our consciousness.

Well let's see how close we can get.


The Virtual Reality of Reality

To begin, let's do a science experiment together. 🤓

Hold one arm straight out in front of you and stick up your thumb. Notice how small your thumbnail is compared to your entire field of vision. Almost everything you can see outside that thumbnail is currently streaming into your brain in black and white. Your brain is actually hallucinating almost all those colors because we all live inside our own "virtual reality". 🥽

Don't believe me? To explain how this works, let's use math. A digital camera that records as much information "as our eyes see" would require a sensor with 576 megapixels. That number is calculated using a 24mm focal length at F3.5, with a 120-degree by 60-degree FOV (field of view). That's a reasonable estimate of how much visual detail the human eye can gather.
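For the curious, here's a rough sketch of how that kind of estimate gets derived. The numbers below (an effective 120-degree by 120-degree field that our scanning eyes cover, at about 0.3 arcminutes of acuity) are the assumptions behind the widely cited 576-megapixel figure; plug in different focal-length or field-of-view assumptions and you'll get a different number, but the arithmetic looks the same.

```python
# Back-of-the-envelope "eye resolution" estimate: divide the field of view into
# pixels the size of the finest detail a sharp eye can resolve.
ARCMIN_PER_DEGREE = 60

fov_horizontal_deg = 120   # assumed effective horizontal field (our eyes scan around)
fov_vertical_deg = 120     # assumed effective vertical field
acuity_arcmin = 0.3        # assumed finest resolvable detail, in arcminutes

pixels = (fov_horizontal_deg * ARCMIN_PER_DEGREE / acuity_arcmin) * \
         (fov_vertical_deg * ARCMIN_PER_DEGREE / acuity_arcmin)

print(f"{pixels / 1e6:.0f} megapixels")  # ~576
```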

Double check this with a photographer who knows what that means.


The light-sensing layer at the back of each eye is called the Retina. The word "retina" was first used in the 14th century to describe the delicate blood vessels at the back of the eye because they resemble a fishing net. So retinas are:

  • literally a net of blood vessels,

  • figuratively a net to catch light, and

  • actually a neural net of information.

The retina's nerve fibers weave together to become our optic nerve. Our optic nerve "cables" plug into the thalamus, deep beneath the cortex and just above the brain stem. This "architecture" gives our subconscious a chance to process the raw data from reality long before "we" do.

The reason we all live inside our own hallucinated reality is that our retinas only have 130M individual chemical light sensors, called Rods and Cones. So in camera terms, each eye has 130 megapixels. Producing 576 megapixels from just 130 megapixels of raw data is impressive, but our brains do even more with even less. Only 6M of our 130M sensors are cones—the ones that can detect color. The cones are all densely packed in a tiny area at the center of our retina called the Fovea. The fovea is only about the size of our thumbnail at arm's length, which is why almost all the data outside the fovea is black and white.

It gets worse. Our brains can't even use all 130 megapixels because the individual chemical light sensors need time to "recharge" after stimulation. Our 130-megapixel eye is connected to an optic nerve that only has 1.2 million nerve fibers to stream the data. So our 576-megapixel view of reality is calculated from a 1.2-megapixel data stream, which means 99.791% of our vision is made up.
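If you want to sanity-check those numbers yourself, here's the arithmetic as a few lines of Python (treating "megapixels" as a crude stand-in for information, which is obviously a simplification):

```python
# Rough sanity check on the figures above.
cones_mp = 6                # color-sensitive cones per eye, in millions
sensors_mp = 130            # total rods + cones per eye, in millions
optic_nerve_mp = 1.2        # nerve fibers per optic nerve, in millions
rendered_view_mp = 576      # estimated "resolution" of the experience our brain renders

print(f"Sensors that can see color:  {cones_mp / sensors_mp:.1%}")                 # ~4.6%
print(f"Vision actually streamed:    {optic_nerve_mp / rendered_view_mp:.3%}")      # ~0.208%
print(f"Vision filled in by brain:   {1 - optic_nerve_mp / rendered_view_mp:.3%}")  # ~99.792%
```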

So how does our brain do this?

Generative artificial intelligence.


Don't believe me? Well here is the ultimate twist—those 1.2 megapixels are streaming into your brain right now upside down. Here's what the data in your optic nerve actually looks like:

Now do you believe that you live inside your own virtual reality?


Just like DALL-E 2 or Midjourney, our brains use generative-ai to enhance our raw sensor data before "we" have time to notice. Our brains use the super high-definition color data from the fovea to calculate the colors and lighting for the rest of the scene. Remember the Persistence of Vision that we learned about in the last chapter? Well, that illusion is true all the way down to the quantum voxels of reality. If 99.791% of our vision is made up by us in our own heads, then reality is the story that our brains are selling themselves.


@matrixfans: Whenever I reread this section, I imagine Morpheus saying, "Your appearance now is what we call residual self-image. It is the mental projection of your digital self".

I was in college when "The Matrix" came out in theaters and it blew my mind. When I got back to my dorm that night, I immediately hit the hacker forums and found a final edit from Warner Brothers on The Pirate Bay that was only missing 20 seconds of background music from the dance club scene where Neo meets Trinity. The data was 4 gigs, which was a huge download in 1999. Plenty of Duke students saw "The Matrix" for the first time on my computer while the movie was still in theaters. The price of the show was one large Domino's extravaganza pizza for me and my roommate, Heybitch. 🍕

Coincidentally or ironically, I won a Chief Technology Officer of the Year Award a decade later while working for a company that was named Matrix Healthcare Services. 🤪


@WarnerBrothersDiscovery: I'm sorry about hacking the matrix. I paid to see it several times in the theater and purchased it on DVD and streaming. I stopped "haxing" last century.


If 99% of our vision is hallucinated by our brain, it's easy to see why two eyewitnesses struggle to agree. Two eyewitnesses can actually see different things even with the same raw input data. Our own two eyes don't even agree.

Our eyes are positioned side by side on our face so that the subtle difference in perspective between them can be used to render depth. Part of the subconscious brain, called the Lateral Geniculate Nucleus (LGN), compares the raw data from both optic nerves and assembles the critical components into three dimensions before sending it on to the Primary Visual Cortex. The first two layers of the LGN transmit the perception of form, movement, depth, and brightness. Layers 3, 4, 5, and 6 transmit color and fine details.

How cool is it that humans have dedicated 3D hardware?
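To get a feel for why two slightly offset viewpoints are enough to recover depth, here's a minimal sketch of the standard stereo-triangulation relation that machine-vision systems use. The brain obviously isn't evaluating this exact formula, and the numbers below are just illustrative; the point is that a small horizontal disparity between the two images encodes distance.

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Classic pinhole-stereo relation: depth = baseline * focal_length / disparity."""
    return baseline_m * focal_px / disparity_px

eye_separation_m = 0.065   # ~6.5 cm between human pupils
focal_length_px = 1000.0   # made-up optics constant, in "pixels"

for disparity_px in (50.0, 10.0, 2.0):   # smaller disparity -> farther object
    depth = depth_from_disparity(eye_separation_m, focal_length_px, disparity_px)
    print(f"disparity {disparity_px:5.1f} px -> depth {depth:6.2f} m")
```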

To render our 576-megapixel continuous experience, our subconscious mind directs our eyeballs to "micro scan" objects of interest so many times per second that we can't even perceive our eyeballs moving. Here is a short video to demonstrate. (5 mins)

Our generative-ai brains have been "training" on everything that we've ever seen during our lifetime. Our "persistence of consciousness" is one of the biggest reasons the human brain needs such massive parallel processing power. Some scientists estimate that the average human brain has more computing power than the billion-dollar Fugaku supercomputer.




Musical Computation

In the second chapter of this book, we learned how the Observer Effect in Quantum Mechanics means the universe interacts with our attention, so we must have some kind of quantum connection to the universe in our brains. Fortunately, my favorite theory of consciousness, Orchestrated Objective Reduction (Orch OR), works just like that.

Orch OR was developed by two very unlikely research partners: Sir Roger Penrose and Stuart Hameroff. Hameroff is a tie-dye-wearing anesthesiologist and consciousness researcher. He uses fMRI machines to watch people's brains on enough drugs to make them unconscious. Hameroff had been researching microtubules, tiny protein tubes inside our cells, to understand their role in cancer, and he noticed that their activity dropped significantly when people were unconscious. His first book on the topic, "Ultimate Computing", was originally published in 1987.

Here's how Hameroff describes himself,

I think more like a quantum Buddhist, in that there is a universal proto-conscious mind which we access, and can influence us. But it actually exists at the fundamental level of the universe, at the Planck scale.

Hameroff essentially believes that our consciousness is "quantumly entangled" with the universe. I agree.

Sir Roger Penrose is a coat-and-tie-wearing Professor of Mathematics at Oxford University who won awards for his physics work with Stephen Hawking. Wikipedia's article on Hameroff describes their partnership like this,

Separately from Hameroff, the English mathematical physicist Roger Penrose had published his first book on consciousness, The Emperor's New Mind, in 1989. On the basis of Gödel's incompleteness theorems, he argued that the brain could perform functions that no computer or system of algorithms could. From this it could follow that consciousness itself might be fundamentally non-algorithmic, and incapable of being modeled as a classical Turing machine type of computer. This ran counter to the belief that it is explainable mechanistically, which remains the prevailing view among neuroscientists and artificial intelligence researchers.

Penrose saw the principles of quantum theory as providing an alternative process through which consciousness could arise. He further argued that this non-algorithmic process in the brain required a new form of quantum wave reduction, later given the name objective reduction (OR), which could link the brain to the fundamental spacetime geometry.

Hameroff was inspired by Penrose's book to contact Penrose regarding his own theories about the mechanism of anesthesia, and how it specifically targets consciousness via action on neural microtubules. The two met in 1992, and Hameroff suggested that the microtubules were a good candidate site for a quantum mechanism in the brain. Penrose was interested in the mathematical features of the microtubule lattice, and over the next two years the two collaborated in formulating the orchestrated objective reduction (Orch-OR) model of consciousness. Following this collaboration, Penrose published his second consciousness book, Shadows of the Mind (1994).

Okay, let's try to explain some of that.

Neuroscientists all agree that the brain is a neural network. Cells called Neurons are the nodes in this network. Each neuron has a long tail, called an Axon, which connects to other neurons with a Synapse. That's pretty standard neuroscience. Orch OR agrees with all that. But Orch OR goes on to claim that within each neuron there are 100M Microtubules that perform quantum calculations with the universe.

So our brains are quantum neural networks.


Stuart Hameroff has authored several research papers on Orch OR. They are highly technical, so here is one of his abstracts to summarize.

The 'Orch OR' theory attributes consciousness to quantum computations in microtubules inside brain neurons. Quantum computers process information as superpositions of multiple possibilities (quantum bits or qubits) which, in Orch OR, are alternative collective dipole oscillations orchestrated ('Orch') by microtubules. These orchestrated oscillations entangle, compute, and terminate ('collapse of the wavefunction') by Penrose objective reduction ('OR'), resulting in sequences of Orch OR moments with orchestrated conscious experience (metaphorically more like music than computation). Each Orch OR event selects microtubule states which govern neuronal functions. Orch OR has broad explanatory power, and is easily falsifiable.

Pretty simple, right? 🫤

The easiest way to picture what these microtubules might be doing is to compare them to Vacuum Tubes. In early computers, vacuum tubes were primarily used as electronic switches and amplifiers. Vacuum tubes can be configured to perform logical operations such as AND, OR, and NOT, which are the basic building blocks of digital circuits. In Orch OR, the microtubules inside each neuron hold analog waves and perform calculations by adding and subtracting those waves. That kind of Analog Computation, where superposed waves interfere with one another, is one of the main reasons quantum computers can be so much faster than digital computers.
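To make the "adding and subtracting waves" idea concrete, here's a tiny Python sketch of wave superposition, with no claim that this is literally what microtubules do. Two oscillations that are in phase reinforce each other; shift one by half a cycle and they cancel out. That interference is the basic trick that analog and quantum computers exploit.

```python
import math

def superpose(phase_shift: float, samples: int = 8) -> list[float]:
    """Add two unit sine waves, the second offset by phase_shift radians."""
    return [math.sin(t) + math.sin(t + phase_shift)
            for t in (2 * math.pi * i / samples for i in range(samples))]

print([round(x, 2) for x in superpose(0.0)])      # in phase: the amplitudes double (constructive)
print([round(x, 2) for x in superpose(math.pi)])  # half a cycle apart: everything cancels (destructive)
```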

Orch OR is "orchestrated" because the microtubules resonate at different frequencies, just like the various instruments in an orchestra. That's what Hameroff means when he says, "orchestrated oscillations entangle, compute, and terminate resulting in sequences of moments with orchestrated conscious experience". In other words, microtubules organize themselves into neural network layers based on how much time they need to calculate.

If you recall from the first chapter of this book, we learned how a Tesla computer "decides" where the road is from raw camera data:

  1. Millions of raw pixels are loaded into the first layer of a neural network.

  2. The second layer looks for any line segments within the data of the first layer.

  3. The third layer looks for any lines within the data of the second layer.

  4. The fourth layer looks for any shapes within the data of the third layer.

  5. With all its best guesses completed, the top layer makes the final decision about where the road is.

Well, our brains work the exact same way.
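Here's a toy sketch of that layered pipeline in Python (not Tesla's actual software, and certainly not a literal brain). Each "layer" is just a function that consumes the previous layer's output and emits something more abstract; a real neural network learns these stages from data instead of having them hand-written.

```python
# A cartoon of a layered perception pipeline.
raw_pixels = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]]   # a pretend camera frame

def find_edges(pixels):   # layer 2: bright spots that could be line segments
    return [(row, col) for row, values in enumerate(pixels)
            for col, value in enumerate(values) if value]

def find_lines(edges):    # layer 3: segments that stack up in a column become lines
    columns = sorted({col for _, col in edges})
    return [f"vertical line at column {col}" for col in columns]

def find_shapes(lines):   # layer 4: lines that belong together become shapes
    return [f"lane-marking candidate built from {len(lines)} lines"]

def decide(shapes):       # top layer: the final guess about where the road is
    return f"road boundary guess: {shapes[0]}"

print(decide(find_shapes(find_lines(find_edges(raw_pixels)))))
```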


The "lowest level" calculations in our brains are synchronized to happen more than one billion times per second, which is gigahertz frequency. These steps have to be super quick, like loading raw pixel data into individual neurons. The results of those calculations are fed into neural network layers that calculate millions of times per second, which is megahertz frequency. Those calculations are fed into layers that process thousands of times per second, which is kilohertz frequency. The "highest order" calculations in our brains are measured in hertz, which is just once per second.

Our first-person experience is essentially the top layers of an incredibly sophisticated neural network. Later in this chapter, we will look at the various brainwaves of our conscious minds, which are all separated by frequency:
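For reference, here are the conventional EEG brainwave bands we'll be talking about, written out as a small Python table (the exact band boundaries vary a little from textbook to textbook):

```python
# Conventional EEG brainwave bands (band edges vary slightly by source).
BRAINWAVE_BANDS_HZ = {
    "delta": (0.5, 4),    # deep sleep
    "theta": (4, 8),      # drowsiness, meditation
    "alpha": (8, 12),     # relaxed wakefulness
    "beta":  (12, 30),    # active, engaged thinking
    "gamma": (30, 100),   # high-level integration
}

for name, (low, high) in BRAINWAVE_BANDS_HZ.items():
    print(f"{name:>5}: {low}-{high} Hz")
```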

Anything faster than 100 times per second is just too fast for us to notice. That's roughly the cutoff for our subliminal minds. Our brains hide so much of their computation from us that our consciousness is generally the last to know what's going on. For example, when your hand gets near a hot stove, it is already moving away by the time "you" notice the heat. 🔥

So just think about how slow our inner monologues really are. Our "inner chatbot" can produce up to 4,000 words per minute, which is much faster than we can read. But as fast as that sounds, our inner monologues are still millions of times slower than the fastest processes inside our brain's microtubules. Reality is the story that our brains are selling themselves—the virtual reality of reality.
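As a quick sanity check on "millions of times slower" (taking the 4,000 words per minute figure at face value and using the gigahertz-scale microtubule number from above):

```python
words_per_minute = 4_000
words_per_second = words_per_minute / 60            # ~67 "tokens" per second
microtubule_events_per_second = 1e9                 # the gigahertz-scale figure cited above

print(f"~{microtubule_events_per_second / words_per_second:,.0f}x slower")  # ~15,000,000x
```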

If you wanna learn more about Orch OR, here's a full lecture from Penrose and Hameroff. (113 mins)

The Orch OR theory is still controversial among scientists because delicate quantum effects aren't expected to survive in a warm, wet brain at room temperature, but it's explanatory, testable, and falsifiable. I believe in Orch OR because Penrose is a mathematical genius who knows more about quantum mechanics than you or I ever will, and Stuart Hameroff turns human consciousness on and off for a living. After all their research, they have both concluded that our brains are quantum-computing neural networks powered by the "vibes" of the universe.

What's not to like about that theory? 😎


Recently, researchers unaffiliated with Penrose and Hameroff demonstrated quantum effects at room temperature in networks of tryptophan, a building block of the proteins that make up microtubules. You can read more about their research in "Ultraviolet Superradiance from Mega-Networks of Tryptophan in Biological Architectures". Here is a video from physicist and science communicator Sabine Hossenfelder explaining the new study. (7 mins)


@philosophers: The Orch OR theory creates some very interesting philosophical questions. Let's explore them using a Minecraft analogy.

Imagine we simulate our world in Minecraft and have ChatGPT (or some other ai) play as "Minecraft-Adam". Let's say we program Minecraft-Adam's reward system to get better at the game and the only physics that Minecraft-Adam can experience is rendered by the Minecraft physics engine. You can make this digital metaphor as elaborate as you want. So:

  • Does Minecraft-Adam have free-will in the game?

  • If we changed the "weights" in Minecraft-Adam's neural network to behave a different way, did we rob him of his free-will?

  • Would Minecraft-Adam notice if we changed the weights of his neural network or would he assume his preferences changed naturally?


  • What would correlate to Penrose Reduction in this digital world?

  • Is a life made out of bits worth less than a life made out of quanta? And if that's true, is a life made out of quanta worth less than the life of whoever can make a quantum simulation? 🤔

  • Let's say Minecraft-Adam can measure the pixels of his reality and learns that his whole universe is actually "flat", operating on a disk drive. How is this any different from Conformal Compactification from our perspective?


  • What would be the "dark matter" of Minecraft-Adam's universe?

  • If you entered Minecraft-Adam's universe with an avatar, how would you explain atoms to him?

  • If we pause our Minecraft universe to go eat dinner, would Minecraft-Adam perceive that his universe was paused? Could all the energy in our universe be "quantum paused" at the same time? Is there a Quantum Clock Cycle?

  • Is generative-ai just another name for imagination?


  • Could Minecraft-Adam have free-will imagination AND not lie to himself?

  • Why would we program Minecraft-Adam to need 8 hours of sleep each day?

  • Why would we want to hide our existence from Minecraft-Adam? What could we learn by doing that?


  • If we could create a self-sentient Minecraft-Adam, it seems like his life would get streamed on the internet for everyone to watch like the Truman Show. Somebody would do that for money or for science. Is that an invasion of his privacy?

  • Is Earth the most interesting "reality TV show" in the multiverse? Maybe we're not even in the top 10 most popular "quantum-reality shows".

  • Are we quantum avatars for a Sim-Life type video game for other players in the multiverse? Do we perceive their game-play choices as our free-will?

  • Do we have subscribers from other dimensions that can experience the "quantum data feeds" that our brains produce?

  • Can the words in our inner monologues come from anything other than us?

  • What's the most important question I didn't ask?



I wrote the manuscript for this book in Google Docs to inform my Google News feed while I worked. Less than a week after I wrote these Minecraft-Adam questions, Google News recommended an article to me about several computer scientists researching this exact scenario. Here's how they describe their project,

We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft that continuously explores the world, acquires diverse skills, and makes novel discoveries without human intervention.


@google: My YouTube is so dialed into what I'm thinking about in life. Google Maps, Google Docs, and Gmail feeding the recommendation engines for YouTube and Google News are some of my best Returns on Attention and best Returns on Privacy in my life right now. Digital literacy elevated. 🙏


Continue reading…


Table of Contents


We need your help

@christians: We are a 501(c)(3) nonprofit ministry dedicated to spreading the Good News of Jesus Christ to academics, scholars, and scientists because they have tremendous influence on everyone's world view. Their opinions directly affect our children's education, the media, national policy, and budgets for future scientific research. Please consider investing some of your tithe this year into our mission of eradicating the false doctrine of Darwinian Evolution.


You can also help us by sharing this story with your friends on TikTok, Twitter, Threads, Instagram, Facebook, LinkedIn, YouTube, WeChat, Weibo, and QZone. 🙌




How can we improve this story? Please let us know in the comments.
