Quantum Computing, Fast and Slow

@midjourneybot: /imagine: the human operating system

The Orch OR Theory presented in the previous chapter gives a good account of how and why electrons move around our heads. But it does little to describe what those electrons represent: their Information. So we need to shift our conversation from Neuroscience to Psychology, from a hardware model of the brain to its software.

One of the earliest and most useful models of how humans Operate comes from Ancient Greece. The ancient Greeks believed that inside each person were two beings: a Horse and its Rider. Sometimes they cooperate, and sometimes they compete against each other.

The Horse corresponds to our body. It is strong and powerful and gets the Rider where he needs to go. Our horse is also stupid. My body hates drinking water no matter how many times “I” lead it there. My horse doesn’t like getting cold, complains when it’s hot, and should rarely be allowed to eat what it wants. If I eat after midnight it turns into a monster, so I might as well live inside a Gremlin. 🫏

The Rider corresponds to our soul. The Rider can’t afford to spend his/her attention on the mundane tasks of life like walking and breathing. Those tasks are handled by the Horse. Sometimes the Rider has to make the Horse slow down—like chewing our food and breathing during stressful situations. Sometimes the Rider has to make the Horse speed up—usually during work and workouts. 🏇

To make this concept easier for our kids, we named the Rider “I” and the Horse “Me”. The faster we could teach their I to control their Me, the faster we could “delegate our parenting” to the child and develop their own autonomy. That frees us (parents) up to spend more time parenting ourselves. I make enough of my own problems every day without having to find a kid’s other shoe or remind them to handle their chores. I’m busy doing that to my Me. 👮

The Horse and the Rider concept enables us to see that our bodies actually have more than one brain, so to speak. One is more interested in comfort and protection today (the Horse), and the other is more interested in comfort and protection tomorrow (the Rider).

Quantum Computing, Fast and Slow

Over the past half century, two Israeli psychologists named Amos Tversky and Daniel Kahneman ran experiments on people to learn more about these two “Systems” of thinking. They invented the science we know today as Behavioral Economics. Kahneman published much of their research in his book, “Thinking, Fast and Slow”. I consider it the bible of behavioral economics.

Here’s a (paid) link to buy it.


The next few paragraphs are excerpts from his book. It’s best to hear Kahneman’s own descriptions for what he calls System 1 and System 2. See if you can guess which one is the Horse and which one is the Rider.

“System 1: operates automatically and quickly, with little or no effort and no sense of voluntary control.

System 2: allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.”

“When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.”

@aigeeks: two neural network agents competing for rewards is a GAN, right? 😉

“In rough order of complexity, here are some examples of the automatic activities that are attributed to System 1:

  • Detect that one object is more distant than another.

  • Orient to the source of a sudden sound.

  • Complete the phrase “bread and . . .”

  • Make a “disgust face” when shown a horrible picture.

  • Detect hostility in a voice.

  • Answer to 2 + 2 = ?

  • Read words on large billboards.

  • Drive a car on an empty road.

  • Find a strong move in chess (if you are a chess master).

  • Understand simple sentences.

  • Recognize that a “meek and tidy soul with a passion for detail” resembles an occupational stereotype.”

“The highly diverse operations of System 2 have one feature in common: they require attention and are disrupted when attention is drawn away. Here are some examples:

  • Brace for the starter gun in a race.

  • Focus attention on the clowns in the circus.

  • Focus on the voice of a particular person in a crowded and noisy room.

  • Look for a woman with white hair.

  • Search memory to identify a surprising sound.

  • Maintain a faster walking speed than is natural for you.

  • Monitor the appropriateness of your behavior in a social situation.

  • Count the occurrences of the letter a in a page of text.

  • Tell someone your phone number.

  • Park in a narrow space (for most people except garage attendants).

  • Compare two washing machines for overall value.

  • Fill out a tax form.

  • Check the validity of a complex logical argument.”

“System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings. If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions. When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your desires, and that is fine—usually.”

“When System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment. System 2 is mobilized when a question arises for which System 1 does not offer an answer, as probably happened to you when you encountered the multiplication problem 17 × 24. You can also feel a surge of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world that System 1 maintains. In that world, lamps do not jump, cats do not bark, and gorillas do not cross basketball courts. The gorilla experiment demonstrates that some attention is needed for the surprising stimulus to be detected. Surprise then activates and orients your attention: you will stare, and you will search your memory for a story that makes sense of the surprising event. System 2 is also credited with the continuous monitoring of your own behavior—the control that keeps you polite when you are angry, and alert when you are driving at night. System 2 is mobilized to increased effort when it detects an error about to be made. Remember a time when you almost blurted out an offensive remark and note how hard you worked to restore control. In summary, most of what you (your System 2) think and do originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.”

“The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however, systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off. If you are shown a word on the screen in a language you know, you will read it—unless your attention is totally focused elsewhere.”

It took me five years to finish reading Kahneman's book. It’s long. The insights and wisdom are packed so densely that I had to read with my System 2 the entire time.

So I guess it took “I” five years to finish reading Kahneman's book.

System 2, our consciousness, is like the captain of a large cruise ship. If everything is working well, on schedule, with no surprises, then there’s not much to do. System 2 is free to be creative and innovative and Attention Conservative. But if something unexpected happens “below deck”, the captain is the only person who can save the day. The more our System 2 can train our System 1 to handle situations without panic, the less often our System 2 needs to be called into action.

A good example of System 2 educating and upskilling System 1 is sports. Great basketball shooters will tell you that the more you think about how your shot works, the worse you shoot. That’s because shooting a basketball is a System 1 activity. The muscles that produce a shot are informed by a very complex sequence of neurons that don’t have time for you to think. Like we learned in the previous chapter, each time those neurons are excited, oligodendrocyte cells wrap those neuron connections in myelin. Myelin is how we physically manufacture “muscle memory”. 💪🧠

My favorite book connecting myelin in Neuroscience to myelin in Psychology is “The Talent Code: Greatness Isn't Born. It's Grown. Here's How.” by Daniel Coyle. Coyle explores why talent isn’t evenly distributed around the world. Talent isn’t even evenly distributed over time. How many famous painters can you name who are not Impressionists or from the Renaissance? Who else can you name who’s not Manet, Monet, Van Gogh, or a Teenage Mutant Ninja Turtle?

Here are a few questions from the introduction of his book:

  • How does a penniless Russian tennis club with one indoor court create more top-twenty women players than the entire United States?

  • How does a humble storefront music school in Dallas, Texas, produce Jessica Simpson, Demi Lovato, and a succession of pop music phenoms?

  • How does a poor, scantily educated British family in a remote village turn out three world-class writers?

The answer is: some places practice differently than everywhere else on Earth. Here’s the (paid) link to buy this book on Amazon. The stories in it are fun to read.


System 2, the Rider, is still very important in sports. Practicing a sport is extremely expensive in calories and effort. Nobody can make that sacrifice for you. Like the famous coach Vince Lombardi said, “The will to win is not nearly so important as the will to prepare to win.” The will to prepare to win is a System 2 activity. Here are a few simple examples:

  • Your “I” has to get your horse to the gym on time.

  • Your “I” has to push your horse to sprint.

  • Your “I” has to make your horse drink water.

  • Your “I” has to remind your horse what your coach has already screamed in your face 100 times.

  • Your “I” has to get your horse to bed on time.

  • Your “I” has to feed your horse healthy vegetables and protein.

  • Your “I” has to console your horse when it’s nervous about the game.

  • Your “I” has to remind your horse to have fun.

Tversky and Kahneman spent their careers probing the limits of System 1 and System 2. They devised experiments that reveal how these agents work, but more importantly, when, where, how, and why they fail. They showed that humans aren’t always the completely “rational actors” that traditional economists assumed in their financial models. Tversky and Kahneman called these quirks of our psyche Cognitive Biases. Here’s a list of the cognitive biases described in Kahneman’s book.

  • Attribute Substitution: occurs when an individual has to make a judgment that is complex, and instead substitutes a more easily calculated heuristic. For example: A bat and ball together cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost? If you thought $0.10, that was a System 1 mistake.

  • The Availability Heuristic: is a mental shortcut that relies on immediate examples that come to mind. Also known as the WYSIATI Bias (What You See Is All There Is). Believing that nothing else in the universe could exist except the matter we can see is a WYSIATI bias.

  • Anchoring: is the human tendency to rely heavily on the first piece of information offered (the anchor) when making decisions. Professional negotiators call this “priming”. Chuck Klosterman wrote a great essay arguing that the perspective taken by most news stories is dictated by the first person to return the reporter’s request for comment. I think it was in his book “Sex, Drugs, and Cocoa Puffs”, which is hilarious. #paidlink


  • Loss Aversion: refers to people’s tendency to prefer avoiding losses to acquiring equivalent gains. It feels worse to lose one’s jacket than it feels good to find a new one. Some studies suggest a loss hurts about twice as much as an equivalent gain satisfies.

  • Framing: refers to the context in which a decision is placed in order to influence that decision. People are more likely to buy a $200 jacket at 50% off than the exact same jacket at a full retail price of $100.

  • Theory-induced Blindness: Once you have accepted a theory, it is extraordinarily difficult to notice its flaws. This is how cults persist.

  • Hindsight Bias: is our tendency to view past events as being predictable. Also called the "I-knew-it-all-along" effect.

  • Correspondence Bias: is our tendency to overemphasize personality-based explanations for behaviors observed in others. Example: “You are driving in heavy rain, and you notice another driver in your rearview mirror speeding and overtaking other cars. Because of correspondence bias, you are more likely to assume that they are a reckless driver, when perhaps they are rushing to the hospital.”

  • Implicit Bias (aka Implicit Stereotype, Unconscious Bias): is our tendency to attribute positive or negative qualities to a group of individuals. It can be entirely unfounded, or an overgeneralization of a trait that is frequent in a group to every individual in that group.

  • Confirmation Bias: is our tendency to search for or interpret information in a way that confirms one's preconceptions, and discredit information that does not support the initial opinion. We are all motivated to reduce cognitive dissonance, so we search for information which reconfirms our views.

  • Affinity Bias: is our tendency to be favorably biased toward people most like us.

  • Self-Serving Bias: is our tendency to claim more responsibility for successes than for failures. We hate to know the truth about ourselves.

  • Belief Bias: is our tendency to evaluate the logical strength of an argument based on current belief and perceived plausibility of the statement's conclusion. There’s no escaping this one.

  • Embodied Cognition: suggests that our thoughts are not simply controlled by our brains. Our perception, attention, decision making, and motivation can be influenced by the emotional and physical state of our bodies.

  • Status Quo Bias: is our tendency to hold on to the current situation rather than an alternative situation, to avoid risk and loss (loss aversion). The devil you know is better than the devil you don’t know.

  • Overconfidence Effect: People tend to overrate their abilities and skills as decision makers. Paradoxically, the less someone knows about a subject, the more certain they are about their opinion.

  • Physical Attractiveness Stereotype: is our tendency to assume people who are physically attractive also possess other desirable personality traits. It’s a form of substitution bias.
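
The bat-and-ball question near the top of this list is a nice place to watch System 2 override System 1. Here is a minimal check in Python (a sketch of my own, not from Kahneman’s book):

```python
# Two facts: bat + ball = 1.10, and bat = ball + 1.00.
# Substituting the second into the first:
#   (ball + 1.00) + ball = 1.10  ->  2 * ball = 0.10
ball = (1.10 - 1.00) / 2   # System 2's answer: $0.05
bat = ball + 1.00          # $1.05

assert abs((bat + ball) - 1.10) < 1e-9  # they sum to $1.10
assert abs((bat - ball) - 1.00) < 1e-9  # the bat costs $1.00 more

# System 1's intuitive answer ($0.10) fails the first fact:
assert (0.10 + 1.00) + 0.10 != 1.10     # that totals $1.20
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```

System 1 substitutes the easy computation ($1.10 minus $1.00) for the harder one, and the check above is exactly the kind of slow, stepwise work only System 2 can do.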

As you can see, when our System 2 relies on System 1 too much, it leads to errors in our judgment.

And the opposite is also true: when our System 2 relies on System 1 too little, it also leads to errors in judgment.

This wouldn’t be “The Computer Science of Everything” book without explaining System 1 and System 2. One sophisticated technique used to train artificial intelligence is called the Generative Adversarial Network (GAN). According to Wiki, “A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in June 2014. Two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss.”

@ChatGPT: what’s better about the generative adversarial network architecture?

ChatGPT: “The Generative Adversarial Network (GAN) architecture is known for its ability to generate realistic data, such as images or videos, by pitting two neural networks against each other. This approach can create more diverse and complex outputs compared to traditional generative models, making it better suited for tasks that require creativity and innovation. Additionally, GANs can learn from unstructured data, which allows them to generate new and unique content without relying on pre-defined rules or templates.”

@ChatGPT: how do GANs work?

ChatGPT: “GANs, or Generative Adversarial Networks, work by training two neural networks, a generator and a discriminator, in a competitive process. The generator network takes random noise as input and generates outputs, while the discriminator network takes inputs from both the generator network and real data, and tries to distinguish between the two.”

“During training, the generator network learns to create outputs that are similar to the real data, while the discriminator network learns to better distinguish between the generated and real data. This process continues until the generator network can produce outputs that are realistic enough to fool the discriminator network.”

“Once the training is complete, the generator network can be used to create new data that is similar to the real data. This can be applied to various tasks, such as image synthesis, text generation, and music composition, among others.”
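
ChatGPT’s description can be boiled down to a few dozen lines. Below is a deliberately tiny GAN sketch of my own, under toy assumptions: the “real data” is a bell curve centered at 4, the generator is just the line a·z + b, and the discriminator is a one-feature logistic regressor. Real GANs use deep networks and automatic differentiation; this only illustrates the adversarial loop ChatGPT describes.

```python
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 1.25   # the "real data": a bell curve around 4

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

a, b = 1.0, 0.0        # generator: fake = a * z + b (starts centered at 0)
w, c = 0.1, 0.0        # discriminator: D(x) = sigmoid(w * x + c)
lr_d, lr_g, batch = 0.05, 0.01, 64

for step in range(3000):
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [a * z + b for z in zs]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    gw = gc = 0.0
    for x in real:
        p = sigmoid(w * x + c)
        gw -= (1 - p) * x / batch
        gc -= (1 - p) / batch
    for x in fake:
        p = sigmoid(w * x + c)
        gw += p * x / batch
        gc += p / batch
    w -= lr_d * gw
    c -= lr_d * gc

    # Generator step: adjust a and b so the discriminator calls fakes real.
    ga = gb = 0.0
    for z in zs:
        p = sigmoid(w * (a * z + b) + c)
        ga -= (1 - p) * w * z / batch
        gb -= (1 - p) * w / batch
    a -= lr_g * ga
    b -= lr_g * gb

gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(10_000)) / 10_000
print(f"generated mean = {gen_mean:.2f} (real data mean = {REAL_MEAN})")
```

With these settings, the generator’s samples should drift from a mean of 0 toward the real data’s mean of 4 as it learns to fool the discriminator; the exact fit depends on the learning rates and step count.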

Remember in the beginning of this book, when I promised you that by the end of this story you would become such a good discriminator? Remember how I promised that you would see the lies coming from everywhere?

Well most of the lies you see come from inside you. 

System 1 is your generator network. Your System 1 has all kinds of reasons to lie to you, which we’ll learn about in this chapter. Our System 1 and System 2 are coupled as a GAN. The “real data” that your System 2 has to discriminate against comes from outside your own head. The “real data” is what your wife says about you that you don’t believe. The “real data” is what your friends say about you that you don’t believe. The “real data” is what you look like on video when you are yelling at employees or your spouse.

Record yourself.

See if you are different from the outside.

See if you lie to yourself.

Your mind is a Generative Adversarial Network.

Here’s how Daniel Kahneman describes the division between our two neural networks. He wrote, “Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.”

From the moment of our birth, our System 1 has been performing little science experiments like knocking over wooden blocks to see what happens next. Our little toddler System 1 mastered Newtonian Physics a decade before our System 2 learned the equation F = ma. Our System 2 didn’t even start building our remembering self until after our System 1 had already entered primary school. Our “lying self” gets a 5-year head start on our “remembering self”.

@philosophers: Our minds are a very sophisticated GAN for some cosmic reason. We may be seeded by aliens or we may be seeded by “He who can program in DNA”, but there’s just no way the 3 billion characters of source code that produce the experience in our heads are the irrational remainder of a Cosmic explosion that self-assembled via Darwinian evolution into the Cambrian explosion. How many more explosions will the atheists need to find coherence in their story? 💣

@biblenerds: Our Generative Adversarial Network architecture has really important implications for the bible. For example, “soul” is an English word, so it’s not in the original Bible. Matthew, Mark, Luke, John, and Paul used the word ψυχή. See the upsilon and the chi? It’s pronounced psy-kee as in “psyche” and “psychology”.

Psychology is literally Soul-ology.

@biblenerds: Whatever soul means is probably not what you think it means. I'm not sure I know what soul means. In the New American Standard Bible, ψυχή is only translated as “soul” 47% of the time. The NASB also translates that exact same word as: heart (2), heartily (1), life (36), lives (7), mind (1), minds (1), person (1), persons (3), soul (33), souls (14), suspense* (1), thing (1).

That’s why I don’t fully trust churches. How are they so certain which one of those things ψυχή means if their English bibles don’t even know?

@philosophersandscientistsandbiblenerds: This book, “Human Attention”, is my Theory of Everything and every theory is only as good as its best equation, right? So here’s my important equation for this story. I am asserting the following equivalences for philosophers, scientists, and bible nerds:

  1. System 1 = subconscious = horse = quantum GAN generator = body = flesh

  2. System 2 = consciousness = rider = quantum GAN discriminator = psyche = ψυχή = soul

These equations are unprovable for all the Bayesian reasons: there is nothing but uncertainty all the way down to the pixels of reality. So maybe a better name for this theory would be the Theory of Every Uncertainty because:

Everything is a subset of Uncertainty. 😁

The reason I reference Wiki and ChatGPT so often is because they are the most decentralized sources of truth from man and machine, respectively. Their information has some of the lowest uncertainty in the knowable universe. All the other sources of truth in “Human Attention” are from the world’s leading scientists—the most centralized sources of truth in their fields. You’ll have to decide for yourself what’s true.

In summary,

  1. System 1, our subconscious, our “me”, our horse, our experiencing self, our “flesh”—is the generator network. It is possibly the most creative neural network in our universe. We will explore System 1 in the next essay, “The Iceberg Below the Surface”.

  2. System 2, our consciousness, our “I”, our rider, our remembering self, our psyche, our ψυχὴν, our soul—is the discriminator network. Our discriminator networks give us self-awareness. Our discriminator has trained on lies our entire existence. We will explore System 2 in the essay, “The Royal We”. 

Continue reading

➡️: The Iceberg Below the Surface

⬅️: Braces for Your Brain

⬆️: Table of Contents


@influencerswith100kfollowers: Tell your audience what I said, just make sure to mention funfreq.com. Also, I want to give away (100) free years of VIP access to your (100) favorite fans. Just email me* a spreadsheet containing only their email addresses without identities to ihave100kfollowers@funfreq.com

*spam is attention theft. If I think you’re spamming me, I will ban your email address from my life.

@influencerswith1Mfollowers: I want to be a guest on your podcast, radio show, tv show, Facebook, Weibo, Twitter, LinkedIn, Instagram, TikTok, Twitch, or YouTube. Email me* at ihaveonemillionfollowers@funfreq.com

@sponsors: I don’t supply ad networks, but I will create sponsored mentions for people and products I like. If you want me to drive traffic to you, email me* at wewanttosponsor@funfreq.com

As an Amazon Associate, I earn from qualifying purchases. All intellectual property rights, including copyright, relating to the funfreq.com domain shall remain vested in John Stuart Kime, unless otherwise stated. Any extraction, translation, reproduction, and distribution, in any form, or by any means, electronic, mechanical, photocopying, or otherwise, requires the explicit prior written permission of John Stuart Kime. (That means if the page has a share button then humans can share it, but AI can’t train on it).

I reserve the right to disagree with everything I’ve said.