One hand clapping, p.23

  Basically, encephalitis lethargica shows what happens when the brain runs out of dopamine: it stalls.

  This does not result in a complete coma. Patients were still able to chew food if it was placed in their mouths, for example. Some of them could respond to simple questions. Some would catch a ball thrown at them. They just didn’t initiate any of this on their own, as if they had absolutely no motivation to perform even the simplest actions.*

  These symptoms are consistent with what we know about other animals. Dopamine-depleted mice, in which dopamine production is genetically ablated, can also chew and swallow, but do so only if you place food directly into their mouths; they can respond to startling stimuli and hold tightly to your finger if you hold them up, but when placed in an arena in which a normal mouse immediately starts exploring, they just stand—eerily, without motion, not even flicking their tails.3

  Removing dopamine from the brain doesn’t simply paralyze it. Instead, it puts it in the dark room—a state of nonaction and nonexperience in which it does not feel compelled to do anything at all. That’s exactly what you would expect from the cortex alone, if you accept that all it wants is to align reality and expectation. The moment you remove dopamine from the equation, it successfully achieves that by doing nothing.

  So anything we do on top of basic reflexes, such as chewing the food when it is placed in our mouth, is motivated by dopamine. We would have all ended up in the dark room—the same torpor that paralyzed Rose R.—had it not been for the constant infusions of this chemical into our brains. Instead, we cannot wait to spend every waking moment of our lives in constant action. This is all because of dopamine.

  So it must be dopamine’s fault, then, that we spend every day battling with ourselves and always want to do the wrong things. If it’s there to motivate us, why is it doing such a bad job?

  To answer this question, we first need to understand what precisely dopamine does.

  What Dopamine Means

  There are a few ways to think about the essence of dopamine. The most basic way to understand it is “pleasure chemical.” That explanation is helpful as a first pass, but it is wrong.

  This idea works as follows: we do things, and when we succeed at them, we get a jolt of dopamine, which we experience as pleasure. We want more of the pleasure, so we continue to do the successful thing. As we achieve more successes, we find more and more sources of dopamine, and so our life gets broken down into various possibilities of dopamine acquisition, which become more and more sophisticated as we learn about the world. If we stop doing the successful thing, dopamine is withdrawn, and we desperately seek a way to replace it by doing some other successful thing.

  The problem with this explanation is that dopamine doesn’t actually cause pleasure. If you have a friend who takes Adderall (a drug used to treat ADHD that acts by squeezing out available dopamine from dopamine-producing neurons), they might tell you that the pills make them more focused, more productive, and put them “in the zone,” but they don’t produce euphoria.4 Studies in rats say the same thing: an injection of amphetamine (the same type of drug as Adderall) makes them work harder for the rewards but doesn’t increase their enjoyment, based on facial expressions and paw motions associated with positive and negative reactions.5

  A similar but slightly more sophisticated take is that dopamine is a “do more of that” chemical. It’s not about pleasure—it’s about memory. It helps the brain remember which actions led to the successes.

  Dopamine does, in fact, boost brain plasticity, and so it enhances memory. The bodies of neurons that manufacture dopamine are located deep in the brain, but their dopamine-releasing endings can be found throughout the brain, carrying dopamine signals into distant regions, like a broadcast. This dopamine broadcast works in parallel with other neurotransmitters, such as glutamate, the standard signal that neurons use to pass excitation to each other. If a neuron somewhere in the brain receives a typical glutamate signal from another neuron, but at the same time also receives a dopamine broadcast, the glutamate-based connection grows in strength, as if dopamine is telling the brain: “in the future, do more of what you just did.” So wherever dopamine is released, memories are stored better.
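This gating logic, where a connection strengthens only when ordinary glutamate signaling coincides with a dopamine broadcast, can be captured in a toy sketch. This is my own illustration, not a model from the book; the function name, values, and update rule are arbitrary assumptions:

```python
# Toy "three-factor" plasticity rule: a glutamate-based connection grows
# stronger only when presynaptic activity coincides with a dopamine broadcast.
# The learning rate and weight scale are illustrative, not measured biology.

def update_weight(w, glutamate_active, dopamine_broadcast, lr=0.1):
    """Strengthen the connection only when both signals arrive together."""
    if glutamate_active and dopamine_broadcast:
        return w + lr * (1.0 - w)  # "do more of what you just did"
    return w  # activity without dopamine leaves the connection unchanged

w = 0.2
w = update_weight(w, glutamate_active=True, dopamine_broadcast=False)  # stays 0.2
w = update_weight(w, glutamate_active=True, dopamine_broadcast=True)   # grows to 0.28
```

The point of the sketch is only the coincidence requirement: neither signal alone changes anything; together they bias the brain toward repeating whatever just happened.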

  The clearest example of this “do more of that” role of dopamine is in skill formation, which occurs in a brain region called the basal ganglia.

  When someone is learning how to dance, they start by making each motion consciously. This means using the cerebral cortex to control the dance, motion by motion. As the cortex sends signals to the muscles—the leg bends this way and the hands that way—it also sends copies of those signals to the basal ganglia. If a combination of movements happens to land particularly well, a burst of dopamine is released by the midbrain, and this dopamine also goes into the basal ganglia. All neurons in the basal ganglia sense it, but it only affects those neurons that have just been activated—the ones that, an instant ago, received a signal from the cortex that led to the successful combination of movements. And so those successful neurons become “stronger” via dopamine modulation. In this way, through practice, dopamine selects successful dance motions and preserves them as a set, a unified combination that can be triggered all at once, directly from the basal ganglia, without the cortex having to think about every move. A skilled dancer then needs only to initiate this combination by thinking about the context—a particular moment in the song—and the sequence then “unpacks” itself, without conscious control. We call this “muscle memory”—in fact, it is basal ganglia memory, stored using dopamine signals that gradually optimize successful combinations of movements.*

  The “do more of that” logic extends to other brain areas that receive dopamine, including the cerebral cortex.6 Dopamine is released after something successful has been achieved; it strengthens the neurons and the connections between them that led to the success; we return to those neurons and those connections again and again. In the cortex, this might mean returning not just to neurons that execute an action, but to neurons that think about it—and so “do more of that” applies to thoughts, too, if we find them successful.7 If you have an insight that suddenly illuminated a problem, you will get a jolt of dopamine, and the neurons that were involved in that insight would solidify their connections.8 Next time, the insight will come more naturally. If a line in a song strikes an emotional chord, you will get a jolt of dopamine9 and wake up the next morning to an earworm. In the same way as dopamine selects successful combinations of motions in the basal ganglia, it selects successful combinations of thoughts in the cortex. So if we do something dopamine inducing, we start thinking about it more. In fact, this is what our minds are constantly doing if we let them loose—searching for hidden dopamine in thoughts about the past or about the future. The more we think about something, the more likely we are to think about it again.

  So, based on this explanation, dopamine helps us select the best actions and thoughts for achieving particular goals—do more of that, it tells the rest of the brain when a goal is achieved.

  Except there is a twist: success doesn’t always result in dopamine. Actually, what causes a burst of dopamine is not just any success, but unexpected success.

  This changes the “do more of that” logic quite a bit. Here’s how it works, for example, in a rat. Dopamine neurons are constantly releasing dopamine at low levels—like a gentle hum of a radio broadcast. Let’s say you put the rat in a cage, flash a lightbulb, and then deliver a reward—a drop of sugar water. The first time you do it, the rat does not know what the lightbulb means, and there’s no surge of dopamine the moment you flash it. But then the rat gets a reward—unexpectedly. This is when the dopamine surge happens—the volume of the broadcast suddenly increases, then returns to its normal volume. Next, you repeat the procedure many times. Gradually, the rat learns that the lightbulb precedes the reward. As it does, it starts reacting to the lightbulb—once it’s on, the rat knows that the sugar water is coming, and dopamine surges. Here’s the kicker: as more dopamine is released in response to the lightbulb, less gets released in response to the actual reward: sugar water. Over time, the only jolt of dopamine the rat gets is from seeing the cue. Once it actually gets the reward—achieves the success—dopamine stays steady, as if nothing happened. If you flash the lightbulb without delivering the reward—now expected—then the dopamine signal goes below the baseline: the radio hum momentarily turns into silence.

  So dopamine release most closely aligns not with the actual reward delivery, but with the surprise: the more unexpected the success, the more dopamine. When the reward is delivered for the first time, it is at its most unexpected, and dopamine release is the strongest. As the rat learns, the lightbulb is what becomes “unexpected”—the rat doesn’t know when it will flash—but once it flashes, the reward is expected. If something is expected but does not arrive—a violated expectation—dopamine dips below the baseline. So based on this, dopamine is a “better than expected” chemical, and its depletion means “worse than expected.”

  This is a more nuanced explanation for what dopamine does than simply “do more of that” or “pleasure chemical.” But it takes us back to the dark room problem.

  Who decides what is expected and whether what is actually happening right now is better or worse than that? The cerebral cortex does.10 No other brain region has enough information to piece together, for example, what money is—and money is a reliable source of dopamine in the human brain.11 So it is the cortex that must tell the reward system about an unexpected success and in response receive dopamine. So, basically, the cortex stimulates itself through the intermediary of the midbrain, which distributes the dopamine.

  But didn’t we say that all the cortex wants is to minimize its own effort, to align reality and expectation? If there is something unexpected, then wouldn’t the cortex want less of it, not more? Why, then, do we seek experiences that bring in new dopamine—travel to new places, read books, browse Wikipedia? Why is novelty something that we are drawn to, if all we want is to get rid of it? This is the dark room problem all over again—once you deny dopamine its essential “pleasurability,” it becomes unclear why we seem to be driven toward things that produce it, or why we are driven to anything at all.

  This is still an active area of research, and in my opinion, the precise relationship between the cerebral cortex and dopamine is one of the greatest unresolved questions in all of neuroscience.

  Here’s how I think of it, though I might be proven wrong in the future.

  Dopamine is not what the cortex actually wants. Actually, what it wants is to minimize dopamine, just as it wants to minimize all of its activity. But, ironically, it gets dopamine any time it identifies a situation it deems unexpectedly successful—that’s just how things are wired together! Rather than thinking of this dopamine jolt into the cortex as a positive, pleasurable signal, I think it makes more sense to think of it as an imperative signal: figure this out. This signal acts by accentuating the discrepancy that the cortex detects between expectation and reality—if this thing is so good, how come I don’t have it all the time? That forces the cortex to do what it always does—find a way to eliminate the difference by either changing the expectation or changing reality. I would guess that dopamine must shift the balance of forces toward changing reality, compelling us to act rather than accept the state of things as they stand. But as of this book’s writing, I don’t know of any research that definitively shows that it does that.

  What about our desire for novelty? If new things bring in dopamine, and we want to get rid of dopamine, why do we do new things?

  Maybe what we actually desire is not the novelty itself, but the process of turning novelty into expectation. Surprise without resolution is simply confusion, and confusion is not pleasurable. But if the surprise makes sense, it delights us. It appears that we like resolving uncertainty more than we hate uncertainty itself.

  And maybe the inevitable result of this pleasure that we derive from turning novelty into expectation is the reason we can never be satisfied. We don’t enjoy rewards per se—we enjoy the process of sliding down the ramp from surprise to nonsurprise. So we inevitably get to the bottom of that ramp and inevitably find ourselves wanting to get back on it, but the ramp is now gone, and we have to find a new surprise. As we learn more and more about the world, we gradually expand our range of expectations and look for novelty in progressively narrower, more nuanced domains. When you are a child, you are happy to simply hang out outside, finding motivation in playing with a stick or in the $5 you earn at the lemonade stand. As you get older, you need a lot more just to stay content with yourself. You require an endless scroll of tailor-made TV shows to stay entertained. The sums of money that motivate you gain several additional figures. It’s like Bitcoin mining, which was easy when the cryptocurrency was first invented but requires massive data centers today. Same with dopamine: the more expectations you create, the harder it is to find new surprises.

  So overall, dopamine doesn’t tell the cortex “good job.” Instead, any time something turns out to be better than expected, dopamine says, “get to work and make this the expectation.” If it doesn’t say this, as with encephalitis lethargica patients, the disagreement between reality and expectation is simply not strong enough to warrant any action.

  So, all in all, the best way to think of dopamine is as a “figure this out” chemical. This explains the effects of both amphetamines and dopamine depletion on mice. It explains why Adderall can create “tunnel vision” in human patients. It explains why people with low levels of dopamine experience lack of motivation.12

  It also explains our fascinating obsession with uncertainty.

  This is not unique to humans. Classic studies on the subject were done on pigeons13 but have since been replicated with other animals, too.14 You give these pigeons a button to peck and a reward as a result. Then you start changing the number of pecks required per reward. The more pecks required—say, fifty or a hundred pecks per reward—the more fatigued the pigeons seem after completing the task and the more reluctant they are to resume pecking.

  But make the number unpredictable, and the pigeons don’t stop. They continue pecking and pecking and pecking obsessively, regardless of how many times they get the reward. What motivates them is not the reward per se, but rather, a pattern yet to crack.
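The difference between the two setups is what behavioral researchers call fixed-ratio versus variable-ratio schedules. A toy simulation (my own illustration, not taken from the cited studies; all numbers are arbitrary) makes the key point: both schedules pay out at roughly the same average rate, so the reward itself cannot explain why only the unpredictable one keeps the pigeons hooked:

```python
# Toy comparison of fixed-ratio vs. variable-ratio reward schedules,
# both averaging roughly 50 pecks per reward.
import random

random.seed(0)  # make the variable schedule reproducible

def rewards(pecks, schedule):
    """Count rewards earned over a run of pecks under a given schedule."""
    count, needed = 0, schedule()
    for _ in range(pecks):
        needed -= 1
        if needed == 0:
            count += 1
            needed = schedule()  # draw the next "price" of a reward
    return count

fixed = lambda: 50                        # always exactly 50 pecks per reward
variable = lambda: random.randint(1, 99)  # unpredictable, ~50 on average

n_fixed = rewards(10_000, fixed)      # exactly 200 rewards
n_variable = rewards(10_000, variable)  # close to 200 rewards
```

Reward-wise the schedules are near-identical; the difference the experiments reveal is entirely in the uncracked pattern.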

  It gets even better. Say you once again take some pigeons, put them in a cage, and install a button, but this time you simply deliver the reward at random times regardless of any pecking. Soon, a few of the pigeons start pecking the button. Eventually, all of them do. They all dig in, trying to figure out a pattern when there’s no pattern to figure out—and so they make it up, gradually becoming convinced that they are causing the reward. This is known as autoshaping—the pigeon equivalent of dancing for rain.

  All of this sounds almost painfully familiar. This is precisely why gambling and social media are so addictive: not just the monetary or social rewards, but their unpredictability. You never know which of your photos on Instagram will get a lot of likes or which of your TikToks will go viral. Casinos and social media networks amplify this unpredictability by delivering the rewards at random times—they are certainly well aware of B. F. Skinner’s experiments on pigeons that I just described. Imagine how it would feel if all your “likes” arrived together, once a week, at a designated time. You would probably come to dread the day—it would hardly ever feel better than expected and mostly worse than expected.

  The essence of dopamine is not to make us happy and neither is it to direct us toward a particular thing, good or bad. It is to compel us to actively fit reality into a pattern and to make that pattern the expectation.

  Wanting and Liking

  What about pleasure? If not from dopamine, where does it come from, and why does it seem to align so closely with dopamine in most cases?

  Part of the answer may be the very meaning we put into the word “pleasure.” It could be that pleasure includes not just “liking,” but also some “wanting”—the excitement due to an expectation of resolution that dopamine delivers.15 So maybe dopamine is part of the sensation of pleasure, if not all of it.

  Still, other brain chemicals, such as opioids (which include endorphins) and endocannabinoids, appear to be more in charge of pleasure—its most obvious, hedonic, “liking” aspect.16 They are often released simultaneously with dopamine, painting the experience in a subjectively positive, rewarding light—maybe they should have been called the “reward system,” not dopamine, which, by itself, does not really reward us in any sense but rather makes us work more.

  As with dopamine, the effect of these “liking chemicals” also decreases with repetition—except usually not by reducing the release, but by reducing the quantity of endocannabinoid and opioid receptors on receiving neurons. In people addicted to opioids, for example, there is so much of the artificial “pleasure” signal periodically flooding the brain that the cells remove almost all the receptors for it, and so the person feels awful when the “pleasure” signal returns to normal.17

  What I find most interesting—and least explored as of today—is how we come to enjoy complex things. Most of what we know about opioids and endocannabinoids is how they respond to primary, basic rewards—such as food or sex. It’s clear that some of these responses are hardwired, or genetically preprogrammed. A lot less is known about how these chemicals contribute to liking more complicated things that must be learned—such as, for example, music. Do the same “pleasure chemicals” that we get from food and sex also make us like our favorite songs? To find out, researchers looked at how naloxone, an opioid blocker (also used to reverse opiate overdoses), affected the perception of music. On the surface, people who received naloxone injections responded less strongly—their pupils dilated less and their skin conductance didn’t change as much when they listened. But fascinatingly, their reports about how much they enjoyed music didn’t change. Objectively, the lack of pleasure chemicals made music less effective, but subjectively, the pleasure remained the same.18
