But suppose you have to pull the trigger or else there’ll be no readiness potential to detect and your friend’s research will be slowed down. Nonetheless, you still have options. You can shoot the person. You can shoot but intentionally miss. You can shoot yourself rather than comply.[*] As a major plot twist, you can shoot your friend.
It makes intuitive sense that if you want to understand what you wind up doing with your index finger on that trigger, you should explore Libetian concerns, studying particular neurons and particular milliseconds in order to understand the instant you feel you have chosen to do something, the instant your brain has committed to that action, and whether those two things are the same. But here’s why these Libetian debates, as well as a criminal justice system that cares only about whether someone’s actions are intentional, are irrelevant to thinking about free will. As first aired at the beginning of this chapter, it is because neither asks a question central to every page of this book: Where did that intent come from in the first place?
If you don’t ask that question, you’ve restricted yourself to a domain of a few seconds. Which is fine by many people. Frankfurt writes, “The questions of how the actions and his identifications with their springs are caused are irrelevant to the questions of whether he performs the actions freely or is morally responsible for performing them.” Or in the words of Shadlen and Roskies, Libetian-ish neuroscience “can provide a basis for accountability and responsibility that focuses on the agent, rather than on prior causes” (my emphasis).
Where does intent come from? Yes, from biology interacting with environment one second before your SMA warmed up. But also from one minute before, one hour, one millennium—this book’s main song and dance. Debating free will can’t start and end with readiness potentials or with what someone was thinking when they committed a crime.[*] Why have I spent page after page going over the minutiae of the debates about what Libet means before blithely dismissing all of it with “And yet I think that is irrelevant”? Because Libet is viewed as the most important study ever done exploring the neurobiology of whether we have free will. Because virtually every scientific paper on free will trots out Libet early on. Because maybe you were born at the precise moment that Libet published his first study and now, all these years later, you’re old enough that your music is called “classic” rock and you have started to make little middle-aged grunting sounds when you get up from a chair . . . and they’re still debating Libet. And as noted before, this is like trying to understand a movie solely by watching its final three minutes.[33]
This charge of myopia is not meant to sound pejorative. Myopia is central to how we scientists go about finding out new things—by learning more and more about less and less. I once spent nine years on a single experiment; this can become the center of a very small universe. And I’m not accusing the criminal justice system of myopically focusing solely on whether there was intent—after all, where intent came from, someone’s history and potential mitigating factors, are considered when it comes to sentencing.
Where I am definitely trying to sound pejorative and worse is when this ahistorical view of judging people’s behavior is moralistic. Why would you ignore what came before the present in analyzing someone’s behavior? Because you don’t care why someone else turned out to be different from you.
As one of the few times in this book where I will knowingly be personal, this brings me to the thinking of Daniel Dennett of Tufts University. Dennett is one of the best-known and most influential philosophers out there, a leading compatibilist who has made his case both in technical work within his field and in witty, engaging popular books.
He implicitly takes this ahistorical stance and justifies it with a metaphor that comes up frequently in his writing and debates. For example, in Elbow Room: The Varieties of Free Will Worth Wanting, he asks us to imagine a footrace where one person starts off way behind the rest at the starting line. Would this be unfair? “Yes, if the race is a hundred-yard dash.” But it is fair if this is a marathon, because “in a marathon, such a relatively small initial advantage would count for nothing, since one can reliably expect other fortuitous breaks to have even greater effects.” As a succinct summary of this view, he writes, “After all, luck averages out in the long run.”[34]
No, it doesn’t.[*] Suppose you’re born a crack baby. In order to counterbalance this bad luck, does society rush in to ensure that you’ll be raised in relative affluence and with various therapies to overcome your neurodevelopmental problems? No, you are overwhelmingly likely to be born into poverty and stay there. Well then, says society, at least let’s make sure your mother is loving, is stable, has lots of free time to nurture you with books and museum visits. Yeah, right; as we know, your mother is likely to be drowning in the pathological consequences of her own miserable luck in life, with a good chance of leaving you neglected, abused, shuttled through foster homes. Well, does society at least mobilize then to counterbalance that additional bad luck, ensuring that you live in a safe neighborhood with excellent schools? Nope, your neighborhood is likely to be gang-riddled and your school underfunded.
You start out a marathon a few steps back from the rest of the pack in this world of ours. And counter to what Dennett says, a quarter mile in, because you’re still lagging conspicuously at the back of the pack, it’s your ankles that some rogue hyena nips. At the five-mile mark, the rehydration tent is almost out of water and you can get only a few sips of the dregs. By ten miles, you’ve got stomach cramps from the bad water. By twenty miles, your way is blocked by the people who assume the race is done and are sweeping the street. And all the while, you watch the receding backsides of the rest of the runners, each thinking that they’ve earned, they’re entitled to, a decent shot at winning. Luck does not average out over time and, in the words of Levy, “we cannot undo the effects of luck with more luck”; instead our world virtually guarantees that bad and good luck are each amplified further.
In the same paragraph, Dennett writes that “a good runner who starts at the back of the pack, if he is really good enough to DESERVE winning, will probably have plenty of opportunity to overcome the initial disadvantage” (my emphasis). This is one step above believing that God invented poverty to punish sinners.
Dennett has one more thing to say that summarizes this moral stance. Switching sports metaphors to baseball and the possibility that you think there’s something unfair about how home runs work, he writes, “If you don’t like the home run rule, don’t play baseball; play some other game.” Yeah, I want another game, says our now-adult crack baby from a few paragraphs ago. This time, I want to be born into a well-off, educated family of tech-sector overachievers in Silicon Valley who, once I decide that, say, ice-skating seems fun, will get me lessons and cheer me on from my first wobbly efforts on the ice. Fuck this life I got dumped into; I want to change games to that one.
Thinking that it is sufficient to merely know about intent in the present is far worse than just intellectual blindness, far worse than believing that it is the very first turtle on the way down that is floating in the air. In a world such as we have, it is deeply ethically flawed as well.
Time to see where intent comes from, and how the biology of luck doesn’t remotely average out in the long run.[35]
3
Where Does Intent Come From?
Because of our fondness for all things Libetian, we sit you in front of two buttons; you must push one of them. You’re given only hazy information about the consequences of pushing each button, beyond being told that if you pick the wrong button, thousands of people will die. Now pick.
No free will skeptic insists that sometimes you form your intent, lean way over to push the appropriate button, and suddenly, the molecules comprising your body deterministically fling you the other way and make you push the other button.
Instead, the last chapter showed how the Libetian debate concerns when exactly you formed that intent, when you became conscious of having formed it, whether neurons commanding your muscles had already activated by then, when it was that you could still veto that intention. Plus, questions about your SMA, frontal cortex, amygdala, basal ganglia—what they knew and when they knew it. Meanwhile, in parallel in the courtroom next door, lawyers argue over the nature of your intent.
The last chapter concluded by claiming that all these minutiae of milliseconds are completely irrelevant to why there is no free will. Which is why we didn’t bother sticking electrodes into your brain just before seating you. They wouldn’t reveal anything useful.
This is because the Libetian Wars don’t ask the most fundamental question: Why did you form the intent that you did?
This chapter shows how you don’t ultimately control the intent you form. You wish to do something, intend to do it, and then successfully do so. But no matter how fervent, even desperate, you are, you can’t successfully wish to wish for a different intent. And you can’t meta your way out—you can’t successfully wish for the tools (say, more self-discipline) that will make you better at successfully wishing what you wish for. None of us can.
Which is why it would tell us nothing to stick electrodes in your head to monitor what neurons are doing in the milliseconds when you form your intent. To understand where your intent came from, all that needs to be known is what happened to you in the seconds to minutes before you formed the intention to push whichever button you choose. As well as what happened to you in the hours to days before. And years to decades before. And during your adolescence, childhood, and fetal life. And what happened when the sperm and egg destined to become you merged, forming your genome. And what happened to your ancestors centuries ago when they were forming the culture you were raised in, and to your species millions of years ago. Yeah, all that.
Understanding this turtleism shows how the intent you form, the person you are, is the result of all the interactions between biology and environment that came before. All things out of your control. Each prior influence flows without a break from the effects of the influences before. As such, there’s no point in the sequence where you can insert a freedom of will that will be in that biological world but not of it.
Thus, we’ll now see how who we are is the outcome of the prior seconds, minutes, decades, geological periods before, over which we had no control. And how bad and good luck sure as hell don’t balance out in the end.
Seconds to Minutes Before
We ask our first version of the question of where that intent came from: What sensory information flowing into your brain (including some you’re not even conscious of) in the preceding seconds to minutes helped form that intent?[*] This can be obvious—“I formed the intent to push that button because I heard the harsh demand that I do so, and saw the gun pointed in my face.”
But things can be subtler. You view a picture of someone holding an object, for a fraction of a second; you must decide whether it was a cell phone or a handgun. And your decision in that second can be influenced by the pictured person’s gender, race, age, and facial expression. We all know of real-life versions of this experiment resulting in police mistakenly shooting an unarmed person, and of the implicit bias that contributed to that mistake.[1]
Some examples of intent being influenced by seemingly irrelevant stimuli have been particularly well studied.[*] One domain concerns how sensory disgust shapes behavior and attitudes. In one highly cited study, subjects rated their opinions about various sociopolitical topics (e.g., “On a scale of 1 to 10, how much do you agree with this statement?”). And if subjects were sitting in a room with a disgusting smell (versus a neutral one), the average level of warmth both conservatives and liberals reported for gay men decreased. Sure, you think—you’d feel less warmth for anyone if you’re gagging. However, the effect was specific to gay men, with no change in warmth toward lesbians, the elderly, or African Americans. Another study showed that disgusting smells make subjects less accepting of gay marriage (as well as of other politicized aspects of sexual behavior). Moreover, just thinking about something disgusting (eating maggots) makes conservatives less willing to come into contact with gay men.[2]
Then there’s a fun study where subjects were either made uncomfortable (by placing their hand in ice water) or disgusted (by placing their thinly gloved hand in imitation vomit).[*] Subjects then recommended punishment for norm violations that were purity related (e.g., “John rubbed someone’s toothbrush on the floor of a public restroom” or the supremely distinctive “John pushed someone into a dumpster which was swarming with cockroaches”) or violations unrelated to purity (e.g., “John scratched someone’s car with a key”). Being disgusted by fake puke, but not being icily uncomfortable, made subjects more selectively punitive about purity violations.[3]
How can a disgusting smell or tactile sensation change unrelated moral assessments? The phenomenon involves a brain region called the insula (aka the insular cortex). In mammals, it is activated by the smell or taste of rancid food, automatically triggering spitting out the food and the species’s version of barfing. Thus, the insula mediates olfactory and gustatory disgust and protects from food poisoning, an evolutionarily useful thing.
But the versatile human insula also responds to stimuli we deem morally disgusting. The insula’s “this food’s gone bad” function in mammals is probably a hundred million years old. Then, a few tens of thousands of years ago, humans invented constructs like morality and disgust at moral norm violations. That’s way too little time to have evolved a new brain region to “do” moral disgust. Instead, moral disgust was added to the insula’s portfolio; as it’s said, rather than inventing, evolution tinkers, improvising (elegantly or otherwise) with what’s on hand. Our insula neurons don’t distinguish between disgusting smells and disgusting behaviors, explaining metaphors about moral disgust leaving a bad taste in your mouth, making you queasy, making you want to puke. You sense something disgusting, yech . . . and unconsciously, it occurs to you that it’s disgusting and wrong when those people do X. And once activated this way, the insula then activates the amygdala, a brain region central to fear and aggression.[4]
Naturally, there is the flip side to the sensory disgust phenomenon—sugary (versus salty) snacks make subjects rate themselves as more agreeable and helpful individuals and rate faces and artwork as more attractive.[5]
Ask a subject, Hey, in last week’s questionnaire you were fine with behavior A, but now (in this smelly room) you’re not. Why? They won’t explain how a smell confused their insula and made them less of a moral relativist. They’ll claim some recent insight caused them, bogus free will and conscious intent ablaze, to decide that behavior A isn’t okay after all.
It’s not just sensory disgust that can shape intent in seconds to minutes; beauty can as well. For millennia, sages have proclaimed how outer beauty reflects inner goodness. While we may no longer openly claim that, beauty-is-good still holds sway unconsciously; attractive people are judged to be more honest, intelligent, and competent; are more likely to be elected or hired, and at higher salaries; are less likely to be convicted of crimes and, if convicted, receive shorter sentences. Jeez, can’t the brain distinguish beauty from goodness? Not especially. In three different studies, subjects in brain scanners alternated between rating the beauty of something (e.g., faces) and the goodness of some behavior. Both types of assessments activated the same region (the orbitofrontal cortex, or OFC); the more beautiful or good, the more OFC activation (and the less insula activation). It’s as if irrelevant emotions about beauty gum up cerebral contemplation of the scales of justice. Which was shown in another study—moral judgments were no longer colored by aesthetics after temporary inhibition of a part of the PFC that funnels information about emotions into the frontal cortex.[*] “Interesting,” the subject is told. “Last week, you sent that other person to prison for life. But just now, when looking at this other person who had done the same thing, you voted for them for Congress—how come?” And the answer isn’t “Murder is definitely bad, but OMG, those eyes are like deep, limpid pools.” Where did the intent behind the decision come from? The fact that the brain hasn’t had enough time yet to evolve separate circuits for evaluating morality and aesthetics.[6]
Next, want to make someone more likely to choose to clean their hands? Have them describe something crummy and unethical they’ve done. Afterward, they’re more likely to wash their hands or reach for hand sanitizer than if they’d been recounting something ethically neutral they’d done. Subjects instructed to lie about something rate cleansing (but not noncleansing) products as more desirable than do those instructed to be honest. Another study showed remarkable somatic specificity, where lying orally (via voice mail) increased the desire for mouthwash, while lying by hand (via email) made hand sanitizers more desirable. One neuroimaging study showed that when lying by voice mail boosts preference for mouthwash, a different part of the sensory cortex activates than when lying by email boosts the appeal of hand sanitizers. Neurons believing, literally, that your mouth or hand, respectively, is dirty.
Thus, feeling morally soiled makes us want to cleanse. I don’t believe there’s a soul for such moral taint to weigh on, but it sure weighs on your frontal cortex; after disclosing an unethical act, subjects are less effective at cognitive tasks that tap into frontal function . . . unless they got to wash their hands in between. The scientists who first reported this general phenomenon poetically named it the “Macbeth effect,” after Lady Macbeth, washing her hands of that imaginary damned spot caused by her murderousness.[*] Reflecting that, induce disgust in subjects, and if they can then wash their hands, they judge purity-related norm violations less harshly.[7]