Determined, page 15
It’s 2022. Same cohort with, again, one person destined to go off the rails forty years hence. Again, here are their blood samples. This time, this century, you use them to sequence everyone’s genome. You discover that one individual has a mutation in a gene called MAPT, which codes for something in the brain called the tau protein. And as a result, you can accurately predict that it will be that person, because by age sixty, he will be showing the symptoms of behavioral variant frontotemporal dementia.[11]
Back to the 1922 cohort. The person in question has started shoplifting, threatening strangers, urinating in public. Why did he behave that way? Because he chose to do so.
Year 2022’s cohort, same unacceptable acts. Why will he have behaved that way? Because of a deterministic mutation in one gene.[*]
According to the logic of the thinkers just quoted, the 1922 person’s behavior resulted from free will. Not “resulted from behavior we would erroneously attribute to free will.” It was free will. And in 2022, it is not free will. In this view, “free will” is what we call the biology that we don’t understand on a predictive level yet, and when we do understand it, it stops being free will. Not that it stops being mistaken for free will. It literally stops being. There is something wrong if an instance of free will exists only until there is a decrease in our ignorance. And that’s the crucial point: our intuitions about free will certainly work that way, but free will itself can’t.
We do something, carry out a behavior, and we feel like we’ve chosen, that there is a Me inside separate from all those neurons, that agency and volition dwell there. Our intuitions scream this, because we don’t know about, can’t imagine, the subterranean forces of our biological history that brought it about. It is a huge challenge to overcome those intuitions when you still have to wait for science to be able to predict that behavior precisely. But the temptation to equate chaoticism with free will shows just how much harder it is to overcome those intuitions when science will never be able to predict precisely the outcomes of a deterministic system.
Wrong Conclusion #2: A Causeless Fire
Most of the fascination with chaoticism comes from the fact that you can start with some simple deterministic rules for a system and produce something ornate and wildly unpredictable. We’ve now seen how mistaking this for indeterminism leads to a tragic downward spiral into a cauldron of free-will belief. Time now for the other problem.
Go back to the figure at the top of page 141, with its demonstration, using rule 22, that two different starting states can turn into the identical pattern, and thus that it is not possible to know which of the two was the actual source.
This is the phenomenon of convergence. It’s a term frequently used in evolutionary biology. In this instance, it’s not so much that you can’t tell which of two different possible ancestors a particular species arose from (e.g., “Was the ancestor of elephants three-legged or five-legged? Who can tell?”). It’s more when two very different sorts of species have converged on the same solution to the same sort of selective challenge.[*] Among analytical philosophers, the phenomenon is termed overdetermination—when two different pathways could each separately determine the progression to the same outcome. Implicit in this convergence is a loss of information. Plop down in some row in the middle of a cellular automaton, and not only can’t you predict what is going to happen, but you can’t know what did happen, which possible pathway led to the present state.
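This loss of information can be checked directly in a few lines of code. What follows is a minimal sketch: rule 22’s update table is the standard one from Wolfram’s numbering scheme, while the eight-cell row width and the fixed zero boundaries are illustrative assumptions, not anything specified in the text.

```python
# Toy check of convergence (loss of information) in rule 22.
# Assumptions for illustration: eight-cell rows, fixed zero boundaries.
from itertools import product

RULE = 22  # binary 00010110: a cell switches on iff exactly one of the
           # three cells above it (left, center, right) was on

def step(row):
    padded = (0,) + row + (0,)
    return tuple(
        (RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    )

# Group all 256 possible rows by the row they produce one step later.
ancestors = {}
for state in product((0, 1), repeat=8):
    ancestors.setdefault(step(state), []).append(state)

# Any successor row with more than one ancestor is a point of convergence:
# seeing that row cannot tell you which past produced it.
converged = {row: srcs for row, srcs in ancestors.items() if len(srcs) > 1}
print(f"{len(converged)} rows have multiple possible pasts")
```

For instance, a row of all ones and a row of all zeros both step to all zeros; plopped down at the all-zeros row, you cannot recover which of the two was the actual history.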
This issue of convergence has a surprising parallel in legal history. Thanks to negligence, a fire starts in building A. Nearby, completely unrelated, separate negligence gives rise to a fire in building B. The two fires spread toward each other and converge, burning down building C in the center. The owner of building C sues the other two owners. But which negligent person was responsible for the fire? Not me, each would argue in court—if my fire hadn’t happened, building C would still have burned down. And it worked, in that neither owner would be held responsible. This was the state of things until 1927, when the courts ruled in Kingston v. Chicago and NW Railroad that it is possible to be partially responsible for what happened, for there to be fractions of guilt.[12]
Similarly, consider a group of soldiers lining up in a firing squad to kill someone. No matter how much one is pulling a trigger in glorious obedience to God and country, there’s often some ambivalence, perhaps some guilt about mowing down someone or worry that fortunes will shift and you’ll wind up in front of a firing squad. And for centuries, this gave rise to a cognitive manipulation—one soldier at random was given a blank rather than a real bullet. No one knew who had it, and thus every shooter knew that they might have gotten the blank and thus weren’t actually a killer. When lethal injection machines were invented, some states stipulated that there’d be two separate delivery routes, each with a syringe full of poison. Two people would each press one of two buttons, and a randomizer in the machine would infuse the poison from one syringe into the person and dump the contents of the other into a bucket. And not keep a record of which did which. Each person thus knew that they might not have been the executioner. Those are nice psychological tricks for defusing a sense of responsibility.[13]
Chaoticism pulls for a related type of psychological trick. The feature of chaoticism where knowing a starting state doesn’t allow you to predict what will happen is a crushing blow to classic reductionism. But the inability to ever know what happened in the past demolishes what’s called radical eliminative reductionism, the ability to rule out every conceivable cause of something until you’ve gotten down to the cause.
So you can’t do radical eliminative reductionism and decide what single thing caused the fire, which button presser delivered the poison, or what prior state gave rise to a particular chaotic pattern. But that doesn’t mean that the fire wasn’t actually caused by anything, that no one shot the bullet-riddled prisoner, or that the chaotic state just popped up out of nowhere. Ruling out radical eliminative reductionism doesn’t prove indeterminism.
Obviously. But this is subtly what some free-will supporters conclude—if we can’t tell what caused X, then you can’t rule out an indeterminism that makes room for free will. As one prominent compatibilist writes, it is unlikely that reductionism will rule out the possibilities of free will, “because the chain of cause and effect contains breaks of the type that undermine radical reductionism and determinism, at least in the form required to undermine freedom.” God help me that I’ve gotten to the point of examining the split hair of and, but chaotic convergence does not undermine radical reductionism and determinism. Just the former. And in the view of that writer, this supposed undermining of determinism is relevant to “policies upon which we hinge responsibility.” Just because you can’t tell which of two towers of turtles propping you up goes all the way down doesn’t mean that you’re floating in the air.[14]
Conclusion
Where have we gotten at this point? The crushing of knee-jerk reductionism, the demonstration that chaoticism shows just the opposite of chaos, the fact that there’s less randomness than often assumed and, instead, unexpected structure and determinism—all of this is wonderful. Ditto for butterfly wings, the generation of patterns on sea shells, and Will Darling. But to get from there to free will requires that you mistake a failure of reductionism that makes it impossible to precisely describe the past or predict the future for proof of indeterminism. In the face of complicated things, our intuitions beg us to fill up what we don’t understand, even can never understand, with mistaken attributions.
On to our next, related topic.
7
A Primer on Emergent Complexity
The previous two chapters can basically be distilled to the following:
—“Break it down to its component parts” reductionism doesn’t work for understanding some vastly interesting things about us. Instead, in such chaotic systems, minuscule differences in starting states amplify enormously in their consequences.
—This nonlinearity makes for fundamental unpredictability, suggesting to many that there is an essentialism that defies reductive determinism, meaning that the “there can’t be free will because the world is deterministic” stance goes down the drain.
—Nope. Unpredictable is not the same thing as undetermined; reductive determinism is not the only kind of determinism; chaotic systems are purely deterministic, shutting down that particular angle of proclaiming the existence of free will.
This chapter focuses on a related domain of amazingness that seems to defy determinism. Let’s start with some bricks. Granting ourselves some artistic license, they can crawl around on tiny invisible legs. Place one brick in a field; it crawls around aimlessly. Two bricks, ditto. A bunch, and some start bumping into each other. When that happens, they interact in boringly simple ways—they can settle down next to each other and stay that way, or one can crawl up on top of another. That’s all. Now scatter a hundred zillion of these identical bricks in this field, and they slowly crawl around, zillions sitting next to each other, zillions crawling on top of others . . . and they slowly construct the Palace of Versailles. The amazingness is not that, wow, something as complicated as Versailles can be built out of simple bricks.[*] It’s that once you’d made a big enough pile of bricks, all those witless little building blocks, operating with a few simple rules, without a human in sight, assembled themselves into Versailles.
This is not chaos’s sensitive dependence on initial conditions, where these identical building blocks actually all differed when viewed at a high magnification, and you then butterflew to Versailles. Instead, put enough of the same simple elements together, and they spontaneously self-assemble into something flabbergastingly complex, ornate, adaptive, functional, and cool. With enough quantity, extraordinary quality just . . . emerges, often even unpredictably.[*],[1]
As it turns out, such emergent complexity occurs in realms very pertinent to our interests. The vast difference between the pile of gormless, identical building blocks and the Versailles they turned themselves into seems to defy conventional cause and effect. Our sensible sides think (incorrectly . . .) of words like indeterministic. Our less rational sides think of words like magic. In either case, the “self” part of self-assembly seems so agentive, so rife with “be the palace of bricks that you wish to be,” that dreams of free will beckon. An idea that this and the next chapter will try to dispel.
Why We’re Not Talking about Michael Jackson Moonwalking
Let’s start with what wouldn’t count as emergent complexity.
Put a beefy guy in a faux military uniform carrying a sousaphone in the middle of a field. His behavior is simple—he can walk forward, to the left, or to the right, and does so randomly. Scatter a bunch of other instrumentalists there, and the same thing happens, all randomly moving, collectively making no sense. But toss three hundred of them onto the field and out of that emerges a giant Michael Jackson moonwalking past the fifty-yard line during the halftime performance.[*]
There are all these interchangeable, fungible marching band marchers with the same minuscule repertoire of movements. Why doesn’t this count as emergence? Because there’s a master plan. Not inside the sousaphonist but in the visionary who fasted in the desert, hallucinating pillars of salt moonwalking, then returned to the marching band with the Good News. This is not emergence.
Here’s real emergent complexity: Start with one ant. It wanders aimlessly on the field. As do ten of them. A hundred interact with vague hints of patterns. But put thousands of them together and they form a society with job specialization, construct bridges or rafts out of their bodies that float for weeks, build flood-proof underground nests with passageways paved with leaves, leading to specialized chambers with their own microclimates, some suited for farming fungi and others for brood rearing. A society that even alters its functions in response to changing environmental demands. No blueprint, no blueprint maker.[2]
What makes for emergent complexity?
—There is a huge number of ant-like elements, all identical or coming in just a few different types.
—The “ant” has a very small repertoire of things it can do.
—There are a few simple rules based on chance interactions with immediate neighbors (e.g., “walk with this pebble in your little ant mandibles until you bump into another ant holding a pebble, in which case, drop yours”). No ant knows more than these few rules, and each acts as an autonomous agent.
—Out of the hugely complicated phenomena this can produce emerge irreducible properties that exist only on the collective level (e.g., a single molecule of water cannot be wet; “wetness” emerges only from the collectivity of water molecules, and studying single water molecules can’t predict much about wetness) and that are self-contained at their level of complexity (i.e., you can make accurate predictions about the behavior of the collective level without knowing much about the component parts). As summarized by Nobel laureate physicist Philip Anderson, “More is different.”[*],[3]
—These emergent properties are robust and resilient—a waterfall, for example, maintains consistent emergent features over time despite the fact that no water molecule participates in waterfall-ness more than once.[4]
—A detailed picture of the mature emergent system can be (but is not necessarily) unpredictable, which should have echoes of the previous two chapters. Knowing the starting state and reproduction rules (à la cellular automata) gives you the means to develop the complexity but not the means to describe it. Or, to use a word offered by a leading developmental neurobiologist of the past century, Paul Weiss, the starting state can never contain an “itinerary.”[*],[5]
—Part of this unpredictability is due to the fact that in emergent systems, the road you are traveling on is being constructed at the same time and, in fact, your being on it is influencing the construction process by constituting feedback on the road-making process.[*] Moreover, the goal you are traveling toward may not even exist yet—you are destined to interact with a target spot that, with any luck, will be constructed in time. In addition, unlike last chapter’s cellular automata, emergent systems are also subject to randomness (jargon: “stochastic events”), where the sequence of random events makes a difference.[*]
—Often the emergent properties can be breathtakingly adaptive and, despite that, there’s no blueprint or blueprint maker.[6]
Here’s a simple version of the adaptiveness: Two bees leave their hive, each flying randomly until finding a food source. They both do, with one source being better. Each returns to the hive, and neither bee knows anything about the other’s food source. Nonetheless, all the bees soon fly straight to the better site.
Here’s a more complex example: An ant forages for food, checking eight different places. Little ant legs get tired, and ideally the ant visits each site only once, and in the shortest possible path of the 5,040 possible ones (i.e., seven factorial). This is a version of the famed “traveling salesman problem,” which has kept mathematicians busy for centuries, fruitlessly searching for a general solution. One strategy for solving the problem is with brute force—examine every possible route, compare them all, and pick the best one. This takes a ton of work and computational power—by the time you’re up to ten places to visit, there are more than 360,000 possible ways to do it, more than 80 billion with fifteen places to visit. Impossible. But take the roughly ten thousand ants in a typical colony, set them loose on the eight-feeding-site version, and they’ll come up with something close to the optimal solution out of the 5,040 possibilities in a fraction of the time it would take you to brute-force it, with no ant knowing anything more than the path that it took plus two rules (which we’ll get to). This works so well that computer scientists can solve problems like this with “virtual ants,” making use of what is now known as swarm intelligence.[*],[7]
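The two approaches contrasted above can be sketched in a few lines of Python. This is a hedged illustration, not the actual ant biology: the eight site coordinates are invented, and the “virtual ants” follow the two standard rules of ant-colony optimization (prefer pheromone-rich, short edges; deposit pheromone in inverse proportion to your tour’s length), which is presumably akin to the two rules the text foreshadows.

```python
# Brute force vs. virtual ants on the eight-site traveling salesman problem.
# Site coordinates and all ant parameters are invented for illustration.
import itertools, math, random

random.seed(0)
SITES = [(random.random(), random.random()) for _ in range(8)]

def tour_length(tour):
    return sum(math.dist(SITES[a], SITES[b]) for a, b in zip(tour, tour[1:]))

# Brute force: fix the starting site and enumerate all 7! = 5,040 orderings.
routes = [(0,) + p for p in itertools.permutations(range(1, 8))]
optimal = min(routes, key=tour_length)

# Virtual ants: each ant builds a tour edge by edge, biased toward short,
# pheromone-rich edges; short tours then get their edges reinforced.
def virtual_ants(n_ants=50, n_rounds=30, evaporation=0.5):
    tau = [[1.0] * 8 for _ in range(8)]            # pheromone on each edge
    best, best_len = None, float("inf")
    for _ in range(n_rounds):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, 8))
            while unvisited:
                i, cand = tour[-1], list(unvisited)
                weights = [tau[i][j] / math.dist(SITES[i], SITES[j])
                           for j in cand]
                j = random.choices(cand, weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        tau = [[t * evaporation for t in row] for row in tau]  # evaporate
        for tour in tours:
            L = tour_length(tour)
            if L < best_len:
                best, best_len = tour, L
            for a, b in zip(tour, tour[1:]):       # reinforce this tour
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best, best_len

ant_tour, ant_len = virtual_ants()
print(f"brute force: {tour_length(optimal):.3f}  "
      f"virtual ants: {ant_len:.3f}")
```

The point of the sketch is the asymmetry in effort: the brute-force search must touch all 5,040 routes, while each virtual ant knows only the path it took plus the two pheromone rules, yet the colony typically lands on a tour at or near the optimum after sampling only a fraction of the possibilities.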
There’s the same adaptiveness in the nervous system. Take a microscopic worm that neurobiologists love;[*] the wiring of its neurons shows close to traveling-salesman optimization in terms of the cost of wiring them all up; same in the nervous system of flies. And in primate brains as well: examine the primate cortex and identify eleven different regions that wire up with each other; of the several million possible ways of connecting them, the developing brain finds the optimal solution. As we’ll see, in all these cases this is accomplished with rules that are conceptually similar to what the traveling-salesman ants do.[8]
Other types of adaptiveness also abound. A neuron “wants” to spread its array of thousands of dendritic branches as efficiently as possible for receiving inputs from other neurons, even competing with neighboring cells. Your circulatory system “wants” to spread its thousands of branching arteries as efficiently as possible in delivering blood to every cell in the body. A tree “wants” to branch skyward most efficiently to maximize the sunlight its leaves are exposed to. And as we’ll see, all three solve the challenge with similar emergent rules.[9]
How can this be? Time to look at examples of how emergence actually emerges, using simple rules that work in similar ways in solving optimization challenges for, among other things, ants, slime molds, neurons, humans, and societies. This process will easily dispose of the first temptation: to decide that emergence demonstrates indeterminacy. Same answer as in the last chapter—unpredictable is not the same thing as undetermined. Disposing of the second temptation is going to be more challenging.



