“The Big Picture” by Sean Carroll

Part One: Cosmos

Carroll is a proponent of poetic naturalism. Naturalism comes down to three things:

  1. There is only one world, the natural world.
  2. The world evolves according to unbroken patterns, the laws of nature.
  3. The only reliable way of learning about the world is by observing it.

The poetic aspect comes to the fore when we start talking about the world. It can also be summarized in three points:

  1. There are many ways of talking about the world.
  2. All good ways of talking must be consistent with one another and with the world.
  3. Our purposes in the moment determine the best way of talking.

Carroll’s discussion of determinism provides an example of how poetic naturalism combines deep ontological truths about underlying reality with the practical and useful ways that we speak day-to-day:

Perhaps Laplace’s greatest contribution to our understanding of mechanics was not a technical or mathematical advance, but a philosophical one. He realized that there was a simple answer to the question “What determines what will happen next?” And the answer is “The state of the universe right now.” …

We know that the quantum state of a system, left alone, evolves in a perfectly deterministic fashion, free even of the rare but annoying examples of non-determinism that we can find in classical mechanics. But when we observe a system, it seems to behave randomly, rather than deterministically. …

The momentary or Laplacian nature of physical evolution doesn’t have much relevance for the choices we face in our everyday lives. For poetic naturalism, the situation is clear. There is one way of talking about the universe that describes it as elementary particles or quantum states, in which Laplace holds sway and what happens next depends only on the state of the system right now. There is also another way of talking about it, where we zoom out a bit and introduce categories like “people” and “choices.” Unlike our best theory of planets or pendulums, our best theories of human behavior are not deterministic. We don’t know any way to predict what a person will do based on what we can readily observe about their current state. Whether we think of human behavior as determined depends on what we know. (chpt. 4)

Carroll’s presentation of determinism is deceptively oversimplified. I believe his claim that the world is deterministic is only true within one of several interpretations of quantum mechanics. There is no scientific consensus as to whether the universe is deterministic or not. A great deal of debate still surrounds the so-called “collapse of the wave function” and the “measurement problem.”
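The contrast Carroll draws can be made concrete with a toy sketch (my own illustration, not from the book): a single qubit evolved by a fixed unitary matrix always ends up in the same state, while measurement, modeled here by Born-rule sampling, is where apparent randomness enters.

```python
import math
import random

def apply(U, psi):
    """Apply a 2x2 matrix U to a 2-component complex state vector psi."""
    return [U[0][0] * psi[0] + U[0][1] * psi[1],
            U[1][0] * psi[0] + U[1][1] * psi[1]]

# A Hadamard-like unitary: rotates |0> into an equal superposition.
h = 1 / math.sqrt(2)
U = [[h, h], [h, -h]]

psi0 = [1 + 0j, 0 + 0j]  # start in the state |0>

# Unitary (Schrodinger) evolution is deterministic: the same input
# state always produces the same output state.
final_a = apply(U, psi0)
final_b = apply(U, psi0)
assert final_a == final_b

# Measurement is where randomness appears: the outcome is sampled
# with Born-rule probabilities |amplitude|^2.
p0 = abs(final_a[0]) ** 2
outcome = 0 if random.random() < p0 else 1
print(f"P(0) = {p0:.2f}, measured outcome: {outcome}")
```

The interpretive debate is precisely over what this sampling step corresponds to physically, or whether it is fundamental at all.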

The Copenhagen interpretation of quantum mechanics says that physical systems do not generally have definite properties prior to being “measured,” and that only the act of measuring them causes them to take on definite states. The Copenhagen interpretation has several problems.

The concepts of the “system” and the “surroundings” are useful for modeling reality and writing equations for small parts of it, but they are not fundamental to our ontology. What, then, does it mean to say the system is deterministic except when you observe it? We want to know whether the entire universe is deterministic or not. If any part of the universe is non-deterministic, even only occasionally when we “observe” it, then the entire universe is not deterministic. Determinism is an all-or-nothing phenomenon.

Could we consider the entire universe as “the system,” such that there is no outside observer who could introduce randomness by “collapsing the wave function”? If so, why would the mere act of dividing the universe into system and surroundings introduce randomness? It would appear that it cannot.

One alternative to the Copenhagen interpretation is put forward by proponents of quantum decoherence. In this way of thinking, the “collapse of the wave function” and the “randomness” involved are simply useful ways of speaking about our incomplete knowledge of the environment’s very complicated wave function. Thus, there is no randomness involved. Quantum decoherence avoids making “measurements” and the “system” part of our ontology, which I find convincing. Quantum decoherence results in a deterministic universe.

Other alternatives to the Copenhagen interpretation exist, such as the objective-collapse theories, which may or may not result in a deterministic universe.

Thus, Carroll conveniently presents his minority view as if it were the consensus of the scientific community.

I enjoyed Carroll’s discussion about the arrow of time:

For every way that a system can evolve forward in time in accordance with the laws of physics, there is another allowed evolution that is just “running the system backward in time.” There is nothing in the underlying laws that says things can evolve in one direction in time but not the other. Physical motions, to the best of our understanding, are reversible. Both directions of time are on an equal footing.

If there is a purported symmetry between past and future, why do so many everyday processes occur only forward and never backwards?

Boltzmann and his colleagues argued that we could understand entropy as a feature of how atoms are arranged inside different systems. Rather than thinking of heat and entropy as distinct kinds of things, obeying their own laws of nature, we can think of them as properties of systems made of atoms, and derive those rules from the Newtonian mechanics that applies to everything in the universe. Heat and entropy, in other words, are convenient ways of talking about atoms.

Given that, Boltzmann suggested that we could identify the entropy of a system with the [logarithm of the] number of different states that would be macroscopically indistinguishable from the state it is actually in. A low-entropy configuration is one where relatively few states would look that way, while a high-entropy one corresponds to many possible states. There are many ways to arrange molecules of cream and coffee so that they look all mixed together; there are far fewer arrangements where all of the cream is on the top and all of the coffee on the bottom.

With Boltzmann’s definition in hand, it makes complete sense that entropy tends to increase over time. The reason is simple: there are far more states with high entropy than states with low entropy. If you start in a low-entropy configuration and simply evolve in almost any direction, your entropy is extraordinarily likely to increase. When the entropy of a system is as high as it can get, we say that the system is in equilibrium. In equilibrium, time has no arrow.

What Boltzmann successfully explained is why, given the entropy of the universe today, it’s very likely to be higher-entropy tomorrow. The problem is that, because the underlying rules of Newtonian mechanics don’t distinguish between past and future, precisely the same analysis should predict that the entropy was higher yesterday, as well. Nobody thinks the entropy was higher in the past, so we have to add something to our picture.

The thing we need to add is an assumption about the initial condition of the observable universe, namely, that it was in a very low-entropy state.

What we know is that this initially low entropy is responsible for the “thermodynamic” arrow of time, the one that says entropy was lower towards the past and higher toward the future. Amazingly, it seems that this property of entropy is responsible for all of the differences between past and future that we know about. Memory, aging, cause and effect—all can be traced to the second law of thermodynamics and in particular to the fact that entropy used to be low in the past. (chpt. 7)
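Boltzmann’s counting argument is easy to make concrete with a toy model (my own sketch, not Carroll’s): take N gas molecules in a box and let the macrostate be how many sit in the left half. The number of microstates with k molecules on the left is the binomial coefficient C(N, k), and Boltzmann’s entropy is, up to a constant, the logarithm of that count.

```python
import math

N = 100  # number of molecules in the toy box

def entropy(k):
    """Log of the number of arrangements with k of N molecules on the left."""
    return math.log(math.comb(N, k))

# All molecules on one side: exactly one arrangement, so entropy log(1) = 0.
s_separated = entropy(0)

# Evenly mixed: C(100, 50) is about 10^29 arrangements.
s_mixed = entropy(N // 2)

print(f"entropy, all on one side: {s_separated:.1f}")
print(f"entropy, evenly mixed:    {s_mixed:.1f}")
```

Because the mixed macrostate is realized by astronomically more arrangements than the separated one, a system wandering “in almost any direction” through its possible states overwhelmingly ends up at higher entropy, which is the content of the second law on this view.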

Carroll oversimplifies his presentation here as well: the Standard Model is not symmetric under time reversal (T) alone, but under the combined operations of charge conjugation, parity, and time reversal (CPT). I believe this distinction does not affect the conclusions Carroll draws, however.

It is interesting to consider that “cause and effect” are not fundamental entities—they are just useful ways of speaking about reality, like entropy and heat are. Carroll mentions a useful way of defining a “cause”:

We can make sense of statements like “A causes B” by thinking of different possible worlds: in particular, worlds that were essentially the same except for whether the event A actually occurred. Then, if we see that B occurs in all the worlds where A occurred, and B does not occur when A does not occur, it’s safe to say “A causes B.” (chpt. 8)

Part Two: Understanding

Part Three: Essence

Part Four: Complexity

Part Five: Thinking

Part Six: Caring