A Class of Models with the Potential to Represent Fundamental Physics

8.10 Reversibility and Irreversibility

One feature of the traditional formalism for fundamental physics is that it is reversible, in the sense that it implies that individual states of closed systems can be uniquely evolved both forward and backward in time. (Time reversal violation in things like K0 particle decays shows that the rule for going forward and backward in time can be slightly different. In addition, the cosmological expansion of the universe defines an overall arrow of time.)

One can certainly set up manifestly reversible rewriting rules (like A → B, B → A) in models like ours. And indeed the example of cellular automata [1:9.2] tends to suggest that most kinds of behavior seen in irreversible rules can also be seen, though perhaps more rarely, in reversible rules.
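As a minimal sketch (with names chosen purely for exposition), the rule A → B, B → A applied in parallel is its own inverse, so any step forward can always be undone:

```python
# A minimal sketch of a manifestly reversible substitution system.
# The rule set {A -> B, B -> A} is its own inverse, so applying it
# twice restores the original state.

RULE = {"A": "B", "B": "A"}                 # forward rule
INVERSE = {v: k for k, v in RULE.items()}   # backward rule (the same map here)

def step(state, rule):
    """Apply the substitution to every element in parallel."""
    return "".join(rule[c] for c in state)

state = "ABBA"
forward = step(state, RULE)       # -> "BAAB"
back = step(forward, INVERSE)     # -> "ABBA": the original state recovered
```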

But it is important to realize that even when the underlying rules for a system are not reversible, the system can still evolve to a situation where there is effective reversibility. One way for this to happen is for the evolution of the system to lead to a particular set of “attractor” states, on which the evolution is reversible. Another possibility is that there is no such well-defined attractor, but that the system nevertheless evolves to some kind of “equilibrium” in which measurable effects show effective reversibility.
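The first possibility can be sketched with an illustrative arithmetic map (not one of our models): x → x² mod 11 is irreversible, since distinct states merge, but every orbit falls onto a cycle on which the map acts as a permutation and is therefore uniquely invertible:

```python
# Sketch: an irreversible map that becomes effectively reversible on
# its attractor. f merges distinct states during the transient, but
# once an orbit reaches the cycle, f acts there as a bijection.

def f(x):
    return (x * x) % 11

# transients merge: two distinct states share the same successor
assert f(2) == f(9) == 4

# iterate until a state repeats, to locate the attractor
x, seen = 2, []
while x not in seen:
    seen.append(x)
    x = f(x)
cycle = seen[seen.index(x):]

print(sorted(cycle))                # [3, 4, 5, 9]
print(sorted(f(y) for y in cycle))  # the same set: f permutes the cycle
```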

In our models, there is an additional complication: the fact that different possible updating orders lead to following different branches of the multiway system. In most kinds of systems, irreversible rules tend to be associated with the phenomenon of multiple initial states merging to produce a single final state in which the information about the initial state is lost. But when there is a branch in a multiway system, this is reversed: information is effectively created by the branch, and lost if one goes backwards.
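The branching half of this can be made concrete with a toy string substitution system (the rule BA → AB is chosen here purely for illustration): a single state has one successor per match position, so going forward creates information about which branch was taken, and going backward destroys it:

```python
# Hypothetical sketch: one multiway step for the rule BA -> AB.
# Each match position yields a different successor, so a single state
# branches; run backward, all those branches share one predecessor.

def successors(state, lhs="BA", rhs="AB"):
    """All states reachable by one rewrite, one per match position."""
    out = set()
    i = state.find(lhs)
    while i != -1:
        out.add(state[:i] + rhs + state[i + len(lhs):])
        i = state.find(lhs, i + 1)
    return out

print(successors("BABA"))   # {'ABBA', 'BAAB'}: one state, two branches
```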

When there is causal invariance, however, something different happens. Because now in a sense every branching will eventually merge. And what this means is that in the multiway system there is a kind of reversibility: any information created by a branching will always be destroyed again when the branches merge, even though temporarily the “information content” may change.
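A sketch of such remerging, again using the illustrative rule BA → AB: this rule sorts a string, it is confluent, and every branch of its multiway system terminates in the same sorted state, so the information the branchings created is destroyed again:

```python
# Sketch: branches remerging under a confluent rule. BA -> AB sorts
# the string, so every multiway branch reaches the same final state.

def successors(state, lhs="BA", rhs="AB"):
    """All states reachable by one rewrite, one per match position."""
    out = set()
    i = state.find(lhs)
    while i != -1:
        out.add(state[:i] + rhs + state[i + len(lhs):])
        i = state.find(lhs, i + 1)
    return out

def final_states(start):
    """All terminal states reachable in the multiway system."""
    seen, stack, finals = set(), [start], set()
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        succ = successors(s)
        if succ:
            stack.extend(succ)
        else:
            finals.add(s)
    return finals

print(final_states("BABA"))   # {'AABB'}: every branch merges again
```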

It is important to note that this kind of microscopic reversibility is quite unrelated to the more macroscopic irreversibility implied by the Second Law of thermodynamics. As discussed in [1:9.3], the Second Law seems first and foremost to be a consequence of computational irreducibility. Even when the underlying rules for a system are reversible, the actual evolution of the system can so “encrypt” the initial conditions that no computationally feasible measurement process will succeed in reconstructing them. (The idea of considering computational feasibility clarifies past uncertainty about what might count as a reasonable “coarse graining” procedure.)
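One standard way to see this is the second-order cellular automaton construction of the kind discussed in [1:9.2] (a sketch, not a claim about our models): XORing an ordinary cellular automaton step with the state two steps back makes every step exactly invertible, yet running forward still scrambles a simple initial condition:

```python
# Sketch: a second-order, reversible cellular automaton. The next row
# is an ordinary (rule 30) step XORed with the row before last, which
# makes each step exactly invertible: running the inverse rule for the
# same number of steps recovers the initial condition bit for bit.

def ca_step(cells):
    """One elementary rule 30 step with cyclic boundary conditions."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def forward(prev, cur):
    nxt = [a ^ b for a, b in zip(ca_step(cur), prev)]
    return cur, nxt

def backward(cur, nxt):
    prev = [a ^ b for a, b in zip(ca_step(cur), nxt)]
    return prev, cur

prev, cur = [0] * 32, [0] * 31 + [1]     # a very simple initial condition
start = (prev[:], cur[:])
for _ in range(100):
    prev, cur = forward(prev, cur)       # state now looks effectively random
for _ in range(100):
    prev, cur = backward(prev, cur)      # exactly undoes the evolution
assert (prev, cur) == start
```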

In any nontrivial example of one of our models, computational irreducibility is essentially inevitable. And this means that the model will tend to intrinsically generate effective randomness, or in other words, the computation it does will obscure whatever simplicity might have existed in its initial conditions.
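The classic illustration of this intrinsic randomness generation is the rule 30 cellular automaton from [1]: a single black cell yields a center column with no evident regularity, even though the initial condition could hardly be simpler. A minimal sketch:

```python
# Sketch: rule 30 as an intrinsic randomness generator. Starting from
# a single 1 cell, the center column shows no evident regularity.

def rule30_step(cells):
    """One rule 30 step: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 101
cells[50] = 1                    # maximally simple initial condition
center = []
for _ in range(64):
    center.append(cells[50])
    cells = rule30_step(cells)

print("".join(map(str, center)))  # begins 11011 and looks random thereafter
```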

There can still be large-scale features, or particle-like structures, that persist. But the presence of computational irreducibility implies that even at a level as low as the basic structure of space we can expect our models to show the kind of irreversibility associated with the Second Law. And in a sense we can view this as the reason that things like a robust structure for space can exist: because of computational irreducibility, our models show a kind of equilibrium in which the details are effectively random, and the only features that are computationally feasible to measure are the statistical regularities.