### Project Description

# Emergent objective reality: from observers to physics via Solomonoff induction

Several important conceptual problems in quantum mechanics, cosmology, and artificial intelligence / philosophy of mind are arguably related to our conceptual understanding of probability. This project aims to address these problems in a unified way by taking a radically unconventional perspective and asking: What if the probabilities are actually fundamental, and physics as we know it is an emergent phenomenon?

#### PRINCIPAL INVESTIGATOR:

Markus P. Müller

Institute for Quantum Optics and Quantum Information, Vienna

Perimeter Institute for Theoretical Physics

Rotman Institute of Philosophy

#### PROJECT DATES:

2016 – present

#### PROJECT FUNDING:

Foundational Questions Institute (FQXi)

Rotman Institute Catalyst Grant

Western University

#### PROJECT SUMMARY:

Theoretical physics is more than just a collection of methods for predicting measurable quantities. In fact, the history of physics has given us many examples in which novel questions have led to new theories that have fundamentally changed our picture of the world, often in surprising ways. The starting point of this proposal is the hypothesis that, motivated by certain conceptual problems in physics and its immediate vicinity, we are at a point where a comparably dramatic revision of some traditional aspects of our physical worldview is required.

This project combines insights from theoretical physics and from philosophy with the goal of “causing some good-natured trouble” (as Chris Fuchs puts it). More specifically, we aim to demonstrate rigorously, with the tools of theoretical physics and philosophical argumentation, the extent to which a picture of the world in which the first-person perspective of observers is fundamental and objective physical reality is emergent is consistent, fruitful, and plausible. We will also study the practical and philosophical consequences of this view for quantum mechanics, cosmology, and artificial intelligence / philosophy of mind.

Traditionally, physical theory assumes an “ontic” picture of the world, i.e., the existence of an objective external world that evolves according to certain physical laws independently of us. Our theories about these laws are incrementally constructed by deriving predictions from them and comparing these to the observations we actually make. Since the discovery of quantum mechanics, we understand that these predictions are probabilistic at best, and in principle of the following form:

**P**(next observations | previous observations). (1)
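To make (1) concrete in the traditional picture, here is a minimal sketch of how such a conditional probability arises from Bayesian model averaging. The two coin-flip “models” of an experimental setup are hypothetical and assumed purely for illustration:

```python
from fractions import Fraction

# Two hypothetical models of an experimental setup: a fair coin and a biased one.
priors = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
p_heads = {"fair": Fraction(1, 2), "biased": Fraction(3, 4)}

# "Previous observations": three heads in a row.  Bayes' rule reweights the models.
posterior = {h: priors[h] * p_heads[h] ** 3 for h in priors}
norm = sum(posterior.values())
posterior = {h: posterior[h] / norm for h in posterior}

# P(next observation = heads | previous observations = HHH):
# average the models' predictions with their posterior weights.
p_next = sum(posterior[h] * p_heads[h] for h in posterior)
# p_next = 97/140, about 0.69 -- the data have shifted weight to "biased"
```

Here the probability (1) is derivative: it is computed from hypotheses about an assumed external setup, which is exactly the view this project questions.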

For example, in a laboratory experiment, our “previous observations” will include all of our knowledge about the experimental setup and data we have acquired earlier; the “next observations” correspond to the possible outcomes of the experiment. Crucially, we traditionally view the probabilities in (1) as being *derived* from (or secondary to) an objective external world; either they arise because we are agents inside that objective universe who have only limited knowledge, or because the postulates of quantum theory claim directly that we should observe these probabilities as a consequence of the world’s quantum state. It turns out that several important conceptual problems in physics and beyond are closely related to the probabilities (1), and challenge this traditional view of physics:

**Quantum mechanics (QM)**. According to Bell’s theorem, naive versions of realism (roughly, the idea that measurement outcomes always exist before the measurement is performed) are inconsistent with other important principles of physics (like locality). This has led to the slogan that “unperformed experiments have no results” [2], and to decades of arguments about how to interpret the counterintuitive formalism of QM. Substantial interpretational effort has been invested in the question of “*where the probabilities in (1) come from*”, without any final consensus.

**Cosmology**. If we are observers in a really “big” universe (for example, a world undergoing eternal inflation), then the question arises which probabilities of the form (1) we should actually assign to our own future observations. Deep and surprising problems arise in this respect, for example the famous and notorious *Boltzmann brain problem* (claiming that we should assign high probability to being only a short-lived quantum fluctuation), or, more broadly, the measure problem of cosmology.

**Artificial Intelligence / Philosophy of Mind**. Even though it sounds like science fiction at the time of writing, current scientific progress suggests that we will soon live in a world where novel technologies present us with severe conceptual and ethical dilemmas. As one extreme and illustrative example, think of simulating the brain of a terminally ill person (after her death) on a computer. Would this be a valuable endeavor? Would the person “feel like being” in the computer simulation, or would it have no effect on her first-person perspective whatsoever? Questions of this form can be recast in terms of the conditional probabilities (1): what is the probability that the person is going to observe the simulated state of mind, given what she has observed in the past?

This project aims to address aspects of all these questions in a unified way, by taking a radically unconventional perspective and asking: **What if the probabilities in (1) are actually fundamental, and physics as we know it is an emergent phenomenon?**

**Research Goals:**

**A new kind of metaphysics?**

Many philosophers of physics seem to have a strong bias towards the idea that physics consists of “things that move and collide”, and claim that the goal of a good interpretation of quantum mechanics, for example, is to explain the theory in terms which are inherently motivated by classical physics. It is encouraging to see that this problematic attitude is increasingly questioned by philosophers themselves. For example, Ladyman and Ross write in [23]:

“The metaphysics of domestication tends to consist of attempts to render pieces of contemporary science – and, at least as often, simplified, mythical interpretations of contemporary science – into terms that can be made sense of by reference to the containment metaphor. That is, it seeks to account for the world as ‘made of ’ myriad ‘little things’ in roughly the way that (some) walls are made of bricks. Unlike bricks in walls, however, the little things are often in motion. Their causal powers are usually understood as manifest in the effects they have on each other when they collide. Thus the causal structure of the world is decomposed by domesticating metaphysics into reverberating networks of what we will call ‘microbangings’ – the types of ultimate causal relations that prevail amongst the basic types of little things, whatever they exactly turn out to be. Metaphysicians, especially recently, are heavily preoccupied with the search for ‘genuine causal oomph’, particularly in relation to what they perceive to be the competition between different levels of reality. We will argue that this is profoundly unscientific, and we will reject both the conception of causation and levels of reality upon which it is based.”

One way out of this traditional view is by drafting new conceptions of metaphysics, such as *ontic structural realism* [24]. This view grants that the naive notion of “objects” as fundamental is in conflict with the findings of modern physics, and instead views properties and/or relations as ontologically primitive.

However, this may just not be radical enough. If there is a grain of truth to some of the ideas presented above, then even the very notion of an objective external world (made of “structure”) is not completely ontologically primitive, even if we give up the idea that it is composed of “objects”. Instead, a new picture of the world (metaphysics) suggests itself, in which *observation*, or *experience*, is ontologically primitive.

In this project, we will construct a metaphysical theory based on the notion of observation, in conjunction with the insights and ideas sketched in the first part of this application. We will explore to what extent a consistent picture of the world of this kind is possible, plausible, and fruitful. This view will be closely related to ontic structural realism, but still go beyond it. In particular, we will address the following questions:

- Standard metaphysics would view “observers” (like us) as supervening on the external material world. There is always a threat of *dualism* in this picture, in the sense that the lively experience of our “first-person perspective” does not seem to be amenable to a completely satisfying explanation in these terms. In the worldview suggested here, on the other hand, we have the exact opposite: in a way, the external world supervenes on the observer. Can this give us a more coherent worldview without any threat of dualism?
- According to the theory put forward above, “physics as computation” is more than just a metaphor. Rather, our physical world (even if just emergent from observations) is in fact an algorithm. Can this give us insights into the most suitable terminology that we should use to describe our world? That is, should our universe be understood in terms of a metaphysics of *information processing*?

Note that there have been many proposals in the past to view our universe as a “computer” of some kind [26, 27]. However, we think that these approaches have been too naive, for example by literally equating spacetime regions with circuits, and similar ad hoc ideas. In particular, they have neglected the role of the observer, which (as our theory suggests) is central to understanding the information-theoretic properties of our world.

More generally, we will analyze the consequences that a notion of “emergent objective reality” may have for the Philosophy of Physics and metaphysics.

**Quantum theory from induction**

As explained above, it is one prediction of this theory that observers will generically see an external world that allows for a violation of Bell inequalities, but still satisfies the no-signalling principle. However, this is to date the least developed part of the theory [5], and a better understanding is needed. In particular,

- we will isolate the derivation of Bell violation and no-signalling from the specific form of our theory, and give a clear list of the main features of the theory that are responsible for the emergence of these phenomena.
- Within the theory, what is responsible for non-classical effects are loops in the observer graph (cf. Figure 1). This can be interpreted as “observers who fundamentally forget what they experienced”, similarly to the situation studied in the “sleeping beauty problem” [25]. This suggests a fascinating possibility: do quantum correlations generically appear in theories where observers are subject to a specific form of “fundamental forgetting”? Is non-classical probabilistic behavior related to the well-known failure of the formalism of probability theory in scenarios like the sleeping-beauty problem?
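The probabilistic oddity of the sleeping-beauty problem is easy to exhibit numerically. The following sketch simulates the standard formulation of the problem (one awakening if a fair coin lands heads, two if tails, with the awakenings indistinguishable to the forgetful observer); it is meant only as an illustration of how forgetting breaks naive probability assignments, not as part of our theory:

```python
import random

def heads_credence_per_awakening(trials, seed=0):
    """Simulate the sleeping-beauty protocol: heads -> one awakening,
    tails -> two awakenings that the observer cannot distinguish."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

p = heads_credence_per_awakening(100_000)
# p is close to 1/3, not to the per-run coin probability of 1/2
```

The per-awakening frequency of heads converges to 1/3 although the coin is fair; which of the two numbers deserves to be called the observer’s “probability” is precisely what is contested.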

Even if we have no-signalling and Bell violation, there is still a multitude of probabilistic theories [14] different from quantum theory which describe conceivable physical behavior. We do not currently know whether our theory gives any further constraints on these behaviors. Therefore, a major problem is to

- derive the Hilbert space formalism of quantum theory, or at least characteristic aspects of it (like the Tsirelson bound [15] of quantum correlations), from the theory presented here.
- Technically, this will build on earlier work by the applicant on the reconstruction of quantum theory [20, 21]. However, substantially new ideas are needed, since the new reconstruction will rely on “observer-centric” notions like observations and induction which are not traditionally considered in general probabilistic theories, at least not explicitly. We will adapt the framework of convex operational theories to this more “Bayesian” setting, and work out the constraints for states and correlations that follow from the theory explained in Section 1.
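For orientation, the Tsirelson bound can be checked directly in the standard quantum formalism. The sketch below is a textbook CHSH computation for the singlet state (included here only as a numerical reference point, not as part of the reconstruction itself); it reproduces the quantum value 2√2, above the classical bound of 2:

```python
import numpy as np

# Pauli matrices and the singlet state |psi-> = (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def obs(theta):
    """Spin observable along angle theta in the X-Z plane (eigenvalues +-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

def correlation(theta_a, theta_b):
    """E(a, b) = <psi| A (x) B |psi> for the singlet state."""
    M = np.kron(obs(theta_a), obs(theta_b))
    return float(np.real(psi.conj() @ M @ psi))

# Measurement angles that maximize the CHSH expression for the singlet
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = correlation(a, b) - correlation(a, bp) + correlation(ap, b) + correlation(ap, bp)
# |S| equals 2*sqrt(2) ~ 2.828: the Tsirelson bound, beyond the classical bound 2
```

Any reconstruction of quantum theory from observer-centric principles would, among other things, have to single out this value rather than the larger no-signalling maximum of 4.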

Work on this problem will benefit from an ongoing, but independent collaboration with A. Cabello, G. Chiribella, and M. Kleinmann on a project in *Quantum Bayesianism*. The goal of this other project is to derive the Hilbert space formalism of quantum theory from a “QBist” point of view – namely, the idea that a quantum state is nothing but an agent’s tool to organize (and update) her belief about a given physical system. This is also a derivation of quantum theory from “induction”, but in a quite different context (namely in one where there is still a traditional notion of a “unique objective external world”, but supplemented by observers that organize their knowledge about this world in a Bayesian manner).

**Applying algorithmic probability in physics**

Beyond the theory presented here, other authors have also suggested applying algorithmic probability and Kolmogorov complexity in physics, for example in cosmology [28, 29] or thermodynamics [3, Sec. 8.5]. In the simplest case, the idea is to encode the state of a physical system into some binary string *x* ∈ {0, 1}*, and to use the Kolmogorov complexity *K*(*x*) as a measure of information content. Kolmogorov complexity is defined as

*K*(*x*) := min{ℓ(*p*) | *U*(*p*) = *x*},

i.e. the length of the shortest computer program that makes the universal reference computer *U* halt and output *x*. This raises two immediate questions:

1. How do we encode the state of a physical system into a binary string *x*? First, there are many different methods of encoding, and second, the system might not obviously be discrete.
2. Once we have an encoding, which universal computer *U* should we choose as our reference computer? The actual value *K*(*x*) will in general depend on this choice (though only within an additive constant).
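*K*(*x*) itself is uncomputable, but any lossless compressor yields a computable upper bound: the compressed string serves as a “program” from which a fixed decompressor reconstructs *x*. The sketch below (using Python’s standard-library compressors purely for illustration) also makes problem 2 tangible, since two compressors play the role of two different reference machines and disagree on exact values while agreeing on what is regular:

```python
import bz2
import random
import zlib

rng = random.Random(0)
regular = b"01" * 500                                        # 1000 very regular bytes
irregular = bytes(rng.randrange(256) for _ in range(1000))   # 1000 noise-like bytes

def complexity_bounds(x):
    """Upper bounds on K(x), up to a decompressor-dependent additive constant:
    each compressor's output plays the role of a program p with U(p) = x."""
    return {"zlib": len(zlib.compress(x, 9)), "bz2": len(bz2.compress(x, 9))}

print(complexity_bounds(regular))    # small: the regularity is exploited
print(complexity_bounds(irregular))  # near (or above) the raw length of 1000
```

The two compressors return different numbers, but both separate the regular string from the noise-like one, mirroring the invariance-up-to-a-constant of the true definition.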

Similar problems appear for algorithmic probability, which, according to the theory presented in Section 1, determines the probabilities of an observer’s future experiences. Within the theory, however, we have shown that problem 1. can be reduced to 2. (i.e. a different choice of encoding is equivalent to a different choice of universal computer), and that the theory itself is invariant with respect to the choice of universal computer. While this is an important success, it does not in itself tell us directly how to apply algorithmic probability in concrete physical situations. Compare this to General Relativity: even though the theory itself is “covariant” with respect to any choice of coordinate system, we still have to choose some such system to go out in the world and make concrete predictions.

We will therefore address the following questions:

- Devise concrete methods to apply algorithmic probability in physics, in particular in cosmology. Note that detailed knowledge of cosmological models is not necessary for this; most questions can be addressed with thorough knowledge of quantum physics and basic knowledge of cosmology, cf. the discussion of the Boltzmann brain problem in Subsection 1.4, which also represents an example application. However, this research will benefit from the interaction with Prof. Chris Smeenk at the Rotman Institute of Philosophy. Not only is Smeenk an expert in the philosophy of cosmology, but he has also recently obtained a grant from the Templeton Foundation to further his research in this field.

A concrete example application would be to assign a measure to all possible universes, or histories of the universe, within some specified class. Encoding each (history of the) universe into a binary string *x*, we can use algorithmic probability **P**_{alg}(*x*) as a probability measure. This choice of measure is not only suggested by the theory in Section 1, but it also represents a formal implementation of Ockham’s razor, giving higher weight to simpler universes. We will explore whether this prescription makes qualitatively different predictions from other proposed measures, and in what way it allows us to make any useful statements on “typical” universes.
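To illustrate how such a measure weights simpler histories more heavily, here is a toy sketch. The two-rule machine below is our own invented stand-in, neither universal nor prefix-free (unlike the machines the actual definition requires), but summing 2^(−ℓ(*p*)) over its halting programs already shows compressible strings accumulating extra probability mass:

```python
from itertools import product

def toy_machine(program):
    """A toy stand-in for the universal computer U (NOT actually universal):
    '0' + w outputs w; '1' + w outputs w twice."""
    if not program:
        return None
    head, body = program[0], program[1:]
    return body if head == "0" else body + body

def algorithmic_probability(max_program_length):
    """Approximate P_alg(x) = sum of 2^(-len(p)) over programs p with U(p) = x,
    truncated at a maximal program length."""
    mass = {}
    for n in range(1, max_program_length + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            x = toy_machine(p)
            if x is not None:
                mass[x] = mass.get(x, 0.0) + 2.0 ** (-n)
    return mass

probs = algorithmic_probability(6)
# "0101" (the doubling of "01") is also produced by the short program "101",
# so it receives more mass than an equally long but less regular string.
```

Under this measure the “simpler” four-bit string `0101` outweighs `0110`, a miniature version of the Ockham’s-razor weighting of universes described above.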

- Very closely related is the question of how we can resolve the two problems (1. and 2. listed above) regarding the choice of encoding and universal computer. It is well-known that different choices of reference computer become asymptotically (that is, in the limit of infinitely long binary strings) irrelevant. Does this imply that all predictions that we derive from algorithmic probability will be independent of this choice *as long as they refer to properties of experiments in a universe that can be repeated in principle an unbounded number of times*? What is the nature of the predictions that we can infer from algorithmic probability, and how does it relate to well-known problems of the use of probability in cosmology which have led to developments like “imprecise probability” [30]?
- In particular, the theory as described in Section 1 offers a notion of “covariance” with respect to a choice of universal computer [5], which admits a formulation that is independent of the choice of reference computer. Can we use this construction directly or indirectly in applications, to arrive at concrete physical predictions that are independent of this choice?

Even if the theory presented in Section 1 turns out to be completely wrong, we believe that the proposed research can open up a fruitful new avenue of thinking about our world – one which is at the same time interdisciplinary, mathematically rigorous, and utterly surprising.

**References:**

M. Müller, Quantum Kolmogorov complexity and the quantum Turing machine, PhD thesis, TU Berlin, 2007. arXiv:0712.4377

M. Müller, Strongly universal quantum Turing machines and invariance of Kolmogorov complexity, IEEE Trans. Inf. Th. 54(2), 763–780 (2008). arXiv:quant-ph/0605030

F. Benatti, T. Krüger, M. Müller, Ra. Siegmund-Schultze, and A. Szkoła, Entropy and quantum Kolmogorov complexity: a quantum Brudno’s theorem, Commun. Math. Phys. 265(2), 437–461 (2008). arXiv:quant-ph/0506080

M. Müller, On the quantum Kolmogorov complexity of classical strings, Int. J. Quant. Inf. 7(4), 701–711 (2009). arXiv:0707.2924

M. Müller, Stationary Algorithmic Probability, Theoretical Computer Science 411, 113–130 (2010). arXiv:cs/0608095

M. E. Cuffaro, “The Kantian Framework of Complementarity.” Studies in History and Philosophy of Modern Physics, 41 (2010), 309-317.

M. E. Cuffaro “On the Significance of the Gottesman-Knill Theorem.” The British Journal for the Philosophy of Science, 68 (2017), 91-121.

M. E. Cuffaro, “Reconsidering No-Go Theorems from a Practical Perspective.” The British Journal for the Philosophy of Science (in press).

Markus P. Müller (Principal Investigator)

Michael Cuffaro (Postdoctoral Fellow)