On Multiverses 1.1.6

Abstract The notion of parallel worlds has, for various reasons, become popular outside of science fiction. One reason is the 2-slit experiment at the root of all quantum weirdness. Another is the apparent fine-tuning of our universe for life and consciousness. In this note we consider why neither wave-particle duality nor fine-tuning is a legitimate basis for the existence of “parallel worlds” in the sense in which that notion is invoked to account for these phenomena.



Many Worlds
Stephen Hawking reportedly suggested that there is a parallel universe in which ex-One Direction member Zayn Malik remains with the band.

As Hawking uses the term here, a parallel universe is a universe in which life takes an alternative turn: one in which you had porridge instead of eggs on toast for breakfast, or in which the Nazis won WWII, or of course in which Zayn Malik remains with One Direction. Some physicists hold the view that there is a parallel universe for every possible alternative. How do we know these parallel universes exist? Not because they can be observed -they can’t- but one reason proffered for their existence is an experiment known as the ‘2-slit experiment’. In the 2-slit experiment tiny particles (typically photons or electrons) are shot through a pair of slits and their arrival is registered on a screen. Rather like shooting bullets through a pair of windows, you might imagine, except that in the 2-slit experiment the ‘particles’ don’t behave like bullets unless detectors are attached to the slits so that it can be determined through which slit a particle travelled on its way to the screen. Otherwise -if there are no detectors- the particles behave, not like bullets shot through a pair of windows, but like a body of water washing through a pair of sluice gates. If we measure, the particles go through the first or the second slit, but if we don’t measure, they apparently go through both slits at once.

This duality is common to all atomic and sub-atomic objects, and its peculiarity can be illustrated by an imaginary scenario known as ‘Schrödinger’s Cat’, after the physicist who devised it. Imagine a cat in a sealed chamber containing a piece of radioactive material that might or might not decay. If it decays, a deadly poisonous gas is released into the chamber and the cat dies; if it does not decay, the cat lives. The results of the 2-slit experiment may seem to imply that until we open the chamber and check, or until some kind of measurement is made on the cat, it exists in a contradictory state of ‘alive and dead.’ How can this be?
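The contrast between the two behaviours described above can be sketched numerically. The following is my own illustrative model (not from the text): the two slits are treated as two point sources, and the wavelength, slit separation, and screen distance are arbitrary illustrative values. Without which-path detectors the amplitudes add before squaring and fringes appear; with detectors the probabilities add and the screen is uniform, like bullets through windows.

```python
import cmath
import math

# Arbitrary illustrative values: wavelength, slit separation, screen distance.
lam, d, L = 0.5, 5.0, 1000.0

def intensities(x: float):
    """Return (interference, no_interference) intensity at screen position x."""
    r1 = math.hypot(L, x - d / 2)  # path length from slit 1
    r2 = math.hypot(L, x + d / 2)  # path length from slit 2
    a1 = cmath.exp(2j * math.pi * r1 / lam)
    a2 = cmath.exp(2j * math.pi * r2 / lam)
    with_fringes = abs(a1 + a2) ** 2               # no detectors: amplitudes add
    without_fringes = abs(a1) ** 2 + abs(a2) ** 2  # detectors: probabilities add
    return with_fringes, without_fringes

screen = [intensities(i * 0.2 - 200) for i in range(2001)]
print(max(w for w, _ in screen))         # near 4: bright fringes
print(min(w for w, _ in screen))         # near 0: dark fringes
print({round(b, 6) for _, b in screen})  # {2.0}: flat, bullet-like pattern
```

The ‘sluice gates’ behaviour shows up as intensities swinging between roughly 0 and 4, while the ‘bullets’ case is a constant 2 everywhere on the screen.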
One surprisingly popular answer is that there is one universe in which the cat dies and another in which it lives: the radioactive substance decays and the cat dies in this universe because it was possible, and all possibilities are realised somewhere. In so far as there is any basis for Hawking’s suggestion that there is a parallel universe in which Zayn Malik stays with One Direction, this may be that basis. But in this universe a strictly limited number of possibilities are realised, and so the question of why the radioactive substance decays and the cat dies -or why Zayn Malik leaves One Direction- in this universe goes unanswered. Clearly something is badly wrong here: this is not so much an explanation as a reductio of a putative explanation.

Fine Tuning
One reason for a belief in parallel universes is the so-called ‘Many Worlds’ interpretation of Quantum Mechanics. Another -potentially related- reason people such as the physicist Stephen Hawking and the biologist Richard Dawkins feel the need to believe in parallel universes is the so-called ‘fine tuning’ of the so-called ‘constants of nature’. Dawkins:

To get our universe, with all of its potential for complexities or any kind of potential for any kind of life form, everything has to be precisely defined on this knife edge of improbability.

These constants are the speed of light, Planck’s constant, the gravitational constant, the fine-structure constant, the elementary charge, the Boltzmann constant… and so on. Their values are such that the slightest change to any of them would result in the collapse of the universe as we know it, the collapse of a universe that supports life. In order to explain what appears to be intelligent design, Hawking and Dawkins invoke the idea of parallel universes in which the constants are other than they are in our universe. Considered by itself, our universe seems tailor-made for life and for consciousness, but in the context of an infinite number of universes -most of them wildly different from our own- this impression of improbability can be removed. Put it this way: all things being equal, it is highly improbable that a single blind man firing a shot from a rifle will hit a bottle located 20 meters away, but all things being equal an infinite number of blind shooters will hit such a bottle infinitely many times. Like the ‘Many Worlds’ interpretation of QM, this is a foolish explanation, and it puts us in mind of a quote wrongly attributed to G.K. Chesterton:
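The probabilistic point behind the blind-shooters image can be made concrete. This is my own hedged sketch, with an assumed (made-up) hit probability: any single blind shot almost never hits, but the expected number of hits, n × p, grows without bound as the number of shooters n grows.

```python
import random

random.seed(0)
P_HIT = 1e-3   # assumed chance that one blind shot hits the bottle (illustrative)

def hits(n_shooters: int) -> int:
    """Count the hits among n_shooters independent blind shots."""
    return sum(random.random() < P_HIT for _ in range(n_shooters))

print(hits(100))        # a handful of shooters: almost certainly no hits
print(hits(1_000_000))  # a million shooters: roughly a thousand hits
```

This is the sense in which, given enough shooters, a hit is no longer improbable at all; the argument of the next section is about whether that inference survives outside our universe.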

When a man stops believing in G-d, he doesn’t believe in nothing, he believes in anything.

The trouble is this: there are two related sets of laws beneath our scientific understanding of the universe -physical laws and mathematical laws- and all of our scientific knowledge is based on their interaction. A meta-universe in which the constants of nature lack the values they have in our universe is by definition a universe in which the interaction between physics and mathematics is unfamiliar, and there is no sensible way to extrapolate from the one to the other. Metaphorically: while we know it to be true that a sufficiently large number of blind shooters will hit the bottle in our exquisitely fine-tuned universe, we can’t sensibly apply this knowledge outside of this context. The same knock-down counter-argument can be offered in response to the well-known metaphor of a monkey randomly pressing the keys of a typewriter: a sufficiently large number of monkeys, it is correctly argued, will eventually type all the works of Shakespeare, but this is a false analogy when used outside the context of our universe. We can make inferences about possibility and probability assuming that the relationship between the laws of physics and mathematics is as it is in our universe, but all bets are off in a universe where this relationship doesn’t exist. If we are intellectually honest we will admit that we have no idea how to count -or whether it is so much as possible to count- in a ‘universe’ whose physical laws are other than they are in ours, in which case the interaction between physics and mathematics is outside our experience. The trouble is that we tend to believe we are able to reason mathematically from somewhere outside the physical universe, a vantage point from which the mind is free of the constraints imposed on it by physics. But there is no such vantage point for a human being.
About fantasy-universes where the laws of physics and mathematics are related in an unfamiliar way, nothing positive can be said, and the miracle of fine-tuning cannot be explained by appeal to them.

The Connectedness of Physics and Mathematics
Extrapolating from how things are in our universe to how they might be in other universes is an illegitimate exercise because of the change in the relationship between physics and mathematics that occurs outside the confines of our universe. But this objection was phrased as a ‘how do we know?’ objection: someone makes a claim of some sort and we ask them to justify their form of reasoning. A stronger objection than the ‘how do you know?’ objection is the ‘it can’t be so’ objection, where we don’t bother to ask for a justification of the claim but point out a fundamental reason why it couldn’t possibly be right. We could, for example, ask someone how they know that there are whole-number solutions to the equation x^n + y^n = z^n for n greater than 2, or better yet we could prove to them that there can be no such solution. The notion of parallel universes with different constants is vulnerable to the latter as well as the former style of objection. Imagine, the great 18th-century mathematician Leonhard Euler argued, the subtraction of the prime-divisions of 1 and all of their multiples from the harmonic series

1 + 1/2 + 1/3 + 1/4 ...

He imagined this subtraction taking place prime-series by prime-series, and rightly pointed out that if this process could be completed there would be nothing left but 1. An undivided unit, or 1/0, is something mathematicians and physicists call a ‘singularity’, and the universe according to Big Bang cosmology begins with such a unit. At this point there can be no distinction between mind and matter, between mathematics and physics, and so by running Euler’s thought experiment in reverse, it follows that both the number-line and the time-line of the physical universe can be created simultaneously by the addition of prime-divisions and their multiples to the harmonic series. In this way both the form (numbers) and the content (physical objects) are brought unit by unit into being:




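Euler’s subtraction admits a simple finite illustration (my own sketch, on a truncation of the series): from the denominators 1..N of 1 + 1/2 + … + 1/N, remove each prime p together with all of its multiples, prime-series by prime-series. Since every n > 1 has a prime factor, only the undivided unit 1 survives.

```python
# Truncate the harmonic series at N and sieve out the prime-divisions.
N = 100
denominators = set(range(1, N + 1))

for p in range(2, N + 1):
    if all(p % q != 0 for q in range(2, p)):     # p is prime (trial division)
        denominators -= set(range(p, N + 1, p))  # drop 1/p and all its multiples

print(denominators)  # {1}: the undivided unit Euler's argument leaves behind
```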
We can go on in this manner indefinitely, since there are an indefinite number of primes. This correspondence tells us a great deal about physics, and explains why the physical world is amenable to mathematical analysis. It is known, for example, that the primes thin out as we count along the number-line, and that they thin out according to the dictates of the ‘Prime Number Theorem’, proved in the 19th century. It follows therefore that energy thins out in an isomorphic manner with the passage of time. This correspondence also involves a set of gaps or holes in what would otherwise form a unified field, holes that correspond to the prime numbers, or rather to the differences between one prime-based sub-series and another. These gaps/holes have a minimum and a maximum size: if the holes are too small (if a lower limit could be exceeded) or too big (if an upper limit could be exceeded) then arithmetic would fail as a means of quantification. These minima and maxima can be specified by a relatively simple equation, and when we examine this equation in detail it becomes apparent that the constants of nature are easy consequences of it. The study, not of physics or mathematics by themselves, but of the relationship between mathematics and physics is the branch of knowledge that reveals the answer to the question of fine-tuning: indisputably, the universe is fine-tuned for consciousness because it has been set up that way, but contrary to the opinion of Stephen Hawking and others, there are no alternative set-ups -no alternative or parallel universes in which things are different- such as are amenable to description in terms of our mathematics. More on this in a moment, but by the Riemann Hypothesis extended to physics, such universes are not impossible per se, but not comprehensible to arithmetic consciousness -uncountable, we might say.
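The ‘thinning out’ claimed above is easy to check numerically (a sketch of my own): by the Prime Number Theorem, π(x), the count of primes up to x, grows like x / log x, so the density π(x)/x falls steadily as we count along the number-line.

```python
import math

def primes_up_to(n: int) -> list:
    """Primes <= n by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for x in (10 ** 3, 10 ** 4, 10 ** 5):
    pi_x = len(primes_up_to(x))
    # density of primes up to x, next to the PNT estimate 1 / log x
    print(x, pi_x, round(pi_x / x, 4), round(1 / math.log(x), 4))
```

The counts are 168, 1229, and 9592, so the density drops from about 17% to about 10% over these ranges, in line with the theorem.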

Whilst it is reasonable to disagree about many things in life, the one thing that everyone agrees on -that everyone living in an arithmetically comprehensible universe must agree on- is ‘primality’: the property of primality cuts across every cultural barrier, and across every imaginable barrier. In the light of the isomorphic relationship between prime-density and energy-density, the existence of parallel universes in which the constants of nature are radically different from what they are in our universe violates what is the most fundamental fact of human experience, i.e. the identity of the prime numbers.

Quantum Mechanics
Recall the 2-slit experiment in which particles are shot through two slits, and recall that if we don’t attempt to measure the particles as they pass through the slits, they travel through both at once rather than one at a time. This leads to the main idea behind Quantum Mechanics: unmeasured quantum objects don’t follow one path or another but follow all possible paths. They are seen to adopt a single path by the act of measurement; the act of measurement is correlated with their adoption of a single path. Recall also the tale of Schrödinger’s Cat, alive or dead depending on the behaviour of some radioactive material. On a quantum view, the cat is both alive and dead at the one time until a measurement is made. One explanation of this apparent conundrum is that there is one universe in which the cat is alive and another in which it is dead. Stephen Hawking proposes that there are parallel universes radically different from our own, and he construes them according to this explanation. He goes on to maintain that the fine-tuning of our universe for consciousness can be explained by the multitude of other parallel universes in which consciousness doesn’t exist. The ‘Many Worlds’ interpretation of QM fails because it doesn’t really explain the thing we want explained: why, for example, is the cat dead in this universe? The Many Worlds interpretation doesn’t say, because it doesn’t say why some outcomes belong to some worlds and other outcomes belong to others. It also complicates matters to an extreme degree, offering possibly the grossest insult to Occam’s razor (‘Thou shalt not multiply entities beyond necessity’) that has ever been offered by a scientific theory to the quest for simplicity that underlies this principle. The same scheme of things that shows that fine-tuning is necessary for arithmetic shows that many worlds are unnecessary. From Euler’s investigation comes a natural, rather than an artificial or man-made, notion of unity.
Units on the Eulerian account are infinite series having a certain amount of prime-density and converging to 1 in the following way:


But it is easy to prove either that this series does not really equal 1 -that rather it converges to 1 without ever reaching it- or, if it is supposed to equal 1, that the negative series


converges to 0 without ever reaching it. We can choose how to view the situation, and depending on what we choose we get a different result. This means that there is a randomness or an ‘uncertainty’ affecting our notions of 1 and 0. This uncertainty is lessened with the decrease in prime-density that occurs with the addition of each further series, but it cannot be eliminated altogether. Its obscurity arises not so much because it is small, but because it involves a balance of prime-density and prime-sparsity which can be expressed using the complex numbers 1/2 + ib and 1/2 - ib, where the real part 1/2 is constant but the imaginary part varies as a function of arithmetic increase. But if we look within Euler’s prime-based series to the composite sub-series that comprise them, we find proto-units, and we find an amount of uncertainty that is unbalanced in one direction or another. Every unit has a prime-density of 1/2 give or take ib, but super-units have a prime-density greater than 1/2, and sub-units a density that is less than 1/2. The unbalanced units are aspects of units, and these imbalances disappear at the level of the classical unit, i.e. by combining the unbalanced super- and sub-units the balanced unit is produced.

It has correctly been said by Richard Feynman and others that quantum mechanics is probability with negative numbers. It may seem that we have no use for negative probabilities in the classical world -it certainly doesn’t make sense to say that there is a -30% chance of rain tomorrow- but in fact there are negative numbers at work in any and every probability judgement. We fail to recognise them because of the balance of prime-density and prime-sparsity (read: balance of energy and space) that characterises the arithmetic classical domain. For example, it appears that the chance of a fair coin producing a head or a tail is always 1/2, but in reality it is 1/2 plus or minus an ineliminable uncertainty ib, because the true value of the number 1 itself is (1/2 + ib) + (1/2 - ib). The true value of 0 is therefore 0 give or take ib, and right there negative probabilities come into play -but because of the dual nature of prime-density and prime-sparsity they tend to cancel. When we switch our attention from the classical to the quantum domain, the balance of prime-density and prime-sparsity underlying arithmetic progressions is lost (read: the balance of energy and space is tipped toward energy). Now we are working with complex numbers such that there is no balance of positive and negative elements; when combined, the result is a further complex number rather than a real number, and what is called an “interference pattern”. We can further this idea by associating the creation operators (b_n)^† and (f_n)^† with the prime numbers p_n… Now we have identified the unique ‘factorization’ of a state into creation operators acting on the ‘vacuum’ with the unique factorization of an integer into prime numbers (and we have a hierarchy of states: |1> is the ‘vacuum’; |2> and |3> and |5> are one-particle states; |6> is a two-particle state… and so on).
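The identification of prime factorizations with multi-particle states can be sketched directly; the occupation-number reading below is my own illustrative gloss on the hierarchy of states just described.

```python
from collections import Counter

def particle_content(n: int) -> Counter:
    """Prime factorization of n as {prime: exponent}, read as the occupation
    numbers of a state built from creation operators on the vacuum |1>."""
    occupation = Counter()
    p = 2
    while p * p <= n:
        while n % p == 0:
            occupation[p] += 1
            n //= p
        p += 1
    if n > 1:
        occupation[n] += 1  # leftover prime factor
    return occupation

print(particle_content(1))   # Counter(): the vacuum |1>
print(particle_content(5))   # a one-particle state |5>
print(particle_content(6))   # a two-particle state |6> = |2 x 3>
print(particle_content(12))  # a repeated factor: {2: 2, 3: 1}
```

Uniqueness of the factorization is what makes the dictionary well defined: each integer names exactly one state.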
By reference to the Witten index -the number of bosonic minus the number of fermionic zero-energy states- we see that the Möbius function

μ(n) = 1 if n has an even number of distinct prime factors,
μ(n) = -1 if n has an odd number of distinct prime factors,
μ(n) = 0 if n has a repeated factor

is equivalent to the operator (-1)^F that distinguishes bosonic from fermionic states, with μ(n) = 0 when n has a repeated factor being equivalent to the Pauli exclusion principle. If we re-express the Mertens function (which sums the 1s and -1s of the Möbius function) as \sum _{n=1}^{p_x} \mu (n), we see how it is that the numbers 1/2 + ib and 1/2 - ib form an interference pattern, and that it is a balanced interference pattern:

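The cancellation claimed here can be computed directly (my own sketch): the +1s and -1s of μ(n) largely cancel, so the running sum M(x) stays small compared with x.

```python
def mobius(n: int) -> int:
    """mu(n): 0 for a repeated prime factor, else (-1)**(distinct prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # repeated factor: the 'Pauli-excluded' case
            result = -result
        p += 1
    if n > 1:
        result = -result        # leftover prime factor
    return result

def mertens(x: int) -> int:
    """Mertens function M(x) = sum of mu(n) for n = 1..x."""
    return sum(mobius(n) for n in range(1, x + 1))

print([mobius(n) for n in range(1, 11)])         # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1]
print(mertens(10), mertens(100), mertens(1000))  # -1 1 2: near-perfect cancellation
```

Even after a thousand terms the sum sits at 2, which is the ‘balanced interference pattern’ in numerical form.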
That this balanced interference pattern emerges from unbalanced interference patterns can be shown by re-expressing \sum _{n=1}^{p_x} \mu (n)+2 as \sum _{n=1}^{p_x} \mu \left(e^{2 \gamma }\left(\sqrt{\frac{n}{e^{2 \gamma }}}\right)^2\right)+2, and then re-arranging the latter as

e^{2 \gamma } \left(\sqrt{\frac{p_x}{e^{2 \gamma }}}\right)^2-\frac{\int_1^x a_1 x H_x \, dn+\int_1^x a_2 x H_x \, dn+\int_1^x a_2 x (x \log ) \, dn+\int_1^x a_1 x (x \log ) \, dn+\ldots}{n x}

Then for example:

Then if we re-express e^{2 \gamma } as e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}, and consider that \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \gamma }}}\right)^2=e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^{s+1} in the case that s = 1, and re-express e^{2 \gamma } \sqrt{\frac{p_x}{e^{2 \gamma }}} as e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{p_x}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right){}^{1/s}\right){}^s, then we can note that if s takes on any positive real value other than 1 -even if the change is as slight as 1 -> 1.000000000000000000001- we upset the balance and we get a strictly finite amount of classical diffusion. We see that there is a critically small difference between \sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn and \zeta (s)-\frac{1}{s-1} such that diffusion eventually gives way to localization (symbolized here as ℏ):


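The ‘critically small difference’ between the partial sum and the integral can be checked numerically (my own sketch): at s = 1 the difference converges to the Euler-Mascheroni constant γ = 0.5772…, which is also the limit of ζ(s) - 1/(s-1) as s -> 1, and for other s it converges to a different finite value.

```python
import math

def sum_minus_integral(s: float, x: int) -> float:
    """sum_{n=1}^{x} n^(-s) minus the integral of n^(-s) from 1 to x."""
    partial = sum(n ** -s for n in range(1, x + 1))
    integral = math.log(x) if s == 1.0 else (x ** (1 - s) - 1) / (1 - s)
    return partial - integral

# At s = 1 the difference tends to gamma = 0.5772156649...
for x in (100, 10_000, 1_000_000):
    print(x, sum_minus_integral(1.0, x))

# At s = 1.5 it tends instead to zeta(1.5) - 2 = 0.6123...
print(sum_minus_integral(1.5, 1_000_000))
```

This exhibits, in elementary terms, the quantity the text compares with ζ(s) - 1/(s-1); what it does not by itself establish is any of the physical reading placed on it.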
This is the Riemann Hypothesis extended to physics referred to above: it says simply that the balance of energy-density and sparsity (light concentration and diffusion) signified by a real part of ρ_n equal to 1/2, and by the value s = 1, is the balance that upholds arithmetic continuity and arithmetic consciousness. It explains the values of all the constants of nature (all of these can without exception be straightforwardly derived from it), since these constants are constrained to be precisely what they are in order that arithmetic consciousness be upheld: take a single zero off the 1/2 line, or disturb the value s = 1 by the tiniest amount, and the associated arithmetic progression is finite. In the same way that the brush strokes of the Mona Lisa are parts of a unified pattern that forms only when a sufficient number of brush strokes have been accumulated, unbalanced interference patterns are parts of a unit that forms only when a sufficient number of these patterns have been accumulated. But these proto-units lack the defining characteristics of a unit. A unit is indefinitely repeatable because every unit is a form of something without boundary, i.e. the unified field at the root of this process. However, these repetitions depend on holes in this field, entities -or non-entities- whose nature is to be bounded. Herein lies a distinction between the long-ranged (units) and the short-ranged (sub-units). Unlike units, proto-units are not amenable to indefinite repetition but can be repeated only a finite number of times… and then stop. It should however now be clear exactly why Stephen Hawking’s proposal that there are parallel worlds existing on analogy with the unmeasured particles in the 2-slit experiment fails. It fails because a particle is not a unit and not therefore a world: it is a proto-unit, an aspect of a world rather than a world per se.
Similarly, the subjective interpretation of QM, according to which the world has to be measured or observed in order to exist, fails: classical objects (units) continue to exist regardless of whether any measurement or observation is made on them, because they are the result of a massive number of interacting quantum objects (sub-units). The Mona Lisa remains the Mona Lisa regardless of the colour or angle of a single brushstroke: countless brush strokes are needed to create the identity of the Mona Lisa, and countless brushstrokes must be destroyed to destroy that identity. An ancient brain-teaser known as the “paradox of the heap” is useful here: take a heap of objects such as stones and subtract one stone; you still have a heap of stones; subtract another stone and you still have a heap of stones. From the rule that a heap minus one stone is still a heap, it may appear to follow that the heap is indestructible, when obviously a point will be reached at which subtracting a stone from the heap turns that heap into a non-heap. The Riemann Hypothesis in a sense tells us what the difference between a heap and a non-heap is, by telling us in a very precise way that arithmetically continuous worlds must involve a certain balance of prime-density (light) and prime-sparsity (space). Tip that balance and arithmetic continuity is lost. Thus the RH completely resolves the paradox of Schrödinger’s Cat: we know that the cat is always either alive or dead, regardless of measurements or observations, because its classical identity is dependent on a balance of light and space that is unaffected by any of the very low-energy quantum systems we can study.
One might imagine a balanced see-saw and imagine trying to tip that balance with a grain of sand, or one may imagine that the image of the woman in Leonardo’s painting appears or disappears on account of the presence or absence of a single brush stroke… Hawking’s scientific theories, and his philosophy of QM generally, might remind us of the bizarre and impractical explanatory tales that children devise in documentaries when posed questions beyond their years: if you can answer a deep philosophical question then by all means answer it, but in a practical manner so that everyone can clearly see that your answer is right; if the question is too hard for you, don’t spin up errant nonsense like the Many Worlds interpretation, or Hawking’s and Dawkins’s use of the Anthropic Principle, etc. Rather, take Isaac Newton’s or Richard Feynman’s approach and ‘feign no hypothesis’. It is a virtue to know, but it is also a virtue to know when you don’t know.


Chesterton, G (1935), The Oracle of the Dog

Dawkins, R (2006), The God Delusion

de la Vallée Poussin, C (1896), Recherches analytiques sur la théorie des nombres premiers (Analytical research on the theory of prime numbers)

Derbyshire, J. (2004), Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics

Euler, L (1737), Variae observations circa series infinitas (Various observations concerning infinite series)

Feynman, R (1987), ‘Negative Probability’, in Quantum Implications: Essays in Honour of David Bohm

Hadamard, J (1896), Sur la distribution des zéros de la fonction zeta(s) et ses conséquences arithmétiques (On the distribution of the zeros of the zeta function and some consequences for arithmetic)

Hawking, S (2010), The Grand Design

Schrödinger, E (1935), Die gegenwärtige Situation in der Quantenmechanik (The present situation in quantum mechanics)