On the Trouble With Physics 1.7.2

Abstract Since the time of the ancient Greeks, the world has been under the sway of the philosophy of atomism, which holds that the parts are prior to the whole, so that the world is an empty receptacle in which material objects are placed like furniture in an empty room. This philosophy, however, contradicts the General Theory of Relativity, according to which the fundamental condition of the universe involves no space and time and therefore no matter, and it has numerous undesirable consequences. It is argued here that the alternative conception (that the whole is prior to the parts, and that space disrupts light rather than the other way around) coheres with theory and with observation, and that it is supported by deep mathematical principles by reference to which several problems of physics can be solved.



The Trouble With Physics
“In his controversial 2006 book The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next, theoretical physicist Lee Smolin points out “five great problems in theoretical physics”:

  1. The problem of quantum gravity: Combine general relativity and quantum theory into a single theory that can claim to be the complete theory of nature.
  2. The foundational problems of quantum mechanics: Resolve the problems in the foundations of quantum mechanics, either by making sense of the theory as it stands or by inventing a new theory that does make sense.
  3. The unification of particles and forces: Determine whether or not the various particles and forces can be unified in a theory that explains them all as manifestations of a single, fundamental entity.
  4. The tuning problem: Explain how the values of the free constants in the standard model of particle physics are chosen in nature.
  5. The problem of cosmological mysteries: Explain dark matter and dark energy. Or, if they don’t exist, determine how and why gravity is modified on large scales. More generally, explain why the constants of the standard model of cosmology, including the dark energy, have the values they do.”


Quantum gravity is the effort in theoretical physics to create a theory that includes both general relativity and the standard model of particle physics. Currently, these two theories describe different scales of nature and attempts to explore the scale where they overlap yield results that don’t quite make sense, like the force of gravity (or curvature of spacetime) becoming infinite.

After all, physicists never see real infinities in nature, nor do they want to!”


Newton misconceived the nature of the relationship between light and space: he took the view that space is a dark static background against which luminous objects of one sort or another move. This view goes back at least as far as Democritus and Leucippus, who saw the world as comprised of atoms separated by empty space. Einstein challenged this picture by allowing that space is active rather than passive, that space can influence the objects within it. His proposal that mass curves space has come to be associated with the proposal that the fundamental condition of the universe is infinitely massive, with all of the mass in the universe compressed to a point. But this condition involves no space and time, and so it cannot involve mass, which by definition and by experiment (the Pauli exclusion principle forbids fermions from occupying the same state, and we never see a lone quark) is a spatio-temporal phenomenon. Nor, therefore, can the curvature of the original universe be a function of mass. That idea is an unhelpful attempt to hang on to the basic element of Newton’s atomistic picture, which gives space priority over light. One of various negative consequences stemming from the atomistic picture is that the 1/r^2 formula lacks universal application: it fails for galaxies, for atoms, and for black holes. Mass is a combination of light and space, and if we reject atomism and give priority to light rather than space, then the fundamental condition of the universe involves no mass, and curvature can at first be deemed a function of the concentration of light. In a region in which there is a balance of light and space, the law of gravitation that comes out of this re-conception will agree with the old law, but in regions in which this balance is tipped toward light the new law will make different predictions.
Conversely, in a region where this balance is tipped toward space, the new law will again make different predictions, but this form of imbalance is to be associated with the end rather than the beginning of the universe. It seems therefore that a light-centric theory of the universe will agree with Newton and Einstein, and with the dictates of contemporary physics, when it comes to the solar system, but disagree when it comes to galaxies, atoms, and black holes. This is because the solar system is a region in which there is a balance of light and space, while galaxies and atoms are regions in which this balance is tipped toward light, and black holes are regions where the balance is tipped toward space. The tip is evidently extreme in the case of atoms, and so the disagreement will be extreme in atomic regions.

To get at the deep mathematics underlying this model of the universe, we need to extend the inverse square law behind the 1/r^2 formula by extending pi. Classically, the law describes how a fixed quantity is spread over an increasing area, which means that there is a symmetric relationship between that quantity and the area; if the quantity or the area died off with distance, the relationship would be asymmetric. In particular, then, the light-centric law of gravitation must involve a broader concept of circularity, one that includes the traditional circle as a special case, but in which the relationship between light and space can be asymmetrical. If a circle is supposed to have an area of 1, then a light source E located at the center will possess the same strength from center to circumference, for E/1 = E. This is the same as saying that there is no difference between center and circumference. If there is no such difference, then either the circle has no area and no radius, or infinite area and an infinite radius; E has either infinite strength or no strength. But in reality the strength of E always lies between these extremes. By the \pi r^2 formula, we know that a circle of area 1 has a radius of \frac{1}{\sqrt{\pi }}\approx 0.56419, but there is no clear way to extract sufficient variation from \pi to permit a potentially infinite number of energy levels less than infinity but greater than 0. There are longer and more informative routes leading to the conclusion to which I am headed, but suffice it to say here simply that \frac{1}{\sqrt{\pi }}\approx e^{-\gamma }=\frac{1}{\sqrt{e^{2 \gamma }}}\approx 0.561459. Since gamma is the limit

\gamma =\lim_{x\to \infty } \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)

a measure that looks better able than \pi to perform the function we require is e^{2 \gamma }. Gamma is the limit of a potentially infinite number of values, so instead of

\pi \sqrt{\frac{1}{\pi }}^2=1

we may write

\lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right){}^2=1
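These expressions are easy to check numerically. The sketch below (plain Python, standard library only) computes the partial sum minus the integral, watches it approach γ, compares 1/√π with e^{-γ}, and confirms that the limit expression above tends to 1:

```python
import math

def gamma_partial(x):
    """Harmonic partial sum minus its integral: sum_{n<=x} 1/n - log x."""
    return sum(1.0 / n for n in range(1, x + 1)) - math.log(x)

EULER_GAMMA = 0.5772156649015329  # known value of the Euler-Mascheroni constant

g = gamma_partial(10**6)
print(g)                          # approaches 0.5772156...
print(1 / math.sqrt(math.pi))     # 0.5641895...
print(math.exp(-EULER_GAMMA))     # 0.5614594...

# The expression from the text: e^{2 gamma} * (sqrt(1/e^{2 gamma_x}))^2 -> 1
expr = math.exp(2 * EULER_GAMMA) / math.exp(2 * g)
print(expr)                       # approaches 1
```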

Old formula…

\pi R^2=A

which gives

\pi \sqrt{\frac{1}{\pi }}^2=1

New formula

e^{2 \gamma } \sqrt{\frac{1}{e^{2 \gamma }}}^2=1

followed by

\lim_{x\to \infty } \, e^{2 \gamma }\left(\sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right){}^2=1

Newton’s equation is couched in terms of the traditional unit circle (where the radius rather than the area is 1), the old formula for which is

\sqrt{\frac{A}{\pi }}=R

which gives

\sqrt{\frac{\pi }{\pi }}=1

New formula

\sqrt{\frac{e^{2 \gamma }}{e^{2 \gamma }}}=1

followed by

\lim_{x\to \infty } \sqrt{\frac{e^{2 \gamma }}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}=1

The new equations will at first look strange, but they are merely variants of the old \pi r^2 formula which allow \pi to vary beyond its classical limits, and they are therefore able to take account of extreme curvature caused by imbalances of light (extreme example: the singularity at the beginning of time) and of space (extreme example: the pseudo-singularities in black holes at the end of time). To make this new approach work properly, the following pair of shapes must be regarded dualistically, as having positive and negative elements:

Area of Exterior circle =e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}{}^2, area of interior circle = e^{2 \gamma } \sqrt{\frac{1}{e^{2 \gamma }}}^2-e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}{}^2
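On this dualistic reading, the two areas can be tabulated for growing x: the exterior area tends to 1 and the interior area to 0. A minimal numeric sketch, using the partial-sum version of γ defined above:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gamma_x(x):
    # Partial version of gamma: sum_{n<=x} 1/n minus integral_1^x dn/n (= log x).
    return sum(1.0 / n for n in range(1, x + 1)) - math.log(x)

for x in (10, 1000, 100000):
    exterior = math.exp(2 * GAMMA) / math.exp(2 * gamma_x(x))  # tends to 1
    interior = 1.0 - exterior                                  # tends to 0
    print(x, exterior, interior)
```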

Once it is agreed that gamma is a special case of \zeta (s)-\frac{1}{s-1}, we can go from \lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}{}^2\right)=1 to the more general

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1
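The generalised expression can be checked numerically for s ≠ 1, using the fact that \sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn converges to \zeta (s)-\frac{1}{s-1}. A sketch for s = 2, where \zeta (2)-1=\frac{\pi ^2}{6}-1:

```python
import math

def zeta_minus_pole_partial(s, x):
    """sum_{n<=x} n^-s minus integral_1^x n^-s dn; tends to zeta(s) - 1/(s-1)."""
    total = sum(n ** -s for n in range(1, x + 1))
    integral = math.log(x) if s == 1 else (x ** (1 - s) - 1) / (1 - s)
    return total - integral

s = 2.0
limit_value = math.pi ** 2 / 6 - 1 / (s - 1)  # zeta(2) - 1/(2-1)
for x in (10, 1000, 100000):
    expr = math.exp((s + 1) * limit_value) / math.exp((s + 1) * zeta_minus_pole_partial(s, x))
    print(x, expr)  # tends to 1
```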

Now consider that although the limit is 1 regardless of the values of x or s, we get an inter-relationship between the exterior and interior circles that is non-repeating if and only if s = 1, meaning for instance that if and only if s = 1 can the exterior circles in the figures above be expanded indefinitely and/or the interior circles contracted indefinitely. These dynamics are associated with spirals that unfold forever. If s takes on a real value other than 1, there is a limiting value (which we can symbolise as \hbar) beyond which the circles take on a certain maximum/minimum size, and we get an inter-relationship, and a contraction/expansion process, that is repetitive. These dynamics are associated with the degenerate circular forms of a spiral:

1-e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^{s}}-\int_1^x \frac{1}{n^{s}} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}

By reference to this distinction, we can regard the inverse square law as a special case of a larger law, and extend Newton’s law of gravity beyond regions in which there is a balance of light and space, indeed extend it to all regions amenable to mathematical description. Take the 1/r^2 formula, and consider a circle of area 1. There is in the limit a perfect balance of light and space (E = 1 and A = 1). If however we write \pi as the partial sum/integral e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)} and/or s is a real number other than 1, then we have an imbalance. We can say the following: if s = 1, then there is an approximately symmetrical relationship between light and space, and the new formula will yield predictions similar to those yielded by 1/r^2. If however s ≠ 1, the balance is strongly tipped toward light (extreme example: the singularity of concentrated light at the root of the universe), or conversely toward space (extreme example: the interior of black holes), and the new formula makes entirely different predictions than 1/r^2. When s ≠ 1, the region of space described by the new law curves back on itself. In these light- or space-dense environments, curvature, as a function of density, is far greater.
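On this reading, e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} plays the role of a broadened circle-constant: at s = 1 it is e^{2 \gamma }\approx 3.172, close to but not equal to \pi. A sketch tabulating it for a few real values of s (the partial-sum approximation is usable for real s > 0):

```python
import math

def zeta_minus_pole(s, x=200000):
    """Partial-sum approximation of zeta(s) - 1/(s-1), usable for real s > 0."""
    total = sum(n ** -s for n in range(1, x + 1))
    integral = math.log(x) if s == 1.0 else (x ** (1 - s) - 1) / (1 - s)
    return total - integral

for s in (0.5, 1.0, 2.0):
    constant = math.exp((s + 1) * zeta_minus_pole(s))
    print(s, constant)       # the broadened "circle-constant" at this s
print("pi =", math.pi)       # classical value, for comparison
```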


“One issue with understanding quantum physics is the question of what underlying physical mechanism is involved. There are many interpretations in quantum physics – the classic Copenhagen interpretation, Hugh Everett III’s controversial Many Worlds Interpretation, and even more controversial ones such as the Participatory Anthropic Principle.

The question that comes up in these interpretations revolves around what actually causes the collapse of the quantum wavefunction. (The puzzle of human consciousness’s curious role in resolving these questions is related in Quantum Enigma.)

Most modern physicists who work with quantum field theory no longer consider these questions of interpretation to be relevant. The principle of decoherence is, to many, the explanation – interaction with the environment causes the quantum collapse. Even more significantly, physicists are able to solve the equations, perform experiments, and practice physics without resolving the questions of what exactly is happening at a fundamental level, and so most physicists don’t want to get near these bizarre questions with a 20 foot pole.”


Wolfram MathWorld defines a projection in this way:

A projection is the transformation of points and lines in one plane onto another plane by connecting corresponding points on the two planes with parallel lines. This can be visualized as shining a (point) light source (located at infinity) through a translucent sheet of paper and making an image of whatever is drawn on it on a second sheet of paper.

But a “(point) light source” is a zero-dimensional light source, which involves an infinite concentration and a zero diffusion of light. The problem with the Wolfram MathWorld definition of projection, and with every physical theory that relies on zero-dimensional point sources, is precisely this infinite concentration and zero diffusion of light. Kepler and Newton showed that the motions of heavenly bodies follow orbits resulting from the intersection of a cone by a plane, i.e., they showed that gravitational attraction can be understood in terms of the intersection of a cone by a plane. By implication, if the plane intersects the cone at its vertex, and at 90 degrees, we have an infinite concentration and a zero diffusion of light, which is a picture of the fundamental condition of the universe and of a point (light) source. A solution to the problem, then, is that every way of positioning the plane that allows for light to be diffused over space requires a further point source that is greater than zero-dimensional and involves therefore a finite quantity of concentrated light and a non-zero quantity of diffused light:

As the amount of diffused light increases, the amount of concentrated light decreases, and so we have two related sets of numbers, one set representing the concentrated light (the projectors), and one set representing the diffused light (the projections). Note that nothing is to be done to the light, and thus we have the idea of a universe which is created, in accordance with the ancient Hebraic tradition, by the projection not of light per se but of space (the Genesis account of creation has puzzled many because it explicitly says that light comes before any material source of light). We can identify these differences with atoms: as the gap widens, the atom diffuses light and jumps to a lower energy level; and as the gap narrows, the atom concentrates light and jumps to a higher energy level. In particular, we can identify the gap with the nucleus of an atom, whose energy levels we know are statistically identical to the imaginary parts of the non-trivial zeros of the zeta function:

These ideas can be given an initial mathematical form by associating the creation operators \left(b_n\right){}^{\dagger } and \left(f_n\right){}^{\dagger } with the prime numbers p_n... Now we have identified the unique ‘factorization’ of a state into creation operators acting on the ‘vacuum’ with the unique factorization of an integer into prime numbers (and we have a hierarchy of states: |1> is the ‘vacuum’; |2> and |3> and |5> are one-particle states; |6> is a two-particle state… and so on). By reference to the Witten index (the number of bosonic minus the number of fermionic zero-energy states) we see that the Möbius function

\mu (n)=\begin{cases} 1 & n \text{ has an even number of distinct prime factors} \\ -1 & n \text{ has an odd number of distinct prime factors} \\ 0 & n \text{ has a repeated prime factor} \end{cases}

is equivalent to the operator (-1)^F that distinguishes bosonic from fermionic states, with \mu (n) = 0 when n has a repeated prime factor being equivalent to the Pauli exclusion principle. If we re-express the Mertens function (which sums the 1s, -1s, and 0s of the Möbius function) as \sum _{n=1}^{p_x} \mu (n), we see the following superposition of waves:
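The Möbius values and their running (Mertens) sum are easy to compute directly; under the Riemann Hypothesis the running sum M(x) grows no faster than x^{1/2+\epsilon }. A minimal sketch:

```python
def mobius(n):
    """Mobius function: 0 if n has a squared prime factor,
    else (-1) raised to the number of distinct prime factors."""
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # repeated prime factor
            result = -result
        d += 1
    if n > 1:                     # leftover prime factor
        result = -result
    return result

def mertens(x):
    """Mertens function M(x) = sum_{n<=x} mu(n)."""
    return sum(mobius(n) for n in range(1, x + 1))

for x in (10, 100, 1000, 10000):
    m = mertens(x)
    print(x, m, abs(m) / x ** 0.5)  # M(x) and its square-root-normalised size
```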

This wave depicts fluctuations that from a local perspective exist in a square-root-bounded balance of concentration and diffusion. If a balance of concentration and diffusion is within the bounds the Riemann Hypothesis imposes on arithmetically continuous phenomena, then the range of the diffused light is potentially infinite, but if a balance is outside of these bounds, then the range of the diffused light is finite. This buys us the distinction between a projector and a projection. We can reveal the hidden structure of n by re-expressing \sum _{n=1}^{p_x} \mu (n)+2 as \sum _{n=1}^{p_x} \mu \left(\left(e^{2 \gamma } \sqrt{\frac{n}{e^{2 \gamma }}}\right)^2\right)+2, and then re-arranging the latter as

e^{2 \gamma } \sqrt{\frac{p_x}{e^{2 \gamma }}}{}^2-\frac{\int_1^x a_1 x H_x \, dn+\int_1^x a_2 x H_x \, dn+\int_1^x a_2 x \log x \, dn+\int_1^x a_1 x \log x \, dn+\cdots }{n x}


Then if we re-express e^{2 \gamma } as e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} and consider that

\left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \gamma }}}\right)^2=e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^{s+1}

in the case that s=1, re-express

e^{2 \gamma } \sqrt{\frac{p_x}{e^{2 \gamma }}}

as

e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{p_x}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right){}^{1/s}\right){}^s

and note that if s takes on any real value other than 1 (even if the change is as slight as 1 -> 1.000000000000000000001) we upset the balance and get a strictly finite amount of diffusion:

But in the same way that the finite matrices below are aspects of the potentially infinite matrices, superpositions associated with values of s other than 1 are aspects of superpositions associated with s = 1:

1=e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^{s+1}

0=e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^{s}}-\int_1^x \frac{1}{n^{s}} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}-e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^{s+1}

\left( \begin{array}{cccccc} 0 & -1 & -1 & -1 & \ldots & -1 \\ 1 & 0 & -1 & -1 & \ldots & -1 \\ 1 & 1 & 0 & -1 & \ldots & -1 \\ 1 & 1 & 1 & 0 & \ldots & -1 \\ \vdots & \vdots & \vdots & \vdots & \ddots & -1 \\ 1 & 1 & 1 & 1 & 1 & \hbar \\ \end{array} \right)

\left( \begin{array}{cccccc} 0 & 1 & 1 & 1 & \ldots & 1 \\ -1 & 0 & 1 & 1 & \ldots & 1 \\ -1 & -1 & 0 & 1 & \ldots & 1 \\ -1 & -1 & -1 & 0 & \ldots & 1 \\ \vdots & \vdots & \vdots & \vdots & \ddots & 1 \\ -1 & -1 & -1 & -1 & -1 & -\hbar \\ \end{array} \right)

1=e^{2 \gamma } \sqrt{\frac{1}{e^{2 \gamma }}}^2

0=e^{2 \gamma } \sqrt{\frac{1}{e^{2 \gamma }}}^2-\left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right){}^2

\left( \begin{array}{cccccc} 0 & -1 & -1 & -1 & -1 & \ldots \\ 1 & 0 & -1 & -1 & -1 & \ldots \\ 1 & 1 & 0 & -1 & -1 & \ldots \\ 1 & 1 & 1 & 0 & -1 & \ldots \\ 1 & 1 & 1 & 1 & 0 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\ \end{array} \right)

\left( \begin{array}{cccccc} 0 & 1 & 1 & 1 & 1 & \ldots \\ -1 & 0 & 1 & 1 & 1 & \ldots \\ -1 & -1 & 0 & 1 & 1 & \ldots \\ -1 & -1 & -1 & 0 & 1 & \ldots \\ -1 & -1 & -1 & -1 & 0 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\ \end{array} \right)

The Many Worlds interpretation of QM we know, and we can reject it because it doesn’t really explain anything. The well-known ‘Copenhagen Interpretation’ says that quantum states are observer-dependent (the tree falling in the forest doesn’t make a sound if there is no ear to hear it), and thus suggests that the same might also be true of the classical world. We can reject this interpretation also, because it doesn’t anchor to an objective reference-world – as Bertrand Russell once observed, it is false that everything is relative, because in that case there would be nothing for anything to be relative to. From the mathematics above, the misconception underlying the Many Worlds, the Copenhagen, and other interpretations of QM is that quantum systems are the building blocks of classical systems in the same sense in which bricks are the building blocks of brick walls, which implies that a classical system (a brick wall) is merely a scaled-up version of quantum systems (bricks). They are atomistic interpretations, owing to the atomistic nature of physics since the ancient Greeks. But from the mathematics above, atomism is false: if you follow a classical system back in time as far as possible, you will not come to a quantum brick or to a small pile of quantum bricks, but to a maximally light-dense classical system that appeared at the moment of creation in an instant (this maximally light-dense state can be associated with the smallest prime (2) and the smallest non-trivial zero (0.5 + 14.1347i), which mark a maximum amount of light-concentration and a minimum amount of light-diffusion). An alternative picture can be formed by using the notions of projector and projection.
In the light of this picture, physics has been based on a category mistake that equivocates between the projector (non-classical) and the projection (classical): projected classical domains arise from a particular balance of concentrated light and diffused light (s = 1), whereas the non-classical domains of the projectors involve imbalances in favour of concentrated light (s ≠ 1). More particularly, the projectors (artificial point sources) produce continuous balances of light and space in the form of projections – they produce spiral-waves associated with the non-trivial zeros of L-functions – but are themselves light-dominant. All balances are superpositions; projections are associated with symmetric superpositions, and projectors with asymmetric superpositions. No projector or combination thereof can possibly produce an infinite amount of diffused light – an infinite projection – because there can be no such thing as an infinite amount of diffused light, and so this sets up the need for the collapsing of asymmetric superpositions: whenever an asymmetric superposition can produce no further diffused light, and hence no further symmetric superposition, it collapses to make room for a further projector. The zeros of L-functions are the atomic projectors and the primes are the atomic projections… In a sense there are Many Worlds in which Schrödinger’s Cat exists, and in a sense Schrödinger’s Cat is, as the Copenhagen Interpretation proposes, subjective, because there are infinitely many perspectives (each associated with an L-function) in which Schrödinger’s Cat exists. But these worlds are not contradictory (at all times in these worlds the cat is either alive or dead, and not both), and they are all tied to an objective real world on account of the relationship between non-canonical L-functions, whose starting prime-density is sub-maximal, and the Riemann zeta function.
If the radioactive substance in the tale of Schrödinger’s Cat decays, then the cat dies, and if it doesn’t decay the cat lives, and until a measurement is made the state of the particle is in a superposition; but this doesn’t imply that until a measurement is made the cat is alive and dead at the same time. Why not? Because the inconsistency of the underlying non-classical superpositions, in which a particle is in multiple states at the same time, is not inherited by the classical system, in which the cat is either alive or dead but not both. Consider the brush strokes making up a self-portrait by Rembrandt:


In the same way that the representational character of the overall image is not undermined by the non-representational character of the individual brush strokes, the character of the cat in Schrödinger’s thought experiment as alive or dead but not both is not undermined by the character of the particle as being in one state and another at the same time. Furthering the painting analogy, there is an ideal distance between the painting and the viewer, and there are distances at which the representational image disappears because the viewer is too close to or too far from the painting. If the viewer is too close, then he or she will see seemingly haphazard brush strokes, but these brush strokes combine to create a coherent image. If the viewer goes extremely close – if they use a microscope – then they will see individual paint particles. As it is with the painting, so it is with the world: the painting was painted, and the world was created, specifically to be seen, and so brush strokes and quantum states combine to create coherent paintings and predictable classical states. And while there is undoubted randomness in the world, this randomness is strictly constrained by mathematical limits and precisely calculated to produce an orderly classical whole. To the uninitiated eye of an ant crawling on the surface of a painting, the artist has apparently distributed the paint in a haphazard way on the canvas, but the brush strokes are painted in such a way that the canvas is coherent to the eye of the man who surveys it from an appropriate distance. To the uninitiated mind, G-d plays dice at the level of the parts, but the dice are rolled in such a way that the end result is something orderly. Order, it can be said, emerges out of chaos, not in the sense that the chaos is rearranged or has order imposed on it, but in the sense that chaos interacts with chaos to bring about order. “Quantum mechanics”, said Einstein, “is certainly imposing.
But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the old one. I, at any rate, am convinced that He does not throw dice.” I believe that Einstein was right, and that G-d does not throw dice in the sense that Einstein objected to.


“There are four fundamental forces of physics, and the standard model of particle physics includes only three of them (electromagnetism, the strong nuclear force, and the weak nuclear force). Gravity is left out of the standard model. Trying to create one theory which unifies these four forces into a unified field theory is a major goal of theoretical physics.

Since the standard model of particle physics is a quantum field theory, any unification will have to include gravity as a quantum field theory, which means that solving problem 3 is connected with solving problem 1.

In addition, the standard model of particle physics shows a lot of different particles – 18 fundamental particles in all. Many physicists believe that a fundamental theory of nature should have some method of unifying these particles, so they are described in more fundamental terms. For example, string theory, the most well-defined of these approaches, predicts that all particles are different vibrational modes of fundamental filaments of energy, or strings.”


In 1632 Galileo observed in the Dialogue Concerning the Two Chief World Systems that the laws of physics are the same in a ship traveling at constant velocity on a smooth sea as they are in a ship standing still:

Shut yourself up with some friend in the main cabin below decks on some large ship, and have with you there some flies, butterflies, and other small flying animals. Have a large bowl of water with some fish in it; hang up a bottle that empties drop by drop into a wide vessel beneath it. With the ship standing still, observe carefully how the little animals fly with equal speed to all sides of the cabin. The fish swim indifferently in all directions; the drops fall into the vessel beneath; and, in throwing something to your friend, you need throw it no more strongly in one direction than another, the distances being equal; jumping with your feet together, you pass equal spaces in every direction… Have the ship proceed with any speed you like, so long as the motion is uniform and not fluctuating this way and that. You will discover not the least change in all the effects named, nor could you tell from any of them whether the ship was moving or standing still.

This observation has formed the basis of a principle known as ‘The Principle of Relativity’, which states simply that the laws of physics are the same for all observers in “inertial frames of reference”, i.e. observers at rest or moving uniformly in a straight line. Einstein discovered a contradiction involving this principle and the principle according to which light is propagated at the velocity c regardless of the motion of the emitting body. The figure below depicts a train, and a light source located in the center of a carriage, and an inconsistency between the perception of an observer inside and an observer outside the train:

From the point of view of an observer inside the train, light emitted from the center of the carriage reaches the front and the rear of the carriage simultaneously, but from the point of view of an observer outside the train, the light reaches the rear of the carriage first. It follows that at least one of the following principles is false:

  1. units of space and time are the same for all observers regardless of their states of motion
  2. the fundamental laws of physics are the same for all observers in inertial frames of reference
  3. light is propagated at the velocity c regardless of the state of motion of the emitting body
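The inconsistency between the two observers can be made quantitative with standard special-relativity kinematics. In the sketch below the numbers (a carriage of proper length 20 m moving at 0.6c) are hypothetical, chosen only for illustration: in the ground frame the rear wall runs toward the flash while the front wall runs away from it, so the two arrivals are separated by \gamma v L/c^2.

```python
import math

c = 299_792_458.0          # speed of light, m/s
L = 20.0                   # proper length of the carriage, m (hypothetical)
v = 0.6 * c                # train speed in the ground frame (hypothetical)

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
L_ground = L / gamma       # contracted carriage length seen from the platform

# Ground frame: light chases the receding front wall, meets the approaching rear wall.
t_front = (L_ground / 2) / (c - v)
t_rear = (L_ground / 2) / (c + v)

print(t_rear)              # the rear arrival comes first...
print(t_front)             # ...the front arrival comes later
print(t_front - t_rear)    # equals gamma * v * L / c**2
```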

Given that 2 and 3 are true, it follows that 1 is false. It follows also that observers traveling at different velocities literally inhabit different spatio-temporal environments, different spaces and times. This is what leads to the famous twin paradox, and to all of the weirdness of Special Relativity. These paradoxes are due to the contraction of space and the slowing of time in the direction of motion, and if per impossibile one could travel at the speed of light, space and time would effectively vanish. In order to make sense of the concept of projection – and to avoid a circumstance in which there is an infinite concentration and a zero diffusion of light – it is necessary to work with artificial point sources. These are all tied to a solitary natural point source, from whose perspective light is infinitely concentrated. It follows that while it is possible to regard light as in motion against a background of space, or vice versa, in environments in which there is a balance of concentrated and diffused light (classical environments in keeping with the condition imposed by the Riemann Hypothesis on arithmetic continuity), light is, absolutely speaking, at rest, and acquires the well-founded illusion of having a speed from the absolute motion of space. We must therefore revise the Theory of Relativity in terms of the principle that, absolutely speaking, it is light that is at rest, and it is space that expands equally in all directions at the minimum speed of c. The idea that light in a vacuum is propagated in straight lines at the velocity c is replaced by the idea that a vacuum expands in straight lines at the velocity c. Rather than

  • light is propagated at the velocity c regardless of the state of motion of the emitting body

we have

  • space expands at the velocity c regardless of the state of motion of the body in space

The curvilinear propagation of light in a gravitational field is replaced by the idea of the curvilinear propagation of space in a field of concentrated light. To apply this principle, take any situation in which we would be inclined to say that light is moving in respect of space and ask how space must be moving in respect of light to account for this impression. A symmetry arising from the balance of light and space, and prime density and sparsity, allows us to look at it in the former way (this is a well-founded illusion), but the latter is what is really happening.
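Whichever way the rest/motion assignment is made, the quantitative content of the standard transformation is unchanged. As a point of reference, here is a minimal sketch (illustrative only, not part of the source argument) of the Lorentz factor that governs the contraction and dilation discussed above:

```python
import math

def lorentz_factor(v, c=299_792_458.0):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) for a speed v in m/s."""
    beta = v / c
    if beta >= 1.0:
        raise ValueError("v must be strictly less than c")
    return 1.0 / math.sqrt(1.0 - beta * beta)

# Moving clocks run slow and moving rods contract by the factor gamma:
for frac in (0.1, 0.5, 0.9, 0.99):
    g = lorentz_factor(frac * 299_792_458.0)
    print(f"v = {frac:4.2f}c  gamma = {g:7.3f}  1 m rod appears {1.0 / g:.4f} m long")
```

At 0.6c the factor is exactly 1.25, so a metre-rod contracts to 0.8 m; as v approaches c the factor diverges, which is the sense in which units of space and time "tend to vanish."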

The principles which emerged originally from the Special Theory of Relativity are these:

  1. the fundamental laws of physics are the same for all observers in inertial frames of reference
  2. light propagates at the velocity c regardless of the state of motion of the emitting body
  3. therefore units of space and time differ from observer to observer

From these principles we get the idea that units of space and time shrink in the direction of accelerated motion, so that as this motion approaches that of the speed of light, space and time tend to vanish. But since light is absolutely at rest, these principles are misleading, and the correct way to express them is this:

  1. the fundamental laws of physics are the same for all observers in inertial frames of reference
  2. space expands at the velocity c regardless of the state of motion of the body in space
  3. therefore units of space and time differ from observer to observer

But we know both from maths and from experiment that, while the expansion of mathematical and physical space may be regarded as locally uniform, this expansion is accelerating from a global point of view, and so we can revise 2 in this way:

  • space expands locally at the velocity c regardless of the state of motion of the body in space, but there is a global expansion which is greater than c and which is accelerating

Richard Feynman, in The Character of Physical Law, invited us to imagine that atoms are divided into blue-coloured and white-coloured varieties, separated into compartments. If the separation is removed, then just as blue dye turns water a luke-blue colour, the atoms form a luke-blue mixture. He observed that individual collisions provide no clue as to the irreversibility of the mixing process, but that studying a film of the mixing played in reverse reveals that

…every one of the collisions is absolutely reversible, and yet the whole moving picture shows something absurd, which is that in the reverse picture the molecules start in the mixed condition… and as time goes on, through all the collisions, the blue separates from the white…

He went on to say that

…it is not natural that the accidents of life should be such that the blues will separate themselves from the whites…

The one-way nature of this process is reflected in the experiment in which a gas is confined to one of two compartments. If the separation between the compartments is removed, the gas spontaneously distributes itself in a uniform manner throughout the two compartments, but it does not spontaneously revert to the separated state. More familiar still is the breaking of an egg. We never see a broken egg spontaneously reassemble, and there is no way to reassemble an egg after it has been broken (“All the king’s horses and all the king’s men couldn’t put Humpty together again.”). But although both these processes involve a one-way direction when viewed from a sufficiently global perspective, they can go either way when viewed from a sufficiently local perspective: the individual atoms comprising the gas molecules might just as well go from compartment B to compartment A as from compartment A to compartment B, and if we study the individual atoms comprising Humpty Dumpty we get no clue as to the fact that Humpty cannot be reassembled.

Feynman’s answer that it is not natural is the sort of answer that can legitimately be given only when no deeper explanation is possible -when the thing standing in need of explanation is self-explanatory, or a brute fact has been reached- and the irreversibility of classical phenomena is very far from being a brute fact. To explain it, consider, instead of blue- and white-coloured atoms that eventually form a luke-blue mixture, bosons (which can all pile up in a single zero-dimensional location) and fermions (which must be separated in space): in its fundamental light state, the universe involves no space and no fermions, and the birth of matter involves the introduction of space and fermions that produces a mixture of light and darkness.
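The statistical character of this one-way behaviour can be made concrete with a toy simulation of the two-compartment experiment. The following sketch (an illustration with hypothetical parameters, not part of the source argument) uses the Ehrenfest urn model: every individual hop is perfectly reversible, yet the occupancy drifts irreversibly toward an even mixture and, for all practical purposes, never returns:

```python
import random

def ehrenfest(n_particles=1000, steps=20000, seed=1):
    """Ehrenfest urn model of the two-compartment gas: at each step one
    randomly chosen particle hops to the other compartment. Each hop is
    time-symmetric, yet the occupancy of compartment A drifts from
    n_particles toward n_particles / 2 and fluctuates there."""
    random.seed(seed)
    in_a = [True] * n_particles          # every particle starts in compartment A
    count_a = n_particles
    trajectory = []
    for _ in range(steps):
        i = random.randrange(n_particles)
        in_a[i] = not in_a[i]            # the hop itself is reversible
        count_a += 1 if in_a[i] else -1
        trajectory.append(count_a)
    return trajectory

traj = ehrenfest()
print("occupancy of A: start", traj[0], "-> end", traj[-1])  # ends near 500
```

A spontaneous return to the all-in-A state is not forbidden by any single hop; it is merely so improbable that, viewed globally, the process is one-way.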
Again, the individual interactions between these particles provide no clue as to the irreversibility of the mixing process. But recall the correspondence we earlier set up between the creation operators b_n^{\dagger} and f_n^{\dagger} and the prime numbers p_n, and recall that the Möbius inversion function is equivalent to the operator (-1)^F that distinguishes bosonic from fermionic states:

Any reversal contravenes the Riemann Hypothesis.
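The correspondence invoked here -Spector's identification of the Möbius function with the fermion-parity operator (-1)^F, each distinct prime playing the role of an occupied fermionic mode- can be checked numerically. A minimal sketch (illustrative only):

```python
def factorize(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def mobius(n):
    """Mobius function mu(n): 0 if n has a repeated prime factor
    (a 'doubly occupied' mode, excluded as for fermions), otherwise
    (-1)^F where F is the number of distinct prime factors."""
    fs = factorize(n)
    if len(fs) != len(set(fs)):
        return 0
    return (-1) ** len(fs)

for n in range(1, 13):
    print(n, mobius(n))
```

Square-free integers thus behave like admissible multi-fermion states, and mu(n) records the parity of the number of "fermionic" primes they contain.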

To the naked eye the distribution of the stars in the night sky seems to be random, but looking through a telescope we see that galaxies have a spiral shape, light-dense toward the center of the galaxy and increasingly dark at distances further from the center. The same is true of the primes on the number line.

It is only by considering a sufficiently large group of stars and primes, and a sufficiently large group of particles, that the loss of light-density known as ‘entropy’ is found to involve a one-way direction known as the arrow of time. In fact there are multiple arrows of time. Well known are the thermodynamic arrow arising from the loss of heat, the cosmological arrow arising from the expansion of the universe, the radiative arrow arising from the expansion of waves outwards from their source, the causal arrow arising from the fact that effects follow causes rather than precede them, the quantum arrow arising from the collapse of the wave-function, and the psychological arrow of consciousness arising from the fact that we remember the past while the future is unknown. Less well known is the genetic arrow, which arises from the loss of mutability of DNA from generation to generation, a consideration that explains anomalous results such as the apparent mismatch between Y-DNA extracted in 2014 from the skeleton of the English King Richard III and that of his contemporary paternal relatives, and points to the falsity of the Theory of Evolution (which depends on symmetric DNA mutation rates).

But the arrow that contains and explains all the others is the arithmetic arrow: if we play a film depicting the un-breaking of an egg, or the increase of the density of the prime numbers along the number line, we see something that looks absurd, and Feynman had no better explanation for this impression of absurdity in the first case than ‘the accidents of life.’ But the impression of absurdity attached to the gain of prime-density as we count down the number line isn’t an ‘accident’ at all – it is mathematically necessary that the repetition of a unit be accompanied by a global decrease in prime-density.
The Riemann Hypothesis is an extension of the Prime Number Theorem: it says, not merely that the primes thin out, but that local changes in prime-density are equally likely to involve an increase as a decrease in density, and that they cannot exceed the upper and lower bounds given by the formulas below:

\text{Density}=\frac{\pi (x)}{x}

\text{Density}_{\text{Min}}=\frac{\sum _{n=2}^x \frac{1}{\log n}-2\, \text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} \log x\right)}{x}

\text{Density}_{\text{Max}}=\frac{\sum _{n=2}^x \frac{1}{H_n}-2\, \text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} \log x\right)}{x}
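The global thinning of the primes that these densities quantify is easy to exhibit directly. A minimal sketch (a plain sieve; the sample points are arbitrary):

```python
def prime_density(limit):
    """Sieve of Eratosthenes; return pi(x)/x at a few sample points."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    count, densities = 0, {}
    for x in range(2, limit + 1):
        if sieve[x]:
            count += 1
        if x in (10, 100, 1000, 10000, 100000):
            densities[x] = count / x
    return densities

for x, d in prime_density(100000).items():
    print(f"pi({x}) / {x} = {d:.5f}")
```

The density falls from 0.4 at x = 10 to under 0.1 at x = 100000, the monotone global decline that the Prime Number Theorem describes and the Riemann Hypothesis refines.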

We can now extend the earlier principles:

  1. the fundamental laws of arithmetic are the same for all observers in inertial (arithmetically continuous) frames of reference
  2. there is a local balance of prime-density and prime-sparsity which accounts for the apparent constancy of the speed of light, and allows either light or space to be assumed to be at rest, but there is a global loss of prime-density which accounts for the red-shifted nature of the light from distant stars
  3. therefore units of space and time and units of number differ from observer to observer

If prime-density were to increase sufficiently, there would be no units of number whatever, and what we can say is that every observer has their own individual number line, each of which differs from that of every other observer. We can get at these different but interconnected number lines from Dirichlet’s observation that, so long as q and a have no common factor greater than 1, the progression q n + a contains infinitely many primes, and that it is associated with an L-function (defined by the series \sum _{n=1}^{\infty } a_n n^{-s}). All of these L-functions involve a local balance of prime-density and prime-sparsity (as dictated by the Generalized Riemann Hypothesis), and therefore fit the experimental data. But all involve different notions of unity and zero -different sets of primes and zeros, because of different sets of artificial point sources- and a global and irreversible loss of prime-density. Correspondingly, light-density is also dropping off in an irreversible manner, meaning that as the number line is turning irreversibly from number into space, the universe is turning irreversibly from light into darkness. “E = MC^2” now becomes “Energy = Mass times the expansion of space squared”, and “M = E/C^2” becomes “Matter = Energy divided by the expansion of space squared.” But the two equations -one of which describes the conversion of mass into energy, the other the conversion of energy into mass- cannot, as has been imagined, be equivalent, because the expansion of space is not constant but grows as a function of time (if C = 3 * 10^8 m/sec at time t1 and 3 * 10^9 m/sec at t2, then by multiplying M by 3 * 10^8 m/sec at t2 you obviously do not recapture E, and if the expansion of the universe is allowed to go on until a sufficiently late time tn, you will recapture only a small proportion of E).
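Dirichlet's observation can be illustrated with a short computation: for q = 10 (a hypothetical choice), the primes distribute themselves roughly equally among the residues coprime to q, while every other residue receives at most one prime. A minimal sketch:

```python
from collections import Counter
from math import gcd

def primes_up_to(limit):
    """Simple sieve returning the list of primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

q = 10
counts = Counter(p % q for p in primes_up_to(100000))
# Only residues a with gcd(a, q) = 1 receive infinitely many primes,
# and Dirichlet's theorem says they share them roughly equally:
for a in range(q):
    if gcd(a, q) == 1:
        print(f"primes congruent to {a} (mod {q}): {counts[a]}")
```

Each of the four eligible residue classes (1, 3, 7, 9 mod 10) receives close to a quarter of the 9590 odd primes below 100000 that are not 5 - the "local balance" the Generalized Riemann Hypothesis constrains.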


A theoretical physics model is a mathematical framework that, in order to make predictions, requires that certain parameters are set.

In the standard model of particle physics, the parameters are represented by the 18 particles predicted by the theory, meaning that the parameters are measured by observation.

Some physicists, however, believe that fundamental physical principles of the theory should determine these parameters, independent of measurement. This motivated much of the enthusiasm for a unified field theory in the past and sparked Einstein’s famous question “Did G-d have any choice when he created the universe?” Do the properties of the universe inherently set the form of the universe, because these properties just won’t work if the form is different?

The answer to this seems to be leaning strongly toward the idea that there is not only one universe that could be created, but that there are a wide range of fundamental theories (or different variants of the same theory, based on different physical parameters, original energy states, and so on) and our universe is just one of these possible universes.

In this case, the question becomes why our universe has properties that seem to be so finely tuned to allow for the existence of life. This question is called the fine-tuning problem and has prompted some physicists to turn to the anthropic principle for explanation, which dictates that our universe has the properties it does because if it had different properties, we wouldn’t be here to ask the question. (A major thrust of Smolin’s book is the criticism of this viewpoint as an explanation of the properties.)


Missing from the standard model is the governing principle of our universe – the Riemann Hypothesis in its general form. The values of the free parameters in the standard model arise from the mathematics of the Riemann Hypothesis, which sets strict and precise limits on those balances and imbalances of concentrated and diffused light that are possible from the perspective of an observer, i.e. someone in an arithmetically continuous environment. It is true that if the universe lacked the properties that make for arithmetic continuity (if it were not RH-friendly), then the laws of arithmetic would not apply, and we could not do mathematical physics; but that doesn’t explain why it has these properties, any more than the fact that the noose around the neck of a man to be hanged broke twice -thereby resulting in a pardon and saving his life- is explained by observing that the breaking was a condition of his ability to reflect on his survival. Just as an answer to the question of why the noose broke is required for the latter explanation, so an answer to the question of why the universe was set up in the way it was set up is required for the former. Invoked as an explanation for fine-tuning, the anthropic principle involves an elementary confusion between necessary and sufficient conditions.


The universe still has a number of mysteries, but the ones that most vex physicists are dark matter and dark energy. This type of matter and energy is detected by its gravitational influences, but can’t be observed directly, so physicists are still trying to figure out what they are. Still, some physicists have proposed alternative explanations for these gravitational influences, which do not require new forms of matter and energy, but these alternatives are unpopular to most physicists.”


Einstein’s theory of gravity differed from Newton’s in that for Einstein gravity is a fictitious force. According to Einstein, the earth’s orbit around the sun is not to be explained by the existence of a force propagating through space and acting upon these bodies, but by the principle of inertia together with the curvature of the space in which they move. This curvature supposedly arises because of the presence in that space of matter. But the idea that space is curved by matter is a half-truth, and this half-truth is exposed by the behavior of distant galaxies, and in particular by the bullet cluster. It isn’t quite true that matter curves space; rather, concentrations or deficiencies of light curve it, and matter involves both light and space. In a region of space where there is a balance of light and space, Newton’s laws of gravity and motion will apply more or less as written, but when this balance tips too strongly towards light, these laws will cease to apply. The most extreme imaginable imbalance is the singularity at the ‘beginning’ of the universe, when the balance is tipped so strongly toward light that there is nothing but light -light undiluted by space and time, infinite light. Galaxies are closer to this ideal, unattainable, singular state than is our solar system, and hence they involve more light and more curvature (the sun is closer to that state than the planets, and thus it involves more light and more curvature). With the right mathematics, the departure of galaxies from the strictures of Newton’s and Einstein’s theories of gravity -towards a greater degree of contraction than we are accustomed to from our terrestrial perspective- can be explained without the benefit of invisible, undetectable matter. Recall that the bullet cluster poses a problem for MOND because, while the majority of the mass of the colliding galaxies lies between them in the form of gas, the gravity is in the galaxies themselves.

No mere modification of the strength of gravity will be sufficient to account for this situation, because no such modification can explain why the gravity is in the galaxies rather than the gas. A solution is the following: yes, the mass is in the gas between the galaxies of the bullet cluster rather than in the galaxies themselves, but the light is in the galaxies, and it is imbalances of light or space, not mass per se, that produce curvature. What we observe when it comes to the rotation curves of distant galaxies is exactly what we ought to observe, and there is no need for dark matter.
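For reference, the anomaly at issue is the gap between the Keplerian decline a central mass predicts and the flat rotation curves actually observed (Rubin et al., 1980). A minimal sketch of the Newtonian prediction (the mass and radii are illustrative placeholders, not fitted values):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 2.0e41           # illustrative central mass, ~10^11 solar masses
KPC = 3.086e19       # one kiloparsec in metres

def v_newton(r):
    """Keplerian orbital speed (m/s) around a central point mass M."""
    return math.sqrt(G * M / r)

# Newton predicts the speed should fall off as 1/sqrt(r):
for r_kpc in (5, 10, 20, 40):
    print(f"r = {r_kpc:3d} kpc  v = {v_newton(r_kpc * KPC) / 1000:6.1f} km/s")
```

The predicted speed falls as r^{-1/2}, whereas observed curves stay roughly flat out to large radii; this discrepancy is what dark matter, MOND, and the light-imbalance proposal above each attempt to explain.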

Dark energy -otherwise known as the cosmological constant- is the sole force at work in the universe, the force driving the universe from light to dark. Everything else is an epiphenomenon of the action of this force.


Born, M and Einstein, A (1971), The Born–Einstein Letters (trans. I. Born)

Dirichlet, P (1837), Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält” (Proof of the theorem that every unbounded arithmetic progression, whose first term and common difference are integers without common factors, contains infinitely many prime numbers)


de Swart, J. et al (2017), How dark matter came to matter

Einstein, Albert (1916), Relativity: The Special and General Theory

Everett, H (1957), Theory of the Universal Wave Function

Feynman, R (1964) The Character of Physical Law

Feynman, R (1948). A Relativistic Cut-Off for Quantum Electrodynamics

Galileo, G (1632), Dialogue Concerning the Two Chief World Systems

Hawking, S (2003), ed., On the Shoulders of Giants: The Great Works of Physics and Astronomy (works by Copernicus, Kepler, Galileo, Newton, and Einstein)

Jones, A (2016), 5 Great Problems in Theoretical Physics

King, T et al (2014), Identification of the remains of King Richard III

Maxwell, J (1865) A dynamical theory of the electromagnetic field

Milgrom, M, (1983), A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis

Newton, Isaac (1687), The Principia: Mathematical Principles of Natural Philosophy


Odlyzko, A (1987), On the distribution of spacings between zeros of the zeta function

Peebles, P. et al (2003). The cosmological constant and dark energy

Riemann, B (1859), Über die Anzahl der Primzahlen unter einer gegebenen Grösse (On the Number of Primes Less Than a Given Magnitude)

Rubin, V, et al (1980), “Rotational Properties of 21 Sc Galaxies with a Large Range of Luminosities and Radii from NGC 4605 (R = 4kpc) to UGC 2885 (R = 122kpc)

Russell, B (1925), ABC of Relativity

Schrödinger, E (1935), Die gegenwärtige Situation in der Quantenmechanik (The present situation in quantum mechanics)

Spector, D (1990), Supersymmetry and the Möbius inversion function

Smolin, L (2006), The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next

Taylor, C (1999), The atomists, Leucippus and Democritus: fragments, a text and translation with a commentary by C.C.W. Taylor

Witten, Edward (2003), Physical law and the quest for mathematical understanding