**Abstract** *In November of 1859, two highly influential works were published. One was a 500-page book entitled On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life by Charles Darwin; the other was an eight-page paper by Bernhard Riemann entitled On the Number of Primes Less Than a Given Magnitude. On the face of it they are unrelated, but it is a great irony that the central idea of Darwin's work (evolution) is contradicted by the central idea of Riemann's work (the Riemann Hypothesis). In this note, the origin of this contradiction is traced, and it is argued that Riemann got the better of the unintentional but historic dispute.*

**Evolutionary Epistemology and Self-Refutation**

It is a minimum condition of the truth of a theory that it leave room for itself. The most blatant example of a theory that *doesn't* leave room for itself is the theory of truth (ascribable to Nietzsche) according to which there is no such thing as truth. Terry Eagleton in *Ideology: An Introduction* encapsulates the Nietzschean doctrine of truth in a sentence:

*There is no such thing as truth; everything is a matter of rhetoric and power; all viewpoints are relative; talk of “facts” or “objectivity” is merely a specious front for the promotion of specific interests.*

If this theory is true, then it isn't. All forms of relativism or subjectivism (doctrines according to which there are at least no truths that hold from every point of view) face much the same problem: if these relativistic/subjectivistic theories do not themselves hold true from every point of view, then neither can they apply to every point of view. One such self-refuting theory is the theory of evolution, according to which our capacity to know the truth about the universe is the result of random mutations that took hold only because of their survival value. But there is a big difference between a belief that is conducive to survival and one that is true. More particularly, it is possible for a false belief to *promote* survival. An example (a theme developed at length in Ang Lee's allegorical film *Life of Pi*) is the belief of a shipwrecked sailor floating in the ocean on a raft that he is sharing the raft with a tiger. Whilst such a belief may be false, it may also enhance the castaway's chances of survival. It follows from an evolutionary account of our epistemic faculties that neither the theory of evolution itself nor any other belief or set of beliefs is true in a sense other than that it arose from a belief-forming mechanism that promotes survival, a doctrine of truth that was explicitly advocated by Nietzsche, but which is at least a sub-text of all evolutionary philosophies. In short: the theory of evolution is a form of relativism/subjectivism, i.e. it denies that there are truths that hold from every point of view; relativism/subjectivism is self-inconsistent; there is a presumption that any self-inconsistent theory is false; and therefore the theory of evolution is presumably false.

**The Riemann Hypothesis**

An intellectual challenge of a much less broad nature facing the Darwinian world view arises from considerations surrounding an eight-page mathematics paper that was published in November 1859, the same month and year that saw the publication of Darwin's *On the Origin of Species*. In this paper, *On the Number of Primes Less Than a Given Magnitude*, Bernhard Riemann tackled the forbidding topic of the distribution of the prime numbers in the number line. As every schoolchild knows, a prime number is an integer that is divisible only by 1 and itself. 2 is prime, 3 is prime, 4 is not because it is divisible by 2, 5 is prime, 6 is not because it is divisible by 2 and 3, and so on. The primes are famously haphazard and unpredictable when seen from a local point of view (Marcus du Sautoy writes that "the primes look more like a set of lottery ticket numbers than a sequence generated by some simple formula or law"), but globally they seem to exhibit a clear trend: they seem to thin out as we count along the line, i.e. the further we count, the fewer primes there are. In 1896, Jacques Hadamard and Charles Jean de la Vallée Poussin independently proved this trend to be irreversible (the Prime Number Theorem). What they *didn't* prove, and what Riemann hypothesized in his 1859 paper, is that while the global trend is toward a loss of prime-density, from a local point of view a change in prime-density is just as likely to involve a gain as a decrease.

To see why the hypothesis is true, consider first coin flipping. If a coin is fair, then there is a margin of error between heads and tails whose growth rate is approximately the square root of the number of coin flips. One way to depict these imbalances is by assigning 1 to heads and -1 to tails, summing the 1s and -1s, and registering the imbalances as departures in a positive or negative direction from the *x*-axis.
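The coin-flip imbalances just described can be simulated directly. The following is an illustrative sketch (not from the paper): it assigns 1 to heads and -1 to tails, tracks the furthest excursion of the running sum, and averages over many trials, showing growth of the order of the square root of the number of flips.

```python
# Illustrative simulation: the typical imbalance between heads and tails
# in n fair coin flips grows roughly like sqrt(n).
import random

def max_excursion(n_flips, rng):
    """Largest distance from 0 reached by summing +1 (heads) and -1 (tails)."""
    position, furthest = 0, 0
    for _ in range(n_flips):
        position += 1 if rng.random() < 0.5 else -1
        furthest = max(furthest, abs(position))
    return furthest

rng = random.Random(42)
trials = 200
for n in (100, 1_000, 10_000):
    avg = sum(max_excursion(n, rng) for _ in range(trials)) / trials
    print(f"n={n:>6}: average furthest distance ~ {avg:.1f}, sqrt(n) = {n ** 0.5:.1f}")
```

Multiplying the number of flips by 100 multiplies the average excursion by roughly 10, which is the square-root scaling the text appeals to.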
We call these pathways "random walks", and if we average the furthest distance of a random walk from its starting point, we find that it converges to approximately the square root of the number of steps. We could imagine the average to be more or less than this square-root value: if it were more, then random walks would on average be more random, and if it were less, then random walks would on average be less random. We can relate coin flipping and random walks to the Möbius function, which assigns the value 1 to square-free integers with an even number of prime factors, -1 to square-free integers with an odd number of prime factors, and 0 to integers that are not square-free. In the same way that the 1s and -1s assigned to heads and tails are summed to produce a random walk, the Mertens function sums the 1s and -1s assigned to square-free integers to produce the random walk depicted below:
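The Möbius values and their running sum (the Mertens function) can be computed directly. The sketch below is illustrative (not from the paper); it checks that over the range computed, the walk's largest excursion remains comparable to, and here well below, the square root of the range.

```python
# Sieve for the Möbius function mu(k), then accumulate the Mertens function
# M(n) = mu(1) + mu(2) + ... + mu(n), tracking its largest excursion.

def mobius_table(n):
    """mu[k] for 1 <= k <= n: mu(k) = 0 if k has a squared prime factor,
    else (-1) raised to the number of distinct prime factors of k."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1          # one factor of p flips the sign
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0            # squared factor kills the term
    return mu

n = 10_000
mu = mobius_table(n)
mertens, peak = 0, 0
for k in range(1, n + 1):
    mertens += mu[k]
    peak = max(peak, abs(mertens))

print(f"M({n}) = {mertens}, max |M| up to {n} = {peak}, sqrt({n}) = {n ** 0.5:.0f}")
```

Over this small range the walk stays within square-root size, which is the behaviour the Riemann Hypothesis asserts (up to epsilon corrections) for all n.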

If we look closely, we see that this random walk is a superposition of smooth waves that oscillate about the *x*-axis:

Now we could imagine this superposition to have more or less amplitude. If the amplitude were more, then the random walk would be more random, and if it were less, then the random walk would be less random. This is where the Riemann Hypothesis comes in. Technically it says that all "the non-trivial zeros of the zeta function have real part equal to 1/2", which to most ears is an incomprehensible statement, but an equivalent and quite comprehensible statement of the hypothesis says simply that the random walk of the Mertens function can be neither more nor less random than indicated by the white-coloured wave. This is saying, on the one hand, that the non-trivial zeros of the zeta function all have real part 1/2, and equivalently, that the amplitude of the superposition has square-root size.

Consider next a circle whose area is supposed, for the sake of argument, to be 1. Then an energy source E located at the center of this circle will possess the same strength from center to circumference, for E/1 = E, which is the same thing as saying that there is no difference between center and circumference. If there is no such difference, then either the circle has no area and no radius (it is a point), or it has infinite area and an infinite radius (it is a line); and if there is no such difference, then E has either infinite strength or no strength. But in reality the strength of E always lies between these extremes. By the formula A = πr², we know that a circle of area 1 has a radius of 1/√π, or 0.56419… (compare e^-γ ≈ 0.561459). Since gamma is a limit (of the difference between the sum and the integral of the harmonic series), it is a measure better able to capture the dynamism we seek than is a static constant. But gamma is the limit of a potentially infinite number of values, so instead of

we may write

which can then be re-written as

This extended equation involves a significant division between s = 1 and real values of s other than 1, for only in the case s = 1 is the limit not reached. We know this to be true, for otherwise there would be a value of *x* at which the limit is reached, and a circle of area exactly 1. If we examine the equation, we see that it is an expression of the inverse square law, and that it concerns quantities that spread out rather than die off. The inverse square law comes from geometry alone, but its best-known examples arise in physics: gravity and electromagnetism are governed by the inverse square law. These forces are to be contrasted with the nuclear forces, because the former, and not the latter, are long-ranged. In its familiar form, the inverse square law tells us that as the radius of a circle grows arithmetically its area grows quadratically, meaning that a flux arising from an energy source located at the center of a circular area varies inversely as the square of the distance from that center. But if the units are units of area rather than units of distance, then we get an inverse square root law: as the radius of a circle grows arithmetically its area grows quadratically, but as the area of the circle grows arithmetically, its radius grows only as a square root. If we re-write the equation accordingly, and consider that

in the case that s = 1, we find that if s is a real number other than 1, then the ratio of the circle's circumference to its diameter changes, which changes the relationship between the flux and its area, and we get an area and a flux that are short-ranged.
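As a numerical aside, two of the quantities used above can be checked directly. This is a sketch under assumptions drawn from the discussion (the paper's own equations are not reproduced here): the radius of a circle of area A is √(A/π), and gamma is the limit of the harmonic sum minus the logarithm.

```python
# Numerical sketch: the radius of a circle of area 1, the Euler-Mascheroni
# constant gamma as the limit of H_n - ln(n), and square-root growth of the
# radius as the area grows arithmetically.
import math

# From A = pi * r^2, a circle of area 1 has radius sqrt(1/pi).
r = math.sqrt(1 / math.pi)
print(f"radius of a circle of area 1: {r:.5f}")   # ~0.56419

# gamma = lim (H_n - ln n), approached from above as n grows.
for n in (10, 1_000, 100_000):
    h = sum(1 / k for k in range(1, n + 1))
    print(f"n={n:>7}: H_n - ln n = {h - math.log(n):.6f}")
# The printed values approach gamma ~ 0.577216.

# Inverse square root law: arithmetic growth in area gives only
# square-root growth in radius.
for A in (1, 2, 3, 4):
    print(f"area {A}: radius {math.sqrt(A / math.pi):.5f}")
```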

Everyone knows the expression "beauty is in the eye of the beholder", which is a manifestation of one side of the ancient philosophical tension between the idea that the world has in some sense a mind-independent existence (realism), and the idea that the world is in some sense in the mind (anti-realism). Arithmetic consciousness depends on the existence of an isomorphic relationship between an external and an internal number line, and so this same issue arises in the form of the question of whether a disproportion in a count is due to the mind-independent world, to the number line itself, or to the relationship between the world and the number line. Consider in this regard a smooth function f(x) that approximates the count of the primes (the logarithmic integral Li(x) is the standard example):

f(x) would ordinarily be thought of as simulating the primes, but the equations imply that the error between the primes and the prime-simulation represents an ineliminable randomness inherent in our internal notions of unity and zero, in which case the difference between the function and the primes might instead be thought of as simulating 1 or 0.
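Since the text's own f(x) is not reproduced, the sketch below substitutes the logarithmic integral Li(x), a standard smooth prime-simulating function, and compares it with the true prime count π(x); the gap between the two columns is the "error" discussed above.

```python
# Compare the true prime count pi(x) with the smooth approximation
# Li(x) = integral of 1/ln(t) from 2 to x (computed by the trapezoid rule).
import math

def prime_count(n):
    """pi(n) via a sieve of Eratosthenes."""
    flags = [True] * (n + 1)
    flags[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = [False] * len(range(p * p, n + 1, p))
    return sum(flags)

def li(x, steps=100_000):
    """Offset logarithmic integral: integral of 1/ln(t) over [2, x]."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    for i in range(1, steps):
        total += 1 / math.log(2 + i * h)
    return total * h

for x in (1_000, 10_000, 100_000):
    print(f"x={x:>6}: pi(x) = {prime_count(x):>5}, Li(x) ~ {li(x):.1f}")
```

The error |Li(x) - π(x)| stays small relative to π(x), and under the Riemann Hypothesis it grows only at a square-root-times-log rate.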

We can measure anything in the classical domain with a unit measuring rod of any length. This symmetry can be relied on only because the difference between the length of the measuring rod and the length of everything else scales in a proportional manner; in non-classical domains this difference either doesn't scale proportionally, or, if it does, it does so for a strictly finite number of scaling factors. The equation

shows that the gap between any external and internal arithmetic correspondence would collapse if, per impossibile, the area were equal to 1 and the radius non-existent, or the area were equal to 1 and the radius infinite. It therefore also shows that the symmetry in virtue of which arithmetic is possible in any classical dimension is broken whenever the gap between an external count and an internal count becomes excessively wide or narrow. When this symmetry breaks, we get levels of randomness in the relationship between the world and the number line that make classical mathematics next to useless. From the equation comes a tailor-made solution of the (mass gap) problem of why the nuclear forces are short-ranged, and it also yields a solution to the measurement paradox illustrated by the 2-slit experiment. In this experiment, electrons are shot from an electron gun through 2 slits, and their presence is registered on a screen behind the slits. If detectors are attached to the slits, and we assign 1 to the particles travelling through slit 1 and -1 to the particles travelling through slit 2, then the count of the electrons through the slits is like the count of the coins or the count of the square-free integers, and their behavior in space is governed by the inverse square root law. If, however, detectors are not attached to the slits, we get a situation that at first sight makes no sense: in this case the electrons appear to travel, not like individual particles, but like a single wave through the slits, and so what is called an "interference pattern" appears on the screen.
But the pattern that forms when detectors are attached to the slits is an interference pattern too; the difference is that, like the interference pattern that occurs when the waves formed by random walks are summed, and the interference pattern underlying the sums of square-free integers in the Mertens function, it is an interference pattern in accordance with the inverse square root law: a symmetric rather than an asymmetric interference pattern, it can be said. These symmetric interference patterns occur only when s = 1, in which case the gap between the internal count and the external count is neither too wide nor too narrow. The same equation also yields a solution to the problem of how to extend Newton's law of gravity beyond the classical domain. When s = 1, and the gap between the internal count and the external count is neither too narrow nor too wide, there is a symmetrical relationship between energy and space, and the formula

will yield predictions which are basically the same as those yielded by 1/r^2. If however s ≠ 1, the formula makes radically different predictions.

The problem with which we are primarily concerned here is the RH, and at its deepest level the RH is saying that the randomness there is between the count of the primes and the primes themselves (a randomness that permits us to use arbitrary notions of unity and zero to count, and that protects an important natural symmetry) is conserved only if the growth rate of this randomness possesses a special size: a size corresponding to the growth rate of the radius of a circle of area n, and to the constant 1/√π. The truth of the hypothesis arises from the fact that a greater or lesser amount of randomness in the relationship between the count of the primes and the primes (a non-trivial zero with real part other than 1/2) would mean that "primeness" dies off and the number line has finite length. By an old-fashioned reductio ad absurdum argument, then, the RH is true: assume it to be false; then the primes are finite; but the primes are infinite; therefore it is true. The arithmetic progression corresponding to the zeta function is a special case of the progressions of the form qn + a, for the values a = 0 and q = 1, but there are an infinite number of different progressions of the form a, a + q, a + 2q, a + 3q… containing infinitely many primes, so long as a and q are co-prime (Dirichlet's theorem). The generalized form of the RH says that the randomness of these progressions has a square-root growth rate. If the Generalized Riemann Hypothesis is false (if there is a complex zero of any L-function with real part other than 1/2), then these less prime-dense but nonetheless infinitely long number lines would have finite length. Hence, by the same reductio ad absurdum argument, the GRH is true. We can further illustrate the viability of this solution, and of the others, with the example of an excited quantum system, such as the kicked quantum rotor, which is able to mimic classical systems for a limited time before reaching a state of localization at what is sometimes known as the "quantum break time".
This phenomenon led Julian Brown to claim in the *New Scientist* (1996) that

*If we could study a classical system for longer than its quantum break time, we would see that the behavior was not chaotic but quasi-periodic instead.*

Quantum systems involve a lesser degree of randomness than do classical systems, and hence they are to be associated with a wave that has a less-than-square-root growth rate. If classical systems were merely highly energized quantum systems, then every classical system would eventually reach quantum break time.

But a classical system that is merely a long-ranged quantum system in disguise contradicts the infinitude of the primes, from which it follows that the quantum systems from which classical systems arise are essentially short-ranged, and so comprised of what we can call “sub-units” and/or “super-units” rather than units. These are not scalable units, but *aspects* of scalable units.
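Returning to the arithmetic progressions a, a + q, a + 2q, … discussed above: the following minimal sketch generates the first few primes in a given progression, and enforces the co-primality condition under which Dirichlet's theorem guarantees infinitely many primes.

```python
# Primes in the arithmetic progression a, a+q, a+2q, ...
# Dirichlet's theorem: there are infinitely many such primes iff gcd(a, q) = 1
# (assuming a, q positive and not both giving a trivial progression).
from math import gcd

def is_prime(n):
    """Trial division, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def primes_in_progression(a, q, count):
    """First `count` primes of the form a + k*q."""
    assert gcd(a, q) == 1, "the progression contains at most finitely many primes otherwise"
    found, k = [], 0
    while len(found) < count:
        n = a + k * q
        if is_prime(n):
            found.append(n)
        k += 1
    return found

print(primes_in_progression(1, 4, 5))  # [5, 13, 17, 29, 37]
print(primes_in_progression(3, 4, 5))  # [3, 7, 11, 19, 23]
```

The two progressions modulo 4 shown here each carry their own infinite supply of primes; the Generalized RH concerns the square-root-sized randomness of such progressions.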

**The Arrow of Time**

Richard Feynman in *The Character of Physical Law* invites us to imagine that atoms come in blue-coloured and white-coloured varieties, separated into compartments. If the separation is removed, then just as blue dye turns water a pale blue colour, the atoms form a pale blue mixture. He observes that individual collisions provide no clue as to the irreversibility of the mixing process, but that studying a film of the mixing played in reverse reveals that

*…every one of the collisions is absolutely reversible, and yet the whole moving picture shows something absurd, which is that in the reverse picture the molecules start in the mixed condition… and as time goes on, through all the collisions, the blue separates from the white…*

He went on to say that

*…it is not natural that the accidents of life should be such that the blues will separate themselves from the whites…*

But he failed to provide any reason why this should be unnatural. The one-way nature of this process is familiar from the experiment in which a gas is confined to one of two compartments. If the separation between the compartments is removed, then the gas spontaneously distributes itself in a uniform manner throughout the two compartments, but it does not spontaneously revert to the separated state. More familiar still is the breaking of an egg. We never see a broken egg spontaneously reassemble, and there is no way to reassemble an egg after it has been broken ("All the king's horses and all the king's men couldn't put Humpty together again."). But although both these processes involve a one-way direction when viewed from a sufficiently global perspective, they can go either way when viewed from a sufficiently local perspective: the individual atoms comprising the gas molecules might just as well go from compartment B to compartment A as from compartment A to compartment B, and if we study the individual atoms comprising Humpty Dumpty we get no clue as to the fact that Humpty cannot be reassembled. It is only by considering a sufficiently large group of atoms that the loss of energy-density known as 'entropy' is found to involve a one-way direction. A film of a broken egg reassembling seems absurd, but in what does this impression of absurdity consist? Is it merely unfamiliarity with the reverse direction? The prevailing view is that it is improbable, but not impossible, that things go in reverse. By appeal to results surrounding *On the Number of Primes Less Than a Given Magnitude*, and to the re-expression of the traditional inverse square law in terms of units of area, we can give an *a priori* reason for the classical/quantum divide and for the time-irreversibility of things on the classical side of this divide, and show that there are strict limits on the reverse direction.
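The statistical one-wayness of the two-compartment experiment can be illustrated with a toy model (a sketch of the standard Ehrenfest-urn argument, not a claim from the text): one randomly chosen particle switches compartments at each step, the count drifts toward an even mixture, and a spontaneous return to the fully separated state is, in practice, never observed.

```python
# Ehrenfest-style toy model of the two-compartment gas: each step, one
# uniformly random particle hops to the other compartment. Starting fully
# separated, the system relaxes to ~50/50 and does not spontaneously revert.
import random

random.seed(0)
N = 100                      # particles, all starting in compartment A
in_A = N
returned_to_start = False
for step in range(100_000):
    # the chosen particle is in A with probability in_A / N
    if random.random() < in_A / N:
        in_A -= 1            # it hops A -> B
    else:
        in_A += 1            # it hops B -> A
    if step > 1000 and in_A == N:
        returned_to_start = True

print(f"after 100000 steps: {in_A} of {N} in A; full reversal seen: {returned_to_start}")
```

Each individual hop is perfectly reversible, exactly as in Feynman's film; the one-way character appears only in the aggregate count.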

In *Variae observationes circa series infinitas* (1737), Euler imagined the subtraction of the prime-divisions of 1, and all of their multiples, from the harmonic series

He imagined this subtraction taking place prime-series by prime-series, and rightly pointed out that if this process could be completed, nothing would be left but 1. An undivided unit, or 1/0, is what mathematicians and physicists call a 'singularity', and it is to be identified with a circle of zero radius and an infinite state of prime-density. At this point in the physical realm there can be no distinction between mind and matter (between an internal and an external number line), and so by running Euler's thought experiment in reverse, the number line, and the time-line of the physical universe, are jointly created by the addition of prime-divisions and their multiples to the harmonic series. In this way, both the form (numbers) and the content (physical objects) are brought, unit by unit, into being:
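Euler's observation (that the completed subtraction leaves nothing but 1) can be checked over a finite range: strike from 1..N every number with a prime factor, prime by prime, and only the unit survives.

```python
# Finite sketch of Euler's subtraction: remove, prime by prime, every
# multiple of every prime from {1, ..., N}; only 1 has no prime factor,
# so only 1 survives.
N = 1000
survivors = set(range(1, N + 1))
for p in range(2, N + 1):
    if p in survivors:
        # p is the least surviving number > 1, hence prime:
        # remove p and all of its multiples
        for m in range(p, N + 1, p):
            survivors.discard(m)

print(survivors)  # {1}
```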

This amorphous, massless state is the unreachable pure-energy limit of the numerical and physical universe, and we can see that arithmetically comprehensible processes therefore involve the conversion of pure energy into mass, by a process that corresponds to the process by which the number line is formed. Since this process involves the reproduction of the unit by the reduction of prime-density, we know it to be time-irreversible. Hence there is an arrow of time that points from past to future, and an arrow of space that points from contraction to expansion; but these arrows are not consistent with the existence of an evolutionary arrow that starts *ex nihilo* with the Big Bang and leads to ever more complex states of self-organization by means of random mutation and natural selection, let alone with a past-future symmetric arrow. Missing in particular are a simple beginning, and random change that is both creative and capable of global increase:

**A Simple Beginning.** From Riemann's 1859 paper comes the idea that bridges the gap between the classical and the non-classical domains: classical phenomena, which are to be defined by their place in spatiotemporal environments that are long-ranged, and which possess a single direction (past to future, contracted to expanded), emerge from the behavior of non-classical phenomena, which are to be defined by their place in spatiotemporal environments that are short-ranged, and which curve back on themselves. The nature of the bridge is such that quantum objects, which possess the kind of simplicity that the big bang theory and evolutionary cosmology require for their starting point, must be regarded as co-existing with the classical universe in the way that parts co-exist with a whole. The classical universe, it can be said, is a symmetric superposition, and contained within this symmetric superposition are the asymmetric superpositions of quantum mechanics. The big bang theory, and evolutionary cosmology generally, ask us to believe that the universe once had the size and complexity of a single particle, but when the classical universe is traced back as far as possible in time and in space, one doesn't arrive at some extremely simple point of origin, but always at a universe whose order of complexity is classical, and not less than that of the present universe. The form of logical error which spoils the idea of an evolutionary arrow of time leading back to a simple founding event is the "category mistake": evolutionary cosmology equivocates between classical units and non-classical sub- and super-units, and supposes on the basis of this equivocation that the classical world can be scaled down to quantum size.
This is the reverse form of the same absurdity as that involved in the idea of scalable quantum computers: classical units are not scaled-up quantum units, and hence the latter cannot be scaled up to the size of the former (quantum computing), nor can the former be scaled down to the size of the latter (evolutionary cosmology).

**Creative Random Change.** While the arrows of time and space have a single direction, given by the global decrease in prime-density, we know that prime-density is subject to local fluctuations. This means that the arrows can point either way from a local point of view. When they point in the direction of the conversion of energy into mass, time goes forward and space expands, and when they point in the direction of the conversion of mass into energy (the opposite direction to entropy and the expansion of the universe), time goes backwards and space contracts. If there is no change in prime-density, then the arrow points neither forwards nor backwards. Thus there are three types of change: one destructive, one creative, and one neither destructive nor creative. But the Riemann Hypothesis (all the complex zeros of the zeta function lie on the 1/2 line), of which the Prime Number Theorem (no complex zeros lie on the 1 line) is a special case, imposes a strict limit on creative change, whose direction is that of an increase in prime-density and an increase in the rate at which mass is converted to energy. All increases in prime-density are local, and they take place against a global background of decrease; equivalently, all creative random changes are local, and they take place against a global background of random changes that are destructive. In short, the creative random changes that evolutionary processes depend on to spin, as it were, straw into gold are like bumps on a surface that is sloping irreversibly downwards.

Like the primes in the number line, they die off, and (without a trans-arithmetic intervention) they can never come back to life. What is commonly called "evolution" is in reality *devolution*, and evolution is an illusion born of the combination of the ancient but false assumption of the Greek atomists that the universe can be explained by a philosophy in which the parts are prior to the whole, and the invalid inference from the ability of random genetic changes to produce minor improvements in an organism (finches with longer beaks, penicillin-resistant bacteria, and so on) to the conclusion that everything in the universe arose in this way.

Karl Popper remarked in his autobiography:

*I have come to the conclusion that Darwinism is not a testable scientific theory, but a metaphysical research programme-a possible framework for testable scientific theories.*

But not all conceptual frameworks are created equal, and unfortunately (given the breadth and depth of its influence on our thinking about the world and our place in it) the Darwinian framework is *not* a rational framework for testable scientific theories.

**References**

Abal, G, et al (2014), Quantum suppression of chaos in the Fermi accelerator

Brown, J (1996), When Two Worlds Meet

Casati, G, et al (1979), Stochastic Behavior in Classical and Quantum Hamiltonian Systems

Darwin, C (1859), On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life

Dirichlet, P (1837), Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält (Proof of the theorem that every unbounded arithmetic progression, whose first term and common difference are integers without common factors, contains infinitely many prime numbers)

Euler, L (1737), Variae observationes circa series infinitas (Various Observations on Infinite Series)

de la Vallée Poussin, C (1896), Recherches analytiques sur la théorie des nombres premiers (Analytical Research on the Theory of Prime Numbers)

du Sautoy, M, Prime Numbers Get Hitched, Seed Magazine, http://seedmagazine.com/content/article/prime_numbers_get_hitched/

Feynman, R (1967), The Character of Physical Law

Gutzwiller, M (1990), Chaos in Classical and Quantum Mechanics

Eagleton, T (1991), Ideology: An Introduction

Hadamard, J (1896), Sur la distribution des zéros de la fonction zeta(s) et ses conséquences arithmétiques (On the distribution of the zeros of the zeta function and some consequences for arithmetic)

Lee, A (2012), Life of Pi

Popper, K (1976), Unended Quest: An Intellectual Autobiography

Noether, E (1915), Der Endlichkeitssatz der Invarianten endlicher Gruppen (The Finiteness Theorem for Invariants of Finite Groups)

Riemann, B (1859), Über die Anzahl der Primzahlen unter einer gegebenen Grösse (On the Number of Primes Less Than a Given Magnitude)