On the Dark World of Richard Dawkins 2.9.0

Abstract In The Selfish Gene, The Blind Watchmaker and The God Delusion, Richard Dawkins expresses what is undoubtedly the mainstream view of the origin of species. An outgrowth of atomism, materialism, and the “scientific” revolution, the epistemological and ethical implications of the story of origins he champions (“Once upon a time there was a dark land in which might was the only right and in which no one lived happily ever after.”) have given the world Nietzsche, Adolf Hitler and the Nazis, the Holocaust, the moral bankruptcy of modern America and its representative -the man most likely to start World War III- Donald Trump. And of course it has given us Dawkins himself. Here we show that a pair of necessary conditions of the reality of Dawkins’ brutal, mindless, pitiless world cannot be met…



William Paley (Natural Theology, or Evidences of the Existence and Attributes of the Deity collected from the Appearances of Nature, 1802):

In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for anything I knew to the contrary, it had lain there forever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer I had before given, that for anything I knew, the watch might have always been there. … There must have existed, at some time, and at some place or other, an artificer or artificers, who formed [the watch] for the purpose which we find it actually to answer; who comprehended its construction, and designed its use. … Every indication of contrivance, every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater or more, and that in a degree which exceeds all computation.

Richard Dawkins’ apparent life’s mission -and his focus in The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (1986)- is to show that Paley is wrong because a) the truth of the theory of evolution shows that complex things can arise from things more simple than themselves, and b) the direction of the universe is the direction from simplicity to complexity. Dawkins:

The beauty of evolution is that it does provide an explanation of how you can get complexity out of simplicity. It does it by slow, gradual degree. At no point are you postulating the sudden coming into existence of a complicated being.

Science offers us an explanation of how complexity (the difficult) arose out of simplicity (the easy). The hypothesis of God offers no worthwhile explanation for anything, for it simply postulates what we are trying to explain. It postulates the difficult to explain, and leaves it at that.

In the beginning was simplicity.

The analogy between telescope and eye, between watch and living organism, is false. All appearances to the contrary, the only watchmaker in nature is the blind forces of physics, albeit deployed in a very special way. A true watchmaker has foresight: he designs his cogs and springs, and plans their interconnections, with a future purpose in his mind’s eye. Natural selection, the blind, unconscious, automatic process which Darwin discovered, and which we now know is the explanation for the existence and apparently purposeful form of all life, has no purpose in mind. It has no mind and no mind’s eye. It does not plan for the future. It has no vision, no foresight, no sight at all. If it can be said to play the role of watchmaker in nature, it is the blind watchmaker.

I say that he is mistaken, and that despite the popularity of his views at the time of writing (and the contempt in which unbelievers are held by believers), the theory of evolution will prove to be a deceptive half-truth. To make the anti-Darwinian case, I firstly identify the following necessary conditions of the theory:

  • Simplicity: the origin of the universe from a state of simplicity;
  • Creativity: the ability of random step-wise changes following a simple-to-complex direction to produce increasingly complex, and self-organized, systems;

and consider whether they are met…


Quantum Gravity
The General Theory of Relativity in its present form says that space-time is curved by mass. It follows that, in the beginning, all the mass of the universe was concentrated into a zero-dimensional point. That the idea is wrong is shown by the implication of multiple singularities at the centers of black holes, by the lack of a coherent framework for both large-scale and small-scale objects, and by the flat rotation curves of distant galaxies. A similar idea that doesn’t carry any absurd consequences is the idea of the infinite compression of energy (light), and the elimination of mass. Now curvature is to be attributed, not to mass -which is a combination of light and space- but to imbalances of light and space. Mathematically, we capture what it is to be balanced, and what it is to depart from balance and thereby produce curvature, by re-expressing the traditional equation for a circle of area 1 -\pi \sqrt{\frac{1}{\pi }}^2=1- as

\lim_{x\to \infty } e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1

Where the first equation fails by implying that an energy source located at the center of this unit-area circle is undiminished from center to circumference (it has either a zero or an infinite radius), the second provides us with a potentially infinite number of energy levels. Seeing that \gamma is a special case of \zeta (s)-\frac{1}{s-1}, we can go from \lim_{x\to \infty } e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1 to the general

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1
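Before going further, the limit claim can be checked numerically. The following is a minimal sketch (Python with the mpmath library; the function names bracket and const are ours) that evaluates the quantity at increasing x for s = 1, via the Euler-Mascheroni constant, and for a representative s = 11, and watches it approach 1:

```python
from mpmath import mp, mpf, zeta, euler, e, log

mp.dps = 25

def bracket(s, x):
    # sum_{n=1}^{x} n^(-s) minus integral_1^x n^(-s) dn
    partial = sum(mpf(1) / mpf(n) ** s for n in range(1, x + 1))
    integral = log(x) if s == 1 else (mpf(x) ** (1 - s) - 1) / (1 - s)
    # the constant e^{(s+1)(zeta(s) - 1/(s-1))}, read as e^{2*gamma} when s = 1
    const = e ** (2 * euler) if s == 1 else e ** ((s + 1) * (zeta(s) - mpf(1) / (s - 1)))
    return const * e ** (-(s + 1) * (partial - integral))

for x in (10, 100, 1000):
    print(x, bracket(1, x), bracket(11, x))   # both columns tend to 1
```

The s = 11 column settles on 1 almost immediately, while the s = 1 column creeps towards it, which is consistent with the repetitive versus unfolding dynamics described below.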

Now we see that although the limit is 1 regardless of the values of x or s, we get an inter-relationship between the exterior and interior circles that is non-repeating if and only if s = 1, meaning for instance that the exterior circles in the figures above can be expanded indefinitely and/or the interior circles contracted indefinitely. These dynamics are associated with spirals that unfold forever:

1-e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2

However if s takes on a positive real value other than 1 -in which case the circles take on a certain maximum/minimum size- we get an inter-relationship, and a contraction/expansion process, that is repetitive. These dynamics are associated with the circular forms of a spiral:

1-e^{(11+1) \left(\zeta (11)-\frac{1}{11-1}\right)} \left(\left(\frac{1}{\exp \left((11+1) \left(\sum _{n=1}^x \frac{1}{n^{11}}-\int_1^x \frac{1}{n^{11}} \, dn\right)\right)}\right){}^{\frac{1}{11+1}}\right){}^{11+1}

1-e^{(12+1) \left(\zeta (12)-\frac{1}{12-1}\right)} \left(\left(\frac{1}{\exp \left((12+1) \left(\sum _{n=1}^x \frac{1}{n^{12}}-\int_1^x \frac{1}{n^{12}} \, dn\right)\right)}\right){}^{\frac{1}{12+1}}\right){}^{12+1}


We can use this mathematics to extend the inverse square law, and with it Newton’s law of gravity, beyond regions in which there is a balance of light and space, indeed to all regions amenable to mathematical description. Take the 1/r^2 formula, and consider a circle of area 1. There is in the limit a perfect balance of light and space (E = 1 and A = 1). If however we write \pi as the partial sum/integral e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)} and/or s is a positive real number other than 1, then we have an imbalance. We can say the following: if s = 1, then there is an approximately symmetrical relationship between light and space, and the new formula will yield predictions similar to those yielded by 1/r^2. If however s \neq 1, the balance is strongly tipped toward light (extreme example: the singularity of concentrated light at the root of the universe), or conversely toward space (extreme example: the interior of black holes), and the new formula makes entirely different predictions than 1/r^2. When s \neq 1, the region of space described by the new law curves back on itself. In these light- or space-dense environments, curvature -as a function of density- is far greater. But all of these degrees of curvature -and everything in this universe- are governed by the same equation:

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1

The Golden Key
In Variae observationes circa series infinitas (1737), Euler argued that the product, continued to infinity, of this fraction

\frac{2\ 3\ 5\ 7\ 11\ 13\ 17\ 19\text{...}}{1\ 2\ 4\ 6\ 10\ 12\ 16\ 18\text{...}}

in which the numerators are prime numbers and the denominators are one less than the numerators, equals the sum of the infinite series

1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}\text{...}
and they are both infinite. To prove his point Euler invites us to imagine the extraction from the second series of a prime denominator and of all remaining multiples of that prime denominator, until everything except the first term 1 has been eliminated. Let

x=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}\text{...}

Multiplying both sides by \frac{1}{2} gives

\frac{x}{2}=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\frac{1}{8}\text{...}

and subtracting the second series from the first eliminates all of the denominators that are divisible by 2. This leaves

\frac{x}{2}=1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{9}\text{...}
To eliminate the denominators that are divisible by 3, we divide both sides by 3 to get

\frac{x}{2\ 3}=\frac{1}{3}+\frac{1}{9}+\frac{1}{15}+\frac{1}{21}\text{...}

Subtracting again eliminates all remaining denominators that are multiples of 3 leaving

\frac{2 x}{2\ 3}=1+\frac{1}{5}+\frac{1}{7}+\frac{1}{11}+\frac{1}{13}\text{...}

Applying this eliminatory process to all of the prime numbers leaves

\frac{1\ 2\ 4\ 6\ 10\ 12\ 16\ 18\text{...}}{2\ 3\ 5\ 7\ 11\ 13\ 17\ 19\text{...}} x=1
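Euler’s elimination can be rehearsed for a finite stretch of the number line. A minimal sketch (Python with sympy; the choice of N and the variable names are ours) strikes out every prime denominator and its multiples from 1…N, and confirms that only the undivided unit survives:

```python
from sympy import primerange

N = 10_000
survivors = set(range(1, N + 1))
for p in primerange(2, N + 1):
    survivors -= set(range(p, N + 1, p))   # strike out p and all its multiples

print(survivors)                           # {1}: the undivided unit
print(sum(1 / n for n in survivors))       # 1.0
```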

This is a thought-experiment, but if these eliminations could be performed in the physical world, they would result in an undivided unit, and in the shrinking of space and the slowing of time to a zero-dimensional point. With all of reality contracted to a zero-dimensional point, the distinction between the world and the mind that surveys it is lost. The takeaway from Euler’s thought-experiment here is that, since both prime-density and energy-density must at this point be infinite, this is the state of pure energy required by, but conspicuously missing from, the General Theory of Relativity. This argument is justifiably called by John Derbyshire “The Golden Key”, and we can turn the key as follows:

U=e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \gamma }}}\right)^2

0=-e^{2 \gamma } \left(\sqrt{\frac{1}{\exp \left(2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)\right)}}\right)^2+\epsilon +e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \gamma }}}\right)^2+1


u=e^{2 \gamma } \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^2

0=-e^{2 \gamma } \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^{p_x} \frac{1}{n^s}-\int_1^{p_x} \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^2+e^{2 \gamma } \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^2+\frac{1}{s}+\epsilon

\hbar = the point at which the gap between e^{2 \gamma } \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^2 and e^{2 \gamma } \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^{p_x} \frac{1}{n^s}-\int_1^{p_x} \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^2+\frac{1}{s} ceases to decrease.


There is a distinction here between infinitely expandable (classical) and finitely expandable (non-classical) matrices, arising from the difference between s = 1 and positive real values of s other than 1. The values on the center diagonal correspond in the one case to the terms of the reciprocals of the integers (the harmonic series), and in the other to the reciprocals of the primes and their squares, and so from Euler’s argument we know that eliminating all of the finite matrices from an infinite matrix would cause it to collapse to singularity. Dirichlet observed in 1837 that if a and q share no common factor other than 1 -if they are “co-prime”- then the progression a, a + q, a + 2q, a + 3q… contains infinitely many primes, and it is associated with an L-function. If and only if all of the complex zeros of L-functions have real part 1/2, then there is a balance of prime-density and sparsity corresponding to the equation \lim_{x\to \infty } \, e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}=1 in the circumstance that s = 1, and relevant energy systems are infinitely expandable. The governing principle of this model of universe -the thing that both keeps the universe from singularity and maintains the balance in virtue of which classical systems are infinitely expandable- is therefore the Riemann Hypothesis, the first form of which was published by Riemann in the same year and month as Darwin published On the Origin of Species. With a singularity of light as our starting point, classical energy systems, which can be described by potentially infinite matrices, arise from this singularity by the disruptive role played by non-classical energy systems, which can be described by strictly finite matrices. Illustrated below is the idea that every solution to the equation such that the solution is a complex zero of an L-function (\left(\frac{1}{2}+i m\right)+\left(\frac{1}{2}-i m\right)=1) is associated with a finite matrix, and by reference to these solutions and/or to these equations we can identify the points at which one finite matrix ends and another matrix begins:

[tables omitted]

P versus NP and the Computational Arrow of Time
There is a deep problem on the borderland of computer science, logic, maths and physics known as ‘P versus NP’, which concerns the question of whether the class of decision problems whose solutions are quickly verifiable by a computer (NP) is the same as the class of problems that are quickly solvable by a computer (P). Historically the problem arose because certain problems seem to be hard to solve. More particularly, they seem to require a lot of time -an exponentially growing amount of time- to solve. An example of an NP problem that seemingly takes exponential time is Factoring. While it doesn’t take long to factor 15 or 21, imagine trying to factor the 200 digit integer


You can easily check for yourself that it divides evenly into the primes




Although it takes a pocket calculator a split second to do the multiplication, it would take a single 2.2 GHz computer roughly 75 years to do the division. Factoring is one of numerous NP problems that are easy in the one direction, and seem to be hard in the other.
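The asymmetry can be exhibited in miniature. The following is a minimal sketch (Python with sympy’s randprime; the sizes chosen are illustrative, not the 200-digit case above) that multiplies two random primes instantly and then recovers them by naive trial division, whose cost explodes as digits are added:

```python
import time
from sympy import randprime

# the easy direction: multiplying two random 8-digit primes is instant
p, q = randprime(10**7, 10**8), randprime(10**7, 10**8)
N = p * q

def smallest_factor(n):
    # the hard direction: naive trial division up to sqrt(n); adding two
    # digits to n multiplies the worst-case running time roughly tenfold
    if n % 2 == 0:
        return 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return i
        i += 2
    return n

t0 = time.perf_counter()
f = smallest_factor(N)
print(f, N // f, time.perf_counter() - t0, "seconds")
```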

We can argue for the asymmetric nature of multiplication and factoring by considering the Travelling Salesman Problem, which is the problem of whether a salesman can visit a number of cities exactly once and return to a home-city for a certain cost. First we transform it into the problem of whether a computer (salesman) can execute some number of instructions (visit some number of cities), executing every instruction exactly once (visiting every city exactly once), before returning to a halt state (home-city) for some maximum cost:


An arbitrary computer is therefore working on the problem of whether an arbitrary computer will halt when run with an arbitrary set of instructions, and thus the point will be reached when the evaluation is a self-evaluation, i.e. the point will be reached at which the computer is attempting to determine of itself whether it will halt. If we associate to every city an instruction, this self-evaluative point will be reached when the number of cities on the tour is the same as the number of instructions in the program. This leads to a contradiction in the case that the number of cities is greater than the number of instructions, and by appeal to this contradiction, it follows that TSP involves a limit on the number of cities. This proves that TSP differs from multiplication, which isn’t sensitive to the size of the input, and that P and NP are not equal. More particularly: associate each atomic instruction in some finite set of instructions with some n-SAT formula (such as (\neg A\lor \neg B\lor C)\land (\neg A\lor \neg B\lor D)), and associate each of these formulas with a vertex of a complete graph other than the start/stop vertex. Associate the halt state with this start/stop vertex. Let v be the number of variables per clause, and if the instruction doesn’t result in the machine going into an infinite loop, weight the vertex as \frac{v-1}{v}+\frac{1}{v}. If the instruction does result in the machine going into an infinite loop, weight the offending vertex as 0 + 1. Weight the halt vertex as 1. \frac{n-1}{v} gives the minimum truth-density, and \frac{(n-1) (v-1)}{v} gives the maximum falsity-density, of a satisfiable n-SAT instance, and \frac{(n-1) (v-1)}{v}+\frac{n-1}{v}+1=n gives the maximum imbalance between truth and falsity-density corresponding to a Travelling Salesman’s circuit that is within budget.

Let n be the number of instructions/vertices, and we can conclude that

  • If and only if it is possible to visit every vertex exactly once before returning to a halt vertex for a cost n without upsetting the balance of minimum truth-density/maximum falsity-density \frac{(n-1) (v-1)}{v}+\frac{n-1}{v}+1=n, there is some Turing machine that will not go into an infinite loop when run with some input.
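The bookkeeping behind these densities is elementary and can be checked mechanically. A minimal sketch (Python; the sample values of n and v are ours) confirms that the minimum truth-density, the maximum falsity-density and the halt weight always sum to n:

```python
from fractions import Fraction

def densities(n, v):
    truth = Fraction(n - 1, v)                  # minimum truth-density
    falsity = Fraction((n - 1) * (v - 1), v)    # maximum falsity-density
    return truth, falsity, truth + falsity + 1  # + 1 for the halt vertex

for n, v in [(4, 3), (10, 3), (7, 4)]:
    t, f, total = densities(n, v)
    print(n, v, t, f, total == n)               # the total is always n
```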

To these circuits, we can therefore associate infinite matrices:

[tables omitted]


The Blind Watchmaker
Armed with information gleaned from Euler, Riemann, and their brainchild -the Riemann Hypothesis- we refocus now on Darwin, Dawkins, and the former’s brainchild – evolution. Our finite matrices describe units, but these are not classical units -units that can be subject to an indefinite amount of scaling- but proto-units. They describe, not the moderately curved projections of our everyday classical experience, but the extremely curved projectors without which these projections and our classical world would vanish.

This distinction between potentially infinite classical matrices and strictly finite non-classical matrices implies that the theory of evolution is a flawed theory of origins, because it implies that the earliest state of the classical universe cannot possibly have been simple. It must have possessed the same order of complexity as the present-day universe, meaning that it can only have come into existence at once, and by an act of creation. Dawkins in The Blind Watchmaker focuses on the eye, often cited by detractors as something that cannot have come about as the result of a stepwise process, and is at pains to argue the contrary case. But the classical universe is like a giant eye that cannot have come about by a stepwise process, and so the premise on which all of Dawkins’ arguments in The Blind Watchmaker rest -that complexity is reducible to simplicity- is false, and all of these arguments are unsound at their root. The classical universe can’t, as evolutionary cosmology requires, be scaled down to quantum size, and nor can the quantum universe be scaled up to classical size; to suppose otherwise is to make a category mistake. This disposes of the first and most fundamental necessary condition of the theory of evolution, captured by Dawkins’ statement:

In the beginning was simplicity.

The beginning, in the sense of the beginning of the classical universe, was not simplicity, but a great complexity that came about, not gradually, but at once. Beyond that, we are led to the infinite concentration of light which is the source of the projected classical universe, but like the artificial point sources that diffuse a finite amount of light to our universe, this natural source lies beyond the time line of the universe.

Evolution is nothing if not an attempt to banish originality and uniqueness from the universe by showing that everything in this universe derives from something else in this universe. But the enterprise fails because of the location of quantum projectors outside of the classical universe. It particularly fails when it comes to the singularity of light at the root of everything that is, which it cannot deal with in any way, since this thing is derivable from nothing other than itself. Dawkins doesn’t appreciate what is perhaps the most important truth behind the engineering of the physical universe: the classical/quantum divide is the divide between the arithmetic and the proto-arithmetic, and therefore the quantum domain doesn’t, as he imagines, exist on the time line of the universe studied by biologists and by scientists who study large-scale objects – it is a fiery creative/destructive force lying, as indicated, beyond and beneath this time line, and it is the force that drives the arrows of time from the past into the future.

This classical/quantum distinction provides a natural distinction between the evolutionary (the derived) and the creative (the original), where atomic finite matrices and atomic quantum systems play the role of the originators of infinite matrices and the classical universe. But clearly there is a further distinction to be had arising from the difference between the atomic and the molecular: the classical universe increasingly involves a time line whose elements are repetitious, rather than unique, and so it also provides a distinction between the easy (the combination of atomic proto-units and units to produce molecular proto-units/units = the derivative) and the hard (the formation of new atomic proto-units/units = the original). And whilst we can all agree that the global direction given by the arrows of time is, as evolution asserts, one in which relative simplicity moves irreversibly towards complexity, we can further distinguish between derivative complexity and original complexity, and we can observe that the arrows of time move erratically but irreversibly from the latter towards the former.

We might start with the radiative arrow, arising from the expansion of waves outwards from their source, since the disruption of the singularity of light is radiative in this manner. The best-known arrows are the thermodynamic arrow, arising from the loss of heat in the direction of the future; the cosmological arrow, arising from the expansion of the universe; the causal arrow, arising from the fact that effects follow causes rather than precede them; the quantum arrow, arising from the collapse of the wave-function; and the psychological arrow, arising from the fact that we remember the past while the future is unknown… Less well known is the computational arrow, arising from the ease of problems in P and the hardness of NP-Complete problems, whose solution would permit the reversal of the arrows above. The inequality of P and NP, it can be said, reflects the impossibility of a classically computational universe, one in which everything is derivative, the universe whose most outspoken prophet is Richard Dawkins. All of these arrows derive from the arithmetic arrow, arising from the increase in zero-density and the decrease of prime-density, and they all point in the general direction leading from relative simplicity to complexity, but more particularly they all point in the direction leading from creative complexity to derivative complexity.

Our focus here is the genetic arrow, which arises from the fact that the creative complexity of DNA decreases, like all creative complexity, in the direction of the future. This is the wrong direction for a universe that is evolutionary – such a universe must follow an arrow leading somehow in the reverse direction, and the existence of this genetic arrow disposes of the second necessary condition of the truth of the theory of evolution. Dawkins:

The beauty of evolution is that it does provide an explanation of how you can get complexity out of simplicity. It does it by slow, gradual degree. At no point are you postulating the sudden coming into existence of a complicated being.

Evolution provides an explanation of how you can get derivative complexity out of pre-existing original complexity – by slow, gradual degree. It offers no explanation of how you can get original complexity, which involves the sudden coming into existence of a complicated being. Dawkins:

Science offers us an explanation of how complexity (the difficult) arose out of simplicity (the easy). The hypothesis of God offers no worthwhile explanation for anything, for it simply postulates what we are trying to explain. It postulates the difficult to explain, and leaves it at that.

Science offers us an explanation of how derivative complexity (the easy) arose out of pre-existing original complexity (the hard). It offers us no explanation of how original complexity arises. This is simply ignored by science, but it is a necessary condition of arithmetic consciousness. The “hypothesis” of G-d is a non-reductive condition of arithmetic consciousness.

All appearances to the contrary, the only watchmaker in nature is the blind forces of physics, albeit deployed in a very special way. A true watchmaker has foresight: he designs his cogs and springs, and plans their interconnections, with a future purpose in his mind’s eye. Natural selection, the blind, unconscious, automatic process which Darwin discovered, and which we now know is the explanation for the existence and apparently purposeful form of all life, has no purpose in mind. It has no mind and no mind’s eye. It does not plan for the future. It has no vision, no foresight, no sight at all. If it can be said to play the role of watchmaker in nature, it is the blind watchmaker.

Even at first glance, Dawkins’ position seems self-contradictory: if the watchmaker is, as evolution dictates, the product of blind, purposeless, mindless forces, then how can he be said in the final analysis to be anything other than blind, purposeless and mindless himself? Consciousness (mind), some materialists argue, emerges from a certain critical amount of blind, purposeless, mindless complexity, but we have seen that this is not the case, and that consciousness (the collapse of an atomic wave-function) emerges from original non-reducible complexity, not from derivative complexity. But in any case the forces of physics are clearly far from blind and mindless. On the contrary, they exhibit such an extreme degree of foresight in regard to the purpose of producing arithmetic consciousness that human watchmakers -who Dawkins agrees have minds and purposes- are blind by comparison. If the Riemann Hypothesis were false (if there were a zero of an L-function with real part other than 1/2), then arithmetic consciousness, and the universe with it, would vanish. As Dawkins himself has said:

To get our universe, with all of its potential for complexities or any kind of potential for any kind of life form, everything has to be precisely defined on this knife edge of improbability.

This is an understatement. Let me show you just how thin this “knife edge” is. Let g = Graham’s number -if every digit in Graham’s number were considered to occupy as little as 1 Planck volume, it would nonetheless be too big to fit in the observable universe- and note that if the value of s in the equation

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1

were to differ from 1 by as little as

1\times 10^{-g}

the primes would be finite, i.e. if and only if there is a complex zero of an L-function whose real part is other than 1/2 -if the Riemann Hypothesis is false- then there are no solutions to the equation

e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}=\left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right){}^2

This kind of precision is about as far from blindness and lack of purpose as could be imagined. As for natural selection and for evolutionary (derivative) processes – in and of themselves they are blind, but they are directed by the creative (original) processes that underlie them. It is like the difference between the painter and the painting – in and of itself the painting is blind, but it conveys the vision of the painter. Dawkins may imagine, like Stephen Hawking, that he has an answer to this dilemma in the anthropic principle (the universe is fine-tuned because if it wasn’t we wouldn’t be conscious of it) and in multiverses, most of which aren’t fine-tuned, but by the considerations above neither of these is helpful to the cause of evolution.

First the anthropic principle. Suppose that a prisoner sentenced to death by hanging escapes execution because the noose breaks three times, and there is a law decreeing that in these circumstances the prisoner is to be freed. Whilst it is true that he is able to contemplate his narrow escape only because the noose improbably broke multiple times, that is a necessary, not a sufficient, condition of his escape. What are the sufficient conditions of the thrice-broken noose? By the same token, fine-tuning is a necessary condition of a universe in which life and consciousness exist, but it is not sufficient. What are the sufficient conditions of life and consciousness? The government of the relationship between light and space by the Riemann Hypothesis provides both the necessary and the sufficient conditions of life and consciousness. There are two related sets of laws beneath our scientific understanding of the universe -physical laws and mathematical laws- and all of our scientific knowledge is based on their interaction, an interaction that is governed by the Riemann Hypothesis, which places limits on the amount of randomness there is in the count and the thing being counted, the internal and the external number line. A meta-universe in which the Riemann Hypothesis is false, and in which the constants of nature therefore lack the values they have in our universe, is not an arithmetically comprehensible universe.

It is highly improbable that a single blind man firing a shot from a rifle will hit a bottle located 100 meters away, but all things being equal an infinite number of blind shooters will hit such a bottle infinitely many times. But while we know that it is true that a sufficiently large number of blind shooters will hit the bottle in our Riemann Hypothesis-friendly, exquisitely fine-tuned universe, we can’t apply this knowledge outside of this context. The same knock-down counter-argument can be offered in response to the well-known metaphor of monkeys randomly pressing the keys on a typewriter: a sufficiently large number of monkeys, it is correctly argued, will eventually type all the works of Shakespeare, but this is a false analogy when used outside the context of our universe. We can make inferences about possibility and probability assuming the Riemann Hypothesis, which guarantees that the relationship between the laws of physics and mathematics is as it is in our universe, but all bets are off in a universe where this relationship doesn’t exist. Many -notably Hawking and Dawkins- tend to believe we are able to reason mathematically from somewhere outside the physical universe, a vantage point from which the mind is free from the constraints imposed on it by physics, but there is no such vantage point for a man. About universes in which the Riemann Hypothesis is false, and in which the laws of physics and mathematics are therefore related in an unfamiliar way, nothing serving Hawking’s and Dawkins’ reductionist world-view can be said, because such universes are uncountable.

The God Delusion
Many of Dawkins’ arguments against G-d are straw man arguments. He analogizes G-d to Father Christmas, leprechauns, tooth fairies…, and then proceeds to argue that, since there is no evidence for a Father Christmas-like G-d, there is no G-d. But the G-d of Judeo-Christianity is unlike Father Christmas in that he is not in essence an object within the physical universe, but a transcendent condition of the existence of objects within this universe. It is irrelevant to the question of G-d’s existence that he has rarely been observed coming down people’s chimneys on Christmas Eve, because G-d is not the sort of entity that could come down a chimney. This is a glaring category mistake by Dawkins. Dawkins’ only argument against G-d which avoids this mistake is that propounded in The Blind Watchmaker, i.e. that G-d is unnecessary as an explanation because the theory of evolution is sufficient to explain the universe and everything in it. To see why G-d really is necessary, we consider the three classic “proofs” of the existence of G-d: the ontological proof, the cosmological proof, and the teleological proof. The best example of the ontological proof comes from the philosopher Alvin Plantinga, who argued in this way:

(1) G-d = the being that exists in all possible worlds;
(2) It is possible that G-d exists, i.e. a being satisfying this definition exists in at least one possible world;
(3) But if G-d exists in one possible world then he exists in all possible worlds;
(4) And if G-d exists in all possible worlds he exists in the actual world.
(5) Therefore G-d exists.

The best example of a cosmological proof comes from Leibniz who argued in this way:

(1) There is a distinction between contingent beings and a necessary being in that the former are derived from other beings and the latter is not;
(2) G-d = necessary being;
(3) Contingent beings are dependent on necessary being;
(4) The world as we know it, and everything in it, is contingent;
(5) Therefore G-d exists.

Paley’s argument -which has been our indirect focus throughout- is a teleological proof. In essence:

(1) Everything -such as a watch or an eye- that exhibits great intricacy and purpose stands in need of a designer;
(2) The world as we know it exhibits intricacy and purpose;
(3) Therefore the world stands in need of a designer;
(4) The Designer of the World = G-d;
(5) Therefore G-d exists.

The standard refutation of Plantinga’s ontological argument is that it is possible that G-d doesn’t exist, from which it follows that he doesn’t exist in the actual world. Bertrand Russell eloquently expressed the standard refutation of Leibniz’s cosmological argument -that if every event in a causal series has a cause, then there is no need to ask for the cause of the series as a whole- during the course of a 1948 BBC radio debate with F.C. Copleston:

I can illustrate what seems to me your fallacy. Every man who exists has a mother, and it seems to me your argument is that therefore the human race must have a mother, but obviously the human race hasn’t a mother — that’s a different logical sphere.

The refutation of the teleological argument we have been considering from Dawkins is that things can exhibit great intricacy and apparent purpose when this is an illusion arising from the ability of evolution and natural selection to conjure complexity from simplicity. We are now in a position to reduce all these arguments to a single argument that is immune to these counter-arguments:

(1) A complex zero of an L-function off the 1/2 line -the falsity of the Riemann Hypothesis- implies the finitude of the primes;
(2) A universe in which the Riemann Hypothesis is false is a universe in which the primes are finite and arithmetic consciousness doesn’t exist;
(3) Arithmetic consciousness does exist in our universe;
(4) Therefore the Riemann Hypothesis is true in our universe;
(5) Therefore G-d -defined as the transcendent singularity of light at the root of every universe in which the Riemann Hypothesis is true- exists.


The Y-DNA of Richard III
A skeleton excavated in 2012 is almost certainly that of the English king Richard III (1452-1485), and mtDNA (which is passed from mother to child) extracted from the skeleton matches mtDNA taken from descendants of Richard’s sister Anne of York. However, Y-DNA extracted from the skeleton apparently doesn’t match Y-DNA taken from descendants of Henry Somerset, the 5th Duke of Beaufort, who according to history descended from Richard’s 2nd great grandfather Edward III (1312-1377). The implication, according to geneticists and the media, is that there is a ‘false paternity event’ somewhere between Edward and the Somersets:

We find a perfect mitochondrial DNA match between the sequence obtained from the remains and one living relative, and a single-base substitution when compared with a second relative. Y-chromosome haplotypes from male-line relatives and the remains do not match, which could be attributed to a false-paternity event occurring in any of the intervening generations.

The authors of the study maintain that this is “unremarkable”, but the false paternity events don’t end there, for only 4 of these 5 Somerset descendants actually match each other:

The Y-chromosome results also indicate one further false-paternity event between Henry Somerset and his five contemporary, presumed patrilinear descendants.

And it turns out that there is more, for although the patrilineal line of a Frenchman named Patrice de Warren traces back to Richard III through the illegitimate son of Edward III’s 4th great grandfather, Geoffrey Plantagenet, Count of Anjou (1113 – 1151), de Warren’s Y-DNA doesn’t match that of Richard III or any of the Somersets:

In December 2014, the University of Leicester’s DNA research in the King Richard III case showed that there was a break in the Y chromosome line.

This indicated that a false paternity or paternities (where the father is not the recorded father) had taken place in the 19 generations separating Richard III from Henry Somerset, 5th Duke of Beaufort, the common ancestor of the living male-line relatives tested.

We didn’t know where this break occurred but it was interesting as a break in certain points in the tree would have implications for the historical monarchy: the Lancastrian and Yorkist Plantagenet kings and, indirectly, the Tudors.

Professor Kevin Schürer and I were approached by Patrice de Warren who could trace his male line to Richard through Geoffrey, the Count of Anjou. This was very exciting and as a result of the research we are revealing new information at the Science Museum into the ancestry of Richard III.

If Patrice de Warren’s Y chromosome matched that of Richard III, then this would mean that the break in the Y chromosome line occurred somewhere between Edward III and Henry Somerset. If it matched the Somerset line, then it would mean the break occurred between Edward III and Richard III. In short, it would help narrow down where the break in the line took place.

As it happens, it’s revealed that another false paternity seems to have occurred in the tree as his Y chromosome type doesn’t match either of them! The hunt continues, and another mystery has arisen!

In the immortal words of the antagonist of Ian Fleming’s Goldfinger:

Once is happenstance. Twice is coincidence. The third time it’s enemy action.

A similar and more anomalous case involving ancient DNA concerns the presumed Y-DNA of Louis XVI (1754-1793) and that of living members of the House of Bourbon. Y-DNA was successfully extracted from a cloth supposedly bloodied at the time of Louis’s beheading, and belongs -like that of Richard III- to haplogroup G2a. But the Y-DNA of 3 living members of the House of Bourbon belongs -like that of 4 of the 5 living Plantagenets- to R1b. Both Y-DNA and mtDNA were extracted from a mummified head presumed to be that of Louis’s 5th great grandfather Henry IV (1553-1610). 5 Y-DNA markers were recovered -an insufficient number to determine a haplogroup- but Charlier et al. concluded in 2013 that the blood and the head probably belonged to the people they were supposed to belong to:

Taking into consideration that the partial Y-chromosome profile is extremely rare in modern human databases, we concluded that both males could be paternally related. The likelihood ratio of the two samples belonging to males separated by seven generations (as opposed to unrelated males) was estimated as 246.3, with a 95% confidence interval between 44.2 and 9729. Historically speaking, this forensic DNA data would confirm the identity of the previous Louis XVI sample, and give another positive argument for the authenticity of the head of Henri IV.

In the light of the results of testing several living members of the House of Bourbon, this seems to imply that there are at least 2 false paternity events – Henry IV cannot have been the biological father of Louis XIII, and there is a false paternity event between Louis, Grand Dauphin, and Louis XVI.

The difficulty becomes even more pronounced when mtDNA extracted from the head is considered. This belongs to haplogroup U, but Henry IV was maternally related to Louis XVII -through his mother Jeanne d’Albret via Anna of Habsburg to Marie-Antoinette- and tests performed on a lock of her hair, and on her son’s heart, show that Marie-Antoinette’s mtDNA belongs to haplogroup H.

From an orthodox point of view, this means that the MRCA of the head donor and Louis XVII lived tens of thousands of years ago. If the mummified head belonged to Henry IV, then this view implies the existence of an exceptional false-maternity event somewhere between Henry IV and Marie-Antoinette. Based on the work of King et al, and their presumed identification of the remains of Richard III, it is accepted that there are 3 false paternity events between Geoffrey Plantagenet and the Somersets and Patrice de Warren. Based on the work of Charlier et al, and the work of Larmuseau et al (2014) detailed in Genetic genealogy reveals true Y haplogroup of House of Bourbon contradicting recent identification of the presumed remains of two French Kings, there are 2 false paternity events between Henry IV and the Bourbons, plus a false maternity event between Henry IV and Marie-Antoinette, which prompted Larmuseau to deny that the identification of the remains of Louis XVI and Henry IV was correctly made. What we have here is a group of incompatible premises: (1) presumed identification of ancient remains; (2) presumed Y-DNA mutation rates; and (3) presumed relatives of the deceased. In the one case (3) is abandoned for the sake of consistency, and in the other it is (1) that is abandoned. But no one is considering that the source of the inconsistency might be the presumption contained in (2) rather than that contained in (3) or (1). Presumption, to modify the classic saying, is the mother of all mistakes.

Time Asymmetric -Curved- Mutation Rates
It is mathematically possible then that British Y-DNA groups are mutated forms of rare haplogroups prevalent in Britain in earlier centuries, i.e. that rare haplogroups in Britain are traces of British haplogroups most of which have since mutated. But on our present assumptions about Y-DNA mutation rates there is no such physical possibility – the SNP mutations on which haplogroups are based are supposed to be glacially slow and, as the graphic below indicates, the MRCA of the ancient and modern Plantagenets/Bourbons supposedly lived, not thousands of years ago, but tens of thousands of years ago. But from the mathematics above it follows that time is like a cycle that turns at a seemingly constant rate within the context of the present, but in fact turns ever more slowly in the direction of the past. In terms of mathematical physics, there is an asymmetrical relationship between the past and the future, so that time is not reversible, and experiments performed in the present will not necessarily yield the same results as these same experiments performed at an earlier time. Darwin didn’t know it, and Dawkins doesn’t know it, but our clocks are surreptitiously spiralling inwards, and the assumption that they turn in the present as they did in the past results in an overestimation -a potentially gross overestimation- of the total number of hours that have elapsed on these clocks.

The significance of Richard III’s DNA for this question, and for genetic genealogy generally, is that (at the time of writing) his is the only ancient genome belonging to someone with identifiable living relatives to be sequenced. We are provided then with an opportunity to make a Y-DNA comparison between family members separated in time by 5 centuries, and but for the example of Richard III, we are -from an empirical point of view- speculating as to what such a comparison might reveal. From a mathematical point of view we are not speculating, and we can predict with confidence that the king’s Y-DNA won’t tend to match that of contemporary Plantagenets. Let’s be more specific about the mathematics to be used to make these predictions. A rate of change may be constant or it may be variable (follow a straight line or a curve). Also it may be constant + variable (follow a straight but rough line) or variable + variable (follow a rough curve). With our anomalous findings in mind, we can consider the possibility that mutation rates are best represented by jagged curves that possess a decreasing slope, so that DNA mutations occur more frequently at first and ever less frequently thereafter. Let


n = the number of STR mutations
m = the sum of the mutation rates
g = the length of a generation
TMRCA = years before present to the most recent common ancestor

and we have a simple formula whose mutation rates follow straight lines, and which broadly reflects the way in which the TMRCA is usually computed:

(\text{f1})\quad \text{Present year}-\frac{g n}{2 m}=\text{TMRCA}

This is the mathematics that causes all of the conflicts between genealogy and genetics we have been considering above, and that leads us to believe that the human race and the universe are extremely old. An equally simple formula that reflects the possibility that mutation rates follow the primes -follow rough curves rather than straight lines- is

(\text{f2})\quad \text{Present year}-\pi \left(\frac{g n}{2 m}\right)=\text{TMRCA}

\pi (x) is what is called a ‘step function’, and it has been set up so that the jump from one step to another takes place only when a prime number appears in the number line.
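A minimal sketch of (f1) and (f2) side by side (Python, with sympy’s primepi standing in for \pi (x); the inputs n = 18, m = 0.046 and g = 27 are the Plantagenet-style figures used later in this section) makes the difference between the two clocks vivid:

```python
from sympy import primepi

def tmrca_f1(present, n, m, g):
    # (f1): the straight-line clock
    return present - g * n / (2 * m)

def tmrca_f2(present, n, m, g):
    # (f2): the prime-staircase clock
    return present - primepi(int(g * n / (2 * m)))

print(tmrca_f1(2017, 18, 0.046, 27))   # about -3266: millennia before present
print(tmrca_f2(2017, 18, 0.046, 27))   # roughly 1316: a medieval date
```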

As we travel down the line, we find that these jumps become rarer and rarer, for the primes thin out. Despite this global decrease in prime-density, there are local irregularities. If we identify prime-density with genetic simplicity, it follows that while the global direction of the Y-DNA tree is toward further genetic complexity, there are random variations in both directions. If we accept the proposal that the mutation rates of SNPs correspond to the primes in the manner indicated, we must reject the assumption that genetic mutations occur at a constant rate (straight versus curved) and the assumption that they always follow the one-way simple-to-complex direction of the ISOGG Y-DNA tree (smooth versus rough). The following chronology is necessarily speculative, but it illustrates the great difference this new approach to genetic change makes:

[table omitted]

Of course it is widely believed that Y-Adam is far older than 6,000 years (the latest figure is 300,000 odd), but once in possession of the idea that unique change is subject to entropy, we will find all of the usual methods for estimating the age of ancient things to be prone to exaggerate this age, and in some cases to exaggerate it to an extreme degree. If we assume that some natural clock (e.g. the rate at which atomic nuclei emit energy) has always run at its presently observable rate, and if we try to calculate how much time has elapsed since some earlier rotation, we will find that the calculation tends to over-estimate the true time. Since these natural clocks spiral rather than circle, the accuracy of the estimates of the age of things will tend to be inversely proportional to the number of rotations. Natural clocks in the present, we can say, are special cases of natural clocks whose cycles are relatively circular, because of the relative flatness of space-time from a present perspective. If we look back into the past, we will find that this apparent flatness is lost. We can refine (f2) on the grounds that \pi (x) lies between the sums

\sum _{n=2}^x \frac{1}{H_n}-2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} (\log x)\right)\right)

and

\sum _{n=2}^x \frac{1}{\log n}-2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} (\log x)\right)\right)

This gives us at first

(\text{f3})\quad \text{Present year}-\sum _{n=2}^{\frac{g n}{2 m}} \frac{1}{\log n}\leq \text{TMRCA}\leq \text{Present year}-\sum _{n=2}^{\frac{g n}{2 m}} \frac{1}{H_n}

Consider now that the way to fully simulate \pi (x), and the prime-density of the number line, is by use of the function

\frac{\sum _{n=2}^x \frac{a_1}{H_n}+\sum _{n=2}^x \frac{a_2}{\log n}+\text{...}}{n}-2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} (\log x)\right)\right)


From an equivalent perspective on the same process, we see that we are combining the purple coloured waves, arising from the differences between \pi (x) and \sum _{n=2}^x \frac{a_{1,1}}{\log (n)} and \sum _{n=2}^x \frac{a_{2,1}}{H_n}, so that they increasingly take on the shape of the black coloured waves corresponding to the superposition of waves

2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} (\log x)\right)\right)
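This wave term can be approximated directly. The following is a sketch (Python with mpmath, whose zetazero(k) returns the k-th complex zero \frac{1}{2}+i t_k of \zeta ; the truncation at K zeros is ours, and the full sum over all zeros converges only slowly), computing the first K terms of the superposition at a given x:

```python
from mpmath import mp, zetazero, ei, log, re

mp.dps = 15

def zero_waves(x, K=20):
    # 2 * Re( sum_{k=1}^{K} Ei(rho_k * log x) ): a truncated stand-in for
    # the full (slowly converging) sum over the zeros
    return 2 * re(sum(ei(zetazero(k) * log(x)) for k in range(1, K + 1)))

for x in (20, 50, 100):
    print(x, zero_waves(x))   # the oscillating correction at each x
```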

This gives us a notion of genetic change that does not take place in straight lines, but in superpositions of roughly formed waves. Switching to the equivalent density perspective:

It follows straightforwardly that the amplitude of these waves diminishes erratically in the direction from the past to the future, in accordance with the governing mathematical principle known as the Riemann Hypothesis. Thus we can modify our formula further as follows:

(\text{f4})\quad \text{Present year}-\sum _{n=2}^{\frac{g n}{2 m}} \frac{a_1}{\log n}\leq \text{TMRCA}\leq \text{Present year}-\sum _{n=2}^{\frac{g n}{2 m}} \frac{a_2}{H_n}

This ‘Riemannian’ framework explains, where the Darwinian framework does not, the existence of multiple haplogroups arising from a recent common ancestor. The Richard III/Somerset mismatches can be accounted for in this way:

MRCA = Edward III (1312-1370)




\frac{27\times 18}{2\times 0.046}=5282.61

2017-\sum _{n=2}^{5282.61} \frac{0.983129}{\log n}=1312

2017-\sum _{n=2}^{5282.61} \frac{0.9926}{H_n}=1370
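These two sums are easily recomputed. A minimal sketch (Python; the coefficients and the cutoff are exactly those given above) recovers dates close to the 1312 and 1370 just quoted:

```python
import math

x = int(5282.61)                     # the cutoff from the MRCA computation

low = 2017 - sum(0.983129 / math.log(n) for n in range(2, x + 1))

H, total = 1.0, 0.0                  # H_1 = 1
for n in range(2, x + 1):
    H += 1.0 / n                     # running harmonic number H_n
    total += 0.9926 / H
high = 2017 - total

print(round(low), round(high))       # close to the 1312 and 1370 above
```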

Somerset 3:




\frac{30\times 15}{2\times 0.046}=4891.3

2017-\sum _{n=2}^{4891.3} \frac{1.05023}{\log n}=1312

2017-\sum _{n=2}^{4891.3} \frac{1.03372}{H_n}=1377

Somerset 3 doesn’t match Somersets 1, 2, 4, and 5 in the traditional sense of the term ‘match’, prompting geneticists to appeal again to the catch-all ‘false paternity event’. Somersets 1, 2, 4, and 5 match closely – there are at most 2 mismatched markers out of 23 between them. In the case of the 1 marker mismatch, we can home in on their supposed MRCA Henry Somerset (1744-1803) in this way:

Henry Somerset (1744-1803)




\frac{30\times 1}{2\times 0.046}=326.087

2017-\sum _{n=2}^{326.087} \frac{3.75507}{\log n}=1744

2017-\sum _{n=2}^{326.087} \frac{3.39641}{H_n}=1803

Somersets 1, 2, 4 and 5 differ from Somerset 3 by 20 out of 23 markers:

Henry Somerset (1744-1803)




\frac{30\times 20}{2\times 0.046}=6521.74

2017-\sum _{n=2}^{6521.74} \frac{0.317506}{\log n}=1744

2017-\sum _{n=2}^{6521.74} \frac{0.26889}{H_n}=1803

The traditional Darwinian picture of genetic inheritance is this:

The Riemannian picture is this:

A knee-jerk objection to this approach to genetics might be “so what, this tells us nothing, because we could use this technique to produce more or less any TMRCA we like”, but the objection is baseless. We have seen that by adding functions of the form \sum _{n=2}^x \frac{a_1}{\log n} and \sum _{n=2}^x \frac{a_2}{H_n} we are able to simulate \pi (x), from which it follows that the rigid \pi \left(\frac{g n}{2 m}\right) can be regarded as a sum of these more fluid sums, and therefore as imposing a strict constraint on the fluidity of these sums. And we can and must go further: \pi \left(\frac{g n}{2 m}\right) is associated with the canonical arithmetic progression 1, 2, 3… but an infinite number of prime-staircases can be generated by use of the formulation qn + a, and Dirichlet’s Theorem tells us that all of these contain infinitely many primes, and each has a Riemann Hypothesis associated with it. Each of these progressions is associated with an L-function of the form

f(s)\text{:=}\sum _{n=1}^{\infty } \frac{\chi _{m,k}(n)}{n^s}

and only in the most prime-dense canonical progression 1, 2, 3… is \chi _{m,k}(n) equal to 1. So in addition to the prime staircase associated with the progression a = 0 and q = 1, there are an infinite number of prime staircases associated with co-prime values of a and q, each of which is constrained by a pair of staircases, and each of which possesses a different slope, and climbs in a different way. Each of these progressions possesses a certain maximum and minimum prime-density in the same way that the canonical progression does. And each such progression is associated with a pair of waves whose frequency is fundamental. If we associate sub-Y-DNA trees with these less prime-dense progressions, then the unifying principle of these sub-trees is the maximum prime-density of the tree in question, a maximum that determines -in a way comparable to that in which the wattage of a light bulb determines the intensity of the light that the bulb can output- the genetic flexibility of the sub-tree in question. With a knowledge of \chi _{m,k}(n), and of this maximum, it is possible to determine which mutations are possible for a sub-tree, and which are not. The fluidity of the sums associated with (f4) is thus constrained by the maximum prime-density of the canonical tree, but also by the maximum prime-density of the relevant sub-tree. The Plantagenet data -for example- gives us the red wave below:

It appears to be out of bounds, but it is based on incomplete data – it will change as more markers and more haplotypes are added in. The question on which the veracity of a TMRCA yielded by (f4) stands or falls is whether such a superposition of TMRCAs increasingly tends to be within the bounds laid down by the Generalized Riemann Hypothesis as the number of waves tends to infinity, and by itself this wave gives no indication of the existence of any breaks in the official Plantagenet paternal ‘line’. The same can be said for all of the putatively broken patrilineal lines we have examined so far.
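For concreteness, the simplest non-canonical case can be computed directly. The following is a minimal sketch (Python; the character chosen is the non-principal character mod 4, i.e. q = 4, with \chi (n) equal to 1, -1 or 0 according as n is 1, 3 or even mod 4), summing the corresponding L-series at s = 1, where the partial sums approach \pi /4:

```python
import math

def chi(n):
    # the non-principal Dirichlet character mod 4
    return {1: 1, 3: -1}.get(n % 4, 0)

L = sum(chi(n) / n for n in range(1, 200_001))
print(L, math.pi / 4)   # both print as 0.78539...
```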


Charlier, P, et al (2013), Genetic comparison of the head of Henri IV and the presumptive blood from Louis XVI (both Kings of France)

Darwin, C (1859), On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life

Dawkins, R (1976), The Selfish Gene

Dawkins R (1986),  The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design

Dawkins, R (2006), The God Delusion

Derbyshire, J (2004), Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics

Dirichlet, P (1837), Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält (Proof of the theorem that every unbounded arithmetic progression, whose first term and common difference are integers without common factors, contains infinitely many prime numbers)

Einstein, A (1916), Relativity: The Special and General Theory

Fleming, I (1959), Goldfinger

Graham, R et al (1971), Ramsey’s Theorem for n-Parameter Sets

Hawking, S (2010), The Grand Design

Jehaes, E, et al (2001), Mitochondrial DNA analysis of the putative heart of Louis XVII, son of Louis XVI and Marie-Antoinette

King, T, et al (2014), Identification of the remains of King Richard III

Larmuseau, M, et al (2014), Genetic genealogy reveals true Y haplogroup of House of Bourbon contradicting recent identification of the presumed remains of two French Kings

Leibniz, G (1714), The Monadology

Littlewood, J (1914), Sur la distribution des nombres premiers (On the Distribution of the Prime Numbers)

Newton, I (1687), The Principia: Mathematical Principles of Natural Philosophy

Nordtvedt, K (2008), More Realistic TMRCA Calculations


Paley, W (1802), Natural Theology, or Evidences of the Existence and Attributes of the Deity collected from the Appearances of Nature

Plantinga, A (1974) The Nature of Necessity

Riemann, B (1859), Über die Anzahl der Primzahlen unter einer gegebenen Grösse (On the Number of Primes Less Than a Given Magnitude)

Rubin, V, et al (1980), “Rotational Properties of 21 Sc Galaxies with a Large Range of Luminosities and Radii from NGC 4605 (R = 4kpc) to UGC 2885 (R = 122kpc)

Russell, B, and Copleston, F (1948), The Existence of G-d, in John Hick, ed., The Existence of G-d

Richardson, D, and Everingham, K (2004), Plantagenet Ancestry: A Study in Colonial and Medieval Families