On Public Key Cryptography, the 51% Attack, and the Impossibility of Scalable Quantum Computing (1.1.5)

Abstract In the world of cryptocurrency, there is a well-known limitation on the maximum share of the computing power of the bitcoin network that any one set of computers in the network can possess before it ceases to be decentralized: if any one group of users were to control more than 50% of the network’s mining hashrate, then the possibility exists that they could set up a financial dictatorship, thereby destroying the feature of cryptocurrencies that makes them so attractive to those that oppose the inequitable distribution of wealth and power – decentralization. Here the idea is tabled that there is a similar decentralization-born limitation on the building of scalable quantum computers. It is argued that this arises from the maximum share of the computing power of the “quantum network” that any one set of classically instantiated devices can possess, a share limited by the value \rho _n=\frac{1}{2} and by the Riemann Hypothesis.

 

PART I

Cryptography

Wikipedia defines cryptography like this:

Cryptography… is the practice and study of techniques for secure communication in the presence of third parties (called adversaries).

For centuries, the key used to code a message and the key used to decode it were symmetric, i.e. the means of decoding the message was a matter of reversing the means of encoding the message. Metaphorically speaking, messages were placed in locked boxes (encoded) by one party (Alice), sent to another party (Bob), and opened by a single key (code) that must also be exchanged, and could fall into the hands of an adversarial third party (Eve) en route. But with the advent of public key cryptography -invented in 1976 by mathematicians Whitfield Diffie and Martin Hellman (1)- an asymmetric encoding/decoding process was introduced. Metaphorically, Alice puts her message in a box, locks it with a key that she alone has, and sends it to Bob. Because he doesn’t have the key, Bob can’t open the box, but he can put a further lock on it, which he does, and sends it back to Alice. Alice then removes her lock and re-sends the box to Bob, who removes his lock and reads Alice’s message.

Trapdoor One-Way Functions

This more sophisticated form of code-making depends on something known as a ‘trapdoor one-way function’. A one-way function is a mathematical operation that is easy to perform in one direction but hard to perform in the other direction without a trapdoor. The simplest example of a one-way function is the multiplication and division of prime numbers. A prime number is divisible by no number other than 1 and itself, and a composite is the product of primes: 2, for example, is prime, 3 is prime, but 4 is the product of 2 and 2; 5 is prime, but 6 is the product of 2 and 3… and so on. It is easy to multiply primes to create a composite, but difficult to divide a composite into its constituent primes. This division process is called factoring, and although it is easy enough to factor 15 (3 x 5) or 20 (2 x 2 x 5), imagine trying to factor the 200-digit integer

27997833911221327870829467638722601621070446786955428537560009929326128400107609345671052955360856061822351910951365788637105954482006576775098580557613579098734950144178863178946295187237869221823983

In fact, this number is the product of the two primes

3532461934402770121272604978198464368671197400197625023649303468776121253679423200058547956528088349

and

7925869954478333033347085841480059687737975857364219960734330341455767872818152135381409304740185467

It takes an ordinary computer a split second to do the multiplication, but it would take a single 2.2 GHz computer roughly 75 years to do the division.
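The asymmetry is easy to see in a few lines of code. The sketch below (Python) multiplies two primes and then recovers them by naive trial division; the primes used here are small, well-known ones chosen purely for illustration, standing in for the 100-digit primes quoted above. This is a minimal sketch of the one-way character of multiplication, not a model of any real cryptosystem.

# A minimal sketch of the multiply-easy / factor-hard asymmetry.
# The primes below are small, well-known primes chosen for illustration;
# they stand in for the 100-digit primes quoted in the text.
import time

p = 65537                     # a known prime (2^16 + 1)
q = 2147483647                # another known prime (2^31 - 1)
n = p * q                     # multiplication: effectively instantaneous

def trial_division(n):
    # Factor n by naive trial division. The loop runs until it reaches the
    # smaller prime factor, so for 100-digit factors it would never finish.
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

start = time.time()
print(n, "=", trial_division(n), "found in", round(time.time() - start, 4), "s")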

Elliptic Curve Cryptography

There is an ancient question of whether there is an integer that is the area of a right triangle whose sides have rational length. Such integers are called ‘congruent’. 6, for example, is a congruent number because there is a right triangle whose area is 6 and whose sides are 3, 4 and 5. The problem of determining if a is the area of a right triangle having rational sides is equivalent to the problem of finding rational solutions for x and y to the cubic equation

y^2=x^3-a^2 x

which is in turn equivalent to the problem of determining if there are infinitely many rational points on an elliptic curve, the kind of curve underlying Andrew Wiles’ proof of Fermat’s Last Theorem (2). Another form of trapdoor one-way function -the one used by Bitcoin and other cryptocurrencies- arises in the context of elliptic curves. The operation of addition can be defined on an elliptic curve by appeal to the fact that if we draw a line through two points P and Q on such a curve, the line intersects the curve at a third point; since the elliptic curve is symmetric about the x-axis, reflecting this third point across the x-axis gives us a further point, which we call P + Q.

Multiplication can be defined by drawing a tangent line through a point P on an elliptic curve to arrive at another reflected point that we call P + P or 2P:

By drawing a line through P and 2P we create a further point that we call 3P. From here we can simply go on drawing lines and creating further points – 4P, 5P, 6P… and so on:

Now while it is easy, given a base point P, to work out how to produce n . P, it is not easy to go in the reverse direction: given P and n . P, what was n? This is the elliptic curve discrete logarithm problem.
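A toy model makes the asymmetry concrete. The sketch below (Python) implements the chord-and-tangent addition just described, but over a small prime field rather than over the real curve; the curve y^2 = x^3 + 7, the modulus 97 and the base point (1, 28) are illustrative assumptions, not Bitcoin’s actual secp256k1 parameters. Computing n . P by repeated addition is quick, whereas recovering n from P and n . P is, in general, a matter of stepping through the possibilities one by one.

# Toy elliptic-curve arithmetic over a small prime field (Python 3.8+).
# Curve, modulus and base point are illustrative only; real systems such as
# Bitcoin's secp256k1 use a 256-bit prime field.

P_MOD = 97                    # field modulus (toy-sized)
A, B = 0, 7                   # curve y^2 = x^3 + A*x + B

def inv(k):
    return pow(k, -1, P_MOD)  # modular inverse

def add(p1, p2):
    # Chord-and-tangent addition; None stands for the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # vertical line
    if p1 == p2:
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # tangent slope
    else:
        s = (y2 - y1) * inv(x2 - x1) % P_MOD          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD                 # reflect across the x-axis
    return (x3, y3)

def mul(n, p):
    # n*P by repeated addition (double-and-add would be faster still).
    q = None
    for _ in range(n):
        q = add(q, p)
    return q

P = (1, 28)                   # a point on the toy curve: 28^2 = 1^3 + 7 (mod 97)
print("20*P =", mul(20, P))
# Recovering n from P and n*P is the discrete logarithm problem: a trivial
# search on this toy curve, infeasible on a 256-bit curve.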

BQP = NP

Computational problems are classified by whether they can be solved in an efficient amount of time, where “efficient” means that as the problem-size grows arithmetically, the number of steps required to solve the problem grows polynomially rather than exponentially. Problems whose solutions can be arrived at in an efficient amount of time we classify as P, for polynomial time; problems whose solutions can be verified in an efficient amount of time we classify as NP, for nondeterministic polynomial time. All problems in P are also in NP, but some NP problems -those known as NP-hard- are not known to be in P. NP-complete problems are as hard as any problem in NP, and the classic example is the Travelling Salesman, which involves a salesman who must, after starting from a home city, visit a number of cities exactly once before returning to the home city for a certain cost (a brute-force check of this question is sketched below). To see that there is no efficient way to solve this problem using a classical computer, let the home city = the halt state of a computer, and let every other city = an atomic instruction of a computer’s program. If an instruction can be executed, assign a cost of 1; if an instruction cannot be executed, assign a cost of 2. If and only if the salesman can complete a circuit that visits every city exactly once for a cost equal to the number of cities, there is some computer that will halt when run with some input. If the number of cities is the same as the number of instructions in the program of the machine evaluating the TSP instance, the evaluation is a self-evaluation.
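To make the size of the search concrete, here is a minimal brute-force decision procedure (Python) for the question just posed: is there a circuit that visits every city exactly once for no more than a given cost? The 4-city cost matrix is an illustrative assumption; the point is that the procedure examines (n-1)! orderings of the cities, and no classical method is known that avoids this kind of exponential blow-up in general.

# Brute-force decision procedure for the Travelling Salesman question:
# "is there a tour visiting every city exactly once with total cost <= bound?"
# The search runs through (n-1)! orderings, hence the exponential cost.

from itertools import permutations

def tsp_decision(costs, bound):
    n = len(costs)
    home = 0
    for order in permutations(range(1, n)):   # fix the home city, permute the rest
        tour = (home,) + order + (home,)
        cost = sum(costs[a][b] for a, b in zip(tour, tour[1:]))
        if cost <= bound:
            return True, tour
    return False, None

# Illustrative 4-city cost matrix (costs[i][j] = cost of travelling from i to j).
costs = [[0, 1, 2, 1],
         [1, 0, 1, 2],
         [2, 1, 0, 1],
         [1, 2, 1, 0]]

print(tsp_decision(costs, bound=4))           # a cost-4 circuit of the 4 cities exists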

And if in the mapping above we assign a cost of 2, rather than 1, to an instruction, then this machine is required to determine that an input can only be run by a more complex program than itself.

This proves that there are some problems that classical computers, no matter how powerful, cannot solve in an efficient amount of time. It also proves that the problem of whether there are some problems that a classical computer can’t solve in an efficient amount of time (called “P versus NP”) cannot be solved in any amount of time by a classical computer, and yet you and I have just solved it, which implies at least that the human mind is -or has access to- a non-classical computational intelligence. Every Travelling Salesman problem can be transformed into a factoring problem by identifying TSP problems with integers, on the basis that the number of cities in the problem = the number of instructions in a computer’s program. Factoring, expressed in terms of whether a number has a prime factor not greater than n, is equivalent to Travelling Salesman expressed in terms of whether there is a cycle whose cost is not greater than n, and a classical computer can’t factor a number n such that n is greater than the number of instructions in its own program… But Shor’s Algorithm (3) for a quantum computer permits the factoring of integers and the solving of the discrete logarithm problem in an efficient amount of time, from which it follows that the limit there is on classical computers -in virtue of which they can’t solve certain problems in an efficient amount of time, nor solve certain problems in any amount of time- is inapplicable to quantum computers. It follows that the human mind is not simply a classical computer, but has quantum-computational elements. BQP is the class of problems efficiently solvable by a quantum computer, and while P != NP, BQP = NP. Since factoring and the discrete logarithm problem are both in NP, it follows that a quantum computer can solve them efficiently, and that if a scalable quantum computer were built, any information protected by the RSA cryptosystem or by elliptic curve cryptography would thereby be rendered insecure. More generally, a scalable quantum computer can reverse the arrow of time…
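It is worth separating what is classical from what is quantum in Shor’s algorithm. The algorithm factors N by finding the order r of a randomly chosen base a modulo N and then taking gcd(a^{r/2} ± 1, N); only the order-finding step is quantum. The sketch below (Python) runs the same reduction with the order found by brute force -the part a quantum computer would replace- using the standard textbook example N = 15, a = 7, which is an illustrative choice rather than anything drawn from the argument above.

# The classical skeleton of Shor's factoring algorithm: reduce factoring N to
# finding the multiplicative order r of a modulo N. Here the order is found by
# brute force; the quantum part of Shor's algorithm replaces only this step.

from math import gcd

def order(a, n):
    # Smallest r > 0 with a**r = 1 (mod n) - exponential-time brute force.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)              # lucky guess: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None                   # odd order: try another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                   # trivial square root: try another a
    return gcd(y - 1, n)

print(shor_reduction(15, 7))          # prints 3, since 7 has order 4 modulo 15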

PART II

The Golden Key

Called by John Derbyshire “The Golden Key” (4), Euler’s classic argument (5) establishes that the product continued to infinity of this fraction

\frac{2\ 3\ 5\ 7\ 11\ 13\ 17\ 19\text{...}}{1\ 2\ 4\ 6\ 10\ 12\ 16\ 18\text{...}}

in which the numerators are prime numbers and the denominators are one less than the numerators, equals the sum of the infinite series

1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}\text{...}

and that they are both infinite. To prove his point, Euler invites us to imagine extracting from the second series each prime denominator in turn, together with all remaining multiples of that prime denominator, until everything except the first term 1 has been eliminated. Let

x=1+1/2+1/3+1/4+1/5+1/6...

Then

\frac{x}{2}=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\frac{1}{8}\text{...}

Subtracting the second series from the first leaves

\frac{x}{2}=1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}\text{...}

To eliminate the remaining denominators that are divisible by 3, we divide both sides by 3 to get

\frac{x}{2\ 3}=\frac{1}{3}+\frac{1}{9}+\frac{1}{15}+\frac{1}{21}\text{...}

Subtracting again eliminates all remaining denominators that are multiples of 3 leaving

\frac{2 x}{2\ 3}=1+\frac{1}{5}+\frac{1}{7}+\frac{1}{11}+\frac{1}{13}\text{...}

Applying this eliminatory process to all of the prime numbers leaves

\left(\frac{1\ 2\ 4\ 6\ 10\ 12\ 16\ 18\text{...}}{2\ 3\ 5\ 7\ 11\ 13\ 17\ 19\text{...}}\right) x=1
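Euler’s elimination can also be checked numerically. The short sketch below (Python; the cutoff N is an arbitrary illustrative choice) compares the partial harmonic sum with the partial product of p/(p-1) over the primes up to N: both grow without bound, and at the same logarithmic rate, which is the content of the identity above (the constant gap between them at a finite cutoff is the subject of Mertens’ theorem).

# Numerical check: the harmonic series and the product of p/(p-1) over the
# primes both diverge, and they diverge at the same logarithmic rate.

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 10_000                                    # illustrative cutoff
harmonic = sum(1.0 / n for n in range(1, N + 1))

product = 1.0
for p in primes_up_to(N):
    product *= p / (p - 1)

print("harmonic sum :", harmonic)             # roughly log N + gamma
print("Euler product:", product)              # roughly e^gamma * log N (Mertens' theorem)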

Euler’s elimination is a thought-experiment -mere imagination- but if these eliminations could be performed in the physical world, they would result in the disappearance of any distinction between the form and the content of a coordinate system, and therefore in the shrinking of space and the slowing of time to a zero-dimensional point. With all of reality contracted to a zero-dimensional point, the distinction between the world and the mind that surveys it is lost. This is the singularity at the root of general relativity. It is not -as is often maintained- all of the mass in the universe compressed down to a point, an infinitely heavy object. This is a gross misconception resulting from the assumption of atomism, and from the prioritization of space over light, a misconception responsible for all that is ugly in contemporary physics, including black holes conceived as infinite at their centers (6), the incongruity of relativity and quantum mechanics (7), the infinities of quantum field theory (8), and the flat rotation curves of distant galaxies that seem to call for dark matter (9, 10)… to name a handful of significantly troublesome things. Rather, this singular state is a state such that all of the space and time -all the gaps and holes- in the universe are, from the perspective of this singularity, excised, leaving only infinitely concentrated light, an infinitely light object in both senses of the term “light”. On this account there are two forms of curvature, resulting either from an imbalance of light and space in favour of light (the most extreme form of this imbalance is the origin of the universe) or from an imbalance of light and space in favour of space (the most extreme forms of this imbalance are black holes).

The first idea we can take from Euler’s thought-experiment is that, since both prime-density and energy-density must at this point be infinite, the spatio-temporal development of the universe from a central singular point towards an ever-increasing state of decentralization is a process involving the distribution of the prime numbers. We can add that this process of decentralization takes place according to something known as the Riemann Hypothesis (11), which says that the thinning of the primes -the spreading of prime-energy over time and space- with arithmetic increase cannot exceed upper and lower bounds such as those marked in red and blue in the graphs below:

[Graphs comparing \pi (x) with the approximations \sum _{n=2}^x \frac{1}{\log n}-2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} (\log x)\right)\right) and \sum _{n=2}^x \frac{1}{H_n}-2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} (\log x)\right)\right), with the bounds marked in red and blue.]

The Generalized Riemann Hypothesis (12, 13) extends the Riemann Hypothesis by reference to the arithmetic progressions associated with the expression q n + a, where q and a have no common factor greater than 1. In a universe whose fundamental condition is an infinite state of prime and energy density, which is diffused from the point of view of any and every frame of reference according to the Generalized Riemann Hypothesis, time has a forwards direction associated with the loss of prime and energy density and a backwards direction associated with a gain in prime and energy density. Because the loss of prime-density predominates over any gains in prime-density, the direction of time is given by the GRH, and a balance of the two forms of curvature is maintained by the equality

\rho _n=\frac{1}{2}

The waveform (\frac{2 \left(\text{Re} \sum _{n=1}^x \text{Ei}\left(\rho _n (\log x)\right)\right)}{x}) we obtain by taking the difference between the number of primes not greater than x and an approximating formula such as

\left(\frac{\sum _{n=2}^x \frac{a_{1\ 1}}{H_n}+\sum _{n=2}^x \frac{a_{2\ 1}}{\log n}+\text{...}}{n}\right)

and dividing by x is a superposition of smooth waves:

-\frac{2 \left(\text{Re} \sum _{n=1}^1 \text{Ei}\left(\rho _n (\log x)\right)\right)}{x},-\frac{2 \left(\text{Re} \sum _{n=2}^2 \text{Ei}\left(\rho _n (\log x)\right)\right)}{x},-\frac{2 \left(\text{Re} \sum _{n=3}^3 \text{Ei}\left(\rho _n (\log x)\right)\right)}{x}

and, summing the first ten such waves, we can compare the superposition with the difference itself:

\frac{\pi (x)-\frac{\sum _{n=2}^x \frac{a_{1\ 1}}{H_n}+\sum _{n=2}^x \frac{a_{2\ 1}}{\log n}}{n}}{x},-\frac{2 \left(\text{Re} \sum _{n=1}^{10} \text{Ei}\left(\rho _n (\log x)\right)\right)}{x}
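These waves can be generated directly. The sketch below (Python, using the mpmath library; the sample points and the working precision are arbitrary illustrative choices) evaluates the first three individual zero-waves and their ten-term superposition at a few values of x. The coefficients a_{1 1}, a_{2 1} of the approximating formula are left unspecified in the text, so only the wave side of the comparison is computed here.

# Evaluate the individual "zero waves" -2*Re(Ei(rho_n*log x))/x and their
# ten-term superposition, using mpmath for the non-trivial zeros of zeta.

from mpmath import mp, zetazero, ei, log, re

mp.dps = 20                                   # working precision (illustrative)

def zero_wave(n, x):
    # -2*Re(Ei(rho_n * log x)) / x for the nth non-trivial zero rho_n.
    rho = zetazero(n)                         # 0.5 + 14.134i, 0.5 + 21.022i, ...
    return -2 * re(ei(rho * log(x))) / x

for x in (50, 100, 200, 400):
    first_three = [float(zero_wave(n, x)) for n in (1, 2, 3)]
    superposition = float(sum(zero_wave(n, x) for n in range(1, 11)))
    print(x, first_three, superposition)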

Or from another perspective on the same phenomenon, we can obtain the superposition 2 \left(\text{Re} \sum _{n=1}^{\infty } \frac{\left(p_x\right){}^{\rho _n}}{\rho _n}\right) by taking the difference between the primes themselves and an approximating formula such as

\frac{a_1 x H_x+a_2 x \log (x)+\text{...}}{n x}

\frac{2 \left(\text{Re} \sum _{n=1}^1 \frac{\left(p_x\right){}^{\rho _n}}{\rho _n}\right)}{x},\frac{2 \left(\text{Re} \sum _{n=2}^2 \frac{\left(p_x\right){}^{\rho _n}}{\rho _n}\right)}{x},\frac{2 \left(\text{Re} \sum _{n=3}^3 \frac{\left(p_x\right){}^{\rho _n}}{\rho _n}\right)}{x}

\frac{e^{2 \gamma } \left(\sqrt{\frac{p_x}{e^{2 \gamma }}}\right)^2-\frac{\int_1^x a_1 x (\log x+\gamma ) \, dn+\int_1^x a_1 x \log x \, dn+\left(\int_1^x a_2 x (\log x+\gamma ) \, dn+\int_1^x a_2 x \log x \, dn\right)}{2 x}}{x},\frac{2 \left(\text{Re} \sum _{n=1}^{10} \frac{\left(p_x\right){}^{\rho _n}}{\rho _n}\right)}{x}

The intimate connection between number theory and quantum field theory can be simply illustrated (14) by associating the creation operators \left(b_n\right){}^{\dagger } and \left(f_n\right){}^{\dagger } to the prime numbers p_n… Now we have identified the unique ‘factorization’ of a state into creation operators acting on the ‘vacuum’ with the unique factorization of an integer into prime numbers (and we have a hierarchy of states: |1> is the ‘vacuum’; |2> and |3> and |5> are one-particle states; |6> is a two-particle state… and so on). By reference to the Witten index (15) -the number of bosonic minus the number of fermionic zero-energy states- we see that the Möbius inversion function

\mu (n)=\begin{cases}1 & \text{if } n \text{ has an even number of distinct prime factors}\\-1 & \text{if } n \text{ has an odd number of distinct prime factors}\\0 & \text{if } n \text{ has a repeated prime factor}\end{cases}

is equivalent to the operator (-1)^F that distinguishes bosonic from fermionic states, with \mu (n) = 0 when n has a repeated prime factor being equivalent to the Pauli exclusion principle. If we re-express the Mertens function (which sums the 1s and -1s of the Möbius function) as \sum _{n=1}^{p_x} \mu (n), we see that sums of these states give us essentially the same composite spiral-wave as before.

\frac{2 \Re\left(\sum _{n=1}^1 \frac{\left(p_x\right){}^{\rho _n}}{\rho _n \zeta '\left(\rho _n\right)}\right)}{x},\frac{2 \Re\left(\sum _{n=2}^2 \frac{\left(p_x\right){}^{\rho _n}}{\rho _n \zeta '\left(\rho _n\right)}\right)}{x},\frac{2 \Re\left(\sum _{n=3}^3 \frac{\left(p_x\right){}^{\rho _n}}{\rho _n \zeta '\left(\rho _n\right)}\right)}{x}

\frac{2 \Re\left(\sum _{n=1}^{10} \frac{\left(p_x\right){}^{\rho _n}}{\rho _n \zeta '\left(\rho _n\right)}\right)}{x},\frac{\sum _{n=1}^{p_x} \mu (n)+2}{x}

Assuming that there are an equal number of non-zero-energy bosonic and fermionic states, this wave depicts the zero-energy fluctuations of these particles, the energy fluctuations of the vacuum. This is to say that the vacuum is the basis of everything -everything emanates from the vacuum- and that the vacuum is far from vacuous. There is here a distinction between symmetric superpositions such as \frac{2 \Re\left(\sum _{n=1}^{10} \frac{\left(p_x\right){}^{\rho _n}}{\rho _n \zeta '\left(\rho _n\right)}\right)}{x} (where the associated progression is arithmetically continuous) and the asymmetric superpositions familiar from quantum mechanics (where the associated progression is arithmetically discontinuous). These examples concern the case where q = 1 and a = 0, but we can easily construct approximating formulas and wave-forms for all the other possible values of q and a. The value \rho _n=\frac{1}{2}, associated as it is with a very particular balance of prime-density and sparsity, signifies the dividing line between classical objects that are constrained to travel in time in an arithmetic manner, and quantum objects that transcend arithmetic and are not so constrained.

A key aspect of this mathematical scheme is that we can take any arithmetic progression associated to q n + a and to an L-function and associate it to a formula such as \frac{\sum _{n=2}^x \frac{a_{1\ 1}}{H_n}+\sum _{n=2}^x \frac{a_{2\ 1}}{\log n}+\text{...}}{n} and a wave-form such as -2 \left(\text{Re} \sum _{n=1}^x \text{Ei}\left(\rho _{-n} (\log x)\right)\right). The first perspective is the arithmetic perspective, from which things are distributed, unit by unit, in mathematical and physical space and in time (the time perspective); the second perspective (the frequency perspective) is a “trans-arithmetic” perspective from which everything is in some sense always present. From the fact that the initial, foundational state of the universe is a state such that there is no space and time and no separation between abstract and/or physical units (a state such that prime and energy-density are infinite), it follows that the second perspective has priority over the first, and that the first perspective is a well-founded illusion -a projection- arising from the second and governed by the Riemann Hypothesis.
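The bookkeeping behind the boson/fermion analogy above is easy to compute. The sketch below (Python; the cutoffs are illustrative choices) evaluates the Möbius function by factorization, exactly as the case-definition above prescribes, and accumulates the Mertens sum of its +1s and -1s.

# mu(n) as the (-1)^F of the analogy above: +1 for an even number of distinct
# prime factors (bosonic), -1 for an odd number (fermionic), 0 when a prime
# factor repeats (the "Pauli exclusion" case). Mertens accumulates the +1s/-1s.

def mobius(n):
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0              # repeated prime factor
            count += 1
        d += 1
    if n > 1:
        count += 1                    # one remaining prime factor
    return -1 if count % 2 else 1

def mertens(x):
    return sum(mobius(n) for n in range(1, x + 1))

print([mobius(n) for n in range(1, 13)])   # [1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0]
print(mertens(100))                        # 1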

The 2-Slit Experiment

In the fundamental experiment of quantum mechanics -the 2-slit experiment- particles are fired one by one at two slits and their arrival is registered on a screen. If we look -if a detector shines light on the particles as they pass through the slits- we get a pattern indicating that light can be broken down into discrete elements, and that these elements pass through the slits like bullets passing through a pair of windows; but if we don’t look, we get a pattern indicating that light is a continuous entity, and that it passes through the slits like water passing through a pair of sluice gates. This second kind of pattern is known as an “interference pattern”, and it consists of a series of dark and light bands: when the peak of one wave interferes with the trough of another they cancel (destructive interference), resulting in a dark band on the screen; and when the peaks of two waves interfere with each other they reinforce (constructive interference), resulting in a light band on the screen.

The great error of quantum mechanics, and the main reason people have struggled to make sense of it, is the assumption that the first pattern is not an interference pattern. This assumption is false. Contrary to widespread opinion, the first pattern is an interference pattern, but an interference pattern of the RH-friendly, symmetric type. To see what I mean, consider firstly what it is to be balanced (symmetric, classical), and what it is to depart from balance (asymmetric, quantum), by re-expressing the traditional equation for a circle of area 1 (\pi \left(\sqrt{\frac{1}{\pi }}\right)^2=1) as

\lim_{x\to \infty } \left(e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2\right)=1

Where the traditional equation fails by implying that an energy source located at the center of this unit-area circle is undiminished from center to circumference (it has either a zero or an infinite radius), the second provides us with a potentially infinite hierarchy of energy levels that are necessarily non-infinite and non-zero. Given that gamma is a special case of \zeta (s)-\frac{1}{s-1} for s = 1, we can go from \lim_{x\to \infty } \, e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1 to the more general

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1

If and only if s = 1, the unit above can be subject to an unlimited number of repetitions. If s is a real number greater than or less than 1, then this number of repetitions is strictly finite, and the superpositions associated to these strictly finite progressions are asymmetric.
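The limiting values asserted above can be checked numerically. The sketch below (Python, using the mpmath library; the cutoffs and the sample value s = 2 are illustrative choices) evaluates the unit at finite x, using Euler’s constant gamma in place of \zeta (s)-\frac{1}{s-1} when s = 1, and shows the approach to 1 as x grows.

# Evaluate e^{(s+1)*c(s)} * exp(-(s+1)*(sum_{n<=x} n^-s - integral_1^x n^-s dn)),
# where c(s) = zeta(s) - 1/(s-1) for s != 1 and c(1) = gamma (its limiting value).

from mpmath import mp, zeta, euler, exp, quad, mpf

mp.dps = 15

def unit(s, x):
    const = euler if s == 1 else zeta(s) - 1 / (s - 1)
    partial_sum = sum(mpf(n) ** (-s) for n in range(1, x + 1))
    integral = quad(lambda n: n ** (-s), [1, x])
    return exp((s + 1) * const) * exp(-(s + 1) * (partial_sum - integral))

for x in (10, 100, 1000):
    print(x, float(unit(1, x)), float(unit(2, x)))    # both columns tend to 1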

Consider secondly that all continuous arithmetic progressions, and all the continuously countable phenomena corresponding to them, can be described as arising from superpositions such as

\frac{2 \left(\text{Re} \sum _{n=1}^{10} \frac{\left(p_x\right){}^{\rho _n}}{\rho _n}\right)}{x}

So also can random phenomena such as the detected particles in a 2-slit experiment – one perspective is the familiar, arithmetic, perspective, and the other is the trans-arithmetic perspective as this concerns symmetric superpositions. When counting particles, we assume the uniformity and continuity of an arithmetic progression, and note that there is an associated error of square-root size. But this assumption is unwarranted. If instead we assume that there is a perfect balance of slit-1 and slit-2 particles, the square-root-sized error is transferred to the uniformity of the arithmetic progression. If the error gets any bigger or smaller than this, then the progression loses its continuity. At the root of the conservation of this error are the facts that, if we assume the uniformity and continuity of an arithmetic progression to count primes, there is an associated error of square-root size, and that although this error is apparently reduced by the addition of harmonic and logarithmic functions to a prime-finding formula such as

\frac{a_1 x H_x+a_2 x \log (x)+\text{...}}{n x}

it is during the course of these additions really being transferred to the arithmetic progression, and particularly to the uniformity of this progression. This error is an ineliminable element of arithmetic consciousness because, as we have seen, arithmetic consciousness emerges from it. But for the error, there is no arithmetic consciousness, and if and when the error becomes smaller or greater, arithmetic consciousness disappears. This is because, beneath these continuous arithmetic progressions (s = 1), there are discontinuous ones (s != 1), i.e. beneath the underlying symmetric superpositions (s = 1), there are asymmetric superpositions (s != 1) familiar from the pattern formed by the undetected particles in the 2-slit experiment.

Scalable QC has many more champions than detractors, but it must always have seemed, to minds wary of any proposal to build a tower that reaches to heaven, a highly dubious affair -like perpetual motion, or speed-of-light travel- too good to be true in a world governed by the arrow of time. And yet there is nothing in quantum mechanical theory as it is taught at the time of writing that forbids it, and so anyone who believes that QM is a more or less satisfactory theory would be rationally committed to the belief that scalable quantum computers can be built. But this is not a proof that QC is possible. Rather, it is a proof that QM is deeply flawed, a reductio ad absurdum of QM… The big problem -the insurmountable problem- for the billion-dollar industry is simply that scalable means arithmetically continuous; arithmetically continuous means asymmetric in the sense of the arrow of time leading from prime density to sparsity (the arithmetic perspective) and symmetric in the sense of the underlying superposition (the trans-arithmetic perspective); and this combination -asymmetric in the first sense, symmetric in the second- means classical. That is, “scalable” and “quantum” are incompatible terms, and “scalable quantum computing” is a contradiction in terms.

We have a very nice analogy at our disposal… There is a well-known limitation on the maximum share of the computing power of the bitcoin network that any one set of computers in the network can possess before it ceases to be decentralized. If any one group of users were to control more than 50% of the network’s mining hash-rate, then the possibility exists that they could set up a financial dictatorship, thereby destroying the feature of cryptocurrencies that makes them so attractive to those that oppose the inequitable distribution of wealth and power – decentralization.

Here it has been argued that there is a similar decentralization-born limitation on the building of scalable quantum computers, and that this arises from the limitation on the maximum share of the computing power of the “quantum network” underlying all human experience that can be possessed by any one set of classical devices, a share limited by the value

\rho _n=\frac{1}{2}

and by the Riemann Hypothesis. Public key cryptography, it follows, does the job it was devised to do. Kudos to Rivest, Shamir and Adleman, who used Fermat’s Little Theorem for something far more useful than anything his “big” theorem has heretofore been used for (16), and to Satoshi Nakamoto, who made public key cryptography into something far more than a clever way to stop bad people from stealing stuff (17).


REFERENCES

(1) Diffie, W, Hellman, M (1976), New directions in cryptography

(2) Wiles, A (1995), Modular elliptic curves and Fermat’s Last Theorem

(3) Shor, P (1994), Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer

(4) Derbyshire, J. (2004), Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics

(5) Euler, L (1737), Various observations concerning infinite series

(6) Wald, R (1997), Gravitational Collapse and Cosmic Censorship

(7) Wald, R (1984), General Relativity

(8) Feynman, Richard (1948), A Relativistic Cut-Off for Quantum Electrodynamics

(9) Rubin, V et al (1980), Rotational Properties of 21 Sc Galaxies with a Large Range of Luminosities and Radii from NGC 4605 (R = 4kpc) to UGC 2885 (R = 122kpc)

(10) de Swart, J. et al (2017), How dark matter came to matter

(11) Riemann, G (1859), On the Number of Primes Less Than a Given Magnitude

(12) Dirichlet, P (1837), Proof of the theorem that every unbounded arithmetic progression, whose first term and common difference are integers without common factors, contains infinitely many prime numbers

(13) Davenport, H (2000), Multiplicative number theory

(14) Spector, D (1990), Supersymmetry and the Möbius Inversion Function

(15) Witten, E (1982), Constraints on supersymmetry breaking

(16) Rivest, R, Shamir, A, and Adleman, L (1978), A Method for Obtaining Digital Signatures and Public-Key Cryptosystems

(17) Nakamoto, S (2009), Bitcoin: A Peer-to-Peer Electronic Cash System