On Bitcoin, Cryptocurrencies, and the Decentralization of Wealth (1.2.2)

Abstract Like all revolutionary ideas, cryptocurrencies are a young person’s game, and those who claim them to be a “bubble”, and who seize on every downturn in the price of bitcoin as evidence that the bubble is about to burst, are primarily old men, too steeped in what they were taught in their youth to think outside these strictures. There are rare exceptions, but the mind, like the body, loses flexibility as it ages (this is why most of the great mathematical discoveries are made by people in their twenties), so it is not really their fault. They are simply past the age at which the grasping of radically new concepts is possible. Of course the bitcoin bubble never bursts; it only gets fatter, embarrassing anyone who attempts to set a date on which the value of the coin will go to zero. “A new scientific truth”, said Max Planck, the founder of quantum mechanics, “does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” Eventually crypto’s opponents will die out, by which time the superiority of crypto over fiat will be manifest, and -I predict- we will have a global cryptocurrency… Here the mathematical basis of this viewpoint is summarised.


Bitcoin and Cryptographic Currencies

How does one quickly explain cryptocurrencies, of which bitcoin is the first and the most well known? They are digital currencies whose value is based on the difficulty of solving computational problems belonging to a certain class (easy in one direction and hard in the inverse direction). One such problem is the maintenance of the ledger recording all transactions involving these currencies, known as the “blockchain”, and for contributing processing power to this task, computers are rewarded with “coins”. This process is called “mining”, playing perhaps on the metaphor that bitcoin is digital gold. Once mined, coins can be bought and sold, and used to buy and sell, like any other unit of currency (1). There are many differences between fiat money and cryptographic currencies, but the one with which we shall be concerned here is that the former, but not the latter, are under the tight control of central authorities -notably banks of one kind or another. Bitcoin, Ethereum and company are decentralized, because the ledger doesn’t exist in one place, but is spread across the network of computers constituted by the users of the cryptocurrency.
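The “easy in one direction, hard in the inverse direction” asymmetry can be made concrete with a small proof-of-work sketch in Python. This is not Bitcoin’s actual mining code (Bitcoin hashes block headers twice with SHA-256 against a compactly encoded target); it is a minimal illustration of the principle that finding a solution takes brute-force search while checking one takes a single hash:

import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    # Search for a nonce whose SHA-256 hash falls below a target:
    # expensive, because nothing beats trying nonces one by one.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Checking a claimed solution costs a single hash.
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = mine(b"example transactions", 16)          # ~2^16 hashes to find
assert verify(b"example transactions", nonce, 16)  # one hash to check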

Economist Felix Martin calls Bitcoin “hard money” because there is a fixed number of bitcoins, and claims that, since the usefulness of hard money “waxes and wanes”, its appeal will be short-lived (2):

An economy’s creditors – those who hold financial claims on other people, when everything’s netted out – lose when the standard monetary unit buys less stuff. Its debtors, by the same token, gain. The trouble is that… the distribution of creditors and debtors throughout society changes radically over time. As a result, the fairness and efficiency of a hard money standard waxes and wanes as well.

“That”, he says, “is not a statement of opinion. It is a statement of historical fact.” The objection that bitcoin is ‘hard money’ can be brushed aside. It is the result of an elementary misunderstanding of the cryptographic currency scene by Martin, who seems to be unaware of the relationship between bitcoin and the other cryptocurrencies called altcoins: an altcoin, whose value derives from its proportion of a bitcoin, can be created at will and in any desired quantity. It is therefore no more difficult to create new digital money, despite the limited supply of bitcoins, than it is to create paper money despite the limited supply of gold or any other finite resource. Thus bitcoin can serve either a deflationary or an inflationary economy. A more important objection to Martin’s reasoning concerns his assumption that the tension between creditors and debtors follows a random trend, that, like the heads and tails of a tossed coin, the one is no more favored than the other. We now make two observations that pose a challenge to this stance. The first is simply that the changing ratio of creditors to debtors corresponds to the changing ratio of centralizing to decentralizing economic influences. When a few individuals or groups hold the ‘purse strings’, this is a state of centralization. For example, the creators of the now defunct Isracoin observed that the majority of the wealth of Israel is controlled by 20 families and 5 banks, and conspiracy theorists maintain that the majority of the world’s wealth is controlled by a few families and banks… Suppose for the sake of argument that a society consists of 100 individuals, and that its wealth consists of 100 gold coins. An extreme state of centralization is one in which all of these coins belong to 1 person; an extreme state of decentralization is one in which each member of the society has 1 gold coin. In the latter case there can be no creditors and debtors, because no one can afford to be a creditor. The second, more complex, observation is that centralization diminishes as a function of time, meaning that we are headed toward a state in which there are no creditors and debtors.
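To put a number on the toy society: a standard yardstick for the concentration of a wealth distribution (my choice of measure, not something the argument above depends on) is the Gini coefficient, which is 0 at perfect equality and approaches 1 at total concentration:

def gini(wealth):
    # Gini coefficient: 0 = perfect equality, ~1 = total concentration.
    w = sorted(wealth)
    n, total = len(w), sum(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * total) - (n + 1) / n

decentralized = [1] * 100        # each of 100 people holds 1 coin
centralized = [0] * 99 + [100]   # one person holds all 100 coins

print(gini(decentralized))  # 0.0
print(gini(centralized))    # 0.99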

The Paradox of Parliamentary Sovereignty

We can begin to explain why this is so, first by reference to an analysis of the influential three-part definition of Parliamentary Sovereignty given by A.V. Dicey (3):

1. Parliament may legislate on any topic

2. No Parliament can bind its successors or be bound by its predecessors

3. Nobody may challenge the validity of an act of Parliament

Principles 1 and 2 are contradictory: if Parliament can, as Dicey says, legislate on any topic, then it can legislate on the topic of its own future powers. Principles 1 and 3 are contradictory for essentially the same reason: if Parliament can legislate on any topic, then it can legislate to challenge the validity of an act of Parliament. Suppose that Parliament makes a law. Does Parliament possess sufficient power that this law will stand for all time? If it does, then there is a limit on Parliament’s power to re-shape the past. If it does not, then there is a limit on Parliament’s power to shape the future. In either case Parliament is not, and cannot be, sovereign in Dicey’s sense. The same can be said for the Supreme Court, the Court of Appeal, for any court in the hierarchy of courts, and for any authority that makes or interprets the law. An authority that has little or no power to make the law is strongly governed by restraining influences from the past (rigidity), whereas Parliament, as a predominant law-making authority, is to a greater extent free to overturn the past (flexibility). In practice there may be little difference between the code-based ‘civil’ legal systems of Europe and the non-code-based ‘common law’ systems deriving from Britain, but in theory the former favour rigidity while the latter favour flexibility. This tension between rigidity and flexibility arises from the limits on the power of any legislative authority. To be precise, there is an inversely proportional relationship between the ability of an authority to influence the future and the influences acting on that authority from the past. It follows that no one person or body is sovereign, but that sovereignty is shared between various people and bodies. There is, we might say, a spectrum of sovereignty, ranging from an authority -such as the Magistrates’ Court- that is relatively constrained by past and by future authorities, to an authority -such as Parliament- that is relatively unconstrained. Although it lies towards the unconstrained end of this spectrum, Parliament holds a shifting and uncertain status, depending on the powers of the more or less subordinate legislators also occupying positions within this spectrum.

From the failure of Dicey’s definition we see that there are two limiting principles governing the exercise of power. The first principle is that an actor’s power to influence the future is inversely proportional to the influences that flow from the past.

This principle creates an ineliminable and unbalanced self-tension between rigidity and flexibility, familiar in the jurisprudential context from the contrast between civil and common law, and from disputes within the UK common law system concerning how strictly the doctrine of precedent should be applied. We may picture this tension as a series of waves oscillating about an axis representing the half-way mark between the impossible extremes of complete rigidity and complete flexibility, sometimes crossing the axis towards rigidity, and sometimes crossing it in the other direction towards flexibility. The second -related- limiting principle is that an actor’s power to influence the future is inversely proportional to the number of other independent actors with whom they share power. The greater the number of independent powers, the less power can be wielded by any one actor. The effect of adding powers is to limit the size of the shifts of the overall system between rigidity and flexibility. These shifts might be dramatic if a single man or group of men held the majority of the power, but they are reduced with the addition of independent powers. This analysis allows us to see that the failure of Dicey’s Legislative Supremacy and the loss of power that occurs with the application of Baron de Montesquieu’s Separation of Powers (4) are aspects of the same thing, the difference being that the loss of power in the first case is caused by the self-opposition of past and future parliamentary bodies. The unifying idea is that shared power is diminished power. Clearly there are not 1 or 2 or 3 but numerous powers, yet it is possible to capture this entire complex of relationships with a simple model:

We start with an idealised balance of rigidity and flexibility that is perpetually tipped in one direction or another, but never to the extreme at which change ceases or becomes infinite. Every change is a shift, however small, in rigidity/flexibility, and so the model covers anything that can happen in a political state, or indeed in any physical state. We go on to observe that the strength of these shifts in the balance of rigidity and flexibility is inversely proportional to the number of independent powers, some of which are past and future incarnations of the same actor. The smaller the number of independent powers, the greater the potential for dramatic shifts, and conversely. The second principle is a restatement of the idea behind the Separation of Powers.
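As an illustration only (the model above is qualitative, and the uniform ±1/N influence assumed here is a simplification of my own), a short simulation shows the second principle at work: splitting a fixed total influence among N independent powers shrinks the typical net swing between rigidity and flexibility roughly as 1/√N:

import random

def net_shift(n_powers, trials=10000):
    # Average magnitude of the net shift in the rigidity/flexibility
    # balance when total influence is split among n independent powers,
    # each pushing randomly toward rigidity (-) or flexibility (+).
    total = 0.0
    for _ in range(trials):
        shift = sum(random.choice((-1, 1)) / n_powers for _ in range(n_powers))
        total += abs(shift)
    return total / trials

for n in (1, 4, 16, 64):
    print(n, round(net_shift(n), 3))
# A single dominant power swings the balance fully (1.0); sixty-four
# independent powers cut the typical swing to roughly 0.1 (~ 1/sqrt(n)).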

The Riemann Hypothesis

Examining history for trends is -like all empirical exercises- dangerous, for it is difficult to know whether a trend is limited to a time and place. The problem is that we cannot, by observation alone, see the whole of history; we can only see tiny cross-sections of it. The trends that are the most useful, and that should be most sought after, are those that apply everywhere, that are immune from the vicissitudes of locality, and the best place to look for such trends is in the realm of mathematics. To see that the decentralization of wealth and power over time is inevitable, we consider an idea originating in a mathematics paper published in 1859 (5). In this work, by a then little-known mathematician named Bernhard Riemann, it is stated that the real part of the non-trivial zeros of the zeta function is “very probably” always equal to 1/2. This statement has come to be known as the “Riemann Hypothesis”.

Expressed in this way, the RH is incomprehensible (except to a relatively small subset of those with post-graduate degrees in mathematics), but beneath the esotericism it is a simple statement about the prime numbers (numbers divisible only by themselves and 1) and the rate at which these thin out as the number line continues toward infinity. It was proven in 1896 (7, 8) that the number of primes not greater than x is, asymptotically as x → ∞,

\frac{x}{\log x}

The RH is equivalent to the stronger claim that the number of primes not greater than x is

\sum _{n=2}^x \frac{1}{\log n}

plus or minus an error on the order of the square root of x.
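Both claims are easy to probe numerically. The sketch below uses sympy’s primepi for the exact count; note that the precise RH-equivalent error bound is of order √x·log x, so “the square root of x” should be read as an order of magnitude:

from math import log, sqrt
from sympy import primepi

for x in (10**3, 10**4, 10**5, 10**6):
    actual = int(primepi(x))
    pnt = x / log(x)                                   # Prime Number Theorem
    li_sum = sum(1 / log(n) for n in range(2, x + 1))  # the sum above
    print(x, actual, round(pnt), round(li_sum), round(sqrt(x)))
# The sum tracks pi(x) to within roughly sqrt(x), while x/log x
# drifts off by several percent.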

The RH is to be solved by re-expressing the traditional equation for a circle of area 1 as

\lim_{x\to \infty } e^{2 \gamma } \left(\sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1

which can then be extended in the following way:

\lim_{x\to \infty } e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right)^{\frac{1}{s+1}}\right)^{s+1}=1

This extended equation involves a significant division between s = 1 and real values of s other than 1, for if and only if s = 1 does

\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn

not reach the limit

e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}
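Whatever one makes of this division, the behaviour of the sum-minus-integral expression itself is easy to check numerically: for real s > 0 with s ≠ 1 it converges to ζ(s) − 1/(s − 1), while at s = 1, where 1/(s − 1) has no finite value, it converges to Euler’s constant γ, the limit of ζ(s) − 1/(s − 1) as s → 1. A quick check with mpmath:

from math import log
from mpmath import zeta, euler

def sum_minus_integral(s, x):
    # Sum_{n=1}^{x} 1/n^s  minus  Integral_1^x dn/n^s.
    total = sum(1 / n**s for n in range(1, x + 1))
    integral = log(x) if s == 1 else (1 - x**(1 - s)) / (s - 1)
    return total - integral

print(sum_minus_integral(1, 10**6), float(euler))        # both ~0.57722
print(sum_minus_integral(2, 10**6), float(zeta(2) - 1))  # both ~0.64493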

The division allows us to distinguish between long-ranged (infinite primes) and short-ranged (finite primes) progressions. For example:

If and only if the number line is long-ranged is it the case that the number of primes not greater than x is within the bounds prescribed by the RH. Hence the error term is what it appears to be, and what Riemann originally said it was.

To see that the RH is not -as it is likely to appear- a difficult but trivial puzzle of interest only to pure mathematicians, but that it is the governing principle of the world as we know it, and more particularly that it enforces the decentralization of wealth and power over time, consider something that Richard Feynman once said about entropy (6). He invited us to imagine that atoms are divided into blue-coloured and white-coloured varieties, and separated into compartments. If the separation is removed, then just as blue dye turns water a pale blue colour, the atoms form a pale blue mixture. He observes that individual collisions provide no clue as to the irreversibility of the mixing process, but that studying a film of the mixing played in reverse reveals that

…every one of the collisions is absolutely reversible, and yet the whole moving picture shows something absurd, which is that in the reverse picture the molecules start in the mixed condition… and as time goes on, through all the collisions, the blue separates from the white…

He went on to say that

…it is not natural that the accidents of life should be such that the blues will separate themselves from the whites…

The one-way nature of this process is reflected in the experiment in which a gas is confined to one of two compartments. If the separation between the compartments is removed, then the gas spontaneously distributes itself in a uniform manner throughout the two compartments, but it does not spontaneously revert to the separated state. More familiar still is the breaking of an egg. We never see a broken egg spontaneously reassemble, and there is no way to reassemble an egg after it has been broken (“All the king’s horses and all the king’s men couldn’t put Humpty together again.”). But although both these processes involve a one-way direction when viewed from a sufficiently global perspective, they can go either way when viewed from a sufficiently local perspective: the individual atoms comprising the gas molecules might just as well go from compartment B to compartment A as from compartment A to compartment B, and if we study the individual atoms comprising Humpty Dumpty we get no clue as to the fact that Humpty cannot be reassembled. If we play a film depicting the un-breaking of an egg or the increase of the density of the prime numbers in the number line, we see something that looks absurd, and Feynman had no better explanation for this impression of absurdity in the first case than ‘the accidents of life’. But the impression of absurdity attached to an increase in the density of the prime numbers as we count down the line isn’t an ‘accident’ at all: it is mathematically necessary that the repetition of a unit be accompanied by a global decrease in density. This is the Prime Number Theorem. The Riemann Hypothesis is an extension of the Prime Number Theorem: it says, not merely that the primes thin out globally, but that the number of primes not greater than x cannot exceed upper and lower bounds built from the following expressions -the exact count, and two approximations corrected by a superposition of waves contributed by the zeta zeros (marked in red and blue in the original graphs):

\pi (x)

\sum _{n=2}^x \frac{1}{\log n}-2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} \log x\right)\right)

\sum _{n=2}^x \frac{1}{H_n}-2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} \log x\right)\right)

If we suppose the form of the graph depicting shifts in power (based on the paradox of parliamentary sovereignty and the idea behind the separation of powers) to be given by the superposition

2 \left(\text{Re} \sum _{n=1}^{\infty } \text{Ei}\left(\rho _{-n} \log x\right)\right)

we find that we obtain this superposition by taking the difference between the approximating formula for the number of primes not greater than x

\frac{\sum _{n=2}^x \frac{a_{1,1}}{H_n}+\sum _{n=2}^x \frac{a_{2,1}}{\log n}+\text{...}}{n}

and the actual number of primes not greater than x.
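The wave superposition can be evaluated directly. The sketch below (not from the original essay) uses mpmath: zetazero gives the nth zero ρ_n = 1/2 + iγ_n, and since Li(x^ρ) = Ei(ρ log x), ei with a complex argument supplies each wave. To keep the comparison exact it is applied to Riemann’s prime-power count Π(x) = Σ_{p^k ≤ x} 1/k, for which the explicit formula reads Π(x) ≈ li(x) − Σ_ρ li(x^ρ) − log 2, up to a vanishingly small integral term and the truncation of the zero sum:

import mpmath as mp
from sympy import primepi

# First 30 pairs of nontrivial zeros: rho_n = 1/2 + i*14.1347..., etc.
zeros = [mp.zetazero(n) for n in range(1, 31)]

def zero_waves(x):
    # 2 * Re sum_n Ei(rho_n log x): the superposition of waves, with
    # each conjugate pair of zeros folded into the factor of 2.
    lx = mp.log(x)
    return 2 * mp.fsum(mp.ei(rho * lx).real for rho in zeros)

def big_pi(x):
    # Riemann's weighted count: sum over prime powers p^k <= x of 1/k.
    return sum(int(primepi(int(x ** (1.0 / k)))) / k
               for k in range(1, int(mp.log(x, 2)) + 1))

for x in (100, 1000, 10000):
    approx = mp.li(x) - zero_waves(x) - mp.log(2)
    print(x, round(big_pi(x), 2), round(float(approx), 2))
# Subtracting the waves pulls the smooth curve li(x) down onto the
# staircase of the actual counts.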

To the naked eye, the distribution of the stars in the night sky seems to be random, but looking through a telescope we realize that galaxies have a spiral shape, light-dense toward the center of the galaxy, and increasingly dark at distances further away from the center.

The same is true of the primes in the number line: dense at the start, and increasingly sparse at distances further away from the start.

It is only by considering a sufficiently large group of stars and primes, and a sufficiently large group of particles, that the loss of energy-density known as ‘entropy’ is found to involve a one-way direction known as the arrow of time. In fact there are multiple arrows of time. Well known are the thermodynamic arrow arising from the loss of heat, the cosmological arrow arising from the expansion of the universe, the radiative arrow arising from the expansion of waves outwards from their source, the causal arrow arising from the fact that effects follow causes rather than precede them, the quantum arrow arising from the collapse of the wave-function, and the psychological arrow of consciousness arising from the fact that we remember the past while the future is unknown… Less well known is the genetic arrow, which arises from the loss of mutability of DNA with each generation, a consideration that explains anomalous results such as the apparent mismatch between Y-DNA extracted in 2014 from the skeleton of the English King Richard III and that of his contemporary paternal relatives (9), and which points to the Theory of Evolution being a special case of a larger, more sophisticated theory (Darwinism depends on symmetrical DNA mutation rates). The arrows of time with which we are concerned here are the computational arrow (in virtue of which computational problems such as those underlying cryptocurrencies are easy in the one -forwards- direction and hard in the inverse direction) and the economic arrow (in virtue of which wealth and power tend to be decentralized over time), but the arrow that contains and explains all the others is the arithmetic arrow. The first person to hint at the possible unification of all of these arrows was Euler (10), who noted that the product continued to infinity of this fraction

\frac{2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13\cdot 17\cdot 19\text{...}}{1\cdot 2\cdot 4\cdot 6\cdot 10\cdot 12\cdot 16\cdot 18\text{...}}

in which the numerators are the prime numbers and the denominators are one less than the numerators, equals the sum of the infinite series

1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\text{...}

and they are both infinite. To prove his point, Euler invites us to imagine the extraction from the second series of a prime denominator and of all remaining multiples of that prime denominator, until everything except the first term 1 has been eliminated. Let

x=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\text{...}

Dividing both sides by 2 gives

\frac{x}{2}=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\frac{1}{8}+\text{...}

and subtracting this from x eliminates every denominator divisible by 2.
This leaves

\frac{x}{2}=1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{9}+\text{...}
To eliminate the denominators that are divisible by 3, we divide both sides by 3 to get

\frac{x}{2\cdot 3}=\frac{1}{3}+\frac{1}{9}+\frac{1}{15}+\frac{1}{21}+\text{...}

Subtracting this again eliminates all remaining denominators that are multiples of 3, leaving

\frac{2 x}{2\cdot 3}=1+\frac{1}{5}+\frac{1}{7}+\frac{1}{11}+\frac{1}{13}+\text{...}

Applying this eliminatory process to all of the prime numbers leaves

\left(\frac{1\cdot 2\cdot 4\cdot 6\cdot 10\cdot 12\cdot 16\cdot 18\text{...}}{2\cdot 3\cdot 5\cdot 7\cdot 11\cdot 13\cdot 17\cdot 19\text{...}}\right) x=1
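Euler’s elimination can be mimicked on a truncated series (the truncation at N is my own device; Euler’s argument is purely formal). Striking out every multiple of every prime up to P strips the harmonic sum down toward its first term:

from sympy import primerange

def surviving_sum(N, P):
    # Sum of 1/n over n <= N after deleting every multiple of every
    # prime <= P: Euler's elimination applied to a truncated series.
    primes = list(primerange(2, P + 1))
    return sum(1 / n for n in range(1, N + 1)
               if all(n % p for p in primes))

N = 10**4
for P in (1, 2, 3, 5, 7, N):
    print(P, round(surviving_sum(N, P), 4))
# P=1 leaves the full harmonic sum (~9.79); each prime eliminated
# shrinks it, and once every prime up to N has been struck out only
# the first term survives: the sum is exactly 1.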

This is a thought-experiment -mere imagination- but if these eliminations could be performed in the physical world, they would result in the disappearance of any distinction between the form and the content of a coordinate system, and therefore in the shrinking of space and the slowing of time to a zero-dimensional point. With all of reality contracted to a zero-dimensional point, the distinction between the world and the mind that surveys it is lost. The idea we can take from Euler’s thought-experiment is that, since both prime-density and energy-density must at this point be infinite, the development of the universe from this central singular point towards an ever-increasing state of de-centralization is a process involving the distribution of the prime numbers… Once this connection is made, we see that all of our arrows are subject to the same dynamics as the arithmetic arrow constituted by the number line and governed by the RH (more particularly by the Generalized Riemann Hypothesis (11, 12), which concerns arithmetic progressions of the form q n + a, where q and a have no common factor greater than 1), and that all exhibit the same tendency to decentralization.
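For a concrete view of the progressions that the GRH governs, one can count the primes falling into each residue class a mod q with gcd(a, q) = 1. Dirichlet’s theorem (11) guarantees every such class infinitely many primes, and the GRH controls how quickly the classes even out; q = 12 below is an arbitrary choice:

from math import gcd
from sympy import primerange

q = 12
counts = {a: 0 for a in range(q) if gcd(a, q) == 1}  # classes 1, 5, 7, 11
for p in primerange(q + 1, 10**6):
    counts[p % q] += 1
print(counts)
# The four admissible classes receive the primes below one million in
# nearly equal shares.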


Martin and all of the critics of bitcoin and cryptocurrencies have failed, amongst other things, to realize the irreversibility of the trend of wealth and power toward decentralization, and so they have failed to realize that cryptographic currencies are aspects of an arrow pointing in the direction that we are -whether we like it or not- going. Ironically, the analogy Martin draws between the birth of the modern banking system and bitcoin supports the existence of this decentralizing trend:

Sovereign governments everywhere are petrified. An ingenious new invention that allows people to make payments across borders without leaving a trace in the official monetary system is spreading like wildfire. Its workings are so clever that few understand them. It’s backed by some of the leading entrepreneurs of the day. The embattled establishment is warning that the state’s right to regulate finance is being undermined.

That may sound a lot like bitcoin in 2014. But, in fact, it’s the story of a much earlier episode of monetary innovation: the birth of modern banking in sixteenth century Europe.

The banks at that time played a decentralizing role, and took power away from governments. Today cryptographic currencies are playing a decentralizing role and taking power away from the banks. But this power is not merely swapped from the control of one broker to another; rather, it is shared amongst a larger number of independent brokers. With the execution of Charles I, and the rise of Parliament as the supreme authority in Britain, the power of the royal Stuart line was not transferred to an alternative central authority in the way that one king inherits the power of his predecessor. Rather, the outcome of the conflict between the Royalists and the Parliamentarians was that power was shared between a larger number of brokers. It was more equitably distributed. Bitcoin and cryptographic currencies may be seen as a further step on the way toward a state of decentralization. Any single cryptocurrency may fail (although bitcoin, given its foundational status, is unlikely to be amongst the failed coins), but all bets against the ultimate replacement of fiat by one cryptocurrency or another are bets against the winning horse.



(1) Nakamoto, S (2009), Bitcoin: A Peer-to-Peer Electronic Cash System

(2) Martin, F (2014), Bitcoin is Pointless as a Currency, But it Could Change the World Anyway

(3) Dicey, A (1885), Introduction to the Study of the Law of the Constitution

(4) Montesquieu, Baron de (1748), The Spirit of the Laws

(5) Riemann, B (1859), On the Number of Primes Less Than a Given Magnitude

(6) Feynman, R (1967), The Character of Physical Law

(7) Hadamard, J (1896), On the distribution of the zeros of the zeta function and some consequences for arithmetic

(8) de la Vallée Poussin, C (1896), Analytical research on the theory of prime numbers

(9) King, T, et al (2014), Identification of the remains of King Richard III

(10) Euler, L (1737), Various observations concerning infinite series

(11) Dirichlet, P (1837), Proof of the theorem that every unbounded arithmetic progression, whose first term and common difference are integers without common factors, contains infinitely many prime numbers

(12) Davenport, H (2000), Multiplicative number theory