Abstract In The Emperor's New Mind, and in the later Shadows of the Mind, Sir Roger Penrose argues by appeal to Gödel's Theorem that human intelligence is not purely computational. Basically the argument is this: given that every minimally complex and consistent system of proof (such as that implemented by the average PC) contains sentences that are true but unprovable in that system, there are truths comprehensible to the human mind that cannot be programmed into a computer, from which it follows that human intelligence is not purely computational. Penrose's argument -sometimes known as the "Penrose-Lucas argument"- has been heavily criticized by mainstream academia, and its conclusion rejected. In this note the argument is defended by identifying Gödel's Theorem as a member of a broader class of statements whose incomprehensibility by any artificial computational device is unambiguous. Moreover, it is argued that natural quantum computers are not subject to the limit that puts these statements beyond the reach of artificial computational devices, and so that Penrose's proposal that the human mind has a quantum computer at its disposal is the appropriate explanation of why human mathematicians succeed where man-made computers are bound to fail.
PART I: THE P VERSUS NP PROBLEM
The Big Bang Theory
In one episode of the popular TV sitcom The Big Bang Theory, several of the supposedly super-intelligent main characters are playing a quiz game:
Leonard (PhD, IQ 173): How long is a galactic year?
Rajesh (PhD): 250 million years…
Howard (M.Eng.): Which Archimedean solid has 20 regular triangular faces, 30 square faces, 12 pentagonal faces,
60 vertices, and 120 edges?
Leonard, Rajesh, Sheldon (PhD, IQ 187): The Rhombicosidodecahedron!
Rajesh: We are so smart…
Yes and no. From the beginning, the show has promoted various false ideas and values, and chief among these is a notion of intelligence according to which intelligence is a matter of memory. The sign of the highly intelligent person by this criterion is someone -such as the lead character Sheldon Cooper- who has at their intellectual fingertips a host of facts and formulas. Perhaps no human being in history was by this criterion more intelligent than Kim Peek, the so-called "real Rain Man". But this is the passive form of intelligence, the form of intelligence possessed by the idiot savant and by the successful game show contestant. By this criterion, nothing is smarter than a well-programmed computer, and there are a number of Big Bang Theory story-lines carrying the implication that the human mind is a computer, and hence passive.
Kant drew a useful distinction between 'analytic' and 'synthetic' propositions, where the first are those of the form "All bachelors are unmarried" and the second of the form "Tom is a bachelor". Analytic propositions analyze the contents of a concept or term in the way that 'All bachelors are unmarried' analyzes the concept of 'bachelor'; synthetic propositions synthesize concepts in the way that 'Tom is a bachelor' synthesizes the concepts of 'Tom' and 'bachelor'. This distinction captures the idea that passive intelligence -which repeats what it has learned- tends to analysis, whereas active intelligence -which links disparate-seeming phenomena- tends to synthesis. Newton's identification of the motion of the planets with the behaviour of terrestrial objects, and Einstein's identification of energy with mass, were triumphs of synthetic intelligence. The active form of intelligence is rarely on display on The Big Bang Theory, and yet this is the sense in which Newton, Einstein, and Feynman -some of the show's biggest intellectual heroes- were intelligent. Newton and Einstein were both mediocre students at times, and Feynman's IQ was once measured at a mere 125, but all solved previously unsolvable problems by the creation of new techniques, and Newton and Einstein in particular revised the ideas of their predecessors in profound ways. They did not possess memories as retentive as Sheldon Cooper's or Kim Peek's (Feynman's memory, by his own admission, was poor), but they were immeasurably superior where it counts: they were actively intelligent. Active intelligence is difficult to test for because, at its best, it reveals itself by solving problems for which the solutions were not previously known, and so it is difficult for a tester to pose a problem whose solution they would recognize as such if the testee actually solved it.
Very few people, for example, could appreciate why Andrew Wiles' proposed solution of Fermat's Last Theorem is correct, and so there would be no point in their asking him for his solution. Similarly with Grigori Perelman's proposed solution of Thurston's Geometrization Conjecture. The following is the kind of question typically to be found in an IQ test:
It's a maths question which asks us to subtract 6 from a certain location on a 12-hour clock to arrive at a new location. Since the answers to such questions are known, IQ questions all have the property that they can be successfully answered using known techniques. What about the question of whether there are 3 positive integers capable of satisfying the equation x^n + y^n = z^n for any positive-integer value of n greater than 2? This was first asked by Fermat in the 17th century, and although it sounds simple enough, it was not answered for 300 years, and then only by the use of mathematics far in advance of the mathematics of the time in which it was posed. To answer a question that cannot be answered using known techniques, one needs to create or discover a new technique. There is then a fundamental difference between IQ-test-style questions -which can all be answered using well-known, indeed extremely well-known, techniques- and questions that can't. This difference concerns two jointly necessary but distinct forms of intelligence. To answer an IQ-test-style question one follows a set of rules that one has learned, and one arrives at the correct answer in the same way that one arrives at the correct destination by following the directions on a map. Errors occur only because of a failure to follow the rules. A computer could be programmed to sit a standard IQ test and effortlessly achieve 100% every time, and a well-programmed computer judged by this limited criterion of intelligence is more intelligent than any human being. It is perhaps this false consideration that has led to the idea of the coming 'technological singularity' (roughly, the imaginary point at which computer 'intelligence' outstrips human intelligence). But rules have to be established/discovered in the first place, and so this passive form of intelligence can't be exercised without the foundation provided by the rules themselves.
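The clock question is answered by a purely mechanical rule, and a machine can follow it exactly. A minimal sketch (the function name and the particular instance are illustrative, not taken from any actual test):

```python
# Clock-style IQ questions reduce to arithmetic modulo 12
# (12 is represented internally as 0, then mapped back).
def clock_subtract(position: int, hours: int) -> int:
    """Move `hours` backwards from `position` on a 12-hour clock."""
    result = (position - hours) % 12
    return 12 if result == 0 else result

# Hypothetical instance: 6 hours before 4 o'clock is 10 o'clock.
print(clock_subtract(4, 6))   # -> 10
```

Following the rule can never produce a wrong answer; it can only be misapplied.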
Plenty about the human mind is derivative, but there is a fundamental role for origination because, without origination, there is nothing to derive. That origination is prior to derivation is arguably the fundamental reason that the human mind cannot be a computational device – computational devices in the usual sense of the term “computational” are purely derivative, and thus unable to solve any problem that requires origination, but the great strength of the human mind is its capacity to originate rather than blindly follow what has been inculcated in it. It is this capacity that distinguishes man from animals, and from computers…
P versus NP
There is a deep problem on the borderland of computer science, logic, maths and physics known as 'P versus NP', which concerns the question of whether the class of decision problems whose solutions are quickly verifiable by a computer (NP) is the same as the class of problems that are quickly solvable by a computer (P). Historically the problem arose because certain problems seem to be hard to solve. More particularly, they seem to require a lot of time -an exponentially growing amount of time- to solve. An example of an NP problem that seemingly takes exponential time is Factoring. While it doesn't take long to factor 15 or 21, imagine trying to factor the 200 digit integer
You can easily check for yourself that it divides evenly into the primes
Although it takes a pocket calculator a split second to do the multiplication, it would take a single 2.2 GHz computer about 75 years to do the division. Factoring is one of numerous NP problems that are easy in the one direction, and seem to be hard in the other. We can argue for the conclusion that Factoring really is hard by considering the Travelling Salesman Problem (TSP), which is the problem of whether a salesman can visit a number of cities exactly once and return to a home-city for a certain cost.
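The asymmetry between the two directions can be seen even at toy scale; the primes 101 and 103 below are placeholders, not the 200-digit example:

```python
# Multiplication (verification) is fast; trial-division factoring (search) is slow.
def multiply(p, q):
    return p * q   # polynomial time in the number of digits

def factor(n):
    """Naive trial division: exponential time in the number of digits of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

print(multiply(101, 103))  # -> 10403, in one step
print(factor(10403))       # -> (101, 103), after ~100 trial divisions
```

Doubling the number of digits roughly squares the number of trial divisions required, which is the exponential growth the text describes.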
First we transform TSP into a problem of whether a computer (salesman) can execute some number of instructions (visit some number of cities) which executes every instruction exactly once (visits every city exactly once) before returning to a halt state (home-city) for some maximum cost:
An arbitrary computer is therefore working on the problem of whether an arbitrary computer will halt when run with an arbitrary set of instructions, and thus the point will be reached when the evaluation is a self-evaluation, i.e. the point will be reached at which the computer is attempting to determine of itself whether it will halt. If we associate to every city an instruction, this self-evaluative point will be reached when the number of cities on the tour is the same as the number of instructions in the program. This leads to a contradiction in the case that the number of cities is greater than the number of instructions, and by appeal to this contradiction, it follows that TSP involves a limit on the number of cities. This proves that TSP differs from problems -such as multiplication- which aren't sensitive to the size of the input, and that P and NP are not equal. More particularly… associate each atomic instruction in some finite set of instructions with some n-SAT formula, and associate each of these formulas to the vertices of a complete graph other than the start/stop vertex. Associate the halt state to this start/stop vertex. Let v be the number of variables per clause, and if the instruction doesn't result in the machine going into an infinite loop, weight the vertex accordingly. If the instruction does result in the machine going into an infinite loop, weight the offending vertex as 0 + 1. Weight the halt vertex as 1. The minimum truth-density and the maximum falsity-density of a satisfiable n-SAT instance give the maximum imbalance between truth-density and falsity-density corresponding to a Travelling Salesman's circuit that is within budget.
Let n be the number of instructions/vertices, and we can conclude that
- If and only if it is possible to visit every vertex exactly once before returning to a halt vertex for a cost n without upsetting the balance of minimum truth-density/maximum falsity-density, there is some Turing machine that will not infinite-loop when run with some input.
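For comparison, here is the standard TSP decision problem brute-forced directly: verifying one candidate tour against the budget is fast, but the search ranges over (n-1)! tours. The 4-city distance matrix is a hypothetical instance:

```python
from itertools import permutations

# Standard TSP decision problem: can the salesman visit every city exactly
# once and return home for total cost <= budget?  Checking one tour is fast;
# searching all (n-1)! tours is not.
def tsp_within_budget(dist, budget):
    n = len(dist)
    for tour in permutations(range(1, n)):          # fix city 0 as home
        path = (0,) + tour + (0,)
        cost = sum(dist[a][b] for a, b in zip(path, path[1:]))
        if cost <= budget:
            return True
    return False

# Hypothetical 4-city instance (symmetric distance matrix).
dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
print(tsp_within_budget(dist, 7))   # True: tour 0-1-2-3-0 costs 1+2+1+3 = 7
print(tsp_within_budget(dist, 5))   # False: no tour is cheaper than 7
```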
The Turing Test
The Turing Test is a kind of game in which a player questions two other players hidden behind a screen or in another room. The purpose of the first player is to determine which of the hidden players is a computer and which is human, and the purpose of the computer-player is to pass itself off as human. The father of the computer, Alan Turing, who devised the test, was of the view that if a computer-player could pass this test then it was to all intents and purposes intelligent. Philosopher John Searle devised the thought-experiment known as "The Chinese Room" to show that, even if a computer-player could pass the Turing Test, this doesn't mean that it is truly intelligent. Searle imagined a non-Chinese speaker locked in a room with an English rule book which allows him to correlate sets of Chinese symbols in such a way that he can give written answers in Chinese to questions written in Chinese, and fool people outside the room into believing that he understands Chinese:
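Searle's point can be sketched in a few lines: a lookup table produces fluent replies with nothing we would call understanding. The questions and replies below are hypothetical placeholders for the rule book:

```python
# A minimal Chinese Room sketch: the "rule book" is a lookup table that maps
# symbol strings to symbol strings.  The program replies fluently without
# anything resembling understanding.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely."
}

def chinese_room(question: str) -> str:
    # Unrecognized input gets a stock "please say that again" reply.
    return RULE_BOOK.get(question, "请再说一遍。")

print(chinese_room("你好吗？"))   # a fluent reply, with zero understanding
```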
The argument addresses itself, not to artificial intelligence per se, but to digital computers which manipulate symbols in this way, and concludes that a computer passing the Turing Test is no more capable of thinking than the man in the Chinese Room is capable of understanding Chinese. The dispute between Turing and Searle concerns the issue of what intelligence is. Turing believed that an algorithmic process (a blind rule-following process) is intelligent or is capable of being intelligent, and Searle disagreed. Given that the human mind is paradigmatically intelligent, it follows that if a computer can perform any intellectual task that the human mind can perform, then the human mind can be regarded as essentially algorithmic too. Roger Penrose has argued that there are some tasks that the human mind can perform that are beyond the capacity of an algorithm, in particular the ability of the human mind to grasp the truth of Gödel's Theorem. This says that every consistent system of proof complex enough to involve arithmetic contains a true sentence that is not provable within that system. I interpret Roger to argue on this basis that, since a purely computational process would be unable to prove/understand this theorem without going beyond its own capacity, an intelligence that understands Gödel's theorem cannot be purely computational, and I propose to extend this argument…
Consider logician Graham Priest’s notion of an inclosure contradiction, exemplified by the Liar paradox, and involving an object that both transcends and yet is inclosed by some set of objects:
“I am false.” therefore “I transcend the set of all true sentences.”
"It is true that I am false." therefore "I am inclosed by the set of all true sentences."
Turing's Halting Theorem works by dismissing as impossible a Turing machine that says in effect "I infinite-loop.", a machine such that, if it existed, it would both transcend and be inclosed by the set of all Turing machines that halt. (In the same way that the barber of a certain town who would shave all and only those townspeople that do not shave themselves is thwarted by the need to be outside of the town to perform this feat, a Turing machine that would solve the classical halting problem is thwarted by the need to run an input that can only be run by a more complex program.) Another inclosure contradiction arises in the context of Gödel's Theorem, the essence of which is a statement written in the language of a system S that declares of itself that it is unprovable in S. In so far as the system doesn't permit the proof of false statements, this statement must, as it says, be both true and unprovable in S, and Gödel's Theorem works by dismissing as impossible a true sentence provable in a consistent system S that says in effect "I am unprovable in S.", a sentence such that, if it existed, it would both transcend and be inclosed by the set of all the provable sentences of S:
“I am unprovable in S” therefore “I transcend the set of all provable sentences.”
“All the true sentences of S can be proved in S.”
But notice how this statement becomes the original liar paradox in the context of a global system S* where there is no escape to a higher order system:
“I am unprovable in S*” therefore “I transcend the set of all true sentences.”
“I am true.” therefore “I am inclosed by the set of all true sentences.”
This observation helps give rise to a notion that is invaluable here, that of the essential nature of the inconsistency of any argument that uses certain inclosure contradictions to reach its conclusion: like all mathematical arguments, Turing's and Gödel's arguments -Cantor's diagonal arguments, Kant's antinomies, and so on- go through only within the context of an implicit global -a complete- system of proof upon whose consistency they depend; their application, this is to say, is infinite in extent. Yet these same arguments deny the joint existence of a system that is both complete and consistent. Same thing with the argument above – P is found to be not equal to NP on the basis of the insolubility of the Halting Problem, and an inclosure contradiction. I claim that these contradictions generally, and the solution to the P versus NP problem in particular, point to a fundamental limit on computation as understood by Turing, and a fundamental difference between the human mind and any purely -Turingesque- computational process. The latter works by following finite sets of rules known as programs. The larger the number of rules, the more complex the program, but complexity is always finite. In the same way that a system of proof can't evaluate a proposition more complex than itself, a computer program can't run an input more complex than itself. The existence of this limit is signified by the Halting Problem, a problem affecting any computer attempting to determine which inputs are too complex to be run. By contrast with a computer program, it seems that the human mind must continually run inputs of greater complexity than itself by virtue of its self-consciousness, something that accords with the ability of this mind to think about these kinds of self-referential problems and solve them. How can this be made to make sense? The general answer is that the digital computer is atomic (bottom-up), and the human mind is holistic (top-down).
If n is the number of instructions of some Turing machine M's program, then -relative to M- a TSP problem involving n + 1 vertices has exceeded a critical threshold and become the classical Halting Problem, a problem that is unsolvable by M in any finite amount of time. If the relative complexity of M's input and program is over the threshold represented by a truth/falsity density of 1/2, yet beneath the threshold represented by n, then the relevant TSP instance is a special case of the Halting Problem that is solvable by M in a finite amount of time. M, therefore, is able to solve a limited range of TSP problems in an exponential but finite amount of time, a range that is dependent on the number of instructions in its program:
All legal TSP circuits, we can see, are either prime sub-circuits or the multiples of prime sub-circuits associated with the equation
It follows from the existence of this arithmetic hierarchy of more or less complex inputs/programs that a Factoring problem can be transformed into the problem of whether a prime TSP sub-circuit is larger than the square root of the number of vertices of a TSP instance associated with some Turing machine M's program. If it is larger, then M will infinite-loop when run with this TSP, and we can extend the argument concerning NP-complete problems to Factoring. But in 1994, Peter Shor introduced a quantum algorithm which -if there were a quantum computer to run it- would permit the factoring of integers in polynomial time. By the argument above, any such computer would be able to solve the problem standing in the way of a solution to the classical Halting Problem. It is possible to build toy quantum computers, and these can factor small integers (a quantum computer running Shor's algorithm has been used to factor 21 into 7 times 3 with high probability), but given that man-made computers are limited in their capacity by the Halting Problem, it follows that BQP (the class of problems quickly solvable by a quantum computer) must differ from P in that, while there is no limit on the size of a polynomial-time problem solvable by a classical computer, there is a limit on the size of a polynomial-time problem solvable by a man-made quantum computer. But while there is a limit on the size of a polynomial-time problem solvable by an artificial quantum computer, it does not follow that there is any limit on the size of a polynomial-time problem solvable by a natural quantum computer. Quantum coherence is known to decay exponentially in time, so that macroscopic quantum superpositions are generally unsustainable. From a superficial perspective, this problem of the suppression of decoherence as a quantum system is scaled up may be only an engineering problem, and this is the assumption on which the quantum computing industry is based.
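The structure of Shor's algorithm can be sketched classically: factoring reduces to finding the order of a number modulo N, and it is only the order-finding loop below (done here by exponential-time brute force) that a quantum computer performs in polynomial time:

```python
from math import gcd

# Skeleton of Shor's algorithm with the quantum subroutine replaced by a
# classical order finder.  Everything except order() is cheap classical
# post-processing.
def order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n) -- the exponential-time bottleneck."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_skeleton(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)                     # lucky: a already shares a factor
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                          # this choice of a fails; try another
    return gcd(pow(a, r // 2) - 1, n)        # a nontrivial factor of n

print(shor_skeleton(21, 2))   # -> 7, giving 21 = 7 * 3
```

The order of 2 modulo 21 is 6; since 6 is even and 2^3 = 8 is not -1 mod 21, gcd(8 - 1, 21) = 7 delivers the factor.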
This assumption is false, but I leave this aside here as tangential to the point that I am most concerned to make, which is that computers as Turing understood them aren't intelligent in the sense that the human mind is intelligent. While a quantum system loses coherence as it is scaled up (which stands in the way of man-made quantum computers that can be scaled up beyond toy-size), it retains coherence at a sufficiently fine scale, and at a scale at which key sub-structures of the brain exist. Herein lies an explanation of the ability of the human mind to solve problems that classical computers cannot. Let's recap with a view to disposing of Turing's model of human intelligence and recommending an alternative model. Because a man-made computer program consists of a finite set of rules, it follows that a computer program has a finite amount of what we can call 'complexity' (if you want an algorithm to have more complexity then you have to create more rules). When the input to a computer program has an equal or greater amount of complexity, the program will be unable to run it – the program will infinite-loop when run with such an input. If the input to a computer program has a lesser amount of complexity than the program, but its complexity is of the same order, then the program will be able to run the input and halt within some finite amount of exponentially growing time, but as the ratio of the complexity of the input and the program approaches 1, so does the amount of time it will take the program to run the input and halt, until the point is reached where the input is too complex to be run in a finite amount of time. But quantum computers -Shor's algorithm shows us- are not bound by this limit.
If the human mind is a computer of the sort that Turing had in mind when he proposed the Turing Test (which is what people mean when they say that the human mind is a computer, and is what Roger Penrose means when he denies this), then it too must be confined by this limit on complexity, the limit beyond which it can't run an input in a finite amount of time. And it will take such a computer an infinite amount of time to prove the inequality of P and NP, because the proof addresses itself to inputs more complex than any given computer program. If the human mind is a computational device of the Turing sort, then it could not possibly prove the inequality of P and NP within a finite amount of time. The same can be said for Turing's Halting Theorem, for Gödel's Theorem, and for all mathematical results involving inclosure contradictions defined in such a way that the contradictory object must transcend any given finite limit. (The ironic thing is that it was Turing himself who first showed that man-made computers are bound by the finitude of their programs, seemingly unaware that by doing so he had disproved his own pet theory that there is no more to intelligence than rule-following.) In conclusion, while the human mind has many aspects that mirror the behavior of a classical computer, it can -and arguably must- be a quantum computer in order to make the computations that it does.
PART II: ON THE MILLENNIUM PROBLEMS
Scott Aaronson and others have pointed out that if P = NP, i.e. if NP-complete problems could be solved in polynomial time, then a computer could be programmed to solve all of the Millennium Problems (these are a group of questions that have proven themselves resistant to the application of known techniques for a long time, and each carries a million dollar bounty):
- Yang-Mills and the Mass Gap
- Riemann Hypothesis
- Poincaré Conjecture
- Birch and Swinnerton-Dyer Conjecture
- Navier-Stokes Equations
- Hodge Conjecture
- P versus NP
They are interesting, not because of their importance to the 'mathematics community', much less because of the prize money, but because they are likely to be paradigmatic examples of questions that can't be answered without something new, without a burst of originality. Wherever such a burst is required, we are caught in a trap arising from the derivative, passive -computational- aspects of our minds; wherever such a burst is required, there is as it were an 'intellectual road-block' beyond which we cannot pass without breaking the accepted rules. In the documentary Taking the World From Another Point of View, Richard Feynman talks insightfully about the failure of well-tried methods as a means of overcoming road-blocks in physics:
All that stuff is tried… when we're going against a problem we do all that, that's very useful, but we all know that, that's what we learned in the physics classes, how to do that. But the new problem, where we're stuck, we're stuck because all those methods don't work, if any of those methods would've worked we would've gone through there. So when we get stuck in a certain place, it's a place where history will not repeat herself… Whatever we're going to look at, at the method and the trick, and the way it's going to look, is going to be very different than anything we've seen before because we've used all the methods from before…
We spoke earlier about the difference between problems with which a passive, computer-style intelligence excels (IQ-style questions) and those with which it founders (the examples given were Fermat's Last Theorem and the Geometrization Conjecture). The Millennium Problems are classic examples of the latter. I propose turning Scott's observation on its head, and pointing out that since P is not equal to NP, a computer cannot be programmed to solve any of the Millennium Problems. Can a computer not solve these problems in some exponential, impractically long, amount of time? I argue that -like the P versus NP problem itself- all of the Millennium Problems involve inclosure contradictions, and thus that it would take any man-made computational device an infinite amount of time to solve them. Where the human mind collapses and expands, any artificial intelligence collapses and fails to expand, when confronted with an inclosure contradiction. This expansion is the basis of consciousness (which is premised on the recognition of an ontological difference between the subject and the object of consciousness), while this collapse is the reason that man-made computers lack consciousness and will always lack consciousness, and will fail to live up to any of the grand expectations people influenced by Turing and the computer revolution have come to have of them.
The Area Unit Circle and the Extension of Pi
Consider a circle whose area a is supposed for the sake of argument to be 1. Then an energy source E located at the center of this circle will possess the same strength from center to circumference, for E/1 = E, which is the same thing as saying that there is no difference between center and circumference. If there is no such difference, then either the circle has no area and no radius (it is a point), or it has infinite area and an infinite radius (it is a line); if there is no such difference, then E has either infinite strength or no strength. But in reality the strength of E always lies between these extremes. By the formula a = πr^2, a circle of area 1 has a radius of 1/√π, or 0.56419…, which is approximately equal to e^-γ, or 0.561459…. Since gamma is a limit, it is a measure better able to capture the dynamism we seek than is 1/√π, and since gamma is the limit of a potentially infinite number of values, we may write e^-γ, where γ = lim_(n→∞) (1 + 1/2 + 1/3 + … + 1/n − ln n), instead of 1/√π, which can be written in extended form by replacing each term 1/k with 1/k^s for real s.
This extended equation involves a significant division between s = 1 and real values of s other than 1, for if and only if s = 1 does the sum 1 + 1/2^s + 1/3^s + … fail to reach a finite limit. We know this to be true because of the finite nature of the sum for s other than 1, and the infinite nature of the harmonic series. If there were a value of x at which the harmonic series reached its limit, then there would per impossibile be a circle of area 1 (by the same token gamma must be irrational). The equation is an expression of the inverse square law, and concerns quantities that spread out rather than die off. The inverse square law comes from geometry alone, but the most well known applications of the law arise in physics, and gravity and electromagnetism are governed by it. These forces are to be contrasted with the nuclear forces, because whereas the former are long-ranged, the latter are short-ranged. If we re-write the equation with s as a free parameter, and consider that the case s = 1 is a special case of it, we see that this duality is either increased (s < 1) or decreased (s > 1) by changing s to some real value other than 1. When we make this change, we see also that the arithmetic progression associated with s ≠ 1 is short-ranged. Let s = 1, and the gap between 1 and the limit can be associated with long-ranged forces and classical systems and might be depicted in this way:
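The two constants compared above can be checked numerically; the cutoff n = 10**6 for the harmonic approximation of gamma is an arbitrary choice:

```python
from math import pi, log, sqrt, exp

# Checking the two constants compared in the text: the radius of a circle of
# area 1, and e to the minus gamma (the Euler-Mascheroni constant).
radius = 1 / sqrt(pi)                                  # ~0.56419

n = 10**6                                              # arbitrary cutoff
gamma = sum(1 / k for k in range(1, n + 1)) - log(n)   # ~0.57722
print(round(radius, 5))        # 0.56419
print(round(exp(-gamma), 5))   # 0.56146
```

The two values agree to about two decimal places but are not equal, which is the gap the passage turns on.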
The exterior circle can be expanded forever because the interior circle can be contracted forever. If however s is a real number other than 1, then the gap can be associated with short-range forces, and non-classical systems:
Now we get an exterior circle that cannot be expanded forever, because the interior circle contracts for a finite time before the difference between 1 and the partial sum goes beneath a critically small amount (h), and a repetitive expanding and contracting pattern is adopted by the interior circle.
Long-Ranged and Short-Ranged Energy Systems
Let s = 1, and let
Now we have an infinite matrix -and an energy system the distribution of whose energy levels is long-ranged:
But if s is a real number other than 1 then
If for example s = 9, then
The case of s = 1:
Thus we have an infinite matrix (and an infinite energy system), and sub-matrices (and sub-systems) that are finite, and in a similar way to that in which a finite partial sum is a sub-sum of an infinite sum, these finite matrices are sub-matrices of the infinite matrices.
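The long-ranged/short-ranged contrast between s = 1 and, say, s = 9 shows up directly in the partial sums:

```python
# Partial sums of 1/n^s: for s = 1 the series is long-ranged (it diverges and
# keeps growing), while for s = 9 it is short-ranged (it converges almost
# immediately).
def partial_sum(s, terms):
    return sum(1 / n**s for n in range(1, terms + 1))

for terms in (10, 100, 1000):
    print(terms, round(partial_sum(1, terms), 4), round(partial_sum(9, terms), 6))
# The s = 1 column keeps climbing (~2.929, ~5.187, ~7.485), while the
# s = 9 column is already flat at ~1.002008.
```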
The Mass Gap Problem
If we regard the atom as having a circular shape, and the nucleus as located at the center of this circular shape, then we might expect from the laws of gravity and electromagnetism that nuclear energy would behave mathematically in the same way, i.e. that the strength of the force would diminish as the square of the distance, but no. Unlike gravity and electromagnetism -which are long-ranged- the nuclear forces are short-ranged: their influence extends only a very short distance from the center of the atom, and then vanishes. Unlike electromagnetism, the nuclear forces are described by a wave-equation with an extra term indicating that, unlike the photon, the associated particles have a mass:
The mass-gap is the difference between the lightest particle and the vacuum, and the mass-gap problem is to explain why these particles have a mass.
Classically, the inverse square law describes a fixed quantity that can be increasingly spread over an increasing area, which means that there is a symmetric relationship between that quantity and the area. If the quantity or the area died off with distance, then that would be an asymmetric relationship. The inverse square law must then involve a broader concept of circularity, one that includes the traditional circle as a special case, but where the relationship between energy and space can be an asymmetrical one. We can extend the inverse square law in the required way by reference to a circle that is supposed to have an area of 1, in which case an energy source E located at the center will possess the same strength from center to circumference, for E/1 = E… From the equations above comes the idea of the disruption of an energy field by spatial particles, rather than the disruption of empty space by energetic particles. The mass-gap would in this case be the difference between the lightest massive particle and a massless energy field. But when s = 1, and the size of this gap is such that there is a balance of concentrated and diffused light, then the result is light and space that are co-extensive over a potentially infinite range. This gives us the symmetry principle according to which we can look at it either way – theoretically this switch is merely a matter of reversing the usual picture of the universe, inherited from the ancient Greek atomists, as a dark, static receptacle within which luminous objects move:
That is, whenever we are inclined to say that a quantity of light is moving relative to a background of space, we should ask also how space must be moving if the background of the universe is light (in the figure below, the black square moves rather than the yellow circle):
For instance we may change the speed of light in a vacuum from 3 * 10^8 m/sec to 0, and consider instead that space is expanding at a minimum rate of 3 * 10^8 m/sec…
But when s is a real number greater than 1, and there is more concentrated light than there is diffused light, the result is light and space that are arithmetically co-extensive only over a short range (when the difference between the limit and the partial sum goes beneath a critical threshold symbolized by h, arithmetic development ceases):
The symmetry principle now fails -we are talking about imbalances of light and space- but in the same way that the sum (s = 1) is a sub-sum of the sum (s != 1), the equation (s = 1) is a sub-equation of the equation (s != 1), and so these asymmetrical, non-classical realms are under tight constraint from the symmetrical classical realm which they underlie.
The Riemann Hypothesis
If a coin is fair, then there is a margin of error whose growth rate is approximately the square root of the number of coin flips. One way to depict these imbalances is to assign 1 to heads and -1 to tails, sum the 1s and -1s, and register the imbalances as departures in a positive or negative direction from the x-axis. We call these pathways “random walks”, and if we average the furthest distance of a random walk from the beginning of the walk, we find that it converges to this square-root value. We could imagine the average to be more or to be less than this: if it were more, then random walks would on average be more random, and if it were less, then random walks would on average be less random. Next we relate coin flipping and random walks to the Moebius function, which assigns the value 1 to square-free integers with an even number of prime factors, -1 to square-free integers with an odd number of prime factors, and 0 to integers that are not square-free. In the same way that the 1s and -1s assigned to heads and tails are summed to produce a random walk, the Mertens function sums the 1s and -1s assigned to square-free integers to produce the random walk depicted below:
If we look closely, we see that this walk is in fact a superposition of walks:
Another perspective on the same phenomenon:
Now we could imagine these superpositions to have more or less amplitude: if the amplitude were more, then the random walk would be more random, and if it were less, then the random walk would be less random. The Riemann Hypothesis says that the amplitude must be exactly that associated with a value of s such that the real part of this complex number is 1/2.
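The Mertens random walk described above can be computed directly; a minimal sketch (the trial-division Moebius function below is an illustration, not an efficient implementation):

```python
def mobius(n):
    # mu(n): 1 if n is square-free with an even number of prime factors,
    # -1 if square-free with an odd number, 0 if n has a squared factor.
    if n == 1:
        return 1
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # squared prime factor
                return 0
            result = -result
        d += 1
    if n > 1:                   # leftover prime factor
        result = -result
    return result

def mertens_walk(limit):
    # Partial sums of mu(n): the "random walk" of the Mertens function.
    walk, total = [], 0
    for n in range(1, limit + 1):
        total += mobius(n)
        walk.append(total)
    return walk

print(mertens_walk(10))  # [1, 0, -1, -1, -2, -1, -2, -2, -2, -1]
```

The Riemann Hypothesis is equivalent to the statement that this walk grows no faster than x^(1/2 + ε) for every ε > 0, which is the "square-root amplitude" constraint described above.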
We have seen that there is in the classical world a local balance of light-concentration (energy) and light-diffusion (space), so that it is possible to assume either that light or space is at rest/moving uniformly, but globally light is being diffused. Equivalently, there is in this world a local balance of prime-density and sparsity, resulting in the fact that a fluctuation in prime-density is just as likely to go the one way as it is to go the other. Globally speaking, light-density and prime-density are lessening because they are spreading out, and this spreading-out is long-ranged, i.e. it is potentially infinite in extent. If s != 1, we get an imbalance, and a dying-off rather than a spreading-out when it comes to light and to the primes, and a random walk that is short-ranged, but these imbalances are necessarily part of an overarching balance. If there could be some non-trivial zero whose real part is not equal to 1/2 -in which case the amplitude gets shrunk or stretched in a manner corresponding to s != 1- then we would get an imbalance that doesn’t fit with the overarching balance. The argument that proves Riemann right is this:
- All long-ranged surfaces are associated with the value s = 1 and a real part of s equal to 1/2;
- If the Riemann Hypothesis is false -if there is any non-trivial zero whose real part is other than 1/2- then the primes form a short-ranged surface and are finite in number;
- The primes do not form a short-ranged surface;
- Therefore the Riemann Hypothesis (in all of its forms) is true.
The Flat Earth Society notwithstanding, we know the surface of the earth to be round, but for many purposes we can regard it as flat because we are, comparatively speaking, very small. We can call a surface that is flat in this local sense a ‘manifold’, and note that some manifolds are smooth, while others are not. If a loop is drawn on the surface of a sphere, for example, it can be shrunk to a point, regardless of where on the surface of the sphere such a loop is drawn. But if a loop is drawn on the surface of a torus it is possible for it to get stuck:
The reason a loop can be shrunk to a point on the surface of a sphere is that, in a sense, this surface has no holes in it, and the reason this is not true of the surface of a torus is that this surface clearly does have a hole in it. We can say that a manifold is long-ranged if and only if it passes the ‘loop-shrinking test’, i.e. if a loop drawn on the manifold can be shrunk to a point. Let’s adopt the topological convention that an n-sphere inhabits n+1 dimensional space, in which case the sphere and the torus are 2-dimensional objects in 3-dimensional space. In its broadest form, the Poincare Conjecture says that if an n-manifold passes the loop-shrinking test, then it is long-ranged, and the version of the conjecture carrying the million dollar prize is concerned with a 3-dimensional object in 4-dimensional space.
As with most mathematics, there is a problem with the very language in which the Poincare Conjecture is expressed, because our solution to the Mass Gap Problem implies that there can be no such thing in our arithmetic domain as a point in the sense of a zero-sized object (or a line in the sense of an infinite-sized object), and that every surface has a non-zero-sized hole in it. Nonetheless we can use the key distinction between long-ranged (smooth) and short-ranged manifolds to reveal the truth of the Poincare Conjecture by applying this distinction to surface areas in any number of dimensions. We do this by identifying π with the s-dependent sum introduced above, which in the 1D case gives us:
Extending the notion of π to include values of s other than 1…
We may then argue in this way:
- All manifolds associated with the value s = 1 are smooth manifolds in the sense that these manifolds can be continuously expanded as the holes in them are continuously contracted:
- If the Poincare Conjecture is false -if there might be a compact object in 4 dimensions which is smooth in the crude sense that a sphere is smooth, but not deformable without cutting or gluing into a hypersphere- then there is a smooth manifold in the more sophisticated sense that is associated with the value s != 1;
- Therefore the Poincare Conjecture is true.
We know that all elliptic curves (and all modular forms) are smooth manifolds as this concept is defined above. It follows that the non-modular Frey curve -the curve associated with a purported counterexample to Fermat’s Last Theorem- would be associated with a non-smooth manifold, and so that Fermat’s Last Theorem has the form of the Riemann Hypothesis and the Poincare Conjecture:
- All elliptic curves are smooth manifolds;
- If Fermat’s Last Theorem is false -if there are any whole-number solutions to the equation x^n + y^n = z^n for n greater than 2- then there is a smooth manifold associated with the value s != 1;
- Therefore Fermat’s Last Theorem is true.
It makes sense that this should be so, because Fermat’s Last Theorem concerns the equation for the circle, and concerns the ways in which π -in the sense of the ratio of a circle’s circumference to its diameter- can be distorted: both the Riemann Hypothesis and Fermat’s Last Theorem place limits on how great this distortion can be before a manifold ceases to be smooth.
Birch and Swinnerton-Dyer Conjecture
The Birch and Swinnerton-Dyer Conjecture starts with the ancient question of whether there is an integer that is the area of a right triangle whose sides have rational length. Such integers are called ‘congruent’. 6, for example, is a congruent number because there is a right triangle whose area is 6 and whose sides are 3, 4 and 5. The problem of determining whether a number n is the area of a right triangle having rational sides is equivalent to the problem of finding rational solutions for x and y to the cubic equation y^2 = x^3 - n^2 x,
which is in turn equivalent to the problem of determining whether there are infinitely many rational points on an elliptic curve -the kind of curve underlying Andrew Wiles’ proof of Fermat’s Last Theorem.
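These claims can be checked numerically; the curve y^2 = x^3 - n^2 x is the standard congruent-number curve, and the point (-3, 9) used below is one illustrative rational point for n = 6:

```python
from fractions import Fraction

# The 3-4-5 right triangle certifies that 6 is a congruent number.
a, b, c = Fraction(3), Fraction(4), Fraction(5)
assert a**2 + b**2 == c**2      # right triangle (Pythagorean Theorem)
assert a * b / 2 == 6           # area is the integer 6

# (x, y) = (-3, 9) is a rational point on y^2 = x^3 - 36x (n = 6).
n = 6
x, y = Fraction(-3), Fraction(9)
assert y**2 == x**3 - n**2 * x  # rational point on the curve
print("6 is congruent; (-3, 9) lies on y^2 = x^3 - 36x")
```

Exact rational arithmetic (`Fraction`) is used so that the checks are not at the mercy of floating-point rounding.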
Recall our earlier ruminations on a unit-circle in the sense of a circle whose area is 1. The traditional unit-circle has a radius of 1, but it is a simple matter to set up a correspondence between the two using the same mathematical framework. Rather than assuming the area is 1, in which case r = 1/√π gives the radius, we would assume the radius is 1, in which case the area is π.
Thus we have two different types of unit-circles, the difference being that in the one case the area is rational and the radius is irrational, whereas in the other case the radius is rational but the area is irrational. The Pythagorean Theorem cannot, in the case of a circle based on the first equation, yield so much as a single right triangle -or a single rectangle- whose area and whose sides are rational: the equation for the latter unit-circle is x^2 + y^2 = 1, whereas the equation for the former is x^2 + y^2 = 1/π, and the area of the latter is πr^2 = π.
If n is a congruent number, then the hypotenuse of the right triangle whose area is n is the diameter of a circle that is a scaled version of the unit-circle whose radius is 1. For example, the triangle of area 6, and sides 3, 4, and 5, corresponds to a circle of diameter 5. The right triangle associated with this diameter can be continuously scaled, meaning that there are an infinite number of right triangles whose areas are congruent numbers:
And if and only if the triangle in question can be continuously scaled in this manner are there an infinite number of rational points on the associated elliptic curve. This implies that the underlying unit-circle has a radius of 1, and that s in the following formula for π is equal to 1:
It is arguable on this basis that there is a 4-way equivalency between the Poincare Conjecture, the Riemann Hypothesis, Fermat’s Last Theorem, and the Birch and Swinnerton-Dyer Conjecture:
- All elliptic curves are smooth manifolds;
- If the Birch and Swinnerton-Dyer Conjecture is false -if it is possible that s = 1 and the L-function of an elliptic curve is equal to 0 while the curve doesn’t contain infinitely many rational points- then there are elliptic curves that are non-smooth;
- Therefore the Birch Swinnerton-Dyer Conjecture is true.
In all these cases we have argued that
- All smooth manifolds are associated with s = 1;
- All non-smooth manifolds are associated with real values of s not equal to 1;
- The falsity of any of these conjectures/hypotheses implies either that there is a smooth manifold associated with s != 1 or that there is a non-smooth manifold associated with s = 1.
Navier-Stokes Equations
Imagine now that you have a container of water, and that you are given the pressure and the velocity of this body of water at every point at some time t1; a solution to the Navier-Stokes Equations will then give you the pressure and the velocity of this body of water for any later time tn. The problem of the Navier-Stokes Equations isn’t to find such a solution, but to prove that it exists (of course, if you find a solution then that proves it exists, but one can prove that a solution exists without finding it). Alternatively, if you can find a system that satisfies certain conditions -most particularly ‘smoothness’, which means in this context that the system has finite energy- for which there are no solutions to the Navier-Stokes Equations, then you have also solved the problem.
Consider again the s-dependent equation for π. This tells of an ineliminable gap between the interior and exterior circles below:
If and only if s = 1 and
there is a balance of contraction and expansion, so that the exterior circle in the one case can approach as closely to 1 as the interior circle can approach to 0, and so that in the other case the exterior circle can approach as closely to -1 as the interior circle can approach to 0. If the Navier-Stokes Equations had a solution, we could use them to close these gaps and eliminate an ineliminable fluidity. More particularly, calculus and the notions of differentiability and integrability -from which the Navier-Stokes Equations emerge- are dependent on the smoothness of a manifold as defined above, but beneath this manifold are sub-manifolds on which this smoothness depends, and with which the Navier-Stokes Equations are bound to fail when confronted.

Calculus asks us to forget about what speed something is going at a particular instant, and to look instead at the average speed over smaller and smaller intervals of the journey. As these small intervals approach zero, the average speed approaches a certain limiting value, and by convention this value is called the instantaneous speed. But the whole exercise has an imprecision attached to it, because true instantaneous speed should be 0. If we continue to reduce the differences between a point of interest and nearby points, then any surrounding space will vanish if and only if this difference reaches 0. This tells us that there is no coordinate system whose origin corresponds to a state of rest, and this is another way of expressing the fact that arithmetic depends on the gap there must always be between the means and the object of counting -between, let us say for the moment, the mind and the world. Equivalently, there is an impossibility attached to knowing both the speed and the position of an object, for the ability to know these variables with complete precision -in which case instantaneous speed and position are both 0- requires that the object has neither speed nor position.
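The shrinking-interval procedure described above can be sketched numerically (the position function t^2 is an arbitrary illustration):

```python
def average_speed(f, t, h):
    # Average speed of position function f over the interval [t, t + h].
    return (f(t + h) - f(t)) / h

f = lambda t: t * t  # position t^2; the limiting value at t = 1 is 2
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, average_speed(f, 1.0, h))
# averages: 3.0, then ~2.1, ~2.01, ~2.001 -> approaching 2
```

The averages approach 2 as h shrinks, but at h = 0 the quotient is the undefined 0/0: the "instantaneous speed" is a limit that is never an evaluated value, which is the gap the passage above describes.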
We must consider, then, the minimum and the maximum size of the departure from this extreme, which we can do by appeal to the s-dependent equation for π, to the value of s, and to its association with smooth and non-smooth manifolds.
The fluidity inherent in the equation above is balanced by the value s = 1 -so that as momentum is increasingly rigidified, position increasingly becomes flexible, and conversely- and we can argue that differentiation -and by implication integration- works as advertised only in the case that s = 1 and the surface is smooth, and thus the argument is as follows:
- All differentiable/integrable manifolds are smooth manifolds;
- If the Navier-Stokes Equations have a solution, then it is possible to use them to eliminate the fluidity in virtue of which differentiation/integration is possible;
- Therefore the Navier-Stokes Equations have no solution.
The Hodge Conjecture
“Every harmonic differential form (of a certain type) on a non-singular projective algebraic variety is a rational combination of cohomology classes of algebraic cycles.”
Consider a circle and a square of area 1, and a square and circle inscribed within them:
From a geometric point of view, the circle and the square are distinct and rigid, but from a topological viewpoint they can be deformed into each other on account of their smoothness. There is then a flexibility here – a fluidity. The area marked in black is shared between the enclosing and the enclosed shapes, but the area marked in yellow is unshared:
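Assuming the figure shows a square of area 1 with its inscribed circle, and a circle of area 1 with its inscribed square, the shared and unshared areas can be computed directly:

```python
from math import pi, sqrt

# Unit-AREA square (side 1) with its inscribed circle:
side = 1.0
r_in = side / 2                  # inscribed circle radius
shared = pi * r_in**2            # area shared with the circle: pi/4
unshared = side**2 - shared      # corners of the square: 1 - pi/4

# Unit-AREA circle (radius 1/sqrt(pi)) with its inscribed square:
r = 1 / sqrt(pi)
inner_side = r * sqrt(2)         # side of the square inscribed in the circle
shared2 = inner_side**2          # area shared: 2/pi
unshared2 = pi * r**2 - shared2  # circular segments: 1 - 2/pi

print(shared, unshared)    # ~0.785 shared, ~0.215 unshared
print(shared2, unshared2)  # ~0.637 shared, ~0.363 unshared
```

In both configurations the shared and unshared portions are fixed multiples of π, which is the "very particular balance of shared and unshared area" referred to below.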
If we work, not with π as traditionally understood, but with
we see that the shapes above are governed by s = 1 and a certain very particular balance of shared and unshared area -a balance associated with a very particular balance of rigidity and flexibility, and with smooth manifolds. Like the other Millennium Problems, the Hodge Conjecture may on this basis be seen as the same problem. We can argue, firstly, that -contrary to the impression created by the way in which the Poincare Conjecture is usually stated- every smooth manifold involves a hole arising from the nature of the area-1 unit-circle. Secondly, we can argue that every such manifold involves non-trivial topological cycles which are equivalent to algebraic cycles by virtue of the relationship between a unit-circle and a unit-square:
- All smooth manifolds involve equivalent topological and algebraic cycles;
- If the Hodge Conjecture is false, and there is a non-trivial topological cycle of the relevant type (even dimension etc.) that is not equivalent to an algebraic cycle, then this cycle is like a zero off the critical line or like the Frey curve;
- Therefore the Hodge Conjecture is true.
P versus NP
Consider the decision version of the Shortest Path problem: is there a path from the start vertex to the destination vertex whose cost is less than some cost c? To answer this question for a graph of any size, we simply run Dijkstra’s algorithm. A related, yet fundamentally different decision problem is this: is there an all-vertex tour of a graph whose cost is less than some cost c? The first question can be answered in polynomial time, the second takes exponential time – why?
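The contrast can be made concrete; a minimal sketch, in which the shortest path is found in polynomial time while the cheapest tour is found only by examining factorially many orderings (the tiny three-vertex graph is illustrative):

```python
import heapq
from itertools import permutations

def dijkstra(graph, start, goal):
    # Polynomial-time shortest path (graph: {u: {v: weight}}).
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def cheapest_tour(graph):
    # Brute-force all-vertex tour: factorially many orderings.
    nodes = sorted(graph)
    best = float("inf")
    for order in permutations(nodes[1:]):
        tour = [nodes[0], *order, nodes[0]]
        try:
            cost = sum(graph[a][b] for a, b in zip(tour, tour[1:]))
        except KeyError:                 # missing edge: not a valid tour
            continue
        best = min(best, cost)
    return best

g = {"a": {"b": 1, "c": 4}, "b": {"a": 1, "c": 2}, "c": {"a": 4, "b": 2}}
print(dijkstra(g, "a", "c"))  # 3  (a -> b -> c)
print(cheapest_tour(g))       # 7  (a -> b -> c -> a)
```

The decision versions discussed above would simply compare these returned costs against the bound c.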
In a story co-authored with Adolfo Bioy Casares called Del rigor en la ciencia, or On Exactitude in Science, Jorge Luis Borges tells of a map of an empire that is identical with that empire:
… In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
purportedly from Suárez Miranda, Travels of Prudent Men, Book Four, Ch. XLV, Lérida, 1658
A map is a way of finding a location in an unknown territory without performing an exhaustive search of that territory, and if we have no map, or if -equivalently- the smallest map we have of a territory is the same size as that territory itself, then as the size of the territory grows arithmetically, the number of places we have to search in order to find our location grows exponentially (a complete graph of n vertices has, for example, (n-1)!/2 Hamiltonian cycles). Non-metaphorically, when the smallest program we have to operate on an input is as complex as that input itself, then as the complexity of the input grows arithmetically, it is inevitable that the number of operations grows exponentially. More generally, a scientific theory can be regarded as a computer program for predicting observations, and if the smallest program we have is as complex as the data-set itself, then as the complexity of the data-set grows arithmetically, the number of operations needed to analyze the data and make a prediction grows exponentially. In a fantasy-world where computation is free, it is possible that P = NP; but in the real world, where there are exponentially steep time-costs to be paid by computational devices running self-inputs, P != NP… In other words, the input-program relationship in the case of Shortest Path is fundamentally different from the input-program relationship in the case of TSP…
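The count of Hamiltonian cycles in a complete graph -(n-1)!/2 of them on n vertices- can be tabulated against polynomial growth; a minimal sketch:

```python
from math import factorial

def hamiltonian_cycles(n):
    # Distinct Hamiltonian cycles in a complete graph on n vertices,
    # counting a cycle and its reversal as the same tour: (n-1)!/2.
    return factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    print(n, n ** 2, hamiltonian_cycles(n))
# already at n = 20 there are 60822550204416000 cycles against 400 for n^2
```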
The Millennium Problems as Inclosure Contradictions
Re-consider the Liar sentence “This sentence is false.” If it is true then, since it says it is false, it is false, and if it is false, then since that is what it says, it is true. The Liar paradox is the paradigmatic example of what Graham Priest calls an inclosure contradiction:
“I am false.” therefore “I transcend the set of all true sentences.”
“It is true that I am false.” therefore “I am inclosed by the set of all true sentences.”
The paradox can be mitigated by regarding a liar-like sentence in a local context (“This sentence is false on Wednesday.” involves a contradiction only on Wednesday). This is the essence of Godel’s idea that every system of proof strong enough to do arithmetic contains a sentence that says in effect that it is not provable in that local system: if such a sentence is false, then false sentences are provable in the system, and if it is true, then some true sentences are not provable in the system. But if the system of proof in question is global -if every true sentence is provable in the system- then the ‘Godel sentence’ is in this case equivalent to the Liar sentence, meaning that Godel sentences are not expressible in a complete and consistent system. A system in which a Godel sentence is not expressible would be a non-arithmetic or proto-arithmetic system, and this implies that arithmetic depends on a tradeoff between completeness and inconsistency. There is an old brain-teaser concerning the barber of a certain town who shaves all and only those residents that do not shave themselves, and the question arises as to whether the barber shaves himself. If he does shave himself, then he is not shaved by the town’s barber = himself, and if he does not shave himself, then he is shaved by the town’s barber = himself… Such a barber, we can conclude, does not exist. We can avoid the paradox, it may seem, by moving the barber out of town, which is what Russell and Tarski did with their approach to the Liar Paradox, and what Godel and Turing did, but if we are to go on moving the barber out of town forever, this implies that there is no such thing as completeness, or singularity. Another form of the barber story concerns a barber that shaves all and only those residents of a certain town that do shave themselves.
Now the only way to defuse the contradiction is to make the barber the town’s sole resident -otherwise the barber would have to shave people that both do and don’t shave themselves- and if we push this line of reasoning as far as possible, we find that there is no such thing as inconsistency, or plurality. Since classical logic in all its forms is arithmetic, we can argue on this basis that classical systems of all kinds are those in which there is a precise balance/imbalance of completeness and inconsistency (singularity and plurality) without which the matrices below would be merely finitely expandable:
We can turn logic into arithmetic in this way:
and distinguish between infinitely and finitely expandable arithmetic progressions by reference to the (real) value of s in equations such as the following:
Regardless of the value of s, we can fill the matrix above with a balance of 1s and -1s. We must allow for a certain dynamic amount of ‘error-room’ arising from the difference between the partial sum/integral and the limit, and if s = 1 we find that this error-room gets forever closer to 0 without reaching it. Thus we have a system of proof and an arithmetic progression that are infinitely expandable, and we can describe this potentially infinite system in terms of the s-dependent equation for π. If s != 1, then this error-room eventually runs out, and we have a system of proof and an arithmetic progression that are but finitely expandable; but these finite sub-matrices are akin to the negative building blocks in virtue of which infinite matrices are arithmetic. Otherwise put, there is in the classical world a local balance/imbalance of light-concentration (energy) and light-diffusion (space), so that it is possible to assume either that light or space is at rest/moving uniformly, but globally light is being diffused. Equivalently, there is in this world a local balance/imbalance of prime-density and sparsity, resulting in the fact that a fluctuation in prime-density is just as likely to go the one way as it is to go the other. Globally speaking, light-density and prime-density are lessening because they are spreading out, and this spreading-out is long-ranged, i.e. it is potentially infinite in extent. If s != 1, we get a local imbalance that is too great for classical maths and physics, and a dying-off rather than a spreading-out when it comes to light and to the primes… We can call a manifold that is long-ranged “smooth”, and we can argue in this way:
- All smooth manifolds are associated with s = 1;
- These are associated with non-smooth sub-manifolds arising from real values of s not equal to 1;
- The falsity of any of these conjectures/hypotheses implies either that there is a smooth manifold associated with s != 1 or that there is a non-smooth manifold associated with s = 1.
This argument is about the nature of the relationship between the smooth and the non-smooth manifolds, and the precisely calculated way in which they must interact to produce the smooth manifolds of the classical world, and to create the possibility of rational inquiry into this world. Each of the Millennium Problems is to be solved by appeal to the inclosure contradiction that arises when a hole in a manifold (a non-smooth sub-manifold) is the same size as or larger than the corresponding positive proportion of the manifold: think of P versus NP, and of the size of the gaps between classical computer programs and the size of the corresponding gaps between non-classical sub-computer programs:
Thus the interaction involves a key balance -and a key imbalance- of negative and positive, of (in)consistency (negative) and completeness (positive), and so there is a significant sense in which the argument is just an abstracted, extended, and particularized form of Godel’s Theorem, the thrust of which is that an imbalance of consistency and completeness is a condition of the possibility of rational inquiry.
Consciousness, it can be seen, exists only on account of the gap there is between the partial sum/integral and the limit of the formula for a circle of area 1:
Without these gaps, there is a possible object of consciousness, but no subject. These gaps are of a negative nature in the sense that they are pre-existed by something unified, from which it follows that consciousness itself is negative. In Being and Nothingness, Sartre challenges us to try to detect the mind independently of some impression, some perception -independently of some content- and he observes that this is impossible. In the same vein, Wittgenstein analogizes consciousness to the eye, pointing out in the Tractatus that the eye does not see itself because it is not a part of the visual field:
The recognition of the existence of these gaps is a recognition of the existence of a gap between proof and truth (between, if you like, verifying a solution to a computational problem and solving that problem), and is a statement of the essence of Godel’s Theorem. Since this is a global statement -since it ranges over all truth- it is what Graham Priest calls an “inclosure contradiction”. Each of the solutions to the Millennium Problems proffered above involves a recognition of the existence of this gap, and thus all of these solutions are re-statements of the essence of Godel’s Theorem, and inclosure contradictions. They are thus problems that no purely computational intelligence can solve in any amount of time. And so, like Godel’s Theorem, the solutions to the Millennium Problems can be seen as implying that there is a limit on purely computational reasoning arising from this gap. Moreover, there is in the inclosure contradiction a series of collapses and expansions, and it is from the interplay between these that arithmetic and arithmetic consciousness arise: an input to a computer grows in size until it hits a limit arising from the relative size of input and program, and then another, more complex program is needed, and so on and on… And these gaps are not merely the seat of arithmetic consciousness, but the seat of the holy grail of Penrose’ argument in The Emperor’s New Mind – non-computability. Simply: since computability arises from the special harmonious relationship there must be between proof and truth, and since this relationship cannot be perfectly isomorphic without causing a collapse into singularity and into non-consciousness, both consciousness and non-computability must lie in the gap between the two. It is inevitable then that classical computers are non-conscious.
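The gap between verifying and solving mentioned above can be illustrated with subset-sum: checking a proposed certificate takes one pass, while an unaided search ranges over all 2^n subsets. A minimal sketch (the function names and the particular instance are illustrative):

```python
from itertools import combinations

def verify(numbers, target, candidate):
    # Verifying a proposed solution is fast: one pass over the candidate.
    # (Multiplicity of elements is ignored in this simplified check.)
    return sum(candidate) == target and all(x in numbers for x in candidate)

def solve(numbers, target):
    # Solving by brute force examines all 2^n subsets in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
sol = solve(nums, 9)
print(sol)                   # (4, 5)
print(verify(nums, 9, sol))  # True
```

The verification step is polynomial in the size of the candidate; the search step is not known to be, and that asymmetry is the verifying/solving gap the passage appeals to.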
By implication, quantum computers mediate between the two sides of the gap, which is their great strength (they can do things classical computers cannot do) but also their great weakness (they cannot be scaled).
PART III: ON THE TECHNOLOGICAL-SINGULARITY DELUSION
The Simulation Argument
The ‘Simulation Argument’ uses probability theory to defend the claim that at least one of the following statements is true:
- Civilizations go extinct before they reach technological maturity
- Technologically mature civilizations are not interested in producing ancestor-simulations
- The world as we know it is a computer simulation
In a nutshell: once a civilization acquires the ability to simulate the past by the use of sophisticated computers, there will be many more simulated pasts than actual pasts, and it becomes far more probable that one is living in a simulated past. If one takes oneself not to be living in a simulation, this implies that no ‘technologically mature’ civilization ever comes to exist -that there is a bottleneck and/or a disinterest that prevents civilizations from creating computer-simulations of the past. An underlying assumption is that consciousness can be simulated by a computer, and the question arises as to whether this assumption is true. The most general reason to think that it isn’t true is this: if you are living in a computer simulation, then the laws governing the connection between your mind and the world -the laws of logic and mathematics in particular- are simulated laws, and therefore cannot be used as a basis for the view that you are living in a simulation. Otherwise put, it is a condition of being able to reason about the world that its governing laws are not simulated. The Simulation Argument is reminiscent of Descartes’ pretend-argument that for all we know we are living in an illusory world created by an evil demon. Descartes sets this argument up because he wants to knock it down, by finding in effect that the possibility of an evil demon conflicts with the possibility of rational inquiry -a possibility that requires, in the final analysis, the existence of an intelligence that will not be deceived if it follows the laws of thought (a great defect of evolutionary epistemology is that it does not provide us any such access to the truth, only access to ideas that promote survival, and which may well be false).
Any theory that fails to leave room for the discovery of its own truth is a bad theory, and so any theory according to which we are entirely deceived by an evil demon, or imprisoned in some kind of Matrix-like computer simulation, is a bad theory. But we can be more specific: it is impossible to reach ‘technological maturity’ so defined, because consciousness can’t be simulated by a computer, or by any man-made device. The classical Halting Problem stands in the way of computer-consciousness, but this problem is a special case of a wider class of problems. These problems -‘NP-hard problems’- arise when the complexity of an input is of the same order as that of the program: when the complexity of the input is not less than that of the program, you have the classical Halting Problem, and the run-time in this case is infinite; otherwise it is exponential. If it were possible to run such inputs in polynomial time, the loss of energy-density that occurs with the passage of time, and the expansion of space, would be symmetric and reversible without any miraculous intervention: Humpty Dumpty could after all be put back together by the king’s horses and the king’s men, and it would be possible to do all of the things that mortal men have dreamed of doing but cannot do (build perpetual motion machines, travel faster than light and thus turn back time, etc.). This problem is destined to forever thwart the ambitions of the A.I. community. Their intellectual grandfather, Leibniz, envisioned a great calculating machine -a super-computer- that could correctly answer every question posed to it in every area of knowledge, so that any dispute could be easily resolved with the words
Let us calculate, without further ado, and see who is right.
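The exponential blow-up appealed to above can be made concrete with a small sketch (the instance and function names here are my own, purely illustrative): for Boolean satisfiability, the canonical NP-complete problem, checking a proposed answer takes time linear in the size of the formula, but exhaustive search over answers doubles with every added variable.

```python
from itertools import product

# A toy SAT instance in conjunctive normal form: each clause is a tuple of
# literals, where a positive int i means variable i and -i means "not i".
clauses = [(1, 2), (-1, 3), (-2, -3), (1, 3)]
n_vars = 3

def satisfies(assignment, clauses):
    """Checking a candidate assignment is cheap: linear in the formula size."""
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def brute_force_sat(clauses, n_vars):
    """Finding an assignment by exhaustive search takes up to 2**n_vars checks."""
    for assignment in product([False, True], repeat=n_vars):
        if satisfies(assignment, clauses):
            return assignment
    return None

solution = brute_force_sat(clauses, n_vars)
print(solution)
```

Whether this gap between easy checking and hard searching is unavoidable is, of course, exactly the open P versus NP question of Part I.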
If we could have such a super-computer, then we wouldn't bother with elections, because the super-computer would choose the correct candidates. Nor would we bother with courts of law, because the super-computer would deliver the correct decision. We would always know exactly what to do, because the super-computer would tell us. Of course, computers give correct answers to questions only if they are correctly programmed, and so the existence of an omniscient super-computer depends on the prior existence of an omniscient super-programmer… But there is a more fundamental obstacle standing in the way of a super-computer, and it is that NP-hard problems -when they are solvable at all- take an impractically long time to solve. Because this limitation is a theoretical one, there can be no technologically mature future dominated by the God-like Leibnizian super-computers of the simulation argument and the post-singularity era. Indeed there will never be a computer that will tell us anything of great interest, let alone one that will simulate consciousness. Like Leibniz, and like Turing, the author of the Simulation Argument has grossly over-estimated the power of machines because he has grossly misunderstood the nature of consciousness.
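To give a sense of 'impractically long' (the arithmetic below is my own illustration, assuming a hypothetical machine that inspects a billion tours per second): a brute-force attack on the travelling salesman problem over n cities must inspect (n-1)!/2 distinct round trips, and the factorial swamps any conceivable hardware almost immediately.

```python
import math

def brute_force_years(n_cities, tours_per_second=1e9):
    """Years needed to inspect every distinct tour of n_cities cities."""
    tours = math.factorial(n_cities - 1) / 2   # (n-1)!/2 distinct round trips
    seconds = tours / tours_per_second
    return seconds / (60 * 60 * 24 * 365.25)

print(f"25 cities: {brute_force_years(25):.1e} years")  # on the order of ten million years
print(f"30 cities: {brute_force_years(30):.1e} years")  # dwarfs the age of the universe
```

Adding five cities multiplies the work by a factor of tens of millions; no improvement in clock speed can keep pace with that.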
References

Aaronson, S (2013), Quantum Computing since Democritus
Baker, T (1975), Relativizations of the P=?NP Question
Borges, J (1975), A Universal History of Infamy
Bostrom, N (2003), Are You Living in a Computer Simulation?
Brown, J (1996), When Two Worlds Meet
Cantor, G (1891), On an Elementary Question of Manifold Theory
Church, A (1936), An Unsolvable Problem of Elementary Number Theory
Cook, S (1971), The Complexity of Theorem Proving Procedures
Cook, W (2012), In Pursuit of the Traveling Salesman
Descartes, R (1641), Meditations on First Philosophy
Dijkstra, E (1959), A Note on Two Problems in Connexion with Graphs
Einstein, A (1926), Letter to Max Born
Feynman, R (1948), A Relativistic Cut-Off for Quantum Electrodynamics
Feynman, R (1972), Take the World from Another Point of View [video recording]
Frey, G (1986), Links between Stable Elliptic Curves and Certain Diophantine Equations
Gödel, K (1931), On Formally Undecidable Propositions of Principia Mathematica and Related Systems
Kant, I (1781), The Critique of Pure Reason
Krom, M (1967), The Decision Problem for a Class of First-Order Formulas in Which All Disjunctions Are Binary
Leibniz, G (1685), The Art of Discovery
Lucas, J (1959), Minds, Machines and Gödel
Martin, R (1977), On a Puzzling Classical Validity
Martin-López, E, et al (2012), Experimental Realization of Shor's Quantum Factoring Algorithm Using Qubit Recycling
Maxwell, J (1865), A Dynamical Theory of the Electromagnetic Field
Penrose, R (1989), The Emperor's New Mind
Penrose, R (1994), Shadows of the Mind
Perelman, G (2002), The Entropy Formula for the Ricci Flow and Its Geometric Applications
Priest, G (1987), In Contradiction: A Study of the Transconsistent
Priest, G (1995), Beyond the Limits of Thought
The Real Rain Man (2006), documentary by Focus Productions
Riemann, B (1859), On the Number of Primes Less Than a Given Magnitude
Russell, B (1908), Mathematical Logic as Based on the Theory of Types
Russell, B (1919), The Philosophy of Logical Atomism
Russell, B and Whitehead, A (1910-13), Principia Mathematica
Sartre, J (1943), Being and Nothingness: An Essay on Phenomenological Ontology
Searle, J (1980), Minds, Brains and Programs
Shor, P (1994), Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
Tarski, A (1944), The Semantic Conception of Truth and the Foundations of Semantics
Taylor, C (1999), The Atomists, Leucippus and Democritus: Fragments, a Text and Translation with a Commentary
Thurston, W (1982), Three-Dimensional Manifolds, Kleinian Groups and Hyperbolic Geometry
Turing, A (1937), On Computable Numbers, with an Application to the Entscheidungsproblem
Turing, A (1939), Systems of Logic Based on Ordinals
Wiles, A (1995), Modular Elliptic Curves and Fermat's Last Theorem
Wittgenstein, L (1921), Tractatus Logico-Philosophicus
Xu, N, et al (2012), Quantum Factorization of 143 on a Dipolar-Coupling Nuclear Magnetic Resonance System