On the Physics of Time Travel 1.2.5

Abstract: “Frank Wilczek was recently asked: ‘If you could ask a super-intelligent alien one yes-or-no question, what would it be?’ His response: ‘P versus NP. That basically contains all the other questions, doesn’t it?’” (1) Here we exploit the inclusive nature of P versus NP to explain why time-travel, ghosts, UFOs, and other paranormal phenomena are physically possible, and why the failure of human beings to understand them is born of a failure of human intelligence. Just as the ant is kept by its narrow intelligence from any but a dim understanding of the world of the man, the man is kept by his narrow intelligence from any but a dim understanding of higher worlds.

 

The Paranormal and Humean Skepticism
In the early years of their marriage, my parents one evening observed a silent, bright, circular light flying overhead. A local newspaper later reported that this light was seen by numbers of people, and that it left a circular scorch mark on the sand of a nearby beach where it landed. And during the course of the evening that Mars was closest to Earth for many years, my brother, a friend, and I observed -and filmed- a symmetric configuration of lights flying silently overhead. Again, local news agencies reported that others had seen the same thing. Mistaken identity? You had to be there, but these things were manifestly not man-made aircraft, astronomical or atmospheric phenomena, satellites, weather balloons, or any of the things that are typically supposed to be mistaken for UFOs qua unnatural or extraordinary. The anecdotal evidence for UFOs as something extraordinary is overwhelmingly widespread, and there are also reports of craft possessing an unnatural ability to abruptly start, stop, and turn, to accelerate up to astronomically high speeds, and to appear and disappear - reports authored, not by John Average, but by astronauts, pilots, and other aeronautic experts. I’m not averse to the Jungian idea of group illusions -things seen by a variety of people on a variety of occasions that arise from a shared mode of consciousness but nonetheless lack objective existence- canvassed in books such as Creatures from Inner Space (2) and Daimonic Reality: A Field Guide to the Underworld (3), but one basis for the distinction between the subjective and the objective is the objective trace. Objective phenomena, and not subjective phenomena, leave traces such as burn marks or images on film. You might, like Belshazzar of old, imagine that you see a ghostly hand writing the words “Mene, Mene, Tekel, Parsin” on the wall, but imagination alone is unable to produce writing on a wall that others can read after the imaginary fact.

The skeptical objection to the paranormal -and the entire philosophy of skepticism towards the paranormal- comes down to a thought expressed by David Hume in his Enquiry Concerning Human Understanding (4):

But suppose, that all the historians who treat of England, should agree, that, on the first of January 1600, Queen Elizabeth died; that both before and after her death she was seen by her physicians and the whole court, as is usual with persons of her rank; that her successor was acknowledged and proclaimed by the parliament; and that, after being interred a month, she again appeared, resumed the throne, and governed England for three years: I must confess that I should be surprised at the concurrence of so many odd circumstances, but should not have the least inclination to believe so miraculous an event. I should not doubt of her pretended death, and of those other public circumstances that followed it: I should only assert it to have been pretended, and that it neither was, nor possibly could be real. You would in vain object to me the difficulty, and almost impossibility of deceiving the world in an affair of such consequence; the wisdom and solid judgement of that renowned queen; with the little or no advantage which she could reap from so poor an artifice: All this might astonish me; but I would still reply, that the knavery and folly of men are such common phenomena, that I should rather believe the most extraordinary events to arise from their concurrence, than admit of so signal a violation of the laws of nature.

To the possibility of ghosts, UFOs, paranormal phenomena, the resurrection of Queen Elizabeth or Jesus of Nazareth… the skeptic says with Hume:

…the knavery and folly of men are such common phenomena, that I should rather believe the most extraordinary events to arise from their concurrence, than admit of so signal a violation of the laws of nature.

Is this position a reasonable one? Hume is working with a descriptive notion of the laws of nature, i.e. the laws of nature are for Hume what they appear to be from certain observations, and so it might be argued that observation demands that any laws that seem to forbid UFOs etc. are the wrong laws because, from observation, these phenomena exist. Time to update the laws. The difficulty is that the “known laws” -the laws that we can all observe to work in all conditions and that can be tested in the laboratory- apparently contradict the existence of the paranormal. There are, for example, no known laws of physics that explain how it is possible for a craft to accelerate from 0 to thousands of mph and disappear in a few seconds. From here we go to the notion of a prescriptive law - a law that applies at the logical and mathematical level and imposes conditions on what can and cannot be observed. In some people’s minds “laws of nature” have this quality, and rationalists are committed to the idea that the laws of nature are as they are, not by accident, but because of underlying logical/mathematical laws. I think this rationalist thesis is in some sense true, but suffice it to say that it is not clear that a world in which the laws of physics are other than the laws of physics as we know them now is logically or mathematically impossible. Judging by the tone of the quoted passage, Hume was inclined -at least at the psychological level- to regard the miraculous as if the laws of nature preventing miracles were prescriptive, although from the central thrust of his empiricist metaphysics he ought not to have thought in this way at all. It was Hume, after all, who first argued that the Principle of the Uniformity of Nature that underlies empirical inquiry cannot be drawn from experience and is instead a “habit of mind”, and it may be, from the Humean perspective, that certain habits of mind leading us to believe that nature is a certain way and that certain things don’t happen are bad habits, arising from a narrowness of vision and blinding us to subtle, hidden, but nonetheless important truths.

P versus NP and Special Relativity
One way to challenge the negative stance of the Humean empiricist towards the paranormal starts with an observation made by the computer scientist Scott Aaronson about a problem on the borderline of computer science, logic, mathematics, physics, and philosophy known as P versus NP. This concerns the question of whether the class of decision problems (problems with yes-or-no answers) whose solutions are quickly verifiable by a computer (called “NP”) is the same as the class of problems that are quickly solvable by a computer (called “P”). Historically the problem arose because certain problems seem to be hard to solve - more particularly, they seem to require a lot of time -an exponentially growing amount of time- to solve. (“Exponentially” long here means that as the size of a problem grows arithmetically, the amount of computer-time required to solve it grows exponentially.) An example of an NP problem that seems to take exponential time is Factoring. While it doesn’t take long to factor 15 or 21, imagine trying to factor the 200-digit integer

27997833911221327870829467638722601621070446786955428537560009929326128400107609345671052955360856061822351910951365788637105954482006576775098580557613579098734950144178863178946295187237869221823983

It is easily checked that it is the product of the primes

3532461934402770121272604978198464368671197400197625023649303468776121253679423200058547956528088349

and

7925869954478333033347085841480059687737975857364219960734330341455767872818152135381409304740185467

but although it takes a pocket calculator a split second to verify the multiplication, recovering these factors from the product took the equivalent of roughly 75 years of computation on a single 2.2 GHz computer. Are P and NP the same? They certainly appear not to be, as there are problems such as Factoring that seem intrinsically hard for classical computers.
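As a concrete illustration, here is a minimal Python sketch (the integers are the ones quoted above): checking the factorization is a single big-integer multiplication, while recovering the factors by trial division would mean testing on the order of 10^100 candidate divisors.

    # Verifying a claimed factorization is fast: one multiplication.
    p = 3532461934402770121272604978198464368671197400197625023649303468776121253679423200058547956528088349
    q = 7925869954478333033347085841480059687737975857364219960734330341455767872818152135381409304740185467
    N = 27997833911221327870829467638722601621070446786955428537560009929326128400107609345671052955360856061822351910951365788637105954482006576775098580557613579098734950144178863178946295187237869221823983

    assert p * q == N  # verified in microseconds

    # Finding p and q from N by trial division would mean testing divisors
    # up to sqrt(N), i.e. on the order of 10^100 candidates - hopelessly
    # beyond any classical computer.

If P and NP are the same, then, as Aaronson says, the world is a very different place from what it appears to be.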

If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. (5)

There’s quite a bit more to it than that, but Aaronson asks us to imagine (6) that we set a computer working on an exponentially hard problem while we board a rocket ship that accelerates to near the speed of light and returns to earth only after the calculation has been completed. In earth-time it might be millions of years, but in rocket-time it might be only a few hours.

The trouble -the rub- is that, while the time to make the computation has apparently become non-exponential (“polynomial”), the energy requirements of the rocket ship are exponential, and so the idea that this maneuver can be used to thwart the inequality of P and NP is an illusion.

Aaronson’s thought experiment is a form of the Twin Paradox (7). This comes out of the Special Theory of Relativity, the essence of which can be most easily explained by reference to the famous train thought-experiment. The figure below depicts a train, and a light source located in the center of a carriage. It shows that from the point of view of an observer inside the train, light emitted from the center of the carriage reaches the front and the rear of the carriage simultaneously, but that from the point of view of an observer outside the train, the light reaches the rear of the carriage first.

One of the following principles then must be false:

1. There is a perfectly isomorphic relationship between the time lines of observers regardless of their states of motion

2. The fundamental laws of physics are the same for all observers (in uniform motion)

3. Light is propagated at the velocity c regardless of the state of motion of the emitting body

(2) and (3) are apparently true, meaning that (1) is false. According to SR, then, observers traveling at different velocities literally inhabit different spatio-temporal environments, different times and spaces (8, 9, 10). In particular, space contracts and time slows in the direction of motion, so that if one were to accelerate to a speed close to the speed of light, time would pass much more slowly than it would for someone travelling at a lesser speed. Hence the possibility of Aaronson’s Relativity Computer.
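To make the scenario quantitative, here is a minimal Python sketch (the mission profile -a million earth-years compressed into ten ship-hours- is hypothetical): the Lorentz factor needed is just the ratio of earth-time to ship-time, and the kinetic energy this implies per kilogram of ship grows in direct proportion to that factor, which is the rub noted above.

    import math

    C = 299_792_458.0  # speed of light, m/s

    # Hypothetical mission: 10 hours aboard while 1,000,000 years pass on earth.
    earth_hours = 1_000_000 * 365.25 * 24
    ship_hours = 10.0
    gamma = earth_hours / ship_hours          # required Lorentz factor

    shortfall = 1 / (2 * gamma**2)            # 1 - v/c, to first order
    energy_per_kg = (gamma - 1) * C**2        # relativistic kinetic energy, J/kg

    print(f"gamma = {gamma:.3e}")
    print(f"v falls short of c by about {shortfall:.1e} of c")
    print(f"kinetic energy per kg of ship: {energy_per_kg:.3e} J")

An exponentially long computation therefore demands an exponentially large Lorentz factor, and with it exponential energy.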

P versus NP and General Relativity
But there is a subtlety here that has been completely overlooked, and that we can reveal by considering General Relativity, Einstein’s extension of SR to gravity. At the root of GR is the idea that mass curves space-time (8, 9, 10), and the fact that this is a half-truth is suggested by the implication of local infinities (11), by the lack of a coherent mathematical framework for both large-scale and small-scale objects (12), and by the flat rotation curves of distant galaxies. The last is the motivation for the idea of dark matter (13, 14).

The idea of a space-time curved by mass readily takes us back to an initial condition of the universe such that all the mass of the universe is compressed to a point, and this same infinite compression of mass is, by the terms of the theory, also to be found at the centers of black holes. But the singular nature of the initial condition of the universe represents the beginning of time, while the singularities at the centers of black holes in some sense represent the end of time, and these forms of curvature should therefore be quite distinct. More particularly, it should not be the case that both are attributable to the infinite action of the force of gravity - this is the theory breaking down, and a sign of a false assumption. Yes, everyone can see that the singularity “preceding” the big bang is not the same as the singularities “inside” black holes, but there is missing mathematics, and whether we like it or not, GR paints a picture of a universe that begins and ends in an identical state (infinite gravity is infinite gravity) when, clearly, there is throughout the universe as we know it an arrow of time leading from an energetic contracted state to an exhausted expanded one. In so far as it so much as makes sense to speak of infinite gravity, these states can’t both be the result of infinite gravity. If we give up the idea that curvature is due to mass (which is a combination of light and space), and employ instead the idea that curvature is due to imbalances of light and space (where the classical world is balanced, the atomic world is unbalanced in the direction of light, and black holes are unbalanced in the direction of space), we will find that we can solve this and other problems. Mathematically, we capture what it is to be balanced, and what it is to depart from balance, thereby producing curvature, by re-expressing the traditional equation for a circle of area 1 (\pi \left(\sqrt{\frac{1}{\pi }}\right)^2=1) as

\lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right){}^2=1

Where the traditional equation fails by implying that an energy source located at the center of this area unit-circle is undiminished from center to circumference (it has either a zero or an infinite radius), the second provides us with a potentially infinite hierarchy of energy levels that are necessarily non-infinite and non-zero. Given that \gamma is the special case of \zeta (s)-\frac{1}{s-1} in the limit s \to 1, we can go from \lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1 to the more general

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1
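The inner limits here are easy to check numerically. A minimal Python sketch using only the standard library (the s = 2 comparison uses \zeta(2) = \pi^2/6):

    import math

    def sum_minus_integral(s, x):
        """sum_{n=1}^{x} 1/n^s  minus  integral_1^x dn/n^s."""
        total = sum(1.0 / n**s for n in range(1, x + 1))
        integral = math.log(x) if s == 1 else (1 - x**(1 - s)) / (s - 1)
        return total - integral

    # s = 1: the limit is Euler's constant gamma = 0.5772156649...
    print(sum_minus_integral(1, 10**6))   # 0.5772161...

    # s = 2: the limit is zeta(2) - 1/(2-1) = pi^2/6 - 1 = 0.6449340...
    print(sum_minus_integral(2, 10**6))   # 0.6449345...
    print(math.pi**2 / 6 - 1)

In both cases the exponentials in the displayed equations then cancel in the limit, giving 1.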

Let s be a positive integer greater than 1, let

\hbar =e^{2 \gamma } \left(e^{-\left(\zeta (s)-\frac{1}{s-1}\right)}\right)^2-e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^7 \frac{1}{n^s}-\int_1^7 \frac{1}{n^s} \, dn\right)}\right)^2

and let

\left(\frac{e^{2 \gamma } \left(e^{-\left(\zeta (s)-\frac{1}{s-1}\right)}\right)^2}{\hbar }\right)^{1/s}

be the critical line/radius on the one side of which curvature in the direction of light is classical, and on the other non-classical,

and we see something like this:

Let s be a positive real number less than 1, let

\sigma =e^{2 \gamma } \left(e^{-\left(\zeta (s)-\frac{1}{s-1}\right)}\right)^2-e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^7 \frac{1}{n^s}-\int_1^7 \frac{1}{n^s} \, dn\right)}\right)^2

and let

\left(\frac{e^{2 \gamma } \left(e^{-\left(\zeta (s)-\frac{1}{s-1}\right)}\right)^2}{\sigma }\right)^{1/s}

be the critical line/radius on the one side of which curvature in the direction of space is classical, and on the other non-classical,

and we have the following model:

Since the initial condition of the universe is, in this model, massless -since it involves no space and time, and an infinite concentration of light- it follows that, absolutely speaking, light has no speed, and that the apparent speed of light is due to the expansion of space. Hence we have

1.* There is a perfectly isomorphic relationship between the time lines of observers regardless of their states of motion

2.* The fundamental laws of physics are the same for all observers

3.* Space expands at the velocity c regardless of the state of motion of the body in space

Recalling that balanced classical regions of moderate curvature are governed by the equation \lim_{x\to \infty } \, e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}\right){}^2=1, we can associate this concept of physical inertia with one of mathematical inertia, and re-express 1.* - 3.* in this way:

1.** There is a perfectly isomorphic relationship between the number lines of observers regardless of their states of motion

2.** The fundamental laws of arithmetic are the same for all observers

3.** The prime-density of an arithmetic progression is governed by \lim_{x\to \infty } \left(e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}\right){}^2\right)=1

If (2**) and (3**) are true, then we can preserve the symmetry of the laws of arithmetic and the balance of prime-density and sparsity only at the cost of the symmetry of the unit. We might say that everyone has a clock governing their lives that runs at a slightly different speed from anyone else’s clock, because of the prime-based asymmetry between the underlying number lines. But the equation \lim_{x\to \infty } \left(e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}\right){}^2\right)=1, and the balance of density and sparsity implied by e^{2 \gamma } and by the Generalized Riemann Hypothesis (11), places a strict limit on the extent of the disagreement between one number line and another -between one clock and another- thereby preventing such things as the Twin Paradox and other strange consequences of SR. It follows from the switch from the usual space-centric perspective to the absolute light-centric perspective (the God’s Eye View rather than the Man’s Eye View) that inertia is to be defined as the speed of the uniform expansion of space -c- and that any departure by an object from c requires the action of a force. From here we deduce the laws of motion:

(1) Every object persists in its state of rest or uniform motion in a straight line unless it is compelled to change that state by forces impressed upon it.

(2) Force is equal to the change in momentum (mV) per change in time. For a constant mass force = mass times acceleration.

(3) For every action there is an equal and opposite reaction. (15)

Gravity and all of the official forces of nature can be reduced to the thought that a force is the curvature of space-time resulting in a departure from c, either in the direction of the acceleration of the expansion of space (an imbalance of space) or in the direction of the deceleration of that expansion (an imbalance of light). Since the expansion of space is accelerating as a function of time, “accelerating” up to c is the same thing as decelerating to the degree that the gap between the acceleration and the uniform speed of the expansion of space -c- is bridged, and therefore decelerating to the degree that -locally speaking- acceleration is suspended and time ceases.

P versus NP and RH
All-important in this scheme of things is the notion of balance, from which arises the notion of uniformity, from which gravity and the other forces represent departures in one direction (toward the acceleration of expansion) or the other (toward the deceleration of expansion). The governing limit, as I have said, is set by \lim_{x\to \infty } \left(e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}\right){}^2\right)=1, but we can go further and say that the governing principle is the famous Riemann Hypothesis in its most general sense, named for Bernhard Riemann, who proposed it in his 1859 paper On the Number of Primes Less than a Given Magnitude (published in the same year and the same month as Darwin’s On the Origin of Species).

This says that the complex zeros of an L-function (16, 17, 18), each of which is associated with an arithmetic progression involving a certain balance of prime density and sparsity, have real part 1/2. Stated in these terms, the RH calls upon considerable knowledge of analytic number theory, and many dry technicalities, merely to understand what the hypothesis says -let alone to adduce reasons for or against it- but despite any appearances to the contrary the RH is a relatively simple statement about the limits there are on the departure from a balance of prime-density and sparsity that can be exhibited by an arithmetic progression, limits beyond which uniformity ceases, i.e. limits dividing the classical from the non-classical. A useful way to explain the RH is in terms of the role played by \pi (in the sense of the ratio of a circle’s circumference to its diameter, not a specific value) in the equation

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1

If s is a positive real number greater than 1, then pi in this sense has been stretched beyond the state in which there is a balance of prime-density and sparsity and -physically speaking- a balance of light and space. If s is a positive real number less than 1, then pi in this sense has been shrunk beyond that state. The one form of distortion corresponds to an extreme deceleration of the expansion of space-time and an imbalance of light and space in the direction of light, and the other corresponds to an extreme acceleration of the expansion of space-time and an imbalance of light and space in the direction of space. These imbalances -these extreme forms of curvature- work together to produce balance and moderate curvature. If and only if s = 1 and

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=\lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right){}^2=1

then the stretching and shrinking of pi is such that there is balance. To see how delicate and precisely maintained this balance is, let g = Graham’s number (19, 20). Even if every digit of Graham’s number were to occupy as little as 1 Planck volume, the number would still be too big to fit inside the observable universe; yet if the value of s in the equation were to differ from 1 by as little as

\frac{1}{g}

then there are no solutions to the equation

e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}=\left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right){}^2

and the classical/non-classical divide collapses. Each of the complex zeros is associated with a different spiral-wave

and summing them produces a superposition. Assuming the RH, a formula for the primes is

\text{Re} \sum _{n=1}^{\infty } \frac{p_x^{\rho _n}}{\rho _n}+\frac{\int_1^x a_1 x H_x \, dn+\int_1^x a_2 x H_x \, dn+\int_1^x a_2 x \log x \, dn+\int_1^x a_1 x \log x \, dn+\cdots }{n x}

and so this wave is the same as the difference between the primes and the function \frac{\int_1^x a_1 x H_x \, dn+\int_1^x a_2 x H_x \, dn+\int_1^x a_2 x \log x \, dn+\int_1^x a_1 x \log x \, dn+\cdots }{n x}

The RH says that the non-trivial zeros have real part equal to 1/2, which is an indirect and obtuse way of saying that the growth rate of the amplitude of this superposition stays within the bounds imposed by the square-root limit. If and only if the superposition goes outside these bounds has pi (re-expressed as e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)} to allow for the necessary flexibility) been stretched or shrunk too much for an arithmetic progression. We know that the real parts of the complex zeros are always less than 1 and greater than 0, and that they come in pairs of the form 1/2 + it and 1/2 - it. A pair of zeros off the critical line would have the form 1/n + it and (n-1)/n - it, where n > 2, and they would imply a greater-than-allowable stretching or shrinking of \pi, and a greater amount of prime density or sparsity -of light or space- than is possible for any uniform number or time line, i.e. a number/time line involving potentially infinite prime numbers. The RH then is defining of uniformity -smooth surfaces- and of c.
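The superposition can be exhibited numerically. Here is a minimal Python sketch of the Riemann-von Mangoldt explicit formula for the Chebyshev function \psi(x), truncated to the first five pairs of zeta zeros (the zero ordinates are the standard published values; with so few zeros the approximation is rough, but it visibly tracks the prime-counting staircase):

    import math

    # Imaginary parts of the first five non-trivial zeta zeros (real part 1/2).
    ZEROS = [14.134725142, 21.022039639, 25.010857580, 30.424876126, 32.935061588]

    def psi_direct(x):
        """Chebyshev psi(x): sum of log p over prime powers p^k <= x."""
        total = 0.0
        for n in range(2, int(x) + 1):
            m, p = n, None
            for d in range(2, math.isqrt(n) + 1):
                if m % d == 0:
                    p = d
                    while m % d == 0:
                        m //= d
                    break
            if p is None:
                total += math.log(n)      # n is prime
            elif m == 1:
                total += math.log(p)      # n is a prime power p^k
        return total

    def psi_explicit(x, zeros=ZEROS):
        """Explicit formula: x minus the superposition of waves x^rho / rho."""
        s = x - math.log(2 * math.pi) - 0.5 * math.log(1 - x**-2)
        for t in zeros:
            rho = complex(0.5, t)
            s -= 2 * (x**rho / rho).real  # each conjugate pair gives one real wave
        return s

    for x in (50.5, 100.5, 200.5):
        print(f"x = {x}: psi = {psi_direct(x):.2f}, explicit ~ {psi_explicit(x):.2f}")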

P versus NP and the other Millennium Problems
Along with P versus NP, the RH is one of 7 problems called by the Clay Mathematics Institute the most difficult and important in modern mathematics (21 – 27), and the remaining 5 Clay problems are also defining of uniformity and of c. The complete list is

  • Yang-Mills and the Mass Gap
  • Riemann Hypothesis
  • Poincare Conjecture
  • Birch and Swinnerton-Dyer Conjecture
  • Navier-Stokes equation
  • Hodge Conjecture
  • P versus NP

and Scott Aaronson (28) and others have rightly pointed out that if P = NP, then a computer could be programmed to solve all of these problems.

The Mass Gap Problem
If we regard the atom as having a circular shape, and the nucleus as located at the center of this circular shape, then we might expect from the laws of gravity and electromagnetism that nuclear energy would behave mathematically in the same way -that the strength of the force would diminish as the square of the distance- but it does not. Unlike gravity and electromagnetism -which are long-ranged- the nuclear forces are short-ranged: their influence extends only a very short distance from the center of the atom, and then vanishes. Unlike electromagnetism, the nuclear forces are described by a wave-equation with an extra term indicating that, unlike the photon, the associated particles have a mass (\frac{1}{c^2}\frac{\partial ^2A}{\partial t^2}=\nabla ^2A-\frac{A}{L^2}). The mass-gap is the difference between the lightest particle and the vacuum, and the mass-gap problem is to explain why these particles have a mass.
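The short-rangedness that the extra term produces can be seen numerically. A minimal Python sketch (the range L is set to 1 in arbitrary units): the massless field falls off as 1/r, while the massive field carries a Yukawa damping factor e^{-r/L} and is effectively gone after a few multiples of L.

    import math

    L = 1.0  # range of the force, arbitrary units

    def coulomb(r):
        """Massless field: long-ranged, falls off as 1/r."""
        return 1.0 / r

    def yukawa(r, L=L):
        """Massive field: the extra term contributes e^{-r/L} damping."""
        return math.exp(-r / L) / r

    for r in (0.5, 1.0, 2.0, 5.0, 10.0):
        print(f"r = {r:4.1f}   1/r = {coulomb(r):7.4f}   Yukawa = {yukawa(r):.2e}")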

Solution
Classically, the inverse square law describes how a fixed quantity is spread over an increasing area, which means that there is a symmetric relationship between that quantity and the area. If the quantity or the area died off with distance, then that would be an asymmetric relationship. In particular, then, the inverse square law must involve a broader concept of circularity, one that includes the traditional circle as a special case, but where the relationship between energy and space can be an asymmetrical one. We can extend the inverse square law in the required way by reference to a circle whose area is supposed to be 1: an energy source S located at the center will possess the same strength from center to circumference, for S/1 = S… From the equations above comes the idea of the disruption of an energy field by spatial particles, rather than the disruption of empty space by energetic particles. The mass-gap would in this case be the difference between the lightest massive particle and a massless energy field. But when s = 1, and the size of this gap is such that there is a balance of concentrated and diffused light, the result is light and space that are co-extensive over a potentially infinite range. This gives us a symmetry principle according to which we can look at it either way - a switch that is a matter of reversing the usual picture of the universe inherited from the ancient Greek atomists as a dark, static receptacle within which luminous objects move:

That is, whenever we are inclined to say that a quantity of light is moving relative to a background of space, ask also how space must be moving if the background of the universe is light (in the figure below, the black square moves rather than the yellow circle):

For instance, we may change the speed of light in a vacuum from 3 * 10^8 m/sec to 0, and consider instead that space is expanding at a minimum rate of 3 * 10^8 m/sec… This symmetry breaks when s is a positive real number other than 1. If s > 1 it breaks in the direction of light, and if s < 1 it breaks in the direction of space. When it breaks, the co-extension of light and space is short-ranged.

Poincare Conjecture
The Flat Earth Society notwithstanding, we know the surface of the earth to be round, but for most intents and purposes we can regard it as flat because we are, comparatively speaking, very small. We can call a surface that is flat in this local sense a ‘manifold’, and note that some manifolds are smooth, while others are not. If a loop is drawn on the surface of a sphere, for example, it can apparently be shrunk to a point, regardless of where on the surface of the sphere such a loop is drawn. But if a loop is drawn on the surface of a torus it is possible for it to get stuck.

The reason a loop can be shrunk to a point on the surface of a sphere is that, in a sense, this surface has no holes in it, and the reason this is not true of the surface of a torus is that this surface clearly does have a hole in it. We can say that a manifold is long-ranged if and only if it passes the ‘loop-shrinking test’, i.e. if any loop drawn on the manifold can be shrunk to a point. Let’s adopt the topological convention that an n-sphere inhabits (n+1)-dimensional space, in which case the sphere and the torus are 2-dimensional objects in 3-dimensional space. In its broadest form, the Poincare Conjecture says that if an n-manifold passes the loop-shrinking test, then it is long-ranged, and the version of the conjecture carrying the million dollar prize is concerned with a 3-dimensional object in 4-dimensional space.
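The loop-shrinking test can be made computational in a simple case. A minimal Python sketch (the plane with a hole at the origin stands in for a surface with a hole): a closed polygonal loop is shrinkable around the hole precisely when its winding number about the hole is zero.

    import math

    def winding_number(loop, hole=(0.0, 0.0)):
        """Signed number of times a closed polygonal loop wraps around `hole`."""
        total = 0.0
        n = len(loop)
        for i in range(n):
            x1, y1 = loop[i][0] - hole[0], loop[i][1] - hole[1]
            x2 = loop[(i + 1) % n][0] - hole[0]
            y2 = loop[(i + 1) % n][1] - hole[1]
            # signed angle of the turn from (x1, y1) to (x2, y2)
            total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
        return round(total / (2 * math.pi))

    def circle(cx, r, steps=100):
        return [(cx + r * math.cos(2 * math.pi * k / steps),
                 r * math.sin(2 * math.pi * k / steps)) for k in range(steps)]

    print(winding_number(circle(5.0, 1.0)))  # 0: misses the hole, shrinkable
    print(winding_number(circle(0.0, 1.0)))  # 1: encircles the hole, stuck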

Solution
As with most mathematics, there is a problem with the very language in which the Poincare Conjecture is expressed, because our solution to the Mass Gap Problem implies that there can be no such thing in our arithmetic domain as a point in the sense of a zero-sized object (or a line in the sense of an infinite-sized object), and that every surface has a non-zero-sized hole in it. Nonetheless we can use the key distinction between long-ranged (smooth) and short-ranged manifolds to reveal the truth of the Poincare Conjecture by applying this distinction to surface areas in any number of dimensions. We do this by identifying \pi with e^{2 \gamma }, which in the 1D case gives us:

\lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1

Extending the notion of pi to include values of s other than 1…

\lim_{x\to \infty } \left(e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{\exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)}\right){}^{\frac{1}{s+1}}\right){}^{s+1}\right)=1

We may then argue in this way:

  • All manifolds associated with the value s = 1 are smooth manifolds in the sense that these manifolds can be continuously expanded as the holes in them are continuously contracted;
  • If the Poincare Conjecture is false -if there might be a compact object in 4 dimensions which is smooth in the crude sense that a sphere is smooth, but not deformable without cutting or gluing into a hypersphere- then there is a smooth manifold in the more sophisticated sense that is associated with the value s != 1;
  • Therefore the Poincare Conjecture is true.

All elliptic curves (and all modular forms) are smooth manifolds as this concept is defined above. It follows that the non-modular Frey curve y^2 = x (x - a^n) (x + b^n) (29) -and any curve associated with a putative solution to Fermat’s Last Theorem- is associated with a non-smooth manifold, and so that Fermat has the form of the RH and Poincare:

  • All elliptic curves are smooth manifolds;
  • If Fermat is false -if there are any whole-number solutions to the equation x^n + y^n = z^n for n greater than 2- then there is a smooth manifold associated with the value s != 1;
  • Therefore Fermat is true.

It makes sense that this should be so because Fermat concerns the equation for the circle, and \lim_{x\to \infty } \, e^{2 \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)}=e^{2 \left(\zeta (s)-\frac{1}{s-1}\right)} concerns the ways in which pi -in the sense of the ratio of a circle’s circumference to its diameter- can be distorted: both Fermat and the RH are placing limits on how great this distortion can be before a manifold ceases to be smooth.

Birch and Swinnerton-Dyer Conjecture
The Birch Swinnerton-Dyer Conjecture starts with the ancient question of whether there is an integer that is the area of a right triangle whose sides have rational length. Such integers are called ‘congruent’. 6, for example, is a congruent number because there is a right triangle whose area is 6 and whose sides are 3, 4 and 5. The problem of determining if a is the area of a right triangle having rational sides is equivalent to the problem of finding rational solutions for x and y to the cubic equation

y^2=x^3-a^2 x

which is in turn equivalent to the problem of determining if there are infinitely many rational points on an elliptic curve - the kind of curve underlying Andrew Wiles’ proof of Fermat’s Last Theorem (30).
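The correspondence can be checked directly for a = 6 with a minimal Python sketch (exact rational arithmetic; the point (12, 36) is a known rational point on y^2 = x^3 - 36x, and the map back to the triangle is the standard one):

    from fractions import Fraction as F

    def on_curve(x, y, a):
        """Is (x, y) a point of y^2 = x^3 - a^2 x?"""
        return y * y == x**3 - a * a * x

    def triangle_from_point(x, y, a):
        """Standard map from a rational point to a right triangle of area a."""
        return ((x * x - a * a) / y, 2 * a * x / y, (x * x + a * a) / y)

    a = 6
    x, y = F(12), F(36)
    print(on_curve(x, y, a))             # True
    print(triangle_from_point(x, y, a))  # legs 3 and 4, hypotenuse 5, area 6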

Solution
Recall our earlier ruminations on a unit-circle in the sense of a circle whose area is 1. The traditional unit-circle has a radius of 1, but it is a simple matter to set up a correspondence between the two using the same mathematical framework. Rather than

\lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1

we have

\lim_{x\to \infty } \sqrt{\frac{e^{2 \gamma }}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}=1

We have then two different types of unit-circles, the difference being that in the one case the area is rational and the radius is irrational, whereas in the other case the radius is rational but the area is irrational. The Pythagorean Theorem cannot, in the case of a circle based on the first equation, yield so much as a single right triangle -or a single rectangle- whose area and whose sides are all rational, for although the equation for the latter unit-circle is

x^2+y^2=1^2

the equation for the former is

\pi \left(\frac{1}{\sqrt{\pi }}\right)^2=1

the equation for the area of the latter is

\pi 1^2=\pi

If n is a congruent number, then the hypotenuse of the right triangle whose area is n is the diameter of a circle that is a scaled version of the unit-circle whose radius is 1. For example, the triangle of area 6, and sides 3, 4, and 5, corresponds to a circle of diameter 10. The right triangle associated with this diameter can be continuously scaled, meaning that there are an infinite number of right triangles whose areas are congruent numbers.

And if and only if the triangle in question can be continuously scaled in this manner is it the case that there are an infinite number of rational points on the associated elliptic curve. This implies that the underlying unit-circle has a radius of 1, and that s in the following formula for pi is equal to 1:

e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}

It is arguable on this basis that there is a 4-way equivalency between Fermat, Poincare, the RH, and the Birch Swinnerton-Dyer Conjecture:

  • All elliptic curves are smooth manifolds;
  • If the Birch Swinnerton-Dyer Conjecture is false -if it is possible that s = 1 and the L-function of an elliptic curve is equal to 0 while the curve doesn’t contain infinitely many rational points- then there exists an elliptic curve that is non-smooth;
  • Therefore the Birch Swinnerton-Dyer Conjecture is true.

In all these cases we have argued that

  • All smooth manifolds are associated with s = 1;
  • The falsity of any of these conjectures/hypotheses implies either that there is a smooth manifold associated with s != 1 or that there is a non-smooth manifold associated with s = 1.

 

Navier-Stokes Equations
Imagine that you have a container of water and you are given the pressure and the velocity of this body of water at every point at some time t_0; a solution to the Navier-Stokes equations will then give you the pressure and the velocity of this body of water at any later time t_n. The challenge is to prove that solutions to this type of problem of fluid dynamics always exist that do not involve a singularity (smooth solutions).
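To see why smoothness is the delicate issue, consider the simplest nonlinear cousin of the Navier-Stokes equations, Burgers’ equation u_t + u u_x = \nu u_{xx} (a minimal Python sketch, not the full 3-D problem): the nonlinear term steepens a smooth velocity profile toward a near-discontinuity, and the maximum gradient grows until viscosity intervenes.

    import math

    # Burgers' equation on a periodic domain, explicit finite differences.
    N, nu, dt = 400, 0.02, 0.0005
    dx = 2 * math.pi / N
    u = [math.sin(i * dx) for i in range(N)]  # smooth initial profile

    for step in range(2001):
        un = u[:]
        for i in range(N):
            left, right = un[i - 1], un[(i + 1) % N]
            conv = un[i] * (right - left) / (2 * dx)        # nonlinear steepening
            diff = nu * (right - 2 * un[i] + left) / dx**2  # viscous smoothing
            u[i] = un[i] + dt * (diff - conv)
        if step % 500 == 0:
            grad = max(abs(u[(i + 1) % N] - u[i]) / dx for i in range(N))
            print(f"t = {step * dt:.2f}   max |du/dx| = {grad:.1f}")

With vanishing viscosity the gradient would blow up in finite time; whether anything analogous can happen in three dimensions is exactly what the Millennium Problem asks.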

Solution
Consider the equation \lim_{x\to \infty } \left(e^{2 \gamma } \sqrt{\frac{1}{e^{2 \left(\sum _{n=1}^x \frac{1}{n}-\int_1^x \frac{1}{n} \, dn\right)}}}\right)^2=1. This tells of an ineliminable gap between spirals and circles, or between the interior and exterior circles of annuli.

If and only if s =1 and

e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)} \left(\left(\frac{1}{e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}}\right)^{\frac{1}{s+1}}\right)^{s+1}=e^{2 \gamma } \sqrt{\frac{1}{e^{2 \gamma }}}^2

there is a balance of contraction and expansion, so that the exterior circle in the one case can approach as closely to 1 as the interior circle can approach to 0, and so that in the other case the exterior circle can approach as closely to -1 as the interior circle can approach to 0. If the Navier-Stokes Equations always have a smooth solution, we could use them to close these gaps and eliminate an ineliminable fluidity. More particularly, calculus and the notions of differentiability and integrability -from which the Navier-Stokes Equations emerge- are dependent on the smoothness of a manifold as defined above, but beneath this manifold are sub-manifolds on which this smoothness depends, and the Navier-Stokes Equations are bound to fail when confronted with them. Calculus asks us to forget about what speed something is going at a particular instant, and to look instead at the average speed over smaller and smaller intervals of the journey. As these small intervals approach zero, the average speed approaches a certain limiting value, and by convention this value is called the instantaneous speed. But the whole exercise has an imprecision attached to it, because true instantaneous speed should be 0. If we continue to reduce the differences between a point of interest and nearby points, then any surrounding space will vanish if and only if this difference reaches 0. This tells us that there is no coordinate system whose origin corresponds to a state of rest, which is another way of expressing the fact that arithmetic depends on the gap there must always be between the means and the object of counting, between -let us say for the moment- the mind and the world. Equivalently, there is an impossibility attached to knowing both the speed and the position of an object, for the ability to know these variables with complete precision -in which case instantaneous speed and position are both 0- requires that the object has neither speed nor position. We must consider then the minimum and the maximum size of the departure from this extreme, which we can do by appeal to the equation \lim_{x\to \infty } \, \exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)=e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}, to the value of s, and to its association with smooth and non-smooth manifolds.

The fluidity inherent in the equation above is balanced by the value s = 1 -so that as momentum is increasingly rigidified, position increasingly becomes flexible, and conversely- and we can argue that differentiation -and by implication integration- works as advertised only in the case that s = 1 and the surface is smooth. The argument is then as follows:

  • All differentiable/integrable manifolds are smooth manifolds;
  • If the Navier-Stokes Equations always have a smooth solution, then it is possible to use them to eliminate the fluidity in virtue of which differentiation/integration is possible;
  • Therefore the Navier-Stokes Equations don’t always have a smooth solution.

Hodge Conjecture
“Every harmonic differential form (of a certain type) on a non-singular projective algebraic variety is a rational combination of cohomology classes of algebraic cycles.”

Solution
Consider a circle and a square of area 1, and a square and circle inscribed within them.

From a geometric point of view, the circle and the square are distinct and rigid, but from a topological view-point, they can be deformed into each other on account of their smoothness. There is then a flexibility here - a fluidity. The area marked in black is shared between the enclosing and the enclosed shapes, but the area marked in yellow is unshared.

Shared = \frac{2}{\pi }+\frac{\pi }{4}=1.42202\approx \sqrt{2}

Unshared = 2-\frac{2}{\pi }-\frac{\pi }{4}=0.577982\approx \gamma
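These two numbers are easy to verify with a couple of lines of Python (the ≈ signs above are as loose as they look; the constants agree to only two or three decimal places):

    import math

    EULER_GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

    shared = 2 / math.pi + math.pi / 4
    unshared = 2 - shared

    print(shared, math.sqrt(2))    # 1.4220179...  vs  1.4142135...
    print(unshared, EULER_GAMMA)   # 0.5779820...  vs  0.5772156...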

If we work, not with \pi as traditionally understood, but with

\lim_{x\to \infty } \, \exp \left((s+1) \left(\sum _{n=1}^x \frac{1}{n^s}-\int_1^x \frac{1}{n^s} \, dn\right)\right)=e^{(s+1) \left(\zeta (s)-\frac{1}{s-1}\right)}

we see that the shapes above are governed by s = 1 and a certain very particular balance of shared and unshared area - a balance associated with a very particular balance of rigidity and flexibility, and associated with smooth manifolds. Like the other Millennium Problems, the Hodge Conjecture may on this basis be seen as the same problem. We can argue, firstly, that -contrary to the impression created by the way in which the Poincare Conjecture is usually stated- every smooth manifold involves a hole arising from the nature of the area unit-circle. Secondly, we can argue that every such manifold involves non-trivial topological cycles which are equivalent to algebraic cycles by virtue of the relationship between a unit circle and a unit square:

  • All smooth manifolds involve equivalent topological and algebraic cycles;
  • If the Hodge Conjecture is false, and there is a non-trivial topological cycle of the relevant type (even dimension etc.) that is not equivalent to an algebraic cycle, then this cycle is like a zero off the critical line, or like the Frey curve;
  • Therefore the Hodge Conjecture is true.

In all of these cases we have a single, simple line of reasoning in mind. It is that the universe arises from a singularity of light that is uniformly projected over space in a way analogous to that in which an energy source located at the center of a circle is projected over space, i.e. pi, in the sense of the ratio of a circle’s circumference to its diameter, is an essential element of the uniformity of this projection. But underlying this uniform projection is the dis-uniform projection of light, in the form of the sources of the projection - the projectors rather than the projection. The question of the Millennium Problems -all of them- is whether a surface has the properties of a projection (infinite primes and arithmetic continuity) or of a projector (finite primes and arithmetic discontinuity), which turns on the question of whether pi has been distorted beyond the point at which a projection is uniform. All of these problems are to be solved by identifying the bounds there are on the distortion of \pi given that a surface is a projective surface, and by seeing that each of the various conjectures and hypotheses -the exception is the Mass Gap problem, which isn’t really a claim- carries as a consequence of its truth or falsity that \pi is distorted beyond these bounds while a surface nonetheless remains uniform. The falsity of Fermat, the equality of P and NP, the falsity of the RH, the falsity of Poincare, the falsity of the Birch Swinnerton-Dyer, the existence of smooth solutions to all Navier-Stokes equations, and the falsity of the Hodge Conjecture all imply that \pi can be distorted beyond bounds while a surface remains uniform, and hence we know by reductio ad absurdum that the opposite applies.

 

The Barriers to Time-Travel
c as understood in the Standard Model of Particle Physics can be translated into the language of the new model by reference to the fine structure constant, \alpha. The recommended CODATA value of \alpha is

\frac{7.29735}{10^3}=\frac{1}{137.036}=\left(e^{2 \gamma } \left(e^{-\left(\zeta (12)-\frac{1}{12-1}\right)}\right)^2-e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^1 \frac{1.00009}{1^{12}}-\int_1^1 \frac{1.00009}{1^{12}} \, dn\right)}\right){}^2\right){}^2

It follows that

\alpha =\frac{e^2}{\left((4 \pi ) \epsilon _0\right) (c \hbar )}=\frac{\left(e^2 c\right) \mu _0}{(4 \pi ) \hbar }=\frac{e^2 k_e}{c \hbar }=\frac{c \mu _0}{2 R_K}=\frac{e^2 Z_0}{(4 \pi ) \hbar }=\left(e^{2 \gamma } \left(e^{-\left(\zeta (12)-\frac{1}{12-1}\right)}\right)^2-e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^1 \frac{1.00009}{1^{12}}-\int_1^1 \frac{1.00009}{1^{12}} \, dn\right)}\right){}^2\right){}^2

In Planck units

\alpha _G\left(\left(\frac{e}{m_e}\right){}^2\right)=\left(e^{2 \gamma } \left(e^{-\left(\zeta (12)-\frac{1}{12-1}\right)}\right)^2-e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^1 \frac{1.00009}{1^{12}}-\int_1^1 \frac{1.00009}{1^{12}} \, dn\right)}\right){}^2\right){}^2
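The expression can be evaluated directly (a minimal sketch assuming the mpmath library; it simply transcribes the formula above, in which the sum \sum_{n=1}^{1} of the constant 1.00009 is 1.00009 and the integral from 1 to 1 vanishes):

    from mpmath import mp, mpf, exp, zeta, euler

    mp.dps = 20  # working precision

    s = 12
    first = exp(2 * euler) * exp(-(zeta(s) - mpf(1) / (s - 1)))**2
    second = exp(2 * euler) * exp(-mpf("1.00009"))**2  # sum = 1.00009, integral = 0

    print((first - second)**2)   # ~0.0072979
    print(1 / mpf("137.036"))    # ~0.0072974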

There is a difference of 0.0000127295 between the recommended value of \alpha and \left(e^{2 \gamma } \left(e^{-\left(\zeta (12)-\frac{1}{12-1}\right)}\right)^2-e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^1 \frac{1}{1^{12}}-\int_1^1 \frac{1}{1^{12}} \, dn\right)}\right){}^2\right){}^2, but this value has changed over the decades, it is dependent on many questionable assumptions, and positive and negative variations in \alpha of one part in 100,000 are suggested by data on quasar absorption lines (31).

But what of the connection of \alpha to s = 12? The fact that there are other values of s involving different values of \alpha, defined as \left(e^{2 \gamma } \left(e^{-\left(\zeta (n)-\frac{1}{n-1}\right)}\right)^2-e^{2 \gamma } \left(e^{-\left(\sum _{n=1}^1 \frac{1}{1^n}-\int_1^1 \frac{1}{1^n} \, dn\right)}\right){}^2\right){}^2, means that there are other epochs (earlier and later eras in the history of our own universe) and other dimensions (universes that are distinct from our own) associated with different sets of physical constants. An acceleration greater than c would permit an object to escape to one of these other epochs (time-travel to the past) or other dimensions (universe-hopping) in which the laws of physics as we know them do not apply. But what about an acceleration that is not greater than c? Now the limiting threshold is, not the distinction between present and past, or between one dimension and another, but that which comes out of the RH and the other Millennium Problems and divides the uniform (classical) from the non-uniform (non-classical). Classical objects (s = 1) are accelerating away from c, but at a non-exponential rate, whereas non-classical objects (s is a positive real number other than 1) are either accelerating away from c, or towards it, at an exponential rate. Again, the value of c differs from epoch/dimension to epoch/dimension, and so by implication does the classical/non-classical distinction. But in any case, speeds close enough to whatever c happens to be in one’s own epoch/dimension to permit the scenario of Aaronson’s thought experiment are exponential speeds. The distinction that seemingly prevents the Twin Paradox from arising also prevents a zero off the critical line… and it prevents the equality of P and NP. These barriers, we can observe, are the barriers to the reality of time-travel and much -if not all- of what we would call the “paranormal”. To see that these barriers are artificial, and can be broken, consider Shor’s Algorithm (32). Shor’s Algorithm is an algorithm for a quantum, rather than a classical, computer that permits the Factoring problem to be solved in polynomial time, and puts Factoring in BQP, which is the quantum version of P.
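The quantum core of Shor’s Algorithm is period-finding; everything around it is classical, and can be sketched in a few lines of Python (the quantum step is faked here by brute-force order-finding, which is of course exponential on a classical machine):

    import math, random

    def order(a, N):
        """Multiplicative order of a mod N - the step a quantum computer does fast."""
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_classical_part(N):
        """Classical reduction: an even order r with a^(r/2) != -1 mod N yields factors."""
        while True:
            a = random.randrange(2, N)
            g = math.gcd(a, N)
            if g > 1:
                return g, N // g              # lucky guess already shares a factor
            r = order(a, N)
            if r % 2 == 0:
                y = pow(a, r // 2, N)
                if y != N - 1:
                    p = math.gcd(y - 1, N)
                    if 1 < p < N:
                        return p, N // p

    print(shor_classical_part(15))   # (3, 5), in some order
    print(shor_classical_part(21))   # (3, 7), in some order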

Shor’s result carries the implication that the barriers we have been discussing are barriers for classical, but not for quantum, computational processes. To see why it is true that Shor’s Algorithm really does permit polynomial-time solutions to NP-Complete problems, consider first the Travelling Salesman Problem, which is the problem of whether a salesman can visit a number of cities exactly once and return to a home-city for a certain cost - expressed graphically, the problem of whether there is a path around a graph that visits every vertex exactly once before returning to a home vertex. First we transform TSP into the problem of whether a computer (salesman) can execute some number of instructions (visit some number of cities), executing every instruction exactly once (visiting every city exactly once), before returning to a halt state (home-city) for some maximum cost. An arbitrary computer is therefore working on the problem of whether an arbitrary computer will halt when run with an arbitrary set of instructions, and thus a point will be reached when the evaluation is a self-evaluation, i.e. a point will be reached such that the computer is attempting to determine of itself whether it will halt.

If we associate to every city an instruction, this self-evaluative point will be reached when the number of cities on the tour is not less than the number of instructions in the program. This leads to a contradiction in the case that the number of cities is greater than the number of instructions.

TSP then involves a limit on the number of cities, from which it follows that TSP differs from the quickly solvable problems in P, where the relationship between input and program is such that both may grow infinitely in tandem, and that P and NP are not equal. We can further this argument by considering the question of whether the square root of the number of cities in some TSP problem is greater than the largest prime factor of the number of cities in some other TSP problem, from which arises the question of whether the computer corresponding to the one TSP problem can run the other TSP problem. The computer can run a TSP problem in finite time if and only if the square root of the number of cities is not greater than the largest prime factor of the number of cities (instructions) of the other problem, for if the square root of the number of cities is greater, then the number of cities is greater than the number of instructions in the program. Any TSP problem can be transformed into a Factoring problem by considering the relative prime-complexity of a pair of TSP problems, which means that Factoring isn’t in P. It also means that a polynomial-time algorithm for Factoring is a polynomial-time algorithm for TSP and for any NP problem…
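The exponential character of TSP that this argument turns on is visible even in the naive brute-force solver below (a sketch; the distance matrix is random and purely illustrative):

    import itertools, random, time

    def tsp_brute_force(dist):
        """Cheapest tour starting and ending at city 0, trying all (n-1)! orderings."""
        n = len(dist)
        best = float("inf")
        for perm in itertools.permutations(range(1, n)):
            tour = (0,) + perm + (0,)
            cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
            best = min(best, cost)
        return best

    random.seed(0)
    for n in range(4, 11):
        dist = [[random.randint(1, 99) for _ in range(n)] for _ in range(n)]
        t0 = time.perf_counter()
        tsp_brute_force(dist)
        print(f"{n} cities: {time.perf_counter() - t0:.4f} s")  # roughly (n-1)! growth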

Jesus of Nazareth, Time-Traveller
We have here the makings of a physical -rather than metaphysical- basis for UFOs as trans-dimensional objects, and for time-travel and other paranormal phenomena. In the light of this, let’s consider what is undoubtedly the greatest trans-dimensional, time-traveling, paranormal event in human history - the resurrection of Jesus of Nazareth, probably the thing that Hume, a staunch anti-Christian, is indirectly referring to with his Queen Elizabeth analogy:

But suppose, that all the historians who treat of England, should agree, that, on the first of January 1600, Queen Elizabeth died; that both before and after her death she was seen by her physicians and the whole court, as is usual with persons of her rank; that her successor was acknowledged and proclaimed by the parliament; and that, after being interred a month, she again appeared, resumed the throne, and governed England for three years…

Jesus was crucified by the Romans in the 1st century and his dead body was interred for 3 days, but, according to all accounts given by eye-witnesses to these events, he came back to life, escaped the tomb, and re-appeared to his disciples and to hundreds of others. The historical evidence for his defeat of death is as good as the historical evidence for any ancient historical event, and -as summarized by resurrection expert Dr Gary Habermas- it proceeds in this way (33):

(1) If the bible is the word of God, then Jesus resurrected;

(2) If the bible is merely a reliable historical document, then Jesus resurrected;

(3) If the bible is not a reliable historical document, then Jesus resurrected;

(4) Therefore, Jesus resurrected.
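
As far as its logic goes the argument is valid: premises (2) and (3) already exhaust the cases, as this sketch of its propositional skeleton (my formalization, in Lean, not Habermas’s own) confirms:

```lean
-- Propositional skeleton of the argument (a sketch of my own, not
-- Habermas's): "reliable" and "not reliable" exhaust the cases, so
-- premises (2) and (3) alone entail the conclusion.
theorem resurrected (Reliable Resurrected : Prop)
    (h2 : Reliable → Resurrected)
    (h3 : ¬Reliable → Resurrected) : Resurrected :=
  (Classical.em Reliable).elim h2 h3
```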

By itself, the argument doesn’t succeed, because of the defeaters “Whatever contravenes the laws of nature cannot occur” and “Resurrection contravenes the laws of nature”, but in the light of the considerations above the defeaters may not apply. To see that they don’t in fact apply, consider that a better argument than the historical argument comes from the Shroud of Turin. This is a 14.3 foot by 3.7 foot linen cloth bearing the faint double-image (ventral and dorsal) of a naked man who appears to have been crucified (together with burn marks, water stains, and repair-patches). The image resides only on the topmost fibrils of the threads with which the Shroud is woven, and it is a negative image.

Although very faint when viewed as a positive, the image becomes much clearer when darks and lights are reversed.

In 1978 a group of scientists known as STURP (the Shroud of Turin Research Project) performed an intensive series of tests on the Shroud. Many were skeptical and expected to quickly discover it to be a fake (contrary to what has sometimes been claimed by Shroud-deniers, 38 of the 40 were agnostics prior to their investigation), but after three years of analyzing the data they had collected they wrote in their final report:

We can conclude for now that the Shroud image is that of a real human form of a scourged, crucified man. It is not the product of an artist. The blood stains are composed of hemoglobin and also give a positive test for serum albumin. The image is an ongoing mystery and until further chemical studies are made, perhaps by this group of scientists, or perhaps by some scientists in the future, the problem remains unsolved.

This conclusion was seemingly belied by the results of a 1988 radiocarbon test on a sample taken from a corner of the Shroud (the Raes corner). The test indicated, with 95% confidence, that the linen was produced between 1260 and 1390 AD (34).
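
For orientation, radiocarbon dating rests on the exponential decay of carbon-14, whose half-life is about 5,730 years; the following back-of-the-envelope sketch (with illustrative dates, and ignoring the calibration against tree-ring data that a real laboratory performs) shows the size of the difference the 1988 test was claiming to detect between a 1st century and a medieval linen.

```python
from math import exp, log

HALF_LIFE = 5730.0  # approximate half-life of carbon-14, in years

def c14_fraction(age_years):
    """Fraction of the original carbon-14 remaining after age_years."""
    return exp(-log(2) * age_years / HALF_LIFE)

# Illustrative ages as of the 1988 test: a linen from ~30 AD vs ~1325 AD.
for label, age in (("1st century linen", 1988 - 30),
                   ("medieval linen", 1988 - 1325)):
    print(f"{label}: {c14_fraction(age):.3f} of the original C-14 remains")
# Roughly 0.789 vs 0.923 -- a difference of about 13 percentage points.
```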

Adding fuel to the skeptical fire was the fact that this time-period coincides with the first known expositions of the Shroud, in the French town of Lirey in 1355. The findings initially limited the popularity of Shroud research, but from the beginning the question wasn’t whether the carbon date was right but why it was wrong, for the Shroud clearly predates 1260. If the medieval date is right, then the Shroud is a forgery, yet all the scientific evidence we have other than this date implies that it is not a forgery:

  • the image on the Shroud was not drawn or painted (there are no binding agents or particulates on the Shroud in the region of the image, and, unlike a painting, the image has no directionality) (35)
  • it is a negative created at a time when photography didn’t exist, but it is not a photograph (it contains 3D information that photographs do not; see the sketch after this list)
  • it is not a contact print (parts of the Shroud that were not in contact with the body bear impressions as clear as parts that were in contact with the body)
  • the man in the Shroud was truly subjected to horrific and mortal injuries (the blood on the Shroud -that of a real man- contains a high level of bilirubin, together with the substances creatinine and ferritin, all of which are found in patients who suffer forceful multiple traumas such as torture) (36) (37)
  • he has wounds associated with crucifixion, and the exit wound on the wrist contradicts depictions of the crucifixion in medieval art but reflects the way in which people must really have been crucified
  • blood stains on his feet tend to contradict depictions of the crucifixion in medieval art, but cohere with the only ancient skeleton of a crucifixion victim (a first century Jewish man named Jehohanan, whose ankle bone had been pierced with a nail) (38, 39)
  • he is covered with scourge marks clearly inflicted by the Roman instrument of torture known as the flagrum
  • he has puncture wounds on his head consistent with the wearing of a roughly prepared cap of thorns rather than the elegant wreath of thorns depicted by medieval artists
  • there is a large wound on his right side which matches the spears used by 1st century Roman executioners and from which postmortem blood and a watery serum (visible only by ultraviolet fluorescence photography) have flowed
  • there are no signs of decomposition, meaning that the body was removed from the Shroud within a few days
  • the Shroud contains traces of pollen from plants growing only in the area of Jerusalem, some of which have been extinct since antiquity (40)
  • there are microscopic traces of dirt at the feet of the man in the Shroud that match only limestone found in the area of Jerusalem (41)
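
The “3D information” mentioned in the list above refers to the finding that the image’s brightness varies inversely with the apparent cloth-to-body distance, so that a VP-8 image analyzer could render it as a coherent relief, something it cannot do with ordinary photographs. Here is a minimal sketch of that brightness-to-depth mapping (the linear mapping and the depth constant are my illustrative assumptions; the actual VP-8 was analog hardware):

```python
import numpy as np

def relief_from_brightness(image, max_depth_cm=3.75):
    """Sketch of a VP-8-style brightness-to-relief mapping: brighter
    image regions are read as lying closer to the cloth. `image` is a
    2-D array of brightness values in [0, 1]; `max_depth_cm` is an
    assumed maximum cloth-to-body distance."""
    image = np.asarray(image, dtype=float)
    span = image.max() - image.min()
    norm = (image - image.min()) / span if span > 0 else np.zeros_like(image)
    return max_depth_cm * (1.0 - norm)   # brightness 1 -> distance 0

# A toy 3x3 "image": the bright centre reads as the nearest point.
toy = [[0.2, 0.4, 0.2],
       [0.4, 1.0, 0.4],
       [0.2, 0.4, 0.2]]
print(relief_from_brightness(toy))
```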

None of this is sufficient to recommend the possibility of the resurrection so long as the defeater of the impossibility of rising from the dead remains, but in the words of Dr John Jackson, the founder of the STURP team:

…it is conceivable that the Shroud image presents, if you will, some type of “new physics” that ultimately requires an extension or even revision of current concepts. (42)

and

It might be that a simple piece of cloth, known as the Shroud of Turin, represents a valid case for rethinking certain concepts of modern science. (43)

Jackson’s own theory -known as the “cloth-collapse” theory- is this:

The concept of a cloth falling into the underlying body region and receiving an image, in essence, requires that two separate assumptions be made. First, we must assume that the body became mechanically “transparent” to its physical surroundings and, second, that a stimulus was generated that recorded the passage of the cloth through the body region onto the cloth as an image. With regard to the latter assumption, it is unclear in an a priori sense what to assume for the physical nature of the stimulus. However, we at least know that it was able to interact physically with cloth; otherwise, image discolorations would not have been formed. I propose that, as the Shroud collapsed through the underlying body, radiation emitted from all points within that body discolored the cloth so as to produce the observed image. (43)

But critics of the authenticity of the Shroud have, mixed in with the usual emotionally driven pseudo-historical and pseudo-scientific objections, some legitimate observations that prove Jackson wrong. For example, David Kyle Johnson Ph.D., who writes a Psychology Today blog (44), observes that the hair of the man on the Shroud defies gravity:

It is parallel with the man’s body, as if he is standing, instead of falling to the back of the head as it would if the body were lying flat…

He goes on to say that

Shroud enthusiasts – “shroudies,” as they like to be called – insist that the image on the shroud was produced by some kind of energy (like radiation) emitted by Jesus’s body as he rose. But the image on the shroud could not be produced by such an event. (A) Radiation can’t leave an image in cloth (false). (B) Even if it could, since radiation emits in all directions, at best it would just leave a blurry silhouette, not a clear cut face with features (false). (C) Even if it could produce a clear cut face with features, that face would be distorted. A cloth wrapped around someone’s head lays flat against their nose, eye sockets and ears. If someone’s face somehow ‘radiated’ and recorded an image on such a cloth, when flattened out the cloth would depict whole representations of each part – nose, eye socket, and ears – all pointing in the same direction (true, assuming the normal presence of gravity)

Two things expose the faked-up Shrouds as clear fakes: the undistorted nature of the frontal and dorsal images -especially the dorsal images- regardless of their distance from the cloth (see bas-relief fake Shrouds), and the evenly distributed nature of the projected light (see photographic fake Shrouds) that caused the degradation of the cellulose of the micro-fibers of the cloth which in turn produced a body-image. These same things expose cloth-collapse theories as false: the cloth in these theories isn’t flat with respect to the body that it wrapped, this body is itself the light-source, and so inevitably there is going to be both distortion and unevenness in the image created by the light. A theory -the only theory- that is free from these defects came in its earliest form from the artist Isabel Piczek (45), always the most deeply thoughtful and eagle-eyed of sindonologists, who observed:

  • The lack of anatomical distortion of the naked Body projected on the Shroud.
  • It is clearly visible on the Shroud Images, especially on the Dorsal Image, that the muscles of the Body are not crushed and flattened against the stone bench of the tomb.

The Body is hovering between the upper and the lower sheet and there is NO TRACE OF GRAVITY.

  • The lack of gravity is also further proven by the Shroud linen. The linen does not fall on top of the Body, but remains in its unnaturally stretched condition at some distance from the body.

She proposes that there is an “interface between the projection of the Frontal Image and the Dorsal Image” onto the Shroud, and that this interface is an “event horizon that blocks every communication between the two sides of the image”:

An Event Horizon is a critical line or a radius that divides Space-Time into two distinct regions. The exterior region one can experience, but the region beyond the critical line or radius one cannot experience. The critical line marks the path of the last light pulse that still reached the Event Horizon and Time itself slowed to a halt. Looking at the critical line from the other side Time and events gain almost infinite speed and one could see the whole history of the Universe, past present and future rapidly passing to an arbitrary end.

Ordinarily Event Horizons are tied to Gravity and Time, until they both, Time and Space cease to exist in a Black Hole, the end product of the process and gravity suffers a catastrophic collapse.

Does the Interface Event Horizon of the Shroud lead to a Black Hole?

Everything on the Shroud indicates that the answer is NO. It is here that we face the most substantial paradox of our investigation. A paradox of that magnitude one cannot solve all at once, but one can assemble everything that is known so far and get closer to the magic door opening through the Shroud into a startlingly different world…

The upper region of the Isolated System of the Shroud has one Event Horizon, H1, that serves as a boundary of the Upper Region and there has to be an Event Horizon, H2, that is boundary to the Lower Region of the closed system. There is no space region but infinite density between the two. The two Boundaries can be looked at as one. Because H1 and H2 move so close to each other that they look as one, makes them eventually disappear, causing a total collapse of the Time quantum to ABSOLUTE ZERO TIME.

The total Space and Time breakdown to zero exposes that what was in the heart of the now collapsed Event Horizon. Not a Black Hole, but a very special kind of SINGULARITY, similar to the one that once assisted the creation of a universe, our own…

SUMMARY OF THE PROCESS OF IMAGE FORMATION:

There is the hermetic separation of the two Images with no overlap. The Frontal Image is projected upward and the Dorsal Image downward, clearly describing a Horizontal Interface in between. The Interface seems to have the power of making the Shroud straight and taut and definitely forced to be parallel with this mysterious entity.

Due to this condition of the Shroud the Image of the Body on both sides is visibly perfect and undistorted anatomically. The cloth is staying at some distance from the Body and does not fall on it. Also clearly visible is that the muscles of the Body are not flattened against the stone bench of the tomb. The body is weightlessly hovering between the two sides of the sheet. The fact that the upper side of the sheet did not fall on the Body and that the muscles are not crushed and flattened against the stone bench of the tomb clearly proves the total absence of gravity. All the above visible traces on the Shroud indicate that the new Entity, the mysterious Interface is significantly more than just an Interface. It has obviously great power of a special kind that indicates we are dealing with an Event Horizon. The lack of Gravity, however, introduces here a significant paradox. There is a great upsurge of energy with strong organizing power that would mean decreasing entropy. The immense energy released has the means to overpower Gravity while energizing the Event Horizon already in place. (here some new and different laws seem to present themselves that have to be studied).

The greatly divided Upper region and Lower region of the Isolated System of the Shroud each seems to have its own Event Horizon. Since energy has been added by entropy falling, the Event Horizons move closer and closer to each other, finally becoming one. Time collapses to Zero. Eventually they disappear, revealing the entity they were clothing so to speak, the Space-Time Singularity Big Bang style. Everything indicates that at the collapse of the Interface the Body is suddenly no longer in the tomb.

After time, due to the impulse of the time-like part of Singularity started to proceed in a new direction, it had to give forth the Image that was carried by the last time quanta and encoded, as we know, on the Event Horizon. It was the Image of the dead Body with a retrospective quality and recorded in two halves.

The space-like part of Singularity, as we have seen, explodes with everything all at once spreading its powerful Information Field. With its obvious assistance the Image recovered by time spreads itself on both sides of the cloth. The mechanism is extremely complex. The orthogonal projection is scarce. The oblique forms are transported in multi planes turning and turning, superposed and creating a Bas-relief effect with the mysterious Event Horizon as its impenetrable background. Another paradoxical quality of the Shroud is its semi-three-dimensional character, a two-dimensional encoded Image that can only translate itself in three dimensions, forever speaking about the presence of a real three-dimensional Body.

We have to remember also that the explosion of a singularity is an incredibly powerful event even in an Isolated System. One of these brought forth the Big bang. The tomb can contain and witness only the infinitesimal split duration of the explosion and action of the Singularity, hence the unknown and alien marks on the Shroud that no one could imitate.

Creation started out of a Singularity and its explosion. We call it the Big Bang. No matter how startling and how unbelievable it is, but this investigation led to a definite Singularity. The Second Creation has been started in the tomb and burst forth from there. The Shroud is an incredible Blueprint presented amidst the signs of a monumental sacrifice.

I close with the very fitting words of John Archibald Wheeler:

“Some day a door will open and expose the glittering central mechanism of the world in its beauty and simplicity”.
Who would have thought that this door will open through the Shroud of Turin!

These ideas are vague-sounding, expressed with words rather than numbers, and, as indicated by Isabel herself, they don’t really make sense in the light of the General Theory of Relativity and/or the Standard Model of Particle Physics (“paradox” is her term). But they make good sense, and the paradox is resolved, in the light of the theory described above and illustrated below:

The picture shows that the resurrection took place, and the image was projected onto the Shroud, at the split second that the gap between the acceleration of the expansion of space and c was bridged by the decelerating body of Jesus of Nazareth, i.e. at the split second that gravity was suspended for that body on its way out of that epoch, and out of the terrestrial dimension. Fanciful? Only to a primitive mind. The objection that such a thing is impossible by natural means can be brushed aside: such a thing is impossible by natural means for a classical computational process, but perfectly possible by natural means for a quantum computational process. Without the Humean defeater that UFOs are in contravention of natural laws (and so that the people who claim to see them are probably lying, delusional, and/or misinterpreting natural phenomena), the anecdotal evidence for UFOs is now to be taken seriously. And without the defeater of the physical impossibility of the resurrection, Habermas’s historical argument, and the argument constituted by the overwhelmingly strong historical and forensic evidence of the Shroud, ought to be proof enough for a court of law to conclude, beyond reasonable doubt, that Jesus of Nazareth rose from the dead.

“Who is he?”, asks David Rolfe in his award-winning documentary on the Shroud of Turin (46). I’ll tell you who he is. He is the one who turned back time in the tomb in the rock in 1st century Jerusalem because, like the fictional Dr (Who), he possessed a mastery of the workings of time and space, and he could regenerate.

 


REFERENCES

(1) Aaronson, S, Has There Been Progress on the P versus NP Question?

(2) Gooch, S (1984), Creatures From Inner Space

(3) Harpur, P (2003), Daimonic Reality: A Field Guide to the Underworld

(4) Hume, D (1748), An Enquiry Concerning Human Understanding

(5) https://www.scottaaronson.com/blog/

(6) Aaronson, S, Quantum Computing and the Limits of the Efficiently Computable

(7) Crowell, B (2000), The Modern Revolution in Physics

(8) Einstein, A (1905), On the Electrodynamics of Moving Bodies

(9) Einstein A. (1916), Relativity: The Special and General Theory

(10) O’Connor, J (1996), General relativity

(11) Wald, R (1997), Gravitational Collapse and Cosmic Censorship

(12) Wald, R (1984), General Relativity

(13) Rubin, V et al (1980), Rotational Properties of 21 Sc Galaxies with a Large Range of Luminosities and Radii from NGC 4605 (R = 4kpc) to UGC 2885 (R = 122kpc)

(14) de Swart, J. et al (2017), How dark matter came to matter

(15) Newton, I (1687), The Principia: Mathematical Principles of Natural Philosophy

(16) Riemann, G (1859), Über die Anzahl der Primzahlen unter einer gegebenen Grösse (On the Number of Primes Less Than a Given Magnitude)

(17) Dirichlet, P (1837), Beweis des Satzes, dass jede unbegrenzte arithmetische Progression, deren erstes Glied und Differenz ganze Zahlen ohne gemeinschaftlichen Factor sind, unendlich viele Primzahlen enthält (Proof of the theorem that every unbounded arithmetic progression, whose first term and common difference are integers without common factors, contains infinitely many prime numbers)

(18) Davenport, H (2000), Multiplicative number theory

(19) Graham, R et al (1971), Ramsey’s Theorem for n-Parameter Sets

(20) Gardner, M (1977), In which joining sets of points leads into diverse (and diverting) paths

(21) http://www.claymath.org/millennium-problems/yang%E2%80%93mills-and-mass-gap

(22) http://www.claymath.org/sites/default/files/official_problem_description.pdf

(23) http://www.claymath.org/sites/default/files/poincare.pdf

(24) http://www.claymath.org/sites/default/files/birchswin.pdf

(25) http://www.claymath.org/sites/default/files/navierstokes.pdf

(26) http://www.claymath.org/sites/default/files/hodge.pdf

(27) http://www.claymath.org/sites/default/files/pvsnp.pdf

(28) Aaronson, S, P = NP

(29) Frey, G (1986), Links between stable elliptic curves and certain Diophantine equations

(30) Wiles, A (1995), Modular elliptic curves and Fermat’s Last Theorem

(31) Webb, J et al (2010), Evidence for the spatial variation of the fine-structure constant

(32) Shor, P (1995), Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer

(33) Habermas, G (1976), The Resurrection of Jesus: A Rational Inquiry

(34) Damon et al (1989), Radiocarbon dating of the Shroud of Turin

(35) Piczek, I (1993), Is the Shroud of Turin a Painting?

(36) Adler, A (2000), Chemical and Physical Characteristics of the Blood Stains

(37) Carlino, E et al (2017), Atomic resolution studies detect new biologic evidences on the Turin Shroud

(38) Haas, N (1970), Anthropological Observations on the Skeletal Remains from Giv’at ha-Mivtar

(39) Zias, J et al (1985), The Crucified Man from Giv’at ha-Mivtar: A Reappraisal

(40) Danin, A (2000), Micro-Traces of plants on the Shroud of Turin as geographical markers

(41) Kohlbeck, J et al (1986), New Evidence May Explain Image on Shroud of Turin

(42) Jackson, J (1990) Is the image on the Shroud due to a process heretofore unknown to modern science?

(43) Jackson, J (1991), An unconventional hypothesis to explain all image characteristics found on the Shroud image

(44) Johnson, K, https://www.psychologytoday.com/blog/logical-take

(45) Piczek, I, The Event Horizon of the Shroud of Turin

(46) Rolfe, D (1978), The Silent Witness