Abstract: In 1963, a game show called Let's Make a Deal began in the United States. On the show, the host – Monty Hall – would present contestants with a choice of 3 doors, behind only 1 of which was a car. A contestant would pick a door, say No. 1, and Monty, who knew what was behind the doors, would open another door, say No. 2, revealing a goat. Monty would then ask the contestant if they wanted to change their selection to door No. 3. It is widely accepted that the contestant should change doors, on the basis that the chance of the car being behind door 3 is 2/3, whereas the chance of the car being behind door 1 is only 1/3. But by appeal to congruities that exist between this seemingly innocuous and simple problem and a variety of deeper and less tractable problems, the Monty Hall Problem is revealed to be the tip of a great intellectual iceberg.
The Monty Hall Problem
The problem is named after the host of the game show Let's Make a Deal, on which contestants were presented with the choice of three doors, behind one of which was a car. Behind the other doors – goats. A contestant would pick a door, say No. 1, and the host Monty Hall, who knew what was behind the doors, would open another door, say No. 2, revealing a goat:
Monty would then ask the contestant if they wanted to change their selection to door No. 3. The question is whether the car is more likely to be behind door 1 or door 3.
The mathematician Paul Erdős apparently had difficulty accepting that there was any benefit to such a change, but imagine a slightly different show on which there are 100 doors, behind one of which is the car. There is in this case a 1/100 chance of choosing the desired door the first time. If the host of the show reveals goats behind 98 doors, and invites the contestant to choose again, their confidence that the car is to be found behind the remaining unopened door is bound to be high.
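The 2/3 advantage – and its 100-door analogue – can be checked by simulation. Below is a minimal Monte Carlo sketch (function and parameter names are my own), in which the host opens every unchosen goat door, so that switching wins exactly when the first pick missed the car:

```python
import random

# Monte Carlo sketch of the switching argument. With n doors, the host
# opens all but one of the unchosen doors, always revealing goats, so the
# one remaining closed door hides the car iff the first pick missed it.
def play(n_doors=3, switch=True, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(n_doors)    # door hiding the car
        pick = rng.randrange(n_doors)   # contestant's first choice
        if switch:
            wins += (pick != car)       # switching wins iff first pick missed
        else:
            wins += (pick == car)       # staying wins iff first pick was right
    return wins / trials

print(play(3, switch=True))     # close to 2/3
print(play(100, switch=True))   # close to 99/100
```

The 100-door run makes the intuition vivid: the first pick is almost certainly wrong, so the door the host leaves closed almost certainly hides the car.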
The Sleeping Beauty Problem
Sleeping Beauty participates in an experiment whose details she knows in advance. She is put to sleep on Sunday, and while she is sleeping a fair coin is tossed. If it comes up heads, Beauty is allowed to sleep until Tuesday, when she is woken once. If it comes up tails, she is woken on Monday and given a memory-erasing drug so that when she wakes next she won't remember the previous awakening; she is then woken again on Tuesday. She has no way to distinguish one awakening from another, and each time she is woken she is asked to assess the probability that the coin came up heads. What should her answer be?
There are 2 independent questions here: “What is the probability that the coin came up heads?” and “What is the probability that the coin came up heads and I am waking on Tuesday for the first time?” The answer to the first question seems to be 1/2, since there seem to be 2 equally likely possibilities: (1) the coin came up heads and (2) the coin came up tails. But the answer to the second question seems to be 1/3, since there are 3 possibilities: (1) the coin came up heads and I am waking on Tuesday for the first time, (2) the coin came up tails and I am waking on Monday, or (3) the coin came up tails and I am waking on Tuesday for the second time. The trouble is that the two answers are in conflict, and this conflict separates people into two camps – the ‘Halfers’ and the ‘Thirders’. The Halfers say that the answer to the second question is 1/2 when this seems wrong, and the Thirders say that 1/3 is the answer to the first question when this seems wrong… Imagine repeatedly tossing a fair coin and recording the ratio of heads to tails. If every head is recorded as a 1 and every tail as a -1, we will end up with a rough balance of 1s and -1s (there will be a margin of error about the size of the square root of the number of coin-tosses).
But if we record every coin-toss as a 1, then to arrive at this balance + margin of error we have illegitimately assumed that the number of coin-tosses and the number of integers used to count them form another string of 1s and -1s that is perfectly balanced, i.e. that the number line is flat. This perfectly balanced string has the same ratio of 1s and -1s as that we assign to the probability of a fair coin turning up heads or tails, i.e. 1/2. But what if we take the ratio of the imperfectly balanced string and use this to measure the probability of a ‘fair’ coin turning up heads or tails, i.e. what if we re-define fairness to allow for a certain limited amount of duality? If the imbalance of 1s and -1s is 1/3 to 2/3, we can describe the probability of the coin turning up heads (in which case Beauty awoke once, on Tuesday) or tails (in which case Beauty awoke on Monday and on Tuesday), not as 1/2 + 1/2 = 1, but as

(1/2 + m I) + (1/2 – m I) = 1

where m I is a margin of error times the square root of -1, and which in this case is equal to 1/6 I. The probability that Beauty is waking on Tuesday for the first time can be construed as (1/2 – m I), whereas the probability that she is waking on Monday, or on Tuesday for the second time, would in this case be (1/2 + m I). Beauty should on the basis of this reasoning say that the probability of the coin being heads is neither 1/2 nor 1/3, but (1/2 – m I). Looking back to The Monty Hall Problem, we might say that the contestant should switch doors because the probability that the car is behind the door they chose is (1/2 – m I), whereas the probability that the car is behind the alternative door is (1/2 + m I)… By the use of the concept of a margin of error affecting every pairing of a true statement and its opposite, and every pairing of 1 and 0, and by the use of imaginary and negative numbers, we allow for a limited amount of duality between these extremes, and do justice to the intuitions of both the Halfers and the Thirders.
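The frequencies behind the two intuitions, and the arithmetic of the margin-of-error decomposition, can both be checked directly. A minimal sketch, assuming the protocol described earlier (heads: one awakening, on Tuesday; tails: two awakenings) and taking m = 1/6 from the 1/3-to-2/3 imbalance, with Python's j standing in for I:

```python
import random

# Per-run vs per-awakening frequency of heads (assumed protocol:
# a heads run yields 1 awakening, a tails run yields 2).
def frequencies(runs=100_000, seed=1):
    rng = random.Random(seed)
    heads_runs = heads_awakenings = total_awakenings = 0
    for _ in range(runs):
        heads = rng.random() < 0.5
        heads_runs += heads
        heads_awakenings += heads              # heads contributes 1 awakening
        total_awakenings += 1 if heads else 2  # tails contributes 2
    return heads_runs / runs, heads_awakenings / total_awakenings

per_run, per_awakening = frequencies()
print(per_run, per_awakening)   # ~1/2 (Halfer frequency), ~1/3 (Thirder frequency)

# The decomposition with m = 1/6: the imaginary margins cancel,
# so the two weights still sum to 1.
m = 1 / 6
heads_weight = 1/2 - m * 1j   # waking on Tuesday for the first time
tails_weight = 1/2 + m * 1j   # waking on Monday, or on Tuesday for the second time
assert heads_weight + tails_weight == 1
```

The simulation shows that both camps are tracking a real frequency; they simply count different things (runs vs awakenings).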
The Liar Paradox
Which of the cards below need to be turned over to verify the rule that if a card has G on one side then it has 1 on the other?
Obviously card 4 needs to be turned, and there is no need to bother with card 3. Card 1 doesn't need to be turned because, while a G requires a 1 on the other side, a 1 doesn't require a G on the other side. Card 2 needs to be turned, for there may be a G on the other side, which would violate the rule. So cards 2 and 4 need to be turned to verify the rule. Trickier is the task of working out which of the cards below must be turned to verify the rule that ‘all and only cards with a true statement written on one side have ‘True’ written on the other, and all and only cards with a false statement written on one side have ‘False’ written on the other’.
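The first card task can be settled mechanically: a card must be turned just in case some possible hidden face would falsify the rule. A sketch below, assuming visible faces 1, 2, A and G for cards 1–4 respectively (an assumption on my part, since the card figure is not reproduced here):

```python
# Rule: if a card has G on one side, then it has 1 on the other.
def rule_holds(letter, number):
    return letter != "G" or number == 1

def must_turn(visible):
    # Turn a card iff some possible hidden face would falsify the rule.
    if isinstance(visible, int):
        # Visible face is a number; hidden face is a letter.
        return any(not rule_holds(letter, visible) for letter in ("A", "G"))
    # Visible face is a letter; hidden face is a number.
    return any(not rule_holds(visible, number) for number in (1, 2))

# Assumed visible faces for cards 1-4.
cards = {1: 1, 2: 2, 3: "A", 4: "G"}
print({card: must_turn(face) for card, face in cards.items()})
# {1: False, 2: True, 3: False, 4: True} -- only cards 2 and 4 need turning
```

This reproduces the answer above: the 1 is irrelevant (nothing on its reverse can violate the rule), while the 2 and the G are both at risk.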
‘False’ is written on the other side of card 4 if and only if it is not, so let's agree that the 4th card is as impossibly contradictory as the Liar sentence “This sentence is false.” The paradox can be mitigated by confining a liar-like sentence to a local context (“This sentence is false on Wednesday.” involves a contradiction only on Wednesday). This is the essence of Gödel's idea that every system of proof strong enough to do arithmetic contains a sentence that says in effect that it is not provable in a local system: if such a sentence is false, then false sentences are provable in the system, and if it is true, then some true sentences are not provable in the system. But if the system of proof in question is global – if every true sentence is provable in the system – then the ‘Gödel sentence’ is in this case equivalent to the Liar sentence, meaning that Gödel sentences are not expressible in a complete and consistent system. A system in which a Gödel sentence is not expressible would be a non-arithmetic or pre-arithmetic system, and this implies that arithmetic depends on a tradeoff between completeness and inconsistency. There is an old brain-teaser concerning the barber of a certain town who shaves all and only those residents that do not shave themselves, and the question arises as to whether the barber shaves himself. If he does shave himself, then he is not shaved by the town's barber = himself, and if he does not shave himself, then he is shaved by the town's barber = himself… Such a barber, we can conclude, does not exist (the logical form of this statement – in domain d there does not exist any x such that x stands in relation r to all and only those things in d that do not stand in this relation to themselves – is a central and problematic premise in all diagonal arguments, problematic because of the question of how to draw a distinction between a diagonal theorem and a diagonal paradox).
We can avoid the paradox, it may seem, by moving the barber out of town, which is what Russell and Tarski did with their approaches to the Liar Paradox, and what Gödel did, but this implies that there is no such thing as completeness. Another form of the barber story concerns a barber that shaves all and only those residents of a certain town that do shave themselves (in domain d there exists an x such that x stands in relation r to all and only those things in d that stand in this relation to themselves). Now the only way to defuse the contradiction is to make the barber the town's sole resident – otherwise the barber would have to shave people that both do and don't shave themselves – but this implies that there is no such thing as inconsistency.
Since classical consciousness in all its forms is arithmetic, we can argue on this basis that classical systems of all kinds are those in which there is a balance/imbalance of completeness and inconsistency – (1/2 + m I) + (1/2 – m I) = 1 – without which the matrices below would be only finitely expandable: [supsystic-tables id=19] [supsystic-tables id=20]
We can turn logic into arithmetic in this way:
… = the point at which the gap between … and … ceases to decrease.
and distinguish between infinitely and finitely expandable arithmetic progressions by reference to the difference between s = 1 and positive real values of s other than 1. What exactly does the equation (1/2 + m I) + (1/2 – m I) = 1 have to do with this arrangement? Illustrated below is the idea that every solution of the equation that is a zero of an L-function is associated with a finite matrix, and by reference to these solutions and/or to these equations we can identify the “break times”, i.e. the points at which one finite matrix ends and another matrix begins:
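The difference invoked here appears to be that between the series Σ 1/n^s at s = 1 (the divergent harmonic series) and at real s > 1, where it converges – a reading suggested by the mention of L-functions and their zeros. A quick numerical illustration, with the s = 2 column approaching π²/6 ≈ 1.6449 while the s = 1 column keeps growing:

```python
# Partial sums of sum(1/n**s): divergent for s = 1 (harmonic series),
# convergent for real s > 1 (e.g. s = 2, approaching pi**2/6 ~ 1.6449).
def partial_sum(s, terms):
    return sum(1 / n**s for n in range(1, terms + 1))

for terms in (10**2, 10**4, 10**6):
    print(terms, partial_sum(1, terms), partial_sum(2, terms))
```

The s = 1 column grows roughly like the logarithm of the number of terms, without bound; the s > 1 columns settle toward a finite limit.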
The Zebra Puzzle
- There are five houses
- The Englishman lives in the red house
- The Spaniard owns the dog
- Coffee is drunk in the green house
- The Ukrainian drinks tea
- The green house is immediately to the right of the ivory house
- The Old Gold smoker owns snails
- Kools are smoked in the yellow house
- Milk is drunk in the middle house
- The Norwegian lives in the first house
- The man who smokes Chesterfields lives in the house next to the man with the fox
- Kools are smoked in the house next to the house where the horse is kept
- The Lucky Strike smoker drinks orange juice
- The Japanese smokes Parliaments
- The Norwegian lives next to the blue house
By a combination of trial and error, and logical inference, everything will eventually fit like this: [supsystic-tables id=21]
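The combination of trial and error and logical inference can be mechanized. Below is a brute-force sketch (variable names are my own) that assigns each category a permutation of house positions 0–4 and uses the clues to prune the search as early as possible:

```python
from itertools import permutations

# Each tuple assigns a house position (0-4, left to right) to every item
# in one category; the clues prune the search before deeper loops run.
def solve():
    houses = range(5)
    for red, green, ivory, yellow, blue in permutations(houses):
        if green - ivory != 1:                 # green immediately right of ivory
            continue
        for eng, span, ukr, nor, jap in permutations(houses):
            if eng != red or nor != 0:         # Englishman in red; Norwegian first
                continue
            if abs(nor - blue) != 1:           # Norwegian next to the blue house
                continue
            for coffee, tea, milk, oj, water in permutations(houses):
                if coffee != green or ukr != tea or milk != 2:
                    continue                   # coffee/green, Ukrainian/tea, milk in middle
                for oldgold, kools, chester, lucky, parl in permutations(houses):
                    if kools != yellow or lucky != oj or jap != parl:
                        continue               # Kools/yellow, Lucky/OJ, Japanese/Parliaments
                    for dog, snails, fox, horse, zebra in permutations(houses):
                        if span != dog or oldgold != snails:
                            continue           # Spaniard/dog, Old Gold/snails
                        if abs(chester - fox) != 1 or abs(kools - horse) != 1:
                            continue           # the two next-door clues
                        return {"zebra": zebra, "water": water,
                                "japanese": jap, "norwegian": nor}

print(solve())   # the Japanese (house 5) keeps the zebra; the Norwegian drinks the water
```

With the cheapest checks placed first, only a tiny fraction of the raw arrangements is ever examined, which is why this runs in well under a second.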
A computer could be programmed to solve the Zebra Puzzle very quickly, but that's because there are only 5 houses and 5 things associated with each house, and therefore about 25 billion possible associations. If there are 6 houses and 6 things associated with those houses, then there are about 1.4 x 10^17 possible associations. A 10GHz machine would take 1.4 x 10^17 / 10^10 = 1.4 x 10^7 seconds, i.e. about 161 days, to check all associations and find one that is compatible with the clues… Even the most powerful computers struggle with Zebra-style problems, and the reason is that as the size of the puzzle increases arithmetically, the number of steps the computer must take to solve it increases geometrically, and this is the inevitable result of a theoretical limit. Remember what we said about counting coin-tosses in relation to The Sleeping Beauty Problem? Trivially, this act – and any act – of counting involves a relationship between the ‘counted’ arithmetic progression and the ‘counting’ progression, and it has been shown why this relationship is governed by the equation (1/2 + m I) + (1/2 – m I) = 1. Translating from math-speak to computer-speak, we can talk about an ‘input’ progression and a ‘program’ progression rather than a ‘counted’ and a ‘counting’ progression, but we can require again that the relationship between these be governed by the same equation in order that the progressions be classical, i.e. capable of the joint infinite development discussed above. When the relationship between them falls outside the limits set by the equation (1/2 + m I) + (1/2 – m I) = 1, the distinction between input and program eventually collapses, in which case the input is a self-input, i.e. the program is effectively being inputted into itself. Self-inputs involve exponential development. Every computer program, we can conclude, has the same order of complexity as the Zebra Puzzle and as a wide variety of similar problems.
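The size estimates above can be reproduced: an n-house puzzle with n categories has (n!)^n candidate arrangements, and at an assumed 10^10 checks per second the 6-house case needs on the order of 10^7 seconds:

```python
import math

# Raw search space for an n-house Zebra-style puzzle: each of the n
# categories can be assigned to the n houses in n! ways.
def search_space(n):
    return math.factorial(n) ** n

CHECKS_PER_SECOND = 10**10   # assumed: a 10GHz machine doing one check per cycle

for n in (5, 6):
    total = search_space(n)
    days = total / CHECKS_PER_SECOND / 86400
    print(f"{n} houses: {total:.2e} arrangements, {days:.1f} days")
```

(5!)^5 ≈ 2.5 x 10^10 is the "about 25 billion" figure, and (6!)^6 ≈ 1.4 x 10^17 is the 6-house figure; the jump from one to the next is the geometric growth the paragraph describes.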
A computer program, then, is itself a Zebra Puzzle in disguise, and a computer trying to solve a Zebra Puzzle is therefore running a self-input, in which case the run-time of the program is bound to grow exponentially: either the computer will never solve the puzzle (the case familiar from the classical form of the Halting Problem, in which the complexity/prime-sparsity of the input's largest cycle is greater than that of the program's largest cycle), or it will solve the puzzle in finite time, but in an exponentially growing number of steps. This means that no matter how powerful the computer, there will be inputs that (a) it cannot run in any amount of time, and inputs that (b) it cannot run in any efficient amount of time.
Bostrom, N. (2007), Sleeping Beauty and Self-Location: A Hybrid Model
Cook, S. (1971), The Complexity of Theorem Proving Procedures
Gödel, K. (1931), On Formally Undecidable Propositions of Principia Mathematica and Related Systems
Russell, B. (1908), Mathematical Logic as Based on the Theory of Types
Russell, B. (1919), The Philosophy of Logical Atomism
Simmons, K. (1993), Universality and the Liar: An Essay on Truth and the Diagonal Argument
Tarski, A. (1944), The Semantic Conception of Truth and the Foundations of Semantics
Turing, A. (1937), On Computable Numbers, with an Application to the Entscheidungsproblem
Turing, A. (1939), Systems of Logic Based on Ordinals
Wason, P. (1968), Reasoning about a Rule