SLIDE 1
Shannon, Turing and Hats: Information Theory Incompleteness
François Durand, Fabien Mathieu, Philippe Jacquet
WITMSE 2017, September 13th, 2017
SLIDE 3
Roadmap
1. Getting Rich without a DeLorean
2. The Shannon Approach
3. Days of infinite past
SLIDE 4
Back to Back to the Future
Plot of episode 2
A poor guy. A Time Machine. An Almanac. A rich guy.
What if no Time Machine?
SLIDE 9
The LMCA problem
Biff is a huge fan of English soccer.
He wants to bet on one of his 4 favorite teams.
He gathers all the statistics of the last century.
Can he predict next year's champion?
SLIDE 13
Roadmap
1. Getting Rich without a DeLorean
2. The Shannon Approach
3. Days of infinite past
SLIDE 14
Prediction of an LMCA sequence
Shannon: the champion sequence is a signal.
With a long enough sequence, Biff can try to predict things.
Statistics example: the four teams' title counts are (18, 20, 6, 13).
The Premier League may be a Markov source...
SLIDE 15
Fully deterministic
[Markov chain diagram: every transition has probability 1]
Best success rate: 100/100. Entropy: 0.
SLIDE 17
With a one-year memory
[Markov chain diagram with transition probabilities 1/3, 3/10, 1, 6/13, 2/3, 7/10, 7/13]
Best success rate: 68.4/100. Entropy: 0.83.
SLIDE 19
Memoryless
Champion probabilities: 18/57, 20/57, 6/57, 13/57.
Best success rate: 20/57 ≈ 35/100. Entropy: 1.88.
SLIDE 21
Uniformly memoryless
Champion probabilities: 1/4, 1/4, 1/4, 1/4.
Best success rate: 1/4. Entropy: 2.
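A minimal sketch, not part of the original deck, that reproduces the memoryless and uniform figures above (entropy in bits per year, best single-year success rate):

```python
from math import log2

# Title counts from the slides: the four teams won (18, 20, 6, 13) championships.
counts = [18, 20, 6, 13]
total = sum(counts)                          # 57 seasons in total
probs = [c / total for c in counts]          # memoryless model

entropy = -sum(p * log2(p) for p in probs)   # about 1.88 bits per year
best_rate = max(probs)                       # always bet on the most frequent team
print(f"memoryless: entropy = {entropy:.2f} bits, best rate = {best_rate:.2f}")  # 1.88, 0.35

# Uniformly memoryless: each of the 4 teams wins with probability 1/4.
uniform = [1 / 4] * 4
u_entropy = -sum(p * log2(p) for p in uniform)
print(f"uniform: entropy = {u_entropy:.0f} bits, best rate = {max(uniform)}")    # 2, 0.25
```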
SLIDE 23
Biff's problem
If the source is uniformly memoryless...
Biff cannot go past 1/4: not enough to get rich, no matter how much data he gathers.
What if the past is infinite?
SLIDE 28
Roadmap
1. Getting Rich without a DeLorean
2. The Shannon Approach
3. Days of infinite past
SLIDE 29
Infinite past model
We'll make the following assumptions:
The past is infinite: no Big Bang.
Biff, Arsenal, Chelsea, Liverpool and Manchester are immortal.
The yearly champion is uniform and memoryless.
The universe ends at year 576,000,000,000 AD.
SLIDE 34
Reminder: Hat puzzles
Hat puzzles
A bunch of people with hats. You do not know the color of your own hat. Guess your color by looking at a subset of the others' hats.
They exist in multiple flavors:
With or without additional information. Finite or infinite population.
SLIDE 42
Axiom of choice
Definition
Two infinite sequences of winners X and Y are equivalent if they differ only for a finite number of years.
Axiom of choice
Biff can choose one representative sequence per equivalence class.
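In symbols (a formalization consistent with the slide; the notation, including T_end for the last year of the universe, is ours):

```latex
% Champion sequences are indexed by the years t <= T_end (infinite past, finite future).
% Two sequences are equivalent iff they differ in only finitely many years:
\[
X \sim Y \quad\Longleftrightarrow\quad \#\{\, t \le T_{\mathrm{end}} : X_t \neq Y_t \,\} < \infty .
\]
% The axiom of choice provides a map R picking one representative per class:
\[
R(X) \sim X, \qquad X \sim Y \;\Longrightarrow\; R(X) = R(Y) .
\]
```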
SLIDE 45
Biff's prediction algorithm
1. Build a sequence by concatenating the observed past with Manchester-padding up to the end of the universe.
2. Take the representative of its equivalence class.
3. Announce the result predicted by the representative.
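The three steps in symbols (our notation, continuing from the previous block; M stands for Manchester):

```latex
% At year t, Biff has observed (X_s) for s < t and pads the remaining future with M:
\[
Y^{(t)}_s =
\begin{cases}
X_s & \text{if } s < t,\\
M   & \text{if } t \le s \le T_{\mathrm{end}},
\end{cases}
\qquad\text{and he announces}\qquad
\widehat{X}_t = R\big(Y^{(t)}\big)_t .
\]
% Because the remaining future {t, ..., T_end} is finite, Y^(t) is equivalent to the
% real sequence X, so R(Y^(t)) = R(X): every year, Biff reads his guess off the same
% representative sequence.
```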
SLIDE 49
Biff is never wrong but a finite number of times
Assume any reality
Biff always guesses the same sequence: the padded sequences built at different years are all equivalent to reality (the remaining future is finite), so they share a single representative.
That sequence differs from reality only for a finite number of years.
Biff is wrong only a finite number of times. Success rate: 1.
The result does not depend on the way the sequence is drawn! True even with maximal entropy (2 bits).
SLIDE 57
Biff is wrong most of the time
The decision of 2017-Biff does not depend on the 2018 champion.
Deferred decision: choose the 2018 champion after the decision of 2017-Biff.
Biff is wrong with probability 3/4.
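A short justification of the 3/4 (our phrasing): Biff's guess for year t is a function of the strictly earlier years only, while the year-t champion is drawn uniformly among the 4 teams, independently of the past. If the error event were measurable, then

```latex
\[
\Pr\big[\text{Biff wrong at year } t\big]
\;=\; 1 - \Pr\big[X_t = \widehat{X}_t\big]
\;=\; 1 - \tfrac14 \;=\; \tfrac34 .
\]
```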
SLIDE 61
Paradox explained
If there is a probability that Biff is wrong at year t, it must be 3/4.
Hence the probability that Biff is wrong infinitely often is at least 3/4.
But Biff is wrong only a finite number of times, whatever the reality: the probability of being infinitely wrong is 0.
Conclusion
The event "Biff is wrong" cannot be measured.
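A sketch of the measure-theoretic contradiction (ours). Write E_t for the event "Biff is wrong at year t", with the years running down to minus infinity:

```latex
% If every E_t were measurable, each would have probability 3/4. The unions over t <= N
% decrease (as N goes to minus infinity) to the event "Biff is wrong infinitely often",
% so by continuity from above:
\[
\Pr\Big[\bigcap_{N} \bigcup_{t \le N} E_t\Big]
\;=\; \lim_{N \to -\infty} \Pr\Big[\bigcup_{t \le N} E_t\Big]
\;\ge\; \tfrac34 .
\]
% But Biff errs only finitely often in every reality, so this event is empty and has
% probability 0: contradiction. The events E_t cannot all be measurable.
```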
SLIDE 66
Getting rid of the axiom of choice
God doesn't play dice with the universe.
Assume champions are produced by a Backward Turing Machine.
Keep inference from the past.
The axiom of choice is removed.
SLIDE 71
Biff in a multiverse of Turing Machines
The set of Turing machines is enumerable.
Biff selects the first machine with finitely many errors.
Same reasoning as before:
Biff always guesses the same sequence.
He is wrong only a finite number of times.
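The selection rule in symbols (our notation; M_1, M_2, ... is any fixed enumeration of Turing machines, each viewed as a candidate champion sequence):

```latex
\[
i^{\ast} \;=\; \min\Big\{\, i \;:\; \#\{\, t \le T_{\mathrm{end}} : M_i(t) \neq X_t \,\} < \infty \,\Big\},
\qquad
\widehat{X}_t \;=\; M_{i^{\ast}}(t) .
\]
% i* exists because reality is itself produced by one of the machines, and it does not
% depend on the year at which Biff computes it (the not-yet-observed years are finitely
% many). The rule is well defined but not effective: the minimum is not computable.
```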
SLIDE 76
Paradox of Turing Machines multiverse
Assume a probability distribution over Turing machines. The probability that Biff is wrong at year t is now perfectly defined. Probability must go to 0 has we travel to the past
Conclusion
Biff gets wronger and wronger has the end is near: The less Biff knows, the better he predicts.
19/21
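Why the error probability must vanish in the deep past (a sketch, ours): condition on the machine M that generates reality. Biff errs at only finitely many years, so there is a year T(M) before which he is always right; for any distribution over machines, bounded convergence then gives

```latex
\[
\Pr\big[\text{Biff wrong at year } t\big]
\;=\; \sum_{M} \Pr[M]\,\mathbf{1}\{\text{Biff wrong at } t \text{ when reality is generated by } M\}
\;\xrightarrow[\;t \to -\infty\;]{}\; 0 .
\]
% Each indicator vanishes for t < T(M), and each term is dominated by Pr[M].
```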
SLIDE 81
Conclusion
Takeaway
Information theory cannot be straightforwardly adapted to infinity. Is there a proper framework?
This is very early work with lots of open questions, e.g. what about an infinite future?
Best way to get rich...
SLIDE 83