SLIDE 1

Seeing the unseen: from coin flips to statistical inverse problems

Alberto J. Coca, StatsLab, University of Cambridge
Topics Taster, University of Cambridge Open Day, 5th July 2018

SLIDE 2

Outline

1 Introduction
  Mathematical Statistics
  Examples

2 Seeing the unseen
  Coin flips
  Statistical inverse problems


SLIDES 5-12

Mathematical Statistics

Question: what is Statistics?

An answer: extracting information and drawing conclusions from (random) data.

We are in the era of data, and Statistics is of key importance! We face many challenges: e.g., to develop new statistical methods to analyse very large and complex data sets (old methods cannot handle them!), to make them computationally efficient, etc.

Mathematical Statistics: understand the mathematical properties of statistical methods to be sure that they are sensible (in some sense).

SLIDE 13

Outline

1 Introduction
  Mathematical Statistics
  Examples

2 Seeing the unseen
  Coin flips
  Statistical inverse problems

SLIDES 14-17

Example I: Netflix prize

$1M contest open from October 2006 to August 2009. Predict un-rated films. Data (about 98.7% of ratings missing!):

User    F1  F2  F3  F4  F5  F6  ...  F18k
U1       4   5   ?   ?   2   ?  ...   1
U2       4   5   3   ?   3   ?  ...   ?
U3       5   4   2   4   ?   ?  ...   2
U4       2   ?   ?   5   ?   2  ...   ?
U5       1   ?   4   5   ?   2  ...   5
...
U480k    ?   ?   1   1   ?   ?  ...   5

SLIDES 18-25

Example I: Netflix prize

A matrix-completion algorithm fills in the missing ratings entry by entry:

User    F1  F2  F3  F4  F5  F6  ...  F18k
U1       4   5   2   4   2   ?  ...   1
U2       4   5   3   3   3   ?  ...   1
U3       5   4   2   4   3   ?  ...   2
U4       2   ?   4   5   ?   2  ...   5
U5       1   ?   4   5   ?   2  ...   5
...
U480k    ?   ?   1   1   ?   ?  ...   5

Is this matrix-completion algorithm mathematically sensible? E.g., if we have more and more data, will it recover the "true ratings"?
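To make the idea concrete, here is a minimal low-rank completion sketch on simulated data. It is an illustration of the general technique only, not the actual prize-winning method; the rank, matrix sizes and iteration count are arbitrary choices.

```python
import numpy as np

# Minimal sketch of low-rank matrix completion by iterative truncated-SVD
# imputation. Toy simulated data; NOT the Netflix data or the winning method.
rng = np.random.default_rng(0)

rank = 2
true = rng.random((20, 2)) @ rng.random((2, 10)) * 5   # low-rank "true ratings"
mask = rng.random(true.shape) < 0.5                    # which entries we observe

filled = np.where(mask, true, true[mask].mean())       # init unknowns at the mean
for _ in range(100):
    u, s, vt = np.linalg.svd(filled, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]    # best rank-2 approximation
    filled = np.where(mask, true, low_rank)            # keep observed entries fixed

err = np.sqrt(np.mean((filled[~mask] - true[~mask]) ** 2))
print(f"RMSE on unseen entries: {err:.3f}")
```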

SLIDES 26-34

Example II: medical imaging

We need non-invasive ways to explore/diagnose patients: e.g., ultrasound, CT scan, MRI, etc.

Ultrasound (very simplified!):
  send many sound pulses with a probe that travel into your body;
  they hit boundaries between tissues and get reflected; and,
  an image is created using the times the echoes take to return to the probe.

Given tissues produce specific echoes. The machine "inverts" this indirect (and incomplete!) information and deals with (random) instrumental errors: a statistical inverse problem. Does it do better as the instrumental errors decrease?
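A toy sketch of the time-of-flight idea: echo times are (twice) depths divided by the speed of sound, corrupted by random instrumental error, and the machine inverts this map. The numbers are illustrative only; real scanners are far more involved.

```python
import numpy as np

# Toy ultrasound "inversion": recover boundary depths from noisy echo times.
c = 1540.0                                   # approx. speed of sound in soft tissue (m/s)
depths_true = np.array([0.02, 0.05, 0.09])   # boundary depths in metres

rng = np.random.default_rng(1)
echo_times = 2 * depths_true / c             # pulse travels down and back
echo_times += rng.normal(0, 1e-7, size=3)    # random instrumental error

depths_est = c * echo_times / 2              # invert the (simple) forward map
print(np.round(depths_est * 100, 3), "cm")   # estimated depths in centimetres
```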

SLIDE 35

Outline

1 Introduction
  Mathematical Statistics
  Examples

2 Seeing the unseen
  Coin flips
  Statistical inverse problems

SLIDES 37-41

Coin flips: experiment I

Let us simplify things and flip some coins: with your phones, go to https://albertococacabrero.wordpress.com/openday/ or google "Alberto J Coca Cambridge" and add openday/ to the end of my URL.

SLIDES 42-51

Coin flips: experiment I, MLE

We do not know the probability p ∈ [0, 1] of this coin landing Heads (typically p = 1/2, i.e. a fair coin). How to guess p from our data? The "frequentist" guess is p̂ = #Heads/#flips. Is this a sensible guess? Yes:

each coin flip has options and probabilities given by

Options   Heads   Tails
Probabs.  p       1 − p

if our data is Heads, Tails, Heads, it has probability, or likelihood, L = p(1 − p)p = p^2(1 − p);

and, more generally, if we flip n coins and obtain m Heads, the likelihood of our data is L(p, m, n) = p^m (1 − p)^(n−m).

Homework: p̂ = m/n maximises q ↦ L(q, m, n) over q ∈ [0, 1]!

Indeed, p̂ is called the Maximum Likelihood Estimator (MLE).
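One way to do the homework, sketched via standard calculus (maximise the log-likelihood, which has the same maximiser as the likelihood):

```latex
\[
\log L(q, m, n) = m \log q + (n - m)\log(1 - q),
\qquad
\frac{\mathrm{d}}{\mathrm{d}q} \log L(q, m, n) = \frac{m}{q} - \frac{n - m}{1 - q}.
\]
\[
\frac{m}{q} - \frac{n - m}{1 - q} = 0
\iff q = \frac{m}{n},
\qquad
\frac{\mathrm{d}^2}{\mathrm{d}q^2} \log L = -\frac{m}{q^2} - \frac{n - m}{(1 - q)^2} < 0,
\]
so $\hat{p} = m/n$ is indeed the maximiser.
```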

SLIDES 52-61

Coin flips: experiment I, MLE

The MLE p̂ = m/n enjoys mathematically desirable properties: e.g.,

Law of Large Numbers (LLN): p̂ → p as n → ∞.

How fast does Error = |p̂ − p| → 0? Mathematical results guarantee it cannot be faster than 1/√n. Note that if Error ≈ 1/n^a = n^(−a) for some a > 0, then log Error ≈ −a log n. Hence, to find the value of a, plot x = log n vs. y = log Error ≈ −ax and compute the slope. The plot suggests a = 1/2, i.e. Error ≈ 1/√n! Thus, the MLE is optimal in convergence rates! (It is optimal in other senses too, in view of, e.g., the Central Limit Theorem, but there is no time to explain this.)

Other estimators are optimal too:
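The talk's plot is not reproduced here, but the log-log experiment is easy to re-run; a minimal simulation sketch (sample sizes and repetition count are arbitrary):

```python
import numpy as np

# Estimate the convergence rate a in Error ~ n^(-a) by regressing
# log(average |p_hat - p|) on log(n).
rng = np.random.default_rng(2)
p = 0.5
ns = np.logspace(2, 6, 20).astype(int)

errors = [np.mean([abs(rng.binomial(n, p) / n - p) for _ in range(200)])
          for n in ns]

slope, _ = np.polyfit(np.log(ns), np.log(errors), 1)
print(f"estimated a = {-slope:.2f}")   # close to 1/2, i.e. Error ~ 1/sqrt(n)
```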

SLIDES 62-66

Coin flips: experiment I, Bayes

"Bayesian" method: before conducting the experiment, guess probabilities for the unknown p, i.e., Prob(p = q | no data) = G(q), with G, e.g.,

  Green: I am (obviously!) respectable and the coin is probably fair;
  Red: I am (absolutely) not respectable and the coin is unfair;
  Blue: I would rather not guess, to avoid confrontations...

The initial guess G(q) evolves through Bayes' rule as we get the data, giving

B(q) = L(q, m, n)G(q) / ∫₀¹ L(r, m, n)G(r) dr,   q ∈ [0, 1].

The normalisation ensures ∫₀¹ B(q) dq = ∫₀¹ Prob(p = q | data) dq = 1.
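A grid approximation of this update takes a few lines; the prior G below (a bump around 1/2, playing the role of the green curve) and the data are illustrative stand-ins:

```python
import numpy as np

# Grid sketch of the Bayes update B(q) ∝ L(q, m, n) G(q).
q = np.linspace(0, 1, 1001)
G = np.exp(-0.5 * ((q - 0.5) / 0.1) ** 2)   # "coin is probably fair" prior

m, n = 7, 10                                 # say, 7 Heads in 10 flips
L = q**m * (1 - q)**(n - m)                  # likelihood of the data

B = L * G
B /= B.sum() * (q[1] - q[0])                 # normalise so that ∫ B(q) dq = 1
print(f"posterior mean: {np.sum(q * B) * (q[1] - q[0]):.3f}")
```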

SLIDES 67-73

Coin flips: experiment I, Bayes

How does the data-update of G, given by B, look? B gives much more information than p̂! It does so without having to maximise the likelihood! Mathematical properties? Optimal!

Bernstein–von Mises Theorem (BvM): if G(p) > 0, then B(q) ≈ (p̂ + √(p(1 − p)/n) · N(0, 1))(q) as n → ∞. (Here N(0, 1) is the "Bell curve", so the right-hand side is the density of a Normal centred at p̂ with standard deviation √(p(1 − p)/n).)

Indeed, the Bayesian method is extensively used in practice. However, it is much harder to analyse mathematically: the no-free-lunch principle!
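A quick numerical check of the BvM approximation, a sketch rather than a proof, with a flat prior and simulated data:

```python
import numpy as np

# For large n the posterior B should be close to a Normal density centred
# at p_hat with sd sqrt(p(1-p)/n). Flat prior G ≡ 1 for simplicity.
rng = np.random.default_rng(5)
p, n = 0.5, 1000
m = rng.binomial(n, p)
p_hat = m / n

q = np.linspace(0.001, 0.999, 2001)
logB = m * np.log(q) + (n - m) * np.log(1 - q)   # log-posterior up to a constant
B = np.exp(logB - logB.max())
B /= B.sum() * (q[1] - q[0])                     # normalised posterior density

sd = np.sqrt(p * (1 - p) / n)
normal = np.exp(-0.5 * ((q - p_hat) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
print(f"max |B - Normal|: {np.max(np.abs(B - normal)):.4f}")
```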

SLIDES 74-83

Coin flips: experiment II, MLE

Let H = Heads = 0 and T = Tails = 1. Now we do not see the coin flips directly but "their sum" every 2 tosses. E.g., if the coin flips are (H, H), (T, H), (H, T), (T, T), the sums are 0+0 = 0, 1+0 = 1, 0+1 = 1, 1+1 = 2, so the data is 0, 1, 1, 2.

How to guess the probability p of the underlying coin landing Heads now? I.e., how to see the unseen? Each sum of a pair of coin flips has options and probabilities given by

Options   0     1            2
Probabs.  p^2   2p(1 − p)    (1 − p)^2

Easy guesses are given by "inverting table entries": e.g., p̂ = √(#0s/#data). This cannot be optimal, as it does not use all the information in the data.

If n0 = #0s, n1 = #1s and n2 = #2s, the likelihood of the data is L(p, n0, n1, n2) = p^(2n0) (2p(1 − p))^(n1) (1 − p)^(2n2). The MLE is p̂ = (1/2)(1 + (n0 − n2)/n), where n = n0 + n1 + n2. The MLE "inverts the table entries" optimally, enjoying the same properties as before (and more): as n → ∞, p̂ → p with Error = |p̂ − p| ≈ 1/√n.
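To see the sub-optimality of the naive inversion in numbers, a quick simulation sketch (parameter values are arbitrary):

```python
import numpy as np

# Naive guess sqrt(#0s/#data) (inverts one table entry) vs the MLE
# (1/2)(1 + (n0 - n2)/n), which uses all of n0, n1, n2.
rng = np.random.default_rng(3)
p, n, reps = 0.3, 2_000, 500

err_naive, err_mle = [], []
for _ in range(reps):
    # Each datum: sum of two tosses; T = 1, so "sum = 0" means two Heads.
    sums = rng.binomial(2, 1 - p, size=n)
    n0, n1, n2 = np.bincount(sums, minlength=3)
    err_naive.append(abs(np.sqrt(n0 / n) - p))
    err_mle.append(abs(0.5 * (1 + (n0 - n2) / n) - p))

print(f"mean error, naive: {np.mean(err_naive):.5f}")
print(f"mean error, MLE:   {np.mean(err_mle):.5f}")
```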

SLIDES 84-90

Coin flips: experiment II, Bayes

Now we appreciate further the superiority of the Bayesian method: with the same initial guess G(q) as before, we again take

B(q) = L(q, n0, n1, n2)G(q) / ∫₀¹ L(r, n0, n1, n2)G(r) dr,   q ∈ [0, 1].

B in action (p = 2/3 ≈ 67%): [posterior plots not reproduced]

Again, B gives much more information than p̂! B is much easier than "inverting table entries" (maximising the likelihood)! In fact, we input the table entries and B "inverts" them automatically and optimally: a BvM theorem holds!

SLIDES 91-99

Coin flips: experiment III, MLE

We do not see the coin flips directly; instead, we flip a second coin with Heads-probability qH ∈ [0, 1]: if it lands H (resp. T), we observe the "sum" of 2 (resp. 4) tosses of the first coin. E.g., if the second coin's flips are 0, 1, 0 (not observed!) and the first coin's flips are (H, H), (T, H, H, T), (T, T), the sums are 0+0 = 0, 1+0+0+1 = 2, 1+1 = 2, so the data is 0, 2, 2.

If qH is known, how to guess p now? I.e., how to see the unseen? Each (random) sum of coin flips has options 0, 1, ..., 4 and probabilities given by

p0 = qH p^2 + (1 − qH) p^4,
p1 = qH 2p(1 − p) + (1 − qH) 4p^3(1 − p),
...
p4 = (1 − qH)(1 − p)^4.

A guess is obtained by inverting p4, i.e. p̂ = 1 − (#4s/((1 − qH) #data))^(1/4). Not optimal!

If nj = #js, the likelihood of the data is L(p, n0, ..., n4) = p0(p)^(n0) × ⋯ × p4(p)^(n4). A unique maximiser of p ↦ L(p, n0, ..., n4) exists (the MLE), but not in closed form: "inverting the table entries" explicitly is too hard! The implicit MLE still enjoys the same desirable properties.

SLIDES 100-106

Coin flips: experiment III, Bayes

The superiority of the Bayesian method in this experiment is clear: with the same initial guess G(q) as before, we again take

B(q) = L(q, n0, ..., n4)G(q) / ∫₀¹ L(r, n0, ..., n4)G(r) dr,   q ∈ [0, 1].

B in action (qH = 2/3, p = 1/5 = 20%): [posterior plots not reproduced]

Again, B gives much more information than p̂! B has a simpler, explicit expression! We input the table entries and B "inverts" them automatically and optimally: a BvM theorem holds!

SLIDE 107

Outline

1 Introduction
  Mathematical Statistics
  Examples

2 Seeing the unseen
  Coin flips
  Statistical inverse problems

SLIDES 108-111

Statistical inverse problems: random processes

The data of Experiment III (qH = p = 1/2) can be visualised as a path:

[plot: the n = 100 observed sums, taking values in {0, 1, ..., 4}]

Generalisation (n = 1000): instead of a second coin, a die with infinitely many faces:

[plot: the n = 1000 observed sums, taking values up to about 12]

These are examples of random, or stochastic, processes: their estimation and study are large areas of statistics and probability. Stochastic processes have a prominent role in applied mathematics, the sciences and engineering, as they are models for dynamical systems.

SLIDES 112-119

Statistical inverse problems: conclusions

There is no hope of "inverting the table" in the generalisation to find the MLE for p. In these examples a unique MLE exists, but other issues may appear. For other stochastic processes, the MLE may not be unique, or may not even exist.

These issues are shared among statistical inverse problems, which appear in all types of applications, e.g., medical imaging (start of talk).

The Bayesian method is an attractive alternative that, in many cases where the MLE fails to see the unseen, sees it optimally. However, the mathematics justifying this fact is not yet well understood; indeed, it is an active field of research!

If you would like to learn more about any of the above, Cambridge is the right place for you :)

Thanks for your attention and best of luck!