Independent Natural Extension for Infinite Spaces - PowerPoint PPT Presentation



SLIDE 1

Independent Natural Extension for Infinite Spaces

Jasper De Bock

Ghent University Belgium

Williams-coherence to the Rescue!

SLIDE 2
SLIDE 3
SLIDE 4
SLIDE 5
SLIDE 6
SLIDE 7

independent variables X1 and X2

two local uncertainty models; joint uncertainty model: ?

SLIDE 8

independent variables X1 and X2

local models: P(X1) and P(X2)
independence: P(X1|X2) = P(X1) and P(X2|X1) = P(X2)
joint model: P(X1, X2) = ?

SLIDE 9

independent variables X1 and X2

local models: P(X1) and P(X2)
independence: P(X1|X2) = P(X1) and P(X2|X1) = P(X2)
joint model: P(X1, X2) = P(X1)P(X2)
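As a quick sanity check of this precise-probability case, the following Python sketch (with made-up numbers) builds a joint as P(X1, X2) = P(X1)P(X2) and verifies that conditioning on a value of X2 leaves the model for X1 unchanged:

```python
# Precise-probability sketch (hypothetical numbers): a joint built as
# P(X1, X2) = P(X1) P(X2) satisfies P(X1 | X2) = P(X1).
p1 = {"a": 0.3, "b": 0.7}   # local model for X1
p2 = {"c": 0.6, "d": 0.4}   # local model for X2

joint = {(x1, x2): p1[x1] * p2[x2] for x1 in p1 for x2 in p2}

# Condition on X2 = "c": the model for X1 is unchanged.
mass = sum(pr for (_, x2), pr in joint.items() if x2 == "c")
cond = {x1: joint[(x1, "c")] / mass for x1 in p1}
for x1 in p1:
    assert abs(cond[x1] - p1[x1]) < 1e-9   # P(X1 | X2="c") == P(X1)
```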

SLIDE 10

X1

local uncertainty model for X1: a probability measure P(X1), previsions P(f(X1)) and expectations E(f(X1)), their lower counterparts (lower previsions and lower expectations), a set of desirable gambles D1, ...

SLIDE 11

X1

local uncertainty model for X1: a probability measure P(X1), previsions P(f(X1)) and expectations E(f(X1)), their lower counterparts (lower previsions and lower expectations), a set of desirable gambles D1, ...

SLIDE 12

independent variables X1 and X2

local models: P(f(X1)) and P(f(X2))
joint model: P(f(X1, X2)) = ?

SLIDE 13

independent variables X1 and X2

local models: P(f(X1)) and P(f(X2))
independence condition: ?
joint model: P(f(X1, X2)) = ?

SLIDE 14

independent variables X1 and X2

local models: P(f(X1)) and P(f(X2))
independence: P(f(X2)|X1) = P(f(X2)) and P(f(X1)|X2) = P(f(X1))
joint model: P(f(X1, X2)) = ?

SLIDE 15

independent variables X1 and X2

local models: P(f(X1)) and P(f(X2))
independence: P(f(X2)|X1) = P(f(X2)) and P(f(X1)|X2) = P(f(X1))
joint model: P(f(X1, X2)) = ? (subject to coherence)

SLIDE 16

independent variables X1 and X2

local models: P1(f) and P2(f)
independence: P(f(X2)|X1) = P(f(X2)) and P(f(X1)|X2) = P(f(X1))
coherence then yields the joint model: (P1 ⊗ P2)(f(X1, X2))

SLIDE 17

Independent Natural Extension for Infinite Spaces

Jasper De Bock

Ghent University Belgium

Williams-coherence to the Rescue!

SLIDE 18

Two very useful properties:

External additivity: (P1 ⊗ P2)(f(X1) + h(X2)) = P1(f(X1)) + P2(h(X2))

Factorisation (for g ≥ 0): (P1 ⊗ P2)(g(X1)h(X2)) = P1(g(X1)) P2(h(X2)) if P2(h(X2)) ≥ 0, and P̄1(g(X1)) P2(h(X2)) if P2(h(X2)) ≤ 0, with P̄1 the conjugate upper prevision
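Both properties can be checked numerically on a toy finite example. In the sketch below (all numbers invented, not from the talk), each local lower prevision is the lower envelope of a small credal set, and the joint is evaluated as the lower envelope over all independent precise products, a construction that reproduces the two right-hand sides stated on the slide:

```python
from itertools import product

# Toy finite sanity check (hypothetical numbers). Each local model is the
# lower envelope of a small credal set: a finite list of pmfs on {0, 1}.
M1 = [(0.2, 0.8), (0.5, 0.5)]   # candidate pmfs for X1
M2 = [(0.1, 0.9), (0.6, 0.4)]   # candidate pmfs for X2

def expect(p, f):
    """Expectation of the gamble f under the pmf p."""
    return sum(pi * fi for pi, fi in zip(p, f))

def lower(M, f):
    """Lower prevision of f: the lower envelope of the credal set M."""
    return min(expect(p, f) for p in M)

def joint_lower(fun):
    """Lower envelope, over all independent products p1 x p2,
    of the two-variable gamble fun(i, j)."""
    return min(
        sum(p1[i] * p2[j] * fun(i, j) for i in range(2) for j in range(2))
        for p1, p2 in product(M1, M2)
    )

f = (1.0, -2.0)   # gamble on X1
h = (3.0, -1.0)   # gamble on X2
g = (0.5, 2.0)    # nonnegative gamble on X1

# External additivity: the joint value of f(X1) + h(X2) is the sum of the
# local lower previsions.
lhs = joint_lower(lambda i, j: f[i] + h[j])
assert abs(lhs - (lower(M1, f) + lower(M2, h))) < 1e-12

# Factorisation for g >= 0: here lower(M2, h) <= 0, so the UPPER prevision
# of g (the conjugate -lower(-g)) shows up on the right-hand side.
upper_g = -lower(M1, tuple(-x for x in g))
assert abs(joint_lower(lambda i, j: g[i] * h[j])
           - upper_g * lower(M2, h)) < 1e-12
print("external additivity and factorisation hold on this example")
```

This only demonstrates the two identities on functions of the special sum/product form; it is not a general construction of the independent natural extension.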

SLIDE 19

DISCLAIMER!

All of this is well known, and has been for several years now…

de Cooman & Miranda 2012; de Cooman, Miranda & Zaffalon 2011

SLIDE 20

DISCLAIMER!

All of this is well known, and has been for several years now… …but only for finite spaces!

de Cooman & Miranda 2012; de Cooman, Miranda & Zaffalon 2011

SLIDE 21

Independent Natural Extension for Infinite Spaces

?

SLIDE 22

independent variables X1 and X2

local models: P(f(X1)) and P(f(X2))
independence: P(f(X2)|X1) = P(f(X2)) and P(f(X1)|X2) = P(f(X1))
coherence: but which notion? and does the joint model still exist?

SLIDE 23

Coherence: Walley or Williams?

SLIDE 24

Coherence: Walley or Williams?

Walley: the independent natural extension may not exist! (Miranda & Zaffalon 2015)
Williams: ?

SLIDE 25

Coherence

Walley Williams

Miranda & Zaffalon 2015

Independent natural extension may not exist! Independent natural extension always exists!

De Bock
 2017 Vicig
 2000

SLIDE 26

Independent Natural Extension for Infinite Spaces

Williams-coherence to the Rescue!

!

SLIDE 27

Two very useful properties: do they still hold on infinite spaces?

External additivity: (P1 ⊗ P2)(f(X1) + h(X2)) = P1(f(X1)) + P2(h(X2)) ?

Factorisation (for g ≥ 0): (P1 ⊗ P2)(g(X1)h(X2)) = P1(g(X1)) P2(h(X2)) if P2(h(X2)) ≥ 0, and P̄1(g(X1)) P2(h(X2)) if P2(h(X2)) ≤ 0 ?

SLIDE 28

Two very useful properties:

External additivity: (P1 ⊗ P2)(f(X1) + h(X2)) = P1(f(X1)) + P2(h(X2))

Factorisation (for g ≥ 0): (P1 ⊗ P2)(g(X1)h(X2)) = P1(g(X1)) P2(h(X2)) if P2(h(X2)) ≥ 0, and P̄1(g(X1)) P2(h(X2)) if P2(h(X2)) ≤ 0 ?

SLIDE 29

independent variables X1 and X2

local models: P1(f) and P2(f)
independence: P(f(X2)|X1) = P(f(X2)) and P(f(X1)|X2) = P(f(X1))
coherence then yields the joint model: (P1 ⊗ P2)(f(X1, X2))

SLIDE 30

independence: P(f(X2)|X1) = P(f(X2)) and P(f(X1)|X2) = P(f(X1))

generalised to conditioning events: P(f(X1)|B2) = P(f(X1)) for all B2 ∈ ℬ2, and P(f(X2)|B1) = P(f(X2)) for all B1 ∈ ℬ1

SLIDE 31

independence: P(f(X2)|X1) = P(f(X2)) and P(f(X1)|X2) = P(f(X1))

generalised to conditioning events: P(f(X1)|B2) = P(f(X1)) for all B2 ∈ ℬ2, and P(f(X2)|B1) = P(f(X2)) for all B1 ∈ ℬ1

value-independence: ℬi = {{xi}: xi ∈ 𝒳i}
subset-independence: ℬi = 𝒫(𝒳i) \ {∅}
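The two families of conditioning events can be enumerated explicitly for a small (hypothetical) three-element space, to make the definitions concrete:

```python
from itertools import combinations

# Enumerating the two families of conditioning events for a hypothetical
# three-element space.
X = ["a", "b", "c"]

# value-independence: condition only on single values of the other variable
B_value = [{x} for x in X]

# subset-independence: condition on every non-empty subset
B_subset = [set(c) for r in range(1, len(X) + 1)
            for c in combinations(X, r)]

print(len(B_value), len(B_subset))  # 3 and 2**3 - 1 = 7
```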

SLIDE 32

Two very useful properties:

External additivity: (P1 ⊗ P2)(f(X1) + h(X2)) = P1(f(X1)) + P2(h(X2))

Factorisation (for g ≥ 0): (P1 ⊗ P2)(g(X1)h(X2)) = P1(g(X1)) P2(h(X2)) if P2(h(X2)) ≥ 0, and P̄1(g(X1)) P2(h(X2)) if P2(h(X2)) ≤ 0 ?

SLIDE 33

Two very useful properties:

External additivity: (P1 ⊗ P2)(f(X1) + h(X2)) = P1(f(X1)) + P2(h(X2))

Factorisation (for g ≥ 0 and ℬ1-measurable): (P1 ⊗ P2)(g(X1)h(X2)) = P1(g(X1)) P2(h(X2)) if P2(h(X2)) ≥ 0, and P̄1(g(X1)) P2(h(X2)) if P2(h(X2)) ≤ 0

SLIDE 34

Walley coherence: the independent natural extension may not exist!
Williams coherence: the independent natural extension always exists!

value-independence: factorisation may not hold!
subset-independence: factorisation always holds!

SLIDE 35

Independent Natural Extension for Infinite Spaces: Williams-Coherence to the Rescue!

Jasper De Bock jasper.debock@ugent.be Ghent University, Belgium

If you are not familiar with sets of desirable gambles, lower previsions, Williams-coherence, epistemic independence or independent natural extension, this poster may make little sense at first. I will do my very best to compensate with enthusiasm! If I fail, we can also simply go for a beer. In any case, the thought bubbles below may serve as a nice discussion starter.

"You said that probabilities are a special case. Yeah right... how does that work?" "All of this seems very abstract. Does it have any practical use?" "That's weird! Shouldn't the right-hand side be unconditional? So why is there a Bi here?" "What happens if there are more than two variables?"

A subject's uncertainty about a variable X that takes values x in a possibly infinite set 𝒳 can be modelled in various ways. We consider two very general and closely connected frameworks, the latter of which includes probabilities as a special case.

Sets of desirable gambles. The basic idea here is to consider the subject's attitude towards gambles on 𝒳, which are bounded real-valued functions f on 𝒳 whose value f(x) represents the possibly negative payoff for the outcome x. In particular, we consider the gambles that she finds desirable, in the sense that she prefers them over not betting at all. We gather all these gambles in a so-called set of desirable gambles D, which is a subset of the set 𝒢(𝒳) of all gambles.

Conditional lower previsions. Here too, the idea is to model a subject's uncertainty about X by considering her attitude towards gambles on 𝒳. However, in this case, instead of considering sets of gambles, we consider the prices at which a subject is willing to buy these gambles. Let 𝒞(𝒳) be the set of all pairs (f, B), where f is a gamble on 𝒳 and B is a non-empty subset of 𝒳 (an event). A conditional lower prevision P on a domain 𝒞 ⊆ 𝒞(𝒳) is then a map P: 𝒞 → ℝ̄: (f, B) ↦ P(f|B), where ℝ̄ := ℝ ∪ {−∞, +∞}. For any (f, B) in 𝒞, the lower prevision P(f|B) of f conditional on B is interpreted as the subject's supremum price μ for buying f, provided that the transaction is cancelled if B does not happen. In other words, P(f|B) is the supremum value of μ for which she is eager to engage in a transaction where she receives f(x) − μ if x ∈ B and zero otherwise. If B = 𝒳, we write P(f) := P(f|𝒳) and call P(f) the lower prevision of f.

Their connection. These two uncertainty frameworks are closely connected. In particular, because of their interpretation in terms of buying prices for gambles, a conditional lower prevision can easily be derived from a set of gambles D. For every D ⊆ 𝒢(𝒳), the corresponding conditional lower prevision P_D is defined by P_D(f|B) := sup{μ ∈ ℝ: [f − μ]I_B ∈ D} for every (f, B) ∈ 𝒞(𝒳).

Coherence. For an uncertainty model to represent a rational subject's beliefs, it needs to satisfy a set of rationality criteria; if it does, it is called coherent. For a set of desirable gambles D, coherence means that for any gambles f, g ∈ 𝒢(𝒳) and any real number λ > 0:
D1. if f ≥ 0 and f ≠ 0, then f ∈ D
D2. if f ∈ D, then λf ∈ D
D3. if f, g ∈ D, then f + g ∈ D
D4. if f ≤ 0, then f ∉ D
A conditional lower prevision P on a domain 𝒞 ⊆ 𝒞(𝒳) is then said to be coherent if there is a coherent set of desirable gambles D on 𝒳 such that P coincides with P_D on 𝒞. Equivalently, P is coherent if it satisfies the structure-free notion of Williams-coherence that was developed by Pelessoni and Vicig (2009).

Modelling Uncertainty
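The desirability axioms and the derived lower prevision P_D can be sketched in a few lines of Python. This is a minimal, hypothetical example: D is induced by a single pmf p, so the derived P_D reduces to an expectation; the bisection only illustrates the supremum-buying-price definition:

```python
# A deliberately simple (precise) example of a coherent set of desirable
# gambles on a two-element space: the gambles with strictly positive
# expectation under a fixed pmf p, together with the non-zero non-negative
# gambles required by axiom D1. (Hypothetical model, not from the poster.)
p = (0.5, 0.5)

def desirable(f):
    """Membership test for D; f is a pair (f(x1), f(x2))."""
    positive_expectation = sum(pi * fi for pi, fi in zip(p, f)) > 0
    d1_gamble = min(f) >= 0 and max(f) > 0   # D1: f >= 0 and f != 0
    return positive_expectation or d1_gamble

# D4: a gamble f <= 0 is never desirable; D2 and D3 follow from linearity.
assert not desirable((-1.0, -1.0))
assert not desirable((0.0, 0.0))

def P_D(f, eps=1e-9):
    """P_D(f) = sup{mu in R : f - mu in D}, approximated by bisection."""
    lo, hi = min(f), max(f)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if desirable(tuple(fi - mid for fi in f)):
            lo = mid
        else:
            hi = mid
    return lo

# For this precise D, the derived lower prevision is just the expectation,
# which is one way to see how probabilities arise as a special case.
assert abs(P_D((1.0, -0.5)) - 0.25) < 1e-6
```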

We say that X1 and X2 are independent if our uncertainty model for X1 is not affected by conditioning on information about X2, and vice versa. This definition can easily be applied to a probability measure, and then yields the usual notion of independence. More generally, it can just as easily be applied to lower previsions, sets of desirable gambles, or any other type of uncertainty model, and is then referred to as epistemic independence.

We consider a very general definition of epistemic independence. In particular, for every i ∈ {1, 2}, we consider any set of conditioning events ℬi for the variable Xi, that is, any subset of the set 𝒫∅(𝒳i) of all non-empty subsets of 𝒳i. A coherent conditional lower prevision P on 𝒞(𝒳1 × 𝒳2) is then called epistemically independent if for any i and j such that {i, j} = {1, 2}: P(fi|Bi ∩ Bj) = P(fi|Bi) for all (fi, Bi) ∈ 𝒞(𝒳i) and Bj ∈ ℬj. Similarly, a coherent set of desirable gambles D on 𝒳1 × 𝒳2 is epistemically independent if for any i and j such that {i, j} = {1, 2} and for any Bj ∈ ℬj: margi(D⌋Bj) = margi(D), in the sense that for all f ∈ 𝒢(𝒳i): f(Xi)I_Bj(Xj) ∈ D ⇔ f(Xi) ∈ D, where I_Bj is the indicator of Bj, defined by I_Bj(xj) := 1 if xj ∈ Bj and I_Bj(xj) := 0 otherwise.

Two special cases are particularly important. If ℬ1 = {{x1}: x1 ∈ 𝒳1} and ℬ2 = {{x2}: x2 ∈ 𝒳2}, we obtain the special case of epistemic value-independence, which is the most conventional approach, and which is often simply called epistemic independence. If ℬ1 = 𝒫∅(𝒳1) and ℬ2 = 𝒫∅(𝒳2), we obtain what we call epistemic subset-independence. As we will see, the latter has superior properties.

Modelling Independence
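The subset-independence requirement can be illustrated in the precise case: under a product measure (toy numbers below, invented for the sketch), conditioning on any non-empty event about X2 leaves the expectation of a gamble on X1 untouched:

```python
from itertools import combinations

# Toy product measure (hypothetical numbers): conditioning on ANY non-empty
# event B2 about X2 leaves the expectation of a gamble on X1 unchanged,
# which is the subset-independence requirement in the precise case.
p1 = {"a": 0.3, "b": 0.7}
p2 = {"c": 0.2, "d": 0.5, "e": 0.3}
joint = {(x1, x2): p1[x1] * p2[x2] for x1 in p1 for x2 in p2}
f = {"a": 4.0, "b": -1.0}   # a gamble on X1

unconditional = sum(f[x1] * pr for (x1, _), pr in joint.items())

for r in range(1, len(p2) + 1):
    for B2 in combinations(p2, r):   # every non-empty event about X2
        mass = sum(pr for (_, x2), pr in joint.items() if x2 in B2)
        conditional = sum(f[x1] * pr for (x1, x2), pr in joint.items()
                          if x2 in B2) / mass
        assert abs(conditional - unconditional) < 1e-9
print("P(f(X1)|B2) = P(f(X1)) for every non-empty B2")
```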

For all i ∈ {1, 2}, let Di be a local coherent set of desirable gambles on 𝒳i. The independent natural extension of D1 and D2 is then the smallest (most conservative) epistemically independent coherent set of desirable gambles on 𝒳1 × 𝒳2 that extends them, meaning that (∀i ∈ {1, 2}) Di = margi(D) := {f ∈ 𝒢(𝒳i): f(Xi) ∈ D}.

For lower previsions, the local models P1 and P2 are coherent conditional lower previsions on 𝒞1 ⊆ 𝒞(𝒳1) and 𝒞2 ⊆ 𝒞(𝒳2), respectively. The independent natural extension of P1 and P2 is then the smallest (most conservative) epistemically independent coherent lower prevision on 𝒞(𝒳1 × 𝒳2) that extends them, meaning that (∀i ∈ {1, 2}) Pi(fi|Bi) = P(fi|Bi) for all (fi, Bi) ∈ 𝒞i.

Existence. In both of our two frameworks, the independent natural extension always exists; we denote it by D1 ⊗ D2 and P1 ⊗ P2, respectively. For lower previsions, this result crucially depends on our use of Williams-coherence: for Walley-coherence, as shown by Miranda and Zaffalon (2015) for epistemic value-independence, this may no longer hold.

Properties. Let {i, j} = {1, 2} and consider any h ∈ 𝒢(𝒳j) and f, g ∈ 𝒢(𝒳i) such that g ≥ 0 is ℬi-measurable, a technical condition that coincides with the usual notion when ℬi ∪ {∅} is a σ-field. Then, if all the terms are well-defined (if 𝒞1 and 𝒞2 are large enough), we have that (P1 ⊗ P2)(f + gh) = Pi(f + g Pj(h)). As a direct consequence, we find that (P1 ⊗ P2)(f + h) = Pi(f) + Pj(h), and, with P̄i(g) := −Pi(−g), that (P1 ⊗ P2)(gh) = Pi(g Pj(h)), which equals Pi(g)Pj(h) if Pj(h) ≥ 0 and P̄i(g)Pj(h) if Pj(h) ≤ 0. These two consequences are known as external additivity and factorisation, respectively. Crucially, for epistemic subset-independence, ℬi-measurability is trivially satisfied, and factorisation then always holds.

Independent Natural Extension

See you at the poster?