Approximating Probabilistic Bisimulation by Conditional Expectation

Prakash Panangaden

School of Computer Science McGill University

CMS Meeting 5 - 8 June 2020

Outline

1. Introduction
2. Background
3. Cones and Duality
4. Conditional expectation
5. Markov processes
6. Bisimulation
7. Conclusions

Joint work with

Philippe Chaput, Vincent Danos, and Gordon Plotkin.

Philippe Chaput, Vincent Danos, Prakash Panangaden, and Gordon Plotkin. "Approximating Markov processes by averaging." Journal of the ACM 61, no. 1 (2014): 1-45.

The idea of functorializing conditional expectation is due to Vincent.

Approximation via Averaging

1. Approximation of Markov processes should be based on “averaging”.
2. Averages are computed by expectation values.
3. Beautiful functorial presentation of expectation values due to Vincent Danos.
4. Make bisimulation and approximation live in the same universe.

Some notation

1. Given (X, Σ, p) and (Y, Λ) and a measurable function f : X → Y, we obtain a measure q on Y by q(B) = p(f^{-1}(B)). This is written Mf(p) and is called the image measure of p under f.
2. We say that a measure ν is absolutely continuous with respect to another measure µ if for any measurable set A, µ(A) = 0 implies that ν(A) = 0. We write ν ≪ µ.
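The image measure is easy to compute on a finite space, where a measure is just a table of point masses. A minimal sketch in Python (the function name `image_measure` and the example data are ours, purely illustrative, not from the talk):

```python
# Image measure on a finite space: q(B) = p(f^{-1}(B)).
# All names here are illustrative.

def image_measure(p, f):
    """Push the measure p (a dict point -> mass) forward along f."""
    q = {}
    for x, mass in p.items():
        y = f(x)
        q[y] = q.get(y, 0.0) + mass
    return q

# X = {0,1,2,3} with the uniform measure, f(x) = x mod 2.
p = {x: 0.25 for x in range(4)}
q = image_measure(p, lambda x: x % 2)
# Each point y of Y collects the p-mass of its fibre f^{-1}(y),
# so q = {0: 0.5, 1: 0.5}; total mass is preserved.
```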

The Radon-Nikodym Theorem

The Radon-Nikodym theorem is a central result in measure theory allowing one to define a “derivative” of a measure with respect to another measure.

Radon-Nikodym. If ν ≪ µ, where ν, µ are finite measures on a measurable space (X, Σ), there is a positive measurable function h on X such that for every measurable set B,

    ν(B) = ∫_B h dµ.

The function h is defined uniquely up to a set of µ-measure 0. The function h is called the Radon-Nikodym derivative of ν with respect to µ; we denote it by dν/dµ. Since ν is finite, dν/dµ ∈ L^+_1(X, µ).
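On a finite space the Radon-Nikodym derivative can be written down explicitly: h(x) = ν({x})/µ({x}) wherever µ has mass. A small sketch verifying the theorem's conclusion ν(B) = ∫_B h dµ (all names and data are ours):

```python
# Radon-Nikodym on a finite space: nu << mu holds automatically here
# because mu vanishes nowhere, and h(x) = nu({x}) / mu({x}).

X = ['a', 'b', 'c']
mu = {'a': 0.2, 'b': 0.3, 'c': 0.5}
nu = {'a': 0.1, 'b': 0.6, 'c': 0.3}

h = {x: nu[x] / mu[x] for x in X}      # the density d(nu)/d(mu)

def integrate(f, m, B):
    """Integral of f over the set B against the measure m."""
    return sum(f[x] * m[x] for x in B)

# Check nu(B) = \int_B h d(mu) on a few measurable sets B.
for B in (['a'], ['a', 'c'], X):
    assert abs(sum(nu[x] for x in B) - integrate(h, mu, B)) < 1e-12
```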

Notation for Radon-Nikodym

1. Given an (almost-everywhere) positive function f ∈ L1(X, p), we let f · p be the measure which has density f with respect to p.
2. Two identities that we get from the Radon-Nikodym theorem are:
   - given q ≪ p, we have (dq/dp) · p = q;
   - given f ∈ L^+_1(X, p), d(f · p)/dp = f.
3. These two identities just say that the operations (−) · p and d(−)/dp are inverses of each other as maps between L^+_1(X, p) and M_≪p(X), the space of finite measures on X that are absolutely continuous with respect to p.
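Both identities can be checked mechanically on a finite space, where (−) · p and d(−)/dp are pointwise multiplication and division by the masses of p. A sketch under that assumption (function names and data are ours):

```python
# The two Radon-Nikodym identities on a finite space:
#   (dq/dp) . p = q     and     d(f . p)/dp = f,
# i.e. (-) . p and d(-)/dp are mutually inverse.

X = [0, 1, 2]
p = {0: 0.5, 1: 0.25, 2: 0.25}

def density_to_measure(f, p):          # f |-> f . p
    return {x: f[x] * p[x] for x in p}

def measure_to_density(q, p):          # q |-> dq/dp
    return {x: q[x] / p[x] for x in p}

f = {0: 1.0, 1: 4.0, 2: 2.0}           # a positive function in L1(X, p)
q = density_to_measure(f, p)           # the measure f . p

assert measure_to_density(q, p) == f                          # d(f.p)/dp = f
assert density_to_measure(measure_to_density(q, p), p) == q   # (dq/dp).p = q
```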

Expectation and conditional expectation

1. The expectation Ep(f) of a measurable function f is the average computed by ∫ f dp, and therefore it is just a number.
2. The conditional expectation is not a mere number but a random variable.
3. It is meant to measure the expected value in the presence of additional information.
4. The additional information takes the form of a sub-σ-algebra, say Λ, of Σ. The experimenter knows, for every B ∈ Λ, whether the outcome is in B or not.
5. Now she can recompute the expectation values given this information.

Formalizing conditional expectation

It is an immediate consequence of the Radon-Nikodym theorem that such conditional expectations exist.

Kolmogorov. Let (X, Σ, p) be a measure space with p a finite measure, let f be in L1(X, Σ, p) and let Λ be a sub-σ-algebra of Σ; then there exists a g ∈ L1(X, Λ, p) such that for all B ∈ Λ,

    ∫_B f dp = ∫_B g dp.

This function g is usually denoted by E(f|Λ). We clearly have f · p ≪ p, so the required g is simply d(f · p)/d(p|Λ), where p|Λ is the restriction of p to the sub-σ-algebra Λ.
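For a sub-σ-algebra generated by a finite partition, E(f|Λ) is constant on each block, equal to the p-weighted average of f over that block, and Kolmogorov's defining identity can be checked directly. A sketch (the helper `cond_exp` and the data are ours, purely illustrative):

```python
# Conditional expectation on a finite space. Lambda is generated by a
# partition of X; E(f|Lambda) averages f over each block w.r.t. p.

X = [0, 1, 2, 3]
p = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
blocks = [[0, 1], [2, 3]]              # partition generating Lambda
f = {0: 1.0, 1: 3.0, 2: 2.0, 3: 6.0}

def cond_exp(f, p, blocks):
    g = {}
    for A in blocks:
        mass = sum(p[x] for x in A)
        avg = sum(f[x] * p[x] for x in A) / mass
        for x in A:                    # g is constant on each block,
            g[x] = avg                 # hence Lambda-measurable
    return g

g = cond_exp(f, p, blocks)             # g = E(f | Lambda)
# Defining property: for every B in Lambda, int_B f dp = int_B g dp.
for B in ([], [0, 1], [2, 3], [0, 1, 2, 3]):
    assert abs(sum(f[x] * p[x] for x in B)
               - sum(g[x] * p[x] for x in B)) < 1e-12
```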

Properties of conditional expectation

1. The point of requiring Λ-measurability is that it “smooths out” variations that are too rapid to show up in Λ.
2. The conditional expectation is linear and increasing with respect to the pointwise order.
3. It is defined uniquely p-almost everywhere.

What are cones?

Want to combine linear structure with order structure. If we have a vector space with an order ≤ we have a natural notion of positive and negative vectors: x ≥ 0 is positive. What properties do the positive vectors have? Say P ⊂ V is the set of positive vectors; we include 0. Then for any v ∈ P and positive real r, rv ∈ P. For u, v ∈ P we have u + v ∈ P, and if v ∈ P and −v ∈ P then v = 0. We define a cone C in a vector space V to be a set satisfying exactly these conditions. Any cone defines an order by u ≤ v if v − u ∈ C. Unfortunately for us, many of the structures that we want to look at are cones but are not part of any obvious vector space: e.g. the measures on a space.

Cones that we use I

If µ is a measure on X, then one has the well-known Banach spaces L1 and L∞. These can be restricted to cones by considering the µ-almost everywhere positive functions. We will denote these cones by L^+_1(X, Σ, µ) and L^+_∞(X, Σ). These are complete normed cones.

Cones that we use II

Let (X, Σ, p) be a measure space with finite measure p. We denote by M_≪p(X) the cone of all measures on (X, Σ) that are absolutely continuous with respect to p. If q is such a measure, we define its norm to be q(X). M_≪p(X) is also an ω-complete normed cone. The cones M_≪p(X) and L^+_1(X, Σ, p) are isometrically isomorphic in ωCC. We write M^p_UB(X) for the cone of all measures on (X, Σ) that are uniformly less than a multiple of the measure p: q ∈ M^p_UB(X) means that for some real constant K > 0 we have q ≤ Kp. The cones M^p_UB(X) and L^+_∞(X, Σ, p) are isomorphic.

The pairing

Pairing function. There is a map from the product of the cones L^+_∞(X, p) and L^+_1(X, p) to R^+ defined as follows: for all f ∈ L^+_∞(X, p) and g ∈ L^+_1(X, p),

    ⟨f, g⟩ = ∫ fg dp.

This map is bilinear and is continuous and ω-continuous in both arguments; we refer to it as the pairing.
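On a finite space the pairing is a p-weighted dot product, and bilinearity is easy to check numerically. A sketch (names and data are ours, purely illustrative):

```python
# The pairing <f, g> = int f g dp on a finite space.

X = [0, 1, 2]
p = {0: 0.5, 1: 0.25, 2: 0.25}

def pair(f, g):
    """<f, g> = sum_x f(x) g(x) p({x})."""
    return sum(f[x] * g[x] * p[x] for x in X)

f  = {0: 1.0, 1: 2.0, 2: 4.0}
g1 = {0: 3.0, 1: 0.0, 2: 1.0}
g2 = {0: 1.0, 1: 5.0, 2: 2.0}

# Linearity in the second argument (the first is symmetric).
lhs = pair(f, {x: 2.0 * g1[x] + g2[x] for x in X})
rhs = 2.0 * pair(f, g1) + pair(f, g2)
assert abs(lhs - rhs) < 1e-12
```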

Duality expressed via pairing

This pairing allows one to express the dualities in a very convenient way. For example, the isomorphism between L^+_∞(X, p) and (L^+_1(X, p))^∗ sends f ∈ L^+_∞(X, p) to

    λg. ⟨f, g⟩ = λg. ∫ fg dp.

Duality is the Key

    M_≪p(X)  ≅  L^+_1(X, p)  ≅  L^{+,∗}_∞(X, p)
                     |                 |                    (1)
    M^p_UB(X) ≅  L^+_∞(X, p) ≅  L^{+,∗}_1(X, p)

where the vertical arrows represent dualities and the horizontal arrows represent isomorphisms.

Where the action happens

We define two categories Rad∞ and Rad1 that will be needed for the functorial definition of conditional expectation. This will allow for L∞ and L1 versions of the theory. Going between these versions by duality will be very useful.

The “infinity” category

Rad∞. The category Rad∞ has as objects probability spaces, and as arrows α : (X, p) → (Y, q), measurable maps such that Mα(p) ≤ Kq for some real number K. The reason for choosing the name Rad∞ is that α ∈ Rad∞ maps to dMα(p)/dq ∈ L^+_∞(Y, q).
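A finite sketch of a Rad∞ arrow: push p forward along α, take the density of the image measure with respect to q, and exhibit a bound K witnessing Mα(p) ≤ Kq. All names and data are ours, purely illustrative:

```python
# A Rad_infty arrow between finite probability spaces, and its
# Radon-Nikodym derivative d M_alpha(p) / dq in L+_infty(Y, q).

X, Y = [0, 1, 2, 3], ['u', 'v']
p = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
q = {'u': 0.5, 'v': 0.5}
alpha = {0: 'u', 1: 'u', 2: 'u', 3: 'v'}

# Image measure M_alpha(p):
Mp = {y: 0.0 for y in Y}
for x in X:
    Mp[alpha[x]] += p[x]

density = {y: Mp[y] / q[y] for y in Y}   # d M_alpha(p) / dq
K = max(density.values())                # witnesses M_alpha(p) <= K q
assert all(Mp[y] <= K * q[y] + 1e-12 for y in Y)
```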

The “one” category

Rad1. The category Rad1 has as objects probability spaces and as arrows α : (X, p) → (Y, q), measurable maps such that Mα(p) ≪ q.

1. The reason for choosing the name Rad1 is that α ∈ Rad1 maps to dMα(p)/dq ∈ L^+_1(Y, q).
2. The fact that the category Rad∞ embeds in Rad1 reflects the fact that L^+_∞ embeds in L^+_1.

Pairing function revisited

Recall the isomorphism between L^+_∞(X, p) and L^{+,∗}_1(X, p) mediated by the pairing function:

    f ∈ L^+_∞(X, p) ↦ λg : L^+_1(X, p). ⟨f, g⟩ = λg. ∫ fg dp.

Precomposition

1. Now, precomposition with α in Rad∞ gives a map P1(α) from L^+_1(Y, q) to L^+_1(X, p).
2. Dually, given α ∈ Rad1 : (X, p) → (Y, q) and g ∈ L^+_∞(Y, q) we have that P∞(α)(g) ∈ L^+_∞(X, p).
3. Thus the subscripts on the two precomposition functors describe the target categories.
4. Using the ∗-functor we get a map (P1(α))^∗ from L^{+,∗}_1(X, p) to L^{+,∗}_1(Y, q) in the first case, and
5. dually we get (P∞(α))^∗ from L^{+,∗}_∞(X, p) to L^{+,∗}_∞(Y, q).

Expectation value functor

The functor E∞(·) is a functor from Rad∞ to ωCC which, on objects, maps (X, p) to L^+_∞(X, p), and on maps is given as follows: given α : (X, p) → (Y, q) in Rad∞, the action of the functor is to produce the map E∞(α) : L^+_∞(X, p) → L^+_∞(Y, q) obtained by composing (P1(α))^∗ with the isomorphisms between L^{+,∗}_1 and L^+_∞:

    L^{+,∗}_1(X, p) --(P1(α))^∗--> L^{+,∗}_1(Y, q)
          |≅                             |≅
    L^+_∞(X, p)  -----E∞(α)----->  L^+_∞(Y, q)

Consequences

1

It is an immediate consequence of the definitions that for any f ∈ L+

∞(X, p) and g ∈ L1(Y, q)

E∞(α)(f), gY = f, P1(α)(g)X.

slide-72
SLIDE 72

Approximation by Averaging Panangaden Introduction Background Cones and Duality Conditional expectation Markov processes Bisimulation Conclusions

Consequences

1

It is an immediate consequence of the definitions that for any f ∈ L+

∞(X, p) and g ∈ L1(Y, q)

E∞(α)(f), gY = f, P1(α)(g)X. λh : L+

1 (X, p).f, h

  • f

  • λg : L+

1 (Y, q).f, g ◦ α ✤

E∞(α)(f)

slide-73
SLIDE 73

Approximation by Averaging Panangaden Introduction Background Cones and Duality Conditional expectation Markov processes Bisimulation Conclusions

Consequences

1

It is an immediate consequence of the definitions that for any f ∈ L+

∞(X, p) and g ∈ L1(Y, q)

E∞(α)(f), gY = f, P1(α)(g)X. λh : L+

1 (X, p).f, h

  • f

  • λg : L+

1 (Y, q).f, g ◦ α ✤

E∞(α)(f)

2

Note that since we started with α in Rad∞ we get the expectation value as a map between the L+

∞ cones.
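On a finite space this adjunction can be checked directly. The sketch below is an illustration, not from the slides: `alpha` is encoded as a list, `q` is taken to be the pushforward of `p`, and the hypothetical helper `E` realizes the conditional expectation E∞(α) as averaging over the fibres of α.

```python
# Finite sketch (assumption: X, Y finite; p a probability on X,
# q its pushforward along alpha). P1(alpha)(g) = g o alpha, and
# E(alpha)(f) averages f over each fibre of alpha, so the pairing
# identity <E(alpha)(f), g>_Y = <f, g o alpha>_X should hold.

def pushforward(alpha, p, ny):
    q = [0.0] * ny
    for x, w in enumerate(p):
        q[alpha[x]] += w
    return q

def E(alpha, p, f, ny):
    """Conditional expectation of f along alpha, with respect to p."""
    q = pushforward(alpha, p, ny)
    num = [0.0] * ny
    for x, w in enumerate(p):
        num[alpha[x]] += w * f[x]
    return [num[y] / q[y] for y in range(ny)]

alpha = [0, 0, 1]                 # X = {0,1,2} -> Y = {0,1}
p = [0.2, 0.3, 0.5]
f = [1.0, 3.0, 2.0]               # f in L+_inf(X, p)
g = [5.0, 7.0]                    # g in L+_1(Y, q)
q = pushforward(alpha, p, 2)

# <E(alpha)(f), g>_Y versus <f, g o alpha>_X
lhs = sum(E(alpha, p, f, 2)[y] * g[y] * q[y] for y in range(2))
rhs = sum(f[x] * g[alpha[x]] * p[x] for x in range(3))
assert abs(lhs - rhs) < 1e-9
```

The two sums agree because averaging over a fibre and then integrating against q is the same as integrating directly against p.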

slide-74
SLIDE 74

The other expectation value functor

The functor E1(·) is a functor from Rad1 to ωCC which maps the object (X, p) to L^+_1(X, p) and on maps is given as follows: given α : (X, p) → (Y, q) in Rad1, the action of the functor is to produce the map E1(α) : L^+_1(X, p) → L^+_1(Y, q) obtained by composing (P∞(α))* with the isomorphisms between L^{+,*}_∞ and L^+_1, as shown in the diagram below:

L^{+,*}_∞(X, p) ≅ L^+_1(X, p)
      │                │
  (P∞(α))*          E1(α)
      ↓                ↓
L^{+,*}_∞(Y, q) ≅ L^+_1(Y, q)

slide-75
SLIDE 75

Markov kernels as linear maps

1. Given τ, a Markov kernel from (X, Σ) to (Y, Λ), we define Tτ : L^+(Y) → L^+(X), for f ∈ L^+(Y) and x ∈ X, as

   Tτ(f)(x) = ∫_Y f(z) τ(x, dz).

2. This map is well-defined, linear and ω-continuous.

3. If we write 1_B for the indicator function of the measurable set B, we have that Tτ(1_B)(x) = τ(x, B).

4. It encodes all the transition probability information.
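On a finite space the integral becomes a matrix-vector product, which makes the indicator fact in item 3 easy to see. This is a sketch, not from the slides; the helper name `T` is an assumption.

```python
# Finite sketch: a Markov kernel tau from X to Y becomes the linear map
# T_tau(f)(x) = sum_y tau[x][y] * f(y), the discrete form of the integral
# T_tau(f)(x) = ∫_Y f(z) tau(x, dz) above.

def T(tau, f):
    """Apply the likelihood transformer T_tau to f : Y -> R+."""
    return [sum(tau[x][y] * f[y] for y in range(len(f)))
            for x in range(len(tau))]

# X = {0, 1}, Y = {0, 1, 2}; each row of tau is a probability on Y.
tau = [[0.2, 0.5, 0.3],
       [0.6, 0.1, 0.3]]

# Indicator of B = {1, 2}: T_tau(1_B)(x) recovers tau(x, B).
one_B = [0.0, 1.0, 1.0]
result = T(tau, one_B)   # approximately [0.8, 0.4] = [tau(0,B), tau(1,B)]
```

Applying Tτ to an indicator reads off the transition probabilities, which is why the operator encodes all the transition information.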

slide-79
SLIDE 79

From linear maps to Markov kernels

1. Conversely, any ω-continuous morphism L with L(1_Y) ≤ 1_X can be cast as a Markov kernel by reversing the process on the last slide.

2. The interpretation of L is that L(1_B) is a measurable function on X such that L(1_B)(x) is the probability of jumping from x to B.

slide-81
SLIDE 81

Backwards

1. We can also define an operator on M(X) by using τ the other way.

2. We define T̄τ : M(X) → M(Y), for µ ∈ M(X) and B ∈ Λ, as

   T̄τ(µ)(B) = ∫_X τ(x, B) dµ(x).

3. It is easy to show that this map is linear and ω-continuous.
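In the finite sketch the forward operator is the dual matrix action on measures, and its duality with Tτ can be verified numerically. The helper names `T_bar` and `T` are assumptions, not from the slides.

```python
# Finite sketch: measures on X as weight vectors; the forward operator is
# T_bar(mu)(B) = sum_x tau(x, B) * mu(x), and it is dual to T_tau:
# ∫ f d(T_bar(mu)) = ∫ T_tau(f) d(mu).

def T_bar(tau, mu):
    """Push the measure mu on X forward through the kernel tau."""
    ny = len(tau[0])
    return [sum(mu[x] * tau[x][y] for x in range(len(mu)))
            for y in range(ny)]

def T(tau, f):
    return [sum(tau[x][y] * f[y] for y in range(len(f)))
            for x in range(len(tau))]

tau = [[0.2, 0.5, 0.3],
       [0.6, 0.1, 0.3]]
mu = [0.4, 0.6]            # a measure on X
f = [1.0, 2.0, 5.0]        # a positive function on Y

# Duality: integrating f against T_bar(mu) equals integrating T(f) against mu.
lhs = sum(a * b for a, b in zip(T_bar(tau, mu), f))
rhs = sum(a * b for a, b in zip(T(tau, f), mu))
assert abs(lhs - rhs) < 1e-9
```

Since the rows of τ sum to one, T̄τ also preserves total mass, matching the "forwards in time" reading on the next slide.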

slide-84
SLIDE 84

What do they mean?

1. The operator T̄τ transforms measures "forwards in time"; if µ is a measure on X representing the current state of the system, T̄τ(µ) is the resulting measure on Y after a transition through τ.

2. The operator Tτ may be interpreted as a likelihood transformer which propagates information "backwards", just as we expect from predicate transformers.

3. Tτ(f)(x) is just the expected value of f after one τ-step, given that one is at x.

slide-87
SLIDE 87

Labelled abstract Markov processes

The definition: An abstract Markov kernel from (X, Σ, p) to (Y, Λ, q) is an ω-continuous linear map τ : L^+_∞(Y) → L^+_∞(X) with τ ≤ 1.

LAMPs: A labelled abstract Markov process on a probability space (X, Σ, p) with a set of labels (or actions) A is a family of abstract Markov kernels τa : L^+_∞(X, p) → L^+_∞(X, p) indexed by elements a of A.

slide-89
SLIDE 89

The approximation map

The expectation value functors project a probability space onto another one with a possibly coarser σ-algebra.

Given an AMP on (X, p) and a map α : (X, p) → (Y, q) in Rad∞, we have the following approximation scheme:

Approximation scheme

L^+_∞(X, p) ──τa──> L^+_∞(X, p)
     ↑                   │
  P∞(α)               E∞(α)
     │                   ↓
L^+_∞(Y, q) ─α(τa)─> L^+_∞(Y, q)

slide-90
SLIDE 90

A special case

Take (X, Σ) and (X, Λ) with Λ ⊂ Σ and use the measurable function id : (X, Σ) → (X, Λ) as α.

Coarsening the σ-algebra

L^+_∞(X, Σ, p) ──τa──> L^+_∞(X, Σ, p)
       ↑                      │
    P∞(id)                 E∞(id)
       │                      ↓
L^+_∞(X, Λ, p) ─id(τa)─> L^+_∞(X, Λ, p)

Thus id(τa) is the approximation of τa obtained by averaging over the sets of the coarser σ-algebra Λ. We now have the machinery to consider approximating along arbitrary maps α.
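The averaging construction has a concrete finite form. The sketch below is an illustration under stated assumptions (Λ generated by a partition of a finite X into `blocks`; a single label; the helper name `coarsen` is hypothetical): it implements id(τa) = E∞(id) ∘ τa ∘ P∞(id) by lifting a block indicator, applying τ, and averaging over each block with the weights of p.

```python
# Finite sketch of coarsening: approximate a kernel tau on X by a kernel
# on the blocks of a partition, averaging with respect to the measure p.

def coarsen(tau, p, blocks):
    """Average tau over the partition 'blocks', weighted by p."""
    k = len(blocks)
    approx = [[0.0] * k for _ in range(k)]
    for i, B in enumerate(blocks):
        wB = sum(p[x] for x in B)
        for j, C in enumerate(blocks):
            # P(id): 1_C lifts to itself; tau gives x -> tau(x, C);
            # E(id): average that function over block B, weighted by p.
            approx[i][j] = sum(p[x] * sum(tau[x][y] for y in C)
                               for x in B) / wB
    return approx

tau = [[0.5, 0.2, 0.3],
       [0.3, 0.4, 0.3],
       [0.3, 0.1, 0.6]]
p = [0.5, 0.25, 0.25]
blocks = [[0], [1, 2]]   # Lambda is generated by these two blocks

# For this (lumpable) tau the averages are [[0.5, 0.5], [0.3, 0.7]],
# up to rounding; in general averaging loses information.
approx_kernel = coarsen(tau, p, blocks)
```

Because this τ happens to be lumpable for the chosen partition, the averaged kernel reproduces it exactly; for a non-lumpable τ the result is a genuine approximation.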

slide-94
SLIDE 94

Bisimulation traditionally

Larsen–Skou definition: Given an LMP (S, Σ, τa), an equivalence relation R on S is called a probabilistic bisimulation if, whenever sRt, then for every measurable R-closed set C and every a we have τa(s, C) = τa(t, C). This variation for the continuous case is due to Josée Desharnais and her Indian friends.
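On a finite LMP the Larsen–Skou condition can be checked mechanically. This is a sketch, not from the slides, under stated assumptions: a single label, the kernel as a matrix, and R given as a partition into equivalence classes; by finite additivity it suffices to compare τ(s, C) on each class C rather than on every R-closed union of classes.

```python
# Finite sketch of the Larsen-Skou check: states in the same class must
# assign equal probability to every equivalence class.

def is_prob_bisimulation(tau, classes):
    for cls in classes:                      # states claimed equivalent
        for C in classes:                    # each R-closed generator
            probs = {round(sum(tau[s][y] for y in C), 9) for s in cls}
            if len(probs) > 1:
                return False
    return True

tau = [[0.5, 0.2, 0.3],
       [0.3, 0.4, 0.3],
       [0.3, 0.1, 0.6]]

ok = is_prob_bisimulation(tau, [[0], [1, 2]])       # 1 and 2 are bisimilar
bad = is_prob_bisimulation(tau, [[0, 2], [1]])      # 0 and 2 are not
```

For a labelled family one would run the same check for every τa.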

slide-95
SLIDE 95

Event bisimulation

In measure theory one should focus on measurable sets rather than on points.

Event bisimulation: Given an LMP (X, Σ, τa), an event bisimulation is a sub-σ-algebra Λ of Σ such that (X, Λ, τa) is still an LMP.

This means τa sends the subspace L^+_∞(X, Λ, p) to itself, where we are now viewing τa as a map on L^+_∞(X, Λ, p).
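The subspace-invariance reading has a direct finite test. This sketch is an illustration under assumptions (Λ generated by a partition `blocks`; one label): Λ is an event bisimulation iff Tτ keeps Λ-measurable, i.e. block-constant, functions block-constant, and on a finite space it is enough to test the block indicators, which span that subspace.

```python
# Finite sketch: check that T_tau preserves the subspace of
# Lambda-measurable (block-constant) functions.

def T(tau, f):
    return [sum(tau[x][y] * f[y] for y in range(len(f)))
            for x in range(len(tau))]

def block_constant(f, blocks):
    return all(len({round(f[x], 9) for x in B}) == 1 for B in blocks)

def is_event_bisimulation(tau, blocks):
    n = len(tau)
    for C in blocks:
        one_C = [1.0 if y in C else 0.0 for y in range(n)]
        if not block_constant(T(tau, one_C), blocks):
            return False                 # tau(., C) is not Lambda-measurable
    return True

tau = [[0.5, 0.2, 0.3],
       [0.3, 0.4, 0.3],
       [0.3, 0.1, 0.6]]

good = is_event_bisimulation(tau, [[0], [1, 2]])
bad = is_event_bisimulation(tau, [[0, 2], [1]])
```

On a finite space with a partition-generated Λ this agrees with the Larsen–Skou check, as the set-based view suggests.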

slide-98
SLIDE 98

The bisimulation diagram

L^+_∞(X, Σ, p) ──τa──> L^+_∞(X, Σ, p)
       ↑                      ↑
       │                      │
L^+_∞(X, Λ, p) ──τa──> L^+_∞(X, Λ, p)

(the vertical arrows are the inclusions)

This is a "lossless" approximation!
slide-99
SLIDE 99

Zigzag maps

We can generalize the notion of event bisimulation by using maps other than the identity map on the underlying sets. This would be a map α from (X, Σ, p) to (Y, Λ, q), equipped with LMPs τa and ρa respectively, such that the following commutes:

L^+_∞(X, Σ, p) ──τa──> L^+_∞(X, Σ, p)
       ↑                      ↑
    P∞(α)                  P∞(α)
       │                      │
L^+_∞(Y, Λ, q) ──ρa──> L^+_∞(Y, Λ, q)     (2)
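On finite spaces the zigzag square can be verified numerically. The sketch below is an illustration, not from the slides: α lumps states 1 and 2 of a 3-state space X onto the second point of Y, and τ is chosen lumpable so that the square commutes. With P∞(α)(g) = g ∘ α, the condition reads Tτ(g ∘ α) = (Tρ g) ∘ α.

```python
# Finite sketch of the zigzag condition: pulling back along alpha and
# then stepping with tau equals stepping with rho and then pulling back.

def T(kernel, f):
    return [sum(row[y] * f[y] for y in range(len(f))) for row in kernel]

alpha = [0, 1, 1]                      # X = {0,1,2} -> Y = {A=0, B=1}
tau = [[0.5, 0.2, 0.3],                # kernel on X
       [0.3, 0.4, 0.3],
       [0.3, 0.1, 0.6]]
rho = [[0.5, 0.5],                     # kernel on Y
       [0.3, 0.7]]

g = [1.0, 4.0]                         # an arbitrary positive function on Y
lhs = T(tau, [g[alpha[x]] for x in range(3)])   # tau after pulling g back
rhs = [T(rho, g)[alpha[x]] for x in range(3)]   # pull back rho's answer
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

The lumped states 1 and 2 necessarily agree in `lhs`, which is the finite shadow of the zigzag being a "lossless" map.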
slide-100
SLIDE 100

A key diagram

When we have a zigzag the following diagram commutes:

L^+_∞(Y) ──ρa──> L^+_∞(Y) ──E1(α)(1X)·(−)──> L^+_∞(Y)
  P∞(α)│           P∞(α)│                        ∥
       ↓                ↓                        ∥
L^+_∞(X) ──τa──> L^+_∞(X) ────────E∞(α)───────> L^+_∞(Y)    (3)

(the composite along the left, bottom and right, starting from the top-left L^+_∞(Y), is α(τa))

The upper trapezium says we have a zigzag. The lower trapezium says that we have an "approximation" and the triangle on the right is an earlier lemma. If we "approximate" along a zigzag we actually get the exact result. Approximations are approximate bisimulations.

slide-103
SLIDE 103

Bisimulation as a cospan

Zigzags give a "functional" version of bisimulation; what is the relational version?

Use cospans of zigzags; it is usual to use spans, but cospans give a smoother and more general theory.

With spans one can prove logical characterization of bisimulation on analytic spaces. With the cospan definition we get logical characterization on all measurable spaces. On analytic spaces the two concepts coincide.

Recent results show that the theory cannot be made to work with spans on general measurable spaces.

slide-109
SLIDE 109

The official definition of bisimulation

Bisimulation: We say that two objects of AMP, (X, Σ, p, τ) and (Y, Λ, q, ρ), are bisimilar if there is a third object (Z, Γ, r, π) with a pair of zigzags

α : (X, Σ, p, τ) → (Z, Γ, r, π)
β : (Y, Λ, q, ρ) → (Z, Γ, r, π)

giving a cospan diagram

(X, Σ, p, τ)         (Y, Λ, q, ρ)
        ╲α          β╱
          (Z, Γ, r, π)          (4)

Note that the identity function on an AMP is a zigzag, so if a zigzag exists the two AMPs are bisimilar.

slide-110
SLIDE 110

Fundamental categorical result

The category AMP has pushouts. Furthermore, if the morphisms in the span are zigzags then the morphisms in the pushout diagram are also zigzags.

slide-111
SLIDE 111

Bisimulation is an equivalence

X         Y         Z
  ╲α    β╱ ╲δ    γ╱
    W         U
      ╲ζ   η╱
         V          (5)

The pushout of the zigzags β and δ yields two more zigzags ζ and η (and the pushout object V). As the composition of two zigzags is a zigzag, X and Z are bisimilar. Thus bisimulation is transitive.

slide-112
SLIDE 112

What did we do with this theory?

1. We showed logical characterization of bisimulation for any measurable space.

2. We developed a theory of approximation by looking at finitely generated sub-σ-algebras coming from the logic: approximate bisimulations.

3. We showed that there is a canonical minimal realization that arises as the projective limit of the finite approximations.