A Probabilistic Separation Logic

Justin Hsu
UW–Madison Computer Sciences

1


Brilliant Collaborators

Gilles Barthe Kevin Liao Jialu Bao Simon Docherty Alexandra Silva

2


What Is Independence, Intuitively?

Two random variables x and y are independent if they are uncorrelated: the value of x gives no information about the value or distribution of y.

3


Things that are independent

Fresh random samples

◮ x is the result of a fair coin flip ◮ y is the result of another, “fresh” coin flip ◮ More generally: “separate” sources of randomness

Uncorrelated things

◮ x is today’s winning lottery number ◮ y is the closing price of the stock market

4


Things that are not independent

Re-used samples

◮ x is the result of a fair coin flip ◮ y is the result of the same coin flip

Common cause

◮ x is today’s ice cream sales ◮ y is today’s sunglasses sales

5


What Is Independence, Formally?

Definition

Two random variables x and y are independent (in some implicit distribution over x and y) if for all values a and b: Pr(x = a ∧ y = b) = Pr(x = a) · Pr(y = b) That is, the distribution over (x, y) is the product of a distribution over x and a distribution over y.

6
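
To make the definition concrete, here is a small Python sketch (an editorial addition, not from the slides) that tabulates a finite joint distribution as a dict and checks the product condition Pr(x = a ∧ y = b) = Pr(x = a) · Pr(y = b) directly:

    # A joint distribution is a dict mapping (x_value, y_value) pairs to probabilities.

    def marginal(joint, index):
        """Marginal distribution of the variable at position `index` (0 for x, 1 for y)."""
        out = {}
        for values, p in joint.items():
            out[values[index]] = out.get(values[index], 0.0) + p
        return out

    def independent(joint, tol=1e-9):
        """Check Pr(x = a, y = b) = Pr(x = a) * Pr(y = b) for all values a, b."""
        px, py = marginal(joint, 0), marginal(joint, 1)
        return all(abs(joint.get((a, b), 0.0) - px[a] * py[b]) <= tol
                   for a in px for b in py)

    fresh = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}   # two fresh fair coins
    reused = {(0, 0): 0.5, (1, 1): 0.5}                      # the same coin flip, used twice
    print(independent(fresh))    # True
    print(independent(reused))   # False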


Why Is Independence Useful for Program Reasoning?

Ubiquitous in probabilistic programs

◮ A “fresh” random sample is independent of the state.

Simplifies reasoning about groups of variables

◮ Complicated: general distribution over many variables ◮ Simple: product of distributions over each variable

Preserved under common program operations

◮ Local operations independent of “separate” randomness ◮ Behaves well under conditioning (prob. control flow)

7


Reasoning about Independence: Challenges

Formal definition isn’t very promising

◮ Quantification over all values: lots of probabilities! ◮ Computing exact probabilities: often difficult

How can we leverage the intuition behind probabilistic independence?

8


Main Observation: Independence is Separation

Two variables x and y in a distribution µ are independent if µ is the product of two distributions µx and µy with disjoint domains containing x and y, respectively.

Leverage separation logic to reason about independence

◮ Pioneered by O’Hearn, Reynolds, and Yang ◮ Highly developed area of program verification research ◮ Rich logical theory, automated tools, etc.

9


Our Approach: Two Ingredients

  • Develop a probabilistic model of the logic BI

  • Design a probabilistic separation logic PSL

10


Recap: Bunched Implications and Separation Logics

11


What Goes into a Separation Logic?

  • 1. Programs

◮ Transform input states to output states

  • 2. Assertions

◮ Formulas describe pieces of program states ◮ Semantics defined by a model of BI (Pym and O’Hearn)

  • 3. Program logic

◮ Formulas describe programs ◮ Assertions specify pre- and post-conditions

12


Classical Setting: Heaps

Program states (s, h)

◮ A store s : X → V, map from variables to values ◮ A heap h : N ⇀ V, partial map from addresses to values

Heap-manipulating programs

◮ Control flow: sequence, if-then-else, loops ◮ Read/write addresses in heap ◮ Allocate/free heap cells

13


Assertion Logic: Bunched Implications (BI)

Substructural logic (O’Hearn and Pym)

◮ Start with regular propositional logic (⊤, ⊥, ∧, ∨, →) ◮ Add a new conjunction (“star”): P ∗ Q ◮ Add a new implication (“magic wand”): P −∗ Q

Star is a multiplicative conjunction

◮ P ∧ Q: P and Q hold on the entire state ◮ P ∗ Q: P and Q hold on disjoint parts of the entire state

14


Resource Semantics of BI (O’Hearn and Pym)

Suppose states form a pre-ordered, partial monoid

◮ Set S of states, pre-order ⊑ on S ◮ Partial operation ◦ : S × S ⇀ S (assoc., comm., ...)

Inductively define states that satisfy formulas

s ⊨ ⊤         always
s ⊨ ⊥         never
s ⊨ P ∧ Q    iff s ⊨ P and s ⊨ Q
s ⊨ P ∗ Q    iff there exist s1, s2 with s1 ◦ s2 ⊑ s, s1 ⊨ P, and s2 ⊨ Q

State s can be split into two “disjoint” states, one satisfying P and one satisfying Q

15
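
A model-checker-style Python sketch of this satisfaction relation, covering only ⊤, ⊥, ∧, and ∗ (not ∨, →, or −∗); the tuple encoding of formulas, the `combine`/`below` parameters, and the finite `universe` of candidate states are assumptions of the sketch, not notation from the slides:

    # Formulas are nested tuples: ("top",), ("bot",), ("and", P, Q), ("star", P, Q).
    # `combine` is the partial monoid operation (returns None when undefined),
    # `below` is the pre-order, and `universe` is a finite collection of candidate states.

    def sat(s, phi, combine, below, universe):
        tag = phi[0]
        if tag == "top":
            return True
        if tag == "bot":
            return False
        if tag == "and":        # s satisfies P ∧ Q
            return (sat(s, phi[1], combine, below, universe)
                    and sat(s, phi[2], combine, below, universe))
        if tag == "star":       # s satisfies P ∗ Q: some s1 ◦ s2 ⊑ s with s1 ⊨ P, s2 ⊨ Q
            return any(
                (combine(s1, s2) is not None
                 and below(combine(s1, s2), s)
                 and sat(s1, phi[1], combine, below, universe)
                 and sat(s2, phi[2], combine, below, universe))
                for s1 in universe for s2 in universe)
        raise ValueError("unsupported connective: %r" % (tag,))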


Example: Heap Model of BI

Set of states: heaps

◮ S = N ⇀ V, partial maps from addresses to values

Monoid operation: combine disjoint heaps

◮ s1 ◦ s2 is defined to be union iff dom(s1) ∩ dom(s2) = ∅

Pre-order: extend/project heaps

◮ s1 ⊑ s2 iff dom(s1) ⊆ dom(s2), and s1, s2 agree on dom(s1)

16


Propositions for Heaps

Atomic propositions: “points-to”

◮ x ↦ v holds in heap s iff x ∈ dom(s) and s(x) = v

Example axioms (not complete)

◮ Deterministic: x ↦ v ∧ y ↦ w ∧ x = y → v = w ◮ Disjoint: x ↦ v ∗ y ↦ w → x ≠ y

17
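
A sketch of the heap model and the points-to assertion in Python; `combine` and `below` play the roles of ◦ and ⊑ here and can be passed to the `sat` sketch above (the function names are mine):

    # Heaps are dicts from addresses (ints) to values.

    def combine(h1, h2):
        """h1 ◦ h2: union of two heaps, defined only when their domains are disjoint."""
        if set(h1) & set(h2):
            return None
        return {**h1, **h2}

    def below(h1, h2):
        """h1 ⊑ h2: h2 extends h1, i.e., they agree on dom(h1)."""
        return all(addr in h2 and h2[addr] == h1[addr] for addr in h1)

    def points_to(h, addr, v):
        """The points-to assertion addr ↦ v."""
        return addr in h and h[addr] == v

    print(combine({1: "a"}, {2: "b"}))   # {1: 'a', 2: 'b'}: disjoint, defined
    print(combine({1: "a"}, {1: "b"}))   # None: overlapping domains, undefined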


The Separation Logic Proper

Programs c from a basic imperative language

◮ Read from location: x := ∗e ◮ Write to location: ∗e := e′

Program logic judgments

{P} c {Q}

Reading

Executing c on any input state satisfying P leads to an output state satisfying Q, without invalid reads or writes.

18


Basic Proof Rules

Reading a location

{x ↦ v} y := ∗x {x ↦ v ∧ y = v}

Read

Writing a location

{x ↦ v} ∗x := e {x ↦ e}

Write

19


The Frame Rule

Properties about unmodified heaps are preserved

{P} c {Q}     c doesn’t modify FV(R)
─────────────────────────────────────
{P ∗ R} c {Q ∗ R}

Frame

So-called “local reasoning” in SL

◮ Only need to reason about part of heap used by c ◮ Note: doesn’t hold if ∗ replaced by ∧, due to aliasing!

20


A Probabilistic Model of BI

21


States: Distributions over Memories

Memories (not heaps)

◮ Fix sets X of variables and V of values ◮ Memories indexed by domains A ⊆ X: M(A) = A → V

Program states: randomized memories

◮ States are distributions over memories with same domain ◮ Formally: S = {s | s ∈ Distr(M(A)), A ⊆ X} ◮ When s ∈ Distr(M(A)), write dom(s) for A

22


Monoid: “Disjoint” Product Distribution

Intuition

◮ Two distributions can be combined iff domains are disjoint ◮ Combine by taking product distribution, union of domains

More formally...

Suppose that s ∈ Distr(M(A)) and s′ ∈ Distr(M(B)). If A, B are disjoint, then: (s ◦ s′)(m ∪ m′) = s(m) · s′(m′) for m ∈ M(A) and m′ ∈ M(B). Otherwise, s ◦ s′ is undefined.

23
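
Continuing in Python, a sketch of this probabilistic model (the encoding is mine, not from the slides): memories are frozensets of (variable, value) pairs, states are dicts from memories to probabilities, and `combine` now implements the disjoint product ◦ for distributions:

    def dom(state):
        """Domain of a state: the set of variables its memories are defined on."""
        return {var for (var, _) in next(iter(state))}

    def combine(s1, s2):
        """s1 ◦ s2: product distribution, defined only when the domains are disjoint."""
        if dom(s1) & dom(s2):
            return None
        return {m1 | m2: p1 * p2
                for m1, p1 in s1.items() for m2, p2 in s2.items()}

    sx = {frozenset({("x", 0)}): 0.5, frozenset({("x", 1)}): 0.5}    # x is a fair coin
    sy = {frozenset({("y", 0)}): 0.9, frozenset({("y", 1)}): 0.1}    # y is a biased coin
    print(combine(sx, sy))   # the product distribution over {x, y}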


Pre-Order: Extension/Projection

Intuition

◮ Define s ⊑ s′ if s “has less information than” s′ ◮ In probabilistic setting: s is a projection of s′

More formally...

Suppose that s ∈ Distr(M(A)) and s′ ∈ Distr(M(B)). Then s ⊑ s′ iff A ⊆ B, and for all m ∈ M(A):

s(m) = Σ_{m′ ∈ M(B \ A)} s′(m ∪ m′)

That is, s is obtained from s′ by marginalizing out the variables in B \ A.

24
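
A sketch of the pre-order, continuing the encoding above (`dom` and `combine` are reused from the previous snippet): `project` marginalizes a state onto a subset of its variables, and `below` implements ⊑:

    def project(state, variables):
        """Marginalize a state onto a subset of its variables."""
        out = {}
        for memory, p in state.items():
            small = frozenset((var, val) for (var, val) in memory if var in variables)
            out[small] = out.get(small, 0.0) + p
        return out

    def below(s1, s2, tol=1e-9):
        """s1 ⊑ s2: dom(s1) ⊆ dom(s2) and s1 is the projection of s2 onto dom(s1)."""
        if not dom(s1) <= dom(s2):
            return False
        proj = project(s2, dom(s1))
        keys = set(s1) | set(proj)
        return all(abs(s1.get(m, 0.0) - proj.get(m, 0.0)) <= tol for m in keys)

    # The product state from above projects back onto its factors:
    print(below(sx, combine(sx, sy)))   # True
    print(below(sy, combine(sx, sy)))   # True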


Atomic Formulas

Equalities

◮ e = e′ holds in s iff FV(e, e′) ⊆ dom(s), and e is equal to e′ with probability 1 in s

Distribution laws

◮ e ∼ Unif holds in s iff FV(e) ⊆ dom(s), and e is uniformly distributed (e.g., fair coin flip) ◮ e ∼ D holds in s iff FV(e) ⊆ dom(s)

25
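
A sketch of how these atomic formulas can be checked on the encoded states, with expressions modeled as Python functions from a memory to a value (the FV ⊆ dom side conditions are not checked; `combine`, `sx`, `sy` are reused from above):

    def holds_eq(state, e1, e2):
        """e = e′: the two expressions agree with probability 1 in the state."""
        return all(e1(dict(m)) == e2(dict(m)) for m, p in state.items() if p > 0)

    def holds_unif(state, e, values):
        """e ∼ Unif: e takes each value in `values` with equal probability."""
        probs = {}
        for m, p in state.items():
            v = e(dict(m))
            probs[v] = probs.get(v, 0.0) + p
        return (set(probs) <= set(values)
                and all(abs(probs.get(v, 0.0) - 1.0 / len(values)) < 1e-9 for v in values))

    print(holds_unif(combine(sx, sy), lambda mem: mem["x"], [0, 1]))   # True: x ∼ Unif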


Example Axioms (not complete)

Distribution operations

◮ x ∼ D ∧ y ∼ D → x ∧ y ∼ D

Equality and distributions

◮ x = y ∧ x ∼ Unif → y ∼ Unif

Uniformity and products

◮ (x ∼ Unif ∗ y ∼ Unif) → (x, y) ∼ Unif_{B×B}

Uniformity and exclusive-or (⊕)

◮ (x ∼ Unif ∗ y ∼ D) ∧ z = x ⊕ y → (z ∼ Unif ∗ y ∼ D)

26
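
The exclusive-or axiom can be checked numerically with the helpers sketched above (`combine`, `project`, `below`, `holds_unif`, `sx`, `sy`): build a state where x is uniform and independent of y, set z = x ⊕ y, and confirm that z is uniform and independent of y:

    def push_xor(state):
        """Extend each memory with z = x ⊕ y."""
        return {m | frozenset({("z", dict(m)["x"] ^ dict(m)["y"])}): p
                for m, p in state.items()}

    s_xor = push_xor(combine(sx, sy))

    # z is uniform ...
    print(holds_unif(s_xor, lambda mem: mem["z"], [0, 1]))          # True
    # ... and independent of y: the joint on {y, z} is the product of its marginals.
    prod_yz = combine(project(s_xor, {"z"}), project(s_xor, {"y"}))
    print(below(prod_yz, project(s_xor, {"y", "z"})))               # True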


Intuitionistic, or Classical?

Many SLs use classical version of BI (Boolean BI)

◮ Pre-order is discrete (trivial) ◮ Benefits: can describe heap domain exactly (e.g., empty) ◮ Drawbacks: must describe the entire heap

Our probabilistic model is for intuitionistic BI

◮ Pre-order is nontrivial ◮ Benefits: can describe a subset of the variables ◮ Necessary: other variables might not be independent!

27


A Probabilistic Separation Logic

28


A Toy Probabilistic Language

Program syntax

Exp ∋ e ::= x ∈ X | tt | ff | e ∧ e′ | e ∨ e′ | · · ·
Com ∋ c ::= skip | x ← e | x $← Unif | c; c′ | if e then c else c′

Semantics: distribution transformers (Kozen)

⟦c⟧ : Distr(M(X)) → Distr(M(X))

29
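
A sketch of this distribution-transformer semantics in Python for the Boolean toy language; commands are encoded as tuples, and the constructor names ("skip", "assign", "sample", "seq", "if") are my own, not the paper's:

    def run(cmd, state):
        """Interpret a command as a distribution transformer over memories
        (memories are frozensets of (variable, value) pairs; states map them
        to probabilities)."""
        tag = cmd[0]
        if tag == "skip":
            return state
        if tag == "assign":                              # x ← e
            _, x, e = cmd
            out = {}
            for m, p in state.items():
                mem = dict(m)
                val = e(mem)                             # evaluate e in the current memory
                mem[x] = val
                key = frozenset(mem.items())
                out[key] = out.get(key, 0.0) + p
            return out
        if tag == "sample":                              # x $← Unif (fair coin)
            _, x = cmd
            out = {}
            for m, p in state.items():
                for bit in (0, 1):
                    mem = dict(m)
                    mem[x] = bit
                    key = frozenset(mem.items())
                    out[key] = out.get(key, 0.0) + p / 2
            return out
        if tag == "seq":                                 # c; c′
            return run(cmd[2], run(cmd[1], state))
        if tag == "if":                                  # if e then c else c′
            _, e, c1, c2 = cmd
            out = {}
            for m, p in state.items():
                branch = c1 if e(dict(m)) else c2
                for m2, q in run(branch, {m: p}).items():
                    out[m2] = out.get(m2, 0.0) + q
            return out
        raise ValueError("unknown command: %r" % (tag,))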


Program Logic Judgments in PSL

P and Q from probabilistic BI, c a probabilistic program

{P} c {Q}

Validity

For all input states s ∈ Distr(M(X)) satisfying the pre-condition (s ⊨ P), the output state ⟦c⟧(s) satisfies the post-condition (⟦c⟧(s) ⊨ Q).

30



Basic Proof Rules in PSL

Assignment

x ∉ FV(e)
──────────────────────
{⊤} x ← e {x = e}

Assn

Sampling

{⊤} x $← Unif {x ∼ Unif}

Samp

31


Conditional Rule in PSL

Q is “supported”
{e = tt ∗ P} c {e = tt ∗ Q}
{e = ff ∗ P} c′ {e = ff ∗ Q}
────────────────────────────────────────────
{e ∼ D ∗ P} if e then c else c′ {e ∼ D ∗ Q}

Cond

Pre-conditions

◮ Inputs to branches derived from conditioning on e ◮ Independence ensures that P holds after conditioning

Post-conditions

◮ Not all post-conditions Q can be soundly combined ◮ “Supported”: Q describes unique distribution (Reynolds)

32
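
The conditioning step behind this rule can be illustrated with the earlier helpers (`combine`, `below`): conditioning a state on a guard that is independent of the rest of the state leaves the rest unchanged. A minimal sketch, with my own variable names:

    def condition(state, e, value):
        """Restrict to memories where e evaluates to `value`, then renormalize."""
        kept = {m: p for m, p in state.items() if e(dict(m)) == value}
        total = sum(kept.values())
        return {m: p / total for m, p in kept.items()}

    s_guard = {frozenset({("b", 0)}): 0.5, frozenset({("b", 1)}): 0.5}   # guard b ∼ Unif
    s_rest = {frozenset({("w", 0)}): 0.7, frozenset({("w", 1)}): 0.3}    # the rest of the state
    s_if = combine(s_guard, s_rest)                                      # b independent of w

    cond_tt = condition(s_if, lambda mem: mem["b"], 1)
    # After conditioning on b = 1, the marginal on w is unchanged:
    print(below(s_rest, cond_tt))   # True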


The Frame Rule in PSL

{P} c {Q}     FV(R) ∩ MV(c) = ∅     ⊨ P → RV(c) ∼ D     FV(Q) ⊆ RV(c) ∪ WV(c)
────────────────────────────────────────────────────────────────────────────────
{P ∗ R} c {Q ∗ R}

Frame

Side conditions

  • 1. Variables in R are not modified (standard in SL)
  • 2. P describes all variables that might be read
  • 3. Everything in Q is freshly written, or in P

Variables in the post Q were independent of R, or are newly independent of R

33


Example: Deriving a Better Sampling Rule

Given rules:

{P} c {Q}     FV(R) ∩ MV(c) = ∅     ⊨ P → RV(c) ∼ D     FV(Q) ⊆ RV(c) ∪ WV(c)
────────────────────────────────────────────────────────────────────────────────
{P ∗ R} c {Q ∗ R}

Frame

{⊤} x $← Unif {x ∼ Unif}

Samp

Can derive:

x ∉ FV(R)
──────────────────────────────
{R} x $← Unif {x ∼ Unif ∗ R}

Samp*

Intuitively: fresh random sample is independent of everything

34


Key Property for Soundness: Restriction

Theorem (Restriction)

Let P be any formula of probabilistic BI, and suppose that s ⊨ P. Then there exists s′ ⊑ s such that s′ ⊨ P and dom(s′) = dom(s) ∩ FV(P).

Intuition

◮ The only variables that “matter” for P are FV (P) ◮ Tricky for implications; proof “glues” distributions

35


Verifying an Example

36


One-Time-Pad (OTP)

Possibly the simplest encryption scheme

◮ Input: a message m ∈ B ◮ Output: a ciphertext c ∈ B ◮ Idea: encrypt by taking xor with a uniformly random key k

The encoding program:

k $← Unif
c ← k ⊕ m

37


How to Formalize Security?

Method 1: Uniformity

◮ Show that c is uniformly distributed ◮ Always the same, no matter what the message m is

Method 2: Input-output independence

◮ Assume that m is drawn from some (unknown) distribution ◮ Show that c and m are independent

38


Proving Input-Output Independence for OTP in PSL

{m ∼ D}                                  assumption
k $← Unif
{m ∼ D ∗ k ∼ Unif}                       [Samp*]
c ← k ⊕ m
{(m ∼ D ∗ k ∼ Unif) ∧ c = k ⊕ m}         [Assn*]
{m ∼ D ∗ c ∼ Unif}                       XOR axiom

39
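
The same conclusion can be checked numerically by running the OTP program through the interpreter sketch above and reusing the earlier helpers (`run`, `project`, `below`, `combine`, `holds_unif`): both formalizations of security (uniformity of c, and independence of c and m) hold for an arbitrary input distribution on m. A sketch:

    otp = ("seq", ("sample", "k"),                                    # k $← Unif
                  ("assign", "c", lambda mem: mem["k"] ^ mem["m"]))   # c ← k ⊕ m

    s_msg = {frozenset({("m", 0)}): 0.8, frozenset({("m", 1)}): 0.2}  # m ∼ D, for some D
    s_otp = run(otp, s_msg)

    # Method 1: the ciphertext c is uniform.
    print(holds_unif(s_otp, lambda mem: mem["c"], [0, 1]))            # True
    # Method 2: c is independent of m — the joint on {c, m} is the product of marginals.
    prod_cm = combine(project(s_otp, {"c"}), project(s_otp, {"m"}))
    print(below(prod_cm, project(s_otp, {"c", "m"})))                 # True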


Recent Directions: Conditional Independence

40


What is Conditional Independence (CI)?

Two random variables x and y are independent conditioned on z if they are only correlated through z: fixing any value of z, the value of x gives no information about the value of y.

41


Main Idea: Lift to Markov Kernels

Maps of type M(S) → Distr(M(T))

◮ S ⊆ T: maps must “preserve input to output” ◮ Plain distributions encoded as M(∅) → Distr(M(T))

CI expressible in terms of kernels

Let ⊙ be Kleisli composition and ⊗ be “parallel” composition. If we can decompose: µ = µz ⊙ (µx ⊗ µy) with µx : M(z) → Distr(M(x, z)), µy : M(z) → Distr(M(y, z)), then x and y are independent conditioned on z.

42
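
A Python sketch of this decomposition (an editorial addition): kernels are encoded as functions from an input memory to a distribution over extended memories, `kleisli` and `parallel` stand in for ⊙ and ⊗, and the noisy-copy example at the end is mine, not from the slides:

    def kleisli(k1, k2):
        """Kleisli composition k1 ⊙ k2: run k1, then feed each outcome to k2."""
        def composed(mem):
            out = {}
            for mid, p in k1(mem).items():
                for fin, q in k2(dict(mid)).items():
                    out[fin] = out.get(fin, 0.0) + p * q
            return out
        return composed

    def parallel(k1, k2):
        """Parallel composition k1 ⊗ k2: run both kernels on the same input and
        merge outcomes that agree on shared variables (e.g. the preserved input)."""
        def composed(mem):
            out = {}
            for m1, p in k1(mem).items():
                for m2, q in k2(mem).items():
                    d1, d2 = dict(m1), dict(m2)
                    if all(d2.get(v, val) == val for v, val in d1.items()):
                        out[m1 | m2] = out.get(m1 | m2, 0.0) + p * q
            return out
        return composed

    # Example: z is a fair coin; x and y are independent noisy copies of z.
    def mu_z(mem):
        return {frozenset({("z", b)}): 0.5 for b in (0, 1)}

    def noisy_copy(var):
        def k(mem):
            return {frozenset({("z", mem["z"]), (var, b)}):
                    (0.75 if b == mem["z"] else 0.25) for b in (0, 1)}
        return k

    # µ = µz ⊙ (µx ⊗ µy): x and y are independent conditioned on z.
    mu = kleisli(mu_z, parallel(noisy_copy("x"), noisy_copy("y")))
    print(mu({}))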


DIBI: Dependent and Independent BI

Main idea: add a non-commutative conjunction P ⨟ Q

◮ States are now kernels ◮ P ∗ Q: parallel composition of kernels ◮ P ⨟ Q: Kleisli composition of kernels

Interaction: reverse exchange law

(P ⨟ Q) ∗ (R ⨟ S) ⊢ (P ∗ R) ⨟ (Q ∗ S)

Reverse of the usual direction (cf. Concurrent Kleene Algebra)

43


See the Papers for More Details

A Probabilistic Separation Logic (POPL 2020)

◮ Extensions to PSL: deterministic variables, loops, etc. ◮ Many examples from cryptography, security of ORAM ◮ arXiv: https://arxiv.org/abs/1907.10708

A Logic to Reason about Dependence and Independence

◮ Details about DIBI, sound and complete Hilbert system ◮ Models capturing join dependency in relational algebra ◮ A separation logic (CPSL) based on DIBI ◮ arXiv: available soon, or send an email

44


Justin Hsu

UW–Madison Computer Sciences

A Probabilistic Separation Logic

45