
Efficient Analysis of Probabilistic Programs with an Unbounded Counter - PowerPoint PPT Presentation



  1. Efficient Analysis of Probabilistic Programs with an Unbounded Counter (CAV 2011)

  Tomáš Brázdil¹, Stefan Kiefer², Antonín Kučera¹
  ¹ Masaryk University, Brno, Czech Republic
  ² University of Oxford, UK

  2. Evaluation of And-Or Trees

  procedure AND(node)
    if node is a leaf: return node.value
    else:
      for each successor s of node:
        if OR(s) = 0 then return 0
      return 1
  procedure OR(node) ... (dual; evaluate only when necessary)

  [figure: an And-Or tree with alternating ∧/∨ levels and 0/1 leaves]
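The slide's pseudocode can be made runnable. Below is a minimal sketch; the tagged-tuple tree representation (a leaf is an int, an inner node is `("and", children)` or `("or", children)`) is my own encoding, not from the talk.

```python
# A leaf is just 0 or 1; an inner node is ("and", children) or ("or", children).
def AND(node):
    if isinstance(node, int):        # leaf: return its value
        return node
    _, kids = node
    for s in kids:
        if OR(s) == 0:               # evaluate successors only as needed
            return 0
    return 1

def OR(node):
    if isinstance(node, int):
        return node
    _, kids = node
    for s in kids:
        if AND(s) == 1:              # dual: stop as soon as one successor is 1
            return 1
    return 0

tree = ("and", [("or", [1, 0]), ("or", [0, 0, 1])])
print(AND(tree))  # -> 1
```

The short-circuiting in both loops is exactly the "evaluate only when necessary" remark on the slide: once an AND sees a 0 (or an OR sees a 1), the remaining successors are never generated or visited.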


  5. Evaluation of And-Or Trees

  procedure AND(node)
    if node is a leaf: return node.value
    else:
      for each successor s of node:
        if OR(s) = 0 then return 0
      return 1
  procedure OR(node) ... (dual; evaluate only when necessary)

  What is the average runtime? Cannot tell: the program may not even terminate.

  Probabilistic assumptions:
  - an AND node has 3 children on average (geometric distribution)
  - an OR node has 2 children on average
  - a branch has length 4 on average
  - Pr(leaf evaluates to 0) = Pr(leaf evaluates to 1) = 1/2

  Under these probabilistic assumptions: approximate the expected runtime efficiently.

  6. Probabilistic Counter Machines

  Probabilistic Counter Machines induce infinite Markov chains. Example rules:

  q ֒→ r(+1) with probability 0.6        q ֒→ q(−1) with probability 0.4
  r ֒→ q(±0) with probability 0.3        r ֒→ r(−1) with probability 0.7

  [figure: the induced Markov chain over configurations (q,n), (r,n) for n = 0, 1, 2, 3, ...]
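The induced Markov chain can be simulated directly. A minimal sketch, encoding the four example rules above as data and estimating the expected number of steps until the counter hits 0 (the rule probabilities are as reconstructed above; the step cap is a safety device of this sketch, not part of the model):

```python
import random

random.seed(1)

# The example PCM: rules[state] = list of (prob, next_state, counter_change)
rules = {
    "q": [(0.6, "r", +1), (0.4, "q", -1)],
    "r": [(0.3, "q", 0), (0.7, "r", -1)],
}

def run(state="q", counter=1, cap=100_000):
    """Steps until the counter first hits 0 (termination of the program)."""
    steps = 0
    while counter > 0 and steps < cap:
        outs = rules[state]
        _, state, d = random.choices(outs, weights=[o[0] for o in outs])[0]
        counter += d
        steps += 1
    return steps

avg = sum(run() for _ in range(5000)) / 5000
print(f"estimated E T from (q,1): {avg:.2f}")
```

Because the long-run counter trend of this example is negative (computed on a later slide), the walk reaches counter 0 almost surely and the sample mean converges.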

  7. Modeling a Program as a Prob. Counter Machine

  procedure AND(node)
    if node is a leaf: return node.value
    else:
      for each successor s of node:
        if OR(s) = 0 then return 0
      return 1

  if leaf, return 0 or 1:
    and ֒→ and0(−1) with probability ℓ·z
    and ֒→ and1(−1) with probability ℓ·(1−z)
  otherwise, call OR:
    and ֒→ or(+1) with probability 1−ℓ
  if OR returns 0, return 0 immediately:
    or0 ֒→ and0(−1) with probability 1
  otherwise, maybe call another OR:
    or1 ֒→ or(+1) with probability x
    or1 ֒→ and1(−1) with probability 1−x
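The rules for the AND side can be written down as data and sanity-checked. The concrete parameter values below are my reading of the earlier probabilistic assumptions, not values stated on the slide (and the dual rules for the OR procedure are elided here, as on the slide):

```python
# Hypothetical parameter values, inferred from the earlier assumptions:
l = 0.25   # Pr(node is a leaf): branch length geometric with mean 4
z = 0.5    # Pr(leaf evaluates to 0)
x = 2 / 3  # Pr(a further OR successor exists): mean 3 children, geometric,
           # so by memorylessness another sibling follows with prob. 1 - 1/3

# state -> [(probability, successor_state, counter_change)]
rules = {
    "and": [(l * z, "and0", -1),        # leaf evaluating to 0
            (l * (1 - z), "and1", -1),  # leaf evaluating to 1
            (1 - l, "or", +1)],         # inner node: call OR on a successor
    "or0": [(1.0, "and0", -1)],         # OR returned 0: AND returns 0 at once
    "or1": [(x, "or", +1),              # OR returned 1: maybe call another OR
            (1 - x, "and1", -1)],       # ... no successor left: AND returns 1
}

for state, outs in rules.items():
    assert abs(sum(p for p, _, _ in outs) - 1.0) < 1e-9, state
print("all outgoing probabilities sum to 1")
```

The counter here tracks the recursion depth: +1 on a call, −1 on a return, which is exactly why an unbounded counter suffices to model the program.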

  8. Applications of Probabilistic Counter Machines

  PCMs model infinite-state probabilistic programs:
  - recursion
  - unbounded data structures

  PCMs = discrete-time Quasi-Birth-Death processes:
  - a well-established stochastic model, studied since the late 60s
  - queueing theory, performance evaluation, ...

  Recently: games over (Probabilistic) Counter Machines
  - energy games [Chatterjee, Doyen et al.]
  - optimizing resource consumption in portable devices

  9. Related Model: Probabilistic Pushdown Systems

  Probabilistic Pushdown Systems modify a stack:

  q(X) ֒→ r(YY) with probability 0.3
  q(X) ֒→ r(X) with probability 0.5
  q(X) ֒→ q(ε) with probability 0.2
  q(Y) ֒→ ...    r(X) ֒→ ...    r(Y) ֒→ ...

  Prob. Pushdown Systems (equivalently, Recursive Markov Chains) are more general, but more expensive to analyze.
  PCMs are Prob. Pushdown Systems with a single stack symbol.
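The single-stack-symbol correspondence is easy to see concretely: over a one-letter stack alphabet, a configuration carries no information beyond the stack height, which is exactly a counter. A small sketch (the rule encoding is mine, chosen only for illustration):

```python
# A pPDA rule q(X) -> r(XX) pushes one symbol; over the single-symbol
# alphabet {X} this is exactly the PCM rule q -> r(+1): the counter value
# is the stack height.
def apply_ppda_rule(config, rule):
    """config = (state, stack as list); rule = (state, top, new_state, pushed)."""
    state, stack = config
    s, top, ns, pushed = rule
    assert state == s and stack and stack[-1] == top, "rule not applicable"
    return ns, stack[:-1] + list(pushed)

cfg = ("q", ["X", "X"])                            # counter value 2
cfg = apply_ppda_rule(cfg, ("q", "X", "r", "XX"))  # the +1 rule
print(cfg)  # ('r', ['X', 'X', 'X'])  -> counter value 3
```

With two or more stack symbols (X and Y in the example above), the stack contents matter and this reduction no longer applies, which is why general pPDA are more expensive to analyze.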

  10. Probabilistic Counter Machines

  [the example PCM and its induced Markov chain from slide 6, shown again]

  11. Trend

  Runtime T := number of steps from (q, 1) to (∗, 0). We want to efficiently approximate E T.
  Trend t := "average increase of the counter per step". Assume t < 0.
  Intuition: the more negative the trend t, the smaller T.


  18. Trend

  Runtime T := number of steps from (q, 1) to (∗, 0). We want to efficiently approximate E T.
  Trend t := "average increase of the counter per step". Assume t < 0.
  Intuition: the more negative the trend t, the smaller T.

  Proposition (from martingale theory: Azuma's inequality)
  Let m(0), m(1), m(2), ... be random variables with m(0) = 1. Let t < 0.
  Assume E(m(k+1) | m(k)) = m(k) + t for all k.
  Then for all k: Pr(m(k) ≥ 1) ≤ aᵏ, where a = e^(−t²/2) < 1.

  Consequence:
  Pr(T > k) ≤ Pr(m(k) ≥ 1) ≤ aᵏ
  E T = Σ_{k=0}^{∞} Pr(T > k) ≤ 1/(1 − a)

  [figure: a sample trajectory m(0) = 1, m(1), m(2), ... drifting downward]
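The bound can be checked numerically on the simplest case: a single-state counter machine, where the counter is a biased random walk and the trend really is the same constant t at every step. This toy walk (down with probability 0.7, up with 0.3, so t = −0.4) is my example, not one from the talk:

```python
import math
import random

random.seed(2)

# Single-state machine: counter -1 w.p. 0.7, +1 w.p. 0.3, trend t = -0.4.
t = -0.4
a = math.exp(-t * t / 2)        # Azuma decay factor, < 1
bound = 1 / (1 - a)             # E T <= 1/(1-a) from the slide

def T():
    """Steps for the walk to reach 0 from counter value 1."""
    c, steps = 1, 0
    while c > 0:
        c += 1 if random.random() < 0.3 else -1
        steps += 1
    return steps

avg = sum(T() for _ in range(20_000)) / 20_000
print(f"sampled E T = {avg:.2f}, Azuma bound = {bound:.2f}")
```

For this walk the exact expectation is 1/(0.7 − 0.3) = 2.5, comfortably below the Azuma bound of about 13: the bound is loose, but it is computable from the trend alone, which is the point.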

  19. Trend

  (same as the previous slide, with one addition:)
  But the trend must be independent of k :-(

  20. Trend

  The average counter increase depends on the control state:
  for q: 0.4·(−1) + 0.6·(+1) = 0.2
  for r: 0.3·0 + 0.7·(−1) = −0.7

  [figure: the Markov chain over configurations (q,n), (r,n), as on slide 6]
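The two per-state trends are from the slide; averaging them under the stationary distribution of the control states is one standard way to obtain a single long-run trend (my addition here, sketched under that assumption):

```python
# Per-state counter trends for the example PCM (from the slide):
t_q = 0.6 * (+1) + 0.4 * (-1)   # = 0.2
t_r = 0.3 * 0 + 0.7 * (-1)      # = -0.7

# Stationary distribution of the control-state chain
# (q -> r w.p. 0.6, r -> q w.p. 0.3):
# pi_q * 0.6 = pi_r * 0.3  =>  pi_r = 2 * pi_q  =>  pi = (1/3, 2/3)
pi_q, pi_r = 1 / 3, 2 / 3

# Long-run trend: stationary average of the per-state trends
trend = pi_q * t_q + pi_r * t_r
print(t_q, t_r, trend)  # approximately 0.2, -0.7, -0.4
```

Even though the counter drifts upward while in q, the chain spends two thirds of its time in r, so the overall trend is negative and the runtime is finite in expectation.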
