
CSC373 Week 11: Randomized Algorithms (373F19 — Nisarg Shah & Karan Singh) - PowerPoint PPT Presentation



  1. CSC373 Week 11: Randomized Algorithms

  2. Randomized Algorithms
      [Diagram] A deterministic algorithm: Input → Algorithm → Output.
      A randomized algorithm: Input + Randomness → Algorithm → Output.

  3. Randomized Algorithms
      • Running time
        ➢ Sometimes we want the algorithm to always take a small amount of time
          o Regardless of both the input and the random coin flips
        ➢ Sometimes we want the algorithm to take a small amount of time in expectation
          o Expectation over the random coin flips
          o Still regardless of the input

  4. Randomized Algorithms
      • Efficiency
        ➢ We want the algorithm to return a solution that is, in expectation, close to the optimum according to the objective under consideration
          o Once again, the expectation is over the random coin flips
          o We want this to hold for every input

  5. Derandomization
      • For some problems, it is easy to come up with a very simple randomized approximation algorithm
      • Later, one can ask whether this algorithm can be "derandomized"
        ➢ Informally, the randomized algorithm makes some random choices, and sometimes they turn out to be good
        ➢ Can we make these "good" choices deterministically?

  6. Recap: Probability Theory
      • Random variable X
        ➢ Discrete
          o Takes value v_1 with probability p_1, v_2 w.p. p_2, …
          o Expected value E[X] = p_1·v_1 + p_2·v_2 + ⋯
          o Example: the roll of a six-sided die (takes values 1 through 6 with probability 1/6 each)
        ➢ Continuous
          o Has a probability density function (pdf) f
          o Its integral is the cumulative distribution function (cdf) F: F(x) = Pr[X ≤ x]
          o Expected value E[X] = ∫ x·f(x) dx
          o Examples: normal distribution, exponential distribution, uniform distribution over [0,1], …
      (Both example expectations are worked out below.)
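To make the two expectation formulas concrete, here are the slide's own examples worked out: the fair six-sided die, and a uniform random variable U over [0,1], whose pdf is f(x) = 1 on that interval:

```latex
E[X] = \sum_{i=1}^{6} \frac{1}{6}\cdot i = \frac{21}{6} = 3.5
\qquad\text{and}\qquad
E[U] = \int_{0}^{1} x \cdot 1 \, dx = \frac{1}{2}.
```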

  7. Recap: Probability Theory
      • Things you should be aware of…
        ➢ Conditional probabilities
        ➢ Independence among random variables
        ➢ Conditional expectations
        ➢ Moments of random variables
        ➢ Standard discrete distributions: uniform over a finite set, Bernoulli, binomial, geometric, Poisson, …
        ➢ Standard continuous distributions: uniform over intervals, Gaussian/normal, exponential, …

  8. Three Pillars
      Linearity of Expectation • Union Bound • Chernoff Bound
      • Deceptively simple, but incredibly powerful!
      • Many, many probabilistic results are just interesting applications of these three results

  9. Three Pillars
      • Linearity of expectation
        ➢ E[X + Y] = E[X] + E[Y]
        ➢ This does not require any independence assumptions about X and Y
        ➢ E.g., if you want to find out how many people will attend your party on average, just ask each person the probability with which they will attend, and add these up (see the simulation sketch below)
          o It does not matter that some of them are friends, and will either attend together or not attend together
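A quick simulation sketch of the party example. The attendance probabilities below are made up for illustration, and guests 1 and 2 are deliberately correlated (they decide jointly) to show that the expectation is unaffected:

```python
import random

# Hypothetical attendance probabilities for four guests.
# By linearity of expectation, the expected head-count is just their sum:
probs = [0.9, 0.9, 0.5, 0.2]
print("sum of probabilities:", sum(probs))   # 2.5

def simulate() -> int:
    """One random party: guests 1 and 2 attend together or not at all."""
    together = random.random() < 0.9         # joint decision of guests 1 and 2
    guest3 = random.random() < 0.5           # guest 3 decides independently
    guest4 = random.random() < 0.2           # guest 4 decides independently
    return 2 * together + guest3 + guest4

trials = 100_000
avg = sum(simulate() for _ in range(trials)) / trials
print("simulated average:", avg)             # ~2.5 despite the correlation
```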

  10. Three Pillars
      • Union bound
        ➢ For any two events A and B, Pr[A ∪ B] ≤ Pr[A] + Pr[B]
        ➢ "The probability that at least one of the n events B_1, …, B_n occurs is at most Σ_i Pr[B_i]"
        ➢ Typically, B_1, …, B_n are "bad events"
          o You do not want any of them to occur
          o If you can individually bound Pr[B_i] ≤ 1/(2n) for each i, then the probability that at least one of them occurs is ≤ 1/2
          o So with probability ≥ 1/2, none of the bad events occurs (the calculation is written out below)
      • Chernoff bound & Hoeffding's inequality
        ➢ Read up!
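Written out, the "bad events" calculation from this slide:

```latex
\Pr\Big[\bigcup_{i=1}^{n} B_i\Big]
  \;\le\; \sum_{i=1}^{n} \Pr[B_i]
  \;\le\; n \cdot \frac{1}{2n}
  \;=\; \frac{1}{2},
```

so all n bad events are simultaneously avoided with probability at least 1/2.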

  11. Exact Max-k-SAT

  12. Exact Max-k-SAT
      • Problem (recall)
        ➢ Input: An exact k-SAT formula φ = C_1 ∧ C_2 ∧ ⋯ ∧ C_m, where each clause C_j has exactly k literals, and a weight w_j ≥ 0 for each clause C_j
        ➢ Output: A truth assignment τ maximizing the number (or total weight) of clauses satisfied under τ
        ➢ Let us denote by W(τ) the total weight of the clauses satisfied under τ (one possible encoding, with an evaluator for W, is sketched below)
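The slides do not commit to a data representation. Here is a minimal sketch of one possible encoding — signed integers for literals, as in the DIMACS convention (+v means x_v, −v means its negation) — together with an evaluator for W(τ). All names here are illustrative, not from the slides:

```python
# Example exact Max-2-SAT instance:
#   C_1 = (x1 OR x2),  C_2 = (NOT x1 OR x3),  C_3 = (NOT x2 OR NOT x3)
clauses = [(1, 2), (-1, 3), (-2, -3)]
weights = [2.0, 1.0, 1.0]

def satisfied_weight(clauses, weights, tau):
    """W(tau): total weight of clauses satisfied under assignment tau.

    tau maps each variable index to a bool; literal l is true under tau
    exactly when tau[abs(l)] == (l > 0).
    """
    return sum(w for c, w in zip(clauses, weights)
               if any(tau[abs(l)] == (l > 0) for l in c))

print(satisfied_weight(clauses, weights, {1: True, 2: False, 3: True}))  # 4.0
```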

  13. Exact Max-k-SAT
      • Recall our local search
        ➢ N_d(τ) = set of all truth assignments which can be obtained by changing the values of at most d variables in τ
      • Result 1: Neighborhood N_1(τ) ⇒ 2/3-apx for Exact Max-2-SAT
      • Result 2: Neighborhood N_1(τ) ∪ {τ̄} (where τ̄ is the complement of τ) ⇒ 3/4-apx for Exact Max-2-SAT
      • Result 3: Neighborhood N_1(τ) + non-oblivious local search ⇒ 3/4-apx for Exact Max-2-SAT

  14. Exact Max-k-SAT
      • Recall our local search
        ➢ N_d(τ) = set of all truth assignments which can be obtained by changing the values of at most d variables in τ
      • We claimed that the ¾-apx for Exact Max-2-SAT can be generalized to a (2^k − 1)/2^k-apx for Exact Max-k-SAT
        ➢ The algorithm becomes slightly more complicated
      • What can we do with randomized algorithms?

  15. Exact Max-k-SAT
      • Recall:
        ➢ We have a formula φ = C_1 ∧ C_2 ∧ ⋯ ∧ C_m
        ➢ Variables = x_1, …, x_n; literals = variables or their negations
        ➢ Each clause contains exactly k literals
      • The most naïve randomized algorithm
        ➢ Set each variable to TRUE with probability ½ and to FALSE with probability ½ (a sketch follows below)
      • How good is this?
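A sketch of the naïve algorithm, using the variable-index convention from the earlier encoding example (the function name is an assumption, not from the slides):

```python
import random

def naive_randomized(num_vars):
    """Set each variable independently to True w.p. 1/2, False w.p. 1/2."""
    return {v: random.random() < 0.5 for v in range(1, num_vars + 1)}

tau = naive_randomized(3)   # e.g. {1: True, 2: False, 3: True}
```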

  16. Exact Max-k-SAT
      • Recall:
        ➢ We have a formula φ = C_1 ∧ C_2 ∧ ⋯ ∧ C_m
        ➢ Variables = x_1, …, x_n; literals = variables or their negations
        ➢ Each clause contains exactly k literals
      • For each clause C_j:
        ➢ Pr[C_j is not satisfied] = 1/2^k (WHY? C_j fails only if all k of its literals are false, and its k literals involve k distinct variables, each set independently)
        ➢ Hence, Pr[C_j is satisfied] = (2^k − 1)/2^k

  17. Exact Max-k-SAT
      • For each clause C_j:
        ➢ Pr[C_j is not satisfied] = 1/2^k
        ➢ Hence, Pr[C_j is satisfied] = (2^k − 1)/2^k
      • Let τ denote the random assignment
        ➢ E[W(τ)] = Σ_{j=1}^{m} w_j · Pr[C_j is satisfied] (Which pillar did we just use? Linearity of expectation.)
        ➢ E[W(τ)] = ((2^k − 1)/2^k) · Σ_{j=1}^{m} w_j ≥ ((2^k − 1)/2^k) · OPT, since OPT can satisfy at most the total weight Σ_j w_j of all clauses

  18. Derandomization
      • Can we derandomize this algorithm?
        ➢ What are the choices made by the algorithm?
          o Setting the values of x_1, x_2, …, x_n
        ➢ How do we know which set of choices is good?
      • Idea:
        ➢ Do not think about all the choices at once.
        ➢ Think about them one by one.

  19. Derandomization
      • Say you want to deterministically make the right choice for x_1
        ➢ The choices of x_2, …, x_n are still random
        ➢ E[W(τ)] = Pr[x_1 = T] · E[W(τ) | x_1 = T] + Pr[x_1 = F] · E[W(τ) | x_1 = F]
                  = ½ · E[W(τ) | x_1 = T] + ½ · E[W(τ) | x_1 = F]
        ➢ This means at least one of E[W(τ) | x_1 = T] and E[W(τ) | x_1 = F] must be at least as large as E[W(τ)]
          o Moreover, both quantities can be computed, so we can take the better of the two!
          o For now, forget about the running time…

  20. Derandomization
      • Once we have made the right choice for x_1 (say T), we can apply the same logic to x_2
        ➢ E[W(τ) | x_1 = T] = ½ · E[W(τ) | x_1 = T, x_2 = T] + ½ · E[W(τ) | x_1 = T, x_2 = F]
        ➢ And then we can pick the choice that leads to the better conditional expectation
      • Derandomized Algorithm:
        ➢ For i = 1, …, n
          o Let v_i = T if E[W(τ) | x_1 = v_1, …, x_{i−1} = v_{i−1}, x_i = T] ≥ E[W(τ) | x_1 = v_1, …, x_{i−1} = v_{i−1}, x_i = F], and v_i = F otherwise
          o Set x_i = v_i

  21. Derandomization
      • This is called the method of conditional expectations
        ➢ If we're happy when making a choice at random, we should be at least as happy conditioned on at least one of the possible values of that choice
      • Derandomized Algorithm:
        ➢ For i = 1, …, n
          o Let v_i = T if E[W(τ) | x_1 = v_1, …, x_{i−1} = v_{i−1}, x_i = T] ≥ E[W(τ) | x_1 = v_1, …, x_{i−1} = v_{i−1}, x_i = F], and v_i = F otherwise
          o Set x_i = v_i
        ➢ How do we compare the two conditional expectations? (One concrete way is sketched below.)
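One concrete answer, sketched under the signed-literal encoding from the earlier example. By linearity of expectation, E[W(τ) | partial assignment] = Σ_j w_j · Pr[C_j satisfied | partial assignment], and for a single clause that probability is 1 if some already-fixed literal is true, and 1 − (1/2)^u otherwise, where u is the number of still-unset literals in the clause. The helper names below are illustrative:

```python
from fractions import Fraction

def prob_satisfied(clause, partial):
    """Pr[clause satisfied] when each still-unset variable is T/F w.p. 1/2."""
    unset = 0
    for lit in clause:
        var = abs(lit)
        if var in partial:
            if partial[var] == (lit > 0):        # literal already true
                return Fraction(1)
        else:
            unset += 1
    # The clause fails only if every unset literal comes up false;
    # when unset == 0 and no fixed literal is true, this correctly gives 0.
    return 1 - Fraction(1, 2) ** unset

def cond_expectation(clauses, weights, partial):
    """E[W(tau) | variables fixed in `partial`], by linearity of expectation."""
    return sum(w * prob_satisfied(c, partial)
               for c, w in zip(clauses, weights))

def derandomize(clauses, weights, num_vars):
    """Method of conditional expectations: fix x_1, ..., x_n one at a time."""
    partial = {}
    for v in range(1, num_vars + 1):
        e_true = cond_expectation(clauses, weights, {**partial, v: True})
        e_false = cond_expectation(clauses, weights, {**partial, v: False})
        partial[v] = e_true >= e_false           # keep the better branch
    return partial
```

Each step evaluates two conditional expectations in O(m·k) time, so the whole derandomization runs in O(n·m·k) time, and its output τ satisfies W(τ) ≥ E[W(τ)] ≥ ((2^k − 1)/2^k)·OPT.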
