Randomness in Computing: Lecture 3


  1. Randomness in Computing, LECTURE 3
     Last time: • Probability amplification • Verifying matrix multiplication
     Today: • More probability amplification • Randomized Min-Cut • Random variables
     1/28/2020, Sofya Raskhodnikova; Randomness in Computing

  2. Review question: balls and bins
     We have two bins with balls.
     • Bin 1 contains 3 black balls and 2 white balls.
     • Bin 2 contains 1 black ball and 1 white ball.
     We pick a bin uniformly at random. Then we pick a ball uniformly at random from that bin.
     What is the probability that we picked bin 1, given that we picked a white ball?
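For reference (not part of the original slides): by Bayes' law, Pr[bin 1 | white] = (1/2 · 2/5) / (1/2 · 2/5 + 1/2 · 1/2) = (1/5) / (9/20) = 4/9. A minimal Monte Carlo check in Python, assuming the setup above:

    import random

    # Estimate Pr[bin 1 | white ball] by simulation; should approach 4/9 = 0.444...
    def trial():
        bin_choice = random.choice([1, 2])        # pick a bin uniformly
        balls = ["B", "B", "B", "W", "W"] if bin_choice == 1 else ["B", "W"]
        return bin_choice, random.choice(balls)   # pick a ball uniformly from that bin

    trials = [trial() for _ in range(100_000)]
    white = [b for b, ball in trials if ball == "W"]
    print(sum(1 for b in white if b == 1) / len(white))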

  3. Bayesian Approach to Amplification
     How does our confidence increase with the number of trials?
     • C = event that the identity is correct
     • A = event that the test accepts
     Our analysis of Basic Freivalds: Pr[A | C̄] ≤ 1/2; 1-sided error: Pr[A | C] = 1.
     Assumption (initial belief, or "prior"): Pr[C] = 1/2.
     By Bayes' law,
       Pr[C | A] = Pr[A | C]·Pr[C] / (Pr[A | C]·Pr[C] + Pr[A | C̄]·Pr[C̄])
                 ≥ (1 · 1/2) / (1 · 1/2 + (1/2) · (1/2)) = 2/3.
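For context, a minimal sketch of the Basic Freivalds test analyzed above (the function name is illustrative, and square integer matrices are assumed, given as lists of lists): to check whether AB = C, pick a random 0/1 vector r and compare A(Br) with Cr. The test always accepts when AB = C, and accepts with probability at most 1/2 when AB ≠ C.

    import random

    # One run of the Freivalds test: O(n^2) work instead of a full matrix product.
    def freivalds_accepts(A, B, C):
        n = len(A)
        r = [random.randint(0, 1) for _ in range(n)]                   # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        return ABr == Cr  # True = the event A ("test accepts") in the slides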

  4. Bayesian Approach to Amplification
     How does our confidence increase with the number of trials?
     • C = event that the identity is correct
     • A = event that the test accepts
     Our analysis of Basic Freivalds: Pr[A | C̄] ≤ 1/2; 1-sided error: Pr[A | C] = 1.
     Assumption (prior, taken from the previous slide's posterior): Pr[C] = 2/3.
     By Bayes' law,
       Pr[C | A] = Pr[A | C]·Pr[C] / (Pr[A | C]·Pr[C] + Pr[A | C̄]·Pr[C̄])
                 ≥ (1 · 2/3) / (1 · 2/3 + (1/2) · (1/3)) = 4/5.

  5. Bayesian Approach to Amplification
     How does our confidence increase with the number of trials?
     • C = event that the identity is correct
     • A = event that the test accepts
     Our analysis of Basic Freivalds: Pr[A | C̄] ≤ 1/2; 1-sided error: Pr[A | C] = 1.
     Assumption (prior, after i accepting trials): Pr[C] = 2^i / (2^i + 1).
     By Bayes' law,
       Pr[C | A] = Pr[A | C]·Pr[C] / (Pr[A | C]·Pr[C] + Pr[A | C̄]·Pr[C̄])
                 ≥ (1 · 2^i/(2^i + 1)) / (1 · 2^i/(2^i + 1) + (1/2) · 1/(2^i + 1))
                 = 2^(i+1) / (2^(i+1) + 1).
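A short numeric check (an added sketch, not from the slides): starting from the prior Pr[C] = 1/2, each accepting trial maps p to p / (p + (1/2)(1 − p)), which matches the closed form 2^i / (2^i + 1) above, so the confidence tends to 1.

    # Posterior after each accepting trial vs. the closed form 2^i / (2^i + 1).
    p = 0.5                                  # prior Pr[C]
    for i in range(1, 11):
        p = p / (p + 0.5 * (1 - p))          # Bayes update after one accepting trial
        print(i, round(p, 6), round(2**i / (2**i + 1), 6))  # the two columns agree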

  6. § 1.5 (MU) Randomized Min-Cut
     Given: undirected graph G = (V, E).
     A global cut of G is a partition of V into non-empty, disjoint sets S, T.
     The cutset of the cut is the set of edges that connect the parts: {(u, v) | u ∈ S, v ∈ T}.
     Goal: Find the min cut in G (a cut with the smallest cutset).
     Applications: network reliability, network design, clustering.
     Exercise: How many distinct cuts are there in a graph G with n nodes?
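To make the definitions concrete, a brute-force sketch (the representation and names are illustrative, not from the slides) that enumerates the global cuts of a small graph given as an edge list and picks one with the smallest cutset. It is exponential in n and only meant to illustrate the definitions and the exercise.

    from itertools import combinations

    # Enumerate all global cuts (S, T) of a graph, one per complementary pair.
    def all_cuts(nodes, edges):
        for k in range(1, len(nodes)):
            for S in combinations(nodes, k):
                if nodes[0] in S:  # fix one node in S so (S,T) and (T,S) aren't both counted
                    S = set(S)
                    cutset = [(u, v) for u, v in edges if (u in S) != (v in S)]
                    yield S, cutset

    nodes = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]              # a 4-cycle
    S, cutset = min(all_cuts(nodes, edges), key=lambda c: len(c[1]))
    print(S, cutset)                                      # a min cut; its cutset has 2 edges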

  7. Min-Cut Algorithms
     Given: undirected graph G = (V, E) with n nodes and m edges.
     Goal: Find the min cut in G.
     Algorithms for Min-Cut:
     • Deterministic [Stoer-Wagner '97]: O(mn + n² log n) time
     • Randomized [Karger '93]: O(n²m log n) time, though there have been improvements since

  8. § 1.5 (MU) Karger's Min-Cut Algorithm
     Idea: Repeatedly pick a random edge and put its endpoints on the same side of the cut.
     Basic operation: contraction of an edge (u, v)
     • Merge u and v into one node
     • Eliminate all edges connecting u and v
     • Keep all other edges, including parallel edges (but no self-loops)
     Claim: A cutset of the contracted graph is also a cutset of the original graph.
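A minimal sketch of the contraction step, assuming the multigraph is stored as a list of edges (parallel edges appear as repeated entries; the names are illustrative):

    # Contract edge (u, v): relabel v as u, drop self-loops, keep parallel edges.
    def contract(edges, u, v):
        merged = []
        for a, b in edges:
            a = u if a == v else a
            b = u if b == v else b
            if a != b:               # edges between u and v become self-loops: eliminate
                merged.append((a, b))
        return merged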

  9. § 1.5 (MU) Karger's Min-Cut Algorithm
     Algorithm Basic-Karger (input: undirected graph G = (V, E))
     1. While |V| > 2:
     2.   choose e ∈ E uniformly at random
     3.   G ← graph obtained by contracting e in G
     4. Return the only cut in G.
     Theorem: Basic-Karger returns a min cut with probability ≥ 2/(n(n−1)).
     Probability Amplification: Repeat t = n(n−1) ln n times and return the smallest cut found.
     Running time of Basic-Karger:
     • Easy: O(m) per contraction, so O(mn)
     • View as Kruskal's MST algorithm on G with edge weights w(e_i) = π(i) for a random permutation π, run until two components are left: O(m log n)
     • Best known implementation: O(m)
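Putting the pieces together, a runnable sketch of Basic-Karger and its amplified version (it reuses the contract function from the previous sketch and assumes a connected input graph; this is the easy O(mn)-per-run implementation, not the optimized ones):

    import math
    import random

    def basic_karger(edges):
        edges = list(edges)
        while len({x for e in edges for x in e}) > 2:   # while |V| > 2
            u, v = random.choice(edges)                 # e in E uniformly at random
            edges = contract(edges, u, v)               # parallel entries keep this uniform
        return edges                                    # cutset of the only remaining cut

    def amplified_karger(edges, n):
        t = int(n * (n - 1) * math.log(n)) + 1          # repeat n(n-1) ln n times
        return min((basic_karger(edges) for _ in range(t)), key=len)

    print(amplified_karger([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 4))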

  10. Measurements in random experiments
     • Example 1: coin flips
       – Measurement X: number of heads.
       – E.g., if the outcome is HHTH, then X = 3.
     • Example 2: permutations
       – n students exchange their hats, so that everybody gets a random hat.
       – Measurement X: number of students that got their own hats.
       – E.g., if students 1, 2, 3 got hats 2, 1, 3, then X = 1.
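A quick simulation of Example 2 (an added sketch; the numbers are illustrative): sample random hat assignments and measure X, the number of students who get their own hat. The sample average comes out close to 1, for any n.

    import random

    # X = number of fixed points of a uniformly random hat assignment.
    def hat_check(n):
        hats = list(range(n))
        random.shuffle(hats)                 # everybody gets a random hat
        return sum(1 for s in range(n) if hats[s] == s)

    samples = [hat_check(10) for _ in range(100_000)]
    print(sum(samples) / len(samples))       # close to 1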

  11. Random variables: definition
     • A random variable X on a sample space Ω is a function X: Ω → ℝ that assigns to each sample point ω ∈ Ω a real number X(ω).
     • For each random variable, we should understand:
       – The set of values it can take.
       – The probabilities with which it takes on these values.
     • The distribution of a discrete random variable X is the collection of pairs (a, Pr[X = a]).
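As an illustration (added; the example is ours, not the slides'), the distribution of X = number of heads in 4 fair coin flips, computed exactly as the collection of pairs (a, Pr[X = a]):

    from collections import Counter
    from fractions import Fraction
    from itertools import product

    outcomes = list(product("HT", repeat=4))             # the sample space, |Ω| = 16
    counts = Counter(o.count("H") for o in outcomes)     # X(ω) = number of heads in ω
    dist = [(a, Fraction(c, len(outcomes))) for a, c in sorted(counts.items())]
    print(dist)   # [(0, 1/16), (1, 1/4), (2, 3/8), (3, 1/4), (4, 1/16)]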
