
CS 498ABD: Algorithms for Big Data, Spring 2019

Introduction to Randomized Algorithms: QuickSort

Lecture 2

January 17, 2019


Outline

Our goal

Basics of randomization: probability space, events, random variables, expectation, etc.

Randomized algorithms, two types: Las Vegas and Monte Carlo

Randomized QuickSort


Part I: Introduction to Randomized Algorithms


Randomized Algorithms

Deterministic algorithm: input x → output y.

Randomized algorithm: input x, plus random bits r → output y_r.


Example: Randomized QuickSort

QuickSort:

1. Pick a pivot element from the array.
2. Split the array into 3 subarrays: those smaller than the pivot, those larger than the pivot, and the pivot itself.
3. Recursively sort the subarrays, and concatenate them.

Randomized QuickSort:

1. Pick a pivot element uniformly at random from the array.
2. Split the array into 3 subarrays: those smaller than the pivot, those larger than the pivot, and the pivot itself.
3. Recursively sort the subarrays, and concatenate them.
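A minimal Python sketch of randomized QuickSort as described above; the function name and the use of random.choice are our choices, not from the slides:

```python
import random

def randomized_quicksort(a):
    """Sort a list by recursively partitioning around a uniformly random pivot."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                # step 1: pivot uniformly at random
    smaller = [x for x in a if x < pivot]   # step 2: split into 3 subarrays
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    # step 3: recursively sort the subarrays and concatenate
    return randomized_quicksort(smaller) + equal + randomized_quicksort(larger)
```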

Example: Randomized QuickSort

Recall: QuickSort can take Ω(n²) time to sort an array of size n.

Theorem

Randomized QuickSort sorts a given array of length n in O(n log n) expected time.

Note: On every input randomized QuickSort takes O(n log n) time in expectation; on every input it may take Ω(n²) time with some small probability.

Example: Verifying Matrix Multiplication

Problem

Given three n × n matrices A, B, C, is AB = C?

Deterministic algorithm:

1. Multiply A and B and check if the product equals C.
2. Running time? O(n³) by the straightforward approach; O(n^2.37) with fast matrix multiplication (complicated and impractical).

Example: Verifying Matrix Multiplication

Problem

Given three n × n matrices A, B, C, is AB = C?

Randomized algorithm:

1. Pick a random n × 1 vector r (say, with each entry chosen independently and uniformly from {0, 1}).
2. Return the answer of the equality ABr = Cr.
3. Running time? O(n²)! Computing Br, then A(Br), and Cr takes three matrix-vector products, each O(n²).

Theorem

If AB = C then the algorithm will always say YES. If AB ≠ C then the algorithm will say YES with probability at most 1/2. One can repeat the algorithm 100 times independently to reduce the probability of a false positive to 1/2¹⁰⁰.
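A sketch of this randomized check (known as Freivalds' algorithm) in Python; the function name and the 0/1 choice of entries for r are our assumptions:

```python
import random

def verify_product(A, B, C, trials=100):
    """One-sided check of AB = C: always YES when AB = C; when AB != C,
    says YES with probability at most 1/2**trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]  # random n x 1 vector
        # exploit associativity: A(Br) and Cr are matrix-vector products, O(n^2) each
        Br = [sum(B[i][k] * r[k] for k in range(n)) for i in range(n)]
        ABr = [sum(A[i][k] * Br[k] for k in range(n)) for i in range(n)]
        Cr = [sum(C[i][k] * r[k] for k in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # a witness: definitely AB != C
    return True
```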

Why randomized algorithms?

1. Many, many applications in algorithms, data structures, and computer science!
2. In some cases the only known algorithms are randomized, or randomness is provably necessary.
3. Often randomized algorithms are (much) simpler and/or more efficient.
4. Several deep connections to mathematics, physics, etc.
5. ...
6. Lots of fun!

Average case analysis vs randomized algorithms

Average case analysis:

1. Fix a deterministic algorithm.
2. Assume inputs come from a probability distribution.
3. Analyze the algorithm's average performance over that distribution of inputs.

Randomized algorithms:

1. The algorithm uses random bits in addition to the input.
2. Analyze the algorithm's average performance on the given input, where the average is over the random bits that the algorithm uses.
3. On each input the behaviour of the algorithm is random. Analyze the worst case, over all inputs, of the (average) performance.


Part II: Basics of Discrete Probability

Discrete Probability

We restrict attention to finite probability spaces.

Definition

A discrete probability space is a pair (Ω, Pr) consisting of a finite set Ω of elementary events and a function Pr : Ω → [0, 1] which assigns a probability Pr[ω] to each ω ∈ Ω such that ∑_{ω∈Ω} Pr[ω] = 1.

Example

An unbiased coin. Ω = {H, T} and Pr[H] = Pr[T] = 1/2.

Example

A 6-sided unbiased die. Ω = {1, 2, 3, 4, 5, 6} and Pr[i] = 1/6 for 1 ≤ i ≤ 6.

Events

Definition

Given a probability space (Ω, Pr), an event is a subset of Ω; in other words, an event is a collection of elementary events. The probability of an event A, denoted by Pr[A], is ∑_{ω∈A} Pr[ω].

The complement of an event A ⊆ Ω is the event Ω \ A, frequently denoted by Ā.

Example

A pair of independent dice. Ω = {(i, j) | 1 ≤ i ≤ 6, 1 ≤ j ≤ 6}. Let A be the event that the sum of the two numbers on the dice is even. Then A = {(i, j) ∈ Ω | (i + j) is even} and Pr[A] = |A|/36 = 1/2.

Independent Events

Definition

Given a probability space (Ω, Pr), two events A, B are independent if and only if Pr[A ∩ B] = Pr[A] · Pr[B]. Otherwise they are dependent. In other words, if A, B are independent then one does not affect the other.

Example

Two coins. Ω = {HH, TT, HT, TH} and Pr[HH] = Pr[TT] = Pr[HT] = Pr[TH] = 1/4.

1. A is the event that the first coin is heads and B is the event that the second coin is tails. A, B are independent.
2. A is the event that both coins are not tails and B is the event that the second coin is heads. A, B are dependent.

Union bound

The probability of the union of two events is no bigger than the sum of their probabilities.

Lemma

For any two events E and F, we have Pr[E ∪ F] ≤ Pr[E] + Pr[F].

Proof.

Consider E and F as collections of elementary events (which they are). We have

Pr[E ∪ F] = ∑_{x∈E∪F} Pr[x] ≤ ∑_{x∈E} Pr[x] + ∑_{x∈F} Pr[x] = Pr[E] + Pr[F].

Random Variables

Definition

Given a probability space (Ω, Pr), a (real-valued) random variable X over Ω is a function that maps each elementary event to a real number; in other words, X : Ω → R.

Definition (Expectation)

For a random variable X over a probability space (Ω, Pr), the expectation of X is defined as E[X] = ∑_{ω∈Ω} Pr[ω] X(ω). In other words, the expectation is the average value of X according to the probabilities given by Pr[·].

Expectation

Example

A 6-sided unbiased die. Ω = {1, 2, 3, 4, 5, 6} and Pr[i] = 1/6 for 1 ≤ i ≤ 6.

1. X : Ω → R where X(i) = i mod 2. Then E[X] = ∑_{i=1}^{6} Pr[i] · X(i) = (1/6) ∑_{i=1}^{6} X(i) = 1/2.
2. Y : Ω → R where Y(i) = i². Then E[Y] = ∑_{i=1}^{6} (1/6) · i² = 91/6.

Expected number of vertices?

Let G = (V, E) be a graph with n vertices and m edges. Let H be the graph resulting from independently deleting every vertex of G with probability 1/2. Compute the expected number of vertices in H.

(A) n/2. (B) n/4. (C) m/2. (D) m/4. (E) none of the above.

Expected number of vertices is:

Probability Space

Ω = {0, 1}ⁿ. For ω ∈ {0, 1}ⁿ, ω_v = 1 if vertex v is present in H, else 0. For each ω ∈ Ω, Pr[ω] = 1/2ⁿ.

X(ω) = number of vertices in H as per ω = number of 1s in ω.

E[X] = ∑_{ω∈Ω} Pr[ω] X(ω) = ∑_{ω∈Ω} (1/2ⁿ) X(ω) = (1/2ⁿ) ∑_{k=0}^{n} C(n, k) · k = (1/2ⁿ)(2ⁿ · n/2) = n/2,

using the identity ∑_{k} k · C(n, k) = n · 2ⁿ⁻¹.
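Since Ω is finite, E[X] can be checked by brute force over all 2ⁿ outcomes for small n; a minimal sketch (the function name is ours):

```python
from itertools import product

def expected_vertices(n):
    """Compute E[X] = sum over ω of Pr[ω]·X(ω) exactly over Ω = {0,1}^n."""
    return sum(sum(omega) * (1 / 2**n)   # X(ω) = number of 1s, Pr[ω] = 1/2^n
               for omega in product((0, 1), repeat=n))

print(expected_vertices(6))  # 3.0, i.e., n/2
```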

Expected number of edges?

Let G = (V, E) be a graph with n vertices and m edges. Let H be the graph resulting from independently deleting every vertex of G with probability 1/2. The expected number of edges in H is

(A) n/2. (B) n/4. (C) m/2. (D) m/4. (E) none of the above.

Expected number of edges is:

Probability Space

Ω = {0, 1}ⁿ. For ω ∈ {0, 1}ⁿ, ω_v = 1 if vertex v is present in H, else 0. For each ω ∈ Ω, Pr[ω] = 1/2ⁿ.

X(ω) = number of edges present in H as per ω = ?? How to compute E[X]?

Indicator Random Variables

Definition

A binary random variable is one that takes on values in {0, 1}. A special type of random variable that is quite useful:

Definition

Given a probability space (Ω, Pr) and an event A ⊆ Ω, the indicator random variable X_A is a binary random variable where X_A(ω) = 1 if ω ∈ A and X_A(ω) = 0 if ω ∉ A.

Example

A 6-sided unbiased die. Ω = {1, 2, 3, 4, 5, 6} and Pr[i] = 1/6 for 1 ≤ i ≤ 6. Let A be the event that i is divisible by 3. Then X_A(i) = 1 if i ∈ {3, 6} and 0 otherwise.

Expectation of indicator random variables

Proposition

For an indicator variable X_A, E[X_A] = Pr[A].

Proof.

E[X_A] = ∑_{ω∈Ω} X_A(ω) Pr[ω] = ∑_{ω∈A} 1 · Pr[ω] + ∑_{ω∈Ω\A} 0 · Pr[ω] = ∑_{ω∈A} Pr[ω] = Pr[A].

Linearity of Expectation

Lemma

Let X, Y be two random variables (not necessarily independent) over a probability space (Ω, Pr). Then E[X + Y] = E[X] + E[Y].

Proof.

E[X + Y] = ∑_{ω∈Ω} Pr[ω] (X(ω) + Y(ω)) = ∑_{ω∈Ω} Pr[ω] X(ω) + ∑_{ω∈Ω} Pr[ω] Y(ω) = E[X] + E[Y].

Corollary

E[a_1 X_1 + a_2 X_2 + ... + a_n X_n] = ∑_{i=1}^{n} a_i E[X_i].
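A tiny numerical check of the lemma on a dependent pair, using the two-coin space from the earlier example (taking X = #H and Y = #T is our choice; the same pair reappears later in the lecture):

```python
# Ω = {HH, HT, TH, TT}, each outcome with probability 1/4.
# X = #H and Y = #T are dependent (X + Y = 2 always),
# yet E[X + Y] = E[X] + E[Y] still holds.
omegas = ["HH", "HT", "TH", "TT"]
EX = sum(w.count("H") for w in omegas) / 4
EY = sum(w.count("T") for w in omegas) / 4
EXY = sum(w.count("H") + w.count("T") for w in omegas) / 4
print(EX, EY, EXY)  # 1.0 1.0 2.0
```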

Expected number of edges?

Let G = (V, E) be a graph with n vertices and m edges. Let H be the graph resulting from independently deleting every vertex of G with probability 1/2. The expected number of edges in H is computed as follows.

Let A_e be the event that edge e ∈ E is present in H. For e = (u, v):

Pr[A_e] = Pr[u and v are both present] = Pr[u is present] · Pr[v is present] = (1/2) · (1/2) = 1/4.

Let X_{A_e} be the corresponding indicator random variable, so E[X_{A_e}] = Pr[A_e], and let X = ∑_{e∈E} X_{A_e} (the number of edges in H). Then

E[X] = E[∑_{e∈E} X_{A_e}] = ∑_{e∈E} E[X_{A_e}] = ∑_{e∈E} Pr[A_e] = m/4.

It is important to set up random variables carefully.
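A short simulation consistent with the m/4 answer; the 5-cycle test graph, the trial count, and the function name are our choices:

```python
import random

def avg_surviving_edges(n, edges, trials=100_000):
    """Estimate E[#edges in H] when each vertex survives independently w.p. 1/2."""
    total = 0
    for _ in range(trials):
        alive = {v for v in range(n) if random.random() < 0.5}
        total += sum(1 for u, v in edges if u in alive and v in alive)
    return total / trials

# 5-cycle: n = 5, m = 5, so the expected number of surviving edges is m/4 = 1.25
print(avg_surviving_edges(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```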

Expected number of triangles?

Let G = (V, E) be a graph with n vertices and m edges. Assume G has t triangles (a triangle is a simple cycle on three vertices). Let H be the graph resulting from independently deleting each vertex of G with probability 1/2. The expected number of triangles in H is

(A) t/2. (B) t/4. (C) t/8. (D) t/16. (E) none of the above.

Independent Random Variables

Definition

Random variables X, Y are said to be independent if for all x, y ∈ R, Pr[X = x ∧ Y = y] = Pr[X = x] · Pr[Y = y].

Examples

Two independent unbiased coin flips: Ω = {HH, HT, TH, TT}.

1. X = 1 if the first coin is H, else 0; Y = 1 if the second coin is H, else 0. Independent.
2. X = #H, Y = #T. Dependent. Why? (Here X + Y = 2 always, so the value of X determines the value of Y.)

Independent Random Variables

Lemma

If X and Y are independent then E[XY] = E[X] · E[Y].

Proof.

E[X · Y] = ∑_{ω∈Ω} Pr[ω] (X(ω) · Y(ω)) = ∑_{x,y∈R} Pr[X = x ∧ Y = y] · (x · y) = ∑_{x,y∈R} Pr[X = x] · Pr[Y = y] · x · y = (∑_{x∈R} Pr[X = x] · x)(∑_{y∈R} Pr[Y = y] · y) = E[X] E[Y].

Types of Randomized Algorithms

Typically one encounters the following types:

1. Las Vegas randomized algorithms: for a given input x the output of the algorithm is always correct but the running time is a random variable. In this case we are interested in analyzing the expected running time.
2. Monte Carlo randomized algorithms: for a given input x the running time is deterministic but the output is random, and correct with some probability. In this case we are interested in analyzing the probability of a correct output (and also the running time).
3. Algorithms whose running time and output may both be random.

Analyzing Las Vegas Algorithms

Deterministic algorithm Q for a problem Π:

1. Let Q(x) be the time for Q to run on input x of length |x|.
2. Worst-case analysis: run time on the worst input of a given size n: T_wc(n) = max_{x:|x|=n} Q(x).

Randomized algorithm R for a problem Π:

1. Let R(x) be the time for R to run on input x of length |x|.
2. R(x) is a random variable: it depends on the random bits used by R.
3. E[R(x)] is the expected running time for R on x.
4. Worst-case analysis: expected time on the worst input of size n: T_rand-wc(n) = max_{x:|x|=n} E[R(x)].

Analyzing Monte Carlo Algorithms

Randomized algorithm M for a problem Π:

1. Let M(x) be the time for M to run on input x of length |x|. For Monte Carlo algorithms the assumption is that the running time is deterministic.
2. Let Pr[x] be the probability, over the random bits used by M, that M is correct on x.
3. Worst-case analysis: success probability on the worst input: P_rand-wc(n) = min_{x:|x|=n} Pr[x].


Part III: Randomized QuickSort


Randomized QuickSort

1. Pick a pivot element uniformly at random from the array.
2. Split the array into 3 subarrays: those smaller than the pivot, those larger than the pivot, and the pivot itself.
3. Recursively sort the subarrays, and concatenate them.

Analysis

What events to count? The number of comparisons.

What is the probability space? All the coin tosses at all levels and parts of the recursion. Too big!!

What random variables to define? What are the events of the algorithm?

Analysis via Recurrence

1. Given an array A of size n, let Q(A) be the number of comparisons of randomized QuickSort on A.
2. Note that Q(A) is a random variable.
3. Let A^i_left and A^i_right be the left and right subarrays obtained if the pivot has rank i, and let X_i be the indicator random variable that is 1 if the pivot has rank i in A, else 0. Then

Q(A) = n + ∑_{i=1}^{n} X_i · (Q(A^i_left) + Q(A^i_right)).

Since each element of A has probability exactly 1/n of being chosen as the pivot: E[X_i] = Pr[pivot has rank i] = 1/n.

Independence of Random Variables

Lemma

The random variable X_i is independent of the random variables Q(A^i_left) and Q(A^i_right), i.e.,

E[X_i · Q(A^i_left)] = E[X_i] E[Q(A^i_left)] and E[X_i · Q(A^i_right)] = E[X_i] E[Q(A^i_right)].

Proof.

While recursing on A^i_left and A^i_right, the algorithm uses new random coin tosses that are independent of the coin tosses used to pick the first pivot. Only the latter decide the value of X_i.

Analysis via Recurrence

Let T(n) = max_{A:|A|=n} E[Q(A)] be the worst-case expected running time of randomized QuickSort on arrays of size n. We have, for any A:

Q(A) = n + ∑_{i=1}^{n} X_i (Q(A^i_left) + Q(A^i_right)).

By linearity of expectation, and by independence of the random variables:

E[Q(A)] = n + ∑_{i=1}^{n} E[X_i] (E[Q(A^i_left)] + E[Q(A^i_right)])

⇒ E[Q(A)] ≤ n + ∑_{i=1}^{n} (1/n) (T(i − 1) + T(n − i)).

Analysis via Recurrence

We derived: E[Q(A)] ≤ n + ∑_{i=1}^{n} (1/n) (T(i − 1) + T(n − i)). Note that the above holds for any A of size n. Therefore

max_{A:|A|=n} E[Q(A)] = T(n) ≤ n + ∑_{i=1}^{n} (1/n) (T(i − 1) + T(n − i)).

Solving the Recurrence

T(n) ≤ n + ∑_{i=1}^{n} (1/n) (T(i − 1) + T(n − i)) with base case T(1) = 0.

Lemma

T(n) = O(n log n).

Proof.

(Guess and) Verify by induction.
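One possible way to carry out the induction, sketched below; the guess T(n) ≤ 2n ln n and the integral estimate are our choices, not from the slides:

```latex
% Guess T(n) <= 2n ln n; T(1) = 0 gives the base case. By symmetry,
% \sum_{i=1}^{n} (T(i-1) + T(n-i)) = 2 \sum_{k=0}^{n-1} T(k), so
\[
T(n) \;\le\; n + \frac{4}{n}\sum_{k=2}^{n-1} k\ln k
     \;\le\; n + \frac{4}{n}\int_{2}^{n} x\ln x \,dx
     \;\le\; n + \frac{4}{n}\left(\frac{n^{2}\ln n}{2} - \frac{n^{2}}{4}\right)
     \;=\; 2n\ln n .
\]
```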


Part IV: Slick Analysis of QuickSort


A Slick Analysis of QuickSort

Let Q(A) be the number of comparisons done on input array A:

1. For 1 ≤ i < j ≤ n, let R_ij be the event that the rank-i element is compared with the rank-j element.
2. X_ij is the indicator random variable for R_ij; that is, X_ij = 1 if the rank-i element is compared with the rank-j element, otherwise 0.

Q(A) = ∑_{1≤i<j≤n} X_ij and hence, by linearity of expectation,

E[Q(A)] = ∑_{1≤i<j≤n} E[X_ij] = ∑_{1≤i<j≤n} Pr[R_ij].

A Slick Analysis of QuickSort

R_ij = the event that the rank-i element is compared with the rank-j element. Question: what is Pr[R_ij]?

Consider the array below, whose elements have ranks 1 to 8 in sorted order:

7 5 9 1 3 4 8 6

Here 5 has rank 4 and 8 has rank 7, so the probability of comparing 5 to 8 is Pr[R_{4,7}].

1. If the pivot is too small (say 3, of rank 2), partition and recurse: 1 | 3 | 7 5 9 4 8 6. The decision whether to compare 5 to 8 is moved to a subproblem.
2. If the pivot is too large (say 9, of rank 8), partition and recurse: 7 5 1 3 4 8 6 | 9. Again the decision whether to compare 5 to 8 is moved to a subproblem.

A Slick Analysis of QuickSort

Question: what is Pr[R_{i,j}]? Continuing the example, where 5 has rank 4 and 8 has rank 7:

1. If the pivot is 5 (rank 4): bingo! 1 3 4 | 5 | 7 9 8 6. The pivot 5 is compared with 8.
2. If the pivot is 8 (rank 7): bingo! 7 5 1 3 4 6 | 8 | 9. The pivot 8 is compared with 5.
3. If the pivot is between the two numbers (say 6, of rank 5): 5 1 3 4 | 6 | 7 9 8. Now 5 and 8 land in different subarrays and will never be compared to each other.

A Slick Analysis of QuickSort

Question: what is Pr[R_{i,j}]?

Conclusion: R_{i,j} happens if and only if the i-th or the j-th ranked element is the first pivot chosen from among the i-th through j-th ranked elements.

How to analyze this? Thinking acrobatics!

1. Assign every element in the array a random priority (say, in [0, 1]).
2. Choose the pivot to be the element with the lowest priority in the subproblem.
3. This is equivalent to picking the pivot uniformly at random (as QuickSort does).

⇒ R_{i,j} happens iff either the rank-i or the rank-j element has the lowest priority among the elements of rank i through j. There are k = j − i + 1 relevant elements, so Pr[R_{i,j}] = 2/k = 2/(j − i + 1).

A Slick Analysis of QuickSort

Lemma

Pr[R_ij] = 2/(j − i + 1).

Proof.

Let a_1, ..., a_i, ..., a_j, ..., a_n be the elements of A in sorted order, and let S = {a_i, a_{i+1}, ..., a_j}.

Observation: If a pivot is chosen outside S, then all of S goes either to the left array or to the right array.

Observation: a_i and a_j are separated when a pivot is chosen from S for the first time; once separated, they are never compared.

Observation: a_i is compared with a_j if and only if either a_i or a_j is chosen as the pivot from S at separation.

Observation: Given that the pivot is chosen from S, the probability that it is a_i or a_j is exactly 2/|S| = 2/(j − i + 1), since the pivot is chosen uniformly at random from the array.
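A simulation sketch that estimates Pr[R_{i,j}] by following the subproblem containing both ranked elements (all names are ours); for the running example, i = 4 and j = 7 give 2/(j − i + 1) = 0.5:

```python
import random

def ranks_compared(a, i, j):
    """True iff the rank-i and rank-j elements are compared in one run."""
    lo, hi = sorted(a)[i - 1], sorted(a)[j - 1]
    arr = a
    while len(arr) > 1:   # invariant: arr contains both lo and hi
        p = random.choice(arr)            # uniform pivot, compared with all of arr
        if p == lo or p == hi:
            return True                   # separating pivot is lo or hi: compared
        if lo < p < hi:
            return False                  # lo and hi separated without a comparison
        # pivot outside [lo, hi]: both go to the same side; recurse on that side
        arr = [x for x in arr if x < p] if hi < p else [x for x in arr if x > p]
    return False

a, i, j, trials = list(range(1, 9)), 4, 7, 50_000
est = sum(ranks_compared(a, i, j) for _ in range(trials)) / trials
print(est, 2 / (j - i + 1))               # both approximately 0.5
```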

How much is this?

H_n = ∑_{i=1}^{n} 1/i is the n-th harmonic number.

(A) H_n = Θ(1). (B) H_n = Θ(log log n). (C) H_n = Θ(√(log n)). (D) H_n = Θ(log n). (E) H_n = Θ(log² n).

And how much is this?

T_n = ∑_{i=1}^{n−1} ∑_{j=1}^{n−i} 1/j is equal to:

(A) T_n = Θ(n). (B) T_n = Θ(n log n). (C) T_n = Θ(n log² n). (D) T_n = Θ(n²). (E) T_n = Θ(n³).

A Slick Analysis of QuickSort

Continued...

Lemma

Pr[R_ij] = 2/(j − i + 1).

Therefore:

E[Q(A)] = ∑_{1≤i<j≤n} E[X_ij] = ∑_{1≤i<j≤n} Pr[R_ij]
        = ∑_{1≤i<j≤n} 2/(j − i + 1)
        = ∑_{i=1}^{n−1} ∑_{j=i+1}^{n} 2/(j − i + 1)
        = 2 ∑_{i=1}^{n−1} ∑_{Δ=2}^{n−i+1} 1/Δ
        = 2 ∑_{i=1}^{n−1} (H_{n−i+1} − 1)
        ≤ 2 ∑_{i=1}^{n−1} H_n ≤ 2nH_n = O(n log n).
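A small numeric sanity check of the final bound (the function name is ours): the exact double sum stays below 2nH_n and grows like 2n ln n:

```python
def slick_sum(n):
    """Sum over 1 <= i < j <= n of 2/(j - i + 1), the expected comparison count."""
    return sum(2 / (j - i + 1) for i in range(1, n) for j in range(i + 1, n + 1))

for n in (10, 100, 1000):
    Hn = sum(1 / k for k in range(1, n + 1))
    print(n, round(slick_sum(n), 1), round(2 * n * Hn, 1))  # sum <= 2*n*H_n
```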

Where do I get random bits?

Question: Are true random bits available in practice?

1. Buy them!
2. CPUs use physical phenomena to generate random bits.
3. One can use pseudo-random bits or semi-random bits from nature. There are several fundamental unresolved questions in complexity theory on this topic, beyond the scope of this course.
4. In practice pseudo-random generators work quite well in many applications.
5. The model is interesting to think about in the abstract and is very useful even as a theoretical construct. One can derandomize randomized algorithms to obtain deterministic algorithms.