Evidence-Based Elections. CDAR Risk Seminar. Philip B. Stark, 15 September 2020. (PowerPoint presentation)

slide-1
SLIDE 1

Evidence-Based Elections

CDAR Risk Seminar

Philip B. Stark 15 September 2020

University of California, Berkeley

1

slide-2
SLIDE 2

Many collaborators including (most recently) Andrew Appel, Josh Benaloh, Matt Bernhard, Michelle Blom, Andrew Conway, Rich DeMillo, Steve Evans, Amanda Glazer, Alex Halderman, Mark Lindeman, Kellie Ottoboni, Ron Rivest, Peter Ryan, Jake Spertus, Peter Stuckey, Vanessa Teague, Poorvi Vora

2

slide-3
SLIDE 3

Outline:

  • There is a problem
  • There is a solution
  • Useful statistical tools:
  • choosing the “right” null hypothesis
  • finding a canonical form of the problem: inference about the mean of a finite, nonnegative population
  • sequential tests and martingale-based methods: Kolmogorov’s inequality
  • union-intersection tests (versus intersection-union tests)
  • combining P-values from separate tests

3

slide-4
SLIDE 4

4

slide-5
SLIDE 5

https://www.youtube.com/embed/cruh2p_Wh_4

5

slide-6
SLIDE 6

https://www.stat.berkeley.edu/~stark/Seminars/AuditPics/MODEMS4.mp4

6

slide-7
SLIDE 7

7

slide-8
SLIDE 8

8

slide-9
SLIDE 9

Arguments that US elections can’t be hacked:

  • Physical security
  • Not connected to the Internet
  • Tested before election day
  • Too decentralized

9

slide-10
SLIDE 10

Arguments that US elections can’t be hacked:

  • Physical security
  • "sleepovers," unattended equipment in warehouses, school gyms, ...
  • locks use minibar keys
  • bad/no seal protocols, easily defeated seals
  • no routine scrutiny of custody logs, 2-person custody rules, ...
  • Not connected to the Internet
  • Tested before election day
  • Too decentralized

10

slide-11
SLIDE 11

Arguments that US elections can’t be hacked:

  • Physical security
  • Not connected to the Internet
  • remote desktop software
  • wifi, bluetooth, cellular modems, ... https://tinyurl.com/r8cseun
  • removable media used to configure equipment & transport results
  • Zip drives
  • USB drives. Stuxnet, anyone?
  • parts from foreign manufacturers, including China; Chinese pop songs in flash memory
  • Tested before election day
  • Too decentralized

11

slide-12
SLIDE 12

12

slide-13
SLIDE 13

13

slide-14
SLIDE 14

14

slide-15
SLIDE 15

15

slide-16
SLIDE 16

16

slide-17
SLIDE 17

17

slide-18
SLIDE 18

18

slide-19
SLIDE 19

19

slide-20
SLIDE 20

20

slide-21
SLIDE 21

https://drive.google.com/uc?id=1hKKJg_AG6ctKUewZpI5eJgxmx5j-f2qL&export=download

21

slide-22
SLIDE 22

Arguments that US elections can’t be hacked:

  • Physical security
  • Not connected to the Internet
  • Tested before election day
  • Dieselgate, anyone?
  • Northampton, PA
  • Los Angeles, CA VSAP
  • Too decentralized

22

slide-23
SLIDE 23

23

slide-24
SLIDE 24

24

slide-25
SLIDE 25

25

slide-26
SLIDE 26

26

slide-27
SLIDE 27

27

slide-28
SLIDE 28

Arguments that US elections can’t be hacked:

  • Physical security
  • Not connected to the Internet
  • Tested before election day
  • Too decentralized
  • market concentrated: few vendors/models in use
  • vendors & EAC have been hacked
  • demonstration viruses that propagate across voting equipment
  • “mom & pop” contractors program thousands of machines, no IT security
  • changing presidential race requires changing votes in only a few counties
  • small number of contractors for election reporting
  • many weak links

28

slide-29
SLIDES 29–30

Security properties of paper

  • tangible/accountable
  • tamper evident
  • human readable
  • large alteration/substitution attacks generally require many accomplices

Not all paper is trustworthy: how paper is marked, curated, tabulated, & audited is crucial.

29

slide-31
SLIDE 31

30

slide-32
SLIDE 32

31

slide-33
SLIDE 33

32

slide-34
SLIDE 34

33

slide-35
SLIDES 35–36

Did the reported winner really win?

  • Procedure-based vs. evidence-based elections
  • sterile scalpel v. patient’s condition
  • Any way of counting votes can make mistakes
  • Every electronic system is vulnerable to bugs, configuration errors, & hacking
  • Did error/bugs/hacking cause losing candidate(s) to appear to win?

34

slide-37
SLIDE 37

35

slide-38
SLIDE 38

Risk-Limiting Audits (RLAs, Stark, 2008)

  • If there’s a trustworthy paper record of votes, we can check whether the reported winner really won.
  • If you accept a controlled “risk” of not correcting the reported outcome when it is wrong, you typically don’t need to look at many ballots if the outcome is right.

36

slide-39
SLIDES 39–42

A risk-limiting audit has a known minimum chance of correcting the reported outcome if the reported outcome is wrong (& doesn’t alter correct outcomes).

Risk limit: the largest possible chance of not correcting the reported outcome, if the reported outcome is wrong.

“Wrong” means an accurate hand count of the trustworthy paper trail would find different winner(s). Establishing whether the paper trail is trustworthy involves other processes, generically called compliance audits.

37

slide-43
SLIDES 43–44

RLA pseudo-algorithm

while (!(full handcount) && !(strong evidence outcome is correct)) {
    examine more ballots
}
if (full handcount) {
    handcount result is final
}

38
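The loop above can be sketched in runnable form. This is a toy, not the full protocol: it draws 0/1 ballots with replacement and runs Wald's SPRT against a fixed alternative winner-share `p1`; the function name and all parameters are mine, chosen for illustration.

```python
import random

def rla_sketch(ballots, risk_limit=0.05, p1=0.7, seed=1):
    """Toy version of the RLA loop on the slide (a sketch, not SHANGRLA).

    `ballots` is the paper trail: 1 = vote for the reported winner,
    0 = vote for the reported loser.  Draw ballots at random (with
    replacement, for simplicity) and run Wald's SPRT of the
    complementary null "true winner share <= 1/2" against a fixed
    alternative p1.  Stop when the P-value 1/LR reaches the risk
    limit, or fall back to a full hand count after N draws.
    """
    rng = random.Random(seed)
    n_total = len(ballots)
    lr = 1.0  # likelihood ratio: a nonnegative martingale under the null
    for n in range(1, n_total + 1):
        x = ballots[rng.randrange(n_total)]
        lr *= (p1 / 0.5) if x == 1 else ((1 - p1) / 0.5)
        if 1.0 / lr <= risk_limit:
            return {"outcome": "confirmed", "draws": n, "p_value": 1.0 / lr}
    return {"outcome": "full handcount", "draws": n_total,
            "p_value": min(1.0, 1.0 / lr)}
```

With a unanimous paper trail the audit confirms quickly; when the reported winner actually lost, the loop runs to a full hand count, which is exactly the behavior the pseudo-algorithm describes.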

slide-45
SLIDE 45

39

slide-46
SLIDE 46

Risk-Limiting Audits

  • Endorsed by NASEM, PCEA, ASA, LWV, CC, VV, . . .

40

slide-47
SLIDE 47

Role of math/stat

  • Get evidence about the population of cast ballots from a random sample.
  • Guarantee a large chance of correcting wrong outcomes; minimize work if the outcome is correct.
  • When can you stop inspecting ballots?
  • When there’s strong evidence that a full hand count is pointless.

41

slide-48
SLIDE 48
  • Null hypothesis: reported outcome is wrong.
  • Significance level (Type I error rate) is “risk”
  • Frame the hypothesis quantitatively: necessary and sufficient conditions

42

slide-49
SLIDE 49

SHANGRLA: Sets of Half-Average Nulls Generate Risk-Limiting Audits

$b_i$ is the $i$th ballot card; $N$ cards in all.

$$1_{\text{candidate}}(b_i) \equiv \begin{cases} 1, & \text{ballot } i \text{ has a mark for candidate,} \\ 0, & \text{otherwise.} \end{cases}$$

$$A_{\text{Alice,Bob}}(b_i) \equiv \frac{1_{\text{Alice}}(b_i) - 1_{\text{Bob}}(b_i) + 1}{2} \ge 0.$$

Mark for Alice but not Bob: $A_{\text{Alice,Bob}}(b_i) = 1$. Mark for Bob but not Alice: $A_{\text{Alice,Bob}}(b_i) = 0$. Marks for both (overvote) or neither (undervote), or the card doesn’t contain the contest: $A_{\text{Alice,Bob}}(b_i) = 1/2$.

43

slide-50
SLIDE 50

$$\bar{A}^b_{\text{Alice,Bob}} \equiv \frac{1}{N} \sum_{i=1}^N A_{\text{Alice,Bob}}(b_i).$$

Mean of a finite, nonnegative list of $N$ numbers. Alice won iff $\bar{A}^b_{\text{Alice,Bob}} > 1/2$.

44
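The assorter and its mean translate directly into code. A minimal sketch (the function names are mine; a ballot is modeled as the set of candidates marked on the card):

```python
def assort_plurality(ballot, winner, loser):
    """SHANGRLA-style assorter for one (winner, loser) pair.

    Returns 1 if the card shows winner but not loser, 0 if loser but
    not winner, and 1/2 for overvotes, undervotes, or cards without
    the contest -- exactly A_{Alice,Bob} on the slide.
    """
    w, l = winner in ballot, loser in ballot
    return (int(w) - int(l) + 1) / 2

def assorter_mean(ballots, winner, loser):
    # The winner really beat the loser iff this mean exceeds 1/2.
    return sum(assort_plurality(b, winner, loser) for b in ballots) / len(ballots)
```

For example, six cards for Alice, three for Bob, and one blank give a mean of 0.65 > 1/2, so the assertion "Alice beat Bob" holds for that paper trail.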

slide-51
SLIDES 51–53

Plurality & Approval Voting

$K \ge 1$ winners, $C > K$ candidates in all. Candidates $\{w_k\}_{k=1}^{K}$ are the reported winners; candidates $\{\ell_j\}_{j=1}^{C-K}$ are the reported losers. The outcome is correct iff

$$\bar{A}^b_{w_k, \ell_j} > 1/2 \quad \text{for all } 1 \le k \le K,\ 1 \le j \le C-K:$$

$K(C-K)$ inequalities. The same approach works for D’Hondt & other proportional representation schemes (Stark & Teague 2015).

45

slide-54
SLIDE 54

Super-majority

$f \in (1/2, 1]$. Alice won iff (valid votes for Alice) $> f \times$ ((valid votes for Alice) + (valid votes for everyone else)). Set

$$A(b_i) \equiv \begin{cases} \frac{1}{2f}, & b_i \text{ has a mark for Alice and no one else,} \\ 0, & b_i \text{ has a mark for exactly one candidate, not Alice,} \\ \frac{1}{2}, & \text{otherwise.} \end{cases}$$

Alice won iff $\bar{A}^b > 1/2$.

46
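A sketch of the super-majority assorter above (the function name and the ballot-as-set modeling are mine):

```python
def assort_supermajority(ballot, candidate, f):
    """Assorter for an f-supermajority contest, 1/2 < f <= 1.

    `ballot` is the set of candidates marked.  A valid vote for
    `candidate` scores 1/(2f); a valid vote for exactly one other
    candidate scores 0; everything else (overvotes, undervotes)
    scores 1/2.  The candidate won iff the mean exceeds 1/2.
    """
    if ballot == {candidate}:
        return 1 / (2 * f)
    if len(ballot) == 1:  # a valid vote for exactly one other candidate
        return 0.0
    return 0.5
```

With f = 0.625, a 70%-for-Alice paper trail gives a mean of 0.56 > 1/2 (Alice cleared the threshold), while a 60% trail would give 0.48 < 1/2.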

slide-55
SLIDE 55

Borda count, STAR-Voting, & other additive weighted schemes

Winner is the candidate who gets the most “points” in total. $s_{\text{Alice}}(b_i)$: Alice’s score on ballot $i$; $s_c(b_i)$: candidate $c$’s score on ballot $i$; $s^+$: an upper bound on the score any candidate can get on a ballot. Alice beat candidate $c$ iff Alice’s total score is bigger than $c$’s:

$$A_{\text{Alice},c}(b_i) \equiv \frac{s_{\text{Alice}}(b_i) - s_c(b_i) + s^+}{2 s^+}.$$

Alice won iff $\bar{A}^b_{\text{Alice},c} > 1/2$ for every other candidate $c$.

47

slide-56
SLIDE 56

Ranked-Choice Voting, Instant-Runoff Voting (RCV/IRV)

Two types of assertions together give sufficient (not necessary) conditions (Blom et al. 2018):

  1. Candidate i has more first-place ranks than candidate j has total mentions.
  2. After a set of candidates E has been eliminated from consideration, candidate i is ranked higher than candidate j on more ballots than vice versa.

Both can be written $\bar{A}^b > 1/2$. A finite set of such assertions implies the reported outcome is right. More than one set suffices; can optimize the expected workload.

48

slide-57
SLIDE 57

Auditing assertions

Test each complementary null hypothesis $\bar{A}^b \le 1/2$ sequentially.

  • Audit until either all complementary null hypotheses about a contest are rejected at significance level α, or all ballots have been tabulated by hand.
  • Yields an RLA of the contest in question at risk limit α.
  • No multiplicity adjustment needed.

49

slide-58
SLIDE 58

Martingales and sequential methods

Sequential testing originated with Wald (1945; a military secret before that). Key object: the martingale, a sequence of random variables $\{Z_j\}$ s.t.

  • $E|Z_j| < \infty$
  • $E(Z_{j+1} \mid Z_1, \ldots, Z_j) = Z_j$

50

slide-59
SLIDE 59

Kolmogorov’s inequality

If $\{Z_j\}$ is a nonnegative martingale, then for any $p > 0$ and all $J \in \{1, \ldots, N\}$,

$$\Pr\left( \max_{1 \le j \le J} Z_j > 1/p \right) \le p\, E|Z_J|.$$

(Markov’s inequality applied to optionally stopped martingales.)

51

slide-60
SLIDE 60

Wald’s SPRT

For $j = 1, 2, \ldots$, let $P_{j0}$ be the probability of $X_1, \ldots, X_j$ under $H_0$, and $P_{j1}$ the probability of $X_1, \ldots, X_j$ under $H_1$. Then

$$Z_j = \frac{P_{j1}}{P_{j0}}, \quad j = 1, 2, \ldots$$

is a nonnegative martingale if $H_0$ is true, and $1/Z_j$ is a valid $P$-value for $H_0$ at step $j$.

52
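A minimal sketch of the SPRT P-value sequence (the function name is mine; `p0` and `p1` map each observable value to its probability under H0 and H1, so the data are modeled as i.i.d. discrete):

```python
def sprt_pvalues(xs, p0, p1):
    """Wald's SPRT as on the slide: Z_j = P_j1/P_j0 is a nonnegative
    martingale under H0, so min(1, 1/Z_j) is a valid P-value at
    every step j.  Returns the whole P-value sequence.
    """
    z = 1.0
    out = []
    for x in xs:
        z *= p1[x] / p0[x]           # one-step likelihood-ratio update
        out.append(min(1.0, 1.0 / z))
    return out
```

Observations favored by H1 shrink the P-value; observations favored by H0 push it back up, and the sequence can be monitored after every draw without any multiplicity penalty.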

slide-61
SLIDE 61

Ballot-polling audits

Sample sequentially without replacement from a finite population of $N$ nonnegative items $\{x_1, \ldots, x_N\}$, $x_j \ge 0$ for all $j$; the total is $N\bar{x} \ge 0$. The value of the $j$th item drawn is $X_j$. If $\bar{x} = t$, then $E X_1 = t$, so $E(X_1/t) = 1$. Given $X_1, \ldots, X_n$, the total of the remaining $N - n$ items is $Nt - \sum_{j=1}^n X_j$, so the mean of the remaining items is

$$\frac{Nt - \sum_{j=1}^n X_j}{N - n} = \frac{t - \frac{1}{N} \sum_{j=1}^n X_j}{1 - n/N}.$$

53

slide-62
SLIDE 62

Define

$$Y_1(t) \equiv \begin{cases} X_1/t, & Nt > 0, \\ 1, & Nt = 0, \end{cases}$$

and for $1 \le n \le N - 1$,

$$Y_{n+1}(t) \equiv \begin{cases} X_{n+1} \cdot \dfrac{1 - n/N}{t - \frac{1}{N} \sum_{j=1}^n X_j}, & \sum_{j=1}^n X_j < Nt, \\ 1, & \sum_{j=1}^n X_j \ge Nt. \end{cases}$$

Then $E(Y_{n+1}(t) \mid Y_1, \ldots, Y_n) = 1$.

54

slide-63
SLIDE 63

Let $Z_n(t) \equiv \prod_{j=1}^n Y_j(t)$. Then $E|Z_k| \le \max_j x_j < \infty$ and

$$E(Z_{n+1}(t) \mid Z_1(t), \ldots, Z_n(t)) = E(Y_{n+1}(t) Z_n(t) \mid Z_1(t), \ldots, Z_n(t)) = Z_n(t).$$

Thus $(Z_1(t), Z_2(t), \ldots, Z_N(t))$ is a nonnegative closed martingale, and a $P$-value for the hypothesis $\bar{x} = t$ from data $X_1, \ldots, X_J$ is $\left( \max_{1 \le j \le J} Z_j(t) \right)^{-1} \wedge 1$.

55
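The construction on the last three slides translates almost line for line into code. A sketch (the function name is mine) that returns the P-value $(\max_j Z_j(t))^{-1} \wedge 1$ for the null that the population mean is $t$:

```python
def polling_pvalue(sample, N, t):
    """P-value for H0: population mean = t, sampling without
    replacement from N nonnegative items.

    Y_1 = X_1/t (or 1 if Nt = 0); thereafter each X_{n+1} is divided
    by the mean of the remaining items implied by H0.  Z_n is the
    running product, a nonnegative martingale under H0.
    """
    z, max_z = 1.0, 1.0
    running_sum = 0.0                     # sum of the items seen so far
    for n, x in enumerate(sample):        # n items already seen
        remaining_total = N * t - running_sum
        if remaining_total <= 0:
            y = 1.0                        # slide's "sum >= Nt" branch
        else:
            cond_mean = remaining_total / (N - n)
            y = x / cond_mean
        z *= y
        max_z = max(max_z, z)
        running_sum += x
    return min(1.0, 1.0 / max_z)
```

For a ballot-polling audit one takes t = 1/2 (the complementary null for an assorter). Drawing five 1's in a row from a population of N = 10 with hypothesized mean 1/2 multiplies Z up to 252, so the P-value is 1/252.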

slide-64
SLIDE 64

Many other martingales

Kaplan’s martingale (KMART). Let $S_j \equiv \sum_{k=1}^j X_k$, $\tilde{S}_j \equiv S_j/N$, and $\tilde{j} \equiv 1 - (j-1)/N$. Define

$$Y_n \equiv \int_0^1 \prod_{j=1}^n \left[ \gamma \left( \frac{X_j}{\tilde{j}\, t - \tilde{S}_{j-1}} - 1 \right) + 1 \right] d\gamma.$$

The integrand is a polynomial in $\gamma$ of degree at most $n$, with constant term 1. Under the null, $(Y_j)_{j=1}^N$ is a nonnegative closed martingale with expected value 1.

56
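Because the integrand is a polynomial in $\gamma$ with constant term 1, $Y_n$ can be computed exactly: multiply the degree-1 factors as coefficient lists, then integrate term by term using $\int_0^1 \gamma^k\,d\gamma = 1/(k+1)$. A sketch (the function name is mine; it assumes $\tilde{j}t - \tilde{S}_{j-1}$ stays positive, i.e. the null has not been exhausted):

```python
def kmart_pvalue(sample, N, t):
    """Kaplan's martingale (KMART) from the slide, via the polynomial
    in gamma.  Returns (max_n Y_n)^{-1}, capped at 1.
    """
    poly = [1.0]            # polynomial coefficients, constant term first
    s_prev = 0.0            # S_{j-1}, the running sum
    max_y = 1.0
    for j, x in enumerate(sample, start=1):
        jtilde = 1 - (j - 1) / N
        denom = jtilde * t - s_prev / N   # jtilde * t - Stilde_{j-1}
        a = x / denom - 1                 # the factor is a*gamma + 1
        new = [0.0] * (len(poly) + 1)     # multiply poly by (a*gamma + 1)
        for k, c in enumerate(poly):
            new[k] += c
            new[k + 1] += c * a
        poly = new
        # integrate over gamma in [0, 1]: sum of c_k / (k + 1)
        y = sum(c / (k + 1) for k, c in enumerate(poly))
        max_y = max(max_y, y)
        s_prev += x
    return min(1.0, 1.0 / max_y)
```

For instance, a single draw of 1 with N = 10 and t = 1/2 gives Y_1 = ∫(γ + 1)dγ = 3/2, so the P-value is 2/3; mixing over γ replaces the SPRT's fixed alternative.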

slide-65
SLIDE 65

Ballot-comparison audits

Use cast-vote records (CVRs): the system’s interpretation of each ballot. Like checking an expense report. $b_i$ is the $i$th ballot, $c_i$ the cast-vote record for the $i$th ballot, and $A$ an assorter. The overstatement error for the $i$th ballot is

$$\omega_i \equiv A(c_i) - A(b_i) \le A(c_i) \le u,$$

where $u$ is an upper bound on the value $A$ assigns to any ballot card or CVR.

57

slide-66
SLIDE 66

$v \equiv 2\bar{A}^c - 1$: the reported assorter margin. Define

$$B(b_i, c_i) \equiv \frac{1 - \omega_i/u}{2 - v/u} > 0, \quad i = 1, \ldots, N.$$

$B$ assigns nonnegative numbers to ballots; the reported outcome is correct iff $\bar{B} > 1/2$.

58
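$B$ is a one-liner in code. A sketch (the function name is mine); with the plurality assorter, $u = 1$, a CVR that matches the paper has $\omega_i = 0$, and a two-vote overstatement (CVR credits the winner, paper shows the loser) has $\omega_i = 1$:

```python
def comparison_assorter(a_cvr, a_ballot, u, v):
    """Overstatement assorter B from the slide.

    a_cvr = A(c_i): assorter applied to the cast-vote record.
    a_ballot = A(b_i): assorter applied to the paper ballot.
    u: upper bound on A; v = 2*mean(A(c)) - 1 is the reported margin.
    The reported outcome is correct iff mean(B) > 1/2.
    """
    omega = a_cvr - a_ballot              # overstatement error
    return (1 - omega / u) / (2 - v / u)
```

With a 60/40 reported margin (v = 0.2) and u = 1, an error-free ballot scores 1/1.8 ≈ 0.556 > 1/2, while a two-vote overstatement scores 0, dragging the mean down toward the full-handcount branch.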

slide-67
SLIDE 67

Stratified sampling

Cast ballots are partitioned into $S \ge 2$ strata; stratum $s$ contains $N_s$ cast ballots. Let $\bar{A}^b_s$ denote the mean of the assorter applied to just the ballot cards in stratum $s$. Then

$$\bar{A}^b = \frac{1}{N} \sum_{s=1}^S N_s \bar{A}^b_s = \sum_{s=1}^S \frac{N_s}{N} \bar{A}^b_s.$$

Can reject the hypothesis $\bar{A}^b \le 1/2$ if, for every $(\beta_s)_{s=1}^S$ s.t. $\sum_{s=1}^S \beta_s \le 1/2$, we can reject the intersection hypothesis

$$\bigcap_{s=1}^S \left\{ \frac{N_s}{N} \bar{A}^b_s \le \beta_s \right\}.$$

Union-Intersection Test

59

slide-68
SLIDE 68

Fisher’s Combining Function

$\{P_s(\beta_s)\}_{s=1}^S$ are independent random variables. If $\frac{N_s}{N} \bar{A}^b_s \le \beta_s$ for all $s$, the distribution of

$$-2 \sum_{s=1}^S \ln P_s(\beta_s)$$

is dominated by the chi-square distribution with $2S$ degrees of freedom. Maximizing the combined $P$-value over $(\beta_s)_{s=1}^S$ is a low-dimensional optimization problem.

60
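A sketch of Fisher's combining function (the function name is mine). For even degrees of freedom $2S$ the chi-square upper tail has the closed form $e^{-x/2} \sum_{k<S} (x/2)^k / k!$, so no statistics library is needed:

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's combining function for S independent P-values.

    Under the intersection null, -2 * sum(ln P_s) is stochastically
    dominated by chi-square with 2S degrees of freedom; for even
    df = 2S the upper tail is exp(-x/2) * sum_{k<S} (x/2)^k / k!.
    """
    s = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    tail = sum(half ** k / math.factorial(k) for k in range(s))
    return math.exp(-half) * tail
```

With a single P-value the combination returns it unchanged; combining two stratum P-values of 0.1 each gives about 0.056, smaller than either alone, which is what makes the stratified union-intersection test workable.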

slide-69
SLIDE 69

Sample design

  • individual ballots?
  • clusters of ballots?
  • stratify? (logistics, equipment capabilities, . . . )
  • sampling probabilities?
  • with replacement? without replacement? Bernoulli?
  • fully sequential? batch-oriented?

61

slide-70
SLIDE 70

62

slide-71
SLIDES 71–72

Bayesian election audits

Limit the upset probability: the posterior probability that the reported outcome is wrong, given the sample, for a particular prior distribution on outcomes. Typically use a Dirichlet-multinomial prior. “Non-partisan” priors are invariant under permutations of the candidate names.

63

slide-73
SLIDE 73

64

slide-74
SLIDES 74–75

Bayes/Frequentist duality

Risk of an audit for a set of cast votes and a reported outcome:

  • the probability of not correcting the outcome, if the reported outcome is wrong for that set of votes
  • 0, if the reported outcome is correct for that set of votes

  • RLAs control the maximum risk.
  • Bayesian audits (Rivest & Shen) control a weighted average of the risk; the prior determines the weights in the average.
  • For a 2-candidate plurality contest w/ no invalid votes, the least-favorable prior has point mass 1/2 at a tie, with the remaining 1/2 mass arbitrary over winning outcomes (Vora, 2018).

65

slide-76
SLIDE 76

Wrinkles

  • ~20% of U.S. voters don’t vote on paper
  • ballot-marking devices make the paper trail hackable: current suit in GA
  • inadequate rules for chain of custody, ballot accounting, . . .
  • transparent high-quality randomness
  • public ceremony of die rolls, published crypto-quality PRNG
  • missing ballots; imperfect manifests
  • “Manifest Phantoms to Evil Zombies”
  • ability to produce CVRs linked to ballots
  • redacted CVRs
  • preserving privacy while ensuring the public can confirm audit didn’t stop too soon

66

slide-77
SLIDE 77

Open-source software

  • auditTools
  • ballotPollTools
  • SUITE
  • SHANGRLA
  • Arlo

67

slide-78
SLIDES 78–80

Evidence-Based Elections: 3 C’s

  • Voters CREATE a complete, durable, verified audit trail.
  • The LEO CARES FOR the audit trail adequately to ensure it remains complete and accurate.
  • A verifiable audit CHECKS reported results against the paper.

68

slide-81
SLIDES 81–82

  • 255 state-level presidential races, 1992–2012, 10% risk limit: a ballot-polling audit (BPA) was expected to examine fewer than 308 ballots for half of them.
  • 2016 presidential election, 5% risk limit: BPAs expected to examine ~700k ballots nationally (<0.5%).

69

slide-83
SLIDE 83

Risk-Limiting Audits

  • ~60 pilot audits in AK, CA, CO, GA, IN, KS, MI, MT, NJ, OH, OR, PA, RI, WA, WY, VA, DK.
  • CA counties: Alameda, El Dorado, Humboldt, Inyo, Madera, Marin, Merced, Monterey, Napa, Orange, San Francisco, San Luis Obispo, Santa Clara, Santa Cruz, Stanislaus, Ventura, Yolo.
  • Routine statewide in CO since 2017. Statewide audits in AK, KS, WY in 2020.
  • Laws in CA, CO, RI, VA, WA.

70

slide-84
SLIDE 84

71

slide-85
SLIDE 85

Voting and COVID-19

72

slide-86
SLIDE 86

73

slide-87
SLIDE 87
  • In-person voting involves congregating & touching common objects (esp. BMDs & DREs, but also pens, doorknobs), but S. Korea did a great job recently.

74

slide-88
SLIDE 88

75

slide-89
SLIDE 89
  • Online voting does not require contact, but
  • No way to secure online voting
  • Demonstration hacks by Halderman et al.

76

slide-90
SLIDE 90

77

slide-91
SLIDE 91

78

slide-92
SLIDE 92

79

slide-93
SLIDE 93
  • VBM does not require congregating . . .
  • Klobuchar & Wyden introduced bill requiring everyone to get VBM ballot . . .
  • Serious logistical and security problems:
  • printing & mailing: 3rd parties need more equipment
  • ballots lost in the mail in either direction
  • USPS might be dead
  • potential for DOS attacks
  • ballot harvesting, coercion, vote-selling
  • authentication, signature verification (if any)
  • weaponized to disenfranchise minority voters, e.g., GA
  • need to inform voters of (non) receipt, notify them of problems & allow time to “cure”

80

slide-94
SLIDE 94

81

slide-95
SLIDE 95

82

slide-96
SLIDE 96

83

slide-97
SLIDE 97

84

slide-98
SLIDE 98

Recommendations for November 2020

  • expand vote by mail and early voting
  • minimize use of DREs & BMDs (not secure; vector for coronavirus)
  • secure/monitored kiosks to pick up blank ballots (BOD?) & cast voted ballots
  • ballot tracking; provide adequate notice & opportunity to “cure” problems
  • increase transparency: public video monitoring, etc.
  • rigorous ballot accounting & compliance audits including eligibility
  • risk-limiting audits, at least for statewide contests
  • beware sham RLAs of insecure systems

85

slide-99
SLIDE 99

Recommendations for Statistics instruction

  • finite-sample exact/conservative nonparametric inference
  • sampling designs
  • sequential tests
  • martingale methods
  • methods for combining P-values, including Fisher’s method
  • testing by maximizing P-values over nuisance parameters
  • pseudo-random number generation

86

slide-100
SLIDE 100

87