SLIDE 1

Data Privacy – Anonymization

Li Xiong

CS573 Data Privacy and Security

SLIDE 2

Outline

  • Inference control
  • Anonymization problem
  • Anonymization notions and approaches (and how they fail to work!)
    – k-anonymity
    – l-diversity
    – t-closeness
  • Takeaways
SLIDE 3

Access Control vs. Inference Control

  • Access control: protecting information from being accessed by unauthorized users
  • Inference control (disclosure control): protecting private data from being inferred from sanitized data or models by authorized users

[Diagram: Original Data → Sanitized Data/Models; access control guards the original data, inference control guards the sanitized data/models]

SLIDE 4

Disclosure Risk and Information Loss

  • Privacy (disclosure risk) - the risk that a given form of disclosure will arise if the data is released
  • Utility (information loss) - the information that exists in the initial data but not in the released data due to disclosure control methods

[Diagram: Original Data → Inference Control → Sanitized Data/Models]

SLIDE 5

What to Protect: Classical Intuition for Privacy

  • Uninformative principle (Dalenius 1977)
    – Access to the published data does not reveal anything extra about any target victim, even in the presence of the attacker’s background knowledge obtained from other sources
  • Similar to semantic security of encryption
    – Knowledge of the ciphertext (and length) of some unknown message does not reveal any additional information about the message that can be feasibly extracted

SLIDE 6

What to protect: types of disclosure

  • Membership disclosure: attacker can tell that a given person is in the dataset
  • Identity disclosure: attacker can tell which record corresponds to a given person
  • Sensitive attribute disclosure: attacker can tell that a given person or record has a certain sensitive attribute

SLIDE 7

What’s published

  • Microdata represents a set of records containing information on an individual unit such as a person, a firm, an institution
  • Macrodata represents computed/derived statistics
  • Models and patterns from machine learning and data mining

SLIDE 8

Disclosure Control for Microdata

Initial Microdata

  Name    Age  Diagnosis     Income
  Wayne   44   AIDS          45,500
  Gore    44   Asthma        37,900
  Banks   55   AIDS          67,000
  Casey   44   Asthma        21,000
  Stone   55   Asthma        90,000
  Kopi    45   Diabetes      48,000
  Simms   25   Diabetes      49,000
  Wood    35   AIDS          66,000
  Aaron   55   AIDS          69,000
  Pall    45   Tuberculosis  34,000

Masked Microdata (incomes rounded; unique values suppressed, shown as *)

  Age  Diagnosis     Income
  44   AIDS          50,000
  44   Asthma        40,000
  55   AIDS          70,000
  44   Asthma        20,000
  55   Asthma        90,000
  45   Diabetes      50,000
  *    Diabetes      50,000
  *    AIDS          70,000
  55   AIDS          70,000
  45   *             30,000

SLIDE 9

Disclosure Control for Macrodata (Statistics Tables)

Initial Microdata

  Name    Age  Diagnosis     Income
  Wayne   44   AIDS          45,500
  Gore    44   Asthma        37,900
  Banks   55   AIDS          67,000
  Casey   44   Asthma        21,000
  Stone   55   Asthma        90,000
  Kopi    45   Diabetes      48,000
  Simms   25   Diabetes      49,000
  Wood    35   AIDS          66,000
  Aaron   55   AIDS          69,000
  Pall    45   Tuberculosis  34,000

Table 1 - Count by Diagnosis

  Count  Diagnosis
  4      AIDS
  3      Asthma
  2      Diabetes
  1      Tuberculosis

Table 2 - Total Income by Age

  Count  Age      Income
  1      <= 30    49,000
  1      31 - 40  66,000
  5      41 - 50  188,200
  3      51 - 60  226,000
         > 60

Masked Table 1

  Count  Diagnosis
  4      AIDS
  3      Asthma

Masked Table 2

  Count  Age      Income
  5      31 - 40  188,200
  3      41 - 50  226,000

SLIDE 10

Disclosure Control for Data Mining/Machine Learning Models

Initial Microdata

  Name    Age  Diagnosis     Income
  Wayne   44   AIDS          45,500
  Gore    44   Asthma        37,900
  Banks   55   AIDS          67,000
  Casey   44   Asthma        21,000
  Stone   55   Asthma        90,000
  Kopi    45   Diabetes      48,000
  Simms   25   Diabetes      49,000
  Wood    35   AIDS          66,000
  Aaron   55   AIDS          69,000
  Pall    45   Tuberculosis  34,000

SLIDE 11

Inference Control Methods

  • Microdata release (anonymization)
    – Input perturbation: attribute suppression, generalization, perturbation
  • Macrodata release
    – Output perturbation: summary statistics with perturbation
  • Query restriction/auditing (interactive version)
    – Auditor decides which queries are OK and what type of noise to add

SLIDE 12

Outline

  • Anonymization problem
  • Anonymization notions and approaches (and how they fail to work!)
    – Basic attempt: de-identification
    – k-anonymity
    – l-diversity
    – t-closeness
  • Takeaways
SLIDE 13

Basic Attempt

  • Remove/replace identifier attributes

[Diagram: Original Data → De-identification → Sanitized Records]

SLIDE 14

Data “Anonymization”

  • Remove “personally identifying information” (PII)
    – Name, Social Security number, phone number, email, address… what else?
  • Problem: PII has no technical meaning or common definition
    – Defined in sectoral laws such as HIPAA (PHI: Protected Health Information)
      • 18 identifiers
    – Any information can be personally identifying
      • E.g., a rare disease condition
  • Many de-anonymization examples: GIC, AOL dataset, Netflix Prize dataset

SLIDE 15

Massachusetts GIC Incident

  • Massachusetts GIC released “anonymized” data on state employees’ hospital visits
  • Then-Governor William Weld assured the public about privacy

GIC data

  Name     SSN        Age  Zip    Diagnosis
  Alice    123456789  44   48202  AIDS
  Bob      323232323  44   48202  AIDS
  Charley  232345656  44   48201  Asthma
  Dave     333333333  55   48310  Asthma
  Eva      666666666  55   48310  Diabetes

Anonymized release

  Age  Zip    Diagnosis
  44   48202  AIDS
  44   48202  AIDS
  44   48201  Asthma
  55   48310  Asthma
  55   48310  Diabetes

SLIDE 16

Massachusetts GIC

  • Then-graduate student Sweeney linked the data with voter registration data in Cambridge and identified Governor Weld’s record

GIC data

  Name     SSN        Age  Zip    Diagnosis
  Alice    123456789  44   48202  AIDS
  Bob      323232323  44   48202  AIDS
  Charley  232345656  44   48201  Asthma
  Dave     333333333  55   48310  Asthma
  Eva      666666666  55   48310  Diabetes

Voter roll for Cambridge

  Name     Age  Zip
  Alice    44   48202
  Charley  44   48201
  Dave     55   48310

Anonymized release

  Age  Zip    Diagnosis
  44   48202  AIDS
  44   48202  AIDS
  44   48201  Asthma
  55   48310  Asthma
  55   48310  Diabetes

SLIDE 17

Re-identification

SLIDE 18

AOL Query Log Release

20 million Web search queries by AOL (Source: AOL Query Log)

  AnonID  Query                 QueryTime            ItemRank  ClickURL
  217     lottery               2006-03-01 11:58:51  1         http://www.calottery.com
  217     lottery               2006-03-27 14:10:38  1         http://www.calottery.com
  1268    gall stones           2006-05-11 02:12:51
  1268    gallstones            2006-05-11 02:13:02  1         http://www.niddk.nih.gov
  1268    ozark horse blankets  2006-03-01 17:39:28  8         http://www.blanketsnmore.com

SLIDE 19

User No. 4417749

  • User 4417749
    – “numb fingers”
    – “60 single men”
    – “dog that urinates on everything”
    – “landscapers in Lilburn, Ga”
    – Several people names with last name Arnold
    – “homes sold in shadow lake subdivision gwinnett county georgia”

SLIDE 20

User No. 4417749

  • User 4417749
    – “numb fingers”
    – “60 single men”
    – “dog that urinates on everything”
    – “landscapers in Lilburn, Ga”
    – Several people names with last name Arnold
    – “homes sold in shadow lake subdivision gwinnett county georgia”

Thelma Arnold, a 62-year-old widow who lives in Lilburn, Ga., frequently researches her friends’ medical ailments and loves her dogs

SLIDE 21

The Genome Hacker (2013)

SLIDE 22

Outline

  • Anonymization problem
  • Anonymization notions and approaches (and how they fail to work!)
    – Basic attempt: de-identification
    – k-anonymity
    – l-diversity
    – t-closeness
  • Takeaways
SLIDE 23

K-Anonymity

  • The term was introduced in 1998 by Samarati and Sweeney.
  • Important papers:
    – Sweeney L. (2002), K-Anonymity: A Model for Protecting Privacy, International Journal on Uncertainty, Fuzziness and Knowledge-based Systems, Vol. 10, No. 5, 557-570
    – Sweeney L. (2002), Achieving K-Anonymity Privacy Protection using Generalization and Suppression, International Journal on Uncertainty, Fuzziness and Knowledge-based Systems, Vol. 10, No. 5, 571-588
    – Samarati P. (2001), Protecting Respondents’ Identities in Microdata Release, IEEE Transactions on Knowledge and Data Engineering, Vol. 13, No. 6, 1010-1027
  • Hundreds of papers on the topic in the past decade
    – Theoretical results
    – Many algorithms achieving k-anonymity
    – Many improved principles and algorithms

SLIDE 24

Motivating Example

[Diagram: Original Data → De-identification → Sanitized Records]

Original data (Name is an identifier; Condition is the sensitive attribute)

  #  Zip    Age  Nationality  Name     Condition
  1  13053  28   Brazilian    Ronaldo  Heart Disease
  2  13067  29   US           Bob      Heart Disease
  3  13053  37   Indian       Kumar    Cancer
  4  13067  36   Japanese     Umeko    Cancer

De-identified (published) data

  #  Zip    Age  Nationality  Condition
  1  13053  28   Brazilian    Heart Disease
  2  13067  29   US           Heart Disease
  3  13053  37   Indian       Cancer
  4  13067  36   Japanese     Cancer

SLIDE 25

Motivating Example

[Diagram: Original Data → De-identification → Sanitized Records]

Original data (Name is an identifier; Condition is the sensitive attribute)

  #  Zip    Age  Nationality  Name     Condition
  1  13053  28   Brazilian    Ronaldo  Heart Disease
  2  13067  29   US           Bob      Heart Disease
  3  13053  37   Indian       Kumar    Cancer
  4  13067  36   Japanese     Umeko    Cancer

De-identified (published) data

  #  Zip    Age  Nationality  Condition
  1  13053  28   Brazilian    Heart Disease
  2  13067  29   US           Heart Disease
  3  13053  37   Indian       Cancer
  4  13067  36   Japanese     Cancer

Attacker’s knowledge: voter registration list

  #  Zip    Age  Nationality  Name
  1  13067  45   US           John
  2  13067  22   US           Paul
  3  13067  29   US           Bob
  4  13067  23   US           Chris
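A linking attack is essentially a database join on the shared attributes. The following is a minimal pandas sketch of how an attacker would combine the two tables above; the DataFrame names (published, voters) are illustrative and the data simply mirrors the example.

import pandas as pd

# De-identified release: quasi-identifiers plus the sensitive attribute.
published = pd.DataFrame({
    "Zip":         [13053, 13067, 13053, 13067],
    "Age":         [28, 29, 37, 36],
    "Nationality": ["Brazilian", "US", "Indian", "Japanese"],
    "Condition":   ["Heart Disease", "Heart Disease", "Cancer", "Cancer"],
})

# External data the attacker already has (the voter registration list).
voters = pd.DataFrame({
    "Name":        ["John", "Paul", "Bob", "Chris"],
    "Zip":         [13067, 13067, 13067, 13067],
    "Age":         [45, 22, 29, 23],
    "Nationality": ["US", "US", "US", "US"],
})

# Joining on (Zip, Age, Nationality) re-identifies Bob and reveals his condition.
linked = voters.merge(published, on=["Zip", "Age", "Nationality"])
print(linked)  # one row: Bob, 13067, 29, US, Heart Disease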


SLIDE 27

Source of the Problem

  • There are some fields that may uniquely identify some individual
  • The attacker can use them to join with other sources and identify the individuals

  Non-Sensitive Data           Sensitive Data
  #  Zip  Age  Nationality     Condition
  …  …    …    …               …

  Quasi-identifier: {Zip, Age, Nationality}

SLIDE 28

Attribute Classification

  • I1, I2, ..., Im - identifier attributes
    – Ex: Name and SSN
    – Information that leads to a specific entity
  • K1, K2, ..., Kp - key attributes (quasi-identifiers)
    – Ex: Zip Code and Age
    – May be known by an intruder
  • S1, S2, ..., Sq - confidential attributes
    – Ex: Principal Diagnosis and Annual Income
    – Assumed to be unknown to an intruder

SLIDE 29

Attribute Types

  • Identifier, key (quasi-identifier), and confidential attributes

  RecID  Name           SSN        Age  State  Diagnosis     Income  Billing
  1      John Wayne     123456789  44   MI     AIDS          45,500  1,200
  2      Mary Gore      323232323  44   MI     Asthma        37,900  2,500
  3      John Banks     232345656  55   MI     AIDS          67,000  3,000
  4      Jesse Casey    333333333  44   MI     Asthma        21,000  1,000
  5      Jack Stone     444444444  55   MI     Asthma        90,000  900
  6      Mike Kopi      666666666  45   MI     Diabetes      48,000  750
  7      Angela Simms   777777777  25   IN     Diabetes      49,000  1,200
  8      Nike Wood      888888888  35   MI     AIDS          66,000  2,200
  9      Mikhail Aaron  999999999  55   MI     AIDS          69,000  4,200
  10     Sam Pall       100000000  45   MI     Tuberculosis  34,000  3,100

SLIDE 30

K-Anonymity Definition

  • The k-anonymity property is satisfied if the number of records for each combination of quasi-identifier (QID) values is greater than or equal to k
  • Assumption: the attacker has external knowledge that can link the QID with identifiers

SLIDE 31

K-Anonymity Example

  RecID  Age  Zip    Sex     Illness
  1      50   41076  Male    AIDS
  2      30   41076  Female  Asthma
  3      30   41076  Female  AIDS
  4      20   41076  Male    Asthma
  5      20   41076  Male    Asthma
  6      50   41076  Male    Diabetes

QID = { Age, Zip, Sex }

SELECT COUNT(*) FROM Patient GROUP BY Sex, Zip, Age;

If the results include groups with a count less than k, the relation Patient does not satisfy the k-anonymity property with respect to the QID.
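The same check can be written as a minimal pandas sketch; the patients DataFrame and the is_k_anonymous helper are hypothetical names that simply mirror the example above.

import pandas as pd

# Toy version of the Patient relation from the example (hypothetical data).
patients = pd.DataFrame({
    "Age":     [50, 30, 30, 20, 20, 50],
    "Zip":     [41076, 41076, 41076, 41076, 41076, 41076],
    "Sex":     ["Male", "Female", "Female", "Male", "Male", "Male"],
    "Illness": ["AIDS", "Asthma", "AIDS", "Asthma", "Asthma", "Diabetes"],
})

QID = ["Age", "Zip", "Sex"]

def is_k_anonymous(df, qid, k):
    # True iff every combination of QID values occurs in at least k records.
    group_sizes = df.groupby(qid).size()
    return bool((group_sizes >= k).all())

print(is_k_anonymous(patients, QID, k=2))  # True: every (Age, Zip, Sex) group has 2 records
print(is_k_anonymous(patients, QID, k=3))  # False: every group has only 2 records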

SLIDE 32

Achieving k-anonymity: k-anonymization

  • How to transform a dataset such that it satisfies k-anonymity while minimizing information loss

Original data

  Caucas  78712  Flu
  Asian   78705  Shingles
  Caucas  78754  Flu
  Asian   78705  Acne
  AfrAm   78705  Acne
  Caucas  78705  Flu

SLIDE 33

Achieving k-anonymity: k-anonymization

  • How to transform a dataset such that it satisfies k-anonymity while minimizing information loss

Original data

  Caucas  78712  Flu
  Asian   78705  Shingles
  Caucas  78754  Flu
  Asian   78705  Acne
  AfrAm   78705  Acne
  Caucas  78705  Flu

Generalized data

  Caucas       787XX  Flu
  Asian/AfrAm  78705  Shingles
  Caucas       787XX  Flu
  Asian/AfrAm  78705  Acne
  Asian/AfrAm  78705  Acne
  Caucas       787XX  Flu

SLIDE 34

Achieving k-Anonymity: k-anonymization

  • Generalization (a minimal code sketch follows this list)
    – Replace specific quasi-identifiers with more general values until there are k identical values
      • Example: area code instead of phone number
    – Partition ordered-value domains into intervals
  • Suppression
    – When generalization causes too much information loss
  • Lots of algorithms in the literature
    – Aim to minimize various forms of information loss
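To make generalization concrete, here is a minimal sketch of greedy global recoding, not any of the published algorithms: it coarsens the quasi-identifiers step by step (truncating ZIP digits, widening age intervals) until the k-anonymity check passes. The column names and the generalization ladder are illustrative assumptions.

import pandas as pd

def generalize_zip(zipcode, level):
    # Replace the last `level` digits of a ZIP code with 'X'.
    return zipcode if level == 0 else zipcode[:-level] + "X" * level

def generalize_age(age, width):
    # Map an age to an interval of the given width, e.g. 28 -> '20-29' for width 10.
    if width <= 1:
        return str(age)
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def k_anonymize(df, k):
    # Try successively coarser recodings of (Zip, Age) until every group has >= k records.
    for zip_level, age_width in [(0, 1), (1, 10), (2, 10), (3, 20), (5, 100)]:
        out = df.copy()
        out["Zip"] = out["Zip"].map(lambda z: generalize_zip(z, zip_level))
        out["Age"] = out["Age"].map(lambda a: generalize_age(a, age_width))
        if (out.groupby(["Zip", "Age"]).size() >= k).all():
            return out
    raise ValueError("could not reach k-anonymity; fall back to suppressing outlier records")

records = pd.DataFrame({
    "Zip":       ["13053", "13067", "13053", "13067"],
    "Age":       [28, 29, 37, 36],
    "Condition": ["Heart Disease", "Heart Disease", "Cancer", "Cancer"],
})
print(k_anonymize(records, k=2))  # Zip becomes 130XX, Age becomes 20-29 / 30-39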

SLIDE 35

What to protect: types of disclosure

  • Membership disclosure: attacker can tell that a given person is in the dataset
  • Identity disclosure: attacker can tell which record corresponds to a given person
  • Sensitive attribute disclosure: attacker can tell that a given person or record has a certain sensitive attribute

SLIDE 36

Linking Attack based on QID (w/o k-anonymity)

Published data (Condition is the sensitive attribute)

  #  Zip    Age  Nationality  Condition
  1  13053  28   Brazilian    Heart Disease
  2  13067  29   US           Heart Disease
  3  13053  37   Indian       Cancer
  4  13067  36   Japanese     Cancer

Attacker’s knowledge: voter registration list

  #  Zip    Age  Nationality  Name
  1  13067  45   US           John
  2  13067  22   US           Paul
  3  13067  29   US           Bob
  4  13067  23   US           Chris

  • Membership disclosure
  • Identity disclosure
  • Sensitive attribute disclosure

SLIDE 37

Linking Attack based on QID (w/ k-anonymity)

Published data (k-anonymized)

  #  Zip    Age  Nationality  Condition
  1  130xx  2x   American     Heart Disease
  2  130xx  2x   American     Heart Disease
  3  130xx  3x   Asian        Cancer
  4  130xx  3x   Asian        Cancer

Attacker’s knowledge: voter registration list

  #  Zip    Age  Nationality  Name
  1  13067  45   US           John
  2  13067  22   US           Paul
  3  13067  29   US           Bob
  4  13067  23   US           Chris

  • Membership disclosure
  • Identity disclosure
  • Sensitive attribute disclosure


SLIDE 39

Linking Attack based on QID (w/ k-anonymity)

Published data (k-anonymized)

  #  Zip    Age  Nationality  Condition
  1  130xx  2x   American     Heart Disease
  2  130xx  2x   American     Heart Disease
  3  130xx  3x   Asian        Cancer
  4  130xx  3x   Asian        Cancer

Attacker’s knowledge: voter registration list

  #  Zip    Age  Nationality  Name
  1  13067  45   US           John
  2  13067  22   US           Paul
  3  13067  29   US           Bob
  4  13067  23   US           Chris

  • Membership disclosure – may not be protected
  • Identity disclosure – protected, assuming the attacker knows nothing beyond the QID
  • Sensitive attribute disclosure – may not be protected

SLIDE 40

Attacks on k-Anonymity

A 3-anonymous patient table

  Zipcode  Age   Disease
  476**    2*    Heart Disease
  476**    2*    Heart Disease
  476**    2*    Heart Disease
  4790*    ≥40   Flu
  4790*    ≥40   Heart Disease
  4790*    ≥40   Cancer
  476**    3*    Heart Disease
  476**    3*    Cancer
  476**    3*    Cancer

Homogeneity attack: Bob (Zipcode 47678, Age 27)
Background knowledge attack: Carl (Zipcode 47673, Age 36)

  • K-anonymity protects against identity disclosure but does not provide sufficient protection against attribute disclosure
  • Homogeneity attack: sensitive values in a quasi-identifier group (equivalence class) lack diversity
  • Background knowledge attack: the attacker has knowledge about other records in the dataset
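The homogeneity attack can be read straight off the grouped data. A minimal pandas sketch using a toy copy of the 3-anonymous table above (the DataFrame name is illustrative):

import pandas as pd

release = pd.DataFrame({
    "Zipcode": ["476**"] * 3 + ["4790*"] * 3 + ["476**"] * 3,
    "Age":     ["2*"] * 3 + [">=40"] * 3 + ["3*"] * 3,
    "Disease": ["Heart Disease", "Heart Disease", "Heart Disease",
                "Flu", "Heart Disease", "Cancer",
                "Heart Disease", "Cancer", "Cancer"],
})

# The attacker knows Bob is 27 and lives in 47678, so Bob falls in the (476**, 2*) class.
bobs_class = release[(release["Zipcode"] == "476**") & (release["Age"] == "2*")]
print(bobs_class["Disease"].unique())  # ['Heart Disease']: every record agrees, so Bob has heart disease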

SLIDE 41

Another Attempt: l-Diversity

  Caucas       787XX  Flu
  Caucas       787XX  Shingles
  Caucas       787XX  Acne
  Caucas       787XX  Flu
  Caucas       787XX  Acne
  Caucas       787XX  Flu
  Asian/AfrAm  78XXX  Flu
  Asian/AfrAm  78XXX  Flu
  Asian/AfrAm  78XXX  Acne
  Asian/AfrAm  78XXX  Shingles
  Asian/AfrAm  78XXX  Acne
  Asian/AfrAm  78XXX  Flu

  • Protect against attribute disclosure
  • Sensitive attributes must be “diverse” within each quasi-identifier equivalence class
  • l-diversity: at least l “well-represented” values for the sensitive attribute in each equivalence class

[Machanavajjhala et al. ICDE ‘06]
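A minimal sketch of the simplest ("distinct") form of the check, using a toy copy of the release above; the paper also defines stronger entropy and recursive variants, which this sketch does not implement.

import pandas as pd

def is_distinct_l_diverse(df, qid, sensitive, l):
    # True iff every quasi-identifier equivalence class contains
    # at least l distinct values of the sensitive attribute.
    distinct_per_group = df.groupby(qid)[sensitive].nunique()
    return bool((distinct_per_group >= l).all())

release = pd.DataFrame({
    "Race":    ["Caucas"] * 6 + ["Asian/AfrAm"] * 6,
    "Zip":     ["787XX"] * 6 + ["78XXX"] * 6,
    "Disease": ["Flu", "Shingles", "Acne", "Flu", "Acne", "Flu",
                "Flu", "Flu", "Acne", "Shingles", "Acne", "Flu"],
})
print(is_distinct_l_diverse(release, ["Race", "Zip"], "Disease", l=3))  # True: 3 distinct diseases per class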

SLIDE 42

Neither Necessary, Nor Sufficient

Original dataset (99% have HIV-)

  … HIV-   … HIV-   … HIV-   … HIV-   … HIV-   … HIV+
  … HIV-   … HIV-   … HIV-   … HIV-   … HIV-   … HIV-

Anonymization A

  Q1 HIV+  Q1 HIV-  Q1 HIV+  Q1 HIV-  Q1 HIV+  Q1 HIV-
  Q2 HIV-  Q2 HIV-  Q2 HIV-  Q2 HIV-  Q2 HIV-  Q2 HIV-

  50% HIV- → the quasi-identifier group is “diverse”; this leaks a ton of information

Anonymization B

  Q1 HIV-  Q1 HIV-  Q1 HIV-  Q1 HIV+  Q1 HIV-  Q1 HIV-
  Q2 HIV-  Q2 HIV-  Q2 HIV-  Q2 HIV-  Q2 HIV-  Q2 Flu

  99% HIV- → the quasi-identifier group is not “diverse” …yet the anonymized database does not leak anything

SLIDE 43

Attacks on l-diversity: Skewness Attack

  • Example: the sensitive attribute is HIV+ (1%) or HIV- (99%)
  • Consider an equivalence class that contains an equal number of HIV+ and HIV- records
    – Diverse, but potentially violates privacy!
  • l-diversity does not differentiate:
    – Equivalence class 1: 49 HIV+ and 1 HIV-
    – Equivalence class 2: 1 HIV+ and 49 HIV-

l-diversity does not consider the overall distribution of sensitive values!
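A small worked calculation of the attacker's belief change, sketched in Python with the numbers from the bullets above:

# Prior: in the overall population only 1% of individuals are HIV+.
prior = 0.01

# After linking the target to an equivalence class with equal numbers of
# HIV+ and HIV- records, the attacker's belief jumps from 1% to 50%.
posterior = 25 / (25 + 25)
print(prior, "->", posterior)  # 0.01 -> 0.5

# l-diversity cannot tell these two classes apart, yet they disclose very differently:
class_1 = 49 / (49 + 1)  # anyone linked to class 1 is HIV+ with probability 0.98
class_2 = 1 / (1 + 49)   # anyone linked to class 2 is HIV+ with probability 0.02
print(class_1, class_2)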

SLIDE 44

Attacks on l-diversity: Similarity Attack

Bob: Zip 47678, Age 27

A 3-diverse patient table

  Zipcode  Age   Salary  Disease
  476**    2*    20K     Gastric Ulcer
  476**    2*    30K     Gastritis
  476**    2*    40K     Stomach Cancer
  4790*    ≥40   50K     Gastritis
  4790*    ≥40   100K    Flu
  4790*    ≥40   70K     Bronchitis
  476**    3*    60K     Bronchitis
  476**    3*    80K     Pneumonia
  476**    3*    90K     Stomach Cancer

Conclusions:
  1. Bob’s salary is in [20k, 40k], which is relatively low
  2. Bob has some stomach-related disease

l-diversity does not consider the semantics of sensitive values!

SLIDE 45

t-Closeness

  Caucas       787XX  Flu
  Caucas       787XX  Shingles
  Caucas       787XX  Acne
  Caucas       787XX  Flu
  Caucas       787XX  Acne
  Caucas       787XX  Flu
  Asian/AfrAm  78XXX  Flu
  Asian/AfrAm  78XXX  Flu
  Asian/AfrAm  78XXX  Acne
  Asian/AfrAm  78XXX  Shingles
  Asian/AfrAm  78XXX  Acne
  Asian/AfrAm  78XXX  Flu

  • The distribution of sensitive attributes within each quasi-identifier group should be “close” to their distribution in the entire original database

[Li et al. ICDE ‘07]
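A minimal sketch of the idea on the same toy release; the paper measures closeness with the Earth Mover's Distance, but for a purely categorical attribute this sketch substitutes the simpler total variation distance.

import pandas as pd

def max_group_distance(df, qid, sensitive):
    # Largest total-variation distance between a QID group's distribution of
    # sensitive values and the distribution over the whole table.
    overall = df[sensitive].value_counts(normalize=True)
    worst = 0.0
    for _, group in df.groupby(qid):
        dist = group[sensitive].value_counts(normalize=True)
        diff = overall.subtract(dist, fill_value=0.0).abs().sum() / 2
        worst = max(worst, diff)
    return worst

release = pd.DataFrame({
    "Race":    ["Caucas"] * 6 + ["Asian/AfrAm"] * 6,
    "Zip":     ["787XX"] * 6 + ["78XXX"] * 6,
    "Disease": ["Flu", "Shingles", "Acne", "Flu", "Acne", "Flu",
                "Flu", "Flu", "Acne", "Shingles", "Acne", "Flu"],
})
print(max_group_distance(release, ["Race", "Zip"], "Disease"))  # 0.0: each group matches the overall mix
# The release is t-close for any t at least as large as this maximum distance.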

SLIDE 46

k-Anonymous, l-diverse, t-close Dataset

  Caucas       787XX  HIV+  Flu
  Asian/AfrAm  787XX  HIV-  Flu
  Asian/AfrAm  787XX  HIV+  Shingles
  Caucas       787XX  HIV-  Acne
  Caucas       787XX  HIV-  Shingles
  Caucas       787XX  HIV-  Acne

This is k-anonymous, l-diverse and t-close… so secure, right?

SLIDE 47

What Does Attacker Know?

  Caucas       787XX  HIV+  Flu
  Asian/AfrAm  787XX  HIV-  Flu
  Asian/AfrAm  787XX  HIV+  Shingles
  Caucas       787XX  HIV-  Acne
  Caucas       787XX  HIV-  Shingles
  Caucas       787XX  HIV-  Acne

“Bob is Caucasian and I heard he was admitted to hospital with flu…”

SLIDE 48

What Does Attacker Know?

  Caucas       787XX  HIV+  Flu
  Asian/AfrAm  787XX  HIV-  Flu
  Asian/AfrAm  787XX  HIV+  Shingles
  Caucas       787XX  HIV-  Acne
  Caucas       787XX  HIV-  Shingles
  Caucas       787XX  HIV-  Acne

“Bob is Caucasian and I heard he was admitted to hospital… And I know three other Caucasians admitted to hospital with Acne or Shingles…”

SLIDE 49

Issues with Syntactic Privacy Notions

  • Syntactic
    – Focuses on data transformation, not on what can be learned from the anonymized dataset
    – A “k-anonymous” dataset (or its variants) can leak sensitive information
  • “Quasi-identifier” fallacy
    – Assumes a priori that the attacker will not know certain information about his target
    – Any attribute can be a potential quasi-identifier (AOL example)

SLIDE 50

Some Takeaways

  • “Security requires a particular mindset. Security professionals - at least the good ones - see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t vote without trying to figure out how to vote twice. They just can’t help it.” – Bruce Schneier (2008)
  • Think about how things may fail instead of how they may work

SLIDE 51

The adversarial mindset: Four Key Questions

  1. Security/privacy goal: What policy or good state is meant to be enforced?
  2. Adversarial model: Who is the adversary? What is the adversary’s space of possible actions?
  3. Mechanisms: Are the right security mechanisms in place to achieve the security goal given the adversarial model?
  4. Incentives: Will human factors and economics favor or disfavor the security goal?