SLIDE 1

02 - Introduction to Security

With material from Dave Levin, Mike Hicks

SLIDE 2
  • Ad: Joe Bonneau tomorrow
  • Comments on the reading
  • Defining security properties
  • Threat modeling
  • Defensive strategies
  • Intro to encryption
SLIDE 3

Defining security

  • Requirements
      • Confidentiality (and Privacy and Anonymity)
      • Integrity
      • Availability
  • Supporting mechanisms
      • Authentication
      • Authorization
      • Auditability
SLIDE 4

Privacy and Confidentiality

  • Definition: Sensitive information is not leaked to unauthorized parties
  • Called privacy for individuals, confidentiality for data
  • Example policy: Bank account status (including balance) known only to the account owner

  • Leaking directly or via side channels
  • Example: Manipulating the system to directly display Bob’s bank balance to Alice
  • Example: Determining Bob has an account at Bank A based on a shorter delay on login failure

https://www.youtube.com/watch?v=Nlf7YM71k5U

Secrecy vs. Privacy?
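The login-failure example is a timing side channel: the attacker learns a secret from how long the system takes, not from what it says. As an illustrative sketch (not from the slides), here is how a naive early-exit comparison leaks timing, next to the constant-time comparison Python's standard library provides:

```python
import hmac

def naive_check(stored: str, attempt: str) -> bool:
    """Early-exit comparison: returns as soon as a character differs,
    so the response time reveals how many leading characters matched."""
    if len(stored) != len(attempt):
        return False
    for a, b in zip(stored, attempt):
        if a != b:
            return False
    return True

def constant_time_check(stored: str, attempt: str) -> bool:
    """hmac.compare_digest takes time independent of where the first
    mismatch occurs, closing this particular timing channel."""
    return hmac.compare_digest(stored.encode(), attempt.encode())
```

Both functions return the same booleans; the difference is visible only to an adversary with a stopwatch.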

SLIDE 5

Anonymity

  • A specific kind of privacy
  • Attacker cannot determine who is communicating
  • Sender, receiver or both
  • Example: Non-account holders should be able to browse the bank site without being tracked
  • Here the adversary is the bank
  • The previous examples considered other account holders as possible adversaries

SLIDE 6

Integrity

  • Definition: Sensitive information not changed by unauthorized parties or computations
  • Example: Only the account owner can authorize withdrawals from her account
  • Violations of integrity can also be direct or indirect
  • Example: Withdraw from the account yourself vs. confusing the system into doing it

SLIDE 7

Availability

  • Definition: A system is responsive to requests
  • Example: A user may always access her account for balance queries or withdrawals
  • Denial of Service (DoS) attacks attempt to compromise availability
  • By busying a system with useless work
  • Or cutting off network access
SLIDE 8

Supporting mechanisms

  • Leslie Lamport’s Gold Standard defines mechanisms provided by a system to enforce its requirements
  • Authentication
  • Authorization
  • Audit
  • The gold standard is both requirement and design
  • The sorts of policies that are authorized determine the authorization mechanism
  • The sorts of users a system has determine how they should be authenticated

SLIDE 9

Authentication

  • Who/what is the subject of security policies?
  • Need a notion of identity and a way to connect an action with an identity
  • a.k.a. a principal
  • How can the system tell a user is who she says she is?
  • What (only) she knows (e.g., password)
  • What she is (e.g., biometric)
  • What she has (e.g., smartphone, RSA token)
  • Authentication mechanisms that employ more than one of these factors are called multi-factor authentication
  • E.g., a password plus a special code texted to the user’s smartphone
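The code-texted-to-the-phone factor is typically a one-time password. As a hedged sketch (not part of the original slides), an HMAC-based one-time password (HOTP, RFC 4226) can be computed with nothing but the Python standard library; real deployments add rate limiting and counter resynchronization:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): server and device share `secret`; each derives
    the same short code from a moving counter, so presenting a valid
    code demonstrates possession of the device (a second factor)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 4226 test secret `b"12345678901234567890"`, the first two codes are 755224 and 287082.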
SLIDE 10

Authorization

  • Defines when a principal may perform an action
  • Example: Bob is authorized to access his own account, but not Alice’s account
  • Access-control policies define what actions might be authorized

  • May be role-based, user-based, etc.
SLIDE 11

Audit

  • Retain enough information to determine the circumstances of a breach or misbehavior (or establish that one did not occur)
  • Often stored in log files
  • Must be protected from tampering
  • Disallow access that might violate other policies
  • Example: Every account-related action is logged locally and mirrored at a separate site

  • Only authorized bank employees can view log
SLIDE 12

Threat Modeling

(Risk Analysis)

SLIDE 13

Threat Model

  • Make adversary’s assumed powers explicit
  • Must match reality, otherwise risk analysis of the system will be wrong
  • The threat model is critically important
  • If you don’t know what the attacker can (and can’t) do, how can you know whether your design will repel that attacker?

  • This is part of risk analysis
SLIDE 14

Example: Network User

  • Can connect to a service via the network
  • May be anonymous
  • Can:
  • Measure size, timing of requests, responses
  • Run parallel sessions
  • Provide malformed inputs or messages
  • Drop or send extra messages
  • Example attacks: SQL injection, XSS, CSRF, buffer overrun
SLIDE 15

Example: Snooping User

  • Attacker on same network as other users
  • e.g., Unencrypted Wi-Fi at coffee shop
  • Can also:
  • Read/measure others’ messages
  • Intercept, duplicate, and modify
  • Example attacks: Session hijacking, other data theft, side-channel attacks, denial of service

SLIDE 16

Example: Co-located User

  • Attacker on same machine as other users
  • E.g., malware installed on a user’s laptop
  • Thus, can additionally
  • Read/write user’s files (e.g., cookies) and memory
  • Snoop keypresses and other events
  • Read/write the user’s display (e.g., to spoof)
  • Example attacks: Password theft (and theft of other credentials/secrets)

SLIDE 17

Threat-driven Design

  • Different threat models will elicit different responses
  • Network-only attackers imply message traffic is safe
  • No need to encrypt communications
  • This is what the telnet remote login software assumed
  • Snooping attackers mean message traffic is visible
  • So use encrypted wifi (link layer), an encrypted network layer (IPsec), or an encrypted application layer (SSL)
  • Which is most appropriate for your system?
  • Co-located attacker can access local files, memory
  • Cannot store unencrypted secrets, like passwords
  • Worry about keyloggers as well (2nd factor?)
SLIDE 18

Bad Model = Bad Security

  • Assumptions you make are potential holes the attacker can exploit
  • E.g., assuming no snooping users is no longer valid
  • Wi-fi networks are now prevalent in most deployments
  • Other mistaken assumptions
  • Assumption: Encrypted traffic carries no information
  • Not true! By analyzing the size and distribution of messages, you can infer application state
  • Assumption: Timing channels carry little information
  • Not true! Timing measurements of previous RSA implementations could eventually reveal an SSL secret key

SLIDE 19

Finding a good model

  • Compare against similar systems
  • What attacks does their design contend with?
  • Understand past attacks and attack patterns
  • How do they apply to your system?
  • Challenge assumptions in your design
  • What happens if assumption is false?
  • What would a breach potentially cost you?
  • How hard would it be to get rid of an assumption, allowing for a stronger adversary?
  • What would that development cost?
SLIDE 20

Exercise: Threat modeling

  • Think about security of a home
  • Come up with at least 2 different threat models
  • That lead to very different security decisions
  • Explain your threat model and suggest defenses
SLIDE 21

Defense: Allocating resources

  • It’s impossible to stop everything
  • Defender must be correct 100% of the time, the attacker only once

  • Time, cost, people
  • Better uses of resources
  • Think through likelihoods, priorities
  • Effectiveness vs. cost of defense
SLIDE 22

Defensive strategies

  • Prevention: Eliminate software defects entirely
  • Example: The Heartbleed bug would have been prevented by using a type-safe language, like Java
  • Mitigation: Reduce harm from exploitation of unknown defects
  • Example: Run each browser tab in a separate process, so exploiting one tab does not give access to data in another
  • Detection/Recovery: Identify and understand an attack; undo damage
  • Examples: Monitoring, snapshotting
  • Incentives: Legal/criminal threats, economic incentives
  • Examples: Credit card vs. small business banking
SLIDE 23

Some Principles

  • Favor simplicity
  • Use fail-safe defaults
  • Do not expect expert users
  • Trust with reluctance
  • Minimize trusted computing base
  • Grant the least privilege possible; compartmentalize
  • Defend in Depth
  • If one fails, maybe the next will succeed
  • Use community resources to stack defenses
  • Monitor and trace
SLIDE 24

Intro to Crypto

https://en.wikipedia.org/wiki/File:Bletchley_Park_Bombe4.jpg

SLIDE 25

Crypto is everywhere

  • Secure comms:
  • Web traffic (HTTPS)
  • Wireless traffic (802.11, WPA2, GSM, Bluetooth)
  • Files on disk: Bitlocker, FileVault
  • User authentication: Kerberos
  • … and much more
SLIDE 26

Overall goal: Protect communication

[Diagram: Alice sends message m (“curiouser and curiouser!”) to Bob over a public channel while Eve listens in.]

Powerful adversary: say, any polynomial-time algorithm

SLIDE 27

Security goals

  • Privacy
  • Integrity
  • Authentication
SLIDE 28

Goal: Privacy

[Diagram: Alice encrypts m (“curiouser and curiouser!”) with E and sends it over the public channel; Bob decrypts with D. Eve sees only unintelligible ciphertext.]

Eve should not be able to learn m. Not even one bit!

SLIDE 29

Goal: Integrity

[Diagram: Alice sends m (“curiouser and curiouser!”) over the public channel; Eve substitutes m’ (“curious and curious?”); Bob’s check reports ERROR.]

Eve should not be able to alter m without detection.

Works regardless of whether Eve knows the contents of m!

SLIDE 30

Goal: Authenticity

[Diagram: Eve injects a forged message (“Why is a raven like a writing desk?” signed, Alice); Bob’s verification reports ERROR.]

Eve should not be able to forge messages as Alice

SLIDE 31

Symmetric crypto

[Diagram: Alice computes c = E(m, ke) and sends c over the public channel; Bob computes D(c, kd) = m (or error).]

  • k = ke = kd
  • Everyone who knows k knows the whole secret
SLIDE 32
  • How did Alice and Bob both get the secret key?
  • That is a different problem
  • Not solved by symmetric crypto. Assumed.
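To make the E and D boxes concrete, here is a deliberately toy symmetric scheme (not from the slides): a keystream derived by hashing key||nonce||counter, XORed with the message. It is for illustration only, with real systems using a vetted cipher such as AES-GCM, but it shows that encryption and decryption use the same key, k = ke = kd:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Pseudorandom keystream from SHA-256(key || nonce || counter)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, message: bytes) -> bytes:
    """E(m, k): prepend a fresh nonce, XOR the message with the keystream."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(message))
    return nonce + bytes(m ^ s for m, s in zip(message, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """D(c, k): the same key yields the same keystream; XOR is its own inverse."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = _keystream(key, nonce, len(body))
    return bytes(c ^ s for c, s in zip(body, stream))
```

Anyone holding k can run either function, which is exactly why key distribution is the hard part.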
SLIDE 33

Asymmetric crypto

[Diagram: Alice computes c = E(m, ke) and sends c over the public channel; Bob computes D(c, kd) = m (or error).]

  • ke != kd
  • kd = private key, ke = public key
  • Bob computes both, gives public key to Alice
  • Alice sends a message to Bob: c = E(m, ke)
  • Bob can decrypt it: m = D(c, kd)
  • Anyone can send, only Bob can read!
SLIDE 34
  • How did Alice get Bob’s public key?
  • That’s easy, he sent it in plain / publicly
  • BUT, how does she know it came from Bob?
  • And not from Eve?
  • Again, this is a separate problem. Assumed.
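The asymmetric E and D boxes can be made concrete with textbook RSA on tiny primes. This sketch is insecure and for illustration only (no padding, toy key size; not part of the slides), but it shows why anyone can encrypt with ke while only Bob, who knows kd, can decrypt:

```python
# Textbook RSA with tiny primes -- insecure, illustration only.
p, q = 61, 53
n = p * q                      # public modulus (part of both keys)
phi = (p - 1) * (q - 1)
ke = 17                        # public exponent: Bob hands this to Alice
kd = pow(ke, -1, phi)          # private exponent: only Bob can compute this

def E(m: int, ke: int) -> int:
    """Anyone who knows the public key can encrypt to Bob."""
    return pow(m, ke, n)

def D(c: int, kd: int) -> int:
    """Only the holder of the private key can decrypt."""
    return pow(c, kd, n)
```

Computing kd from ke requires phi, hence the factorization of n; keeping p and q secret is what keeps kd private.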
SLIDE 35

Message authentication

[Diagram: Alice computes s = Sign(m, ks) and sends m||s over the public channel; Bob checks Verify(m, s, kv) ?= true.]

Only someone who knows ks could have sent the message!
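One way to instantiate the Sign/Verify boxes is a MAC, in which case the signing and verification keys coincide (ks == kv); public-key signatures split the two keys so that anyone can verify. A standard-library sketch using HMAC-SHA256 (illustrative, not the slides' own construction):

```python
import hashlib
import hmac

def Sign(m: bytes, ks: bytes) -> bytes:
    """s = Sign(m, ks): a tag only holders of ks can produce."""
    return hmac.new(ks, m, hashlib.sha256).digest()

def Verify(m: bytes, s: bytes, kv: bytes) -> bool:
    """Recompute the tag with kv and compare in constant time."""
    return hmac.compare_digest(hmac.new(kv, m, hashlib.sha256).digest(), s)
```

Tampering with m, the tag, or using the wrong key all make Verify return false.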

SLIDE 36

Crypto summary

  • Symmetric trust model: everyone shares the same secret k
  • Privacy: private-key encryption (stream ciphers, block ciphers)
  • Authenticity, integrity: hashes, MACs, authenticated encryption
  • Asymmetric trust model: every party has her own secret
  • Privacy: public-key encryption
  • Authenticity, integrity: signatures, PKI, certificates, SSL/TLS, user authentication

Assumptions: (1) All algorithms public, (2) security based only on key size

SLIDE 37

Known usability issues

  • Rarely configured by default
  • Requires a good password
  • What happens if you lose it / forget it?
  • How to get the key?
  • Cross-device issues (keys, compatibility)
  • “Only paranoid people use encryption”
SLIDE 38

Key exchange

SLIDE 39

One solution: Trusted third party (TTP)

  • Requires N keys rather than ~N^2 pairwise keys
  • TTP is a bottleneck for every message
  • TTP must be online at all times
  • TTP can read every message
  • Does not solve bootstrapping problem

[Diagram: users U1, U2, U3, U4 each share one symmetric key (k1, k2, k3, k4) with the TTP.]

SLIDE 40

Session keys and tickets

Used for Kerberos

[Diagram: Alice, sharing kAT with the TTP, requests “Bob”; the TTP returns a fresh session key kAB and Ticket = E(kBT, “Alice||Bob||kAB”); Alice forwards the Ticket with “Hi” to Bob, who decrypts it to recover kAB.]

  • TTP is a bottleneck for every message
  • TTP must be online at all times
  • TTP can read every message
  • Does not solve bootstrapping problem
SLIDE 41

TTP for Asymmetric, take 1

  • TTP is a bottleneck for every conversation
  • TTP must be online to start a new conversation
  • TTP can read every message
  • TTP must be trusted to tell the truth!
  • Does not solve bootstrapping problem

[Diagram: a trusted directory service stores PK1 and PK2; U1 retrieves PK2 and U2 retrieves PK1 from it.]

SLIDE 42

TTP for asymmetric, take 2

[Diagram: Alice and Bob each hold the TTP’s public key PKT. The TTP gives Alice PKA plus verification: a certificate reading “Alice owns PKA. Signed, PKT”. Alice sends S(SKA, E(PKB, m)) + cert; Bob verifies the cert with PKT and the message with PKA.]

Certificates

SLIDE 43

With certificates

  • TTP is a bottleneck for every conversation
  • TTP must be online to start a new conversation
  • TTP can read every message
  • TTP must be trusted to tell the truth!
  • Does not solve bootstrapping problem
SLIDE 44

Web of trust

  • Alternative PKI — not hierarchical
  • Pioneered by PGP
  • Don’t rely on centralized authorities
  • Everyone issues certificates for people they know
SLIDE 45

Trust chains in web of trust

[Diagram: Alice trusts Bob; Bob vouches for Cookie; Cookie vouches for Donald; Alice sends a message to Donald.]

SLIDE 46

A matter of trust

  • Context:
  • Alice trusts Bob to diligently check identity
  • But Bob is only signing identity, not necessarily belief that Cookie is equally vigilant

  • Transitivity: Alice trusts Bob, and Bob trusts Cookie.
  • But does that mean Alice should trust Cookie?
  • Trust for honesty == trust for good judgment?
SLIDE 47

Trusting the Trusted Third Party

http://randomrock.com.br/randomrock/rock-n-movies-20-watchmen/

SLIDE 48

Where do CAs come from?

  • CA public keys shipped with browsers, OS
  • iOS9 ships with >50 that start with A-C
  • see here for full list
SLIDE 49

CA compromise

  • 2001: Verisign issued two code-signing certificates for Microsoft Corporation
  • To someone who didn’t actually work at MS
  • No functional revocation paradigm
  • 2011: Signing keys compromised at Comodo and DigiNotar
  • Bad certs for Google, Yahoo!, Tor, others
  • Seem to have been used mostly in Iran
  • Some CAs are less picky than others
SLIDE 50

Case study: Superfish (Feb 2015)

  • Lenovo laptops shipped with “Superfish” adware
  • Installs self-signed root cert into browsers
  • MITM on every HTTPS site to inject ads
  • Worse: Same private key for every laptop
  • Password = “komodia” (the name of the company behind the interception software)
  • Lenovo “did not find any evidence to substantiate security concerns”

http://arstechnica.com/security/2015/02/lenovo-pcs-ship-with-man-in-the-middle-adware-that-breaks-https-connections/

http://www.sainteldaily.com/archives/11400

SLIDE 51

Fixing rogue CA problems

  • Limit which CAs can issue for which domains
  • Certificate pinning
  • Browser, apps fix certain CA or cert for a server
  • Shipped with product, or on first use
  • Not always appropriate, hard to maintain
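At its core, pinning is comparing the certificate the server presents against a value the client already trusts. A hedged sketch of that core check (the function name and hex-fingerprint format are illustrative, not any particular browser's API):

```python
import hashlib

def matches_pin(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    """Accept the certificate only if its SHA-256 fingerprint equals the
    pinned value shipped with the app. A rogue CA can mint a cert that
    validates for the domain, but it cannot match the pinned fingerprint."""
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256_hex
```

The maintenance burden noted above follows directly: every legitimate key rotation changes the fingerprint and requires shipping a new pin.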
SLIDE 52

Fixing rogue CA problems (2)

  • Broad surveillance
  • People on many networks report certs to Notaries
  • Check that others saw the same cert you did
  • Privacy implications
  • Public unforgeable audit log
  • Uses crypto, Merkle hash trees
  • Only accept certs published in log
  • Same idea: Non-equivocation
  • Being implemented now
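The unforgeable audit log rests on Merkle hash trees. As an illustrative sketch (not the actual Certificate Transparency encoding): hash each log entry, then hash pairs upward until a single root remains; publishing the root commits the log to every entry, since altering any certificate changes the root:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list) -> bytes:
    """Fold a list of log entries (bytes) into one 32-byte commitment.
    Distinct prefixes for leaves (0x00) and interior nodes (0x01)
    prevent an entry from being confused with an interior hash."""
    level = [_h(b"\x00" + e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:             # duplicate the last node if odd
            level.append(level[-1])
        level = [_h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A log can also prove that a specific certificate is included by revealing only the O(log N) sibling hashes on its path to the published root.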

https://www.eff.org/observatory https://www.eff.org/sovereign-keys