Customized protocol modeling for detection of guessing and DoS attacks
Bogdan Groza, Marius Minea
Institute e-Austria Timișoara and Politehnica University of Timișoara
Formal Methods for Components and Objects, Graz, 30 November 2010


SLIDE 1

Customized protocol modeling for detection of guessing and DoS attacks

Bogdan Groza, Marius Minea

Institute e-Austria Timișoara

and

Politehnica University of Timișoara

Formal Methods for Components and Objects
Graz, 30 November 2010

SLIDE 2

Components and environment in security modeling

Usual execution for components: (component step; environment step)*

Modeling security: the intruder interacts with the protocol as a protocol agent, between protocol steps, using Dolev-Yao abilities:
(component step; environment step*)*

SLIDE 3

Implementation choices

ASLan is based on rewriting rules. How to customize?

  • add extra intruder capabilities, but this would require changing the backends
  • add custom rewrite rules: flexible, since we can control when they are executed

Applications: denial-of-service attacks, guessing attacks

SLIDE 4

DoS attacks by resource exhaustion

General intuition: force the victim to consume excessive resources, with less resource use by the attacker. In cryptographic protocols, some operations are more expensive (exponentiation, public-key encryption/decryption, signatures).

Root causes:
  • design flaws: cost imbalance (usually affects the server side); solution: cryptographic puzzles (client puzzles)
  • lack of authenticity: the adversary can steal computational work; basic principle: include the sender identity in the message

SLIDE 5

Classifying DoS attacks

Excessive use:
  • no abnormal protocol use
  • adversary consumes fewer resources than honest principals (flooding, spam, ...)

Malicious use:
  • adversary brings the protocol to an abnormal state
  • protocol goals not completed correctly

SLIDE 6

Modelling cost in transition rules

Augment the model with a cost for protocol transitions:

  LHS.cost(P, C1) ⇒ RHS.cost(P, C2)

Add the cost of cryptographic primitives in intruder deductions:

  iknows(X).iknows(Y).cost(i, C1).sum(C1, cop, C2) ⇒ iknows(op(X, Y)).cost(i, C2)
    with op ∈ {exp, enc, sig}
  iknows(enc(K, X)).iknows(K).cost(i, C1).sum(C1, cdec, C2) ⇒ iknows(X).cost(i, C2)
    (for decryption)

Cost set: monoid {0, low, high, expensive} [Meadows '01]
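The cost bookkeeping above can be mimicked in a few lines of Python (a sketch, not the authors' ASLan model): the slides leave the monoid sum abstract, so here it is assumed to be the maximum under the ordering 0 < low < high < expensive, and the per-operation costs are illustrative.

```python
# Cost monoid {0, low, high, expensive}; "sum" is assumed here to be
# the maximum under this ordering (an illustrative choice).
COSTS = ["zero", "low", "high", "expensive"]

def cost_sum(c1, c2):
    return COSTS[max(COSTS.index(c1), COSTS.index(c2))]

# hypothetical per-operation costs for intruder deductions
OP_COST = {"exp": "expensive", "enc": "high", "sig": "expensive", "dec": "high"}

def apply_op(cost, agent, op):
    # mirrors iknows(X).iknows(Y).cost(i,C1).sum(C1,cop,C2) => ... cost(i,C2)
    cost[agent] = cost_sum(cost.get(agent, "zero"), OP_COST[op])
    return cost

cost = {"i": "zero"}
apply_op(cost, "i", "enc")   # intruder encrypts: high
apply_op(cost, "i", "exp")   # intruder exponentiates: expensive
print(cost["i"])  # expensive
```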

SLIDE 7

Formalizing excessive use

  • 1. the session is initiated by the adversary, and
  • 2. the adversary's cost is less than the honest principal's cost

attack_state dos_excessive(P) := initiate(i).cost(i, Ci).cost(P, CP).less(Ci, CP)

Augment rules to track only the cost of adversary-initiated sessions (SID):

  LHS.initiate(i, SID).cost(P, C1).sum(C1, cstep, C2) ⇒ RHS.cost(P, C2)
  LHS.initiate(A, SID).not(equal(i, A)) ⇒ RHS

⇒ can also model distributed DoS
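A minimal sketch of the excessive-use condition, using plain numeric costs instead of the cost monoid for simplicity (the session record fields are illustrative, not taken from the model):

```python
# Excessive-use DoS: count cost only for sessions initiated by the
# intruder i; the attack state is reached when i's accumulated cost
# is strictly less than the honest principal P's.
def dos_excessive(sessions):
    """sessions: list of dicts {initiator, cost_i, cost_p} (illustrative)."""
    ci = sum(s["cost_i"] for s in sessions if s["initiator"] == "i")
    cp = sum(s["cost_p"] for s in sessions if s["initiator"] == "i")
    return ci < cp

# intruder floods: each cheap request (cost 1) forces an expensive
# server operation, e.g. a signature verification (cost 10)
runs = [{"initiator": "i", "cost_i": 1, "cost_p": 10} for _ in range(5)]
runs.append({"initiator": "a", "cost_i": 0, "cost_p": 10})  # honest run: not counted
print(dos_excessive(runs))  # True
```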

SLIDE 8

Formalizing malicious use

In normal use (injective agreement), protocol events match: every message Msg received by Rcv from Snd in session SID at step label Lbl is due to a corresponding send.

Absence of this property is an attack on protocol functionality (authentication):

  nagree(Snd, Rcv, Msg, Lbl, SID) := recv(Snd, Rcv, Msg, Lbl, SID).not(send(Snd, Rcv, Msg, Lbl, SID))

This can happen due to the adversary reusing values from previous runs
⇒ track agent cost in compromised sessions, but not in normal ones
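The nagree predicate can be illustrated as multiset matching of send and receive events (a sketch with hypothetical event tuples, not the ASLan encoding):

```python
from collections import Counter

# Injective agreement: every recv must consume a distinct matching send.
# Unmatched recv events are witnesses of the nagree attack condition.
def nagree(sends, recvs):
    pool = Counter(sends)
    bad = []
    for ev in recvs:
        if pool[ev] > 0:
            pool[ev] -= 1      # matched: agreement holds for this event
        else:
            bad.append(ev)     # no matching send: tampering witness
    return bad

sends = [("a", "b", "msg1", "l1", "sid1")]
recvs = [("a", "b", "msg1", "l1", "sid1"),
         ("a", "b", "msg1", "l1", "sid2")]   # adversary replays into sid2
print(nagree(sends, recvs))  # [('a', 'b', 'msg1', 'l1', 'sid2')]
```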

SLIDE 9

Malicious use in multiple sessions

  • 1. track per-session cost for normal sessions

LHS.not(bad(SID)).scost(P, C1, SID).sum(C1, cstep, C2) ⇒ RHS.scost(P, C2, SID)

  • 2. identify session tampering ⇒ switch cost tracking

LHS.not(send(S, P, M, L, SID)).scost(P, C1, SID).sum(C1, cstep, C2) ⇒ RHS.recv(S, P, M, L, SID).bad(SID).cost(P, C2)

  • 3. track per-principal cost for tampered sessions

LHS.bad(SID).cost(P, C1).sum(C1, cstep, C2) ⇒ RHS.bad(SID).cost(P, C2)
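The three-rule switch above can be sketched as follows (numeric costs and helper names are assumptions for illustration, not the authors' model):

```python
# Per-session cost (scost) while a session is normal; once tampering is
# detected (a recv with no matching send), the session is marked bad and
# its accumulated cost is folded into the principal's global cost.
class CostTracker:
    def __init__(self):
        self.scost = {}   # rule 1: per-session cost for normal sessions
        self.cost = 0     # rule 3: per-principal cost for tampered sessions
        self.bad = set()

    def step(self, sid, c_step, tampered=False):
        if tampered and sid not in self.bad:
            # rule 2: switch tracking, charge session cost to the principal
            self.bad.add(sid)
            self.cost += self.scost.pop(sid, 0) + c_step
        elif sid in self.bad:
            # rule 3: tampered session, keep charging the principal
            self.cost += c_step
        else:
            # rule 1: normal session, track per-session cost
            self.scost[sid] = self.scost.get(sid, 0) + c_step

t = CostTracker()
t.step("s1", 2)                 # normal step
t.step("s1", 3)                 # normal step
t.step("s1", 5, tampered=True)  # tampering detected: switch tracking
print(t.cost)  # 10
```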

SLIDE 10

Case studies

  • 1. Station-to-station protocol: reproduced Lowe’s attack

Protocol:
  1. A → B : α^x
  2. B → A : α^y, CertB, Ek(sigB(α^y, α^x))
  3. A → B : CertA, Ek(sigA(α^x, α^y))

Lowe's attack:
  A → Adv(B) : α^x
  Adv → B : α^x
  B → Adv : α^y, CertB, Ek(sigB(α^y, α^x))
  Adv(B) → A : α^y, CertB, Ek(sigB(α^y, α^x))
  A → Adv(B) : CertA, Ek(sigA(α^x, α^y))

  • 2. Just Fast Keying (augmented with client puzzles): malicious use

exploit honest initiator to solve puzzles of targeted responder

SLIDE 11

  • 1. Custom rules for DoS attacks
  • 2. Custom rules for guessing attacks

SLIDE 12

Why detect guessing attacks?

Important:
  • weak passwords are common
  • vulnerable protocols are still in use

Realistic, if secrets have low entropy

Few tools capable of detecting guessing attacks: Lowe '02, Corin et al. '04, Blanchet-Abadi-Fournet '08

SLIDE 13

What is needed to guess?

  • 1. guess a value for the secret s
  • 2. compute a verifier value that confirms the guess

Example guessing conditions [Lowe, 2002]:
  • Adv knows v, Es(v): guess s, and verify the known value v
  • Adv knows Es(v.v): guess s, and obtain v in 2 ways
  • Adv knows Es(s): guess s, decrypt Es(s), and verify that s is obtained
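Lowe's first condition can be demonstrated with a toy offline guess-and-verify loop. Here sha256(s || v) merely stands in for the verifier Es(v), and the 4-digit PIN-like secret is illustrative:

```python
import hashlib

# Toy stand-in for Es(v): a keyed one-way image of the known value v
# under the secret s (NOT the encryption scheme from the slides).
def E(s, v):
    return hashlib.sha256(s + v).hexdigest()

v = b"known-plaintext"
secret = b"1234"             # low-entropy secret
verifier = E(secret, v)      # term observed by the adversary

def offline_guess(v, verifier, dictionary):
    for s in dictionary:
        if E(s, v) == verifier:   # guess confirmed by the verifier
            return s
    return None

# exhaust the 4-digit space: trivial once a verifier is known
dictionary = [str(i).zfill(4).encode() for i in range(10000)]
print(offline_guess(v, verifier, dictionary))  # b'1234'
```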

SLIDE 14

Goals for theory and implementation

  • Detect both on-line and off-line attacks
  • Distinguish blockable / non-blockable on-line attacks
  • Handle cases where several verifiers match one guess
  • Reason about chaining guesses of multiple secrets

SLIDE 15

To guess, we need pseudo-random one-way functions

We can guess s from f(s) if f is injective in s.

Generalized: f(s, x) is distinguishing in s (probabilistically) if polynomially many values f(s, xi) can distinguish any s′ ≠ s.

Quantified: f(s, x) is strongly distinguishing in s after q queries if q values f(s, xi) can on average distinguish any s′ ≠ s.

Two main cases for guessing:
  • knowing the image of a one-way function on the secret
  • knowing the image of a trap-door one-way function on the secret

SLIDE 16

Oracles and the adversary

Oracle: abstract view of a computation (function); may be off-line or on-line, employing an honest principal.

An adversary:

  • observes the oracle for a secret s if he knows a term that contains the secret s:

      ihears(Term) ∧ Term ⊢part s ⇒ observes(O_s^Term(·))

  • controls the oracle for a secret s if he can generate terms that contain fresh replacements for the secret s:

      ihears(Term) ∧ Term ⊢(s ← igen(s′)) Term′ ∧ iknows(Term′) ⇒ controls(O_s^Term(·))

SLIDE 17

What guesses can be verified? (1)

  • an already known term:
      vrfy(Term) :– iknows(Term)
  • a signature, if the public key and the message are known:
      vrfy(sign(inv(PK), Term)) :– iknows(PK), iknows(Term)
  • a term from under a one-way function:
      vrfy(Term) :– iknows(h), iknows(apply(h, Term′)), part(Term, Term′), controls(Term, Term′)

SLIDE 18

What guesses can be verified? (2)

  • a ciphertext, if the key is known (or a decryption oracle is controlled) and part of the plaintext is verifiable:
      vrfy(scrypt(K, Term)) :– iknows(K), splitknow(Term, T1, T2), vrfy(T2)
  • a key, if a ciphertext is known and part of the plaintext is verifiable:
      vrfy(K) :– ihears(scrypt(K, Term)), splitknow(Term, T1, T2), vrfy(T2)

where splitknow(Term, T1, T2) splits Term and adds iknows(T1);
e.g., from m.h(m) with iknows(m) one can verify h(m)

SLIDE 19

Actual guessing rules

  • from one-way function images (allows guessing from h(s), m.h(s.m), etc.):

      guess(s) :– observes(O_s^f(·)), controls(O_s^f(·))

  • by inverting one-way trapdoor functions (allows guessing from {m.m}_s, m.{h(m)}_s, etc.):

      guess(s) :– observes(O_s^({T}K)), controls(O_s^({T}K⁻¹)), splitknow(T, T1, T2), vrfy(T2)

SLIDE 20

Implementation

Protocol: modelled using transition rules
Guessing rules: modelled as Horn clauses

Horn clauses are re-evaluated after each protocol step (and applied until transitive closure):
  • allows natural modeling of recursive facts
  • multiple (intruder) deductions applied after each protocol step
  • crucial efficiency gain compared to modeling with transitions
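The evaluation strategy can be sketched as naive forward chaining over propositional Horn clauses, applied until a fixpoint is reached (fact names below are illustrative):

```python
# Apply Horn clauses to the current fact set until closure: exactly the
# "re-evaluate after each protocol step until transitive closure" idea.
def closure(facts, clauses):
    """clauses: list of (body, head), body a frozenset of required facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True   # new fact: re-evaluate all clauses
    return facts

# toy chain: verifying one secret enables guessing the next
clauses = [
    (frozenset({"iknows_h_s1"}), "guess_s1"),
    (frozenset({"guess_s1"}), "iknows_s1"),        # guessed secret becomes known
    (frozenset({"iknows_s1", "iknows_enc_s1_s2"}), "guess_s2"),
]
facts = closure({"iknows_h_s1", "iknows_enc_s1_s2"}, clauses)
print("guess_s2" in facts)  # True
```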

SLIDE 21

Flavors of guessing

  • off-line: terms constructed directly by the intruder
  • on-line: uses computations of honest protocol principals (the intruder controls computation oracles with arbitrary inputs)
  • undetectable (noted by Ding and Horster '95): all participants terminate (no abnormal protocol activity); modeled by checking that all instances reach a final state
  • multiple secrets: a guessed secret becomes known to the intruder, allowing chaining of guessing rules

SLIDE 22

Case study 1: Norwegian ATM

Real case, described by Hole et al. (IEEE S&P 2007)
2001: money withdrawn from a card within 1 hour of it being stolen
Question: could the thief have done it without knowing the PIN?

Card setup: PIN and card-specific data DES-encrypted with a unique bank key; a 16-bit truncation of the result is stored on the card: ⌊DES_BK(PIN.CV)⌋_16

Suggested attack [Hole et al., 2007]: break the bank key by DES key search; the verifier is a legitimate card owned by the adversary
Problem: the verifier only has 16 bits ⇒ 2^(56−16) = 2^40 candidate bank keys remain
But: each honest card reduces the key search space by 16 bits ⇒ ⌈56/16⌉ = 4 cards suffice
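The key-search arithmetic can be checked directly (assuming, as the slides do, that each 16-bit verifier independently filters the 56-bit DES key space):

```python
import math

# Each card's 16-bit verifier rules out all but a 2^-16 fraction of the
# 2^56 DES bank keys (treating truncated ciphertexts as independent).
KEY_BITS, VERIFIER_BITS = 56, 16

def keys_remaining(cards):
    return 2 ** max(KEY_BITS - cards * VERIFIER_BITS, 0)

print(keys_remaining(1))   # 2**40 candidate keys after one card

cards_needed = math.ceil(KEY_BITS / VERIFIER_BITS)
print(cards_needed)        # 4 cards suffice to pin down the key
```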

SLIDE 23

Norwegian ATM Model

Card Issuing Stage:
  1. Bank → User : ⌊DES_BK(PIN)⌋_16, PIN

PIN Change Procedure:
  1. User → ATM : ⌊DES_BK(PINold)⌋_16, PINold, PINnew
  2. ATM → User : ⌊DES_BK(PINnew)⌋_16

Known attack: f(s, PIN) = ⌊DES_s(PIN)⌋_16 is strongly distinguishing in 4 queries (BK has 56 bits)
⇒ Adv can get 4 legitimate cards and break the bank key

New attack (for a simplified scenario, PIN encrypted alone under BK): if Adv can do unlimited PIN changes on his own card ⇒ controls f(BK, PIN) in the argument PIN ⇒ can guess BK

SLIDE 24

Example 2: MS-CHAP and NTLM

Two well-known, insecure protocols from Microsoft, still in use

MS-CHAP:
  1. A → B : A
  2. B → A : NB
  3. A → B : NA, H(kab, NA, NB, A)
  4. B → A : H(kab, NA)

NTLM:
  1. B → A : NB
  2. A → B : NA, H(kab, H′(NA, NB))
  3. B → A : H(kab, H′(NA, NB)), H′(NA, NB)

Attack trace:
  i → (a,1): start
  (a,1) → i: a
  i → (b,1): a
  (b,1) → i: Nb(2)
  i → (a,1): Nb(2)
  (a,1) → i: Na(3).h(kab.Na(3).Nb(2).a)
  i → (b,1): Na(3).h(kab.Na(3).Nb(2).a)
  (b,1) → i: PID(2), h(kab.Na(3))
  i → (a,1): h(kab.Na(3))
  (a,1) → i: PID(1)
  i → (i,1): h(kabrpl.Na(3))
  i → (i,1): kab.snull
  i → (i,17): kab, PID(2).PID(1).offlinePID

SLIDE 25

Example 3: Lomas et al.’89

Lowe's replay attack: replace the timestamp with the constant 0. OFMC found a typing attack, replacing the timestamp with a nonce.

Normal run:
  1. A → S : {A, B, Na1, Na2, Ca, {Ta}pwdA}pks
  2. S → B : A, B
  3. B → S : {B, A, Nb1, Nb2, Cb, {Tb}pwdB}pks
  4. S → A : {Na1, k ⊕ Na2}pwdA
  5. S → B : {Nb1, k ⊕ Nb2}pwdB
  6. B → A : {Rb}k
  7. A → B : {f(Rb), Ra}k
  8. B → A : {f(Ra)}k

Attack run:
  1. Adv(A) → S : {A, B, Na1′, Na2′, Ca′, {Na1, k ⊕ Na2}pwdA}pks
  2. S → B : A, B
  3. B → S : {B, A, Nb1′, Nb2′, Cb′, {Tb′}pwdB}pks
  4. S → Adv(A) : {Na1′, k′ ⊕ Na2′}pwdA
  ...

SLIDE 26

Conclusions

Automated detection for two types of attacks (guessing, DoS), less represented in current verification toolsets
Implemented by augmenting existing (standard) protocol models with transition costs / guessing rules as Horn clauses
Flexible and efficient: no need to modify backends

Insights for attack classification:
  • off-line vs. on-line guessing attacks
  • excessive vs. malicious use in DoS attacks
  • attacks undetectable by protocol participants

Current and future work:
  • completely automate model augmentation (script/translator)
  • large-scale evaluation on the AVANTSSAR protocol library (Automated Validation of Trust and Security of Service-Oriented Architectures, FP7-ICT-2007-1 project 216471)