Analysing privacy-type properties in cryptographic protocols - PowerPoint PPT Presentation
Analysing privacy-type properties in cryptographic protocols
Stéphanie Delaune
Univ Rennes, CNRS, IRISA, France
Thursday, July 12th, 2018
Cryptographic protocols everywhere!

Cryptographic protocols
◮ small programs designed to secure communication (e.g. secrecy, authentication, anonymity, ...)
◮ use cryptographic primitives (e.g. encryption, signature, ...)

The network is insecure!
Communications take place over a public network like the Internet.
It becomes more and more important to protect our privacy.
Electronic passport
→ studied in [Arapinis et al., 10]

An e-passport is a passport with an RFID tag embedded in it. The RFID tag stores:
◮ the information printed on your passport,
◮ a JPEG copy of your picture.
The Basic Access Control (BAC) protocol is a key establishment protocol that has been designed to also ensure unlinkability.

Unlinkability (ISO/IEC standard 15408) aims to ensure that a user may make multiple uses of a service or resource without others being able to link these uses together.
Basic Access Control (BAC) protocol

Passport (KE, KM)                              Reader (KE, KM)

  Reader → Passport: get_challenge
  Passport → Reader: NP                                        (the passport generates NP, KP)
  Reader → Passport: {NR, NP, KR}KE, MACKM({NR, NP, KR}KE)     (the reader generates NR, KR)
  Passport → Reader: {NP, NR, KP}KE, MACKM({NP, NR, KP}KE)
  Both sides compute Kseed = KP ⊕ KR
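The last step relies only on XOR being commutative: each side XORs its own contribution with the other's and obtains the same seed. A minimal sketch (not the ICAO key-derivation spec; the function name is our own):

```python
import secrets

def derive_kseed(k_own: bytes, k_other: bytes) -> bytes:
    """XOR two equal-length key contributions into a shared seed."""
    assert len(k_own) == len(k_other)
    return bytes(a ^ b for a, b in zip(k_own, k_other))

kp = secrets.token_bytes(16)  # passport's contribution KP
kr = secrets.token_bytes(16)  # reader's contribution KR

# KP ⊕ KR == KR ⊕ KP, so both sides agree on Kseed.
assert derive_kseed(kp, kr) == derive_kseed(kr, kp)
```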
How can cryptographic protocols be attacked?
Logical attacks
◮ can be mounted even assuming perfect cryptography
  → replay attacks, man-in-the-middle attacks, ...
◮ subtle and hard to detect by "eyeballing" the protocol

This is the so-called Dolev-Yao attacker!
Example: an authentication flaw in the Needham-Schroeder protocol

NS protocol (1978):
  A → B : {A, NA}pub(B)
  B → A : {NA, NB}pub(A)
  A → B : {NB}pub(B)
Lowe's fix, the NS-Lowe protocol (1995), adds B's identity to the second message:
  A → B : {A, NA}pub(B)
  B → A : {NA, NB, B}pub(A)
  A → B : {NB}pub(B)
Example: the FREAK attack by Bhargavan et al. (2015)
A logical flaw that allows a man-in-the-middle attacker to downgrade connections from 'strong' RSA to 'export-grade' RSA.
Example: a traceability attack on the BAC protocol (2010), a privacy issue reported in The Register, Jan. 2010.
French electronic passport
→ the passport must reply to all received messages.

Passport (KE, KM)                              Reader (KE, KM)

  Reader → Passport: get_challenge
  Passport → Reader: NP                        (the passport generates NP, KP)
  Reader → Passport: {NR, NP, KR}KE, MACKM({NR, NP, KR}KE)
  If the MAC check fails:                      Passport → Reader: mac_error
  If the MAC check succeeds but the nonce check fails:  Passport → Reader: nonce_error
An attack on the French passport [Chothia & Smirnov, 10]

An attacker can track a French passport, provided he has once witnessed a successful authentication.

Part 1 of the attack. The attacker eavesdrops on Alice using her passport and records the message
  M = {NR, NP, KR}KE, MACKM({NR, NP, KR}KE)

Part 2 of the attack. In the presence of an unknown passport (K′E, K′M), the attacker replays the message M and checks the error code he receives:
1. MAC check failed:    K′M ≠ KM  ⇒  the passport is not Alice's
2. MAC check succeeded: K′M = KM  ⇒  the passport is Alice's
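The side channel is just the distinct error codes. A toy simulation of this oracle (HMAC-SHA256 stands in for the passport's MAC; all names and message formats here are our own, not the e-passport spec):

```python
import hmac, hashlib, secrets

def passport_reply(km: bytes, ciphertext: bytes, tag: bytes) -> str:
    """A passport that always answers, as the French e-passport must."""
    expected = hmac.new(km, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return "mac_error"    # MAC check failed: not the recorded passport
    return "nonce_error"      # MAC ok, but the replayed nonce is stale

# Part 1: the attacker records M = (ciphertext, tag) from Alice's session.
km_alice = secrets.token_bytes(16)
ct = b"{NR,NP,KR}_KE"         # stand-in for the recorded ciphertext
tag = hmac.new(km_alice, ct, hashlib.sha256).digest()

# Part 2: replay M to an unknown passport and inspect the error code.
assert passport_reply(km_alice, ct, tag) == "nonce_error"   # it is Alice
```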
Outline

Modelling: does the protocol satisfy (⊨) a security property ϕ?

Outline of the remainder of this talk:
1. Modelling cryptographic protocols and their security properties
2. Designing verification algorithms
→ we focus here on privacy-type security properties
Part I Modelling cryptographic protocols and their security properties
Two major families of models...
...with some advantages and some drawbacks.

Computational model
◮ + messages are bitstrings; a general and powerful adversary
◮ – manual proofs, tedious and error-prone

Symbolic model
◮ – abstract model, e.g. messages are terms
◮ + automatic proofs
Some results link these two very different models.
→ Abadi & Rogaway 2000
Back to the BAC protocol

Nonces nr, np and keys kr, kp, ke, km are modelled using names.

Cryptographic primitives are modelled using function symbols:
◮ encryption/decryption: senc/2, sdec/2
◮ concatenation/projections: ⟨ , ⟩/2, proj1/1, proj2/1
◮ mac construction: mac/2

  sdec(senc(x, y), y) = x    proj1(⟨x, y⟩) = x    proj2(⟨x, y⟩) = y
Exclusive-or operator ⊕, of arity 2, with neutral element 0:
  x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z    x ⊕ y = y ⊕ x
  x ⊕ x = 0                    x ⊕ 0 = x
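The first three equations form a subterm-convergent rewrite system, which is easy to implement directly. A minimal sketch (our own encoding: terms as nested tuples; one innermost pass suffices for these rules because each right-hand side is an already-normalized subterm):

```python
def normalize(t):
    """Apply sdec/proj rewrite rules innermost-first."""
    if not isinstance(t, tuple):
        return t                              # a name or constant
    head, *args = t
    args = [normalize(a) for a in args]
    if head == "sdec" and isinstance(args[0], tuple) \
            and args[0][0] == "senc" and args[0][2] == args[1]:
        return args[0][1]                     # sdec(senc(x, y), y) = x
    if head == "proj1" and isinstance(args[0], tuple) and args[0][0] == "pair":
        return args[0][1]                     # proj1(<x, y>) = x
    if head == "proj2" and isinstance(args[0], tuple) and args[0][0] == "pair":
        return args[0][2]                     # proj2(<x, y>) = y
    return (head, *args)

# Decrypting and projecting a BAC-style message recovers the nonce:
m = ("senc", ("pair", "nP", "kP"), "kE")
assert normalize(("proj1", ("sdec", m, "kE"))) == "nP"
```

(The ⊕ equations are deliberately left out: associativity-commutativity cannot be handled by plain rewriting and needs dedicated AC procedures, as the decidability results below reflect.)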
Protocols as processes

Syntax [Abadi & Fournet, 01]

P, Q :=  0                        null process
      |  in(c, x).P               input
      |  out(c, u).P              output
      |  if u = v then P else Q   conditional
      |  P | Q                    parallel composition
      |  !P                       replication
      |  new n.P                  fresh name generation
Modelling the Passport's role

PBAC(kE, kM) = new nP. new kP. out(nP). in(⟨zE, zM⟩).
                 if zM = mac(zE, kM)
                 then if nP = proj1(proj2(sdec(zE, kE)))
                      then out(⟨m, mac(m, kM)⟩)
                      else out(nonce_error)
                 else out(mac_error)

where m = senc(⟨nP, proj1(zE), kP⟩, kE).
Semantics (→)

Comm   out(c, u).P | in(c, x).Q → P | Q{u/x}
Then   if u = v then P else Q → P   when u =E v
Else   if u = v then P else Q → Q   when u ≠E v
Repl   !P → P | !P

+ some structural rules and closure under evaluation contexts
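The Comm rule is just synchronization plus substitution. A sketch over a toy process representation of our own (tagged tuples; only out/in/nil, which is enough to show Q{u/x}):

```python
def subst(p, x, u):
    """Replace variable x by term u in process p."""
    if p[0] == "nil":
        return p
    if p[0] == "out":
        _, c, v, cont = p
        return ("out", c, u if v == x else v, subst(cont, x, u))
    if p[0] == "in":
        _, c, y, cont = p
        return p if y == x else ("in", c, y, subst(cont, x, u))  # y rebinds x
    raise ValueError(p)

def comm(sender, receiver):
    """out(c, u).P | in(c, x).Q  ->  (P, Q{u/x}) when the channels match."""
    assert sender[0] == "out" and receiver[0] == "in" and sender[1] == receiver[1]
    _, _, u, p = sender
    _, _, x, q = receiver
    return p, subst(q, x, u)

sender = ("out", "c", "ok", ("nil",))
receiver = ("in", "c", "x", ("out", "c2", "x", ("nil",)))
p, q = comm(sender, receiver)
assert q == ("out", "c2", "ok", ("nil",))   # x was replaced by ok
```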
What does unlinkability mean?

Informally, an observer/attacker cannot observe the difference between the two following situations:
1. a situation where the same passport may be used twice (or even more);
2. a situation where each passport is used at most once.

More formally:

  !new ke.new km.(!PBAC | !RBAC)  ≈?  !new ke.new km.(PBAC | RBAC)

where the left-hand side allows many sessions for each passport, and the right-hand side only one session for each passport.
(we still have to formalize the notion of equivalence)
Testing equivalence: P ≈t Q
for all processes A (the attacker), we have that: (A | P) ⇓c if, and only if, (A | Q) ⇓c,
where P ⇓c means that P can evolve and emit on channel c.
Example 1:  out(a, yes) ≉t out(a, no)
→ witnessed by A = in(a, x).if x = yes then out(c, ok)
Example 2: assuming that k and k′ are known by the attacker,
  new s.out(a, senc(s, k)).out(a, senc(s, k′))  ≉t  new s, s′.out(a, senc(s, k)).out(a, senc(s′, k′))
→ witnessed by A = in(a, x).in(a, y).if sdec(x, k) = sdec(y, k′) then out(c, ok)
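The attacker test of Example 2 can be replayed concretely with symbolic encryption as tuples (a toy model of our own, not an actual cipher):

```python
def senc(m, k):
    """Symbolic encryption: just a tagged tuple."""
    return ("senc", m, k)

def sdec(c, k):
    """Symbolic decryption: succeeds only with the right key."""
    return c[1] if c[0] == "senc" and c[2] == k else None

def attacker_test(out1, out2, k, k2):
    """A = in(a,x).in(a,y). if sdec(x,k) = sdec(y,k') then out(c, ok)."""
    return sdec(out1, k) is not None and sdec(out1, k) == sdec(out2, k2)

# Left process: the same secret s under both keys -> the test succeeds.
assert attacker_test(senc("s", "k"), senc("s", "k2"), "k", "k2")
# Right process: two fresh secrets -> the test fails, so A tells them apart.
assert not attacker_test(senc("s", "k"), senc("s2", "k2"), "k", "k2")
```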
Example 3:  new s.out(a, s) ≈t new k.out(a, senc(yes, k))
→ a fresh name and a ciphertext under a fresh key are indistinguishable to the attacker.
Some other equivalence-based security properties

Vote privacy: the fact that a particular voter voted in a particular way is not revealed to anyone.

Strong secrecy: the fact that an adversary cannot see any difference when the value of the secret changes.
→ stronger than the notion of secrecy as non-deducibility.

Guessing attack: the fact that an adversary cannot learn the value of passwords even if he knows that they have been chosen from a particular dictionary.
Part II Designing verification algorithms for privacy-type properties
Warm-up: the so-called passive attacker

The static equivalence problem
◮ Input: two frames φ and ψ
    φ = {w1 ⊲ u1, ..., wℓ ⊲ uℓ}    ψ = {w1 ⊲ v1, ..., wℓ ⊲ vℓ}
◮ Output: can the attacker distinguish the two frames, i.e. does there exist a test R1 =? R2 such that R1φ =E R2φ but R1ψ ≠E R2ψ (or the converse)?
Example 1: adec(aenc(x, pk(y)), y) = x
◮ φ = {w1 ⊲ pk(sks); w2 ⊲ aenc(yes, pk(sks))}; and
◮ ψ = {w1 ⊲ pk(sks); w2 ⊲ aenc(no, pk(sks))}.
→ They are not in static equivalence: the test aenc(yes, w1) =? w2 holds in φ but not in ψ.
Example 2: (randomized encryption)
◮ φ = {w1 ⊲ pk(sks); w2 ⊲ aenc(yes, r, pk(sks))}; and
◮ ψ = {w1 ⊲ pk(sks); w2 ⊲ aenc(no, r, pk(sks))}.
→ They are in static equivalence.
Static equivalence: some existing results

  Theory E                            Deduction                     Static equivalence
  subterm convergent                  PTIME                         PTIME              [Abadi & Cortier, 06]
  blind signature, homo. encryption   decidable                     decidable          [Abadi & Cortier, 06]
  ACUN/AG                             PTIME [Chevalier et al, 03]   PTIME [Cortier & D., 10]

Combination result: if deduction and static equivalence are decidable for two disjoint theories E1 and E2, then they are also decidable for E1 ∪ E2. [Cortier & D., 10]
→ inspired by existing results and proofs in unification theory, e.g. [Nutt, 90] and [Baader & Schulz, 96]
Caution!

One should never underestimate the attacker! The attacker can listen to the communication, but can also:
◮ intercept the messages that are sent by the participants,
◮ build new messages according to his deduction capabilities, and
◮ send messages on the communication network.
→ this is the so-called active attacker
How can we check testing equivalence?

The problem is undecidable in general
→ even under quite severe restrictions [Chrétien PhD thesis, 2016]

Several procedures and automatic tools already exist! Two main categories of tools have been developed so far:
◮ unbounded number of sessions: e.g. ProVerif [Blanchet et al, 2005] and Tamarin [Basin et al, 2015].
  → no miracle: these tools may fail, and they only consider a strong form of equivalence, namely diff-equivalence.
◮ bounded number of sessions, i.e. processes without !
  → the problem becomes decidable (at least for classical primitives)
Part II Designing verification algorithms for privacy-type properties
for a bounded number of sessions
Constraint solving approach (confidentiality)
→ [Millen & Shmatikov, 2001]

Step 1: the infinite set of concrete executions following a particular interleaving is represented through a constraint system, e.g. in(u1).out(v1).in(u2)... is transformed into

  C = { φ0 ⊢? u1
        φ0, w1 ⊲ v1 ⊢? u2
        ...
        φ0, w1 ⊲ v1, ..., wn ⊲ vn ⊢? s }

→ the ui, vi may contain variables

Step 2: a procedure to decide whether a constraint system admits a solution, i.e. do there exist R0, ..., Rn (computations done by the attacker) and σ such that R0φ0 =E u1σ, R1(φ0 ∪ {w1 ⊲ v1σ}) =E u2σ, ...?
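Building C from an interleaving is a simple fold: outputs extend the frame, inputs generate a deduction constraint over the frame accumulated so far. A hypothetical encoding (class and method names are ours):

```python
from dataclasses import dataclass, field

@dataclass
class Deduce:
    frame: tuple     # ((w_i, v_i), ...): outputs observed so far
    target: object   # the term u the attacker must derive

@dataclass
class ConstraintSystem:
    frame: list = field(default_factory=list)
    constraints: list = field(default_factory=list)

    def on_input(self, u):
        """in(u): the attacker must deduce u from the current frame."""
        self.constraints.append(Deduce(tuple(self.frame), u))

    def on_output(self, v):
        """out(v): extend the frame with a fresh handle w_i."""
        self.frame.append((f"w{len(self.frame) + 1}", v))

# Build C for the interleaving in(u1).out(v1).in(u2):
cs = ConstraintSystem()
cs.on_input("u1")
cs.on_output("v1")
cs.on_input("u2")
assert cs.constraints[0].frame == ()                 # u1 from the initial frame
assert cs.constraints[1].frame == (("w1", "v1"),)    # u2 sees w1 ⊲ v1
```

Note how the frames are monotone: each later constraint sees every earlier output, exactly as in the system C above.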
Step 2: a procedure for solving a constraint system
→ a set of transformation rules to simplify constraint systems

  C = { φ0 ⊢? u1;  φ0, w1 ⊲ v1 ⊢? u2;  ...;  φ0, w1 ⊲ v1, ..., wn ⊲ vn ⊢? s }

The rules transform C into simpler systems (C1, C2, C3, C4, ...); branches with no solution end in ⊥, the others end in a solved form.
→ this gives us a symbolic representation of all the solutions.
From confidentiality to privacy-type properties

Step 1: from testing equivalence to symbolic equivalence
  ΣP = {C1, ..., Cp} ≈s {C′1, ..., C′q} = ΣQ
→ we will have many equivalences like this to consider!

Step 2: procedure for checking symbolic equivalence. Do ΣP and ΣQ have the same set of solutions? For all R1, ..., Rn solution of some C ∈ ΣP, there exists C′ ∈ ΣQ such that R1, ..., Rn is a solution of C′, and the resulting frames are in static equivalence (and conversely).
→ clever algorithms have been developed to solve this problem
Checking symbolic equivalence (a long story)

Symbolic equivalence ΣP ≈s ΣQ: a first algorithm for subterm convergent theories to check symbolic equivalence between two positive constraint systems [Baudet, 2005]
→ a simpler proof of the same result (still 20 pages) by [Chevalier & Rusinowitch, 2011]
→ no implementation!

Some practical algorithms and tools
◮ Spec: fixed set of primitives, processes with no else branch [Tiu et al, 2011]
◮ Apte: fixed set of primitives, else branches, non-determinism (e.g. private channels) [Cheval et al, 2011]
◮ Akiss: more primitives, no else branch [Chadha et al, 2012]
→ but a limited practical impact, because they scale badly
Partial order reduction for security protocols [Hirschi PhD thesis, 2017]

Main objective: to develop POR techniques that are suitable for analysing security protocols (especially testing equivalence)

Example: in(c1, x1).out(c1, ok) | in(c2, x2).out(c2, ok)

We propose two optimizations:
1. compression: we impose a simple strategy on the exploration of the available actions (roughly, outputs are performed first, and using a fixed arbitrary order)
2. reduction: we avoid exploring some redundant traces, taking into account the data that are exchanged

→ Each optimisation brings an exponential speedup, and both have been integrated in Apte and in its successor DeepSec
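A back-of-the-envelope illustration of what compression saves (this counts raw interleavings by brute force; it is not the POR algorithm itself): n independent in.out sessions like the example above admit (2n)!/2^n interleavings, whereas a fixed exploration order keeps only one of them.

```python
from itertools import permutations

def interleavings(n):
    """Count orderings of n sessions where each in_i precedes out_i."""
    actions = [(i, step) for i in range(n) for step in ("in", "out")]
    count = 0
    for perm in permutations(actions):
        if all(perm.index((i, "in")) < perm.index((i, "out")) for i in range(n)):
            count += 1
    return count

assert interleavings(2) == 6    # the two-session example: 4!/2^2
assert interleavings(3) == 90   # 6!/2^3 -- the blow-up is exponential
```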
The DeepSec tool [Cheval, Kremer & Rakotonirina, 2018]

A procedure based on a symbolic semantics and constraint solving:
◮ large class of processes (but no replication): else branches; correct for most standard cryptographic primitives, and beyond (e.g. blind signatures)
◮ quite efficient: exploits multicore architectures, integrates POR optimisations