Calculated Secure Processes
Michael J. Banks Jeremy L. Jacob York Doctoral Symposium 4 November 2010
What's this talk about?
Formal methods are used in software development when the behavioural correctness of systems is paramount.
◮ Specification: what properties should a system possess?
◮ Verification: does a system satisfy its specification?
How can we use formal methods to construct secure software?
◮ Confidential (sensitive or valuable) information must be protected from being disclosed to untrusted users
◮ We aim to guarantee information flow security
Background
CSP is a well-known process algebra for specifying systems:
◮ originally due to Hoare; revised by Hoare, Brookes and Roscoe
◮ a process interacts with its environment by performing events
◮ process semantics are given by a model (traces, failures, . . . )
a → S   engage in event ‘a’, then behave as process S
S ⊓ T   non-deterministic choice between S and T
S ✷ T   deterministic choice between S and T
S ; T   behave as S until it terminates, then behave as T
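To make the operator table concrete, here is a minimal Python sketch of the CSP *traces* model for the first three operators, with a process modelled as a set of finite traces (tuples of event names). This is an illustration under that simplification, not an implementation of full CSP; sequential composition is omitted.

```python
# Stop performs no events: its only trace is the empty trace.
STOP = {()}

def prefix(a, S):
    """a -> S : either nothing has happened yet, or 'a' occurs and S follows."""
    return {()} | {(a,) + t for t in S}

def choice(S, T):
    """S |~| T and S [] T have the *same* trace sets: the traces model
    cannot tell internal from external choice (richer models, such as
    failures, can)."""
    return S | T

# Example: a -> ((b -> Stop) |~| (c -> Stop))
ex = prefix('a', choice(prefix('b', STOP), prefix('c', STOP)))
assert ex == {(), ('a',), ('a', 'b'), ('a', 'c')}
```

Note that the choice operators coincide here; this is precisely why the traces model alone is too coarse for some specifications, motivating the failures model mentioned above.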
A motivational example
Consider the process specification: M = (h → l → M) ⊓ (k → Stop). Suppose that:
◮ user Low can observe ‘l’ events only;
◮ Low has complete knowledge of M’s structure; and
◮ Low must be unable to establish if ‘h’ occurs
If Low observes ‘l’, it can calculate that M’s trace must lie in a set whose members all begin with ⟨h, l⟩, and so Low can infer that (at least one) ‘h’ must have occurred!
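Low's inference can be checked by brute force. The sketch below is not CSP semantics, merely a bounded enumeration of M's finite traces as tuples of event names (an assumption made for illustration): every trace containing ‘l’ also contains ‘h’.

```python
# Bounded enumeration of the traces of M = (h -> l -> M) |~| (k -> Stop).
def traces_M(depth):
    ts = {(), ('h',), ('k',)}          # empty trace and the one-step prefixes
    if depth > 0:
        for t in traces_M(depth - 1):  # after <h, l>, M recurses
            ts.add(('h', 'l') + t)
    return ts

# Every trace in which Low's visible event 'l' occurs also contains 'h',
# so observing 'l' reveals that 'h' has occurred.
observed_l = [t for t in traces_M(3) if 'l' in t]
assert observed_l and all('h' in t for t in observed_l)
```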
We limit the flow of information to Low by inserting alternative “cover story” behaviours consistent with Low’s observations.
◮ Cover stories represent non-confidential behaviours
◮ Idea: if Low cannot rule out all cover stories, it cannot establish that any confidential activity has taken place
We write <P, Q>, where:
◮ P is a process specifying confidential behaviours
◮ Q is a process specifying cover story behaviours
◮ and Low must be unable to distinguish P from Q
The semantics of <P, Q>(S) are defined by algebraic laws.
provided P cannot perform an ‘a’ event:
provided S can behave as X:
b → Y ; <a → X, b → Y>(S “after” X)
Continuing the example
Low cannot deduce the occurrence of ‘h’ immediately before ‘l’:

P = h → l → Skip
Q = l → Skip

Applying the algebraic laws, we calculate M′ = <P, Q>(M):

M′ = <P, Q>((h → l → M) ⊓ (k → Stop))
If Low observes ‘l’, the trace of M′ must be a member of a set that also contains cover-story traces in which ‘l’ occurs without a preceding ‘h’, and Low can no longer infer that ‘h’ has occurred.
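The effect of the transformation can again be checked by bounded trace enumeration. The exact form of M′ is what the algebraic laws calculate and is not spelled out on the slide; the sketch below *assumes* it simply adds a cover-story branch performing ‘l’ alone (consistent with Q = l → Skip) alongside the confidential ‘h’ then ‘l’ (from P = h → l → Skip).

```python
# Bounded traces of M', under the ASSUMPTION that the calculated process
# adds a cover-story branch 'l' next to the confidential <h, l> behaviour.
def traces_Mp(depth):
    ts = {(), ('h',), ('k',)}
    if depth > 0:
        for t in traces_Mp(depth - 1):
            ts.add(('h', 'l') + t)   # confidential behaviour (from P)
            ts.add(('l',) + t)       # cover story: 'l' without a prior 'h' (from Q)
    return ts

# Some traces now show 'l' with no 'h' at all, so observing 'l'
# no longer establishes that 'h' occurred.
assert any('l' in t and 'h' not in t for t in traces_Mp(3))
```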
Security with respect to <P, Q>
Applying <P, Q> to a process S yields a secure process with respect to the confidentiality property encoded by P, Q.
Given any Low-observation of <P, Q>(S), Low can never establish (with certainty) that a P activity has taken place.
S is secure (with respect to P, Q) provided <P, Q>(S) = S. But adding cover stories may violate functional requirements.
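The condition <P, Q>(S) = S can be approximated on bounded trace sets. The sketch below reuses the bounded enumerations of M and of an *assumed* form of M′ (with a cover-story ‘l’ branch, as in the example): since the two trace sets differ, the transformation changed M, i.e. M itself was not secure with respect to <P, Q>.

```python
def traces_M(depth):
    """Bounded traces of M = (h -> l -> M) |~| (k -> Stop)."""
    ts = {(), ('h',), ('k',)}
    if depth > 0:
        for t in traces_M(depth - 1):
            ts.add(('h', 'l') + t)
    return ts

def traces_Mp(depth):
    """Bounded traces of M' (assumed form, with a cover-story 'l' branch)."""
    ts = {(), ('h',), ('k',)}
    if depth > 0:
        for t in traces_Mp(depth - 1):
            ts.add(('h', 'l') + t)
            ts.add(('l',) + t)
    return ts

# <P, Q>(M) differs from M on bounded traces, so M is not secure w.r.t. <P, Q>.
assert traces_M(3) != traces_Mp(3)
```

A process that already offered the cover-story behaviours would be left unchanged by <P, Q>, and the bounded trace sets would coincide.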
Classical notion of information flow security is noninterference:
◮ Low can learn nothing about the high-level user’s activities
◮ often too strong for practical software development!
◮ our notion of confidentiality is weaker than noninterference
We have not considered factors outside the model:
◮ probability of confidential behaviours, relative to cover stories
◮ timing of events
◮ power consumption
(We need a richer CSP semantic model to reason about these.)
◮ An operator for rewriting CSP processes to ensure they satisfy confidentiality properties
◮ Encoding confidentiality properties via processes is original
◮ How can application of the operator be automated?
◮ Refinement reduces non-determinism within processes, but doing so may introduce new sources of information flow!
Slides and paper are available at: http://www-users.cs.york.ac.uk/~mbanks/