SLIDE 1

Security Management and Engineering

Is this product/technique/service secure?

  • Simple Yes/No answers are often wanted, but typically inappropriate.
  • Security of an item depends much on the context in which it is used.
  • Complex systems can provide a very large number of elements and interactions that are open to abuse. An effective protection can therefore only be obtained as the result of a systematic planning approach.
  • “No need to worry, our product is 100% secure. All data is encrypted with 128-bit keys. It takes billions of years to break these.” Such statements are abundant in marketing literature.

A security manager should ask:

  • What does the mechanism achieve?
  • Do we need confidentiality, integrity or availability of exactly this data?
  • Who will generate the keys and how?
  • Who will store / have access to the keys?
  • Can we lose keys and with them data?
  • Will it interfere with other security measures (backup, auditing, scanning, . . . )?
  • Will it introduce new vulnerabilities or can it somehow be used against us?
  • What if it breaks or is broken?
SLIDE 2

UK Computer Misuse Act 1990

  • Knowingly causing a computer to perform a function with the intent to access without authorisation any program or data held on it ⇒ up to 6 months in prison and/or a fine
  • Doing so to further a more serious crime ⇒ up to 5 years in prison and/or a fine
  • Knowingly causing an unauthorised modification of the contents of any computer to impair its operation or hinder access to its programs or data ⇒ up to 5 years in prison and/or a fine

The intent does not have to be directed against any particular computer, program or data. In other words, starting automated and self-replicating tools (viruses, worms, etc.) that randomly pick where they attack is covered by the Act as well. Denial-of-service attacks in the form of overloading public services are not yet covered explicitly. (Kuhn)

SLIDE 3

Security policy development

  • Step 1: Security requirements analysis
    – Identify assets and their value
    – Identify vulnerabilities, threats and risk priorities
    – Identify legal and contractual requirements
SLIDE 4

Step 2: Work out a suitable security policy

The security requirements identified can be complex and may have to be abstracted first into a high-level security policy, a set of rules that clarifies which activities, states and information flows are authorised, required, or prohibited. Security policy models are techniques for the precise and even formal definition of such protection goals. They can describe both automatically enforced policies (e.g., a mandatory access control configuration in an operating system, a policy description language for a database management system, etc.) and procedures for employees (e.g., segregation of duties).

SLIDE 5

Step 3: Security policy document

Once a good understanding exists of what exactly security means for an organisation and what needs to be protected or enforced, the high-level security policy should be documented as a reference for anyone involved in implementing controls. It should clearly lay out the overall objectives, principles and the underlying threat model that are to guide the choice of mechanisms in the next step.

SLIDE 6

Step 4: Selection and implementation of controls

Issues addressed in a typical low-level organisational security policy:

  • General (affecting everyone) and specific responsibilities for security
  • Names the manager who “owns” the overall policy and is in charge of its continued enforcement, maintenance, review, and evaluation of effectiveness
  • Names individual managers who “own” individual information assets and are responsible for their day-to-day security
  • Reporting responsibilities for security incidents, vulnerabilities, software malfunctions
  • Mechanisms for learning from incidents
  • Incentives, disciplinary process, consequences of policy violations
  • User training, documentation and revision of procedures
  • Personnel security (depending on sensitivity of job): background checks, supervision, confidentiality agreement
  • Regulation of third-party access
  • Physical security: definition of security perimeters, locating facilities to minimise traffic across perimeters, alarmed fire doors, physical barriers that penetrate false floors/ceilings, entrance controls, handling of visitors and public access, visible identification, responsibility to challenge unescorted strangers, location of backup equipment at safe distance, prohibition of recording equipment, redundant power supplies, access to cabling, authorisation procedure for removal of property, clear desk/screen policy, etc.

slide-7
SLIDE 7
  • Segregation of duties: avoid that a single person can abuse authority without detection (e.g., different people must raise a purchase order and confirm delivery of goods; croupier vs. cashier in a casino)
  • Audit trails: what activities are logged, how log files are protected from manipulation
  • Separation of development and operational facilities
  • Protection against unauthorised and malicious software
  • Organising backup and rehearsing restoration
  • File/document access control, sensitivity labelling of documents and media
  • Disposal of media: zeroise, degauss, reformat, or shred and destroy storage media, paper, carbon paper, printer ribbons, etc. before discarding them
  • Network and software configuration management
  • Line and file encryption, authentication, key and password management
  • Duress alarms, terminal timeouts, clock synchronisation, …
SLIDE 8

UK Data Protection Act 1998

Anyone processing personal data must comply with the eight principles of data protection, which require that data must be:

1. Fairly and lawfully processed
   – Person’s consent or organisation’s legitimate interest needed, no deception about purpose; sensitive data (ethnic origin, political opinions, religion, trade union membership, health, sex life, offences) may only be processed with consent or for medical research or equal opportunity monitoring, etc.
2. Processed for limited purposes
   – In general, personal data can’t be used without consent for purposes other than those for which it was originally collected.
3. Adequate, relevant and not excessive
4. Accurate
5. Not kept longer than necessary
6. Processed in accordance with the data subject’s rights
   – Persons have the right to access data about them, unless this would breach another person’s privacy, and can request that inaccurate data is corrected.
7. Secure
8. Not transferred to countries without adequate protection
   – This means no transfer outside the European Free Trade Area. Special “safe harbour” contract arrangements with data controllers in the US are possible.

SLIDE 9

Scope: Security, Attacks and the Science of Cybersecurity

RK Shyamasundar

SLIDE 10

Process of Science

SLIDE 11

Agenda

  • Dimensions: a glimpse at examples to demonstrate
    – Side-channel attacks (timing, power, EM, faults, …)
    – Tracking location in P2P systems like Skype
    – SCADA attacks: Stuxnet; big-data approaches
    – Guessing passwords
  • Privacy (dimensions: anonymity, pseudo-anonymity, …)
  • What does preserving purpose and privacy policy mean?
  • Challenge of electronic voting systems
  • Science of Security
    – Limitations of the science of CS (formal methods, …)
    – Where are we?
  • Doctrines: cybersecurity of public infrastructures

SLIDE 12

Secure or Insecure

Insecure!

  • Suppose we have a precisely defined security claim about a system, from which we can derive consequences that can be tested.
  • Then in principle we can prove that the system is insecure.

Secure?

  • Suppose you design a system, derive some security claims, and discover every time that the system remains secure under all tests.
  • Is the system then secure?
  • No, it is simply not proved insecure.
  • In the future you could refine the security model, there could be a wider range of tests and attacks, and you might then discover that the thing is insecure.
SLIDE 15

Side channel Attacks

Side-channel cryptanalysis is any attack on a cryptosystem requiring information emitted as a byproduct of the physical implementation.

White-box cryptanalysis: nothing is hidden (white box); the attacker can use a debugger, static analysis, and memory dumps.
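The timing flavour of this idea can be illustrated without any cryptography. The sketch below (illustrative, not from the slides) uses an operation counter as a deterministic stand-in for execution time: an early-exit comparison leaks how many leading bytes of a guess are correct, while a constant-time comparison does not.

```python
def leaky_compare(secret: bytes, guess: bytes) -> tuple[bool, int]:
    """Byte-by-byte comparison that returns at the first mismatch.
    The operation count (a stand-in for execution time) reveals how
    many leading bytes of the guess were correct."""
    ops = 0
    for s, g in zip(secret, guess):
        ops += 1
        if s != g:
            return False, ops
    return len(secret) == len(guess), ops

def constant_time_compare(secret: bytes, guess: bytes) -> tuple[bool, int]:
    """Examines every byte regardless of mismatches, so the operation
    count is independent of how much of the guess is correct."""
    if len(secret) != len(guess):
        return False, len(secret)
    diff, ops = 0, 0
    for s, g in zip(secret, guess):
        ops += 1
        diff |= s ^ g
    return diff == 0, ops

secret = b"s3cret"
_, t1 = leaky_compare(secret, b"xxxxxx")  # mismatch on byte 1: 1 op
_, t2 = leaky_compare(secret, b"s3cxxx")  # 3 correct bytes: 4 ops
_, c1 = constant_time_compare(secret, b"xxxxxx")  # always 6 ops
_, c2 = constant_time_compare(secret, b"s3cxxx")  # always 6 ops
```

In practice one would call a vetted routine such as Python's `hmac.compare_digest` rather than hand-rolling the constant-time version.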


Paul Kocher

SLIDE 16

Paul Kocher, President, Cryptography Research

  • B.S. in Biology from Stanford, 1991
  • Worked with Martin Hellman
  • Worked with RSA
  • Member, US National Academy of Engineering, 2009
SLIDE 17

Power analysis, timing analysis, EM signals containing full secrets, fault injection, hardware attacks (opening the box)

Scope: unlimited

SLIDE 18

Implementation via Squaring and Multiplication

  • Exponentiation with exponent e requires about log₂ e squarings and up to log₂ e multiplications (one multiplication per 1-bit of e).
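As a concrete sketch (a standard square-and-multiply implementation, not code from the slides), left-to-right binary exponentiation squares once per exponent bit and multiplies only on 1-bits; that data-dependent multiplication is exactly what Kocher-style timing analysis exploits:

```python
def square_and_multiply(base: int, exponent: int, modulus: int) -> int:
    """Left-to-right binary modular exponentiation: one squaring per
    exponent bit, plus an extra multiplication for every 1-bit.
    The per-bit work is data-dependent, which leaks timing."""
    result = 1
    for bit in bin(exponent)[2:]:              # most significant bit first
        result = (result * result) % modulus        # always square
        if bit == "1":
            result = (result * base) % modulus      # multiply only on a 1-bit
    return result

# Sanity check against Python's built-in modular exponentiation:
assert square_and_multiply(7, 560, 561) == pow(7, 560, 561)
```

Constant-time implementations avoid the leak, e.g. a Montgomery ladder performs the same operations regardless of the bit values.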

SLIDE 20

Science of Guessing Passwords

  • Joseph Bonneau, Ph.D. Thesis, Univ. of Cambridge, 2012
  • Advisor: Ross Anderson, FRS
  • NSA Award for the Best Scientific Cybersecurity Paper
  • Banking PINs
SLIDE 21

Banking PINs

History

  • Numeric PINs: first British deployment in 1967 for banking; 6-digit PINs in the Barclays–De La Rue cash machines deployed in Japan
  • Sweden in 1967 used no PINs and absorbed losses from lost or stolen cards.
  • As late as 1977, Spain’s La Caixa issued cards without PINs.
  • PINs were initially bank-assigned by necessity, as they were hard-coded onto cards using steganographic schemes such as dots of carbon-14.
  • Soon a variety of schemes for storing a cryptographic transformation of the PIN developed (e.g., the IBM 3624 ATM controller).
  • Banks gradually began allowing customer-chosen PINs in the 1980s as a marketing tactic.

Guessing Difficulty

  • Approximating PIN strength
  • Security implications, particularly after a wallet is stolen
  • The optimal blacklist is: 0000, 0101–0103, 0110, 0111, 0123, 0202, 0303, 0404, 0505, 0606, 0707, 0808, 0909, 1010, 1101–1103, 1110–1112, 1123, 1201–1203, 1210–1212, 1234, 1956–2015, 2222, 2229, 2580, 3333, 4444, 5252, 5683, 6666, 7465, 7667.
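The brace-like characters in the printed blacklist are garbled en-dashes: entries such as 0101–0103 and 1956–2015 denote inclusive ranges. A small sketch (illustrative; the function names are mine, not Bonneau's) expands the ranges and screens a candidate PIN:

```python
# Entries are either single PINs or inclusive ranges "aaaa-bbbb".
BLACKLIST_SPEC = [
    "0000", "0101-0103", "0110", "0111", "0123", "0202", "0303",
    "0404", "0505", "0606", "0707", "0808", "0909", "1010",
    "1101-1103", "1110-1112", "1123", "1201-1203", "1210-1212",
    "1234", "1956-2015", "2222", "2229", "2580", "3333", "4444",
    "5252", "5683", "6666", "7465", "7667",
]

def expand(spec: list[str]) -> set[str]:
    """Expand range entries into the full set of 4-digit PINs."""
    pins = set()
    for entry in spec:
        if "-" in entry:
            lo, hi = entry.split("-")
            pins.update(f"{p:04d}" for p in range(int(lo), int(hi) + 1))
        else:
            pins.add(entry)
    return pins

BLACKLIST = expand(BLACKLIST_SPEC)

def pin_allowed(pin: str) -> bool:
    """Reject a customer-chosen PIN that appears on the blacklist."""
    return pin not in BLACKLIST

# "2015" falls inside the birth-year range 1956-2015 and is rejected:
assert not pin_allowed("1234")
assert not pin_allowed("2015")
assert pin_allowed("8461")
```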

Source: Joseph Bonneau, “Guessing human-chosen secrets”, Ph.D. Thesis, Univ. of Cambridge, 2012 (NSA Award for the Best Scientific Cybersecurity Paper).
SLIDE 22

Tracking, say, Skype Locations

Real-time communication is peer-to-peer (P2P):
    – Datagrams flow between the two conversing partners
    – This exposes the IP addresses of all the participants to one another

  • If A knows B’s VoIP ID, she can establish a call with B and obtain his current IP address by simply sniffing datagrams arriving at her computer. (Steven Le Blond et al., 2012)
  • Using geo-localization services, one can map B’s IP address to a location and ISP.
  • If B is mobile, A can call him over a week/month and observe.
  • Once A knows B’s IP, she can crawl P2P file-sharing systems to see if that IP is uploading/downloading files.
  • VoIP can thus potentially reveal a targeted user’s location: a serious infringement on privacy.

SLIDE 24

Attacks on Supervisory Control And Data Acquisition (SCADA)

SCADA control systems

  • Now at higher risk of computer attacks, because their vulnerabilities are increasingly becoming exposed and available to an ever-growing set of motivated and highly skilled attackers
  • Miscreants tailor their attacks with the aim of damaging the physical systems under control
  • Essentially a cyberwar

Stuxnet attacks

  • Stuxnet is a Windows computer worm discovered in July 2010 that targets industrial software and equipment
  • It is the first discovered malware that spies on and subverts industrial systems
  • Kaspersky Labs concluded that the sophisticated attack could only have been conducted “with nation-state support”
  • Stuxnet attacked Windows systems using an unprecedented four zero-day attacks (plus the CPLINK vulnerability and a vulnerability used by the Conficker worm)
SLIDE 25

Stuxnet

  • Researchers were astonished by the complexity of the program and the quantity of zero-day exploits used in this worm. (Zero-day exploits are those that have no workaround or patch.)
  • Another unique aspect of Stuxnet is that it contained components that were digitally signed with stolen certificates.
  • A rootkit was found for the programmable logic controller (PLC), which allows the manipulation of sensitive equipment.
  • Expected to have been created by a team of as many as 30 individuals, with state support
  • This indicates a level of organization and funding that probably has not been seen before
  • What was Stuxnet designed to do?
    – While there is no direct evidence, the code suggests that Stuxnet looks for a setup that is used in processing facilities that handle uranium used in nuclear devices
    – Thus the ultimate goal is to sabotage that facility by reprogramming the controllers to operate …

SLIDE 26

What should be the strategy to deal with these kinds of attacks?

  • Should it go along the lines of IT security?
  • How about defense-in-depth mechanisms analogous to anomaly detection?
  • What about false alarms in anomaly detection?
  • Should the focus be on physical systems rather than software/network models?

Control systems: characteristics

  • Control systems are not suitable for patching and frequent updates
  • While current tools from information security can give necessary mechanisms for securing control systems, these alone are not sufficient for defense-in-depth of control systems
  • When attackers bypass even basic defenses they may succeed in damaging the physical world
SLIDE 27

SCADA Attacks: Summary

Consequences and risk assessment

    – While studies exist on the cyber security of SCADA, there are very few studies that identify the attack strategy of an adversary once it gains access (existing studies pertain to data injection for power grids, electricity markets, etc.)
    – Need to understand the threat model to design appropriate defenses and take measures to secure the most critical sensors and actuators

SCADA security summary

  • New attack-detection patterns
    – Dynamic system models for specifying intrusion detection systems
  • Attack-resilient algorithms and architectures
    – Design to withstand cyber assault
    – Reconfigure and adapt control systems when under attack

Multi-disciplinary: control engineers + CS + domain of application …
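One common reading of “dynamic system models for intrusion detection” is residual-based detection: predict the next sensor reading from a physical model and raise an alarm when the measurement deviates too far from the prediction. The toy model below (a hypothetical draining tank; both the model and the threshold are illustrative) sketches the idea:

```python
def detect_anomalies(readings, predict, threshold):
    """Flag the time steps where a sensor reading deviates from the
    physical model's one-step prediction by more than `threshold`."""
    alarms = []
    state = readings[0]
    for t, y in enumerate(readings[1:], start=1):
        expected = predict(state)          # model's one-step prediction
        if abs(y - expected) > threshold:  # residual too large -> alarm
            alarms.append(t)
        state = y                          # track the observed state
    return alarms

# Toy physical model: a tank whose level drains 5% per time step.
predict = lambda level: 0.95 * level
levels = [100.0, 95.0, 90.25, 85.74, 120.0, 114.0]  # step 4 is injected

assert detect_anomalies(levels, predict, threshold=1.0) == [4]
```

A detection threshold trades false alarms against missed attacks, which is exactly the false-alarm concern raised on the previous slide.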

SLIDE 28

BigData: Algorithmic Approach

  • Compositionality + scalability (reducing the problem to convex-hull detection over distributed streaming data) (Shyamasundar 2013)

SLIDE 30

Privacy Related Design Issues: A Glimpse

SLIDE 31

Tor Anonymity Network

Tor?

  • Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

Why anonymity?

  • Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location.

SLIDE 32

How anonymous are You?

  • You are only as anonymous as the data you send
    – Untrusted exit points: is the exit point for traffic looking at the data?
    – Traffic may be encrypted inside the network, but not once it is outbound!

SLIDE 33

Purpose in Privacy Policies

  • Yahoo!’s practice is not to use the content of messages […] for marketing purposes.
  • By providing your personal information, you give [Social Security Administration] consent to use the information only for the purpose for which it was collected.

How do you formalize and enforce purpose restrictions?

SLIDE 34

Purpose Restrictions in Privacy Policies

  • “Not for”: Yahoo!’s practice is not to use the content of messages […] for marketing purposes.
  • “Only for”: By providing your personal information, you give [Social Security Administration] consent to use the information only for the purpose for which it was collected.
SLIDE 35

Goal

  • Give a semantics to
    – “Not for” purpose restrictions
    – “Only for” purpose restrictions
  that is parametric in the purpose
  • Provide automated enforcement of purpose restrictions for that semantics
SLIDE 36

An Example

  • An X-ray technician has taken a patient’s X-ray and sends the patient’s medical record to a specialist for diagnosis.
  • The specialist will only reach a diagnosis if the technician first adds the X-ray to the patient’s medical record.
  • The hospital is governed by a privacy policy: medical records will be used only for the purpose of reaching a diagnosis.
  • Goal: determine which actions the technician can perform while obeying the restriction of purpose in the privacy policy.
  • For realizing it, we need a semantics for purpose restrictions that tells us how to determine whether an action is for a purpose.
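One plausible way to make this concrete (a toy reachability semantics; the state names and the rule are illustrative, not any paper's actual formalism) is to count an action as “for” a purpose exactly when the purpose remains achievable after performing it:

```python
# States of the toy hospital workflow and the transitions actions cause.
TRANSITIONS = {
    ("x-ray taken", "add x-ray"): "x-ray added",
    ("x-ray taken", "send record"): "no diagnosis by specialist",
    ("x-ray added", "send record"): "diagnosis by specialist",
}
GOAL = "diagnosis by specialist"

def can_reach(state, goal, transitions):
    """Depth-first search: can some action sequence reach `goal`?"""
    if state == goal:
        return True
    return any(can_reach(nxt, goal, transitions)
               for (src, _), nxt in transitions.items() if src == state)

def action_for_purpose(state, action):
    """An action counts as 'for' the purpose if it keeps the purpose
    reachable from the state it leads to."""
    nxt = TRANSITIONS.get((state, action))
    return nxt is not None and can_reach(nxt, GOAL, TRANSITIONS)

# Sending the record before adding the X-ray cannot lead to a
# diagnosis, so it is not "for" the diagnosis purpose:
assert action_for_purpose("x-ray taken", "add x-ray")
assert not action_for_purpose("x-ray taken", "send record")
assert action_for_purpose("x-ray added", "send record")
```

Under this semantics, the technician must add the X-ray before sending the record, matching the intuition in the example.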

SLIDE 37

[Figure: state diagram of the X-ray example. From “X-ray taken”, the action “add X-ray” leads to “X-ray added”, from which “send record” leads to “diagnosis by specialist”; sending the record before adding the X-ray leads to “no diagnosis by specialist”. Policy on the medical record: medical records used only for diagnosis.]
SLIDE 38

Auditing

[Diagram: the auditor takes the auditee’s behaviour, the purpose restriction, and an environment model as inputs, and returns one of three verdicts: Obeyed, Violated, or Inconclusive.]
SLIDE 39

VOTING SYSTEM REQUIREMENTS

SLIDE 40

Voter System Verification

  • Provide end-to-end integrity
  • Votes verifiably “cast as intended”
  • Votes verifiably “collected as cast”
  • Votes verifiably “counted as collected”

SLIDE 41

Challenge: CAP Theorem Context

  • Consistency: the same data is seen by all nodes at the same time
  • Availability: a guarantee that every request receives a response about whether it was successful or failed
  • Partition tolerance: the system continues to operate despite arbitrary message loss or failure of part of the system

  • CAP Theorem: a distributed system cannot satisfy all three of these guarantees at the same time.

SLIDE 43

What are the factors involved in Security?

  • Attacks – threat models
  • Policies (confidentiality, integrity, availability, authentication, privacy, …)
    – Who can know or learn what? What changes are permitted? When do you render service?
  • Defenses

(Fred Schneider)

SLIDE 44

Fred Schneider

  • Defends against

legacy code

  • Does not always

defend

  • Integrity
  • Confidentiality

SLIDE 45

How about CS Foundations as a Science of Security?

  • How about formal methods?
    – “Testing can only prove the presence of an error and not its absence” (Edsger Dijkstra)
    – “Testing does not test programs but tests the programmer” (Tony Hoare)
  • Formal methods make security more precise
    – making and checking formal models
  • Formal methods lack a process for aligning theories with reality
  • Need
    – measurable validation
    – experimental method

SLIDE 46

Towards a Science of Security

  • Science: (nature, methods, laws)
    – Process of aligning theories with reality
  • Need a Science of Security
  • Persistent laws of security
    – Or is it rather, as in science, methods to improve theories?

SLIDE 47

Need

  • Relationship among
    – Interfaces and actions (reality)
    – Models
    – Role of trust
      • Cannot be created but only relocated
      • Basis for composing defenses and trust relocation
  • Laws
    – Predict qualitative or quantitative properties
    – Analyze or synthesize classes, defenses, mechanisms, and policies
    – Independence of components
  • Modeling security
    – Threat models
    – Defenses vs. threats
      • Game-theoretic approaches
  • Develop principles for compositional security schemes (particularly for big-data control)
  • Security measurements
    – Evaluating strength of defense mechanisms, risk management

SLIDE 48

Science of Security

Science, with a focus on

   Process:
    – hypothesis, experiments, validation
   Results:
    – abstractions and models, obtained by invention and by measurement + insight
    – connections + relationships, packaged as theorems, not artifacts

(Fred Schneider)

Engineering, with a focus on

   Artifacts:
    – discover missing or invalid assumptions (proof of concept; measurement)
    – discover what the real problems are (e.g., attack discovery)

SLIDE 49

Science of Security

Address questions that transcend systems, attacks, defenses:

 Is “code safety” universal for enforcement?
 Can sufficiently introspective defenses always be subverted?
 …

Computer Science ≠ science base for Security. Analogy: medical science vs. medical practice.

(Fred Schneider)

SLIDE 50

Process

[Diagram: the process connecting Science and Security.]

  • No defense is perfect!
  • Every thief can be caught!
SLIDE 52

Whither Cybersecurity?

  • A succession of doctrines has been advocated in the past for enhancing cybersecurity:
    – Prevention, risk management, and deterrence through accountability
    – None has proved effective
    – Can we learn from the failures?
  • How about:
    – viewing cybersecurity as a public good?
    – adopting mechanisms inspired by those used for public health?

(Fred Schneider)
SLIDE 53

Cybersecurity Doctrines: Prevention

  • Computing systems + human elements
  • Can we guarantee absence of vulnerabilities and hence impossibility of attacks?
  • Establishing correctness of a large system is still not viable.
  • Assumptions about the environment vary
  • Attacks evolve with defenses
  • Threats exploit new opportunities that arise due to new value creation or new systems
  • A system considered secure today may not be secure tomorrow

SLIDE 54

Cybersecurity Doctrines: Risk Management

  • 100% cybersecurity is not affordable
  • However, it is also not needed for most systems.
  • Concentrate on vulnerabilities that
    – are sufficiently likely given the perceived threats
    – could lead to expensive compromises (in some metric)
  • Quite practical, but lack of information for calculating risks is a serious limitation
  • Metrics shall provide a good tradeoff between investment and cybersecurity

SLIDE 55

Cybersecurity Doctrine: Deterrence through Accountability

  • Attacks are treated as crimes
  • Concerned with infrastructure to perform forensics, identify perpetrators and deal with (prosecute) them
  • The doctrine is punitive
    – This may not lead to systems being kept up and running
  • Actions by machines and humans
  • Agreement about illegality is elusive or deceptive
  • Cross-border (!) cooperation

SLIDE 56

A New Doctrine: Public Cybersecurity

  • Non-rivalrous:
    – because one user benefiting from the security of a networked system does not diminish the ability of any other user to benefit from the security of that system
  • Non-excludable:
    – users of a secure system cannot easily be excluded from the benefits security brings
  • The usual “commons” mechanisms for managing depletion and inequitable consumption by first comers or more sophisticated users cannot be applied: they are not applicable to cybersecurity.
  • How about Public Health → Public Cybersecurity?
    – Realizing security
    – Managing insecurity

Public health analogy: prevention, containment, mitigation, recovery; no compensation of victims; no punishment, except measures akin to quarantine.
SLIDE 57


“The best theory is inspired by practice, and the best practice is inspired by theory.” (Donald E. Knuth)