Security Protocols — PowerPoint presentation


slide-1
SLIDE 1

Security Protocols

  • Security protocols are the intellectual core of security engineering
  • They are where cryptography and system mechanisms meet
  • They allow trust to be taken from where it exists to where it’s needed
  • But they are much older than computers…
slide-2
SLIDE 2

A Simple Authentication

  • An infrared token used in some multi-storey parking garages to enable subscribers to raise the barrier.
  • It first transmits its serial number, and then transmits an authentication block consisting of the same serial number, followed by a random number, all encrypted using a key that is unique to the device:
  • T → G: T, {T, N}KT
  • The in-car token sends its name, T, followed by the encrypted value of T concatenated with N, where N stands for “number used once,” or nonce.

slide-3
SLIDE 3

A Simple Authentication

  • Key management: a typical garage token’s key KT is simply its serial number encrypted under a global master key KM, known to the central server:
  • KT = {T}KM
  • This is known as key diversification.
  • It gives a very simple way of implementing access tokens, and is very widely used in smartcard-based systems as well.
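The key-diversification scheme above can be sketched in a few lines. This is an illustrative model, not the real device protocol: HMAC-SHA256 stands in for the slides’ encryption operator `{·}K`, the authentication block is modelled as a MAC over T‖N rather than a decryptable ciphertext, and all names (`derive_token_key`, `garage_verify`, the serial string) are ours.

```python
# Sketch of key diversification (KT = {T}KM) and token authentication.
# HMAC-SHA256 stands in for the encryption {.}K used on the slides.
import hmac, hashlib, os

MASTER_KEY = b"global-master-key-KM"   # KM, known only to the central server

def derive_token_key(serial: bytes) -> bytes:
    """KT = {T}KM: diversify the per-token key from its serial number."""
    return hmac.new(MASTER_KEY, serial, hashlib.sha256).digest()

def token_transmit(serial: bytes, kt: bytes):
    """T -> G: T, {T, N}KT  (the token's broadcast)."""
    nonce = os.urandom(8)                                  # N, number used once
    tag = hmac.new(kt, serial + nonce, hashlib.sha256).digest()
    return serial, nonce, tag

def garage_verify(serial, nonce, tag) -> bool:
    """The server re-derives KT from T and checks the authentication block."""
    kt = derive_token_key(serial)
    expected = hmac.new(kt, serial + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

serial = b"TOKEN-0042"
kt = derive_token_key(serial)          # burned into the token at manufacture
msg = token_transmit(serial, kt)
print(garage_verify(*msg))             # True for a genuine token
```

Note how the server stores only KM: any token’s key can be re-derived on demand from its serial number, which is the whole point of key diversification.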

slide-4
SLIDE 4

A Simple Authentication: Common Mistake

  • A common mistake is checking only that the nonce differs from the last one received.
  • Given two valid codes A and B, the series ABAB… was interpreted as a series of independently valid codes.
  • In one car lock, the thief could open the door by replaying the last-but-one code.
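The ABAB flaw is easy to demonstrate. The sketch below (class and code names are ours) contrasts a receiver that only compares against the previous code with one that remembers every code it has ever accepted:

```python
# Why "nonce differs from the last one" is not replay protection.
# A and B are two previously valid codes; the naive check accepts
# the alternating replay A, B, A, B, ...

class NaiveReceiver:
    def __init__(self):
        self.last = None
    def accept(self, code) -> bool:
        ok = code != self.last       # only compares against the previous code
        self.last = code
        return ok

class SeenSetReceiver:
    def __init__(self):
        self.seen = set()
    def accept(self, code) -> bool:
        if code in self.seen:        # reject anything ever used before
            return False
        self.seen.add(code)
        return True

A, B = "code-A", "code-B"
naive, fixed = NaiveReceiver(), SeenSetReceiver()
print([naive.accept(c) for c in [A, B, A, B]])   # [True, True, True, True]
print([fixed.accept(c) for c in [A, B, A, B]])   # [True, True, False, False]
```

In practice a bounded window or monotonic counter replaces the unbounded set, which is exactly where the “how many do you remember?” problem of the next slides comes from.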

slide-5
SLIDE 5

Car unlocking protocols: Challenge Response

  • Principals are the engine controller E and the car key transponder T
  • Static: T → E: KT
  • Non-interactive: T → E: T, {T, N}KT
  • Interactive: E → T: N; T → E: {T, N}KT
  • N is a ‘nonce’, for ‘number used once’.
  • As the car key is inserted into the steering lock, the engine management unit sends a challenge, consisting of a random n-bit number, to the key using a short-range radio signal.
  • The car key computes a response by encrypting the challenge.
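The interactive variant can be sketched as a small exchange. As before this is a toy model: HMAC-SHA256 stands in for the encryption `{·}KT`, and the function names are ours.

```python
# Interactive challenge-response:  E -> T: N ;  T -> E: {T, N}KT
import hmac, hashlib, os

KT = os.urandom(32)          # key shared by engine unit and transponder

def engine_challenge() -> bytes:
    """E -> T: N  (random challenge sent over short-range radio)."""
    return os.urandom(4)

def key_respond(t_name: bytes, n: bytes) -> bytes:
    """T -> E: {T, N}KT  (HMAC stands in for the encryption)."""
    return hmac.new(KT, t_name + n, hashlib.sha256).digest()

def engine_check(t_name: bytes, n: bytes, response: bytes) -> bool:
    expected = hmac.new(KT, t_name + n, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

n = engine_challenge()
resp = key_respond(b"KEY-T", n)
print(engine_check(b"KEY-T", n, resp))        # True
stale = bytes(b ^ 0xFF for b in n)            # a different challenge
print(engine_check(b"KEY-T", stale, resp))    # False: old response rejected
```

Because the engine picks a fresh N each time, a recorded response is useless for any later challenge, which is what the non-interactive version struggles to guarantee.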

slide-6
SLIDE 6

What goes wrong

  • In cheap devices, N may be random or a counter
  • One-way communication and no clock
  • It can be too short, and wrap around
  • If it’s random, how many do you remember? (the valet attack)
  • Counters and timestamps can lose sync, leading to DoS attacks
  • There are also weak ciphers – Eli Biham’s 2008 attack on the Keeloq cipher (2^16 chosen challenges, then 500 CPU days’ analysis)
  • Some other vendors authenticate challenges
slide-7
SLIDE 7

Problems

  • This is still not bulletproof.
  • In one system, the random numbers generated by the engine management unit turned out to be rather predictable, so it was possible for a thief to interrogate the key in the car owner’s pocket, as he passed, with the anticipated next challenge.

slide-8
SLIDE 8

Two-factor authentication (password generator)

S → U: N
U → P: N, PIN
P → U: {N, PIN}KP

(S = Server, P = Password Generator, U = User)
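The two-factor exchange can be modelled directly. HMAC-SHA256 again stands in for encryption under the device key KP, and the function names are ours:

```python
# Password-generator protocol:
#   S -> U: N ;  U -> P: N, PIN ;  P -> U: {N, PIN}KP
import hmac, hashlib, os

KP = os.urandom(32)     # secret inside the password generator, known to S

def server_challenge() -> bytes:
    return os.urandom(8)                               # N

def generator_response(n: bytes, pin: str) -> bytes:
    """P computes {N, PIN}KP; the user relays this value to the server."""
    return hmac.new(KP, n + pin.encode(), hashlib.sha256).digest()

def server_verify(n: bytes, pin: str, response: bytes) -> bool:
    return hmac.compare_digest(response, generator_response(n, pin))

n = server_challenge()
code = generator_response(n, "1234")
print(server_verify(n, "1234", code))   # True
print(server_verify(n, "9999", code))   # wrong PIN -> False
```

The response binds the PIN (something the user knows) to the device key KP (something the user has), which is what makes it two-factor.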

slide-9
SLIDE 9

IFF (2)

slide-10
SLIDE 10
  • Several MiGs had loitered in southern Angola, just north of the South African air defense belt, until a flight of SAAF Impala bombers raided a target in Angola.
  • Then the MiGs turned sharply and flew openly through the SAAF’s air defenses, which sent IFF challenges.
  • The MiGs relayed them to the Angolan air defense batteries, which transmitted them at a SAAF bomber;
  • the responses were relayed back in real time to the MiGs, which retransmitted them and were allowed through.

slide-11
SLIDE 11

IFF (3)

  • The middleman attack is very general – Conway discussed how to beat a grandmaster at postal chess
  • The fix for the man-in-the-middle attack is often application-specific
  • E.g. NATO Mode 12 IFF: 32-bit encrypted challenge (to prevent the enemy using IFF to locate beyond radar range) at a rate of 250 per second

slide-12
SLIDE 12

Identify Friend or Foe (IFF)

  • Basic idea: fighter challenges bomber
    F → B: N
    B → F: {N}K
  • But what if the bomber reflects the challenge back at the fighter’s wingman?
    F → B: N
    B → F: N
    F → B: {N}K
    B → F: {N}K

slide-13
SLIDE 13

Overcoming Reflection Attack

  • In many cases, it is sufficient to include the names of the two parties in the authentication exchange.
  • Require a friendly bomber to reply to the challenge
  • F → B: N
  • with a response such as:
  • B → F: {B, N}K
  • Thus, a reflected response {F, N} (or even {F′, N} from the fighter pilot’s wingman) could be detected.
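Binding the responder’s name into the response is a one-line change that defeats reflection. In the sketch below (names and the MAC stand-in are ours) the fighter expects an answer naming "B", so a response generated by its own wingman F′ is rejected:

```python
# Reflection defence: answer F -> B: N with B -> F: {B, N}K.
# HMAC over (name, N) stands in for the encryption {.}K.
import hmac, hashlib, os

K = os.urandom(32)                       # key shared by all friendly aircraft

def respond(own_name: bytes, n: bytes) -> bytes:
    """Answer a challenge, binding in the responder's own name."""
    return hmac.new(K, own_name + n, hashlib.sha256).digest()

def fighter_check(expected_responder: bytes, n: bytes, resp: bytes) -> bool:
    good = hmac.new(K, expected_responder + n, hashlib.sha256).digest()
    return hmac.compare_digest(resp, good)

n = os.urandom(8)
print(fighter_check(b"B", n, respond(b"B", n)))   # genuine bomber: True
# Reflection: the enemy bounces the challenge to our wingman F', whose
# answer carries his own name -- the fighter, expecting "B", rejects it.
print(fighter_check(b"B", n, respond(b"F'", n)))  # False
```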

slide-14
SLIDE 14

Reflection Attacks

  • Mutual authentication: suppose that a simple challenge-response IFF system, designed to prevent anti-aircraft gunners attacking friendly aircraft, also had to be deployed in a fighter-bomber.
  • Now suppose that the air force simply installed one of its air gunners’ challenge units in each aircraft and connected it to the fire-control radar.
  • But now an enemy bomber might reflect a challenge back at our fighter, get a correct response, and then reflect that back as its own response.

slide-15
SLIDE 15

Source: Ross Anderson

slide-16
SLIDE 16

The Power of Security Policy Modeling

  • 1. Message authentication code (MAC): a data integrity mechanism that provides integrity, but no confidentiality.
  • 2. Chosen-plaintext-secure encryption (CPA-secure encryption) provides confidentiality against eavesdropping, but is not secure against an active attacker who tampers with traffic.
  • 3. Intuitively, combining the two primitives should provide both confidentiality and integrity against an active adversary.

– How to do this integration?

slide-17
SLIDE 17

Integrating Confidentiality and Integrity

  • ke and km denote the encryption and MAC keys, respectively
  • x || y denotes the concatenation of x and y
slide-18
SLIDE 18
slide-19
SLIDE 19

During decryption, if the relevant integrity tag fails to verify, the decryption algorithm outputs a distinguished symbol (⊥) to indicate an error.

Which method is right, and which is better?

slide-20
SLIDE 20

Threat Model

  • Threat model associated with authenticated encryption:
  – The attacker is able to obtain the encryption of arbitrary messages of its choice
  – Attacker’s goal:
    • Learn information about the decryption of a well-formed challenge ciphertext (thereby defeating confidentiality),
    • or generate a new well-formed ciphertext different from all ciphertexts previously given to the attacker (thereby defeating integrity).
  • If the attacker can do neither, then we say that the system provides authenticated encryption.

slide-21
SLIDE 21

Choice: TLS

  • Not generically secure:
  – There are specific instances of encryption and MAC such that the TLS combination does not provide authenticated encryption.
  – However, for specific encryption systems, such as randomized counter-mode encryption, the TLS method provides authenticated encryption even if the MAC is only weakly secure (so-called one-time secure). The reason is that the MAC is protected by the encryption and therefore need not be a fully secure MAC; weak MAC security is sufficient.

slide-22
SLIDE 22

Choice: IPSEC

  • The IPsec construction can be shown to provide authenticated encryption for any MAC and CPA-secure encryption.
  • The basic reason is that the MAC locks the ciphertext, so that any modification of the ciphertext en route will be detected by the decryptor.

slide-23
SLIDE 23

Choice: SSH

  • The SSH construction is known to be secure when a very specific MAC is used, but may not be secure for a general-purpose MAC. To see why, recall that a MAC need not preserve confidentiality, and therefore MAC(km, m) may leak information about the encrypted plaintext.

slide-24
SLIDE 24

Choices

  • Based on these comparisons, a designer can choose the appropriate method for the application at hand.
  – When counter-mode encryption is used, the TLS construction is adequate even if a simple MAC is used.
  – Otherwise, one should use the IPsec construction.
  • This clear understanding is only made possible thanks to the precise formulation of authenticated encryption.

slide-25
SLIDE 25

What do we learn

  • Using the definition of authenticated encryption, the National Institute of Standards and Technology (NIST) was able to publish precise encryption modes, called CCM and GCM, designed to meet the definition.
  • Once the goals of authenticated encryption were clearly spelled out, it turned out that authenticated encryption can be built far more efficiently than by combining separate encryption and MAC algorithms.

slide-26
SLIDE 26

Reference: Carl Landwehr et al., “Privacy and Cybersecurity: The Next 100 Years,” Proceedings of the IEEE, Vol. 100, 13 May 2012

slide-27
SLIDE 27

Protection from Untrusted Interaction

  • Vulnerabilities
  • Malware
  • Insufficient control
  • User-oriented access control (DAC, MAC, RBAC, …): it is typical for active entities (known as ‘subjects’) to have access to all the user’s privileges, regardless of the privileges actually required by the running program.

slide-28
SLIDE 28

Running untrusted code

Need to run buggy/untrusted code:

– programs from untrusted Internet sites:
  • apps, extensions, plug-ins, codecs for media players
– exposed applications: PDF viewers, Outlook
– legacy daemons: sendmail, bind
– honeypots

Goal: if an application “misbehaves” ⇒ kill it

slide-29
SLIDE 29

User Oriented Control

  • User-oriented access controls do not sufficiently mitigate the malware threat:
  – Man-in-the-middle attacks can intercept communications between hosts and insert malware via trusted websites and hosts
    • and can even intercept “secure” encrypted communications.
  – Viruses copy themselves to other programs.
  – Worms propagate across networks, often by exploiting software vulnerabilities.
  – Trojan horses pose as legitimate programs.
  – Malware can be sent via email in targeted attacks.

slide-30
SLIDE 30

Trust Based Execution (1)

  • One of the simplest access control techniques to mitigate the risk of running programs with all of the user’s authority is to only allow particular programs to run.
  – Microsoft AppLocker and Microsoft Software Restriction Policies (SRP)
  – White lists, black lists
  • Using this approach, processes typically still have all the authority of the user;
  – however, only those programs deemed trustworthy (or not “untrusted”) are allowed to run.

slide-31
SLIDE 31

Trust Based Execution (2)

  • These techniques analyse source code or binary files to decide whether a program is trusted to run.
  • They are used by many of the current generation of anti-malware suites, typically based on attributes that identify code as untrusted.
  • Signature-based and heuristic lists identify programs based on characteristics of executable files, and are typically used to specify black lists to prevent known types of malware from running.
  • The techniques used to identify malware have become increasingly sophisticated, as have the various techniques employed by malware to avoid detection.

slide-32
SLIDE 32

Trust Based Execution (3)

  • Reputation-based security, used by systems such as Symantec Quorum, Microsoft SpyNet, and McAfee Artemis, is a relatively new technique that uses information collected from a large number of users to make judgements about the likely trustworthiness of programs, in order to decide whether programs should be trusted to run.

slide-33
SLIDE 33

Trust based Execution (4)

  • Trust-based security systems provide protection from a limited number of specific threats.
  – Many legitimate “trusted” programs can be the source of malicious behaviour: for example, well-intended software authors may accidentally introduce design or implementation flaws, resulting in software vulnerabilities that enable attackers to execute malicious code.
  • These approaches do not provide a mechanism for safely running programs without trusting them to run with all of a user’s authority.
  – It is not ideal to have to place complete trust in any software.
  • Furthermore, in many cases it is overly restrictive or impractical to prohibit users from running code obtained from third parties via the Internet.
  – For example, mobile phone “apps” are currently very popular, and the web is becoming increasingly dynamic, including client-side execution of mobile code.
  – All of these mechanisms can fail to protect users from malware.

slide-34
SLIDE 34

Trust Based Executions (5)

  • All of the above mechanisms can fail to protect users from malware:
  – Digital signatures and certificates have failed to accurately reflect the actual origin of programs
    • ActiveX has been a prevalent infection vector
  – Anti-malware black-list techniques have failed to identify malware

slide-35
SLIDE 35

Application-oriented Access Control

  • Restrict subjects based on the identity of the application or process, rather than just the identity of the user.
  • This approach is designed to limit the ability of applications to access resources outside of those they require to perform legitimate actions.
  • Application-oriented controls can restrict the damage caused by malware or exploited vulnerabilities by limiting the software to those actions authorised by whoever configures the security policy, whether end users, administrators, or software developers.

slide-36
SLIDE 36

Confinement (1)

  • Ensure a misbehaving application (app) cannot harm the rest of the system
  • Various levels of implementation:
  – Hardware: run the application on isolated hardware
    • Difficult to manage (with network needs)
  – Virtual machines: isolate OSes on a single machine
  – Process: system call interposition – isolate a process within a single operating system

slide-37
SLIDE 37

Confinement (2)

  • Threads: Software Fault Isolation (SFI)
  – Isolating threads sharing the same address space
  • Application: e.g. browser-based confinement

slide-38
SLIDE 38

Isolation (Confinement) : Sandboxes and Virtualisation

  • One way to restrict a program’s ability to access resources is to run it in an environment where the application can only access objects within its so-called ‘sandbox’.
  • Typically, sandboxes only apply to programs explicitly launched into or from within a sandbox.
  – In most cases no security context changes take place when a new process is started, and all programs in a particular sandbox run with the same set of rights.
  • Sandboxes can be either permanent, where resource changes persist after the programs finish running, or ephemeral, where changes are discarded after the sandbox is no longer in use.

slide-39
SLIDE 39

Isolation Based Sandbox

Applications (such as Application A in the figure) that are not within the sandbox are typically outside the scope of the sandbox access controls, and are therefore (other security controls permitting) free to access any resources, including those within a sandbox.

slide-40
SLIDE 40

Container Based Sandboxes

  • Containers share the kernel but have separate user-space resources.
  • More efficient system-level virtualization
  • chroot, jails, Linux containers (discussed already)

slide-41
SLIDE 41

From Dan Boneh

slide-42
SLIDE 42

Implementing confinement

Key component: the reference monitor
– Mediates requests from applications
  • Implements the protection policy
  • Enforces isolation and confinement
– Must always be invoked:
  • Every application request must be mediated
– Tamperproof:
  • The reference monitor cannot be killed
  • … or if killed, then the monitored process is killed too
– Small enough to be analyzed and validated

slide-43
SLIDE 43

An old example: chroot

Often used for “guest” accounts on ftp sites. To use (must be root):

chroot /tmp/guest     root dir “/” is now “/tmp/guest”
su guest              EUID set to “guest”

Now “/tmp/guest” is prepended to file system accesses for applications in the jail:

  • open(“/etc/passwd”, “r”) →
  • open(“/tmp/guest/etc/passwd”, “r”)

⇒ the application cannot access files outside the jail
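The path-mapping rule, and the relative-path escape discussed on a later slide, can both be shown as pure path arithmetic (function names are ours; this models the kernel’s behaviour, it does not call `chroot`):

```python
# Inside a chroot jail at /tmp/guest, open("/etc/passwd") really opens
# /tmp/guest/etc/passwd. Naive concatenation also falls to "../../".
import posixpath

JAIL = "/tmp/guest"

def naive_jail_path(path: str) -> str:
    """Broken mapping: concatenate first, resolve '..' afterwards --
    which can walk back out of the jail."""
    return posixpath.normpath(JAIL + "/" + path)

def safe_jail_path(path: str) -> str:
    """Resolve '..' *before* prefixing the jail root, so the result can
    never escape JAIL (what a hardened chroot effectively guarantees)."""
    resolved = posixpath.normpath("/" + path)        # clamp at "/"
    return posixpath.normpath(JAIL + resolved)

print(naive_jail_path("/etc/passwd"))        # /tmp/guest/etc/passwd
print(naive_jail_path("../../etc/passwd"))   # /etc/passwd  -- escaped!
print(safe_jail_path("../../etc/passwd"))    # /tmp/guest/etc/passwd
```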

slide-44
SLIDE 44

Jailkit

Problem: all utility programs (ls, ps, vi) must live inside the jail

  • The jailkit project auto-builds the files, libs, and dirs needed in the jail environment
  • jk_init: creates the jail environment
  • jk_check: checks the jail env for security problems
    • checks for any modified programs,
    • checks for world-writable directories, etc.
  • jk_lsh: restricted shell to be used inside the jail
  • Note: a simple chroot jail does not limit network access
slide-45
SLIDE 45

Escaping from jails

Early escapes: relative paths

  • open(“../../etc/passwd”, “r”) →
  • open(“/tmp/guest/../../etc/passwd”, “r”)

chroot should only be executable by root
– otherwise a jailed app can do:
  • create a dummy file “/aaa/etc/passwd”
  • run chroot “/aaa”
  • run su root to become root
(a bug in Ultrix 4.0)

slide-46
SLIDE 46

Many ways to escape jail as root

  • Create a device that lets you access the raw disk
  • Send signals to a non-chrooted process
  • Reboot the system
  • Bind to privileged ports
slide-47
SLIDE 47

FreeBSD jail

A stronger mechanism than simple chroot. To run:

jail jail-path hostname IP-addr cmd

– calls a hardened chroot (no “../../” escape)
– can only bind to sockets with the specified IP address and authorized ports
– can only communicate with processes inside the jail
– root is limited, e.g. cannot load kernel modules

slide-48
SLIDE 48

Not all programs can run in a jail

Programs that can run in jail:

  • audio player
  • web server

Programs that cannot:

  • web browser
  • mail client
slide-49
SLIDE 49

Problems with chroot and jail

Coarse policies:
– All-or-nothing access to parts of the file system
– Inappropriate for apps like a web browser
  • Needs read access to files outside the jail (e.g. for sending attachments in Gmail)

Does not prevent malicious apps from:
– Accessing the network and messing with other machines
– Trying to crash the host OS

slide-50
SLIDE 50

Chroot()

  • Changes the root directory for a process and its children
  • The application’s namespace limits it to only access files inside the specified directory tree
  • A wrapper program, “chroot”, can be used to launch programs into a “chroot jail”

slide-51
SLIDE 51

Chroot()

  • Only root can perform chroot, but the process should change identity as soon as possible
  • Root can escape a chroot jail via another chroot()
  • There are resources, such as process controls and networking, that are not mediated
  • Mechanisms like FreeBSD jails solve some of these problems.

slide-52
SLIDE 52

System Call interposition

  • These schemes filter system calls
  – Systrace, Janus, Etrace, TRON
  • MAPbox: confines programs based on behaviour classes
  • Demerits: system calls can be complex
  – They are combined in various ways, as they were not designed as a security interface.

slide-53
SLIDE 53

System call interposition

Observation: to damage the host system (e.g. make persistent changes), an app must make system calls:
– To delete/overwrite files: unlink, open, write
– To do network attacks: socket, bind, connect, send

Idea: monitor the app’s system calls and block unauthorized calls

Implementation options:
– Completely kernel space (e.g. GSWTK)
– Completely user space (e.g. program shepherding)
– Hybrid (e.g. Systrace)

slide-54
SLIDE 54

Initial implementation (Janus) [GWTB’96]

Linux ptrace: process tracing. The monitor process calls ptrace(…, pid_t pid, …) and wakes up when pid makes a system call. The monitor kills the application if the request is disallowed.

(Diagram: monitored application (browser) and monitor in user space, above the OS kernel; example request: open(“/etc/passwd”, “r”).)
slide-55
SLIDE 55

Complications

  • If the app forks, the monitor must also fork
  – the forked monitor monitors the forked app
  • If the monitor crashes, the app must be killed
  • The monitor must maintain all OS state associated with the app
  – current working directory (CWD), UID, EUID, GID
  – When the app does “cd path”, the monitor must update its CWD
    • otherwise relative path requests are interpreted incorrectly:

cd(“/tmp”)
open(“passwd”, “r”)
cd(“/etc”)
open(“passwd”, “r”)
slide-56
SLIDE 56

Problems with ptrace

ptrace is not well suited for this application:
– Trace all system calls or none
  • inefficient: no need to trace the “close” system call
– The monitor cannot abort a sys-call without killing the app

Security problems: race conditions
– Example: symlink: me ⟶ mydata.dat

proc 1: open(“me”)           monitor checks and authorizes
proc 2: me ⟶ /etc/passwd    OS executes open(“me”)

Classic TOCTOU bug (time-of-check / time-of-use): the check and the use are not atomic.
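The race can be replayed deterministically by performing the attacker’s symlink swap between the check and the use. A self-contained sketch (filenames are ours; it uses a temp directory and runs on POSIX systems):

```python
# TOCTOU replayed step by step: the monitor authorizes "me" while it
# points at harmless data; the attacker re-points the symlink; the OS
# then opens the secret instead.
import os, tempfile

d = tempfile.mkdtemp()
with open(os.path.join(d, "mydata.dat"), "w") as f:
    f.write("harmless")
with open(os.path.join(d, "secret"), "w") as f:
    f.write("root:x:0:0")

link = os.path.join(d, "me")
os.symlink(os.path.join(d, "mydata.dat"), link)

# Time of check: the monitor inspects the target and authorizes the open.
authorized = os.path.realpath(link).endswith("mydata.dat")

# Between check and use, proc 2 swaps the symlink: me -> secret.
os.remove(link)
os.symlink(os.path.join(d, "secret"), link)

# Time of use: the kernel resolves "me" afresh -- and follows the new target.
content = open(link).read()
print(authorized, content)     # authorized, yet the secret was read
```

This is why systrace resolves symlinks and rewrites path arguments before the policy check, rather than checking and opening in two separate steps.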

slide-57
SLIDE 57

Alternate design: systrace [P’02]

  • systrace only forwards monitored sys-calls to the monitor (efficiency)
  • systrace resolves symlinks and replaces sys-call path arguments with the full path to the target
  • When the app calls execve, the monitor loads a new policy file

(Diagram: the monitored application (browser) runs in user space; a sys-call gateway in the OS kernel forwards calls such as open(“/etc/passwd”, “r”) through systrace to the monitor, which permits or denies them according to the app’s policy file.)

slide-58
SLIDE 58

Ostia: a delegation architecture

[GPR’04]

Previous designs use filtering:
  • The filter examines sys-calls and decides whether to block
  • Difficulty with syncing state between app and monitor (CWD, UID, …)
  – Incorrect syncing results in security vulnerabilities (e.g. a disallowed file is opened)

A delegation architecture:

(Diagram: the monitored application in user space delegates calls such as open(“/etc/passwd”, “r”) to an agent, which consults the app’s policy file; the OS kernel sits below.)
slide-59
SLIDE 59

Ostia: a delegation architecture

[GPR’04]

  • The monitored app is disallowed from making monitored sys calls
  – Minimal kernel change (… but the app can call close() itself)
  • Sys-calls are delegated to an agent that decides if the call is allowed
  – Can be done without changing the app (requires an emulation layer in the monitored process)
  • Incorrect state syncing will not result in a policy violation
  • What should the agent do when the app calls execve?
  – The process can make the call directly; the agent loads the new policy file.

slide-60
SLIDE 60

NaCl: a modern day example

  • Example: a game delivered as untrusted x86 code
  • Two sandboxes:
  – Outer sandbox: restricts capabilities using system call interposition
  – Inner sandbox: uses x86 memory segmentation to isolate application memory among apps

(Diagram: the game runs inside the NaCl runtime, reached from the browser’s HTML/JavaScript via NPAPI.)

slide-61
SLIDE 61

Policy

Sample policy file:

path allow /tmp/*
path deny /etc/passwd
network deny all

Manually specifying a policy for an app can be difficult:
– Systrace can auto-generate a policy by learning how the app behaves on “good” inputs
– If the policy does not cover a specific sys-call, ask the user
  • … but the user has no way to decide

The difficulty of choosing a policy for specific apps (e.g. a browser) is the main reason this approach is not widely used.
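A toy matcher for this policy syntax makes the semantics concrete. This is a sketch of the idea, not Systrace’s actual parser; first matching rule wins, and unmatched requests are denied by default (where a real tool would ask the user):

```python
# Minimal interpreter for "kind verdict pattern" policy lines.
import fnmatch

POLICY = """\
path allow /tmp/*
path deny /etc/passwd
network deny all
"""

def parse(policy: str):
    rules = []
    for line in policy.splitlines():
        kind, verdict, pattern = line.split()
        rules.append((kind, verdict == "allow", pattern))
    return rules

def allowed(rules, kind: str, target: str) -> bool:
    for rkind, verdict, pattern in rules:
        if rkind == kind and (pattern == "all" or fnmatch.fnmatch(target, pattern)):
            return verdict               # first matching rule wins
    return False                         # default deny (ask the user)

rules = parse(POLICY)
print(allowed(rules, "path", "/tmp/scratch.txt"))   # True
print(allowed(rules, "path", "/etc/passwd"))        # False
print(allowed(rules, "network", "198.51.100.7"))    # False
```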

slide-62
SLIDE 62

Copy on write Sandboxes

  • Allow applications to read all files; all writes are redirected to a separate area.
  • Upon termination, ask the user which writes are to be kept.
  • Example: Alcatraz
slide-63
SLIDE 63

Linux Capabilities

  • On Unix there are two types of users:
  – Privileged (uid == 0)
  – Unprivileged (uid != 0)
  • Root (uid 0) can do anything
  – Bypasses all kernel permission checks
  • Capabilities divide these root privileges into distinct units
  • These include CAP_SYS_CHROOT
slide-64
SLIDE 64

Sandboxes → VMs

  • Most sandboxes provide an isolation-based approach, where the effect of programs run inside a sandbox is entirely isolated from resources outside the sandbox’s authority. However, due to practical requirements, sandboxing schemes often provide ways of circumventing this isolation in order to copy data into and out of sandboxes.
  • System-level sandboxes provide complete operating environments to confined applications. One way of achieving this is through the use of hardware-level virtual machines (VMs). A virtual machine monitor (VMM) can be used to multiplex the physical hardware between multiple self-contained, fully virtualised VM operating environments, each containing a complete operating system.

slide-65
SLIDE 65

Isolation: System Level Sandboxes

  • System-level sandboxes provide a complete operating environment for an OS
  • Virtualization: a hypervisor (virtual machine monitor, VMM) can multiplex hardware to run hardware-level virtual machines (VMs)

slide-66
SLIDE 66

(Diagram: Type I – hypervisor runs directly on the hardware, hosting guest OSes; Type II – hypervisor runs on top of a host OS, hosting guest OSes.)

slide-67
SLIDE 67

Isolation: System Level Sandboxes

  • Hardware emulation based: the guest OS need not know it is being virtualized
  – VMware, VirtualBox
  • Para-virtualization (software emulation)
  – The guests know they are being virtualized and use the API provided by the virtualization layer
  – Can be more efficient, since work can be done by the host
  – Xen, User-mode Linux