CS419 Spring 2010 Computer Security, Vinod Ganapathy, Lecture 13



SLIDE 1

CS419 – Spring 2010

Computer Security

Vinod Ganapathy, Lecture 13, Chapter 6: Intrusion Detection

SLIDE 2

Security Intrusion & Detection

Security Intrusion

a security event, or combination of multiple security events, that constitutes a security incident in which an intruder gains, or attempts to gain, access to a system (or system resource) without having authorization to do so.

Intrusion Detection

a security service that monitors and analyzes system events for the purpose of finding, and providing real-time or near-real-time warning of, attempts to access system resources in an unauthorized manner.

SLIDE 3

Principles of Intrusion Detection

  • Characteristics of systems not under attack

– User and process actions conform to a statistically predictable pattern
– User and process actions do not include sequences of actions that subvert the security policy
– Process actions correspond to a set of specifications describing what the processes are allowed to do

  • Systems under attack do not meet at least one of these
SLIDE 4

Example

  • Goal: insert a back door into a system

– Intruder will modify system configuration file or program
– Requires privilege; attacker enters system as an unprivileged user and must acquire privilege

  • Nonprivileged user may not normally acquire privilege (violates #1)
  • Attacker may break in using sequence of commands that violates security policy (violates #2)
  • Attacker may cause program to act in ways that violate program’s specification (violates #3)

SLIDE 5

Goals of IDS

  • Detect wide variety of intrusions

– Previously known and unknown attacks
– Suggests need to learn/adapt to new attacks or changes in behavior

  • Detect intrusions in timely fashion

– May need to be real-time, especially when system responds to intrusion

  • Problem: analyzing commands may impact response time of system

– May suffice to report intrusion occurred a few minutes or hours ago

SLIDE 6

Goals of IDS

  • Present analysis in simple, easy-to-understand format

– Ideally a binary indicator
– Usually more complex, allowing analyst to examine suspected attack
– User interface critical, especially when monitoring many systems

  • Be accurate

– Minimize false positives, false negatives
– Minimize time spent verifying attacks, looking for them

SLIDE 7

Intrusion Techniques

  • objective: to gain access or increase privileges
  • initial attacks often exploit system or software vulnerabilities to execute code to get a backdoor

 e.g. buffer overflow

  • or to gain protected information

 e.g. password guessing or acquisition

SLIDE 8

Intrusion Detection Systems

  • classify intrusion detection systems (IDSs) as:

 Host-based IDS: monitors single host activity
 Network-based IDS: monitors network traffic

  • logical components:

 sensors - collect data
 analyzers - determine if intrusion has occurred
 user interface - manage / direct / view IDS

SLIDE 9

Models of Intrusion Detection

  • Anomaly detection

– What is usual, is known
– What is unusual, is bad

  • Misuse detection

– What is bad, is known
– What is not bad, is good

  • Specification-based detection

– What is good, is known
– What is not good, is bad

SLIDE 10

IDS Principles

  • assume intruder behavior differs from legitimate users

 expect overlap as shown
 observe deviations from past history
 problems of:

  • false positives
  • false negatives
  • must compromise between the two
SLIDE 11

IDS Requirements

  • run continually
  • be fault tolerant
  • resist subversion
  • impose a minimal overhead on system
  • be configured according to system security policies
  • adapt to changes in systems and users
  • scale to monitor large numbers of systems
  • provide graceful degradation of service
  • allow dynamic reconfiguration
SLIDE 12

IDS Architecture

  • Basically, a sophisticated audit system

– Sensor: gathers data for analysis
– Analyzer: analyzes data obtained from the sensor according to its internal rules
– Notifier: obtains results from analyzer, and takes some action

  • May simply notify security officer
  • May reconfigure agents, director to alter collection, analysis methods
  • May activate response mechanism
SLIDE 13

Sensors

  • Obtains information and sends to analyzer
  • May put information into another form

– Preprocessing of records to extract relevant parts

  • May delete unneeded information
  • Analyzer may request agent send other information

SLIDE 14

Example

  • IDS uses failed login attempts in its analysis
  • Sensor scans login log every 5 minutes, sends director for each new login attempt:

– Time of failed login
– Account name and entered password

  • Analyzer requests all records of login (failed or not) for particular user

– Suspecting a brute-force cracking attempt
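This sensor/analyzer split can be sketched in a few lines. Everything here is illustrative: the log-line format, field names, and helper names are hypothetical, not from any real system.

```python
import re
from collections import defaultdict

# Hypothetical log-line format, e.g.:
#   "2010-02-22 10:41:03 FAILED LOGIN user=alice pw=guess1"
FAILED = re.compile(r"(?P<time>\S+ \S+) FAILED LOGIN user=(?P<user>\S+) pw=(?P<pw>\S+)")

def scan_log(lines):
    """Sensor pass: extract (time, account, entered password) per failed login."""
    return [m.groupdict() for line in lines if (m := FAILED.search(line))]

def per_user(events):
    """Analyzer side: group failed attempts by account to spot brute forcing."""
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user"]].append((e["time"], e["pw"]))
    return by_user
```

In this sketch the sensor sends only failed-login records; on suspicion, the analyzer could ask for all login records for that account.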

SLIDE 15

Host-Based Sensors

  • Obtain information from logs

– May use many logs as sources
– May be security-related or not
– May be virtual logs if agent is part of the kernel

  • Very non-portable
  • Sensor generates its information

– Scans information needed by IDS, turns it into equivalent of log record
– Typically, check policy; may be very complex

SLIDE 16

Network-Based Sensors

  • Detects network-oriented attacks

– Denial of service attack introduced by flooding a network

  • Monitor traffic for a large number of hosts
  • Examine the contents of the traffic itself
  • Agent must have same view of traffic as destination

– TTL tricks, fragmentation may obscure this

  • End-to-end encryption defeats content monitoring

– Not traffic analysis, though

SLIDE 17

Network Issues

  • Network architecture dictates agent placement

– Ethernet or broadcast medium: one agent per subnet
– Point-to-point medium: one agent per connection, or agent at distribution/routing point

  • Focus is usually on intruders entering network

– If few entry points, place network agents behind them
– Does not help if inside attacks to be monitored

SLIDE 18

Analyzer

  • Reduces information from sensors

– Eliminates unnecessary, redundant records

  • Analyzes remaining information to determine if attack under way

– Analysis engine can use a number of techniques, discussed before, to do this

  • Usually run on separate system

– Does not impact performance of monitored systems
– Rules, profiles not available to ordinary users

SLIDE 19

Notifier

  • Accepts information from director
  • Takes appropriate action

– Notify system security officer
– Respond to attack

  • Often GUIs

– Well-designed ones use visualization to convey information

SLIDE 20

Example GUI


  • GUI showing the progress of a worm as it spreads through network
  • Left is early in spread
  • Right is later on
SLIDE 21

Host-Based IDS

  • specialized software to monitor system activity to detect suspicious behavior

 primary purpose is to detect intrusions, log suspicious events, and send alerts
 can detect both external and internal intrusions

  • two approaches, often used in combination:

 anomaly detection - defines normal/expected behavior

  • threshold detection
  • profile based

 signature detection - defines (im)proper behavior

SLIDE 22

Audit Records

  • a fundamental tool for intrusion detection
  • two variants:

 native audit records - provided by O/S

  • always available but may not be optimum

 detection-specific audit records - IDS specific

  • additional overhead but specific to IDS task
  • often log individual elementary actions
  • e.g. may contain fields for: subject, action, object, exception-condition, resource-usage, time-stamp

SLIDE 23

Anomaly Detection

  • threshold detection

 checks excessive event occurrences over time
 alone a crude and ineffective intruder detector
 must determine both thresholds and time intervals

  • profile based

 characterize past behavior of users / groups
 then detect significant deviations
 based on analysis of audit records

  • gather metrics: counter, gauge, interval timer, resource utilization
  • analyze: mean and standard deviation, multivariate, Markov process, time series, operational model

SLIDE 24

Threshold Metrics

  • Counts number of events that occur

– Between m and n events (inclusive) expected to occur
– If number falls outside this range, anomalous

  • Example

– Windows: lock user out after k failed sequential login attempts. Range is (0, k–1).

  • k or more failed logins deemed anomalous
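The threshold rule above fits in a few lines; this is a minimal sketch with hypothetical function names:

```python
def threshold_check(count, m, n):
    """Flag an anomaly when an event count falls outside the expected [m, n] range."""
    return not (m <= count <= n)

# Windows-style lockout: up to k-1 sequential failed logins tolerated
k = 3
normal = threshold_check(k - 1, 0, k - 1)   # k-1 failures: still in range
anomalous = threshold_check(k, 0, k - 1)    # k failures: out of range, anomalous
```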
SLIDE 25

Difficulties

  • Appropriate threshold may depend on non-obvious factors

– Typing skill of users
– If keyboards are US keyboards, and most users are French, typing errors very common

  • Dvorak vs. non-Dvorak within the US
SLIDE 26

Statistical Moments

  • Analyzer computes standard deviation and other measures of correlation

– If measured values fall outside expected intervals, anomalous

  • Potential problem

– Profile may evolve over time; solution is to weigh data appropriately or alter rules to take changes into account
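A statistical-moments detector can be sketched as follows; the 3-sigma cutoff is an illustrative choice, not a prescribed one:

```python
import statistics

def is_outlier(history, value, k=3.0):
    """Flag value if it lies more than k standard deviations from the mean
    of past observations (a simple statistical-moments detector)."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(value - mu) > k * sigma
```

Handling the "profile evolves" problem would mean recomputing (or exponentially decaying) `history` as new observations arrive.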

SLIDE 27

Example: IDES

  • Developed at SRI International

– Represent users, login session, other entities as

  • rdered sequence of statistics <q0,j, …, qn,j>

– qi,j (statistic i for day j) is count or time interval – Weighting favors recent behavior over past behavior

  • Ak,j sum of counts making up metric of kth statistic on jth

day

  • qk,l+1 = Ak,l+1 – Ak,l + 2–rtqk,l where t is number of log

entries/total time since start, r factor determined through experience

SLIDE 28

Potential Problems

  • Assumes behavior of processes and users can be modeled statistically

– Ideal: matches a known distribution such as the Gaussian (normal) distribution
– Otherwise, must use techniques like clustering to determine moments, characteristics that show anomalies, etc.

  • Real-time computation a problem too
SLIDE 29

Markov Model

  • Past state affects current transition
  • Anomalies based upon sequences of events, and not on occurrence of single event
  • Problem: need to train system to establish valid sequences

– Use known training data that is not anomalous
– The more training data, the better the model
– Training data should cover all possible normal uses of system
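Training such a model amounts to counting transitions in known-good event sequences. A minimal sketch (function names and the rarity threshold are assumptions):

```python
from collections import defaultdict

def train_transitions(events):
    """Estimate P(next | current) from a non-anomalous training sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(events, events[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

def transition_anomalous(model, cur, nxt, threshold=0.1):
    """Flag a transition never seen (or too rare) in training."""
    return model.get(cur, {}).get(nxt, 0.0) < threshold
```

Sequences the training data never exhibited get probability zero, which is why coverage of all normal uses matters.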

SLIDE 30

Example: TIM

  • Time-based Inductive Learning
  • Sequence of events is abcdedeabcabc
  • TIM derives following rules:

R1: ab→c (1.0)   R2: c→d (0.5)   R3: c→e (0.5)
R4: d→e (1.0)   R5: e→a (0.5)   R6: e→d (0.5)

  • Seen: abd; triggers alert

– c always follows ab in rule set

  • Seen: acf; no alert as multiple events can follow c

– May add rule R7: c→f (0.33); adjust R2, R3
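Rule derivation for two-event antecedents (like R1: ab→c) can be sketched by counting trigrams; this is a simplified stand-in for TIM, which also handles other antecedent lengths:

```python
from collections import defaultdict

def derive_pair_rules(seq):
    """TIM-style sketch: for each 2-event antecedent, estimate the
    probability of each possible following event in the training sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[a + b][c] += 1
    return {ante: {nxt: n / sum(f.values()) for nxt, n in f.items()}
            for ante, f in counts.items()}

rules = derive_pair_rules("abcdedeabcabc")
# "ab" is always followed by "c" in the training sequence, matching R1
```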

SLIDE 31

Misuse Detection

  • observe events on system and applying a

set of rules to decide if intruder

  • approaches:

 rule-based anomaly detection

  • analyze historical audit records for expected

behavior, then match with current behavior

 rule-based penetration identification

  • rules identify known penetrations / weaknesses
  • often by analyzing attack scripts from Internet
  • supplemented with rules from security experts
SLIDE 32

Misuse Modeling

  • Determines whether a sequence of

instructions being executed is known to violate the site security policy

– Descriptions of known or potential exploits grouped into rule sets – IDS matches data against rule sets; on success, potential attack found

  • Cannot detect attacks unknown to developers
  • f rule sets

– No rules to cover them
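Matching event streams against rule sets can be sketched as an ordered-subsequence check. The rule set and event strings here are invented for illustration, not drawn from any real IDS:

```python
# Hypothetical rule set: each signature is a sequence of events that,
# observed in order, indicates a known exploit.
RULES = {
    "setuid-copy": ("cp /bin/sh /tmp/sh", "chmod 4755 /tmp/sh"),
}

def matches(rule, events):
    """True if the rule's events appear in order (not necessarily adjacent)."""
    it = iter(events)
    return all(step in it for step in rule)

def detect(events):
    """Return names of all rules matched by the observed event stream."""
    return [name for name, rule in RULES.items() if matches(rule, events)]
```

An attack with no corresponding rule produces no match, which is exactly the limitation the slide notes.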

SLIDE 33

Example: NFR

  • Built to make adding new rules easily
  • Architecture:

– Packet sucker: read packets from network – Decision engine: uses filters to extract information – Backend: write data generated by filters to disk

  • Query backend allows administrators to extract

raw, postprocessed data from this file

  • Query backend is separate from NFR process
SLIDE 34

Domain-specific Language

  • Example: ignore all traffic not intended for 2 web servers:

    # list of my web servers
    my_web_servers = [ 10.237.100.189 10.237.55.93 ] ;

    # we assume all HTTP traffic is on port 80
    filter watch tcp ( client, dport:80 )
    {
        if (ip.dest != my_web_servers)
            return;
        # now process the packet; we just write out packet info
        record system.time, ip.src, ip.dest to www_list;
    }

    www_list = recorder("log")

SLIDE 35

Distributed Host-Based IDS

SLIDE 36

Combining Sources: DIDS

  • Neither network-based nor host-based monitoring sufficient to detect some attacks

– Attacker tries to telnet into system several times using different account names: network-based IDS detects this, but not host-based monitor
– Attacker tries to log into system using an account without password: host-based IDS detects this, but not network-based monitor

  • DIDS uses agents on hosts being monitored, and a network monitor

– DIDS director uses expert system to analyze data

SLIDE 37

Attackers Moving in Network

  • Intruder breaks into system A as alice
  • Intruder goes from A to system B, and breaks into B’s account bob
  • Host-based mechanisms cannot correlate these
  • DIDS director could see bob logged in over alice’s connection; expert system infers they are the same user

– Assigns network identification number NID to this user

SLIDE 38

Handling Distributed Data

  • Agent analyzes logs to extract entries of interest

– Agent uses signatures to look for attacks

  • Summaries sent to director

– Other events forwarded directly to director

  • DIDS model has agents report:

– Events (information in log entries)
– Action, domain

SLIDE 39

Distributed Host-Based IDS

SLIDE 40

Network-Based IDS

  • network-based IDS (NIDS)

 monitors traffic at selected points on a network
 in (near) real time to detect intrusion patterns
 may examine network, transport and/or application level protocol activity directed toward systems

  • comprises a number of sensors

 inline (possibly as part of other net device)
 passive (monitors copy of traffic)

SLIDE 41

NIDS Sensor Deployment

SLIDE 42

Intrusion Detection Techniques

  • signature detection

 at application, transport, network layers; unexpected application services, policy violations

  • anomaly detection

 of denial of service attacks, scanning, worms

  • when potential violation detected, sensor sends an alert and logs information

 used by analysis module to refine intrusion detection parameters and algorithms
 used by security admin to improve protection

SLIDE 43

Intrusion Detection Exchange Format

SLIDE 44

Honeypots

  • are decoy systems

 filled with fabricated info
 instrumented with monitors / event loggers
 divert and hold attacker to collect activity info
 without exposing production systems

  • initially were single systems
  • more recently are/emulate entire networks
SLIDE 45

Honeypot Deployment

SLIDE 46

SNORT

  • lightweight IDS

 real-time packet capture and rule analysis
 passive or inline

SLIDE 47

SNORT Rules

  • use a simple, flexible rule definition language
  • with fixed header and zero or more options
  • header includes: action, protocol, source IP, source port, direction, dest IP, dest port
  • many options
  • example rule to detect TCP SYN-FIN attack:

    alert tcp $EXTERNAL_NET any -> $HOME_NET any \
        (msg: "SCAN SYN FIN"; flags: SF,12; \
        reference: arachnids,198; classtype: attempted-recon;)