

slide-1
SLIDE 1

1

slide-2
SLIDE 2

Basic Info
• Meals
  • Breakfast, coffee breaks
  • Lunch provided both days
• Supported by University of Pittsburgh Provost's Office, SCI
• Dinner: on your own
• WiFi: Wyndham Pittsburgh <v93j3q>
• Need help?
  • Kelly Shaffer, Program Director at SCI
  • Runhua Xu, LERSAIS PhD student
  • Project team

2

slide-3
SLIDE 3

3

Insider Threat Mitigation

Access Control Approach

James Joshi, Professor, Director of LERSAIS
SAC-PA Workshop, June 22-23, 2017

slide-4
SLIDE 4

But first … Research Activities
• Advanced Access Control / Trust Management Models and Approaches
  • Context-based, Geo-social RBAC, Privacy/Trust-aware RBAC
  • Secure Interoperation
    • RBAC, Trust-based approaches
  • RBAC & Insider Threat Mitigation
  • Attribute-based access (e.g., in Cloud)
• Insider Attack Mitigation
  • Cloud computing, Critical Infrastructure
  • Risk- and Trust-aware Access management
• Network Security
  • DDoS Attack, some prior work in IPv6

4

slide-5
SLIDE 5

Research Activities
• Security & Privacy in Cloud computing & Social Networks
  • Policy as a service; Access control in Cloud
  • Privacy-conscious execution in Cloud
  • Anonymization techniques
  • Privacy threat analysis (e.g., Identity Clone & Mutual Friend based attacks)
  • Insider threats (NSA grant)
• Healthcare IT
  • Privacy-aware Social Networks for Intimate Partner Violence; Access control in Healthcare Systems
• Location-based services
  • Access/privacy control in LBSN
  • Anonymization techniques
5

slide-6
SLIDE 6

Insider threat
"The year 2013 may be the year of the insider threat. … These incidents highlight the need to improve the ability of organizations to detect, deter, and respond to insider threats."
• Computer Emergency Response Team (CERT), January 2014.
6
Edward Snowden

slide-7
SLIDE 7

Insider Attacks' Impact
• Accounted for around 30% of total incidents reported from 2004 to 2014
• Monetary losses up to $10 million
• 75% of organizations had a negative impact on their operations
  • 28% on their reputations
• 60% of respondents reported monetary losses caused by non-malicious insiders

7

Sources: Computer Crime and Security Survey 2010/2011 and The US Cyber Crime Survey 2014

slide-8
SLIDE 8

More Recent …
• Insider attack frequency
  • Credential thief (imposter risk): 9.7%
  • Criminal & malicious insider: 21.8%
  • Employee or contractor negligence: 68.4%
• Average annualized cost
  • Credential thief (imposter risk): $776,165
  • Criminal & malicious insider: $1,227,812
  • Employee or contractor negligence: $2,291,591
"2016 Cost of Insider Threats," Ponemon Report

8

slide-9
SLIDE 9

Current Approaches
• Access control systems are highly static
  • Only credentials are required
  • What about users' behavior?
• Anomaly detection systems require manual verification and/or input
  • Unreliable and slow
• Risk methodologies are performed sporadically (e.g., NIST, OCTAVE)
  • Do not minimize risk exposure continuously and automatically

9

slide-10
SLIDE 10

So, what can we do about it?
• Statistics show that insider attacks are typically preceded by
  • technical precursors and
  • psychological precursors

10

slide-11
SLIDE 11

Our Research
• We utilize two concepts:
  • Trust: expectation of future behavior based on history
  • Risk: likelihood of a hazardous situation and its consequences if it occurs
• We include risk and trust in access control systems to adapt to anomalous and suspicious changes in users' behavior
[Diagram: Access Control combined with Trust and Risk]

11

Control risk for each access request automatically ☺

slide-12
SLIDE 12

Access Control for Insider Threat Mitigation

12

• An Adaptive Risk Management RBAC Framework — basic risk-based approach
• Obligation-based Framework to Reduce Risk Exposure and Deter Insider Attacks — focus on obligations
• Geo-Social Insider Threat Resilient Access Control Framework (G-SIR) — advanced access control
Joint work with Dr. Nathalie Baracaldo, IBM Almaden Research (PhD thesis) & Prof. Balaji Palanisamy

slide-13
SLIDE 13

Integrated System Architecture
[Architecture diagram: a Risk-and-Trust Aware Access Control Module (PEP, PDP, PIP, Obligation Handler, Risk Module, Trust Module); a Monitoring, Context and Trust Module (Monitoring, Context, and Geo-Social Modules, backed by Social Network and Location services); an Administration Module (Report, Obligation Management, Policy Editor, and Inference Threat Management Modules); and repositories for trust, obligation state, and monitored data & context, serving the system admin and users]
PEP := Policy Enforcement Point; PDP := Policy Decision Point; PIP := Policy Information Point

13

slide-14
SLIDE 14

Framework I

An Adaptive Risk Management RBAC Framework

14

Nathalie Baracaldo, James Joshi, "An Adaptive Risk Management and Access Control Framework to Mitigate Insider Threats," Computers & Security, 2013. (Journal)
Nathalie Baracaldo, James Joshi, "A Trust-and-Risk Aware RBAC Framework: Tackling Suspicious Changes in User's Behavior," ACM Symposium on Access Control Models and Technologies (SACMAT), Newark, USA, 2012.

slide-15
SLIDE 15

Requirements
1. Enforce separation of duties (SoD) and cardinality constraints
2. Detect suspicious activities, and establish a trust level for each user
   • Different trust values for users depending on the context
3. Different permissions may have different risks associated with them
   • Adapt to suspicious changes in behavior of users by restricting permissions depending on risk values
4. Risk exposure should be automatically reduced, minimizing the impact of possible attacks

15

slide-16
SLIDE 16

In a nutshell…

16

[Diagram: role → permission. A role can be activated when authorized(u, role) & trust(u, c) ≥ trust_threshold(role).]
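The activation rule on this slide can be sketched as a small predicate (an illustrative Python sketch; the policy, trust, and threshold functions below are hypothetical stand-ins, not the paper's implementation):

```python
def can_activate(user, role, context, authorized, trust, trust_threshold):
    # A role is available only if the user holds it (static RBAC assignment)
    # AND the user's context-dependent trust meets the role's threshold.
    return authorized(user, role) and trust(user, context) >= trust_threshold(role)

# Toy policy: alice is assigned the "nurse" role, but her trust has dropped to 0.4.
authorized = lambda u, r: (u, r) in {("alice", "nurse")}
trust = lambda u, c: {"alice": 0.4}[u]
threshold = lambda r: {"nurse": 0.6}[r]

print(can_activate("alice", "nurse", "day-shift", authorized, trust, threshold))  # False
```

Note that authorization alone is no longer sufficient: a drop in behavioral trust below the role's threshold blocks activation even for an assigned user.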

slide-17
SLIDE 17

Trust value of users

• Each user u is assigned a trust value 0 ≤ trust(u, c) ≤ 1 that reflects his behavior
  • where c is the context and u is the user
• Prior work exists to calculate this value

17

slide-18
SLIDE 18

Assigning risk to permissions
• Each permission is assigned a risk value according to:
  • The context
  • The likelihood of misuse
  • The cost of misuse
18
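A minimal expected-loss sketch of this idea (the multiplicative form and the numbers are assumptions for illustration; the paper's risk function may combine the factors differently):

```python
def permission_risk(likelihood_of_misuse, cost_of_misuse):
    # Expected-loss style score: probability of misuse in the current
    # context times the damage a misuse would cause.
    return likelihood_of_misuse * cost_of_misuse

# Hypothetical numbers: the same permission is riskier in an off-hours context
# because the likelihood of misuse is higher there.
print(permission_risk(0.05, 1000.0))  # business-hours context
print(permission_risk(0.20, 1000.0))  # off-hours context
```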

slide-19
SLIDE 19

Risk of roles

• The risk of activating a set of roles depends on:
  • Context
  • The user that is going to activate the roles
  • Authorized permissions & their risk
  • Inference risk
19
[Diagram: role → permission]

slide-20
SLIDE 20

Inference risk
• Inference Threat: exists when a user is able to infer unauthorized sensitive information through what seems to be innocuous data he is authorized for
• Inference tuple: <PS, px>
  • Shows the minimum information needed (PS) to infer px
  • Colored Petri net used for analysis
[Diagram: permissions p1, p22, p3, p11, p43, p16, p23 combine to infer px]
20

slide-21
SLIDE 21

Risk of roles
• Risk exposure of activating a set of roles
• For a set of roles RS, the trust threshold is the normalized version of their risk
[Diagram: role 1 → permission1–permission4 → inferred px; role 30 → permission30, permission40]
21

slide-22
SLIDE 22

Reduction of risk exposure

• Select roles with minimum risk that also respect the policy constraints & provide the requested permissions
• Role activation algorithm based on this

22

slide-23
SLIDE 23

Experimental Setup

• Generated synthetic well-formed policies
• Each point represents the average time of running the algorithm for 30 different policies
• Evaluated the proposed algorithm under two different heuristics for several types of policies

23

slide-24
SLIDE 24

Granted requests for different percentages of misbehaving users
[Chart: % of requests granted vs. number of roles (25–95) for 0%, 20%, 40%, and 60% misbehaving users]
24
Critical accesses are denied, preventing possible attacks

slide-25
SLIDE 25

Framework II

Obligation-based Framework To Reduce Risk Exposure And Deter Insider Attacks

25
Nathalie Baracaldo, James Joshi, "Beyond Accountability: Using Obligations to Reduce Risk Exposure and Deter Insider Attacks," ACM Symposium on Access Control Models and Technologies (SACMAT), Amsterdam, The Netherlands, 2013.

slide-26
SLIDE 26

Motivation

• Many application domains require the inclusion of obligations as part of their access control policies

26

slide-27
SLIDE 27

A posteriori obligations

• Assigned to users when they are granted access, and need to be completed before a deadline
• In a healthcare environment, e.g., after 30 days of accessing a patient's sensitive information, a report needs to be filed
• The obligation is fulfilled if it is performed before its deadline (30 days); otherwise it is violated

27

slide-28
SLIDE 28

Managing a posteriori obligations is challenging
• Once you grant access to a user, there is no guarantee that he will fulfill the associated obligation
• Statistics show that it is not wise to trust users blindly!
28
[Diagram: "Ideally" vs. "But this may happen"]

slide-29
SLIDE 29

Obligation violation

• Every time an a posteriori obligation is assigned to a user, there is some risk of non-fulfillment
• The risk exposure depends on the impact of not fulfilling the obligation:
  • Delays in operations
  • Fines
  • Loss of goodwill
  • Lawsuits

29

slide-30
SLIDE 30

Current Approaches…

• Accountability
• Provision resources necessary to fulfill obligations
• But they ignore that users may misbehave and can't blindly be trusted to fulfill a posteriori obligations!

30

slide-31
SLIDE 31

Requirements

• Reduce the risk exposure caused by a posteriori obligations
• Identify the trust value of a user based on the pattern of fulfillment of a posteriori obligations
• Identify policy misconfigurations
• Identify when a user is likely to become an insider attacker, without invading users' privacy

31

slide-32
SLIDE 32

Criticality of Obligations

• Criticality represents the severity, for the organization, of not fulfilling an obligation
• We use the criticality as a threshold to determine how much a user needs to be trusted in order to be assigned the obligation
32
[Diagram: Risk = f(Trust, Criticality) of an obligation]

slide-33
SLIDE 33

System Overview

• We use standard RBAC
  • However, our trust approach can be used for any other access control model that includes obligations
33
[Flowchart: receive request for permissions → find appropriate set of roles; if no appropriate set exists, deny access; if access would create a posteriori obligations, grant only when the user is trusted enough to perform them, otherwise deny; if no obligations are created, grant access]

slide-34
SLIDE 34

Why can we identify suspicious insiders through obligations?

• Psychological precursors: disregard of authority and lack of dependability
  • Decrease in productivity and rate of fulfilled tasks (obligations)
• The lack of fulfillment of obligations is used as an indicator

34

slide-35
SLIDE 35

Threat model

• We consider two types of users:
  • Naïve users: don't know how the system works
  • Strategic users: know about the system's mechanisms to compute trust values; may try to maintain their trust levels within the expected thresholds
• Both types of users know they are being monitored

35

slide-36
SLIDE 36

How trusted is a user?

• Current behavior
• Historic behavior
• Sudden changes in behavior
• His behavior with respect to his peers

36

slide-37
SLIDE 37

Trust computation

• An observation of a user's behavior is: <obligation, (fulfilled | violated)>
• We group observations based on when they are generated
  • The most recent group reflects the current behavior
37
[Timeline: most recent group … oldest group]

slide-38
SLIDE 38

Raw trust of an observation group

• Weighted average of the number of obligations fulfilled over the total number of obligations assumed by the user
• The weight is provided by the criticality of each obligation
  • To avoid attacks from strategic users
38
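The criticality-weighted average described above can be sketched as follows (an illustrative sketch; the default of 1.0 for a group with no obligations is an assumption, not from the paper):

```python
def raw_trust(observations):
    # observations: (criticality, fulfilled) pairs for one observation group.
    # Criticality-weighted fraction of obligations fulfilled, so violating a
    # critical obligation hurts more than violating a trivial one.
    total = sum(crit for crit, _ in observations)
    if total == 0:
        return 1.0  # no obligations assumed in this group (assumed default)
    return sum(crit for crit, fulfilled in observations if fulfilled) / total

# A strategic user who fulfills only low-criticality obligations still scores low:
# two trivial fulfillments cannot offset one critical violation.
print(raw_trust([(0.1, True), (0.1, True), (0.9, False)]))
```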

slide-39
SLIDE 39

Historical trust

• Based on previous observation groups
• Weighted average of the raw trust of each group
• The weight of each group's raw trust depends on:
  • How critical the obligations in each group are
  • How far away in time the observation group occurred
39
[Timeline: history = recent history + older history]
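One way to realize this weighting (the geometric decay and its rate are illustrative assumptions; the paper only requires that older, less critical groups weigh less):

```python
def historical_trust(groups, decay=0.8):
    # groups: per-group (raw_trust, avg_criticality) pairs, most recent first.
    # Each group's weight shrinks geometrically with age and grows with
    # how critical its obligations were.
    weights = [decay**age * crit for age, (_, crit) in enumerate(groups)]
    return sum(w * rt for w, (rt, _) in zip(weights, groups)) / sum(weights)

# Recent good behavior outweighs an older bad group of equal criticality.
print(historical_trust([(0.9, 0.5), (0.3, 0.5)]))
```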

slide-40
SLIDE 40

Trust fluctuation

• Difference between current raw trust and historical trust
  • Positive difference: user improved his behavior ☺
  • Negative: his behavior worsened ☹
40
[Diagram: history vs. current]

slide-41
SLIDE 41

Group Drift and Penalty

• When a user is the only one to violate an obligation, his group drift is 1
• That deviation should be penalized!
41
Identify a black sheep!

slide-42
SLIDE 42

Obligation-based trust

• Finally, we combine the components to find the trust of a user:
  ✓ Current behavior (raw trust of last group)
  ✓ Historic behavior
  ✓ Sudden changes in behavior
  ✓ His behavior with respect to his peers

42
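A sketch of one possible combination of the four components (all weights and the exact penalty form are illustrative assumptions, not the paper's formula):

```python
def obligation_trust(current, historical, fluctuation, group_drift,
                     w_cur=0.5, w_hist=0.3, w_fluct=0.2, drift_penalty=0.3):
    # Combine: current raw trust, historical trust, sudden behavior changes,
    # and deviation from peers. Only a NEGATIVE fluctuation lowers the score
    # (a sudden improvement is not rewarded, to resist strategic users), and
    # group drift is applied as a straight penalty.
    score = (w_cur * current
             + w_hist * historical
             + w_fluct * (1.0 + min(fluctuation, 0.0)))
    return max(0.0, min(1.0, score - drift_penalty * group_drift))

print(obligation_trust(1.0, 1.0, 0.0, 0.0))   # well-behaved user
print(obligation_trust(0.9, 0.9, -0.4, 1.0))  # sole violator with a sudden drop
```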

slide-43
SLIDE 43

Administration Module

• Identify policy misconfigurations / colluding users
  • Several people not fulfilling the same obligations
• Outliers: users that may require further monitoring (higher risk)

slide-44
SLIDE 44

Some Evaluation Results (Framework II)

• Quick to stop damage
[Chart: trust(u,t), Drift[t], and % obligations violated over time t0–t100]

44

slide-45
SLIDE 45

Some Evaluation Results (cont.)

• Slow to recover trust ☺
[Chart: trust(u,t) and % obligations violated over time t0–t150]
45

slide-46
SLIDE 46

Framework III

G-SIR: An Insider Attack Resilient Geo-Social Access Control System

46

Nathalie Baracaldo, Balaji Palanisamy, James Joshi, G-SIR: An Insider Attack Resilient Geo-Social Access Control Framework, IEEE Transactions on Dependable and Secure Computing (Accepted)

slide-47
SLIDE 47

Motivation
• Access to users' whereabouts and social interactions
• Location and social relations information can be used as context to determine how users may access information or resources in a secure way
• For the most part, social contracts are currently used to regulate geo-social behavior

47

slide-48
SLIDE 48

G-SIR Policy
[Diagram: each role is associated with a constraint vector: (1) spatial scope, (2) geo-social contracts, (3) enabling constraints, (4) inhibiting constraints, (5) geo-social traces, (6) geo-social obligations]
A role can be activated iff: 1) all its constraints are fulfilled, and 2) the risk management procedure allows it!

slide-49
SLIDE 49

Key issues

• Geo-Social Contracts
  • Indicate places users should not visit or people they should not interact with
• Enabling and inhibiting constraints
  • Collusion-free enabling
• Geo-social obligations
• Trace-based constraints

49

slide-50
SLIDE 50

Overview of G-SIR

50
[Diagram: an access request goes through G-SIR policy evaluation (geo-social context evaluation, risk management) to produce the access control decision; Monitoring (technical indicators, G-SIR compliance logs) feeds Analytics]

slide-51
SLIDE 51

51

slide-52
SLIDE 52

52

slide-53
SLIDE 53

So far we have considered a single actor: the requester
• Privilege misuse threats: the requester becomes rogue
• Who are the users in the vicinity?
• New actors: enablers and inhibitors
53
[Diagram: requester]

slide-54
SLIDE 54

Classifying Users in the Vicinity: Social Predicate

• Defines a set of users based on a social graph and labels of social relations
• We can even use more than one graph (e.g., graphs formed using tweets and retweets)

54

Is a user part of community X? Are two users friends? What is their relationship? Are they connected?

slide-55
SLIDE 55

Inhibitors

• An inhibitor is an undesirable user for an access
  • E.g., conflicting project, undesirable community, etc.
• Proximity threats: insider adversaries who may gain access to information by placing themselves (strategically or opportunistically) close to the requester

55

slide-56
SLIDE 56

Enablers

• K users in the vicinity who validate an access request:
  • bootstrap the trust of a requester ☺
56
[Diagram: requester in a laboratory; place := laboratory, relationship := superior, K := 3]

slide-57
SLIDE 57

Some caveats…

• Social engineering: trick the enabler or the requester into entering a targeted place
• Collusion threats: requester and enablers may collude to gain access
57
[Diagram: three requester/enabler scenarios]

slide-58
SLIDE 58

Requirements

• Classify users in the vicinity
• Design policy constraints to capture and prevent undesirable geo-social behavior: geo-social contracts, geo-social obligations, and trace-based constraints
• Mitigate the risk of colluding users
• Adapt access control decisions to negative changes in behavior of users

58

slide-59
SLIDE 59

Overview of G-SIR

59
[Diagram: an access request goes through G-SIR policy evaluation (geo-social context evaluation, risk management) to produce the access control decision; Monitoring (technical indicators, G-SIR compliance logs) feeds Analytics]

slide-60
SLIDE 60

G-SIR Policy

[Diagram: each role is associated with a constraint vector: (1) spatial scope, (2) geo-social contracts, (3) enabling constraints, (4) inhibiting constraints, (5) geo-social traces, (6) geo-social obligations]
A role can be activated iff: 1) all its constraints are fulfilled, and 2) the risk management procedure allows it!

slide-61
SLIDE 61

Key issues

• Geo-Social Contracts
  • Indicate places users should not visit or people they should not interact with
• Enabling and inhibiting constraints
  • Collusion-free enabling
• Geo-social obligations
• Trace-based constraints

61

slide-62
SLIDE 62

Geo-Social Contracts

• Places and people that users assigned to the role should not visit
• Is the geo-social contract violated? χ indicates how bad the violation is
<<place, social_predicate>, χ>
Example: users assigned to role receptionist should not enter the server rooms: <<serverRooms, ⊥>, 0.8>
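The contract tuple can be modeled as a small data structure (an illustrative sketch; the class name, the use of `None` for ⊥, and the violation-check method are assumptions for this example):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GeoSocialContract:
    # <<place, social_predicate>, chi> from the slide; a predicate of None
    # plays the role of the bottom symbol: the place alone is off-limits.
    place: str
    social_predicate: Optional[Callable[[str], bool]]
    criticality: float  # chi: how bad a violation is

    def violated_by(self, visited_place: str, met_user: Optional[str] = None) -> bool:
        if visited_place != self.place:
            return False
        if self.social_predicate is None:
            return True  # place alone is forbidden
        return met_user is not None and self.social_predicate(met_user)

# Receptionists must not enter server rooms: <<serverRooms, bottom>, 0.8>
no_server_rooms = GeoSocialContract("serverRooms", None, 0.8)
print(no_server_rooms.violated_by("serverRooms"))  # True
print(no_server_rooms.violated_by("lobby"))        # False
```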

slide-63
SLIDE 63

Inhibiting Constraints

<context, place, social_predicate, α>
• α := minimum confidence level required to classify a user as an inhibitor
Examples:
<projector, sameRoom, belongToCommunity(u?, BadGuys), 0.95>
<laptop, 2FeetRadiusAroundRequester, belongToCommunity(u?, BadGuys), 0.95>

slide-64
SLIDE 64

Collusion-free Enabling Constraints

<place, k, social_predicate, υ>
• υ := maximum tolerance to collusion
• Collusion-free enforcement: if PrCollusion(Enablers ∪ requester) > υ, the candidate enablers are rendered untrustworthy
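The enforcement rule can be sketched as a predicate (illustrative Python; the flat collusion-probability estimator below is a hypothetical stand-in for the framework's colluding-communities analytics):

```python
def enabling_holds(enablers, requester, pr_collusion, k, upsilon):
    # Enabling succeeds only with at least k enablers AND a collusion
    # probability for the whole group (enablers plus the requester) that
    # stays within the tolerance upsilon; otherwise the candidate enablers
    # are rendered untrustworthy.
    group = frozenset(enablers) | {requester}
    return len(enablers) >= k and pr_collusion(group) <= upsilon

# Hypothetical estimator: flat 0.3 collusion probability for any group.
pr = lambda group: 0.3
print(enabling_holds({"e1", "e2", "e3"}, "alice", pr, k=3, upsilon=0.2))  # False
print(enabling_holds({"e1", "e2", "e3"}, "alice", pr, k=3, upsilon=0.5))  # True
```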

slide-65
SLIDE 65

Geo-social obligation

• Geo-social actions that users need to fulfill after they have been granted an access
65
<e, duration, χ>, where e ∈ {<+visit, place>, <-visit, place>, <+meet, social_predicate>, <-meet, social_predicate>} and χ := criticality of violations
Example: <-meet, belongsToCommunity(u?, Y), 1 year, 0.5>

slide-66
SLIDE 66

Trace-based constraints

• Constrain recent whereabouts
  • If a doctor was in a contagious unit, he cannot enter the newborn unit for a week
  • Unless he goes to a sanitizing facility
66
<lst, duration, χ>, where lst = <<place1, social_predicate1>, …, <placek, social_predicatek>> and χ is the criticality of a violation

slide-67
SLIDE 67

Risk Management: Formulated using utility theory

• Utilities depend on the context and the permissions authorized by an access
• We find a threshold which is compared to the probability of attack
67
[Decision table — Grant, attack: utility depends on the cost of the attack ☹; Deny, attack: thwarted the attack ☺; Grant, no attack: utility depends on the gain from the transaction ☺; Deny, no attack: based on the cost of annoyance]
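One standard way such a threshold falls out of the four utilities in the table (a sketch under expected-utility assumptions; the concrete utility values are hypothetical and the paper's exact formulation may differ):

```python
def attack_prob_threshold(u_gain, u_annoy, u_attack, u_thwart):
    # Grant iff the expected utility of granting is at least that of denying:
    #   p*u_attack + (1-p)*u_gain >= p*u_thwart + (1-p)*u_annoy
    # Solving for the attack probability p gives the threshold below.
    benefit = u_gain - u_annoy      # what granting wins when there is no attack
    exposure = u_thwart - u_attack  # what denying saves when there is an attack
    return benefit / (benefit + exposure)

def decide(p_attack, threshold):
    return "grant" if p_attack <= threshold else "deny"

# Hypothetical utilities: transaction gain 10, annoyance cost 1,
# a successful attack costs 100, thwarting an attack costs nothing.
t = attack_prob_threshold(u_gain=10.0, u_annoy=-1.0, u_attack=-100.0, u_thwart=0.0)
print(decide(0.05, t), decide(0.30, t))  # grant deny
```

The more damaging an attack relative to the transaction's gain, the lower the threshold, so riskier requests are denied at smaller attack probabilities.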

slide-68
SLIDE 68

Average time as the policy size increases

68

Some additional runtime overhead due to the extra verifications performed. However, the overhead is acceptable in comparison to Geo-Social RBAC.

slide-69
SLIDE 69

Conclusions

• We proposed three adaptive access control frameworks that can reduce the risk of insider threats:
  • Adaptive RBAC
  • Obligation Framework
  • G-SIR
69

slide-70
SLIDE 70

70

slide-71
SLIDE 71

Framework I: Contributions

• Presented a model that includes risk and trust in RBAC to adapt to anomalous and suspicious changes in users' behavior
• Proposed a comprehensive way to calculate risks of permissions and roles
• Introduced the notion of inference of unauthorized permissions & formulated a Colored Petri net

71

slide-72
SLIDE 72

Framework I: Contributions (cont.)

• We define an optimization problem to enforce the policy, reduce the risk exposure of the organization, and ensure that all constraints are respected
• We present a role activation algorithm to solve the optimization problem and evaluate its performance using well-formed policies
• Provide a simulation methodology to help identify policies with unacceptable inference risk

72

slide-73
SLIDE 73

Framework II: Contributions

• Proposed a framework that reduces the risk exposure caused by a posteriori obligations
• Presented an obligation-based trust methodology that is resistant to naïve and strategic users
  • It can be integrated into any access control model with a posteriori obligations (e.g., UCON)

73

slide-74
SLIDE 74

Framework II: Contributions (cont.)

• Showed that, based on previous work on psychological precursors, a posteriori obligations can be used to identify suspicious users
• Presented an administration module to identify patterns of misbehavior, suspicious users, and non-updated policies

74

slide-75
SLIDE 75

Framework III: Contributions

• First research effort to analyze geo-social access control systems to thwart insider attacks: we uncover some novel insider threats
• We provide an access control model to mitigate those threats with novel constraints: geo-social contracts, geo-social obligations, inhibiting constraints, collusion-free enabling constraints, and trace-based constraints
• Show that G-SIR can prevent some insider threats

75

slide-76
SLIDE 76

Limitations and Future Work
• We only deter insider threats that are regulated by the Policy Enforcement Point
• We assumed that monitored information was available, but there may be privacy concerns
• As future work, a policy specification framework needs to be provided
  • Graphical interface
  • User studies

76

slide-77
SLIDE 77

Associated publication

• Nathalie Baracaldo, James Joshi, "An Adaptive Risk Management and Access Control Framework to Mitigate Insider Threats," Computers & Security, 2013.
• Nathalie Baracaldo, James Joshi, "A Trust-and-Risk Aware RBAC Framework: Tackling Suspicious Changes in User's Behavior," ACM Symposium on Access Control Models and Technologies (SACMAT), Newark, USA, 2012.
• Nathalie Baracaldo, James Joshi, "Beyond Accountability: Using Obligations to Reduce Risk Exposure and Deter Insider Attacks," ACM Symposium on Access Control Models and Technologies (SACMAT), Amsterdam, The Netherlands, 2013.
• Nathalie Baracaldo, Balaji Palanisamy, James Joshi, "Geo-Social-RBAC: A Location-based Socially Aware Access Control Framework," The 8th International Conference on Network and System Security (NSS 2014), 2014.

77

RBAC | Obligations | Geo-social

slide-78
SLIDE 78

Questions? Comments? Ideas?

78

Thanks!

slide-79
SLIDE 79

Comparison of two heuristics: Min risk & Max perm for different policy configurations

79

Max perm heuristic outperforms the Min risk heuristic consistently

slide-80
SLIDE 80

Overview of G-SIR

80

[Diagram: Monitored information (technical indicators) feeds Monitoring and Analytics (trust, colluding communities, probability of attack); Risk management combines these with the G-SIR policy to produce the access control decision; G-SIR policy compliance logs feed back into monitoring]

slide-81
SLIDE 81

Another piece of my insider threat research:
An Adaptive Geo-Social Access Control System
[Diagram: geo-social user behavior is analyzed according to geo-social indicators (permanent obligations, transient obligations, geo-social traces, historical collusion indicators) to compute Trust(u,t,c). For an access request Qu = <u, P'>, the required trust τ(Qu) is identified from the risk of misuse of the requested permissions and the criticality of imposed obligations. The access control risk management process verifies that all required roles to grant access are enabled for the user, compares Trust(u,t,c) to τ(Qu), and ensures all enablers are trusted enough, considering colluding communities, to enforce cardinality constraints → Grant/Deny]
81

slide-82
SLIDE 82

82

slide-83
SLIDE 83

Experimental Setup (Framework 1)

• We generated synthetic well-formed policies
• Each point represents the average time of running the algorithm for 30 different policies
• We evaluated the proposed algorithm under two different heuristics for several types of policies

83

slide-84
SLIDE 84

Granted requests for different percentages of misbehaving users
[Chart: % of requests granted vs. number of roles (25–95) for 0%, 20%, 40%, and 60% misbehaving users]
84
Critical accesses are denied, preventing possible attacks

slide-85
SLIDE 85

Risk exposure of the proposed system (min. risk) vs. traditional role activation

[Chart: risk vs. number of roles (24–84) for the proposed "Min risk (aver. risk)" activation and the traditional "Min num roles (aver. risk)" activation]
85
The lower the risk, the better! Our approach reduces the risk.

slide-86
SLIDE 86
Administration Module: Example Simulation Results
• Managing active inference threats
  • Simulate users' behavior to identify active inference threats and prioritize threat mitigation
86

slide-87
SLIDE 87

87

slide-88
SLIDE 88

Some Evaluation Results (Framework II)

• Quick to stop damage
[Chart: trust(u,t), Drift[t], and % obligations violated over time t0–t100]

88

slide-89
SLIDE 89

Some Evaluation Results (cont.)

• Slow to recover trust ☺
[Chart: trust(u,t) and % obligations violated over time t0–t150]
89

slide-90
SLIDE 90

90

slide-91
SLIDE 91

Experimental Setup

• Mobile simulator written in Java
• Users move randomly at every time instant and are related through a random social network
• A random well-formed policy was generated
• If a user stepped into a protected place, an access request for the particular role was generated on his behalf
• Each point represents the average time of running the simulation for 30 different policies

91

slide-92
SLIDE 92

Baseline comparison

92

G-SIR captures more threats than the baseline

slide-93
SLIDE 93

Requests granted

93

slide-94
SLIDE 94

Inhibiting constraints

94