Answer Set Grammars for Representing and Learning Generative Policies - PowerPoint PPT Presentation



SLIDE 1

Answer Set Grammars for Representing and Learning Generative Policies

Mark Law*, Alessandra Russo*, Elisa Bertino§, Seraphine Calo†, Dinesh Verma†, Irene Manotas†, Geeth de Mel††, Krysia Broda*, Jorge Lobo§§

*Imperial College London, §Purdue University, †IBM US, ††IBM UK, §§ICREA - Universitat Pompeu Fabra

SLIDE 2

Policy Management of Intelligent Devices and Systems

§ Future coalition missions will be carried out by distributed intelligent devices and systems.

§ Devices operate in dynamic contexts (collaboratively or in isolation), in the presence of uncertainty and insecurity.

§ There is a need for autonomy in distributed coalition intelligence.

§ Intelligent devices/systems need to self-generate and enforce policies in dynamic and complex settings to support distributed analytics.

§ Generative policy technology: a solution for automatic evolution and dynamic, context-aware generation of instantiated policies.

SLIDE 3

Why a New Generation of Policy Management?

Traditional Policy Management Systems (PMS), such as IETF/DMTF, are limited:

§ Policies are predefined.
§ Policies are manually engineered.
§ Policies are modified by humans when failures are detected.

SLIDE 4

Generative Policy Model

[Diagram: a Policy Management Tool supplies constraints, policies, and policy representations to an Autonomous Device. Within the device, a Policy Enforcement Point (PEP), Policy Decision Point (PDP), and Policy Adaptation Point (PAP) handle monitoring, requests, and actions; a learning system, system profile, policy repository, and context repository support local policy refinement of generative policies into instantiated policies, drawing on past activated policies and interaction graphs.]

Gaps / Research Challenges

§ Extension of symbolic machine learning for automatic evolution of generative policies, amenable to formal analysis.

§ Dynamic computation of instantiated policies from generative policies in a context-aware manner.

SLIDE 5

Generative Policy Model

[Diagram: the same architecture, now instantiated with ASG generative policies. The Policy Adaptation Point uses ASP and the ILASP system to learn ASG generative policies, which local policy refinement turns into instantiated policies using the system profile, policy repository, context repository, and past activated policies.]

Scientific / Theoretical Advancements

§ Formalisation of the notion of a generative policy (ASG), key for automatic evolution and dynamic instantiation.

§ Definition of a computational task for learning generative policies.

§ An algorithm for learning generative policies.

§ Complexity results.

SLIDE 6

Context-Free Grammars (CFGs)

§ G = <GT, GN, GPR, GS>
  – GT: terminal symbols
  – GN: non-terminal symbols
  – GPR: production rules
  – GS: start node (in GN)

§ Each production rule is of the form: n -> n1, …, nk

§ Example (the language a^n b^n):

  S -> “a” S “b”
  S ->
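The two S rules above generate exactly the strings aⁿbⁿ. As a minimal illustrative sketch (not part of the slides), a recursive-descent recognizer for this CFG in Python:

```python
def accepts(s: str) -> bool:
    """Recogniser for the CFG  S -> "a" S "b"  |  S -> ""  (the language a^n b^n)."""
    def parse_s(i: int) -> int:
        # Try the rule S -> "a" S "b" first.
        if i < len(s) and s[i] == "a":
            j = parse_s(i + 1)
            if j < len(s) and s[j] == "b":
                return j + 1
        # Otherwise apply S -> "" and consume nothing.
        return i

    # The whole input must be consumed by a single derivation of S.
    return parse_s(0) == len(s)
```

For this grammar a greedy choice between the two rules suffices, so no backtracking is needed.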

SLIDE 7

Answer Set Grammars (ASGs)

§ G = <GT, GN, GPR, GS>
  – GT: terminal symbols
  – GN: non-terminal symbols
  – GPR: annotated production rules
  – GS: start node (in GN)

§ Each production rule is of the form: n -> n1, …, nk P, where P is an ASP program annotating the rule.

§ ASGs can represent context-sensitive grammars (CSGs) such as a^n b^n c^n, the copy language, and the subset-sum language.
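Unlike CFGs, ASGs can capture languages such as aⁿbⁿcⁿ. As an illustrative Python sketch (not the paper's encoding), a membership test that makes explicit the equal-counts constraint the ASP annotations must enforce on top of the context-free skeleton a*b*c*:

```python
import re

def in_anbncn(s: str) -> bool:
    """Membership test for the context-sensitive language a^n b^n c^n (n >= 0).

    A CFG skeleton can only generate a* b* c*; the "same count" condition is
    exactly the kind of constraint an ASG's ASP annotations add.
    """
    m = re.fullmatch(r"(a*)(b*)(c*)", s)
    return m is not None and len(m.group(1)) == len(m.group(2)) == len(m.group(3))
```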

SLIDE 8

Example in ASG

ASG with type-checking annotations (rejects the ill-typed string below):

Start -> “if” Conditions “then” Action {}
Conditions -> “true” {}
Conditions -> Condition “and” Conditions {}
Condition -> Expression “==” Value { :- expr_type(X)@1, not type(X)@3. }
Expression -> Device “.” Attribute { expr_type(X) :- att_type(X)@3. }
Value -> [ constant ] { }
Attribute -> “port” { att_type(port). }
Attribute -> “ip_address” { att_type(ip). }
Device -> “UAV” {} | “VM” {}
Action -> “allow” {} | “deny” {}

  if UAV.ip_address == 20 && UAV.port == 45.79.75.202 then allow

ASG without the type-checking annotations (accepts both strings below):

Start -> “if” Conditions “then” Action {}
Conditions -> “true” {}
Conditions -> Condition “and” Conditions {}
Condition -> Expression “==” Value { }
Expression -> Device “.” Attribute { }
Value -> [ constant ] { type(***). }
Attribute -> “port” { }
Attribute -> “ip_address” { }
Action -> “allow” {} | “deny” {}

  if UAV.ip_address == 45.79.75.202 && UAV.port == 20 then allow
  if UAV.ip_address == 20 && UAV.port == 45.79.75.202 then allow
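The type-checking annotations reject conditions whose value type does not match the attribute type (for example, a port attribute compared to an IP literal). A hypothetical Python analogue of that check; the type table and the crude value-type inference below are illustrative, not from the paper:

```python
# Hypothetical attribute-to-type table mirroring the att_type/1 facts above.
ATT_TYPE = {"port": "port", "ip_address": "ip"}

def value_type(v: str) -> str:
    """Crude type inference for constants: dotted quads are IPs, digits are ports."""
    return "ip" if v.count(".") == 3 else "port" if v.isdigit() else "unknown"

def well_typed(condition: str) -> bool:
    """Check one 'Device.attribute == value' condition, in the spirit of the
    ':- expr_type(X)@1, not type(X)@3.' annotation in the grammar."""
    expr, _, val = condition.partition("==")
    attr = expr.strip().split(".")[1]
    return ATT_TYPE[attr] == value_type(val.strip())
```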

SLIDE 9

Theoretical Contributions

§ Formalisation of ASGs.
§ Definition of the learning task.
§ A new algorithm for solving ASG learning tasks.
§ Complexity results on key decision problems:
  – Bounded-ASG-membership
  – Bounded-ASG-satisfiability
  – Bounded-LASG-verification
  – Bounded-LASG-satisfiability

§ These contributions, along with an evaluation of the approach, have recently been submitted to AAAI. The submitted paper is available in CENSE.

SLIDE 10

ASGs for Policy Generation: In Practice

§ Execution time: from a policy instance to a decision.
§ Generation time: from a (learned) ASG to a policy instance.
§ Learning time: from examples to an ASG (representing a generative policy).

[Diagram, policy generation: a policy specification and contextual information yield an Answer Set Grammar, which is translated into an Answer Set Program; an answer set solver then produces a policy instance. Generative policy learning: examples of contexts and decisions, together with a policy specification, form a learning task that ILASP solves to produce an Answer Set Grammar.]

SLIDE 11

Answer Set Grammar Induction

§ ASG learning task T = <G, SM, E+, E->
  – G: an existing “background knowledge” grammar
  – SM: a hypothesis space
  – E+ and E-: positive and negative examples of strings

§ The task is to find the shortest extension of G using the ASP rules in SM such that all examples are covered.

§ We have shown that learning only the context-sensitive conditions can be more efficient than learning the full ASG.
  – In some cases, we may only need to learn some conditions.
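The task above can be pictured as a shortest-first search over subsets of the hypothesis space. A brute-force Python sketch under assumed names; `cover` is an assumed oracle standing in for ASG parsing, and the real ILASP-based algorithm is far more sophisticated than this enumeration:

```python
from itertools import combinations

def learn(cover, hypothesis_space, e_plus, e_minus):
    """Smallest-first search for a set of rules H (a subset of the hypothesis
    space) such that the background grammar G extended with H accepts every
    string in E+ and no string in E-.

    `cover(H, s)` is an assumed oracle: does G extended with H accept s?
    """
    for size in range(len(hypothesis_space) + 1):  # shortest extension first
        for h in combinations(hypothesis_space, size):
            if all(cover(h, s) for s in e_plus) and not any(
                cover(h, s) for s in e_minus
            ):
                return set(h)
    return None  # no extension within the hypothesis space covers the examples
```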

SLIDE 12

Example learning task

UAV Whitelist:

IP Address       Port
146.179.40.24    20
129.42.38.10     10
29.11.18.98      20
31.7.196.5       22

Start -> “if” Conditions “then” Action {
  :- allow@4, #false : whitelist(IP, PT), val(“UAV”, “ip”, IP)@2, val(“UAV”, “port”, PT)@2.
  :- deny@4, whitelist(IP, PT), val(“UAV”, “ip”, IP)@2, val(“UAV”, “port”, PT)@2.
}
Conditions -> “true” {}
Conditions -> Condition “and” Conditions {
  val(X, Y, Z) :- val(X, Y, Z)@1.
  val(X, Y, Z) :- val(X, Y, Z)@3.
}
Condition -> Expression “==” Value {
  :- expr_type(X)@1, not type(X)@3.
  val(NAME, ATT, VAL) :- device(NAME)@1, expr_type(ATT)@1, val(VAL)@3.
}
...

E+ = {
  “if UAV.ip_address == 146.179.40.24 && UAV.port == 20 then allow”
  “if UAV.ip_address == 129.42.38.09 && UAV.port == 10 then deny”
  “if true then deny”
  ...
}

E- = {
  “if true then allow”
  “if UAV.ip_address == 129.42.38.10 && UAV.port == 10 then deny”
  ...
}
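The two learned constraints in the Start rule together say: allow exactly the whitelisted (IP, port) pairs, and deny the rest. An illustrative Python rendering of the decision the learned ASG encodes (the names below are assumed, not from the paper):

```python
# Whitelist from the slide: (ip, port) pairs that should be allowed.
WHITELIST = {
    ("146.179.40.24", 20),
    ("129.42.38.10", 10),
    ("29.11.18.98", 20),
    ("31.7.196.5", 22),
}

def decide(ip: str, port: int) -> str:
    """Mirror the two learned constraints: 'allow' is consistent only when the
    (ip, port) pair is whitelisted; 'deny' only when it is not."""
    return "allow" if (ip, port) in WHITELIST else "deny"
```

This matches the examples on the slide: the whitelisted pair (146.179.40.24, 20) is allowed, while the near-miss IP 129.42.38.09 is denied.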

SLIDE 13

Next Steps: Learning from Policy Decisions

§ Our current implementation learns from strings (i.e. policies).
§ This can be upgraded to learning from decisions.

§ Generative policy learning task T = <G, SM, E+, E->
  – G: an existing “background knowledge” grammar
  – SM: a hypothesis space
  – E+ and E-: positive/negative examples of contexts and decisions

§ E.g. in a given context, an example UAV should be denied access.
§ Our current implementation can be easily extended to handle decision examples.

SLIDE 14

Next Steps: Policy Preference Learning

§ A single ASG may have many strings in its language.
§ Generated policies could be:
  – The union of all strings in the language.
  – A single string from the language.
§ In the second case, some policies may be better than others.
  – ASP supports preferences encoded as weak constraints.
  – These can be learned with ILASP.

§ We will learn from examples of preferred policies/decisions.
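Weak constraints assign penalties to answer sets, and the solver returns an answer set of minimal total penalty. A minimal Python analogue of that preference scheme, for picking the preferred policy among candidates (the penalty representation here is assumed, not ILASP's):

```python
def preferred_policy(candidates, weak_constraints):
    """Pick the candidate policy with the lowest total penalty, in the spirit
    of ASP weak constraints. `weak_constraints` is a list of
    (penalty, violates) pairs, where violates(policy) says whether the policy
    incurs that penalty."""
    def cost(policy):
        return sum(p for p, violates in weak_constraints if violates(policy))
    return min(candidates, key=cost)
```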

SLIDE 15

Conclusion

§ Formalised Answer Set Grammars and Answer Set Grammar Induction.

§ Shown that ASGs can represent context-sensitive grammars.

§ In the context of policy learning, we can learn context-sensitive conditions on when certain policies apply.

§ Next steps are to learn from decisions, and to learn which policies are preferred.

SLIDE 16

Set of Papers Published in P2T1

§ The Generative Policy Approach for Dynamic Collaboration in Coalition Environments, SPIE DSS 2018, Orlando, FL, April 2018.

§ A Policy System for Control of Data Fusion Processes and Derived Data, 21st International Conference on Information Fusion, Cambridge, UK, July 2018 (joint paper T1-T2).

§ Self Generating Policies for Training Data Curation in Coalition Environments, PADG Workshop, Barcelona, Spain, Sept. 2018.

§ AGENP: An ASGrammar-based GENerative Policy Framework, PADG Workshop, Barcelona, Spain, Sept. 2018.

§ The Challenge of Access Control Policies Quality, ACM Journal of Data and Information Quality (in print).

§ Methods and Tools for Policy Analysis, accepted for publication in ACM Computing Surveys.

SLIDE 17

Backup slide: results.