An Automata-Based View on Configurability and Uncertainty


SLIDE 1

An Automata-Based View on Configurability and Uncertainty

Martin Berglund1 and Ina Schaefer2

pmberglund@sun.ac.za, i.schaefer@tu-braunschweig.de

October 19, 2018

1Centre for AI Research (CSIR), Dept. of Information Science, Stellenbosch U.

  • 2Inst. of Software Engineering and Automotive Informatics, TU Braunschweig
SLIDE 2

How did this come about?

A sprawling conversation: algorithms to handle configurability with unknowns and unpredictable actors, with a known configuration space

???

SLIDE 3

The process

Listen for and record instruction sequences of peer devices in the network

SLIDE 4

The process

  • Listen for and record instruction sequences of peer devices in the network
  • Deduce probable configurations of peer devices
SLIDE 5

The process

  • Listen for and record instruction sequences of peer devices in the network
  • Deduce probable configurations of peer devices

Possible configuration space: relating to the concept of configurability used in software product line eng.

SLIDE 6

The process

  • Listen for and record instruction sequences of peer devices in the network
  • Deduce probable configurations of peer devices

Possible configuration space: relating to the concept of configurability used in software product line eng.

Feature selection probabilities: novel here.

SLIDE 7

The process

  • Listen for and record instruction sequences of peer devices in the network
  • Deduce probable configurations of peer devices
  • Attempt communication based on the features available in the deduced configuration

Possible configuration space: relating to the concept of configurability used in software product line eng.

Feature selection probabilities: novel here.

SLIDE 8

Going for a formalism

Definition

An extensible automaton A is a tuple A = (B, ∆, wt) where

  • B = (Q, Σ, q0, δ, F) is a DFA, the base automaton,
  • ∆ ⊆ 2^(Q×Σ×Q) are the features, and
  • wt is the weight function: dom(wt) = 2^∆, with a totally ordered range

For each {δ1, . . . , δn} ⊆ ∆, A+{δ1,...,δn} = (Q, Σ, q0, δ ∪ δ1 ∪ · · · ∪ δn, F) is a realization of A. The weight of the realization is wt({δ1, . . . , δn}). If A+{δ1,...,δn} is a DFA the realization is proper.

SLIDE 9

Going for a formalism

Definition

An extensible automaton A is a tuple A = (B, ∆, wt) where

  • B = (Q, Σ, q0, δ, F) is a DFA, the base automaton,
  • ∆ ⊆ 2^(Q×Σ×Q) are the extension transition sets, and
  • wt is the weight function: dom(wt) = 2^∆, with a totally ordered range

For each {δ1, . . . , δn} ⊆ ∆, A+{δ1,...,δn} = (Q, Σ, q0, δ ∪ δ1 ∪ · · · ∪ δn, F) is a realization of A. The weight of the realization is wt({δ1, . . . , δn}). If A+{δ1,...,δn} is a DFA the realization is proper.

SLIDE 10

Going for a formalism

Definition

An extensible automaton A is a tuple A = (B, ∆, wt) where

  • B = (Q, Σ, q0, δ, F) is a DFA, the base automaton,
  • ∆ ⊆ 2^(Q×Σ×Q) are the extension transition sets, and
  • wt is the weight function: dom(wt) = 2^∆, with a totally ordered range

For each {δ1, . . . , δn} ⊆ ∆, A+{δ1,...,δn} = (Q, Σ, q0, δ ∪ δ1 ∪ · · · ∪ δn, F) is a realization of A. The weight of the realization is wt({δ1, . . . , δn}). If A+{δ1,...,δn} is a DFA the realization is proper.

Finite state? For tractability; makes sense for e.g. protocols. wt may be a Boolean function (matching feature models), or costs/probabilities; here mostly just monotonic.
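As a minimal illustrative sketch (not from the paper's artifacts), the definition can be phrased as set operations on transition triples; all names here are hypothetical:

```python
def realization(base_delta, chosen_extensions):
    """Union the base DFA's transitions with the chosen extension
    transition sets. Transitions are (state, symbol, state) triples."""
    delta = set(base_delta)
    for ext in chosen_extensions:
        delta |= set(ext)
    return delta

def is_proper(delta):
    """A realization is proper iff it is still deterministic: no
    (state, symbol) pair may have two distinct target states."""
    seen = {}
    for (q, a, q2) in delta:
        if seen.setdefault((q, a), q2) != q2:
            return False
    return True
```

For instance, two extensions that both add a transition on the same (state, symbol) pair to different targets yield an improper realization.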

SLIDE 11

An example

Example feature model: the base functionality, "readX", queries the value X; it can be configured with the features:

  • F1. Monitor, 10% less likely, permits:
    "monitorX · getX · getX · · · getX · unmonitorX"
  • F2. Logging, 50% less likely, permits: "log-readX · readX"; cannot be combined with F1
  • F3. Logging monitoring, 30% less likely, permits:
    "monitorX · getX · getX · · · getX · unmonitorX" and
    "log-monX · monitorX · getX · getX · · · getX · unmonitorX · log-unmonX"; requires F2

The most likely configuration for "log-monX · monitorX · getX · unmonitorX · log-unmonX" is F2 and F3, for a 35% chance.
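The 35% figure can be checked mechanically. A hypothetical sketch, reading each "X% less likely" as a multiplicative factor (1 − X/100) and filtering configurations by the stated constraints:

```python
from itertools import combinations

factor = {"F1": 0.9, "F2": 0.5, "F3": 0.7}  # 10%, 50%, 30% less likely

def weight(config):
    """Product of the per-feature likelihood factors."""
    p = 1.0
    for f in config:
        p *= factor[f]
    return p

def valid(config):
    """The constraints from the feature model above."""
    if "F1" in config and "F2" in config:      # F2 cannot combine with F1
        return False
    if "F3" in config and "F2" not in config:  # F3 requires F2
        return False
    return True

configs = [frozenset(c) for r in range(4) for c in combinations(factor, r)]
valid_configs = [c for c in configs if valid(c)]
# {F2, F3} has weight 0.5 * 0.7 = 0.35, i.e. the 35% chance on the slide
```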

SLIDE 12

An example extensible automaton

[Diagram: base automaton with states q0, q1, q2, q3, q3,1, q3,2, q3,3, qf; the base transitions only carry readX.]

SLIDE 13

An example extensible automaton

[Diagram: the same automaton with extension δF1 added, contributing monitorX, getX, and unmonitorX transitions.]

SLIDE 14

An example extensible automaton

[Diagram: the automaton with extensions δF1 and δF2; δF2 contributes log-readX and log-monX transitions.]

SLIDE 15

An example extensible automaton

[Diagram: the automaton with extensions δF1, δF2, and δF3; δF3 contributes log-monX and log-unmonX transitions around the monitoring loop, with further monitorX, getX, and unmonitorX transitions.]

SLIDE 16

An aside: weighted automata relation

Weighted finite automata can (somewhat) describe extensible automata when

  • wt describes a product in some semiring (e.g. wt(∆′) = ∏_{δ∈∆′} wt(δ)); or
  • wt ranges over a group, preserving its identity

Then stratify the extensible automaton A into a WFA (but: exponential blowup, so illustrative only for membership).

[Diagram: the stratified WFA, with one copy A+∆′ for each ∆′ ⊆ {δ1, δ2, δ3}, from A+∅ up to A+{δ1,δ2}, A+{δ2,δ3}, A+{δ1,δ3}, and edges between copies weighted by wt of the added extensions.]
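In the semiring-product case the weight of a chosen extension set is just the product of the individual extension weights. A small hypothetical sketch:

```python
from math import prod

def product_wt(per_extension_weight):
    """Build a weight function wt with wt(chosen) equal to the product
    of the individual extension weights, i.e. wt(D') = prod of wt(d)
    over d in D', matching the semiring-product case above."""
    return lambda chosen: prod(per_extension_weight[d] for d in chosen)
```

Note that the empty product is 1, the multiplicative identity, so the bare base automaton gets weight 1.
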

SLIDE 17

Problems

Definition

An instance of the cost-constrained superset realization (CCSR) problem is a tuple (L, (A, ∆, wt), c) where

  • L is a language given as a DFA,
  • (A, ∆, wt) is an extensible automaton, and
  • c ∈ range(wt).

The question is whether there exists a proper realization A+∆′ of A with L ⊆ L(A+∆′) and weight greater than or equal to c.

SLIDE 18

Problems

Definition

An instance of the cost-constrained superset realization (CCSR) problem is a tuple (L, (A, ∆, wt), c) where

  • L is a language given as a DFA,
  • (A, ∆, wt) is an extensible automaton, and
  • c ∈ range(wt).

The question is whether there exists a proper realization A+∆′ of A with L ⊆ L(A+∆′) and weight greater than or equal to c.

Definition

An instance of CCSR where L is a singleton is an instance of the cost-constrained membership realization (CCMR) problem.

SLIDE 19

Hardness

Lemma

The CCSR and CCMR problems are NP-complete even for a constant wt.

SAT reduction: variables x1, . . . , xn, clauses c1, . . . , cm; the base automaton B is a chain x1 → x2 → · · · → xn → c1 → · · · → cm → qf, with extensions such that a proper realization accepts a^(n+m) iff c1 ∧ · · · ∧ cm is satisfiable. E.g. the literal occurrences x1 ∈ c2, x1 ∈ c4, ¬x2 ∈ c1 and xn ∈ c3 imply extensions δx1,{c2,c4}, δ¬x2,{c1} and δxn,{c3,cm}, such that B+{δx1,{c2,c4}, δ¬x2,{c1}, δxn,{c3,cm}} accepts.

[Diagram: the chain automaton with the a-transitions added by the chosen extensions.]

With some node duplication and/or subset management taken into account.

SLIDE 20

Monotonic weights and bounded confusion depth

Unfortunate but unsurprising; the case seems somewhat unrealistic, however.

Definition

The extension confusion depth of an extensible automaton A = ((Q, Σ, q0, δ, F), ∆, wt) is the smallest k ∈ N such that for all α1, . . . , αn ∈ Σ (for n ∈ N) and q ∈ Q, we have |↓{∆′ ⊆ ∆ | q0 −α1→A+∆′ · · · −αn→A+∆′ q}| ≤ k, where ↓S ⊆ S are the minimal incomparable sets of the set of sets S.

A mouthful; in summary: bound the number of mutually exclusive realizations that make some state reachable on some string. More permissive than the prefix ambiguity of A+∆.
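The ↓S operator in the definition keeps only the minimal sets of a family of sets. A sketch of this helper (names hypothetical):

```python
def minimal_sets(family):
    """↓S: drop every set that has a proper subset in the family,
    leaving only the minimal, mutually incomparable sets."""
    family = [frozenset(s) for s in family]
    return {s for s in family if not any(t < s for t in family)}
```

For example, ↓{{1,2}, {1}, {2,3}} = {{1}, {2,3}}: the set {1,2} is dropped because its proper subset {1} is also in the family.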

SLIDE 21

CCMR w. monotonic wt and bounded confusion depth

Algorithm for CCMR with monotonic weights in terms of confusion depth:

Definition 9. The extension confusion depth of an extensible automaton A = (B, ∆, wt) is the smallest k such that for all α1, . . . , αn ∈ Σ and q ∈ Q we have |↓{∆′ ⊆ ∆ | q0 −α1→A+∆′ · · · −αn→A+∆′ q}| ≤ k.

Remark 3. Sperner's theorem [Spe28] dictates that the extension confusion depth of extensible automata is bounded by (|∆| choose ⌊|∆|/2⌋) (and this bound is tight). The assumption this section operates under, however, is that the confusion depth will in fact be bounded by some polynomial in |∆|.

Example 3. The automaton in Example 2 has extension confusion depth 2, as reaching the final state on the string monitorX · getX · unmonitorX can be done with either the realization {δF1} or the realization {δF2, δF3}, but no subset of either.

Further note that the construction in the proof of Lemma 1 will produce an extensible automaton with confusion depth at least 2^n (where n is the number of variables, as in the construction), as long as each variable occurs both negated and non-negated in the formula: any realization consisting of extensions setting each of the variables reaches the same state on the same string, giving pairwise incomparable options.

With these definitions in hand, Algorithm 1 solves CCMR for monotonic weight functions, and does so efficiently if the confusion depth is bounded.

Algorithm 1 Solve-Monotonic-CCMR

Input: (i) a string α1 · · · αn; (ii) an extensible automaton A = (B, ∆, wt) with extension confusion depth k and a monotonic weight function wt, letting B = (Q, Σ, q0, δ, F); and (iii) a minimum weight c.

Perform steps:
1. Initialize tables T, T′ : Q → 2^(2^∆) to be undefined everywhere.
2. Set T(q0) := {∅}.
3. For each symbol α in α1, . . . , αn, in order:
   3.1 For each q ∈ dom(T):
      3.1A If q −α→B q′, set T′(q′) := T(q).
      3.1B Otherwise, iteratively, for each δ′ ∈ ∆ with q −α→A+{δ′} q′ for some q′ ∈ Q, set T′(q′) := ↓(T′(q′) ∪ {∆′ ∪ {δ′} | ∆′ ∈ T(q)}) if T′(q′) is defined, ↓({∆′ ∪ {δ′} | ∆′ ∈ T(q)}) otherwise.
   3.2 Set T := T′ and set T′ to be undefined everywhere.
4. For each qf ∈ F and each ∆′ ∈ T(qf):
   4.1 check whether A+∆′ is a proper realization,
   4.2 if it is and wt(∆′) ≥ c, answer “true”.
5. Otherwise answer “false”.

Runs in O(nmk²) (n the input length, m the automaton size, k the confusion depth)
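A hypothetical Python rendering of Algorithm 1, a sketch under the assumption that extensions are indexed by position and wt takes a frozenset of indices (not the authors' implementation):

```python
def solve_monotonic_ccmr(word, base_delta, q0, finals, extensions, wt, c):
    """Sketch of Solve-Monotonic-CCMR. Transitions are (state, symbol,
    state) triples; `extensions` is a list of such sets; `wt` maps a
    frozenset of extension indices to a weight. Answers whether some
    proper realization accepts `word` with weight >= c."""

    def minimal_sets(family):                    # the ↓ operator
        return {s for s in family if not any(t < s for t in family)}

    def step(delta, q, a):
        return {q2 for (p, b, q2) in delta if p == q and b == a}

    T = {q0: {frozenset()}}      # state -> antichain of extension index sets
    for a in word:
        T2 = {}
        for q, sets in T.items():
            moved = step(base_delta, q, a)
            if moved:                            # the base DFA has the move
                for q2 in moved:
                    T2[q2] = minimal_sets(T2.get(q2, set()) | sets)
            else:                                # try each single extension
                for i, ext in enumerate(extensions):
                    for q2 in step(ext, q, a):
                        new = {s | {i} for s in sets}
                        T2[q2] = minimal_sets(T2.get(q2, set()) | new)
        T = T2
    for qf in finals:
        for chosen in T.get(qf, ()):
            delta = set(base_delta).union(*(extensions[i] for i in chosen))
            seen = {}
            proper = all(seen.setdefault((p, b), q2) == q2
                         for (p, b, q2) in delta)  # proper = deterministic
            if proper and wt(chosen) >= c:
                return True
    return False
```

The tables T here hold antichains of extension-index sets, pruned with ↓ at every step, which is what keeps the table sizes bounded by the confusion depth.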

SLIDE 22

CCSR w. monotonic weights and bounded confusion depth

CCSR remains hard:

Theorem

CCSR is NP-complete even for extensible automata with extension confusion depth 1 and a constant wt.

[Diagram: the reduction automaton: chains over x1 · · · xn and c1 · · · cm from q0 to qf, with additional branch states q1,1 · · · q4,3 and a-transitions contributed by extensions such as δ¬x1, δx3, δ1,3, and δ4,3.]

Is relying on unreachable parts cheating? Then confusion depth 2.

SLIDE 23

Refining CCSR for monotonic weights

We can at least count carefully: starting with an exponential search procedure finding a proper realization with low enough weight

  • Explore the space of L × A+∆, building up mappings σ : L → A+∆
  • Keep ensuring that the mapping of transitions under σ corresponds to some proper realization ∆′
  • Ensure the candidates do not exceed weight c

This is then successively refined by pruning the search space L × A+∆

SLIDE 24

Refining CCSR for monotonic weights

We can at least count carefully: starting with an exponential search procedure finding a proper realization with low enough weight

  • Explore the space of L × A+∆, building up mappings σ : L → A+∆
  • Keep ensuring that the mapping of transitions under σ corresponds to some proper realization ∆′
  • Ensure the candidates do not exceed weight c

This is then successively refined by pruning the search space L × A+∆

Theorem

For a CCSR instance (L, A = (B, ∆, wt), c), letting D be the minimal DFA accepting L, evaluating Algorithm 4 using deterministic search runs in time O(nml·2^min(s,|∆|)), where n is the number of states in B, m the number of states in D, l the size of the input alphabet, and s the degree of nondeterminism of b-prune(f-prune(D × A+∆)).

SLIDE 25

Conclusions

We’ve seen:

  • A take on capturing an important class of questions
  • A formalization approximating the questions
  • Some results attempting to make the induced problems tractable in a computationally constrained environment

Left as future work:

  • The formalization should be extended: transducers (with origin information?) and more complex state are obvious choices
  • The modeling of feature models should be refined
  • The specifics of the probabilistic case should be explored
SLIDE 26

Conclusions

We’ve seen:

  • A take on capturing an important class of questions
  • A formalization approximating the questions
  • Some results attempting to make the induced problems tractable in a computationally constrained environment

Left as future work:

  • The formalization should be extended: transducers (with origin information?) and more complex state are obvious choices
  • The modeling of feature models should be refined
  • The specifics of the probabilistic case should be explored

Thanks for listening!