

slide-1
SLIDE 1

Adaptive Logics – SS2015 – @RUB

Christian Straßer

Institute for Philosophy II, Ruhr-University Bochum, Germany Centre for Logic and Philosophy of Science Ghent University, Belgium Christian.Strasser@RUB.de

April 16, 2015

1/111

slide-2
SLIDE 2

Warming up

◮ saying hi
◮ webpage: http://www.ruhr-uni-bochum.de/philosophy/defeasible-reasonin
◮ formalities

2/111

slide-3
SLIDE 3

Useful Introductory Literature

◮ Batens, D. (2004). The need for adaptive logics in epistemology. In (Eds.), Logic, Epistemology, and the Unity of Science (pp. 459–485). Springer.

◮ Batens, D. (2007). A universal logic approach to adaptive logics. Logica Universalis, 1, 221–242.

◮ Straßer, C. (2014). Adaptive Logics for Defeasible Reasoning: Applications in Argumentation, Normative Reasoning and Default Reasoning. Springer.

3/111

slide-4
SLIDE 4

Deductive Reasoning vs. Defeasible Reasoning?

Deductive Reasoning

n > 2. n is prime. Therefore, n is odd.

◮ truth-conducive:
◮ if each premise is true
◮ then the conclusion is true
◮ (no exceptions)

Defeasible Reasoning

Tweety is a bird. Birds fly. Therefore, Tweety flies.

◮ What if Tweety is a penguin?
◮ tentative
◮ not truth-conducive
◮ internal / external dynamics

4/111

slide-5
SLIDE 5

5/111

slide-6
SLIDE 6

What makes defeasible inferences feasible?

◮ . . . and that despite the lack of truth-conduciveness
◮ what “compensates” for that?
◮ nevertheless: they are “usually”, “in most cases”, “typically”, or “normally” truth-conducive, e.g.

◮ reasoning on the basis of normality: Tweety flies since “normally” birds fly.

◮ inductive generalizations:
A restricted number of samples of a class of objects shares a property P. Hence, all entities in the class share the property P.
Tacit assumption: the sample class is normal in the sense that the homogeneity of the observed property applies to the whole class.

◮ probabilistic reasoning: statistical syllogism (Pollock):
X is an A. P(A is a B) is high. Hence, X is a B.
Tacit assumption: X is not exceptional with respect to the given probabilities.

6/111

slide-7
SLIDE 7

The tacit normality assumption of defeasible reasoning

[Diagram: Premises → Conclusion, with ceteris normalibus support]

7/111

slide-8
SLIDE 8

The static character of non-defeasible reasoning

◮ Immunity to revision with respect to external information: Monotonicity
◮ In terms of ⊢: if Γ ⊢ A, then Γ ∪ Γ′ ⊢ A
◮ We never throw away previous inferences in face of new knowledge.
◮ In terms of Cn: Cn(Γ) ⊆ Cn(Γ ∪ Γ′)
◮ Immunity to revision with respect to new insights won in the reasoning process

8/111

slide-9
SLIDE 9

Two types of dynamics of defeasible reasoning

◮ External dynamics
◮ new info causes the retraction of previous inferences
◮ e.g. Tweety is a penguin. → withdraw: Tweety flies.
◮ Pollock: synchronic defeasibility

◮ Internal dynamics
◮ growing insight in the given information can cause the withdrawal of previous inferences
◮ Pollock: diachronic defeasibility

[Diagram: premises support a conclusion; withdrawal of the conclusion is due to internal dynamics (growing insight that the case is abnormal) or due to external dynamics (new knowledge)]

9/111

slide-10
SLIDE 10

Formalizing Defeasible Reasoning: Why bother?

◮ Understanding
◮ Unification (via Adaptive Logics)
◮ Comparability
◮ Finetuning
◮ Variation

10/111

slide-11
SLIDE 11

Two Types of Defeaters

[Diagram: Premises → Conclusion; an undercut attacks the inference, a rebuttal attacks the conclusion]

◮ undercut: the premises do not warrant the conclusion
◮ rebuttal: the conclusion does not hold

11/111

slide-12
SLIDE 12

Towards ALs: a simple example

The logic CL◦

Take classical logic and add a ‘dummy operator’ ◦. More on CL in a moment . . .

12/111

slide-13
SLIDE 13

The Sherlock Holmes Twist

Interpret ◦A by “By the given evidence it is reasonable to assume A”.

◮ If our detective has reason to assume A, — ◦A ◮ infer that A is the case – defeasibly.

13/111

slide-14
SLIDE 14

Is CL◦ already a good logic for Sherlock Holmes?

◮ Suppose he gets some evidence that suggests that A is the case, — ◦A.
◮ He cannot infer A yet with CL◦.
◮ Option 1: do nothing. This would be a boring detective.
◮ Option 2: ‘jump to the conclusion A’
◮ however, what now if he gets different evidence that indicates that ¬A is the case, — ◦¬A?

14/111

slide-15
SLIDE 15

How would Holmes reason?

◦A
―――  (defeasible assumption: ◦A ⊃ A)
A

How to model this formally? ⇒ Adaptive Logics

15/111

slide-16
SLIDE 16

The Three Parts that Characterize Adaptive Logics

  • 1. The Lower Limit Logic
  • 2. The set of abnormalities
  • 3. The adaptive strategy

16/111

slide-17
SLIDE 17

The Lower Limit Logic

lower limit logic (LLL) → adaptive logic (AL) → upper limit logic (ULL)

◮ the AL interprets the given information as “normally as possible”
◮ the ULL interprets the given information rigorously as normal
◮ the AL strengthens the LLL with normality assumptions and approximates the ULL

17/111

slide-18
SLIDE 18

The Lower Limit Logic: in our example

lower limit logic (LLL): CL◦ → adaptive logic (AL) → upper limit logic (ULL)

◮ the AL interprets the given information as “normally as possible”
◮ the ULL interprets the given information rigorously as normal: ⊢ULL ◦A ⊃ A
◮ the AL strengthens the LLL with normality assumptions: given ◦A . . . assume ◦A ⊃ A unless . . .
◮ the AL approximates the ULL

18/111

slide-19
SLIDE 19

Requirements for Lower Limit Logics

◮ reflexive: Γ ⊆ CnLLL(Γ)
◮ transitive: if Γ′ ⊆ CnLLL(Γ), then CnLLL(Γ ∪ Γ′) ⊆ CnLLL(Γ)
◮ monotonic: CnLLL(Γ) ⊆ CnLLL(Γ ∪ Γ′)
◮ compact: if A ∈ CnLLL(Γ), then A ∈ CnLLL(Γ′) for some finite Γ′ ⊆ Γ
◮ has a characteristic semantics
◮ often we need to speak about an enriched LLL: it is enriched by classical operators denoted by a “check”: e.g. ¬̌, ∨̌, etc. (we will discuss this topic in more detail later)
◮ some papers make the distinction between the enriched LLL and LLL explicit by writing LLL+ for the former system
◮ premise sets are considered to not contain “checked” connectives

19/111

slide-20
SLIDE 20

Inference Rules and Proofs: Hilbert Style

CL is defined by Modus Ponens (MP) and the following axiom schemata:

(A⊃1) A ⊃ (B ⊃ A)
(A⊃2) (A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C))
(A⊃3) ((A ⊃ B) ⊃ A) ⊃ A
(A∧1) (A ∧ B) ⊃ A
(A∧2) (A ∧ B) ⊃ B
(A∧3) A ⊃ (B ⊃ (A ∧ B))
(A∨1) A ⊃ (A ∨ B)   Weakening
(A∨2) B ⊃ (A ∨ B)   Weakening
(A∨3) (A ⊃ C) ⊃ ((B ⊃ C) ⊃ ((A ∨ B) ⊃ C))   Reasoning by cases
(A≡1) (A ≡ B) ⊃ (A ⊃ B)
(A≡2) (A ≡ B) ⊃ (B ⊃ A)
(A≡3) (A ⊃ B) ⊃ ((B ⊃ A) ⊃ (A ≡ B))
(A¬1) (A ⊃ ¬A) ⊃ ¬A
(A¬2) ¬A ⊃ (A ⊃ B)   Ex Contradictione Quodlibet
(A¬3) A ∨ ¬A   Excluded Middle

20/111

slide-21
SLIDE 21

Example: A proof

Show: {p ∧ q, p ⊃ r, r ⊃ s} ⊢ s.

21/111
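A worked derivation for the task above, assuming (A∧1) has the usual form (A ∧ B) ⊃ A (the conjunction schemata are left blank in this transcript):

```latex
\begin{align*}
1.\;& p \wedge q             && \text{premise}\\
2.\;& (p \wedge q) \supset p && (A{\wedge}1)\\
3.\;& p                      && \text{MP } 1, 2\\
4.\;& p \supset r            && \text{premise}\\
5.\;& r                      && \text{MP } 3, 4\\
6.\;& r \supset s            && \text{premise}\\
7.\;& s                      && \text{MP } 5, 6
\end{align*}
```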

slide-22
SLIDE 22

Task: Prove A ⊃ A

Tip: you only need (A⊃1) and (A⊃2).

22/111
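One standard solution to this exercise, using only (A⊃1), (A⊃2) and MP, instantiates B in (A⊃1) by A ⊃ A:

```latex
\begin{align*}
1.\;& A \supset ((A \supset A) \supset A) && (A{\supset}1)\\
2.\;& (A \supset ((A \supset A) \supset A)) \supset ((A \supset (A \supset A)) \supset (A \supset A)) && (A{\supset}2)\\
3.\;& (A \supset (A \supset A)) \supset (A \supset A) && \text{MP } 1, 2\\
4.\;& A \supset (A \supset A) && (A{\supset}1)\\
5.\;& A \supset A && \text{MP } 3, 4
\end{align*}
```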

slide-23
SLIDE 23

Proof: (B ∨ C) ⊃ (¬B ⊃ C)

23/111

slide-24
SLIDE 24

Proof: A ⊃ B, B ⊃ C ⊢ A ⊃ C

24/111

slide-25
SLIDE 25

Proof: ¬¬A ⊃ A

25/111

slide-26
SLIDE 26

Proof: A ⊃ ¬¬A

26/111

slide-27
SLIDE 27

Resolution Theorem

The Resolution Theorem

Γ ⊢ A ⊃ B implies Γ ∪ {A} ⊢ B. Proof:

27/111

slide-28
SLIDE 28

The Deduction Theorem

Γ ∪ {A} ⊢ B implies Γ ⊢ A ⊃ B. Proof:

◮ Idea: every proof of B from Γ ∪ {A} can be transformed into a proof of A ⊃ B from Γ.
◮ Let P be an arbitrary proof of B from Γ ∪ {A}.
◮ We show by induction on the length of P that for every line l in proof P on which C is proved, A ⊃ C can be proven from Γ.
◮ “l = 1”: C is either an axiom, or a premise in Γ, or A.
◮ Suppose C is an axiom. Note that by (A⊃1), C ⊃ (A ⊃ C). Also, we can introduce C since it is an axiom. By MP, A ⊃ C.
◮ Suppose C ∈ Γ. By (A⊃1), C ⊃ (A ⊃ C). By MP, A ⊃ C.
◮ Suppose C = A. We have shown above that ⊢ A ⊃ A.

28/111

slide-29
SLIDE 29

The Deduction Theorem

Γ ∪ {A} ⊢ B implies Γ ⊢ A ⊃ B. Proof (continued):

◮ “l ⇒ l+1”: (i) C is either an axiom, or (ii) C ∈ Γ, or (iii) C = A, or (iv) C is obtained via MP from D and D ⊃ C which were derived on lines l′ and l′′ (where l′, l′′ ≤ l). The first three cases are as in the induction base. Hence, suppose (iv).
◮ By the induction hypothesis, we have Γ ⊢ A ⊃ D and Γ ⊢ A ⊃ (D ⊃ C).
◮ Hence, there are proofs P1 of X = A ⊃ D from Γ and P2 of Y = A ⊃ (D ⊃ C) from Γ.
◮ We concatenate P1 and P2, obtaining P3.
◮ By (A⊃2), Z = (A ⊃ (D ⊃ C)) ⊃ ((A ⊃ D) ⊃ (A ⊃ C)).
◮ By Y, Z and MP, W = (A ⊃ D) ⊃ (A ⊃ C).
◮ By X, W and MP, A ⊃ C.

29/111

slide-30
SLIDE 30

Semantics: Providing Meaning

◮ atomic level: assignment function v : A → {0, 1}
◮ model: M is associated with an assignment function
◮ truth in a model defined recursively:
  ◮ M ⊨ A where A ∈ A iff v(A) = 1
  ◮ M ⊨ A ∧ B iff M ⊨ A and M ⊨ B
  ◮ M ⊨ A ∨ B iff M ⊨ A or M ⊨ B
  ◮ M ⊨ ¬A iff M ⊭ A
  ◮ M ⊨ A ⊃ B iff M ⊭ A or M ⊨ B
  ◮ M ⊨ A ≡ B iff (M ⊨ A iff M ⊨ B)
◮ or via an evaluation function vM : W → {0, 1}
  ◮ vM(A) = v(A) where A ∈ A
  ◮ vM(A ∧ B) = min(vM(A), vM(B))
  ◮ vM(A ∨ B) = max(vM(A), vM(B))
  ◮ vM(A ⊃ B) = max(1 − vM(A), vM(B))
  ◮ etc.
◮ Semantic consequence: Γ ⊨ A iff for all models M: if M ⊨ B for all B ∈ Γ, then M ⊨ A.
◮ Truth-functional operator π: vM(π(A1, . . . , An)) = f(vM(A1), . . . , vM(An)) for some function f.

30/111
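The recursive clauses for vM translate directly into code. The sketch below is illustrative only: the tuple encoding of formulas and the function name `ev` are my own choices, not anything from the slides.

```python
from itertools import product

def ev(v, f):
    """v_M: evaluate formula f under assignment v (a dict from atoms to 0/1).

    Formulas are atoms (strings) or tuples such as ('imp', 'p', ('neg', 'q')).
    """
    if isinstance(f, str):          # v_M(A) = v(A) for atomic A
        return v[f]
    if f[0] == 'neg':               # v_M(~A) = 1 - v_M(A)
        return 1 - ev(v, f[1])
    a, b = ev(v, f[1]), ev(v, f[2])
    return {'and':   min(a, b),         # v_M(A & B) = min(v_M(A), v_M(B))
            'or':    max(a, b),         # v_M(A v B) = max(v_M(A), v_M(B))
            'imp':   max(1 - a, b),     # v_M(A > B) = max(1 - v_M(A), v_M(B))
            'equiv': 1 if a == b else 0}[f[0]]

# (A⊃1), i.e. A ⊃ (B ⊃ A), comes out true under every assignment:
axiom1 = ('imp', 'p', ('imp', 'q', 'p'))
assert all(ev(dict(zip('pq', bits)), axiom1) == 1
           for bits in product((0, 1), repeat=2))
```

Iterating over all assignments, as in the last lines, is exactly the finite check behind the soundness argument for the axiom schemata on a later slide.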

slide-31
SLIDE 31

Soundness

Γ ⊢ D implies Γ ⊨ D.

Proof.

◮ Take an arbitrary proof of D from Γ. Let M be an arbitrary model of Γ.
◮ We prove by induction over the length of the proof that for each formula E derived at a line l, M ⊨ E.
◮ “l = 1”: E is either (i) an axiom or (ii) E ∈ Γ. (ii) is trivial. Suppose (i). Suppose E = A ⊃ (B ⊃ A) (see (A⊃1)). M ⊨ A ⊃ (B ⊃ A) iff vM(A ⊃ (B ⊃ A)) = 1 iff max(1 − vM(A), max(1 − vM(B), vM(A))) = 1. Note that the latter holds. The proof is similar for the other axioms.
◮ “l ⇒ l+1”: Either (i) E is an axiom, or (ii) E ∈ Γ, or (iii) E is derived via MP from F ⊃ E and F, where F ⊃ E and F are derived at lines l′ ≤ l and l′′ ≤ l resp. Only (iii) is non-trivial. By the induction hypothesis, M ⊨ F and M ⊨ F ⊃ E. Thus, vM(F) = vM(F ⊃ E) = max(1 − vM(F), vM(E)) = 1. Thus, M ⊨ E.
31/111

slide-32
SLIDE 32

Completeness

Γ ⊨ A implies Γ ⊢ A. Proof: we need some preparation for that.

32/111

slide-33
SLIDE 33

◮ A set Γ is inconsistent iff Γ ⊢ A for all A ∈ W.
◮ Γ is consistent iff Γ is not inconsistent.

Proposition 1

If Γ ⊬ A then Γ ∪ {¬A} is consistent.

Proof.

◮ Suppose Γ ∪ {¬A} is inconsistent.
◮ Hence, Γ ∪ {¬A} ⊢ ¬¬A.
◮ Hence, by the deduction theorem, Γ ⊢ ¬A ⊃ ¬¬A.
◮ By (A¬1) and MP, Γ ⊢ ¬¬A.
◮ Since ¬¬A ⊢ A (see above), Γ ⊢ A.

33/111

slide-34
SLIDE 34

Explosion in Hilbert

Show {A, ¬A} ⊢ B. Tip: use (A¬2) and MP.

34/111
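A worked solution for this task, assuming (A¬2) has the familiar Ex Contradictione Quodlibet form ¬A ⊃ (A ⊃ B) (the schema itself is left blank in this transcript):

```latex
\begin{align*}
1.\;& \neg A \supset (A \supset B) && (A{\neg}2)\\
2.\;& \neg A                       && \text{premise}\\
3.\;& A \supset B                  && \text{MP } 1, 2\\
4.\;& A                            && \text{premise}\\
5.\;& B                            && \text{MP } 3, 4
\end{align*}
```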

slide-35
SLIDE 35

Proposition 2

If Γ is consistent then there is a model of Γ.

Proof.

◮ Let W be enumerated by A1, A2, . . ..
◮ Let Γ0 = Γ and define: Γi+1 = Γi ∪ {Ai} if Γi ∪ {Ai} is consistent, and Γi+1 = Γi ∪ {¬Ai} otherwise.
◮ Let Γ∗ = ⋃i≥0 Γi.
◮ Claim: Γ∗ is consistent. Proof: by induction. IB: Γ0 is consistent by supposition. “i → i+1”: by definition of Γi+1. Hence, each Γi is consistent. Assume Γ∗ is inconsistent. Hence, Γ∗ ⊢ A1 and Γ∗ ⊢ ¬A1. Hence (by compactness), there is a Γi such that Γi ⊢ A1 and there is a Γj such that Γj ⊢ ¬A1. Take k = max(i, j). Then (by monotonicity), Γk ⊢ A1 and Γk ⊢ ¬A1. Hence, Γk is inconsistent (since A, ¬A ⊢ B).

35/111

slide-36
SLIDE 36

◮ Claim: Γ∗ ⊢ A implies A ∈ Γ∗ (Deductive Closure). Suppose Γ∗ ⊢ A and assume A ∉ Γ∗. Hence, ¬A ∈ Γ∗. But then Γ∗ ⊢ A and Γ∗ ⊢ ¬A and hence Γ∗ is inconsistent, a contradiction.
◮ Claim: B, C ∈ Γ∗ iff B ∧ C ∈ Γ∗. Suppose B, C ∈ Γ∗. Hence, by ResThm and (A∧3), Γ∗ ⊢ B ∧ C. Hence, B ∧ C ∈ Γ∗. Suppose B ∧ C ∈ Γ∗. Hence, Γ∗ ⊢ B (by (A∧1)). Thus B ∈ Γ∗. The case for C is analogous.
◮ Claim: B ∨ C ∈ Γ∗ iff (B ∈ Γ∗ or C ∈ Γ∗). Let B ∨ C ∈ Γ∗. Since B ∨ C ⊢ ¬B ⊃ C, ¬B ⊃ C ∈ Γ∗. Suppose B ∉ Γ∗ and hence ¬B ∈ Γ∗. Hence, by MP, C ∈ Γ∗. Now suppose B ∈ Γ∗. Since B ⊃ (B ∨ C), also B ∨ C ∈ Γ∗. The case for C ∈ Γ∗ is analogous.
◮ Claim: A ∈ Γ∗ iff ¬A ∉ Γ∗. Suppose A ∈ Γ∗. Assume ¬A ∈ Γ∗; then Γ∗ is inconsistent, a contradiction. The other way is analogous.

36/111

slide-37
SLIDE 37

◮ Define M via the assignment v(A) = 1 iff A ∈ Γ∗ (for A ∈ A).
◮ We show by induction over the length of a formula that M ⊨ A iff A ∈ Γ∗.
◮ A ∈ A: by definition.
◮ Let B, C be such that M ⊨ B [C] iff B [C] ∈ Γ∗.
◮ Let A = B ∧ C. Let B ∧ C ∈ Γ∗. Hence, B, C ∈ Γ∗. Hence, by the induction hypothesis, M ⊨ B and M ⊨ C. Hence, M ⊨ B ∧ C. The other way around is analogous.
◮ Let A = B ∨ C. Let B ∨ C ∈ Γ∗. Hence, B ∈ Γ∗ or C ∈ Γ∗. By the induction hypothesis, M ⊨ B or M ⊨ C. Hence, M ⊨ B ∨ C. The other way around is analogous.
◮ Let A = ¬B. Let ¬B ∈ Γ∗. Hence, B ∉ Γ∗. By the induction hypothesis, M ⊭ B and hence M ⊨ ¬B. The other way around is analogous.
◮ etc.

37/111

slide-38
SLIDE 38

Completeness: Γ ⊨ A implies Γ ⊢ A.

Proof.

38/111

slide-39
SLIDE 39

Abnormalities

◮ They determine the normality assumptions by means of which the AL strengthens the LLL.
◮ In other words, they determine what it means to interpret the premises as “normal”. (We will come to the “as possible” part later.)
◮ characterized by a logical form F
◮ the set of all abnormalities is denoted by Ω

. . . in our example . . .

◮ recall: the normality assumption was that if ◦A then A is the case
◮ hence, Ω = {◦A ∧ ¬A | A ∈ W}

39/111

slide-40
SLIDE 40
3. The adaptive strategy affects both

  • 1. the proof theory, and
  • 2. the semantics

Let’s start with the proof theory.

40/111

slide-41
SLIDE 41

Adaptive proofs

41/111

slide-42
SLIDE 42

Adaptive proofs: the generic rules

◮ PREMises are introduced on the empty condition (no normality assumption is needed for that):

If A ∈ Γ:
A   ∅   (PREM)

◮ the Unconditional Rule:

If A1, . . . , An ⊢LLL B:
A1   ∆1
. . .   . . .
An   ∆n
B   ∆1 ∪ · · · ∪ ∆n   (RU)

These two rules give us the full power of the LLL: If Γ ⊢LLL A, then Γ ⊢AL A.
42/111

slide-43
SLIDE 43

Adaptive proofs: the conditional rule

If A1, . . . , An ⊢LLL B ∨̌ Dab(Θ):
A1   ∆1
. . .   . . .
An   ∆n
B   ∆1 ∪ · · · ∪ ∆n ∪ Θ   (RC)

◮ Dab(Θ) is a notational convention that denotes the disjunction of the abnormalities in Θ,
◮ where Θ ⊆ Ω is a finite set of abnormalities
◮ as in RU, the conditions of the used lines (∆1, . . . , ∆n) are carried forward

The rationale of RC:

From A1, . . . , An it follows by the LLL that either B is true or one of the abnormalities in Θ is true. The AL allows us to conditionally derive B under the assumption that none of the abnormalities in Θ is true.

43/111

slide-44
SLIDE 44

Time for examples . . .

◮ recall: ◦A ⊢CL◦ A ∨ (◦A ∧ ¬A)
◮ a conditional derivation by means of RC:

1   ◦A   PREM   ∅
2   A   1; RC   {◦A ∧ ¬A}

◮ also: {◦A, A ⊃ B} ⊢CL◦ B ∨ (◦A ∧ ¬A)

3   A ⊃ B   PREM   ∅
4   B   1,3; RC   {◦A ∧ ¬A}

Now what if we also have ◦¬A?

44/111

slide-45
SLIDE 45

Marking of lines in adaptive proofs

Example 1: a simple case of marking

1   ◦A   PREM   ∅
2   A   1; RC   {◦A ∧ ¬A}   ✓6
3   A ⊃ B   PREM   ∅
4   B   1,3; RC   {◦A ∧ ¬A}   ✓6
5   ¬A   PREM   ∅
6   ◦A ∧ ¬A   1,5; RU   ∅

(✓s indicates that the line is marked at stage s.)

◮ the condition has been derived (on the empty condition)
◮ so, obviously, it’s not safe anymore to assume that ◦A ∧ ¬A is not the case (resp. that ◦A implies A)
◮ thus, mark all lines with this assumption
45/111

slide-46
SLIDE 46

Marking of lines in adaptive proofs

Example 2: a more complex case of marking

1   ◦A   PREM   ∅
2   A   1; RC   {◦A ∧ ¬A}   ✓7
3   A ⊃ B   PREM   ∅
4   B   1,3; RC   {◦A ∧ ¬A}   ✓7
5   ¬A ∨ ¬B   PREM   ∅
6   ◦B   PREM   ∅
7   (◦A ∧ ¬A) ∨ (◦B ∧ ¬B)   1,5,6; RU   ∅

(✓s indicates that the line is marked at stage s.)

◮ here ◦A ∧ ¬A is part of a disjunction of abnormalities that has been derived on the empty condition (line 7)
◮ note that the formula at line 7 is a Dab-formula
◮ this disjunction is minimal: right now we have no means to decide whether ◦A ∧ ¬A or ◦B ∧ ¬B is the case (or even both)
◮ hence, we’re cautious and mark lines whose conditions intersect with the members of the minimal disjunction of abnormalities on line 7

46/111

slide-47
SLIDE 47

Adaptive strategies and marking

◮ the specifics of the marking definition of an AL depend on the adaptive strategy that is used
◮ there are two standard strategies:

  • 1. the reliability strategy
  • 2. the minimal abnormality strategy

◮ we write ALr for an AL characterized by a triple ⟨LLL, Ω, reliability⟩
◮ we write ALm for an AL characterized by a triple ⟨LLL, Ω, minimal abnormality⟩

47/111

slide-48
SLIDE 48

The reliability strategy: marking

◮ a stage of a proof is a list of consecutive lines
◮ where Dab(∆1), Dab(∆2), . . . are the minimal disjunctions of abnormalities that are derived at some stage s on the empty condition from the premise set Γ, let Σs(Γ) =df {∆1, ∆2, . . .}
◮ the set of unreliable formulas at stage s is defined by Us(Γ) =df ∆1 ∪ ∆2 ∪ . . . = ⋃Σs(Γ)

Marking definition for the reliability strategy

A line l with condition ∆ is marked at stage s iff ∆ ∩ Us(Γ) ≠ ∅.

◮ in words: a line is marked iff its condition contains unreliable formulas
◮ put differently: a line is marked if its condition contains formulas that are part of minimal disjunctions of abnormalities
◮ let’s take a look at our examples . . .

48/111
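The definitions of Σs(Γ), Us(Γ) and reliability marking can be sketched in a few lines. This is an illustrative toy encoding (all names are mine, not from the slides): each abnormality is a plain string, and each Dab-formula derived on the empty condition is given as the set of its disjuncts.

```python
def minimal_dabs(dabs):
    """Sigma_s(Gamma): keep only the set-minimal Dab-formulas at this stage."""
    return [d for d in dabs if not any(e < d for e in dabs)]

def unreliable(dabs):
    """U_s(Gamma): the union of all minimal Dab-formulas."""
    u = set()
    for d in minimal_dabs(dabs):
        u |= d
    return u

def marked(condition, dabs):
    """Reliability marking: a line is marked iff its condition
    intersects U_s(Gamma)."""
    return bool(condition & unreliable(dabs))

# Example 2 above: (oA & ~A) v (oB & ~B) was derived on the empty condition.
dabs = [{'oA&~A', 'oB&~B'}]
assert marked({'oA&~A'}, dabs)       # lines 2 and 4 are marked
# Once oB & ~B alone is derived (a shorter minimal Dab-formula), the
# two-disjunct formula is no longer minimal and oA&~A is no longer unreliable:
dabs.append({'oB&~B'})
assert not marked({'oA&~A'}, dabs)
assert marked({'oB&~B'}, dabs)
```

The last three lines replay the coming-and-going of marks shown a few slides later, where line 9 demotes the Dab-formula of line 7.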

slide-49
SLIDE 49

Example 1: a simple case of marking

1   ◦A   PREM   ∅
2   A   1; RC   {◦A ∧ ¬A}   ✓6
3   A ⊃ B   PREM   ∅
4   B   1,3; RC   {◦A ∧ ¬A}   ✓6
5   ¬A   PREM   ∅
6   ◦A ∧ ¬A   1,5; RU   ∅

◮ Σ6(Γ) = {{◦A ∧ ¬A}}
◮ U6(Γ) = {◦A ∧ ¬A}

49/111

slide-50
SLIDE 50

Example 2: a more complex case of marking

1   ◦A   PREM   ∅
2   A   1; RC   {◦A ∧ ¬A}   ✓7
3   A ⊃ B   PREM   ∅
4   B   1,3; RC   {◦A ∧ ¬A}   ✓7
5   ¬A ∨ ¬B   PREM   ∅
6   ◦B   PREM   ∅
7   (◦A ∧ ¬A) ∨ (◦B ∧ ¬B)   1,5,6; RU   ∅

◮ Σ7(Γ) = {{◦A ∧ ¬A, ◦B ∧ ¬B}}
◮ U7(Γ) = {◦A ∧ ¬A, ◦B ∧ ¬B}

50/111

slide-51
SLIDE 51

1   ◦A   PREM   ∅
2   A   1; RC   {◦A ∧ ¬A}
3   A ⊃ B   PREM   ∅
4   B   1,3; RC   {◦A ∧ ¬A}
5   ¬A ∨ ¬B   PREM   ∅
6   ◦B   PREM   ∅
7   (◦A ∧ ¬A) ∨ (◦B ∧ ¬B)   1,5,6; RU   ∅
8   ¬B   PREM   ∅
9   ◦B ∧ ¬B   6,8; RU   ∅

◮ Σ7(Γ) = {{(◦A ∧ ¬A), (◦B ∧ ¬B)}}
◮ U7(Γ) = {◦A ∧ ¬A, ◦B ∧ ¬B}
◮ note that at line 9 the formula of stage 7 loses its status of being a minimal(!) Dab-formula
◮ Σ9(Γ) = {{◦B ∧ ¬B}}
◮ U9(Γ) = {◦B ∧ ¬B}

51/111

slide-52
SLIDE 52

Markings come and go: lines which are unmarked may be marked at a later stage, and be unmarked again at an even later stage. BUT: when does a formula count as a consequence of the AL?

Final derivability

A formula A is finally derived at line l at a finite stage s iff (i) l is unmarked at stage s and (ii) for every extension of the proof in which l is marked, there is a further extension in which l is unmarked. (Note: the extensions in question may be infinite.)

The adaptive derivability relation ⊢AL and the adaptive consequence set CnAL

Γ ⊢AL A iff there is an AL-proof from Γ in which A is finally derived.
CnAL(Γ) = {A | Γ ⊢AL A}

52/111

slide-53
SLIDE 53

Some nice properties of the consequence/derivability relation

◮ recall: Us(Γ) =df ⋃Σs(Γ), where Σs(Γ) =df {∆ | Dab(∆) is a minimal Dab-formula at stage s}
◮ let Σ(Γ) be the set of all ∆ for which Γ ⊢LLL Dab(∆) and for all ∆′ ⊂ ∆, Γ ⊬LLL Dab(∆′)
◮ let U(Γ) =df ⋃Σ(Γ)

Theorem 1

Γ ⊢ALr A iff there is a ∆ ⊆ Ω for which Γ ⊢LLL A ∨̌ Dab(∆) and ∆ ∩ U(Γ) = ∅.

53/111

slide-54
SLIDE 54

Another example

Suppose a reliable although not infallible witness reports that

◮ Mr. X wore a long black coat in the bar in which he was seen half an hour before the murder. — ◦l

Another reliable although not infallible source however witnesses that

◮ Mr. X wore a short dark blue jacket and black trousers at the same time. — ◦j

Obviously ¬(l ∧ j), since both cannot be the case. Moreover, we have

◮ If Mr. X was dressed in a long black coat, then he was dressed in a dark way. — l ⊃ m
◮ If Mr. X was dressed in a short dark blue jacket and black trousers, then he was dressed in a dark way. — j ⊃ m

54/111

slide-55
SLIDE 55

1   ◦l   PREM   ∅
2   ◦j   PREM   ∅
3   ¬(l ∧ j)   PREM   ∅
4   l ⊃ m   PREM   ∅
5   j ⊃ m   PREM   ∅
6   l   1; RC   {◦l ∧ ¬l}   ✓10
7   m   4,6; RU   {◦l ∧ ¬l}   ✓10
8   j   2; RC   {◦j ∧ ¬j}   ✓10
9   m   5,8; RU   {◦j ∧ ¬j}   ✓10
10   (◦l ∧ ¬l) ∨ (◦j ∧ ¬j)   1,2,3; RU   ∅

(✓s indicates that the line is marked at stage s.)

◮ according to the reliability strategy lines 7 and 9 are marked (as are lines 6 and 8)
◮ the rationale is: since (◦l ∧ ¬l) ∨ (◦j ∧ ¬j) is a minimal Dab-formula, one of the two abnormalities is the case, or even both
◮ in case both are the case, neither of the lines 7 and 9 is safe

55/111

slide-56
SLIDE 56

1   ◦l   PREM   ∅
2   ◦j   PREM   ∅
3   ¬(l ∧ j)   PREM   ∅
4   l ⊃ m   PREM   ∅
5   j ⊃ m   PREM   ∅
6   l   1; RC   {◦l ∧ ¬l}   ✓10
7   m   4,6; RU   {◦l ∧ ¬l}   ✓10
8   j   2; RC   {◦j ∧ ¬j}   ✓10
9   m   5,8; RU   {◦j ∧ ¬j}   ✓10
10   (◦l ∧ ¬l) ∨ (◦j ∧ ¬j)   1,2,3; RU   ∅

◮ another rationale: interpreting the premises as normally as possible means that we assume that as few abnormalities as possible are the case
◮ for the disjunction at line 10 this means that we assume that only one of the two abnormalities is the case (we don’t know which one though)
◮ however, then at least one of the two assumptions at lines 7 and 9 can be considered as safe and thus it is still (defeasibly) warranted to infer m

56/111

slide-57
SLIDE 57

The minimal abnormality strategy

◮ Recall: Σs(Γ) =df {∆ | Dab(∆) is a minimal Dab-formula at stage s}
◮ Φs(Γ) is the set of all minimal choice sets of Σs(Γ)
◮ a choice set of {∆i | i ∈ I} is a set that contains a member of each ∆i (i ∈ I)
◮ ϕ is a minimal choice set iff there is no choice set ϕ′ such that ϕ′ ⊂ ϕ
◮ example: Let S = {{1, 2}, {2, 3}}.
  ◮ {1} is not a choice set of S since {1} ∩ {2, 3} = ∅
  ◮ {1, 2} is a choice set of S
  ◮ {1, 3} and {2} are the minimal choice sets of S
◮ each set ϕ ∈ Φs(Γ) offers a minimally abnormal interpretation of the given premises resp. minimal Dab-formulas according to the current stage s of the proof. By minimally abnormal we mean that as few abnormalities as possible are interpreted as true.

57/111
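The choice-set definitions can be checked mechanically. A small sketch in my own encoding (Σ as a list of finite sets; nothing here comes from the slides), replaying the S = {{1, 2}, {2, 3}} example:

```python
from itertools import product

def choice_sets(sigma):
    """All choice sets of a list of finite sets: pick one member of each set.

    (This enumerates picks, so duplicates collapse via the frozenset.)"""
    return {frozenset(pick) for pick in product(*sigma)}

def minimal_choice_sets(sigma):
    """Phi: the subset-minimal choice sets."""
    cs = choice_sets(sigma)
    return {phi for phi in cs if not any(psi < phi for psi in cs)}

# The example from the slide: S = {{1, 2}, {2, 3}}
S = [{1, 2}, {2, 3}]
assert minimal_choice_sets(S) == {frozenset({2}), frozenset({1, 3})}
```

Note that enumerating one pick per set can miss non-minimal choice sets such as {1, 2, 3}, but it always finds all minimal ones, which is what Φs(Γ) needs.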

slide-58
SLIDE 58

Marking for minimal abnormality

A line l with condition ∆ and formula A is marked at stage s iff (i) there is no ϕ ∈ Φs(Γ) such that ϕ ∩ ∆ = ∅, or (ii) for some ϕ ∈ Φs(Γ) there is no line at which A is derived on a condition Θ for which Θ ∩ ϕ = ∅.

what does this mean, intuitively . . .

◮ condition (i) expresses that the assumption ∆ on which A is derived is not warranted in any minimally abnormal interpretation offered by Φs(Γ): in each ϕ ∈ Φs(Γ) there is an abnormality that is also in ∆, while the assumption expressed by the condition ∆ is that no abnormality in ∆ is true
◮ condition (ii) expresses that there is a minimally abnormal interpretation ϕ ∈ Φs(Γ) such that A is not derived under any condition that is warranted in ϕ

58/111

slide-59
SLIDE 59

So, when is a line unmarked according to minimal abnormality?

A line l with formula A and condition ∆ is not marked at stage s iff (i) there is a ϕ ∈ Φs(Γ) such that ∆ ∩ ϕ = ∅, and (ii) for each ϕ ∈ Φs(Γ) there is a ∆ϕ such that ∆ϕ ∩ ϕ = ∅ and A is derived on the condition ∆ϕ at stage s.

What does this mean, intuitively . . .

◮ condition (i) expresses that there is a minimally abnormal interpretation ϕ ∈ Φs(Γ) in which the assumption ∆ is warranted
◮ condition (ii) expresses that for each minimally abnormal interpretation ϕ ∈ Φs(Γ) our A is derived on an assumption ∆ϕ that is warranted in ϕ

59/111

slide-60
SLIDE 60

1   ◦l   PREM   ∅
2   ◦j   PREM   ∅
3   ¬(l ∧ j)   PREM   ∅
4   l ⊃ m   PREM   ∅
5   j ⊃ m   PREM   ∅
6   l   1; RC   {◦l ∧ ¬l}   ✓10
7   m   4,6; RU   {◦l ∧ ¬l}
8   j   2; RC   {◦j ∧ ¬j}   ✓10
9   m   5,8; RU   {◦j ∧ ¬j}
10   (◦l ∧ ¬l) ∨ (◦j ∧ ¬j)   1,2,3; RU   ∅

(✓s indicates that the line is marked at stage s.)

◮ Σ10(Γ) = {{(◦l ∧ ¬l), (◦j ∧ ¬j)}}
◮ Φ10(Γ) = {{◦l ∧ ¬l}, {◦j ∧ ¬j}}
◮ lines 6 and 8 are marked since they violate condition (ii)
◮ lines 7 and 9 are not marked:
  ◮ concerning (i): there is a minimal choice set with which the condition has empty intersection
  ◮ concerning (ii): there is no minimal choice set that intersects with both conditions, {◦l ∧ ¬l} and {◦j ∧ ¬j}

60/111

slide-61
SLIDE 61

Floating conclusions and the adaptive strategies

Floating conclusion

A is a floating conclusion in case it is reached by various conflicting arguments.

◮ reliability blocks the floating conclusion m from Γ = {◦l, ◦j, ¬(l ∧ j), l ⊃ m, j ⊃ m}: Γ ⊬CLr m
◮ minimal abnormality derives the floating conclusion: Γ ⊢CLm m

61/111

slide-62
SLIDE 62

Some nice property

◮ recall: Φs(Γ) is the set of minimal choice sets of Σs(Γ)
◮ Σ(Γ) is the set of all ∆ for which Γ ⊢LLL Dab(∆) and for all ∆′ ⊂ ∆, Γ ⊬LLL Dab(∆′)
◮ Let Φ(Γ) be the set of minimal choice sets of Σ(Γ)

Theorem 2

Γ ⊢ALm A iff for every ϕ ∈ Φ(Γ) there is a ∆ ⊆ Ω for which ∆ ∩ ϕ = ∅ and Γ ⊢LLL A ∨̌ Dab(∆).

62/111

slide-63
SLIDE 63

Final derivability revisited?

◮ recall: A formula A is finally derived at a finite stage s at line l iff (i) l is unmarked at stage s and (ii) for every extension of the proof in which l is marked, there is a further extension in which l is unmarked.
◮ say: A formula A is finally *-derived at a finite stage s at line l iff (i) l is unmarked at stage s and (ii) for every finite extension of the proof in which l is marked there is a finite further extension in which l is not marked.
◮ Let Γ ⊢∗AL A iff there is a proof in which A is finally *-derived.

Does this work?

Γ ⊢AL A iff Γ ⊢∗AL A.

Nope

E.g. Γ = {(◦Ai ∧ ¬Ai) ∨ (◦Aj ∧ ¬Aj) | j > i > 0} ∪ {B ∨ Ai | i > 1}. Here Γ ⊢AL B while Γ ⊬∗AL B.

63/111

slide-64
SLIDE 64

Diderik Batens. A universal logic approach to adaptive logics. Logica Universalis, 1:221–242, 2007.

Diderik Batens. Towards a dialogic interpretation of dynamic proofs. In Cédric Dégremont, Laurent Keiff, and Helge Rückert, editors, Dialogues, Logics and Other Strange Things. Essays in Honour of Shahid Rahman, pages 27–51. College Publications, London, 2009.

Peter Verdée. Adaptive logics using the minimal abnormality strategy are Π11-complex. Synthese, 167:93–104, 2009.

64/111

slide-65
SLIDE 65

Semantics for Adaptive Logics: The basic idea

◮ Take the set of LLL-models of a premise set Γ
◮ order them according to their abnormal part, i.e. Ab(M) = {A ∈ Ω | M ⊨ A}
◮ in flat adaptive logics in standard format this is done by means of: M1 ≺ M2 iff Ab(M1) ⊂ Ab(M2)
◮ select the models that are beyond a certain threshold

[Diagram: models M1, . . . , M6 ordered by their abnormal parts]

65/111

slide-66
SLIDE 66

What threshold?

The threshold depends on the strategy:

Minimal Abnormality

◮ Idea: take the minimally abnormal models
◮ M ∈ MALm(Γ) iff M ∈ MLLL(Γ) and for all M′ ∈ MLLL(Γ): if Ab(M′) ⊆ Ab(M), then Ab(M′) = Ab(M).

Reliability

◮ Idea: take the models whose abnormal part only consists of unreliable abnormalities
◮ we call these models “reliable”
◮ M ∈ MALr(Γ) iff M ∈ MLLL(Γ) and Ab(M) ⊆ U(Γ)

66/111
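Both thresholds operate only on the abnormal parts Ab(M), so they can be sketched by identifying each model with its abnormal part. An illustrative toy encoding (all names are mine), replaying the witness example from the slides:

```python
def minimally_abnormal(abnormal_parts):
    """Select the M for which no M' has a strictly smaller abnormal part."""
    return {m for m in abnormal_parts if not any(n < m for n in abnormal_parts)}

def reliable(abnormal_parts, U):
    """Select the M with Ab(M) a subset of U(Gamma)."""
    return {m for m in abnormal_parts if m <= U}

# Abnormal parts of the models M1, ..., M6 of the witness example:
A1 = frozenset({'ol&~l'})
A2 = frozenset({'oj&~j'})
A3 = frozenset({'ol&~l', 'oj&~j'})
A4 = frozenset({'ol&~l', 'ok&~k'})
A5 = frozenset({'oj&~j', 'oo&~o'})
A6 = frozenset({'ol&~l', 'oj&~j', 'ok&~k', 'oo&~o'})
models = {A1, A2, A3, A4, A5, A6}
U = {'ol&~l', 'oj&~j'}

assert minimally_abnormal(models) == {A1, A2}   # M1, M2
assert reliable(models, U) == {A1, A2, A3}      # M1, M2, M3
```

The two assertions reproduce the selections listed on the following slide: M1 and M2 are minimally abnormal, and M1, M2, M3 are reliable, so the minimally abnormal models are among the reliable ones.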

slide-67
SLIDE 67

Let’s go back to our example...

◮ Γ = {◦l, ◦j, ¬(l ∧ j), l ⊃ m, j ⊃ m}
◮ Γ ⊢CL◦ (◦l ∧ ¬l) ∨ (◦j ∧ ¬j)
◮ Γ ⊬CL◦ ◦l ∧ ¬l
◮ Γ ⊬CL◦ ◦j ∧ ¬j
◮ hence, U(Γ) = {◦l ∧ ¬l, ◦j ∧ ¬j}
◮ we have for instance the following models M1, . . . , M6 where
  ◮ Ab(M1) = {◦l ∧ ¬l},
  ◮ Ab(M2) = {◦j ∧ ¬j},
  ◮ Ab(M3) = {◦l ∧ ¬l, ◦j ∧ ¬j},
  ◮ Ab(M4) = {◦l ∧ ¬l, ◦k ∧ ¬k}
  ◮ Ab(M5) = {◦j ∧ ¬j, ◦o ∧ ¬o}
  ◮ Ab(M6) = {◦l ∧ ¬l, ◦j ∧ ¬j, ◦k ∧ ¬k, ◦o ∧ ¬o}
◮ models M1 and M2 are minimally abnormal
◮ models M1, M2, and M3 are reliable

67/111

slide-68
SLIDE 68

[Diagrams: (a) the ordering of the models according to their abnormal part; (b) the threshold for the reliable models; (c) the threshold for the minimally abnormal models]

Note that every minimally abnormal model is reliable: MALm(Γ) ⊆ MALr(Γ)
68/111

slide-69
SLIDE 69

Is the ordering of models smooth?

The danger: infinite descending chains without minima

[Diagram: an infinitely descending chain of models]

◮ w.r.t. the infinite chains without minima there are no minimally abnormal models
◮ e.g. if there are only infinite chains without minima, there are no minimally abnormal models: Γ ⊨AL ⊥ (although there are LLL-models of Γ and hence Γ ⊭LLL ⊥)

69/111

slide-70
SLIDE 70

Smoothness and Reassurance

◮ a partial order ⟨X, ≺⟩ is well-founded iff there are no infinitely descending chains
◮ a partial order ⟨X, ≺⟩ is smooth (resp. stoppered) iff for each x ∈ X there is a minimal element y ∈ X such that y ≺ x or y = x
◮ what we need is: ⟨{Ab(M) | M ∈ MLLL(Γ)}, ⊂⟩ is smooth. (Note it may be smooth but not well-founded (e.g. invert the order on the natural numbers).)

Theorem 3

  • 1. For every LLL-model M of Γ, M is minimally abnormal or there is an LLL-model M′ of Γ such that Ab(M′) ⊂ Ab(M) and M′ is minimally abnormal.
  • 2. ⟨{Ab(M) | M ∈ MLLL(Γ)}, ⊂⟩ is smooth.
  • 3. If Γ has LLL-models, then there are minimally abnormal models of Γ.
  • 4. If Γ ⊭LLL ⊥ then Γ ⊭AL ⊥.

70/111

slide-71
SLIDE 71

Simple facts about choice sets

Let in the following Σ = Σ(Γ).

Fact 4

Where ϕ is a choice set of Σ and A ∈ ϕ: if A satisfies

  there is a ∆ ∈ Σ such that ϕ ∩ ∆ = {A}   (†)

then ϕ \ {A} is not a choice set of Σ.

Fact 5

Where ϕ is a choice set of Σ and A ∈ ϕ: if A doesn’t satisfy (†), then ϕ \ {A} is also a choice set of Σ.

Fact 6

Where ϕ is a choice set of Σ: each A ∈ ϕ satisfies (†) iff ϕ is a minimal choice set of Σ.

71/111

slide-72
SLIDE 72

Lemma 7

Where ϕ = {A1, A2, . . .} is a choice set of Σ, let ϕ̂ = ⋂i∈ℕ ϕi, where ϕ1 = ϕ and
  ϕi+1 = ϕi, if there is a ∆ ∈ Σ s.t. ϕi ∩ ∆ = {Ai};
  ϕi+1 = ϕi \ {Ai}, otherwise.
Then ϕ̂ is a minimal choice set of Σ.

Proof.

◮ note that ϕi is a choice set of Σ for each i ∈ ℕ
◮ Claim: ϕ̂ is a choice set of Σ. Assume for some ∆ ∈ Σ, ϕ̂ ∩ ∆ = ∅. Note that since ∆ is finite, ∆ ∩ ϕ1 = {B1, . . . , Bn} for some n ∈ ℕ. Hence there is no Bj s.t. for all i ∈ ℕ, Bj ∈ ϕi ∩ ∆. Hence, for each Bj there is an ij such that Bj ∉ ϕij ∩ ∆. Take k = max({ij | 1 ≤ j ≤ n}); then ϕk ∩ ∆ = ∅ since (⋆) {B1, . . . , Bn} ⊇ ϕi ∩ ∆ ⊇ ϕi+1 ∩ ∆. This is a contradiction since ϕk is a choice set of Σ.
◮ Suppose some Ai ∈ ϕ̂ does not satisfy (†). Hence, for all ∆ ∈ Σ, ϕ̂ ∩ ∆ ≠ {Ai}. Hence, ϕi ∩ ∆ ≠ {Ai} for all ∆ ∈ Σ. But then Ai ∉ ϕ̂, a contradiction.
◮ Hence, by the facts above, ϕ̂ is a minimal choice set.

72/111

slide-73
SLIDE 73

Simple facts about the relation between choice sets and the abnormal parts of models

73/111

slide-74
SLIDE 74

Lemma 8

If ϕ ∈ Φ(Γ), then there is an M ∈ MLLL(Γ) for which Ab(M) ⊆ ϕ.

Proof.

◮ assume there is no M ∈ MLLL(Γ) s.t. Ab(M) ⊆ ϕ
◮ then Γ ∪ (Ω \ ϕ)¬ has no LLL-models (where Θ¬ =df {¬A | A ∈ Θ})
◮ by the compactness of LLL, there is a finite ∆ ⊆ Ω \ ϕ such that Γ ∪ ∆¬ has no LLL-model
◮ hence Γ ⊨LLL Dab(∆) and hence Γ ⊢LLL Dab(∆)
◮ this is a contradiction to ϕ ∈ Φ(Γ)

Lemma 9

Where M ∈ MLLL(Γ), Ab(M) is a choice set of Σ(Γ).

Proof.

Let ∆ ∈ Σ(Γ); then Γ ⊢LLL Dab(∆). Hence, Ab(M) ∩ ∆ ≠ ∅.

74/111

slide-75
SLIDE 75

Corollary 10

Where M ∈ MLLL(Γ), Ab(M) ⊄ ϕ for all ϕ ∈ Φ(Γ).

Corollary 11

For all ϕ ∈ Φ(Γ) there is an M ∈ MLLL(Γ) such that (i) Ab(M) = ϕ and (ii) M ∈ MALm(Γ).

Corollary 12

Where M ∈ MLLL(Γ), M ∈ MALm(Γ) iff Ab(M) ∈ Φ(Γ).

Corollary 13 (Strong Reassurance)

For each M ∈ MLLL(Γ) there is an M′ ∈ MALm(Γ) such that Ab(M′) ⊆ Ab(M).

Proof.

By Lemma 9 and Lemma 7 there is a ϕ ∈ Φ(Γ) such that ϕ ⊆ Ab(M). By Corollary 11 there is an M′ ∈ MALm(Γ) for which Ab(M′) = ϕ.

75/111

slide-76
SLIDE 76

Links between the marking and the semantic selection: Reliability

Syntax: Theorem 14

Γ ⊢ALr A iff there is a ∆ ⊆ Ω for which Γ ⊢LLL A ∨̌ Dab(∆) and ∆ ∩ U(Γ) = ∅.

Semantics

Γ ⊨ALr A iff for each M ∈ MLLL(Γ): if Ab(M) ⊆ U(Γ), then M ⊨ A.
76/111

slide-77
SLIDE 77

Links between the marking and the semantic selection: Minimal Abnormality

Syntax: Theorem 15

Γ ⊢ALm A iff for every ϕ ∈ Φ(Γ) there is a ∆ ⊆ Ω for which ∆ ∩ ϕ = ∅ and Γ ⊢LLL A ∨̌ Dab(∆).

Semantics: Theorem 16

Let MLLL(Γ) be non-empty.

  • 1. MALm(Γ) = ⋃ϕ∈Φ(Γ) {M ∈ MLLL(Γ) | Ab(M) = ϕ}
  • 2. ϕ ∈ Φ(Γ) iff there is an M ∈ MALm(Γ) for which Ab(M) = ϕ.
77/111

slide-78
SLIDE 78

Conflicts in adaptive proofs

A conflict between a defeasible inference and a “hard fact”

◮ “hard facts”: derived on the empty condition
◮ Type 1: hard facts conflict with defeasible assumptions → marking
◮ Type 2: hard facts conflict with defeasible conclusions:

l    A    . . .   ∆
l′   ¬A   . . .   ∅

◮ in this case Γ ⊢LLL Dab(∆)
◮ the line will be marked
◮ shortcut rule:

A   ∆
¬A   ∅
Dab(∆)   ∅   (RC0)

Lemma 17

An AL-proof contains a line at which A is derived on the condition ∆ iff Γ ⊢LLL A ∨̌ Dab(∆).

78/111

slide-79
SLIDE 79

A conflict between two defeasible inferences

◮ Type 1: concerning the defeasible assumption

l  A        . . .  ∆
l′ Dab(∆)   . . .  Θ

◮ in this case Γ ⊢LLL Dab(∆ ∪ Θ)
◮ shortcut rule:

A            ∆
Dab(∆)       Θ
Dab(∆ ∪ Θ)   ∅   (RD1)

◮ Type 2: concerning defeasible consequences

l  A     . . .  ∆
l′ ¬A    . . .  Θ

◮ in this case Γ ⊢LLL Dab(∆ ∪ Θ)
◮ shortcut rule:

A            ∆
¬A           Θ
Dab(∆ ∪ Θ)   ∅   (RD2)

79/111

slide-80
SLIDE 80

Some trouble with the classical connectives

◮ we need some classical connectives in order to express Dab-formulas (i.e. the classical disjunction)
◮ but what if the LLL already has a classical disjunction?
◮ suppose ∨ is classical and part of the language of the LLL
◮ Let !A =df ♦A ∧ ♦¬A
◮ Let Γ = Γ1 ∪ Γ2 where
  ◮ Γ1 = {!Ai ∨ !Aj | 1 ≤ i < j}
  ◮ Γ2 = {⋀1≤i<j≤n (!Ai ∨ !Aj) ⊃ (A ∨ !An−1) | 1 < n}
◮ Note, Φ(Γ) = {ϕi | i > 0} where ϕi = Ω \ {!Ai}.
◮ Moreover, Γ ⊢LLL A ∨ !Ai
◮ Hence, for all M ∈ MTm(Γ), M ⊨ A, and hence Γ ⊨Tm A.

80/111

slide-81
SLIDE 81

1  !A1 ∨ !A2                            PREM            ∅
2  (!A1 ∨ !A2) ⊃ (A ∨ !A1)              PREM            ∅
3  A ∨ !A1                              1, 2; RU        ∅
4  A                                    3; RC           {!A1}
5  !A1 ∨ !A3                            PREM            ∅
6  !A2 ∨ !A3                            PREM            ∅
7  ⋀1≤i<j≤3 (!Ai ∨ !Aj) ⊃ (A ∨ !A2)     PREM            ∅
8  A ∨ !A2                              1, 5, 6, 7; RU  ∅
9  A                                    8; RC           {!A2}

◮ Φ4(Γ) = {{!A1}, {!A2}}
◮ Φ9(Γ) = {{!A1, !A2}, {!A1, !A3}, {!A2, !A3}}
◮ Γ ⊢∗ALm A

81/111

slide-82
SLIDE 82

How to save the day?

◮ classical “checked” symbols are superimposed on the language of the LLL
◮ where W is the set of wffs of the LLL, W+ is the ˇ∨, ˇ∧, ˇ¬, . . .-closure of W
◮ premise sets are considered to be formulated in W
◮ sometimes authors distinguish btw. LLL and LLL+
◮ Dab-formulas are formulated with ˇ∨
◮ Why does this solve our problem?

1  !A1 ∨ !A2                   PREM      ∅
2  (!A1 ∨ !A2) ⊃ (A ∨ !A1)     PREM      ∅
3  A ∨ !A1                     1, 2; RU  ∅
4  A                           3; RC     {!A1}

◮ line 4 is not marked anymore since !A1 ∨ !A2 is not a
Dab-formula

82/111

slide-83
SLIDE 83

The Upper Limit Logic

◮ Recall: the upper limit logic rigorously interprets the premises as normal
◮ hence, ⊢ULL ˇ¬A for all A ∈ Ω
◮ the consequence relation of the upper limit logic is then defined as follows:
  Γ ⊢ULL A iff Γ ∪ {ˇ¬A | A ∈ Ω} ⊢LLL A
◮ semantically ULL is characterized by all LLL-models M of Γ that are “normal”, i.e. that have an empty abnormal part, Ab(M) = ∅
◮ these are precisely the LLL-models of Γ ∪ {ˇ¬A | A ∈ Ω}

83/111

slide-84
SLIDE 84

ALs approximate ULL

Theorem 18

CnLLL (Γ) ⊆ CnAL (Γ) ⊆ CnULL (Γ)

Definition 19

A premise set Γ is normal iff it has one of the following equivalent properties

  • 1. Γ ∪ {ˇ¬A | A ∈ Ω} is LLL-non-trivial

  • 2. there are LLL-models M of Γ that are normal, i.e. for which Ab(M) = ∅

Theorem 20

If Γ is normal, then CnAL (Γ) = CnULL (Γ). If a premise set can rigorously be interpreted as normal, then the adaptive logic does so.

84/111

slide-85
SLIDE 85

Properties of the Standard Format

Theorem 21 (Soundness and Completeness)

Γ ⊢AL A iff Γ ⊨AL A.

Theorem 22 (Reflexivity)

Γ ⊆ CnAL (Γ)

Theorem 23 (Hierarchy of the Consequence Relations)

CnLLL (Γ) ⊆ CnALr (Γ) ⊆ CnALm (Γ) ⊆ CnULL (Γ)

Theorem 24 (Redundancy of LLL w.r.t. AL)

CnLLL (CnAL (Γ)) = CnAL (Γ)

Theorem 25

CnAL (CnLLL (Γ)) = CnAL (Γ)

Theorem 26 (Fixed Point)

CnAL (Γ) = CnAL (CnAL (Γ))

85/111

slide-86
SLIDE 86

Properties of the Standard Format

Theorem 27 (Cautious Cut / Cumulative Transitivity)

If Γ′ ⊆ CnAL (Γ) then CnAL (Γ ∪ Γ′) ⊆ CnAL (Γ).

Theorem 28 (Cautious Monotonicity)

If Γ′ ⊆ CnAL (Γ) then CnAL (Γ) ⊆ CnAL (Γ ∪ Γ′).

Corollary 29 (Cautious Indifference)

If Γ′ ⊆ CnAL (Γ) then CnAL (Γ) = CnAL (Γ ∪ Γ′).

Theorem 30 (Non-Monotonicity/Non-Transitivity)

If CnLLL (Γ) ⊂ CnAL (Γ) then AL is non-monotonic and non-transitive.

86/111

slide-87
SLIDE 87

The “rational” properties

Theorem 31

In general AL is not rationally monotonic, i.e. the following does not hold: if A ∈ CnAL (Γ) and A ∉ CnAL (Γ ∪ {B}), then ˇ¬B ∈ CnAL (Γ).

Theorem 32

Rational distributivity does not hold for ALs in general, i.e. the following does not hold: if A ∉ CnAL (Γ ∪ {B}) and A ∉ CnAL (Γ ∪ {C}), then A ∉ CnAL (Γ ∪ {B ˇ∨ C}).

87/111

slide-88
SLIDE 88

Some open questions for you

What about some well-known weakenings of Rational Monotonicity?

◮ If B ∈ CnL (Γ) and ˇ¬(B ∧ C) ∉ CnL (Γ), then B ∈ CnL (Γ ∪ {C}). (proposed by Lou Goble)
◮ If B ∈ CnL (Γ) and ˇ¬B ∉ CnL (Γ ∪ {C}), then B ∈ CnL (Γ ∪ {C}). (proposed by Giordano et al.)

Goble, L., A Proposal for Dealing with Deontic Dilemmas, in A. Lomuscio, D. Nute (eds), DEON, vol. 3065 of Lecture Notes in Computer Science, Springer, pp. 74–113, 2004.
Giordano, L., Olivetti, N., Gliozzi, V., Pozzato, G. L., ALC + T: a Preferential Extension of Description Logics, Fundamenta Informaticae, pp. 341–372, 2009.

88/111

slide-89
SLIDE 89

Other strategies: the simple strategy

◮ applicable in case all minimal Dab-consequences are abnormalities
◮ then: Φ(Γ) = {U(Γ)} and hence the reliability strategy and the minimal abnormality strategy result in the same consequence set
◮ then: all adaptive models have the same abnormal part
◮ simplified marking condition
◮ semantic selection à la minimal abnormality or reliability (both select the same models in this case)
◮ Task: understand why.

Definition 33 (Marking for the Simple Strategy)

A line l with condition ∆ is marked at stage s iff some B ∈ ∆ is derived on the empty condition.

Definition 34 (Marking for the Simple Strategy 2)

A line l with condition ∆ is marked at stage s iff for some ∆′ ⊆ ∆, Dab(∆′) is derived on the empty condition.
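Definition 33 is easy to operationalize. A toy sketch in an encoding of my own (a proof stage is a list of (formula, condition) pairs; abnormalities are strings):

```python
def simple_marked(condition, stage):
    """Def. 33: a line is marked iff some abnormality in its condition
    has been derived on the empty condition at the current stage."""
    derived_unconditionally = {f for (f, cond) in stage if not cond}
    return any(b in derived_unconditionally for b in condition)

# Stage in which "!A" has been derived on the empty condition:
stage = [("X", {"!A"}), ("!A", set())]
print(simple_marked({"!A"}, stage))  # True
print(simple_marked({"!B"}, stage))  # False
```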

89/111

slide-90
SLIDE 90

Other strategies: normal selections

◮ Rescher-Manor consequence relations:
  ◮ strong: what follows from every ∆ ∈ MCS(Γ)
  ◮ weak: what follows from some ∆ ∈ MCS(Γ)
◮ Default reasoning
  ◮ skeptical: in all extensions of the given default theory
  ◮ credulous: in some extension of the given default theory
◮ Abstract argumentation
  ◮ skeptical: in all extensions of a given argumentation framework
  ◮ credulous: in some extension of a given argumentation framework
◮ Adaptive Logics
  ◮ standard format: ⋂M∈MAL(Γ) {A | M ⊨ A}
  ◮ normal selections

90/111

slide-91
SLIDE 91

Normal Selections Strategy: going “weak” resp. “credulous”

Semantics

◮ equivalence relation on the LLL-models: M ∼ M′ iff Ab(M) = Ab(M′)
◮ partition of the minimally abnormal models: [M1]∼, [M2]∼, [M3]∼, . . .
◮ Γ ⊨nAL A iff there is a M ∈ MALm(Γ) such that for all M′ ∈ [M]∼, M′ ⊨ A.

91/111

slide-92
SLIDE 92

Normal Selections

Note: it is not the case that whatever is valid in some adaptive model is a consequence!

Γ = {!A ∨ !B, X ∨ !A}. Minimally abnormal models:

◮ models with abnormal part {!A}:
  ◮ some validate C (some arbitrary non-abnormal formula)
  ◮ some validate ¬C
◮ models with abnormal part {!B}: these validate X.

We have Γ ⊨nAL X but Γ ⊭nAL C.

92/111

slide-93
SLIDE 93

Normal Selections: Marking

Definition 35 (Marking for Normal Selections)

A line l with condition ∆ is marked at stage s iff Dab(∆) is derived on the empty condition at stage s.

Take Γ = {!A ∨ !B, X ∨ !A, Y ∨ !A ∨ !B}.

1  !A ∨ !B       PREM   ∅
2  X ∨ !A        PREM   ∅
3  Y ∨ !A ∨ !B   PREM   ∅
4  X             2; RC  {!A}
5  Y             3; RC  {!A, !B}
6  !A ˇ∨ !B      1; RU  ∅

93/111

slide-94
SLIDE 94

Combining ALs

References:

◮ Diderik Batens’ forthcoming book
◮ Frederik Van De Putte, Hierarchic Adaptive Logics [Logic Journal of the IGPL, 2011]
◮ Frederik Van De Putte and Christian Straßer, Extending the Standard Format of Adaptive Logics to the Prioritized Case [Logique et Analyse, to appear]
◮ Frederik Van De Putte and Christian Straßer, Three Formats of Prioritized Adaptive Logics: a Comparative Study [under review]

94/111

slide-95
SLIDE 95

Combining ALs

  • 1. diachronic combinations / sequential combination / vertical combination / superposing ALs:

Γ → AL1 → AL2 → . . . → consequences

  • 2. synchronic combinations / horizontal combination / HAL:

AL1, AL2, AL3, . . . (all built on the common LLL) → consequences

95/111

slide-96
SLIDE 96

Sequential Combinations

Consequence sets

◮ finite case:

CnSAL(Γ) = CnALn^sn (CnALn−1^sn−1 (. . . CnAL2^s2 (CnAL1^s1 (Γ)) . . .))

◮ infinite case:

CnSALi(Γ) = CnALi^si (. . . (CnAL2^s2 (CnAL1^s1 (Γ))) . . .)

This is generalized to the infinite case as follows:

CnSAL(Γ) = lim inf i→∞ CnSALi(Γ) = lim sup i→∞ CnSALi(Γ)

96/111

slide-97
SLIDE 97

Sequential Combinations

Semantics

◮ take all AL1-models: M1
◮ in case s2 = m: take all minimally abnormal models (w.r.t. Ω2) from M1
◮ in case s2 = r: take all reliable models (w.r.t. Ω2) from M1, i.e. select all models M ∈ M1 for which Ab(M) ⊆ ⋃{Ab(M′) | M′ ∈ M2^m}, where M2^m is the set of all minimally abnormal models (w.r.t. Ω2) from M1
◮ this way we get M2
◮ repeat this procedure until you’re through with all the ALs in the sequence
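The stepwise selection above can be sketched for the minimal-abnormality case. A toy encoding of my own: a model is represented only by its abnormal part, a frozenset of strings:

```python
def min_abnormal(models, omega):
    """Keep the models whose abnormal part, restricted to omega,
    is subset-minimal within the given set of models."""
    restrict = {m: m & frozenset(omega) for m in models}
    return {m for m in models
            if not any(restrict[m2] < restrict[m] for m2 in models)}

def sequential_selection(models, omegas):
    """Apply the minimal-abnormality selection for Omega_1, Omega_2, ... in turn."""
    for omega in omegas:
        models = min_abnormal(models, omega)
    return models

m1 = frozenset({"!A1"})        # a model with abnormal part {!A1}
m2 = frozenset({"!A2", "!B"})  # a model with abnormal part {!A2, !B}
result = sequential_selection({m1, m2}, [{"!A1", "!A2"}, {"!B"}])
print(result == {m1})  # True: both survive step 1, only m1 survives step 2
```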

97/111

slide-98
SLIDE 98

Problems with Sequential Combinations: No Fixed Point

Suppose we have s1 = s2 = r and Γ = {!A1 ∨ !A2, !A1 ∨ !B, X ∨ !A2} where !A1, !A2 ∈ Ω1 \ Ω2 and !B ∈ Ω2 \ Ω1. Take a look at the following AL1-proof:

1  !A1 ∨ !A2    PREM   ∅
2  !A1 ∨ !B     PREM   ∅
3  !A1 ˇ∨ !A2   1; RU  ∅
4  X ∨ !A2      PREM   ∅
5  X            4; RC  {!A2}

In AL2 we can proceed as follows (with the premise set CnAL1 (Γ)):

1  !A1 ∨ !B     PREM   ∅
2  !A1          1; RC  {!B}

Hence !A1 ∈ CnAL2 (CnAL1 (Γ)).

98/111

slide-99
SLIDE 99

Problems with Sequential Combinations: No Fixed Point

1  !A1 ∨ !A2    PREM   ∅
2  !A1 ∨ !B     PREM   ∅
3  !A1 ˇ∨ !A2   1; RU  ∅
4  X ∨ !A2      PREM   ∅
5  X            4; RC  {!A2}

In AL2 we can proceed as follows (with the premise set CnAL1 (Γ)):

1  !A1 ∨ !B     PREM   ∅
2  !A1          1; RC  {!B}

Hence !A1 ∈ CnAL2 (CnAL1 (Γ)).

Let’s now apply AL1 to the premise set CnAL2 (CnAL1 (Γ)):

1  !A1          PREM   ∅
2  X ∨ !A2      PREM   ∅
3  X            2; RC  {!A2}

Now, X is a consequence of CnAL1 (CnAL2 (CnAL1 (Γ))) and hence also of CnAL2 (CnAL1 (CnAL2 (CnAL1 (Γ)))).

99/111

slide-100
SLIDE 100

Problems with Sequential Combinations

◮ Note: the lack of a deduction theorem is the culprit: Γ ∪ {!A1} ⊢AL1 X but Γ ⊬AL1 !A1 ˇ⊃ X.
◮ This also shows that we don’t have Cautious Transitivity: CnSAL (Γ ∪ {!A1}) ⊈ CnSAL (Γ) although !A1 ∈ CnSAL (Γ).

100/111

slide-101
SLIDE 101

Problems with Sequential ALs: Lack of completeness for minimal abnormality

Let Γ = {X ∨ !Ai ∨ !Bi | i ∈ N} ∪ {!Ai ∨ !Aj | i ≠ j} and !Ai ∈ Ω1 \ Ω2 and !Bi ∈ Ω2 \ Ω1. Take a look at the following ALm1-proof from Γ:

1  X ∨ !A1 ∨ !B1   PREM   ∅
2  X ∨ !B1         1; RC  {!A1}
3  !A1 ∨ !A2       PREM   ∅
4  !A1 ˇ∨ !A2      3; RU  ∅

Hence, X ∨ !Bi is not derivable for any i ∈ N. Take a look at the following ALm2-proof from CnALm1 (Γ):

1  X ∨ !A1 ∨ !B1   PREM   ∅
2  X ∨ !A1         1; RC  {!B1}

Hence, X ∨ !A1 ∈ CnALm2 (CnALm1 (Γ)), but there is no way of deriving X.

101/111

slide-102
SLIDE 102

Problems with Sequential ALs: Lack of completeness for minimal abnormality

◮ Now let’s take a look at the semantic selection.
◮ MALm1(Γ) = {M ∈ MLLL(Γ) | Ab1(M) = Ω1 \ {!Ai} for some i ∈ N}.
◮ Hence, for each M ∈ MALm1(Γ), M ⊨ X ∨ !Bi for some i ∈ N.
◮ M2 = {M ∈ MALm1(Γ) | Ab2(M) = ∅}. Hence, for all M ∈ M2, M ⊨ X.
◮ Hence X ∉ CnSAL (Γ) but Γ ⊨SAL X.

102/111

slide-103
SLIDE 103

Restricted positive results

◮ Suppose Ω1 ⊆ Ω2 ⊆ . . .. Then SAL is sound.
◮ If one of the following holds, then SAL is sound and complete, has a fixed point, is cautious transitive, etc. (see REF). Let Σ(Γ) and Φ(Γ) be the corresponding sets of the AL ⟨LLL, ⋃i∈N Ωi, m⟩.

  • 1. Σ(Γ) is finite
  • 2. every ϕ ∈ Φ(Γ) is finite
  • 3. Φ(Γ) is finite

103/111

slide-104
SLIDE 104

Prioritized ALs

Prioritized abnormalities

◮ Ω1: contains the abnormalities we want to avoid most
◮ Ωi: given a choice between a level-i abnormality and a higher-level abnormality, we would rather commit the higher-level one; lower levels are avoided with higher priority

Prioritized Format for ALs

◮ Lower limit logic: LLL
◮ sequence of abnormalities: ⟨Ωi⟩i∈I
◮ adaptive strategy: minimal abnormality and reliability

104/111

slide-105
SLIDE 105

Prioritized ALs

Lexicographic order

◮ used e.g. in telephone books: compare sequences component-wise; as soon as one scores better, it is preferred
◮ general: Suppose we have a sequence of linear orders ⟨(Xi, <i)⟩i∈I (where I ⊆ N). Then define <lex ⊆ ×i∈I Xi as follows: ⟨ai⟩i∈I <lex ⟨bi⟩i∈I iff there is a minimal j such that (i) ak = bk for all k < j and (ii) aj <j bj.
◮ compare prioritized sequences of sets of abnormalities:
  Let ⟨∆i⟩i∈I, ⟨∆′i⟩i∈I ∈ ×i∈I ℘(Ωi).
  ⟨∆i⟩i∈I ⊏lex ⟨∆′i⟩i∈I iff (i) there is an i ∈ I such that ∆j = ∆′j for all j < i and (ii) ∆i ⊂ ∆′i.
  Let ∆, ∆′ ∈ ℘(⋃I Ωi). We write ∆ ⊏ ∆′ for ⟨∆ ∩ Ωi⟩i∈I ⊏lex ⟨∆′ ∩ Ωi⟩i∈I.
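The set-based order ⊏lex can be sketched directly. A toy encoding of my own: a prioritized trace is a list of sets, index 0 being the highest-priority level:

```python
def lex_smaller(trace_a, trace_b):
    """trace_a ⊏lex trace_b: the traces agree up to some level,
    and at the first differing level trace_a is a proper subset of trace_b."""
    for d, dp in zip(trace_a, trace_b):
        if d == dp:
            continue
        return d < dp  # proper-subset test at the first differing level
    return False  # equal traces are not strictly smaller

# {!B} at level 0 is a proper subset of {!A, !B}: smaller regardless of level 1
print(lex_smaller([{"!B"}, {"!C"}], [{"!A", "!B"}, set()]))  # True
print(lex_smaller([{"!A"}, set()], [{"!A"}, set()]))         # False
```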

105/111

slide-106
SLIDE 106

Prioritized ALs: Semantics

Definition 36

Ab(M) = {A ∈ Ω | M ⊨ A} where Ω = ⋃I Ωi.

M ∈ MALm⊏(Γ) iff M ∈ MLLL(Γ) and there is no M′ ∈ MLLL(Γ) such that Ab(M′) ⊏ Ab(M).

Alternative Characterization

◮ let Φ⊏(Γ) be the ⊏-minimal choice sets of Σ(Γ).
◮ note: if ∆ ⊂ ∆′ then ∆ ⊏ ∆′, and hence Φ⊏(Γ) ⊆ Φ(Γ)
◮ M ∈ MALm⊏(Γ) iff M ∈ MLLL(Γ) and Ab(M) ∈ Φ⊏(Γ).

106/111

slide-107
SLIDE 107

Other alternative semantic characterizations

Suppose in the following that Ωi ⊆ Ωi+1 for all i, i + 1 ∈ I. Then we have two more alternative semantic characterizations:

  • 1. sequential selections:
    ◮ M[0] = the set of LLL-models of Γ
    ◮ for each i in I: M[i] = the set of all minimally abnormal models in M[i − 1] w.r.t. Ωi

  • 2. intersecting: MALm⊏(Γ) = ⋂i∈I MALmi(Γ).

107/111

slide-108
SLIDE 108

Prioritized ALs: Proof theory

◮ same generic rules as proofs in the standard format
◮ let Φ⊏s(Γ) be the set of ⊏-minimal choice sets of Σs(Γ)

Definition 37

A line l with formula A and condition ∆ is marked iff (i) no ϕ ∈ Φ⊏s(Γ) is such that ϕ ∩ ∆ = ∅, or (ii) for some ϕ ∈ Φ⊏s(Γ) there is no line on which A is derived on a condition Θ for which Θ ∩ ϕ = ∅.
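Computing Φ⊏s(Γ) combines two earlier constructions: enumerate the choice sets of Σs(Γ), then keep the ⊏-minimal ones. A self-contained toy sketch in an encoding of my own (a level map assigns each abnormality its priority level; lower level = higher priority):

```python
from itertools import product

def lex_below(d, dp, level):
    """d ⊏ dp: compare the level-wise traces of the two sets; proper
    inclusion at the first level where the traces differ."""
    for lv in sorted(set(level.values())):
        a = {x for x in d if level[x] == lv}
        b = {x for x in dp if level[x] == lv}
        if a == b:
            continue
        return a < b
    return False

def lex_minimal_choice_sets(sigma, level):
    """The ⊏-minimal choice sets of Sigma(Gamma)."""
    if not sigma:
        return {frozenset()}
    candidates = {frozenset(pick) for pick in product(*sigma)}
    return {c for c in candidates
            if not any(lex_below(d, c, level) for d in candidates)}

# Slide-110-style example: Sigma = {{!A, !B}, {!A, !C}}, !A on level 1,
# !B on level 2, !C on level 3: {!B, !C} ⊏ {!A}, so only {!B, !C} survives.
level = {"!A": 1, "!B": 2, "!C": 3}
phi = lex_minimal_choice_sets([{"!A", "!B"}, {"!A", "!C"}], level)
print(phi == {frozenset({"!B", "!C"})})  # True
```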

108/111

slide-109
SLIDE 109

Let Γ = {X ∨ !Ai ∨ !Bi | i ∈ N} ∪ {!Ai ∨ !Aj | i ≠ j} and !Ai ∈ Ω1 \ Ω2 and !Bi ∈ Ω2 \ Ω1.

1  X ∨ !A1 ∨ !B1   PREM   ∅
2  X               1; RC  {!A1, !B1}
3  !A1 ∨ !A2       PREM   ∅
4  X ∨ !A2 ∨ !B2   PREM   ∅
5  X               4; RC  {!A2, !B2}

Note that Φ⊏(Γ) = {Ω1 \ {!Ai} | i ∈ N}. Since we can derive X on the condition {!Ai, !Bi} for each i ∈ N, Γ ⊢ALm⊏ X.

109/111

slide-110
SLIDE 110

Γ = {!A ∨ !B, !A ∨ !C, X ∨ !A, Y ∨ !B}. Let !A ∈ Ω1, !B ∈ Ω2 \ Ω1 and !C ∈ Ω3 \ (Ω1 ∪ Ω2).

1  !A ∨ !B    PREM   ∅
2  !A ∨ !C    PREM   ∅
3  X ∨ !A     PREM   ∅
4  Y ∨ !B     PREM   ∅
5  X          3; RC  {!A}
6  Y          4; RC  {!B}
7  !A ˇ∨ !B   1; RU  ∅
8  !A ˇ∨ !C   2; RU  ∅

We have Φ⊏8(Γ) = {{!B, !C}} since {!B, !C} ⊏ {!A}.

110/111

slide-111
SLIDE 111

Prioritized ALs: Meta-Theory

◮ very rich: similar to the standard format, e.g.
  ◮ soundness and completeness
  ◮ strong reassurance
  ◮ reflexivity
  ◮ cautious indifference
  ◮ fixed point
  ◮ if Γ is normal, then CnALm⊏ (Γ) = CnULL (Γ).

111/111