A Computational Logic Approach to The Belief-Bias Effect in Human Reasoning
Emmanuelle Dietz, International Center for Computational Logic, TU Dresden, Germany
Luís Moniz Pereira, Centro de Inteligência Artificial (CENTRIA), Universidade
The Belief-Bias Effect
The two minds hypothesis distinguishes between:
◮ the reflective mind, and ◮ the intuitive mind
This hypothesis is supported by showing the belief-bias effect (Evans [2012]):
◮ The belief-bias effect is the conflict between the reflective and intuitive minds when reasoning about problems involving real-world beliefs. It is the tendency to accept or reject arguments based on one's own beliefs or prior knowledge rather than on the reasoning process.
How to identify the belief-bias effect?
◮ Psychological studies on deductive reasoning demonstrate possibly conflicting processes at the logical and the psychological level.
The Syllogisms Task
Evans et al. [1983] carried out an experiment where participants were presented with different syllogisms for which they had to decide whether they were logically valid.
Type, example, and percentage of participants accepting the conclusion (Evans):
◮ Sdogs (valid and believable, 89%): No police dogs are vicious. Some highly trained dogs are vicious. Therefore, some highly trained dogs are not police dogs.
◮ Svit (valid and unbelievable, 56%): No nutritional things are inexpensive. Some vitamin tablets are inexpensive. Therefore, some vitamin tablets are not nutritional.
◮ Sadd (invalid and believable, 71%): No addictive things are inexpensive. Some cigarettes are inexpensive. Therefore, some addictive things are not cigarettes.
◮ Srich (invalid and unbelievable, 10%): No millionaires are hard workers. Some rich people are hard workers. Therefore, some millionaires are not rich people.
Using their reflective minds, people read the instructions and understand that they are required to reason logically from the premises to the conclusion. However, when they look at the conclusion, their intuitive minds deliver a strong tendency to say yes or no depending on whether it is believable.
Adequate Framework for Human Reasoning
How can we adequately formalize human reasoning in computational logic? Stenning and van Lambalgen (2008) propose a two-step process: human reasoning should be modeled by
- 1. reasoning towards an appropriate representation,
→ conceptual adequacy
- 2. reasoning with respect to this representation.
→ inferential adequacy
The adequacy of a computational logic approach that aims at representing human reasoning should be evaluated based on how humans actually reason.
State of the Art
- 1. Stenning and van Lambalgen (2008) formalize Byrne’s (1989) Suppression Task,
where people suppress valid inferences when additional arguments become available.
- 2. Kowalski (2011) models Wason’s (1968) Selection Task, showing that people have
a matching bias, the tendency to select explicitly named values in conditionals.
- 3. Hölldobler and Kencana Ramli (2009a) found some technical mistakes made by Stenning and van Lambalgen and propose to model human reasoning by
◮ logic programs ◮ under weak completion semantics ◮ based on the three-valued Łukasiewicz (1920) logic.
This approach seems to adequately model the Suppression and the Selection Task. Can we adequately model the syllogisms task, including the belief-bias effect, under weak completion semantics?
Weak Completion Semantics
First-Order Language
A (first-order) logic program P is a finite set of clauses of the form p(X) ← a1(X) ∧ · · · ∧ an(X) ∧ ¬b1(X) ∧ · · · ∧ ¬bm(X)
◮ X is a variable and p, a1, . . . , an and b1, . . . , bm are predicate symbols.
◮ p(X) is an atom and the head of the clause.
◮ a1(X) ∧ · · · ∧ an(X) ∧ ¬b1(X) ∧ · · · ∧ ¬bm(X) is a formula and the body of the clause.
◮ p(X) ← ⊤ and p(X) ← ⊥ denote positive and negative facts, respectively.
◮ Variables are written with upper case letters and constants with lower case ones.
◮ A ground formula is a formula that does not contain variables.
◮ A ground instance results from substituting all occurrences of each variable name by some constant of P.
◮ A ground program g P is comprised of all ground instances of its clauses.
◮ The set of all atoms in g P is denoted by atoms(P).
◮ An atom is undefined in g P if it is not the head of some clause in g P. The corresponding set of these atoms is denoted by undef(P).
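The grounding step above can be sketched in a few lines of Python. The representation is an assumption for illustration (not from the slides): an atom is a (predicate, term) pair, "X" is the variable, "#" marks the dummy term of the truth constants TOP/BOT, and a clause is (head, body) with the body a list of (atom, positive?) literals.

```python
# Minimal grounding sketch under an assumed clause representation.

def constants(program):
    """All constants occurring in the program."""
    return {term for head, body in program
            for _, term in [head] + [a for a, _ in body]
            if term not in ("X", "#")}

def ground(program):
    """Replace every occurrence of X by each constant of the program."""
    gp, consts = [], sorted(constants(program))
    for head, body in program:
        if head[1] != "X" and all(a[1] != "X" for a, _ in body):
            gp.append((head, body))  # clause already ground: keep as-is
            continue
        for c in consts:
            sub = lambda a, c=c: (a[0], c if a[1] == "X" else a[1])
            gp.append((sub(head), [(sub(a), s) for a, s in body]))
    return gp

# Premise 1 of Sdogs plus one fact about the constant a:
P = [(("police_dogs_prime", "X"),
      [(("vicious", "X"), True), (("ab1a", "X"), False)]),
     (("vicious", "a"), [(("TOP", "#"), True)])]
gP = ground(P)
```

With the single constant a, the clause schema is instantiated once and the fact is kept unchanged, so gP has exactly two ground clauses.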
The (Weak) Completion of a Logic Program
Consider the following transformation for a given P:
- 1. Replace all clauses in g P with the same head (ground atom) A ← body1, A ← body2, . . . by the single expression A ← body1 ∨ body2 ∨ · · · .
- 2. If A ∈ undef(g P) then add A ← ⊥.
- 3. Replace all occurrences of ← by ↔.
The resulting set of equivalences is called the completion of P (Clark [1978]). If Step 2 is omitted, then the resulting set is called the weak completion of P (wc P) (Hölldobler and Kencana Ramli [2009a,b]).
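The three steps can be sketched concretely. The representation below is an assumption for illustration: a ground program is a dict mapping each head atom to the list of its clause bodies, and a body is a list of literals ("a", "not a") or a truth constant "TOP"/"BOT"; reading the result as atom ↔ body1 ∨ body2 ∨ · · · performs Step 3.

```python
# Completion vs. weak completion: Step 2 is skipped when weak=True.

def completion(program, all_atoms, weak=True):
    comp = {h: list(bodies) for h, bodies in program.items()}  # Step 1
    if not weak:                        # Step 2, omitted for weak completion
        for atom in all_atoms:
            comp.setdefault(atom, [["BOT"]])
    return comp                         # Step 3: read "<-" as "<->"

P = {"p": [["q"], ["not r"]]}           # p <- q and p <- not r
wcP = completion(P, {"p", "q", "r"}, weak=True)
cP = completion(P, {"p", "q", "r"}, weak=False)
```

The difference shows on the undefined atoms q and r: the Clark completion forces them to be equivalent to ⊥, while the weak completion leaves them out, so they stay unknown.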
Three-Valued Lukasiewicz Logic
¬
⊤ | ⊥
U | U
⊥ | ⊤

∧  | ⊤ U ⊥        ∨  | ⊤ U ⊥
⊤  | ⊤ U ⊥        ⊤  | ⊤ ⊤ ⊤
U  | U U ⊥        U  | ⊤ U U
⊥  | ⊥ ⊥ ⊥        ⊥  | ⊤ U ⊥

←L | ⊤ U ⊥        ↔L | ⊤ U ⊥
⊤  | ⊤ ⊤ ⊤        ⊤  | ⊤ U ⊥
U  | U ⊤ ⊤        U  | U ⊤ U
⊥  | ⊥ U ⊤        ⊥  | ⊥ U ⊤

Table: ⊤, ⊥, and U denote true, false, and unknown, respectively.
An interpretation I of P is a mapping of the Herbrand base BP to {⊤, ⊥, U} and is represented by a unique pair ⟨I⊤, I⊥⟩, where I⊤ = {x ∈ BP | x is mapped to ⊤} and I⊥ = {x ∈ BP | x is mapped to ⊥}.
◮ For every I it holds that I⊤ ∩ I⊥ = ∅.
◮ A model of a formula F is an interpretation I such that F is true under I.
◮ A model of P is an interpretation that is a model of each clause in P.
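The truth tables above can be encoded numerically with ⊤ = 1.0, U = 0.5, ⊥ = 0.0. This arithmetic encoding is a standard device for Łukasiewicz logic, not something prescribed by the slides:

```python
# Three-valued Lukasiewicz connectives as arithmetic on {1.0, 0.5, 0.0}.

T, U, F = 1.0, 0.5, 0.0

def neg(a):      return 1.0 - a
def conj(a, b):  return min(a, b)
def disj(a, b):  return max(a, b)
def impl(a, b):  return min(1.0, 1.0 - a + b)   # a ->L b, i.e. b <-L a
def equiv(a, b): return 1.0 - abs(a - b)        # <->L
```

Note that impl(U, U) and equiv(U, U) both yield ⊤, which is the property that distinguishes Łukasiewicz implication and equivalence from their (strong) Kleene counterparts.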
Reasoning in an Appropriate Logical Form
Positive Encoding for Negative Conclusions
Premise 1 in our first case, No police dogs are vicious, is equivalent to: There does not exist an X such that police dogs(X) and vicious(X). We can write it as: For all X, if police dogs(X) and not abnormal, then not vicious(X). By default, nothing is abnormal. We use abnormality predicates to implement conditionals by licenses for implications (Stenning and van Lambalgen [2008]).
Problem: we only consider logic programs that allow positive heads in the clauses. We introduce p′(X) and p′(X) ← ¬p(X) for every negative conclusion ¬p(X):1
Premise 1: police dogs′(X) ← vicious(X) ∧ ¬ab1a(X), police dogs(X) ← ¬police dogs′(X), ab1a(X) ← ⊥.
1More generally, we need an appropriate dual-program-like transformation when there are several positive rules.
Commonsense Implications within the four Syllogisms
Syllogism and commonsense implication:
◮ Sdogs: We generally assume that police dogs are highly trained.
◮ Svit: The purpose of vitamin tablets is to aid nutrition.
◮ Sadd: We know that cigarettes are addictive.
◮ Srich: By definition, millionaires are rich.
The second premise in each case contains some background knowledge which might provide the motivation on whether to validate the syllogism.
Modeling Syllogism Sdogs: valid argument, believable conclusion
Premise 1: No police dogs are vicious.
Premise 2: Some highly trained dogs are vicious.
Conclusion: Therefore, some highly trained dogs are not police dogs.
Premise 2 states facts about, let's say, some a. The program for Sdogs is:
Pdogs = {police dogs′(X) ← vicious(X) ∧ ¬ab1a(X), police dogs(X) ← ¬police dogs′(X), ab1a(X) ← ⊥}
∪ {highly trained(a) ← ⊤, vicious(a) ← ⊤}
∪ {highly trained(X) ← police dogs(X) ∧ ¬ab1b(X), ab1b(X) ← ⊥}
The corresponding weak completion is:
wc g Pdogs = {police dogs′(a) ↔ vicious(a) ∧ ¬ab1a(a), police dogs(a) ↔ ¬police dogs′(a), ab1a(a) ↔ ⊥, highly trained(a) ↔ ⊤ ∨ (police dogs(a) ∧ ¬ab1b(a)), vicious(a) ↔ ⊤, ab1b(a) ↔ ⊥}
How do we compute the intended model?
Reasoning with Respect to this Representation
Reasoning with Respect to Least Models
Hölldobler and Kencana Ramli propose to compute the least model of the weak completion of P (lmŁ wc P), which is identical to the least fixed point of ΦP, an operator defined by Stenning and van Lambalgen [2008].
Let I be an interpretation in ΦP(I) = ⟨J⊤, J⊥⟩, where
J⊤ = {A | there exists A ← body ∈ P with I(body) = ⊤},
J⊥ = {A | there exists A ← body ∈ P and for all A ← body ∈ P we find I(body) = ⊥}.
Hölldobler and Kencana Ramli showed that the model intersection property holds for weakly completed programs. This guarantees the existence of least models for every P.
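The operator and its least fixed point can be sketched directly. The encoding is assumed for illustration: atoms are strings, bodies are lists of literals ("a"/"not a") or "TOP"/"BOT", a program maps each head to its list of bodies, and an interpretation is a pair (true_atoms, false_atoms):

```python
# Phi_P and its least fixed point, iterated from the empty interpretation.

def value(lit, true, false):
    if lit == "TOP": return 1.0
    if lit == "BOT": return 0.0
    negated = lit.startswith("not ")
    atom = lit[4:] if negated else lit
    v = 1.0 if atom in true else 0.0 if atom in false else 0.5
    return 1.0 - v if negated else v

def phi(program, interp):
    true, false = interp
    body = lambda b: min(value(l, true, false) for l in b)
    jt = {h for h, bs in program.items() if any(body(b) == 1.0 for b in bs)}
    jf = {h for h, bs in program.items() if all(body(b) == 0.0 for b in bs)}
    return jt, jf

def least_model(program):
    interp = (set(), set())
    while (nxt := phi(program, interp)) != interp:
        interp = nxt
    return interp

# the ground program for Sdogs (police dogs' written as police_dogs_p):
P = {"police_dogs_p(a)":  [["vicious(a)", "not ab1a(a)"]],
     "police_dogs(a)":    [["not police_dogs_p(a)"]],
     "ab1a(a)":           [["BOT"]],
     "ab1b(a)":           [["BOT"]],
     "vicious(a)":        [["TOP"]],
     "highly_trained(a)": [["TOP"], ["police_dogs(a)", "not ab1b(a)"]]}
model = least_model(P)
```

Run on this P, the fixed point makes highly_trained(a), vicious(a) and police_dogs_p(a) true and police_dogs(a) false, reproducing the computation of lmŁ wc g Pdogs: a is a highly trained dog that is not a police dog.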
Computing the Least Model for Pdogs
wc g Pdogs = {police dogs′(a) ↔ vicious(a) ∧ ¬ab1a(a), police dogs(a) ↔ ¬police dogs′(a), ab1a(a) ↔ ⊥, highly trained(a) ↔ ⊤ ∨ (police dogs(a) ∧ ¬ab1b(a)), vicious(a) ↔ ⊤, ab1b(a) ↔ ⊥}
Let us start with the interpretation I0 = ⟨∅, ∅⟩:
I1 = Φg Pdogs(I0) = ⟨{vicious(a), highly trained(a)}, {ab1a(a), ab1b(a)}⟩
I2 = Φg Pdogs(I1) = ⟨{vicious(a), highly trained(a), police dogs′(a)}, {ab1a(a), ab1b(a)}⟩
I3 = Φg Pdogs(I2) = ⟨{vicious(a), highly trained(a), police dogs′(a)}, {ab1a(a), ab1b(a), police dogs(a)}⟩
I4 = Φg Pdogs(I3) = I3, so we have reached the least fixed point, which is lmŁ wc g Pdogs.
This model confirms Sdogs, because a is a highly trained dog and not a police dog.
Modeling Syllogism Svit: valid argument, unbelievable conclusion
Premise 1: No nutritional things are inexpensive.
Premise 2: Some vitamin tablets are inexpensive.
Conclusion: Therefore, some vitamin tablets are not nutritional.
Premise 2 states facts about, let's say, some a. The program for Svit is:
Pvit = {nutritional′(X) ← inexp(X) ∧ ¬ab2a(X), nutritional(X) ← ¬nutritional′(X), ab2a(X) ← ⊥}
∪ {vitamin(a) ← ⊤, inexp(a) ← ⊤}
∪ {ab2a(X) ← vitamin(X), inexp(X) ← vitamin(X) ∧ ¬ab2b(X), ab2b(X) ← ⊥}
The least model of the weak completion of g Pvit is:
lmŁ wc g Pvit = ⟨{vitamin(a), inexp(a), nutritional(a), ab2a(a)}, {nutritional′(a), ab2b(a)}⟩
This model does not validate Svit and this confirms how the participants responded.
Modeling Syllogism Sadd: invalid argument, believable conclusion (1)
Premise 1: No addictive things are inexpensive.
Premise 2: Some cigarettes are inexpensive.
Conclusion: Therefore, some addictive things are not cigarettes.
Premise 2 states facts about, let's say, some a. The program for Sadd is:
Padd = {addictive′(X) ← inexp(X) ∧ ¬ab3a(X), addictive(X) ← ¬addictive′(X), ab3a(X) ← ⊥}
∪ {cigarettes(a) ← ⊤, inexp(a) ← ⊤}
∪ {ab3a(X) ← cigarettes(X), inexp(X) ← cigarettes(X) ∧ ¬ab3b(X), ab3b(X) ← ⊥}
The least model of the weak completion of g Padd is:
lmŁ wc g Padd = ⟨{cigarettes(a), inexp(a), addictive(a), ab3a(a)}, {addictive′(a), ab3b(a)}⟩
The conclusion of Sadd states something that obviously cannot be about a.
Abductive Reasoning
Definition (1)
Let ⟨P, AP, |=lmwc⟩ be an abductive framework, where:
◮ P is a knowledge base,
◮ AP is a set of abducibles consisting of the (positive and negative) facts for each undefined atom in P,
◮ |=lmwc is the logical consequence relation where P |=lmwc F if and only if lmŁ wc P(F) = ⊤ for a formula F.
Let O be an observation and E be an explanation, where O and E are consistent:
◮ O is explained by E given AP and P iff P ∪ E |=lmwc O, where P ∪ E is consistent, P ⊭lmwc O, and E ⊆ AP.
We distinguish between two forms:
◮ F follows skeptically from P and O iff O can be explained, and for all minimal explanations E it holds that P ∪ E |=lmwc F,
◮ F follows credulously from P and O iff there exists a minimal explanation E such that P ∪ E |=lmwc F.
In the following we apply abduction with respect to credulous reasoning: to validate Sadd we need one model which contains something addictive that is not a cigarette.
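Credulous abduction for Sadd can be sketched by trying each candidate fact for the one undefined atom and checking whether the observation becomes true in the least model. The machinery and encoding are the same assumed ones as before (string atoms, bodies as literal lists, "TOP"/"BOT" constants); this is an illustration, not the authors' implementation:

```python
# Enumerate candidate explanations for O_add = addictive(b) in P_add.

def value(lit, true, false):
    if lit == "TOP": return 1.0
    if lit == "BOT": return 0.0
    negated = lit.startswith("not ")
    atom = lit[4:] if negated else lit
    v = 1.0 if atom in true else 0.0 if atom in false else 0.5
    return 1.0 - v if negated else v

def least_model(program):
    true, false = set(), set()
    while True:
        body = lambda b: min(value(l, true, false) for l in b)
        jt = {h for h, bs in program.items() if any(body(b) == 1.0 for b in bs)}
        jf = {h for h, bs in program.items() if all(body(b) == 0.0 for b in bs)}
        if (jt, jf) == (true, false):
            return true, false
        true, false = jt, jf

# ground P_add for the fresh individual b (addictive' as addictive_p):
P_add = {"addictive_p(b)": [["inexp(b)", "not ab3a(b)"]],
         "addictive(b)":   [["not addictive_p(b)"]],
         "ab3a(b)":        [["BOT"], ["cigarettes(b)"]],
         "ab3b(b)":        [["BOT"]],
         "inexp(b)":       [["cigarettes(b)", "not ab3b(b)"]]}

def explains(value_const):
    """Add cigarettes(b) <- TOP or <- BOT and test whether addictive(b) holds."""
    prog = dict(P_add)
    prog["cigarettes(b)"] = [[value_const]]
    t, f = least_model(prog)
    return "addictive(b)" in t, t, f

ok_cig, t_cig, f_cig = explains("TOP")   # E_cig
ok_not, t_not, f_not = explains("BOT")   # E_not_cig
```

Both candidate facts explain the observation; under E¬cig the least model additionally makes cigarettes(b) and inexp(b) false, which is exactly the credulous conclusion that some b is addictive but not a cigarette.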
Modeling Syllogism Sadd: invalid argument, believable conclusion (2)
We observe for something else, e.g. for b, that it is addictive: Oadd = {addictive(b) ← ⊤}. The set of abducibles for the given g (Padd ∪ Oadd) is:
A g(Padd ∪ Oadd) = {cigarettes(b) ← ⊤, cigarettes(b) ← ⊥}
We have the following two minimal explanations for Oadd:
Ecig = {cigarettes(b) ← ⊤} and E¬cig = {cigarettes(b) ← ⊥}
The least models of the weak completion of Padd together with each explanation are:
lmŁ wc g (Padd ∪ E¬cig) = ⟨{addictive(b)}, {addictive′(b), cigarettes(b), ab3a(b), inexpensive(b), ab3b(b)}⟩
lmŁ wc g (Padd ∪ Ecig) = ⟨{addictive(b), ab3a(b), cigarettes(b), inexpensive(b)}, {addictive′(b), ab3b(b)}⟩
Credulously, we conclude there exists some b that is addictive but not a cigarette.
Modeling Syllogism Srich: invalid argument, unbelievable conclusion (1)
Premise 1: No millionaires are hard workers.
Premise 2: Some rich people are hard workers.
Conclusion: Therefore, some millionaires are not rich people.
The program for Srich is:
Prich = {millionaire′(X) ← hard worker(X) ∧ ¬ab4a(X), millionaire(X) ← ¬millionaire′(X), ab4a(X) ← ⊥}
∪ {rich(a) ← ⊤, hard worker(a) ← ⊤}
∪ {rich(X) ← millionaire(X) ∧ ¬ab4b(X), ab4b(X) ← ⊥}
Again, what is talked about in Premise 2 cannot be the same as in the Conclusion.
Modeling Syllogism Srich: invalid argument, unbelievable conclusion (2)
We observe for something else, e.g. for b, that it is a millionaire: Omil = {millionaire(b) ← ⊤}. For g (Prich ∪ Omil) we have the following set of abducibles:
A g(Prich ∪ Omil) = {hard worker(b) ← ⊤, hard worker(b) ← ⊥}
The least model of the weak completion of Prich together with E¬hard worker is:
lmŁ wc g (Prich ∪ E¬hard worker) = ⟨{millionaire(b), rich(b)}, {ab4a(b), ab4b(b), millionaire′(b), hard worker(b)}⟩
This model does not validate Srich, and this confirms how the participants responded.
Different kinds of Abnormalities: the Suppression and the Cigarette Task
If she has an essay to finish, then she goes to the library. If the library is open, then she goes to the library. She has an essay to finish.
Pe+Add = {l ← e ∧ ¬ab1, ab1 ← ¬o, l ← o ∧ ¬ab2, ab2 ← ¬e, e ← ⊤}
lmŁ wc Pe+Add = ⟨{e}, {ab2}⟩
We cannot conclude whether she goes to the library.
No addictive things are inexpensive. Some cigarettes are inexpensive. There are some addictive things. (Oadd = {addictive(b) ← ⊤})
We have two explanations for Oadd which have equal priority:
Ecig = {cigarettes(b) ← ⊤} and E¬cig = {cigarettes(b) ← ⊥}
However, instead of assuming that b is a cigarette, we would prefer to conclude first that b is not inexpensive, unless we have more information about b.
Abductive Reasoning with Contextual Side-Effects
Contextual Side Effects
We define two forms of contextual side effects:
Definition (2)
Given a background knowledge P, two observations O1 and O2, and two explanations E1 and E2:
◮ O2 is a necessary contextual side-effect of explained O1 iff for every explanation E1 such that O1 is explained by E1 given P, there exists E2 such that O2 is explained by E2 given P, and E2 contains E1.
◮ O2 is a strict necessary contextual side-effect of explained O1 iff additionally E2 = E1.
Definition (3)
Given a background knowledge P, two observations O1 and O2, and two explanations E1 and E2:
◮ O2 is a possible contextual side-effect of explained O1 iff there exist explanations E1 and E2 such that O1 is explained by E1 given P, O2 is explained by E2 given P, and E2 contains E1.
◮ O2 is a strict possible contextual side-effect of explained O1 iff additionally E2 = E1.
Definitions (2) and (3) correspond to skeptical and credulous reasoning, respectively.
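Definitions (2) and (3) reduce to simple quantifier patterns over sets of explanations, which can be sketched directly. The encoding is hypothetical: an explanation is a frozenset of abduced facts, and expl[o] lists all minimal explanations of observation o given the background knowledge:

```python
# Necessary vs. possible contextual side-effects as set containment checks.

def necessary_side_effect(expl, o1, o2):
    """Def. (2): every E1 for o1 is contained in some E2 for o2."""
    return all(any(e1 <= e2 for e2 in expl[o2]) for e1 in expl[o1])

def possible_side_effect(expl, o1, o2):
    """Def. (3): some E1 for o1 is contained in some E2 for o2."""
    return any(any(e1 <= e2 for e2 in expl[o2]) for e1 in expl[o1])

# The S_add situation: O_inexp is only explained by cigarettes(b) <- TOP,
# while O_add has two minimal explanations.
expl = {"O_inexp": [frozenset({"cig(b)<-TOP"})],
        "O_add":   [frozenset({"cig(b)<-TOP"}), frozenset({"cig(b)<-BOT"})]}
```

On this data, O_add comes out as a necessary side-effect of explained O_inexp, while O_inexp is only a possible side-effect of explained O_add, matching the discussion on the Sadd slide.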
Contextual Side-Effects: Sadd
No addictive things are inexpensive. Some cigarettes are inexpensive. b is addictive. (Oadd = {addictive(b) ← ⊤})
Assume we additionally observe Oinexp = {inexp(b) ← ⊤}. We can only explain it by:
Ecig = {cigarettes(b) ← ⊤}
Oadd is a strict necessary contextual side-effect of explained Oinexp, given Padd. But Oinexp is a strict possible contextual side-effect of explained Oadd, given Padd, as well. We did not specify which one is the side-effect resulting from the primary explanation!
Inspection Points
Pereira and Pinto [2011] distinguish between occurrences of abducibles which allow them to be produced and those which may only be consumed, if produced elsewhere.2
First, we introduce a reserved (meta-)predicate for every abducible ground atom:
UP = {ud(A) ← ⊤ | A ∈ undef(P)} ∪ {ud(A) ← ⊥ | A ∈ undef(P)}
For every (abducible) atom A we introduce the two following clauses:
inspect(A) ← A ∧ ¬ud(A)
inspect¬(A) ← ¬A ∧ ¬ud(A)
The program containing all inspect and inspect¬ clauses for every such atom in P is:
IP = {inspect(A) ← A ∧ ¬ud(A), inspect¬(A) ← ¬A ∧ ¬ud(A) | A ∈ atoms(P)}
2More generally, certain occurrences may only be used in the abductive context of some other ones.
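The construction of UP and IP is purely syntactic, so it can be sketched as clause generation over a plain string encoding (the encoding is assumed for illustration, not the authors' code):

```python
# Generate the U_P facts and the I_P inspect / inspect-not clauses.

def inspection_program(atoms, undefined):
    u_p = [f"ud({a}) <- TOP" for a in undefined] + \
          [f"ud({a}) <- BOT" for a in undefined]
    i_p = []
    for a in atoms:
        i_p.append(f"inspect({a}) <- {a} and not ud({a})")
        i_p.append(f"inspect_not({a}) <- not {a} and not ud({a})")
    return u_p, i_p

u_p, i_p = inspection_program(atoms=["cigarettes(b)", "addictive(b)"],
                              undefined=["cigarettes(b)"])
```

For each undefined atom we obtain both ud-facts, and for each atom the inspect / inspect¬ pair; the resulting clauses are then added to the original program as on the next slides.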
Modeling Syllogism Sadd with Inspect
Let us consider Sadd again: we prefer to conclude that cigarettes(X) is false for any variable if not stated otherwise. We modify our original program accordingly:
Pinsp_add = UPadd ∪ IPadd ∪ Padd \ {ab3a(X) ← cigarettes(X)} ∪ {ab3a(X) ← inspect(cigarettes(X))}
If we only observe Oadd, we cannot abduce Ecig but only E¬cig, and b is not inexpensive! However, if we additionally observe Oinexp = {inexpensive(b) ← ⊤}, Ecig can be abduced by the following clause in Pinsp_add:
{. . . , inexpensive(b) ← cigarettes(b) ∧ ¬ab3b(b), . . . }
Now, b is addictive, inexpensive and a cigarette! Oadd is a strict necessary contextual side-effect of explained Oinexp given Pinsp_add.
Conclusion and Future Work
Weak completion semantics: We have examined how syllogisms and the belief-bias effect can be modeled by using abnormality predicates and abduction.
Inspection points: We propose to introduce inspect predicates to deal with various kinds of abnormalities in human reasoning, which allow different treatments of abducibles within conditionals.
Future work: We intend to investigate other applications within human reasoning studies that might confirm or refine our definitions. We are currently working on extensions for:
- 1. Contextual Relevant Consequences
- 2. Contestable Contextual Side-effects
- 3. Contextual Counterfactual Side-effects
Thank you very much for your attention!
References
- R. M. J. Byrne. Suppressing valid inferences with conditionals. Cognition, 31:61–83, 1989.
Keith L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Data Bases, volume 1, pages 293–322. Plenum Press, New York, NY, 1978.
J. St. B. T. Evans. Biases in deductive reasoning. In R. F. Pohl, editor, Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Psychology Press, 2012.
J. St. B. T. Evans, Julie L. Barston, and Paul Pollard. On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition, 11(3):295–306, 1983. ISSN 0090-502X.
Steffen Hölldobler and Carroline Dewi Kencana Ramli. Logic programs under three-valued Łukasiewicz semantics. In Patricia M. Hill and David Scott Warren, editors, Logic Programming, 25th International Conference, ICLP 2009, volume 5649 of Lecture Notes in Computer Science, pages 464–478, Heidelberg, 2009a. Springer.
Steffen Hölldobler and Carroline Dewi Kencana Ramli. Logics and networks for human reasoning. In Cesare Alippi, Marios M. Polycarpou, Christos G. Panayiotou, and Georgios Ellinas, editors, International Conference on Artificial Neural Networks, ICANN 2009, Part II, volume 5769 of Lecture Notes in Computer Science, pages 85–94, Heidelberg, 2009b. Springer.
Robert Kowalski. Computational Logic and Human Thinking: How to be Artificially Intelligent. Cambridge University Press, Cambridge, 2011.
Jan Łukasiewicz. O logice trójwartościowej. Ruch Filozoficzny, 5:169–171, 1920. English translation: On three-valued logic. In: Łukasiewicz J. and Borkowski L. (ed.). (1990). Selected Works, Amsterdam: North Holland, pp. 87–88.
Luís Moniz Pereira and Alexandre Miguel Pinto. Inspecting side-effects of abduction in logic programs. In M. Balduccini and Tran Cao Son, editors, Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning: Essays in Honour of Michael Gelfond, volume 6565 of LNAI, pages 148–163. Springer, 2011.
Keith Stenning and Michiel van Lambalgen. Human Reasoning and Cognitive Science. A Bradford Book. MIT Press, Cambridge, MA, 2008. ISBN 9780262195836.
P. Wason. Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3):273–281, 1968.
Contextual Side-effects: Extended Syllogism Sill
Let us extend Sadd by the following conditional: If something is sold in the streets and is not a cigarette, then it is illegal. Accordingly, the new program is:
Pill = Padd ∪ {illegal(X) ← streets(X) ∧ ¬cig(X)}
The minimal explanation for Oill = {illegal(b) ← ⊤} is Estreets,¬cig = {streets(b) ← ⊤, cig(b) ← ⊥}.
Oill is a possible contextual side-effect of explained Oadd given Pill. Let us simplify Pill as follows:
Psim_ill = Padd ∪ {illegal(X) ← ¬cig(X)}
The explanation for Oill is then the same as for Oadd! illegal(b) is a strict possible contextual side-effect of explained Oadd, and vice versa!
We need to specify which observation can be abduced and which is a contextual side effect!
Modeling Extended Syllogism Sill with Inspect
We modify and extend our program accordingly:
Pinsp_ill = UPill ∪ IPill ∪ Pill \ {illegal(X) ← streets(X) ∧ ¬cig(X)} ∪ {illegal(X) ← streets(X) ∧ inspect¬(cig(X))}
where, as previously defined for IPill: inspect¬(cig(X)) ← ¬cig(X) ∧ ¬ud(cig(X))
We can only conclude that something is illegal when we have already abduced somewhere else that it is not a cigarette. Assume Oadd is explained by E¬cig:
lmŁ wc g (Pinsp_ill ∪ E¬cig ∪ Estreets) = ⟨{add(b), streets(b), illegal(b), inspect¬(cig(b))}, {add′(b), cig(b), ab1(b), inex(b), ab2(b)}⟩
Note that without E¬cig for Oadd we could not have concluded illegal(b).
Contextual Relevant Consequences
Definition (4)
Given a background knowledge P, two observations O1 and O2, and two explanations E1 and E2:
◮ O2 is a necessary contextual relevant consequence of explained O1 iff for every E2 such that O2 is explained by E2 given P, there exists an E1 consistent with E2 such that O1 is explained by E1 given P, and the intersection of E1 and E2 is nonempty.
Let us ensure that something is dangerous before we conclude that it is addictive:
Pdan = Pill \ {add(X) ← ¬add′(X)} ∪ {add(X) ← ¬add′(X) ∧ dangerous(X)}
Given Oadd, we have two minimal explanations:
Ecig,dan = {cig(b) ← ⊤, dangerous(b) ← ⊤} and E¬cig,dan = {cig(b) ← ⊥, dangerous(b) ← ⊤}
Oill = {illegal(b) ← ⊤} is a contextual relevant consequence of explained Oadd. Note again that the original observation and the side-effect are interchangeable.
Modeling extended Syllogism, Pdan with Inspect
Again, we intend that Oill is a contextual relevant consequence of explained Oadd:
Pinsp_dan = UPdan ∪ IPdan ∪ Pdan \ {illegal(X) ← streets(X) ∧ ¬cig(X)} ∪ {illegal(X) ← streets(X) ∧ inspect¬(cig(X))}
The rule inspect¬(cig(X)) ← ¬cig(X) ∧ ¬ud(cig(X)) stays the same, and it is easy to see that the outcome is similar to what was just demonstrated: illegal(b) can only be concluded when E¬cig is abduced elsewhere, e.g. for Oadd.
Contestable Contextual Side-effects - Abductive Explanation Contesting
Definition (5)
Given a background knowledge P, two observations O1 and O2, and two explanations E1 and E2:
◮ O2 is a necessarily contested contextual side-effect of explained O1 iff for every E1 such that O1 is explained by E1 given P, there exists E2 such that ¬O2 is explained by E2 given P, and E1 contains E2.
◮ O2 is a possibly contested contextual side-effect of explained O1 iff there exists an E1 such that O1 is explained by E1 given P, and there exists E2 such that ¬O2 is explained by E2 given P, and E1 contains E2.
Let us define the program Pleg dualistic to Psim_ill:
Pleg = Padd ∪ {leg(X) ← cig(X)}
We originally observe Oadd, possibly explained by E¬cig:
lmŁ wc g (Pleg ∪ E¬cig) = ⟨{add(b)}, {add′(b), cig(b), inex(b), leg(b), . . . }⟩
¬leg(b) is a possibly contested contextual side-effect of explained Oadd, and vice versa.
Extended Syllogism Pleg with Inspect
Pinsp_leg = UPleg ∪ IPleg ∪ Pleg \ {leg(X) ← cig(X)} ∪ {leg(X) ← inspect(cig(X))}
With the modified clause {leg(X) ← inspect(cig(X))}, we only conclude that something is legal if cig(X) is used to explain something else. Given E¬cig, ¬leg(b) is a possibly contested contextual side-effect of explained Oadd.
Contestable Contextual Side-effects - Abductive Rebuttal (1)
Another variation of contestable side-effects is abductive rebuttal: the side-effect directly contradicts an observation, that is, O2 ≡ ¬O1 in Definition (5). Let us extend Padd by a conditional obviously contradicting the one in Padd: if something is a cigarette, then it is not inexpensive.
Pinex = Padd ∪ {inex′(X) ← cig(X), inex(X) ← ¬inex′(X)}
We observe Oinex and can explain it either by E¬cig or by Ecig. Ecig explains the observation Oinex′ as well, and obviously Oinex and Oinex′ are dualistic.
Contestable Contextual Side-effects - Abductive Rebuttal (2)
The weak completion of g (Pinex ∪ Ecig) is:
wc g (Pinex ∪ Ecig) = {add′(b) ↔ inex(b) ∧ ¬ab1(b), add(b) ↔ ¬add′(b), inex(b) ↔ (cig(b) ∧ ¬ab2(b)) ∨ ¬inex′(b), ab2(b) ↔ ⊥, ab1(b) ↔ ⊥ ∨ cig(b), inex′(b) ↔ cig(b)} ∪ {cig(b) ↔ ⊤}
and the corresponding least model is:
lmŁ wc g (Pinex ∪ Ecig) = ⟨{add(b), ab1(b), inex(b), inex′(b), cig(b)}, {add′(b), ab2(b)}⟩