Logic or probability? An ERP study of defeasible reasoning
Michiel van Lambalgen ILLC/Dept of Philosophy University of Amsterdam
As a modelling tool in cognitive science, logic is on the back foot:

'[M]uch of our reasoning with conditionals is uncertain, and may be [...]. [L]ogic-based approaches to inference are typically monotonic, and hence are unable to deal with this uncertainty. Moreover, to the extent that formal logical approaches embrace non-monotonicity, they appear to be unable to cope with the fact that it is the content of the rules, rather than their logical form, which appears to determine the inferences that people draw. [... By capturing] people's knowledge by probability theory, we may more adequately capture the nature of everyday human inference. This seems to make intuitive sense, because the problems that we have identified concern how uncertainty is handled in human inference, and probability is the calculus of uncertainty.' (Oaksford & Chater, Bayesian Rationality, OUP 2007)
The task which occasioned these remarks: the suppression effect (Byrne (1989))
(1) If Marian has an essay, she studies late in the library.
(2) Marian has an essay.
(a) Does Marian study late in the library?
(3) If the library is open, Marian studies late in the library.
(b) Does Marian study late in the library?

The percentage of 'yes' responses to (a) is around 90%; for (b), given all three premisses, it is around 60% -- one says that 'MP is suppressed'. Some argue that subjects therefore do not reason 'logically', although it is safer to say they do not use a monotonic logic. The supposed inability of 'logic' to handle this phenomenon has given a boost to probabilistic analyses, in which the conditional is represented by a conditional probability.
The question addressed here: is reasoning in the suppression task Bayesian or closed world? We investigate this by means of an EEG study.
Formalisation in logic programming with the Closed World Assumption (CWA):
- A conditional is represented as a clause p1 ∧ … ∧ pn → q. Collect all clauses with consequent q, such that no other clauses have the same consequent q.
- The CWA then says: q holds only if the body of one of these clauses holds (unless q is released from the closed world assumption).
- For a clause with a negative literal in its body, ¬s ∧ p1 ∧ … ∧ pn → q, first do the preceding for the clauses α → s (where we may have α = ⊥) and replace s by its definition.
- The definition of q is then given by ⋁i (pi1 ∧ … ∧ pin) ↔ q [where the disjunction is taken over all such clauses]: the completion.
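The completion step can be sketched in code. The encoding below is my own minimal illustration, not the formalism from the slides: clauses are (body, head) pairs, and an atom with no defining clause defaults to false under the CWA; the helper names (`completion`, `holds`) are invented for the example.

```python
def completion(clauses):
    """clauses: list of (body, head), where body is a list of
    (atom, polarity) literals. Collect all bodies per head."""
    defs = {}
    for body, head in clauses:
        defs.setdefault(head, []).append(body)
    return defs

def holds(defs, atom, assignment):
    """Evaluate atom under the completion. Atoms without any
    defining clause fall back to the assignment, defaulting to
    False (closed world)."""
    if atom not in defs:
        return assignment.get(atom, False)   # CWA: default false
    # q holds iff some clause body for q holds
    return any(all(holds(defs, a, assignment) == pol for a, pol in body)
               for body in defs[atom])

# Program: the fact p, and the clause p ∧ ¬ab → q (no clause for ab).
program = [([], 'p'), ([('p', True), ('ab', False)], 'q')]
defs = completion(program)
print(holds(defs, 'q', {}))   # True: ab is false by CWA, so q follows
```

Adding a clause linking an abnormality to a given fact (say r → ab) blocks the conclusion again once r is true, which is the mechanism exploited below.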
Logical analysis of the suppression task:
- The conditional (1) is read as 'if A and nothing abnormal is the case, then E': A ∧ ¬ab → E.
- The CWA is applied to the abnormality: we reason as if we know what abnormalities there are.
- If nothing points to an abnormality, the completion yields ¬ab and we can draw the modus ponens conclusion E.
- The second conditional (3) makes an abnormality salient ['the library is not open']: ¬C → ab; but no other abnormalities are given, so the completion yields ab ↔ ¬C, and E no longer follows unless C is known.
- Prediction: after a premiss like (3), subjects endorse MP and MT significantly less often; prediction verified in Pijnacker et al., Neuropsychologia (2009).
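The abnormality analysis can be given a tiny executable sketch. The three-valued encoding below (True, False, None for 'undecided') is my own, loosely in the spirit of Kleene's strong tables, and not code from the study:

```python
# Read the conditional as  A ∧ ¬ab → E,  with the CWA closing 'ab'.

def neg(v):
    """Strong Kleene negation: None (undecided) stays undecided."""
    return None if v is None else not v

def conj(a, b):
    """Strong Kleene conjunction."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def mp(a, ab):
    """Modus ponens on  A ∧ ¬ab → E:  value of E given A and ab."""
    return conj(a, neg(ab))

# Without the second conditional: the CWA sets ab = False.
print(mp(True, False))        # True: MP endorsed

# With 'if the library is open ...': completion gives ab ↔ ¬C,
# and C (library open) is unknown, so ab is undecided.
c = None
print(mp(True, neg(c)))       # None: MP suppressed
```

The point of the sketch: suppression falls out of closing the world over an unknown abnormality, with no change to the underlying inference rule.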
The Bayesian alternative:
- Probability is defined on the algebra of events given by classical propositional logic, satisfying the usual requirements.
- Conditionalisation on A is defined only if P(A) > 0.
- Bayesian conditionalisation: if E summarises all our evidence and E is learned, the new probability of S is the a priori conditional probability Pi(S|E) ['probabilistic modus ponens', but controversial].
- The conditional 'if E then S' is represented as a conditional probability P(S|E).
- The algebra of events is generated by the variables of (possible) interest [recall: probability is not truth functional].
- When that set of variables expands, one must find (rational) principles which govern the transfer of a probability from a set of events to an expansion of that set.
Probabilistic analysis of the suppression task:
- Initially the conditional (1) corresponds to a high conditional probability Pi(E|A) ≈ 1, and MP is endorsed.
- After the second conditional (3), the final probability satisfies Pf(E|A) << Pi(E|A), and MP is suppressed.
- On strict Bayesianism the prior is defined on all events; so we may assume there is a prior probability for the library being closed (C). Then
  Pi(E|A) = Pi(E|CA) Pi(C|A) + Pi(E|¬CA) Pi(¬C|A) = Pi(E|CA) Pi(C) + Pi(E|¬CA) Pi(¬C) [assuming independence of C and A].
- Since Pi(E|CA) ≈ 0, Pi(C) must be low to get high Pi(E|A); the suggestion is that the fact that C becomes salient increases its probability, so that Pf(E|A) drops.
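The suppression of MP can be reproduced numerically. The probabilities below are illustrative assumptions of mine, not values from any experiment:

```python
# A = essay, E = studies late in the library, C = library closed.
# P(E|A) = P(E|C,A) P(C) + P(E|¬C,A) P(¬C), with C independent of A.

def p_E_given_A(p_C, p_E_closed=0.0, p_E_open=0.95):
    """Law of total probability over the salient event C."""
    return p_E_closed * p_C + p_E_open * (1 - p_C)

print(p_E_given_A(p_C=0.05))   # ≈ 0.90: MP endorsed
print(p_E_given_A(p_C=0.40))   # ≈ 0.57: suppression once C is salient
```

Raising P(C), as salience is assumed to do, is the only change between the two calls; that alone drives P(E|A) down to the observed ~60% endorsement range.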
- Strict Bayesianism makes heavy demands on storage, since the prior P0 must be defined on 'all' events, but light demands on processing, since inference is based on knowledge, not computation.
- A remedy is provided by having multiple algebras: only the variables currently of interest generate the algebra, hence at each time there is a single algebra of events.
- When novel events become salient the algebra is expanded; the identity Pi(S|AE) Pi(A|E) = Pi(AS|E) can then be used to ensure that Pi(S|E) is the same in the algebra with and without the event A.
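The constraint can be checked on a toy example: for any joint distribution over S, A, E, expanding the algebra with A and recombining via the chain rule leaves P(S|E) unchanged. The construction below is mine, for illustration only:

```python
import itertools
import random

random.seed(0)
# Random strictly positive joint distribution over booleans (S, A, E).
joint = {k: random.random() for k in itertools.product([True, False], repeat=3)}
total = sum(joint.values())
joint = {k: v / total for k, v in joint.items()}

def p(pred):
    """Probability of the event picked out by pred(s, a, e)."""
    return sum(v for k, v in joint.items() if pred(*k))

def cond(num, den):
    """Conditional probability P(num | den)."""
    return p(lambda s, a, e: num(s, a, e) and den(s, a, e)) / p(den)

# Expanded algebra: decompose P(S|E) over A and its complement.
lhs = (cond(lambda s, a, e: s, lambda s, a, e: a and e) *
       cond(lambda s, a, e: a, lambda s, a, e: e) +
       cond(lambda s, a, e: s, lambda s, a, e: (not a) and e) *
       cond(lambda s, a, e: not a, lambda s, a, e: e))
# Original algebra: P(S|E) computed without mentioning A.
rhs = cond(lambda s, a, e: s, lambda s, a, e: e)
print(abs(lhs - rhs) < 1e-12)   # True: the identity holds
```

This is just the law of total probability; the modelling claim is that the reasoner only pays for it when the algebra is actually expanded.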
A probabilistic model of the suppression effect: does Bayesianism, thus modified, describe reasoning in the suppression task?
- Bayesianism has been proposed as a model of many higher cognitive phenomena (Oaksford & Chater (2007, 2009), Gopnik et al. (2004), …).
- The two accounts can be compared experimentally: Bayesian reasoning and closed world reasoning make different predictions in a variant of the suppression task ('… Exceptions: An Event-related Brain Potentials Study', J. Cogn. Neurosci. 2010).
The difference with the standard suppression task is that the possible exception to the conditional is now given in more explicit form. The results are as usual: in the congruent condition 90% endorse MP, while in the disabling condition endorsement is significantly lower.
The Bayesian prediction:
- A conditional such as 'if [she] is going to play hockey (H), she will wear contact lenses (W)' must be evaluated by taking the given fact E into account:
  Pi(W|H) = Pi(W|EH) Pi(E|H) + Pi(W|¬EH) Pi(¬E|H) = Pi(W|EH) Pi(E) + Pi(W|¬EH) Pi(¬E) (assuming independence of E and H).
- If the conditional is represented as a conditional probability, then Pi(W|H) has to be computed while processing the 2nd premiss.
- The answer to the question is obtained by conditionalisation on H, i.e. by looking up the value
  Pi(W|H) = Pi(W|EH) Pi(E) + Pi(W|¬EH) Pi(¬E).
- This last step, conditionalisation, does not involve heavy computation.
- Hence there shouldn't be any difference between the congruent and the disabling case at the conclusion, because E has been given at the outset.
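A numerical sketch of this prediction, with illustrative probabilities of my own choosing: since E is stated as a fact, P(E) = 1 in both conditions, so the congruent and disabling cases differ only in which value of P(W|EH) is plugged in, not in the computation performed.

```python
# H = plays hockey, W = wears lenses, E = the given fact.
# P(W|H) = P(W|E,H) P(E) + P(W|¬E,H) P(¬E), assuming E independent of H.

def p_W_given_H(p_W_EH, p_W_notEH, p_E=1.0):
    """Same total-probability computation in every condition."""
    return p_W_EH * p_E + p_W_notEH * (1 - p_E)

# E is stated outright, so P(E) = 1 in both conditions.
congruent = p_W_given_H(p_W_EH=0.9, p_W_notEH=0.9)   # E does not disable
disabling = p_W_given_H(p_W_EH=0.1, p_W_notEH=0.9)   # E disables the rule
print(congruent, disabling)   # different answers, identical computation
```

On this picture the two conditions load the processor equally at every step, which is why the Bayesian account predicts no ERP difference between them.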
The closed world prediction:
- Each reasoning problem consists of four sentences: the fact, the conditional, the categorical premiss and the conclusion.
- Call the given fact r. Processing the fact together with the conditional p ∧ ¬ab → q involves the recognition that r is a possibly disabling condition: r ∧ ¬ab' → ab; in the congruent case there is no link between r and ab.
- The disabling link is itself defeasible [consider '… new lenses before playing hockey']; therefore the link between r and ab is mediated by ¬ab'.
- Drawing a conclusion requires preparing the premisses by closing the world: the completion.
- In the disabling condition the completion gives p ∧ ¬ab ↔ q and ab ↔ r, whence p ∧ ¬r ↔ q; since r is given, the conclusion q cannot be drawn.
- In the congruent condition the completion gives ab ↔ ⊥, whence p ↔ q, and q follows.
- Prediction: closing the world involves more computation in the disabling than in the congruent condition, and this difference should show up at the conclusion.
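The two completions can be contrasted in a few lines of code. This is my own schematic encoding, with ab' already closed off to ⊥:

```python
def conclusion(p, r, disabling):
    """Value of q under the completion of  p ∧ ¬ab → q.
    Disabling condition: completion gives ab ↔ r, so q ↔ p ∧ ¬r.
    Congruent condition: no clause for ab, so ab ↔ ⊥ and q ↔ p."""
    ab = r if disabling else False   # completion of ab
    return p and not ab              # q ↔ p ∧ ¬ab

print(conclusion(p=True, r=True, disabling=True))    # False: q blocked
print(conclusion(p=True, r=True, disabling=False))   # True: q follows
```

The asymmetry between the branches, not the final lookup, is where the extra work sits: the disabling condition forces the link from r through ab into the completion before q can be evaluated.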
The ERP data:
- ERPs are time-locked to the last word of the first premiss (A), the second premiss (B) and the conclusion (C), and averaged over reasoning problems in a condition (40) and participants (18).
- There are no significant differences between conditions at A and B -- so far in line with the Bayesian prediction.
- There is a significant difference between conditions at C -- contrary to the Bayesian prediction, in line with the closed world prediction.
- These computations plausibly involve maintaining and revising interpretations in working memory; compare the ERP effects of '… friend spilled coffee on the tablecloth/paper' (Baggio, vL, Hagoort, J. Mem. Lang.).
Discussion:
- Bayesianism is usually put forward as a computational model in the sense of Marr; but it is the algorithm that should predict the outcomes of an experiment (at least in a statistical sense).
- It is conceivable that the algorithm is only a heuristic (Oaksford & Chater 1999).
- What one picks up in the EEG is the neural correlate of the algorithm at work.
- Closed world reasoning has proposed neural implementations: Hölldobler (1996) for stable semantics, Stenning & vL (2008) for Kleene 3-valued semantics.
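The flavour of such an implementation can be suggested by a toy version of the immediate-consequence operator T_P, computed by one sweep of threshold-style units, one per clause. This is a drastic simplification of Hölldobler's core method, written for illustration only:

```python
def t_p(clauses, state):
    """One network sweep. clauses: list of (pos_body, neg_body, head);
    state: set of atoms currently taken to be true. Each clause acts
    like a threshold unit: it fires (asserts its head) exactly when
    all positive body atoms are on and all negative ones are off."""
    new = set()
    for pos, neg, head in clauses:
        if all(a in state for a in pos) and all(a not in state for a in neg):
            new.add(head)
    return new

# Program: the fact p, and the clause p ∧ ¬ab → q.
program = [((), (), 'p'), (('p',), ('ab',), 'q')]

state = set()
while True:                       # iterate T_P to a fixed point
    nxt = t_p(program, state)
    if nxt == state:
        break
    state = nxt
print(sorted(state))   # ['p', 'q']
```

Hölldobler's actual construction uses recurrent networks of binary threshold units and handles the semantics named above; the sketch only shows how clause-firing can be read as unit activation.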
- One difference is that (according to the Bayesians) the conditional probability forms part of the meaning of the conditional, whereas closing the world is an inference principle governing the conditional.
- If the neural computations are a faithful image of the symbolic computations sketched earlier, then the EEG data favour closed world reasoning.
- On the Bayesian account little computation remains for the last step, the evaluation of the conclusion; yet all significant brain activity occurs at this last step.
- Hence either all effortful computation occurs at the last step (but then it is unclear how), or closed world reasoning is a more accurate representation of the computations involved.