Towards an Understanding of Human Persuasion and Biases in Argumentation


  1. Towards an Understanding of Human Persuasion and Biases in Argumentation
     Pierre Bisquert, Madalina Croitoru, Florence Dupin de Saint-Cyr, Abdelraouf Hecham
     INRA & LIRMM & IRIT, France
     July 6th 2016

  2. Objectives
     Why are “good” arguments not persuasive?
     Why are “bad” arguments persuasive?
     How can we prevent these negative processes?
     ⇒ General aim: improve the quality of collective decision making

  3. Persuasion in AI
     Interactive technologies for human behavior
       ◮ Persuade humans in order to change behaviors [Oinas-Kukkonen, 2013]
       ⇒ Health care [Lehto and Oinas-Kukkonen, 2015], environment [Burrows et al., 2014]
     Dialogue protocols for persuasion
       ◮ Derived from logic and philosophy [Hamblin, 1970], [Perelman and Olbrechts-Tyteca, 1969]
       ⇒ Ensure rational interactions between agents [Prakken, 2006]
     Argumentation theory
       ◮ Abstract and logical argumentation [Dung, 1995], [Besnard and Hunter, 2001]
       ⇒ Dynamics and enforcement [Baumann and Brewka, 2010], [Bisquert et al., 2013], etc.

  4. Our Approach
     Our approach: how does persuasion “work”?
     Link between persuasion and cognitive biases [Clements, 2013]
       ◮ Computational analysis of cognitive biases
       ⇒ Explain why an argument has been persuasive or not
       ⇒ Better understand human persuasion processes
       ⇒ (Hopefully) allow people to prevent manipulation attempts

  5. Outline
     1 Computational Model and Reasoning
       ◮ Dual Process Theory
       ◮ S1/S2 Formalization
       ◮ Reasoning with the Model
     2 Argument Evaluation
     3 Conclusion

  6. Dual Process Theory
     Based on the work of Kahneman (and Tversky) [Tversky and Kahneman, 1974]
     System 2 (S2)
       ◮ Conscious, thorough and slow process
       ◮ Expensive and “rational” reasoning
     System 1 (S1)
       ◮ Instinctive, heuristic and fast process
       ◮ Cheap and based on associations
     Biases (generally) arise when S1 is used
       ◮ fatigue, interest, motivation, ability, lack of knowledge

  7. Our take on S1 & S2
     S2 is a logical knowledge base
       ◮ Beliefs
         ⋆ “Miradoux is a wheat variety”, “wheat contains proteins”
       ◮ Opinions
         ⋆ “I like Miradoux”, “I do not like spoiled wheat”
     S1 is represented by special rules
       ◮ “PastaQuality is associated with Italy”
     Biases arise when S1 rules are used instead of S2 rules
       ◮ Cognitive availability

  8. But how do we build them?
     Knowledge base: Datalog+/- [Arioua et al., 2015]
       ◮ “Miradoux is a wheat variety”: wheat(miradoux)
       ◮ “Wheat contains proteins”: ∀X wheat(X) → proteins(X)
       ◮ “I like Miradoux”: like(miradoux)
       ⇒ Denoted BO
     Associations: obtained thanks to a Game With A Purpose
       ◮ Allows extracting associations for different profiles
       ◮ Associations are (manually) transformed into rules
       ◮ (PastaQuality, Italy): ∀X highQualityPasta(X) → madeInItaly(X)
       ⇒ Denoted A
     Each rule has a particular cognitive effort
       ◮ function e

  9. Example
     BO   B1: wheat(miradoux)                          effort 10
          B2: spoiled_wheat(miradoux2)                 effort 10
          B3: spoiled_wheat(X) → low_protein(X)        effort 10
          B4: low_protein(X) ∧ has_protein(X) → ⊥      effort 10
          B5: wheat(X) → has_protein(X)                effort 10
          B6: has_protein(X) → nutrient(X)             effort 10
          O1: dislike(miradoux2)                       effort 5
          O2: like(X) ∧ dislike(X) → ⊥                 effort 5
     A    A1: nutrient(X) → like(X)                    effort 1
          A2: has_protein(X) → dontcare(X)             effort 3
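As an illustration only (not part of the slides, and not the authors' implementation), this example knowledge base and its effort function e could be written down as plain Python data; rule bodies are kept as strings because no Datalog+/- engine is assumed here.

```python
# A minimal sketch: each rule is stored with its source (BO for beliefs and
# opinions, A for associations) and the cognitive effort e(r) listed on the slide.
RULES = {
    # Beliefs
    "B1": ("BO", "wheat(miradoux)",                          10),
    "B2": ("BO", "spoiled_wheat(miradoux2)",                 10),
    "B3": ("BO", "spoiled_wheat(X) -> low_protein(X)",       10),
    "B4": ("BO", "low_protein(X) & has_protein(X) -> FALSE", 10),
    "B5": ("BO", "wheat(X) -> has_protein(X)",               10),
    "B6": ("BO", "has_protein(X) -> nutrient(X)",            10),
    # Opinions
    "O1": ("BO", "dislike(miradoux2)",                        5),
    "O2": ("BO", "like(X) & dislike(X) -> FALSE",             5),
    # Associations (S1 rules obtained from the GWAP)
    "A1": ("A",  "nutrient(X) -> like(X)",                    1),
    "A2": ("A",  "has_protein(X) -> dontcare(X)",             3),
}

def effort(rule_id: str) -> int:
    """The effort function e, read off the table above."""
    return RULES[rule_id][2]
```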

 10. How do we reason?
     Reasoning: K ⊢_R ϕ, with R a sequence of rules from BO ∪ A
     Successive application of the rules in R: a reasoning path
     wheat(miradoux) ⊢_R1 like(miradoux), with R1 = ⟨B5, B6, A1⟩:
       ◮ B5: wheat(X) → has_protein(X)
       ◮ B6: has_protein(X) → nutrient(X)
       ◮ A1: nutrient(X) → like(X)
       ⇒ Total effort of R1: 21
     wheat(miradoux) ⊢_R2 dontcare(miradoux), with R2 = ⟨B5, A2⟩:
       ◮ A2: has_protein(X) → dontcare(X)
       ⇒ Total effort of R2: 13
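A minimal sketch of the effort computation, reusing the hypothetical RULES/effort encoding from the previous block: a reasoning path is a sequence of rule identifiers, and its total effort sums the efforts of the rules it applies.

```python
# Sketch: the total effort of a reasoning path is the sum of the efforts of
# the rules it applies (values taken from the example knowledge base above).
def path_effort(path, effort):
    return sum(effort(r) for r in path)

R1 = ["B5", "B6", "A1"]   # wheat(miradoux) |-_R1 like(miradoux)
R2 = ["B5", "A2"]         # wheat(miradoux) |-_R2 dontcare(miradoux)

print(path_effort(R1, effort))  # 21 = 10 + 10 + 1
print(path_effort(R2, effort))  # 13 = 10 + 3
```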

 11. Cognitive Model
     Definition: A cognitive model is a tuple κ = (BO, A, e), where
       ◮ BO: beliefs and opinions,
       ◮ A: associations,
       ◮ e: a function BO ∪ A → N ∪ {+∞} giving the effort required for each rule.
     Cognitive availability is kept outside of the model
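A possible, purely illustrative container for the tuple κ = (BO, A, e), again reusing the rule identifiers and the effort function from the example above.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class CognitiveModel:
    """Sketch of kappa = (BO, A, e)."""
    BO: Set[str]                # beliefs and opinions
    A: Set[str]                 # associations (S1 rules)
    e: Callable[[str], float]   # effort of each rule, possibly +inf

kappa = CognitiveModel(
    BO={"B1", "B2", "B3", "B4", "B5", "B6", "O1", "O2"},
    A={"A1", "A2"},
    e=effort,  # effort function from the example encoding above
)
```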

 12. Outline
     1 Computational Model and Reasoning
     2 Argument Evaluation
       ◮ Argument Definition
       ◮ Critical Questions and Answers
       ◮ Potential Status
     3 Conclusion

 13. What is an argument?
     Definition: An argument is a pair (ϕ, α) stating that having some beliefs and opinions described by ϕ leads to concluding α.
     “Miradoux is a very good wheat variety since it contains proteins”
       ⇒ (has_protein(miradoux), like(miradoux))

 14. How do we evaluate this argument?
     Critical Questions
       ◮ CQ1: BO ∪ A ∪ {α} ⊢ ⊥? (is it possible to attack the conclusion?)
       ◮ CQ2: BO ∪ A ∪ {ϕ} ⊢ ⊥? (is it possible to attack the premises?)
       ◮ CQ3: ϕ ⊢ α? (do the premises allow inferring the conclusion?)
     With argument (has_protein(miradoux), like(miradoux)):
       ◮ CQ1: BO ∪ A ∪ {like(miradoux)} ⊢ ⊥
       ◮ CQ2: BO ∪ A ∪ {has_protein(miradoux)} ⊢ ⊥
       ◮ CQ3: has_protein(miradoux) ⊢ like(miradoux)
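A sketch of how the three critical questions could be instantiated for a given argument; the knowledge base and the entailment checks are kept symbolic, since they would require the actual Datalog+/- reasoner, which is not shown here.

```python
# Sketch: each critical question is an entailment query (hypotheses, goal).
# A real implementation would pass the actual BO and A rule sets instead of
# the symbolic knowledge-base label used here.
def critical_questions(premise, conclusion, kb="BO ∪ A"):
    return {
        "CQ1": (f"{kb} ∪ {{{conclusion}}}", "⊥"),  # can the conclusion be attacked?
        "CQ2": (f"{kb} ∪ {{{premise}}}", "⊥"),     # can the premises be attacked?
        "CQ3": (premise, conclusion),              # do the premises entail the conclusion?
    }

cqs = critical_questions("has_protein(miradoux)", "like(miradoux)")
```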

 15. Positive/Negative Answers
     Proofs
       Given a CQ: h ⊢ c, a cognitive value cv and a reasoning path R:
         proof_cv(R, CQ) ≝ (eff(R) ≤ cv and h ⊢_R c),
       where eff(R) = Σ_{r ∈ R} e(r).
     Positive/Negative Answers
       Moreover, we say that:
       ◮ CQ is answered positively wrt cv iff ∃R s.t. proof_cv(R, CQ), denoted positive_cv(CQ),
       ◮ CQ is answered negatively wrt cv iff ∄R s.t. proof_cv(R, CQ), denoted negative_cv(CQ).
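These definitions translate almost directly into code; below is a sketch under the same hypothetical encoding, where path_effort comes from the earlier block and derives stands for an assumed, unimplemented entailment check h ⊢_R c.

```python
def is_proof(R, cq, cv, effort, derives):
    """proof_cv(R, CQ): eff(R) <= cv and h |-_R c."""
    h, c = cq
    return path_effort(R, effort) <= cv and derives(h, R, c)

def positive(cq, cv, candidate_paths, effort, derives):
    """CQ is answered positively wrt cv iff some known path proves it."""
    return any(is_proof(R, cq, cv, effort, derives) for R in candidate_paths)

def negative(cq, cv, candidate_paths, effort, derives):
    """CQ is answered negatively wrt cv iff no known path proves it."""
    return not positive(cq, cv, candidate_paths, effort, derives)
```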

 16. Positive/Negative Answers – Example
     (Same knowledge base as slide 9.)
     Argument (has_protein(miradoux), like(miradoux)):
       ◮ CQ1 is answered negatively: ∄R s.t. BO ∪ A ∪ {like(miradoux)} ⊢_R ⊥
       ◮ CQ3 is answered positively (with cv ≥ 21): has_protein(miradoux) ⊢_R1 like(miradoux) with R1 = ⟨B5, B6, A1⟩

 17-18. Potential Status
     Potential Status of Arguments
       Given ca, we say that an argument is:
       ◮ acceptable_ca iff there is an allocation c1 + c2 + c3 = ca s.t. negative_c1(CQ1), negative_c2(CQ2), positive_c3(CQ3)
         ⋆ The agent may potentially accept the argument
       ◮ rejectable_ca iff positive_ca(CQ1) or positive_ca(CQ2) or negative_ca(CQ3)
         ⋆ The agent may potentially reject the argument
     An argument can be both acceptable_ca and rejectable_ca
     How can we be more precise about the status?
       ◮ Work in progress...
       ◮ Reasoning tendency: a preference relation over reasoning paths
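To make the allocation condition concrete, here is a brute-force sketch: acceptability within a cognitive availability ca searches for a split c1 + c2 + c3 = ca under which CQ1 and CQ2 receive negative answers and CQ3 a positive one; answer(cq, budget) is an assumed oracle saying whether the question is answered positively within that budget.

```python
from itertools import product

def acceptable(ca, answer):
    """Argument is acceptable_ca iff some allocation c1 + c2 + c3 = ca gives
    negative_c1(CQ1), negative_c2(CQ2) and positive_c3(CQ3)."""
    for c1, c2 in product(range(ca + 1), repeat=2):
        c3 = ca - c1 - c2
        if c3 < 0:
            continue
        if not answer("CQ1", c1) and not answer("CQ2", c2) and answer("CQ3", c3):
            return True
    return False

def rejectable(ca, answer):
    """Argument is rejectable_ca iff CQ1 or CQ2 is answered positively,
    or CQ3 negatively, within the full budget ca."""
    return answer("CQ1", ca) or answer("CQ2", ca) or not answer("CQ3", ca)
```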

 19. Outline
     1 Computational Model and Reasoning
     2 Argument Evaluation
     3 Conclusion
       ◮ Summary
       ◮ Perspectives

 20. Summary
     Preliminary formalization of dual process theory and its link with human persuasion
     Proposition of a cognitive model acknowledging biases during argument evaluation
     Application on a real use case (Durum wheat knowledge base, implementation of a “GWAP”)

 21. Perspectives
     Evaluation strategies
     Rationality properties
     Cognitive model update
     More elaborate logic of “beliefs and preferences”
     Empirical study
