SLIDE 1

Examining Network Effects in the Argumentative Agent-Based Model of Scientific Inquiry

AnneMarie Borg, Daniel Frey, Dunja Šešelja and Christian Straßer
July 18, RUB, Bochum

Institute for Philosophy II, Ruhr-University Bochum

SLIDE 2
  • An Argumentative Agent-Based Model of Scientific Inquiry, forthcoming, Proceedings of IEA/AIE, Springer-Verlag
  • Epistemic Effects of Scientific Interaction: approaching the question with an argumentative agent-based model, special issue of Historical Social Research: "Agent Based Modelling across Social Science, Economics, and Philosophy" (under revision)
  • Examining Network Effects in an Argumentative Agent-Based Model of Scientific Inquiry, Proceedings of LORI VI, FoLLI Series on Logic, Language and Information, Springer.

SLIDE 3

Introduction

SLIDE 4

Which social structures are conducive to efficient scientific inquiry?

SLIDE 5

Communication networks

SLIDE 6

Results

SLIDE 7

ABMs on interaction among scientists

A high degree of connectedness may be counterproductive.

  • 1. Zollman (2007, 2010)
  • 2. Grim (2009), Grim et al. (2013)

The context of scientific diversity: multiple rivaling theories in the given domain

SLIDE 8

. . . are they robust?

SLIDE 9-11

Robustness of results

Robustness under:

  • 1. changes within the relevant parameter space
  • 2. different modeling choices

Concerning 1: Rosenstock et al. (2016): Zollman’s results don’t hold for a large portion of the relevant parameter space.
Concerning 2: Grim (2009); Grim et al. (2013)

SLIDE 12

Which results do we get by means of a different model?

SLIDE 13

  • Introduction
  • Argumentation-based ABMs
  • Our results
  • Outlook

SLIDE 14

Argumentation-based ABMs

SLIDE 15

The basic idea

  • argumentative dynamics between scientists;
  • agents move on the argumentative landscape;
  • the argumentative landscape: rivaling theories.

[Figure: agents on two rivaling theories, Research Program 1 and Research Program 2]

SLIDE 16

Abstract argumentation frameworks

SLIDE 17-22

Abstract argumentation

[Figure: directed attack graph over arguments a, b, c, d, e]

  • argument: abstract, points in a directed graph
  • arrows: arg. attacks
  • rationality requirements: e.g.
  • conflict-free,
  • admissibility (defense, attacks the attackers)

labelling: status of an argument

  • green: accepted
  • red: rejected
  • gray: undecided
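The two rationality requirements can be made concrete with a small sketch. This is an illustrative toy, not the model's actual code, and the attack graph below is hypothetical (the slide's own graph is not fully recoverable):

```python
# Dung-style abstract argumentation, illustrative sketch.
# Arguments are node names; attacks are directed edges (attacker, target).
attacks = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")}  # hypothetical graph

def conflict_free(S, attacks):
    """No member of S attacks another member of S."""
    return not any((x, y) in attacks for x in S for y in S)

def admissible(S, attacks):
    """S is conflict-free and attacks every attacker of its members."""
    if not conflict_free(S, attacks):
        return False
    attackers = {x for (x, y) in attacks if y in S}
    return all(any((z, x) in attacks for z in S) for x in attackers)

print(admissible({"a", "c"}, attacks))  # True: a is unattacked and defends c against b
print(admissible({"b"}, attacks))       # False: nothing in {b} attacks b's attacker a
```

An accepted (green) argument is one contained in such an admissible set; the labelling extends this idea to rejected and undecided arguments.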

SLIDE 23

Explanatory Argumentation Frameworks

Šešelja and Straßer, Synthese, 2013, 190:2195–2217

SLIDE 24

Abstract argumentation framework in our ABM

  • We represent in an abstract way:
  • arguments
  • discovery relation
  • attack relation

[Figure: agents on Research Program 1 and Research Program 2]
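A minimal sketch of how such a landscape could be represented. The class and field names are assumptions for illustration, not the authors' implementation:

```python
from dataclasses import dataclass, field

# Assumed data structure (illustrative, not the authors' code) for the
# argumentative landscape: arguments grouped into theories, with
# discovery and attack relations as directed edge sets.
@dataclass
class Landscape:
    theories: dict                               # theory name -> set of argument ids
    discovery: set = field(default_factory=set)  # (parent, child), within a theory
    attacks: set = field(default_factory=set)    # (attacker, target), possibly across theories

ls = Landscape(
    theories={"T1": {"a1", "a2"}, "T2": {"b1"}},
    discovery={("a1", "a2")},
    attacks={("b1", "a2")},
)
print("a2" in ls.theories["T1"])  # True
```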

SLIDE 25

Work week

Monday Tuesday Wednesday Thursday Friday

SLIDE 26

Exploration (process of scientific inquiry)

Monday Tuesday Wednesday Thursday Friday

SLIDE 27-31

The landscape is dynamic

Mo Tue We Thu Fri
SLIDE 32

Exploration

Mo Tue We Thu Fri

  • Agents, representing scientists, start from the root of one of the theories.

[Figure: two agents at the roots of the theories]

SLIDE 33-37

Exploration

Mo Tue We Thu Fri

  • They explore the landscape from there, by:
  • 1. exploring a single argument, gradually discovering possible attack and discovery relations;
  • 2. moving along a discovery relation to a neighboring argument within the same theory;
  • 3. moving to an argument of a rivaling theory.

SLIDE 38

Exploration (cont.)

Mo Tue We Thu Fri

  • This way agents gain subjective knowledge of the landscape.

SLIDE 39

Theory choice

Monday Tuesday Wednesday Thursday Friday

SLIDE 40-42

Decision making

Mo Tue We Thu Fri

  • Every 5 rounds agents evaluate the theories based on their subjective knowledge.
  • In view of this they decide whether to keep on exploring the current theory, or to jump to another theory.
  • Agents have a degree of inertia towards their current theory (they jump only after performing 10 evaluations that show their theory is not among the best ones).

The evaluation criterion: the defensibility of each of the theories.
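The jump rule with inertia can be sketched as follows. This is an illustrative reading of the slide, not the authors' code; the class and counter names are assumptions:

```python
# Illustrative sketch of the inertia rule: an agent abandons its current
# theory only after 10 evaluations in a row in which that theory is not
# among the best ones.
INERTIA = 10

class Agent:
    def __init__(self, theory):
        self.theory = theory
        self.strikes = 0  # consecutive evaluations where own theory was not among the best

    def evaluate(self, best_theories):
        """Called every 5 rounds with the currently best theories; returns True on a jump."""
        if self.theory in best_theories:
            self.strikes = 0
            return False  # stay: current theory is still among the best
        self.strikes += 1
        if self.strikes >= INERTIA:
            self.theory = best_theories[0]  # jump to a best theory
            self.strikes = 0
            return True
        return False  # stay, but inertia is being used up

a = Agent("T1")
for _ in range(9):
    a.evaluate(["T2"])
print(a.theory)  # still "T1": nine strikes, inertia not yet exhausted
print(a.evaluate(["T2"]), a.theory)  # True T2: the 10th strike triggers the jump
```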

SLIDE 43-45

Defensibility

Mo Tue We Thu Fri

  • A subset of arguments A of a given theory T is admissible iff for each attacker b of some a in A there is an a′ in A that attacks b (a′ is said to defend a from the attack by b).
  • An argument a in T is said to be defended in T iff it is a member of a maximally admissible subset of T.
  • The degree of defensibility of T is equal to the number of defended arguments in T.
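The definition above can be computed by brute force over subsets of the theory. This is an illustrative sketch with a hypothetical attack graph, not the model's implementation; note that, as in the definition, defense must come from within the theory itself:

```python
from itertools import combinations

# Illustrative brute-force sketch of the degree of defensibility:
# the number of arguments of a theory that belong to a maximal
# admissible subset of that theory.
def admissible(S, attacks):
    if any((x, y) in attacks for x in S for y in S):  # not conflict-free
        return False
    attackers = {x for (x, y) in attacks if y in S}
    return all(any((z, x) in attacks for z in S) for x in attackers)

def degree_of_defensibility(theory, attacks):
    adm = [set(c) for r in range(len(theory) + 1)
           for c in combinations(sorted(theory), r) if admissible(set(c), attacks)]
    maximal = [S for S in adm if not any(S < T for T in adm)]
    defended = set().union(*maximal) if maximal else set()
    return len(defended)

attacks = {("z", "a"), ("b", "z"), ("d", "c")}  # hypothetical; z and d attack from outside
print(degree_of_defensibility({"a", "b"}, attacks))  # 2: b defends a against z
print(degree_of_defensibility({"c"}, attacks))       # 0: c's attacker d goes uncountered
```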

SLIDE 46

Defensibility: examples

Mo Tue We Thu Fri

[Figure: attack graph over arguments a–g]

  theory           defended    degree of def.
  T1 = {e, f}      {f}         1
  T2 = {a, b, g}   {}          0
  T3 = {c, d}      {}          0

SLIDE 47

Defensibility: examples

Mo Tue We Thu Fri

[Figure: attack graph over arguments a–g]

  theory           defended     degree of def.
  T1 = {e, f}      {}           0
  T2 = {a, b, g}   {a, b, g}    3
  T3 = {c, d}      {}           0

SLIDE 48

Evaluation

Mo Tue We Thu Fri

  • Agents evaluate theories based on their degree of defensibility.
  • The best theories according to an agent’s subjective knowledge are then:
  • the theory with the most defended arguments;
  • any theory that has a number of defended arguments within a certain threshold of the best theory.

The objectively best theory: the theory which is fully defensible in the objective landscape.
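One possible reading of the threshold rule in code; the function name and the default threshold value are illustrative assumptions, not the authors' parameters:

```python
# Illustrative sketch: the "best" theories are the top-scoring theory
# plus any theory within a fixed margin of its degree of defensibility.
def best_theories(degrees, threshold=1):
    """degrees: mapping theory -> degree of defensibility."""
    top = max(degrees.values())
    return [t for t, d in degrees.items() if d >= top - threshold]

print(best_theories({"T1": 5, "T2": 4, "T3": 1}))  # ['T1', 'T2']
```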

SLIDE 49

Social networks

Monday Tuesday Wednesday Thursday Friday

SLIDE 50

Two types of networks

Mo Tue We Thu Fri

  • Collaborative networks:
  • five agents;
  • each agent shares her full subjective landscape with the other members of her group.

SLIDE 51

Two types of networks

Mo Tue We Thu Fri

  • Communal networks:
  • every five rounds each collaborative group appoints a representative who shares information via one of the social networks (cycle, wheel or complete graph).
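The three communal topologies can be sketched as adjacency sets over n representatives. This representation is an assumption for illustration, not the authors' code:

```python
# Illustrative adjacency-set construction of the three communal networks.
def cycle(n):
    """Each node is linked to its two neighbors on a ring."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def wheel(n):
    """Cycle over nodes 1..n-1 with node 0 as the hub connected to all."""
    g = {0: set(range(1, n))}
    for i in range(1, n):
        g[i] = {i % (n - 1) + 1, (i - 2) % (n - 1) + 1, 0}
    return g

def complete(n):
    """Every node is linked to every other node."""
    return {i: set(range(n)) - {i} for i in range(n)}

print(sorted(cycle(4)[0]))  # [1, 3]
print(sorted(wheel(5)[1]))  # [0, 2, 4]
```

The degree of connectedness rises from cycle (2 neighbors each) through wheel to complete graph, which is the dimension varied in the results below.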

SLIDE 52

Information sharing

Mo Tue We Thu Fri

  • Agents share information about their direct neighborhood.
  • Receiving information costs time.

SLIDE 53

Information sharing

Mo Tue We Thu Fri

  • Different approaches to information sharing:
  • reliable agents share all the information regarding their direct neighborhood;
  • deceptive agents withhold the information on discovered attacks on arguments in their own theory.

SLIDE 54

Our results

SLIDE 55-56

Simulations

10,000 runs for each of the scenarios:

  • 10, 20, 30, 40, 70 and 100 agents;
  • communal networks: cycle, wheel and complete graph;
  • the landscape: 2 or 3 theories;
  • an argument of each theory has 0.3 probability of being attacked.

Two criteria of success:

  • 1. monist: if agents have converged onto the best theory;
  • 2. pluralist: if at the end of the run the number of agents working on the best theory is not smaller than the number of agents on any other theory.
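The two success criteria are simple checks on the agents' final theory choices. The representation below is assumed for illustration:

```python
from collections import Counter

# Illustrative check of the two success criteria over a list of the
# theories the agents end up on.
def monist_success(choices, best):
    """All agents converged onto the objectively best theory."""
    return all(c == best for c in choices)

def pluralist_success(choices, best):
    """No rival theory has more agents than the best theory."""
    counts = Counter(choices)
    return all(counts[best] >= n for n in counts.values())

final = ["T1", "T1", "T2", "T1"]
print(monist_success(final, "T1"))     # False: one agent is still on T2
print(pluralist_success(final, "T1"))  # True: no theory outnumbers T1
```

Monist success implies pluralist success, but not the other way around, which is why the two criteria can rank network structures differently.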

SLIDE 57

Degree of connectedness

A higher degree of connectedness tends to lead to more efficient inquiry, with respect to both criteria of success.

SLIDE 58

Monist success

SLIDE 59

Reliable vs. deceptive agents

Reliable agents are more successful while being slightly slower.

SLIDE 60

Monist success

SLIDE 61

Outlook

SLIDE 62

To sum up

Main conclusions:

  • a higher degree of connectedness tends to be epistemically beneficial;
  • reliable information sharing tends to be epistemically beneficial.

SLIDE 63

Do our results challenge those obtained by Zollman and Grim et al.?

SLIDE 64

Our ABM – still highly idealized

Towards more reliable results:

  • empirical calibration;
  • examination of the relevant parameter space;
  • different assessments underlying theory choice.

SLIDE 65

Further applications and enhancements:

Different types of research behaviors

  • "mavericks" and "followers";
  • different heuristic behavior of agents;
  • interdisciplinary collaborative groups.

SLIDE 66

Thank you!

SLIDE 67

Bibliography

SLIDE 68

Bibliography i

References

Grim, P.: 2009, ‘Threshold Phenomena in Epistemic Networks’. In: AAAI Fall Symposium: Complex Adaptive Systems and the Threshold Effect, pp. 53–60.

Grim, P., D. J. Singer, S. Fisher, A. Bramson, W. J. Berger, C. Reade, C. Flocken, and A. Sales: 2013, ‘Scientific networks on data landscapes: question difficulty, epistemic success, and convergence’. Episteme 10(04), 441–464.

Rosenstock, S., C. O’Connor, and J. Bruner: 2016, ‘In Epistemic Networks, is Less Really More?’. Philosophy of Science.

SLIDE 69

Bibliography ii

Zollman, K. J. S.: 2007, ‘The communication structure of epistemic communities’. Philosophy of Science 74(5), 574–587.

Zollman, K. J. S.: 2010, ‘The epistemic benefit of transient diversity’. Erkenntnis 72(1), 17–35.