Abstract
In this paper we present an agent-based model (ABM) of scientific inquiry aimed at investigating how different social networks impact the efficiency of scientists in acquiring knowledge. As such, the ABM is a computational tool for tackling issues in the domain of scientific methodology and science policy. In contrast to existing ABMs of science, our model aims to represent the argumentative dynamics that underlies scientific practice. To this end we employ abstract argumentation theory as the core design feature of the model.
A. Borg and C. Straßer—Supported by the Alexander von Humboldt Foundation and the German Ministry for Education and Research.
1 Introduction
In this paper we present an agent-based model (ABM) as a computational tool for tackling issues in the domain of scientific methodology and science policy, which concern social aspects of scientific inquiry (Footnote 1). In contrast to most other ABMs of science (e.g. [2, 8, 9, 10, 11]), our model is based on the idea that an essential component of scientific inquiry is an argumentative dynamics between scientists. To this end, we employ abstract argumentation frameworks as one of the design features of our ABM (previously shown fruitful for the modeling of scientific debates in [7] and employed in an ABM of social behavior in [4]). The model is designed to investigate how different social networks impact the efficiency of scientists in discovering the best of the pursued scientific theories.
2 The Model
The aim of our ABM is to represent scientists engaged in an inquiry with the goal of finding the best of the given rivaling theories, occasionally exchanging arguments with other scientists, pro or con the pursued theories. We tackle the question of which structure of information flow leads scientists to discover the best theory most efficiently, where efficiency is measured in terms of their success and the time they need to complete their exploration (Footnote 2).
Agents, representing scientists, move along an argumentative landscape. The argumentative landscape, which represents rivaling theories in a given scientific domain, is based on a dynamic abstract argumentation framework.
Similarly to Dung’s abstract argumentation framework (AF) [3], the framework underlying our model consists of a set of arguments \(\mathcal {A}\) and an attack relation \({\leadsto }\) over \(\mathcal {A}\). In addition to attacking each other, arguments may also be connected by a discovery relation \(\hookrightarrow \). The latter represents the paths scientists have to take in order to discover different parts of the given theory.
An argumentative landscape is given by a triple \(\langle \mathcal {A}, \leadsto , \hookrightarrow \rangle \) where \(\mathcal {A} = \mathcal {A}_1 \cup \cdots \cup \mathcal {A}_m\) is partitioned into m theories \(T_i = \langle \mathcal {A}_i, a_i, \hookrightarrow \rangle \), each of which is a tree under the discovery relation with root \(a_i \in \mathcal {A}_i\).
Given the abstract nature of arguments, we interpret them as hypotheses which scientists investigate, occasionally encountering defeating evidence, represented by attacks from other arguments, and then attempting to find defending arguments for the attacked hypothesis.
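The tree requirement on each theory can be illustrated with a small sketch (in Python; this is not the authors' implementation, and the argument names below are made up): a theory is a tree under the discovery relation if every argument is reachable from the root along exactly one path.

```python
from collections import deque

def is_discovery_tree(args, root, discovery):
    """Check that `args` forms a tree under `discovery` rooted at `root`:
    every argument is reachable from the root, and no argument is reached
    by two different discovery paths."""
    children = {a: [c for (p, c) in discovery if p == a and c in args]
                for a in args}
    seen, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        for c in children[node]:
            if c in seen:
                return False  # a second path to c: not a tree
            seen.add(c)
            queue.append(c)
    return seen == args  # every argument must be reachable from the root

# A root with two children is a tree; adding a discovery cycle breaks it.
ok = is_discovery_tree({"a", "a1", "a2"}, "a", {("a", "a1"), ("a", "a2")})
```
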
The model is round-based. Each round (\(\approx \) a research day) agents perform one of the following actions:

- 1a. Explore a single argument a. This way they gradually discover possible attacks (on a, and from a to arguments of other theories) as well as neighboring arguments via the discovery relation.
- 1b. Alternatively, if probabilistically triggered, move to a neighboring argument along the discovery relation.
- 2. Move to an argument of a rivaling theory.

In order to decide whether to keep working on the current theory (1a, 1b) or to move to another one (2), every five rounds (\(\approx \) a research week) agents assess the degree of defensibility of the theories. A theory has degree of defensibility n if it has n defended arguments, where an argument a is defended in a theory if each of its attackers b from another theory is itself attacked by some argument c of that theory. Agents always prefer the most defensible theory.
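The defensibility assessment can be sketched as follows (a Python illustration over toy data, not the authors' code; the theory labels, argument names, and attack relation are assumptions for the example):

```python
# Two toy theories; only cross-theory attacks matter for defensibility.
theories = {
    "T1": {"a", "a1", "a2"},
    "T2": {"b", "b1"},
}
attacks = {("b1", "a1"),  # b1 (from T2) attacks a1 (in T1)
           ("a2", "b1")}  # a2 (from T1) attacks back

def defended(arg, theory, theories, attacks):
    """`arg` is defended in `theory` if every attacker from another
    theory is itself attacked by some argument of `theory`."""
    own = theories[theory]
    external = {b for (b, x) in attacks if x == arg and b not in own}
    return all(any((c, b) in attacks for c in own) for b in external)

def degree_of_defensibility(theory, theories, attacks):
    """Number of defended arguments of `theory`."""
    return sum(defended(a, theory, theories, attacks)
               for a in theories[theory])
```

On this toy landscape T1 has degree 3 (a1's only attacker b1 is counter-attacked by a2) while T2 has degree 1 (b1's attacker a2 is unanswered), so agents assessing defensibility would prefer T1.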
An agent discovers the argumentative landscape by investigating arguments or by means of exchanging information about the landscape with other agents, connected by so-called social networks. We distinguish between two types of social networks. First, our agents are divided into collaborative networks that consist of up to five individuals who start from the same theory root. While each agent gathers information on her own, every five steps this information is shared with all other agents forming the same collaborative network.
Second, besides sharing information with agents from the same network, every five steps each agent shares information with agents from other collaborative networks, with a given probability of information sharing (Footnote 3). In this way agents form ad-hoc, random communal networks with agents from other collaborations. A higher probability of information sharing leads to a higher degree of interaction among agents.
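The two-level sharing scheme can be sketched in Python (an illustration under assumed names, not the authors' implementation): within a collaborative network sharing is unconditional, while each agent outside it is contacted with the given probability.

```python
import random

def sharing_partners(agent, networks, p_share, rng):
    """Return the agents `agent` shares with in a sharing round:
    everyone in the same collaborative network, plus each agent from
    other networks independently with probability `p_share`."""
    own = next(n for n in networks if agent in n)
    partners = [a for a in own if a != agent]          # collaborative network
    for n in networks:
        if n is not own:                               # communal contacts
            partners += [a for a in n if rng.random() < p_share]
    return partners

# Example: with p_share = 1.0 every other agent is a partner;
# with p_share = 0.0 only the agent's own network remains.
networks = [["x1", "x2"], ["y1", "y2"]]
everyone = sharing_partners("x1", networks, 1.0, random.Random(0))
```
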
Finally, we represent reliable and deceptive scientists. Reliable agents share all the information they have gathered during their exploration of the current theory, while deceptive agents withhold the information regarding the discovered attacks on their current theory. Hence, deceptive agents provide only part of their information, and by withholding the rest they lead the receiver to a wrong inference [1].
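The difference between the two agent types amounts to a filter on outgoing information. A minimal sketch (Python; the memory encoding with `("arg", a)` and `("attack", b, a)` tuples is an assumption of this illustration, not the authors' data structure):

```python
def shared_memory(memory, current_theory, deceptive):
    """Reliable agents share everything; deceptive agents withhold
    discovered attacks whose target lies in their current theory."""
    if not deceptive:
        return set(memory)
    return {item for item in memory
            if not (item[0] == "attack" and item[2] in current_theory)}

# Example: a deceptive agent on theory {a, a1, a2} passes on the attack
# it found against a rival, but hides the attack against its own theory.
memory = {("arg", "a1"), ("attack", "b1", "a1"), ("attack", "a2", "b1")}
current = {"a", "a1", "a2"}
honest_view = shared_memory(memory, current, deceptive=False)
deceptive_view = shared_memory(memory, current, deceptive=True)
```
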
Agents share information in a unidirectional or a bidirectional way (with a 50/50 chance). Moreover, our model takes into account the fact that receiving information is time costly.
3 The Main Findings
We have run the simulation 100 times with 10, 20, 30, 40, 70, and 100 agents, varying the probability of information sharing (0.3, 0.5, and 1.0) and whether agents are reliable or deceptive. The landscape consists of 3 theories, only one of which has the maximum degree of defensibility, representing the objectively best theory (Footnote 4). The program runs until each agent is on a fully explored theory. In order to assess the efficiency of agents, we have defined their success similarly to other ABMs of science, e.g. in [10, 11]: a run is considered successful if, at the end of the run, all agents have converged onto the objectively best theory.
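The success criterion is all-or-nothing per run, so aggregate success over repeated runs is simply the fraction of fully convergent runs. A sketch (Python; the per-run encoding as agent-to-theory mappings is an assumption of this illustration):

```python
def successful(final_theories, best_theory):
    """A run counts as successful iff every agent ends on the best theory."""
    return all(t == best_theory for t in final_theories.values())

def success_rate(runs, best_theory):
    """Fraction of runs in which all agents converged on the best theory."""
    return sum(successful(r, best_theory) for r in runs) / len(runs)

# Example: one fully convergent run and one split run give rate 0.5.
runs = [{"ag1": "T1", "ag2": "T1"}, {"ag1": "T1", "ag2": "T2"}]
rate = success_rate(runs, "T1")
```
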
In what follows we present the most significant results of our simulations.
Information sharing. For smaller groups of reliable agents (up to 20) the impact of information sharing is rather small (Fig. 1a). From 30 agents on, we observe a positive impact of an increase in information sharing on the successful convergence, with no negative effect on time steps needed (Fig. 1b). While for smaller groups of deceptive agents a higher degree of information sharing has a relatively small impact, we notice positive effects in cases of larger communities, without slowdowns.
Reliable vs. deceptive agents. Comparing groups with the same degree of information sharing, reliable agents tend to be more successful than deceptive ones, while being equally fast (and only occasionally slightly slower).
Size of the scientific community. Larger populations of 70 and 100 agents are outperformed by smaller populations (with an optimum around 20 to 30 agents). A possible explanation is that in larger populations information circulates less between research groups, which may prevent them from converging.
Our finding that increased communication tends to be epistemically beneficial (or at least not epistemically harmful) suggests that the conclusions drawn from the ABMs in [5, 6, 10, 11] are not robust under different modeling choices.
Notes
- 1.
For an extended version of our paper see: https://arxiv.org/abs/1612.04432.
- 2.
The source code is available at https://github.com/g4v4g4i/ArgABM/tree/AppArg2017.
- 3.
While agents share their full subjective knowledge within their collaborative networks, the information which they share with agents from other networks concerns recently obtained knowledge of the theory which they are currently exploring.
- 4.
Each theory is modeled as a (discovery-)tree of depth 3, where each argument (except for the final leaves) has 4 child-arguments (altogether 85 arguments).
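The argument count in this note follows from the tree shape, and can be checked by summing the level sizes of a depth-3 tree with branching factor 4 (a one-line Python check, included only to make the arithmetic explicit):

```python
# Levels of a discovery tree of depth 3 with 4 children per non-leaf:
# 1 root + 4 + 16 + 64 leaves = 85 arguments per theory.
branching, depth = 4, 3
total_arguments = sum(branching ** d for d in range(depth + 1))
```
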
References
Caminada, M.: Truth, lies and bullshit: distinguishing classes of dishonesty. In: Social Simulation Workshop at the International Joint Conference on Artificial Intelligence (SS@IJCAI) (2009)
Douven, I.: Simulating peer disagreements. Stud. Hist. Philos. Sci. Part A 41(2), 148–157 (2010)
Dung, P.M.: An argumentation-theoretic foundation for logic programming. J. Logic Program. 22(2), 151–171 (1995)
Gabbriellini, S., Torroni, P.: A new framework for ABMs based on argumentative reasoning. In: Kamiński, B., Koloch, G. (eds.) Advances in Social Simulation, pp. 25–36. Springer, Heidelberg (2014)
Grim, P.: Threshold phenomena in epistemic networks. In: AAAI Fall Symposium: Complex Adaptive Systems and the Threshold Effect, pp. 53–60 (2009)
Grim, P., Singer, D.J., Fisher, S., Bramson, A., Berger, W.J., Reade, C., Flocken, C., Sales, A.: Scientific networks on data landscapes: question difficulty, epistemic success, and convergence. Episteme 10(4), 441–464 (2013)
Šešelja, D., Straßer, C.: Abstract argumentation and explanation applied to scientific debates. Synthese 190, 2195–2217 (2013)
Thoma, J.: The epistemic division of labor revisited. Philos. Sci. 82(3), 454–472 (2015)
Weisberg, M., Muldoon, R.: Epistemic landscapes and the division of cognitive labor. Philos. Sci. 76(2), 225–252 (2009)
Zollman, K.J.S.: The communication structure of epistemic communities. Philos. Sci. 74(5), 574–587 (2007)
Zollman, K.J.S.: The epistemic benefit of transient diversity. Erkenntnis 72(1), 17–35 (2010)
© 2017 Springer International Publishing AG
Borg, A., Frey, D., Šešelja, D., Straßer, C. (2017). An Argumentative Agent-Based Model of Scientific Inquiry. In: Benferhat, S., Tabia, K., Ali, M. (eds) Advances in Artificial Intelligence: From Theory to Practice. IEA/AIE 2017. Lecture Notes in Computer Science(), vol 10350. Springer, Cham. https://doi.org/10.1007/978-3-319-60042-0_56
Print ISBN: 978-3-319-60041-3
Online ISBN: 978-3-319-60042-0