1 Introduction

In this paper we present an agent-based model (ABM) as a computational tool for tackling issues in scientific methodology and science policy that concern the social aspects of scientific inquiry. In contrast to most other ABMs of science (e.g. [2, 8, 9, 10, 11]), our model is based on the idea that an essential component of scientific inquiry is the argumentative dynamics between scientists. To this end, we employ abstract argumentation frameworks as one of the design features of our ABM (an approach previously shown to be fruitful for modeling scientific debates in [7] and employed in an ABM of social behavior in [4]). The model is designed to investigate how different social networks affect the efficiency with which scientists discover the best of the pursued scientific theories.

2 The Model

The aim of our ABM is to represent scientists engaged in an inquiry whose goal is to identify the best of a set of rivaling theories, occasionally exchanging arguments with other scientists for or against the pursued theories. We tackle the question of which structure of information flow leads scientists to discover the best theory most efficiently, where efficiency is measured in terms of their success and the time they need to complete the exploration.

Agents, representing scientists, move along an argumentative landscape. The argumentative landscape, which represents rivaling theories in a given scientific domain, is based on a dynamic abstract argumentation framework.

Similarly to Dung's abstract argumentation framework (AF) [3], the framework underlying our model consists of a set of arguments \(\mathcal{A}\) and an attack relation \({\leadsto}\) over \(\mathcal{A}\). In addition to attacking each other, arguments may also be connected by a discovery relation \(\hookrightarrow\), which represents the path scientists have to take in order to discover the different parts of a theory.

An argumentative landscape is a triple \(\langle \mathcal{A}, \leadsto, \hookrightarrow \rangle\), where \(\mathcal{A}\) is partitioned into \(m\) sets \(\mathcal{A}_1, \ldots, \mathcal{A}_m\) such that each theory \(T_i = \langle \mathcal{A}_i, a_i, \hookrightarrow \rangle\) is a tree with root \(a_i \in \mathcal{A}_i\), and

$${\leadsto } \subseteq \bigcup _{\begin{array}{c} 1 \le i, j \le m \\ i \ne j \end{array}} (\mathcal {A}_i \times \mathcal {A}_j) \quad \text{ and } \quad {\hookrightarrow } \subseteq \bigcup _{1 \le i \le m} (\mathcal {A}_i \times \mathcal {A}_i).$$
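To fix ideas, here is a minimal Python sketch of the landscape as a data structure. This is our illustrative reconstruction of the triple above, not the authors' implementation; all names (Landscape, theory_of, etc.) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Landscape:
    theories: dict[int, set[str]]   # theory index i -> its arguments A_i
    roots: dict[int, str]           # theory index i -> root argument a_i
    # attack relation: (attacker, target) pairs across distinct theories only
    attacks: set[tuple[str, str]] = field(default_factory=set)
    # discovery relation: tree edges within a single theory
    discovery: set[tuple[str, str]] = field(default_factory=set)

    def theory_of(self, arg: str) -> int:
        # index of the (unique) theory the argument belongs to
        return next(i for i, args in self.theories.items() if arg in args)

# a toy landscape: two theories T1 = {a1, x} and T2 = {a2, y}, with y attacking x
land = Landscape(theories={1: {"a1", "x"}, 2: {"a2", "y"}},
                 roots={1: "a1", 2: "a2"},
                 attacks={("y", "x")},
                 discovery={("a1", "x"), ("a2", "y")})
```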

Given the abstract nature of arguments, we interpret them as hypotheses which scientists investigate, occasionally encountering defeating evidence, represented by attacks from other arguments, and then attempting to find defending arguments for the attacked hypothesis.

The model is round-based. In each round (\(\approx\) a research day) agents perform one of the following actions:

1a. Explore a single argument a, gradually discovering possible attacks (on a, and from a to arguments of other theories) as well as neighboring arguments along the discovery relation.

1b. Alternatively, if probabilistically triggered, move to a neighboring argument along the discovery relation.

2. Move to an argument of a rivaling theory.

In order to decide whether to keep working on the current theory (1a, 1b) or to move to another one (2), every five rounds (\(\approx\) a research week) agents assess the degree of defensibility of the theories, a computation sketched below. A theory has degree of defensibility n if it has n defended arguments, where an argument a counts as defended if each of its attackers b from another theory is itself attacked by some argument c belonging to a's theory. Agents always prefer the most defensible theory.
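The defensibility assessment can be made concrete with a short sketch. The following fragment reuses the hypothetical Landscape structure from above and computes defensibility on a fully known landscape; in the model itself, each agent evaluates it relative to the part of the landscape she has discovered so far.

```python
def attackers_of(landscape: Landscape, a: str) -> list[str]:
    # all arguments (necessarily from other theories) attacking a
    return [b for (b, target) in landscape.attacks if target == a]

def is_defended(landscape: Landscape, a: str) -> bool:
    # a is defended iff every attacker b from another theory is itself
    # attacked by some argument c of a's own theory
    home = landscape.theory_of(a)
    return all(
        any((c, b) in landscape.attacks for c in landscape.theories[home])
        for b in attackers_of(landscape, a)
    )

def degree_of_defensibility(landscape: Landscape, i: int) -> int:
    # the number of defended arguments of theory i
    return sum(is_defended(landscape, a) for a in landscape.theories[i])
```

On the toy landscape above, theory 2 has degree 2 while theory 1 has degree 1: x's attacker y is not counter-attacked from theory 1, so only the root a1 is defended.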

An agent discovers the argumentative landscape by investigating arguments or by exchanging information about the landscape with other agents, to whom she is connected via social networks. We distinguish two types of social networks. First, agents are divided into collaborative networks of up to five individuals who start from the root of the same theory. While each agent gathers information on her own, every five rounds this information is shared with all other agents in the same collaborative network.

Second, besides sharing information within their own network, every five rounds each agent also shares information with agents from other collaborative networks, with a given probability of information sharing. In this way agents form ad hoc, random communal networks with members of other collaborations (see the sketch below). A higher probability of information sharing thus leads to a higher degree of interaction among agents.
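As an illustration of this sharing schedule, the following sketch pairs agents across collaborations with the given probability. The pairwise, independent sampling is our assumption; the text does not fix the exact pairing scheme.

```python
import itertools
import random

def communal_exchanges(collaborations, p_share, rng=random):
    # collaborations: a list of collaborative networks (lists of agents).
    # Every five rounds, each agent may exchange information with agents
    # from *other* collaborations, here independently with probability
    # p_share (0.3, 0.5, or 1.0 in the reported runs).
    pairs = []
    for net_a, net_b in itertools.combinations(collaborations, 2):
        for a in net_a:
            for b in net_b:
                if rng.random() < p_share:
                    pairs.append((a, b))
    return pairs
```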

Finally, we represent both reliable and deceptive scientists. Reliable agents share all the information they have gathered while exploring their current theory, whereas deceptive agents withhold the information about discovered attacks on that theory. Deceptive agents thus provide only part of their information and withhold the rest, thereby leading the receiver to a wrong inference [1].

Agents share information either unidirectionally or bidirectionally (with a 50/50 chance). Moreover, our model takes into account the fact that receiving information is time-costly. Both features appear in the sketch below.
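A single exchange might then look as follows. This is a sketch under the assumption that each agent exposes .deceptive, .current_theory, .known_attacks, .known_edges and a .learn() method; these names are ours, not the paper's.

```python
import random

def exchange(sender, receiver, landscape, rng=random):
    def payload(agent):
        attacks = set(agent.known_attacks)
        if agent.deceptive:
            # deceptive agents withhold discovered attacks *on* their current theory
            attacks = {(b, a) for (b, a) in attacks
                       if landscape.theory_of(a) != agent.current_theory}
        return attacks | set(agent.known_edges)

    receiver.learn(payload(sender))   # receiving information costs the receiver time
    if rng.random() < 0.5:            # the exchange is bidirectional with a 50/50 chance
        sender.learn(payload(receiver))
```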

3 The Main Findings

We ran the simulation 100 times for each combination of the following parameters: 10, 20, 30, 40, 70, and 100 agents; probabilities of information sharing of 0.3, 0.5, and 1.0; and reliable vs. deceptive agents. The landscape consists of three theories, only one of which has the maximum degree of defensibility and thus represents the objectively best theory. The program runs until each agent is on a fully explored theory. To assess the efficiency of agents, we define their success similarly to other ABMs of science (e.g. [10, 11]): a run counts as successful if, at its end, all agents have converged on the objectively best theory.
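The success criterion itself is a one-line check, again assuming each agent exposes her current theory:

```python
def run_successful(agents, best_theory: int) -> bool:
    # a run is successful iff, at its end, every agent has converged
    # on the objectively best (most defensible) theory
    return all(agent.current_theory == best_theory for agent in agents)
```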

In what follows we present the most significant results of our simulations.

Information sharing. For smaller groups of reliable agents (up to 20), the impact of information sharing is rather small (Fig. 1a). From 30 agents onward, we observe that increased information sharing improves successful convergence, with no negative effect on the number of time steps needed (Fig. 1b). While for smaller groups of deceptive agents a higher degree of information sharing has a relatively small impact, we observe positive effects in larger communities, again without slowdowns.

Reliable vs. deceptive agents. Comparing groups with the same degree of information sharing, reliable agents tend to be more successful than deceptive ones, while being equally fast (and only occasionally slightly slower).

Size of the scientific community. Larger populations of 70 and 100 agents are outperformed by smaller ones, with an optimum around 20 to 30 agents. A possible explanation is that in larger populations information circulates less between research groups, which may prevent them from converging.

Our finding that increased communication tends to be epistemically beneficial (or at least not epistemically harmful) indicates that the contrary conclusions drawn from the ABMs in [5, 6, 10, 11] are not robust under different modeling choices.

Fig. 1. (a) Success. (b) Time needed.