1 Introduction

Cognitive biases are inherent to the human mind and, as such, can influence all individuals taking part in the software development process [10]: developers [3], architects [14], designers [9], and testers [2].

In particular, cognitive biases have been shown to distort architectural decision-making [18] by influencing software architects’ reasoning [14]. This influence can be particularly strong, since every system’s architecture is in essence a set of design decisions [6] made by individuals. Thorough education about cognitive biases has turned out to significantly improve software effort estimation [12], a practice severely afflicted by cognitive biases [5]. Following this line of work, we examine (RQ): whether educating software architects about cognitive biases provides a beneficial debiasing effect that increases the rationality of their decision-making.

To answer this question, we designed an experiment and ran a pilot study with two groups of students. The preliminary findings show that merely educating engineers about the possible impact of cognitive biases is not sufficient to mitigate their influence on design decisions.

Therefore, more advanced debiasing techniques are needed. We analysed how exactly cognitive biases influenced various elements of the conversation (arguments, counterarguments, and general discussion). Based on that, we proposed additional debiasing techniques that can be combined into a more effective debiasing treatment. We plan to perform a modified version of this experiment on a larger sample in the near future. Our long-term objective is to develop effective debiasing techniques for architectural decision-making.

2 Related Work

The concept of cognitive biases was introduced by Tversky and Kahneman in their work on the Representativeness, Availability and Anchoring biases [17]. Cognitive biases are a by-product of the dual nature of the human mind – intuitive (known as System 1) and rational (known as System 2) [7]. When the logic-based reasoning of System 2 is not applied to the initial decisions of System 1, the resulting decision can be considered biased.

Software architecture, defined as a set of design decisions [6], is influenced by various human factors [16]. One of these factors is cognitive biases [18]. Recent research has shown that their influence on architectural decision-making is significant [8, 14, 18, 19]. When no debiasing interventions are applied, the consequences of such biased decisions can be severe – for example, taking on harmful Architectural Technical Debt [1].

In the domain of architectural decision-making, various debiasing techniques have been proposed [1, 18]. Techniques that prompt designers to reflect on their decisions have turned out to be effective in improving the quality of the reasoning behind design decisions [15].

Debiasing by educating software developers about the existence of cognitive biases and their influence has recently been proven to be a powerful tool in the realm of software effort estimation [12]. The effectiveness of this approach in debiasing architectural decision-making has not yet been empirically tested.

3 Study Design

3.1 Bias Selection

Based on the cognitive biases previously researched in relation to software development [10], as well as those previously shown to influence software architecture [1, 18, 19], we selected three cognitive biases as the subject of the experiment:

  1. Anchoring – when an individual over-relies on a particular solution, estimate, information or item, usually the first one that they discovered or came up with [17].

  2. Optimism bias – when baseless, overly positive estimates, assumptions and attributions are made [11].

  3. Confirmation bias – the tendency to avoid the search for information that may contradict one’s beliefs [13].

3.2 Data Acquisition

In order to obtain the data for our study, we took part in four meetings with two groups of students that were working on a group project during their coursework. The meetings were conducted online through the MS Teams platform. Both groups were supposed to plan, design and implement a system as a part of their course. The topic for the project was at their discretion, with the only hard requirement being the use of Kubernetes in their solution.

In the case of one of the groups, we prepared a presentation during which we explained the concept of cognitive biases, and how they can influence architectural decision-making. We explicitly explained the three researched cognitive biases and gave examples of their possible influence on the students’ project. We did not mention anything about cognitive biases or debiasing to the second group.

The meetings proceeded as follows:

  1. We asked the participants for their consent to record the meeting and to use their data for the purpose of our research.

  2. In the case of the debiased group (Team 2), we showed them our presentation about cognitive biases in architectural decision-making. We did not perform this action with the other group (Team 1).

  3. The meeting continued naturally, without our participation, although a researcher was present and made notes when necessary.

We also asked the participants to fill in a small survey to obtain basic statistical data about them.

3.3 Data Analysis

The recordings from the meetings were transcribed. In order to identify the cognitive biases and their influence on decision-making, we defined the coding scheme presented in Table 1. The codes were applied to indicate the occurrences of the researched biases, as well as the arguments for and against the discussed architectural decisions.

The first and second author coded the transcripts independently. Then, they used the negotiated coding [4] method to discuss and correct the coding until they reached a full consensus.

Subsequently, we counted the number of occurrences of each code, and analysed the fragments of the meetings that were found to have been influenced by cognitive biases.
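
To make this counting step concrete, the following minimal sketch (in Python; not part of the original study) shows how coded transcript segments could be tallied into per-team counts of the kind reported in Sect. 4. The code labels (e.g. "anchoring", "argument_for") and the data layout are illustrative assumptions; the actual coding scheme is the one defined in Table 1.

from collections import Counter
from dataclasses import dataclass, field

# Hypothetical code labels; the actual scheme is defined in Table 1.
BIAS_CODES = {"anchoring", "optimism", "confirmation"}
ARGUMENT_CODES = {"argument_for", "argument_against"}

@dataclass
class Segment:
    """One coded fragment of a meeting transcript."""
    speaker: str
    text: str
    codes: set = field(default_factory=set)  # codes agreed on via negotiated coding

def tally(segments):
    """Count code occurrences and derive per-team metrics."""
    counts = Counter()
    for seg in segments:
        counts.update(seg.codes)
        # an argument is treated as biased if it also carries a bias code
        if seg.codes & ARGUMENT_CODES and seg.codes & BIAS_CODES:
            counts["biased_argument"] += 1
        # biases occurring outside of argumentation (general conversation)
        if seg.codes & BIAS_CODES and not seg.codes & ARGUMENT_CODES:
            counts["bias_outside_arguments"] += 1
    total_args = counts["argument_for"] + counts["argument_against"]
    pct_biased = 100 * counts["biased_argument"] / total_args if total_args else 0.0
    return counts, pct_biased

# Fictional example segment, coded as a biased (anchored) argument in favour
segments = [Segment("P1", "Let's just use the first database we thought of.",
                    {"argument_for", "anchoring"})]
counts, pct_biased = tally(segments)
print(counts, f"{pct_biased:.0f}% of arguments biased")

Per-team tallies of this kind correspond to the counts and percentages visualised in Figs. 1–3.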

3.4 Participants

We recorded four meetings with two different groups of students who were working on their Master’s degrees in Computer Science at Warsaw University of Technology. The students grouped themselves into teams according to their own preferences and had to choose a team leader. Each team consisted of five members. All of the students except one had prior professional experience in software development. More detailed information on the students is presented in Table 2.

Table 1. Coding scheme
Table 2. Participant data

4 Results

Using the coding scheme presented in Table 1, we obtained the following information:

  • The percentage of biased arguments in statements for or against certain architectural decisions (see Fig. 1).

  • How many arguments for and against certain architectural decisions were made during the meeting (see Fig. 2).

  • How many of these arguments and counterarguments were influenced by cognitive biases (see Fig. 3).

  • How many cognitive biases were present in statements not related to architectural decisions (see Fig. 3).

Figure 1, which presents the percentage of biased arguments used during the meetings, shows that Team 1 (non-debiased) used more rational arguments than Team 2 (debiased). This means that the debiasing treatment – simply informing the participants about the existence of cognitive biases – was ineffective.

Figure 2 shows that there was a significant difference between the number of arguments and counterarguments in the discussions. Teams were less likely to discuss the drawbacks of their decisions than their positive aspects.

Fig. 1. Biased arguments

Fig. 2. Argument count

Figure 3 illustrates the number of biased statements, as well as the ratio between the researched biases depending on statement type.

Fig. 3. Biases in statements

In the case of both teams, most cognitive biases were present in statements not related to architectural decision-making. In this type of discussion, confirmation bias and optimism bias were the most prevalent. This was usually due to the teams’ need to reassure themselves that their course of action was correct.

In both teams, most of the biased arguments were influenced by the anchoring bias. This means that both teams considered the solutions that came to their minds first, without any additional argumentation on why a specific solution was correct. When it comes to counterarguments against specific architectural solutions, confirmation bias was prevalent in both teams. This was usually due to the teams’ unwillingness to change a previously made decision.

5 Threats to Validity

In this work, we describe a pilot study. Its main weakness is the small number of participants. This means that all of our findings are preliminary and cannot be considered final. We plan to perform a modified version of this experiment with a larger number of teams to obtain more data and verify our findings.

6 Discussion

The team that did not receive our debiasing presentation used a significantly lower number of biased arguments. This implies that a simple debiasing treatment – merely informing participants about the biases – is not strong enough to counter the influence of cognitive biases on architectural decision-making.

We observed a typical scenario of bias-influenced architectural decision-making. First, one team member proposes the idea that first came to their mind (an idea prompted by System 1). If the solution does not disturb the current project, other team members are unlikely to give any counterarguments (only around half of the arguments used were counterarguments), as they are already anchored on the initial proposition. If the solution requires changes to previously made decisions, other team members, due to confirmation bias, are likely to give biased counterarguments to avoid the changes. Additionally, the whole atmosphere of the conversation is heavily influenced by confirmation bias and optimism bias, making the team unlikely to notice any errors in their decision-making.

With these findings in mind, we propose (Sect. 7) a set of modifications to our debiasing approach.

7 Research Outlook

Since the pilot study showed that a simple debiasing treatment does not help to overcome the biases, we plan to extend and repeat this experiment with the following modifications:

  • Since most of the biased arguments in favour of a solution were influenced by anchoring, and participants were overall less likely to use counterarguments, we propose that the person presenting a solution should also present at least one of its drawbacks.

  • Since most biased counterarguments were influenced by confirmation bias, due to the teams’ reluctance to change a previously made decision, we propose that one of the team members should monitor the discussion and point out occurrences of such biased argumentation.

  • Since optimism bias and confirmation bias influenced the overall atmosphere of the meetings, we propose that, at the end of each meeting, after making the initial decisions, teams should explicitly list their drawbacks. Then, if the need arises, the decisions should be changed accordingly.

  • We will add an additional code to the coding scheme – “decision” – denoting the decision that was ultimately made during the meeting. This will enable us to count how many rational and biased arguments were made in favour of the decisions that were eventually chosen (a tallying sketch follows this list).

  • Instead of a simple debiasing presentation, we will hold a longer debiasing workshop. During this workshop, we will do more than simply inform the participants about the influence of cognitive biases on architectural decision-making. The participants will also be taught, through a series of practical exercises, how to apply our debiasing techniques.

  • The next experiment will be performed on a significantly larger sample of participants.
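
As an illustration of the planned “decision” code mentioned above, the sketch below (in Python; the field names and data layout are purely hypothetical, not the final coding scheme) shows one way to link arguments to the decisions that were eventually chosen and to count how many of those arguments were rational versus biased.

from collections import Counter

# Hypothetical coded records: each argument refers to the decision it concerns;
# decisions ultimately made during the meeting carry the new "decision" code.
arguments = [
    {"decision_id": "db-choice", "stance": "for", "biased": True},
    {"decision_id": "db-choice", "stance": "against", "biased": False},
    {"decision_id": "queue-choice", "stance": "for", "biased": False},
]
chosen_decisions = {"db-choice"}  # ids of segments tagged with the "decision" code

def argument_quality_for_chosen(arguments, chosen_decisions):
    """Count rational vs. biased arguments made in favour of the decisions
    that were ultimately chosen during the meeting."""
    tally = Counter()
    for arg in arguments:
        if arg["decision_id"] in chosen_decisions and arg["stance"] == "for":
            tally["biased" if arg["biased"] else "rational"] += 1
    return tally

print(argument_quality_for_chosen(arguments, chosen_decisions))  # Counter({'biased': 1})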

8 Conclusion

The preliminary results (see Sect. 4) show that a simple presentation about cognitive biases and their possible influence on architectural decision-making is not an effective debiasing method. At the same time, the pilot study revealed crucial information about how biases influenced the arguments for and against certain decisions. This made it possible to develop a series of modifications to our debiasing approach (presented in Sect. 7) and to reshape the experiment accordingly.