
1 Introduction

The human brain is the most complex organ in the human body. With a view to reaching a better understanding of how the brain functions, the Neuroscience approach was developed. Neuroscience is the study of the nervous system and promotes understanding of how the mechanisms of our body function. Neuroscience has been used by many areas of knowledge to improve systems [23].

With regard to decision making, this approach seeks to provide a fuller understanding of the mechanisms that underlie the decision process. As Neuroscience can be related to the decision process of many different areas, some specific approaches have been developed, such as: Neuroeconomics, NeuroIS, Consumer Neuroscience, Neuromarketing, Management Neuroscience and Organizational Neuroscience [20].

Neuroeconomics has become a complement to classical economic theories, since these alone are no longer sufficiently broad to represent and fully encompass the decision process [3, 4, 12, 15]. NeuroIS was developed to better understand cognition, emotion and behavior processes and arose from research studies on neuro-adaptive information systems [16].

Consumer Neuroscience is used to identify consumers’ preferences, while Neuromarketing applies these findings to develop products that are compatible with those preferences. Both approaches have produced suggestions on how to guide design concepts and how to present products [5, 9, 13].

Since neuroscience has become an important support tool for several areas of knowledge, several kinds of equipment that measure bodily variables have been developed. These include galvanic skin response sensors, heart rate monitors, and devices that measure electrical signals between neurons, the oxygenation rate of hemoglobin molecules, and ocular movements.

In this context, experiments have been conducted that use these tools to analyze decision situations. Using fMRI (functional Magnetic Resonance Imaging) to analyze brain activation, Sanfey et al. [17] presented a simple game, the Ultimatum Game, to evaluate the limitations of classical economic models in providing a realistic representation of the decision-making process. Goucher-Lambert et al. [5] and Sylcott et al. [21] evaluated consumers’ preference judgments for sustainable products and for combinations of the form and function of a product.

With specific regard to eye movements, Ares et al. [1] and Guixeres [6] used eye-tracking to evaluate differences between yogurt labels and the effectiveness of advertisements, respectively. Using eye-tracking and electroencephalography (EEG), Slanzi et al. [19] and Khushaba [9] evaluated clicks on five websites and consumers’ preference for three types of crackers.

As to the multicriteria decision process and Neuroscience, there are papers in the literature that evaluate several criteria, but none of them uses Neuroscience as a tool to support multicriteria decision processes, which shows the gap between these approaches [7, 10].

Therefore, this paper sets out to evaluate behavioral aspects of the FITradeoff method. To do so, an experiment was undertaken and its results evaluated. The experiment was designed to analyze the specific step of graphical visualization in the FITradeoff Decision Support System (DSS). Thus, the research question concerns how decision makers evaluate graphical visualization and, therefore, how this leads them to select the best alternative. To conduct this experiment, eye-tracking equipment was used. There were two end-purposes: to give insights to the analyst and to improve the FITradeoff DSS.

This paper is organized as follows. Section 2 gives a brief description of the FITradeoff Method. Section 3 describes a behavioral experiment; Sect. 4 gives the results from the experiment while Sect. 5 analyzes and discusses these results. Final remarks are made and some conclusions are drawn in Sect. 6, which also suggests some lines for future research studies.

2 Flexible Interactive Tradeoff Method

The Flexible Interactive Tradeoff method (FITradeoff) [2] was developed in order to elicit scaling constants in the context of Multi-Attribute Value Theory (MAVT) [8]. The method is based on the Traditional Tradeoff procedure [8] and shares its axiomatic structure, but FITradeoff has some advantages over the traditional method.

The FITradeoff method has three steps: evaluating the intra-criteria values, ranking the criteria weights, and eliciting the criteria weights. The first step is common to most multicriteria methods; in this case, the decision maker (DM) imports the decision matrix.

The second step is the same as in the Traditional Tradeoff: the DM compares the criteria and ranks their weights. After this step, the first inequality is obtained, as presented in expression (1), in which ki is the scaling constant of criterion i.

$$ k_{i} > k_{j} > k_{m} > \cdots > k_{n} $$
(1)

In the third step, consequences in the decision matrix are compared, considering adjacent criteria in the ranking. The best consequence of the second criterion is compared to a hypothetical consequence of the first criterion, which is lower than the best consequence of that criterion. From the strict preferences expressed by the DM, two inequalities can be obtained, as shown in expressions (2) and (3).

$$ k_{j} v_{j} (x'_{j} ) > k_{j + 1} $$
(2)
$$ k_{j} v_{j} (x^{\prime\prime}_{j} ) < k_{j + 1} $$
(3)

Compared to the Traditional Tradeoff elicitation procedure, the difference in this step is the absence of the indifference point. In FITradeoff, the DM does not need to express the exact point of indifference, which is why the procedure is considered cognitively easier. According to Weber and Borcherding [22], the difficulties found in identifying indifference points lead to 67% inconsistency in results.

After each comparison made by the DM, a linear programming problem (LPP) is solved using the inequalities obtained above, which represent the DM’s preference for one or the other consequence. Thus, after each LPP has been solved, the set of alternatives is narrowed down to the Potentially Optimal Alternatives (POA). FITradeoff is considered interactive because of this step, in which the DM makes comparisons and analyzes the POA throughout the whole process.
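To make the role of this LPP concrete, the sketch below checks whether a given alternative can still be optimal under the ranking in expression (1) and any elicited inequalities of the form (2)–(3). It is only an illustration of the idea, not the formulation implemented in the FITradeoff DSS: it assumes the criteria are already indexed in decreasing order of weight and that the marginal values are normalized, and it approximates strict inequalities with a small epsilon; the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def is_potentially_optimal(a, values, extra_ub=None, eps=1e-6):
    """Check whether alternative `a` can be optimal for some admissible
    weight vector k (a sketch, not the FITradeoff DSS formulation).

    values   : (m alternatives x n criteria) array of marginal values
               v_i(x_j), assumed normalized to [0, 1].
    extra_ub : optional list of (coeffs, rhs) pairs encoding elicited
               inequalities such as (2)-(3), written as coeffs @ k <= rhs.
    """
    m, n = values.shape
    A_ub, b_ub = [], []

    # Ranking of scaling constants, expression (1), assuming criteria are
    # indexed in decreasing order of weight; strict '>' approximated by eps.
    for i in range(n - 1):
        row = np.zeros(n)
        row[i], row[i + 1] = -1.0, 1.0          # k_{i+1} - k_i <= -eps
        A_ub.append(row)
        b_ub.append(-eps)

    # Alternative `a` must be at least as good as every other alternative.
    for b in range(m):
        if b != a:
            A_ub.append(values[b] - values[a])  # v(b) - v(a) <= 0
            b_ub.append(0.0)

    # Inequalities obtained from the DM's strict preference statements.
    for coeffs, rhs in (extra_ub or []):
        A_ub.append(np.asarray(coeffs, dtype=float))
        b_ub.append(rhs)

    # Weights sum to 1; only feasibility matters, so the objective is zero.
    res = linprog(c=np.zeros(n), A_ub=np.vstack(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, 1)] * n,
                  method="highs")
    return res.status == 0  # feasible -> alternative remains in the POA set
```

Each new answer from the DM adds a pair of inequalities like (2)–(3); re-running a feasibility test of this kind for every alternative reproduces the shrinking POA set described above.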

Another advantage of FITradeoff is that it presents information that can be visualized graphically, in particular the POA, which helps the DM to make decisions. This feature makes FITradeoff a flexible tool: the time the DM needs to process information is reduced and, consequently, the DM can reach a final solution more quickly and stop the process of seeking the best alternative earlier.

Besides making use of a Neuroscience approach, the focus of this research is on analyzing how graphical visualization is used. The next section discusses the three types of graphical visualization supported by the FITradeoff DSS (Bar Graph, Bubble Graph and Spider Graph; see Fig. 1) and two others added for this study. The FITradeoff elicitation process is illustrated in Fig. 2.

Fig. 1. FITradeoff graphics

Fig. 2. FITradeoff process

The FITradeoff method is available by request to the authors at www.fitradeoff.org.

3 Behavioral Experiment with Neuroscience Tools

Graphical visualization can be used in a decision-making process as a support tool to help the DM. Therefore, an experiment was undertaken, the purpose of which was to analyze how DMs understand different types of graphical visualization and use them to make decisions.

Five types of visualization were used in the experiment: Bar Graph (G), Bubble Graph (GBubble), Spider Graph (GSpider), Table (T) and Bar Graph with Table (GT). In total, twenty-four graphics were compiled, consisting of different combinations of items (alternatives vs. criteria) and different scaling constants (same weights (S) and different weights (D)).

Bar graphs were the predominant type, with eighteen units, which differed from each other in the number of alternatives and criteria (three, four or five of each). These eighteen graphics were split into two groups of nine, one with the same weights and the other with different weights. For example, GS3A3C is the acronym for the bar graph with the same weights, 3 alternatives and 3 criteria, and GD4A5C is the acronym for the bar graph with different weights, 4 alternatives and 5 criteria.

As to the Bubble and Spider graphs, only one unit of each was developed, with the same weights, four alternatives and five criteria (GSpider4A5C and GBubble4A5C). For the Table and the Bar Graph with Table, two units each were developed, with the same weights, three or four alternatives and five criteria (T3A5C, T4A5C, GT3A5C and GT4A5C). These types of visualization were developed so that they could be compared with the corresponding bar graphs (GS3A5C and GS4A5C), with the aim of analyzing which is best for the DM.
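As a simple illustration of this naming scheme, the snippet below reconstructs the twenty-four stimulus labels described above; it reproduces only the labels, not the graphics themselves.

```python
# Reconstruction of the twenty-four stimulus labels used in the experiment.
bar_graphs = [f"G{w}{a}A{c}C"                 # e.g. GS3A3C, GD4A5C
              for w in ("S", "D")             # same or different weights
              for a in (3, 4, 5)              # number of alternatives
              for c in (3, 4, 5)]             # number of criteria

other_forms = ["GSpider4A5C", "GBubble4A5C",  # spider and bubble graphs
               "T3A5C", "T4A5C",              # tables
               "GT3A5C", "GT4A5C"]            # bar graphs with tables

stimuli = bar_graphs + other_forms
assert len(stimuli) == 24
```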

After the graphics had been developed, they were arranged into three distinct sequences. The first sequence, S1, was characterized by an increasing degree of difficulty for the DM, related to the number of items: it began with the nine bar graphs with the same weights, followed by the six other types of visualization, and ended with the nine bar graphs with different weights. The second sequence, S2, had a decreasing degree of difficulty and was constructed in the opposite order to S1. Finally, S3 presented the bar graphs in a totally random order. In all cases, a sequence had twenty-four visualization forms; the position of the bar graphs varied, while the other visualization forms were kept in the middle of the sequence.

To conduct the experiment, the Tobii X120 eye-tracking equipment with Tobii Studio software was used. This equipment measures eye movements by emitting infrared light and capturing its reflection from the cornea. Using elements available in the eye-tracking software, three similar experiments were built, one for each sequence, each comprising explanatory slides, images of each form of visualization and questionnaires. A questionnaire was presented after each image with the following question: What is the best alternative?

The best alternative for each graph was defined beforehand by the researcher using the Additive Model [19]. The researcher wished to evaluate the hit rate (HR) for each graph and how it related to the left eye pupil diameter (LEPD) and the fixation duration (FD).
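The additive model is simply a weighted sum of the marginal values of each alternative. The sketch below shows how the best alternative could be determined in this way; the decision matrix and weights are illustrative placeholders, not the ones used in the experiment.

```python
import numpy as np

def best_alternative(values, weights):
    """Additive model: the best alternative maximizes the weighted sum
    of the (already normalized) marginal values."""
    scores = values @ weights          # one global value per alternative
    return int(np.argmax(scores)), scores

# Illustrative 3-alternative x 3-criteria matrix with equal weights.
values = np.array([[1.0, 0.4, 0.7],
                   [0.6, 1.0, 0.2],
                   [0.3, 0.8, 1.0]])
weights = np.array([1/3, 1/3, 1/3])

idx, scores = best_alternative(values, weights)
print(f"Best alternative: A{idx + 1}, scores = {np.round(scores, 3)}")
```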

An initial sample of fifty-four management engineering students and PhD professors took part in the experiment. A total of thirty-six recordings of eye movements were used and their results analyzed. The recordings were of sixteen undergraduate students, ten master’s students, six doctoral students and four PhD professors. The sequence assigned to each participant was determined at the convenience of the researcher and in accordance with the availability of the participants, resulting in a sample of twelve participants for each sequence.

Finally, the sessions were held in the NSID (NeuroScience for Information and Decision) laboratory. Prior instructions were provided in the same way for each participant, and the research project was approved by the Ethics Committee of the Federal University of Pernambuco before the data were collected. Figure 3 shows a participant taking part in the experiment.

Fig. 3. A participant in the experiment

4 Results of the Experiment

Based on the variables collected in the experiment, some results were generated in order to analyze the HR variable. This was considered the most relevant variable because of its relationship to the research question: how do participants understand the graphical visualization forms and, therefore, how does this lead them to select the best alternative?

Hit Rate values were derived from the ratio of the number of correct answers to the total number of answers for each graph. A correct answer was deemed to be the best alternative for each graph, as previously found by the Additive Model. Therefore, the researcher compared the participants’ answers to the questionnaires with the answers from the Additive Model to determine how many of a participant’s answers were correct. Table 1 presents the HR for each type of visualization, following the order in which it appears in each sequence.

Table 1. Hit rate
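For clarity, the sketch below computes the hit rate per graph from a table of participant answers; the layout, column names and values are assumptions for illustration, not the format exported by the eye-tracking software.

```python
import pandas as pd

# Illustrative layout: one row per (participant, graph) answer.
answers = pd.DataFrame({
    "graph":  ["GS3A3C", "GS3A3C", "GS3A3C", "GD4A5C", "GD4A5C", "GD4A5C"],
    "answer": ["A1", "A1", "A2", "A3", "A3", "A3"],
})

# Best alternative per graph, previously obtained with the additive model.
best = {"GS3A3C": "A1", "GD4A5C": "A3"}

answers["correct"] = answers.apply(
    lambda r: r["answer"] == best[r["graph"]], axis=1)

# Hit rate = correct answers / total answers for each graph.
hit_rate = answers.groupby("graph")["correct"].mean()
print(hit_rate)
```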

The first analysis attempted to explain HR using FD, based on the following reasoning: the longer a graph is visualized, the more difficult it is to analyze and the lower the HR value. Thus, the research question was: Is there a correlation between the variables (FD and HR) for each sequence?

In order to develop this analysis, the original FD values extracted from each of the recordings were simplified: the average of the twelve values extracted for each graph was calculated, resulting in a single final value per graph, as shown in Table 2.

Table 2. Average duration of fixation in milliseconds

The second analysis was developed to evaluate the HR using the LEPD. As to pupil diameter, several studies have shown that this variable has a strong relationship with the intensity of mental activity: the diameter is greater when a greater effort is made [11, 14].

Thus, this analysis is based on the following reasoning: the larger the pupil diameter, the more difficult it is to analyze a visualization and, therefore, the lower the HR value. A similar research question was drawn up to test this hypothesis: Is there a correlation between the variables (LEPD and HR) for each sequence?

In order to perform this analysis, all LEPD values captured during the recordings were extracted and separated by visualization type for each participant. An average was then calculated for each participant, giving a single LEPD value per visualization type. Finally, the twelve participant values for each graph were averaged, yielding a unique value for each graphic in each sequence. Across all visualization forms, the LEPD ranged between 4.05 and 4.75 mm.
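A minimal sketch of this two-level averaging is given below, assuming the raw pupil samples are available in a long-format table; the column names and values are illustrative only.

```python
import pandas as pd

# Illustrative long format: one row per pupil-diameter sample (in mm).
samples = pd.DataFrame({
    "sequence":    ["S1"] * 8,
    "participant": ["P1", "P1", "P1", "P1", "P2", "P2", "P2", "P2"],
    "graph":       ["GS3A3C"] * 8,
    "lepd_mm":     [4.10, 4.15, 4.20, 4.05, 4.60, 4.55, 4.70, 4.65],
})

# First level: one LEPD value per participant and visualization type.
per_participant = (samples
                   .groupby(["sequence", "graph", "participant"])["lepd_mm"]
                   .mean())

# Second level: average over the participants of each sequence, giving a
# single LEPD value per graph in each sequence.
per_graph = per_participant.groupby(["sequence", "graph"]).mean()
print(per_graph)
```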

Only the left eye was chosen for the analysis, so as to simplify the experiment. This choice is supported by the literature, which gives evidence that the results from analyzing either eye are equivalent [18].

For these two analyses, the Spearman correlation was applied to assess the relationship between the variables selected for evaluation. The results are given in Table 3. The low correlation coefficients indicated the absence of a strong association between these variables.

Table 3. Results of the Spearman Correlation
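The correlation itself can be computed with scipy.stats.spearmanr on the per-graph averages and the corresponding HR values, as sketched below with placeholder numbers (not the experimental data).

```python
from scipy.stats import spearmanr

# Placeholder per-graph values for one sequence (not the experimental data).
hr = [0.92, 0.83, 0.75, 0.67, 0.58, 0.92, 0.50, 0.83, 0.75]
fd = [1200, 1500, 1800, 2100, 2400, 1300, 2600, 1600, 1900]  # in ms

rho, p_value = spearmanr(hr, fd)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```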

Because of the absence of correlation in the analyses above, a final, descriptive analysis was developed with a view to recommending a minimum confidence level for graphs with the same number of items. The aim of this descriptive analysis was to support the analyst in recommending whether or not to use graphical visualization in decision problems.

To perform this analysis, a quality interval was constructed using acceptance levels, estimated by the researcher, based on the number of wrong answers for each graph, as shown in Table 4.

Table 4. Quality interval

Therefore, by aggregating the HR and the classification of each graphic, the minimum confidence level was estimated based mainly on the worst HR values in S3. This sequence was chosen because of its randomness, in an attempt to obtain a more robust confidence level for all graphics. The confidence levels for the six other types of visualization, compared with the corresponding bar graphs, are shown in Table 5. For the bar graphs, those with equal weights are compared to those with different weights, as shown in Table 6.

Table 5. Confidence level for six other types of visualization
Table 6. Confidence level for bar graphs

In addition to the analyses of the HR, a complementary analysis was developed using areas of interest (AOI) to further enhance the FITradeoff DSS. Areas of interest are regions drawn on each graphic from which the variables are collected. Based on eye-tracking, the FD was collected for each graph and for each specific area within the graphs. Thus, areas of interest were set for each criterion in each bar graph with different weights, in order to evaluate how participants visualized each criterion and to confirm that the weights were being positioned consistently (from left to right) in the FITradeoff DSS. Table 7 shows the criterion most visualized in each graphic for each sequence.

Table 7. AOI analysis
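As an illustration of how the most visualized criterion can be obtained from an AOI export, the sketch below picks, for each graph in each sequence, the criterion AOI with the highest total FD; the table layout and values are assumed for the example.

```python
import pandas as pd

# Illustrative AOI export: total fixation duration (ms) per criterion AOI.
aoi_fd = pd.DataFrame({
    "sequence":  ["S1"] * 5,
    "graph":     ["GD3A5C"] * 5,
    "criterion": ["C1", "C2", "C3", "C4", "C5"],   # placed left to right
    "fd_ms":     [2100, 1800, 1500, 900, 700],
})

# Most visualized criterion (highest FD) for each graph in each sequence.
most_viewed = (aoi_fd
               .sort_values("fd_ms", ascending=False)
               .groupby(["sequence", "graph"])
               .head(1))
print(most_viewed[["sequence", "graph", "criterion", "fd_ms"]])
```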

Thus, based on Table 7, for the graphs with different weights, the left and central criteria were the most visualized, since they received the highest FD. This result indicates that the criteria were properly positioned in the FITradeoff DSS. The next section offers further comments on these results.

5 Discussion of Results

The goal of the experiment was to evaluate graphical visualization, given the flexibility that this tool brings to the FITradeoff method. Based on the research question (how do decision makers evaluate graphical visualization and how do they use it to select one of the final alternatives?), there were two main purposes: to help the analyst obtain insights and to improve the FITradeoff DSS. The analyses that were developed are discussed in this section.

The Hit Rate variable was the focus of this research due to its relevance for the research question. Therefore, two variables (FD and LEPD) were collected using eye-tracking, and the Spearman correlations (HR vs. FD and HR vs. LEPD) were calculated in an attempt to explain the HR.

However, based on the results of the correlation, it was not possible to verify a relationship between these variables. Thus, nothing could be stated about the difficulty related to the number of items on the basis of FD and LEPD. To further explore the HR, a descriptive analysis was performed, thereby providing a confidence level for each type of visualization with a similar number of items.

Therefore, based on the confidence levels, some conclusions can be drawn. First, S3 had a greater number of hits than the other two sequences. Secondly, the bar graphs with equal weights had a higher HR when compared individually with the graphs with different weights. Thirdly, the Spider graph may be more appropriate for problems with a large number of items. Fourthly, in this experiment, the Tables received a higher HR than the other types of visualization.

Based on these conclusions and the two main goals of the experiment (giving insights to the analyst and improving the FITradeoff DSS), it is observed that the conclusions about the bar graphs with equal weights and the Spider graph can be used by the analyst, in addition to Table 2, as possibilities that may well assist in solving decision problems. The fourth conclusion, that tables receive a higher HR than other forms of graphical visualization, can be used as a recommendation to include them in the FITradeoff DSS, since they are not included at present. As to enhancing the FITradeoff DSS, the AOI analysis confirmed that the criteria placed in left-central positions, which carry the highest weights, received the highest FD.

More generally, in addition to the discussion of the results, further questions can be raised, such as: If the sample were more diverse, would tables continue to lead to a better HR? Are other factors associated with HR, such as the way the data were composed (indicating that some decision matrices should not be used to build graphics)? These questions can be explored in future research.

6 Conclusion

The Neuroscience approach is characterized as the study of the neural system and of how this affects processes in the human body. This research study was developed with a view to integrating neuroscience into a multicriteria decision-making approach, in particular the FITradeoff method.

Several studies in the literature have applied neuroscience experiments to decision-making, but most of them are related to decisions in the cognitive sense within a health context. As regards decision-making in the organizational context, most of these studies are related to risk decision analysis, many of them within a utility theory background. Yet no studies have been found in the literature that do so in conjunction with specific multicriteria methods, such as the tradeoff elicitation procedure or the FITradeoff method. Thus, this paper developed and applied an experiment, using eye-tracking, to investigate how participants understood graphical visualization and whether this led them to select the best alternative. The objectives were to improve the design of the DSS and to assist the analyst in obtaining insights.

As to the first objective, a confidence level for each type of visualization was developed, which can be used in any other multicriteria method at the discretion of the analyst. This possibility is particularly helpful if the analyst has not hitherto been aware of which graphical visualization forms may assist in tackling decision problems. With regard to the second objective, these preliminary studies indicate that the use of tables led the participants to better answers than the other visualizations. Currently, the DSS does not include the possibility of using tables; therefore, this paper suggests that including tables in the DSS may well assist a DM. Further studies need to be conducted on this topic, also considering the possibility that this may depend on different DMs’ styles.

Finally, as suggestions for future research, the authors recommend developing experiments to investigate issues in eliciting preferences, which are related to the third step of the FITradeoff method. Secondly, it would be helpful to replicate the experiment undertaken in this study with a larger and more diverse sample.