
1 Introduction

Making choices and decisions is one of the fundamental human activities. In the work of managers it plays such a key role that some authors equate decision making with management (Drucker 1955). If we take into account the basic functions of management—planning, organizing, directing and controlling—decision making appears as their main ingredient (Soares 2010). Regardless of the function in which the decision-making process takes place, it is very important to ensure that it runs smoothly and brings the organization the best results. Making good decisions is a prerequisite for good management—decision accuracy is the most significant measure of a manager's performance (Harrison 1995). Due to the importance of the problem, different methods designed to support the decision-making process have been developed and used over the years. Whether these are theoretical models or IT systems that support decision makers in assessing the validity of each choice, the goal remains the same: to reduce the number of wrong decisions made by managers.

The field of decision-making support is still developing, and it draws on the latest achievements of many sciences. Tools supporting the decision process fulfill their role in many different ways, among them information management, quantification of data and manipulation of models. Information management refers to storing, searching and reporting information in a format convenient for the user. Quantification of data is a process in which large amounts of information are condensed and presented using a few basic indicators that capture the essence of the data. Manipulation of models refers to the creation and execution of different scenarios to answer questions such as "What would happen if …?" (Gupta and Harris 1989). To this traditional view of decision support systems one more functionality could be added: indicating the conditions under which a better decision can be made. In this context, cognitive neuroscience techniques can be very helpful. At the basis of this approach is the assumption that once we get to know the neurological grounds of decision making and decode the decision-making process that takes place in the brain, we will also be able to understand and take advantage of the factors that contribute to making right decisions. Research in this area is still at an early stage of development.

The aim of the presented research is to determine whether, by analyzing brain signals, we are able to predict the adequacy of a decision made by a subject. The first section of the chapter shows how cognitive neuroscience techniques can be used in decision-making support. The next part explains the scenario of the experiment, and the following sections describe the successive phases of data acquisition and analysis: EEG recording, pre-processing with signal filtering and artifact removal, and an algorithm for extracting features from the signals. Finally, the empirical results and conclusions are presented.

2 Cognitive Neuroscience Techniques in Managerial Decisions

Among the possible applications of cognitive neuroscience techniques in relation to managerial decisions, several main directions that have gained particular popularity can be distinguished. These include research concerning (Yu and Zhou 2007; Loewenstein et al. 2008):

  • the role of emotions in decision making,

  • decision making under risk and uncertainty,

  • decisions in a social context,

  • determining the usefulness and rationality of decisions.

Representative publications concerning research on the above-mentioned subjects are summarized in Table 17.1. Their assignment to particular groups is, to some extent, arbitrary, as experiments very often relate to several subjects.

Table 17.1 Examples of studies on the use of cognitive neuroscience techniques in relation to managerial decisions (own elaboration)
Publications included in the table do not cover all the research that is being done in experimental economics with the application of cognitive neuroscience tools. They present, however, an overview of the most popular issues that have recently been examined by experimental economists.

The presented experiment and its results concern decisions made under conditions of risk and uncertainty. In this chapter, we try to predict a manager's decision on the basis of observing brain activity during the decision-making process. The data are registered with the use of an electroencephalograph (EEG), owing to its small size and low cost. The decisions made by participants concern the choice between two lotteries with different payoffs.

3 Experiment Design

The proposed experiment scenario for each participant includes several basic steps. They are shown in Table 17.2.

Table 17.2 Scenario of the experiment (own elaboration)

EEG registration for the experiment was carried out in accordance with the guidelines of the 10–20 system (Purves et al. 2013, p. 32), which specifies the locations of the individual electrodes on the scalp. A scheme of their placement, with particular emphasis on the electrodes used in this experiment, is shown in Fig. 17.1.

Fig. 17.1 Placement of electrodes according to the 10–20 system (Trans Cranial Technologies 2012)

The focus remains on only 7 electrodes mounted over the frontal lobe (Fp1, Fp2, F7, F3, Fz, F4 and F8), because according to the findings of neuroscience this region contains the areas of the brain associated with higher cognitive functions.

After preliminary preparation of the participant, the experiment proper begins. The presentation starts with a black screen shown for 2 min, intended to let the participant calm down (Hosseini and Khalilzadeh 2010). Later in the experiment, the volunteers answer ten questions concerning the choice between two variants of a lottery (Table 17.3). This element is taken from the research publication of Holt and Laury (2002) and other articles that repeated their experiment in slightly different configurations (He et al. 2012; Delaney et al. 2014).

Table 17.3 Terms of the lottery presented during the experiment (source: Holt and Laury 2002)

The neurophysiological data recorded during the entire study help us answer the question of whether we are able to predict decision adequacy. Due to the topic of the experiment, we decided to choose as participants students and teachers of the Faculty of Economics and Management. There were 22 of them, aged 19–50. Because of their different native languages, the experiment was provided in Polish, English and Arabic. Signal acquisition was done using a Contec KT88 device with the sample rate set to 200 samples per second. The obtained data were exported to an EDF (European Data Format) file to enable more convenient processing in the Matlab environment.
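For illustration, loading such an export can be sketched as follows in Matlab; this assumes a recent release whose Signal Processing Toolbox provides edfread and edfinfo, and the file name and channel label are hypothetical.

  % Minimal sketch: load the exported EDF recording into Matlab.
  info = edfinfo('experiment01.edf');   % hypothetical file name; header metadata
  tt   = edfread('experiment01.edf');   % timetable, one variable per channel
  fs   = 200;                           % sampling rate used in the experiment [Hz]
  fp1  = vertcat(tt.Fp1{:});            % one channel's samples as a plain vector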

4 EEG Data Pre-processing

Raw EEG recordings contain, besides signals originating from brain activity, many nested interfering signals from the surroundings and peripheral devices. Simple digital filtering methods are used to retain the frequency components of interest. We applied high-pass and low-pass filters to remove components with frequencies below 0.4 Hz and above 50 Hz (Nitschke et al. 1998). Because the brain activity signals were recorded through electrodes placed on the scalp of a participant, eye blinks and muscle movements contaminated the EEG signal. Therefore, the next phase is the artifact removal process. A variety of methods have been proposed for correcting ocular and muscle artifacts; one common strategy is artifact rejection. In our experiment we used automatic artifact removal based on Blind Source Separation (BSS) techniques. The main method applied is wavelet Independent Component Analysis (wICA; Castellanos and Makarov 2006), which has proven useful for suppressing artifacts in EEG recordings, both in the time and frequency domains.
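The filtering step can be sketched in Matlab as follows; the fourth-order Butterworth design and zero-phase filtering are our illustrative assumptions, since the chapter does not name the filter family.

  % Sketch: retain 0.4-50 Hz (Signal Processing Toolbox assumed).
  fs = 200;                                  % sampling rate [Hz]
  [b, a] = butter(4, [0.4 50]/(fs/2), 'bandpass');
  filtered = filtfilt(b, a, raw);            % raw: samples x channels, zero-phase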

To explain this method, we will use the example of rejecting eye-blink artifacts. Such artifacts are present in the EEG signal as large pulses and have a great impact on the registered data. Figure 17.2a shows an example of an EEG signal taken from one of the experiment's subjects. The data segment contains an eye-blink artifact. The episode is localized around 0.5 s and spreads over almost all channels, most strongly affecting the frontal sites (Fp1 and Fp2). We clean the EEG signal with the wICA algorithm, which can be conducted by performing the following steps (Castellanos and Makarov 2006):

Fig. 17.2 (a) Eye-blink artifact in the EEG signal (channels from the top: Fp1, F7, F3, Fz, Fp2, F4 and F8); (b) EEG signal corrected by wICA; the x-axis represents time (own elaboration)

  1. Applying a conventional ICA algorithm to the raw EEG, obtaining the mixing matrix M and N independent components \( \left\{{s}_1(t),{s}_2(t),\dots, {s}_N(t)\right\} \).

  2. Obtaining wavelet-domain representations of the components \( {\left\{W\left(j,k\right)\right\}}_{s_i} \).

  3. Thresholding the wavelet coefficients: setting \( W\left(j,k\right)=0 \) for the coefficients that exceed the threshold, \( \left|W\left(j,k\right)\right|>K \).

  4. Applying the inverse wavelet transform to the thresholded coefficients \( W\left(j,k\right) \), thus recomposing components containing sources of neural origin only, \( \left\{{n}_i(t)\right\} \).

  5. Composing the wICA-corrected EEG: \( \tilde{X}(t)=M\cdot {\left[{n}_1(t),{n}_2(t),\dots, {n}_N(t)\right]}^T \).

The result of applying these steps to the EEG signals to remove artifacts is illustrated in Fig. 17.2b.
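A compact Matlab sketch of these five steps might look as follows; it assumes the third-party FastICA toolbox for step 1 and the Wavelet Toolbox, and the db8 wavelet, decomposition depth and hand-picked threshold K are illustrative choices, not prescribed by the original method.

  % Sketch of wICA cleaning (steps 1-5 above); X is channels x samples.
  function Xclean = wica_clean(X, K)
    [S, M] = fastica(X);                  % step 1: components S, mixing matrix M
    for i = 1:size(S, 1)
      [C, L] = wavedec(S(i,:), 5, 'db8'); % step 2: wavelet representation
      C(abs(C) > K) = 0;                  % step 3: zero high-amplitude coefficients
      S(i,:) = waverec(C, L, 'db8');      % step 4: keep neural-origin part only
    end
    Xclean = M * S;                       % step 5: recompose corrected EEG
  end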


5 Feature Extraction

Feature extraction is an important phase in the analysis of brain signal characteristics. During this stage, the major frequency sub-bands—delta, theta, alpha, beta and gamma—are extracted from the signals. To perform this operation we used the discrete wavelet transform (DWT), which is considered more advantageous than the Fourier transform for this purpose (Schiff et al. 1994). For this method, selecting a suitable decomposition level is very important. The scope of interest ranges between 0 and 50 Hz; we used a decomposition level of five, because the remaining ranges are considered noise or are used for other purposes such as epilepsy monitoring (Joyce et al. 2004). The decomposition of the signal also depends on the sampling frequency used for the recording; in our experiment we used 200 samples per second. To obtain a satisfactory outcome we applied the Daubechies db8 wavelet (Malina et al. 2002). As a result we obtain bands above 32 Hz for gamma, 13–31 Hz for beta, 8–12 Hz for alpha, 4–7 Hz for theta and below 4 Hz for delta, as illustrated in Fig. 17.3.

Fig. 17.3 DWT results for all five frequency bands; the x-axis shows the sample number, the y-axis the signal amplitude (own elaboration)
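In Matlab's Wavelet Toolbox, this five-level db8 split can be sketched as below; note that the dyadic band edges at 200 samples per second (e.g., 25–50 Hz for the second detail level) only approximate the nominal band limits quoted above.

  % Sketch: five-level db8 decomposition of one channel x at fs = 200 Hz.
  [C, L] = wavedec(x, 5, 'db8');
  gamma = wrcoef('d', C, L, 'db8', 2);   % D2: ~25-50 Hz
  beta  = wrcoef('d', C, L, 'db8', 3);   % D3: ~12.5-25 Hz
  alpha = wrcoef('d', C, L, 'db8', 4);   % D4: ~6.25-12.5 Hz
  theta = wrcoef('d', C, L, 'db8', 5);   % D5: ~3.1-6.25 Hz
  delta = wrcoef('a', C, L, 'db8', 5);   % A5: ~0-3.1 Hz
  % The first detail level (~50-100 Hz) is discarded as noise.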

Having extracted the separate frequency bands from the recorded EEG signal, we used the Fast Fourier Transform (FFT) to calculate a high-order spectrum (as shown in Fig. 17.4), which was used to obtain the input values for the classification phase.

Fig. 17.4 High-order spectrum (HOS) for each band; the x-axis shows the sample number, the y-axis the signal amplitude (own elaboration)

The entire procedure of feature extraction is illustrated in Fig. 17.5.

Fig. 17.5 Steps of feature extraction (own elaboration)

In our experiment every participant makes ten decisions and deals with each of them separately. Therefore, for every epoch that corresponds to a single decision, we obtain 35 features (5 frequency bands multiplied by 7 EEG channels) that are used to classify decisions into right and wrong ones. It has to be stated that, in the context of the lottery used in the experiment, we consider a decision right when it is made according to expected utility theory (it maximizes the expected payoff of the participant).
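Since the chapter does not state which scalar is taken from each band's spectrum, the sketch below uses the mean FFT magnitude purely as a stand-in to show how a 35-element vector per decision epoch could be assembled; extract_bands is a hypothetical wrapper around the DWT split sketched earlier.

  % Hypothetical sketch: one feature per band per channel (7 x 5 = 35).
  features = zeros(1, 35);
  k = 1;
  for ch = 1:7                             % Fp1, Fp2, F7, F3, Fz, F4, F8
    bands = extract_bands(epoch(ch, :));   % cell array of 5 band signals
    for b = 1:5
      spec = abs(fft(bands{b}));           % magnitude spectrum of the band
      features(k) = mean(spec);            % illustrative scalar feature
      k = k + 1;
    end
  end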

6 Features Classification and Results

Classifying brain signals is a difficult task. There are many techniques for implementing such classification; the most commonly used are supervised algorithms (Siuly et al. 2016). We focused on five different methods: support vector machine (SVM), naïve Bayes (NB), k-nearest neighbors (kNN), linear discriminant analysis (LDA) and probabilistic neural networks (PNN). In this section we briefly describe how those algorithms classify data into the two classes (right and wrong decisions).

Among supervised classifiers, SVM often performs significantly better than others. It was proposed by Boser et al. (1992). The concept of SVM is based on maximizing the margin between the training examples and the decision boundary. Optimal separation is achieved when there is no separation error and the distance between the closest data vector and the decision boundary is maximal (Stoean and Stoean 2014). Obtaining an optimal classification result with SVM is still difficult; therefore we tested and applied other algorithms as well.
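For reference, the textbook hard-margin formulation of this idea is \( \min_{w,b}\ \frac{1}{2}{\left\Vert w\right\Vert}^2 \) subject to \( {y}_i\left(w\cdot {x}_i+b\right)\ge 1 \) for every training pair \( \left({x}_i,{y}_i\right) \) with \( {y}_i\in \left\{-1,+1\right\} \); the margin being maximized equals \( 2/\left\Vert w\right\Vert \).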

Bayesian classification assumes an underlying probabilistic model, which allows us to capture uncertainty about the model in a principled way by determining the probabilities of the outcomes, and it seeks the global minimum of the error function. In effect, a Bayesian classifier separates feature vectors by comparing the decision functions of the classes and selects the class with the largest output for the input sample. This method calculates explicit probabilities for hypotheses and is robust to noise in the input data (Chai et al. 2002). Naïve Bayes is a simplified case of the Bayesian classifier, in which the features are assumed to be independent of each other (Yaghoobi et al. 2014).
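Under that independence assumption, the standard decision rule is \( \hat{y}=\arg \max_c\ P(c)\prod_i P\left({x}_i\mid c\right) \), where \( c \) ranges over the two decision classes and \( {x}_i \) are the individual features.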

Another of the chosen methods was the k-nearest-neighbors algorithm. The aim of this technique is to assign to an unseen point the dominant class among its nearest neighbors within the training set, whose classes are already known (Duda et al. 2000). The nearest neighbors are determined on the basis of a distance, for example the Euclidean metric (Blankertz et al. 2002).
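In our 35-dimensional feature space this distance is simply \( d\left(x,{x}^{\prime}\right)=\sqrt{\sum_{i=1}^{35}{\left({x}_i-{x}_i^{\prime}\right)}^2} \).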

The next method, linear discriminant analysis, is a technique based on searching for the linear combination of attributes that best separates the two classes (wrong and right) of a binary attribute (the "rightness" of a decision). It is mathematically robust and often produces models whose accuracy is as good as that of more complex methods (Sayad 2011).
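For two classes with means \( {\mu}_0 \) and \( {\mu}_1 \) and a shared covariance matrix \( \Sigma \), the classical Fisher solution for that linear combination is \( w={\Sigma}^{-1}\left({\mu}_1-{\mu}_0\right) \), and a sample \( x \) is assigned by thresholding the projection \( w\cdot x \).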

The last of the applied methods was probabilistic neural networks (Specht 1990). This approach replaces the sigmoid activation function of standard neural networks with an exponential one, which makes it possible to compute nonlinear decision boundaries that approach the Bayes optimum. The most important advantage of probabilistic neural networks is that training is easy and instantaneous. They can be used in real time because, as soon as one pattern representing each category has been observed, the network can begin to generalize to new patterns.
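With Gaussian kernels, the network scores each class \( c \) as \( {f}_c(x)=\frac{1}{n_c}\sum_{i=1}^{n_c}\exp \left(-{\left\Vert x-{x}_i^{(c)}\right\Vert}^2/2{\sigma}^2\right) \), where \( {x}_i^{(c)} \) are the \( {n}_c \) training patterns of class \( c \) and \( \sigma \) is a smoothing parameter; the class with the larger score is selected.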

These five methods were tested in the research because they represent different approaches and, as the literature overview indicates, they are very often used in the context of cognitive neuroscience experiments.
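In Matlab, the whole comparison can be sketched as follows; the feature matrices, the label coding and the choice of five neighbors for kNN are illustrative assumptions, and the PNN part relies on the Deep Learning Toolbox.

  % Sketch: train and test the five classifiers on the 35-feature data.
  % Xtrain: 150 x 35, Xtest: 70 x 35; ytrain, ytest in {0,1} (wrong/right).
  svmMdl = fitcsvm(Xtrain, ytrain);                     % SVM
  nbMdl  = fitcnb(Xtrain, ytrain);                      % naive Bayes
  knnMdl = fitcknn(Xtrain, ytrain, 'NumNeighbors', 5);  % kNN (k assumed)
  ldaMdl = fitcdiscr(Xtrain, ytrain);                   % LDA
  acc = @(mdl) mean(predict(mdl, Xtest) == ytest);      % prediction rate
  % PNN: inputs as columns, targets one-hot encoded.
  pnn = newpnn(Xtrain', ind2vec(ytrain' + 1));
  pnnAcc = mean((vec2ind(sim(pnn, Xtest')) - 1)' == ytest);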

7 Experiment Results and Discussion

In this chapter, the correlation between brain activity during the decision-making process and decision adequacy is studied. The major objective is to establish brain-wave patterns for right and wrong decisions. In our experiment we took about 70% of the collected data as the training set. Half of the samples were registered when participants made what was considered a right decision and the other half when they made a wrong one. The registered samples differ in duration, because each participant needed a different amount of time for each decision; this has no impact on further analyses, because the signals were examined in the frequency domain. The end results are obtained on the basis of the features extracted from the signals. The chosen set of training samples was used to teach every classifier to recognize the two classes (patterns). The remaining data were used for testing. Since the total number of decisions made during the experiment is 220 (22 participants, each making 10 decisions), we have 150 decisions in the training set and 70 in the testing set. Each decision is represented by 35 features (5 frequency bands multiplied by 7 EEG channels). The results of classification performed with the five chosen methods are presented in Table 17.4. It is worth noting that the prediction rates shown there take into account all frequency bands in the analyzed signals (delta, theta, alpha, beta and gamma). The best method in this case was naïve Bayes; for the chosen size of the training set, only this classifier achieved a prediction rate greater than 60%.

Table 17.4 Classification results for decision prediction (own elaboration)

In order to check whether we could get better results when the database of patterns contains a larger number of samples (a larger training set), we performed a second analysis. The training set in this case was increased to 170 decisions. The results obtained (presented in Table 17.5) generally show what was expected: for almost all classifiers, the prediction rate increased. The best results were observed for kNN and PNN, although LDA and NB also achieved relatively high prediction rates. The increase, where present, is not substantial, but it shows a trend: by expanding the database of patterns, we can achieve better results in predicting decision adequacy. This is a promising result for further research.

Table 17.5 Classification results for decision prediction—increased training set (own elaboration)

8 Conclusions

The conducted research has shown how cognitive neuroscience techniques can be used to predict probable decision adequacy. This was tested in the case of decisions under risk; such a setting was chosen because it is relatively easy to differentiate right and wrong decisions in this context. For this specific experiment, decisions were divided into two groups on the basis of expected utility theory. Registering EEG signals from the subjects who participated in the study allowed us to create a database of the brain-wave patterns that accompany each decision made. This set of signals was classified with the use of five different supervised classification methods. The division into two classes was made on the basis of the 35 features that define each EEG signal. The results obtained for most of the classification methods exceed 50% accuracy. It could be assumed that this number would be higher if we limited the set of features to those most significant in the context of risky decisions; finding such a set of features is a topic for further research.

The results of the study could be used to develop a decision support system (DSS) that would advise users whether they can make a right decision at a given moment in time. The system would work in real time, registering the brain signals and classifying them on the basis of the patterns archived in the database. The feedback given by the system would inform users whether or not they are able to make a proper decision in a given situation. The effectiveness of such a DSS could be improved if the database of patterns were created for its particular user. The potential of this solution is promising, especially in the context of managerial decision making.