Introduction

In cognitive neuroscience, there is a growing trend toward using data acquired with highly complex naturalistic stimuli, such as video, audio-movies, narrative stories, and music, to increase the ecological validity of neuroimaging studies and to open a new line of research on emotions and complex cognitive functions (Alluri et al. 2012; Brennan et al. 2010; Emerson et al. 2015; Glerean et al. 2012; Nguyen et al. 2016). In everyday conversation, the word “emotion” refers to various conscious feelings such as happiness, anger, fear, and hatred (Purves et al. 2017). Emotion is a complex set of regulatory and cognitive functions, defined by associated changes in physiology and behavior, that enables humans and other creatures to respond to biological stimuli in a more flexible manner (Purves et al. 2017). Given that different emotional states lead to behavioral, neurophysiological, and mental changes, and sometimes affect other people, research in this field is important. Chapter 10 of “Principles of Cognitive Neuroscience” (Purves et al. 2012) discusses theories and research on emotions in detail. Regarding the brain regions engaged in emotions, meta-analytic studies that have aggregated data from hundreds of research articles concluded that the engagement of specific structures such as the amygdala or insula depends on the task, the expressed emotion, and other factors (Purves et al. 2012). Efforts to identify the neural correlates of emotion categories therefore continue today (Purves et al. 2012).

Studying the regions involved in basic emotions (such as anger, fear, happiness, love, and sadness) is important for understanding the cognitive functions of the brain. Brain function during the expression of emotions has been investigated in several functional Magnetic Resonance Imaging (fMRI) and Electroencephalography (EEG) studies (Fulwiler et al. 2012; Kotz et al. 2012; Dasdemir et al. 2017; Mitterschiffthaler et al. 2007; Brattico et al. 2011; Koelsch et al. 2013; Park et al. 2010; Pohl et al. 2013). Dasdemir et al. (2017) constructed an emotional EEG database using audio, video, and audio+video stimuli and studied interactions between brain regions in positive (such as happiness), negative (such as sadness), and neutral emotions. They found that both left and right frontal regions are involved in emotion processing. Their results also demonstrated significant differences among emotions in the functional connections between the left orbitofrontal (AF3) and left occipital (O1) regions, left posterior (P7) and left temporal (T7), P7 and right posterior (P8), right temporal (T8) and right inferior frontal (F8), right mid-frontal (F4) and F8, left mid-frontal (F3) and right orbitofrontal (AF4), O1 and AF4, O1 and F4, O1 and right frontocentral (FC6), and O1 and T8. In an fMRI study, Brattico et al. (2011) investigated happy and sad emotions in music with and without lyrics. Their findings showed that the left thalamus and the right caudate are activated in sad versus happy music, while significant differences in happy versus sad music were revealed in the left-hemispheric secondary and associative auditory cortices, including the insula. They also studied the effects of lyrics on sad versus happy music: sad music produced larger activations in the bilateral inferior frontal gyrus, the left transverse, middle, and superior temporal gyri, the right superior temporal gyrus, the right inferior parietal lobule, and the bilateral insula.

While external events trigger emotional reactions, the perception of internal physiological states lies at the core of emotional experience. Nguyen et al. (2016) combined high-resolution fMRI with simultaneous physiological recording to study the neural mechanisms of interoceptive integration during listening to an emotional audio-movie (Hanke et al. 2014). They used the inter-subject correlation strategy to evaluate the consistency of interoceptive signals, and dynamic causal modeling to derive a network of causal relations between regions. They demonstrated that the Anterior Insula (AI) acts as an integration hub of interoceptive processing, especially during emotional moments in the audio stream: the interoceptive states represented in the Posterior Insula (PI) are combined with exteroceptive representations by the AI in order to highlight emotional moments.

Farahani et al. (2019) used the regression Dynamic Causal Modeling (rDCM) method to estimate effective connectivity in a mixed model. Their purpose was to examine emotions and the differences between emotional states by inferring effective connectivity. For this purpose, they used fMRI data (Hanke et al. 2014) acquired with a complex naturalistic stimulus. Their results indicated that the distinctions in effective connections between some emotional states were more pronounced than between others.

Analysis based on network theory is a common method for analyzing brain data. Studying the structure and function of the brain as a network gives deeper insight into brain activity in different states. In resting-state fMRI datasets in particular, the nodes of the brain graph may be voxels, spatial independent components, or regions of interest (ROIs) defined by a brain atlas, and the edges of the brain graph may be defined based on the cross-correlation between the time series of the nodes. Temporal network theory is a branch of network theory that, by incorporating additional information, relaxes the assumptions and simplifications of static network theory and thus increases the correspondence between the graph and the real state of the brain (Thompson et al. 2017).

In recent years, evaluating time-varying connectivity (TVC) in fMRI data has become a popular approach for studying the temporal dynamics of large-scale brain networks (Allen et al. 2014; Kiviniemi et al. 2011; Hutchison et al. 2013; Hindriks et al. 2016; Thompson and Fransson 2015a, 2016a; Shine et al. 2015). TVC offers a different kind of representation than static functional connectivity: it estimates the fluctuations of connectivity that occur over time. Many methods exist for deriving TVC, each of which can create new insights into the cognitive functions of the brain; they may be categorized as based on correlation, clustering, adjacent time-points, or similar spatial configurations. Thompson et al. (2017) were the first to formally introduce temporal network theory and its metrics into network neuroscience, and demonstrated the ability of this approach to study the dynamic function of the brain by analyzing a resting-state fMRI dataset under eyes-open and eyes-closed conditions (in two different sessions). They introduced the Spatial Distance (SD) method, based on weighted Pearson correlation, to create temporal snapshots, and showed that this method can calculate a single connectivity estimate for each time-point.

In our previous study (Ghahari et al. 2019), we investigated, for the first time, the distinction between different emotional states during a long-duration complex naturalistic stimulus using temporal network theory. We used the SD method to derive TVC in fMRI data acquired during emotional auditory stimulation (Hanke et al. 2014) and applied several binary temporal network measures to investigate the distinction between emotional states. We found that this analytic approach can show that the brain network pattern differs across emotions and also varies over time.

The Jackknife Correlation (JC) method was first introduced as a method for deriving TVC by Thompson et al. (2018). They compared this method with four others (sliding window, tapered sliding window, SD, and temporal derivative) using four simulated datasets, each designed to create signals similar to the fMRI BOLD signal. They showed that the JC and SD methods outperform the other three methods for deriving TVC (Thompson et al. 2018). Although the SD and JC methods rest on different assumptions, they are closely related (Thompson et al. 2018), and both obtain a unique connectivity estimate for each time-point.

The JC method is a new approach for deriving TVC from fMRI data and creating the temporal network of the brain. To date, most studies applying temporal network theory to fMRI data have used binary measures.

In this study, the analysis was performed on an fMRI dataset (Hanke et al. 2014) obtained under complex naturalistic stimulation, in which participants listened to an audio-movie associated with emotions similar to those experienced in everyday life. By analyzing data acquired during a long-duration complex naturalistic stimulus, it is possible to extract brain responses that represent brain dynamics and states during natural events (Hanke et al. 2014).

In this research, we used the JC method to derive a representation of time-varying functional brain connectivity and then quantified the connections using temporal network theory. Our aim was to use a method for estimating TVC that maximizes temporal sensitivity and does not require setting multiple parameters. To capture different properties of the temporal network, we calculated temporal degree centrality (\(D^{T}\)), temporal closeness centrality (\(C^{T}\)), fluctuability (\(F\)), volatility (\(V\)), temporal efficiency (\(E\)), and reachability latency (\(R\)), and we thoroughly investigated weighted temporal degree centrality (\(D^{w,T}\)) and weighted volatility (\(V^{w}\)) within network neuroscience. Using this approach, we studied the distinction between different emotions: specifically, the distinctions between regions, between time-varying functional brain connections, and between different aspects of brain function during the expression of different emotions.

Ultimately, we were able to distinguish different emotions using the JC method and temporal network measures, which shows that the brain network pattern changes during the expression of different emotions.

Materials and methods

fMRI data

We used the fMRI dataset that is available at www.studyforrest.org; it was downloaded from http://psydata.ovgu.de/studyforrest/phase1/. The fMRI data were recorded from 20 right-handed healthy participants (8 females and 12 males, average age 26.6) during long-term stimulation with the “Forrest Gump” audio-movie (Hanke et al. 2014). To minimize the difference between the original audio-visual movie and the audio movie, a narrator describes the story, mainly covering the visual scenes and facial expressions without interfering with the plot (Hanke et al. 2014). The experiment was carried out in two sessions. The movie was divided into 8 audio segments of 15 min each, and the participants listened to four segments per session (Hanke et al. 2014). Functional images were obtained using a 32-channel head coil on a whole-body 7-Tesla Siemens MAGNETOM scanner with TR = 2 s, TE = 22 ms, echo spacing = 0.78 ms, BW = 1488 Hz/Px, FoV = 224 × 224 mm, 36 axial slices, and 1.4 mm isotropic voxels, corresponding to a high spatial resolution with a voxel volume of about 2.75 mm³. A total of 3599 volumes were recorded for each participant (Hanke et al. 2014).

Preprocessing

In this study, we used the preprocessed BOLD data (bold_dico_dico7Tad2grpbold7Tad_nl). The preprocessing pipeline comprised distortion and motion correction, anatomical registration of the images to the BOLD group template via a non-linear warping transformation, spatial smoothing with a Gaussian kernel (FWHM = 4 mm), and high-pass filtering with a cut-off frequency of 0.0083 Hz to remove baseline signal drifts and well-known low-frequency cardiorespiratory artefacts (Hanke et al. 2014). The data of two participants were excluded from the analysis: participant 4 because of problems with image reconstruction and motion caused by coughing, and participant 10 because of invalid distortion correction.

Extracting time series of ROIs

Considering our aims in this study, most of the ROIs were selected from the visual and auditory cortices and from regions involved in emotions, comprising forty-four regions of the Harvard-Oxford (HO) Atlas (Evans et al. 2012). Since many subcortical regions are essential for emotional processing, we focused on these structures and reduced the number of cortical regions by combining the left and right parts of each cortical region. The ROIs extracted from the HO Atlas are shown in Table 6 in “Appendix”.

To obtain the time series of the ROIs, we first extracted the ROI masks from the HO Atlas using FSL software (https://fsl.fmrib.ox.ac.uk). We then used SPM12 software (https://www.fil.ion.ucl.ac.uk/spm/software/spm12) to resample the extracted masks to the same dimensions as the data. Finally, we extracted the ROI time series by averaging, carried out in MATLAB with the MarsBaR toolbox (http://marsbar.sourceforge.net).
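As a minimal sketch of this averaging step (our own illustration, not the MarsBaR implementation), the mean BOLD signal over all voxels of one ROI mask can be computed as follows; bold and mask are assumed variable names:

```matlab
% Sketch of the mask-averaging step: average the BOLD signal over all
% in-mask voxels, separately for every volume. Assumes bold is an
% X-by-Y-by-Z-by-T array and mask a logical X-by-Y-by-Z array.
T = size(bold, 4);
roiSeries = zeros(T, 1);
for t = 1:T
    vol = bold(:, :, :, t);
    roiSeries(t) = mean(vol(mask));  % mean over voxels inside the ROI mask
end
```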

Our aim was to study each emotion separately. Hence, based on the second-by-second emotion labels of the movie (aggregated over all characters), we extracted continuous blocks of BOLD volumes in which only a single emotion label was present. The extracted time series of five emotions were analyzed: happiness, sadness, anger, fear, and love. The number of time-points and the length of each emotion are shown in Table 1.

Table 1 Details of emotions’ time series

Creating temporal snapshots

In order to create temporal snapshots for investigating the dynamic function of the brain, we applied the JC approach to the time series of each emotion (in each subject).

The JC method is a special case of the sliding-window method. It uses all time-points except t to estimate the correlation at time-point t, which yields an inverted approximation of the correlation at t. To correct this inversion, the correlation value calculated for time-point t is multiplied by −1, so that each time-point receives a unique correlation estimate. This approach outperforms the standard sliding window, which uses less data (depending on the window size) to calculate the correlation at a time-point and therefore suffers from lower temporal sensitivity and less accurate results (Thompson et al. 2018).

In this research, we used the Pearson correlation coefficient within the JC method to estimate connectivity at time-point t. Equation (1) (Thompson et al. 2018) computes the Jackknife Correlation between two signals x and y at time-point t:

$$JC_{t} = - \left( \frac{\sum_{i}^{T} \left( x_{i} - \bar{x}_{t} \right)\left( y_{i} - \bar{y}_{t} \right)}{\sqrt{\sum_{i}^{T} \left( x_{i} - \bar{x}_{t} \right)^{2} \sum_{i}^{T} \left( y_{i} - \bar{y}_{t} \right)^{2}}} \right),\quad i \ne t$$
(1)

where T is the number of time-points, and \(\bar{x}_{t}\) and \(\bar{y}_{t}\) are the means computed over all time-points except t (Thompson et al. 2018):

$$\begin{aligned} \bar{x}_{t} & = \frac{1}{T - 1}\sum_{i}^{T} x_{i},\quad i \ne t \\ \bar{y}_{t} & = \frac{1}{T - 1}\sum_{i}^{T} y_{i},\quad i \ne t \end{aligned}$$
(2)

As mentioned in the Introduction, a recent simulation study examined five TVC estimation methods and showed that the JC method has superior performance at tracking fluctuations in covariance over time (Thompson et al. 2018).
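A minimal MATLAB sketch of Eqs. (1) and (2) is given below for illustration; the function name and interface are ours, not those of the DUDTeN software:

```matlab
function jc = jackknifeCorrelation(x, y)
% Jackknife Correlation (Eq. 1) between two ROI time series x and y.
% For each time-point t, the Pearson correlation is computed over all
% other time-points and the sign is flipped to correct the inversion.
T = numel(x);
jc = zeros(T, 1);
for t = 1:T
    keep = [1:t-1, t+1:T];           % all time-points except t
    r = corrcoef(x(keep), y(keep));  % leave-one-out Pearson correlation
    jc(t) = -r(1, 2);                % multiply by -1 (Eq. 1)
end
end
```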

After deriving the connectivity time series with TVC estimation, they must be post-processed so that the temporal network measures yield trustworthy estimates. First, to stabilize the variance of the connectivity time series, we applied the Fisher transformation followed by the Box-Cox (BC) transformation (for more details please refer to Thompson and Fransson 2016b). For the BC transformation, the \(\lambda\) parameter was searched between −40 and 40 in steps of 0.1, and the optimal \(\lambda\) for each connectivity time series was estimated by maximum likelihood. Each connectivity time series was then standardized by subtracting the mean and dividing by the standard deviation (i.e. converted to Z-values). The standardized JC method is not biased by the underlying static functional connectivity (Fransson et al. 2018), so this issue did not affect the results of investigating the distinction between emotions. Finally, variance-based thresholding was applied (Thompson and Fransson 2015b): weighted temporal snapshots were created by setting edges below two standard deviations to zero in each connectivity time series, and binary temporal snapshots were created by setting edges above two standard deviations to 1 and all others to 0.
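The following is a hedged sketch of this post-processing chain for one connectivity time series jc; the shift to positive support before the Box-Cox step is our assumption, since the published pipeline may handle non-positive values differently:

```matlab
% Post-processing sketch for one connectivity time series jc (T-by-1):
% Fisher transform, Box-Cox with maximum-likelihood lambda, conversion to
% Z-values, and variance-based thresholding at two standard deviations.
z = atanh(jc);                       % Fisher transformation

lambdas = -40:0.1:40;                % candidate lambda values (paper's range)
s = z - min(z) + 1;                  % shift to positive support (assumption)
bestLL = -Inf;
for lam = lambdas
    if abs(lam) < 1e-12
        y = log(s);                  % lambda = 0 limit of Box-Cox
    else
        y = (s.^lam - 1) / lam;      % Box-Cox transform
    end
    % Profile log-likelihood of the Box-Cox model under Gaussian errors
    LL = -numel(y)/2 * log(var(y, 1)) + (lam - 1) * sum(log(s));
    if LL > bestLL
        bestLL = LL;
        bc = y;
    end
end

zc = (bc - mean(bc)) / std(bc);      % standardize to Z-values
weightedEdge = zc .* (zc > 2);       % weighted snapshots: edges < 2 SD set to 0
binaryEdge   = double(zc > 2);       % binary snapshots: edges > 2 SD set to 1
```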

For instance, Fig. 1a shows the binary temporal network of the 15th subject in fear, and Fig. 1b illustrates the strength of connections at time-points 1, 2, 3, and 34.

Fig. 1

Thresholded temporal network of the 15th subject in fear. a Binary temporal connections for all ROIs. The horizontal and vertical axes represent the time-points and the ROIs, respectively. Abbreviations of ROI names are given in Table 6 in “Appendix”. We used the Dynamic-Graph-Metrics toolbox (https://github.com/asizemore/Dynamic-Graph-Metrics) to draw the temporal network. b Strength of connections at time-points 1, 2, 3, and 34. To illustrate each temporal snapshot, the values of each weighted temporal snapshot were scaled between 0 and 1. We used the code available at https://github.com/paul-kassebaum-mathworks/circularGraph to illustrate the strength of connections

Applying temporal network measures

Once the temporal snapshots were obtained, we used two nodal measures and four global measures to quantify different features of the binary temporal network, as listed in Table 7 in “Appendix” (for more explanation of temporal network theory and its binary measures, please refer to Thompson et al. 2017).

To study the weighted temporal network, we thoroughly investigated a weighted nodal measure and a weighted global measure, both of which are explained below.

As mentioned above, after creating the weighted and binary undirected temporal snapshots, we applied the measures to capture different aspects of human brain function during the expression of different emotions.

To calculate the shortest temporal paths, given that the temporal resolution of the dataset is 2 s, we allowed the number of edges that can be traversed at each time-point to equal the total number of edges present in that temporal snapshot. The reachability latency measure was calculated with r set to 1 (i.e. all nodes must be reached). Since the numbers of time-points in the time series of the five emotions were not equal, the obtained measures had to be normalized for between-group statistical analysis. Most measures are normalized by definition, so normalization was performed only for temporal degree centrality, weighted temporal degree centrality, and fluctuability.

Weighted temporal degree centrality

The influence of a node on a weighted temporal network is the sum of the weights of the edges associated with that node, accumulated over time. The weighted temporal degree centrality of node i is calculated as:

$$D_{i}^{w,T} = \sum_{j = 1}^{N} \sum_{t = 1}^{T} A_{i,j}^{w,t}$$
(3)

where T is the number of time-points, N is the number of nodes, \(A^{w,t}\) is weighted temporal snapshot at time-point t, and \(A_{i,j}^{w,t}\) is the weighted edge between nodes i and j at time-point t.

This measure estimates the centrality of one node in a weighted temporal network.

Weighted temporal degree centrality is the same as temporal degree centrality but is calculated on weighted temporal snapshots; it therefore shows which nodes carry higher connection weight over time. A node may have a higher temporal degree centrality than another node while the total weight of its connections is lower.
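A minimal sketch of Eq. (3) (our own illustration, with an assumed function name) for a stack A of weighted temporal snapshots:

```matlab
function Dw = weightedTemporalDegree(A)
% Weighted temporal degree centrality (Eq. 3). A is an N-by-N-by-T stack
% of weighted temporal snapshots (assumed symmetric with zero diagonal);
% the output is one centrality value per node.
Dw = sum(sum(A, 3), 2);  % sum weights over time (dim 3), then over nodes (dim 2)
end
```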

Weighted volatility

This measure quantifies how much the weighted temporal network varies over time. Weighted volatility is defined as:

$$V^{w} = \frac{1}{T - 1}\sum_{t = 1}^{T - 1} D\left( A^{w,t}, A^{w,t + 1} \right)$$
(4)

where D is a distance function, which we took to be the Euclidean distance.

The measure of weighted volatility represents the amount of change in the weighted temporal snapshots from one time-point to the next.
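A corresponding sketch of Eq. (4), again with an assumed function name, using the Euclidean (Frobenius) distance between consecutive snapshots:

```matlab
function Vw = weightedVolatility(A)
% Weighted volatility (Eq. 4). A is an N-by-N-by-T stack of weighted
% temporal snapshots; D is the Euclidean distance between consecutive
% snapshots.
T = size(A, 3);
d = zeros(T - 1, 1);
for t = 1:T-1
    delta = A(:, :, t+1) - A(:, :, t);
    d(t) = norm(delta(:));  % Euclidean distance between snapshots t and t+1
end
Vw = mean(d);               % average change per time-step
end
```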

Statistical comparisons

To carry out statistical comparisons, we used the nonparametric permutation test (Nichols and Holmes 2001; Holme and Saramaki 2012). For between-group comparisons, null distributions were generated from 100,000 permutations separately for each pair of emotions. In each permutation, the subjects' temporal network measure values were randomly reassigned between the two groups (the pair of emotions), and all comparisons were two-tailed. For global measures, we used the median difference and the mean difference as test statistics; for nodal measures and for comparisons between two measures, we used the Spearman rank correlation coefficient. In the statistical comparisons of the global and nodal measures, we applied Bonferroni correction for multiple comparisons (\(p \le 0.005\)). To determine which nodes have higher-than-expected centrality, we carried out 1000 permutations: in each permutation, the node order of each subject's centrality values was randomly permuted and the centrality was then averaged over all subjects, yielding 44 null distributions. The null distribution with the largest 950th value was used to set the significance level (\(p < 0.05\)).
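Below is a hedged sketch of the between-group permutation test with the median difference as the test statistic; the function name and interface are our own, and a and b are assumed to hold one global-measure value per subject for two emotions:

```matlab
function p = permutationTest(a, b, nPerm)
% Nonparametric two-tailed permutation test on the median difference.
% In our analysis nPerm was 100,000.
obs = median(a) - median(b);            % observed test statistic
pooled = [a(:); b(:)];
n = numel(a);
count = 0;
for k = 1:nPerm
    idx = randperm(numel(pooled));      % random relabeling of subjects
    permA = pooled(idx(1:n));
    permB = pooled(idx(n+1:end));
    if abs(median(permA) - median(permB)) >= abs(obs)
        count = count + 1;              % two-tailed comparison
    end
end
p = count / nPerm;
end
```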

Software note

All analyses were performed with custom-written code in the MATLAB environment. In addition, we created the software DUDTeN to estimate time-varying connections with the JC and SD methods and to calculate temporal network measures; it is available at https://github.com/shghahari/dudten (http://doi.org/10.5281/zenodo.3382274).

Results

Nodal measures

The measures of temporal degree centrality, weighted temporal degree centrality, and temporal closeness centrality were applied to the temporal networks and then compared between emotional states. As the statistical comparisons in Table 2 show, \(D^{T}\), \(D^{w,T}\), and \(C^{T}\) showed no significant correlation (\(p \le 0.005\), Bonferroni-corrected) between any pair of emotional states, indicating that the nodes have different centrality properties across emotional states.

Table 2 Statistical comparison of three centrality measures between pairs of emotions

The scatterplots that demonstrate the statistical comparisons in each centrality measure for different pairs of emotions are illustrated in Supplementary Figs. S1-S3.

Global measures

Figure 2 shows the results of applying the global measures to the temporal networks in different emotional states. The global measures averaged over all subjects showed higher \(F\) and \(R\) in sadness (Fig. 2a, d, respectively) and higher \(V\), \(V^{w}\), and \(E\) in fear (Fig. 2b, c, e, respectively).

Fig. 2

Violin plot of global measures in five emotions. a In each emotion, each colored point specifies a subject. For each emotion, the mean value of fluctuability is shown with a colored line and the median value with a white dot. b–e Like a, but for volatility, weighted volatility, reachability latency, and temporal efficiency, respectively (to illustrate the violin plots, we used the code available at https://github.com/bastibe/Violinplot-Matlab). (Color figure online)

As shown in Fig. 2, the mean is affected by outliers; we therefore place more trust in the results from the median-difference test statistic.

As Table 3 shows, at the global level of the network each measure revealed a significant difference (\(p \le 0.005\), Bonferroni-corrected) between several pairs of emotions. \(R\), as well as \(V\), showed identical results with the two test statistics (median difference and mean difference). Only between fear and love, and between happiness and anger, was no significant difference observed in any measure.

Table 3 Statistical comparison of global measures between pairs of emotions

Therefore, considering Table 3, for each pair of emotions with a significant difference, the emotional state with the higher median (or mean) value (see Fig. 2) has more diverse connections over time in terms of \(F\), connections that change faster over time in terms of \(V\) and \(V^{w}\), shorter temporal paths on average in terms of \(E\), and a lower information transfer speed in terms of \(R\).

Tables of global measures that show the difference between mean values and the difference between median values, along with their p values, are presented in Supplementary Tables S1-S5.

Statistical comparison between two measures in each emotion

\(C^{T}\) and \(D^{T}\) quantify different aspects of the temporal dynamics of the brain, and we expected to find no significant positive correlation between these two measures in each emotion. As Table 4 shows, except for sadness, no significant relation (\(p \le 0.05\)) was found between the two centrality measures in any emotional state. Strong negative correlations (\(p \le 0.05\)) between \(E\) and \(R\) were observed in every emotion except love, where the negative correlation was not significant. Therefore, each of these measures expresses different properties of the temporal network.

Table 4 Statistical comparison between two measures in each emotion

Scatterplots illustrating the statistical comparisons between two measures for two arbitrarily chosen emotional states are shown in Fig. 3; the scatterplots of the other comparisons are shown in Supplementary Figs. S4 and S5.

Fig. 3

Scatterplot of two measures in each emotion. a Temporal closeness centrality of each node against temporal degree centrality for sadness. b Temporal efficiency of each subject against reachability latency for anger

Investigating centrality of nodes in each emotion

First, the spatial distribution of the centrality measures in the brain is shown for several arbitrarily chosen emotional states (Fig. 4). The values of the three centrality measures were averaged over all subjects. Figure 4a shows the average temporal degree centrality in love, Fig. 4b the average weighted temporal degree centrality in sadness, and Fig. 4c the average temporal closeness centrality in fear, across the forty-four ROIs. The spatial distribution of the centrality measures in the other emotions is illustrated in Supplementary Figs. S6-S8.

Fig. 4

Spatial distribution of centrality measures. a Spatial distribution of temporal degree centrality across all nodes in love. Abbreviations of ROI names are given in Table 6 in “Appendix”. b Like a, but for weighted temporal degree centrality in sadness. c Like a, but for temporal closeness centrality in fear

For each emotion, the nodes with higher-than-expected centrality (\(p < 0.05\)) were identified separately and are presented in Table 5. In \(D^{T}\), no region was shared across emotional states. In \(D^{w,T}\), Thal.R appeared in both love and sadness, and Put.L in both happiness and sadness. In \(C^{T}\), Pall.R appeared in both happiness and sadness. Only in happiness did Pall.R emerge in all three centrality measures. As expected, given the similar definitions of \(D^{T}\) and \(D^{w,T}\), common regions were found for these two measures.

Table 5 Nodes with higher-than-expected centrality (\(p < 0.05\)) in each emotion and each centrality measure

Considering Table 5, different regions became significant in \(C^{T}\) and \(D^{T}\) (and also \(D^{w,T}\)) within each emotion. This shows that the brain regions with short temporal paths to all other regions differ from the regions with the most connectivity over time.

Discussion

In this research, we studied the distinction between different emotional states in an fMRI dataset acquired during an emotional audio-movie stimulation. We used the Jackknife Correlation method for creating a time-varying functional connectivity representation and applied temporal network theory for quantifying this representation. We used different binary measures and thoroughly investigated two weighted measures within network neuroscience to examine the features of binary and weighted temporal networks.

Centrality measures can capture the temporal dynamics of the brain network at the nodal level. The statistical comparisons indicate that the nodes have different temporal centrality properties across emotional states. Furthermore, in each emotion, the nodes that passed the significance threshold in the centrality measures were identified. Our previous study (Ghahari et al. 2019) investigated emotions in the same audio-movie dataset (Hanke et al. 2014) using binary temporal network measures. Between the previous study (Ghahari et al. 2019) and the present one, only the right putamen and right amygdala in sadness, in temporal closeness centrality, were common. The fact that different significant nodes were revealed in each emotion (in temporal degree centrality and temporal closeness centrality) may be due to the different assumptions of the SD and JC methods, which cause the values of the centrality measures for each node and each subject to differ between the two methods; these differences in turn lead the results of the statistical comparisons in the group analysis to differ. In the SD method, a weight vector calculated for time-point t is used to estimate the connectivity at t, and this method involves more parameter choices, whereas the JC method uses all time-points except t to derive the connectivity at t. It is worth mentioning that a consequence of obtaining a unique connectivity estimate for each time-point is that noise is retained per time-point (Thompson 2017; Thompson et al. 2018).

Although the performance of the JC method has been investigated in resting-state and task (working memory) fMRI data (Thompson et al. 2018; Fransson et al. 2018), it had not previously been used to identify brain function under complex stimuli such as emotional stimulation. We therefore had no a priori hypothesis about how this method would perform on a dataset acquired with such stimuli, or about its ability to examine the distinction between emotions in the brain. Likewise, although previous studies have explored the activity of different regions during the expression of different emotions, we used a dataset obtained with a complex naturalistic emotional stimulation, and since meta-analytic studies in this field indicate that the activation of emotion-related regions depends on the type and design of the stimulus (Purves et al. 2012), we did not adopt a priori hypotheses about the roles of particular brain regions in different emotions.

We compared the significant nodes in each emotion with previous studies. The planum temporale, frontal medial cortex, and central opercular cortex were activated in anger; of these, the frontal medial cortex has been reported in previous studies (Murphy et al. 2003; Fulwiler et al. 2012; Gu et al. 2019; Zhang et al. 2018). In fear, the right hippocampus, left accumbens, and occipital pole were found, consistent with previous studies (Koelsch 2014; Eldar et al. 2007; Schaefer 2017; Sato et al. 2004; Koelsch et al. 2013). Our findings showed activation of the planum temporale, Heschl's gyrus, pars triangularis, anterior superior temporal gyrus, left putamen, left thalamus, and pallidum in happiness; the pars triangularis, superior temporal gyrus, right pallidum, and left putamen have also been reported previously (Park et al. 2010; Zhang et al. 2018; Johnstone et al. 2006; Kotz et al. 2012; Mitterschiffthaler et al. 2007; Brattico et al. 2011; Okuya et al. 2017; Pohl et al. 2013; Fusar-Poli et al. 2009). The left putamen and right thalamus were involved in love, compatible with previous findings (Acevedo et al. 2012; Bartels and Zeki 2004; Cacioppo et al. 2012); the pars triangularis was also activated in this emotion. In sadness, the posterior parahippocampal gyrus, right thalamus, right pallidum, right amygdala, putamen, and hippocampus were activated; of these, the hippocampus, amygdala, putamen, and parahippocampal gyrus have been reported in previous studies (Koelsch 2014; Mitterschiffthaler et al. 2007; Schaefer 2017; Brattico et al. 2011; Koelsch et al. 2006; Fusar-Poli et al. 2009).

The appearance of regions different from those in comparable previous studies of emotion-related regions (with the exception of our previous study (Ghahari et al. 2019)) may be due to the type of auditory stimulation: it is rooted in everyday life, makes listeners empathize with the movie as they listen, and differs from the types of stimulation used in other studies.

In general, the high centrality of nodes in regions related to visual processing may arise because many participants visualized scenes, or engaged in other activities, during emotional experiences, thereby activating these regions. For instance, upon hearing the narrator describe scenes of war, people might visualize wounded soldiers.

Each global measure, depending on its definition, expresses a different aspect of brain function at the global level. In the statistical comparisons, the results from the mean-difference test statistic were almost the same as those from the median difference for the global measures. In general, because the mean is sensitive to outliers, the results of the median-difference test statistic are more reliable. In the previous study (Ghahari et al. 2019), we used the mean-difference test statistic in the statistical comparisons of the global measures, and no significant difference was found between any pair of emotions in fluctuability and volatility. For the other binary global measures (reachability latency and temporal efficiency), the present study found significant distinctions between more pairs of emotions than the previous work (Ghahari et al. 2019).

In this study, we explored how different properties of the temporal network change under an emotional audio-movie stimulus. Below, we describe the temporal network measures further to explain how the brain functions during the expression of emotions. Based on the results for reachability latency, when a person feels sadness the speed of information transfer within the brain is lower than during happiness, fear, love, or anger, and during happiness or anger it is lower than during fear and love. The results for fluctuability demonstrate that during sadness or anger the brain connections are more diverse than during fear and love, and during happiness more diverse than during fear. The results for volatility indicate that during fear the brain connections change faster than during happiness, anger, and sadness, and during love they change faster than during sadness and anger. Based on the results for temporal efficiency, during fear or love the brain regions are connected in a shorter time than during happiness, anger, and sadness.

The two measures of weighted temporal degree centrality and weighted volatility expressed the properties of the weighted temporal network well. Because they are evaluated on weighted rather than binary temporal snapshots, they may provide a closer view of the real function of the brain.

Comparing the results of the present research with the previous study (Ghahari et al. 2019), the binary measures computed on temporal snapshots created by the JC method distinguish more pairs of emotional states than those based on the SD method. Therefore, using the JC method, like the SD method, we could investigate the temporal dynamics of the brain network during the expression of different emotions.

Considering the findings of this research, by using the Jackknife Correlation method to derive time-varying functional connections and applying temporal network measures to quantify them, we were able to distinguish between different emotional states and to identify the regions engaged in each emotion. Furthermore, we showed that temporal network theory can probe different aspects of the dynamic function of the brain in an fMRI dataset acquired during complex naturalistic emotional auditory stimulation. Ultimately, we showed that the pattern of the brain network, the function of individual regions, and the global function of brain regions during the expression of each emotion change over time and differ from one emotion to another. The significant regions in each emotion were also largely in agreement with previous studies.

All analyses in the current study were implemented with code written in the MATLAB environment; we also created the DUDTeN software, which is freely available at https://github.com/shghahari/dudten (http://doi.org/10.5281/zenodo.3382274).

The dataset used combines different emotional states, and we separated the time series of each emotion to carry out our analysis. We extracted only those parts of the time series containing a specific emotion, yet each period may still be affected by preceding emotions. Since our study investigated the distinction among five emotions occurring under the natural circumstances of life, the slight effect of preceding emotions was of little importance. However, for distinguishing between emotions under natural circumstances, it would be preferable to use data with separate stimulation for each emotion.

In order to investigate the properties of the temporal network, other measures of the binary and weighted temporal network can be used.

In the future, we intend to use other thresholding approaches to create optimal connectivity matrices and to further develop the DUDTeN software.

We hope that this research leads to more studies in this field in order to discover and revise new aspects of brain function.