Introduction

Emotion processing based on physiological signals has recently drawn significant interest. Relations between emotional states and brain activity have been discovered using EEG and fMRI imaging modalities. Among these, the EEG modality has gained more attention (Güntekin and Başar 2014) for investigating brain dynamics during affective tasks, although there are recent studies using fMRI (Hattingh et al. 2013; Hooker et al. 2012; Lichev et al. 2015). Previous studies show that emotional stimuli elicit responses in early time windows (Holmes et al. 2003; Palermo and Rhodes 2007; Pizzagalli et al. 1999). With their excellent temporal resolution, EEG and MEG are better suited to capturing these brain dynamics than fMRI, which cannot resolve such early responses. Studies have also shown response-time differences between the stimulus types used for emotion elicitation (Baumgartner et al. 2006; Chen et al. 2010; Zion-Golumbic et al. 2010). However, the effect of stimulus type on brain dynamics remains to be fully characterized.

A vast number of studies on animals and humans provide evidence that sensorimotor, visual and cognitive tasks require the integration of numerous functional areas widely distributed over the brain. Studies of EEG-based functional connectivity in the context of emotion recognition have gained momentum (Kassam et al. 2013; Lee and Hsieh 2014; Lindquist et al. 2012; Shahabi and Moghimi 2016). EEG-based functional connectivity is used to investigate the brain areas involved in a particular task. Functional connectivity is studied by considering the similarities between time series or activation maps. Various methods are employed to explore the dependencies between time series, including linear coherence estimation in the frequency domain to investigate frequency locking (Bressler 1995; Brovelli et al. 2004; Ding et al. 2000; Nunez et al. 1997) and nonlinear methods to investigate synchronization. Nonlinear methods mostly focus on generalized synchronization (Stam and Dijk 2002) or phase synchronization (Lachaux et al. 1999; Mormann et al. 2000; Tass et al. 1998).

Various functional connectivity indices have been used to show the existence of diverse functional brain connectivity patterns for different emotional states in both healthy (Khosrowabadi et al. 2010; Lee and Hsieh 2014; Ma et al. 2012) and pathological (Li et al. 2015; Quraan et al. 2014) cases. Emotional paradigms have been used in research on evoked/event-related oscillations and in the analysis of functional brain connections (Güntekin and Başar 2014; Symons et al. 2016). In Lee and Hsieh (2014), emotional states were classified by means of EEG-based functional connectivity patterns: 40 participants viewed audio-visual film clips to evoke neutral, positive (one amusing and one surprising) or negative (one fear and one disgust) emotions, and correlation, coherence and phase synchronization were used to estimate the connectivity indices. They reported significant differences among emotional states, with a maximum classification rate of 82% when the phase synchronization index was used as the connectivity measure. In Ma et al. (2012), EEG activity was recorded from 27 subjects performing a spatial search task for facial expressions (visual stimuli) to explore the network organization of EEG gamma oscillations during emotion processing. They reported that negative emotion processing showed more effective and optimal network organization than positive emotion processing. Emotional states have also been investigated along the valence/arousal dimensions using visual stimuli such as film clips (Liu et al. 2017).

In this study, interactions between brain regions are investigated through the phase locking value for positive and negative emotions, as this metric has been shown to be successful in inferring functional connectivity between brain regions from EEG signals (Dimitriadis et al. 2015; Hassan et al. 2015; Hassan and Wendling 2015; Kang et al. 2015; Sakkalis 2011; Sun et al. 2012). For this purpose, we constructed a multimodal emotional database from 25 voluntary subjects using 15 stimuli. Another major goal of the study is to explore the effect of stimulus type on the interacting regions; therefore, each stimulus was presented in audio-only, video-only and audio + video formats. There are numerous studies on oscillatory responses in the perception of faces (see Güntekin and Başar 2014; Symons et al. 2016 and references therein). In Güntekin and Basar (2007), differences in the alpha and beta bands between angry and happy face perception are reported when stimuli with high mood involvement are selected. Baumgartner et al. (2006) used EEG alpha power density to analyze emotion perception for face-only stimuli, for listening to fearful, happy and sad music, and for listening to emotional music while viewing pictures of the same emotional categories. Their results suggest that combined stimuli produce the strongest activation; emotion perception is therefore enhanced when emotional music accompanies affective pictures. An MEG study showed enhanced functional coupling in the alpha frequency range in sensorimotor areas during facial affect processing (Popov et al. 2013). Although the functional role of beta-band oscillations in cognitive processing is not well understood, recent studies suggest their involvement in controlling the current sensorimotor or cognitive state (Engel and Fries 2010). Changes in beta power have been reported over frontal and central regions for affective face stimuli (Güntekin and Basar 2007). An MEG study comparing evoked beta-band activity for static and dynamic facial expressions revealed a greater response to dynamic expressions (Jabbi et al. 2014).

Numerous studies on emotion perception have observed effects in gamma-band activity (Balconi and Lucchiari 2008; Keil et al. 2001; Luo et al. 2007, 2009; Müsch et al. 2014; Sato et al. 2011). Sato et al. (2011) and Keil et al. (2001) reported higher gamma-band activity in response to emotional than to neutral face pictures. In Luo et al. (2009), it was observed that emotional stimuli induced increased event-related synchronization in the amygdala and in visual, prefrontal, parietal and posterior cingulate cortices relative to neutral stimuli. In addition, the right hemisphere was reported to be effective in discriminating emotional faces from neutral faces.

This paper is organized as follows: EEG data acquisition is described in the “Data acquisition” section. The “Functional connectivity” section explains the phase locking value used as the functional connectivity measure. Results and discussion are given in the “Results” and “Discussion” sections, respectively.

Data acquisition

A multimodal emotional database, comprising EEG recordings and face videos, was collected for this study. Two devices were used for data acquisition: an Emotiv EPOC (Emotiv Systems Inc., San Francisco, USA) wireless EEG headset with 14 channels, and a smartphone recording \(1920 \times 1080\) HD video at 30 fps to capture the facial images. The database was collected from 25 voluntary subjects using 15 stimuli, namely 60-s clips extracted from movies in the native and foreign languages (dubbed versions of the foreign movies were used). Each extracted movie segment was presented in audio-only, video-only and audio + video formats, so 45 stimuli were used in total.

Fig. 1 Self-assessment form (SAM, Self-Assessment Manikin)

Stimuli selection

Several steps were followed for stimulus selection. First, 130 emotionally evocative movie clips were manually selected from 59 movies (9 in a foreign language and 50 in the native language), taking YouTube evaluations into account. The numbers of movie clips in the different affective states and their distribution over the regions of the valence-arousal plane are given in Table 1. Clips were selected so as to balance the number of emotions across the regions. Eleven evaluators (8 male, 3 female, average age 20), who did not participate in the data collection experiments, watched the movie clips in audio + video format and evaluated them via the self-assessment form (Fig. 1) using Self-Assessment Manikins (SAM) (Bradley and Lang 1994).

Table 1 Numbers of emotionally-evocative movie clips selected manually from 59 movies for evaluations

The emotion evoked by the clip and whether the evaluator had seen the movie clip before the assessment were recorded on the self-assessment form. In addition, the evaluators chose an interval for each emotion dimension. The numbers of evoked emotions after the evaluations are given in Table 2.

Table 2 Numbers of evoked emotions after evaluations

The movie clips to be used in data collection were selected according to these evaluations, such that they were distributed equally over the four regions of the two-dimensional valence/arousal space.

Fig. 2 The 15 movie clips with the lowest and highest emotion scores

The distance from the origin of the emotional content of the ith movie clip, \(e_i\), termed the emotional highlight in Koelstra et al. (2012), is calculated as in Eq. 1:

$$e_{i} = \sqrt{a_{i}^2+v_{i}^2}$$
(1)

where \(a_{i}\) and \(v_{i}\) are the ith movie clip’s arousal and valence values, respectively. Note that small values of \(e_i\) indicate proximity to the neutral emotion. The distribution of the 130 points is shown in Fig. 2, and the mean values and standard deviations of the points in each region are given in Table 3. In this study, the 15 movie clips with the highest and lowest values of \(e_i\) (shown in red in Fig. 2) were selected for the experiments, since they best represent intense emotions and neutral emotions, respectively.
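As a minimal illustration of this selection step (a sketch only, assuming arousal and valence ratings have been centred so that neutral lies at the origin; the ratings and the split between low- and high-scoring clips are placeholders, since they are not reported here):

```python
import numpy as np

# Hypothetical centred ratings for the 130 candidate clips: neutral maps
# to (0, 0), so a small e_i means proximity to the neutral emotion.
rng = np.random.default_rng(0)
arousal = rng.uniform(-1.0, 1.0, 130)
valence = rng.uniform(-1.0, 1.0, 130)

# Emotional highlight (Eq. 1): distance from the origin of the
# valence-arousal plane.
e = np.sqrt(arousal**2 + valence**2)

# Keep the clips with the lowest e (near-neutral) and the highest e
# (intense emotions); 15 clips in total were used in the study.
order = np.argsort(e)
selected = np.concatenate([order[:5], order[-10:]])  # illustrative 5/10 split
```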

Table 3 Valence-arousal values for selected movie clips

Participants and experiment protocol

Twenty-five right-handed participants (20 male and 5 female), aged between 18 and 27 years (average age 20.52 ± 1.69), volunteered for the experiments. The experimental procedure was approved by the Mustafa Kemal University Human Research Ethics Committee. Participants were informed about the approval and signed a consent form. The day before the experiment, they were also informed about the experimental protocol and the meanings of valence/arousal/dominance as used in the self-assessment form, and were asked to get a good night’s sleep, avoid stimulants, and not be hungry during the experiment. On the experiment day, prior to the experiment, they filled out a questionnaire on a dedicated computer covering personal information such as use of medication, number of sleep hours, tea and coffee consumption habits, date of birth, city of birth and city of current residence.

The experiment started with 90 s of adjustments and 10 s of baseline recording, after which the participant was left alone in the experiment room. During the experiment, the 15 movie clips were shown in audio-only, video-only and audio + video formats (45 stimuli in total) in random order. Each stimulus was shown for 60 s, followed by 30 s for filling in the self-assessment form (Fig. 1) and then a 10-s black screen for relaxation. The experimental protocol is shown in Fig. 3.

Fig. 3 Experimental protocol: order of stimuli, expected emotions, and number of subjects (S, subjects)

EEG data collection

Emotiv EPOC is a lightweight, low-cost wireless neuroheadset with a large user community. The device has been used in a variety of recent research studies (e.g., McMahan et al. 2015; Rodríguez et al. 2015; Tripathy and Raheja 2015; Yu et al. 2015), and there are studies validating experimental results obtained with the Emotiv EPOC (Badcock et al. 2013; Bobrov et al. 2011). The headset acquires EEG signals through 14 saline sensors — AF3, AF4, F3, F4, F7, F8, FC5, FC6, P7, P8, T7, T8, O1 and O2 — plus two additional sensors that serve as CMS (Common Mode Sense)/DRL (Driven Right Leg) reference channels (one for the left and one for the right hemisphere of the head). The electrodes are positioned according to the international 10–20 system, forming seven pairs of symmetric channels. Electrode positions are shown in Fig. 4.

Fig. 4 Electrode positions of the Emotiv EPOC device, according to the 10–20 system

The neuroheadset samples internally at 2048 Hz and downsamples to 128 Hz per channel. The Emotiv EPOC and the video camera were started at the same time by means of synchronization software written in Visual C#, so that both recording modules ran together. After the data collection step, all collected data were transferred to MATLAB for further processing, as described in the following sections.

Pre-processing

Artifact removal

EEGLAB (Delorme and Makeig 2004), an interactive MATLAB toolbox for electrophysiological signal processing, was used for preprocessing. Raw EEG data were first band-pass filtered in EEGLAB to retain only the 0.16–45 Hz frequency content.
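The filtering itself was performed in EEGLAB; purely as an illustration for readers working outside MATLAB, a roughly equivalent zero-phase band-pass step in Python/SciPy might look like the following sketch (a Butterworth stand-in rather than EEGLAB’s windowed-sinc FIR; shapes and parameters are assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 128.0  # Emotiv EPOC output sampling rate (Hz)

def bandpass_0p16_45(data, fs=FS, order=4):
    """Zero-phase 0.16-45 Hz band-pass; data has shape (n_channels, n_samples)."""
    sos = butter(order, [0.16, 45.0], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)
```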

EEG recordings are time series of potential differences measured between two scalp electrodes (an active and a reference electrode). The recorded EEG data may be affected by artifacts (eye blinks, eye movements, scalp or heart muscle activity, line or other environmental noise). Independent component analysis (ICA) can be used to remove some of these artifacts (mainly ECG or EOG artifacts) from the EEG data.

In this study, MARA (Multiple Artifact Rejection Algorithm) (Winkler et al. 2011), an open-source EEGLAB plug-in, is used for automatic ICA-based artifact rejection. MARA is a supervised machine-learning algorithm that solves a binary classification problem: “accept vs. reject” each independent component. Its feature set comprises Current Density Norm and Range Within Pattern (two features extracting information from the scalp map of an IC); Fit Error, \(\lambda\), and 8–13 Hz band power (three features extracted from the spectrum); and Mean Local Skewness (a feature that detects outliers in the time series). After applying MARA, the AAR (Automatic Artifact Removal) toolbox (Gomez-Herrero 2007) was used for automatic correction of ocular and muscular artifacts in the EEG data.

Baseline correction

Poor contact of the electrodes, perspiration or muscle tension of the subject during the experiments may cause artifacts in the recordings. To remove this type of noise from the EEG data, it is common to record a baseline interval, typically several tens or hundreds of milliseconds preceding the stimulus, during which the subject is asked to stay still and the brain is therefore assumed to have no stimulus-related activity. The mean signal over this interval is subtracted from the signal recorded during the stimulus, at all time points and for each channel individually. This procedure is known as baseline correction. In this study, the 10-s baseline interval preceding each 60-s stimulus recording is used for baseline correction.
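A minimal sketch of this correction step (array shapes are assumptions, matching the 10-s baseline and 60-s stimulus at 128 Hz described above):

```python
import numpy as np

FS = 128  # sampling rate (Hz)

def baseline_correct(stimulus_eeg, baseline_eeg):
    """Subtract each channel's mean over the baseline interval.

    stimulus_eeg: (n_channels, 60 * FS) recording during the clip
    baseline_eeg: (n_channels, 10 * FS) recording preceding the clip
    """
    return stimulus_eeg - baseline_eeg.mean(axis=1, keepdims=True)
```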

Functional connectivity

Most of our actions require the integration of numerous functional areas widely distributed over the brain. The mechanism underlying this large-scale network is generally described by the term functional connectivity. Functional connectivity is studied by considering the similarities between time series or activation maps obtained using a functional imaging modality. Their high temporal resolution makes EEG and MEG excellent candidates for exploring this facet of neuronal activity. Similarities can be quantified using linear methods such as cross-correlation and coherence (Brovelli et al. 2004; Salenius and Hari 2003; Steyn-Ross et al. 2012; Tucker et al. 1986). However, methods such as coherence capture only the linear relations between time series and may fail to identify nonlinear interdependencies. Various measures of synchronization, such as synchronization likelihood (Stam and Dijk 2002) and phase synchronization (Bonita et al. 2014; Lachaux et al. 1999; Tass et al. 1998; Wilmer et al. 2010; Yener et al. 2010), have been proposed to detect more general interdependencies. Applications of these measures to EEG and MEG data over the last two decades have shown that nonlinear relations between different brain regions do exist (Mormann et al. 2000; Schoenberg and Speckens 2015; Stam and Dijk 2002; Tass et al. 1998).

For neural systems, synchronization is observed both in normal function, e.g. the coordinated motion of several limbs, and in abnormal function, e.g. the tremor of a Parkinson’s patient. Synchronization is also believed to be the central mechanism behind the interaction between brain areas. Microelectrode recording studies on animals showed that synchronization of neuronal activity among different areas of the visual cortex can be interpreted as the mechanism that links visual features (Eckhorn et al. 1988; Singer and Gray 1995). Another study, by Murthy and Fetz (1992), showed synchronous oscillatory activity in the sensorimotor cortex of rhesus monkeys. Synchrony between widely separated areas, namely the visual and parietal cortex of an awake cat, was reported in König and Singer (1997). Neural synchronization also plays an important role in several neurological diseases such as epilepsy (Mormann et al. 2000), pathological tremors (Tass et al. 1998; McAuley and Marsden 2000), and schizophrenia (Le Van Quyen et al. 2001). In phase synchronization, the only relevant quantity is the phase locking of the coupled oscillators; no restriction is placed on their amplitudes. Phase synchronization occurs between interacting systems (or a system and an external force) when their phases are related while their amplitudes may remain chaotic and, in general, uncorrelated. In the context of this study, phase synchrony between recording sites is examined in predefined frequency ranges, namely the alpha (8–13 Hz), beta (14–30 Hz) and gamma (31–45 Hz) bands. The employed synchrony measure, the phase-locking value, was introduced in Lachaux et al. (1999).

Phase locking value

To compute the phase locking value (PLV) (Lachaux et al. 1999) between two signals \(s_{x}(t)\) and \(s_{y}(t)\), their instantaneous phases at the target frequency must be extracted. For this purpose, the signals are band-pass filtered in the desired frequency band, and the instantaneous phases are then extracted by means of the Hilbert transform (note that the phases were extracted with the Gabor wavelet transform in Lachaux et al. (1999)). The analytic signal of \(s_{x}(t)\) is defined as:

$$z_{x}(t)=s_{x}(t)+j {\widetilde{s}}_{x}(t)=A_{x}(t)e^{j\phi _{x}(t)}$$
(2)

where \(A_{x}(t)\) is the instantaneous amplitude, \(\phi _{x}(t)\) is the instantaneous phase (IP), and \({\widetilde{s}}_{x}(t)\) is the Hilbert transform of \(s_{x}(t)\). The analytic signal of \(s_{y}(t)\) is determined accordingly. Finally, the PLV between \(s_{x}(t)\) and \(s_{y}(t)\) is computed at time t either by averaging over trials (Eq. 3) or by averaging over time windows of a single trial n (Eq. 4) (Lachaux et al. 2000).

$$PLV_t= \frac{1}{N}\left| {\sum _{n=1}^{N}\exp (j\theta (t,n))}\right|$$
(3)
$$\text{S-PLV}_{t,n}= \left| {\frac{1}{T} \int _{t-T/2}^{t+T/2} {\exp (j\theta (\tau ,n))} \, d\tau }\right|$$
(4)

where \(\theta (t, n)\) is the phase difference between \(s_{x}(t)\) and \(s_{y}(t)\), namely \(\phi _{x}(t,n) - \phi _{y}(t,n)\). PLV measures how this phase difference varies across trials: if the phase difference varies little across trials, the PLV is close to 1; otherwise it is smaller. PLV is an important synchronization measure when working with biosignals (particularly electrical brain activity). PLV is applied to narrow-band signals, because the instantaneous phase of a wideband signal lacks a clear physical interpretation.
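The following sketch implements Eqs. 2–4 with NumPy/SciPy (a minimal illustration under the assumption that inputs are already band-pass filtered; array shapes and function names are ours, not the authors’):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(band_limited):
    """Instantaneous phase phi(t) via the analytic signal
    z(t) = s(t) + j*H{s}(t) (Eq. 2); input is band-limited EEG."""
    return np.angle(hilbert(band_limited, axis=-1))

def plv_across_trials(phase_x, phase_y):
    """Eq. 3: PLV at each time point, averaged over N trials.
    phase_x, phase_y: (n_trials, n_samples) instantaneous phases."""
    return np.abs(np.exp(1j * (phase_x - phase_y)).mean(axis=0))

def single_trial_plv(phase_x, phase_y, win):
    """Eq. 4: S-PLV for one trial, averaged over a sliding window of
    `win` samples; phase_x and phase_y are 1-D phase arrays."""
    c = np.exp(1j * (phase_x - phase_y))
    return np.abs(np.convolve(c, np.ones(win) / win, mode="valid"))
```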

The recorded EEG signals were collected from 14 channels. PLVs for each electrode pair (91 pairs in total) for neutral, positive and negative emotions in the \(\alpha , \beta\), and \(\gamma\) bands are calculated separately to investigate the functional connectivity between brain regions. Calculations are conducted for all stimulus types (audio only, video only and audio + video) to determine the effect of stimulus type on inter-regional functional connectivity. PLVs are calculated by averaging across trials (N in Eq. 3 is the number of trials) that share the same properties, i.e. the same emotion and stimulus type. Before the calculations, the EEG signals were band-pass filtered with a Hamming-window linear-phase finite impulse response filter to obtain signals in the predetermined frequency bands. A permutation test was performed to verify that the obtained phase locking values are due to the stimulus given to the subject and not to volume conduction effects. PLVs were calculated for the baseline and stimulus periods and averaged over trials randomly shuffled between these periods, the three stimulus types and randomly shuffled valence values. The permutation procedure was repeated 1000 times. Only significant channel pairs (72 of the 91 channel pairs are significant for all conditions; \(p\le 0.05\)) were kept for further analysis.
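A sketch of the per-pair computation under these settings, combining a Hamming-window FIR band-pass, Hilbert phases and trial-averaged PLVs over all 91 pairs (the tap count and array layout are assumptions):

```python
import numpy as np
from itertools import combinations
from scipy.signal import firwin, filtfilt, hilbert

FS = 128  # Hz
BANDS = {"alpha": (8.0, 13.0), "beta": (14.0, 30.0), "gamma": (31.0, 45.0)}

def fir_bandpass(data, low, high, fs=FS, numtaps=257):
    """Hamming-window linear-phase FIR band-pass, applied along time."""
    taps = firwin(numtaps, [low, high], pass_zero=False, window="hamming", fs=fs)
    return filtfilt(taps, [1.0], data, axis=-1)

def pairwise_plv(trials, band):
    """Trial-averaged PLVs for every channel pair in one condition.

    trials: (n_trials, n_channels, n_samples) EEG sharing the same emotion
    and stimulus type. Returns {(i, j): (n_samples,) PLV time course}.
    """
    low, high = BANDS[band]
    phase = np.angle(hilbert(fir_bandpass(trials, low, high), axis=-1))
    n_ch = trials.shape[1]
    return {(i, j): np.abs(np.exp(1j * (phase[:, i] - phase[:, j])).mean(axis=0))
            for i, j in combinations(range(n_ch), 2)}  # 91 pairs for 14 channels
```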

PLVs between channels P7 and P8 for each stimulus type, averaged over all trials of all subjects (375 trials per stimulus type), are given in Fig. 5. PLVs for each emotion class, averaged over trials (the number of trials for each condition is given in Table 6) and shown with the corresponding valence values for each stimulus type, are given in Fig. 6.

Fig. 5 Grand-average PLVs for audio, video and audio + video stimuli between the left and right parietal channels

Fig. 6 Group-average PLVs for the positive (left column), negative (middle column) and neutral (right column) emotion classes for the audio (upper row), video (middle row) and audio + video (lower row) stimuli between the left and right parietal channels

Results

In this study, interactions between brain regions are investigated through phase locking values for positive and negative emotions in the alpha, beta and gamma bands, together with the effect of stimulus type on the interacting regions.

Phase locking values for all electrode pairs are obtained for the positive (valence \(\ge\) 0.7), negative (valence \(\le\) 0.3) and neutral (0.4 \(\le\) valence \(\le\) 0.6) emotional conditions for all subjects.
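A trivial sketch of this labeling (valence assumed normalized to a 0–1 scale; thresholds as stated above):

```python
def emotion_class(valence):
    """Label a trial from its valence rating (0-1 scale assumed)."""
    if valence >= 0.7:
        return "positive"
    if valence <= 0.3:
        return "negative"
    if 0.4 <= valence <= 0.6:
        return "neutral"
    return None  # ratings in the gaps (0.3-0.4, 0.6-0.7) stay unlabeled
```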

Fig. 7 Significant and strong (\(PLV\ge 0.7\)) phase locking values between electrodes for positive/negative/neutral emotions. Columns represent stimulus types and rows represent oscillations in the \(\alpha , \beta\), and \(\gamma\) bands

Statistical significance

To determine the statistical significance of each PLV, it is compared with the PLVs obtained from shifted trials (Lachaux et al. 2000). Surrogate values are acquired by computing phase differences over permuted trials:

$$PLV_t^{surrogate(j)} = \frac{1}{N}\left| {\sum _{n=1}^{N}e^{j(\phi _x(t,n)-\phi _y(t,n_{perm(j)}))}}\right|$$
(5)

The permutation test, with 1000 surrogate values, revealed that most channel pairs have PLVs significantly larger than chance (\(p<0.001\)). p values and PLVs for the non-significant channel pairs are given in the “Appendix”; note that synchronization for these pairs is very weak (PLVs below 0.16). Figure 7 shows the strong PLVs (\(PLV\ge 0.7\)) for each emotional case.
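A sketch of this surrogate procedure for one channel pair (Eq. 5), assuming instantaneous phases have already been extracted per trial; reducing each PLV time course to its time average is our simplification:

```python
import numpy as np

def plv_permutation_test(phase_x, phase_y, n_perm=1000, seed=0):
    """Permutation test for one channel pair (Eq. 5): the trial order of the
    second channel is shuffled so phase differences pair mismatched trials,
    destroying stimulus-locked synchrony.

    phase_x, phase_y: (n_trials, n_samples) instantaneous phases.
    Returns the observed time-averaged PLV and a permutation p value.
    """
    rng = np.random.default_rng(seed)
    observed = np.abs(np.exp(1j * (phase_x - phase_y)).mean(axis=0)).mean()
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(phase_y.shape[0])
        surrogate = np.abs(
            np.exp(1j * (phase_x - phase_y[perm])).mean(axis=0)).mean()
        exceed += surrogate >= observed
    return observed, (exceed + 1) / (n_perm + 1)
```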

A three-way repeated-measures analysis of variance (ANOVA) was conducted with the factors stimulus type (three levels: audio, video and audio + video), emotion (three levels: positive, negative and neutral) and oscillation band (three levels: \(\alpha , \beta\) and \(\gamma\)). Single-trial phase locking values (S-PLV) were calculated for each condition to generate the PLV distributions for the ANOVA. S-PLV values were averaged over 58-s time windows for each trial; the first and last seconds of the 60 s of EEG data were removed to avoid startle effects. The ANOVA showed that no channel pair has a significant three-way interaction. All main effects are significant with p\(<0.0001\), and no significant interaction was found for stimulus type \(\times\) oscillation or oscillation \(\times\) emotion. However, the stimulus type \(\times\) emotion interaction is significant with p\(<0.0001\).
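A sketch of how such an analysis could be run with statsmodels, assuming a balanced long-format table holding one time-averaged S-PLV per subject and condition for a given channel pair (column names and the random values are illustrative only):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Illustrative long-format table for a single channel pair: 25 subjects x
# 3 stimulus types x 3 emotions x 3 bands, one S-PLV per cell.
rng = np.random.default_rng(0)
df = pd.MultiIndex.from_product(
    [range(25), ["audio", "video", "av"], ["pos", "neg", "neu"],
     ["alpha", "beta", "gamma"]],
    names=["subject", "stimulus", "emotion", "band"]).to_frame(index=False)
df["splv"] = rng.uniform(0.2, 0.9, len(df))  # placeholder S-PLVs

res = AnovaRM(df, depvar="splv", subject="subject",
              within=["stimulus", "emotion", "band"]).fit()
print(res.anova_table)  # F and p values for main effects and interactions
```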

Significance for emotions

A one-way ANOVA with the factor emotion (three levels: positive, negative and neutral) was performed to determine whether the degree of phase locking between electrode locations differs significantly between positive, negative and neutral emotions for each stimulus type and oscillation band. Channel pairs with significant PLV differences are shown in Fig. 8. Test results are evaluated at a significance level of 0.01.
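A sketch of this test for a single channel pair, followed by the Tukey post hoc comparisons reported below (per-trial S-PLV arrays per emotion class are assumed as inputs):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def emotion_effect(plv_pos, plv_neg, plv_neu, alpha=0.01):
    """One-way ANOVA on per-trial S-PLVs for one channel pair, followed
    by Tukey's HSD post hoc test; inputs are 1-D arrays of trial values."""
    _, p = f_oneway(plv_pos, plv_neg, plv_neu)
    values = np.concatenate([plv_pos, plv_neg, plv_neu])
    labels = (["positive"] * len(plv_pos) + ["negative"] * len(plv_neg)
              + ["neutral"] * len(plv_neu))
    return p, pairwise_tukeyhsd(values, labels, alpha=alpha)
```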

Fig. 8 Channel pairs with significantly different PLVs between positive, negative and neutral emotions. Test results are evaluated at a significance level of 0.01

Tukey’s post hoc test showed no channel pair with a significant PLV difference between positive/negative or between negative/neutral emotions in \(\alpha\)-oscillations for any stimulus type.

\(\alpha\)-oscillations differ significantly only when the audio stimulus is used, with differences observed for channel pairs F3–AF4 and P7–P8 (\(p=0.006\) and \(p=0.007\), respectively). The only significant difference between emotion types is apparent between positive and negative emotions for channel pair T8–F8 (\(p=0.0044\)) when the video-only stimulus is used for emotion elicitation. Significant inter-hemisphere synchronization differences are apparent between the positive/neutral (\(p=0.0038\)) and negative/neutral (\(p=0.0062\)) cases in \(\beta\)-oscillations for the audio stimulus between the left and right parietal electrodes. Positive emotions show significant \(\beta\)-oscillation differences from the other emotions between channels T7 and P7 for the audio + video stimulus (\(p\approx 0.001\) for both cases).

Our results also show that O1 exhibits significant long-range synchronization differences in the \(\gamma\) band between positive and negative emotions: with AF3 (\(p=0.0044\)), T8 (\(p=0.0056\)), FC6 (\(p=0.0035\)) and F4 (\(p=0.0015\)) for the audio-only stimulus, and with AF4 (\(p=0.006\)) for the audio + video stimulus. Another \(\gamma\)-band difference is observed between negative and neutral emotions for F4–F8 with the audio + video stimulus.

Significant differences among stimulus types

Phase synchronization values may differ depending on the stimulus type, and one focus of this study is to investigate the effect of stimulus type on functional connections in emotional cases. A one-way ANOVA with the single factor stimulus type (three levels: audio, video and audio + video) is applied to uncover stimulus-type effects on the couplings between brain regions. Interestingly, no significant difference is found for positive emotions. For negative emotions, however, stimulus type does affect the phase locking values. Significant differences are listed in Table 4.

Table 4 List of significant differences between stimulus types

Tukey’s post hoc test revealed the differences between stimulus types. Results show that channel pair O1–T8 has significantly different PLVs between the audio and video stimuli in all bands (\(p=0.0044\), 0.0057 and 0.0046 for the \(\alpha , \beta\) and \(\gamma\) bands, respectively) for negative emotions. Phase locking values for pair T7–P8 differ between the audio and audio + video stimuli in all bands (\(p=0.0049\), 0.0044 and 0.0037 for the \(\alpha , \beta\) and \(\gamma\) bands, respectively). For neutral emotions, the only difference is found in the \(\gamma\) band between FC6 and F4, between the audio and audio + video stimuli (\(p=0.0027\)).

Results reveal that stimuli involving video are separable from audio-only stimuli only for negative or neutral emotions. It is worth noting that, although weak, significant couplings exist between hemispheres for the different stimulus types.

Fig. 9 Significant and strong (\(PLV\ge 0.6\)) inter-hemisphere phase locking values for positive/negative/neutral emotions

Fig. 10 Phase locking values between specific electrode pairs (AF3–F3, P8–T8, FC6–F8 and AF3–F4, AF3–AF4, F3–F4) for positive and negative emotions

Discussion

In this study, interactions between brain regions are studied through phase locking values for positive, negative and neutral emotions in the \(\alpha , \beta\), and \(\gamma\) bands. The effects of stimulus type are also studied. For this purpose, we constructed an emotional EEG database using audio, video and audio + video stimuli.

PLVs for each channel pair are tested for significance using a permutation test: the original PLVs are compared with PLVs calculated from phase differences computed over randomly shuffled trials. Most of the channel pairs were found significant in the permutation test at \(\alpha =0.01\), showing that the PLVs are significantly larger than chance. Significant PLVs with strong couplings are shown in Fig. 7. As expected, strong connections are found between regions within the same hemisphere. Note that hemispheric lateralization is remarkable, as phase synchronization values between channels in the right hemisphere are significant and high for all emotions. This finding supports the theory of the dominance of the right hemisphere over the left for processing primary emotions (Holtgraves and Felton 2011). Similar results are also reported for the processing of affective stimuli in Borod et al. (1998), Joczyk (2016), Mashal and Itkes (2016) and Mitchell et al. (2003).

Moreover, all right-hemisphere electrodes and the left frontal electrodes are involved in emotion processing in terms of functional connectivity. In the left hemisphere, significantly different mean PLVs are detected between the frontal and anterior-frontal regions in all cases. Strong synchronization is observed between AF3–F3 in the left hemisphere and between P8–T8 and FC6–F8 in the right hemisphere. PLVs for these channel pairs are shown in Fig. 10.

Maximum PLVs are observed between channels FC6 and F8 for positive emotions, whereas for negative and neutral emotions synchronization between the right parietal and temporal regions is higher in the \(\alpha\) and \(\beta\) oscillations.

It is known that \(\alpha\) oscillatory activity plays an important role in cognitive processing (Başar and Güntekin 2012). Del Zotto et al. (2013) report important changes in spectral power in the low alpha range and stress the role of right frontal regions in differentiating emotional valence. Our findings confirm the importance of \(\alpha\) oscillations, as connectivity between electrodes is stronger in the \(\alpha\) band than in the \(\beta\) and \(\gamma\) bands. Our results also agree with the literature on the importance of the anterior regions (Balconi and Lucchiari 2006; Balconi and Mazza 2009; Del Zotto et al. 2013; Güntekin and Basar 2007). A very interesting finding is the strong connectivity (\(PLV\ge 0.7\)) between AF3–F8 and AF3–F4 when processing neutral emotions using the video (in \(\alpha\) and \(\beta\) oscillations) and audio + video (\(\alpha\) oscillations) stimuli, respectively.

It is not surprising that inter-hemisphere connectivity is weaker than within-hemisphere connectivity. We therefore examine the inter-hemisphere connectivity values at a lower threshold, namely 0.6, in Fig. 9. These results show that both the left and right frontal regions contribute to emotion processing in terms of functional connectivity. Significant and strong phase locking values are observed between the left anterior-frontal and the right mid-frontal, inferior-frontal and anterior-frontal regions, and also between the left and right mid-frontal regions.

Major goals of this study were to investigate the differences between emotion types and between stimulus types. For this purpose, ANOVA analyses were conducted across conditions. Significant differences between emotions are detected for AF3–O1, P7–T7, P7–P8, T8–F8 and F4–F8, for the inter-hemisphere pair F3–AF4, and between O1 and AF4, F4, FC6 and T8 (Fig. 10).

It is accepted that the right hemisphere has more control over emotion than the left, and there are also studies on the complementary specialization of the hemispheres for the control of different emotion types (Harmon-Jones 2003; Iwaki and Noshiro 2012; Lane and Nadel 2002). These studies state that the left hemisphere primarily processes positive emotions whereas the right hemisphere primarily processes negative emotions. Similarly, Alfano and Cimino (2008) showed that the right hemisphere becomes more active than the left when subjects are primed with a negative stimulus. Our findings show significant differences between the right temporal/right inferior-frontal regions and the left temporal/right parietal regions in the \(\beta\) band for the video-only and audio + video stimuli, respectively. In both cases, PLVs for negative emotions are higher than those for positive emotions. Our results therefore do not support the statement about complementary hemispheric specialization in terms of phase locking values. However, this issue should be investigated more deeply, as the difference is apparent in \(\beta\) oscillations and it should be clarified whether this result is due to the emotional difference or to the oscillations.

In addition, Güntekin and Basar (2007) found increased amplitudes of alpha and beta responses for angry compared with happy face stimulation; face pictures were used for emotion elicitation, and significant differences were found in posterior regions for alpha responses and in central regions for beta responses. Although our results show no significant PLV differences in alpha oscillations, PLVs differ significantly between high- and low-valence trials between the left posterior (P7) and central (T7) locations for the audio + video stimulus, and between the right central (T8) and inferior-frontal (F8) locations, in beta oscillations. Note that PET (Morris et al. 1996) and fMRI (Adolphs 2002) studies have demonstrated the role of the amygdala, specifically in the perception of negative facial emotion. Supporting this statement, our findings show an interaction between the temporal lobe and the frontal and parietal lobes. Adolphs (2002) also notes the importance of the occipital and temporal lobes for detailed representation in early emotion processing, with subsequent involvement of structures including the amygdala and orbitofrontal cortex. In this study, we observed significant phase-locking-value differences in gamma-band oscillations, for the audio-only stimulus, between the occipital lobe (O1) and the left orbitofrontal (AF3), right temporal (T8), mid-frontal (F4) and fronto-central (FC6) electrodes, and between O1 and AF4 for the audio + video stimulus, indicating the existence of a network between these regions.

We also observed significant differences in phase coupling between high and low valence in \(\gamma\) oscillations between the left occipital electrode and the right temporal, mid-frontal and fronto-central electrodes when the audio-only stimulus is used. The left occipital electrode is also significantly coupled to the right anterior-frontal region when the audio + video stimulus is used. For the video stimulus, no channel pair has significantly different PLVs between emotional and neutral stimuli.

In this paper, we also investigated the effect of stimulus type for high, low and moderate valence values. The ANOVA shows that stimulus types are not separable for high-valence emotions. For negative emotions, however, PLVs between the left occipital and right temporal electrodes for the audio-only and video-only stimuli, and between the left temporal and right parietal electrodes for the audio-only and audio + video stimuli, are found to be significantly different. For neutral emotions, the audio-only stimulus is separable from the audio + video stimulus in the \(\gamma\) band between FC6 and F4.

Results reveal that PLVs differ significantly only between the audio-only stimulus and the video-only or audio + video stimuli; the video-only stimulus is not separable from the audio + video stimulus for any emotion type in terms of phase locking. This result may be interpreted as video content being the most effective stimulus type, since adding audio content to a video stimulus does not significantly change the PLVs. These results are in accordance with previous studies on the classification of two valence classes using film clips (Liu et al. 2017) and pictures with classical music (Jatupaiboon et al. 2013), which reported accuracies of 86.63 and 75.62% respectively, showing that video content is more effective than music or a sequence of pictures. This correspondence should be double-checked by performing subject-dependent and subject-independent classification studies with different stimulus types, and the effect of stimulus type should be explored further to select the best method for real-time systems.

Table 5 Insignificant channel pairs with \(p>0.01\)