1 Introduction

At present, studies of dynamic processes of very different natures (mechanical, natural, medical and biological, social, historical) based on neural networks are gaining popularity. The human body can be viewed as a complex, nonlinear biological system consisting of the nervous, skeletal, muscular, cardiovascular and other subsystems, and it behaves as a continuous medium. Its study, including the study of the human brain, therefore calls for the whole variety of mathematical and probabilistic methods. Medical and biological EEG signals are widely used in the diagnosis, follow-up and treatment of some forms of epilepsy. According to the WHO (World Health Organization), about fifty million people suffer from this neurological disease. Electroencephalography plays an important role in diagnosing it and in monitoring the brain activity of patients with epilepsy. EEG recordings are time-varying signals of brain activity, that is, time series, and they are nonlinear in nature. Analysis of such signals by methods of nonlinear dynamics makes it possible to describe EEG recordings quantitatively, since the signal characteristics can be measured. To diagnose the disease from changes in the EEG signal, it is important to extract the features (characteristic patterns) that accompany it.

For this kind of research, several types of neural networks are used: artificial (ANN), probabilistic (PNN) and convolutional (CNN). An ANN can be considered as a directed graph with weighted connections whose nodes are artificial neurons. According to the architecture of connections, artificial neural networks can be grouped into two classes: feedforward networks and recurrent (feedback) networks (RNN). In feedforward networks (multilayer perceptrons), neurons are arranged in layers and have unidirectional connections between layers. These networks are static: for a given input they generate one set of output values that does not depend on the previous network state. Recurrent networks are dynamic: because of the feedbacks in them, the inputs of the neurons are modified and thereby the network state changes. They consist of a feedforward network augmented with cyclic (feedback) connections.

In a number of papers [1,2,3,4,5,6,7,8,9], the authors propose ANN-based systems for automatic detection of characteristic epileptic patterns that can cope with the complexities associated with EEG signals and predict the most appropriate solution. In [1], EEGs obtained from epileptic and healthy brains are analyzed with an artificial neural network, and a genetic algorithm is used to minimize the mean squared error (MSE) of the network. In [2], the classification of epilepsy signs from the EEG signal is based on a genetic algorithm (GA) and an artificial neural network (ANN). The epileptic EEG signals are pre-processed with a discrete wavelet transform and divided into frequency subbands (delta, theta, alpha, beta, gamma), with entropy used as the feature. The EEG signals of healthy persons and of patients suffering from epilepsy in different periods, ictal and postictal, are compared. A multilayer ANN with feedback and an integrated GA improves the classification accuracy when diagnosing and grouping EEG signals. In [3], the analysis is carried out with an artificial neural network (ANN), where FFT coefficients serve as the training values. The aim of that work was to select the most accurate method for training a multilayer network for the qualitative classification of epileptic EEG features. Several learning methods are compared: Levenberg-Marquardt (LM), Quickprop (QP), Delta-bar-delta (DBD), Momentum and Conjugate Gradient (CG), and a genetic algorithm (GA). The best performance was achieved by optimizing the learning-rate weights with the GA. In [4], EEG segments are analyzed using a time-frequency distribution, and for each segment several features representing the energy distribution in the time-frequency plane are identified; these features are used to train a neural network. The Fast Fourier Transform and several time-frequency distributions are compared. In [5], an ANN acts as a classifier of features extracted from the signals using a combination of the discrete wavelet transform (DWT) and the fast Fourier transform (FFT); with this combination of methods, an accuracy of 98.889% is achieved. In [6], the features of epileptic EEG signals are analyzed with the genetic algorithm GAFDS, and several types of classifiers are compared. For this, frequency-domain elements are identified and combined with nonlinear characteristics. Classifiers such as k-nearest neighbors, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron and naive Bayes are used; combined with GAFDS, accuracies of 99% and 97% are obtained. In their comparative analyses, the authors found that GAFDS performs well in identifying effective features for EEG classification, so the proposed model for feature selection and optimization can improve the classification accuracy. In [7], an artificial neural network (ANN) with feedback is used to identify the epileptic EEG signal. The classification criteria are the wavelet coefficients of the studied signals, and a GA is used for training. Harmonic weights are used to improve the classification accuracy, achieving an accuracy of 99.19%. In [8], the ictal state is identified by calculating the maximum Lyapunov exponent (STLmax) according to the Kantz method. The proposed approach is based on dividing the EEG signal into periods corresponding to epileptic and non-epileptic activity; the STLmax values are then used to classify the EEG signal, and the segmentation and calculation of STLmax are performed by a trained neural network. For that study, EEG signals of 5 healthy volunteers with open and closed eyes and of 5 epilepsy patients were used. In [9], the EEG signal is first preprocessed with the discrete wavelet transform (DWT) to remove noise and extract features, and the processed data are the input values of an RNN for classification. Several experiments were carried out to obtain the optimal parameters for the model, which was then compared with Logistic Regression (LR), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Random Forest (RF) and Decision Tree (DT).

In a series of works [10, 11], a probabilistic neural network (PNN) is used to classify EEG signal features. A PNN is a kind of neural network for classification and pattern-recognition problems in which the class-membership probability density is estimated by means of a kernel approximation; it belongs to the family of so-called Bayesian networks and was derived from Bayesian networks and Fisher's statistical algorithm. With this method, the probability of misclassification is minimized. In [10], the analysis results of probabilistic (PNN) and recurrent (RNN) neural networks are compared. In general, within the proposed methodology, the authors found the results of the recurrent network analysis to be more accurate than those of feedforward network models. The Lyapunov exponents served as the attribute on which the classification is based. The probabilistic neural network proved useful in the analysis of long-term EEG signals for the early detection of electroencephalographic changes. In [11], the algorithm is constructed so that the decision is made in two stages: the calculation of Lyapunov exponents as feature vectors, and classification using classifiers trained on the extracted features. The combination of Lyapunov exponent values and a probabilistic neural network was aimed at identifying the optimal classification algorithm for the epileptic EEG in order to reveal characteristic patterns and their possible regularities.
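To make the kernel idea concrete, the following minimal sketch classifies a feature vector by a Parzen-window (Gaussian kernel) density estimate per class, which is the core of a PNN; the function name, the kernel width sigma and the equal-priors assumption are illustrative choices, not the configuration used in [10, 11].

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Toy probabilistic neural network (Parzen-window) classifier.

    x        : feature vector to classify, shape (d,)
    train_X  : training feature vectors, shape (n, d)
    train_y  : integer class labels, shape (n,)
    sigma    : Gaussian kernel width (smoothing parameter)
    """
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        members = train_X[train_y == c]
        # Gaussian kernel density estimate of p(x | class c)
        sq_dist = np.sum((members - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-sq_dist / (2.0 * sigma ** 2))))
    # Pick the class with the highest estimated density (equal priors assumed)
    return classes[int(np.argmax(scores))]
```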

A convolutional neural network (CNN) is well suited to recognizing patterns characteristic of an epileptic EEG. Here, recognition is understood as the ability of the network to extract the necessary features while remaining invariant to various kinds of interference and image distortion. A number of works are devoted to this approach [12,13,14,15].

The Lyapunov exponent technique is well suited to the study of nonlinear biomedical signals [16,17,18]. The diagnosis of epilepsy is based on assessing the chaotic behavior of the EEG. Lyapunov exponents are a quantitative measure that distinguishes orbits in phase space according to their sensitivity to initial conditions; they are used to determine the stability of any steady-state time series as well as the complexity of the system dynamics [19]. Previously, Lyapunov exponent analysis proved very effective for studying the chaotic dynamics of distributed mechanical structures [20, 21].
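As a self-contained illustration of sensitivity to initial conditions, the snippet below follows two nearby trajectories of the fully chaotic logistic map, whose largest Lyapunov exponent is known to be ln 2; the starting point, the perturbation size and the fitting range are illustrative assumptions.

```python
import numpy as np

def logistic_trajectory(x0, r=4.0, n=50):
    """Iterate the logistic map x -> r*x*(1-x), a standard chaotic test system."""
    xs = np.empty(n)
    xs[0] = x0
    for i in range(1, n):
        xs[i] = r * xs[i - 1] * (1.0 - xs[i - 1])
    return xs

# Two trajectories that start a tiny distance apart
eps = 1e-9
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + eps)

# While the separation is still small it grows roughly as eps * exp(lambda * n),
# so the slope of log|a - b| versus n estimates the largest Lyapunov exponent.
sep = np.abs(a - b)
growth_region = sep[1:25]              # region before the separation saturates
slope = np.polyfit(np.arange(1, 25), np.log(growth_region), 1)[0]
print(f"estimated lambda ~ {slope:.2f}, theoretical ln 2 ~ {np.log(2):.2f}")
```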

In the well-known works, the studies were carried out on EEGs recorded while the patients were awake, and wavelet-transform values, Fourier-transform values or the k-nearest-neighbors method were most often used as the feature classifier. There are isolated works in which only the Kantz method [22] was used to study the largest Lyapunov exponent. Since there is currently no universal method for calculating Lyapunov exponents, several methods must be applied to obtain reliable results.

In the present work, a method was developed for detecting the presence of epileptiform activity in patients based on the analysis of EEGs recorded during various sleep stages, using several methods for calculating the largest Lyapunov exponent and subsequent training of a neural network with a genetic algorithm.

2 Neural Network

To implement this task, a neural network was designed that can be classified according to the following criteria: it belongs to the ANN class (artificial neural networks); with respect to the input information, the network is analog (information is presented as real numbers); with respect to the form of training, the network is self-organizing (it forms the output solution space only on the basis of the input actions); with respect to the nature of its connections, it is a feedforward network (all connections are directed strictly from input neurons to output neurons) and static (each neuron output is connected to all inputs of the neurons of the next layer, and there are no dynamic connections) (Fig. 1).

Fig. 1 Three-layer neural network

The designed neural network has three layers, and the number of neurons in the hidden layer is configurable. The neurons of the hidden and output layers use different combinations of three types of activation function (Fig. 2); a schematic NumPy implementation is sketched after the list.

  1. Step activation function. It is represented by the function \(f\left ( x \right ) = \left \{ {\begin {array}{*{20}{c}}{0,\;x < 0}\\ {1,\;x \ge 0}\end {array}} \right \}\) and has the derivative \(f'\left ( x \right ) = \left \{ {\begin {array}{*{20}{c}}{0,\;x \ne 0}\\ {\text {undefined},\;x = 0}\end {array}} \right \}\).

    Fig. 2 Activation functions

  2. Linear activation function. It is represented by the function \(f\left ( x \right ) = Cx\) and has the derivative \(f'\left ( x \right ) = C\).

  3. Sigmoid activation function. It is represented by the function \(f\left ( x \right ) = \sigma \left ( x \right ) = \frac {1}{{1 + {e^{ - x}}}}\) and has the derivative \(f'\left ( x \right ) = f\left ( x \right )\left ( {1 - f\left ( x \right )} \right )\).
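The sketch below, in Python/NumPy, shows a static three-layer feedforward network with the three activation functions listed above; the random weight initialization, the default choice of sigmoid for both layers and the zero threshold of the step function are illustrative assumptions, since in the actual network the activation function of each neuron is set (and later mutated) individually.

```python
import numpy as np

# The three activation function types described above
def step(x):             # step function: 0 for x < 0, 1 otherwise
    return np.where(x < 0.0, 0.0, 1.0)

def linear(x, c=1.0):    # linear function: f(x) = C*x
    return c * x

def sigmoid(x):          # sigmoid function: f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerNet:
    """Static feedforward network: input layer -> hidden layer -> output layer."""

    def __init__(self, n_in, n_hidden, n_out,
                 hidden_act=sigmoid, out_act=sigmoid, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.hidden_act, self.out_act = hidden_act, out_act

    def forward(self, x):
        hidden = self.hidden_act(x @ self.w1 + self.b1)
        return self.out_act(hidden @ self.w2 + self.b2)

# Example: the configuration used later in this work (21 inputs, 15 hidden, 1 output)
net = ThreeLayerNet(n_in=21, n_hidden=15, n_out=1)
print(net.forward(np.zeros(21)))     # a single value in (0, 1)
```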

3 Genetic Algorithm

A genetic algorithm is used to solve optimization problems and is, in essence, a heuristic search algorithm. Its mechanisms resemble those of biological evolution and work by sequential selection, combination and variation of the sought parameters. The genetic algorithm is centered on crossover, the operation that recombines the available solutions.

At the very beginning there is a certain ancestor population, from which the evolution process starts. The operation of the algorithm is divided into the following stages; a schematic loop is sketched after the list of stages.

Stage 1. Crossover. Two parents are needed to produce a child, and in the process of crossover the offspring inherits traits of both parents. All possible pairs of individuals in the population are formed, and crossover occurs between them. In addition, crossover includes a mechanism that increases the variety of the offspring: mutation. During mutation, each individual can, with some probability, receive an “unplanned distortion” in its genes (Fig. 3).

Fig. 3 Gene crossover and mutation

Stage 2. Selection. At this stage, a limited set of individuals that satisfy the goal criteria better than the others is selected from the population. For this, the fitness function is calculated for each individual, and the population is sorted in descending order of its value. The fitness function, in effect, directs the evolution towards the optimal solution.

Stage 3. Formation of a new generation. In this step, the next population of individuals is created on the basis of the “best” individuals of the previous generation. Individuals not included in the new generation “die” and do not participate in further evolution.
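The three stages can be summarized by the schematic loop below; the population size, the number of generations, the mutation probability and the abstract crossover, mutate and fitness callables are assumptions made for illustration.

```python
import random

def evolve(population, fitness, crossover, mutate,
           n_generations=50, mutation_prob=0.1, survivors=20):
    """Schematic genetic algorithm: crossover -> mutation -> selection -> new generation."""
    for _ in range(n_generations):
        # Stage 1: every pair of parents produces a child, occasionally mutated
        children = []
        for i, parent_a in enumerate(population):
            for parent_b in population[i + 1:]:
                child = crossover(parent_a, parent_b)
                if random.random() < mutation_prob:
                    child = mutate(child)
                children.append(child)
        # Stage 2: selection - sort everyone by the fitness function, best first
        ranked = sorted(population + children, key=fitness, reverse=True)
        # Stage 3: new generation - only the fittest survive, the rest "die"
        population = ranked[:survivors]
    return population[0]    # the best individual found

# Toy usage: evolve a number towards the target value 42
best = evolve(
    population=[random.uniform(-100.0, 100.0) for _ in range(20)],
    fitness=lambda x: -abs(x - 42.0),
    crossover=lambda a, b: (a + b) / 2.0,
    mutate=lambda x: x + random.gauss(0.0, 5.0),
)
print(best)
```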

4 Approach Implementation

To increase the efficiency of solving the problem, neural networks of the kind described in Sect. 2 were taken as the individuals of the population for the genetic algorithm. This approach significantly speeds up the training of the neural networks.

The algorithm was trained on a sample of people with a known diagnosis (epilepsy present or epilepsy absent).

4.1 Object of Study

The EEG recordings of the patients were made at the Epineiro Medical Center for Neurology, Epilepsy Diagnosis and Treatment in Saratov using 21 channels: O2, O1, P4, P3, C4, C3, F4, F3, Fp2, Fp1, T6, T5, T4, T3, F8, F7, Pz, Cz, Fz, A2, A1, with the electrode arrangement shown in Fig. 4. On average, the duration of one signal is 10 s and the sampling rate is 250 Hz. Artifacts were removed by a neurophysiologist. EEGs recorded during the first, second and third stages of sleep were analyzed for patients with epilepsy of different diagnoses (headaches, focal tonic spasms, generalized and focal seizures, absences) and for the control group.

Fig. 4 Arrangement of EEG electrodes

4.2 Crossover and Mutation

To enable crossover and mutation of neural networks, a method was developed that linearizes a neural network into a sequence of genes (a chromosome) and restores the network from this sequence (Fig. 5). To increase efficiency and broaden the diversity of the offspring, several crossover and mutation mechanisms were developed. Both the neurons themselves and the synapse weights of the neural network are subject to crossover. During mutation, one of the following mechanisms is performed with equal probability: the weights of the neural network synapses are mutated; the parameters of a neuron activation function are mutated; or the entire neuron is mutated (one activation function is replaced by another).

Fig. 5 Linearization of synapses and neurons into a gene sequence

The fitness function of an individual is the number of correctly classified EEG signals (sick or healthy): the more EEGs are classified correctly, the better the neural network is trained and the more likely its offspring are to “survive”.
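To make this concrete, the following simplified sketch flattens the weight matrices into one gene vector, recombines and perturbs the genes, and counts correctly classified EEG recordings as fitness; the per-neuron activation-function genes of the actual method are omitted, and all names, probabilities and noise scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_chromosome(weights):
    """Linearize a list of weight matrices into one gene sequence (chromosome)."""
    return np.concatenate([w.ravel() for w in weights])

def from_chromosome(genes, shapes):
    """Restore the weight matrices of the network from a chromosome."""
    weights, pos = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        weights.append(genes[pos:pos + size].reshape(shape))
        pos += size
    return weights

def crossover(genes_a, genes_b):
    """Uniform crossover: each gene of the child comes from one of the two parents."""
    mask = rng.random(genes_a.size) < 0.5
    return np.where(mask, genes_a, genes_b)

def mutate(genes, prob=0.05, scale=0.1):
    """Mutate a random subset of weight genes with small Gaussian perturbations."""
    mask = rng.random(genes.size) < prob
    return genes + mask * rng.normal(scale=scale, size=genes.size)

def fitness(predicted_labels, true_labels):
    """Number of correctly classified EEG recordings (sick / healthy)."""
    return int(np.sum(predicted_labels == true_labels))

# Round trip for a 21-15-1 network: flatten, recombine, mutate, restore
shapes = [(21, 15), (15, 1)]
parent_a = to_chromosome([rng.normal(size=s) for s in shapes])
parent_b = to_chromosome([rng.normal(size=s) for s in shapes])
child_weights = from_chromosome(mutate(crossover(parent_a, parent_b)), shapes)
```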

4.3 Neural Network Configuration

The number of input neurons of the neural network equals the number of channels in the EEG signal, which in this study is 21. The input layer receives the data as numbers representing the characteristics calculated for each channel. The following characteristics were used: the largest Lyapunov exponent calculated by the Rosenstein method [23], by the Wolf method [24] and by the Sano-Sawada method [25].
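The feature extraction for one recording might be assembled as in the sketch below; here the open-source nolds package (whose lyap_r function implements Rosenstein's algorithm) is used as a stand-in estimator, while implementations of the Wolf and Sano-Sawada methods are assumed to exist separately and are not shown. The synthetic data and the use of default nolds parameters are illustrative assumptions.

```python
import numpy as np
import nolds   # nolds.lyap_r estimates the largest Lyapunov exponent (Rosenstein's algorithm)

def channel_features(eeg, estimator):
    """Largest Lyapunov exponent for every EEG channel.

    eeg       : array of shape (n_channels, n_samples), e.g. (21, 2500)
                for a 10 s recording sampled at 250 Hz
    estimator : callable mapping a 1-D time series to its largest Lyapunov exponent
    """
    return np.array([estimator(channel) for channel in eeg])

# Illustration with a synthetic 21-channel "recording" (random data, not a real EEG)
eeg = np.random.default_rng(0).normal(size=(21, 2500))
rosenstein_features = channel_features(eeg, nolds.lyap_r)
print(rosenstein_features.shape)   # (21,) - one value per input neuron of the network
```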

Test sample: 33 patients, of whom 6 are healthy and 27 have epilepsy of different diagnoses (headaches, generalized, focal). For each EEG channel, the largest Lyapunov exponent was calculated by the Sano-Sawada, Rosenstein and Wolf methods.

Training: a neural network of the following configuration was used: 21 input neurons (the number of channels in the EEG), 15 neurons in the hidden layer and 1 output neuron. The largest Lyapunov exponents of all EEG channels were fed to the neurons of the input layer (Fig. 6).

Fig. 6 Visualization of the trained neural network during the EEG analysis

5 Results and Conclusions

In this work, a comprehensive approach has been developed for detecting the presence of epilepsy in patients based on the analysis of EEGs recorded during the first, second and third stages of sleep. The EEG analysis is carried out using three different methods for calculating the largest Lyapunov exponent, namely the Rosenstein, Wolf and Sano-Sawada methods, followed by training of a neural network with a genetic algorithm.

In the course of the studies, the following combinations turned out to be the most accurate in determining the presence of epilepsy: analysis of the first-sleep-stage EEG by the Sano-Sawada and Wolf methods, and analysis of the third-sleep-stage EEG by the Sano-Sawada method. Overall, the Rosenstein method showed the worst results.

Some of the combinations made it possible to determine the presence or absence of the disease with 100% accuracy. The other combinations showed lower accuracy, which nevertheless was at least 85%. The data are presented in Table 1.

Table 1 Accuracy of the results