1 Introduction

The optimized radial basis function network is a kind of neural network that utilizes a step size inside the gradient strategy [1, 2] and a Gaussian function as the activation function [3, 4] for the modeling. A small step size will spend much time to reach a minimum, while a big step size will jump over the minimum; hence, an acceptable step size is needed. Seeking an acceptable step size in the gradient strategy is not easy; the genetic optimizer is one option to seek an acceptable step size.

The genetic optimizer has been evaluated in several optimization problems. In [5], the route calculation is performed by a genetic optimizer. In [6, 7], hybrid genetic optimizers which combine the genetic mechanism with the gradient strategy are suggested. In [8], genetic optimizer-based integer-valued optimization is considered for two machine optimization models. In [9], the genetic optimizer is utilized to optimize the terms of variational mode decomposition. In [10], deep optimization trained by a genetic optimizer is suggested. In [11], a cluster-based genetic optimizer that outputs a result of the bin packing problem is developed. In [12], a problem-specific non-dominated sorting genetic optimizer is suggested. In [13], the impact of utilizing constraint priorities on a genetic optimizer is studied. In [14], the genetic optimizer is addressed for the practical medical task of selecting drug combinations. In [15], the automatic design of dispatching rules using the genetic optimizer is addressed. In [16], the genetic optimizer is applied to achieve a balance between privacy protection and resource consumption. In [17], a genetic optimizer that uses a statistical-based chromosome replacement strategy is proposed. In [18], a high-performance solution is presented for parcel exchange based on a genetic optimizer. In [19], a hybrid feature selection method that combines a genetic optimizer with a proposed filter is presented. Nevertheless, the genetic optimizer has not been evaluated to seek an acceptable step size in the gradient strategy for an optimized radial basis function network.

In this study, the genetic optimizer is suggested to seek an acceptable step size in the gradient strategy for an optimized radial basis function network. The justification of the genetic optimizer to seek an acceptable step size is as follows: a small step size takes small steps and spends much time to reach a minimum in the gradient strategy, while a big step size takes big steps and jumps over the minimum in the gradient strategy; hence, it is important to suggest a genetic optimizer to seek an acceptable step size. The advantage of our genetic optimizer over the other genetic optimizers is as follows: the other genetic optimizers utilize a high number of stages, denoted as the initialization, fitness function, constraint handling, crossover, mutation, decode chromosomes, calculate fitness value, chromosome selection, replace old chromosome, and stopping criteria, while our genetic optimizer utilizes a small number of stages, denoted as the initialization, constraint handling, decode chromosomes, chromosome selection, and stopping criteria. The idea of utilizing a small number of stages in our genetic optimizer is based on the simplex optimizer [20, 21] and the bat optimizer [22, 23], which also utilize a small number of stages.

On the other hand, fatigue driving is one of the leading causes of traffic accidents; consequently, fatigue driving modeling plays a crucial role in road safety. To validate the performance of the optimized radial basis function network, fatigue driving modeling in a vehicle is evaluated. A data table is utilized in an Arduino Mega to save the final data; after the data collection, the data table is utilized in a personal computer to train the optimized radial basis function network; and after the training, the trained optimized radial basis function network is utilized in a personal computer for the testing.

The rest of the work is organized as follows. Section 2 describes the literature review. Section 3 describes the research model. Section 4 describes the experimental design and performance evaluation. Section 5 describes the conclusion.

2 Literature review

In this section, the research which provides a broader context for our work is presented.

Genetic optimizers utilize several stages, but there are some works, such as [24,25,26], that do not explain the stages utilized by the genetic optimizers.

There are other works that explain the stages utilized by the genetic optimizers. In [27], the chromosome representation, fitness function, and genetic operators are described as the stages utilized by genetic optimizers and gradient-free methods. In [28], the population initialization, selection operation, crossover operation, and mutation operation are described as the stages utilized by a stochastic gradient descent with a genetic optimizer. In [29], the genetic encoding, objective function and fitness function, genetic operation, selection, crossover, and mutation are described as the stages utilized by a genetic optimizer-based fuzzy optimization. In [30], the adaptive crossover and mutation operation are described as the stages utilized by an adaptive genetic optimizer. In [31], the solution representation, generation of the initial population and calculation of the fitness function, developed local search optimizer, selection, crossover, and mutation operators are described as the stages utilized by a hybrid genetic optimizer. In [32], the genetic operations, initialization, parent selection, crossover, mutation, and termination are described as the stages utilized by a genetic optimizer and a pore network model. In [33], the chromosome encoding, crossover operation, and mutation operation are described as the stages utilized by a clustering-based extended genetic optimizer. In [34], the initialization operator, design of chromosome coding, design of filter chain, crossover operator, mutation operator, correction operator, design of selection operator, and design of termination condition are described as the stages utilized by an improved genetic optimizer. In [35], the chromosome representation, population initialization, cost function, operators, selection, crossover, and mutation are described as the stages utilized by a wireless sensor network using a genetic optimizer. In [36], the population initialization, selection of the fitness function, selection operation, crossover operation, and mutation operation are described as the stages utilized by a neural network and a genetic optimizer. In [37], the population initialization, generation of the next generation of individuals, crossover operation, mutation operation, decode chromosomes, calculation of the fitness value, chromosome selection operation, replacement of the old chromosome with a simulated annealing operator, determination of the number of iterations, and output of results are described as the stages utilized by a simulated annealing genetic optimizer. In [38], the selection of the initial population, crossover, and mutation are described as the stages utilized by a wireless sensor network using a genetic optimizer. In [39], the genetic operator design, selection, crossover, and variation are described as the stages utilized by a proposed genetic optimizer. In [40], the representation, initialization, fitness function, constraint handling, selection and reproduction, crossover, mutation, and stopping criteria are described as the stages utilized by a genetic optimizer-based probabilistic model.

Some of the main stages utilized by a genetic optimizer are described as follows [37, 40].

Initialization. The size of the population is selected. The chromosomes are created in the initial stage, and each unit of a chromosome is initialized with a random number.

Fitness function. The fitness value corresponding to a chromosome measures its efficacy. A good fitness value maximizes the success probability and minimizes the associated cost.

Constraint handling. Real-world problems are mostly constrained, i.e., solutions may lie outside the feasible region. To avoid constraint violation, chromosomes are encoded in harmony with the constraints.

Crossover. It is the exchange of gene fragments at the same position on two chromosomes. For the optimization of the assembly sequence in precast concrete buildings, the crossover operation needs to meet the following conditions: (a) inherit the excellent genes of the mother chromosome to the greatest extent; (b) all gene values in offspring chromosomes cannot be repeated with each other; (c) offspring chromosomes must meet strict constraints.

Mutation. Because all gene values in the chromosome cannot be repeated with each other, the mutation operation can be realized by randomly exchanging two genes in the chromosome. This method is also called exchange mutation; the mutation operation adopts two-point reciprocity to mutate.
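
As an illustration, the following minimal Python sketch shows exchange mutation on a permutation-encoded chromosome; the function name and the encoding are assumptions for illustration only, not the exact implementation of the cited works.

```python
import random

def exchange_mutation(chromosome):
    """Swap two randomly chosen genes (two-point reciprocity).

    Keeps all gene values unique, since the operation only
    permutes existing genes without duplicating any of them.
    """
    mutated = chromosome[:]                       # work on a copy
    i, j = random.sample(range(len(mutated)), 2)  # two distinct positions
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

# Example: a permutation-encoded assembly sequence
print(exchange_mutation([1, 2, 3, 4, 5]))
```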

Decode chromosomes. The assembly sequence is obtained by decoding the components.

Calculate the fitness value. The fitness value is calculated according to the assembly difficulty coefficient and assembly time of components in the assembly process.

Chromosome selection. This stage selects better individuals from the previous generation and passes them on to the next generation. The selection technique has a great impact on the efficiency of genetic programming. Roulette wheel selection is adopted to select outstanding individuals from the population.
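
The following minimal Python sketch shows roulette wheel selection under the assumption that fitness values are positive and a higher fitness denotes a better individual; the names are illustrative.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)   # spin the wheel
    cumulative = 0.0
    for individual, fitness in zip(population, fitnesses):
        cumulative += fitness
        if cumulative >= pick:
            return individual
    return population[-1]               # numerical safety fallback

# Example: individuals 'a'..'c' with fitness 1, 2, 7
print(roulette_wheel_select(['a', 'b', 'c'], [1.0, 2.0, 7.0]))
```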

Replace the old chromosome. When simulated annealing is combined with the genetic optimizer, a simulated annealing operator judges whether to replace the old chromosome.

Stopping criteria. A maximum number of generations is taken as the halting condition.

Figure 1 shows the difference between the other genetic optimizers and our genetic optimizer: the other genetic optimizers utilize a high number of stages, denoted as the initialization, fitness function, constraint handling, crossover, mutation, decode chromosomes, calculate fitness value, chromosome selection, replace old chromosome, and stopping criteria, while our genetic optimizer utilizes a small number of stages, denoted as the initialization, constraint handling, decode chromosomes, chromosome selection, and stopping criteria. The idea of utilizing a small number of stages in our genetic optimizer is based on the simplex optimizer [20, 21] and the bat optimizer [22, 23], which also utilize a small number of stages.

Fig. 1 Comparison between the other genetic optimizers and our genetic optimizer

3 Research model

This section introduces the optimized radial basis function network. Later, this section presents the design and the pseudocode of the genetic optimizer to seek an acceptable step size in the gradient strategy for an optimized radial basis function network.

3.1 The optimized radial basis function network

This subsection introduces the optimized radial basis function network.

Figure 2 shows the application of the genetic optimizer to seek an acceptable step size in the gradient strategy for an optimized radial basis function network. The collected data is used for the training of the optimized radial basis function network; later, the genetic optimizer is utilized to seek a new step size for the optimized radial basis function network. If the root mean square error obtained with the new step size is smaller than that obtained with the previous step size, the new step size is chosen for the optimized radial basis function network; otherwise, the previous step size is chosen. The details are in this and in the next section.

Fig. 2 Application of the genetic optimizer to seek an acceptable step size in the gradient strategy

The optimized radial basis function network with a hidden layer is:

$$\begin{aligned} \begin{array}{c} w_{k}= {\displaystyle \sum \limits _{j=1}^{M}} a_{j,k}\alpha _{j}(u_{j,k}),\\ \alpha _{j}(u_{j,k})=e^{-u_{j,k}^{2}},\\ u_{j,k}= {\displaystyle \sum \limits _{i=1}^{N}} b_{ij,k}\left[ x_{i,k}-c_{i,k}\right] ,\\ \gamma _{j}(u_{j,k})=2u_{j,k}\alpha _{j}(u_{j,k}), \end{array} \end{aligned}$$
(1)

where \(i=1,\ldots ,N\), \(j=1,\ldots ,M\), \(x_{i,k}\in \Re \) is the optimized radial basis function network input, \(w_{k}\in \Re \) is the optimized radial basis function network output, \(a_{j,k}\in \Re \), \(b_{ij,k}\in \Re \), and \(c_{i,k}\in \Re \) are the terms of the output layer, hidden layer, and centers, respectively, \(\alpha _{j}(u_{j,k})\in \Re \) and \(\gamma _{j}(u_{j,k})\) are nonlinear functions, \(u_{j,k}\in \Re \) is the addition function, and M is the number of hidden layer neurons. Figure 3 shows the architecture of the optimized radial basis function network with the input layer, hidden layer, and output layer.

Fig. 3 The optimized radial basis function network
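
As an illustration of (1), a minimal NumPy sketch of the forward pass is given below; the array shapes and names are assumptions made for illustration.

```python
import numpy as np

def rbf_forward(x, a, b, c):
    """Forward pass of Eq. (1).

    x : (N,)   network inputs x_{i,k}
    a : (M,)   output layer terms a_{j,k}
    b : (N, M) hidden layer terms b_{ij,k}
    c : (N,)   centers c_{i,k}
    Returns the scalar output w_k, the activations alpha_j(u_{j,k}),
    and the addition functions u_{j,k}.
    """
    u = b.T @ (x - c)        # u_{j,k} = sum_i b_{ij,k} (x_{i,k} - c_{i,k})
    alpha = np.exp(-u ** 2)  # Gaussian activation alpha_j(u_{j,k})
    w = a @ alpha            # w_k = sum_j a_{j,k} alpha_j(u_{j,k})
    return w, alpha, u

# Example with N = 2 inputs and M = 3 hidden layer neurons
rng = np.random.default_rng(0)
a, b, c = rng.random(3), rng.random((2, 3)), rng.random(2)
w, alpha, u = rbf_forward(rng.random(2), a, b, c)
```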

In this part, the tuning law of the optimized radial basis function network is obtained.

Define the error function as:

$$\begin{aligned} \varepsilon _{k}=\frac{1}{2}\left( w_{k}-t_{k}\right) ^{2}, \end{aligned}$$
(2)

where \(\left( w_{k}-t_{k}\right) \) is the error, \(w_{k}\) is the optimized radial basis function network output, \(t_{k}\) is the target. The tuning law is achieved utilizing the following equations:

$$\begin{aligned} \begin{array}{c} \triangle a_{j,k}=a_{j,k+1}-a_{j,k}=-\eta \frac{\partial \varepsilon _{k} }{\partial a_{j,k}},\\ \triangle b_{ij,k}=b_{ij,k+1}-b_{ij,k}=-\eta \frac{\partial \varepsilon _{k} }{\partial b_{ij,k}},\\ \triangle c_{i,k}=c_{i,k+1}-c_{i,k}=-\eta \frac{\partial \varepsilon _{k} }{\partial c_{i,k}}, \end{array} \end{aligned}$$
(3)

where \(\eta \) is the step size to be defined in the next section. Utilizing the chain rule to achieve \(\frac{\partial \varepsilon _{k}}{\partial a_{j,k}}\) gives:

$$\begin{aligned} \begin{array}{c} \frac{\partial \varepsilon _{k}}{\partial a_{j,k}}=\frac{\partial \varepsilon _{k}}{\partial \left( w_{k}-t_{k}\right) }\frac{\partial \left( w_{k} -t_{k}\right) }{\partial w_{k}}\frac{\partial w_{k}}{\partial a_{j,k}}\\ =\alpha _{j}(u_{j,k})\left( w_{k}-t_{k}\right) , \end{array} \end{aligned}$$
(4)

Utilizing the chain rule to achieve \(\frac{\partial \varepsilon _{k}}{\partial b_{ij,k}}\) gives:

$$\begin{aligned} \begin{array}{c} \frac{\partial \varepsilon _{k}}{\partial b_{ij,k}}=\frac{\partial \varepsilon _{k}}{\partial \left( w_{k}-t_{k}\right) }\frac{\partial \left( w_{k}-t_{k}\right) }{\partial w_{k}}\frac{\partial w_{k}}{\partial b_{ij,k} }\\ =\gamma _{j}(u_{j,k})a_{j,k}\left[ c_{i,k}-x_{i,k}\right] \left( w_{k} -t_{k}\right) , \end{array} \end{aligned}$$
(5)

Utilizing the chain rule to achieve \(\frac{\partial \varepsilon _{k}}{\partial c_{i,k}}\) gives:

$$\begin{aligned} \begin{array}{c} \frac{\partial \varepsilon _{k}}{\partial c_{i,k}}=\frac{\partial \varepsilon _{k}}{\partial \left( w_{k}-t_{k}\right) }\frac{\partial \left( w_{k} -t_{k}\right) }{\partial w_{k}}\frac{\partial w_{k}}{\partial c_{i,k}}\\ =b_{ij,k}\gamma _{j}(u_{j,k})a_{j,k}\left( w_{k}-t_{k}\right) , \end{array} \end{aligned}$$
(6)

By substituting (4), (5), and (6) into (3), the gradient strategy of the optimized radial basis function network is as follows:

$$\begin{aligned} \begin{array}{l} a_{j,k+1}=a_{j,k}-\eta \alpha _{j}(u_{j,k})\left( w_{k}-t_{k}\right) ,\\ b_{ij,k+1}=b_{ij,k}-\eta \gamma _{j}(u_{j,k})a_{j,k}\left[ c_{i,k} -x_{i,k}\right] \left( w_{k}-t_{k}\right) ,\\ c_{i,k+1}=c_{i,k}-\eta b_{ij,k}\gamma _{j}(u_{j,k})a_{j,k}\left( w_{k} -t_{k}\right) , \end{array} \end{aligned}$$
(7)

where\(\ \left( w_{k}-t_{k}\right) \) is the error of (2).
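
Building on the forward pass sketch above, a minimal sketch of one tuning step (7) follows; it assumes the sum over j that is implicit in (6), and the names are illustrative.

```python
def gradient_step(x, t, a, b, c, eta):
    """One tuning step of Eq. (7) with step size eta."""
    w, alpha, u = rbf_forward(x, a, b, c)
    e = w - t                            # error (w_k - t_k) of Eq. (2)
    gamma = 2.0 * u * alpha              # gamma_j(u_{j,k}) of Eq. (1)
    a_new = a - eta * alpha * e                       # output layer terms
    b_new = b - eta * np.outer(c - x, gamma * a) * e  # hidden layer terms
    c_new = c - eta * (b @ (gamma * a)) * e           # centers (sum over j)
    return a_new, b_new, c_new, e
```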

3.2 Genetic optimizer to seek an acceptable step size in the gradient strategy for an optimized radial basis function network

This subsection presents the design and the pseudocode of the genetic optimizer to seek an acceptable step size in the gradient strategy for an optimized radial basis function network.

Figure 4 shows the block diagram of the genetic optimizer to seek an acceptable step size in the gradient strategy for an optimized radial basis function network. From the gradient strategy (7), the genetic optimizer will tune the step size \(\eta \).

Fig. 4 Block diagram of the genetic optimizer to seek an acceptable step size in the gradient strategy

Figure 5 shows the justification of the genetic optimizer to seek an acceptable step size \(\eta \) in the gradient strategy for an optimized radial basis function network: a small step size \(\eta \) takes small steps and spends much time to reach a minimum in the gradient strategy, while a big step size \(\eta \) takes big steps and jumps over the minimum in the gradient strategy.

Fig. 5 Justification of the genetic optimizer to seek an acceptable step size in the gradient strategy

The information stored at time k is:

$$\begin{aligned} \left\langle z_{k},v_{k},\beta _{k},y_{k}\right\rangle , \end{aligned}$$
(8)

where \(z_{k}\) is the chromosome at time k, \(v_{k}\) is the velocity chromosome at time k, \(\beta _{k}\) is the auxiliary velocity chromosome at time k, and \(y_{k}\) is the chromosome precision at time k.

The step size \(\eta _{k}\) is assigned to the chromosome \(z_{k}\) of the genetic optimizer:

$$\begin{aligned} z_{k}=\eta _{k}, \end{aligned}$$
(9)

The chromosome precision \(y_{k}\) is tuned as follows:

$$\begin{aligned} y_{k}=\frac{z_{\max }-z_{\min }}{2^{C}-1}, \end{aligned}$$
(10)

where \(z_{\min }\) and \(z_{\max }\) are the minimum and maximum values that the next chromosome \(z_{k+1}\) can take, and C is the number of bits in a chromosome. The auxiliary velocity chromosome \(\beta _{ok}\) is tuned as follows:

$$\begin{aligned} \beta _{ok}=\left\{ \begin{array}{cc} 1 &{} \left( {\text {rand}}_{2}>0.5\right) \\ 0 &{} otherwise \end{array} \right. , \end{aligned}$$
(11)

where \({\text {rand}}_{2}\) is a random number in the interval [0, 1]. The velocity chromosome \(v_{k}\) is tuned as follows:

$$\begin{aligned} v_{k}= {\displaystyle \sum \limits _{o=1}^{C}} 2^{o}\beta _{ok}, \end{aligned}$$
(12)

where \(\beta _{ok}\) is the auxiliary velocity chromosome and C is the number of bits in a chromosome.

To generate the next chromosome, the chromosome is changed with some randomness. The auxiliary chromosome \(\overline{z}_{k+1}\) is tuned as follows:

$$\begin{aligned} \overline{z}_{k+1}=v_{k}y_{k}+z_{\min }, \end{aligned}$$
(13)

where \(v_{k}\) is the velocity chromosome, \(y_{k}\) is the chromosome precision, and \(z_{\min }\) is the minimum chromosome value.
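
A minimal Python sketch of the candidate generation of Eqs. (10)–(13) is given below; \(z_{\min }\), \(z_{\max }\), and C are taken as given constants, and the names are illustrative.

```python
import random

def propose_chromosome(z_min, z_max, C):
    """Candidate step size z_bar_{k+1} from Eqs. (10)-(13).

    C is the number of bits in a chromosome; out-of-range
    candidates are later rejected by the rule of Eq. (16).
    """
    y = (z_max - z_min) / (2 ** C - 1)         # precision, Eq. (10)
    beta = [1 if random.random() > 0.5 else 0  # random bits, Eq. (11)
            for _ in range(C)]
    v = sum(2 ** o * beta[o - 1]               # velocity, Eq. (12)
            for o in range(1, C + 1))
    return v * y + z_min                       # Eq. (13)
```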

Utilizing the chromosome \(z_{k}\) of (9), the objective function \(g(z_{k})\) is tuned as follows:

$$\begin{aligned} \begin{array}{c} a_{\eta j,k+1}=a_{\eta j,k}-z_{k}\frac{\partial \varepsilon _{k} }{\partial a_{j,k}},\\ w_{\eta ,k}= {\displaystyle \sum \limits _{j=1}^{M}} a_{\eta j,k+1}\alpha _{j}(u_{j,k}),\\ g(z_{k})=\left( w_{\eta ,k}-t_{k+1}\right) ^{2}, \end{array} \end{aligned}$$
(14)

Utilizing the auxiliary chromosome \(\overline{z}_{k+1}\) of (13), the auxiliary objective function \(g(\overline{z}_{k+1})\) is tuned as follows:

$$\begin{aligned} \begin{array}{c} \overline{a}_{\eta j,k+1}=\overline{a}_{\eta j,k}-\overline{z}_{k+1}\frac{\partial \varepsilon _{k}}{\partial a_{j,k}},\\ \overline{w}_{\eta ,k+1}= {\displaystyle \sum \limits _{j=1}^{M}} \overline{a}_{\eta j,k+1}\alpha _{j}(u_{j,k}),\\ g(\overline{z}_{k+1})=\left( \overline{w}_{\eta ,k+1}-t_{k+1}\right) ^{2}, \end{array} \end{aligned}$$
(15)

After the auxiliary chromosome \(\overline{z}_{k+1}\) is obtained with (13), the next chromosome \(z_{k+1}\) is tuned as follows:

$$ \begin{aligned} z_{k+1}=\left\{ \begin{array}{cc} \overline{z}_{k+1} &{} \left( g(\overline{z}_{k+1})<g(z_{k})\right) \& \left( \overline{z}_{k+1}\ge z_{\min }\right) \& \left( \overline{z}_{k+1}\le z_{\max }\right) \\ z_{k} &{} otherwise \end{array} \right. , \end{aligned}$$
(16)

where \(z_{\min }\) and \(z_{\max }\) are the minimum and maximum values that \(z_{k+1}\) can take. Equation (16) implies that the chromosome is tuned when two requirements are satisfied: (a) the candidate finds a better value of the objective function, i.e., \(g(\overline{z}_{k+1})<g(z_{k})\); (b) the candidate \(\overline{z}_{k+1}\) is bounded by the minimum value \(z_{\min }\) and the maximum value \(z_{\max }\).

The next chromosome \(z_{k+1}\) of the genetic optimizer is assigned to the next step size \(\eta _{k+1}\):

$$\begin{aligned} \eta _{k+1}=z_{k+1}, \end{aligned}$$
(17)

The optimization by the genetic optimizer uses the constants shown in Table 1. Since in this genetic optimizer the chromosome is changed with some randomness, other similar constants would yield similar results.

Table 1 Genetic optimizer constants

The pseudocode of the genetic optimizer to seek an acceptable step size in the gradient strategy is as follows:

Inputs: \(z_{1}\), \(v_{1}\), \(y_{1}\).

1. Generate the initial chromosome \(z_{1}\) and the initial velocity chromosome \(v_{1}\);
2. Define the initial chromosome precision \(y_{1}\);
3. While (k < maximum iteration number) do
4.   Tune the chromosome \(z_{k}\) with Eq. (9);
5.   Tune the chromosome precision \(y_{k}\), the auxiliary velocity chromosome \(\beta _{k}\), and the velocity chromosome \(v_{k}\) with Eqs. (10), (11), and (12), respectively;
6.   Tune the auxiliary chromosome \(\overline{z}_{k+1}\) with Eq. (13);
7.   Determine the objective function \(g(z_{k})\) with Eq. (14);
8.   Determine the auxiliary objective function \(g(\overline{z}_{k+1})\) with Eq. (15);
9.   If (\( \left( g(\overline{z}_{k+1})<g(z_{k})\right) \& \left( \overline{z}_{k+1}\ge z_{\min }\right) \& \left( \overline{z}_{k+1}\le z_{\max }\right) \)) then
10.    Accept the new result \(z_{k+1}\) with Eq. (16);
11.    Otherwise, take the past value \(z_{k}\) with Eq. (16);
12.  End If
13.  Tune the next step size \(\eta _{k+1}\) with Eq. (17);
14. End While

Output: \(\eta _{k+1}\).
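
The pseudocode can be realized as a short Python sketch building on the functions defined above; the objective g is a stand-in for Eqs. (14)–(15), and the sample data, bounds, and names are hypothetical.

```python
def seek_step_size(g, z_min, z_max, C, z_init, max_iters):
    """Genetic optimizer of Eqs. (8)-(17): returns an acceptable
    step size eta for the gradient strategy."""
    z = z_init                                       # Eq. (9): z_k = eta_k
    for _ in range(max_iters):                       # stopping criteria
        z_bar = propose_chromosome(z_min, z_max, C)  # Eqs. (10)-(13)
        # Eq. (16): keep only improving, in-bounds candidates
        if z_min <= z_bar <= z_max and g(z_bar) < g(z):
            z = z_bar
    return z                                         # Eq. (17): eta = z

# Illustrative use: the objective is the squared modeling error after
# one gradient step with the candidate step size (cf. Eqs. (14)-(15)).
rng = np.random.default_rng(1)
x_s, t_s = rng.random(2), 0.7                        # hypothetical sample
a0, b0, c0 = rng.random(3), rng.random((2, 3)), rng.random(2)

def g(eta):
    a1, b1, c1, _ = gradient_step(x_s, t_s, a0, b0, c0, eta)
    w, _, _ = rbf_forward(x_s, a1, b1, c1)
    return (w - t_s) ** 2

eta = seek_step_size(g, z_min=0.01, z_max=1.0, C=16, z_init=0.5,
                     max_iters=100)
```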

The genetic optimizer of Eqs. (8)–(17), considered to seek an acceptable step size \(\eta \) in the gradient strategy, reaches the objective function \(g_{0}\) described by:

$$\begin{aligned} \begin{array}{c} \min g_{0}=\min \left( \left( w_{\eta ,k}-t_{k}\right) ^{2}\right) \\ \text {subject to}\\ \eta _{\min }\le \eta _{k}\le \eta _{\max }\\ z_{k}=\eta _{k}\\ z_{\min }=\eta _{\min }\\ z_{\max }=\eta _{\max }, \end{array} \end{aligned}$$
(18)

4 Experimental design and performance evaluation

In this section, we compare the simplex optimizer of (19) [20, 21], the bat optimizer of (20) [22, 23], and the genetic optimizer of Eqs. (1)–(7), (8)–(18) to seek an acceptable step size in the gradient strategy for an optimized radial basis function network. The aim of the strategies is that, utilizing the inputs \(u_{i,k}\), the optimized radial basis function network output \(w_{k}\) reaches the target \(t_{k}\) as fast as possible.

4.1 Experimental materials

A data table is utilized in an Arduino Mega to save the final data; after the data collection, the data table is utilized in a personal computer to train the optimized radial basis function network; and after the training, the trained optimized radial basis function network is utilized in a personal computer for the testing.

For the development of a sleep monitoring system in the vehicle, it is necessary to install some sensors that measure the state of the driver. Taking into account some sleep features, it is convenient to place the following sensors inside the vehicle: the steering wheel force sensor, the steering wheel heart rate sensor, and the steering wheel blood oxygen saturation sensor. The signals of heart rate, pressure, and blood oxygen concentration were chosen because these signals are easier and cheaper to collect. The structure of the circuit with the steering wheel heart rate sensor can be seen in Fig. 6; the structure of the circuits with the other two sensors is similar.

Fig. 6 Structure of the circuit with the steering wheel heart rate sensor

To carry out the test of the steering wheel force sensor, the modification is carried out on the steering wheel as shown in Fig. 7: an FSR pressure sensor is connected to an analog port of the Arduino Mega 2560, and the reading is stored through the Octave software to obtain the final data. This procedure does not involve any data processing such as data cleaning, resampling, etc. The sampling frequency of the final data is 3200 samples per second.

Fig. 7 The steering wheel force sensor

To carry out the test of the steering wheel heart rate sensor, the heart rate reading of the driver is obtained experimentally with the help of the Octave software. For this, it was necessary to fit the MAX30102 sensor on the driver's finger as shown in Fig. 8; in this way, a good fixation of the sensor is obtained and erroneous readings are avoided.

Fig. 8 The steering wheel heart rate sensor

The calculation of the oxygen saturation of the driver is carried out with the same MAX30102 sensor with which the heart rate is obtained, as shown in Fig. 8; in this case, the same tests are carried out under the same conditions so that the results are similar.

4.2 Experimental environment

Fatigue driving is one of the leading causes of traffic accidents, as can be seen in Fig. 9; consequently, fatigue driving modeling plays a crucial role in road safety. To validate the performance of the optimized radial basis function network, fatigue driving modeling in a vehicle is evaluated.

Fig. 9 Fatigue driving is one of the leading causes of traffic accidents

As the first comparison [20, 21], the objective function \(g_{0}\) considered for the simplex optimizer to achieve this aim is:

$$\begin{aligned} \begin{array}{c} \min g_{0}=\min \left( w_{\eta ,k}-t_{k}\right) \\ \text {subject to}\\ \eta _{\min }\le \eta _{k}\le \eta _{\max }\\ z_{k}=\eta _{k}\\ z_{\min }=\eta _{\min }\\ z_{\max }=\eta _{\max }, \end{array} \end{aligned}$$
(19)

As the second comparison [22, 23], the objective function \(g_{0}\) considered for the bat optimizer to achieve this aim is:

$$\begin{aligned} \begin{array}{c} \min g_{0}=\min \left( \left( w_{\eta ,k}-t_{k}\right) ^{2}\right) \\ \text {subject to}\\ \eta _{\min }\le \eta _{k}\le \eta _{\max }\\ z_{k}=\eta _{k}\\ z_{\min }=\eta _{\min }\\ z_{\max }=\eta _{\max }, \end{array} \end{aligned}$$
(20)

4.3 Parameters setting

For the awake driver and somnolent driver, we utilize 2 inputs to train the optimized radial basis function network:

  • \(u_{1,k}=\) the steering wheel force sensor.

  • \(u_{2,k}=\) the steering wheel blood oxygen saturation sensor.

and we utilize 1 target to train the optimized radial basis function network:

  • \(t_{k}=\) the steering wheel heart rate sensor.

We utilize 2 inputs \(u_{1,k}\), \(u_{2,k}\), 1 target \(t_{k}\), and 1 optimized radial basis function network output \(w_{k}\). The aim is that, utilizing the inputs \(u_{i,k}\), the optimized radial basis function network output \(w_{k}\) reaches the target \(t_{k}\) as fast as possible. It is important to note that the optimization utilizes time-varying terms and a time-varying step size for the modeling of 8000 input and target data of an awake driver, the training utilizes time-varying terms and a constant step size for the modeling of 8000 input and target data of an awake driver, and the testing utilizes constant terms and a constant step size for the modeling of 8000 input and target data of an awake driver and a somnolent driver.

In this part of the study, the suggested optimizer is applied for the awake driver and the somnolent driver, where the root mean square error (denoted MSE in this work) is utilized as:

$$\begin{aligned} MSE=\left( \frac{1}{T} {\displaystyle \sum \limits _{k=1}^{T}} \left( w_{k}-t_{k}\right) ^{2}\right) ^{\frac{1}{2}}, \end{aligned}$$
(21)

where \(w_{k}-t_{k}\) is the error, \(w_{k}\) is the optimized radial basis function network output, \(t_{k}\) is the target, T is the final iteration.
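
A minimal NumPy sketch of (21) follows; note that, despite the MSE label, (21) is a root mean square error.

```python
import numpy as np

def mse_eq21(w, t):
    """Performance index of Eq. (21): a root mean square error,
    labeled MSE in this work. w and t hold w_k and t_k, k = 1..T."""
    w, t = np.asarray(w), np.asarray(t)
    return np.sqrt(np.mean((w - t) ** 2))

# Example
print(mse_eq21([1.0, 2.0, 3.0], [1.5, 2.0, 2.5]))  # approx. 0.408
```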

Equation (19) [20, 21] describes the simplex optimizer with 2 inputs, 1 output, and 3 hidden layer neurons, where \(z_{s,1}=\eta _{s,1}=0.5\) is the initial value of the step size before the optimization, \(z_{s,T}=\eta _{s,T}=0.4105\) is the final value of the step size after the optimization, \(a_{j,1}={\text {rand}}\), \(b_{ij,1} ={\text {rand}}\), \(c_{i,1}={\text {rand}}\), and \({\text {rand}}\) is a random number in the interval [0, 1].

Equation (20) [22, 23] describes the bat optimizer with 2 inputs, 1 output, and 3 hidden layer neurons, where \(z_{b,1}=\eta _{b,1}=0.5\) is the initial value of the step size before the optimization, \(z_{b,T}=\eta _{b,T}=0.3571\) is the final value of the step size after the optimization, \(a_{j,1}={\text {rand}}\), \(b_{ij,1} ={\text {rand}}\), \(c_{i,1}={\text {rand}}\), and \({\text {rand}}\) is a random number in the interval [0, 1].

Equations (1)–(7), (8)–(18) describe the genetic optimizer with 2 inputs, 1 output, and 3 hidden layer neurons, where \(z_{g,1}=\eta _{g,1}=0.5\) is the initial value of the step size before the optimization, \(z_{g,T}=\eta _{g,T}=0.7674\) is the final value of the step size after the optimization, \(a_{j,1}={\text {rand}}\), \(b_{ij,1} ={\text {rand}}\), \(c_{i,1}={\text {rand}}\), and \({\text {rand}}\) is a random number in the interval [0, 1].

4.4 Performance evaluation

Example 1. Figure 10 shows the step size during the optimization for 8000 iterations and for the first 40 iterations. Figures 11 and 12 show the modeling and MSE during the training. Figures 13 and 14 show the modeling and MSE during the testing. Table 2 shows the MSE of (21) during the training and testing.

Fig. 10 Step size during the optimization for example 1

Fig. 11 Modeling during the training for example 1

Fig. 12 MSE during the training for example 1

Fig. 13 Modeling during the testing for example 1

Fig. 14 MSE during the testing for example 1

Table 2 MSE for example 1

Example 2. Figure 15 shows the step size during the optimization for 8000 iterations and for the first 40 iterations. Figures 16 and 17 show the modeling and MSE during the training. Figures 18 and 19 show the modeling and MSE during the testing. Table 3 shows the MSE of (21) during the training and testing.

Fig. 15 Step size during the optimization for example 2

Fig. 16 Modeling during the training for example 2

Fig. 17 MSE during the training for example 2

Fig. 18 Modeling during the testing for example 2

Fig. 19 MSE during the testing for example 2

Table 3 MSE for example 2

4.5 Discussion

Example 1. In Figs. 11 and 13, since the signal of the genetic optimizer follows the signal of an awake driver more closely than the signals of the simplex optimizer and the bat optimizer do, it is observed that the genetic optimizer reaches a better step size than the simplex optimizer and the bat optimizer. In Figs. 12 and 14 and Table 2, since the MSE is the smallest for the genetic optimizer, it is noticed that the genetic optimizer reaches a better step size than the simplex optimizer and the bat optimizer. Thus, the genetic optimizer is the best one in example 1.

Example 2. In Figs. 16 and 18, since the signal of the genetic optimizer follows the signal of a somnolent driver more closely than the signals of the simplex optimizer and the bat optimizer do, it is observed that the genetic optimizer reaches a better step size than the simplex optimizer and the bat optimizer. In Figs. 17 and 19 and Table 3, since the MSE is the smallest for the genetic optimizer and the bat optimizer, it is noticed that the genetic optimizer reaches a better step size than the simplex optimizer and the bat optimizer. Thus, the genetic optimizer is the best one in example 2.

Remark 1

Even if the genetic optimizer, simplex optimizer, and bat optimizer have a similar structure, the genetic optimizer outperforms the simplex optimizer and the bat optimizer because the genetic optimizer utilizes different equations from those utilized by the simplex optimizer and the bat optimizer.

5 Conclusion

On the basis of the modeling, three strategies, denoted as the simplex optimizer, the bat optimizer, and our genetic optimizer, were compared to seek an acceptable step size in the gradient strategy for an optimized radial basis function network. To validate the performance of the optimized radial basis function network, fatigue driving modeling in a vehicle was evaluated in two examples, where the genetic optimizer achieved a better step size in the optimized radial basis function network than the simplex optimizer and the bat optimizer. The suggested strategy has other applications in areas such as mechatronics, robotics, energy, electric, electronic, or computing systems. In the future, other optimizers will be studied to seek an acceptable step size in the gradient strategy for an optimized radial basis function network, or the suggested optimizer will be utilized in other kinds of optimized radial basis function networks.