1 Introduction

The electromyography (EMG) signal is a biomedical signal that expresses muscular activity during motion. It carries much useful information about muscle condition and has received considerable attention in pattern recognition systems such as myoelectric prostheses [1]. Myoelectric control uses EMG signals as the control source to activate a prosthesis, which allows amputees to perform several basic hand motions in daily living [2, 3]. Nevertheless, the quality of the extracted features strongly influences classification performance, because the features characterize the pattern of the recorded EMG signals according to the similarity and repeatability of motions [4]. Therefore, the choice of features is critically important in EMG signal classification.

In fact, feature selection is an NP-hard combinatorial problem in which the number of possible solutions grows exponentially with the number of features, so an exhaustive search over a high-dimensional feature vector is impractical [5]. For this reason, feature selection is an essential step in resolving the high dimensionality problem. Feature selection is the process of determining the subset of potential features that best describes the target concept in the classification stage. It not only improves classification performance but also eliminates redundant and irrelevant features [6, 7].

In general, feature selection methods can be categorized into filter and wrapper methods. The former identifies relevant features based on information theory and distance measures, whereas the latter uses an optimization algorithm together with a specific learning algorithm in the evaluation process [7, 8]. Compared to filter methods, wrapper based feature selection usually provides better classification performance, which has become a major interest of researchers in feature selection. In past studies, Ramos et al. [9] introduced harmony search to identify the best combination of relevant features. Emary et al. [10] proposed two novel binary grey wolf optimizations to solve feature selection tasks. Ewees et al. [11] developed a new chaotic multi-verse optimizer with five different chaotic maps for feature selection. Moreover, Tan et al. [12] proposed an enhanced particle swarm optimization to tackle the feature selection problem in skin cancer detection. These works reported that feature selection methods can effectively eliminate irrelevant and redundant features, thus leading to optimal classification results.

Recently, a competitive binary grey wolf optimizer (CBGWO) was proposed to solve the feature selection problem in EMG signals classification [13]. CBGWO is a variant of the grey wolf optimizer that has proven superior to other conventional feature selection methods. In addition, CBGWO has the advantages of simplicity and low computational complexity. Owing to its excellent performance, CBGWO can be useful for many engineering applications. However, CBGWO is only applicable to binary optimization problems, not to continuous optimization tasks, which limits its applicability to real-world problems.

In this paper, we first remodel CBGWO into a continuous version, which enables it to search a continuous search space. The resulting method is named the competitive grey wolf optimizer (CGWO). However, CGWO alone does not guarantee improved feature selection performance. Therefore, we propose another variant, the opposition based competitive grey wolf optimizer (OBCGWO), to tackle the feature selection problem in EMG signals classification. OBCGWO integrates the opposition based learning (OBL) strategy to boost the performance of CGWO. Initially, an OBL based initialization is introduced to enhance the quality of the initial solutions. An OBL based position updating strategy is then proposed to improve the explorative behavior of the algorithm, enabling OBCGWO to explore untried areas in search of the global optimum. In the first part of the experiment, several benchmark functions are used to test the optimization performance of the proposed CGWO and OBCGWO. In the second part, the proposed methods are applied to solve EMG feature selection problems. The EMG data of eight subjects, acquired from a publicly accessible EMG database, are used in this work. Four state-of-the-art methods, namely particle swarm optimization (PSO), the butterfly optimization algorithm (BOA), the flower pollination algorithm (FPA), and CBGWO, are used to evaluate the effectiveness of the proposed OBCGWO in feature selection. The experimental results illustrate that OBCGWO offers competitive performance compared to CGWO and other conventional methods.

The rest of the paper is organized as follows: Sect. 2 describes the materials and methods, including the EMG data, feature extraction, the proposed CGWO and OBCGWO, and their application to EMG feature selection. Section 3 discusses the findings of the research. Finally, Sect. 4 concludes the work.

2 Materials and methods

2.1 EMG data

The EMG data are acquired from the public access 10mov4chUF_AFEs dataset via the BioPatRec repository [14]. This study utilizes the EMG database with the TI ADS1299 configuration, which comprises the EMG signals of ten different hand and wrist motions (hand open, hand close, wrist flexion, wrist extension, pronation, supination, side grip, fine grip, agree, and pointer) recorded from eight healthy subjects. In the experiment, four bipolar electrodes (four channels) were used to record the EMG signals. Each subject was asked to perform a hand motion for 3 s followed by 3 s of relaxation. Each hand motion was repeated for three trials, and the signals were sampled at 2000 Hz [15].

2.2 Feature extraction

The raw EMG signal is not suitable for direct classification since it comprises a large quantity of data, which may mislead the result [16]. Therefore, feature extraction is an essential step for obtaining useful information by representing the raw EMG signal as a reduced set of features. In this study, the windowing technique is applied to segment the EMG signals owing to its low computational cost and simplicity [17]. We adopted a window length of 200 ms with a 100 ms overlap.
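To make the segmentation step concrete, a minimal Python sketch of overlapped windowing is given below (the study itself was implemented in MATLAB; the synthetic signal and the array layout are illustrative assumptions):

```python
import numpy as np

def segment_emg(signal, fs=2000, win_ms=200, overlap_ms=100):
    """Split a 1-D EMG recording into overlapped analysis windows.

    signal : 1-D array of raw EMG samples
    fs     : sampling rate in Hz (2000 Hz in this study)
    """
    win = int(fs * win_ms / 1000)              # 400 samples per window
    step = win - int(fs * overlap_ms / 1000)   # 200-sample increment (100 ms overlap)
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# Example: 3 s of synthetic EMG at 2000 Hz -> 29 windows of 400 samples each
emg = np.random.randn(6000)
print(segment_emg(emg).shape)  # (29, 400)
```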

In general, EMG features can be categorized into time domain (TD), frequency domain (FD), and time–frequency domain (TFD) features. Compared to FD and TFD features, TD features can be extracted directly from the EMG time series, which leads to low complexity and fast computation. Thus, TD features are more widely used in EMG studies [2, 18,19,20]. Moreover, some studies have indicated that combining FD, TFD, and TD features can further improve the performance of the classification model [17, 21, 22]. In the present work, we adopt thirty EMG features, as shown in Table 1. These features are selected for their promising performance in previous works [17, 18, 23,24,25,26].

Table 1 Thirty EMG features used in this study

The windowing yields 870 EMG time series segments per subject. The thirty features are then extracted from each EMG segment to form a feature vector. Note that the fourth-order AR model generates four coefficients, so each channel contributes 33 feature values; in total, 132 features (33 features × 4 channels) are extracted from each segment. Finally, an EMG feature set consisting of 870 instances × 132 features is acquired for each subject.
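As an illustration of the extraction step, the following sketch computes a few of the TD features listed in Table 1 (IEMG, WL, SSC, and the fourth-order AR coefficients). The formulas follow common definitions from the EMG literature; the SSC threshold and the least-squares AR fit are our own illustrative choices, not necessarily the exact definitions used in the study:

```python
import numpy as np

def td_features(x, thr=0.01):
    """A few common time-domain EMG features; the SSC threshold thr
    and the least-squares AR fit are illustrative choices."""
    iemg = np.sum(np.abs(x))                          # integrated EMG (IEMG)
    wl = np.sum(np.abs(np.diff(x)))                   # waveform length (WL)
    d1 = x[1:-1] - x[:-2]
    d2 = x[1:-1] - x[2:]
    ssc = np.sum(d1 * d2 > thr)                       # slope sign changes (SSC)
    # fourth-order autoregressive model: x[t] ~ sum_k a_k * x[t-k], 4 coefficients
    A = np.column_stack([x[4 - k:len(x) - k] for k in range(1, 5)])
    ar = np.linalg.lstsq(A, x[4:], rcond=None)[0]
    return np.concatenate(([iemg, wl, ssc], ar))      # 7 of the 33 values per channel

window = np.random.randn(400)                         # one 200 ms segment at 2000 Hz
print(td_features(window).shape)                      # (7,)
```

Stacking all thirty features in this manner, with AR contributing four values, gives the 33 values per channel (132 per segment across four channels) described above.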

2.3 Competitive binary grey wolf optimizer

The competitive binary grey wolf optimizer (CBGWO) is a recent feature selection method that has been successfully applied to the feature selection problem in EMG signals classification [13]. Ordinarily, grey wolves live in a pack of 5–12 wolves. The main leader is known as the alpha (α) wolf, followed by the beta (β) and delta (δ) wolves. The rest are assumed to be omegas, which follow the leaders in the hunting and prey-searching process [27]. The operation of CBGWO is described as follows:

Firstly, a population of grey wolves is randomly initialized (each dimension set to either 0 or 1). Secondly, the fitness values of the wolves are evaluated, and the alpha, beta, and delta wolves are defined. In each iteration, the population is randomly partitioned into N/2 couples, where N is the population size. The two wolves in each couple then compete: the wolf that achieves the better fitness value is the winner and passes directly into the new population, while the loser updates its position by learning from the winners and leaders. Mathematically, the position of the loser is updated as:

$$x^{d} \left( {t + 1} \right) = \left\{ {\begin{array}{*{20}l} {1,\quad {\text{if}}\;S\left( {\frac{{x_{1}^{d} + x_{2}^{d} + x_{3}^{d} }}{3}} \right) \ge rand} \hfill \\ {0,\quad {\text{otherwise}}} \hfill \\ \end{array} } \right.$$
(1)
$$S\left( x \right) = \frac{1}{{1 + \exp \left( { - 10\left( {x - 0.5} \right)} \right)}}$$
(2)

where rand is a random number distributed between 0 and 1, d is the dimension of the search space, and S is the modified sigmoid function in Eq. (2). The coefficients x1, x2, and x3 are expressed as follows:

$$x_{1} = x_{\alpha } - A_{1} \cdot \left( {D_{\alpha } } \right)$$
(3)
$$x_{2} = x_{\beta } - A_{2} \cdot \left( {D_{\beta } } \right)$$
(4)
$$x_{3} = x_{\delta } - A_{3} \cdot \left( {D_{\delta } } \right)$$
(5)

where xα, xβ, and xδ are the positions of the alpha, beta, and delta at iteration t. A1, A2, and A3 are calculated using Eq. (6), and Dα, Dβ, and Dδ are computed using Eqs. (7), (8), and (9), respectively.

$$A = 2y \cdot r_{1} - y$$
(6)
$$D_{\alpha } = \left| {C_{1} x_{\alpha } - \left( {x_{w} - x_{l} } \right)} \right|$$
(7)
$$D_{\beta } = \left| {C_{2} x_{\beta } - \left( {x_{w} - x_{l} } \right)} \right|$$
(8)
$$D_{\delta } = \left| {C_{3} x_{\delta } - \left( {x_{w} - x_{l} } \right)} \right|$$
(9)

where the parameter y decreases linearly from 2 to 0, r1 is a random number distributed between 0 and 1, xw is the winner wolf, xl is the loser wolf, and C is defined as follows:

$$C = 2r_{2}$$
(10)

where r2 is a random number distributed between 0 and 1. After the position updates, the fitness values of the new losers are evaluated, and the losers are passed into the new population. Then, the alpha, beta, and delta are updated.
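Putting Eqs. (1)–(10) together, a minimal sketch of the loser update might look as follows (the leaders and the winner/loser pair are assumed to be given as 0/1 NumPy vectors; drawing A and C independently per dimension is our reading of the equations):

```python
import numpy as np

def update_loser(x_w, x_l, x_a, x_b, x_d, y):
    """Binary loser update of CBGWO (Eqs. 1-10).
    x_w, x_l      : winner and loser position vectors (0/1)
    x_a, x_b, x_d : alpha, beta, and delta leaders
    y             : parameter linearly decreased from 2 to 0
    """
    d = len(x_l)
    x123 = []
    for leader in (x_a, x_b, x_d):
        A = 2 * y * np.random.rand(d) - y             # Eq. (6)
        C = 2 * np.random.rand(d)                     # Eq. (10)
        D = np.abs(C * leader - (x_w - x_l))          # Eqs. (7)-(9)
        x123.append(leader - A * D)                   # Eqs. (3)-(5)
    s = 1.0 / (1.0 + np.exp(-10 * (sum(x123) / 3 - 0.5)))  # sigmoid, Eq. (2)
    return (s >= np.random.rand(d)).astype(int)       # Eq. (1)
```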

At the end of each iteration, a leader enhancement process is executed, in which the leaders (alpha, beta, and delta) are allowed to enhance themselves by performing a random walk. Mathematically, the random walk is computed as:

$$L^{d} = \left\{ {\begin{array}{*{20}l} {rand\left( {0,1} \right),} \hfill & {{\text{if}}\;R \ge r_{3} } \hfill \\ {X_{L}^{d} ,} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.$$
(11)

where XL is the leader (alpha, beta, or delta), rand(0,1) is a randomly generated bit (either 0 or 1), r3 is a random number distributed between 0 and 1, and R is a change rate that decreases linearly from 0.9 to 0:

$$R = 0.9 - t\left( {\frac{0.9}{{T_{\max } }}} \right)$$
(12)

where t is the current iteration and Tmax is the maximum number of iterations. In the leader enhancement step, if the newly generated leader offers a better fitness value, the current leader is replaced; otherwise, the current leader is kept for the next iteration. The algorithm is repeated until the maximum number of iterations is reached. Finally, the global best alpha is taken as the optimal solution (best feature subset). The pseudocode of CBGWO can be found in [13].
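A sketch of the leader enhancement step (Eqs. 11 and 12) is given below; the fitness function is assumed to return a value to be minimized:

```python
import numpy as np

def enhance_leader(x_leader, t, t_max, fitness):
    """Leader enhancement by random walk (Eqs. 11-12): each dimension is
    reset to a random bit with change rate R, and the candidate replaces
    the leader only if it improves the (minimized) fitness."""
    R = 0.9 - t * (0.9 / t_max)                       # Eq. (12)
    mask = R >= np.random.rand(len(x_leader))         # Eq. (11)
    cand = np.where(mask, np.random.randint(0, 2, len(x_leader)), x_leader)
    return cand if fitness(cand) < fitness(x_leader) else x_leader
```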

2.4 Proposed competitive grey wolf optimizer

In previous work, CBGWO was shown to outperform other conventional feature selection methods. However, CBGWO is only applicable to binary optimization tasks, not to continuous optimization problems, which limits its functionality in many engineering applications. In this paper, we remodel CBGWO so that it can be used for continuous optimization as well as feature selection.

2.4.1 Initial population

The remodeled approach is called the competitive grey wolf optimizer (CGWO), a continuous version of CBGWO. Unlike CBGWO, the initialization of CGWO is given by

$$x_{i}^{d} = lb + \left( {ub - lb} \right)r_{4}$$
(13)

where i is the index of the wolf, d is the dimension of the search space, r4 is a random number distributed between 0 and 1, and ub and lb are the upper and lower boundaries. For feature selection, the parameters ub and lb are set to 1 and 0, respectively.

2.4.2 Position updating rule

Since CGWO is a continuous version of CBGWO, the sigmoid function is no longer needed in the updating process. In CGWO, the position update of the loser is reformulated as follows:

$$x^{d} \left( {t + 1} \right) = \frac{{x_{1}^{d} + x_{2}^{d} + x_{3}^{d} }}{3}$$
(14)

where x1, x2, and x3 are computed using Eqs. (3), (4), and (5). In the leader enhancement process, the random walk is formulated as:

$$L^{d} = \left\{ {\begin{array}{*{20}l} {lb + \left( {ub - lb} \right)rand,} \hfill & {{\text{if}}\;R \ge r_{5} } \hfill \\ {X_{L}^{d} ,} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.$$
(15)

where rand and r5 are two independent random numbers distributed between 0 and 1, and R is computed using Eq. (12). The pseudocode of CGWO is presented in Algorithm 1.
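The continuous loser update is then a small change to the binary sketch given earlier: the sigmoid thresholding of Eq. (1) is dropped in favor of the plain average of Eq. (14). Boundary clipping to [lb, ub] is our assumption, consistent with the bounded search space used later for feature selection:

```python
import numpy as np

def update_loser_cgwo(x_w, x_l, x_a, x_b, x_d, y, lb=0.0, ub=1.0):
    """Continuous loser update of CGWO (Eq. 14 with Eqs. 3-10)."""
    d = len(x_l)
    x123 = []
    for leader in (x_a, x_b, x_d):
        A = 2 * y * np.random.rand(d) - y             # Eq. (6)
        C = 2 * np.random.rand(d)                     # Eq. (10)
        D = np.abs(C * leader - (x_w - x_l))          # Eqs. (7)-(9)
        x123.append(leader - A * D)                   # Eqs. (3)-(5)
    return np.clip(sum(x123) / 3, lb, ub)             # Eq. (14), kept in bounds
```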

2.5 Proposed opposition based competitive grey wolf optimizer

In the previous section, we remodeled CBGWO into the continuous CGWO, which is applicable to feature selection and other continuous optimization tasks. However, CGWO is simply a continuous version of CBGWO and does not guarantee performance enhancement. Therefore, we propose a new opposition based competitive grey wolf optimizer (OBCGWO) to improve the performance of CGWO. The OBCGWO algorithm adopts the opposition based learning (OBL) strategy for performance enhancement.

Algorithm 1 Pseudocode of the proposed CGWO

2.5.1 Concept of opposition based learning

The OBL strategy was first introduced by Tizhoosh [28] to speed up convergence behavior. According to the literature, OBL is useful for improving the convergence of metaheuristic algorithms [29, 30]. Normally, a metaheuristic algorithm generates its population of initial solutions in a random manner. However, owing to the lack of prior knowledge and experience, the algorithm may fail to converge to the global optimum [31]. Therefore, integrating the OBL strategy into CGWO is an effective and reliable way to resolve this issue.

The concept of the opposite number is explained as follows [28]: given a real number X in an interval between lb and ub, the opposite number \(\overline{X}\) can be calculated as:

$$\overline{X} = ub + lb - X$$
(16)

In a multidimensional problem, the opposite of a solution X can be computed as [30]:

$$\overline{X}_{j} = ub_{j} + lb_{j} - X_{j}$$
(17)

where j is the dimension of search space.
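For example, with lb = 0 and ub = 1 the opposite of a solution is simply its reflection about the midpoint of the interval:

```python
import numpy as np

def opposite(X, lb=0.0, ub=1.0):
    """Opposite solution per dimension (Eq. 17)."""
    return ub + lb - X

print(opposite(np.array([0.2, 0.9, 0.5])))  # [0.8 0.1 0.5]
```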

2.5.2 The OBL based initialization

Algorithm 2 illustrates the pseudocode of the OBL based initialization strategy. OBCGWO starts by generating a random population of N initial solutions, where N is the number of solutions. Afterward, OBL is applied to compute the opposite of each initial solution. The opposite and initial solutions are merged and sorted by fitness value. Finally, the N solutions with the best fitness values are used as the new initial population. In this way, a population of high quality initial solutions can be produced.
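A sketch of this initialization, assuming a fitness function to be minimized:

```python
import numpy as np

def obl_initialization(n, dim, fitness, lb=0.0, ub=1.0):
    """OBL based initialization (Algorithm 2): generate N random wolves,
    add their opposites, and keep the N best by (minimized) fitness."""
    pop = lb + (ub - lb) * np.random.rand(n, dim)     # random population, Eq. (13)
    merged = np.vstack([pop, ub + lb - pop])          # append opposites, Eq. (17)
    scores = np.array([fitness(w) for w in merged])
    return merged[np.argsort(scores)[:n]]             # retain the N best solutions
```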

Algorithm 2 Pseudocode of the OBL based initialization

2.5.3 The OBL based position update

Algorithm 3 presents the pseudocode of the OBL based position updating process. In comparison with the losers, the winners (the solutions that achieve better fitness values in the competition) are more capable of providing high quality solutions in their opposite directions. Therefore, we compute the opposite solutions of the winners only. In OBCGWO, the opposite solution of a winner is computed as follows:

$$\overline{X}_{{w_{j} }} = \left\{ {\begin{array}{*{20}l} {lb_{j} + ub_{j} - X_{{w_{j} }} ,} \hfill & {{\text{if}}\,r_{6} \, > \,0.5} \hfill \\ {X_{{w_{j} }} ,} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right.$$
(18)

where Xw is the winner, r6 is a random number distributed between 0 and 1, and j is the dimension. In OBCGWO, the opposite solutions are computed probabilistically, which is intended to discover preferable solutions while keeping some of the original information. The pseudocode of OBCGWO is shown in Algorithm 4.
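A sketch of the probabilistic opposition of Eq. (18) is given below; how the opposite winner is then accepted (for example, kept only if it improves the fitness) follows Algorithm 3, so the acceptance rule is not shown here:

```python
import numpy as np

def opposite_winner(x_w, lb=0.0, ub=1.0):
    """Probabilistic opposition of a winner (Eq. 18): each dimension is
    flipped to its opposite with probability 0.5, otherwise kept."""
    flip = np.random.rand(len(x_w)) > 0.5
    return np.where(flip, lb + ub - x_w, x_w)
```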

Algorithm 3 Pseudocode of the OBL based position update
Algorithm 4 Pseudocode of the proposed OBCGWO

2.6 Application of proposed CGWO and OBCGWO for EMG feature selection

Owing to the high dimensionality, classifying EMG signals using all features is extremely difficult. This is mainly due to the existence of irrelevant and redundant features, which significantly degrade the performance of the classification model. Therefore, we employ CGWO and OBCGWO to solve the feature selection problem in EMG signals classification. Note that the dimension of the search space is equal to the number of features. In the proposed scheme, each dimension is bounded between lb and ub. To determine whether a feature is selected, a simple static threshold is applied, defined as follows [32]:

$$\left\{ {\begin{array}{*{20}l} {x_{i}^{d} > 0.6,\quad {\text{Selected}}\,{\text{feature}}} \hfill \\ {x_{i}^{d} \le 0.6,\quad {\text{Unselected}}\,{\text{feature}}} \hfill \\ \end{array} } \right.$$
(19)

where xi is the i-th wolf and d is the dimension of the search space. Figure 1 illustrates a sample solution of dimension eight (a feature set that comprises eight features). As can be observed, the 2nd, 3rd, 6th, and 7th features have values higher than 0.6, and thus these four features are selected in this sample solution.

Fig. 1 Example of a solution
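The thresholding of Eq. (19) reduces to a simple comparison; the sketch below uses illustrative values chosen so that the 2nd, 3rd, 6th, and 7th features exceed the threshold, matching the example of Fig. 1:

```python
import numpy as np

x = np.array([0.3, 0.8, 0.7, 0.1, 0.5, 0.9, 0.65, 0.2])  # illustrative solution
selected = x > 0.6                     # Eq. (19), static threshold of 0.6
print(np.flatnonzero(selected) + 1)    # [2 3 6 7]: the 2nd, 3rd, 6th, 7th features
```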

Figure 2 exhibits the flow diagram of the proposed methods for EMG feature selection and classification. In the first step, the features are extracted from the EMG signals to construct a feature set. After that, CGWO and OBCGWO evaluate the relevant features from the feature set to form an optimal feature subset. For feature selection, a fitness function that considers both classification performance and feature size is utilized to evaluate each solution, as shown in Eq. (20); the fitness is to be minimized.

$$\text{Fitness} = \alpha \, ER + \left( {1 - \alpha } \right)\frac{\left| S \right|}{\left| R \right|}$$
(20)

where ER is the classification error rate computed by the k-nearest neighbor (KNN) algorithm, |S| is the length of the feature subset, |R| is the total number of features, and α is a parameter that controls the influence of classification performance and feature size. In this study, the EMG data are divided into 80% for the training set and 20% for the testing set. Finally, the features selected (the optimal feature subset) by CGWO and OBCGWO are fed into the KNN algorithm for classification. All the analysis is performed in MATLAB 9.4 on a computer with an Intel Core i5-9400F CPU at 2.90 GHz and 16 GB of random access memory (RAM).
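A Python sketch of the wrapper fitness of Eq. (20) is shown below (the study used MATLAB; the KNN neighborhood size k is not stated in this section, so k = 5 and the empty-subset penalty are our assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fitness(wolf, X_tr, y_tr, X_te, y_te, alpha=0.9, k=5):
    """Wrapper fitness of Eq. (20): alpha * ER + (1 - alpha) * |S| / |R|."""
    mask = wolf > 0.6                                 # feature mask, Eq. (19)
    if not mask.any():
        return 1.0                                    # penalize empty subsets
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_tr[:, mask], y_tr)
    er = 1.0 - knn.score(X_te[:, mask], y_te)         # classification error rate
    return alpha * er + (1 - alpha) * mask.sum() / mask.size
```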

Fig. 2 Flow diagram of proposed CGWO and OBCGWO for EMG pattern recognition

3 Results and discussion

The experiment is divided into two parts. First, the proposed methods are tested by solving the benchmark functions. Second, the proposed methods are applied to tackle the feature selection problem in EMG signals classification.

3.1 Evaluation metrics

Eight metrics, namely best fitness, worst fitness, mean fitness, standard deviation of fitness (STD), accuracy, feature size, feature selection ratio (FSR), and computational time (CT), are calculated to measure the performance of the proposed algorithms in EMG feature selection.

  • Best fitness: the minimum fitness value found by the algorithm in the final set [33].

  • Worst fitness: the maximum fitness value obtained from the final set [33].

  • Mean fitness: the average fitness value of all solutions in the final set [10].

  • Standard deviation of fitness (STD): the standard deviation of the fitness values of all solutions in the final set [10].

  • Accuracy: estimates how accurately the features selected by the feature selection method classify the EMG signals; it is defined as the ratio of the number of correctly predicted samples to the total number of samples.

  • Feature size: the number of features selected by the feature selection method in the final set.

  • Feature selection ratio (FSR): the ratio of the number of selected features in the final set to the total number of original features [34].

  • Computational time (CT): the processing time of the feature selection method in the evaluation process.

The proposed algorithms are stochastic and perform the search randomly. Thus, each algorithm is repeated for 20 independent runs using different random seeds. The evaluation metrics are calculated for each independent run, and the average results over the 20 runs are reported as the experimental results.

3.2 Experimental results of benchmark test

In the first part of the experiment, the numerical efficiency of the proposed methods is tested using 10 mathematical optimization problems [35, 36]. Table 2 gives the description of the 10 benchmark functions with their mathematical definitions. In this test, the proposed methods are compared with particle swarm optimization (PSO) [37] and the flower pollination algorithm (FPA) [38]. Note that the population size (N) and the maximum number of iterations (Tmax) of each algorithm are set to 30 and 500, respectively, according to [35].

Table 2 Description of 10 benchmark functions

Table 3 outlines the best, worst, mean, and STD of the fitness values obtained on the 10 benchmark functions. As can be seen, the proposed OBCGWO outperformed the other algorithms in finding the best solution, which clearly shows its superiority in optimization tasks. The convergence curves of the proposed methods are presented in Fig. 3. As can be seen, OBCGWO consistently found the global optimum faster, indicating a high convergence rate. For example, on functions F4, F6, and F7, OBCGWO converged faster than its competitors when searching for a preferable solution. The accelerated convergence of OBCGWO is mainly attributed to the OBL strategy, which improves the quality of the initial solutions and enhances the explorative behavior of the algorithm.

Table 3 Comparison of optimization results obtained from the 10 benchmark functions
Fig. 3 Convergence curves of proposed methods on 10 benchmark functions

3.3 Experimental results of EMG feature selection

In the second part of the experiment, the effectiveness of the proposed methods in EMG feature selection is examined. Recall that the EMG signals of ten different hand and wrist motions were collected from eight subjects, after which windowing was applied and thirty features were extracted from each EMG segment. We then utilize the proposed CGWO and OBCGWO algorithms to find the optimal feature subset. To investigate whether the proposed algorithms provide promising feature selection performance, four recent and popular algorithms, namely PSO [37], the butterfly optimization algorithm (BOA) [39], FPA [38], and the competitive binary grey wolf optimizer (CBGWO) [13], are used to assess the efficacy and reliability of the proposed algorithms in this work.

The parameter settings of the feature selection methods are as follows: to ensure all algorithms are run under the same conditions, the population size (N) and the maximum number of iterations (Tmax) are set to 50 and 100, respectively. Besides, we set α to 0.9 since classification performance is the most important measure. For PSO, the acceleration coefficients c1 and c2 are set to 1.49618, and the inertia weight w is fixed at 0.7298. For FPA, the switch probability P is set to 0.8. For BOA, the modular modality c and switch probability p are set to 0.01 and 0.8, respectively. Note that there are no additional parameter settings for CBGWO, CGWO, and OBCGWO.

Table 4 reports the best, worst, mean, and STD of the fitness values on the eight subjects, and the convergence curves of the proposed methods are shown in Fig. 4. In Table 4, the best results are highlighted in bold. Note that the lower the best, worst, mean, and STD values, the better the performance of the feature selection method. As can be observed, OBCGWO scored the best values on most of the datasets (six subjects). The results show that OBCGWO outperformed CGWO, CBGWO, BOA, PSO, and FPA in identifying the most informative feature subset.

Table 4 Experimental results of the best, worst, mean, and STD of fitness values on eight subjects
Fig. 4 Convergence curves of proposed methods on 8 subjects

From Table 4, OBCGWO achieved the lowest worst fitness on at least six subjects. Owing to the OBL based initialization strategy, OBCGWO is capable of enhancing the quality of the initial solutions and can effectively obtain the best solution in feature selection. CGWO also presented competitive results: although slightly worse than OBCGWO, its performance was much better than that of BOA, FPA, and PSO.

Table 5 presents the experimental results for accuracy, feature size, FSR, and computational cost on the eight subjects, with the best results highlighted. Inspecting the results, OBCGWO achieved the highest average accuracy on five subjects, overtaking the other competitors in this work. By contrast, BOA was found to be the worst method, offering lower accuracy on most of the subjects.

Table 5 Experimental results of the accuracy, feature size, FSR, and CT on eight subjects

In comparison with OBCGWO and CGWO, CBGWO was highly capable of reducing the number of features. From Table 5, CBGWO showed the best performance in feature reduction, achieving the lowest feature size and FSR values on most subjects, while OBCGWO and CGWO yielded the best feature size and FSR on two and three subjects, respectively. However, even though CBGWO is good at feature reduction, relevant features might be eliminated, resulting in unsatisfactory performance.

As for computational time (CT), the fastest processing speeds were achieved by CGWO and CBGWO. This is expected because both employ the competition strategy, which updates the positions of the losers (half of the population) only. Based on the results obtained, OBCGWO incurred a higher computation time on most subjects. The additional computational cost of OBCGWO stems mainly from the OBL based position update, which requires extra time to evaluate the opposite solutions of the winners. However, OBCGWO can usually find the relevant features, which contributed to the highest accuracy in EMG signals classification.

Furthermore, a statistical t-test is applied to examine whether the classification performance of the proposed OBCGWO is significantly better than that of the other methods. Table 6 presents the t-test results with p values; OBCGWO is used as the reference algorithm in this test. As can be seen, the classification performance of OBCGWO was significantly better than that of BOA, PSO, FPA, and CBGWO in most cases (p value < 0.05). The results again validate the effectiveness of the proposed OBCGWO in EMG feature selection. The reason for the superior performance of OBCGWO is that the OBL strategy is applied to both initialization and position update: the OBL based initialization enhances the quality of the initial solutions, while the OBL based position update improves the explorative behavior of the algorithm. Thanks to these properties, OBCGWO can efficiently find the best solution and avoid local optima.
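For reference, such a comparison can be sketched as follows (the accuracy arrays are synthetic placeholders, and the use of an independent two-sample t-test is our assumption; the paper states only that a t-test was applied):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_obcgwo = 0.90 + 0.01 * rng.standard_normal(20)  # placeholder 20-run accuracies
acc_pso = 0.87 + 0.01 * rng.standard_normal(20)     # placeholder competitor results

t, p = stats.ttest_ind(acc_obcgwo, acc_pso)
print(f"p value = {p:.4f}")  # p < 0.05 indicates a significant difference
```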

Table 6 Experimental result of t-test with p values

Figure 5 illustrates the total number of times each feature (from 1 to 30) was selected by the proposed OBCGWO across all eight subjects, four channels, and 20 independent runs. One can see that the features most frequently selected by OBCGWO are SSC (307), WA (289), IEMG (155), CARD (124), AR (88), and EWL (67). Compared with FD features, the result indicates that TD features more often provide useful signal information for discriminating the hand and wrist motions.

Fig. 5 Result of the total number of times each feature (from 1 to 30) was selected using OBCGWO

In this paper, we have proposed the CGWO and OBCGWO algorithms to tackle the feature selection problem in EMG signals classification. The proposed OBCGWO is not only beneficial for EMG feature selection but also shows good performance on several benchmark tests. Our results showed that OBCGWO provides promising performance in EMG signals classification and works significantly better than BOA, PSO, FPA, and CBGWO in finding the optimal feature subset. In short, OBCGWO is a useful tool that can be applied to other applications.

4 Conclusion

Feature selection is one of the effective ways to reduce redundancy in the feature set. Wrapper based feature selection not only helps to minimize the feature size but also improves the accuracy of the system. In this regard, an opposition based competitive grey wolf optimizer (OBCGWO) is proposed to solve the feature selection problem in EMG signals classification. In comparison with PSO, FPA, BOA, CGWO, and CBGWO, OBCGWO usually selects the most significant features, leading to the highest classification performance. According to the findings, OBCGWO may be treated as a valuable decision support tool. In the future, OBCGWO can be applied to other continuous optimization tasks such as numerical optimization, support vector machine optimization, and neural network training.