1 Introduction

Surface electromyography (sEMG) measures the electrical activity generated by voluntary contractions of skeletal muscles. This technique has several uses in biomechanical, robotic, and mechatronic systems [1, 2]. Gesture recognition consists of identifying, among a set of predefined classes, the class to which a given hand movement belongs [11]. Hand gesture recognition systems based on sEMG can be used to control mechatronic devices [3,4,5]. The amplitude and frequency content of sEMG signals are affected by several factors, including skin thickness, muscle strength, muscle volume, physiological interference, external noise, and incorrectly placed electrodes.

Hand gesture recognition systems work well when the sensors are placed in exactly the same position used to acquire the training signals [6, 7]. However, placing a sensor in exactly the same position is difficult because the physical characteristics of people’s arms differ.

There are different devices for sensing sEMG, such as the Myo armband [12], gForce [10], DTing [13], and eCon [14]. These sensors make it easy to obtain electromyography data and have the added advantage of portability. They are also easy to place, which makes it practical to deploy applications for any user. The Myo armband, manufactured by Thalmic Labs, is an EMG sensor that records 8 bipolar signals on a person’s forearm. The manufacturer suggests placing the armband in a specific position on the forearm to obtain good performance (see Fig. 1). However, placing the armband in the same position requires knowing the exact coordinates of the sensor with respect to a given reference point on the forearm and, at test time, placing the sensor at exactly those coordinates. This requires measuring distances accurately every single time the sensor is used, which is impractical for real applications.

There are two types of gesture recognition systems: general and user-specific. General recognition systems are trained with a finite dataset acquired from a group of people and can be tested by any user. User-specific systems, on the other hand, must be trained and tested with data from the same person each time the system is used. General systems require the armband to be placed in the same position for both training and testing, which is difficult to achieve in practice [8]. User-specific models do not require placing the sensor in the same position because they are trained for each user and each session; however, training a model for each user and each session is time consuming, which makes such systems impractical. In practice, a user simply wants to wear the sensor and start using the recognition system right away. Therefore, the best option for practical applications is a general recognition model combined with a mechanism that compensates for variations in sensor orientation.

In [9], an algorithm is proposed to compensate for variations in sensor rotation. Sensor rotation degrades the performance of the recognition model and can even render a model built for one position unusable. In that work, the armband is rotated in steps of 45\(^\circ \) and data are recorded at each rotation. A remapping is performed according to the predicted angle, and the distribution is marked on the user’s arm prior to signal recording. Besides the high complexity of the proposed algorithm, orientation can only be corrected in steps of 45\(^\circ \).

In [11], a general model is proposed to classify 40 gestures in real time. The model works on both the right and the left hand, uses the Myo armband for data acquisition, and employs a support vector machine for classification. The paper demonstrates real-time classification of the performed gestures. To compare the results with the Myo armband’s built-in recognition system, users wear the armband strictly in the position recommended by the manufacturer. Unfortunately, the authors give no further details about the results when the armband is placed in different positions. They do, however, show that gestures can be classified independently of the arm on which the armband is worn.

Other hand gesture recognition systems, such as the one proposed by Weissmann and Salomon [15], achieve recognition rates of up to 100%, but the need to wear a glove can restrict the user’s freedom of movement and is unsuitable for practical applications.

To solve the problem of variation in the orientation of the armband, a novel method is proposed in this paper. The method is based on maximum amplitude detection (MAD), which identifies the sensor with the maximum sEMG activity; based on this detection, the data are rearranged into a new matrix with reordered columns. The maximum-amplitude sensor is determined using the wave out movement in a calibration process executed each time a person wants to use the recognition system. The recognition model used to test the orientation-correction algorithm is based on common features (mean absolute value, windowing, energy, curve envelope, standard deviation) and an SVM classifier.

Following this introduction, the remainder of this paper is organized as follows. The materials and proposed model section (Sect. 2) describes the materials used for data collection, how each data matrix is handled, and how the new matrix is organized. The experiments section (Sect. 3) describes the 4 experiments with training and testing data. The results and analysis section (Sect. 4) compares the traditional method with the proposed method.

2 Materials and Proposed Model

2.1 MYO Armband

The Myo armband is an electronic device that measures sEMG signals. It consists of 8 bipolar channels sampled at 200 Hz. Data are transmitted via Bluetooth to a personal computer. The measured data matrix consists of 8 columns and n rows, where n depends on the recording time; for 1 s the number of rows is 200. Each column of the data matrix contains the measurements of one sensor.
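As an illustration, the matrix dimensions follow directly from the sampling rate and the recording time. A minimal sketch (the helper name is ours, not part of the Myo SDK):

```python
SAMPLE_RATE_HZ = 200   # Myo armband sEMG sampling frequency
N_CHANNELS = 8         # bipolar channels around the forearm

def emg_matrix_shape(seconds):
    # Rows grow with recording time; columns are fixed at 8 sensors.
    return (int(seconds * SAMPLE_RATE_HZ), N_CHANNELS)
```

For example, a 1 s recording gives a 200 x 8 matrix and a 5 s recording gives 1000 x 8, matching the dimensions used in Eqs. (1) and (2).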

The manufacturer suggests placing the armband in a specific position on the forearm (see Fig. 1) to achieve good recognition accuracy. The Myo armband includes a proprietary recognition system whose performance is sensitive to deviations from the recommended position. We tested the Myo armband by rotating it, and the recognition results showed that the system has difficulty recognizing gestures when the armband is rotated away from its suggested position.

Fig. 1.
figure 1

Myo armband base position suggested by manufacturer.

2.2 Datasets

The dataset is organized as follows:

  1. Training data

  2. Testing \(data_1\) (for experiments 1 and 3)

  3. Testing \(data_2\) (for experiments 2 and 4)

The training data were recorded using the same armband position for all users (the position suggested by the manufacturer). In order to compare the performance of the traditional system with the proposed one, the same training dataset is used for both systems. Equation (1) shows the general composition of the training data matrix.

Training data from 40 people are used, 25 men and 15 women. The training data consist of 15 repetitions per gesture for each category and each user.

Fig. 2.
figure 2

Myo armband rotated from the base position suggested by manufacturer.

Each training data matrix has eight columns and its values are normalized. Each column contains the sEMG data measured by one sensor.

$$\begin{aligned} \begin{gathered} Dtr_{general}=[(V_1,D_1), \ldots , (V_i,D_i)] \\ Dtr_{general} \in R^{1000x8}, V_i\in [-1, 1]_{1000x1}, D_i\in [1,8] \end{gathered}\end{aligned}$$
(1)

The categorical variable is represented by \(Y \in \left\{ out,in,close,thumb,relax,\right. \)\(\left. tap \right\} \) and denotes the label for the gesture signal. The total training data per user consists of 90 rows.

The testing data consist of two datasets, test \(data_1\) and test \(data_2\). Test \(data_1\) was recorded using the position suggested by the Myo armband manufacturer (see Fig. 1), and test \(data_2\) was recorded with the armband placed in different positions (see Fig. 2). For test \(data_2\), people took the armband off and put it back on the forearm in any position they wanted; no specific rotation angle was imposed. Each recording lasted 5 s per gesture and user. Equation (2) shows the general composition of the testing data matrix.

$$\begin{aligned} \begin{gathered} Dts_{general}=[(W_1,E_1), \ldots , (W_i,E_i)] \\ Dts_{general} \in R^{1000x8}, W_i\in [-1, 1]_{1000x1}, E_i\in [1,8] \end{gathered}\end{aligned}$$
(2)

2.3 Traditional Method

Traditional gesture recognition systems using the Myo armband need to be trained before use [7]. This methodology commonly works well; however, after taking the armband off, users must train the system again to keep good accuracy. The gestures performed and recorded during a session are shown in Fig. 3.

Fig. 3.
figure 3

Gestures to be recognized with both traditional and proposed method. (a) wave out, (b) wave in, (c) close, (d) thumb, (e) relax, (f) tap

To process the data, a matrix organized per sensor, user, and category is created (Eq. 3). \(Ms_{i}\) is the transposed matrix of sensor i, containing 15 repetitions for each gesture. The total training matrix for user i, \(Dtrain_{user_i}\), has 90 rows: 15 repetitions multiplied by 6 gestures. The training data matrix for user i is described as follows:

User i:

$$\begin{aligned} Emg(user_i,category_j)=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8,Y]\end{aligned}$$
(3)
$$\begin{aligned} Ms_i \in R^{15x1000} Y \in \left\{ out,in,close,thumb,relax,tap \right\} \end{aligned}$$
$$\begin{aligned} \begin{array}{c} Dtrain_{user_i}=[Emg(user_i,out); \\ Emg(user_i,in);\\ Emg(user_i,close); \\ Emg(user_i,thumb);\\ Emg(user_i,relax);\\ Emg(user_i,tap) ] \end{array} \end{aligned}$$
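The stacking of per-gesture blocks into the per-user training matrix can be sketched as follows. The paper's pipeline is implemented in Matlab; this is an illustrative Python analogue with hypothetical names, using short fake rows instead of the real 1000-sample recordings:

```python
GESTURES = ["out", "in", "close", "thumb", "relax", "tap"]

def build_user_training(emg_by_gesture):
    """Stack the per-gesture blocks (15 repetitions each) into one
    90-row matrix plus a parallel label vector, as in Eq. (3)."""
    rows, labels = [], []
    for g in GESTURES:
        for rep in emg_by_gesture[g]:   # each rep: one flattened sample row
            rows.append(rep)
            labels.append(g)
    return rows, labels
```

With 15 repetitions per gesture and 6 gestures, the result has the 90 rows described in the text.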

Notice that the data have been transposed so that they can be handled and organized by gesture and sensor. A table was created per gesture and sensor with the transposed data in order to work with Matlab. The total matrix for the 40 users is shown below; notice that the data for each user are concatenated.

Total training data for 40 users:

$$\begin{aligned} Dtrain_{total}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8,Y] \end{aligned}$$

where:

$$\begin{aligned} Ms_i \in R^{3600x1000} \text { and } Y \in \left\{ out,in,close,thumb,relax,tap \right\} \end{aligned}$$

Each data matrix coming from the armband at a time t always arrives in the same sequence, even if the armband is placed in a position different from the one suggested by the manufacturer.

By default, the EMG data sequence coming from the armband is organized in the order \(Emg_{default}(t)=[s_{1}(t),s_{2}(t),s_{3}(t),s_{4}(t),s_{5}(t),s_{6}(t),s_{7}(t),s_{8}(t)] \), where \(s_{1} \) represents sensor number 1. When the orientation of the armband changes, the data order remains the same even though the armband was rotated. For \(user_1\), the signals were recorded with sensor number 2 matching the position defined as the base (see Fig. 1), yielding the default matrix order \(Emg_{default}\).

For the testing \(data_1\), the same organization process was followed and a total matrix was defined for the 40 users as well, taking into account that these test data were recorded with the armband in the position suggested by the manufacturer.

Total testing matrix \(data_1\) for 40 users:

$$\begin{aligned} Dtest_{1}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8,Y] \end{aligned}$$

where:

$$\begin{aligned} Ms_i \in R^{3600x1000} \text { and } Y \in \left\{ out,in,close,thumb,relax,tap \right\} \end{aligned}$$

The test \(data_2\) matrix represents the set of recordings with different rotations of the armband for each user. It is worth mentioning that test \(data_2\) has different sensors taken as reference and the activity distributions are not equal. Figure 5 shows the gesture activity distributions for four users.

Total testing matrix \(data_2\) for 40 users:

$$\begin{aligned} Dtest_{2}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8,Y] \end{aligned}$$

where:

$$\begin{aligned} Ms_i \in R^{3600x1000} \text { and } Y \in \left\{ out,in,close,thumb,relax,tap \right\} \end{aligned}$$

For practical reasons, the number of columns was reduced from 1000 to 900, because the same amount of data was not always obtained when recording the signals. To avoid problems when concatenating the data, only 900 points were taken. The same feature extractors were applied to the training data, test \(data_1\), and test \(data_2\). An SVM \(classifier_1\) was trained and tested with the original training data.

2.4 Proposed Method

The proposed method is based on maximum amplitude detection (MAD): the data matrix is rearranged according to the sensor with the highest mean amplitude. The sensor with the highest amplitude is identified using the “wave out” movement, which concentrates the maximum signal values mainly in one sensor \( S_{x} \), taken as the reference for the new order.

$$\begin{aligned} Emg=[V_1,V_2,V_3,V_4,V_5,V_6,V_7,V_8], Emg \in R^{200x8} \text { and } Vi\in [-1, 1]_{200x1},\end{aligned}$$
$$\begin{aligned} Emg_{mean}=mean(Emg) \end{aligned}$$
(4)
$$\begin{aligned} S_{x}=max(Emg_{mean}) \end{aligned}$$
(5)
$$\begin{aligned} S_{x}=max([V_{1mean},V_{2mean},V_{3mean},V_{4mean},V_{5mean},V_{6mean},V_{7mean},V_{8mean}]) \end{aligned}$$
(6)

where the max function returns the maximum value of the vector. After the sensor is identified, the new sEMG matrix is organized according to Eq. (7):

$$\begin{aligned} Emg_{new} =[S_{x},S_{mod(x,8)+1},S_{mod((x+1),8)+1},\ldots ,S_{mod((x+6),8)+1}] \end{aligned}$$
(7)

For \(user_{20}\) the MAD sensor is located in the sensor number 6 (s6). According to the proposed method the new matrix is organized as follows:

$$\begin{aligned} EMG_{new} = [s6, s7, s8, s1, s2, s3, s4, s5] \end{aligned}$$

For \(user_{30}\) the MAD sensor is located in sensor number 5 (s5). According to the proposed method the new matrix is organized as follows:

$$\begin{aligned} EMG_{new} = [s5, s6, s7, s8, s1, s2, s3, s4] \end{aligned}$$
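The MAD reordering described above can be sketched as follows. This is an illustrative Python version (the paper uses Matlab); the use of the absolute value before averaging is our assumption, since raw sEMG is approximately zero-mean:

```python
def mad_reorder(emg):
    """Rotate the 8 EMG channels so the most active one comes first.
    emg: list of samples, each a list of 8 channel values in [-1, 1].
    """
    n_ch = len(emg[0])
    # Mean absolute amplitude per channel, Eqs. (4)-(6)
    # (assumption: |.| is applied before averaging).
    means = [sum(abs(row[c]) for row in emg) / len(emg) for c in range(n_ch)]
    x = means.index(max(means))            # 0-based index of the MAD sensor
    order = [(x + k) % n_ch for k in range(n_ch)]  # Eq. (7), circular shift
    return [[row[c] for c in order] for row in emg]
```

For a recording whose maximum activity is on sensor 6 (0-based index 5), the columns come out in the order [s6, s7, s8, s1, ..., s5], matching the \(user_{20}\) example above.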

Applying the MAD algorithm to the original training data, test \(data_1\), and test \(data_2\) yields new matrices labeled training \(data^*\), test \(data_1^*\), and test \(data_2^*\). Note that the new training matrix is organized according to the maximum-amplitude sensor; applying the MAD algorithm does not imply that the same reference sensor is obtained for all recordings. However, the detected sensors should be similar to the original one obtained when the data were recorded in the position suggested by the manufacturer.

Table 1 shows the result of applying the maximum amplitude detection algorithm to the original training and testing data. This method provides greater robustness to rotation and greater independence in the placement of the armband; it also yields higher performance and avoids the need to record the signals every time the system is used. Table 1 shows the reference electrode calculated for test \(data_1^*\) and test \(data_2^*\) using the MAD sensor activity.

Figure 5 shows the data for four users recorded with different orientations of the armband. The distribution of the EMG activity is different and not concentrated in the same region, even though all the recordings correspond to the same gestures, labeled with different colors. Figure 5 shows each group of data separately according to the gesture performed by each user. For all users, the wave out gesture is represented in dark blue. For \(user_{17}\), the highest sEMG activity is concentrated on sensors 1, 2, and 3. For \(user_{18}\), the greatest activity during the wave out gesture is located on sensors 4, 5, and 6. For \(user_{19}\), the greatest EMG activity is also concentrated on sensors 4, 5, and 6. Similarly, for \(user_{20}\), the greatest activity is detected on sensors 5, 6, 7, and 8. The concentration of activity for the same gesture differs between users, which is expected since each user placed the armband arbitrarily. For other hand gestures, for example the wave in gesture labeled in orange, the concentration of activity per sensor is likewise not homogeneous.

Figure 6 shows the same data for the four previous users, recorded with the armband in different positions, after applying the orientation correction with the MAD algorithm.

After applying the MAD algorithm, the activity distribution is similar and concentrated in the same region. The recordings correspond to the same gestures, labeled with different colors, for the four users wearing the armband in different positions.

Using the MAD algorithm, the data of users 17, 18, 19, and 20 have been aligned; these data can now be used in any classifier, improving the accuracy thanks to the new data organization.

After this orientation correction, the data fed into a new \(classifier_2\) always have the same order, regardless of the position in which the user wears the armband. It is not necessary to rotate by a specific angle to perform the compensation: the proposed method always searches for the sensor with the highest activity.

Figure 6 shows how the data are automatically aligned, since a sensor calibrated at the beginning of the test session is taken as reference. A summary of the reference electrodes calculated by the MAD algorithm applied to the original data is shown in columns 2, 3, and 4 of Table 1.

Fig. 4.
figure 4

sEMG for \(user_3\) on sensor 7 \(s_7\) while the wave out gesture was performed.

Table 1. Reference electrodes calculated by MAD algorithm.
Fig. 5.
figure 5

Uncorrected testing \(data_2\) activity distribution for 4 users. Gesture activity is concentrated on different sensors when data are recorded in different positions.

Fig. 6.
figure 6

Testing \(data_2^*\) activity distribution with correction for armband rotation for 4 users. All gesture activity is concentrated on the same sensors (sensors 1, 2, 3) when data are recorded in different positions.

2.5 Features Extractors

sEMG curve envelope, windowing, sEMG energy, mean absolute value, and standard deviation are used as feature extractors for both methods. The feature extractors are defined as follows:

Mean absolute value:

$$\begin{aligned} \begin{gathered} \mid {\mu }\mid =\frac{1}{N}\sum _{i=1}^N \mid {V_i}\mid \\ V_i\in [-1, 1]_{Nx1} \end{gathered}\end{aligned}$$
(8)

where N denotes the number of points recorded per channel, with \(N=1000\) points over 5 s.

Standard deviation:

$$\begin{aligned} \begin{gathered} S=\sqrt{\frac{1}{N-1}\sum _{i=1}^N \mid {V_i-\mu }\mid ^2} \\ V_i\in [-1, 1]_{Nx1} \end{gathered}\end{aligned}$$
(9)
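Equations (8) and (9) translate directly into code. A minimal sketch (illustrative Python; function names are ours):

```python
import math

def mav(v):
    # Mean absolute value, Eq. (8)
    return sum(abs(x) for x in v) / len(v)

def std(v):
    # Sample standard deviation, Eq. (9) (N-1 in the denominator)
    mu = sum(v) / len(v)
    return math.sqrt(sum((x - mu) ** 2 for x in v) / (len(v) - 1))
```

Each function maps one channel's N samples to a single scalar, so applying them per sensor yields one feature value per channel.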

The same feature extractors are applied in both methods, with the exception that in the proposed method the data are organized differently. The feature matrix used to train \(classifier_1\) and \(classifier_2\) is described below.

$$\begin{aligned} Emg_{(user_i,feature)}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8] \end{aligned}$$

where \(Ms_i \in R^{15x1000}\) and \(feature \in \left\{ std,envelope,welch,absmean,energy \right\} \)

$$\begin{aligned} Emg_{(user_i,std)}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8] \end{aligned}$$
(10)
$$\begin{aligned} Emg_{(user_i,envelope)}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8] \end{aligned}$$
(11)
$$\begin{aligned} Emg_{(user_i,welch)}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8] \end{aligned}$$
(12)
$$\begin{aligned} Emg_{(user_i,absmean)}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8] \end{aligned}$$
(13)
$$\begin{aligned} Emg_{(user_i,energy)}=[Ms_1,Ms_2,Ms_3,Ms_4,Ms_5,Ms_6,Ms_7,Ms_8] \end{aligned}$$
(14)
$$\begin{aligned}\begin{gathered} Features_{user_i}= \\ [Emg_{(user_i,std)}, Emg_{(user_i,envelope)}, Emg_{(user_i,welch)}, Emg_{(user_i,absmean)}, \\ Emg_{(user_i,energy)}] \end{gathered}\end{aligned}$$

where \(Emg_{(user_i,feature)} \in R^{15x8}\)

$$\begin{aligned} Matrix_{(user_i, category_j)}=[Features_{user_i},Y] \end{aligned}$$
(15)

where \(Features_{user_i} \in R^{15x40}\) and \(Y \in \left\{ out,in,close,thumb,relax,tap \right\} \)
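The column-wise concatenation of the five 15 x 8 feature blocks into the 15 x 40 per-user matrix of Eq. (15) can be sketched as follows (illustrative Python analogue of the Matlab pipeline; names are ours):

```python
FEATURES = ["std", "envelope", "welch", "absmean", "energy"]

def build_feature_matrix(emg_features, labels):
    """Concatenate the five 15x8 feature blocks column-wise into a
    15x40 matrix, then append the gesture label to each row, Eq. (15)."""
    matrix = []
    for r in range(len(labels)):           # 15 repetitions
        row = []
        for f in FEATURES:
            row.extend(emg_features[f][r])  # 8 values per feature block
        matrix.append(row + [labels[r]])
    return matrix
```

Each resulting row carries 5 features x 8 sensors = 40 values plus one label, consistent with \(Features_{user_i} \in R^{15x40}\).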

The total matrix for training is as follows:

$$\begin{aligned} TrainMatrix_{total}=[Matrix_{(user_1, category_j)};...; Matrix_{(user_{40}, category_j)}] \end{aligned}$$
(16)

where the size \(TrainMatrix_{total} \in R^{3600x40}\)

The total matrix for test \(data_1\) and test \(data_2\) is as follows:

$$\begin{aligned} TestMatrix_{total}=[Matrix_{(user_1)};...; Matrix_{(user_{40})}] \end{aligned}$$
(17)

3 Experiments

In this section, four experiments were carried out and two SVM classifiers were designed to evaluate the proposed system under rotation of the armband. The two SVM classifiers were trained with the data of the traditional method and of the proposed method, respectively. \(Experiment_1\) and \(experiment_2\) use the SVM \(classifier_1\), trained on the data recorded in the position suggested by the manufacturer. \(Experiment_3\) and \(experiment_4\) use the SVM \(classifier_2\). The MAD algorithm was applied to the training and testing data to reorder them according to the new reference electrode. Note that in Table 1 the reference electrodes for training \(data^*\) and test \(data_1^*\), calculated from the highest-potential sensor, do not differ greatly from the position suggested by the manufacturer.

In Table 1, training \(data^*\) and test \(data_1^*\) have approximately the same reference sensor after applying the proposed method. These reference sensors indirectly show how the armband was placed by each user. Comparing the two columns, it is clear that the data are similar. For column 3, test \(data_2^*\), the algorithm was also applied and the resulting reference sensors are different.

In Table 1, three users deserve special consideration: users 3, 8, and 34. Their reference sensors differ from those of the other users; this is because, when performing the wave out gesture, these users unwittingly made a strong movement when returning to the relaxation position. This situation can be seen in Fig. 4, which shows the sEMG of the wave out gesture for \(user_3\) on sensor number 7 and explains why the MAD algorithm selected sensor number 7 as the new reference in Table 1. The algorithm confirms that the armband was in different positions and, having confirmed this, it changed the order of the sensors accordingly.

3.1 Experiment 1

\(Experiment_{1}\) consists of training \(classifier_1\) with the normal training data and then testing it with test \(data_1\). Both the training data and test \(data_1\) were recorded following the armband manufacturer's recommendations. Users are between 20 and 55 years old. In this experiment, 15 repetitions are performed for each gesture. Users are not asked to calibrate the system; they simply place the armband in the suggested position.

3.2 Experiment 2

\(Experiment_{2}\) consists of training \(classifier_1\) with the training data and testing it with test \(data_2\) (armband placed in different positions). In this experiment, each user is first asked to take the armband off and then to place it back in any desired position. As before, each user is asked to repeat each gesture 15 times. In this experiment, \(classifier_1\) is tested with data recorded in different positions.

3.3 Experiment 3

\(Experiment_{3}\) consists of training \(classifier_2\) with training \(data^*\), organized according to the proposed method, and testing it with test \(data_1^*\), organized in the same way. In this experiment, the MAD algorithm is used to correct the position. A reference sensor is obtained for the training \(data^*\) even though these data were recorded in the same position. These data are shown in columns 2 and 3 of Table 1.

3.4 Experiment 4

\(Experiment_{4}\) consists of training \(classifier_2\) with training \(data^*\) and testing it with test \(data_2^*\) (armband placed in different positions), both organized according to the proposed method. The data for \(experiment_{3}\) and \(experiment_{4}\) are the same as for \(experiment_{1}\) and \(experiment_{2}\), with the only difference that the rotation correction of the armband has been applied. The correction of the rotation can be seen in Figs. 5 and 6 for four example users.

Fig. 7.
figure 7

Confusion matrix with test \(data_ 1\) (\(experiment_1\))

4 Results, Analysis and Comparisons

The confusion matrix for \(experiment_1\) is shown in Fig. 7.

The confusion matrix for \(experiment_2\) is shown in Fig. 8. Two SVM classifiers were trained and tested using two separate procedures: the training and testing datasets in default order for SVM \(classifier_1\), and the \(training^*\) and \(testing^{*}\) datasets of the proposed method for SVM \(classifier_2\). Figure 9 shows the confusion matrix for \(experiment_3\). Comparing test \(data_1\) under the traditional method with test \(data_1^*\) under the novel method, the system accuracy decreases from 95.2% to 93.9%. This drop occurs because the references calculated by the MAD algorithm for users 3, 8, and 34 differ from the others.

For \(experiment_4\) (armband rotated), the confusion matrix obtained with the novel method on test \(data_2^*\) is shown in Fig. 10. Comparing test \(data_2\) under the traditional method with test \(data_2^*\) under the novel method, the system accuracy increases from 59.5% to 92.4%. With this novel method, the recognition system can be used even by new people with great effectiveness and accuracy.

Fig. 8.
figure 8

Confusion matrix with test \(data_ 2\) (\(experiment_2\))

Fig. 9.
figure 9

Confusion matrix with novel method, test \(data_1^*\) (\(experiment_3\))

Fig. 10.
figure 10

Confusion matrix with novel method, test \(data_2^*\) (\(experiment_4\))

The system accuracy decreases by 35.7% when a user wears the armband in a different position with the traditional method. Figure 5 shows the data distribution for users 17, 18, 19, and 20 when these data are used with the traditional method. The data in dark blue correspond to the wave out movement. The distributions differ across users, and the signal power is not concentrated in the same sensor because the users placed the armband in different positions.

The system accuracy decreases by only 1.5% when a user wears the armband in a different position with the proposed method. Figure 6 shows the data distribution for users 17, 18, 19, and 20 when these data are processed with the novel method. The data in dark blue correspond to the wave out movement; the distributions are almost identical across users, and the signal power is concentrated in the same sensor even when the armband is placed in different positions. To calibrate the system with the MAD algorithm, either the wave out or the wave in gesture can be used.

Table 2 shows the performance summary of the two systems.

Table 2. Accuracy systems comparison.

For consideration and experimentation by anyone interested in the proposed method, the code as well as the dataset of the paper can be found in the following link: https://drive.google.com/drive/folders/1bvWbh-16c4ShFQDP3Q6a8hwBu6UaAW4y.

5 Conclusion

In this paper, three main contributions have been made: (1) robustness to sensor placement on the forearm, recognizing 6 gestures with high accuracy; (2) a reduced need to retrain the system each time it is used; and (3) a recognition algorithm that achieves 92.4% accuracy across different armband positions using the novel technique.

The system can be calibrated using the wave out or wave in gesture. The 1.5% decrease could be reduced if the system were calibrated at the beginning of data acquisition; in this paper, calibration at the start of acquisition is not performed. The algorithm obtains the new reference electrode by using the wave out gesture to reorganize the data matrix. Any classifier can be used after the orientation correction. Likewise, it is not necessary to use many features over the EMG signals: only 5 features were needed to obtain good accuracy.

Future work will include research on recognizing more than 20 hand gestures and the implementation of a system that responds in less than 100 ms with high accuracy. The system response should be the same when the armband is placed on either forearm (right or left). The system will also be tested on an embedded system to make it more portable.