
1 Introduction

Proper coordination between the brain and the speech-producing muscles is required for the production of speech sounds [15]. Lack of this coordination leads to speech disorders, such as apraxia, dysarthria, and stuttering, which affect a person’s ability to produce speech sounds. These disorders are often associated with neurological or neurodegenerative diseases, such as cerebral palsy or Parkinson’s disease. The severity-level of these disorders may be mild or severe, depending on the extent of damage to the affected area of the brain. In the case of mild severity, the patient may mispronounce a few words, whereas in severe cases, the patient lacks the ability to produce intelligible speech. Among these speech disorders, dysarthria is relatively common [24]. Dysarthria is a neuro-motor speech disorder in which the muscles that produce speech are weak. The dynamic movements of the articulators, such as the lips, tongue, throat, and upper respiratory tract, are also affected by the underlying brain damage. Apart from brain damage, cerebral palsy, muscular dystrophy, and stroke can also cause dysarthria [19].

The severity-level of dysarthria depends on the extent of damage to the area of neurological injury, which is diagnosed using brain and nerve tests. The type, underlying cause, severity-level, and symptoms all influence the manner in which dysarthria is treated [4]. This uncertainty in treatment motivates researchers to develop speech-assistive tools for categorizing dysarthric intelligibility.

In the literature, dysarthria severity-level classification has been explored extensively using the Short-Time Fourier Transform (STFT) [9] and various acoustical features [1]. State-of-the-art feature sets, such as the Mel Frequency Cepstral Coefficients (MFCC), were employed in [12] due to their capacity to capture global spectral envelope properties. In addition to such perceptually-motivated feature sets, glottal excitation source parameters, derived from the quasi-periodic sampling of the vocal tract system, were employed in [8]. In the signal processing framework, speech signals are considered non-stationary, because their short-time spectra contain multiple frequency components over a wide dynamic range. Due to the dynamic movements of the articulators, the frequency spectrum varies instantaneously.

In this work, we demonstrate the capability of the Continuous Wavelet Transform (CWT)-based representation (i.e., the scalogram) for dysarthric severity-level classification. According to the study in [5], the wavelet transform has better frequency resolution in the low-frequency regions than the STFT. In the literature, wavelet-based features have been successfully applied to acoustical research problems, as in [3, 22]. To that effect, the motivation for utilizing the CWT in this study is the improved frequency resolution of CWT-based scalograms at lower frequencies, as compared to the STFT-based and Mel spectrogram-based techniques. To the best of the authors’ knowledge and belief, the CWT has previously been explored to model articulation impairments in patients with Parkinson’s disease [23]. However, the use of the CWT to capture discriminative acoustic cues for dysarthric severity-level classification is proposed for the first time in this study. Results are reported on the standard Universal Access (UA)-Speech corpus.

The rest of the paper is organized as follows: Sect. 2 discusses the motivation for using a scalogram-based approach over a spectrogram. Section 3 describes the proposed approach of Morse wavelet-based dysarthric severity-level classification. The experimental setup is described in Sect. 4, followed by the experimental results in Sect. 5. Section 6 concludes the paper along with potential future research directions.

2 Spectrogram and Scalogram

STFT-based spectrograms are computed using windows of equal and fixed length that slide across the entire signal. As a result, in a spectrogram, the spread in the time-domain, as well as in the frequency-domain, remains constant throughout the time-frequency plane (i.e., constant time and frequency resolution). On the other hand, variable time-frequency resolution can be achieved by employing the CWT-based representation (also known as the scalogram). The time-frequency spread of the wavelet atoms \(\psi _{u,s}\) determines the time-frequency resolution of the scalogram. In a time-frequency representation, a Heisenberg box is defined by the spread in time multiplied by the spread in frequency. In a scalogram, the spread in frequency is smaller in the low-frequency regions, leading to a better frequency resolution, as shown by the boxes in Fig. 1. Furthermore, the CWT can be computed using the wavelet \(\psi _{u,s}(t)\), whose Fourier transform is denoted by \(\hat{\psi }_{u,s}(\omega )\) [20].

Given that the center frequency of \(\hat{\psi }(\omega )\) is denoted by \(\eta \), the wavelet \(\psi _{u,s}\) has its center frequency at \(\frac{\eta }{s}\). The energy of \(\psi _{u,s}\) spreads around this center frequency as given by [20]:

$$\begin{aligned} \frac{1}{2\pi }\int _{0}^{+\infty }\Big (\omega -\frac{\eta }{s}\Big )^{2} |\hat{\psi }_{u,s}(\omega )|^{2} d\omega = \frac{\sigma _{\omega }^2}{s^2}, \end{aligned}$$
(1)

where,

$$\begin{aligned} \sigma _{\omega }^2= \frac{1}{2\pi }\int _{0}^{+\infty } (\omega -\eta )^2 |\hat{\psi }(\omega )|^{2} d\omega . \end{aligned}$$
(2)

Furthermore, the energy density in the local time-frequency plane, denoted by \(P_{W}f\), is given by:

$$\begin{aligned} P_{W}f(u,\xi )= \Big |W f(u,s) \Big | ^2 = \Big |W f\Big (u,\frac{\eta }{\xi }\Big ) \Big |^2. \end{aligned}$$
(3)

Equation (3) is nothing but the scalogram, with scale-dependent time-frequency resolution.

Figure 1 illustrates the motivation behind choosing the CWT-based approach over the STFT. Energy conservation in the STFT is given by [20]:

$$\begin{aligned} \int _{-\infty }^{+\infty }|f(t)|^2dt=\frac{1}{2\pi }\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }|Sf(u,\zeta )|^2d\zeta du. \end{aligned}$$
(4)

Energy conservation is preserved in the analytic WT as well [20]:

$$\begin{aligned} \int _{-\infty }^{+\infty }|f_a(t)|^2dt=\frac{1}{C_\psi }\int _{0}^{+\infty }\int _{-\infty }^{+\infty }|Wf(u,s)|^2du\frac{ds}{s^2}. \end{aligned}$$
(5)
Fig. 1. Tilings of the time-frequency plane for (a) STFT, and (b) CWT.

3 Proposed Work

3.1 Continuous Wavelet Transform (CWT)

Due to the lack of coordination between the brain and the articulators, the speech produced by dysarthric patients exhibits changes in energy. To analyse this energy change across different severity-levels, recent investigations using the spectrogram were made in [9]. However, to gain better insight into the energy spread in a time-frequency representation, we propose a CWT-based scalogram approach in this study. The key idea behind employing the CWT-based scalogram for dysarthric severity-level classification is to exploit the energy spread in the low-frequency regions of the time-frequency distribution for different severity-levels. A wavelet is a waveform with a zero average and an effectively limited duration, i.e., it is a wave for a short duration, and hence the name wavelet. It is defined as [17]:

$$\begin{aligned} \psi _{u,s}(t)=\frac{1}{\sqrt{s}}\psi ^{*}\Big (\frac{t-u}{s} \Big ), s \in R^{+}, u \in R, \end{aligned}$$
(6)

where the dilation (scaling) parameter is denoted by s, and the translation (position) parameter is denoted by u. The CWT of a signal f(t) is

$$\begin{aligned} \begin{aligned} W_f(u,s)&= {<}f(t), \psi _{u,s}(t){>}, \\&=\frac{1}{\sqrt{s}}\int _{-\infty }^{\infty }f(t)\psi ^*\left( \frac{t-u}{s}\right) dt, \end{aligned} \end{aligned}$$
(7)

where \({<}\cdot , \cdot {>}\) denotes the inner product used to compute the wavelet coefficients, and \(*\) denotes the complex conjugate. The scalogram is defined as the squared magnitude of the CWT coefficients, i.e., \(|W_f(u,s)|^2\).
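As a minimal illustration of Eqs. (6)–(7), the NumPy sketch below discretizes the CWT integral directly. A complex Morlet wavelet is used purely as a stand-in analytic wavelet (the Morse wavelet actually employed in this work is introduced in Sect. 3.2); all parameter values are illustrative, and the quadratic-time double loop is written for clarity rather than efficiency:

```python
import numpy as np

def morlet(t, omega0=6.0):
    # Complex Morlet wavelet: approximately zero-mean for omega0 >= 5.
    return np.pi**-0.25 * np.exp(1j * omega0 * t - t**2 / 2.0)

def cwt_direct(f, fs, scales):
    # Direct discretization of Eq. (7):
    # W_f(u, s) = (1/sqrt(s)) * integral f(t) psi*((t - u)/s) dt.
    t = np.arange(len(f)) / fs
    dt = 1.0 / fs
    W = np.empty((len(scales), len(f)), dtype=complex)
    for i, s in enumerate(scales):
        for j, u in enumerate(t):
            W[i, j] = np.sum(f * np.conj(morlet((t - u) / s))) * dt / np.sqrt(s)
    return W

# Toy usage: scalogram |W_f(u, s)|^2 of a 100 Hz tone sampled at 16 kHz.
fs = 16000
x = np.sin(2 * np.pi * 100 * np.arange(1024) / fs)
scales = np.geomspace(1e-4, 1e-2, 32)      # dilation parameter s (in seconds)
scalogram = np.abs(cwt_direct(x, fs, scales))**2
```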

3.2 Exploiting Morse Wavelet for CWT

There are various types of analytic wavelets in the literature, such as the Cauchy, complex Shannon, lognormal, Derivative of Gaussian, and Morlet wavelets [10, 13, 20]. However, given this variety, choosing an appropriate wavelet for a particular task becomes an issue. The Generalized Morse Wavelets (GMWs) are considered a superfamily of analytic wavelets that are causal in the frequency-domain. In the frequency-domain, the Morse wavelet is given by [16]:

$$\begin{aligned} \hat{\psi }_{\beta ,\gamma }(\omega )=\int _{-\infty }^{\infty } \psi _{\beta , \gamma }(t)e^{-i\omega t}dt = U(\omega )a_{\beta ,\gamma }\omega ^\beta e^{-\omega ^\gamma }, \end{aligned}$$
(8)

where \(\beta \) and \(\gamma \) are the two parameters of the Morse wavelet, which control the shape and size of the wavelet, respectively, and \(U(\omega )\) is the unit-step function arising from the causality in the frequency-domain. The parameter \(\beta \) is called the order, and the parameter \(\gamma \) indexes the family of wavelets: with each value of \(\gamma \), one obtains a family of wavelets from the Morse wavelet representation in Eq. (8) [16]. The amplitude of the wavelet is normalized by the real-valued constant factor \(a_{\beta ,\gamma }\), whose value is given by [17]:

$$\begin{aligned} a_{\beta ,\gamma } \equiv 2\left( \frac{e\gamma }{\beta }\right) ^{\frac{\beta }{\gamma }}. \end{aligned}$$
(9)
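For concreteness, Eqs. (8)–(9) can be evaluated directly. The short NumPy sketch below does so, and numerically confirms that the normalization of Eq. (9) fixes the peak value of \(\hat{\psi }_{\beta ,\gamma }(\omega )\) at 2; the closed-form peak frequency in the comments is a standard property of GMWs, stated here as an aside:

```python
import numpy as np

def morse_hat(omega, beta=20.0, gamma=3.0):
    # Eq. (8): psi_hat(omega) = U(omega) * a_{beta,gamma} * omega^beta * exp(-omega^gamma),
    # with the normalizing constant a_{beta,gamma} of Eq. (9).
    a = 2.0 * (np.e * gamma / beta) ** (beta / gamma)
    om = np.where(omega > 0, omega, 1.0)   # guard: avoid negative bases under ** beta
    return (omega > 0) * a * om**beta * np.exp(-om**gamma)

# The magnitude response peaks where d/d(omega)[omega^beta * exp(-omega^gamma)] = 0,
# i.e., at omega_peak = (beta / gamma)^(1/gamma); the normalization makes the peak value 2.
beta, gamma = 20.0, 3.0                                  # the setting used in this work
omega_peak = (beta / gamma) ** (1.0 / gamma)
print(morse_hat(np.array([omega_peak]), beta, gamma))    # -> approximately [2.]
print("wavelet duration P^2 =", beta * gamma)            # Eq. (10): P^2 = beta * gamma = 60
```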

Furthermore, the squared “wavelet duration”, denoted by \(P^{2}_{\beta ,\gamma }\), is obtained from the \(2^{nd}\)-order derivative of the Morse wavelet frequency response at its peak frequency \(\omega _{\beta ,\gamma }\). Mathematically, \(P^{2}_{\beta ,\gamma }\) is defined as [17]:

$$\begin{aligned} P^2_{\beta ,\gamma } \equiv - \frac{\omega ^2_{\beta ,\gamma } \hat{\psi }^{''}_{\beta ,\gamma }(\omega _{\beta ,\gamma })}{\hat{\psi }_{\beta ,\gamma }(\omega _{\beta ,\gamma })} = \beta \gamma . \end{aligned}$$
(10)
Fig. 2. Effect of the \(\gamma \) parameter on the time-frequency Heisenberg area \(A_{\beta ,\gamma }\) w.r.t. the wavelet duration \(P_{\beta ,\gamma }/\pi \). After [16].

The number of oscillations at the peak frequency that can be fitted within the central window of the wavelet in the time-domain is given by \(\frac{P^{2}_{\beta ,\gamma }}{2\pi }\). The Morse wavelet with parameter \(\gamma \) = 3 (also known as the ‘Airy family’) is used in this study. The near-optimal Heisenberg area \(A_{\beta ,\gamma }\), attained at \(\gamma \) = 3 even for a small wavelet duration (as shown in Fig. 2), justifies our choice of \(\gamma =3\). For a Morse wavelet, \(A_{\beta ,\gamma }\) is given by [5, 18]:

$$\begin{aligned} A_{\beta ,\gamma } \equiv \sigma _t\sigma _{\omega }, \end{aligned}$$
(11)

where the time spread \(\sigma _{t}^{2}\) and the frequency spread \(\sigma _{\omega }^{2}\) of the wavelet atom representation are given by [17]:

$$\begin{aligned} \sigma _{t}^{2} = \omega _{\psi }^{2}\frac{\int t^{2}|\psi (t)|^{2}dt}{\int |\psi (t)|^{2}dt} \,\,\, \text {and}, \end{aligned}$$
(12)
$$\begin{aligned} \sigma ^2_\omega = \frac{1}{\omega ^2_\psi } \frac{\int (\omega - \tilde{\omega }_\psi )^2 |\psi (\omega )|^2 d\omega }{\int |\psi (\omega )|^2 d\omega }, \end{aligned}$$
(13)

where \(\tilde{\omega }_{\psi }\) represents the energy frequency of the Morse wavelet (which is also the mean of \(|\hat{\psi }(\omega )|^2\)) [17]. The study reported in [16] shows that the Morse wavelets attain an information concentration close to the optimal value of \(A_{\beta , \gamma } = 1/2\). For \(\gamma \) = 3, the degree of information concentration, i.e., \(A_{\beta , \gamma }\), is the highest even for a small value of the wavelet duration, \(P_{\beta ,\gamma }/\pi \), as shown in Fig. 2. To that effect, in this work, scalogram images were extracted using MATLAB with \(\gamma \) = 3 and \(\beta \) = 20 (i.e., \(P_{\beta ,\gamma }^2\) = 60) as the default parameter setting for the Morse wavelet-based scalogram, over the full frequency band up to 8 kHz (since the sampling frequency is \(F_s=16\) kHz). Each extracted scalogram image has dimensions \(512 \times 512 \times 3\). These scalogram-based features are then fed as input to the CNN classifier. The experimental setup is explained in the following section.
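The scalograms in this work were extracted using MATLAB’s Morse-wavelet CWT; the sketch below is a rough Python equivalent under stated assumptions. It computes the CWT scale-by-scale in the frequency domain using the Morse frequency response of Eq. (8); the log-spaced frequency grid and all helper names are our assumptions, not the exact MATLAB pipeline:

```python
import numpy as np

def morse_hat(omega, beta=20.0, gamma=3.0):
    # Morse frequency response of Eq. (8) (see the previous sketch).
    a = 2.0 * (np.e * gamma / beta) ** (beta / gamma)
    om = np.where(omega > 0, omega, 1.0)
    return (omega > 0) * a * om**beta * np.exp(-om**gamma)

def morse_scalogram(x, fs, n_voices=512, fmin=50.0, fmax=8000.0, beta=20.0, gamma=3.0):
    # CWT per scale, computed in the frequency domain:
    #   W(u, s) over all u = ifft( fft(x) * sqrt(s) * psi_hat(s * omega) ),
    # valid because the Morse wavelet is analytic (psi_hat is real-valued,
    # so no conjugation is needed). The scalogram is |W|^2.
    n = len(x)
    X = np.fft.fft(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # rad/s grid
    omega_peak = (beta / gamma) ** (1.0 / gamma)
    freqs = np.geomspace(fmin, fmax, n_voices)            # assumed log-spaced grid
    scales = omega_peak / (2.0 * np.pi * freqs)           # put the peak on each freq
    W = np.empty((n_voices, n), dtype=complex)
    for i, s in enumerate(scales):
        W[i] = np.fft.ifft(X * np.sqrt(s) * morse_hat(s * omega, beta, gamma))
    return np.abs(W) ** 2, freqs

# Toy usage at 16 kHz (full band up to 8 kHz, as in the text). A real pipeline
# would additionally map the scalogram through a colormap and resize it to the
# 512 x 512 x 3 image that is fed to the CNN.
fs = 16000
x = np.sin(2.0 * np.pi * 440.0 * np.arange(fs // 2) / fs)
S, freqs = morse_scalogram(x, fs)
```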

Fig. 3. Dysarthric speech utterances (for vowel /e/) from a male speaker at various dysarthric severity-levels (Panel I), with the corresponding STFT (Panel II), Mel spectrogram (Panel III), and Morse wavelet scalogram (Panel IV), for (a) normal speech, and dysarthric speech with severity-level (b) very low, (c) low, (d) medium, and (e) high. Best viewed in color. (Color figure online)

4 Experimental Setup

4.1 Dataset Used

The Universal Access dysarthric Speech (UA-Speech) corpus [25] is used to evaluate the proposed CWT-based approach. In this study, a dataset configuration identical to that described in [9] is used: it has 8 speakers, of which 4 are male and 4 are female. Furthermore, \(90\%\) of the dataset is used for the training set and the remaining 10% for the testing partition.

4.2 Feature Details

In this study, the energy-capturing capabilities of the scalogram at low frequencies are compared with those of the baseline spectrogram and Mel spectrogram. As mentioned in [9], the STFT was applied to generate a time-frequency representation with a window size of 2 ms and a window overlap of 0.5 ms. Furthermore, the performance of the scalogram was also compared with the Mel spectrogram, which is generated with the same 2 ms window and 0.5 ms overlap, and has dimensions \(512 \times 512 \times 3\). As discussed in Sect. 3, the scalograms, also of dimension \(512 \times 512 \times 3\), were generated with \(\gamma \) = 3 and \(\beta \) = 20 (i.e., \(P^2_{\beta ,\gamma }=60\)) as the default parameter setting.
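For reference, a librosa-based sketch of the two baseline representations with the stated 2 ms window and 0.5 ms overlap is given below; the file name and the number of Mel bands are assumptions, since only the windowing and the final image size are specified:

```python
import numpy as np
import librosa

fs = 16000
win = int(0.002 * fs)                 # 2 ms window  -> 32 samples
hop = int(0.0015 * fs)                # 2 ms - 0.5 ms overlap -> 1.5 ms hop (24 samples)

# "dysarthric_utterance.wav" is a hypothetical file name.
y, _ = librosa.load("dysarthric_utterance.wav", sr=fs)

# Baseline 1: STFT power spectrogram with the stated windowing.
S_stft = np.abs(librosa.stft(y, n_fft=win, hop_length=hop, win_length=win)) ** 2

# Baseline 2: Mel spectrogram with the same windowing. n_mels is our assumption:
# a 32-point FFT has only 17 frequency bins, so a small Mel bank is used here.
S_mel = librosa.feature.melspectrogram(y=y, sr=fs, n_fft=win, hop_length=hop,
                                       win_length=win, n_mels=16)
log_mel = librosa.power_to_db(S_mel)  # log compression before rendering to an image
```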

4.3 Classifier Details

Based on the experiments presented in [12], a Convolutional Neural Network (CNN) is used as the classifier in this study. According to the study reported in [12], the CNN gives results comparable to other deep neural network (DNN)-based classifiers on the UA-Speech corpus. For this study, the CNN model was trained using the Adam optimizer, with four convolutional layers with a kernel size of \(5 \times 5\) and one Fully-Connected (FC) layer [14]. Mel spectrograms and scalograms, both of size \(512 \times 512\), were used in these investigations. Max-pooling layers and the Rectified Linear Unit (ReLU) activation are utilized. A learning rate of 0.001 is used, and the cross-entropy loss is chosen for loss estimation.
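A minimal PyTorch sketch of a classifier matching this description is given below; the channel widths, padding, and pooling schedule are our assumptions, since the text specifies only the four \(5 \times 5\) convolutional layers, ReLU, max-pooling, one FC layer, the Adam optimizer, the 0.001 learning rate, and the cross-entropy loss:

```python
import torch
import torch.nn as nn

class SeverityCNN(nn.Module):
    """Sketch of the described classifier: four 5 x 5 convolutional layers with
    ReLU and max-pooling, followed by one fully-connected (FC) layer. The
    channel widths and pooling schedule are assumptions."""
    def __init__(self, n_classes=5):   # normal + 4 severity-levels
        super().__init__()
        chans = [3, 16, 32, 64, 128]   # assumed channel progression
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=5, padding=2),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]          # 512 -> 256 -> 128 -> 64 -> 32
        self.features = nn.Sequential(*layers)
        self.fc = nn.Linear(128 * 32 * 32, n_classes)

    def forward(self, x):              # x: (batch, 3, 512, 512) scalogram images
        return self.fc(self.features(x).flatten(1))

model = SeverityCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # as stated in the text
criterion = nn.CrossEntropyLoss()                            # cross-entropy loss
```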

4.4 Performance Evaluation

F1-Score. The F1-score is a widely used statistical measure for analyzing the performance of a model. As stated in [7], it is calculated as the harmonic mean of the model’s precision and recall. Its value ranges from 0 to 1, with a score closer to 1 indicating better performance.

MCC. The Matthews Correlation Coefficient (MCC) indicates the degree of correlation between the predicted and the actual classes [21]. It is typically regarded as a balanced measure for model comparison, and lies in the range of \(-1\) to 1.

Jaccard Index. The Jaccard index is a metric for determining how similar or different two classes are. It lies in the range of 0 to 1, and is defined as [2]:

$$\begin{aligned} \text {Jaccard Index} = \frac{TP}{TP + FP + FN}, \end{aligned}$$
(14)

where TP, FP, and FN, represent True Positive, False Positive, and False Negative, respectively.

Hamming Loss. The Hamming loss accounts for the class labels that are predicted wrongly. The prediction error (prediction of an incorrect label) and the missing error (failure to predict a relevant label) are normalized over all the classes and all the test data. The Hamming loss is computed as [6]:

$$\begin{aligned} \text {Hamming Loss} = \frac{1}{nL} \sum _{i=1}^{n} \sum _{j=1}^{L} I(y^j _i\ne \hat{y}^j _i), \end{aligned}$$
(15)

where \(y^j _i\) and \(\hat{y}^j _i\) are the actual and predicted labels, respectively, and I is the indicator function. The closer it is to 0, the better the performance of the algorithm.
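All four measures are readily available in scikit-learn; below is a minimal sketch on toy severity labels (the macro averaging for the multi-class F1-score and Jaccard index is our assumption, as the averaging scheme is not stated):

```python
from sklearn.metrics import (f1_score, hamming_loss,
                             jaccard_score, matthews_corrcoef)

# Toy severity labels: 0 = normal, 1 = very low, 2 = low, 3 = medium, 4 = high.
y_true = [0, 1, 2, 3, 4, 4, 2, 1]
y_pred = [0, 1, 2, 3, 4, 3, 2, 1]

print("F1-score     :", f1_score(y_true, y_pred, average="macro"))
print("MCC          :", matthews_corrcoef(y_true, y_pred))
print("Jaccard index:", jaccard_score(y_true, y_pred, average="macro"))
# For single-label multi-class data, Eq. (15) reduces to the misclassification rate.
print("Hamming loss :", hamming_loss(y_true, y_pred))
```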

5 Experimental Results

5.1 Spectrographic Analysis

Panel I of Fig. 3 shows the speech segment of the vowel /e/. Panels II, III, and IV show the spectrogram, Mel spectrogram, and scalogram, respectively, for (a) normal speech, and (b) very low, (c) low, (d) medium, and (e) high dysarthric severity-levels for the same speech segment. It can be observed from Fig. 3 that the scalogram-based features capture energy-based discriminative acoustic cues for the dysarthric severity-levels more accurately than the STFT-based and Mel spectrogram-based features. Furthermore, from the scalogram it can be observed that as the dysarthric severity-level increases, patients struggle to sustain the prolonged vowel /e/. This may be due to the lack of coordination between the articulators and the brain, due to which the energy spread is seen over the entire time-axis. Consequently, the utterance of the vowel /e/ is of short duration for the medium and high dysarthric severity-levels.

Fig. 4. Scatter plots obtained using LDA for (a) STFT, (b) Mel spectrogram, and (c) scalogram. After [11]. Best viewed in color.

5.2 Performance Evaluation

The performance of the various feature sets is evaluated via % classification accuracy (as shown in Table 1). With the CNN, the scalogram performs relatively better, with a classification accuracy of \(95.17\%\), than the baseline STFT and Mel spectrogram. The analyses in the following sub-section, together with the % classification accuracy obtained with the CNN classifier, show the capability of the scalogram in capturing the energy spread generated during the speech production mechanism for the various dysarthric severity-levels. Furthermore, Table 2 shows the confusion matrices of the STFT, Mel spectrogram, and scalogram for the CNN model. It can be observed that the scalogram reduces the false prediction error, which indicates its better performance w.r.t. the baseline STFT and Mel spectrogram. Additionally, Table 3 compares various statistical measures, namely, the F1-score, Jaccard index, MCC, and Hamming loss, for the various feature sets. It can be observed from Table 3 that the scalogram performs relatively better than the baseline STFT and Mel spectrogram.

Table 1. Results in (% classification accuracy) for CNN classifier.
Table 2. Confusion matrix obtained for STFT, Mel-spectrogram, and scalogram.
Table 3. Various statistical measures of STFT, Mel spectrogram, and scalogram.

5.3 Visualization of Various Features Using Linear Discriminant Analysis (LDA)

The capability of the scalogram for dysarthric severity-level classification is also validated via LDA scatter plots, which are preferred here over t-SNE plots due to their higher image resolution and better projection of the given higher-dimensional feature space onto a lower-dimensional space [11]. Here, the STFT, Mel spectrogram, and scalogram features are projected onto a 2-D feature space using LDA, and are represented by the scatter plots shown in Fig. 4(a), Fig. 4(b), and Fig. 4(c), respectively. From Fig. 4, it can be observed that the wavelet-based scalogram has low intra-class variance and high inter-class variance, which increases the distance between the clusters w.r.t. the baseline STFT and Mel spectrogram, indicating better classification performance by the Morse wavelet-based features.
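For reference, a minimal scikit-learn sketch of such an LDA projection is given below; the random features merely stand in for the actual flattened STFT, Mel spectrogram, or scalogram features, whose exact vectorization is our assumption:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X holds one flattened feature image per utterance (random data stands in for
# the real STFT / Mel spectrogram / scalogram features); y holds the 5 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))
y = rng.integers(0, 5, size=200)

lda = LinearDiscriminantAnalysis(n_components=2)   # 2-D projection, as in Fig. 4
X_2d = lda.fit_transform(X, y)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, s=10)
plt.xlabel("LD 1"); plt.ylabel("LD 2")
plt.show()
```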

6 Summary and Conclusion

In this study, we investigated the CWT, in particular, the Morse wavelet, to achieve improved time-frequency resolution for the various dysarthric severity-levels. The low-frequency resolution of the Morse wavelet-based scalogram is higher than that of the STFT and Mel spectrogram. Therefore, the energy spread corresponding to the dysarthric severity in the low-frequency region is better visualized in the scalogram, and hence, the low-frequency discriminative cues are better classified using the scalogram. This is also reflected in the significant increase in % classification accuracy as compared to the STFT and Mel spectrogram. Furthermore, it was also observed that as the severity-level increases, the energy spread in the time-frequency representation extends over the entire time-axis, due to the patients’ difficulty in uttering the complete word. The performance of the scalogram was also analyzed using various statistical performance measures, such as the F1-score, MCC, Jaccard index, and Hamming loss, along with LDA scatter plots. Our future efforts will focus on extending and validating this work on other dysarthric speech corpora, such as TORGO and homeService.

Acknowledgments. The authors would like to thank the Ministry of Electronics and Information Technology (MeitY), New Delhi, Govt. of India, for sponsoring the consortium project titled ‘Speech Technologies in Indian Languages’ under ‘National Language Translation Mission (NLTM): BHASHINI’, subtitled ‘Building Assistive Speech Technologies for the Challenged’ (Grant ID: 11(1)2022-HCC (TDIL)). We also thank the consortium leaders Prof. Hema A. Murthy, Prof. S. Umesh of IIT Madras, and the authorities of DA-IICT Gandhinagar, India for their support and cooperation to carry out this research work.