
1 Introduction

Diagnosis of disease, ideally even before symptoms are noticeable to the individual, facilitates early interventions and maximises the chance of successful treatment, especially for mental health. Whilst early diagnosis cannot enable curative treatment of all possible diseases, it offers a considerable chance of averting irreversible pathological changes in the organ, skeletal, and nervous systems, as well as chronic pain and psychological stress [8]. Research in machine learning for audio-based digital health applications has increased in recent years [6]. Substantial contributions have been made to the development of audio-based techniques for the recognition of various health conditions, including neurodegenerative diseases such as Alzheimer's or Parkinson's [20], psychological disorders such as bipolar disorder [16], neurodevelopmental disorders such as Fragile X, Rett syndrome, or Autism Spectrum Disorder [17], and contagious diseases such as COVID-19 [15]. In the remainder of this paper, we first introduce seven health-related corpora for speech and acoustic health monitoring tasks (Sect. 2). In Sect. 3, we then introduce a set of contemporary computer audition methods and analyse their performance on a range of early digital health diagnosis and recognition tasks. The last section concludes the paper and discusses future work.

2 Speech and Acoustic Health Datasets

In this section, we introduce seven health-related speech and audio datasets which have been used in recent editions of the INTERSPEECH Computational Paralinguistics ChallengE (ComParE) [18, 19, 22]. We further provide information about the most important characteristics of each dataset and the partitions used for the machine learning experiments (cf. Table 1).

Table 1. Number of instances per class in all partitions for each dataset.

Cambridge COVID-19 Sound Database – Speech & Cough. This dataset, used in the 2021 edition of the INTERSPEECH ComParE, contains two subsets (speech and cough) from the Cambridge COVID-19 Sound database [3, 11]. The audio files were resampled (in some cases, upsampled) to 16 kHz, converted to mono/16 bit, and normalised recording-wise to eliminate varying loudness. For COVID-19 Cough (C19C), 725 recordings (one to three forced coughs each) from 343 participants were provided, totalling 1.63 h. For COVID-19 Speech (C19S), 893 speech recordings from 366 individuals were used, totalling 3.24 h.
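To make the described preprocessing concrete, the following Python sketch applies comparable steps: resampling to 16 kHz mono and recording-wise peak normalisation before writing 16-bit PCM. File names are placeholders, and this is an illustration rather than the organisers' exact pipeline.

```python
# Illustrative preprocessing (not the official challenge pipeline):
# resample to 16 kHz mono and peak-normalise each recording.
import librosa
import numpy as np
import soundfile as sf

def preprocess(in_path: str, out_path: str, target_sr: int = 16000) -> None:
    # librosa loads as mono float32 and resamples in one step
    audio, _ = librosa.load(in_path, sr=target_sr, mono=True)
    peak = np.max(np.abs(audio))
    if peak > 0:                      # recording-wise loudness normalisation
        audio = audio / peak
    sf.write(out_path, audio, target_sr, subtype="PCM_16")  # 16-bit PCM WAV

preprocess("cough_raw.wav", "cough_16k.wav")  # placeholder file names
```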

Upper Respiratory Tract Infection Corpus (URTIC). This corpus, provided by the Institute of Safety Technology, University of Wuppertal, Germany, consists of recordings of 630 subjects (382 m, 248 f; mean age 29.5 years, std. dev. 12.1 years, range 12–84 years), made in quiet rooms with a microphone/headset/hardware setup (sample rate 44.1 kHz, downsampled to 16 kHz, quantisation 16 bit). To obtain the state of health, each individual completed the German version of the Wisconsin Upper Respiratory Symptom Survey (WURSS-24), which assesses the symptoms of a common cold. The global illness severity item (on a scale from 0 = not sick to 7 = severely sick) was binarised using a threshold at 6.
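The label construction can be summarised in a few lines; the snippet below is our own paraphrase of the described binarisation, and the class names as well as the inclusive threshold are assumptions rather than details given in the source.

```python
# Binarise the WURSS-24 global illness severity item (0 = not sick ... 7 =
# severely sick) with a threshold at 6; class names are placeholders.
def binarise_severity(severity: int, threshold: int = 6) -> str:
    return "C" if severity >= threshold else "NC"  # C: cold, NC: no cold

assert binarise_severity(7) == "C"
assert binarise_severity(3) == "NC"
```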

Düsseldorf Sleepy Language (SLEEP) Corpus. This corpus [21] contains speech recordings of 915 individuals (364 f, 551 m) at different levels of sleepiness, rated on the Karolinska Sleepiness Scale (KSS, 1–9, where 9 denotes extreme sleepiness). The participants performed various pre-defined speaking tasks and read out text passages. Moreover, spontaneous speech was collected in the form of elicited narrative content. The sessions, which lasted roughly one hour per participant, were held between 6 am and 12 pm in order to capture high variability in the levels of perceived sleepiness. Using this dataset, the sleepiness of a speaker can be assessed as a regression problem. Continuous recognition of sleepiness is of high relevance for sleep disorder monitoring.
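Since sleepiness is modelled as a regression target, evaluation typically relies on a rank correlation; the sketch below trains a generic support vector regressor on random placeholder features and scores it with Spearman's ρ (cf. Table 2). Data, features, and hyperparameters are illustrative and not taken from the corpus.

```python
# Minimal regression-and-evaluation sketch with Spearman's rho as metric.
# Features and KSS labels are random placeholders, not corpus data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 40)), rng.integers(1, 10, size=200)
X_test, y_test = rng.normal(size=(50, 40)), rng.integers(1, 10, size=50)

model = SVR(C=1.0).fit(X_train, y_train)           # any regressor works here
rho, _ = spearmanr(y_test, model.predict(X_test))  # rank correlation in [-1, 1]
print(f"Spearman's rho: {rho:.3f}")
```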

UCL Speech Breath Monitoring (UCL-SBM) Corpus. This corpus contains spontaneous speech recordings made in a quiet office space, together with recordings from a piezoelectric respiratory belt worn by the subjects. All signals were sampled at 40 kHz; speech was downsampled to 16 kHz and the breath-belt signal to 25 Hz in post-processing [18]. All 49 speakers (29 f, 20 m) reported English as a primary language; their ages range from 18 to approximately 55 years (mean age 24 years, std. dev. ~10 years). Breathing patterns also provide medical doctors with vital information about an individual's respiratory and speech planning [4].
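The two sampling rates quoted above can be reproduced with standard polyphase resampling; the snippet below uses synthetic stand-ins for the speech and belt signals and is not the corpus' actual post-processing script.

```python
# Polyphase resampling of the two streams: speech 40 kHz -> 16 kHz,
# respiratory belt 40 kHz -> 25 Hz. Signals below are synthetic stand-ins.
import numpy as np
from scipy.signal import resample_poly

fs_orig = 40_000
t = np.arange(0, 10, 1 / fs_orig)                      # 10 s of dummy signals
speech = np.random.default_rng(1).normal(size=t.size)  # stand-in for speech
belt = np.sin(2 * np.pi * 0.3 * t)                     # ~0.3 Hz breathing cycle

speech_16k = resample_poly(speech, up=2, down=5)       # 40 kHz -> 16 kHz
belt_25hz = resample_poly(belt, up=1, down=1600)       # 40 kHz -> 25 Hz
print(speech_16k.shape, belt_25hz.shape)               # (160000,), (250,)
```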

Heart Sounds Shenzhen (HSS) Corpus. The HSS corpus, provided by the Shenzhen University General Hospital, contains heart sounds gathered from 170 subjects (55 f, 115 m; ages from 21 to 88 years, mean age 65.4 years, std. dev. 13.2 years) with various health conditions, such as coronary heart disease, heart failure, and arrhythmia. The acoustic signals were recorded using an electronic stethoscope with a 4 kHz sampling rate and a 20 Hz–2 kHz frequency response. Three types of heartbeats (normal, mild, and moderate/severe) have to be classified (cf. Table 1). Automatic machine learning based approaches could help to monitor patients with unclear symptoms of heartbeat abnormalities.
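The three-class task is scored with Unweighted Average Recall (UAR, cf. Table 2), which in scikit-learn corresponds to macro-averaged recall; the toy labels below are made up purely to show the computation.

```python
# UAR = mean of the per-class recalls; toy labels for illustration only.
from sklearn.metrics import recall_score

y_true = ["normal", "mild", "severe", "normal", "mild", "severe"]
y_pred = ["normal", "mild", "mild",   "normal", "severe", "severe"]

uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR: {uar:.3f}")  # (1.0 + 0.5 + 0.5) / 3 ≈ 0.667
```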

Munich-Passau Snore Sound Corpus (MPSSC). The MPSSC was introduced for the classification of snore sounds by their excitation location within the upper airways. The corpus contains audio samples of 828 snore events from 219 subjects (cf. Table 1). The number of recordings per class is unbalanced: 84% of the samples belong to the classes Velum (V) and Oropharyngeal lateral walls (O), 11% are Epiglottis (E) events, and 5% are Tongue (T) snores. This is in line with the probability of occurrence during normal sleep [12].
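Such skewed class distributions are commonly counteracted by class weighting during training; the following sketch uses a linear SVM with balanced class weights on placeholder features. Only the combined V/O share (84%), the E share (11%), and the T share (5%) follow the description above, and the individual V/O split is chosen arbitrarily.

```python
# Class-weighted linear SVM as a simple remedy for the class imbalance.
# Features are random placeholders; the V/O split below is arbitrary.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X = rng.normal(size=(828, 64))                      # one row per snore event
y = rng.choice(["V", "O", "E", "T"], size=828, p=[0.42, 0.42, 0.11, 0.05])

clf = LinearSVC(class_weight="balanced", C=0.01).fit(X, y)
print(clf.classes_)                                 # ['E' 'O' 'T' 'V']
```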

Table 2. Results for all seven introduced corpora. The official challenge baselines and the winners of each sub-challenge are provided. UAR: Unweighted Average Recall. PCC: Pearson's correlation coefficient. \(\rho \): Spearman's correlation coefficient. *: [2] was a separate submission and not part of the sub-challenge.

3 State-of-the-Art Methodologies and Results

This section provides results from the winners of each sub-challenge (cf. Table 2). Further, the results are compared with the performance of four machine learning and deep learning baseline systems of ComParE, namely openSMILE [7], End2You [24], auDeep [1], and Deep Spectrum [2]. Each baseline system utilises a different methodology to extract or learn features from the audio signals. In particular, openSMILE extracts expert-designed features such as pitch, energy, and prosody for specific speech and audio tasks. End2You follows an end-to-end learning paradigm, extracting features from the raw audio with a convolutional network and performing the final classification with a subsequent recurrent network. auDeep makes use of recurrent sequence-to-sequence autoencoders for unsupervised representation learning, and Deep Spectrum applies transfer learning, using pre-trained image convolutional networks to extract deep features from spectrogram plots of the audio.
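As an example of the expert-feature route, the sketch below extracts ComParE functionals with the opensmile Python package and trains a linear SVM on them; file lists, labels, and the SVM complexity are placeholders and do not reproduce the official baseline configuration.

```python
# Expert-feature baseline sketch: ComParE 2016 functionals + linear SVM.
# File names, labels, and hyperparameters are placeholders.
import opensmile
from sklearn.svm import LinearSVC

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,     # 6373 functionals
    feature_level=opensmile.FeatureLevel.Functionals,
)

train_files = ["train_0001.wav", "train_0002.wav"]      # placeholder lists
train_labels = ["healthy", "sick"]
test_files = ["test_0001.wav"]

X_train = smile.process_files(train_files)              # one feature row per file
X_test = smile.process_files(test_files)

clf = LinearSVC(C=1e-4, class_weight="balanced").fit(X_train, train_labels)
print(clf.predict(X_test))
```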

4 Conclusions and Future Work

We have carefully selected seven medical datasets (three speech-based, three body-acoustics-based, plus one 'in-betweener', breathing) for audio-based early diagnosis of various health issues (cf. Sect. 2), and demonstrated the suitability of (deep) computer audition methods for all introduced tasks (cf. Sect. 3). For data of a more complex nature (e. g., SLEEP or C19C), we showed that unsupervised representation learning provides better results than the other baselines. For the regression task on UCL-SBM, End2You (composed of convolutional and recurrent blocks) outperforms the other systems, demonstrating its suitability for modelling time-continuous data. Further, we recommend transfer learning approaches (e. g., Deep Spectrum) for audio health monitoring tasks where data is scarce, as such models are pre-trained on larger datasets. As a next step, more holistic views on audio-based health monitoring will be needed that do not stop at distinguishing 'healthy' from 'sick', but target the bigger picture of an individual's health state synergistically. With this, and with more data or data-efficient strategies, audio-based health monitoring in everyday life appears to be around the corner.