
1 Introduction

“Inner clock adapts our physiology to the dramatically different phases of the day, […] regulating critical functions such as behavior, hormone levels, sleep, body temperature and metabolism”. This phenomenon is known as the biological clock or circadian rhythm [1].

Circadian alterations have significant effects on our lives. Among other consequences, they can lead to cardiovascular diseases, cancer and sleep disorders. Lung function, immune function, angiogenesis and many other processes are significantly influenced by the circadian system, and their disruption degrades quality of life [2]. Moreover, recent research [3] has shown that patients with major alterations in circadian cycles are significantly less likely to survive cancer treatments.

Thus, in order to reduce the significant side effects of circadian alterations, including in chronotherapy and chronomedicine, new methods are needed to determine the state of a person’s circadian clock(s) in real-time.

Currently, the methods available for circadian measurements are not suitable for continuous, simultaneous monitoring at home. In fact, circadian cycles are measured via laboratory tests (i.e., hormones measured in blood, urine or saliva specimens), which are expensive and not easy to perform at home. More recently, actigraphy has been explored for circadian rhythm estimation at home [4]. Nonetheless, benchmark methods are not yet available, and non-invasive behavioral (i.e., actigraphy) and physiological monitoring have not yet been combined.

Therefore, combining wearable sensors, biomedical signal analysis and machine learning techniques to develop methods and tools that quantify alterations in the internal clock could transform medicine from primarily intervention-focused to predictive and preventative.

Several cortisol indices, such as amplitude, frequency and phase, are commonly used in the literature to determine circadian alterations [5]. In particular, the peak-to-trough difference is one of the most widely used indices to assess rhythm alterations [6].

This paper presents preliminary results from a feasibility study conducted on healthy subjects to identify a model that monitors circadian rhythms (peaks and troughs) in real-time using artificial intelligence and unobtrusive wearable behavioral and physiological monitors.

2 Methods and Materials

2.1 Study Participants

Eight healthy participants (4 men and 4 women, mean age (SD): 26.2 (3.3) years), in whom no abnormalities were detected from their medical history, were recruited to the study. Baseline characteristics, such as age, height, weight, general health status and use of medications, were collected during a baseline assessment and briefing session. The participants reported no history of heart disease, diabetes, systemic hypertension or hypotension, or sleep disorders, and no consumption throughout the course of the study of any medication that could alter the physiological signals being acquired. They had a healthy body mass index (BMI), i.e., between 18.5 and 24.9.

The Biomedical and Scientific Research Ethics Committee of the University of Warwick approved this study (ref. REGO-2018-2205), ensuring participant anonymity and the absence of side effects or possible disadvantages for the participants. All participants were carefully instructed, and informed consent was acquired prior to the experiment. Participants were compensated with a fixed fee.

2.2 Protocol

Participants were asked to wear two different wearable devices for three nights and two consecutive days. The first wearable device, the Zephyr BioPatch, recorded ECG, breathing rate and raw 3-axis accelerations, at sampling rates of 250 Hz, 18 Hz and 100 Hz respectively. The second was the iButton, a wireless data logger that can be used to obtain a valid measurement of human skin temperature. One iButton was attached to each ankle, and another was attached on each side of the chest, one or two inches below the clavicle in the mid-clavicular line, in order to measure distal and proximal body temperature respectively [7]. The temperature sensors took a measurement every 10 min.

In this study, cortisol was used as a marker of circadian rhythm. Participants were instructed on how to take and store saliva samples, so that they could be sent to a specialized laboratory and analyzed for levels of salivary cortisol. Participants were instructed to take a sample immediately upon waking, and then to take further samples every two hours for the rest of the day until they went to bed. Saliva samples were collected by the passive drool technique for two consecutive days, and salivary cortisol was quantified using the Salimetrics® Cortisol Enzyme Immunoassay Kit, an immunoassay specifically designed and validated for the quantitative measurement of salivary cortisol.

For each subject, behavioral and physiological signals were acquired for three nights and two days by wearable devices, and two days’ worth of salivary samples were taken.

Participants were asked to report physical activity and food intake [8]. Participants were also asked to complete the Pittsburgh Sleep Quality Index (PSQI) instrument [9] and a consensus sleep diary [10]. The PSQI and sleep diary results were used to compare reported sleep disturbances with alterations in circadian cycles.

All of the participants were asked to maintain ordinary daily schedules during the experiments.

2.3 Data Analysis

The maximum and minimum cortisol levels were obtained for each subject for the two days, and these were labelled as “peak” and “trough” respectively. Where the minimum or maximum cortisol level for a period appeared in more than one measurement, each of those measurements was labelled.
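As a minimal illustration of this labelling step (a sketch only, assuming the saliva samples are held in a pandas DataFrame with hypothetical columns `subject`, `day`, `time` and `cortisol`, and that peaks and troughs are identified per subject and per day):

```python
import pandas as pd

def label_peaks_troughs(samples: pd.DataFrame) -> pd.DataFrame:
    """Label daily cortisol maxima as 'peak' and minima as 'trough'.

    `samples` is assumed to have columns: subject, day, time, cortisol.
    Ties (several samples sharing the daily max/min) are all labelled.
    """
    samples = samples.copy()
    samples["label"] = None
    for _, day_df in samples.groupby(["subject", "day"]):
        peak_value = day_df["cortisol"].max()
        trough_value = day_df["cortisol"].min()
        samples.loc[day_df.index[day_df["cortisol"] == peak_value], "label"] = "peak"
        samples.loc[day_df.index[day_df["cortisol"] == trough_value], "label"] = "trough"
    return samples
```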

Since physical exercise can greatly affect cortisol levels [11], only periods of time with similar levels of activity were considered. For each peak and trough, a window of two hours around the time of the saliva measurement was taken. Within each window, activity level and posture were evaluated. The activity reported by the Zephyr BioPatch is a measure of second-to-second activity and is sensitive to small movements. In order to reduce this sensitivity, the signal was smoothed using a moving average with a window of 60 s. This smoothed signal, in essence, represents minute-to-minute activity.
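An illustrative sketch of the smoothing step (assuming the activity channel is available as a 1 Hz, second-by-second signal, so that a 60-sample kernel corresponds to 60 s):

```python
import numpy as np

def smooth_activity(activity: np.ndarray, window_s: int = 60, fs_hz: float = 1.0) -> np.ndarray:
    """Smooth a second-to-second activity signal with a moving average.

    A 60 s window turns the signal into an estimate of minute-to-minute activity.
    Assumes `activity` is a 1-D array sampled at `fs_hz` (1 Hz assumed here).
    """
    window = int(window_s * fs_hz)
    kernel = np.ones(window) / window
    # mode="same" keeps the output aligned with the input; edges are partially averaged.
    return np.convolve(activity, kernel, mode="same")
```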

In order to control for activity and posture in the two-hour window around each peak or trough, only times when activity was less than 0.2 (which corresponds to a level of activity less intense than walking) and posture was between −20° and 20° (which corresponds to times when the chest was roughly upright) were considered.
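A minimal sketch of this selection step, assuming the smoothed activity and the posture angle have been resampled onto a common time base within the two-hour window (array names are hypothetical):

```python
import numpy as np

def select_quiet_upright(activity: np.ndarray, posture_deg: np.ndarray,
                         activity_max: float = 0.2,
                         posture_range=(-20.0, 20.0)) -> np.ndarray:
    """Boolean mask of samples with low activity and a roughly upright chest.

    Thresholds follow the criteria in the text: activity < 0.2 (less intense
    than walking) and posture between -20 and 20 degrees.
    """
    low_activity = activity < activity_max
    upright = (posture_deg > posture_range[0]) & (posture_deg < posture_range[1])
    return low_activity & upright
```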

For each selected window of time, distal and proximal body temperature were also considered. Distal body temperature was calculated as the mean of all measurements from the ankle temperature sensors during the selected window, and proximal body temperature as the mean of all measurements from the clavicle temperature sensors during the same window.

For each selected window of time, the RR interval time-series was extracted from the ECG records using an automatic QRS detector, WQRS, available in PhysioNet’s toolkit [12]. QRS review and correction were performed using PhysioNet’s WAVE. The fraction of RR intervals labelled as normal-to-normal (NN) intervals was computed as the NN/RR ratio, which was used to measure the reliability of the data: records with an NN/RR ratio below the 90% threshold were excluded from the analysis. Heart rate variability (HRV) analysis was performed on 5-min excerpts using Kubios (Premium version) [13]. Time- and frequency-domain features were analyzed according to international guidelines [14], while non-linear measures were analyzed as described in [15]. Frequency-domain features were extracted from the power spectrum estimated with autoregressive (AR) model methods [13]. Finally, 20 HRV features were extracted and examined.
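The actual pipeline relied on WQRS, WAVE and Kubios; purely as an illustration of the quality gate and of two standard time-domain features (SDNN and RMSSD), a NumPy sketch could look as follows (annotation labels and array names are assumptions):

```python
import numpy as np

def nn_rr_ratio(rr_labels):
    """Fraction of detected RR intervals annotated as normal-to-normal ('N')."""
    labels = np.asarray(rr_labels)
    return float(np.mean(labels == "N"))

def passes_quality(rr_labels, threshold=0.9):
    """Quality gate from the text: keep records with NN/RR ratio of at least 90%."""
    return nn_rr_ratio(rr_labels) >= threshold

def basic_time_domain_hrv(nn_intervals_ms):
    """Two standard time-domain HRV features for a 5-min excerpt of NN intervals (ms)."""
    nn = np.asarray(nn_intervals_ms, dtype=float)
    sdnn = float(np.std(nn, ddof=1))                      # SDNN: overall variability
    rmssd = float(np.sqrt(np.mean(np.diff(nn) ** 2)))     # RMSSD: beat-to-beat variability
    return {"SDNN": sdnn, "RMSSD": rmssd}
```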

2.4 Statistical Analysis and Classification

Given that the HRV features were found to be non-normally distributed, the median (MD), median absolute deviation (MAD) and interquartile range (IQR) (i.e., non-parametric descriptors) were computed for each repetition. The non-parametric Wilcoxon signed-rank test was used to assess statistical differences in HRV features and temperature variation between the “peak” and “trough” cortisol measurements.
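A minimal sketch of these computations using SciPy (array names are hypothetical; entries at the same index are assumed to be paired by subject):

```python
import numpy as np
from scipy.stats import wilcoxon

def nonparametric_descriptors(x):
    """Median (MD), median absolute deviation (MAD) and interquartile range (IQR)."""
    x = np.asarray(x, dtype=float)
    md = float(np.median(x))
    mad = float(np.median(np.abs(x - md)))
    iqr = float(np.percentile(x, 75) - np.percentile(x, 25))
    return {"MD": md, "MAD": mad, "IQR": iqr}

def compare_peak_trough(feature_at_peak, feature_at_trough):
    """Paired Wilcoxon signed-rank test for one HRV or temperature feature."""
    _, p_value = wilcoxon(feature_at_peak, feature_at_trough)
    return p_value
```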

In order to optimize the performance of the machine learning models, the number of features should be limited by the number of instances of the event to detect (in this case, a peak or trough in cortisol levels). Furthermore, reducing the number of features greatly simplifies the medical interpretation of any results achieved. Therefore, feature selection was performed using relevance and redundancy analysis as described in [16]. Training of the machine-learning models (including algorithm parameter tuning) was performed using a leave-one-out cross-validation approach on 6 participants. Binary classification performance measures were adopted according to the standards reported in [15]. Five different machine-learning methods were used to train, validate and test the classifiers (SVM, MLP, IBk, RF and LDA); the final model was chosen as the classifier achieving the highest area under the curve (AUC), which is a reliable estimator of both sensitivity and specificity. The chosen model was then tested on the remaining 2 participants.
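Purely as an illustration (not the authors’ actual pipeline, which used the relevance/redundancy feature selection of [16] and parameter tuning), a leave-one-subject-out model comparison could be sketched as follows, where `X_train`, `y_train` and `subject_ids` are hypothetical feature, label and subject arrays, and the scikit-learn classifiers stand in for the five methods named above:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# scikit-learn stand-ins for the five methods named in the text (SVM, MLP, IBk, RF, LDA).
models = {
    "SVM": SVC(probability=True),
    "MLP": MLPClassifier(max_iter=2000),
    "IBk": KNeighborsClassifier(),
    "RF": RandomForestClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
}

def loso_auc(model, X, y, subjects):
    """Leave-one-subject-out cross-validated AUC, pooling predictions across folds."""
    splitter = LeaveOneGroupOut()
    scores, truths = [], []
    for train_idx, test_idx in splitter.split(X, y, groups=subjects):
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.predict_proba(X[test_idx])[:, 1])
        truths.append(y[test_idx])
    return roc_auc_score(np.concatenate(truths), np.concatenate(scores))

# Pick the classifier with the highest cross-validated AUC on the training participants:
# best = max(models, key=lambda name: loso_auc(models[name], X_train, y_train, subject_ids))
```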

3 Results and Conclusion

Preliminary analysis shows promising results for automatically classifying cortisol levels as high or low (peaks or troughs) based on HRV and temperature data extracted during periods in which activity and posture are controlled for. Some moderately successful classifiers were produced: Random Forest outperformed the other classifiers, achieving 78% AUC and 73% overall accuracy. These results provide encouragement that such a protocol may be successful with further refinement, and that wearable devices (through the measurement of HRV) may indeed be useful for real-time monitoring of the circadian rhythm.