Top-5 machine learning + biosignals boosters in remote health, wellbeing, and safety monitoring
Scientific recipes to boost your product with ML power
PPG and ECG have been known for a long time and are used in medicine to check pulse and arrhythmias, as well as to diagnose pathologies of the heart. But these methods have mainly been used in clinics and require qualified medical professionals. They have gained new momentum in the last five years for two reasons: the arrival of these technologies in wearable devices such as smartwatches, and the development of AI/ML, which has significantly expanded the range of applications and automated diagnostics. ECG has all the capabilities of PPG and provides additional ones. However, ECG is less accessible because it requires the direct participation of the patient: either a device with several electrodes, or touching the measuring electrodes with the hands. PPG has long been used in smartwatches and fitness wristbands, and in the past few years smartwatches from several manufacturers, such as Apple and Samsung, have started to measure ECG as well.
In this article, we offer a brief review of scientific papers on the use of AI/ML to analyse and interpret PPG and ECG, drawing your attention to the variety of applications.
What are PPG and ECG?
Photoplethysmography (PPG) is a simple optical technique used to detect volumetric changes in blood in the peripheral circulation. Variations in blood volume correspond to heartbeats. An electrocardiogram (ECG or EKG) is a plot of the heart’s electrical activity, voltage versus time, measured using electrodes placed on the skin. These electrodes detect small electrical changes resulting from the heart muscle’s depolarisation, followed by repolarisation, during each cardiac cycle (heartbeat). Changes in the normal ECG pattern occur because of numerous cardiac abnormalities, including cardiac arrhythmias and heart attacks. If you struggle to differentiate ECG and PPG, we recommend diving deeper into the following resources:
PPG and ECG have a set of valuable features used for diagnosis by both therapists and machine learning models. The main one is heart rate (HR); however, heart rate variability (HRV) is more informative. HRV is the physiological phenomenon of variation in the time interval between heartbeats. See An Overview of Heart Rate Variability Metrics and Norms. There are up to 30 different HRV features, which can be categorised into three groups: time-domain indices, frequency-domain measures, and non-linear measurements. Another group of features, specific to ECG, is the PQRST complex. All of the waves on an ECG tracing, and the intervals between them, have a predictable duration, a range of acceptable amplitudes (voltages), and a typical morphology. Changes in the heart’s structure and its surroundings (including blood composition) alter these patterns, so any deviation from the normal tracing indicates potential pathology and is therefore of clinical significance.
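As a minimal illustration of how simple these features are to derive, here is a sketch of a few common time-domain HRV metrics computed from RR intervals (the input format in milliseconds and the function name are our assumptions, not taken from any cited paper):

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Compute a few common time-domain HRV features from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)  # successive differences between adjacent beats
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),               # average heart rate
        "sdnn_ms": rr.std(ddof=1),                         # overall variability
        "rmssd_ms": np.sqrt(np.mean(diff ** 2)),           # short-term variability
        "pnn50_pct": 100.0 * np.mean(np.abs(diff) > 50),   # % of successive diffs > 50 ms
    }

# Example: a short recording at roughly 75 bpm
print(time_domain_hrv([812, 790, 805, 821, 798, 776, 810, 795]))
```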
Booster #1: cardiac cycle abnormalities
Automatic ECG diagnosis using deep learning
This article proposed a classification architecture based on convolutional neural networks (CNNs), which have gained plenty of attention recently. The database consisted of more than 4,000 ECG signals. The confusion matrix derived from testing on the dataset indicated 99% accuracy for the “normal” class. For the “atrial premature beat” class, ECG segments were correctly classified 100% of the time, and for the “premature ventricular contraction” class 96% of the time. Overall, the average classification accuracy was 98.33%, with a sensitivity (SNS) of 98.33% and a specificity (SPC) of 98.35%.
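The paper’s exact architecture is not reproduced here, but a minimal 1D CNN for classifying fixed-length ECG beat segments into three classes might look like this (the layer sizes and the 360-sample window are our assumptions, not the authors’ configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_beat_classifier(segment_len=360, n_classes=3):
    """Small 1D CNN that maps an ECG segment to one of n_classes beat types."""
    return models.Sequential([
        layers.Input(shape=(segment_len, 1)),           # one ECG lead, fixed-length window
        layers.Conv1D(16, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # normal / APB / PVC
    ])

model = build_beat_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, ...) once beat segments and labels are prepared
```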
In this paper, the authors present an unsupervised time-series anomaly detection algorithm. It trains recurrent Long Short-Term Memory (LSTM) networks to predict normal time-series behaviour, and the Mahalanobis distance of the prediction errors is used as an indicator of abnormal behaviour in the signal. In the LSTM-AD algorithm, points with a Mahalanobis distance above a specified anomaly threshold are flagged as deviant. The method was validated on the well-known MIT-BIH ECG database and compared to state-of-the-art anomaly detectors (NuPIC, ADVec). The mean F1-score, the harmonic mean of the positive predictive value and sensitivity, for anomaly detection was 0.83.
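Here is a hedged sketch of the thresholding step only, assuming prediction errors from an already-trained LSTM predictor are available as an array (the threshold value and function name are illustrative, not from the paper):

```python
import numpy as np

def mahalanobis_anomalies(errors, threshold=3.0):
    """Flag time steps whose prediction error is far from the 'normal' error distribution.

    errors: (n_steps, n_features) prediction errors from a trained LSTM predictor.
    In practice the mean and covariance should be estimated on normal (healthy) data.
    """
    mu = errors.mean(axis=0)
    cov = np.cov(errors, rowvar=False) + 1e-6 * np.eye(errors.shape[1])  # regularised covariance
    inv_cov = np.linalg.inv(cov)
    centred = errors - mu
    # Squared Mahalanobis distance for each time step
    d2 = np.einsum("ij,jk,ik->i", centred, inv_cov, centred)
    return np.sqrt(d2) > threshold  # True where the step looks anomalous

# Usage: flags = mahalanobis_anomalies(lstm_prediction_errors)
```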
Cardiologist-level arrhythmia detection using machine and deep learning
In the Apple Heart Study, the photoplethysmography-based monitoring algorithm of the Apple Watch was evaluated on 419 297 participants and demonstrated a precision of 0.84 and a recall of 0.71 for atrial fibrillation (AF). The underlying AI/ML algorithms, of course, are not disclosed.
In a similar study conducted in China, AF screening using the photoplethysmography-based monitoring technology of Huawei wristbands and smartwatches was evaluated in 187 912 individuals; precision and recall were 0.87 and 0.92, respectively.
In this article, automated, cardiologist-level classification of 12 different rhythm classes was obtained with a deep neural network (DNN) trained on 91 232 single-lead ECGs from 53 549 patients. The average F1 score of the DNN (0.837) exceeded that of average cardiologists (0.780). These findings demonstrate that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with diagnostic performance similar to that of cardiologists.
Booster #2: Stress & Recovery
DeStress: deep learning for mental stress identification in firefighters
An emerging trend since 2019 is the use of unsupervised deep learning to classify mental stress from HRV using autoencoders. This work combines traditional K-Means clustering on engineered time- and frequency-domain features with convolutional autoencoders and long short-term memory (LSTM) autoencoders. The clusters produced by the convolutional autoencoders consistently and successfully stratified stressed versus normal samples, and there was a significant difference between the mean RMSSD and the LF/HF ratio across the clusters encoded by the convolutional autoencoders.
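To make the clustering idea concrete, here is a minimal sketch (not the paper’s pipeline) that clusters per-window HRV feature vectors with K-Means and then inspects RMSSD per cluster; the hrv_features matrix is assumed to be precomputed, for example with time-domain functions like the one shown earlier plus frequency-domain features:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hrv_features: (n_windows, n_features) matrix, e.g. [RMSSD, SDNN, LF/HF, ...] per window
hrv_features = np.random.rand(200, 4)  # placeholder data for illustration only

scaled = StandardScaler().fit_transform(hrv_features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

# Compare mean RMSSD (assumed to be column 0) between the two clusters:
# lower RMSSD typically accompanies higher sympathetic activation, i.e. more stress.
for c in (0, 1):
    print(f"cluster {c}: mean RMSSD = {hrv_features[labels == c, 0].mean():.3f}")
```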
Biosignals in virtual reality (metaverse?)
Other authors applied a kernel-based extreme learning machine (K-ELM) to classify stress levels induced by a VR task from the photoplethysmogram (PPG), electrodermal activity (EDA), and skin temperature (SKT), covering five different stress levels: baseline, mild stress, moderate stress, severe stress, and recovery. As a result, the average classification accuracy was over 95%.
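A kernel extreme learning machine is essentially a closed-form kernel classifier. Below is a hedged, from-scratch sketch of our own simplified formulation (not the authors’ code); the RBF kernel, the regularisation constant C, and the class names are all assumptions for illustration:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Minimal kernel extreme learning machine for multi-class classification."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X_train = X
        T = np.eye(y.max() + 1)[y]  # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        # Output weights: beta = (I/C + K)^-1 T
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X):
        scores = rbf_kernel(X, self.X_train, self.gamma) @ self.beta
        return scores.argmax(axis=1)

# Usage with per-window PPG/EDA/SKT feature vectors and 5 stress-level labels (0..4):
# model = KernelELM(C=10, gamma=0.5).fit(X_train, y_train)
# accuracy = (model.predict(X_test) == y_test).mean()
```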
A more general review of the intersection of stress and biosignals can be found in this work.
Booster #3: Fitness and Endurance
Fatigue detection
Using two types of biosignals, a novel approach detects the onset of fatigue by combining a dimensionless global fatigue descriptor (GFD) with a support vector machine (SVM) classifier. Based on nine main combined features, the system achieves a fatigue-regime classification performance of 0.82 ± 0.24, enabling a successful preventive assessment when dangerous fatigue levels are reached.
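As a hedged sketch of the classification step only (the GFD feature extraction itself is specific to the paper and not reproduced here), a binary SVM over precomputed fatigue features might be evaluated like this:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_samples, 9) matrix of combined fatigue features (GFD-derived in the paper)
# y: 1 for "fatigued" windows, 0 for "fresh" windows
def evaluate_fatigue_classifier(X, y):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    return scores.mean(), scores.std()
```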
Booster #4: Sleep quality
Using heart rate and sequential models for sleep stages monitoring
This study aimed to construct an accessible and objective sleep stage monitoring method. Sleep stages were determined from HRV feature values calculated from the ECG. A recurrent neural network (RNN), a Hidden Markov Model (HMM), a neural network (NN), a support vector machine (SVM), and a random forest (RF) were compared as classifiers for sleep stage determination. The maximum recognition rate of 72% was achieved with the RNN and HMM, which exploit the time-series structure of the observed values.
Using deep learning with HRV data
A long short-term memory (LSTM) network is proposed as a solution to model long-term cardiac sleep architecture information. It was validated on a comprehensive data set of 292 participants, 584 nights, and 541 214 30-second sleep segments annotated according to the Rechtschaffen and Kales (R&K) standard, covering a wide range of ages and pathological profiles. The model was shown to outperform state-of-the-art approaches, which are often limited to non-temporal or short-term recurrent classifiers, achieving a Cohen’s kappa of 0.61 ± 0.15 and 77.00 ± 8.90% accuracy across the entire database.
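A minimal sketch of this kind of recurrent sleep stager, assuming each night is a sequence of 30-second epochs described by a small HRV feature vector (the layer sizes, the 8-hour night length, and the four stage classes are our assumptions, not the paper’s configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sleep_stager(n_epochs=960, n_features=8, n_stages=4):
    """Bidirectional LSTM that assigns a sleep stage to every 30-second epoch of a night."""
    return models.Sequential([
        layers.Input(shape=(n_epochs, n_features)),  # sequence of per-epoch HRV features
        layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
        layers.TimeDistributed(layers.Dense(n_stages, activation="softmax")),
    ])

model = build_sleep_stager()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(night_features, epoch_stage_labels, ...) once feature sequences are prepared.
# Agreement with manual scoring can then be checked with sklearn.metrics.cohen_kappa_score.
```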
Noise-reduced fractal property of HRV and sleep stage classification
This study aimed to examine distinctive features related to sleep stages (wake, light sleep, deep sleep) in heart rate variability (HRV) and to evaluate their usefulness in classifying sleep stages. The authors looked at the DFA alpha1 sequence (DFAseq), in which the DFA (detrended fluctuation analysis) alpha1 value is calculated at every R point of the HRV signal over time. A prediction model based on the noise-reduced sequence (NR-DFAseq) distinguished the wake stage from the other sleep stages with 77% sensitivity and 73% specificity.
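DFA alpha1 itself is straightforward to compute. Here is a hedged sketch over a window of RR intervals (the box sizes of 4 to 16 beats are the conventional short-term range and our choice, not necessarily the paper’s):

```python
import numpy as np

def dfa_alpha1(rr_ms, scales=range(4, 17)):
    """Short-term scaling exponent (alpha1) of an RR-interval series via DFA."""
    rr = np.asarray(rr_ms, dtype=float)
    y = np.cumsum(rr - rr.mean())  # integrated, mean-centred series
    fluctuations = []
    for n in scales:
        n_boxes = len(y) // n
        f2 = []
        for i in range(n_boxes):
            box = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, box, 1), t)  # linear detrending per box
            f2.append(np.mean((box - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # alpha1 is the slope of log F(n) versus log n over the short scales
    alpha1, _ = np.polyfit(np.log(list(scales)), np.log(fluctuations), 1)
    return alpha1

# Usage: alpha1 values computed over a sliding window of RR intervals form the DFAseq.
```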
Booster #5: Emotion recognition
Affective sounds and heart rate variability reaction
This paper reports how emotional states elicited by affective sounds can be effectively recognised using heart rate variability (HRV) alone. The affective sounds were grouped into four arousal (intensity) levels and two valence levels (unpleasant and pleasant). Seventeen HRV features showing significant changes along the arousal and valence dimensions were used as inputs to a classification system for recognising the four classes of arousal and two classes of valence. A quadratic discriminant classifier achieved a recognition accuracy of 84.72% on the valence dimension and 84.26% on the arousal dimension.
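A hedged sketch of that final classification step with scikit-learn, assuming a 17-column HRV feature matrix and valence labels are already prepared (the regularisation value is our choice):

```python
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_samples, 17) HRV features per sound stimulus, y: 0 = unpleasant, 1 = pleasant
def valence_accuracy(X, y):
    clf = make_pipeline(StandardScaler(), QuadraticDiscriminantAnalysis(reg_param=0.1))
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
```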
Heart rate and four-class emotion recognition
Another paper investigates whether HR signals can be used to classify four emotion classes, based on Russell’s emotion model, in a virtual reality (VR) environment, using a Support Vector Machine (SVM), K-Nearest Neighbours (KNN), and a Random Forest (RF). For intra-subject classification, all three classifiers achieved 100% as the highest accuracy, while inter-subject classification reached 46.7% for SVM, 42.9% for KNN, and 43.3% for RF.
ECG and finger pulse data fusion for emotion recognition
Last but not least, this study aimed to classify emotional responses using a simple dynamic signal-processing technique and fusion frameworks. The electrocardiogram and finger pulse activity of 35 participants were recorded at rest and while the subjects listened to music intended to stimulate certain emotions; the four emotion categories were happiness, sadness, peacefulness, and fear. From estimates of heart rate variability (HRV) and pulse rate variability (PRV), four Poincaré indices at 10 lags were extracted. A support vector machine classifier was used for emotion classification, and both feature-level (FL) and decision-level (DL) fusion schemes were examined. In both cases, the classification rate improved to up to 92% (with a sensitivity of 95% and a specificity of 83.33%).
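As a sketch of what lagged Poincaré indices look like in code (these are our own simplified SD1/SD2 definitions at a given lag; the exact four indices used in the paper may differ):

```python
import numpy as np

def poincare_indices(rr_ms, lag=1):
    """SD1/SD2 Poincare descriptors of an RR (or pulse) interval series at a given lag."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-lag], rr[lag:]                   # pairs (RR_n, RR_{n+lag})
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)   # dispersion perpendicular to the identity line
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)   # dispersion along the identity line
    return {"SD1": sd1, "SD2": sd2, "SD1/SD2": sd1 / sd2, "area": np.pi * sd1 * sd2}

# Lagged Poincare features for lags 1..10, for either an HRV or a PRV interval series:
# features = [poincare_indices(rr, lag=m) for m in range(1, 11)]
```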
Instead of a conclusion
This review was aimed at giving you a feel for how applying AI/ML to now-classic biosignal analysis can open up product opportunities. This has become a reality with the rapid development of AI computing and human-centred IoT. New product features mined from wearable biosignals have great potential to boost your monitoring solutions for health, wellbeing, and safety.
Neurons Lab builds robust cooperation with product companies and startups, relying on deep expertise in advanced HealthTech science, AI engineering, and design thinking. We make AI a reality for our partners: from research and product strategy to fundraising, and from product development and launch to building in-house AI expertise.
You can reach out to us at info@neurons-lab.com, or contact Paul, the author of this article and Managing Director & Partner, Healthcare Practice, directly at paul.bulai@neurons-lab.com.