Wearable sensors detect movement, collect biometric data, and monitor these signals in real time. The stress responses of the HPA axis, the Autonomic Nervous System, and the immune system can all be evaluated using such sensors. The effectiveness of individual-level models and machine learning techniques for stress detection from these continuous time series is being actively studied, and there is no single thresholding method that applies universally.
The Empatica E4 is one of the most advanced electronic watches available for personal health tracking and monitoring, and machine learning models are increasingly used to measure stress responses from its signals. This study provides a comprehensive overview of the current state of stress detection using machine learning methods, with an emphasis on the generalisability and reproducibility of results. We screened 973 papers on stress, machine learning, and wearables, and selected 33 for systematic review. Thanks to hardware improvements and the miniaturisation of components, more advanced technology can now be packed into smaller devices at a lower cost.
EDA is a robust, objective psycho-physiological biomarker of psychological stress, and class balancing is a frequent pre-processing step for wearable stress data. Differential privacy (DP) was employed to protect patient anonymity, and public datasets were used extensively. Feature engineering relies on summary statistics, such as skew and kurtosis, to extract meaningful features from physiological time series data.
Wearable devices have been used to identify stress using machine learning techniques, with Support Vector Machines (SVM), k-Nearest Neighbours (kNN), and tree-based models being the most common approaches. Careful selection of classification models is crucial to building an effective stress detection system. Higher accuracy rates were found with more involved labelling procedures, such as marking stressful and non-stressful periods during experimental recordings, self-reporting questionnaires, and third-party grading. Cross-validation is a form of resampling used to assess the reliability of machine learning models.
It can be used to identify over-fitting and selection bias, and gives insight into how well an algorithm may generalise to an independent dataset. For stress monitoring to be effective in the real world, machine learning models must be developed with sustainability, model bias, and ethics all taken into account. The primary objective of automatic stress identification and measurement is to construct robust, highly accurate machine learning algorithms that generalise successfully to unseen data.
Any electronic device that can be worn on a person's body is considered wearable technology. Wearables come in a wide variety of forms, from jewellery and accessories to medical devices and even clothing. While the term "wearable computing" may imply some level of processing or communications capability, the level of sophistication varies widely across wearables.
Advanced examples of wearable technology include holographic computers in the form of virtual reality (VR) headsets, augmented reality (AR) glasses, and artificially intelligent (AI) hearing aids from companies such as Google and Microsoft. At the other end of the complexity spectrum are simple devices such as disposable skin patches with sensors.
There is a wide range of practicality in today's wearable technologies, from web-enabled spectacles and Bluetooth headsets to web-enabled jewellery and fitness monitors like the Fitbit Charge. The purpose and mode of operation of wearables, whether designed for health and fitness or for media and entertainment, vary widely. Wearable technology often operates by including microprocessors, batteries, and internet connectivity to enable syncing of acquired data with other electronic devices such as mobile phones and computers. The sensors inside wearables can detect motion, collect biometric data for identification purposes, or track the wearer's location. The most popular wearables, such as activity trackers and smartwatches, use a wristband that records the wearer's movements and vital signs at regular intervals.
Many wearables are worn directly on the body or attached to clothing, but others only require proximity to the user to perform their intended function. People can still be tracked as they go about their day if they carry a cell phone, smart tag, or computer. Some wearables use optical sensors to measure heart rate or glucose levels, while others rely on remote smart sensors and accelerometers to monitor movement and speed. All of these wearable technologies share the ability to monitor information in real time.
The use of wearable sensors as a non-invasive way to collect biomarkers related to stress levels has shown potential. Biomarkers such as heart rate variability (HRV), electrodermal activity (EDA), and heart rate (HR) can be used to assess the stress responses of the Autonomic Nervous System (ANS), the immune system, and the Hypothalamic-Pituitary-Adrenal (HPA) axis. While the magnitude of the cortisol response remains the most commonly used indicator for stress assessment, developments in wearable technology have led to the release of a variety of consumer devices that can record HRV, EDA, and HR sensor signals. Researchers have also been applying machine learning methods to the collected biomarkers to develop models that might be able to predict the onset of stress.
The term "stress" refers to the physiological and psychological reaction of the body to any kind of demand, whether physical, emotional, or mental. When faced with such a disruption, the brain and the body react with a series of physiological changes known as the stress response. The stress response plays a vital evolutionary role in helping an organism adapt to a dynamically changing internal and external environment (Sun et al., 2019). This is accomplished by redistributing energy to the systems that need it most for the adaptational response and mobilising the resources necessary to sustain those systems (Del Giudice et al., 2011). There is currently no internationally accepted method for assessing stress, and researchers also lack a comprehensive framework for studying how organisms respond and adapt to their dynamic surroundings. Stress is treated as a dichotomous variable in this article and in the papers it reviews. In some of these studies, data were labelled with stressed and non-stressed time periods, and classifiers trained on these datasets could predict an observation as stressed or non-stressed. Other studies used datasets labelled with an everyday stress inventory score, and one study's dataset was labelled through observer scoring for stress (from 0 to 1, low to high), thresholded subject-wise to obtain the highest feasible balanced accuracy. (Siirtola, 2019) compared models trained as classifiers with a model trained using logistic regression, where a binary decision was produced by thresholding the resulting continuous prediction values. According to the analysed literature, there is no universally applicable thresholding technique.
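To make the thresholding step concrete, the sketch below shows one common, illustrative way to turn a model's continuous stress scores into binary stressed/non-stressed decisions. The threshold choice via Youden's J statistic is an assumption for illustration, not the method used in any of the cited studies, and the data shown are synthetic.

```python
# Illustrative sketch: converting continuous stress scores into binary labels.
# Choosing the cut-off that maximises (TPR - FPR) is one heuristic among many.
import numpy as np
from sklearn.metrics import roc_curve

def pick_threshold(y_true, y_score):
    """Return the cut-off that maximises TPR - FPR (Youden's J) on labelled data."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# y_true: known stressed (1) / non-stressed (0) labels from a labelled session
# y_score: continuous predictions from, e.g., a logistic regression model
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.7, 0.6, 0.4, 0.9, 0.8])

threshold = pick_threshold(y_true, y_score)
y_pred = (y_score >= threshold).astype(int)  # binary stress decisions
```

Because the score distribution differs between subjects and study protocols, a threshold tuned this way on one dataset may not transfer to another, which is consistent with the observation above that no universal thresholding technique exists.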
A growing body of research compares biomarker data collected in a study setting to data collected in real-world scenarios, and additional research investigates the impact of context on training and predictive power (Tempelaar et al., 2015). More and more research is evaluating person-specific models as opposed to generic, population-level models, with person-specific algorithms showing great potential as powerful predictors of stress, whereas the majority of the research included in this overview approached the development of machine learning algorithms for identifying stress as a single time-series problem (Reichstein et al., 2019).
Since the introduction of the first Fitbit in 2009 and the Empatica Embrace model in 2016 (Md Faridee et al., 2019), electronic watches for personal health tracking and monitoring have grown in popularity and sophistication (Banos et al., 2016). The Empatica E4 is one of the most recent, high-tech devices that can measure a wide range of physiological data. (Peake et al., 2017) critically reviewed the technical characteristics, reliability, and validation of existing wearable devices for providing bio-feedback and monitoring stress and sleep. By continuously measuring physiological indicators recorded with wearables, researchers can potentially identify and track a wide range of health-related events, such as seizures, dehydration, cognitive workload, physical activity, and emotions (Sharifi et al., 2021), and, most relevant to this review, stress.
Furthermore, a number of critical issues, including the statistical power of the training data used and its labelling protocols, which may affect the performance of machine learning models, were not addressed in previous reviews. Nor have they considered the possibility of a machine learning model generalising to a new dataset or a dataset recorded under different conditions, such as an altered experimental setup, shorter or longer sessions, or a different labelling methodology (Yang and Xu, 2020).
To begin answering these concerns, we first survey the present status of stress detection and measurement via consumer-grade medical wearables. We then investigate the approaches taken, and the detection accuracy scores attained, by machine learning models developed on publicly available datasets constructed from sensor data recorded by these devices (Vos et al., 2023). Finally, we describe the present level of research into employing wearable sensors to accurately measure stress reactions, and we explore the generalisation ability and limitations of these machine learning models.
Wearable stress detection and machine learning are topics that have been reviewed extensively in the past. The stress indicators and measuring systems reviewed by (Samson & Koh, 2020) range from salivary detection to electrochemical detection using wearables; however, the application of machine learning to the detection and measurement of stress was not discussed. Wearable sensors recording Electrocardiogram (ECG), Electroencephalogram (EEG), and Photoplethysmography (PPG) signals are used for stress detection in a literature review conducted by (Gedam and Paul, 2021). In this paper, by contrast, we systematically review studies that primarily use biomarker data gathered from consumer-grade wearable devices of medical-grade quality, as this is where the majority of interest lies with the growing popularity of personal wellness monitoring.
The primary objective of this work is to provide a comprehensive review of the state of the art in stress detection utilising machine learning techniques, with a focus on the generalisation capacity of models trained on public stress biomarker datasets and the potential reproducibility of the results they produce, employing the IJMEDI checklist to evaluate the quality of the included literature. Our research questions address these aspects of generalisability, reproducibility, and study quality.
From 2012 to 2022, we surveyed the most important articles published on publicly available datasets relating to stress, with a focus on data collected by wearable devices and on stress measurement and prediction by machine learning. By searching for "stress", "machine learning", and "wearable" in the titles or abstracts of articles in Google Scholar, Crossref, DOAJ, and PubMed, a total of 973 papers were located. After eliminating 16 duplicates, 957 papers remained for the subsequent screening stages. Only papers with relevant abstracts were considered, and those without abstracts or full texts were disregarded. A few papers were excluded because they were primarily concerned with the effects of stress on animals or on human psychology. We also excluded studies that only used devices not typically thought of as wearables, such as electroencephalogram (EEG) or chest-worn monitors, or devices not commonly recognised as health trackers or lifestyle monitors. Additionally, we focused on analysing only machine learning models trained on data from
devices capable of measuring multiple biomarkers at once, including heart rate variability (HRV), electrodermal activity (EDA), heart rate (HR), and interbeat interval (IBI) (Mejía-Mejía and Kyriacou, 2022). Lastly, we rejected papers that did not describe fundamental machine-learning steps, such as feature engineering and model validation, in sufficient depth. As a result, 33 papers were selected for systematic review and categorised under the following broad headings: datasets; machine learning methods for stress detection; and future studies and open problems. The papers considered for this analysis are listed in Table 1.
The miniaturisation of components and other hardware advancements have made it possible to pack more technology into smaller devices at a lower price. Healthcare practitioners, hardware/software engineers, data scientists, policymakers, cognitive neuroscientists, device engineers, and materials scientists, among others, must work together to overcome the adoption challenge [52]. From the first Fitbit released in 2009 to the most recent Empatica E4 and Oura Ring 3, much progress has been made in basic features and capabilities, especially those related to monitoring the user's health and promising to help improve it.
The medical-grade Empatica Embrace Plus, Empatica E4, NOWATCH, and Oura Ring, as well as the consumer-oriented Apple Watch, Fitbit, Garmin, and Samsung Gear, are just a few examples of the many wearable devices currently available [17] for use in health monitoring. In contrast to medical-grade devices such as those in the Empatica range, which offer full biomarker data download and extra assistance for researchers to use the raw signals directly for study, consumer-oriented devices typically provide web-based platforms and mobile applications for reporting general health statistics and stress levels.
In this assessment, however, we focused only on devices that can be used independently for monitoring (worn around the wrist, finger, or arm), without a separate harness or pairing with another device, since such requirements might limit their use for research outside a controlled laboratory environment. The well-known wearables in Table 2 are not an exhaustive list, but they may be able to measure and monitor stress levels.

One study conducted by (Siirtola, 2019) found that a single biomarker (heart rate, HR) reported by smartwatches was sufficient for detecting stress. Within the same subjects, EDA is a strong, reliable, and objective psycho-physiological biomarker of psychological stress, as determined by (Farrow et al., 2017). Based on their research, (Greco et al., 2014) determined that the EDA biomarker alone is sufficient for predicting stress. Section 4.1 discusses in depth an open research question about the reliability of sensor biomarkers. We therefore did not include stress-reporting devices that relied on just one biomarker (such as heart rate or heart rate variability).
Publicly available datasets were used extensively across the studies considered in this review, and the Empatica E4 was the most common wearable device. Differential Privacy (DP) has developed as an effective way of releasing data, especially data from wearable devices, which is important because patient privacy remains a concern when using public health data in wearable research. Several solutions for safeguarding patient privacy were proposed after Saifuzzaman et al. [55] systematically reviewed the literature to identify, select, and critically appraise studies on DP and the various methods for publishing wearable data. All personally identifiable information was scrubbed from the public databases used in this study.

Figure 1: Screening process of the article and the intermediate counts
A number of publicly available datasets contain sensor data recorded using a variety of devices matching our inclusion criteria. The reviewed datasets contain the biomarkers predominantly utilised for stress detection, specifically EDA and HR signals. Apart from the Toadstool dataset, all recorded sessions exceed 60 minutes. The AffectiveROAD and Toadstool datasets contain biomarkers for a relatively small sample of 10 subjects each, and a small sample size of 25 subjects or fewer is a common feature of all the public datasets reviewed. The largest public dataset included for review, Stress-Predict, contains biomarker data recorded using an Empatica E4 device for 35 test subjects.
The included datasets were labelled in one of two ways: (i) periodically, where specific time frames throughout the experiment were labelled as stressed or non-stressed while the test subject was working under the corresponding condition (a particularly stressful test or task, or a non-stressful, restful period); or (ii) scored, where the subject was rated as experiencing stress or not during a particular period, either through completing a self-scoring evaluation or through an observer who judged whether a stressful condition was present.
The American Psychological Association distinguishes between absolute stressors, which everyone exposed to them will interpret as stressful, and relative stressors, which only some people exposed to them will interpret as stressful. Time stress, anticipatory stress (worries about future events), situational stress (situations over which one has no control), and encounter stress (worries about interacting with a specific individual or group of people) are four frequent types of stress further defined by (Albrecht, 2018).
Because of these fundamental differences, the electronic sensors used in wearable devices to record biomarkers operate and record at different sampling frequencies. While the Empatica E4's HR signal is recorded at 1 Hz, its EDA signal is sampled at 4 Hz. Because of this discrepancy, researchers need to process the sensor data by down-sampling the EDA signal to 1 Hz to ensure a like-for-like timestamp match with the HR signal and, consequently, a stress label for the same time period. Down-sampling was applied to the data used in the experiments reviewed here.
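The following is a minimal sketch of that alignment step, assuming two timestamp-indexed series (a 4 Hz EDA stream and a 1 Hz HR stream); the synthetic data and column names are illustrative, not the Empatica E4 export format.

```python
# Sketch: down-sample 4 Hz EDA to 1 Hz by averaging so each sample shares a
# timestamp with the 1 Hz HR stream, yielding one feature row per second.
import pandas as pd

def align_eda_to_hr(eda: pd.Series, hr: pd.Series) -> pd.DataFrame:
    """eda: 4 Hz series indexed by timestamp; hr: 1 Hz series indexed by timestamp."""
    eda_1hz = eda.resample("1s").mean()  # average the four EDA samples in each second
    return pd.concat({"eda": eda_1hz, "hr": hr}, axis=1).dropna()

# Example with synthetic data covering 10 seconds
idx_4hz = pd.date_range("2023-01-01 00:00:00", periods=40, freq="250ms")
idx_1hz = pd.date_range("2023-01-01 00:00:00", periods=10, freq="1s")
eda = pd.Series(range(40), index=idx_4hz, dtype=float)
hr = pd.Series(range(60, 70), index=idx_1hz, dtype=float)

aligned = align_eda_to_hr(eda, hr)  # one row per second, EDA and HR matched
```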
Data will usually be imbalanced, with more non-stressed samples than stressed samples available in any given dataset, due to varied experimental protocols and the relative ease of collecting non-stressed samples. Class balancing is another common pre-processing step for wearable stress data, and it can be accomplished in a variety of ways. As an example, Nkurikiyeyezu et al. [14] ensured a Gaussian distribution by applying logarithmic, square root, and Yeo-Johnson transformations to the recorded sensor data, as required for the application of a linear regression model, and by randomly discarding samples from the majority (non-stressed) class. Class balancing was similarly conducted by Can et al. [29], who randomly down-sampled the dominant class (non-stressed observations) to equal the size of the stressed class.
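A hedged sketch of that random down-sampling idea follows: rows from the majority (non-stressed) class are randomly discarded until both classes have the same size. The DataFrame layout and the "label" column name are assumptions for illustration.

```python
# Sketch: balance classes by randomly down-sampling the majority class.
import pandas as pd

def downsample_majority(df: pd.DataFrame, label_col: str = "label",
                        seed: int = 42) -> pd.DataFrame:
    """Return a class-balanced copy of df by sampling each class down to the
    minority class size, then shuffling the rows."""
    minority_size = df[label_col].value_counts().min()
    balanced = (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(n=minority_size, random_state=seed))
    )
    return balanced.sample(frac=1, random_state=seed)
```

As noted below, discarding samples this way risks losing informative biomarker segments, which is one reason the choice of balancing strategy matters.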
It is possible that the lack of an established approach for selecting which samples to discard resulted in the loss of essential biomarker information during the sampling process, as neither up-sampling nor down-sampling techniques showed a significant difference or improvement in predictive power. The advantages and disadvantages of the various class-balancing approaches have been summarised in the literature, and other strategies have been developed to enhance class-balancing resampling methods. For imbalanced data, (Deng et al., 2014) proposed an integrated strategy for multivariate time series classification, while (Lee et al., 2022) used Active Learning, a semi-supervised technique, to reduce the impact of mislabelled classes. Oversampling from noisy points is one problem addressed by (Jiang et al., 2016), who proposed a new oversampling method based on the classification contribution degree. When working with highly imbalanced datasets, including the stress biomarker datasets used in this study, where the stressed periods tend to belong to the minority class, relying on class balancing can be problematic because it limits the ability to reproduce and generalise results to new, unseen data, which might contain significant outliers and a different class distribution due to differences in study design and biomarker recording protocol. More work is needed to discover effective methods of addressing class imbalance in physiological biomarker datasets.
Some machine learning algorithms struggle with data that has significant differences in range, units, and scale; therefore, standardisation is often used to transform the data to a distribution with a mean of 0 and a standard deviation of 1. Normalisation, like scaling, aims to bring all of a dataset's numerical columns to the same level without changing the relative sizes of their values. Normalisation and standardisation have been used in the context of stress detection, with experiments conducted on both raw and standardised data showing that standardisation improved prediction performance across all 10 machine learning techniques examined. Filtering is another common pre-processing procedure applied to biomedical signals such as the stress indicators gathered by wearable sensors, yielding cleaner data with less noise and fewer outliers. For instance, to smooth the data and remove noise, the raw EDA signal has been filtered with a 5 Hz low-pass filter, a 100 Hz high-pass filter, or a fourth-order 4 Hz Butterworth low-pass filter.
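The sketch below illustrates these two pre-processing steps on a synthetic 4 Hz EDA trace: z-score standardisation and a fourth-order Butterworth low-pass filter. The 1 Hz cut-off and the signal itself are assumptions for illustration, not the exact settings from the cited studies.

```python
# Sketch: z-score standardisation and Butterworth low-pass filtering of a signal.
import numpy as np
from scipy.signal import butter, filtfilt

def standardise(x: np.ndarray) -> np.ndarray:
    """Rescale to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std()

def butter_lowpass(x: np.ndarray, cutoff_hz: float, fs_hz: float,
                   order: int = 4) -> np.ndarray:
    """Apply a zero-phase Butterworth low-pass filter (filtfilt avoids time shifts)."""
    b, a = butter(order, cutoff_hz, btype="low", fs=fs_hz)
    return filtfilt(b, a, x)

eda_raw = np.random.default_rng(0).normal(size=240)  # 60 s of 4 Hz EDA (synthetic)
eda_clean = standardise(butter_lowpass(eda_raw, cutoff_hz=1.0, fs_hz=4.0))
```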
Extracting meaningful characteristics that describe physiological time series data is often done by summarising the evolving properties of the existing data using summary statistics. Common combinations of summary statistics, including [min, max, mean] and [min, max, mean, standard deviation (std)], were shown to generate good prediction outcomes in most circumstances by (Guo et al., 2020) in a study evaluating summary statistics as features for clinical prediction tasks. However, they noted that the distributional shape indicators skew and kurtosis performed poorly when used alone as prediction features, but frequently appeared in optimal combinations, suggesting a role for them as supplementary features.
The stress detection studies reviewed commonly used the summary statistics described by (Guo et al., 2020). Sliding-window summaries of biomarkers were used in 14 of the examined methods, with window lengths varying widely (from 0.25 seconds in one experiment to twenty minutes in others) and with varying degrees of success. Based on the assumption that window length correlates with the physiological reaction time, [69] notes that summary windows of thirty to sixty seconds are most commonly used. Since the tonic component involves slower, long-term changes and the phasic component includes faster, event-related changes, (Can et al., 2019) used a convex optimisation approach to decompose the EDA signal into its phasic and tonic components. Both studies found that detection accuracy was improved by using sliding windows of between 10 and 17.5 minutes. However, different machine learning algorithms relied on different window widths, as noted by [29], so this should be taken into account in future studies.
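A minimal sketch of sliding-window feature extraction follows, computing the summary statistics discussed above over a 60-second window of a 1 Hz signal. The window length and series name are illustrative assumptions.

```python
# Sketch: rolling-window summary statistics as features for a 1 Hz biomarker.
import pandas as pd

def window_features(signal: pd.Series, window_s: int = 60) -> pd.DataFrame:
    """One feature row per full window: min, max, mean, std, skew, kurtosis."""
    roll = signal.rolling(window=window_s, min_periods=window_s)
    return pd.DataFrame({
        "min": roll.min(),
        "max": roll.max(),
        "mean": roll.mean(),
        "std": roll.std(),
        "skew": roll.skew(),
        "kurtosis": roll.kurt(),
    }).dropna()

# Example: hr is a 1 Hz heart-rate series indexed by timestamp
# features = window_features(hr, window_s=60)
```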
Using the tsfresh Python module, (Jin et al., 2020) used a Random Forest-based machine learning strategy to automatically produce 4536 features from their existing data. The results were organised around the primary biomarker (HR, EDA, TEMP) from which the features were created, to facilitate evaluating the performance associated with such a large number of features. When using sensor-specific features, PPG-based features achieved higher prediction accuracy compared with the IBI-based and HR-based features, as noted by (Gjoreski et al., 2017), who used greedy step-wise selection to determine the top features deemed most helpful for their particular machine learning model. While (Iqbal et al., 2020) prioritised features based on heart rate and respiratory rate, (Orgaz et al., 2014) focused on heart rate variability as a biomarker and found that HRV features are reliable indicators of stress.
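For readers unfamiliar with automatic feature generation, here is a small sketch using the tsfresh package in the spirit of the approach cited above. The long-format layout (columns id, time, value) is tsfresh's expected input; the tiny synthetic series and column names are placeholders for an actual biomarker recording, and the exact number of generated features depends on the package defaults.

```python
# Sketch: automatic feature extraction with tsfresh on a long-format DataFrame.
import pandas as pd
from tsfresh import extract_features

long_df = pd.DataFrame({
    "id":    [1] * 10 + [2] * 10,            # one id per recording window
    "time":  list(range(10)) * 2,            # sample order within each window
    "value": [float(v) for v in range(20)],  # e.g. an HR or EDA segment
})

features = extract_features(long_df, column_id="id", column_sort="time")
# `features` holds one row per window id and hundreds of generated columns,
# which would then typically be pruned by a feature-selection step.
```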
While researching the use of wearable devices to detect high levels of stress, we came across multiple applications of machine learning techniques. The reviewed articles and the ML techniques used are summarised in Table 6. In the sections that follow, we discuss the individual components that make up machine learning pipelines and examine the methods that have been used to implement those components, pointing out their advantages and disadvantages.
Twenty-three studies using machine learning for stress detection were reviewed, and among them we found the use of sixteen different algorithms. These included Logistic Regression (LR), Random Forest (RF), Decision Tree (DT), Bayesian Networks (BN), Principal Component Analysis (PCA), Support Vector Machines (SVM), Linear Discriminant Analysis (LDA), k-Nearest Neighbours (kNN), Multi-layer Perceptron (MLP), and Multi-task Learning (MTL), among others. The most popular methods for detecting stress were SVM, kNN, and tree-based models (RF, Gradient Boosting, etc.), with the latter two typically providing superior predictive performance across all supervised binary classification tasks.
A common method is to identify several candidate algorithms that could solve the problem, train them all, and then pick the one with the highest predictive accuracy. While some studies tested only one approach (Random Forest), others compared as many as 13 different algorithms to see which would be most effective at predicting high stress levels using classification-based models, with Bagged Trees emerging as the winner. The accuracy of stress predictions was also examined across 5–7 algorithms in other studies, each reporting its best-performing models. (Iqbal et al., 2020) compared the performance of seven supervised methods with that of seven unsupervised methods, and found that careful selection of classification models is required to develop a reliable stress detection system.
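The candidate-comparison pattern just described can be sketched as follows; the feature matrix X and binary labels y are synthetic stand-ins for a labelled stress dataset, and the three candidates shown are illustrative rather than the exact sets used in the reviewed studies.

```python
# Sketch: train several candidate classifiers and compare cross-validated accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # placeholder biomarker features
y = rng.integers(0, 2, size=200)     # placeholder stressed / non-stressed labels

candidates = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "Random Forest": RandomForestClassifier(random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```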
Model ensembling is the process of taking the predictions from many algorithms and combining them into a single forecast depending on some metric, such as averaging, weighted averaging, or voting.
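A minimal sketch of voting-based ensembling is shown below using scikit-learn's VotingClassifier to combine three base models; it illustrates the general idea rather than a configuration taken from the reviewed studies.

```python
# Sketch: combine several classifiers into one prediction via (soft) voting.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities rather than counting hard votes
)
# Usage: ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```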
Having reliable labelled data is crucial when creating supervised machine learning algorithms. We identified three primary approaches to classifying states of increased stress: (i) marking periods of stress and non-stress during experimental recordings; (ii) self-reporting through questionnaires; and (iii) labelling by a third-party observer, who watches the subject's reaction to a situation and numerically rates the level of stress observed. Accuracy rates were reported by labelling method for each year of studies analysed. The majority of studies relied on periodic labelling, which, compared to self-scoring and third-party scoring, consistently resulted in higher accuracy rates.
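Periodic labelling, approach (i), can be sketched as assigning a binary label to each timestamp according to the known stressor intervals of the experimental protocol; the interval times below are purely illustrative.

```python
# Sketch: label a 1 Hz session as stressed (1) inside known protocol intervals,
# non-stressed (0) elsewhere.
import pandas as pd

def label_by_protocol(index: pd.DatetimeIndex, stress_intervals) -> pd.Series:
    labels = pd.Series(0, index=index, name="label")
    for start, end in stress_intervals:
        labels.loc[start:end] = 1
    return labels

idx = pd.date_range("2023-01-01 09:00", periods=3600, freq="1s")      # one-hour session
stress_intervals = [("2023-01-01 09:20", "2023-01-01 09:30")]          # e.g. a stressful task
labels = label_by_protocol(idx, stress_intervals)
```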
Using datasets labelled with specific, marked stress/no-stress intervals, the highest-performing models from all experiments obtained at least 64.5% test accuracy, as shown in Table 6. Furthermore, several studies reported binary classification test accuracy rates of over 90%. As stress is a physiological reaction, the predictive accuracy in these studies is determined by comparing a labelled metric from the same time period (stressed versus non-stressed) with the included features (biomarkers). Using a regression technique rather than a classification one, (Siirtola, 2019) aimed to characterise the strength of this link, whereas (Uematsu et al., 2015) focused on the challenge of predicting stress events in the future rather than assessing stress based on historical data.
When testing machine learning models on a small dataset, cross-validation is a resampling procedure used to estimate model performance. The goal of cross-validation in machine learning is to ensure that the model can make reliable predictions on data that has not been seen before. It provides insight into how well the algorithm will generalise to an independent dataset and can be used to detect concerns such as over-fitting and selection bias. Leave One Subject Out (LOSO) cross-validation was used in a substantial number of the reviewed studies. Furthermore, one study combined LOSO and K-fold cross-validation (with K=5), framing the problem as a stress-level measurement rather than a binary stressed-versus-non-stressed problem; all other examined papers handled stress prediction as a binary classification problem. No significant difference existed between the accuracy rates reported using LOSO cross-validation and those using K-fold cross-validation.
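LOSO cross-validation can be sketched with scikit-learn's LeaveOneGroupOut splitter, where the "group" is the subject ID, so every fold holds out all windows of one subject. The data below are synthetic placeholders for a labelled biomarker dataset.

```python
# Sketch: Leave One Subject Out (LOSO) cross-validation with subject IDs as groups.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 6))            # placeholder window-level features
y = rng.integers(0, 2, size=150)         # placeholder stressed / non-stressed labels
subjects = np.repeat(np.arange(15), 10)  # 15 subjects, 10 windows each

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=subjects, cv=LeaveOneGroupOut(),
                         scoring="accuracy")
print(f"Mean LOSO accuracy: {scores.mean():.3f}")
```

Because each test fold contains an entirely unseen subject, LOSO gives a more realistic estimate of how a model might generalise to new individuals than a random K-fold split of pooled windows.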
There are several criteria we take into account when designing a stress detection machine learning model: (i) sensor biomarker data must be valid and sufficiently varied to represent a wide spectrum of potential stress-related physiological responses; (ii) for supervised machine learning, this data needs to be accurately labelled, with observations designated as stressed or non-stressed or assigned a stress score range, so that the model can derive information from the data; and (iii) where a particular hypothesis is being tested, an appropriate amount of statistical power is required, among other considerations. These are the primary criteria around which this review revolves.
Using the IJMEDI checklist, we rated the included machine learning studies and found only one to be of high quality, one of low quality, and the rest of medium quality. The problem understanding, data understanding, and modelling scores were particularly high across the board. The validation domain's high-priority items received particularly low ratings, as did the data preparation and deployment domains. Interestingly, there was no discernible rise in the overall quality of studies over time.
Five studies scored above 30 (of medium to high quality). Using the scores collected from these studies in the modelling, validation, and deployment domains, we find that only the validation domain has improved over time, indicating a lack of advancement in the modelling domain and little attention to the deployment of models in real-world scenarios, including considerations of sustainability, model bias, and ethics.
There are three significant obstacles that must be overcome before credible machine-learning models can be developed for use in real-world stress monitoring.
Building strong, highly accurate machine learning models that can generalise well to new, unseen data is the primary goal of automated stress identification and measurement. This analysis summarised the relevant literature and provided key details from prior research on stress prediction with wearable devices. Specifically, we looked at the freely accessible stress biomarker datasets used in several publications, the machine learning algorithms utilised, their strengths and weaknesses, and their capacity to generalise to unknown data. Our perspectives on the potential and the threats in this emerging field were also summarised. We anticipate this overview will contribute to the body of knowledge around machine learning for identifying stress using wearable devices, bringing us one step closer to the commercialisation of efficient stress identification and management technologies.
Albrecht, W. S., Albrecht, C. O., Albrecht, C. C., & Zimbelman, M. F. (2018). Fraud examination. Cengage Learning.
Banos, O., Villalonga, C., Bang, J., Hur, T., Kang, D., Park, S., … & Lee, S. (2016). Human behavior analysis by means of multimodal context mining. Sensors, 16(8), 1264.
Can, U., & Alatas, B. (2019). A new direction in social network analysis: Online social network analysis problems and applications. Physica A: Statistical Mechanics and its Applications, 535, 122372.
Del Giudice, M., Ellis, B. J., & Shirtcliff, E. A. (2011). The adaptive calibration model of stress responsivity. Neuroscience & Biobehavioral Reviews, 35(7), 1562-1592.
Deng, L., Liang, H., Burnette, B., Beckett, M., Darga, T., Weichselbaum, R. R., & Fu, Y. X. (2014). Irradiation and anti–PD-L1 treatment synergistically promote antitumor immunity in mice. The Journal of Clinical Investigation, 124(2), 687-695.
Farrow, K., Grolleau, G., & Ibanez, L. (2017). Social norms and pro-environmental behavior: A review of the evidence. Ecological Economics, 140, 1-13.
Gedam, S., & Paul, S. (2021). A review on mental stress detection using wearable sensors and machine learning techniques. IEEE Access, 9, 84045-84066.
Gjoreski, M., Luštrek, M., Gams, M., & Gjoreski, H. (2017). Monitoring stress with a wrist device using context. Journal of Biomedical Informatics, 73, 159-170.
Greco, M., Capretti, G., Beretta, L., Gemma, M., Pecorelli, N., & Braga, M. (2014). Enhanced recovery program in colorectal surgery: a meta-analysis of randomized controlled trials. World Journal of Surgery, 38, 1531-1541.
Guo, Y., Hu, Z., Wang, J., Peng, Z., Zhu, J., Ji, H., & Wan, L. J. (2020). Rechargeable Aluminium–Sulfur Battery with Improved Electrochemical Performance by Cobalt-Containing Electrocatalyst. Angewandte Chemie, 132(51), 23163-23167.
Iqbal, A., Sambyal, P., & Koo, C. M. (2020). 2D MXenes for electromagnetic shielding: a review. Advanced Functional Materials, 30(47), 2000883.
Jiang, J., & Xiong, Y. L. (2016). Natural antioxidants as food and feed additives to promote health benefits and quality of meat products: A review. Meat Science, 120, 107-117.
Jin, Y., Yang, H., Ji, W., Wu, W., Chen, S., Zhang, W., & Duan, G. (2020). Virology, epidemiology, pathogenesis, and control of COVID-19. Viruses, 12(4), 372.
Lees, C. C., Romero, R., Stampalija, T., Dall'Asta, A., DeVore, G. R., Prefumo, F., ... & Hecher, K. (2022). The diagnosis and management of suspected fetal growth restriction: an evidence-based approach. American Journal of Obstetrics and Gynecology, 226(3), 366-378.
Md Faridee, A. Z., Ramamurthy, S. R., & Roy, N. (2019). Happyfeet: Challenges in building an automated dance recognition and assessment tool. GetMobile: Mobile Computing and Communications, 22(3), 10-16.
Mejía-Mejía, E., & Kyriacou, P. A. (2022). Photoplethysmography-Based Pulse Rate Variability and Haemodynamic Changes in the Absence of Heart Rate Variability: An In-Vitro Study. Applied Sciences, 12(14), 7238.
Nkurikiyeyezu, K. N., Suzuki, Y., & Lopez, G. F. (2018). Heart rate variability as a predictive biomarker of thermal comfort. Journal of Ambient Intelligence and Humanized Computing, 9, 1465-1477.
Orgaz, J. L., Pandya, P., Dalmeida, R., Karagiannis, P., Sanchez-Laorden, B., Viros, A., ... & Sanz-Moreno, V. (2014). Diverse matrix metalloproteinase functions regulate cancer amoeboid migration. Nature Communications, 5(1), 4255.
Peake, J. M., Neubauer, O., Della Gatta, P. A., & Nosaka, K. (2017). Muscle damage and inflammation during recovery from exercise. Journal of Applied Physiology, 122(3), 559-570.
Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., & Carvalhais, N. (2019). Deep learning and process understanding for data-driven Earth system science. Nature, 566(7743), 195-204.
Saifuzzaman, M., & Zheng, Z. (2014). Incorporating human-factors in car-following models: a review of recent developments and research needs. Transportation Research Part C: Emerging Technologies, 48, 379-403.
Samson, C., & Koh, A. (2020). Stress monitoring and recent advancements in wearable biosensors. Frontiers in Bioengineering and Biotechnology, 8, 1037.
Sharifi, M., Asadi-Pooya, A. A., & Mousavi-Roknabadi, R. S. (2021). Burnout among healthcare providers of COVID-19; a systematic review of epidemiology and recommendations.
Siirtola, P. (2019, September). Continuous stress detection using the sensors of commercial smartwatch. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (pp. 1198-1201).
Sun, W., Ma, Z., Chen, H., & Liu, M. (2019). MYB gene family in potato (Solanum tuberosum L.): genome-wide identification of hormone-responsive reveals their potential functions in growth and development. International Journal of Molecular Sciences, 20(19), 4847.
Tempelaar, D. T., Rienties, B., & Giesbers, B. (2015). In search for the most informative data for feedback generation: Learning analytics in a data-rich context. Computers in Human Behavior, 47, 157-167.
Uematsu, A., Tan, B. Z., & Johansen, J. P. (2015). Projection specificity in heterogeneous locus coeruleus cell populations: implications for learning and memory. Learning & Memory, 22(9), 444-451.
Vos, G., Trinh, K., Sarnyai, Z., & Azghadi, M. R. (2023). Generalizable Machine Learning for Stress Monitoring from Wearable Devices: A Systematic Literature Review. International Journal of Medical Informatics, 105026.
Yang, Y., & Xu, Z. (2020). Rethinking the value of labels for improving class-imbalanced learning. Advances in Neural Information Processing Systems, 33, 19290-19301.