Article

Physiological Sensor Modality Sensitivity Test for Pain Intensity Classification in Quantitative Sensory Testing

by Wenchao Zhu and Yingzi Lin *

Intelligent Human Machine Systems Laboratory, Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA 02155, USA

* Author to whom correspondence should be addressed.
Sensors 2025, 25(7), 2086; https://doi.org/10.3390/s25072086
Submission received: 15 January 2025 / Revised: 11 March 2025 / Accepted: 11 March 2025 / Published: 26 March 2025
(This article belongs to the Special Issue Wearable Sensors for Human Health Monitoring and Analysis)

Abstract

Chronic pain is prevalent and disproportionately burdens adults, reducing their quality of life. Although subjective self-reporting is the “gold standard” for pain assessment, tools are needed to objectively monitor and account for inter-individual differences. This study introduced a novel framework to objectively classify pain intensity levels using physiological signals during Quantitative Sensory Testing sessions. Twenty-four participants completed the study while wearing physiological sensors (blood volume pulse (BVP), galvanic skin response (GSR), electromyography (EMG), respiration rate (RR), skin temperature (ST), and pupillometry). The study employed two analysis plans. Plan 1 utilized a grid search methodology within a 10-fold cross-validation framework to optimize time windows (1–5 s) and machine learning hyperparameters for the pain classification tasks. The optimal time windows were identified as 3 s for the pressure session, 2 s for the pinprick session, and 1 s for the cuff session. Analysis Plan 2 implemented a leave-one-out design to evaluate the individual contribution of each sensor modality. By systematically excluding one sensor’s features at a time, the performance of these sensor sets was compared to the full model using Wilcoxon signed-rank tests. BVP emerged as a critical sensor, significantly influencing performance in both the pinprick and cuff sessions. Conversely, GSR, RR, and pupillometry demonstrated stimulus-specific sensitivity, contributing significantly to the cuff session but having limited influence in the other sessions. EMG and ST showed minimal impact across all sessions, suggesting they are non-critical and suitable candidates for reducing sensor redundancy. These findings advance the design of sensor configurations for personalized pain management. Future research will focus on refining sensor integration and addressing stimulus-specific physiological responses.

1. Introduction

Pain is a complex and subjective experience and remains one of the most significant clinical challenges, with 51.6 million U.S. adults (20.9%) experiencing chronic pain and 17.1 million (6.9%) suffering from high-impact chronic pain during 2021 [1,2]. Chronic pain is a leading source of disability, imposing substantial burdens on individuals and society and diminishing quality of life [3,4].
Pain can be understood as a conscious interpretation of sensory stimuli that trigger nociceptive afferents, accompanied by the mental projection of these stimuli onto specific body regions. Pain assessment involves approximating an individual’s subjective self-report, which serves as their ground truth [5,6]. Traditional pain assessment relies on surveys of participants’ subjective perception of their pain, such as the numeric rating scale (NRS), the visual analogue scale (VAS), and the verbal rating scale (VRS). However, self-reported assessments are prone to bias from anxiety, memories, pain intensity, and physical activities [6,7,8]. Such inaccuracies can lead to under-treatment or over-treatment of pain, which is either ineffective or detrimental to patient safety [9,10].
Quantitative Sensory Testing (QST) was developed as an objective, standardized way to evaluate pain sensitivity and pain perception, using calibrated mechanical or thermal stimuli to measure sensory thresholds and tolerances [11]. This technique can aid in diagnosing conditions such as neuropathic pain and chronic low back pain (cLBP) by detecting abnormalities in QST sessions. For example, one QST session, the cuff inflation test, has shown that cLBP patients require lower cuff pressure to evoke moderate pain compared with healthy controls, and they also rate mechanical probes as more painful [12]. In addition, psychosocial factors, including emotional states and pain catastrophizing, further influence nociceptive processing and contribute to variations in pain perception [13,14].
Physiological sensors have emerged as objective measurements of human states and characteristics [15,16,17], including blood volume pulse (BVP), electroencephalogram (EEG), galvanic skin response (GSR), respiration rate (RR), electromyography (EMG), skin temperature (ST), and pupillometry. For example, an increased skin conductance level in GSR has been detected when external noxious stimuli (e.g., pressure, thermal, or cold pain) were presented [18]. Heart rate and heart rate variability, which can be derived from BVP signals, are associated with the stress response; during different pain stimuli, a decreased BVP amplitude or an increased heart rate has been observed [19,20]. ST can be measured on the palm and the back of the hand, and decreased ST has been reported during and after painful stimuli [20,21]. EEG studies have addressed the correlations between noxious stimuli and different EEG frequency bands; for example, a decrease in the alpha band has been observed as a common indicator [22,23].
Research has demonstrated the potential of sensors for classifying pain intensity levels [24,25,26]. For example, Guo et al. estimated three levels of cold pain from facial expressions by comparing three neural network models; a personalized spatial–temporal framework using a convolutional long short-term memory model achieved the highest performance [27]. Another study measured pain level via features generated from pupillometry data using a genetic algorithm with an artificial neural network classifier, obtaining a best accuracy of 81% [28]. EEG studies have demonstrated statistical differences in central and occipital regions and have classified pain and no-pain states using multi-layer CNN frameworks [29,30]. Multimodal physiological classification with decision-level and feature-level fusion has also proved promising for pain level detection and classification [21,31,32].
Combining multimodal physiological sensors with QST represents a frontier in pain research, enhancing the objectivity and sensitivity of pain assessments. BVP signals have been used to classify the pressure session, achieving 96.6% accuracy in a binary (threshold vs. tolerance) task [33]. GSR has been used to assess conditioned pain modulation in patient and healthy groups, revealing significant differences in the dominant hand (p = 0.003) [34]. EMG signals have been evaluated under varying cuff pressures [35]. Despite these advances, few studies have examined comprehensive pain-level classification across all QST sessions, and the sensitivity of each sensor to these classifications remains largely unexplored.
Time window selection influences the interpretation of physiological signals in pain assessment. Prior work has demonstrated two distinct labeling approaches: fixed time windows and percentage-based timestamps. The fixed time window method segments data in a consistent, fixed manner, providing a straightforward approach for analyzing responses [36,37]. In contrast, percentage-based timestamping aligns labels to the individual’s pain threshold and tolerance, tailoring them to personal variations in pain perception [25]. Importantly, the chosen segmentation method directly determines the number of samples generated for analysis and thus the dataset size available for machine learning models.
While multimodal physiological signals can aid QST pain assessment tasks, two critical gaps remain in the literature. First, existing studies have primarily focused on isolated noxious stimuli (e.g., cold pain, pressure, or cuff pressure) but have rarely compared different noxious stimuli in a holistic way. Second, existing studies either combine all sensor data into one analysis model or exclusively analyze a single modality, so the relative contributions of individual sensors remain unclear. Furthermore, the time window for signal segmentation under different tasks must be chosen carefully. Our study aims to advance the understanding of physiological sensor contributions to pain assessment and the development of individualized pain biomarkers by addressing two research questions:
  • First, quantify the sensitivity of pain level classification to the choice of time window and machine learning classification model.
  • Second, evaluate how excluding individual physiological sensors affects model performance.

2. Materials and Methods

2.1. Participants

The study was conducted from January to May 2022 and approved by the Brigham and Women’s Hospital Institutional Review Board (IRB), Boston, MA, USA (protocol code 2019P002781, 18 November 2019). Healthy participants and chronic low back pain (cLBP) patients, the latter having experienced cLBP for at least three months with an average intensity of more than three out of ten on pain scales, were recruited. All participants were neurologically intact and had no history of myocardial infarction, no substantial motor or sensory deficits, and no evidence of cognitive impairment.

2.2. Apparatus

The study used sensors to monitor physiological responses during the QST sessions. Pupillometry data were tracked using Tobii Pro Glasses 2 (Tobii, Danderyd, Sweden). The remaining sensors (FlexComp Infiniti, Thought Technology, Montreal, QC, Canada) included a BVP sensor (SA9308M, Thought Technology) for heart rate tracking on the middle finger of the non-dominant hand, a chest-mounted respiration sensor (SA9311M, Thought Technology), an EMG sensor (T9306M, Thought Technology) for muscle activity on the non-dominant forearm, an ST sensor (SA9310M, Thought Technology) on the back of the non-dominant hand, and a GSR sensor (SA9309M, Thought Technology) measuring electrical activity between the index and ring fingers of the non-dominant hand. A computer system (Dell Latitude E6230, Dell, Round Rock, TX, USA) was used to collect and store the data.

2.3. Experimental Procedures

Participants were familiarized with the QST equipment and completed the Brief Pain Inventory questionnaire [38]. The physiological data collection for each participant took approximately 80–120 min. As shown in Figure 1, the process involved the following:
(1) Participants were seated comfortably in a reclining chair.
(2) A research assistant helped participants wear all sensors, including pupillometry, BVP, GSR, EMG, ST, and RR. The setup took around 20 min.
(3) A one-minute baseline was recorded, during which the participant stayed in a natural resting condition.
(4) Data collection occurred over 30 min for one round of QST, during which participants followed instructions from the research assistant, reported pain intensities, and were asked to minimize unnecessary movement.
(5) Another one-minute baseline was recorded.
(6) Participants then performed physical maneuvers spanning about 3–5 min, with the sensors disconnected.
(7) Participants then repeated steps 3 to 5 for a second round of QST collection.
(8) The sensors were removed, and participants were debriefed and compensated.
Due to COVID-19 safety measures, all research staff and study participants wore face coverings/masks over the nose and mouth, and no more than four people were present in the testing room at any time.

2.4. Quantitative Sensory Testing

QST comprised four sessions: pressure pain threshold and tolerance, temporal summation of mechanical pinprick pain, temporal summation of cuff pain, and conditioned pain modulation. Temporal summation of pain tests the ability of the central nervous system to amplify incoming pain signals over time during repeated or sustained noxious stimulation. It can be demonstrated in various pain modalities, including mechanical pinprick and cuff pain.
(1) Pressure pain threshold and tolerance were assessed using a digital pressure algometer. The testing sites were located on the dorsal surface of the forearm and over the trapezius muscle in the upper back and neck region. The researcher gradually increased the pressure via a flat round transducer on a small skin area (probe area 0.785 cm²) at a steady rate of ~1 lb./s (0.45 kg/s). The pressure value was recorded when the participant first reported the onset of pain (pressure pain threshold), and the trial was terminated when the participant reached their maximum pain tolerance. Four trials were performed: the left forearm, the right forearm, the left trapezius, and the right trapezius.
(2) Mechanical pinprick pain was assessed by applying 10 calibrated pinprick force stimuli to the skin at a fixed frequency (1 Hz). Participants were asked to rate their pain intensity after the 1st, 5th, and 10th stimuli. The procedure was first applied to the left index finger and then repeated on the right index finger.
(3) Cuff pain was assessed by inflating a blood pressure cuff on the left leg to a threshold pressure level (5 out of 10 on the rating scale) and maintaining it for a fixed duration (2 min). Participants were asked to rate their pain levels every 30 s.
(4) Conditioned pain modulation was assessed by applying a noxious thermal stimulus and an increasing pressure pain simultaneously. Participants were first asked to submerge their dominant hand into a cold-water bath set at 6 degrees Celsius. Meanwhile, increasing pressure was applied to the non-dominant trapezius muscle, as described in the pressure pain steps. Participants then reported their onset of pain and their maximum pain tolerance. The post-pain rating was registered 15 s after the cessation of the pressure pain.

2.5. Data Preprocessing

The overall research diagram is presented in Figure 2. First, the physiological (BVP, GSR, EMG, ST, RR, and pupillometry) data were synchronized by resampling them to 50 Hz. The left-eye and right-eye pupillometry data were interpolated to fill any missing gaps [20]. The BVP signal was filtered with a fifth-order Butterworth band-pass filter with [0.5, 12] Hz cut-off frequencies. The GSR signal was filtered with a fifth-order 1 Hz low-pass Butterworth filter. The RR signal was filtered with a fifth-order Butterworth band-pass filter with [0.1, 1] Hz cut-off frequencies. In addition, eight time-series HRV signals were derived from the BVP signal (PPG rate, meanNN, SDNN, RMSSD, SDSD, HF, SD1, and SD2) using the NeuroKit2 package in Python 3.7.9 [28]. HRV extraction used a 15 s sliding window at 50 Hz.
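To make this filtering stage concrete, the sketch below applies the stated Butterworth filters using SciPy. It is a minimal illustration rather than the authors' code: the zero-phase (forward-backward) application, the placeholder signals, and the omission of the NeuroKit2 HRV step are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50  # all channels were resampled to 50 Hz

def butter_bandpass(x, low, high, fs=FS, order=5):
    # Fifth-order Butterworth band-pass; filtfilt applies it forward and backward
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x)

def butter_lowpass(x, cutoff, fs=FS, order=5):
    # Fifth-order Butterworth low-pass
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, x)

# Placeholder signals standing in for the synchronized 50 Hz recordings
t = np.arange(0, 60, 1 / FS)
bvp_raw = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
gsr_raw = np.cumsum(0.01 * np.random.randn(t.size))
rr_raw = np.sin(2 * np.pi * 0.25 * t)

bvp = butter_bandpass(bvp_raw, 0.5, 12.0)  # BVP: [0.5, 12] Hz band-pass
gsr = butter_lowpass(gsr_raw, 1.0)         # GSR: 1 Hz low-pass
rr = butter_bandpass(rr_raw, 0.1, 1.0)     # RR:  [0.1, 1] Hz band-pass
```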

2.6. Feature Extraction and Selection

Features were extracted from all physiological sensors. GSR signals were separated into phasic and tonic components. Statistical features were then generated from all physiological sensors, such as mean, median, range, variance, standard deviation, skewness, and kurtosis [39]. Five additional features were generated from the EMG signal: mean absolute value, root mean square, zero crossings, waveform length, and slope sign changes [39].
Principal Component Analysis was utilized for feature selection, with a 90% explained-variance threshold determining the number of components retained.
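As an illustration of this feature stage, the sketch below computes the listed per-window statistics and applies PCA with a 90% explained-variance threshold. The window shapes, feature layout, and placeholder data are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA

def statistical_features(window: np.ndarray) -> np.ndarray:
    # Mean, median, range, variance, standard deviation, skewness, kurtosis
    return np.array([
        np.mean(window), np.median(window), np.ptp(window),
        np.var(window), np.std(window), skew(window), kurtosis(window),
    ])

# Placeholder feature matrix: one row per segmented window,
# columns stacked across all sensor channels
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 70))

# Keep the smallest number of components explaining 90% of the variance
pca = PCA(n_components=0.90)
X_selected = pca.fit_transform(X)
print(X_selected.shape, round(pca.explained_variance_ratio_.sum(), 3))
```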

2.7. Analysis Plan

To address the research questions, a two-phase analysis plan was employed.
Analysis Plan 1 (optimal time window analysis): this phase focused on determining the optimal time window for signal segmentation and evaluating the performance of various machine learning models across all QST sessions. The time window candidates were 1 s, 2 s, 3 s, 4 s, and 5 s. A grid search methodology was employed to explore the relationship between time window length and classification performance, using a 10-fold cross-validation framework. The output of Plan 1 was the combination of time window and model hyperparameters that achieved the highest accuracy, F1 score, and sensitivity for each QST session.
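To make the trade-off between window length and sample count concrete, here is a minimal fixed-window segmentation sketch; the non-overlapping windows and the placeholder signal length are assumptions, as the text does not state the window overlap.

```python
import numpy as np

def segment(signal: np.ndarray, window_s: float, fs: int = 50) -> np.ndarray:
    # Cut a 1-D signal into non-overlapping fixed-length windows; the partial tail is dropped
    n = int(window_s * fs)
    n_windows = len(signal) // n
    return signal[: n_windows * n].reshape(n_windows, n)

x = np.zeros(50 * 14)        # placeholder 14 s recording at 50 Hz
for w in (1, 2, 3, 4, 5):    # the five candidate time windows
    print(f"{w} s window -> {segment(x, w).shape[0]} datapoints")
```

Shorter windows produce more datapoints from the same recording, which is why dataset size enters the window selection alongside accuracy.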
Analysis Plan 2 (component sensitivity analysis): this phase investigated the individual contribution of each sensor modality to classification performance. Using a leave-one-out (LOO) iteration strategy, one sensor’s features were excluded at a time, and the model was retrained and tested using the remaining sensors. The performance of each LOO model was compared to the full model using statistical analysis (i.e., Wilcoxon signed-rank test).
The Wilcoxon signed-rank test is a nonparametric test for comparing two paired samples and is a useful alternative to the paired t-test when the data do not follow a normal distribution. The differences between paired observations were computed and their absolute values ranked; the test statistic was derived by summing the ranks of the positive and negative differences. Sensors whose exclusion produced significant differences (i.e., p < 0.05) were identified as critical sensor candidates, while sensors with minimal impact were identified as non-critical candidates. The optimal time window and the optimal machine learning model were predetermined from Analysis Plan 1’s results.
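A conceptual sketch of this leave-one-out comparison is given below, assuming scikit-learn and SciPy. The column-block layout of sensor features, the use of cross-validation fold accuracies as the paired samples, and the synthetic data are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

SENSORS = ["BVP", "GSR", "EMG", "RR", "ST", "pupillometry"]

# Synthetic stand-in: 600 windows, 10 features per sensor (assumed layout)
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 60))
y = rng.integers(0, 3, size=600)
cols = {s: np.arange(i * 10, (i + 1) * 10) for i, s in enumerate(SENSORS)}

def fold_scores(features):
    # 10-fold accuracies of the classifier chosen in Plan 1 (LOG here)
    return cross_val_score(LogisticRegression(max_iter=1000), features, y, cv=10)

full = fold_scores(X)
for sensor in SENSORS:
    keep = np.setdiff1d(np.arange(X.shape[1]), cols[sensor])
    loo = fold_scores(X[:, keep])
    stat, p = wilcoxon(full, loo)  # paired, nonparametric comparison
    label = "critical" if p < 0.05 else "non-critical"
    print(f"w/o {sensor}: {loo.mean():.3f} vs {full.mean():.3f}, p={p:.3f} ({label})")
```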
The classification task was to predict pain intensity states from physiological features, with subjective ratings serving as the ground truth. In the pressure pain test, a three-class task differentiated among baseline (no pain), threshold (pressure threshold), and tolerance (pressure tolerance). This task was applied to the combined data of the pressure and conditioned pain modulation sessions, as both employed identical labels for pressure threshold and tolerance. The second task, for the pinprick session, classified pain intensity states based on the numerical rating scale (0–10): participants’ self-reports were categorized into three levels of 1–3 (Mild Pain), 4–6 (Moderate Pain), and 7–10 (Severe Pain), and the classification was conducted separately for stimuli applied to the left and right hands. The third task classified pain intensity levels during the 2 min temporal summation of cuff pain; as in the pinprick task, the pain levels were categorized as Mild, Moderate, and Severe.
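The rating-to-label mapping for the pinprick and cuff tasks can be written as a simple function; how ratings of 0 (no pain) were handled is not stated in the text, so the error case below is an assumption.

```python
def nrs_to_level(nrs: int) -> str:
    # Map a 0-10 numerical rating to the three pain levels used for pinprick and cuff
    if 1 <= nrs <= 3:
        return "Mild"
    if 4 <= nrs <= 6:
        return "Moderate"
    if 7 <= nrs <= 10:
        return "Severe"
    raise ValueError("rating outside the three pain-level bins")

print([nrs_to_level(r) for r in (2, 5, 9)])  # ['Mild', 'Moderate', 'Severe']
```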
A grid search approach involving five classification models and their respective hyperparameters (detailed in Table 1) was used. These models included logistic regression (LOG), decision tree (DT), k Nearest Neighbors (KNN), Stochastic Gradient Descent (SGD), and AdaBoost (ADA). The Synthetic Minority Oversampling Technique (SMOTE) was applied to the training dataset to balance the minority classes [40]. Principal Component Analysis (PCA) was selected to reduce dimensionality with an 80% threshold.
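A sketch of this search for the logistic regression branch of Table 1 is shown below, assuming scikit-learn and imbalanced-learn. Using imblearn's Pipeline means SMOTE is applied only when fitting, i.e., only to the training folds of each cross-validation split; the input data here are synthetic placeholders.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Placeholder windows and labels; real inputs are the segmented feature windows
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 3, size=300)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),   # oversamples only the training folds
    ("pca", PCA(n_components=0.80)),    # 80% explained-variance threshold
    ("clf", LogisticRegression(solver="liblinear", max_iter=1000)),
])
param_grid = {
    "clf__C": [1e-3, 1e-2, 1e-1, 1, 10, 1e2, 1e3],  # Table 1 values for LOG
    "clf__penalty": ["l1", "l2"],                    # liblinear supports both
}
search = GridSearchCV(pipe, param_grid, cv=10, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```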

3. Experimental Results

Detailed demographic information is presented in Table 2. A total of 25 participants were initially screened; 1 was excluded due to schedule conflicts, and 24 were successfully recruited. The sample included 17 healthy adults (11 female, mean age 28.8 years) and 7 cLBP patients (5 female, mean age 44.4 years). With an average pain duration of 14 years, cLBP patients reported higher pain intensity and interference (Brief Pain Inventory: intensity, 5.0 ± 1.4; interference, 3.7 ± 2.5) than healthy participants (intensity, 0.3 ± 0.4; interference, 0.1 ± 0.2).
Table 3 reports the sample sizes and time lengths of the session segments: baseline, session start to pressure threshold, pressure threshold to tolerance, pinprick, and cuff. The variance in duration was high for the pressure threshold and pressure tolerance segments, with standard deviations (STD) of 3.74 and 5.34 s against mean values of 6.86 and 13.99 s, respectively. The durations of the pinprick and cuff sessions were more stable, with STDs of 1.44 and 3.39 s against mean values of 6.99 and 29.95 s, respectively.

3.1. Analysis Plan 1—Optimal Time Window Analysis

Figure 3 shows accuracy curves for the three sessions across the algorithms and the five segmented time windows. For each session, the time window was selected based on three factors: classification accuracy, time variance, and the number of datapoints.
For the pressure session, the highest average performance was achieved with a 3 s time window (accuracy = 61.4%, F1 = 54.3%), followed by 4 s (accuracy = 61.4%, F1 = 53.5%). However, the best single result was achieved by the logistic regression classifier with a 5 s time window (accuracy = 75.2%, F1 = 67.2%). The 5 s window generated 558 datapoints, fewer than the 1112 datapoints generated by the 3 s window. Considering all factors, 3 s was selected as the optimal time window for the pressure session.
For the pinprick session, the highest performance was achieved by the SGD classifier with a 2 s time window (accuracy = 79.9%, F1 = 64.8%). The highest average accuracy was also achieved with the 2 s window (69.9%), followed by 1 s (66.5%) and 3 s (65.2%). The optimal time window for the pinprick session was therefore 2 s.
For the cuff session, the best single result was achieved by the LOG classifier with a 2 s time window (accuracy = 76.4%), followed by the LOG classifier (accuracy = 74.1%) and the SGD classifier (accuracy = 72.6%) with a 1 s window. The highest average performance, however, was achieved with the 1 s window (accuracy = 59.3%, F1 = 41.2%), followed by 2 s (accuracy = 57.3%, F1 = 41.2%). Regarding dataset size, the 1 s window yielded 4478 datapoints before SMOTE versus 2047 for the 2 s window. Therefore, a 1 s time window was chosen for the cuff session.

3.2. Analysis Plan 2—Component Sensitivity Analysis

This plan investigated the optimal sensor set for all QST sessions. First, a baseline model including features from all sensors was established; it employed PCA and the best-performing classifier from the previous section. The LOG model was identified as optimal for all three sessions because it achieved the highest classification accuracy. Second, six additional sensor sets were compared by iteratively removing one sensor at a time from the following set: pupillometry, BVP, EMG, GSR, RR, and ST.
For the pinprick session, the baseline model achieved an accuracy of 79.8% (F1 = 62.9%). Six additional sets were evaluated, as shown in Table 4, and their accuracy differences from the baseline model are depicted in Figure 4a. Removing BVP significantly enhanced overall performance, with increases of 6.2% in accuracy and 10.9% in F1 (p < 0.05, Wilcoxon signed-rank test). Compared with the baseline, the EMG and ST sensors had a negligible impact on accuracy, with absolute differences of less than 1%. Removing the GSR and RR sensors resulted in non-significant (p > 0.05, Wilcoxon signed-rank test) accuracy improvements of 2.3% and 2.2%, respectively, and excluding pupillometry produced a non-significant accuracy decrease of 1.4%.
For the cuff session, the baseline model achieved an accuracy of 76.5% (F1 = 60.8%). The performance from the LOO analysis is shown in Table 4 and illustrated in Figure 4b. For the LOG classifier, removing BVP and RR significantly decreased accuracy, by 6.1% (p < 0.05) and 2.3% (p < 0.05), respectively. Removing EMG and ST led to 1.9% and 5.5% decreases in accuracy, respectively, although these changes were not statistically significant. Excluding GSR and pupillometry increased performance, with significant accuracy improvements of 2.5% (p < 0.05) and 5.3% (p < 0.05), respectively.
In contrast, the pressure session showed minimal variation in performance across the different sensor sets in Table 4. The performance of the LOG model was stable and largely unaffected by the removal of individual sensors: removing BVP, EMG, GSR, ST, or pupillometry changed accuracy and F1 score by less than 2%.
In summary, removing BVP significantly improved accuracy for the pinprick session, whereas removing EMG, GSR, RR, ST, or pupillometry did not significantly impact classification performance. For the cuff session, eliminating BVP, GSR, RR, or pupillometry significantly impacted accuracy, while EMG and ST produced non-significant changes, indicating they were non-critical sensor candidates. No single sensor significantly impacted performance in the pressure session.

4. Discussion

This study represents a novel approach to objectively assess and classify pain intensity levels utilizing physiological sensors across pressure, pinprick, and cuff sessions. Our methodology involved classifying traditional subjective ratings, such as baseline, threshold, and tolerance in the pressure session, and categorizing pain intensity levels into mild, moderate, and severe pain in the pinprick and cuff sessions based on physiological features and multiple classification models to achieve optimal performance. Two critical analyses were explored: (1) determining the optimal segmented time window (ranging from 1 to 5 s); (2) identifying the individual contributions of each singular sensor by implementing an LOO iteration strategy and classifying critical and non-critical sensor candidates.
The existing literature on pain intensity classification using physiological signals has typically employed fixed time windows, such as 1 s [41], 4 s [42], and 10 s [23]. Our study contributes to this field by investigating five different time windows across three QST sessions. The results highlighted uniform session durations for the baseline, pinprick, and cuff sessions but substantial variance for the pressure sessions. Such variability complicated the choice of segmented time window, so our study weighed the trade-offs among factors such as accuracy, F1 score, number of datapoints, and the distribution of session time lengths. For instance, a 2 s time window was chosen for the pinprick session due to its superior performance across classification models, whereas the cuff session’s window was selected based on average performance and dataset size. The pressure session, however, did not show significant performance differences among the 3, 4, and 5 s windows, highlighting the need to consider confounding factors such as psychosocial influences [13,43].
In analyzing optimal sensor sets for QST sessions, our baseline all-sensor models were compared against six other sets, each excluding one sensor iteratively. The results reveal that removing BVP improved accuracy in the pinprick session but decreased accuracy in the cuff session. Removing the EMG and ST sensors had negligible impact on pinprick outcomes. In the cuff session, removing BVP and RR had negative impacts, whereas eliminating GSR and pupillometry significantly improved performance. Other studies have analyzed the relationship between cuff sessions and individual physiological signals, such as EMG [35] and BVP [44], but very few published studies have examined cuff pain intensity classification via physiological signals. The pressure session produced uniformly consistent results, with average accuracy varying by less than 2% among the seven sensor sets using the LOG models. The reasons for this uniformity are not fully understood and warrant further investigation; potential confounding factors include the selection of time windows, the number of datapoints, and the participant population. This performance is consistent with another study, which reported that the highest performance of a three-class pain level classification in a pressure pain session reached 69% accuracy, 83.3% sensitivity, and 75% specificity [33].
Our study contributes to the field by identifying sensors (i.e., EMG and ST) that contributed minimally to classification performance, thereby suggesting a way to reduce sensor redundancy when necessary. It also highlighted the sensor that significantly impacted performance in both the pinprick and cuff sessions (i.e., BVP), as well as sensors that are critical but stimulus-sensitive (i.e., GSR and RR); these stimulus-sensitive sensors should be analyzed further in sensor configuration tests. The pressure sessions demonstrated uniform performance, indicating that pressure pain classification may rely on generalized physiological responses rather than specific sensor inputs.
The limitations of our study are multifaceted. First, consolidating a limited and unbalanced sample of healthy participants (N = 17) and chronic pain patients (N = 7) into a single group was necessary for generalizability, but it limits the sensitivity of our findings to differences between these distinct groups. Second, the limited specificity of non-critical sensors (i.e., EMG and ST in cuff sessions; EMG, GSR, RR, and ST in pinprick sessions) does not directly mean that they can be excluded in all cases; psychological, environmental, and physical-activity factors might underlie this limited specificity. Alternative solutions can be pursued in multiple ways. One is to integrate multimodal deep learning models such as long short-term memory networks and transfer learning [45,46]. Exploring different sensor fusion methods, such as feature-level and decision-level fusion, is another option [47]. The emerging field of network physiology can also be applied to the pain assessment problem [48]: instead of assigning each sensor a deterministic role, this approach treats sensors as probabilistic models and analyzes the connectivity between modalities [49,50]. This holistic view acknowledges the dynamic connections between physiological modalities, potentially resolving inconsistencies where a sensor is effective in one context but not in another [49,51].
Exploring individual variations in response to different stimuli is a promising avenue for understanding pain sensitivity. In current practice, pain sensitivity is assessed via patients’ self-reports, which cannot exclude inter- and intra-subject variability in characteristics such as psychological factors [13]. Stimulus-specific physiological responses represent a novel and critical area of exploration, linking specific physiological modalities to distinct noxious stimuli. This consideration is particularly important: when enhancing the portability and practicality of sensor configurations for chronic pain patients, sensor selection should be tailored to the specific type of pain being assessed. Future research can replicate this study following the QST procedures, analysis plans, and pseudocode algorithms described in the Supplementary Materials. The framework can also be extended to explore the relationship between physiological responses and other dimensions of pain assessment beyond intensity, such as stimulus type and pain location. The ability to identify the type and location of pain will benefit patients who have difficulty self-reporting [52].
In terms of broader impacts, this sensor sensitivity study paves the way for enhancing the portability and feasibility of pain assessment, especially in at-home settings. The results suggest that sensors with minimal impact on performance can be excluded from wearable pain assessment devices, simplifying device design to include only sensors such as GSR, BVP, and RR. Many digital health technologies already support such remote data acquisition [53]: for example, the Empatica watch monitors BVP, GSR, and ST (Empatica Inc., Cambridge, MA, USA), and the Google Fitbit series collects various combinations of PPG, oxygen saturation, GSR, and ST (Google, Santa Clara, CA, USA). By selecting sensor modalities sensitive to specific noxious stimuli, researchers can balance feasibility with model performance and enhance the practicality of remote pain assessment [26].

5. Conclusions

This study presented a novel framework for pain assessment using physiological sensors during QST sessions, integrating two complementary analysis plans. Analysis Plan 1 identified optimal time windows for signal segmentation, with the 1–5 s candidates yielding different results across the pinprick, cuff, and pressure sessions; the findings highlight the importance of tailoring time segmentation to specific stimuli to maximize classification performance. Analysis Plan 2 evaluated sensor contributions using leave-one-out iterations. BVP, GSR, RR, and pupillometry were identified as stimulus-specific critical sensor candidates, although only BVP showed a significant effect across multiple stimuli. In contrast, EMG and ST were found to be non-critical, showing minimal impact on performance across all sessions. Future research should further explore stimulus-specific physiological responses to optimize sensor configurations for different pain types. Incorporating advanced multi-sensor fusion techniques and individualization methodologies can support the development of personalized, efficient, and practical wearable systems for chronic pain assessment and management.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s25072086/s1, File S1: Supplementary Material: Pseudocode Algorithms for Physiological Sensor Modality Sensitivity Test for Pain Intensity Classification in Quantitative Sensory Testing.

Author Contributions

Conceptualization, Y.L.; methodology, W.Z. and Y.L.; software, W.Z.; validation, W.Z.; formal analysis, W.Z.; investigation, W.Z. and Y.L.; resources, Y.L.; data curation, W.Z.; writing—original draft, W.Z.; writing—review and editing, W.Z. and Y.L.; visualization, W.Z.; supervision, Y.L.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been financially supported by a National Science Foundation (NSF) project entitled “Novel Computational Methods for Continuous Objective Multimodal Pain Assessment Sensing System (COMPASS)” under Award #1838796.

Institutional Review Board Statement

The study was approved by the Institutional Review Board of Brigham and Women’s Hospital (protocol code 2019P002781, 11/18/2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. All subjects and/or their legal guardian(s) consented to the publication of identifying information/images in an online open-access publication.

Data Availability Statement

Datasets in the study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rikard, S.M.; Strahan, A.E.; Schmit, K.M.; Guy, G.P. Chronic Pain Among Adults—United States, 2019–2021. MMWR Morb. Mortal. Wkly. Rep. 2023, 72, 379–385. [Google Scholar] [CrossRef] [PubMed]
  2. Raja, S.N.; Carr, D.B.; Cohen, M.; Finnerup, N.B.; Flor, H.; Gibson, S.; Keefe, F.J.; Mogil, J.S.; Ringkamp, M.; Sluka, K.A.; et al. The revised International Association for the Study of Pain definition of pain: Concepts, challenges, and compromises. Pain 2020, 161, 1976–1982. [Google Scholar] [CrossRef] [PubMed]
  3. Lee, S.H.; Liang, H.W. Discriminative Changes in Sitting and Standing Postural Steadiness in Patients with Chronic Low Back Pain. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 3752–3759. [Google Scholar] [CrossRef] [PubMed]
  4. Slaboda, J.C.; Boston, J.R.; Rudy, T.E.; Lieber, S.J.; Rasetshwane, D.M. The use of splines to calculate jerk for a lifting task involving chronic lower back pain patients. IEEE Trans. Neural Syst. Rehabil. Eng. 2005, 13, 406–414. [Google Scholar] [CrossRef]
  5. Davis, K.D.; Aghaeepour, N.; Ahn, A.H.; Angst, M.S.; Borsook, D.; Brenton, A.; Burczynski, M.E.; Crean, C.; Edwards, R.; Gaudilliere, B.; et al. Discovery and validation of biomarkers to aid the development of safe and effective pain therapeutics: Challenges and opportunities. Nat. Rev. Neurol. 2020, 16, 381–400. [Google Scholar] [CrossRef]
  6. Berger, S.E.; Vachon-Presseau, É.; Abdullah, T.B.; Baria, A.T.; Schnitzer, T.J.; Apkarian, A.V. Hippocampal morphology mediates biased memories of chronic pain. Neuroimage 2018, 166, 86–98. [Google Scholar] [CrossRef]
  7. Naugle, K.M.; Ohlman, T.; Naugle, K.E.; Riley, Z.A.; Keith, N.R. Physical activity behavior predicts endogenous pain modulation in older adults. Pain 2017, 158, 383–390. [Google Scholar] [CrossRef]
  8. Nijs, J.; Girbés, E.L.; Lundberg, M.; Malfliet, A.; Sterling, M. Exercise therapy for chronic musculoskeletal pain: Innovation by altering pain memories. Man. Ther. 2015, 20, 216–220. [Google Scholar] [CrossRef]
  9. Deyo, R.A.; Mirza, S.K.; Turner, J.A.; Martin, B.I. Overtreating chronic back pain: Time to back off? J. Am. Board Fam. Med. 2009, 22, 62–68. [Google Scholar] [CrossRef]
  10. Von Korff, M.R. Health Care for chronic pain: Overuse, underuse, and treatment needs: Commentary on: Chronic pain and health services utilization—Is there overuse of diagnostic tests and inequalities in nonpharmacologic methods utilization? Med. Care 2013, 51, 857–858. [Google Scholar] [CrossRef]
  11. Edwards, R.R.; Sarlani, E.; Wesselmann, U.; Fillingim, R.B. Quantitative assessment of experimental pain perception: Multiple domains of clinical relevance. Pain 2005, 114, 315–319. [Google Scholar] [CrossRef] [PubMed]
  12. Meints, S.M.; Mawla, I.; Napadow, V.; Kong, J.; Gerber, J.; Chan, S.-T.; Wasan, A.D.; Kaptchuk, T.J.; McDonnell, C.; Carriere, J.; et al. The relationship between catastrophizing and altered pain sensitivity in patients with chronic low-back pain. Pain 2019, 160, 833–843. [Google Scholar] [CrossRef] [PubMed]
  13. Fillingim, R.B.; Bruehl, S.; Dworkin, R.H.; Dworkin, S.F.; Loeser, J.D.; Turk, D.C.; Widerstrom-Noga, E.; Arnold, L.; Bennett, R.; Edwards, R.R.; et al. The ACTTION-American Pain Society Pain Taxonomy (AAPT): An evidence-based and multidimensional approach to classifying chronic pain conditions. J. Pain 2014, 15, 241–249. [Google Scholar] [CrossRef] [PubMed]
  14. Chai, P.R.; Gale, J.Y.; Patton, M.E.; Schwartz, E.; Jambaulikar, G.D.; Taylor, S.W.; Edwards, R.R.; Boyer, E.W.; Schreiber, K.L. The impact of music on nociceptive processing. Pain Med. 2020, 21, 3047–3054. [Google Scholar] [CrossRef]
  15. Yang, G.; Lin, Y. Using ECG Signal to Quantify Mental Workload Based on Wavelet Transform and Competitive Neural Network Techniques. Biomed. Soft Comput. Hum. Sci. 2009, 14, 17–25. [Google Scholar]
  16. Liang, B.; Lin, Y. Using Physiological and Behavioral Measurements in a Picture-Based Road Hazard Perception Experiment to Classify Risky and Safe Drivers. Transp. Res. Part F Traffic Psychol. Behav. 2018, 58, 93–105. [Google Scholar] [CrossRef]
  17. Yang, G.; Lin, Y.; Bhattacharya, P. A driver fatigue recognition model based on information fusion and dynamic Bayesian network. Inf. Sci. 2010, 180, 1942–1954. [Google Scholar] [CrossRef]
  18. Wu, S.W.; Wang, Y.-C.; Hsieh, P.-C.; Tseng, M.-T.; Chiang, M.-C.; Chu, C.-P.; Feng, F.-P.; Lin, Y.-H.; Hsieh, S.-T.; Chao, C.-C. Biomarkers of neuropathic pain in skin nerve degeneration neuropathy: Contact heat-evoked potentials as a physiological signature. Pain 2017, 158, 516–525. [Google Scholar] [CrossRef]
  19. Jang, E.H.; Park, B.J.; Park, M.S.; Kim, S.H.; Sohn, J.H. Analysis of physiological signals for recognition of boredom, pain, and surprise emotions. J. Physiol. Anthr. 2015, 34, 25. [Google Scholar] [CrossRef]
  20. Johnson, A.; Yang, F.; Gollarahalli, S.; Banerjee, T.; Abrams, D.; Jonassaint, J.; Jonassaint, C.; Shah, N. Use of mobile health apps and wearable technology to assess changes and predict pain during treatment of acute pain in sickle cell disease: Feasibility study. JMIR Mhealth Uhealth 2019, 7, e13671. [Google Scholar] [CrossRef]
  21. Lin, Y.; Xiao, Y.; Wang, L.; Guo, Y.; Zhu, W.; Dalip, B.; Kamarthi, S.; Schreiber, K.L.; Edwards, R.R.; Urman, R.D. Experimental Exploration of Objective Human Pain Assessment Using Multimodal Sensing Signals. Front. Neurosci. 2022, 16, 831627. [Google Scholar] [CrossRef] [PubMed]
  22. Misra, G.; Wang, W.E.; Archer, D.B.; Roy, A.; Coombes, S.A. Automated classification of pain perception using high-density electroencephalography data. J. Neurophysiol. 2017, 117, 786–795. [Google Scholar] [CrossRef] [PubMed]
  23. Elsayed, M.; Sim, K.S.; Tan, S.C. A novel approach to objectively quantify the subjective perception of pain through electroencephalogram signal analysis. IEEE Access 2020, 8, 199920–199930. [Google Scholar] [CrossRef]
  24. Olesen, A.E.; Andresen, T.; Staahl, C.; Drewes, A.M. Human experimental pain models for assessing the therapeutic efficacy of analgesic drugs. Pharmacol. Rev. 2012, 64, 722–779. [Google Scholar] [CrossRef]
  25. Zhu, W.; Xiao, Y.; Lin, Y. A Novel Labeling Method of Physiological-based Pressure Pain Assessment Among Patients with and Without Chronic Low Back Pain. Proc. Human. Factors Ergon. Soc. Annu. Meet. 2024, 68, 456–459. [Google Scholar] [CrossRef]
  26. Dolgin, E. How a ‘pain-o-meter’ could improve treatments. Nature 2024, 633, S26–S27. [Google Scholar] [CrossRef]
  27. Guo, Y.; Wang, L.; Xiao, Y.; Lin, Y. A Personalized Spatial-Temporal Cold Pain Intensity Estimation Model Based on Facial Expression. IEEE J. Transl. Eng. Health Med. 2021, 9, 4901008. [Google Scholar] [CrossRef]
  28. Wang, L.; Guo, Y.; Dalip, B.; Xiao, Y.; Urman, R.D.; Lin, Y. An experimental study of objective pain measurement using pupillary response based on genetic algorithm and artificial neural network. Appl. Intell. 2021, 52, 1145–1156. [Google Scholar] [CrossRef]
  29. Wang, L.; Xiao, Y.; Urman, R.D.; Lin, Y. Cold pressor pain assessment based on EEG power spectrum. SN Appl. Sci. 2020, 2, 1976. [Google Scholar] [CrossRef]
  30. Chen, D.; Zhang, H.; Kavitha, P.T.; Loy, F.L.; Ng, S.H.; Wang, C.; Phua, K.S.; Tjan, S.Y.; Yang, S.-Y.; Guan, C. Scalp EEG-Based Pain Detection Using Convolutional Neural Network. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 274–285. [Google Scholar] [CrossRef]
  31. Zheng, J.; Lin, Y. Using Physiological Signals for Pain Assessment: An Evaluation of Deep Learning Models. In Proceedings of the 2024 30th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Leeds, UK, 3–5 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar] [CrossRef]
  32. Zheng, J.; Lin, Y. An Objective Pain Measurement Machine Learning Model through Facial Expressions and Physiological Signals. In Proceedings of the 2022 28th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Nanjing, China, 16–18 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–4. [Google Scholar] [CrossRef]
  33. Khan, M.U.; Aziz, S.; Hirachan, N.; Joseph, C.; Li, J.; Fernandez-Rojas, R. Experimental Exploration of Multilevel Human Pain Assessment Using Blood Volume Pulse (BVP) Signals. Sensors 2023, 23, 3980. [Google Scholar] [CrossRef] [PubMed]
  34. Pickering, G.; Achard, A.; Corriger, A.; Sickout-Arondo, S.; Macian, N.; Leray, V.; Lucchini, C.; Cardot, J.-M.; Pereira, B. Electrochemical Skin Conductance and Quantitative Sensory Testing on Fibromyalgia. Pain Pract. 2020, 20, 348–356. [Google Scholar] [CrossRef] [PubMed]
  35. Gray, S.M.; Cuomo, A.M.; Proppe, C.E.; Traylor, M.K.; Hill, E.C.; Keller, J.L. Effects of Sex and Cuff Pressure on Physiological Responses during Blood Flow Restriction Resistance Exercise in Young Adults. Med. Sci. Sports Exerc. 2023, 55, 920–931. [Google Scholar] [CrossRef] [PubMed]
  36. Tiemann, L.; Hohn, V.D.; Ta Dinh, S.; May, E.S.; Nickel, M.M.; Gross, J.; Ploner, M. Distinct patterns of brain activity mediate perceptual and motor and autonomic responses to noxious stimuli. Nat. Commun. 2018, 9, 4487. [Google Scholar] [CrossRef]
  37. Nickel, M.M.; Hohn, V.D.; Dinh, S.T.; May, E.S.; Gross, J.; Ploner, M. Temporal–spectral signaling of sensory information and expectations in the cerebral processing of pain. Proc. Natl. Acad. Sci. USA 2022, 119, e2116616119. [Google Scholar] [CrossRef]
  38. Tan, G.; Jensen, M.P.; Thornby, J.I.; Shanti, B.F. Validation of the brief pain inventory for chronic nonmalignant pain. J. Pain 2004, 5, 133–137. [Google Scholar] [CrossRef]
  39. Zhu, W.; Kucyi, A.; Kramer, A.F.; Lin, Y. Multimodal Physiological Assessment of the Task-related Attentional States in a VR Driving Environment. In Proceedings of the 2022 28th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Nanjing, China, 16–18 November 2022; pp. 1–5. [Google Scholar]
  40. Fernández, A.; García, S.; Herrera, F.; Chawla, N.V. SMOTE for Learning from Imbalanced Data: Progress and Challenges, Marking the 15-year Anniversary. J. Artif. Intell. Res. 2018, 61, 863–905. [Google Scholar]
  41. Lin, Y.; Wang, L.; Xiao, Y.; Urman, R.D.; Dutton, R.; Ramsay, M. Objective Pain Measurement based on Physiological Signals. Proc. Int. Symp. Hum. Factors Ergon. Health Care 2018, 7, 240–247. [Google Scholar] [CrossRef]
  42. Zhu, W.; Liu, C.; Yu, H.; Guo, Y.; Xiao, Y.; Lin, Y. COMPASS App: A Patient-centered Physiological based Pain Assessment System. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2023, 67, 1361–1367. [Google Scholar] [CrossRef]
  43. Kent, M.L.; Tighe, P.J.; Belfer, I.; Brennan, T.J.; Bruehl, S.; Brummett, C.M.; Buckenmaier, C.C.; Buvanendran, A.; Cohen, R.I.; Desjardins, P.; et al. The ACTTION-APS-AAPM Pain Taxonomy (AAAPT) Multidimensional Approach to Classifying Acute Pain Conditions. Pain Med. 2017, 18, 947–958. [Google Scholar] [CrossRef]
  44. Cerqueira, M.S.; Costa, E.C.; Oliveira, R.S.; Pereira, R.; Brito Vieira, W.H. Blood Flow Restriction Training: To Adjust or Not Adjust the Cuff Pressure Over an Intervention Period? Front. Media 2021, 12, 678407. [Google Scholar] [CrossRef]
  45. Pouromran, F.; Lin, Y.; Kamarthi, S. Personalized Deep Bi-LSTM RNN Based Model for Pain Intensity Classification Using EDA Signal. Sensors 2022, 22, 8087. [Google Scholar] [CrossRef] [PubMed]
  46. Lopez-Martinez, D.; Picard, R. Multi-task Neural Networks for Personalized Pain Recognition from Physiological Signals. arXiv 2017, arXiv:1708.08755. [Google Scholar] [CrossRef]
  47. Aguileta, A.A.; Brena, R.F.; Mayora, O.; Molino-Minero-Re, E.; Trejo, L.A. Multi-Sensor Fusion for Activity Recognition—A Survey. Sensors 2019, 19, 3808. [Google Scholar] [CrossRef]
  48. Ivanov, P.C. The New Field of Network Physiology: Building the Human Physiolome. Front. Netw. Physiol. 2021, 1, 711778. [Google Scholar] [CrossRef]
  49. Bashan, A.; Bartsch, R.P.; Kantelhardt, J.W.; Havlin, S.; Ivanov, P.C. Network physiology reveals relations between network topology and physiological function. Nat. Commun. 2012, 3, 702. [Google Scholar] [CrossRef]
  50. Candia-Rivera, D.; Chavez, M.; De Vico Fallani, F. Measures of the coupling between fluctuating brain network organization and heartbeat dynamics. Netw. Neurosci. 2024, 8, 557–575. [Google Scholar] [CrossRef]
  51. Bartsch, R.P.; Liu, K.K.L.; Bashan, A.; Ivanov, P.C. Network Physiology: How Organ Systems Dynamically Interact. PLoS ONE 2015, 10, e0142143. [Google Scholar] [CrossRef]
  52. Rojas, R.F.; Brown, N.; Waddington, G.; Goecke, R. A systematic review of neurophysiological sensing for the assessment of acute pain. NPJ Digit. Med. 2023, 6, 76. [Google Scholar] [CrossRef]
  53. Lewis, A.; Valla, V.; Charitou, P.; Karapatsia, A.; Koukoura, A.; Tzelepi, K.; Bergsteinsson, J.I.; Ouzounelli, M.; Vassiliadis, E. Digital Health Technologies for Medical Devices—Real World Evidence Collection—Challenges and Solutions Towards Clinical Evidence. Int. J. Digit. Health 2022, 2, 8. [Google Scholar] [CrossRef]
Figure 1. Experiment apparatus. In this pressure pain experiment, a digital pressure algometer was applied on the participant’s trapezius. Physiological signals (RR, BVP, GSR, EMG, ST, and ET) were collected in the meantime.
Figure 2. Diagram of the study. After collecting raw datasets from multimodal physiological sensors (BVP: blood volume pulse, GSR: galvanic skin response, EMG: electromyography, ST: skin temperature, RR: respiration rate), the dataset underwent two analysis plans: (1) perform the optimal time window analysis to select the optimal time window and hyperparameters via grid search. Time windows included 1, 2, 3, 4, and 5 s; (2) undergo component sensitivity analysis to investigate the performance across 7 distinct leave-one-out sets.
Figure 3. Accuracy curve of all algorithms and the mean accuracy of five algorithms under five segmented time windows (1, 2, 3, 4, 5 s) among pressure session (a), pinprick session (b), and cuff session (c). The red line in each figure shows the mean and standard deviation (STD) of all algorithms under different segmented time windows.
Figure 4. Performance metrics of the pinprick session and cuff session are presented in grouped boxplots (a,b). Each bar shows the mean and standard deviation.
Table 1. Grid search hyperparameters of classifiers.

Classifier | Hyperparameter | Values
Logistic Regression | C | 10^−3, 10^−2, 10^−1, 1, 10, 10^2, 10^3
 | Penalty | L1, L2
Decision Tree | Criterion | gini, entropy
 | Max depth | 4, 5, 6, 7, 8, 9, 10, 11, 12, 15, 20, 30, 40, 50, 70, 90, 120
K Nearest Neighbors | Algorithm | ball tree, kd tree, brute
 | Leaf size | 1 to 50 in steps of 3
 | N neighbors | 10, 13, 16, 19, 22, 25, 28
Stochastic Gradient Descent | Alpha | 10^−2, 10^−3, 10^−4
 | L1 ratio | 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.12, 0.13, 0.14, 0.15, 0.2
 | Penalty | L1, L2
 | Loss function | hinge, log, modified Huber, squared hinge
AdaBoost | Base estimator | decision tree
 | Max depth | 2, 5, 8, 11
 | Min samples | 5, 10
 | N estimators | 10, 50, 100, 250
 | Learning rate | 0.01, 0.1
Table 2. Demographic information.

Mean ± SD or % | cLBP Patients | Healthy Group
Number of participants | 7 | 17
Age, y | 44.4 ± 14.5 | 28.8 ± 13.1
Female sex, n | 5 | 11
Pain duration, y | 14.0 ± 15.5 | 0
Pain intensity | 5.0 ± 1.4 | 0.3 ± 0.4
Pain interference | 3.7 ± 2.5 | 0.1 ± 0.2
Table 3. Time length statistics of QST sessions.

QST Session | Sample Size | Mean ± STD (s)
Baseline | 32 | 59.80 ± 6.93
Pressure–Threshold | 160 | 6.86 ± 3.74
Pressure–Tolerance | 160 | 13.99 ± 5.34
Pinprick | 128 | 6.99 ± 1.44
Cuff | 128 | 29.95 ± 3.39
Table 4. Performance of pinprick, cuff, and pressure sessions.

Sensor Set | Pinprick Accuracy % | Pinprick F1 % | Cuff Accuracy % | Cuff F1 % | Pressure Accuracy % | Pressure F1 %
All sensors | 79.8 | 62.9 | 76.5 | 60.8 | 72.3 | 66.4
All w/o BVP | 86.0 ↑ | 73.8 ↑ | 70.4 ↓ | 47.5 ↓ | 72.3 | 66.4
All w/o EMG | 80.7 | 65.4 | 74.6 | 58.6 | 72.3 | 66.4
All w/o GSR | 82.1 | 67.7 | 79.0 ↑ | 60.7 ↑ | 72.3 | 66.4
All w/o RR | 80.6 | 63.8 | 74.2 ↓ | 53.9 ↓ | 72.4 | 66.7
All w/o ST | 80.7 | 67.4 | 71.0 | 51.1 | 72.3 | 66.4
All w/o pupillometry | 78.4 | 59.9 | 81.8 ↑ | 62.3 ↑ | 72.3 | 66.4
↑↓ indicates a statistically significant increase or decrease in performance (Wilcoxon signed-rank, p < 0.05).
