Article

A Multi-Epiphysiological Indicator Dog Emotion Classification System Integrating Skin and Muscle Potential Signals

College of Electrical Engineering and Information, Northeast Agricultural University, Harbin 150030, China
* Author to whom correspondence should be addressed.
Animals 2025, 15(13), 1984; https://doi.org/10.3390/ani15131984
Submission received: 17 April 2025 / Revised: 28 May 2025 / Accepted: 3 July 2025 / Published: 5 July 2025
(This article belongs to the Special Issue Animal–Computer Interaction: New Horizons in Animal Welfare)

Simple Summary

Real-time emotion monitoring in pet dogs is essential for ensuring their well-being and improving interaction with humans. This study proposes a practical emotion classification system using four observable physiological signals—skin potential, muscle potential, respiration frequency, and voice pattern. Leveraging a compact, non-invasive sensor and the XGBoost algorithm, the system enables accurate real-time detection of canine emotional states, particularly abnormal ones. It offers a portable and efficient solution for everyday monitoring of dog emotions in practical settings.

Abstract

This study introduces an innovative dog emotion classification system that integrates four non-invasive physiological indicators—skin potential (SP), muscle potential (MP), respiration frequency (RF), and voice pattern (VP)—with the extreme gradient boosting (XGBoost) algorithm. A four-breed dataset was meticulously constructed by recording and labeling physiological signals from dogs exposed to four fundamental emotional states: happiness, sadness, fear, and anger. Comprehensive feature extraction (time-domain, frequency-domain, nonlinearity) was conducted for each signal modality, and inter-emotional variance was analyzed to establish discriminative patterns. Four machine learning algorithms—Neural Networks (NN), Support Vector Machines (SVM), Gradient Boosting Decision Trees (GBDT), and XGBoost—were trained and evaluated, with XGBoost achieving the highest classification accuracy of 90.54%. Notably, this is the first study to integrate a fusion of two complementary electrophysiological indicators—skin and muscle potentials—into a multi-modal dataset for canine emotion recognition. Further interpretability analysis using Shapley Additive exPlanations (SHAP) revealed skin potential and voice pattern features as the most contributive to model performance. The proposed system demonstrates high accuracy, efficiency, and portability, laying a robust groundwork for future advancements in cross-species affective computing and intelligent animal welfare technologies.

1. Introduction

Dogs, as highly empathetic and socially integrated animals, play an increasingly significant role in human life. They serve in diverse capacities such as search and rescue, guiding the visually impaired, emotional support, and companionship, thereby becoming indispensable to modern society [1]. However, abnormal emotional states in dogs can negatively impact their own physical and psychological health, while also posing potential safety risks to the public. Incidents involving aggressive canine behavior have raised societal concerns, making it increasingly challenging for dogs to coexist freely in human environments [2]. Therefore, effective monitoring of canine emotions, particularly the detection of abnormal states, is of great importance. Real-time tracking of physiological signals combined with emotion classification enables owners to better understand their dog’s psychological status, thereby enhancing human–animal interaction. More importantly, timely identification of negative emotional responses allows owners and bystanders to take precautionary measures to mitigate harmful behaviors [3].
Real-time monitoring of dog emotional states not only holds significant implications for human society, but more critically, serves to safeguard the physical and psychological well-being of the dogs themselves, particularly in the early identification of abnormal emotions. In this study, abnormal emotional states are defined as negative affective conditions, specifically including sadness, anger, and fear. Emotional assessment can be conducted either through human observation [4] or by employing machine learning algorithms [5]. When dogs experience such negative emotional shifts, distinct physiological changes are often observed. Behaviors such as fear-induced withdrawal or anger-driven aggression are closely associated with neurophysiological and hormonal responses. For instance, exposure to fear-related stimuli has been shown to elevate heart rate, body temperature, cortisol, and progesterone levels in dogs [6]. In severe cases, acute stress responses induced by excessive fear may lead to fatal outcomes, necessitating pharmacological intervention with anxiolytic or sedative agents for emotional regulation [7]. These findings highlight the critical importance of timely emotion recognition. Moreover, prolonged sadness in dogs may signal underlying pathological conditions. Therefore, early detection of abnormal emotions can facilitate preclinical diagnosis and enable prompt medical intervention, contributing significantly to animal welfare.
However, accurately detecting and classifying dog emotions remains a significant challenge. Traditional behavioral observation methods [8] are limited in their ability to capture the true emotional states of dogs in real time and often lack objectivity. Consequently, increasing research attention has shifted toward the analysis of physiological indicators in dogs. By investigating the correlation between physiological responses and emotional changes, researchers have developed emotion recognition systems that are less susceptible to external interference and capable of real-time monitoring of a dog’s psychological state.
Dogs exhibit distinct behavioral patterns corresponding to different emotional states. Accordingly, computer vision models have been employed to analyze and interpret facial and limb movements, particularly focusing on features such as the eyes, ears, and tail. Ref. [9] introduced a facial action coding system for dogs (EMDOGFACS), which associates specific facial actions with emotional expressions. Their findings highlighted that certain basic emotions in dogs are conveyed through identifiable movements, with ear-related actions playing a key role. Subsequent studies have leveraged image processing techniques to classify dog emotions via facial and body gesture recognition, particularly by analyzing muscle-driven movements of the ears and eyes [10,11,12]. In ref. [13], researchers collected and analyzed facial features—including ear position—from 28 pet dogs to distinguish between positive and negative emotional states. The study further enabled classification based on common actions such as blinking, lowering the chin, and nose-licking, and contributed to the refinement of the DogFACS coding system. A summary of frequently observed facial and behavioral indicators used in emotion classification is presented in Table 1 [14]. From a physiological perspective, such behavioral actions are known to induce changes in MP and RF, thereby offering theoretical support for the present study’s focus on observable physiological signals.
In addition to the visual method, vocalizations have also been utilized to infer canine emotional states [15,16]. Researchers commonly apply overlapping frame techniques to smooth the extracted acoustic contours, followed by feature extraction using methods such as Principal Component Analysis (PCA) [17,18] and Latent Dirichlet Allocation (LDA) [19]. LDA has achieved an emotion recognition accuracy of 69.81% in dog vocalization analysis [20]. A variety of machine learning algorithms have been adopted to train speech-based emotion recognition models, including Hidden Markov Models (HMM) [21], Gaussian Mixture Models (GMM) [22], Artificial Neural Networks (ANN) [23], and Support Vector Machines (SVM) [24,25]. These approaches typically yield classification accuracies around 70%, according to ref. [26], as summarized in Table 2.
Notably, ref. [27] applied a novel machine learning algorithm to analyze dog barks, incorporating context-specific and individual-specific acoustic features, and achieved a classification accuracy of 52%. More recently, ref. [28] proposed a multi-hop attention model for speech emotion recognition, which combined a BiLSTM network for extracting latent features from vocal signals with a multi-hop attention mechanism to refine classification weights. This approach significantly improved recognition performance, reaching an accuracy of 88.51%.
Currently, dog emotion classification primarily relies on visual processing and speech recognition techniques. However, visual methods become increasingly challenging in scenarios involving multiple dogs, as overlapping subjects and complex backgrounds hinder effective image segmentation. Moreover, the exponential growth in convolutional layer settings increases computational complexity and reduces the efficiency of edge feature extraction. In addition, image-based emotion recognition lacks routine applicability in real-world environments. Similarly, in multi-dog environments, overlapping vocalizations introduce significant interference, making it difficult to isolate the vocal signals of a specific dog.
To overcome the aforementioned limitations, we propose a multi-modal approach to emotion classification that moves beyond reliance on a single indicator. Specifically, we utilize vocal signals, which are both highly accurate and well suited for wearable device integration, in combination with muscle potential (MP) signals—associated with behavioral responses—and skin potential (SP) signals—reflecting neuroelectrical activity. Respiration frequency (RF) is additionally employed as a corrective parameter. Together, these modalities enable a comprehensive and real-time assessment of canine emotional states.
This model enables dog owners to assess whether their pet is in a normal emotional state. When a dog experiences abnormal emotions—such as anger or fear, as discussed in this study—it may exhibit behaviors that are detrimental to its own physical and psychological well-being, as well as to the safety of surrounding humans. For example, a dog that feels angry, tense, or on alert in response to environmental stimuli may display aggressive behavior toward passersby. Similarly, excessive fear can trigger severe stress responses, including acute panic, which in extreme cases may lead to cardiac arrest and death. Timely emotion monitoring using this system allows breeders or owners to take early corrective actions, thereby safeguarding the dog’s health and preventing potential harm to others. Moreover, the system may also assist in the early detection of disease. Emotional anomalies such as persistent unhappiness may indicate underlying physical pain or the early onset of illness. In summary, accurate emotion classification enabled by a lightweight, wearable device offers substantial benefits to dogs, their caretakers, and the surrounding environment.
A wearable emotion classification system for dogs not only enhances the management of individual animal welfare, but also drives innovation and sustainable development in the broader field of animal science. Overall, this paper makes the following contributions:
(a) An integrated emotion classification framework based on four observable physiological signals (SP, MP, RF, and VP) is introduced, and four machine learning algorithms were trained and evaluated. The results demonstrate the superiority of the multi-modal signal fusion in improving classification accuracy and confirm the global optimality of the selected algorithm—XGBoost.
(b) The framework introduces the application of two types of electrophysiological signals—skin potential (SP), associated with neurophysiological activity, and muscle potential (MP), associated with behavioral responses—for canine emotion classification. These signals were measured and analyzed across four dog breeds (including one small, one medium, and two large breeds), enabling a comparative study of signal characteristics across breed types.
(c) SHAP (Shapley Additive exPlanations) analysis was applied to rank the relative contributions of the four physiological indicators to emotion classification, as well as to assess the importance of specific features within each modality. This interpretability analysis offers valuable insights for the development of future wearable emotion recognition systems for animals.
The remainder of this paper is organized as follows: Section 2 describes the signal acquisition methods and the model architecture. Section 3 presents the experiments conducted with the proposed multi-epiphysiological indicator dog emotion classification framework. Comparisons and discussion are presented in Section 4. Finally, Section 5 concludes the paper.

2. Materials and Methods

2.1. Signal Acquisition and Feature Extraction

The complete epiphysiological monitoring system proposed in this study is fully non-invasive. All four physiological signals—SP, MP, RF, and VP—were acquired using surface-attached sensors placed at anatomically relevant locations. In compliance with the ethical principles of Replacement, Reduction, and Refinement (3Rs) [29], we designed a controlled emotion-elicitation experiment to capture physiological responses associated with distinct emotional states.
To minimize environmental confounders such as ambient noise, temperature variation, and lighting fluctuations, the experiments were conducted in a controlled environment: a quiet setting, an ambient temperature maintained at 26 °C ± 1 °C, and uniform lighting with a lighting coefficient of 1:10. Given the system’s intended application in real-life pet scenarios, we prioritized naturalistic and ecologically valid emotion elicitation. Emotion-inducing stimuli were selected from commonly used pet-interaction products and daily social situations. All participating subjects were domesticated companion dogs, voluntarily recruited from two local pet parks. Owners who expressed willingness to participate visited the laboratory in advance to verify the experimental safety and feasibility.
Considering the high individual variability in canine emotional expression, each dog’s owner—being most familiar with their pet’s behavior—was responsible for guiding the emotional transitions. This approach ensures ecological validity and aligns with potential real-world applications. Emotional states were induced through personalized scenarios: for example, treats were used to elicit happiness, some dogs became sad when their owner refused a request to go out and play, some became angry when guarding their food, and some became fearful upon hearing the calls of more ferocious animals.
The emotion-elicitation approach used to induce specific emotional states in pet dogs has proven to be highly effective and aligns well with established principles in animal behavioral science. In dogs, for example, reward-related brain activation is commonly triggered by positive stimuli such as food, verbal praise, play, and familiar human scents, all of which reliably evoke feelings of happiness [30,31,32]. Conversely, anger is typically induced by threatening stimuli, including unfamiliar barking dogs, approaching strangers, or other perceived dangers [33]. Behavioral fear-induction tests in dogs often involve sudden loud noises or novel, unexpected objects [34]. A substantial body of literature has demonstrated strong correlations between specific behavioral responses and underlying emotional states in dogs, as also summarized in Table 1 of this study.
To further enhance the real-world applicability of our research, we conducted a behavior-based emotion conversion experiment with five Border Collies trained to use pet communication buttons. These dogs were able to express their emotional intent through button presses, enabling two-way interaction with their owners. The owners monitored the dogs via real-time video surveillance and used the button interface to engage in communication when they perceived a shift in the dog’s emotional state. The moment of emotion transition was then recorded based on the timestamp of the interaction. Experimental staff cross-referenced these timestamps with physiological data to obtain corresponding values for the four monitored physiological indicators. This voice-button-based method of confirming canine emotions has been shown to be highly reliable [35], allowing for more precise labeling of emotional categories at specific moments in time.
Table 3 shows the hardware used in the measurements during our experiments. Throughout the recordings, data loggers continuously recorded all four physiological signals during each emotional episode. Behavioral cues were used to annotate and verify the emotional state, and each data entry included: (a) the emotional category (happiness, sadness, anger, fear); (b) the perceived emotional intensity (rated on a scale from 1 to 5 based on behavioral duration and severity); (c) the exact onset time and duration of each emotional episode.
To ensure accurate and stable acquisition of epiphysiological signals, two anatomical regions were selected for sensor placement based on physiological relevance and signal fidelity. For SP signal and MP signal measurements, the forelimb was chosen as the recording site due to its abundant sweat gland distribution and relatively low subcutaneous fat content, which enhance the sensitivity and stability of electrical signal detection.
For VP and RF monitoring, the neck region was selected. To collect vocal signals while minimizing interference from environmental noise and overlapping vocalizations from other animals, we adopted a non-invasive air-conduction microphone. This sensor captures acoustic signals transmitted through the air from the dog’s throat region, providing a practical solution for naturalistic emotional monitoring in daily pet care. In addition, the proximity of the laryngeal area allows for the integration of both the air-conduction microphone and a respiration sensor, enabling synchronized acquisition of vocal and respiratory features. The combined sensor unit was affixed securely around the neck to ensure continuous, high-fidelity signal collection with minimal motion artifacts. The specific signal acquisition sites are shown in Figure 1, and the actual wearing of the testing equipment is shown in Figure 2.
Due to differences in the amplitude of the SP, MP, RF, and VP signals across individuals, all signal data were normalized. The following formula scales each signal to the [0, 1] interval:
$$X_{normal} = \frac{X - X_{min}}{X_{max} - X_{min}}$$
where $X_{max}$ and $X_{min}$ represent the maximum and minimum values of the original signal, respectively.
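As a concrete illustration, this min–max normalization can be implemented in a few lines of Python/NumPy; the function name and example values below are ours and purely illustrative.

```python
import numpy as np

def min_max_normalize(signal: np.ndarray) -> np.ndarray:
    """Scale a 1-D physiological signal to the [0, 1] interval."""
    x_min, x_max = signal.min(), signal.max()
    if x_max == x_min:                       # guard against a constant signal
        return np.zeros_like(signal, dtype=float)
    return (signal - x_min) / (x_max - x_min)

# Illustrative skin-potential values only; not measured data
sp_raw = np.array([8.3, 6.4, 25.1, 19.2, 13.9])
sp_norm = min_max_normalize(sp_raw)          # all values now lie in [0, 1]
```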
Because packet loss or corruption can occur during Bluetooth transmission, albeit with extremely low probability (typically less than 1% in practical scenarios), the SP and MP signals may occasionally fail to strictly meet the intended sampling frequency of 1000 Hz. To ensure signal integrity and uniform sampling, missing data points were reconstructed using MATLAB’s (MATLAB R2023a) built-in interp1 function. Given the rarity of packet loss, standard interpolation methods suffice to restore the signal accurately without introducing distortion. In this study, cubic spline interpolation was employed by specifying the ‘spline’ option of the interp1 function, enabling precise reconstruction while preserving the original waveform characteristics.
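The paper performs this step with MATLAB’s interp1 and the ‘spline’ option; a rough Python/SciPy equivalent is sketched below, assuming hypothetical timestamps with one dropped sample.

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 1000.0                                            # intended sampling rate (Hz)
# Hypothetical timestamps received over Bluetooth; the sample at 3 ms is missing
t_received = np.array([0.000, 0.001, 0.002, 0.004, 0.005])
x_received = np.array([0.12, 0.15, 0.14, 0.18, 0.20])  # illustrative SP values

# Uniform time base at the intended sampling frequency
n_samples = int(round((t_received[-1] - t_received[0]) * fs)) + 1
t_uniform = t_received[0] + np.arange(n_samples) / fs

# Cubic spline reconstruction of the missing points (analogous to interp1 'spline')
x_uniform = CubicSpline(t_received, x_received)(t_uniform)
```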
In total, we collected 523 emotional data samples from 30 dogs, including 13 Border Collies, 5 Samoyeds, 8 Golden Retrievers, and 4 Shih Tzus. Each sample comprises SP, MP, RF, and VP signal recordings over a 15-second period representing a typical emotional state. The samples were then labeled to form the dataset used to build the emotion recognition model. In this article, we adopt a discrete emotion model and consider only the four basic emotions (happiness, sadness, anger, and fear). The dataset therefore contains 187 happiness samples, 115 sadness samples, 103 anger samples, and 118 fear samples. The training set and the test set do not intersect; they are independent datasets (see Table 4).
We calculated and extracted 29 features from each emotion sample, including 15 time-domain features, 13 frequency-domain features, and 1 nonlinear feature. Table 5 lists the specific feature names [36].
The extracted features include time-domain, frequency-domain, and nonlinear characteristics derived from the emotion-related physiological signal sequences. In the time domain, the following statistical features were computed from each emotional signal sample: the first quartile (q1), median (median), third quartile (q3), minimum ratio (min_ratio), and maximum ratio (max_ratio). To capture the dynamic variation of the signal, we also computed the first-order and second-order differentials of the signal sequence; the mean values of these differential sequences, denoted diff1_mean and diff2_mean, were used as additional time-domain features.
For frequency-domain analysis, we applied the Fast Fourier Transform (FFT) to each signal sequence to obtain its one-sided spectrum. From this, we extracted the mean frequency component (mean_f), the median frequency component (median_f), and the mean values of the first- and second-order differentials of the spectral sequence, labeled diff1_mean_f and diff2_mean_f, respectively. The minimum ratio (min_ratio) of an emotional signal sequence is calculated as follows:
$$min\_ratio = \frac{X_{min}}{len\_X}$$
where $len\_X$ is the data length of the signal and $X_{min}$ is the minimum value of the emotion sample sequence. The maximum ratio (max_ratio) is calculated analogously using the maximum value.
The only nonlinear feature we extracted is the mean crossing rate (mcr) of the emotion sample sequence. It counts the number of times the signal crosses its own mean value, i.e., transitions from above the mean to below it or vice versa, and thus reflects the oscillation level of the signal.
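To make the feature definitions concrete, the sketch below computes a subset of the 29 features (quartiles, differential means, min_ratio/max_ratio, FFT-based statistics, and mcr) for a single signal segment; the function and variable names are our own, not taken from the authors’ code.

```python
import numpy as np

def extract_basic_features(x: np.ndarray) -> dict:
    """Compute a subset of the 29 features for one emotion sample sequence."""
    feats = {}
    # Time-domain statistics
    feats["q1"], feats["median"], feats["q3"] = np.percentile(x, [25, 50, 75])
    feats["min_ratio"] = x.min() / len(x)            # as defined in the text
    feats["max_ratio"] = x.max() / len(x)
    feats["diff1_mean"] = np.mean(np.diff(x, n=1))   # first-order differential
    feats["diff2_mean"] = np.mean(np.diff(x, n=2))   # second-order differential
    # Frequency-domain statistics from the one-sided FFT magnitude spectrum
    spectrum = np.abs(np.fft.rfft(x))
    feats["mean_f"] = spectrum.mean()
    feats["median_f"] = np.median(spectrum)
    feats["diff1_mean_f"] = np.mean(np.diff(spectrum, n=1))
    feats["diff2_mean_f"] = np.mean(np.diff(spectrum, n=2))
    # Nonlinear feature: mean crossing rate (number of crossings of the mean)
    centered = x - x.mean()
    feats["mcr"] = int(np.sum(np.diff(np.sign(centered)) != 0))
    return feats

# Example call on a random placeholder segment (15 s at 1000 Hz = 15000 points)
features = extract_basic_features(np.random.default_rng(0).normal(size=15000))
```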

2.2. Classifier Model: XGBoost

We use four different machine learning algorithms based on signal features for training. Among them, the most important training model is the eXtreme Gradient Boosting (XGBoost) model. The basic steps of the XGBoost model are shown in Figure 3 below.
The XGBoost algorithm can be used for both regression and classification; in this experiment, we use it for classification. We are given $N$ training samples $\{X_i, Y_i\}_{i=1}^{N}$, where $X_i = (x_{i1}, x_{i2}, \ldots, x_{i4})$ represents the four physiological indicator sequences of the $i$th emotion sample, and $Y_i = (y_{i1}, y_{i2}, \ldots, y_{iK})$ is the emotional label of $X_i$, marking the emotional state corresponding to that group of physiological indicator sequences. When the training sample belongs to the $k$th type of emotion, the element $y_{ik}$ of $Y_i$ is 1 and the rest are 0. The training process of the classification model based on the XGBoost algorithm is as follows [37]:
(A) Initialize the model to train a decision tree for each class of sample X:
$$F_k(X) = 0, \quad k = 1, 2, \ldots, K$$
where $K$ represents the number of emotion types; its value is 4.
(B) Construct $K$ functions $\{F_k(X)\}_{k=1}^{K}$ by integrating $T$ decision trees. The probability that a training sample belongs to each category is expressed as:
$$p_k(X) = \frac{\exp(F_k(X))}{\sum_{j=1}^{K}\exp(F_j(X))}$$
where $F_k(X) = \sum_{t=1}^{T}\eta_t \cdot \hat{h}_{t,k}(X)$, $\hat{h}_{t,k}$ denotes the decision tree generated for category $k$ in the $t$th round, and $\eta_t$ is the learning rate.
(C) The objective function consists of a loss function and a regularization term:
$$F(X) = \sum_{i=1}^{N} l(y_i, p(X_i)) + \sum_{t=1}^{T}\sum_{k=1}^{K}\Omega(h_{t,k})$$
Cross-entropy loss:
$$l(y_i, p(X_i)) = -\sum_{k=1}^{K} y_{i,k}\log p_k(X_i)$$
where $y_{i,k}$ is the true label for category $k$.
Regularization:
$$\Omega(h_{t,k}) = \gamma T_{t,k} + \frac{1}{2}\lambda \lVert w_{t,k} \rVert^2$$
where $T_{t,k}$ is the number of leaf nodes of the round-$t$ tree for category $k$, $w_{t,k}$ is the leaf weight vector, and $\gamma$ and $\lambda$ are the regularization coefficients.
The objective function is expanded with a second-order approximation, introducing the first-order gradient $g_{i,k}$ and the second-order Hessian term $h_{i,k}$:
$$F_k(X)^{(t)} \approx \sum_{i=1}^{N}\sum_{k=1}^{K}\left[g_{i,k} F_k^{(t)}(X_i) + \frac{1}{2} h_{i,k}\left(F_k^{(t)}(X_i)\right)^2\right] + \Omega(h_{t,k})$$
where:
$$g_{i,k} = \frac{\partial l(y_i, p(X_i))}{\partial F_k^{(t-1)}(X_i)} = p_k^{(t-1)}(X_i) - y_{i,k}$$
$$h_{i,k} = \frac{\partial^2 l(y_i, p(X_i))}{\partial \left(F_k^{(t-1)}(X_i)\right)^2} = p_k^{(t-1)}(X_i)\left(1 - p_k^{(t-1)}(X_i)\right)$$
(D) For each candidate feature and splitting point, the gain is calculated to select the optimal split:
$$Gain = \frac{1}{2}\left[\frac{\left(\sum_{i \in I_L} g_{i,k}\right)^2}{\sum_{i \in I_L} h_{i,k} + \lambda} + \frac{\left(\sum_{i \in I_R} g_{i,k}\right)^2}{\sum_{i \in I_R} h_{i,k} + \lambda} - \frac{\left(\sum_{i \in I} g_{i,k}\right)^2}{\sum_{i \in I} h_{i,k} + \lambda}\right] - \gamma$$
where $I_L$ and $I_R$ are the sample sets of the left and right child nodes after splitting, respectively, and $I = I_L \cup I_R$. The optimal weight of leaf node $j$ is:
$$w_{j,k}^{*} = -\frac{\sum_{i \in I_j} g_{i,k}}{\sum_{i \in I_j} h_{i,k} + \lambda}$$
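As a worked illustration of step (D), the snippet below evaluates the split gain and the optimal leaf weights from per-sample gradients and Hessians using the formulas above; the regularization values and toy gradients are assumptions for demonstration only.

```python
import numpy as np

lam, gamma = 1.0, 1.0          # regularization coefficients (Table 12 uses gamma = 1)

# Per-sample gradients g = p - y and Hessians h = p(1 - p) for one class k (toy values)
g = np.array([0.30, -0.70, 0.25, 0.60, -0.55])
h = np.array([0.21, 0.21, 0.19, 0.24, 0.25])

def leaf_weight(g_node, h_node):
    """Optimal leaf weight w* = -sum(g) / (sum(h) + lambda)."""
    return -g_node.sum() / (h_node.sum() + lam)

def score(g_node, h_node):
    """Structure score (sum g)^2 / (sum h + lambda) used in the gain formula."""
    return g_node.sum() ** 2 / (h_node.sum() + lam)

left, right = slice(0, 2), slice(2, 5)          # one candidate split of the node
gain = 0.5 * (score(g[left], h[left]) + score(g[right], h[right])
              - score(g, h)) - gamma
w_left, w_right = leaf_weight(g[left], h[left]), leaf_weight(g[right], h[right])
```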
(E) The classification model consists of a set of $K$ tree ensembles $\{F_k\}_{k=1}^{K}$; each $F_k = \{F_{k,1}, F_{k,2}, \ldots, F_{k,M}\}$ corresponds to the prediction function for class $k$. The output probability of the model is:
$$P_k(X) = \frac{\exp\left(\sum_{t=1}^{T}\eta_t F_{k,t}(X)\right)}{\sum_{j=1}^{K}\exp\left(\sum_{t=1}^{T}\eta_t F_{j,t}(X)\right)}$$
where $\eta_t$ is the learning rate of the $t$th tree.
For any sample $X \in \mathbb{R}^N$ and category $k$, the SHAP values $\phi_{i,k}$ of the features $x_i$ satisfy:
$$\sum_{i=1}^{N}\phi_{i,k} = f_k(X) - E[f_k(X)]$$
where $f_k(X) = \sum_{t=1}^{T}\eta_t T_{k,t}(X)$ and $E[f_k(X)]$ is the baseline expected value.
SHAP is an interpretive method based on game theory. It attributes the model’s prediction to each feature by calculating that feature’s contribution to the output. Therefore, by analyzing the contribution values of the 29 features of each of the four physiological indicators, we can determine how strongly each physiological indicator influences the judgment of emotional changes.
The training process of the classification model is described above. Finally, the iteratively trained model is verified with the samples in the test set to assess its recognition performance. During testing, Equation (12) is used to calculate the probability that test sample X belongs to each category, and the category with the highest probability is the one predicted by the model. The algorithm flow chart of the entire experiment is shown in Figure 4.
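A minimal end-to-end sketch of how such a multi-class XGBoost model can be trained and tested with the open-source xgboost library is shown below, using the hyperparameters later listed in Table 12; the feature matrix and labels are random placeholders, not the authors’ dataset.

```python
import numpy as np
import xgboost as xgb

# Placeholder dataset: 379 training and 144 test samples, 4 signals x 29 features each
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(379, 116)), rng.integers(0, 4, size=379)
X_test, y_test = rng.normal(size=(144, 116)), rng.integers(0, 4, size=144)

model = xgb.XGBClassifier(
    objective="multi:softprob",   # per-class probabilities; num_class is inferred
    learning_rate=0.1,
    max_depth=6,
    gamma=1,
    min_child_weight=1,
    subsample=0.8,
    n_estimators=90,
)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)     # probability of each of the 4 emotions
y_pred = proba.argmax(axis=1)           # the class with the highest probability
accuracy = (y_pred == y_test).mean()
```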

3. Results

Through the emotion-evoked experiments, we acquired data for all four epiphysiological indicators.
Eight typical sets of the experimental data are presented in Table 6. In this study, we innovatively introduce SP and MP signals as physiological indicators for canine emotion classification. To ensure diversity and generalizability, we selected four breeds—Shih Tzu (small), Border Collie (medium), and Golden Retriever and Samoyed (large)—and analyzed their SP and MP signal patterns under four distinct emotional states: happiness, sadness, anger, and fear. The results in Table 7 show that SP signals in small dogs tend to exhibit more pronounced fluctuations, suggesting a higher degree of emotional sensitivity and reactivity [38], while medium and large dogs display relatively consistent and stable signal patterns. Interestingly, SP waveforms vary noticeably across emotional states: happiness typically presents as a gradually ascending waveform, reflecting emotional buildup; sadness manifests as a downward trend, occasionally accompanied by smaller declining waves; anger produces intense, sharply rising waveforms with large amplitudes; and fear is characterized by complex, oscillatory patterns. These distinct waveform features across emotional categories indicate that SP signals, in particular, offer strong discriminative power for emotion classification. By capturing the neurophysiological SP signal, the proposed approach enhances the interpretability and precision of emotion recognition in companion dogs.
Regarding the MP signals in Table 8, there is no clear relationship between body size and MP signal change. We attribute this to the fact that, while small dogs are more prone to mood changes, medium and large dogs have more developed calf muscles and a more complex muscle structure, so the difference is not very obvious. Under the happy mood, the MP signal vibrates very intensively, showing dense, moderate-amplitude oscillations. Under the sad emotion, the MP signal is similar to that in the calm state, showing only small-amplitude, short-duration oscillations; this is because dogs mostly do not produce corresponding behaviors when they are in a sad mood. The MP signal changes significantly under the angry mood, showing dense, high-level oscillations; the average maximum signal value is greater than 2000, and the signal characteristics are very obvious. Under fear, the oscillations are also intense, but their amplitude is not high and they are not continuous oscillations of the same degree. It can be seen that although there are some differences in MP signals under different emotions, the differences are not particularly obvious.
Comparing Table 7 and Table 8, the differences in the SP signal under different emotions are obvious, while the differences in the MP signal are harder to distinguish. This is because the degree of sympathetic activation in dogs varies with emotion, leading to different degrees of increased sweat gland secretion [39], which produces significant differences in the galvanic skin response. However, under different emotions, dogs may not act accordingly, so changes in the MP signal may not be observable. For example, when a dog is happy, it sometimes stomps its feet and produces changes in muscle potential, but sometimes it only exhibits mental activity without corresponding actions, producing no change in muscle potential. Although there are differences in muscle potential between a dog stomping happily and thumping the ground angrily, behavior under the same psychological state also varies, which requires a comprehensive judgment based on the current environment. Therefore, SP differences are more pronounced than MP differences for classifying emotions.
All classification model algorithms are implemented in Python (PyCharm 2024.1). In this experiment, we train the classifiers on the training set and use five-fold cross-validation [40] to select the optimal parameter values from a discrete range. We use loss and accuracy to assess performance. Figure 5 shows the accuracy and loss of the model over 0–100 iterations; the fitting error decreases noticeably, and the proposed model becomes effective after about 90 iterations.
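A hedged sketch of five-fold cross-validated hyperparameter selection with scikit-learn’s GridSearchCV is given below; the candidate grid is an assumption of ours, and only the final values reported in Table 12 come from the paper. The data are random placeholders.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X, y = rng.normal(size=(379, 116)), rng.integers(0, 4, size=379)   # placeholder data

param_grid = {                        # candidate values; this grid is an assumption
    "learning_rate": [0.05, 0.1, 0.2],
    "max_depth": [3, 6, 9],
    "n_estimators": [60, 90, 120],
}
search = GridSearchCV(
    estimator=xgb.XGBClassifier(objective="multi:softprob", gamma=1, subsample=0.8),
    param_grid=param_grid,
    cv=5,                             # five-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```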
Figure 6 shows the contribution rate of each feature to the classification results in terms of SHAP values. Comparing the 4 × 29 features, SP and VP have the greatest influence on emotion classification, and the first-order differential standard deviation of the SP signal (SP_diff1_std), a time-domain feature, contributes the most among all 4 × 29 features. MP varies with mood, but the difference is sometimes not very significant. Changes in RF can generally be used to detect whether an abnormal emotion has occurred, but they contribute little to distinguishing between the abnormal emotions themselves. It can therefore be concluded that all four indicators (SP, MP, RF, and VP) play an important role in assessing whether abnormal emotions occur, whereas the SP and VP signals play the dominant role in classifying the specific emotion.
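The feature-contribution ranking summarized in Figure 6 can be reproduced in outline with the shap package’s TreeExplainer, as sketched below; the data, model settings, and generic feature names are placeholders (in the real pipeline the names would follow Table 5, e.g., SP_diff1_std).

```python
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 116)), rng.integers(0, 4, size=200)   # placeholder data
feature_names = [f"{sig}_f{i}" for sig in ("SP", "MP", "RF", "VP") for i in range(29)]

model = xgb.XGBClassifier(objective="multi:softprob", n_estimators=90).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = np.asarray(explainer.shap_values(X))   # layout varies with shap version

# Collapse every axis except the feature axis (size 116) into a mean |SHAP| per feature
feature_axis = shap_values.shape.index(len(feature_names))
other_axes = tuple(i for i in range(shap_values.ndim) if i != feature_axis)
importance = np.abs(shap_values).mean(axis=other_axes)

ranking = sorted(zip(feature_names, importance), key=lambda t: -t[1])
```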
Figure 7 illustrates the learning curves of the XGBoost algorithm in classifying four emotional states—happiness, sadness, anger, and fear—based on varying sizes of training and test sets. Each subfigure represents the algorithm’s classification accuracy under different data volumes for a specific emotion.
In Figure 7a, which presents the learning curve for the happiness category, the training set includes a relatively large number of samples (n = 144), resulting in a stable and well-shaped curve. Although slight oscillations are observed in the early phase of the test curve, both training and test accuracies increase steadily, indicating the model’s robustness. The final accuracy of the training set marginally surpasses that of the test set, suggesting good generalization. This further implies that classification performance can be enhanced with an increased volume of happiness-related data. In Figure 7b, the curve corresponding to sadness shows signs of underfitting, evidenced by a noticeable gap between the training and test accuracies. Since regularization was already applied during the feature extraction stage, we attribute this underfitting primarily to the limited sample size. Nonetheless, the continuous upward trend of both curves reflects the model’s learning capacity and its effectiveness in identifying sadness-related patterns. Figure 7c demonstrates an ideal learning process for the anger emotion. The curve progresses smoothly without oscillations, and both training and test accuracies improve gradually. This performance is likely due to the pronounced and consistent physiological signal variations observed in dogs under angry states. Moreover, the extraction of 29 features from each physiological indicator contributes to the model’s ability to capture discriminative patterns, highlighting the reliability of feature engineering in this context. In Figure 7d, representing the fear emotion, slight overfitting is observed as the test accuracy slightly exceeds that of the training set. However, given the small dataset size and the relatively minor deviation, the overfitting is considered negligible. The overall curve remains stable and gradually ascends, demonstrating the algorithm’s consistency.
In summary, the learning curves across all four emotions are generally smooth and exhibit an upward trend with minimal oscillation, reflecting the overall stability and robustness of the XGBoost algorithm in emotion classification. Notably, the distinct signal characteristics associated with abnormal emotions—such as anger and fear—significantly contribute to classification accuracy. These results validate the feasibility of using a combination of four physiological indicators for real-time emotional state assessment in dogs and further confirm the suitability of XGBoost for animal emotion recognition tasks.

4. Discussion

In this experiment, we use four different machine learning algorithms based on signal features to train the classification model. These algorithms are Neural Network (NN), Support Vector Machine (SVM), Gradient Boosting Decision Tree (GBDT), and eXtreme Gradient Boosting (XGBoost).
NN [41], as the most basic of these machine learning algorithms, achieves a high accuracy rate, which confirms the feasibility and accuracy of judging emotion categories comprehensively from the four indicators. SVM and its extensions are widely used to judge emotional states using sound as a single indicator [42,43]; since sound is also one of the observable indicators in our system, SVM provides a meaningful comparison and demonstrates that integrating the other three physiological indicators improves classification accuracy, which is very important. GBDT [44] has been used to determine emotion categories from SP metrics alone and proved to perform the best among nine machine learning algorithms [36]. We therefore chose its higher-order extension, XGBoost, to verify whether it would perform as expected.
For ease of comparison, the multi-epiphysiological indicator dog emotion classification model based on the XGBoost algorithm is referred to as PRO for short. All classification model algorithms are implemented based on Python sklearn. To achieve better classification performance, we set several of the more important hyperparameter values in each algorithm; Table 9, Table 10, Table 11 and Table 12 list the hyperparameter settings for each classifier model. Parameters not listed in the tables use the default values of the sklearn library.
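For reference, the four compared classifiers can be instantiated with the hyperparameters of Table 9, Table 10, Table 11 and Table 12 roughly as follows; this is a sketch based on the scikit-learn and xgboost APIs, and unlisted arguments fall back to library defaults, as in the paper.

```python
import xgboost as xgb
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

models = {
    "NN": MLPClassifier(hidden_layer_sizes=(50, 100, 100, 50, 40),
                        solver="lbfgs", alpha=1e-5),
    "SVM": SVC(C=17, kernel="rbf", gamma=0.001),
    "GBDT": GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                       min_samples_split=2, min_samples_leaf=1),
    "PRO (XGBoost)": xgb.XGBClassifier(learning_rate=0.1, max_depth=6, gamma=1,
                                       objective="multi:softprob",
                                       min_child_weight=1, subsample=0.8,
                                       n_estimators=90),
}
# for name, clf in models.items():
#     clf.fit(X_train, y_train)                # same training set for every model
#     print(name, clf.score(X_test, y_test))   # accuracy reported per classifier
```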
Figure 8 shows the accuracy of the four classification models on the four emotion tests. As can be seen from the figure, the accuracy of each classifier model in judging each emotion is greater than 75%, showing that judging the emotion type from the four physiological indicators (SP, MP, RF, and VP) is feasible and effective. The accuracy for negative emotions is higher than that for positive emotions. This may be because when a dog is angry or fearful, it makes a low growl that differs from its usual barking and performs violent movements, so the physiological indicators differ significantly; in contrast, the physiological indicators of happiness are mainly reflected in changes in skin potential and respiration frequency, leaving fewer criteria for judgment.
Figure 9 shows the confusion matrices of GBDT and XGBoost, which allow a clearer comparison of the two algorithms’ accuracy. For example, of the 28 test samples of the sad emotion, the GBDT algorithm misclassified 3 as happiness and 1 as fear, while the XGBoost algorithm misclassified only 3 as happiness and none as fear. It can be seen that the XGBoost algorithm has higher classification accuracy.
Figure 10 shows the ROC curves of the NN, SVM, GBDT, and XGBoost (PRO) classifier models. By comparison, the NN curve fluctuates noticeably, reflecting the probabilistic output characteristics of the neural network; its judgments of negative and positive predictive value are not very stable in emotion classification, and the stability of the algorithm needs to be improved by integrating other algorithms. The SVM curve rises in a stepped manner because the model separates the different categories well; however, owing to the uneven distribution of data samples and the limited size of the test set, the steps rise gently. The GBDT curve shows a relatively smooth, roughly exponential rise, although it is not as smooth as that of XGBoost, indicating that tree-based models do have a clear advantage in classification. The second-order optimization of XGBoost further improves its ability to distinguish between positive and negative classes, so its AUC value is higher and its curve is smoother.
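The per-class ROC curves and macro-averaged AUC underlying Figure 10 and Table 13 can be computed with scikit-learn along the following lines; the labels and predicted probabilities below are random placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
y_test = rng.integers(0, 4, size=144)                 # placeholder true labels
proba = rng.dirichlet(np.ones(4), size=144)           # placeholder class probabilities

# Macro-averaged one-vs-rest AUC over the four emotion classes
auc = roc_auc_score(y_test, proba, multi_class="ovr", average="macro")

# Per-class ROC curves (one-vs-rest), as plotted in Figure 10
y_bin = label_binarize(y_test, classes=[0, 1, 2, 3])
curves = {k: roc_curve(y_bin[:, k], proba[:, k]) for k in range(4)}
```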
Table 13 details the performance metrics of the four compared algorithms. Comparing them, the XGBoost algorithm is the most stable and effective. Although the GBDT model performs better in judging happy emotions, it is less accurate than XGBoost in judging negative emotions. NN and SVM, as more basic algorithms, are slightly lacking in classification accuracy. As an improved model of GBDT, the XGBoost algorithm is indeed more accurate in emotion classification: it accelerates convergence through the second derivative, and its regularization term reduces overfitting, which improves classification accuracy.

5. Conclusions

In this study, we design a multi-epiphysiological indicator dog emotion classification system based on the XGBoost algorithm. The following three aspects of the research have been realized:
1. We demonstrate the feasibility of using SP, MP, RF, and VP to comprehensively judge animal emotion classification compared with traditional single-indicator methods, with all of the mentioned machine learning algorithms achieving accuracies above 75%.
2. The experimental results show that XGBoost outperforms NN, SVM, and GBDT in emotion classification of dogs, achieving an average accuracy of 90.54%, especially excelling in identifying abnormal emotional states.
3. We propose the novel use of SP and MP for animal emotion classification and identify SP and VP as the most influential indicators among the four, providing valuable direction for future animal emotion detection systems.
Through the combination of machine learning algorithms and multi-indicator comprehensive judgment, the proposed system is shown to produce more accurate emotion classification and a faster classification process. Future work will collect data on more animal species and increase the variety of classified emotions, enabling a more integrated, portable animal emotion classification system.

Author Contributions

Conceptualization, W.J. and Y.H.; methodology, W.J.; software, Y.H.; validation, Z.W. and K.S.; formal analysis, Z.W. and K.S.; investigation, Y.H. and Z.W.; resources, W.J. and B.H.; data curation, W.J. and B.H.; writing—original draft preparation, W.J. and B.H.; writing—review and editing, W.J. and Y.H.; visualization, Z.W.; supervision, B.H.; project administration, B.H.; funding acquisition, B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 61601140 and the Postdoctoral Initiation Foundation of Heilongjiang under grant number 68641400.

Institutional Review Board Statement

Because the non-invasive external data collection method of physiological indicators does not affect the physical or mental health of pet dogs or compromise moral considerations, ethical review and approval of this study is not required. In the data collection, the authors assure that the study strictly adhered to international animal welfare guidelines, endeavouring to respect the dogs’ own habits and animal welfare and to act in an ethical manner.

Informed Consent Statement

All owners of the pet dogs who participated in the collection of our epiphysiological index data provided informed consent.

Data Availability Statement

The original contributions presented in this study are included in this article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We are very grateful to the anonymous reviewers for their valuable comments and suggestions for the improvement of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SP: Skin potential
MP: Muscle potential
RF: Respiration frequency
VP: Voice pattern
XGBoost: Extreme gradient boosting
NN: Neural networks
SVM: Support vector machines
GBDT: Gradient boosting decision trees
SHAP: Shapley additive explanation
PCA: Principal component analysis
LDA: Latent Dirichlet allocation
HMM: Hidden Markov model
GMM: Gaussian mixture model
ANN: Artificial neural network model
BiLSTM: Bidirectional long short-term memory network
3Rs: The principles of replacement, reduction, and refinement

References

  1. Meehan, M.; Massavelli, B.; Pachana, N. Using Attachment Theory and Social Support Theory to Examine and Measure Pets as Sources of Social Support and Attachment Figures. Anthrozoös 2017, 30, 273–289. [Google Scholar] [CrossRef]
  2. Liu, S.; Li, W.; Hong, X.; Song, M.; Liu, F.; Guo, Z.; Zhang, L. Correction: Effects of anger and trigger identity on triggered displaced aggression among college students: Based on the “kicking the barking dog effect”. BMC Psychol. 2024, 12, 691. [Google Scholar] [CrossRef]
  3. Kim, A.S.; Bain, J.M. A dog with stranger-directed aggression. J. Am. Vet. Med. Assoc. 2024, 262, 1692–1694. [Google Scholar] [CrossRef]
  4. Burza, L.B.; Bloom, T.; Trindade, P.H.E.; Friedman, H.; Otta, E. Reading Emotions in Dogs’ Eyes and Dogs’ Faces. Behav. Processes 2022, 202, 104752. [Google Scholar] [CrossRef]
  5. Zdzisław, K.; Michał, C.; Weronika, Ż. Categorization of emotions in dog behavior based on the deep neural network. Comput. Intell. 2022, 38, 2116–2133. [Google Scholar]
  6. Riemer, S.; Heritier, C.; Windschnurer, I.; Pratsch, L.; Arhant, C.; Affenzeller, N. A Review on Mitigating Fear and Aggression in Dogs and Cats in a Veterinary Setting. Animals 2021, 11, 158. [Google Scholar] [CrossRef]
  7. Guo, S.; Elmadhoun, O.; Ding, Y. Mental stress studies in animals. Environ. Dis. 2018, 3, 29–30. [Google Scholar]
  8. Ng, Z.Y.; Pierce, B.J.; Otto, C.M.; Buechner-Maxwell, V.A.; Siracusa, C.; Werre, S.R. The effect of dog–human interaction on cortisol and behavior in registered animal-assisted activity dogs. Appl. Anim. Behav. Sci. 2014, 159, 69–81. [Google Scholar] [CrossRef]
  9. Meridda, A.; Gazzano, A.; Mariti, C. Assessment of dog facial mimicry: Proposal for an emotional dog facial action coding system (EMDOGFACS). J. Vet. Behav. 2014, 9, e3. [Google Scholar] [CrossRef]
  10. Caeiro, C.; Guo, K.; Mills, D. Author Correction: Dogs and humans respond to emotionally competent stimuli by producing different facial actions. Sci Rep. 2018, 8, 1. [Google Scholar] [CrossRef]
  11. Bertin, A.; Mulot, B.; Nowak, R.; Blache, M.C.; Love, S.; Arnold, M.; Pinateau, A.; Arnould, C.; Lansade, L. Captive Blue-and-yellow macaws (Ara ararauna) show facial indicators of positive affect when reunited with their caregiver. Behav. Processes 2023, 206, 104833. [Google Scholar] [CrossRef]
  12. Das, S.; Kumari, R.; Singh, K.R. Advancements in computational emotion recognition: A synergistic approach with the emotion facial recognition dataset and RBF-GRU model architecture. Int. J. Syst. Assur. Eng. Manag. 2024, 16, 1–16. [Google Scholar] [CrossRef]
  13. Bremhorst, A.; Mills, D.S.; Würbel, H.; Riemer, S. Evaluating the accuracy of facial expressions as emotion indicators across contexts in dogs. Anim. Cogn. 2021, 25, 1–16. [Google Scholar] [CrossRef]
  14. De Winkel, T.; van der Steen, S.; Enders-Slegers, M.J.; Griffioen, R.; Haverbeke, A.; Groenewoud, D.; Hediger, K. Observational behaviors and emotions to assess welfare of dogs: A systematic review. J. Vet. Behav. 2024, 72, 1–17. [Google Scholar] [CrossRef]
  15. Kremer, L.; Holkenborg, K.S.; Reimert, I.; Bolhuis, J.E.; Webb, L.E. The nuts and bolts of animal emotion. Neurosci. Biobehav. Rev. 2020, 113, 273–286. [Google Scholar] [CrossRef]
  16. Stadtländer, H.K.T.C. Exploring animal behavior through sound: Volume 1—methods. Bioacoustics 2025, 34, 88–91. [Google Scholar] [CrossRef]
  17. Kawakami, Y.; Hattori, T.; Kawano, H.; Izumi, T. Experimental Investigation of Feature Quantity in Sound Signal and Feeling Impression Using PCA. J. Robot. Netw. Artif. Life 2025, 1, 303–311. [Google Scholar] [CrossRef]
  18. Kingeski, R.; Henning, E.; Paterno, S.A. Fusion of PCA and ICA in Statistical Subset Analysis for Speech Emotion Recognition. Sensors 2024, 24, 5704. [Google Scholar] [CrossRef]
  19. Jawad, M.D.A.; Abbas, M.E. Automatic speech emotion recognition based on hybrid features with ANN, LDA and KNN classifiers. Multimed. Tools Appl. 2023, 82, 42783–42801. [Google Scholar]
  20. Pierre-Yves, O. The production and recognition of emotions in speech: Features and algorithms. Int. J. Hum. Comput. Stud. 2003, 59, 157–183. [Google Scholar] [CrossRef]
  21. Swain, M.; Sahoo, S.; Routray, A.; Kabisatpathy, P.; Kundu, J.N. Study of feature combination using HMM and SVM for multilingual Odiya speech emotion recognition. Int. J. Speech Technol. 2015, 18, 387–393. [Google Scholar] [CrossRef]
  22. Palo, K.H.; Chandra, M.; Mohanty, N. Emotion recognition using MLP and GMM for Oriya language. Comput. Vis. Robot 2017, 7, 426–442. [Google Scholar] [CrossRef]
  23. Darekar, R.V.; Chavan, M.; Sharanyaa, S.; Ranjan, N.M. A hybrid meta-heuristic ensemble based classification technique speech emotion recognition. Adv. Eng. Softw. 2023, 180, 103412. [Google Scholar] [CrossRef]
  24. Huang, S.; Dang, H.; Jiang, R.; Hao, Y.; Xue, C.; Gu, W. Multi-Layer Hybrid Fuzzy Classification Based on SVM and Improved PSO for Speech Emotion Recognition. Electronics 2021, 10, 2891. [Google Scholar] [CrossRef]
  25. Kang, X. Speech emotion recognition algorithm of intelligent robot based on ACO-SVM. Int. J. Cogn. Comput. Eng. 2025, 6, 131–142. [Google Scholar] [CrossRef]
  26. Ayadi, M.E.; Kamel, M.S.; Karray, F. Survey on speech emotion recognition: Features, classification schemes, and databases. Pattern Recognit. 2011, 44, 572–587. [Google Scholar] [CrossRef]
  27. Molnár, C.; Kaplan, F.; Roy, P.; Pachet, F.; Pongrácz, P.; Dóka, A.; Miklósi, Á. Classification of dog barks: A machine learning approach. Anim. Cogn. 2018, 11, 389–400. [Google Scholar] [CrossRef]
  28. Yoon, S.; Byun, S.; Dey, S.; Jung, K. Speech Emotion Recognition Using Multi-hop Attention Mechanism. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 2822–2826. [Google Scholar]
  29. Mancini, C.; Nannoni, E. Relevance, impartiality, welfare and consent: Principles of an animal-centered research ethics. Front. Anim. Sci. 2022, 3, 800186. [Google Scholar] [CrossRef]
  30. Boissy, A.; Manteuffel, G.; Jensen, M.B.; Moe, R.O.; Spruijt, B.; Keeling, L.J.; Winckler, C.; Forkman, B.; Dimitrov, I.; Langbein, J.; et al. Assessment of positive emotions in animals to improve their welfare. Physiol. Behav. 2007, 92, 375–397. [Google Scholar] [CrossRef]
  31. Ohl, F.; Staay, D.V.F. Animal welfare: At the interface between science and society. Vet. J. 2012, 192, 13–19. [Google Scholar] [CrossRef]
  32. Polgár, Z.; Blackwell, J.E.; Rooney, J.N. Assessing the welfare of kennelled dogs—A review of animal-based measures. Appl. Anim. Behav. Sci. 2019, 213, 1–13. [Google Scholar] [CrossRef] [PubMed]
  33. Klausz, B.; Kis, A.; Persa, E.; Miklósi, Á.; Gácsi, M. A quick assessment tool for human-directed aggression in pet dogs. Aggress. Behav. 2014, 40, 178–188. [Google Scholar] [CrossRef]
  34. Hydbring-Sandberg, E.; von Walter, L.W.; Hoglund, K.; Svartberg, K.; Swenson, L.; Forkman, B. Physiological reactions to fear provocation in dogs. J. Endocrinol. 2004, 180, 439–448. [Google Scholar] [CrossRef]
  35. Bastos, A.P.; Evenson, A.; Wood, P.M.; Houghton, Z.N.; Naranjo, L.; Smith, G.E.; Cairo-Evans, A.; Korpos, L.; Terwilliger, J.; Raghunath, S.; et al. How do soundboard-trained dogs respond to human button presses? An investigation into word comprehension. PLoS ONE 2024, 19, e0307189. [Google Scholar] [CrossRef] [PubMed]
  36. Chen, S. Electronic Science and Technology. Emotion Recognition and Signal Characterization Based on Skin Potential. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2025. Volume 3. [Google Scholar]
  37. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  38. McGreevy, P.D.; Georgevsky, D.; Carrasco, J.; Valenzuela, M.; Duffy, D.L.; Serpell, J.A. Dog behavior co-varies with height, bodyweight and skull shape. PLoS ONE 2017, 8, e80529. [Google Scholar] [CrossRef]
  39. Phillips, M.L. Understanding the neurobiology of emotion perception: Implications for psychiatry. Br. J. Psychiatry 2003, 182, 190–192. [Google Scholar] [CrossRef] [PubMed]
  40. Zaidi, A.S.; Chouvatut, V.; Phongnarisorn, C.; Praserttitipong, D. Deep learning based detection of endometriosis lesions in laparoscopic images with 5-fold cross-validation. Intell. Based Med. 2025, 11, 100230. [Google Scholar] [CrossRef]
  41. Yang, C. Research and Implementation of FPGA-Based Artificial Neural Network. Ph.D. Thesis, Xidian University, Xi’an, China, 2016. [Google Scholar]
  42. Flower, T.M.L.; Jaya, T. Speech emotion recognition using Ramanujan Fourier Transform. Appl. Acoust. 2022, 201, 109133. [Google Scholar] [CrossRef]
  43. Raghu, K.; Manchala, S.; Hanumanthu, B. Deep Learning Algorithms for Speech Emotion Recognition with Hybrid Spectral Features. SN Comput. Sci. 2023, 5, 17. [Google Scholar]
  44. Zhang, L.; Li, Y.; Tang, Z. Breast Tissue Classification Method Based on Machine Learning. Recent Patents Eng. 2024, 18, 18–27. [Google Scholar]
Figure 1. Signal acquisition sites in dogs.
Figure 2. A physical device containing the signal detection instrument.
Figure 3. The basic steps of the XGBoost model.
Figure 4. Algorithm flow chart of the four-indicator emotion classification system.
Figure 5. Accuracy and loss performance in the training and testing phases from 0–100 iterations.
Figure 6. The contribution rate of four physiological indicators.
Figure 7. The training set size learning curve of the XGBoost algorithm under four emotions.
Figure 8. The accuracy of the NN, SVM, GBDT, and PRO classifier models for four emotions.
Figure 9. Confusion matrix plots of the GBDT (left) and PRO (right) algorithms.
Figure 10. ROC comparison of NN, SVM, GBDT, PRO.
Table 1. Correspondence between apparent morphology changes and emotions in dogs.
Indicators Inferring Emotions | Emotions/Affective States
Ears sway back and forth from side to side | Happiness
Ears forward and slightly cocked | Happiness
Bristle the tail | Happiness
Ears back | Sadness
Ears back and flutter slightly | Anger
Bristle the coat | Fear/Anger
Tuck their tail between the legs | Fear
Roll their ears back | Fear (alertness)
Ears forward | Fear (alertness)
Table 2. The accuracy of four machine learning algorithms in speech recognition emotion experiments [21].
Classifier | HMM | GMM | ANN | SVM
Average classification accuracy | 75.5–78.5% | 74.83–81.94% | 51.19–52.82%, 63–70% | 75.45–81.29%
Table 3. Measurement equipment and signal properties.
Detected Signal | Equipment Photo | Type | Brand | Sampling Rate (Hz) | Measurement Frequency (Hz) | Signal-to-Noise Ratio (dB)
SP | [photo] | STM32 | ST | 1000 | 10–110 | -
MP | [photo] | STM32 | ST | 1000 | 20–500 | -
RF | [photo] | ESP32 | Espressif Technologies | 20 | 50–500 | -
VP | [photo] | ESP32 | Espressif Technologies | 8000 | 0–3400 | 60–65
Table 4. The division of the training set and the test set.
Dataset | Number of Subjects | Happiness Samples | Sadness Samples | Anger Samples | Fear Samples | Sum
Training Set | 22 | 141 | 87 | 72 | 79 | 379
Test Set | 8 | 46 | 28 | 31 | 39 | 144
Sum | 30 | 187 | 115 | 103 | 118 | 523
Table 5. The 29 signal features extracted for each signal.
Domain | Feature Name Abbreviation
Time domain | q1, q3, median, mean, std, var, rms, min_ratio, max_ratio, diff1_mean, diff1_median, diff1_std, diff2_mean, diff2_median, diff2_std
Frequency domain | mean_f, median_f, std_f, var_f, rms_f, min_ratio_f, max_ratio_f, diff1_mean_f, diff1_median_f, diff1_std_f, diff2_mean_f, diff2_median_f, diff2_std_f
Nonlinear | mcr
Table 6. Part of the experimental data.
Sample Num | SP (μS) | MP (μV) | RF (Times) | VP (Hz) | Emotion
1 | 8.3 | 18.5 | 57 | 1217 | Happiness
2 | 6.4 | 15.2 | 46 | 1091 | Happiness
3 | 5.8 | 14.6 | 39 | 927 | Sadness
4 | 6.7 | 16.1 | 42 | 818 | Sadness
5 | 25.1 | 65.8 | 71 | 423 | Anger
6 | 19.2 | 61.9 | 74 | 228 | Anger
7 | 13.9 | 42.2 | 65 | 1609 | Fear
8 | 12.4 | 49.3 | 83 | 1940 | Fear
Table 7. SP signals of four breeds of dogs under four emotions. (Abscissa label: SP signal frequency (Hz); ordinate label: time (s)).
[Image grid of SP signal waveforms: columns = Border Collie, Samoyed, Golden Retriever, Shih Tzu; rows = Happiness, Sadness, Anger, Fear.]
Table 8. MP signals of four breeds of dogs under four emotions. (Abscissa label: MP signal frequency (Hz); ordinate label: time (s)).
[Image grid of MP signal waveforms: columns = Border Collie, Samoyed, Golden Retriever, Shih Tzu; rows = Happiness, Sadness, Anger, Fear.]
Table 9. The hyperparameter settings for the NN [28] classifier model.
Hyperparameter | Value | Explanation
hidden_layer_sizes | (50, 100, 100, 50, 40) | hidden layer structure
solver | ‘lbfgs’ | weight optimization method
alpha | 1 × 10⁻⁵ | regularization parameter
Table 10. The hyperparameter settings for the SVM [29,30] classifier model.
Hyperparameter | Value | Explanation
C | 17 | penalty coefficient
kernel | ‘rbf’ | kernel function
gamma | 0.001 | kernel function coefficient
Table 11. The hyperparameter settings for the GBDT [25,31] classifier model.
Hyperparameter | Value | Explanation
n_estimators | 100 | maximum number of iterations for weak learners
max_depth | 3 | maximum depth of the decision tree
min_samples_split | 2 | minimum number of samples contained in each non-leaf node
min_samples_leaf | 1 | minimum number of samples contained in each leaf node
Table 12. The hyperparameter settings for the XGBoost (PRO) classifier model.
Hyperparameter | Value | Explanation
learning_rate | 0.1 | learning rate of the model
max_depth | 6 | maximum depth of the decision tree
gamma | 1 | minimum loss reduction required for node splitting
objective | ‘multi:softprob’ | defines the task type
num_class | 4 | number of defined categories
min_child_weight | 1 | minimum value of the sum of leaf node sample weights
subsample | 0.8 | proportion of samples used when training each tree
n_estimators | 90 | maximum number of iterations for weak learners
Table 13. Comparison of classification reports for NN, SVM, GBDT, PRO.
Algorithm | AUC | Accuracy | Recall | F1 Score | Support Number
Training Set:
NN | 0.78 | 0.86 | 0.72 | 0.70 | 326
SVM | 0.82 | 0.87 | 0.78 | 0.76 | 330
GBDT | 0.89 | 0.91 | 0.84 | 0.83 | 345
PRO | 0.91 | 0.92 | 0.88 | 0.88 | 347
Test Set:
NN | 0.75 | 0.83 | 0.69 | 0.71 | 119
SVM | 0.76 | 0.85 | 0.72 | 0.74 | 122
GBDT | 0.83 | 0.88 | 0.81 | 0.77 | 127
PRO | 0.85 | 0.90 | 0.87 | 0.79 | 130