From Movements to Metrics: Evaluating Explainable AI Methods in Skeleton-Based Human Activity Recognition

The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human–computer interaction. This paper tackles a well-known gap in the field: the lack of testing of the applicability and reliability of XAI evaluation metrics in the skeleton-based HAR domain. To address this problem, we test established XAI metrics, namely faithfulness and stability, on Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM). This study introduces a perturbation method that produces variations within the error tolerance of motion sensor tracking, ensuring that the resulting skeletal data points remain within the plausible output range of human movement as captured by the tracking device. We used the NTU RGB+D 60 dataset and the EfficientGCN architecture for HAR model training and testing. The evaluation involved systematically perturbing the 3D skeleton data by applying controlled displacements at different magnitudes to assess the impact on XAI metric performance across multiple action classes. Our findings reveal that faithfulness may not consistently serve as a reliable metric across all classes for the EfficientGCN model, indicating its limited applicability in certain contexts. In contrast, stability proves to be a more robust metric, showing dependability across different perturbation magnitudes. Additionally, CAM and Grad-CAM yielded almost identical explanations, leading to closely similar metric outcomes. These results suggest a need to explore additional metrics and apply more diverse XAI methods to broaden the understanding and effectiveness of XAI in skeleton-based HAR.


INTRODUCTION
Analyzing human movement through 3D skeleton data has promising non-trivial applications in high-stakes sectors such as healthcare and rehabilitation [1], security and surveillance [2], sports and athletics [3], and human–computer interaction [4]. Because of this, integrating deep learning into skeleton data analysis requires an understanding of the model's decision-making processes. One particular application is compliance with the EU's proposed AI Act, which emphasizes that transparency and human oversight should be embedded in high-risk applications such as AI-assisted medical diagnostics [5]. A basic form of deep learning applied to human movement analysis using skeleton data is human activity/action recognition (HAR). State-of-the-art HAR models are continually being developed and improved. It started with the introduction of the Spatial Temporal Graph Convolutional Networks (ST-GCN) architecture in 2018 [6], the first to use graph convolution for HAR. ST-GCN then became the baseline for dozens of emerging skeleton-based HAR models that seek to improve on this original implementation.
Recent advancements in HAR model architectures have been significant, but strides in their explainability remain limited. Class Activation Mapping (CAM) was used in EfficientGCN [7] and ST-GCN [8] to highlight the body points significant for specific actions. In [9], Gradient-weighted Class Activation Mapping (Grad-CAM) was implemented in ST-GCN. There is a growing trend towards using explainable AI (XAI) methods, extending from CNNs to ST-GCNs, yet XAI metrics to assess their reliability in this domain have yet to be tested. There is also a lack of comparative analysis between these methods, which leaves a gap in understanding their relative performance in HAR applications. Additionally, research is limited on metrics that assess XAI methods in the context of data perturbation.
While the paper in [9] evaluated the faithfulness and contrastivity of Grad-CAM, it did not offer insights into its performance relative to other XAI methods. Moreover, their choice of masking/occlusion to check for changes in prediction output raises concerns. Masking can distort the standard human skeletal structure that GCN-based models are trained on. Movements of the human body are governed by biomechanical principles, and perturbations that do not respect these principles can result in a misleading understanding of the model's faithfulness. Recognizing the growing relevance of skeleton-based HAR models in critical areas, our paper aims to test established metrics that assess their corresponding feature attribution techniques. Additionally, no other study has assessed explainability metrics using biomechanically accurate perturbations of the skeletal graph. In [10] and [11], perturbation of skeleton data was employed, which the authors claim maintained normal human kinematics during perturbation. However, the objective was adversarial attack, so the perturbed skeleton joints were neither controlled nor deliberately targeted. In essence, we also tackle the absence of evaluation metrics grounded in biomechanically correct perturbations of the skeletal graph.
Alongside the pursuit of improved explainability, assessing the stability of model decisions and their explanations is important. As human skeletal data can exhibit subtle variances due to minor changes in posture, movement, or data capturing techniques, the decisions from the model and the explanations from the XAI methods should remain consistent and trustworthy. That is, dramatic shifts in explanations due to minor input changes cast doubt on model reliability. Moreover, imprecise estimation of joint center positions in 3D skeletal data analysis underscores the need to evaluate decision and explanation robustness using biomechanically and kinematically correct perturbations. To address this, we draw from metrics established for other data types. In this work, we focus on the two primary metrics highlighted in [12]: faithfulness [13], which gauges how closely an explanation mirrors the model's internal logic, and stability [14], which pertains to the consistency of a model's explanations across similar inputs. This paper's key contributions are:
• Testing established metrics and assessing their applicability for evaluating XAI methods in skeleton-based HAR.
• Introducing a controlled approach to perturb 3D skeleton data for XAI evaluation, ensuring biomechanically correct variations for realistic skeletal data representation.
• Assessing the impact of perturbation magnitude variations on metrics.
• Comparing the performance of CAM, Grad-CAM, and a random feature attribution method for HAR.

MATERIALS
To provide the framework for this research, the dataset used, the neural network architecture trained and tested, and the XAI metrics implemented are briefly summarized below.

NTU RGB+D 60 dataset and EfficientGCN
The NTU RGB+D 60 dataset [15] contains 60 action classes with over 56,000 3D skeleton sequences, each composed of sequential frames captured from 40 different subjects using the Kinect v2 camera with depth sensor. For evaluation purposes, the dataset is further divided into cross-subject and cross-view subgroups: the former is composed of different human subjects performing the same actions, while the latter uses different camera angle views. The EfficientGCN architecture [16] extends the concept of EfficientNet [17] for CNNs into ST-GCNs to reduce the computing resource demand for HAR. There are a total of 24 different EfficientGCN network configurations with different scaling coefficients that the user can choose and test. In this paper, we use the B4 configuration, which achieved the highest accuracy at 92.1% on the cross-subject dataset and 96.1% on cross-view, compared to 81.5% and 88.3% for the baseline ST-GCN, respectively.

Faithfulness
Predictive faithfulness refers to the degree to which changes in an explanation's features meaningfully alter the model's prediction, indicating the explanation's alignment with the model's actual reasoning process. Prediction Gap on Important feature perturbation (PGI) measures how much the prediction changes when the top-k features are perturbed. Conversely, Prediction Gap on Unimportant feature perturbation (PGU) measures the change in prediction when unimportant features are perturbed. The two metrics are derived from the formulas below [12]. Let X represent the original input data with its associated explanation e_X, and f(·) the output probability. Then, X′ signifies the perturbed variant of X, and e_X′ the revised explanation after perturbation.
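Restated in the notation above, the two prediction gaps can be sketched as follows (consistent with the formulations in [12]; the expectation is over the random perturbations applied to the selected top-k, respectively non-top-k, features):

```latex
\mathrm{PGI}(X, f, e_X, k) =
  \mathbb{E}_{X' \sim \mathrm{perturb}\left(X;\ \mathrm{top}\text{-}k(e_X)\right)}
  \big[\, \lvert f(X) - f(X') \rvert \,\big]

\mathrm{PGU}(X, f, e_X, k) =
  \mathbb{E}_{X' \sim \mathrm{perturb}\left(X;\ \mathrm{non\text{-}top}\text{-}k(e_X)\right)}
  \big[\, \lvert f(X) - f(X') \rvert \,\big]
```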
Fig. 2: The EfficientGCN pipeline [16] showing the variables for calculating faithfulness and stability. Perturbation is performed in the Data Preprocess stage.

Stability
The concept of stability refers to the maximum amount of change in an explanation (i.e., attribution scores) when the input data is slightly perturbed. There are three sub-metrics that can be calculated.
Relative Input Stability (RIS) measures the maximum change in attribution scores with respect to a corresponding perturbation in the input. Given that EfficientGCN has multiple input branches, it is essential to compute the RIS for each branch, namely joint, velocity, and bone. From here on, these shall be referred to as RISj, RISv, and RISb, respectively.
Relative Output Stability (ROS) measures the maximum ratio of how much the explanation changes to how much the model's prediction probability changes due to small perturbations in the input data.
Relative Representation Stability (RRS) measures the maximum change in a model's explanations relative to changes in the model's internal representations brought about by small input perturbations. In this context, the internal representation, denoted L_X, typically refers to an intermediate layer's output in a neural network, capturing the model's understanding of the input data. In our experiment, we extract and use the logits from the layer preceding the softmax function for our computations.
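As a sketch, the three sub-metrics follow the relative-stability formulations in [12], with ⊘ denoting element-wise division and ε a small constant guarding against division by zero:

```latex
\mathrm{RIS} = \max_{X'}
  \frac{\left\| (e_X - e_{X'}) \oslash e_X \right\|_p}
       {\max\!\big( \left\| (X - X') \oslash X \right\|_p,\ \epsilon \big)}
\qquad
\mathrm{ROS} = \max_{X'}
  \frac{\left\| (e_X - e_{X'}) \oslash e_X \right\|_p}
       {\max\!\big( \left\| f(X) - f(X') \right\|_p,\ \epsilon \big)}
\qquad
\mathrm{RRS} = \max_{X'}
  \frac{\left\| (e_X - e_{X'}) \oslash e_X \right\|_p}
       {\max\!\big( \left\| (L_X - L_{X'}) \oslash L_X \right\|_p,\ \epsilon \big)}
```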

Skeleton Data Perturbation
In 3D space, skeleton joints can be perturbed using spherical coordinates by generating random angles θ and ϕ for the perturbation direction, sourced from a Gaussian distribution. The magnitude of this perturbation is controlled by radius r. In a standard XAI metric test, we recommend adhering to the principle that X′ should be within the neighborhood of X to ensure that inputs remain representative of human kinematics and avoid skewing model predictions. This means constraining the magnitude of r, which in our pipeline is initially set to 2.5 cm. When it comes to body point tracking, the Kinect v2's tracking error ranges from 1 to 7 cm compared to the gold-standard Vicon system [18], so a 2.5 cm perturbation ensures the perturbed data stays within Kinect's typical accuracy tolerance. However, we also tested the metrics with increasing r (in cm: 2.5, 5, 10, 20, 40, and 80). While this contradicts our initial recommendation, varying the perturbation magnitude allows us to (a) test the hypothesis that a small perturbation should result in meaningful changes in the prediction, which should be reflected in faithfulness results, and (b) observe its effects on the explanations, which should be reflected in stability results. The point P′(x′, y′, z′) in Figure 1 can be calculated using the equations below, which convert a point from spherical to Cartesian (rectangular) coordinates. In these equations, r represents the distance between the two points, θ denotes the azimuthal angle, and ϕ is the polar angle. A fixed random seed was used to generate reproducible angles θ and ϕ. The variables dx, dy, and dz are computed once, and each joint is given its own unique set of these values. When added to the original coordinates across all video frames, a mildly perturbed 3D point is produced. This method ensures that a particular joint undergoes the same random adjustment across all frames, rather than different perturbations in each frame.
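A minimal NumPy sketch of this perturbation scheme (the function name and array shapes are illustrative, not taken from the original pipeline):

```python
import numpy as np

def perturb_skeleton(skeleton, r=2.5, seed=0):
    """Perturb each joint of a skeleton sequence by a fixed random
    offset of magnitude r (in the same units as the coordinates).

    skeleton: array of shape (frames, joints, 3).
    Each joint gets one random direction, drawn once from a Gaussian
    and reused across all frames, so the joint undergoes the same
    displacement in every frame.
    """
    rng = np.random.default_rng(seed)     # fixed seed -> reproducible angles
    n_joints = skeleton.shape[1]
    theta = rng.normal(size=n_joints)     # azimuthal angle
    phi = rng.normal(size=n_joints)       # polar angle

    # Spherical -> Cartesian conversion of the displacement vector.
    dx = r * np.sin(phi) * np.cos(theta)
    dy = r * np.sin(phi) * np.sin(theta)
    dz = r * np.cos(phi)
    offsets = np.stack([dx, dy, dz], axis=-1)   # shape (joints, 3)

    # Broadcast the per-joint offset over all frames.
    return skeleton + offsets[None, :, :]
```

Because the direction angles only steer the displacement, every joint moves by exactly r, keeping the distortion within the stated sensor tolerance.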
Figure 2 and Equations 6 and 7 show how to obtain the variables needed to calculate the metrics. From the definition of CAM [19], w in Equation 6 are the weights after Global Average Pooling (GAP) for the specific output class, and F_n denotes the nth feature map. Similarly, α in Equation 7 for Grad-CAM [20] is calculated by averaging the gradients. Figure 3 shows sample CAM and Grad-CAM explanations in comparison with the random baseline. For PGI and PGU, we begin by determining the average attribution scores for each skeletal joint across all frames. Next, the joints are ranked in order of significance. The top-k joints (and, separately, their non-top-k counterparts) are then perturbed, with the k nodes selected according to the attribution-score ranking.
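For reference, a sketch of the standard forms of these maps, consistent with [19] and [20] (superscript c denotes the target class, y^c the class score, and Z the number of positions averaged over):

```latex
\text{CAM:}\quad M^c = \sum_n w_n^c\, F_n
\qquad
\text{Grad-CAM:}\quad
\alpha_n^c = \frac{1}{Z} \sum_i \frac{\partial y^c}{\partial F_{n,i}},
\qquad
M^c = \mathrm{ReLU}\!\left( \sum_n \alpha_n^c\, F_n \right)
```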
For each data instance and for each value of k, PGI and PGU are computed. The results are then aggregated by calculating the AUC, which allows a single value to be reported for each metric.
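The ranking, top-k perturbation, and AUC aggregation steps above can be sketched as follows (a hypothetical helper, assuming `model` returns the predicted class probability as a float; PGU would simply select `ranking[k:]` instead of `ranking[:k]`):

```python
import numpy as np

def pgi_auc(model, X, attributions, ks, r=2.5, seed=0):
    """Sketch of PGI computed over several k values and aggregated via AUC.

    model: callable mapping a skeleton array (frames, joints, 3) to a
           class probability (float).
    attributions: per-frame, per-joint attribution scores (frames, joints).
    ks: increasing list of k values (number of top joints to perturb).
    """
    joint_scores = attributions.mean(axis=0)   # average scores over frames
    ranking = np.argsort(joint_scores)[::-1]   # most important joints first

    rng = np.random.default_rng(seed)          # fixed seed for reproducibility
    gaps = []
    for k in ks:
        top_k = ranking[:k]
        theta = rng.normal(size=k)             # random perturbation directions
        phi = rng.normal(size=k)
        offsets = r * np.stack([np.sin(phi) * np.cos(theta),
                                np.sin(phi) * np.sin(theta),
                                np.cos(phi)], axis=-1)
        X_pert = X.copy()
        X_pert[:, top_k, :] += offsets[None, :, :]      # perturb only top-k joints
        gaps.append(abs(model(X) - model(X_pert)))      # prediction gap

    # Aggregate the per-k gaps into one value with the trapezoidal rule (AUC).
    return sum(0.5 * (gaps[i] + gaps[i - 1]) * (ks[i] - ks[i - 1])
               for i in range(1, len(ks)))
```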
Since the metrics are unitless, a random method that randomly assigns feature attribution scores serves as the benchmark for the least desirable outcome. Higher PGI values are optimal, indicating that altering important skeletal nodes has a marked impact on the prediction. Lower PGU values are better, suggesting that perturbing the identified unimportant skeletal nodes does not cause a significant change in the model's output prediction. Lastly, a stability (RIS, RRS, and ROS) closer to zero is indicative of a model's robustness, signifying that minor perturbations to the input data do not lead to significant changes in the explanation.

RESULTS
To help gauge the reliability of the XAI evaluation metrics, we test them by gradually increasing the perturbation magnitude, as described in the Materials section. Figures 4 and 5 show line graphs for each metric test on classes 11 and 26, respectively, comparing the different XAI methods. From here on, we shall refer to class 26 as the strongest class and class 11 as the weakest class. For the exact numerical values of the results with confidence intervals, please refer to Appendices A and B.

Faithfulness
The identical PGI and PGU values for CAM and Grad-CAM mean that both methods produce the exact same ranking of features, although the numerical attribution scores are not exactly the same. Unexpectedly, the random method appears to outperform both in PGI in the weakest class, except at r = 80 cm. Conversely, the results for the strongest class in Figure 5 suggest equal PGI performance among all methods up to r ≤ 5 cm, beyond which random seems to surpass the others. In essence, on a class where the model has the best classification performance, the PGI test aligns with expected outcomes only at higher values of r, where data distortion is significant, which is no longer consistent with the rule that X′ ∈ N_X. Moreover, where model classification is weakest, PGI results consistently give unexpected outcomes. Analysis of the PGU results for the weakest class indicates that CAM and Grad-CAM outperform random, as expected. In the strongest class, however, conformity to expected outcomes occurs only when r ≥ 40 cm, with random exhibiting higher values; at lower perturbation magnitudes, the results suggest that the three methods exhibit either comparable performance or that the random method has marginally higher PGU values. Therefore, PGU tests meet expected outcomes primarily when model performance is weak or when input perturbation is significant during strong model performance. These irregularities in both PGI and PGU suggest that the hypothesis of faithfulness (that minor perturbations cause meaningful prediction shifts) is not upheld for the EfficientGCN model. Using output predictions to gauge explanation fidelity proves unreliable in this context.

Stability
Stability assessments for CAM and Grad-CAM yield nearly identical values, diverging only in less significant decimal places. This implies that despite both methods giving different raw scores, they tend to converge upon normalization. Stability test results, contrary to faithfulness, demonstrate robustness against increased perturbation, consistently indicating the superiority of CAM and Grad-CAM over random in the two classes tested. Thus, stability testing affirms that input perturbations do not drastically alter explanations compared to the random baseline.
It can be observed in Figures 4f and 5f that ROS results register very high numerical values, with the y-axis scaled to 1 × 10^7. We inspected the individual terms in each of the ROS results and found that the cause of such high numbers is the extremely small denominator terms (typically less than 1). Since the denominator term of ROS is the difference between the original and perturbed predictions, this means that the change in the model's output probability is very small even when the perturbation magnitude is large. A small denominator, reflecting little change in output probabilities even with substantial perturbations, corroborates the inefficacy of the PGI and PGU tests in our context. These tests, which rely on shifts in prediction probabilities, fail to yield meaningful results in response to input perturbations, further supporting our hypothesis regarding the model's behavior under examination.

DISCUSSION AND CONCLUSION
Our research contributes to the understanding of explainable AI in the context of skeleton-based HAR, advancing the state of the art by testing known metrics in this emerging domain and introducing a biomechanically informed perturbation technique. A key finding from our experiments is that faithfulness, a widely recognized XAI metric, may falter in certain models, such as EfficientGCN. This finding serves as a caution to XAI practitioners when using an XAI metric that measures the reliability of XAI methods indirectly through the change in the model's prediction probability. In contrast, stability, which measures direct changes in explanations, emerged as a dependable metric. However, this leaves us with only a single metric, which offers a limited view of an XAI method's efficacy, underscoring the need for developing or adapting additional testing approaches in this field. Our skeleton perturbation method, which leverages biomechanically precise modifications to evaluate explanation techniques, offers a promising framework for validating upcoming XAI metrics.
This study also identifies other gaps in XAI for ST-GCN-based HAR, which present opportunities for future research. The nearly identical explanations produced by Grad-CAM and CAM when applied to EfficientGCN highlight a need for more diverse XAI techniques, such as adaptations of model-agnostic methods like LIME [21] and SHAP [22] for this specific domain. Additionally, comparative studies of XAI metrics across various HAR models remain scarce; such studies could serve as a guide for model selection where explainability is as important as accuracy.
Lastly, our comparative analysis between CAM and Grad-CAM, revealing negligible stability differences, suggests that neither method is superior; they are essentially equivalent. Yet CAM's use of static model weights obtained post-training means it demands less computation than Grad-CAM, which needs gradient computation per data instance. This highlights CAM's suitability for large-scale data analysis, a consideration that is especially pertinent for applications where computational efficiency is vital alongside accuracy and reliability.

Fig. 1: Illustration of perturbing a point P(x, y, z) in 3D space to a new position P′(x′, y′, z′) using spherical coordinates. The perturbation magnitude is represented by r, with azimuthal angle θ and polar angle ϕ.

Fig. 3: Left to right: CAM, Grad-CAM, and baseline random attributions for a data instance in 'writing' (class 11), averaged over all frames and normalized. The color gradient denotes the score intensity: blue indicates 0, progressing to red, which indicates a score of 1.

Table 1: Class 11 tabular data. ↑ indicates that higher values are better, while ↓ indicates that lower values are optimal.

Table 2: Class 26 tabular data. ↑ indicates that higher values are better, while ↓ indicates that lower values are optimal.