Adaptive Refined Graph Convolutional Action Recognition Network with Enhanced Features for UAV Ground Crew Marshalling
Highlights
- An adaptive refined graph convolutional network is proposed that integrates multi-order features (joints, bones, and angles) across static and dynamic domains, improving robustness to inter-class similarity and intra-class variation through data-driven topology learning and an adaptive refinement feature activation mechanism (ARFAM); a minimal sketch of the topology-learning idea follows this list.
- Joint-type and frame-index semantics are introduced as spatio-temporal constraints, strengthening the capture of temporal evolution patterns and improving the discriminability and logical consistency of long, complex action sequences.
- Effectiveness is validated on the NTU-RGB+D benchmarks and a self-constructed ICAO ground crew dataset (90.71% real-time accuracy), addressing the challenge of recognizing highly similar gestures in UAV ground crew marshalling scenarios.
- The method provides technical support for UAV-ground command system coordination, with robustness validated through edge-device deployment, facilitating practical smart-airport ground operations in the low-altitude economy context.
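The sketch below illustrates the data-driven adaptive topology idea named in the highlights, in the spirit of 2s-AGCN [37] and CTR-GC [13]: the effective graph sums a fixed skeleton adjacency, a freely learned residual adjacency, and a sample-dependent term inferred from pairwise similarity of embedded joint features. All class and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AdaptiveGraphConv(nn.Module):
    """Illustrative sample-adaptive graph convolution (cf. 2s-AGCN [37])."""

    def __init__(self, in_channels, out_channels, adjacency, embed_dim=16):
        super().__init__()
        self.register_buffer("A", adjacency)                # (V, V) physical skeleton graph
        self.B = nn.Parameter(torch.zeros_like(adjacency))  # learned global topology residual
        self.theta = nn.Conv2d(in_channels, embed_dim, 1)   # joint embeddings for similarity
        self.phi = nn.Conv2d(in_channels, embed_dim, 1)
        self.out = nn.Conv2d(in_channels, out_channels, 1)

    def forward(self, x):
        # x: (N, C, T, V) — batch, channels, frames, joints
        n, c, t, v = x.shape
        q = self.theta(x).mean(dim=2).permute(0, 2, 1)      # (N, V, E), pooled over time
        k = self.phi(x).mean(dim=2)                         # (N, E, V)
        S = torch.softmax(torch.bmm(q, k), dim=-1)          # (N, V, V) sample-specific links
        A = self.A.unsqueeze(0) + self.B.unsqueeze(0) + S   # combined topology
        y = torch.einsum("nctv,nvw->nctw", x, A)            # aggregate features over joints
        return self.out(y)
```

For example, `AdaptiveGraphConv(3, 64, torch.eye(25))` maps a `(N, 3, T, 25)` NTU-style skeleton batch to `(N, 64, T, 25)`; a full network would stack several such layers with temporal convolutions between them.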
Abstract
1. Introduction
2. Related Work
3. Method
3.1. Multi-Order and Motion Feature Modeling
3.1.1. Angle Encoding
3.1.2. Static and Dynamic Domain Modeling of Joints and Bones
3.1.3. Feature Construction and Fusion
3.2. Self-Adaptive Graph Convolutional Module Based on Enhanced Data-Driven Learning
3.2.1. Adaptive Topology Construction Driven by Data
3.2.2. Adaptive Refinement Feature Activation Mechanism
3.2.3. Joint Semantic Adaptive Graph Convolutional Spatial Modeling Module
3.2.4. Frame-Index Semantic Temporal Feature Modeling Module
3.3. Computational Complexity Analysis
4. Experiments
4.1. Datasets and Experimental Settings
4.1.1. NTU-RGB+D 60/120 Datasets
4.1.2. Experimental Settings
4.2. Experimental Results and Analysis
4.2.1. Comparative Experiments
4.2.2. Ablation Studies
- (1) Effectiveness of Multi-Level Feature Modeling
- (2) Ablation Study on Adaptive Refined Feature Activation Mechanism and Semantic Information
- (3) Network Depth Ablation Study
5. Application Study: UAV Ground Crew Marshalling Action Recognition
5.1. Dataset Construction
5.1.1. Dataset Specifications and Development
5.1.2. Annotation Software
5.1.3. Experimental Platform
5.2. Experimental Details and Results Analysis
5.2.1. Workflow of UAV Ground Crew Marshalling Gesture Recognition
5.2.2. Application Experimental Results and Analysis
- (1) Comparison Experiments with Baseline Methods
- (2) Typical Action Recognition Results and Confusion Analysis
- (3) Visualization Analysis of Adaptive Graph Topology
5.2.3. Robustness Analysis
- (1) Robustness Analysis of Sequence Length and Execution Speed
- (2) Distance Robustness Analysis
- (3) Environmental Robustness Analysis
5.2.4. Edge Device Deployment and Real-Time Performance
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Zhou, Y. Unmanned aerial vehicles based low-altitude economy with lifecycle techno-economic-environmental analysis for sustainable and smart cities. J. Clean. Prod. 2025, 499, 145050.
2. Zhang, J.; Liu, Y.; Zheng, Y. Overall eVTOL aircraft design for urban air mobility. Green Energy Intell. Transp. 2024, 3, 100150.
3. Jin, Y. The Evolution and Challenges of Low-Altitude Economy: Insights from Experience in China. In Proceedings of the 1st International Conference on Modern Logistics and Supply Chain Management (MLSCM 2024), Foshan, China, 28–30 June 2024.
4. Postorino, M.N.; Sarné, G.M. Reinventing mobility paradigms: Flying car scenarios and challenges for urban mobility. Sustainability 2020, 12, 3581.
5. Ren, B.; Liu, M.; Ding, R.; Liu, H. A survey on 3D skeleton-based action recognition using learning method. Cyborg Bionic Syst. 2024, 5, 0100.
6. Shahroudy, A.; Liu, J.; Ng, T.-T.; Wang, G. NTU RGB+D: A large scale dataset for 3D human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1010–1019.
7. Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
8. Duan, H.; Zhao, Y.; Chen, K.; Lin, D.; Dai, B. Revisiting skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2969–2978.
9. Liu, J.; Shahroudy, A.; Xu, D.; Wang, G. Spatio-temporal LSTM with trust gates for 3D human action recognition. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 816–833.
10. Bono, F.M.; Radicioni, L.; Cinquemani, S.; Conese, C.; Tarabini, M. Development of soft sensors based on neural networks for detection of anomaly working condition in automated machinery. In Proceedings of the NDE 4.0, Predictive Maintenance, and Communication and Energy Systems in a Globally Networked World, Long Beach, CA, USA, 6 March–11 April 2022; pp. 56–70.
11. Ke, Q.; Bennamoun, M.; An, S.; Sohel, F.; Boussaid, F. A new representation of skeleton sequences for 3D action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3288–3297.
12. Cheng, K.; Zhang, Y.; Cao, C.; Shi, L.; Cheng, J.; Lu, H. Decoupling GCN with DropGraph module for skeleton-based action recognition. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 536–553.
13. Chen, Y.; Zhang, Z.; Yuan, C.; Li, B.; Deng, Y.; Hu, W. Channel-wise topology refinement graph convolution for skeleton-based action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 13359–13368.
14. Cai, J.; Jiang, N.; Han, X.; Jia, K.; Lu, J. JOLO-GCN: Mining joint-centered light-weight information for skeleton-based action recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 2735–2744.
15. Lee, J.; Lee, M.; Lee, D.; Lee, S. Hierarchically decomposed graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 10444–10453.
16. Wang, S.; Zhang, Y.; Zhao, M.; Qi, H.; Wang, K.; Wei, F.; Jiang, Y. Skeleton-based action recognition via temporal-channel aggregation. arXiv 2022, arXiv:2205.15936.
17. Chi, H.-G.; Ha, M.H.; Chi, S.; Lee, S.W.; Huang, Q.; Ramani, K. InfoGCN: Representation learning for human skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 20186–20196.
18. Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; Tian, Q. Symbiotic graph neural networks for 3D skeleton-based human action recognition and motion prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3316–3333.
19. Shi, L.; Zhang, Y.; Cheng, J.; Lu, H. Decoupled spatial-temporal attention network for skeleton-based action-gesture recognition. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020.
20. Shu, X.; Xu, B.; Zhang, L.; Tang, J. Multi-granularity anchor-contrastive representation learning for semi-supervised skeleton-based action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 7559–7576.
21. Lin, L.; Zhang, J.; Liu, J. Actionlet-dependent contrastive learning for unsupervised skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 2363–2372.
22. Zhou, H.; Liu, Q.; Wang, Y. Learning discriminative representations for skeleton based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 10608–10617.
23. Do, J.; Kim, M. SkateFormer: Skeletal-temporal transformer for human action recognition. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; pp. 401–420.
24. Long, N.H.B. STEP CATFormer: Spatial-temporal effective body-part cross attention transformer for skeleton-based action recognition. arXiv 2023, arXiv:2312.03288.
25. Xiang, W.; Li, C.; Zhou, Y.; Wang, B.; Zhang, L. Generative action description prompts for skeleton-based action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 10276–10285.
26. Wang, Y.; Wu, Y.; He, W.; Guo, X.; Zhu, F.; Bai, L.; Zhao, R.; Wu, J.; He, T.; Ouyang, W. Hulk: A universal knowledge translator for human-centric tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 5672–5689.
27. Qin, Z.; Liu, Y.; Perera, M.; Gedeon, T.; Ji, P.; Kim, D.; Anwar, S. ANUBIS: Skeleton action recognition dataset, review, and benchmark. arXiv 2022, arXiv:2205.02071.
28. Shi, L.; Zhang, Y.; Cheng, J.; Lu, H. Skeleton-based action recognition with directed graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7912–7921.
29. Qin, Z.; Liu, Y.; Ji, P.; Kim, D.; Wang, L.; McKay, R.I.; Anwar, S.; Gedeon, T. Fusing higher-order features in graph neural networks for skeleton-based action recognition. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 4783–4797.
30. Sun, H.; Wen, Y.; Feng, H.; Zheng, Y.; Mei, Q.; Ren, D.; Yu, M. Unsupervised bidirectional contrastive reconstruction and adaptive fine-grained channel attention networks for image dehazing. Neural Netw. 2024, 176, 106314.
31. Hu, K.; Shen, C.; Wang, T.; Shen, S.; Cai, C.; Huang, H.; Xia, M. Action recognition based on multi-level topological channel attention of human skeleton. Sensors 2023, 23, 9738.
32. Liu, D.; Xu, H.; Wang, J.; Lu, Y.; Kong, J.; Qi, M. Adaptive attention memory graph convolutional networks for skeleton-based action recognition. Sensors 2021, 21, 6761.
33. Plizzari, C.; Cannici, M.; Matteucci, M. Skeleton-based action recognition via spatial and temporal transformer networks. Comput. Vis. Image Underst. 2021, 208, 103219.
34. Li, C.; Zhong, Q.; Xie, D.; Pu, S. Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. arXiv 2018, arXiv:1804.06055.
35. Si, C.; Chen, W.; Wang, W.; Wang, L.; Tan, T. An attention enhanced graph convolutional LSTM network for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1227–1236.
36. Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; Tian, Q. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3595–3603.
37. Shi, L.; Zhang, Y.; Cheng, J.; Lu, H. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12026–12035.
38. Zhang, P.; Lan, C.; Zeng, W.; Xing, J.; Xue, J.; Zheng, N. Semantics-guided neural networks for efficient skeleton-based human action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1112–1121.
39. Cheng, K.; Zhang, Y.; He, X.; Chen, W.; Cheng, J.; Lu, H. Skeleton-based action recognition with shift graph convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 183–192.
40. Liu, Z.; Zhang, H.; Chen, Z.; Wang, Z.; Ouyang, W. Disentangling and unifying graph convolutions for skeleton-based action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 143–152.
41. Song, Y.-F.; Zhang, Z.; Shan, C.; Wang, L. Constructing stronger and faster baselines for skeleton-based action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 1474–1488.
| Methods | X-Sub (%) | X-View (%) |
|---|---|---|
| ST-GCN [7] | 81.5 | 88.3 |
| ST-TR [33] | 89.3 | 94.3 |
| HCN [34] | 86.5 | 91.1 |
| AGC-LSTM [35] | 87.5 | 93.5 |
| AS-GCN [36] | 86.8 | 94.2 |
| 2S-AGCN [37] | 88.5 | 92.9 |
| SGN [38] | 88.4 | 93.8 |
| Ours | 89.4 | 94.2 |
| Joint | Joint_vel | Bone | Bone_vel | Angle | NTU-RGB+D 60 X-Sub (%) | NTU-RGB+D 60 X-View (%) | NTU-RGB+D 120 X-Sub (%) | NTU-RGB+D 120 X-Set (%) | FLOPs (M) |
|---|---|---|---|---|---|---|---|---|---|
| ✓ | - | - | - | - | 73.6 | 83.2 | 77.0 | 75.6 | 11.2 |
| ✓ | ✓ | - | - | - | 86.9 | 92.7 | 79.9 | 81.7 | 22.6 |
| - | - | ✓ | - | - | 65.3 | 69.9 | 71.1 | 73.5 | 11.3 |
| - | - | ✓ | ✓ | - | 86.5 | 89.6 | 74.8 | 79.5 | 22.6 |
| ✓ | - | ✓ | - | - | 86.0 | 92.1 | 79.2 | 81.0 | 22.5 |
| ✓ | ✓ | ✓ | - | - | 89.1 | 93.5 | 80.3 | 82.0 | 33.9 |
| ✓ | ✓ | ✓ | ✓ | - | 89.2 | 93.8 | 80.9 | 82.2 | 45.1 |
| ✓ | ✓ | ✓ | ✓ | ✓ | 89.4 | 94.2 | 81.7 | 83.3 | 56.5 |
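The rows above correspond to input streams derived from the same skeleton sequence. Below is a hedged sketch of how such streams are typically constructed from raw 3D joints; the toy parent table, the velocity definition, and the cosine-based angle encoding are illustrative assumptions, not the paper's exact formulation (which, per Section 3.1, combines angle encoding with static and dynamic domain modeling).

```python
import numpy as np

# Hypothetical parent table for a toy 5-joint chain; the real NTU-RGB+D
# skeleton has 25 joints with its own parent assignments.
PARENTS = [0, 0, 1, 2, 3]

def build_streams(joints: np.ndarray, parents=PARENTS) -> dict:
    """Derive joint/velocity/bone/angle streams from (T, V, 3) coordinates."""
    joints = np.asarray(joints, dtype=np.float32)
    bones = joints - joints[:, parents, :]                    # vector from parent to joint
    joint_vel = np.diff(joints, axis=0, prepend=joints[:1])   # frame-to-frame motion
    bone_vel = np.diff(bones, axis=0, prepend=bones[:1])
    # One simple angle encoding: cosine between each bone and its parent bone.
    parent_bones = bones[:, parents, :]
    cos = np.sum(bones * parent_bones, axis=-1, keepdims=True)
    norms = (np.linalg.norm(bones, axis=-1, keepdims=True)
             * np.linalg.norm(parent_bones, axis=-1, keepdims=True) + 1e-6)
    angle = cos / norms                                       # (T, V, 1)
    return {"joint": joints, "joint_vel": joint_vel,
            "bone": bones, "bone_vel": bone_vel, "angle": angle}
```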
| Methods | X-Sub (%) | X-View (%) | FLOPs (M) |
|---|---|---|---|
| Joint | 73.6 | 83.2 | 11.2 |
| +Local | 83.1 | 89.7 | 22.5 |
| +Center | 82.8 | 89.6 | 22.7 |
| +Pair | 83.4 | 89.7 | 22.8 |
| +All | 83.7 | 89.9 | 23.1 |
| Action | Joint (%) | +All (%) | Acc_Improve (%) | Similar Action |
|---|---|---|---|---|
| A44: headache | 37.7 | 75.4 | 37.7 | A47: neck pain |
| A41: sneeze/cough | 42.0 | 68.8 | 26.8 | A37: wipe face |
| A5: drop | 55.6 | 81.5 | 25.9 | A24: kicking something |
| A33: check time | 59.8 | 85.5 | 25.7 | A39: put palms together |
| A10: clapping | 44.3 | 67.8 | 23.5 | A34: rub two hands |
| A12: writing | 24.6 | 45.6 | 23.0 | A11: reading |
| A45: chest pain | 69.5 | 88.8 | 19.3 | A46: back pain |
| A42: staggering | 74.6 | 93.8 | 19.2 | A51: kicking |
| A13: tear up paper | 64.9 | 84.1 | 19.2 | A11: reading |
| A6: pick up | 78.6 | 97.1 | 18.5 | A16: put on a shoe |
| Attention Mechanism | Joint Type | Frame Index | FLOPs (M) | NTU-RGB+D 60 X-Sub (%) | NTU-RGB+D 60 X-View (%) | NTU-RGB+D 120 X-Sub (%) | NTU-RGB+D 120 X-Set (%) |
|---|---|---|---|---|---|---|---|
| w/o | - | - | 53.4 | 88.5 | 92.9 | 79.8 | 81.6 |
| GCN1(ARFAM) | - | - | 54.3 | 88.7 | 93.1 | 80.4 | 81.8 |
| GCN2(ARFAM) | - | - | 54.3 | 88.6 | 92.9 | 80.2 | 81.7 |
| GCN3(ARFAM) | - | - | 54.3 | 88.6 | 93.0 | 80.3 | 81.7 |
| GCN(ARFAM) | - | - | 55.9 | 88.9 | 93.1 | 80.4 | 82.0 |
| SA | - | - | 61.5 | 88.1 | 92.4 | 79.3 | 81.0 |
| MHSA | - | - | 62.0 | 88.5 | 92.8 | 79.9 | 81.6 |
| w/o | ✓ | ✓ | 54.0 | 88.9 | 93.8 | 80.9 | 82.8 |
| GCN1(ARFAM) | ✓ | ✓ | 54.9 | 89.3 | 94.2 | 81.6 | 83.2 |
| GCN2(ARFAM) | ✓ | ✓ | 54.9 | 89.0 | 93.9 | 81.6 | 83.0 |
| GCN3(ARFAM) | ✓ | ✓ | 54.9 | 89.1 | 94.1 | 81.4 | 82.9 |
| GCN(ARFAM) | ✓ | - | 56.2 | 89.2 | 93.8 | 81.1 | 82.7 |
| GCN(ARFAM) | - | ✓ | 56.2 | 89.1 | 93.6 | 80.9 | 82.5 |
| GCN(ARFAM) | ✓ | ✓ | 56.5 | 89.4 | 94.2 | 81.7 | 83.3 |
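The Joint Type and Frame Index columns above refer to semantic embeddings in the spirit of SGN [38]: each joint receives a learned embedding for which joint it is, and each frame for when it occurs, so the network can exploit spatio-temporal position explicitly. A minimal sketch under that assumption (names illustrative):

```python
import torch
import torch.nn as nn


class SemanticEmbedding(nn.Module):
    """Sketch of joint-type and frame-index semantics (cf. SGN [38])."""

    def __init__(self, num_joints: int, max_frames: int, dim: int):
        super().__init__()
        self.joint_type = nn.Embedding(num_joints, dim)   # which joint
        self.frame_index = nn.Embedding(max_frames, dim)  # which frame

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V)
        n, c, t, v = x.shape
        jt = self.joint_type(torch.arange(v, device=x.device))   # (V, D)
        fi = self.frame_index(torch.arange(t, device=x.device))  # (T, D)
        jt = jt.permute(1, 0).reshape(1, -1, 1, v).expand(n, -1, t, v)
        fi = fi.permute(1, 0).reshape(1, -1, t, 1).expand(n, -1, t, v)
        return torch.cat([x, jt, fi], dim=1)                     # (N, C + 2D, T, V)
```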
| Methods | Accuracy (%) | Jaccard (%) | F1-Score (%) |
|---|---|---|---|
| ST-GCN | 87.76 | 80.62 | 87.52 |
| Shift-GCN | 86.19 | 80.15 | 85.01 |
| 2s-AGCN | 88.30 | 82.88 | 89.29 |
| MS-G3D | 89.15 | 83.21 | 88.67 |
| CTR-GC | 89.47 | 83.58 | 89.04 |
| SGN | 89.82 | 83.89 | 89.35 |
| EfficientGCN | 89.56 | 83.74 | 89.18 |
| GAP | 89.67 | 82.17 | 88.92 |
| STEP-CATFormer | 88.93 | 81.45 | 88.26 |
| Ours | 90.71 | 84.32 | 90.13 |
| Methods | Accuracy (%) | Jaccard (%) | F1-Score (%) |
|---|---|---|---|
| ST-GCN | 92.96 | 89.67 | 91.25 |
| Shift-GCN | 92.83 | 88.59 | 91.04 |
| 2s-AGCN | 94.12 | 91.40 | 94.38 |
| MS-G3D | 94.58 | 91.85 | 94.72 |
| CTR-GC | 94.73 | 92.08 | 94.91 |
| SGN | 95.21 | 92.73 | 95.47 |
| EfficientGCN | 94.89 | 92.35 | 94.68 |
| GAP | 95.08 | 92.21 | 95.13 |
| STEP-CATFormer | 94.35 | 91.58 | 94.52 |
| Ours | 96.09 | 93.62 | 96.22 |
| Sequence Length (Frames) | Accuracy (%) | Jaccard (%) | F1-Score (%) |
|---|---|---|---|
| 15 | 94.67 | 91.12 | 94.55 |
| 20 | 96.09 | 93.62 | 96.22 |
| 25 | 95.78 | 93.18 | 95.89 |
| 30 | 95.53 | 92.87 | 95.64 |
| Speed Multiplier | Frame Interval | Accuracy (%) | Jaccard (%) | F1-Score (%) |
|---|---|---|---|---|
| 1 | – | 96.09 | 93.62 | 96.22 |
| 2 | – | 95.47 | 92.85 | 95.61 |
| 3 | – | 94.13 | 91.38 | 94.32 |
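The multipliers above presumably simulate faster execution by keeping every k-th frame of a sequence. A small illustrative helper under that assumption; the 20-frame window is taken from the best-performing length in the sequence-length table, and the loop-padding strategy is hypothetical:

```python
import numpy as np

def speed_up(sequence: np.ndarray, k: int, target_len: int = 20) -> np.ndarray:
    """Simulate k-times execution speed by keeping every k-th frame,
    then pad/crop to the fixed window the model expects."""
    fast = sequence[::k]                       # (ceil(T/k), V, C)
    if len(fast) < target_len:                 # loop-pad clips that become too short
        reps = int(np.ceil(target_len / len(fast)))
        fast = np.tile(fast, (reps, 1, 1))
    return fast[:target_len]
```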
| Body Proportion | Corresponding Distance | Accuracy (%) | Jaccard (%) | F1-Score (%) |
|---|---|---|---|---|
| Original Dataset (34–68%) | 3–6 m | 96.09 | 93.62 | 96.22 |
| 25% | ∼9 m | 95.83 | 93.21 | 95.94 |
| 20% | ∼11 m | 95.41 | 92.78 | 95.53 |
| 15% | ∼14 m | 94.25 | 91.36 | 93.81 |
| Environmental Condition | Test Acc. (%) | Test Jac. (%) | Test F1 (%) | Test + Train Acc. (%) | Test + Train Jac. (%) | Test + Train F1 (%) |
|---|---|---|---|---|---|---|
| Original | 90.71 | 84.32 | 90.13 | 90.92 | 84.58 | 90.38 |
| Illumination | 89.78 | 83.15 | 89.05 | 90.85 | 84.49 | 90.29 |
| Illumination + Haze | 89.82 | 83.21 | 89.11 | 90.88 | 84.54 | 90.33 |
| Illumination + Rain | 89.95 | 83.38 | 89.26 | 90.91 | 84.57 | 90.36 |
| Shadow | 89.32 | 82.56 | 88.64 | 90.52 | 84.08 | 89.94 |
| Occlusion | 87.64 | 80.87 | 87.21 | 89.91 | 83.42 | 89.26 |
| Environmental Condition | Test Acc. (%) | Test Jac. (%) | Test F1 (%) | Test + Train Acc. (%) | Test + Train Jac. (%) | Test + Train F1 (%) |
|---|---|---|---|---|---|---|
| Original | 96.09 | 93.62 | 96.22 | 96.25 | 93.82 | 96.40 |
| Illumination | 95.15 | 92.48 | 95.28 | 96.17 | 93.71 | 96.31 |
| Illumination + Haze | 95.19 | 92.54 | 95.33 | 96.19 | 93.74 | 96.34 |
| Illumination + Rain | 95.26 | 92.65 | 95.41 | 96.22 | 93.79 | 96.38 |
| Shadow | 94.87 | 92.03 | 94.95 | 95.85 | 93.22 | 95.96 |
| Occlusion | 93.21 | 89.76 | 93.18 | 95.18 | 92.39 | 95.16 |