Subset-Aware Dual-Teacher Knowledge Distillation with Hybrid Scoring for Human Activity Recognition
Abstract
1. Introduction
- Objective subset definition: We defined static and dynamic activity groups in an objective and reproducible manner using optical-flow-based statistical indicators [9], thereby establishing a quantitative grouping scheme grounded in motion characteristics (an illustrative sketch of this computation is given after this list).
- Dual-teacher selective distillation: Unlike existing multi-teacher KD approaches that mainly rely on structural diversity or ensemble averaging, we independently trained subset-specialized teachers and integrated their knowledge into the student through a selective KD strategy. To support this process, we proposed a hybrid weighting mechanism that combines teacher confidence with teacher loss, enabling selective transfer that reflects both teacher reliability and complementary signals.
- Comprehensive evaluation: We conducted a subset-based performance analysis together with a teacher–student distribution similarity assessment. The results show that the proposed DTKD not only improves overall accuracy but also enables the student to selectively mimic the appropriate teacher's distribution and to effectively acquire subset-specific knowledge.
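The subset-definition step can be illustrated with a short, assumption-laden sketch. The paper's indicators are derived from optical flow [9] and summarized as per-subset Q1, mean, STD, and IQR values (see the optical-flow statistics table later in this article); in the sketch below, the OpenCV Farneback estimator, the frame-sampling stride, and the final split rule are stand-ins chosen purely for illustration, not the paper's exact procedure.

```python
# Illustrative sketch: per-class optical-flow magnitude statistics (Q1, mean, STD, IQR)
# used to separate static and dynamic activity classes. The paper builds on the
# optical-flow formulation of [9]; OpenCV's Farneback estimator is used here only
# as a convenient stand-in.
import cv2
import numpy as np

def video_motion_magnitude(path, step=2):
    """Return the mean optical-flow magnitude over sampled frame pairs of one video."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        return None
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    mags = []
    while True:
        for _ in range(step):                      # skip frames to reduce cost
            ok, frame = cap.read()
            if not ok:
                break
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())   # per-pixel magnitude, frame mean
        prev = gray
    cap.release()
    return float(np.mean(mags)) if mags else None

def class_statistics(video_mags):
    """Q1, mean, STD, and IQR of the per-video motion magnitudes of one class."""
    v = np.asarray(video_mags, dtype=np.float64)
    q1, q3 = np.percentile(v, [25, 75])
    return {"Q1": q1, "Mean": v.mean(), "STD": v.std(), "IQR": q3 - q1}

# Classes whose statistics fall below a chosen cut-off would form the static subset and
# the remainder the dynamic subset; the actual cut-off used in the paper is not shown here.
```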
2. Related Works
2.1. Quantitative Analysis of Motion in Action Recognition
2.2. Action Recognition Architectures
2.3. Knowledge Distillation
3. Methods
3.1. Optical Flow-Based Quantification of Motion Characteristics
3.2. Proposed Dual Teacher Knowledge Distillation Framework
3.2.1. Backbone Architecture Based on Dual Pathways
3.2.2. DTKD Framework Structure
3.2.3. DTKD Training Procedure
Temperature-Adjusted Softmax for Distillation
Responsibility Partition
Hybrid Score
Teacher Soft Target
KD Loss
Student Loss
Final Loss
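Only the names of these sub-steps survive in this excerpt, so the following is a minimal sketch of one way they could fit together in a single training step. The temperature-adjusted softmax and the KL-based KD loss follow standard distillation practice (Hinton et al. [8]); the responsibility assignment, the hybrid score that mixes teacher confidence with teacher loss, and the mixing coefficient `lam` are assumptions rather than the paper's verbatim definitions.

```python
# Hypothetical sketch of one DTKD training step. The hybrid-score combination and the
# selective (per-sample) weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def soft_targets(logits, T):
    """Temperature-adjusted softmax used for distillation."""
    return F.softmax(logits / T, dim=1)

def dtkd_step(student_logits, static_logits, dynamic_logits, labels,
              is_static, T=2.0, lam=0.4):
    # Responsibility partition: each sample is handled by the teacher that
    # specializes in its subset (static vs. dynamic).
    teacher_logits = torch.where(is_static.unsqueeze(1), static_logits, dynamic_logits)

    # Hybrid score: combine teacher confidence with (exponentiated negative) teacher
    # loss to obtain a per-sample weight in [0, 1]; the exact combination is assumed.
    conf = F.softmax(teacher_logits, dim=1).max(dim=1).values
    t_loss = F.cross_entropy(teacher_logits, labels, reduction="none")
    hybrid = 0.5 * conf + 0.5 * torch.exp(-t_loss)

    # Teacher soft target and KD loss: KL divergence between temperature-softened
    # teacher and student distributions, weighted per sample by the hybrid score.
    p_t = soft_targets(teacher_logits, T)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1) * (T * T)
    kd_loss = (hybrid * kd).mean()

    # Student loss: ordinary cross-entropy against the hard labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    # Final loss: convex combination controlled by lam (an assumed coefficient).
    return (1.0 - lam) * ce_loss + lam * kd_loss
```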
4. Experiments
4.1. Experimental Setup
4.2. Baseline Model
4.3. Class-Specific Teacher Models
4.4. Evaluation of the Dual Teacher Knowledge Distillation Framework
4.4.1. Hyperparameter Sensitivity
4.4.2. Effect of Teacher Tuning
4.4.3. Contribution of Selective Transfer in DTKD
4.4.4. Subset-Based Aggregate Performance Analysis
4.4.5. Teacher–Student Distribution Similarity Analysis
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| St-T | Static Teacher |
| Dy-T | Dynamic Teacher |
| St-subset | Static subset |
| Dy-subset | Dynamic subset |
| KL-Div | Kullback–Leibler divergence |
| KD | Knowledge Distillation |
| DTKD | Dual Teacher Knowledge Distillation |
| BSKD | Baseline Selective Knowledge Distillation |
| LT | Locked Teacher |
| FT | Frozen Teacher at student stage |
| TU | Teacher UCF101 |
| TH | Teacher HMDB51 |
| SU | Student UCF101 |
| SH | Student HMDB51 |
| SM | Selectivity Margin |
| SIR | Selective Imitation Ratio |
References
- Carreira, J.; Zisserman, A. Quo vadis, action recognition? A new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6299–6308. [Google Scholar] [CrossRef]
- Feichtenhofer, C. X3D: Expanding architectures for efficient video recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; IEEE: New York, NY, USA, 2020; pp. 200–210. [Google Scholar] [CrossRef]
- Choi, J.; Gao, C.; Messou, J.C.; Huang, J.B. Why can’t I dance in a mall? Learning to mitigate scene bias in action recognition. In Proceedings of the 33rd International Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; Curran Associates Inc.: Red Hook, NY, USA, 2019; pp. 853–865. [Google Scholar] [CrossRef]
- Li, Y.; Li, Y.; Vasconcelos, N. RESOUND: Towards action recognition without representation bias. In Proceedings of the Computer Vision—ECCV 2018. ECCV 2018, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 11210, pp. 520–535. [Google Scholar] [CrossRef]
- Rezazadegan, F.; Shirazi, S.; Upcroft, B.; Milford, M. Action Recognition: From Static Datasets to Moving Robots. arXiv 2017. [Google Scholar] [CrossRef]
- Kaseris, M.; Kostavelis, I.; Malassiotis, S. A comprehensive survey on deep learning methods in human activity recognition. Mach. Learn. Knowl. Extr. 2024, 6, 842–876. [Google Scholar] [CrossRef]
- Feichtenhofer, C.; Fan, H.; Malik, J.; He, K. SlowFast networks for video recognition. arXiv 2018, arXiv:1812.03982. [Google Scholar] [CrossRef]
- Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. In Proceedings of the NIPS 2014 Deep Learning and Representation Learning Workshop, Montreal, QC, Canada, 12 December 2014; Available online: https://arxiv.org/abs/1503.02531 (accessed on 9 September 2025).
- Horn, B.K.P.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Two-stream convolutional networks for action recognition in videos. In Proceedings of the 28th International Conference on Neural Information Processing Systems (NIPS’14), Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; Volume 1, pp. 568–576. [Google Scholar] [CrossRef]
- Soomro, K.; Zamir, A.R.; Shah, M. UCF101: A Dataset of 101 Human Action Classes from Videos in the Wild. Technical Report CRCV-TR-12-01; University of Central Florida: Orlando, FL, USA, 2012. Available online: https://www.crcv.ucf.edu/data/UCF101.php (accessed on 2 September 2025).
- Kuehne, H.; Jhuang, H.; Stiefelhagen, R.; Serre, T. HMDB51: A large video database for human motion recognition. In High Performance Computing in Science and Engineering ’12; Nagel, W.E., Kröner, D.H., Resch, M.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 571–582. [Google Scholar] [CrossRef]
- Yosry, S.; Elrefaei, L.; ElKamaar, R.; Ziedan, R.R. Various frameworks for integrating image and video streams for spatiotemporal information learning employing 2D–3D residual networks for human action recognition. Discov. Appl. Sci. 2024, 6, 141. [Google Scholar] [CrossRef]
- Sevilla-Lara, L.; Liao, Y.; Güney, F.; Jampani, V.; Geiger, A.; Black, M.J. On the Integration of Optical Flow and Action Recognition. arXiv 2017. [Google Scholar] [CrossRef]
- Zhu, Y.; Lan, Z.; Newsam, S.; Hauptmann, A.G. Hidden two-stream convolutional networks for action recognition. In Computer Vision—ACCV 2018; Lecture Notes in Computer Science; Jawahar, C.V., Li, H., Mori, G., Schindler, K., Eds.; Springer: Cham, Switzerland, 2019; Volume 11363, pp. 363–378. [Google Scholar] [CrossRef]
- Sayed, N.; Brattoli, B.; Ommer, B. Cross and learn: Cross-modal self-supervision. In Pattern Recognition. GCPR 2018; Lecture Notes in Computer Science; Brox, T., Bruhn, A., Fritz, M., Eds.; Springer: Cham, Switzerland, 2019; Volume 11269, pp. 228–243. [Google Scholar] [CrossRef]
- Wang, H.; Schmid, C. Action recognition with improved trajectories. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2013), Sydney, NSW, Australia, 1–8 December 2013; IEEE: New York, NY, USA, 2013; pp. 3551–3558. [Google Scholar] [CrossRef]
- Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; Paluri, M. Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV 2015), Santiago, Chile, 7–13 December 2015; IEEE: New York, NY, USA, 2015; pp. 4489–4497. [Google Scholar] [CrossRef]
- Liu, Z.; Ning, J.; Cao, Y.; Wei, Y.; Zhang, Z.; Lin, S.; Hu, H. Video Swin transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), New Orleans, LA, USA, 18–24 June 2022; pp. 3202–3211. [Google Scholar] [CrossRef]
- Bertasius, G.; Wang, H.; Torresani, L. Is space-time attention all you need for video understanding? In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), Online, 18–24 July 2021; PMLR: Cambridge, MA, USA, 2021; pp. 813–824. [Google Scholar] [CrossRef]
- Arnab, A.; Dehghani, M.; Heigold, G.; Sun, C.; Lučić, M.; Schmid, C. ViViT: A video vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV 2021), Montreal, QC, Canada, 10–17 October 2021; pp. 6836–6846. [Google Scholar] [CrossRef]
- Xu, Y.; Lu, Y. An Action Recognition Method based on 3D Feature Fusion. In Intelligent Human Systems Integration (IHSI 2025): Integrating People and Intelligent Systems. AHFE (2025) International Conference; Ahram, T., Karwowski, W., Martino, C., Di Bucchianico, G., Maselli, V., Eds.; AHFE International-AHFE Open Access: Honolulu, HI, USA, 2025; Volume 160. [Google Scholar] [CrossRef]
- Ye, Q.; Tan, Z.; Zhang, Y. Human action recognition method based on motion excitation and temporal aggregation module. Heliyon 2022, 8, e11401. [Google Scholar] [CrossRef] [PubMed]
- Fan, L.; Wang, Y.; Zhang, Y. Object action recognition algorithm based on asymmetric fast and slow channel feature extraction. In Proceedings of the 2024 2nd International Conference on Signal Processing and Intelligent Computing (SPIC), Guangzhou, China, 20–22 September 2024; IEEE: New York, NY, USA, 2024; pp. 549–553. [Google Scholar] [CrossRef]
- Tran, D.; Wang, H.; Torresani, L.; Ray, J.; LeCun, Y.; Paluri, M. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, UT, USA, 18–23 June 2018; pp. 6450–6459. [Google Scholar] [CrossRef]
- Kalfaoglu, M.E.; Kalkan, S.; Alatan, A.A. Late temporal modeling in 3D CNN architectures with BERT for action recognition. In Proceedings of the Computer Vision—ECCV 2020 Workshops. ECCV 2020, Glasgow, UK, 23–28 August 2020; Lecture Notes in Computer Science. Bartoli, A., Fusiello, A., Eds.; Springer: Cham, Switzerland, 2020; Volume 12539, pp. 731–747. [Google Scholar] [CrossRef]
- Meng, L.; Zhao, B.; Chang, B.; Huang, G.; Sun, W.; Tung, F.; Sigal, L. Spatio-temporal attention for action recognition in videos. arXiv 2018. [Google Scholar] [CrossRef]
- Wang, J.; Wen, X. A spatio-temporal attention convolution block for action recognition. J. Phys.: Conf. Ser. 2020, 1651, 012193. [Google Scholar] [CrossRef]
- Han, X.; Lu, Y.; Guo, Q.; Liu, J.; Fei, C. Human action recognition research based on channel-temporal self-attention block network. In Proceedings of the 2024 6th International Conference on Robotics and Computer Vision (ICRCV), Wuxi, China, 20–22 September 2024; pp. 79–87. [Google Scholar] [CrossRef]
- Huang, L.; Liu, Y.; Wang, B.; Pan, P.; Xu, Y.; Jin, R. Self-supervised video representation learning by context and motion decoupling. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13881–13890. [Google Scholar] [CrossRef]
- Wu, H.; Liu, J.; Zhu, X.; Wang, M.; Zha, Z.-J. Multi-scale spatial-temporal integration convolutional tube for human action recognition. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI’20), Yokohama, Japan, 7–15 January 2021; pp. 753–759. [Google Scholar] [CrossRef]
- Zhang, Y. MEST: An action recognition network with motion encoder and spatio-temporal module. Sensors 2022, 22, 6595. [Google Scholar] [CrossRef] [PubMed]
- Chen, B.; Meng, F.; Tang, H.; Tong, G. Two-level attention module based on Spurious-3D residual networks for human action recognition. Sensors 2023, 23, 1707. [Google Scholar] [CrossRef] [PubMed]
- Gou, J.; Yu, B.; Maybank, S.J.; Tao, D. Knowledge distillation: A survey. Int. J. Comput. Vis. 2021, 129, 1789–1819. [Google Scholar] [CrossRef]
- Shen, C.; Wang, X.; Song, J.; Sun, L.; Song, M. Amalgamating knowledge towards comprehensive classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 3068–3075. [Google Scholar] [CrossRef]
- Wu, M.-C.; Chiu, C.-T.; Wu, K.-H. Multi-teacher knowledge distillation for compressed video action recognition on deep neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), Brighton, UK, 12–17 May 2019; IEEE: New York, NY, USA, 2019; pp. 2202–2206. [Google Scholar] [CrossRef]
- Jiang, Y.; Feng, C.; Zhang, F.; Bull, D. MTKD: Multi-teacher knowledge distillation for image super-resolution. In Proceedings of the Computer Vision—ECCV 2024, Milan, Italy, 29 September–4 October 2024; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2024; Volume 14233, pp. 364–382. [Google Scholar] [CrossRef]
- Cheng, X.; Zhou, J. LGFA-MTKD: Enhancing multi-teacher knowledge distillation with local and global frequency attention. Information 2024, 15, 735. [Google Scholar] [CrossRef]
- Chang, C.-J.; Chen, O.; Tseng, V. DL-KDD: Dual-Light Knowledge Distillation for Action Recognition in the Dark. arXiv 2024. [Google Scholar] [CrossRef]
- Guo, Y.; Zan, H.; Xu, H. Dual-teacher Knowledge Distillation for Low-frequency Word Translation. In Findings of the Association for Computational Linguistics: EMNLP 2024; Association for Computational Linguistics: Miami, FL, USA, 2024; pp. 5543–5552. [Google Scholar]
- Wei, Y.; Bai, Y. Dynamic Temperature Knowledge Distillation. arXiv 2024. [Google Scholar] [CrossRef]
- Fan, H.; Murrell, T.; Wang, H.; Alwala, K.V.; Li, Y.; Li, Y.; Xiong, B.; Ravi, N.; Li, M.; Yang, H.; et al. PyTorchVideo: A deep learning library for video understanding. In Proceedings of the 29th ACM International Conference on Multimedia (MM ’21), Chengdu, China, 20–24 October 2021; ACM: New York, NY, USA, 2021; pp. 3800–3803. [Google Scholar] [CrossRef]
- Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Statist. 1951, 22, 79–86. [Google Scholar] [CrossRef]
- PyTorch Documentation. KLDivLoss. Available online: https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html (accessed on 2 September 2025).
- Kay, W.; Carreira, J.; Simonyan, K.; Zhang, B.; Hillier, C.; Vijayanarasimhan, S.; Viola, F.; Green, T.; Back, T.; Natsev, P.; et al. The kinetics human action video dataset. arXiv 2017, arXiv:1705.06950. [Google Scholar] [CrossRef]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. arXiv 2019, arXiv:1912.01703. [Google Scholar]
- SlowFast Baseline Code, GitHub Repository. Available online: https://github.com/leftthomas/SlowFast (accessed on 12 June 2025).








| Subset | UCF101 Q1 | UCF101 Mean | UCF101 STD | UCF101 IQR | HMDB51 Q1 | HMDB51 Mean | HMDB51 STD | HMDB51 IQR |
|---|---|---|---|---|---|---|---|---|
| Overall | 0.508 | 1.157 | 0.835 | 1.088 | 0.688 | 1.160 | 0.598 | 0.837 |
| Dy-subset | 0.531 | 1.433 | 0.799 | 1.129 | 0.934 | 1.369 | 0.551 | 0.743 |
| St-subset | 0.280 | 0.361 | 0.105 | 0.153 | 0.495 | 0.548 | 0.110 | 0.157 |
| List No. | Subset | UCF101 Classes | UCF101 Train | UCF101 Test | UCF101 Total | HMDB51 Classes | HMDB51 Train | HMDB51 Test | HMDB51 Total |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Overall | 101 | 9537 | 3783 | 13,320 | 51 | 3570 | 1530 | 5100 |
| 1 | Dy-subset | 75 | 6949 | 2772 | 9721 | 38 | 2660 | 1140 | 3800 |
| 1 | St-subset | 26 | 2588 | 1011 | 3599 | 13 | 910 | 390 | 1300 |
| 2 | Overall | 101 | 9586 | 3734 | 13,320 | 51 | 3570 | 1530 | 5100 |
| 2 | Dy-subset | 75 | 6988 | 2733 | 9721 | 38 | 2660 | 1140 | 3800 |
| 2 | St-subset | 26 | 2598 | 1001 | 3599 | 13 | 910 | 390 | 1300 |
| 3 | Overall | 101 | 9624 | 3696 | 13,320 | 51 | 3570 | 1530 | 5100 |
| 3 | Dy-subset | 75 | 7033 | 2688 | 9721 | 38 | 2660 | 1140 | 3800 |
| 3 | St-subset | 26 | 2591 | 1008 | 3599 | 13 | 910 | 390 | 1300 |
| UCF101 Model | Top1 (%) | Top5 (%) | HMDB51 Model | Top1 (%) | Top5 (%) | SlowFast Model | Role |
|---|---|---|---|---|---|---|---|
| U1 | 95.14 | 99.68 | H1 | 77.10 | 95.28 | R101_16 × 8 | Baseline (List-1) |
| U2 | 95.63 | 99.63 | H2 | 76.07 | 94.89 | R101_16 × 8 | Cross-validation (List-2) |
| U3 | 96.50 | 99.84 | H3 | 78.24 | 95.34 | R101_16 × 8 | Cross-validation (List-3) |
| U4 | 93.23 | 99.76 | H4 | 70.93 | 92.32 | R101_16 × 8 | Frozen Baseline |
| U5 | 94.90 | 99.50 | H5 | 77.03 | 95.41 | R101_8 × 8 | SlowFast 8 × 8 |
| U6 | 94.68 | 99.50 | H6 | 76.38 | 95.28 | R50_8 × 8 | SlowFast 8 × 8 |
| UCF101 Model | Top1 (%) | Top5 (%) | HMDB51 Model | Top1 (%) | Top5 (%) | SlowFast Model | Role |
|---|---|---|---|---|---|---|---|
| TU1 | 96.46 | 99.89 | TH1 | 77.71 | 95.86 | R101_16 × 8 | Dynamic Teacher |
| TU2 | 99.21 | 100 | TH2 | 90.49 | 99.49 | R101_16 × 8 | Static Teacher |
| TU3 | 94.12 | 99.93 | TH3 | 70.04 | 92.25 | R101_16 × 8 | Frozen Dynamic Teacher |
| TU4 | 99.01 | 99.90 | TH4 | 86.12 | 99.49 | R101_16 × 8 | Frozen Static Teacher |
| LT | FT | UCF101 Model | Hyperparam. 1 | Hyperparam. 2 | Hyperparam. 3 | Top1 (%) | Top5 (%) | HMDB51 Model | Hyperparam. 1 | Hyperparam. 2 | Hyperparam. 3 | Top1 (%) | Top5 (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T | T | SU1 | 0.3 | 2 | 0.2 | 95.82 | 99.87 | SH1 | 0.3 | 2 | 0.2 | 78.87 | 95.41 |
| T | T | SU2 | 0.5 | 2 | 0.2 | 95.69 | 99.71 | SH2 | 0.5 | 2 | 0.2 | 78.41 | 95.80 |
| T | T | SU3 | 0.7 | 2 | 0.2 | 95.72 | 99.68 | SH3 | 0.7 | 2 | 0.2 | 79.40 | 95.60 |
| T | T | SU4 | 0.3 | 2 | 0.4 | 96.06 | 99.74 | SH4 | 0.7 | 2 | 0.4 | 79.20 | 95.34 |
| T | T | SU5 | 0.3 | 2 | 0.8 | 93.42 | 99.79 | SH5 | 0.7 | 2 | 0.8 | 78.08 | 75.14 |
| T | T | SU6 | 0.3 | 4 | 0.2 | 95.77 | 99.76 | SH6 | 0.7 | 4 | 0.2 | 79.33 | 95.47 |
| T | T | SU7 | 0.3 | 4 | 0.4 | 95.96 | 99.68 | SH7 | 0.7 | 4 | 0.4 | 79.00 | 95.47 |
| T | T | SU8 | 0.3 | 4 | 0.6 | 95.35 | 99.68 | SH8 | 0.7 | 4 | 0.6 | 77.82 | 95.08 |
| T | T | SU9 | 0.3 | 8 | 0.2 | 95.59 | 99.81 | SH9 | 0.7 | 8 | 0.2 | 78.08 | 95.14 |
| T | F | SU10 | 0.3 | 2 | 0.4 | 95.85 | 99.68 | SH10 | 0.7 | 2 | 0.2 | 79.27 | 95.28 |
| F | T | SU11 | 0.3 | 2 | 0.4 | 95.85 | 99.76 | SH11 | 0.7 | 2 | 0.2 | 78.35 | 95.47 |
| F | F | SU12 | 0.3 | 2 | 0.4 | 96.14 | 99.74 | SH12 | 0.7 | 2 | 0.2 | 79.00 | 95.28 |
| T | T | SU13 | - | 2 | 0.4 | 91.70 | 99.15 | SH13 | - | 2 | 0.2 | 72.79 | 93.04 |
| Subset | UCF101 KL(S‖St-T) | UCF101 KL(S‖Dy-T) | UCF101 SM | UCF101 SIR | HMDB51 KL(S‖St-T) | HMDB51 KL(S‖Dy-T) | HMDB51 SM | HMDB51 SIR |
|---|---|---|---|---|---|---|---|---|
| Overall | 19.22 | 4.52 | NA | NA | 11.69 | 6.09 | NA | NA |
| St-subset | 0.097 | 16.55 | 16.45 | 170.62 | 1.91 | 18.40 | 16.49 | 9.63 |
| Dy-subset | 26.20 | 0.13 | 26.07 | 201.54 | 15.04 | 1.88 | 13.16 | 7.00 |
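The SM and SIR columns above are expanded in the abbreviations table as Selectivity Margin and Selective Imitation Ratio but are not formally defined in this excerpt. The sketch below computes the per-subset KL divergences in the usual way and adopts one plausible reading of SM and SIR (the gap and the ratio between the non-responsible and responsible teacher divergences); both definitions are assumptions and may not match the paper's exact formulas.

```python
# Hypothetical sketch of the teacher–student distribution-similarity quantities reported
# in the table above. KL(S‖T) is averaged over a subset's test clips; SM and SIR follow
# an assumed gap/ratio reading and are illustrative only.
import torch
import torch.nn.functional as F

def mean_kl(student_logits, teacher_logits):
    """Average KL(student ‖ teacher) over a batch of clip-level logits."""
    log_p_s = F.log_softmax(student_logits, dim=1)
    log_p_t = F.log_softmax(teacher_logits, dim=1)
    p_s = log_p_s.exp()
    return (p_s * (log_p_s - log_p_t)).sum(dim=1).mean().item()

def selectivity_metrics(kl_responsible, kl_other):
    """Assumed definitions: SM = gap between divergences, SIR = their ratio."""
    sm = kl_other - kl_responsible
    sir = kl_other / max(kl_responsible, 1e-8)
    return sm, sir
```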

