Deep Lifelong Learning Optimization Algorithm in Dense Region Fusion
Abstract
1. Introduction
- A deep lifelong learning optimization method, DLLO-DRF, is proposed, together with its objective function. The method analyzes how the distribution of model parameters differs across the stages of lifelong learning and then dynamically integrates these stage-wise parameters into weight parameters that capture the overall data patterns (a minimal sketch of this idea follows this list).
- Dense region fusion is introduced into the objective function of DLLO-DRF, and the corresponding algorithm is designed.
- Comparative experiments are conducted on self-labeled transmission line defect datasets with various deep object detection algorithms. The results show that DLLO-DRF effectively improves the performance of deep neural network models under lifelong learning.
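As context for the first contribution, the following is a minimal sketch of stage-wise parameter integration, assuming all stage models share one architecture so their flattened parameter vectors align; `fuse_stage_params` and the uniform-weight default are illustrative choices, not the paper's exact scheme (dense region fusion in Section 3.2.2 replaces this plain weighted average).

```python
import numpy as np

def fuse_stage_params(stage_params, weights=None):
    """Integrate per-stage parameter vectors into a single vector.

    stage_params: list of 1-D arrays, one per lifelong-learning stage,
                  all the same length (same architecture).
    weights:      optional per-stage weights; defaults to a uniform average.
    """
    stacked = np.stack([np.asarray(p, dtype=float) for p in stage_params])
    if weights is None:
        weights = np.full(len(stage_params), 1.0 / len(stage_params))
    return np.asarray(weights) @ stacked  # weighted sum over stages

# Three toy stage models with four parameters each.
stages = [np.array([0.10, 0.20, 0.30, 0.40]),
          np.array([0.12, 0.25, 0.28, 0.50]),
          np.array([0.90, 0.22, 0.31, 0.45])]
print(fuse_stage_params(stages))  # element-wise average of the three stages
```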
2. Related Work
2.1. Deep Lifelong Learning
2.2. Deep Model Integration
3. Materials and Methods
3.1. Overall Structure of DLLO-DRF
3.2. Algorithm Principle in DLLO-DRF
3.2.1. Lifelong Learning Paradigm Based on Model Fine-Tuning
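Section 3.2.1 concerns fine-tuning-based lifelong learning: one model is trained stage by stage, and the parameters reached after each stage are retained for later fusion. A minimal sketch of that loop, where `finetune` stands in for an ordinary training procedure and is not the paper's code:

```python
import copy

def lifelong_finetune(model, stage_datasets, finetune):
    """Fine-tune one model sequentially over the stage datasets and keep
    a snapshot after every stage; the snapshots are the stage models
    that DLLO-DRF later fuses."""
    snapshots = []
    for dataset in stage_datasets:
        model = finetune(model, dataset)        # task-specific training loop
        snapshots.append(copy.deepcopy(model))  # stage model M_t
    return snapshots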
3.2.2. Dense Region Fusion
Algorithm 1: Dimension Mapping
Input: the set of trained stage models.
Output: the model set with each model's weight parameters flattened to one dimension.
1. Obtain the parameter dimensions of each model in the set;
2. For each model, traverse all parameters according to their dimensions and merge them into a one-dimensional array;
3. For each model, replace all parameters with the array obtained in the previous step.
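A minimal PyTorch rendering of the dimension mapping, under the assumption that parameters are traversed in the model's own registration order; `flatten_model_params` and `restore_model_params` are illustrative names, not the authors' implementation:

```python
import torch

def flatten_model_params(model):
    """Algorithm 1, step 2: traverse all parameters in registration
    order and merge them into one 1-D tensor."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def restore_model_params(model, flat):
    """Inverse mapping: write a flat vector back into the model's
    original parameter shapes (needed once the fused vector exists)."""
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(flat[offset:offset + n].view_as(p))
        offset += n
```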
Algorithm 2: Region Division
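Algorithm 2 partitions the value range of each position-wise parameter set. The sketch below shows one plausible reading, assuming j equal-width regions over [min(P_i), max(P_i)] as the notation table at the end of this article suggests; `divide_regions` is an illustrative name rather than the authors' code:

```python
import numpy as np

def divide_regions(p_i, j):
    """Divide the value range [min(P_i), max(P_i)] of the position-wise
    parameter set P_i into j equal-width regions and group the values.
    Returns a list of j arrays, some possibly empty."""
    p_i = np.asarray(p_i, dtype=float)
    lo, hi = p_i.min(), p_i.max()
    if lo == hi:                        # all stages agree at this position
        return [p_i] + [np.empty(0)] * (j - 1)
    edges = np.linspace(lo, hi, j + 1)
    # np.digitize maps each value to a bin index in 1..j+1; shift and clip
    # so that the maximum value falls in the last region.
    idx = np.clip(np.digitize(p_i, edges) - 1, 0, j - 1)
    return [p_i[idx == k] for k in range(j)]
```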
Algorithm 3: Dense Region Fusion
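Correspondingly, a minimal sketch of the fusion step, under the assumption that the dense region is the one holding the most stage parameters and that fusion averages the values inside it; it reuses `divide_regions` from the sketch above:

```python
import numpy as np

def dense_region_fusion(flat_stage_params, j):
    """For every parameter position i: gather the values across stage
    models (the set P_i), divide them into j regions, select the dense
    region R_d, and average its members to get the fused parameter."""
    stacked = np.stack(flat_stage_params)            # (n_stages, n_params)
    fused = np.empty(stacked.shape[1])
    for i in range(stacked.shape[1]):
        regions = divide_regions(stacked[:, i], j)   # Algorithm 2
        dense = max(regions, key=len)                # dense region R_d
        fused[i] = dense.mean()                      # fusion by averaging
    return fused
```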
3.2.3. Process of DLLO-DRF
3.2.4. Complexity Analysis
3.3. Experimental Material
3.3.1. Dataset
3.3.2. Evaluation Metrics
3.3.3. Experimental Setting
3.3.4. Friedman Test
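As background for this subsection and for Section 4.4, the Friedman test ranks methods across matched datasets and checks whether the rank differences are significant. A sketch with SciPy, using as illustration the first metric column of the four per-dataset comparison tables in Section 4.3 for three detectors with DLLO-DRF; the p-value is only indicative at this sample size, and this is not the paper's full test:

```python
from scipy.stats import friedmanchisquare

# Scores of three detectors (with DLLO-DRF) on the four datasets,
# read from the first column of the Section 4.3 tables.
atss = [26.5, 13.4, 28.2, 13.1]
ddod = [30.1, 15.6, 33.2, 19.6]
gfl  = [19.3, 21.9, 29.8, 16.3]

stat, p = friedmanchisquare(atss, ddod, gfl)
print(f"Friedman chi-square = {stat:.3f}, p-value = {p:.4f}")
```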
4. Results and Analysis
4.1. Experiment Effect of Lifelong Learning Training
4.2. Longitudinal Comparative Experiments on Basic Algorithms
4.2.1. Comparison of Detection Algorithms with and without DLLO-DRF on the Bird’s Nest Foreign Object Dataset
4.2.2. Comparison of Detection Algorithms with and without DLLO-DRF on the Cement Pole Damage Dataset
4.2.3. Comparison of Detection Algorithms with and without DLLO-DRF on the Shockproof Hammer Slip Dataset
4.2.4. Comparison of Detection Algorithms with and without DLLO-DRF on the Insulator Self-Explosion Dataset
4.3. Horizontal Comparison Experiment with Other Algorithms
4.3.1. Comparison between DLLO-DRF and Other Algorithms on the Bird’s Nest Foreign Object Dataset
4.3.2. Comparison between DLLO-DRF and Other Algorithms on the Cement Pole Damage Dataset
4.3.3. Comparison between DLLO-DRF and Other Algorithms on the Shockproof Hammer Slip Dataset
4.3.4. Comparison between DLLO-DRF and Other Algorithms on the Insulator Self-Explosion Dataset
4.4. Friedman Test Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
DLLO-DRF | Deep Lifelong Learning Optimization algorithm based on Dense Region Fusion
EWC | Elastic Weight Consolidation
LwF | Learning without Forgetting
LFL | Less-Forgetting Learning
DMC | Deep Model Consolidation
R-CNN | Region-based Convolutional Neural Network
LSTM | Long Short-Term Memory
ATSS | Adaptive Training Sample Selection
DDOD | Disentangled Dense Object Detector
GFL | Generalized Focal Loss
FPN | Feature Pyramid Network
Notation | Description
---|---
$D$ | Initial training dataset
$D_t$ | Training dataset used in the $t$-th stage of lifelong learning
$M_t$ | Model in the $t$-th stage of lifelong learning
$\theta_i^t$ | Parameter at position $i$ of $M_t$
$P_i$ | Parameter set encompassing the position-$i$ parameters of all stage models
$j$ | Number of regions into which $P_i$ is divided
$\max(P_i)$ | Maximum value in $P_i$
$\min(P_i)$ | Minimum value in $P_i$
$R_d$ | The dense region
$\theta_i^{*}$ | Parameter at position $i$ in the fusion model
$\lvert X \rvert$ | Number of elements in the set $X$
$\lvert R_k \rvert$ | Size (number of parameters) of region $R_k$
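Read together, the notation above admits the following compact formalization of dense region fusion; the equal-width partition and the averaging rule are our assumptions, consistent with but not stated by the table:

```latex
\Delta = \frac{\max(P_i) - \min(P_i)}{j}, \qquad
R_k = \bigl[\min(P_i) + k\Delta,\; \min(P_i) + (k+1)\Delta\bigr), \quad k = 0,\dots,j-1,
```
```latex
d = \arg\max_{k}\, \bigl\lvert \{\theta \in P_i : \theta \in R_k\} \bigr\rvert, \qquad
\theta_i^{*} = \frac{1}{\lvert R_d \rvert} \sum_{\theta \in P_i \cap R_d} \theta .
```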
Dataset | Total | Training Set | Validation Set |
---|---|---|---|
Bird’s nest foreign body | 385 | 300 | 85 |
Damaged cement pole | 384 | 300 | 84 |
Shockproof hammer slipping | 385 | 300 | 85 |
Insulator self-explosion | 376 | 300 | 76 |
Algorithm | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Stage 6 | Stage 7 | Stage 8 | Stage 9 | Stage 10
---|---|---|---|---|---|---|---|---|---|---
ATSS | 15.3 | 21.3 | 24.3 | 24 | 25.6 | 27.5 | 26 | 27.2 | 27.1 | 27.7 |
DDOD | 22.7 | 25.4 | 31 | 30.9 | 31.2 | 30.3 | 31.8 | 31.3 | 31.9 | 33 |
GFL | 17 | 22.5 | 27.1 | 26.4 | 26.7 | 28.3 | 28.8 | 28.8 | 30.7 | 29.4 |
Cascade R-CNN | 25.8 | 37.1 | 39.5 | 39.6 | 39.7 | 38.4 | 38.8 | 40 | 39.9 | 39.9 |
Faster R-CNN | 23.8 | 35.3 | 37.6 | 38.9 | 38.4 | 36.9 | 38.3 | 37.8 | 40.9 | 41.4 |
Dynamic R-CNN | 27.5 | 32.5 | 35.4 | 38 | 38.8 | 36 | 38.1 | 37.8 | 37.3 | 37.7 |
Grid R-CNN | 18.2 | 33.8 | 36.6 | 37.2 | 40.3 | 37.6 | 38.6 | 40.5 | 40.6 | 40.2 |
Libra R-CNN | 23.1 | 30.7 | 32.6 | 35.5 | 34.9 | 34.3 | 35.3 | 36 | 36.2 | 35.7 |
Algorithm | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Stage 6 | Stage 7 | Stage 8 | Stage 9 | Stage 10
---|---|---|---|---|---|---|---|---|---|---
ATSS | 48.7 | 58.1 | 61.3 | 61.2 | 60.6 | 64.3 | 61.7 | 63.2 | 63.4 | 64 |
DDOD | 52.4 | 57.7 | 66.7 | 65.9 | 65.9 | 63.3 | 65.5 | 65.4 | 64.8 | 66.6 |
GFL | 51.2 | 54.6 | 61.3 | 62.6 | 62.5 | 62 | 64.2 | 63 | 65.5 | 63.8 |
Cascade R-CNN | 57.5 | 63.9 | 65.6 | 63.4 | 61 | 58.4 | 60 | 60.7 | 61.2 | 61.9 |
Faster R-CNN | 52.1 | 62.1 | 63.8 | 65.7 | 63.9 | 61.9 | 63.4 | 61.9 | 65.8 | 66.9 |
Dynamic R-CNN | 51.9 | 60.3 | 61.2 | 62 | 62.4 | 59.9 | 60.7 | 60.4 | 59.9 | 60.2 |
Grid R-CNN | 42.6 | 65.2 | 66.1 | 64.9 | 68.9 | 62.3 | 62.6 | 66.8 | 64 | 63.7 |
Libra R-CNN | 49.2 | 57.3 | 58.2 | 61.4 | 60.8 | 62.4 | 61.1 | 64.8 | 61.1 | 61.2 |
Algorithm | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Stage 6 | Stage 7 | Stage 8 | Stage 9 | Stage 10
---|---|---|---|---|---|---|---|---|---|---
ATSS | 6 | 9.9 | 12.2 | 10.5 | 17.1 | 16.7 | 17.3 | 18.1 | 18 | 19.3 |
DDOD | 14.3 | 23.6 | 27.2 | 28.4 | 29.7 | 29.3 | 31.6 | 27.3 | 29.7 | 31.5 |
GFL | 6.1 | 12.2 | 18.1 | 19 | 17.2 | 19.4 | 18.7 | 20.5 | 24.8 | 20.1 |
Cascade R-CNN | 23.3 | 36.6 | 45.2 | 45.5 | 47.4 | 44.3 | 45.4 | 45.4 | 46.7 | 48.2 |
Faster R-CNN | 20.3 | 35.2 | 38.3 | 42.7 | 44.7 | 43.3 | 43.6 | 44.4 | 44.2 | 46.2 |
Dynamic R-CNN | 28.1 | 33.2 | 33 | 40.2 | 42.7 | 40 | 42.7 | 43.4 | 42.1 | 42.4 |
Grid R-CNN | 11.8 | 32.7 | 38.9 | 43.9 | 43.4 | 39.8 | 45.6 | 43.4 | 45.9 | 44.4 |
Libra R-CNN | 20.7 | 32.6 | 32 | 37 | 37.7 | 34.3 | 37.1 | 37.6 | 38.5 | 38 |
Algorithm | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Stage 6 | Stage 7 | Stage 8 | Stage 9 | Stage 10
---|---|---|---|---|---|---|---|---|---|---
ATSS | 33.3 | 36.7 | 37.9 | 37.2 | 37.4 | 38.5 | 37.4 | 38 | 38.3 | 37.8 |
DDOD | 36.1 | 36.8 | 39.3 | 38.8 | 39 | 37.3 | 39 | 39.3 | 39.5 | 40.6 |
GFL | 34.4 | 36.9 | 40 | 41.1 | 41.3 | 40.5 | 41.4 | 40.1 | 42.1 | 40.3 |
Cascade R-CNN | 35.4 | 44.6 | 46.1 | 45.4 | 45.1 | 43.2 | 43 | 44.6 | 44.5 | 44.6 |
Faster R-CNN | 35.9 | 43 | 44.1 | 46.2 | 44.7 | 44.2 | 44.7 | 45.1 | 46.1 | 47.3 |
Dynamic R-CNN | 38.5 | 40.3 | 43 | 46.1 | 46.4 | 43.6 | 45.1 | 46 | 44.6 | 45.7 |
Grid R-CNN | 30.7 | 41.3 | 43 | 43.2 | 45.7 | 43.8 | 44.6 | 46.8 | 46.8 | 46.9 |
Libra R-CNN | 38.6 | 42.6 | 44.3 | 45.9 | 46.2 | 45.4 | 46.2 | 46.2 | 46.9 | 46.3 |
Algorithm | Using DLLO-DRF | | | | Without DLLO-DRF | | | |
---|---|---|---|---|---|---|---|---
ATSS | 26.5 | 66 | 12.5 | 39.8 | 26.8 | 65.7 | 11.6 | 37.7 |
DDOD | 30.1 | 67.2 | 16.6 | 42.4 | 30.5 | 66.8 | 16.2 | 42.8 |
GFL | 19.3 | 46.9 | 11.6 | 36.6 | 19.5 | 46.4 | 10.6 | 33.2 |
Cascade R-CNN | 30.2 | 70.1 | 14.4 | 37.6 | 29.7 | 70 | 12.9 | 37.7 |
Faster R-CNN | 31.1 | 73 | 14.2 | 38.1 | 30.4 | 72.9 | 14.2 | 38 |
Dynamic R-CNN | 27.7 | 68.8 | 11.5 | 36.5 | 27.9 | 67.1 | 10.8 | 37.2 |
Grid R-CNN | 29.8 | 65.2 | 23.3 | 38.4 | 29.5 | 64.7 | 23.6 | 38 |
Libra R-CNN | 26.6 | 62.9 | 13.2 | 37.2 | 26.8 | 64.2 | 12.3 | 36.9 |
Sparse R-CNN | 25.4 | 65.2 | 11.5 | 40.9 | 25.5 | 65.5 | 11 | 40.8 |
Average | 27.4 | 65 | 14.3 | 38.6 | 27.4 | 64.8 | 13.7 | 38 |
Algorithm | Using DLLO-DRF | | | | Without DLLO-DRF | | | |
---|---|---|---|---|---|---|---|---
ATSS | 13.4 | 37.6 | 9.3 | 28.2 | 13.6 | 37.1 | 9.2 | 29 |
DDOD | 15.6 | 47.2 | 12.2 | 21.2 | 15.4 | 47 | 12.1 | 21.1 |
GFL | 21.9 | 44.8 | 18.7 | 30.7 | 21.8 | 44.9 | 18.1 | 30.3 |
Cascade R-CNN | 16.9 | 35.4 | 15.2 | 21.8 | 16.8 | 34.2 | 15.6 | 21.4 |
Faster R-CNN | 17 | 33.5 | 16.5 | 22.7 | 16.8 | 33.4 | 18.4 | 23 |
Dynamic R-CNN | 13.4 | 26 | 10.5 | 20.1 | 13.2 | 25.6 | 9.4 | 19.9 |
Grid R-CNN | 16 | 37.8 | 9.1 | 22.6 | 15.9 | 36.2 | 10.4 | 22.5 |
Libra R-CNN | 18.8 | 45.6 | 13.7 | 24.1 | 18 | 45.7 | 13.2 | 23.6 |
Sparse R-CNN | 18.3 | 47.7 | 5 | 26 | 18.4 | 47.8 | 4.9 | 25.6 |
Average | 16.8 | 39.5 | 12.2 | 24.2 | 16.7 | 39.1 | 12.4 | 24 |
Algorithm | Using DLLO-DRF | | | | Without DLLO-DRF | | | |
---|---|---|---|---|---|---|---|---
ATSS | 28.2 | 64 | 19.4 | 38 | 27.7 | 64 | 19.3 | 37.8 |
DDOD | 33.2 | 66.8 | 31.7 | 40.8 | 33 | 66.6 | 31.5 | 40.6 |
GFL | 29.8 | 64.2 | 20.1 | 41.1 | 29.4 | 63.8 | 20.1 | 40.3 |
Cascade R-CNN | 39.6 | 62 | 48.3 | 44.4 | 39.9 | 61.9 | 48.2 | 44.6 |
Faster R-CNN | 40.9 | 67.3 | 46.5 | 46.7 | 41.4 | 66.9 | 46.2 | 47.3 |
Dynamic R-CNN | 38 | 60.6 | 42.6 | 46 | 37.7 | 60.2 | 42.4 | 45.7 |
Grid R-CNN | 40.2 | 64.4 | 43.6 | 47 | 40.2 | 63.7 | 44.4 | 46.9 |
Libra R-CNN | 35.9 | 61.1 | 38.2 | 46.5 | 35.7 | 61.2 | 38 | 46.3 |
Sparse R-CNN | 18.3 | 40.3 | 14.6 | 43.5 | 18.5 | 40 | 14.9 | 43 |
Average | 33.8 | 61.2 | 33.9 | 43.8 | 33.7 | 60.9 | 33.9 | 43.6 |
Algorithm | Using DLLO-DRF | | | | Without DLLO-DRF | | | |
---|---|---|---|---|---|---|---|---
ATSS | 13.1 | 41.8 | 3.6 | 34.8 | 13 | 41.6 | 3.6 | 34.5 |
DDOD | 19.6 | 53.9 | 8.3 | 39.5 | 19.4 | 53 | 6.7 | 38.8 |
GFL | 16.3 | 46 | 5.5 | 37.8 | 16.2 | 45.6 | 5.5 | 37.7 |
Cascade R-CNN | 29.5 | 67.3 | 19.9 | 43.7 | 28.9 | 65.9 | 19.6 | 43.7 |
Faster R-CNN | 16.1 | 45.3 | 7.9 | 31.6 | 16 | 44.9 | 7.9 | 31.7 |
Dynamic R-CNN | 23.6 | 61.2 | 13.4 | 37.2 | 23.2 | 60.6 | 13.6 | 37.4 |
Grid R-CNN | 27.6 | 60.9 | 19.7 | 46 | 27.5 | 60.4 | 20.1 | 46.2 |
Libra R-CNN | 21.1 | 52.6 | 14.2 | 37.2 | 20.3 | 51.1 | 13.7 | 37 |
Sparse R-CNN | 20.8 | 49.4 | 15.3 | 37.5 | 20.7 | 49.4 | 15.2 | 37.3 |
Average | 20.9 | 53.2 | 12 | 38.4 | 20.6 | 52.5 | 11.8 | 38.3 |
Detection Algorithm | Fusion Method | | | | |
---|---|---|---|---|---|
ATSS | Uniform soup | 25.7 | 63.5 | 11.6 | 37.5 |
Greedy soup | 26 | 64.4 | 12.3 | 37.5 | |
DLLO-DRF | 26.5 | 66 | 12.5 | 39.8 | |
DDOD | Uniform soup | 28 | 64.6 | 14.6 | 40.7 |
Greedy soup | 29.3 | 65.8 | 16.3 | 41.5 | |
DLLO-DRF | 30.1 | 67.2 | 16.6 | 42.4 | |
GFL | Uniform soup | 17.3 | 39.9 | 10.8 | 35.4 |
Greedy soup | 17.8 | 41.6 | 10.5 | 36 | |
DLLO-DRF | 19.3 | 46.9 | 11.6 | 36.6 | |
Cascade R-CNN | Uniform soup | 28.9 | 67.9 | 11.9 | 36.6 |
Greedy soup | 29.3 | 68.2 | 13.1 | 37 | |
DLLO-DRF | 30.2 | 70.1 | 14.4 | 37.6 | |
Faster R-CNN | Uniform soup | 27.9 | 71.3 | 12.3 | 36 |
Greedy soup | 28.6 | 71.9 | 11.9 | 36.9 | |
DLLO-DRF | 31.1 | 73 | 14.2 | 38.1 | |
Dynamic R-CNN | Uniform soup | 26.6 | 66.5 | 12.1 | 36.8 |
Greedy soup | 26.9 | 66.5 | 10.7 | 36.9 | |
DLLO-DRF | 27.7 | 68.8 | 11.5 | 36.5 | |
Grid R-CNN | Uniform soup | 28.8 | 64.5 | 17.5 | 36.4 |
Greedy soup | 28.5 | 64.5 | 14.2 | 37.6 | |
DLLO-DRF | 29.8 | 65.2 | 23.3 | 38.4 | |
Libra R-CNN | Uniform soup | 25.7 | 62.3 | 11.5 | 35.4 |
Greedy soup | 26.7 | 63.8 | 11.7 | 36.6 | |
DLLO-DRF | 26.6 | 62.9 | 13.2 | 37.2 | |
Sparse R-CNN | Uniform soup | 25.1 | 67 | 9.5 | 39.7 |
Greedy soup | 24.1 | 66.1 | 6.9 | 38.9 | |
DLLO-DRF | 25.4 | 65.2 | 11.5 | 40.9 |
Detection Algorithm | Fusion Method | | | | |
---|---|---|---|---|---|
ATSS | Uniform soup | 13.4 | 37.5 | 9.2 | 28.3 |
Greedy soup | 13.3 | 37.2 | 9.2 | 28.2 | |
DLLO-DRF | 13.4 | 37.6 | 9.3 | 28.2 | |
DDOD | Uniform soup | 15.2 | 45.9 | 12.3 | 20.4 |
Greedy soup | 15.3 | 46.5 | 12.3 | 20.8 | |
DLLO-DRF | 15.6 | 47.2 | 12.2 | 21.2 | |
GFL | Uniform soup | 21.4 | 42.2 | 21.7 | 29.2 |
Greedy soup | 21.4 | 42.5 | 22.1 | 28.9 | |
DLLO-DRF | 21.9 | 44.8 | 18.7 | 30.7 | |
Cascade R-CNN | Uniform soup | 17.5 | 34.9 | 16.5 | 21.8 |
Greedy soup | 17.2 | 34.7 | 16 | 21.7 | |
DLLO-DRF | 16.9 | 35.4 | 15.2 | 21.8 | |
Faster R-CNN | Uniform soup | 15.8 | 30.4 | 15 | 21.6 |
Greedy soup | 16.2 | 31.9 | 14.8 | 22.3 | |
DLLO-DRF | 17 | 33.5 | 16.5 | 22.7 | |
Dynamic R-CNN | Uniform soup | 12.8 | 25.4 | 9.9 | 19.3 |
Greedy soup | 12.4 | 25.1 | 9.4 | 18.4 | |
DLLO-DRF | 13.4 | 26 | 10.5 | 20.1 | |
Grid R-CNN | Uniform soup | 15.5 | 36 | 10.5 | 22.2 |
Greedy soup | 15.7 | 36 | 10.4 | 22.4 | |
DLLO-DRF | 16 | 37.8 | 9.1 | 22.6 | |
Libra R-CNN | Uniform soup | 18.7 | 45.2 | 14 | 23.7 |
Greedy soup | 18.1 | 45.4 | 13.5 | 23.6 | |
DLLO-DRF | 18.2 | 45.3 | 13.3 | 23.9 | |
Sparse R-CNN | Uniform soup | 17.5 | 46.7 | 5.5 | 25.3 |
Greedy soup | 17.2 | 45.8 | 3.7 | 24.8 | |
DLLO-DRF | 18.3 | 47.7 | 5 | 26 |
Detection Algorithm | Fusion Method | | | | |
---|---|---|---|---|---|
ATSS | Uniform soup | 27.9 | 63.8 | 18.3 | 37.6 |
Greedy soup | 27.8 | 63.9 | 17.4 | 37.8 | |
DLLO-DRF | 28.2 | 64 | 19.4 | 38 | |
DDOD | Uniform soup | 31.8 | 64.1 | 29.7 | 39.6 |
Greedy soup | 32.3 | 65.8 | 30.1 | 40.1 | |
DLLO-DRF | 33.2 | 66.8 | 31.7 | 40.8 | |
GFL | Uniform soup | 29 | 63.9 | 19.4 | 40.6 |
Greedy soup | 29.1 | 63.8 | 19.2 | 40.9 | |
DLLO-DRF | 29.8 | 64.2 | 20.1 | 41.1 | |
Cascade R-CNN | Uniform soup | 39.4 | 61.1 | 48.1 | 44.3 |
Greedy soup | 39.9 | 61.8 | 48.6 | 44.8 | |
DLLO-DRF | 39.6 | 62 | 48.3 | 44.4 | |
Faster R-CNN | Uniform soup | 40.5 | 66.5 | 46.1 | 45.8 |
Greedy soup | 40.5 | 66.4 | 46.8 | 45.7 | |
DLLO-DRF | 40.9 | 67.3 | 46.5 | 46.7 | |
Dynamic R-CNN | Uniform soup | 38.5 | 60.2 | 43.6 | 46 |
Greedy soup | 38.8 | 60.4 | 43.7 | 46.1 | |
DLLO-DRF | 38 | 60.6 | 42.6 | 46 | |
Grid R-CNN | Uniform soup | 40.3 | 64.4 | 44.7 | 46.9 |
Greedy soup | 40.7 | 64.4 | 45.6 | 47.4 | |
DLLO-DRF | 40.2 | 64.4 | 43.6 | 47 | |
Libra R-CNN | Uniform soup | 35.3 | 60 | 37.7 | 46.2 |
Greedy soup | 35.3 | 60.4 | 37.7 | 46.2 | |
DLLO-DRF | 35.9 | 61.1 | 38.2 | 46.5 | |
Sparse R-CNN | Uniform soup | 17.4 | 38.7 | 13.8 | 42 |
Greedy soup | 17.6 | 38.9 | 14 | 42.1 | |
DLLO-DRF | 18.3 | 40.3 | 14.6 | 43.5 |
Detection Algorithm | Fusion Method | | | | |
---|---|---|---|---|---|
ATSS | Uniform soup | 12.5 | 41.7 | 3.2 | 34.5 |
Greedy soup | 12.6 | 41.5 | 3.2 | 34.6 | |
DLLO-DRF | 13.1 | 41.8 | 3.6 | 34.8 | |
DDOD | Uniform soup | 19.8 | 55 | 8 | 39.1 |
Greedy soup | 19.8 | 55 | 8.5 | 38.8 | |
DLLO-DRF | 19.6 | 53.9 | 8.3 | 39.5 | |
GFL | Uniform soup | 16.6 | 48.6 | 6.8 | 38.5 |
Greedy soup | 16.7 | 48.4 | 6.8 | 38.2 | |
DLLO-DRF | 16.3 | 46 | 5.5 | 37.8 | |
Cascade R-CNN | Uniform soup | 29.1 | 66.1 | 21.9 | 42.5 |
Greedy soup | 28.8 | 65.3 | 22.5 | 42.5 | |
DLLO-DRF | 29.5 | 67.3 | 19.9 | 43.7 | |
Faster R-CNN | Uniform soup | 14.5 | 41.3 | 8.6 | 30.4 |
Greedy soup | 15 | 42.9 | 9 | 31.3 | |
DLLO-DRF | 16.1 | 45.3 | 7.9 | 31.6 | |
Dynamic R-CNN | Uniform soup | 22.4 | 59.9 | 13.3 | 36.4 |
Greedy soup | 21.6 | 59.9 | 9.6 | 35.3 | |
DLLO-DRF | 23.6 | 61.2 | 13.4 | 37.2 | |
Grid R-CNN | Uniform soup | 26.8 | 59.1 | 20.9 | 46.7 |
Greedy soup | 27 | 60.1 | 19.1 | 46.2 | |
DLLO-DRF | 27.6 | 60.9 | 19.7 | 46 | |
Libra R-CNN | Uniform soup | 20.2 | 50 | 13.4 | 35.5 |
Greedy soup | 20.9 | 49.8 | 15.1 | 35.8 | |
DLLO-DRF | 21.1 | 52.6 | 14.2 | 37.2 | |
Sparse R-CNN | Uniform soup | 20 | 48.9 | 12.2 | 36.7 |
Greedy soup | 20.1 | 48.2 | 13.3 | 36.2 | |
DLLO-DRF | 20.8 | 49.4 | 15.3 | 37.5 |