SLR-Net: Lightweight and Accurate Detection of Weak Small Objects in Satellite Laser Ranging Imagery
Abstract
1. Introduction
2. Method
- Feature Extraction Optimization: To address tiny-object features being easily lost in deep networks, we design a novel convolution module, DMS-Conv, to enhance the network’s feature representation capability.
- Feature Fusion Enhancement: To improve the flow of information between feature maps of different scales, we propose a lightweight and efficient upsampling fusion mechanism, the Lightweight Upsampling Module (LUM).
- Localization Accuracy Improvement: To overcome the deficiencies of traditional loss functions in bounding-box regression for small targets, we introduce a new geometric constraint, the MPD-IoU loss, to guide the model toward more precise localization.
2.1. Dense Multi-Scope Convolution (DMS-Conv)
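The paper names DMS-Conv as a multi-scope module that preserves tiny-object features across network depth. The snippet below is a minimal PyTorch sketch of one plausible realization, assuming parallel 3×3 branches with different dilation rates that are densely concatenated with the block input and fused by a 1×1 convolution; the class name `DMSConvSketch` and the `dilations` parameter are our own choices, not the authors’ exact architecture.

```python
import torch
import torch.nn as nn

class DMSConvSketch(nn.Module):
    """Illustrative 'multi-scope' convolution: parallel dilated 3x3 branches,
    densely concatenated with the input and fused by a 1x1 convolution.
    A sketch of the general idea, not the authors' exact DMS-Conv."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 3)):
        super().__init__()
        branch_ch = out_ch // len(dilations)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.SiLU(inplace=True),
            )
            for d in dilations
        ])
        # Dense link: the raw input is concatenated alongside all branch outputs,
        # so shallow detail from tiny targets survives the block.
        self.fuse = nn.Conv2d(branch_ch * len(dilations) + in_ch, out_ch, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        feats.append(x)
        return self.fuse(torch.cat(feats, dim=1))
```

For example, `DMSConvSketch(64, 128)` maps a (1, 64, 80, 80) tensor to (1, 128, 80, 80). Small dilation rates keep receptive-field growth gentle, which matters when targets occupy only a few pixels.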
2.2. Lightweight Upsampling Module (LUM)
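As with DMS-Conv, the exact LUM layout is not reproduced here; the following is a minimal sketch assuming the common lightweight pattern of 1×1 channel reduction, parameter-free nearest-neighbor upsampling, and a depthwise 3×3 refinement before additive fusion with the lateral (higher-resolution) feature. All names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LUMSketch(nn.Module):
    """Illustrative lightweight upsampling fusion: 1x1 channel reduction,
    parameter-free nearest-neighbor upsampling, depthwise 3x3 refinement,
    then additive fusion with the higher-resolution lateral feature."""

    def __init__(self, deep_ch: int, lateral_ch: int):
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, lateral_ch, 1, bias=False)
        # Depthwise conv (groups == channels) keeps the refinement cheap.
        self.refine = nn.Conv2d(lateral_ch, lateral_ch, 3, padding=1,
                                groups=lateral_ch, bias=False)

    def forward(self, deep: torch.Tensor, lateral: torch.Tensor) -> torch.Tensor:
        x = self.reduce(deep)
        x = F.interpolate(x, size=lateral.shape[-2:], mode="nearest")
        return lateral + self.refine(x)
```

Keeping the upsampling path free of learned transposed convolutions is what makes this style of fusion cheap in both parameters and FLOPs.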
2.3. MPD-IoU Loss
- Enhanced Boundary Alignment Constraint: By simultaneously considering the alignment of both the top-left and bottom-right corners, MPD-IoU provides a finer-grained geometric alignment measure than a single center-point distance.
- Adaptation to Small Object Detection: For the star-like small targets in SLR imagery, aspect-ratio differences contribute little to regression, whereas deviations of the boundary points directly determine whether the target is covered; MPD-IoU therefore fits the task better.
- Optimized Convergence Stability: Corner-distance constraints provide clearer gradient signals, enabling the model to converge faster to high-quality bounding-box predictions during training. A minimal implementation sketch follows this list.
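The MPD-IoU loss follows Ma and Xu (arXiv:2307.07662, cited in the references): the IoU term is penalized by the squared distances between the matching top-left and bottom-right corners of the predicted and ground-truth boxes, normalized by the squared dimensions of the input image. A minimal PyTorch sketch, where the function name and (x1, y1, x2, y2) box layout are our choices:

```python
import torch

def mpdiou_loss(pred, target, img_w: float, img_h: float, eps: float = 1e-7):
    """MPDIoU loss after Ma & Xu (arXiv:2307.07662).
    pred/target: (N, 4) boxes as (x1, y1, x2, y2); img_w/img_h normalize
    the corner distances."""
    # Intersection rectangle and IoU
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared distances between matching top-left and bottom-right corners
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2

    mpdiou = iou - d1 / norm - d2 / norm
    return (1.0 - mpdiou).mean()
```

Because the two corner terms vanish only when both corners coincide with the ground truth, the gradient keeps pushing the box toward full coverage even when the IoU term alone has plateaued.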
3. Experiments
3.1. Dataset
3.2. Dataset Feature Analysis
3.3. Experimental Environment and Evaluation Metrics
- Precision (P): $P = \frac{TP}{TP + FP}$, where TP is True Positives and FP is False Positives.
- Recall (R): $R = \frac{TP}{TP + FN}$, where FN is False Negatives.
- F1-Score: $F1 = \frac{2 \cdot P \cdot R}{P + R}$.
- Mean Average Precision (mAP): For a given IoU threshold $t$, $AP_t = \int_0^1 P(R)\,dR$ and $\text{mAP}_t = \frac{1}{N}\sum_{i=1}^{N} AP_t^{(i)}$ over the $N$ target classes; mAP@0.5 denotes the value at $t = 0.5$.
- mAP@0.5:0.95: $\text{mAP}_t$ averaged over IoU thresholds $t \in \{0.50, 0.55, \ldots, 0.95\}$.
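A short, self-contained check of the count-based metrics above; the function name and example counts are illustrative:

```python
def detection_metrics(tp: int, fp: int, fn: int, eps: float = 1e-12):
    """Precision, recall and F1 from matched detection counts, as defined above."""
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1

# Example: 90 true positives, 10 false positives, 14 missed targets
p, r, f1 = detection_metrics(tp=90, fp=10, fn=14)
print(f"P={p:.3f}  R={r:.3f}  F1={f1:.3f}")  # P=0.900  R=0.865  F1=0.882
```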
3.4. Ablation Experiments
3.5. Comparative Experiments
3.5.1. Results and Analysis
3.5.2. Visualization Analysis
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Degnan, J.J. Satellite Laser Ranging: Current Status and Future Prospects. IEEE Trans. Geosci. Remote Sens. 1985, GE-23, 398–413.
- Schreiber, K.U.; Kodet, J. The Application of Coherent Local Time for Optical Time Transfer and the Quantification of Systematic Errors in Satellite Laser Ranging. Space Sci. Rev. 2017, 214, 22.
- Marshall, J.A.; Klosko, S.M.; Ries, J.C. Dynamics of SLR Tracked Satellites. Rev. Geophys. 1995, 33, 353–360.
- Fumin, Y. Current Status and Future Plans for the Chinese Satellite Laser Ranging Network. Surv. Geophys. 2001, 22, 465–471.
- Wu, Z.; Geng, R.; Tang, K.; Meng, W.; Zhang, H.; Cheng, Z.; Xiao, A.; Gao, S.; Wang, X.; Huang, Y.; et al. Experiments and Progress of Space-to-Ground Laser Time-Frequency Transfer for the China Space Station. Acta Opt. Sin. 2025, 45, 286–294.
- Geng, R.; Wu, Z.; Huang, Y.; Lin, H.; Yu, R.; Tang, K.; Zhang, H.; Zhang, Z. Experimental Study on Transponder Laser Time Transfer Based on Satellite Retroreflectors. Chin. J. Lasers 2023, 50, 280–289.
- Xiao, W.; Wu, Z.; Li, Z.; Fan, L.; Guo, S.; Chen, Y. Research on the Autonomous Orbit Determination of Beidou-3 Assisted by Satellite Laser Ranging Technology. Remote Sens. 2025, 17, 2342.
- Schreiber, K.U.; Hugentobler, U.; Kodet, J.; Stellmer, S.; Klügel, T.; Wells, J.P.R. Gyroscope Measurements of the Precession and Nutation of Earth’s Axis. Sci. Adv. 2025, 11, eadx6634.
- Li, X.; Lou, J.; Yuan, Y.; Wu, J.; Zhang, K. Determination of Global Geodetic Parameters Using Satellite Laser Ranging to Galileo, GLONASS, and BeiDou Satellites. Satell. Navig. 2024, 5, 10.
- Cheng, M. Decadal Variations in Equatorial Ellipticity and Principal Axis of the Earth from Satellite Laser Ranging/GRACE. Surv. Geophys. 2024, 45, 1601–1626.
- Steindorfer, M.A.; Wang, P.; Koidl, F.; Kirchner, G. Space Debris and Satellite Laser Ranging Combined Using a Megahertz System. Nat. Commun. 2025, 16, 575.
- Newberry, M.V. Signal-to-Noise Considerations for Sky-Subtracted CCD Data. Publ. Astron. Soc. Pac. 1991, 103, 122.
- Viola, P.; Jones, M. Rapid Object Detection Using a Boosted Cascade of Simple Features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 1, pp. I–I.
- Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893.
- Felzenszwalb, P.; McAllester, D.; Ramanan, D. A Discriminatively Trained, Multiscale, Deformable Part Model. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
- Zhang, X.; Yang, Y.H.; Han, Z.; Wang, H.; Gao, C. Object Class Detection: A Survey. ACM Comput. Surv. 2013, 46, 1–53.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. arXiv 2016, arXiv:1612.08242.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer: Cham, Switzerland, 2020; pp. 213–229.
- Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object Detection in 20 Years: A Survey. Proc. IEEE 2023, 111, 257–276.
- Khanam, R.; Hussain, M. YOLOv11: An Overview of the Key Architectural Enhancements. arXiv 2024, arXiv:2410.17725.
- Tangyong, G.; Peiyuan, W.; Xin, L.; Wei, Z.; Tong, Z.; Shipeng, L.; Qingshan, L. Progress of the Satellite Laser Ranging System TROS1000. Geod. Geodyn. 2015, 6, 67–72.
- Ma, S.; Xu, Y. MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression. arXiv 2023, arXiv:2307.07662.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475.
- Yaseen, M. What Is YOLOv8: An In-Depth Exploration of the Internal Features of the Next-Generation Object Detector. arXiv 2024, arXiv:2408.15857.
- Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616.
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458.
- Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; Joulin, A. Emerging Properties in Self-Supervised Vision Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 9650–9660.
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430.
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully Convolutional One-Stage Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9627–9636.
- Lyu, C.; Zhang, W.; Huang, H.; Zhou, Y.; Wang, Y.; Liu, Y.; Zhang, S.; Chen, K. RTMDet: An Empirical Study of Designing Real-Time Object Detectors. arXiv 2022, arXiv:2212.07784.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving into High Quality Object Detection. arXiv 2017, arXiv:1712.00726.

| Item | Specification |
|---|---|
| Operating System | Ubuntu 22.04 |
| GPU | NVIDIA GeForce RTX 4090 (24 GB) |
| CPU | Intel Xeon Platinum 8352V @ 2.10 GHz, 16 vCPUs (Intel, Santa Clara, CA, USA) |
| Memory | 120 GB |
| Programming Language | Python 3.10 |
| Framework | PyTorch 2.1.0 + CUDA 12.1 |
| IDE | JupyterLab |
| Model | DMS | LUM | MPD | Params (M) | GFLOPs | Prec. (%) | Recall (%) | F1 (%) | mAP@0.5 (%) | mAP@0.75 (%) | mAP@0.5:0.95 (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline |  |  |  | 2.58 | 6.30 | 88.51 | 86.89 | 87.69 | 90.91 | 42.65 | 47.24 |
| + LUM |  | ✓ |  | 2.59 | 6.30 | 88.87 | 86.77 | 87.80 | 91.20 | 42.70 | 47.22 |
| + DMS | ✓ |  |  | 2.56 | 6.20 | 89.23 | 86.64 | 87.91 | 91.49 | 42.76 | 47.20 |
| + DMS + LUM | ✓ | ✓ |  | 2.57 | 6.70 | 89.94 | 86.39 | 88.13 | 92.07 | 42.87 | 47.15 |
| SLR-Net (Proposed) | ✓ | ✓ | ✓ | 2.57 | 6.70 | 90.30 | 86.27 | 88.24 | 92.36 | 42.92 | 47.13 |
| Model | Params (M) | GFLOPs | Prec. (%) | Recall (%) | F1 (%) | mAP@0.5 (%) | mAP@0.75 (%) | mAP@0.5:0.95 (%) | FPS |
|---|---|---|---|---|---|---|---|---|---|
| SLR-Net (Proposed) | 2.57 | 6.70 | 90.30 | 86.27 | 88.24 | 92.36 | 42.92 | 47.13 | 130.39 |
| Base | 2.58 | 6.30 | 88.51 | 86.89 | 87.69 | 90.91 | 42.65 | 47.24 | 201.28 |
| YOLO-based Detectors | |||||||||
| YOLOv3 [20] | 103.67 | 282.20 | 89.57 | 84.15 | 86.77 | 89.45 | 40.11 | 46.93 | 121.94 |
| YOLOv3-tiny [20] | 12.13 | 18.90 | 79.17 | 37.25 | 50.67 | 59.26 | 14.16 | 24.92 | 275.76 |
| YOLOv5-m | 25.05 | 64.00 | 87.85 | 85.29 | 86.55 | 91.41 | 43.64 | 47.27 | 176.75 |
| YOLOv5-n | 2.50 | 7.10 | 86.98 | 85.11 | 86.03 | 87.44 | 36.50 | 44.11 | 216.34 |
| YOLOv6-m [27] | 51.98 | 161.10 | 84.30 | 77.45 | 80.73 | 83.36 | 23.76 | 38.00 | 195.63 |
| YOLOv6-n [27] | 4.23 | 11.80 | 70.90 | 88.24 | 78.63 | 82.79 | 19.21 | 35.99 | 221.42 |
| YOLOv8-m [29] | 25.84 | 78.70 | 86.45 | 87.25 | 86.85 | 91.35 | 42.57 | 48.22 | 195.79 |
| YOLOv8-n [29] | 3.01 | 8.10 | 81.85 | 83.82 | 82.83 | 81.65 | 30.83 | 38.90 | 211.24 |
| YOLOv9-m [30] | 20.16 | 77.00 | 86.32 | 86.63 | 86.48 | 90.94 | 40.08 | 46.50 | 124.36 |
| YOLOv9-t [30] | 1.97 | 7.60 | 87.90 | 87.75 | 87.82 | 90.94 | 43.34 | 47.35 | 137.14 |
| YOLOv10-m [31] | 15.31 | 58.90 | 84.35 | 79.90 | 82.07 | 89.54 | 46.06 | 48.43 | 161.16 |
| YOLOv10-n [31] | 2.27 | 6.50 | 86.31 | 80.37 | 83.23 | 88.55 | 38.24 | 45.63 | 205.11 |
| Anchor-Free/Transformer-based Detectors | |||||||||
| YOLOX-S [33] | 8.94 | 8.52 | 93.37 | 82.84 | 87.79 | 85.80 | 27.80 | 39.50 | 120.35 |
| DINO [32] | 47.54 | 80.70 | 85.37 | 85.78 | 85.57 | 87.70 | 25.60 | 38.50 | 39.95 |
| FCOS [34] | 32.11 | 50.29 | 97.50 | 38.24 | 54.93 | 78.90 | 20.30 | 32.00 | 100.87 |
| RTMDet-S [35] | 8.86 | 9.44 | 89.94 | 70.10 | 78.79 | 79.70 | 16.70 | 30.80 | 106.15 |
| RetinaNet [36] | 36.33 | 52.20 | 52.53 | 76.47 | 62.28 | 60.80 | 3.30 | 20.30 | 99.63 |
| Faster R-CNN [17] | 41.35 | 63.18 | 74.36 | 14.22 | 23.87 | 18.50 | 2.20 | 5.70 | 91.12 |
| Cascade R-CNN [37] | 69.15 | 90.98 | 33.33 | 1.47 | 2.82 | 3.10 | 0.00 | 0.60 | 68.43 |
| Metric | YOLOv11 (Base) | SLR-Net (Proposed) | Mean |
|---|---|---|---|
| mAP@0.5 (%) |  |  | +0.90 |
| mAP@0.75 (%) |  |  | +1.58 |
| mAP@0.5:0.95 (%) |  |  | +3.88 |
| Precision (%) |  |  | +0.64 |
| Recall (%) |  |  | +1.02 |
| F1-Score (%) |  |  | +0.84 |