GAC-Net: A Geometric–Attention Fusion Network for Sparse Depth Completion from LiDAR and Image
Abstract
1. Introduction
- We propose a dual-branch PointNet++-S encoder to extract scale-aware geometric features from sparse point clouds and form a robust 3D representation;
- We design a Channel Attention-Based Feature Fusion Module (CAFFM) to adaptively fuse geometric priors with RGB-depth features;
- Extensive experiments on the KITTI depth completion benchmark demonstrate that GAC-Net outperforms BP-Net and other published methods in accuracy, robustness, and structural preservation, ranking first among peer-reviewed methods on the official leaderboard at the time of submission.
2. Related Work
2.1. Two-Dimensional-Based Depth Completion
2.2. 2D–3D Joint Depth Completion: Geometric, Multi-View, and Transformer Approaches
3. Methods
- (1) Pre-processing stage: A bilateral propagation module generates an initial dense depth map from the sparse input, providing a more structured estimate for subsequent fusion;
- (2) Enhanced multi-modal fusion: A U-Net backbone [27] extracts multi-scale 2D features, while the proposed dual-branch PointNet++-S encodes density-adaptive local and contextual 3D priors. These features are adaptively fused through the proposed Channel Attention-Based Feature Fusion Module (CAFFM), which reweights channels to enhance cross-modal consistency;
- (3) Depth refinement: A final refinement stage converts the fused features into the completed dense depth map (Section 3.3). A high-level sketch of this three-stage data flow is given below.
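To make the three-stage data flow concrete, the following is a minimal, hypothetical PyTorch sketch. The names (`back_project`, `GACNetSketch`) and all internal stubs are placeholders introduced here for illustration only; each stub stands in for the corresponding GAC-Net component and does not reproduce its actual layers.

```python
# Hypothetical sketch of the three-stage GAC-Net data flow (stubs only).
import torch
import torch.nn as nn


def back_project(depth, intrinsics):
    """Lift depth pixels to 3D camera coordinates: (B, H*W, 3).
    For simplicity this sketch lifts all pixels; invalid (zero) depths stay at the origin."""
    b, _, h, w = depth.shape
    fx, fy, cx, cy = intrinsics                      # assumed scalar intrinsics
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    u = u.to(depth).view(1, -1)
    v = v.to(depth).view(1, -1)
    z = depth.view(b, -1)
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return torch.stack([x, y, z], dim=-1)            # (B, H*W, 3)


class GACNetSketch(nn.Module):
    def __init__(self, feat_ch: int = 64):
        super().__init__()
        # Stage 1: bilateral propagation pre-processing (stub: 1x1 conv).
        self.bp = nn.Conv2d(1, 1, kernel_size=1)
        # Stage 2a: U-Net backbone over [RGB, sparse depth, BP depth] (stub conv).
        self.unet2d = nn.Conv2d(3 + 1 + 1, feat_ch, kernel_size=3, padding=1)
        # Stage 2b: dual-branch PointNet++-S over back-projected points
        # (stub: a shared per-point MLP on (x, y, z)).
        self.point_mlp = nn.Sequential(nn.Linear(3, feat_ch), nn.ReLU())
        # Stage 2c: CAFFM channel-attention fusion (stub: 1x1 conv).
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, kernel_size=1)
        # Stage 3: depth refinement head producing the dense depth map.
        self.refine = nn.Conv2d(feat_ch, 1, kernel_size=3, padding=1)

    def forward(self, rgb, sparse, intrinsics):
        d_bp = self.bp(sparse)                        # (B,1,H,W) initial dense depth
        f2d = self.unet2d(torch.cat([rgb, sparse, d_bp], dim=1))
        pts = back_project(sparse, intrinsics)        # (B,N,3) point cloud
        f3d = self.point_mlp(pts).max(dim=1).values   # (B,C) pooled 3D descriptor
        f3d = f3d[:, :, None, None].expand_as(f2d)    # broadcast onto the image plane
        fused = self.fuse(torch.cat([f2d, f3d], dim=1))
        return self.refine(fused)                     # completed dense depth map


# Toy usage with illustrative intrinsics (not KITTI calibration values).
out = GACNetSketch()(torch.randn(1, 3, 64, 192),
                     torch.zeros(1, 1, 64, 192),
                     (100.0, 100.0, 96.0, 32.0))     # -> (1, 1, 64, 192)
```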
3.1. Preprocessing via Bilateral Propagation
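Before the learned module is described, it may help to see what a purely classical bilateral fill looks like. The sketch below is only an illustrative stand-in for the idea of spreading sparse depths with joint spatial/intensity weights; it is not the learned Bilateral Propagation module of BP-Net [26] adopted in this work, and the function name and parameters (`bilateral_fill`, `sigma_s`, `sigma_r`) are hypothetical.

```python
# Generic, non-learned bilateral-style interpolation of a sparse depth map.
import torch
import torch.nn.functional as F


def bilateral_fill(sparse_depth, guide_gray, k=9, sigma_s=3.0, sigma_r=0.1):
    """sparse_depth: (B,1,H,W) with zeros at missing pixels.
    guide_gray:   (B,1,H,W) grayscale guidance image in [0, 1]."""
    b, _, h, w = sparse_depth.shape
    valid = (sparse_depth > 0).float()
    pad = k // 2
    # Gather k x k neighbourhoods: (B, k*k, H*W).
    d_patches = F.unfold(sparse_depth, k, padding=pad)
    m_patches = F.unfold(valid, k, padding=pad)
    g_patches = F.unfold(guide_gray, k, padding=pad)
    g_center = guide_gray.reshape(b, 1, h * w)
    # Spatial Gaussian weights over the k x k offsets.
    yy, xx = torch.meshgrid(torch.arange(k) - pad, torch.arange(k) - pad, indexing="ij")
    w_s = torch.exp(-(xx**2 + yy**2).float() / (2 * sigma_s**2)).reshape(1, k * k, 1)
    # Range (intensity) weights against the centre pixel of the guide image.
    w_r = torch.exp(-((g_patches - g_center) ** 2) / (2 * sigma_r**2))
    weight = w_s.to(sparse_depth) * w_r * m_patches   # only valid neighbours vote
    num = (weight * d_patches).sum(dim=1, keepdim=True)
    den = weight.sum(dim=1, keepdim=True).clamp_min(1e-6)
    return (num / den).view(b, 1, h, w)               # initial dense depth estimate

# e.g. dense0 = bilateral_fill(sparse, rgb.mean(dim=1, keepdim=True))
```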
3.2. Enhanced Multi-Modal Feature Fusion
3.2.1. U-Net Backbone for 2D Feature Fusion
3.2.2. Dual-Branch PointNet++-S Encoder for 3D Geometry Representation
Each set abstraction (SA) layer of the encoder is configured by three hyper-parameters:
- the number of center points sampled from the point cloud using farthest point sampling (FPS);
- the ball query radius used to group neighboring points around each center;
- the number of neighboring points selected within the query radius for local feature aggregation. A minimal sketch of one such grouping step follows.
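As a concrete illustration of these three hyper-parameters, the snippet below implements one naive grouping step (FPS, ball query, max-pooled aggregation) in plain PyTorch. The symbols N, r, and K and the example values (256 centres, 0.8 m radius, 16 neighbours) are introduced only for this sketch and are not the configuration used in the paper.

```python
# One naive set-abstraction grouping step with hyper-parameters (N, r, K).
import torch


def farthest_point_sampling(xyz, n_centers):
    """xyz: (M, 3) -> indices of N = n_centers points chosen by FPS (O(N*M))."""
    m = xyz.shape[0]
    idx = torch.zeros(n_centers, dtype=torch.long)
    dist = torch.full((m,), float("inf"))
    farthest = torch.randint(0, m, (1,)).item()
    for i in range(n_centers):
        idx[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(dim=1)   # squared distance to newest centre
        dist = torch.minimum(dist, d)
        farthest = int(torch.argmax(dist))            # next centre: farthest remaining point
    return idx


def ball_query_group(xyz, feats, centers_idx, radius, k):
    """Group up to K = k neighbours within radius r around each centre and
    max-pool their features: returns (N, C) local descriptors."""
    centers = xyz[centers_idx]                        # (N, 3)
    d2 = torch.cdist(centers, xyz) ** 2               # (N, M) squared distances
    in_ball = d2 <= radius ** 2
    # Penalise out-of-ball points so in-ball neighbours are picked first; if a ball
    # holds fewer than k points, the nearest outside points fill the remaining slots
    # (the original PointNet++ duplicates the first in-ball neighbour instead).
    d2 = torch.where(in_ball, d2, d2 + 1e6)
    knn_idx = d2.topk(k, largest=False).indices       # (N, k)
    grouped = feats[knn_idx]                          # (N, k, C)
    return grouped.max(dim=1).values                  # (N, C) pooled local feature


# Example: N = 256 centres, r = 0.8 m, K = 16 neighbours (illustrative values).
pts = torch.randn(4096, 3)
ftr = torch.randn(4096, 32)
c_idx = farthest_point_sampling(pts, 256)
local = ball_query_group(pts, ftr, c_idx, radius=0.8, k=16)   # (256, 32)
```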
3.2.3. Channel Attention-Based Feature Fusion Module (CAFFM)
3.2.4. Pseudocode for Enhanced Multi-Modal Feature Fusion
Algorithm 1: Enhanced Multi-Modal Feature Fusion at a given scale (Equations (2)–(15))
Input: RGB image; sparse depth map; pre-processed dense depth (from BP); camera intrinsics.
Output: Fused feature map.
Notation: channel concatenation; channel-wise multiplication; global average pooling; ReLU; Sigmoid; dense depth pre-processed via Bilateral Propagation.
Procedure:
1: Step 1: Back-projection to 3D
2:   Construct the sparse point cloud by back-projecting valid sparse-depth pixels with the camera intrinsics.
3: Step 2: 2D feature encoding
4:   Form the 2D input by concatenating the RGB image, the sparse depth map, and the pre-processed dense depth.
5:   Encode it with the U-Net backbone to obtain multi-scale 2D features.
6: Step 3: Dual-branch PointNet++-S
7:   For each branch:
8:     For each SA layer with its configuration (number of centers, ball radius, neighbor count):
9:       Apply set abstraction to update the branch's point features.
10:    End for
11:  End for
12:  Aggregate the two branches into a multi-scale 3D feature.
13: Step 4: Channel recalibration on the 3D feature
14:   Squeeze the 3D feature with global average pooling;
15:   Excite through two fully connected layers with ReLU and Sigmoid to obtain channel weights;
16:   Reweight the 3D feature channel-wise.
17: Step 5: Spatial broadcasting of the 3D feature
18:   Project the recalibrated 3D feature onto the image plane to align it with the 2D feature map.
19: Step 6: CAFFM: channel-attention fusion
20:   Concatenate the 2D and broadcast 3D features along the channel dimension;
21:   Global-average-pool the concatenated feature;
22:   Apply two fully connected layers with ReLU and Sigmoid to obtain fusion weights;
23:   Multiply the concatenated feature channel-wise by the fusion weights.
24: return the fused feature map.
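Steps 4 and 6 both apply global average pooling followed by a ReLU/Sigmoid bottleneck, i.e., a squeeze-and-excitation-style recalibration [29]. The sketch below shows one such SE-style channel-attention fusion block; the class name, channel sizes, and the reduction ratio of 16 are illustrative assumptions rather than the paper's exact configuration.

```python
# SE-style channel-attention fusion (CAFFM-like sketch with reduction ratio 16).
import torch
import torch.nn as nn


class ChannelAttentionFusion(nn.Module):
    def __init__(self, ch_2d: int, ch_3d: int, reduction: int = 16):
        super().__init__()
        ch = ch_2d + ch_3d
        self.gap = nn.AdaptiveAvgPool2d(1)            # squeeze: global average pooling
        self.excite = nn.Sequential(                  # excitation: FC -> ReLU -> FC -> Sigmoid
            nn.Linear(ch, ch // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch),
            nn.Sigmoid(),
        )

    def forward(self, f2d: torch.Tensor, f3d: torch.Tensor) -> torch.Tensor:
        # f2d: (B, C2, H, W) image-branch features; f3d: (B, C3, H, W) broadcast 3D features.
        x = torch.cat([f2d, f3d], dim=1)              # channel concatenation
        b, c, _, _ = x.shape
        w = self.excite(self.gap(x).view(b, c))       # (B, C) channel weights in (0, 1)
        return x * w.view(b, c, 1, 1)                 # channel-wise reweighting


# Usage: fuse 64-channel 2D features with 64-channel broadcast 3D features.
caffm = ChannelAttentionFusion(64, 64)
fused = caffm(torch.randn(2, 64, 88, 304), torch.randn(2, 64, 88, 304))  # (2, 128, 88, 304)
```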
3.3. Depth Refinement
3.4. Loss Function
4. Experiment
4.1. Experiment Setting
4.1.1. Datasets
4.1.2. Training Details
4.1.3. Metrics
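For reference, the four metrics reported in Section 4 follow the standard KITTI depth-completion definitions: RMSE and MAE on depth in millimetres, and iRMSE and iMAE on inverse depth in 1/km, evaluated only at pixels with valid ground truth. A minimal sketch is given below (the helper name `kitti_metrics` is ours; predictions are assumed strictly positive).

```python
# Standard KITTI depth-completion metrics (sketch). Inputs are depth maps in
# metres; RMSE/MAE are reported in mm, iRMSE/iMAE in 1/km.
import torch


def kitti_metrics(pred_m: torch.Tensor, gt_m: torch.Tensor) -> dict:
    mask = gt_m > 0                                       # evaluate only where GT exists
    pred, gt = pred_m[mask], gt_m[mask]
    err_mm = (pred - gt).abs() * 1000.0                   # metres -> millimetres
    inv_err_km = (1.0 / pred - 1.0 / gt).abs() * 1000.0   # 1/m -> 1/km
    return {
        "RMSE":  torch.sqrt((err_mm ** 2).mean()).item(),
        "MAE":   err_mm.mean().item(),
        "iRMSE": torch.sqrt((inv_err_km ** 2).mean()).item(),
        "iMAE":  inv_err_km.mean().item(),
    }
```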
4.2. Comparison with SoTA Methods
4.2.1. Quantitative Comparison
4.2.2. Qualitative Comparison
4.3. Ablation Studies
4.3.1. Effectiveness of PointNet++-S
4.3.2. Effectiveness of CAFFM
4.3.3. Visual Comparison of Ablation Results
4.4. Sparsity Level Analysis
4.5. Complexity Analysis
5. Conclusions
- (1) Efficiency under deployment constraints. Although the model is not strictly real-time, our results indicate that the 3D modeling introduced by the PointNet++-S branch and the SE-based CAFFM fusion module adds only modest computational overhead. Future work will explore hardware-friendly acceleration (e.g., TensorRT, mixed precision), model compression (pruning/quantization/distillation), and lightweight design choices (e.g., streamlined multi-scale stages and operator fusion) to further reduce latency and memory footprint. We will also investigate efficient cross-modal reasoning modules (e.g., state-space models such as Mamba or linear-attention variants) under strict efficiency constraints.
- (2) Generalization beyond KITTI. To strengthen external validity, we plan to evaluate cross-dataset performance and domain transfer (e.g., additional outdoor/indoor benchmarks) and to study robustness to sparsity, noise, transparency/reflectivity, and calibration shifts. In particular, recent benchmarks such as RSUD20K [19] highlight the importance of dataset diversity and robustness evaluation under varied environmental conditions, and will guide our future work on generalization across challenging scenarios.
- (3) Temporal and system aspects. We will extend GAC-Net to sequential/streaming depth completion with temporal consistency and conduct system-level measurements (FLOPs and inference memory on diverse hardware, including embedded platforms), providing a more comprehensive view of computational costs for practical deployment.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Xie, Z.; Yu, X.; Gao, X.; Li, K.; Shen, S. Recent Advances in Conventional and Deep Learning-Based Depth Completion: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 3395–3415.
2. Huang, Z.; Lv, C.; Xing, Y.; Wu, J. Multi-Modal Sensor Fusion-Based Deep Neural Network for End-to-End Autonomous Driving With Scene Understanding. IEEE Sens. J. 2021, 21, 11781–11790.
3. Song, Z.; Lu, J.; Yao, Y.; Zhang, J. Self-Supervised Depth Completion from Direct Visual-LiDAR Odometry in Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 11654–11665.
4. Bai, L.; Zhao, Y.; Elhousni, M.; Huang, X. DepthNet: Real-Time LiDAR Point Cloud Depth Completion for Autonomous Vehicles. IEEE Access 2020, 8, 227825–227833.
5. Gofer, E.; Praisler, S.; Gilboa, G. Adaptive LiDAR Sampling and Depth Completion Using Ensemble Variance. IEEE Trans. Image Process. 2021, 30, 8900–8912.
6. Cui, Y.; Chen, R.; Chu, W.; Chen, L.; Tian, D.; Li, Y.; Cao, D. Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review. IEEE Trans. Intell. Transp. Syst. 2022, 23, 722–739.
7. Su, S.; Wu, J. GeometryFormer: Semi-convolutional transformer integrated with geometric perception for depth completion in autonomous driving scenes. Sensors 2024, 24, 8066.
8. Zou, N.; Xiang, Z.; Chen, Y.; Chen, S.; Qiao, C. Simultaneous Semantic Segmentation and Depth Completion with Constraint of Boundary. Sensors 2020, 20, 635.
9. Jeong, Y.; Park, J.; Cho, D.; Hwang, Y.; Choi, S.B.; Kweon, I.S. Lightweight Depth Completion Network with Local Similarity-Preserving Knowledge Distillation. Sensors 2022, 22, 7388.
10. Pan, J.; Zhong, S.; Yue, T.; Yin, Y.; Tang, Y. Multi-Task Foreground-Aware Network with Depth Completion for Enhanced RGB-D Fusion Object Detection Based on Transformer. Sensors 2024, 24, 2374.
11. El-Yabroudi, M.Z.; Abdel-Qader, I.; Bazuin, B.J.; Abudayyeh, O.; Chabaan, R.C. Guided Depth Completion with Instance Segmentation Fusion in Autonomous Driving Applications. Sensors 2022, 22, 9578.
12. Zhai, D.-H.; Yu, S.; Wang, W.; Guan, Y.; Xia, Y. TCRNet: Transparent Object Depth Completion with Cascade Refinements. IEEE Trans. Autom. Sci. Eng. 2025, 22, 1893–1912.
13. Wang, M.; Huang, R.; Liu, Y.; Li, Y.; Xie, W. suLPCC: A Novel LiDAR Point Cloud Compression Framework for Scene Understanding Tasks. IEEE Trans. Ind. Inform. 2025, 21, 3816–3827.
14. Lu, H.; Xu, S.; Cao, S. SGTBN: Generating Dense Depth Maps from Single-Line LiDAR. IEEE Sens. J. 2021, 21, 19091–19100.
15. Chen, Z.; Wang, H.; Wu, L.; Zhou, Y.; Wu, D. Spatiotemporal Guided Self-Supervised Depth Completion from LiDAR and Monocular Camera. In Proceedings of the 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), Shenzhen, China, 1–4 December 2020; pp. 54–57.
16. Wang, Y.; Dai, Y.; Liu, Q.; Yang, P.; Sun, J.; Li, B. CU-Net: LiDAR Depth-Only Completion with Coupled U-Net. IEEE Robot. Autom. Lett. 2022, 7, 11476–11483.
17. Fan, Y.-C.; Zheng, L.-J.; Liu, Y.-C. 3D Environment Measurement and Reconstruction Based on LiDAR. In Proceedings of the 2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Houston, TX, USA, 14–17 May 2018; pp. 1–4.
18. Uhrig, J.; Schneider, N.; Schneider, L.; Franke, U.; Brox, T.; Geiger, A. Sparsity Invariant CNNs. In Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China, 10–12 October 2017; IEEE: New York, NY, USA, 2017; pp. 11–20.
19. Zunair, H.; Khan, S.; Hamza, A.B. Rsud20K: A dataset for road scene understanding in autonomous driving. In Proceedings of the 2024 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 27–30 October 2024; IEEE: New York, NY, USA, 2024; pp. 708–714.
20. Tang, J.; Tian, F.-P.; Feng, W.; Li, J.; Tan, P. Learning Guided Convolutional Network for Depth Completion. IEEE Trans. Image Process. 2021, 30, 1116–1129.
21. Hu, M.; Wang, S.; Li, B.; Ning, S.; Fan, L.; Gong, X. PENet: Towards Precise and Efficient Image Guided Depth Completion. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 13656–13662.
22. Cheng, X.; Wang, P.; Yang, R. Learning Depth with Convolutional Spatial Propagation Network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2361–2379.
23. Cheng, X.; Wang, P.; Guan, C.; Yang, R. CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 10615–10622.
24. Lin, Y.; Cheng, T.; Zhong, Q.; Zhou, W.; Yang, H. Dynamic Spatial Propagation Network for Depth Completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 1638–1646.
25. Liu, L.; Liao, Y.; Wang, Y.; Geiger, A.; Liu, Y. Learning Steering Kernels for Guided Depth Completion. IEEE Trans. Image Process. 2021, 30, 2850–2861.
26. Tang, J.; Tian, F.-P.; An, B.; Li, J.; Tan, P. Bilateral Propagation Network for Depth Completion. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 9763–9772.
27. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
28. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30, pp. 5099–5108.
29. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
30. Imran, S.; Long, Y.; Liu, X.; Morris, D. Depth Coefficients for Depth Completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 12438–12447.
31. Ma, F.; Karaman, S. Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 4796–4803.
32. Zhang, Y.; Guo, X.; Poggi, M.; Zhu, Z.; Huang, G.; Mattoccia, S. CompletionFormer: Depth Completion with Convolutions and Vision Transformers. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; IEEE: New York, NY, USA, 2023; pp. 18527–18536.
33. Park, J.; Joo, K.; Hu, Z.; Liu, C.K.; So Kweon, I. Non-local Spatial Propagation Network for Depth Completion. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 120–136.
34. Chen, D.; Huang, T.; Song, Z.; Deng, S.; Jia, T. Agg-Net: Attention Guided Gated-Convolutional Network for Depth Image Completion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; IEEE: New York, NY, USA, 2023; pp. 8853–8862.
35. Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, J.; Yang, J. RigNet: Repetitive Image Guided Network for Depth Completion. In Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 214–230.
36. Yan, Z.; Li, X.; Hui, L.; Zhang, Z.; Li, J.; Yang, J. Rignet++: Semantic Assisted Repetitive Image Guided Network for Depth Completion. arXiv 2023, arXiv:2309.00655.
37. Nazir, D.; Pagani, A.; Liwicki, M.; Stricker, D.; Afzal, M.Z. SemAttNet: Toward Attention-Based Semantic Aware Guided Depth Completion. IEEE Access 2022, 10, 120781–120791.
38. Qiu, J.; Cui, Z.; Zhang, Y.; Zhang, X.; Liu, S.; Zeng, B.; Pollefeys, M. DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene from Sparse LiDAR Data and Single Color Image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 3308–3317.
39. Xu, Y.; Zhu, X.; Shi, J.; Zhang, G.; Bao, H.; Li, H. Depth Completion from Sparse LiDAR Data with Depth-Normal Constraints. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2811–2820.
40. Zhao, S.; Gong, M.; Fu, H.; Tao, D. Adaptive Context-Aware Multi-Modal Network for Depth Completion. IEEE Trans. Image Process. 2021, 30, 5264–5276.
41. Liu, X.; Shao, X.; Wang, B.; Li, Y.; Wang, S. Graphcspn: Geometry-aware depth completion via dynamic GCNs. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 90–107.
42. Chen, Y.; Yang, B.; Liang, M.; Urtasun, R. Learning Joint 2D-3D Representations for Depth Completion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 10022–10031.
43. Yu, Z.; Sheng, Z.; Zhou, Z.; Luo, L.; Cao, S.-Y.; Gu, H.; Zhang, H.; Shen, H.-L. Aggregating Feature Point Cloud for Depth Completion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 8698–8709.
44. Zhou, W.; Yan, X.; Liao, Y.; Lin, Y.; Huang, J.; Zhao, G.; Cui, S.; Li, Z. BEV@DC: Bird’s-Eye View Assisted Training for Depth Completion. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 9233–9242.
45. Yan, Z.; Lin, Y.; Wang, K.; Zheng, Y.; Wang, Y.; Zhang, Z.; Li, J.; Yang, J. Tri-Perspective View Decomposition for Geometry-Aware Depth Completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 4874–4884.
46. Wang, S.; Jiang, F.; Gong, X. A Transformer-Based Image-Guided Depth-Completion Model with Dual-Attention Fusion Module. Sensors 2024, 24, 6270.
47. Shi, Y.; Singh, M.K.; Cai, H.; Porikli, F. DeCoTR: Enhancing Depth Completion with 2D and 3D Attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 10736–10746.
48. Xie, G.; Zhang, Y.; Jiang, Z.; Liu, Y.; Xie, Z.; Cao, B.; Liu, H. HTMNet: A Hybrid Network with Transformer-Mamba Bottleneck Multimodal Fusion for Transparent and Reflective Objects Depth Completion. arXiv 2025, arXiv:2505.20904.
49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
50. Wang, Y.; Zhang, G.; Wang, S.; Li, B.; Liu, Q.; Hui, L.; Dai, Y. Improving Depth Completion via Depth Feature Upsampling. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; IEEE: New York, NY, USA, 2024; pp. 21104–21113.
51. Imran, S.; Liu, X.; Morris, D. Depth Completion with Twin Surface Extrapolation at Occlusion Boundaries. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 2583–2592.
52. Wang, Y.; Li, B.; Zhang, G.; Liu, Q.; Gao, T.; Dai, Y. LRRU: Long-Short Range Recurrent Updating Networks for Depth Completion. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; IEEE: New York, NY, USA, 2023; pp. 9388–9398.
53. Huynh, L.; Nguyen, P.; Matas, J.; Rahtu, E.; Heikkilä, J. Boosting Monocular Depth Estimation with Lightweight 3D Point Fusion. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; IEEE: New York, NY, USA, 2021; pp. 12747–12756.
Method | 2D | 3D | RMSE↓ (mm) | MAE↓ (mm) | iRMSE↓ (1/km) | iMAE↓ (1/km) | Publication |
---|---|---|---|---|---|---|---|
CSPN [22] | ✓ | | 1019.64 | 279.46 | 2.93 | 1.15 | ECCV 2018 |
TWISE [51] | ✓ | | 840.20 | 195.58 | 2.08 | 0.82 | CVPR 2021 |
CSPN++ [23] | ✓ | | 743.69 | 209.28 | 2.07 | 0.90 | AAAI 2020 |
PENet [21] | ✓ | | 730.08 | 210.5 | 2.17 | 0.94 | ICRA 2021 |
RigNet [35] | ✓ | | 712.66 | 203.25 | 2.08 | 0.90 | ECCV 2022 |
LRRU [52] | ✓ | | 696.51 | 189.96 | 1.87 | 0.81 | ICCV 2023 |
BP-Net [26] | ✓ | | 684.90 | 194.69 | 1.82 | 0.84 | CVPR 2024 |
FuseNet [42] | ✓ | ✓ | 752.88 | 221.19 | 2.34 | 1.14 | ICCV 2019 |
ACMNet [40] | ✓ | ✓ | 744.91 | 206.09 | 2.08 | 0.90 | T-IP 2021 |
PointFusion [53] | ✓ | ✓ | 741.90 | 201.10 | 1.97 | 0.85 | ICCV 2021 |
GraphCSPN [41] | ✓ | ✓ | 738.41 | 199.31 | 1.96 | 0.84 | ECCV 2022 |
PointDC [43] | ✓ | ✓ | 736.07 | 201.87 | 1.97 | 0.87 | ICCV 2023 |
DeCoTR [47] | ✓ | ✓ | 717.07 | 195.30 | 1.92 | 0.84 | CVPR 2024 |
TPVD [45] | ✓ | ✓ | 693.97 | 188.60 | 1.82 | 0.81 | CVPR 2024 |
GAC-Net (ours) | ✓ | ✓ | 680.82 | 193.85 | 1.81 | 0.84 | - |
GAC-Net | Without | PointNet++ | PointNet++-S | RMSE (mm) | MAE (mm) |
---|---|---|---|---|---|
i | ✓ | | | 719.08 | 204.36 |
ii | | ✓ | | 714.34 | 200.12 |
iii | | | ✓ | 709.40 | 195.63 |
GAC-Net | Add | Concat | CAFFM | RMSE (mm) | MAE (mm) |
---|---|---|---|---|---|
i | ✓ | | | 716.69 | 202.54 |
ii | | ✓ | | 715.06 | 201.36 |
iii | | | ✓ | 709.40 | 195.63 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).