GS-MSDR: Gaussian Splatting with Multi-Scale Deblurring and Resolution Enhancement
Abstract
1. Introduction
- We introduce GS-MSDR, a framework that substantially improves 3D reconstruction of blurred scenes through multi-scale deblurring and resolution enhancement.
- We propose the Multi-scale Adaptive Attention Network (MAAN), which extracts multi-scale features and selectively enhances the most informative ones (see the conceptual sketch after this list).
- We demonstrate the effectiveness of GS-MSDR on three types of image blur scenarios, showing significant gains in both quantitative metrics and visual quality over current state-of-the-art methods.
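Since the paper's architecture details are not reproduced in this outline, the following is only a minimal PyTorch sketch of the general idea behind a multi-scale adaptive attention block: parallel depthwise convolutions at several receptive-field sizes, plus a learned gate that selectively reweights the scales. The kernel sizes (3/5/7), layer choices, and gating design are illustrative assumptions, not the paper's actual MAAN.

```python
import torch
import torch.nn as nn

class MultiScaleAdaptiveAttention(nn.Module):
    """Illustrative multi-scale attention block (NOT the paper's MAAN).

    Extracts features at several receptive-field sizes with parallel
    depthwise convolutions, then reweights the branches with a learned
    channel gate so the network can emphasize the scale that best
    explains the local blur.
    """

    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One depthwise conv per scale; padding keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        # Squeeze-and-excitation-style gate: one softmax weight per branch.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(kernel_sizes), 1),
            nn.Softmax(dim=1),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, S, C, H, W)
        w = self.gate(x).unsqueeze(2)                              # (B, S, 1, 1, 1)
        fused = (w * feats).sum(dim=1)                             # weighted sum over scales
        return x + self.fuse(fused)                                # residual connection

# Smoke test on a dummy feature map.
y = MultiScaleAdaptiveAttention(32)(torch.randn(1, 32, 64, 64))
print(y.shape)  # torch.Size([1, 32, 64, 64])
```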
2. Related Works
2.1. Image Deblurring
2.2. Neural Radiance Fields (NeRF)
2.3. NeRF-Based Deblurring
2.4. 3DGS-Based Deblurring
3. Methodology
3.1. Preliminaries: Three-Dimensional Gaussian Splatting
3.2. Multi-Scale Adaptive Attention Network
3.2.1. Feature Attention Network
3.2.2. Multi-Modal Contextual Features
3.2.3. Multi-Modal Context Adapter
3.3. Hierarchical Progressive Kernel Optimization
4. Experiments
4.1. Implementation Details
4.2. Benchmark Datasets
4.3. Baselines and Metrics
4.4. Computational Complexity and Trade-Off Analysis
5. Results
5.1. Ablation Study
5.1.1. Functional Component Analysis
5.1.2. Cross-Module Interaction Analysis
5.2. Comparison Experiments
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106.
- Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans. Graph. 2023, 42, 139–153.
- Fridovich-Keil, S.; Meanti, G.; Warburg, F.R.; Recht, B.; Kanazawa, A. K-planes: Explicit radiance fields in space, time, and appearance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 12479–12488.
- Fridovich-Keil, S.; Yu, A.; Tancik, M.; Chen, Q.; Recht, B.; Kanazawa, A. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 5501–5510.
- Garbin, S.J.; Kowalski, M.; Johnson, M.; Shotton, J.; Valentin, J. FastNeRF: High-fidelity neural rendering at 200 FPS. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; IEEE: New York, NY, USA, 2021; pp. 14346–14355.
- Müller, T.; Evans, A.; Schied, C.; Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. 2022, 41, 1–15.
- Sun, C.; Sun, M.; Chen, H.T. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 5459–5469.
- Chen, A.; Xu, Z.; Geiger, A.; Yu, J.; Su, H. TensoRF: Tensorial radiance fields. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 333–350.
- Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 5470–5479.
- Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Zip-NeRF: Anti-aliased grid-based neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; IEEE: New York, NY, USA, 2023; pp. 19697–19705.
- Liu, Y.L.; Gao, C.; Meuleman, A.; Tseng, H.Y.; Saraf, A.; Kim, C.; Chuang, Y.Y.; Kopf, J.; Huang, J.B. Robust dynamic radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 13–23.
- Pumarola, A.; Corona, E.; Pons-Moll, G.; Moreno-Noguer, F. D-NeRF: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 10318–10327.
- Tretschk, E.; Tewari, A.; Golyanik, V.; Zollhöfer, M.; Lassner, C.; Theobalt, C. Non-rigid neural radiance fields: Reconstruction and novel view synthesis of a dynamic scene from monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; IEEE: New York, NY, USA, 2021; pp. 12959–12970.
- Ma, L.; Li, X.; Liao, J.; Zhang, Q.; Wang, X.; Wang, J.; Sander, P.V. Deblur-NeRF: Neural radiance fields from blurry images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 12861–12870.
- Lee, D.; Lee, M.; Shin, C.; Lee, S. DP-NeRF: Deblurred neural radiance field with physical scene priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 12386–12396.
- Peng, C.; Chellappa, R. PDRF: Progressively deblurring radiance field for fast and robust scene reconstruction from blurry images. arXiv 2022, arXiv:2208.08049.
- Wang, P.; Zhao, L.; Ma, R.; Liu, P. BAD-NeRF: Bundle adjusted deblur neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 4170–4179.
- Lee, B.; Lee, H.; Ali, U.; Park, E. Sharp-NeRF: Grid-based fast deblurring neural radiance fields using sharpness prior. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; IEEE: New York, NY, USA, 2024; pp. 3709–3718.
- Low, W.F.; Lee, G.H. Deblur e-NeRF: NeRF from motion-blurred events under high-speed or low-light conditions. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2024; pp. 192–209.
- Tang, W.Z.; Rebain, D.; Derpanis, K.G.; Yi, K.M. LSE-NeRF: Learning sensor modeling errors for deblurred neural radiance fields with RGB-event stereo. arXiv 2024, arXiv:2409.06104.
- Lee, B.; Lee, H.; Sun, X.; Ali, U.; Park, E. Deblurring 3D Gaussian splatting. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2024; pp. 127–143.
- Peng, C.; Tang, Y.; Zhou, Y.; Wang, N.; Liu, X.; Li, D.; Chellappa, R. BAGS: Blur agnostic Gaussian splatting through multi-scale kernel modeling. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2024; pp. 293–310.
- Zhao, L.; Wang, P.; Liu, P. BAD-Gaussians: Bundle adjusted deblur Gaussian splatting. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2024; pp. 233–250.
- Oh, J.; Chung, J.; Lee, D.; Lee, K.M. DeblurGS: Gaussian splatting for camera motion blur. arXiv 2024, arXiv:2404.11358.
- Weng, Y.; Shen, Z.; Chen, R.; Wang, Q.; Wang, J. EaDeblur-GS: Event-assisted 3D deblur reconstruction with Gaussian splatting. arXiv 2024, arXiv:2407.13520.
- Deguchi, H.; Masuda, M.; Nakabayashi, T.; Saito, H. E2GS: Event enhanced Gaussian splatting. In Proceedings of the 2024 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 27–30 October 2024; IEEE: New York, NY, USA, 2024; pp. 1676–1682.
- Seiskari, O.; Ylilammi, J.; Kaatrasalo, V.; Rantalankila, P.; Turkulainen, M.; Kannala, J.; Rahtu, E.; Solin, A. Gaussian splatting on the move: Blur and rolling shutter compensation for natural camera motion. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2024; pp. 160–177.
- Lee, D.; Park, J.K.; Lee, K.M. GS-Blur: A 3D scene-based dataset for realistic image deblurring. Adv. Neural Inf. Process. Syst. 2024, 37, 125394–125415.
- Abuolaim, A.; Afifi, M.; Brown, M.S. Improving single-image defocus deblurring: How dual-pixel images help through multi-task learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; IEEE: New York, NY, USA, 2022; pp. 1231–1239.
- Ruan, L.; Chen, B.; Li, J.; Lam, M. Learning to deblur using light field generated and real defocus images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; IEEE: New York, NY, USA, 2022; pp. 16304–16313.
- Khan, M.K.; Morigi, S.; Reichel, L.; Sgallari, F. Iterative methods of Richardson-Lucy-type for image deblurring. Numer. Math. Theory Methods Appl. 2013, 6, 262–275.
- Biswas, P.; Sarkar, A.S.; Mynuddin, M. Deblurring images using a Wiener filter. Int. J. Comput. Appl. 2015, 109, 36–38.
- Ding, J.-J.; Chang, W.-D.; Chen, Y.; Fu, S.-W.; Chang, C.-W.; Chang, C.-C. Image deblurring using a pyramid-based Richardson-Lucy algorithm. In Proceedings of the 2014 19th International Conference on Digital Signal Processing, Hong Kong, China, 20–23 August 2014; IEEE: New York, NY, USA, 2014; pp. 204–209.
- Jassim, D.A.; Jassim, S.I.; Alhayani, N.J. Image de-blurring and de-noising by using a Wiener filter for different types of noise. In International Conference on Emerging Technologies and Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2022; pp. 451–460.
- Chen, L.; Zhang, J.; Li, Z.; Wei, Y.; Fang, F.; Ren, J.; Pan, J. Deep Richardson-Lucy deconvolution for low-light image deblurring. Int. J. Comput. Vis. 2024, 132, 428–445.
- Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: New York, NY, USA, 2018; pp. 8183–8192.
- Kupyn, O.; Martyniuk, T.; Wu, J.; Wang, Z. DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; IEEE: New York, NY, USA, 2019; pp. 8878–8887.
- Mei, J.; Wu, Z.; Chen, X.; Qiao, Y.; Ding, H.; Jiang, X. DeepDeblur: Text image recovery from blur to sharp. Multimed. Tools Appl. 2019, 78, 18869–18885.
- Wang, L.; Li, Y.; Wang, S. DeepDeblur: Fast one-step blurry face images restoration. arXiv 2017, arXiv:1711.09515.
- Cao, A.; Johnson, J. HexPlane: A fast representation for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 130–141.
- Nam, S.; Rho, D.; Ko, J.H.; Park, E. Mip-grid: Anti-aliased grid representations for neural radiance fields. Adv. Neural Inf. Process. Syst. 2023, 36, 2837–2849.
- Rho, D.; Lee, B.; Nam, S.; Lee, J.C.; Ko, J.H.; Park, E. Masked wavelet representation for compact neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 20680–20690.
- Wang, P.; Liu, Y.; Chen, Z.; Liu, L.; Liu, Z.; Komura, T.; Theobalt, C.; Wang, W. F2-NeRF: Fast neural radiance field training with free camera trajectories. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 4150–4159.
- Lee, D.; Kim, D.; Lee, J.; Lee, M.; Lee, S.; Lee, S. Sparse-DeRF: Deblurred neural radiance fields from sparse view. arXiv 2024, arXiv:2407.06613.
- Guo, M.-H.; Lu, C.-Z.; Liu, Z.-N.; Cheng, M.-M.; Hu, S.-M. Visual attention network. Comput. Visual Media 2023, 9, 733–752.
- Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 17–33.
- Dauphin, Y.N.; Fan, A.; Auli, M.; Grangier, D. Language modeling with gated convolutional networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 933–941.
- Hua, W.; Dai, Z.; Liu, H.; Le, Q. Transformer quality in linear time. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 9099–9117.
- Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. PVT v2: Improved baselines with pyramid vision transformer. Comput. Visual Media 2022, 8, 415–424.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Schonberger, J.L.; Frahm, J.-M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; IEEE: New York, NY, USA, 2016; pp. 4104–4113.
- Tancik, M.; Weber, E.; Ng, E.; Li, R.; Yi, B.; Wang, T.; Kristoffersen, A.; Austin, J.; Salahi, K.; Ahuja, A.; et al. Nerfstudio: A modular framework for neural radiance field development. In Proceedings of the ACM SIGGRAPH 2023 Conference Proceedings, Los Angeles, CA, USA, 6–10 August 2023; ACM: New York, NY, USA, 2023; pp. 1–12.
- Yu, Z.; Chen, A.; Huang, B.; Sattler, T.; Geiger, A. Mip-Splatting: Alias-free 3D Gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; IEEE: New York, NY, USA, 2024; pp. 19447–19456.
- Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; Dong, C. Activating more pixels in image super-resolution transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: New York, NY, USA, 2023; pp. 22367–22377.
- Ni, S.; Wang, S.; Li, H.; Li, P. Multi-scale residual attention network for image super-resolution. In Proceedings of the 2023 International Conference on Telecommunications, Electronics and Informatics (ICTEI), Lisbon, Portugal, 11–13 September 2023; IEEE: New York, NY, USA, 2023; pp. 399–403.

Table: Implementation parameters.

| Parameter | Value/Description |
|---|---|
| Number of Scales | 3 (Small, Medium, Large) |
| Kernel Sizes | (Small), (Medium), (Large) |
| Core Size | Dynamically adjusted based on scale (e.g., , , ) |
| Learning Rates | Position: 0.00016 → 0.0000016; Feature: 0.0025; Opacity: 0.05; Scaling: 0.005; Rotation: 0.001 |
| Training Schedule | 60,000 total iterations; opacity reset every 3000 iterations; densification every 100 iterations |
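The per-parameter learning rates above match the commonly used 3DGS defaults, with the position rate decayed exponentially from 1.6e-4 to 1.6e-6. A hedged sketch of how such a schedule is typically wired up with Adam parameter groups follows; the `gaussians` dict, tensor shapes, and the `"name"` tag are placeholders, not the authors' code:

```python
import torch

# Hypothetical parameter containers standing in for a Gaussian scene model.
gaussians = {
    "xyz":      torch.zeros(100_000, 3, requires_grad=True),
    "features": torch.zeros(100_000, 48, requires_grad=True),
    "opacity":  torch.zeros(100_000, 1, requires_grad=True),
    "scaling":  torch.zeros(100_000, 3, requires_grad=True),
    "rotation": torch.zeros(100_000, 4, requires_grad=True),
}

# One Adam parameter group per attribute, each with its own learning rate.
optimizer = torch.optim.Adam(
    [
        {"params": [gaussians["xyz"]],      "lr": 1.6e-4, "name": "xyz"},
        {"params": [gaussians["features"]], "lr": 2.5e-3},
        {"params": [gaussians["opacity"]],  "lr": 5e-2},
        {"params": [gaussians["scaling"]],  "lr": 5e-3},
        {"params": [gaussians["rotation"]], "lr": 1e-3},
    ],
    eps=1e-15,
)

# Exponential decay takes the position LR from 1.6e-4 to 1.6e-6 over 60k steps.
max_steps = 60_000
gamma = (1.6e-6 / 1.6e-4) ** (1.0 / max_steps)
for step in range(max_steps):
    # ... render, compute loss, loss.backward(), optimizer.step(), zero_grad() ...
    for group in optimizer.param_groups:
        if group.get("name") == "xyz":
            group["lr"] *= gamma  # only the position learning rate decays
```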

Table: Approximate computational cost per blur scenario.

| Scenario | Training Time (h) | Testing Time (s) | GPU Memory Usage (%) |
|---|---|---|---|
| Camera Motion Blur | ≈ | ≈6 | ≈ |
| Real Defocus Blur | ≈ | ≈8 | ≈ |
| Mixed Resolution | ≈ | ≈12 | ≈ |

Table: Quantitative comparison on camera motion blur scenes (PSNR↑ / SSIM↑).

| Method | PSNR | SSIM |
|---|---|---|
| NeRF [1] | 22.69 | 0.635 |
| 3D-GS [2] | 21.66 | 0.615 |
| Mip-Splatting [53] | 21.87 | 0.627 |
| Deblur-NeRF [14] | 25.63 | 0.768 |
| DP-NeRF [15] | 25.91 | 0.775 |
| PDRF-10 [16] | 25.98 | 0.725 |
| BAGS [22] | 26.70 | 0.824 |
| Ours | 27.09 | 0.885 |
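For reference, the PSNR (in dB, higher is better) and SSIM (higher is better) values in these comparison tables follow their standard definitions, where MAX is the peak pixel value (e.g., 255 for 8-bit images) and C1, C2 are small stabilizing constants:

```latex
\mathrm{PSNR} = 10\,\log_{10}\!\frac{\mathrm{MAX}^2}{\mathrm{MSE}},
\qquad
\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\bigl(I_i - \hat{I}_i\bigr)^2,
\qquad
\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
                          {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
```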

Table: Quantitative comparison on real defocus blur scenes (PSNR↑ / SSIM↑).

| Method | PSNR | SSIM |
|---|---|---|
| NeRF [1] | 22.40 | 0.666 |
| 3D-GS [2] | 20.57 | 0.606 |
| Mip-Splatting [53] | 21.08 | 0.623 |
| Deblur-NeRF [14] | 23.46 | 0.720 |
| Sharp-NeRF [18] | 23.55 | 0.724 |
| DP-NeRF [15] | 23.67 | 0.730 |
| PDRF-10 [16] | 23.85 | 0.738 |
| BAGS [22] | 23.95 | 0.754 |
| Ours | 23.97 | 0.883 |

Table: Quantitative comparison on mixed-resolution scenes (PSNR↑ / SSIM↑).

| Method | PSNR | SSIM |
|---|---|---|
| NeRFacto [52] | 21.90 | 0.548 |
| Mip-NeRF 360 [9] | 26.18 | 0.721 |
| Mip-Splatting [53] | 26.11 | 0.727 |
| Mip-Splatting + HAT [54,55] | 24.99 | 0.603 |
| BAGS [22] | 27.11 | 0.791 |
| Ours | 28.30 | 0.819 |

Table: Module ablation across the three scenarios — camera motion blur, real defocus blur, and mixed resolution (PSNR↑ / SSIM↑; ✓ = module enabled, ✗ = disabled).

| GAKA | SAFM | MCA | RGBD | HPKO | PSNR (Motion) | SSIM (Motion) | PSNR (Defocus) | SSIM (Defocus) | PSNR (Mixed) | SSIM (Mixed) |
|---|---|---|---|---|---|---|---|---|---|---|
| ✗ | ✗ | ✗ | ✗ | ✗ | 24.91 | 0.794 | 22.65 | 0.824 | 27.21 | 0.756 |
| ✗ | ✓ | ✓ | ✓ | ✓ | 26.44 | 0.861 | 23.52 | 0.852 | 27.87 | 0.792 |
| ✓ | ✗ | ✓ | ✓ | ✓ | 25.44 | 0.840 | 23.47 | 0.864 | 27.72 | 0.786 |
| ✓ | ✓ | ✗ | ✓ | ✓ | 25.69 | 0.851 | 23.62 | 0.876 | 27.91 | 0.796 |
| ✓ | ✓ | ✓ | ✗ | ✓ | 25.12 | 0.843 | 23.31 | 0.858 | 27.67 | 0.776 |
| ✓ | ✓ | ✓ | ✓ | ✗ | 25.27 | 0.849 | 23.40 | 0.861 | 27.66 | 0.782 |
| ✓ | ✓ | ✓ | ✓ | ✓ | 27.09 | 0.885 | 23.97 | 0.883 | 28.30 | 0.819 |

Table: Functional component analysis — baseline, deblurring-only (MSD-only), resolution-enhancement-only (RE-only), and full GS-MSDR (PSNR↑ / SSIM↑).

| Configuration | PSNR (Motion) | SSIM (Motion) | PSNR (Defocus) | SSIM (Defocus) | PSNR (Mixed) | SSIM (Mixed) |
|---|---|---|---|---|---|---|
| Baseline | 24.91 | 0.794 | 22.65 | 0.824 | 27.21 | 0.756 |
| MSD-only | 25.60 | 0.842 | 23.05 | 0.846 | 27.55 | 0.775 |
| RE-only | 25.27 | 0.849 | 23.40 | 0.861 | 27.66 | 0.782 |
| GS-MSDR | 27.09 | 0.885 | 23.97 | 0.883 | 28.30 | 0.819 |

Table: Cross-module interaction — change in PSNR/SSIM relative to the full model when a single module is removed.

| Removed Module | ΔPSNR (Motion) | ΔSSIM (Motion) | ΔPSNR (Defocus) | ΔSSIM (Defocus) | ΔPSNR (Mixed) | ΔSSIM (Mixed) |
|---|---|---|---|---|---|---|
| GAKA | −0.65 | −0.024 | −0.45 | −0.031 | −0.43 | −0.027 |
| SAFM | −1.65 | −0.045 | −0.50 | −0.019 | −0.58 | −0.033 |
| MCA | −1.40 | −0.034 | −0.35 | −0.007 | −0.39 | −0.023 |
| RGBD | −1.97 | −0.042 | −0.66 | −0.025 | −0.63 | −0.043 |
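These deltas are consistent with subtracting the full model's scores (the last row of the module-ablation table) from each single-module-removed configuration. A quick sanity check, with values transcribed from the tables above:

```python
# Scores transcribed from the module-ablation table, ordered as
# (motion PSNR, motion SSIM, defocus PSNR, defocus SSIM, mixed PSNR, mixed SSIM).
full = (27.09, 0.885, 23.97, 0.883, 28.30, 0.819)
ablated = {
    "GAKA": (26.44, 0.861, 23.52, 0.852, 27.87, 0.792),
    "SAFM": (25.44, 0.840, 23.47, 0.864, 27.72, 0.786),
    "MCA":  (25.69, 0.851, 23.62, 0.876, 27.91, 0.796),
    "RGBD": (25.12, 0.843, 23.31, 0.858, 27.67, 0.776),
}

for module, row in ablated.items():
    deltas = [round(a - f, 3) for a, f in zip(row, full)]
    print(module, deltas)
# GAKA [-0.65, -0.024, -0.45, -0.031, -0.43, -0.027]  -> matches the delta table
```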