Atmospheric Scattering Model and Non-Uniform Illumination Compensation for Low-Light Remote Sensing Image Enhancement
Abstract
1. Introduction
- Reflectance Estimation with ASM: We employ the ASM for reflectance estimation, enhancing the stability and accuracy of the initial reflectance map in non-uniform low-light environments through refined initialization and compensation of atmospheric light and transmittance.
- Weighted Reflectance Reconstruction: We introduce a weighted reconstruction approach to replace global gamma correction, which adaptively enhances reflectance values in different regions while preserving the overall image depth.
- Robustness and Noise Suppression: The Non-Uniform Illumination Compensation model introduces a noise regularization term and alternating optimization to suppress noise while robustly preserving fine details, ensuring improved image naturalness and quality in low-light areas.
- Comprehensive Experimental Validation: Extensive testing across multiple datasets demonstrates the model’s robust performance and generalization, validating its effectiveness in low-light image enhancement under diverse conditions.
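The second contribution above replaces a single global gamma with a per-pixel exponent driven by local illumination. As an illustration only — the `g_dark`/`g_bright` exponents and the linear blend below are hypothetical stand-ins, not the paper's weighting scheme — a minimal NumPy sketch of the idea:

```python
import numpy as np

def global_gamma(reflectance, gamma=0.6):
    """Global gamma correction: one exponent applied to every pixel."""
    return np.clip(reflectance, 0.0, 1.0) ** gamma

def weighted_reconstruction(reflectance, illumination, g_dark=0.5, g_bright=1.0):
    """Adaptive alternative: darker regions (low illumination) receive a
    smaller exponent and hence stronger brightening, while well-lit regions
    are left nearly untouched.  The linear blend of exponents is a
    hypothetical choice for illustration only."""
    r = np.clip(reflectance, 0.0, 1.0)
    w = np.clip(illumination, 0.0, 1.0)            # 0 = dark, 1 = bright
    gamma_map = g_dark * (1.0 - w) + g_bright * w  # per-pixel exponent
    return r ** gamma_map
```

Compared with `global_gamma`, the adaptive version brightens a dark pixel more aggressively while leaving a bright pixel's value essentially unchanged, which is the behavior the bullet describes as "preserving the overall image depth".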
2. Related Work
2.1. Conventional Methods
2.2. Atmospheric Scattering Model
2.3. Deep Learning-Based Methods
3. Proposed Method
3.1. A Novel Robust Non-Uniform Illumination Compensation-Based Model
3.2. An Alternative Updating Scheme
3.3. Initialization for and Based on ASM
Algorithm 1: Image enhancement
Input: input image, initialized parameters.
1. Initialize via (29);
2. Initialize via (32);
3. Initialize via (21);
4. Initialize via (22);
5. For to maxiter do
6.  Update via (15);
7.  Update via (17);
8.  Update via (18);
9.  Update via (20);
10. End for
Output: enhanced image.
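The control flow of Algorithm 1 — four initializations followed by an alternating loop over four variables — can be sketched as follows. The update rules in the paper's Eqs. (15)–(32) are not reproduced in this chunk, so every update below is a simple Retinex-style placeholder; only the alternating structure matches the algorithm.

```python
import numpy as np

def enhance(image, max_iter=10):
    """Skeleton of the alternating scheme in Algorithm 1.  L, R, N, W are
    illumination, reflectance, noise, and the illumination weighting map;
    the closed-form updates here are placeholders, not the paper's."""
    # Steps 1-4: initializations (stand-ins for the ASM-based init).
    L = np.clip(image, 1e-3, 1.0)     # illumination
    R = image / L                     # reflectance (Retinex: I = R * L)
    N = np.zeros_like(image)          # noise term
    W = np.ones_like(image)           # illumination weighting map
    # Steps 5-10: alternate over the variables, each update holding the
    # others fixed, for a fixed number of iterations.
    for _ in range(max_iter):
        R = np.clip((image - N) / L, 0.0, 1.0)
        L = np.clip((image - N) / np.maximum(R, 1e-3), 1e-3, 1.0)
        N = image - R * L
        W = 1.0 - L
    # Output: the recovered reflectance serves as the enhanced image here.
    return np.clip(R, 0.0, 1.0)
```

The practical point of the alternating structure is that each subproblem is simple (often closed-form) when the other three variables are frozen, so the joint non-convex objective is minimized block by block.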
4. Experiments
4.1. Experimental Settings
4.2. Ablation Study
4.3. Comparison with State-of-the-Art Methods
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhang, J.; Lei, J.; Xie, W.; Li, Y.; Yang, G.; Jia, X. Guided hybrid quantization for object detection in remote sensing imagery via one-to-one self-teaching. IEEE Trans. Geosci. Remote Sens. 2023, 61.
- Zhang, Y.-F.; Zheng, J.; Li, L.; Liu, N.; Jia, W.; Fan, X.; Xu, C.; He, X. Rethinking feature aggregation for deep RGB-D salient object detection. Neurocomputing 2021, 423, 463–473.
- Chen, F.; Chen, X.; Van de Voorde, T.; Roberts, D.; Jiang, H.; Xu, W. Open water detection in urban environments using high spatial resolution remote sensing imagery. Remote Sens. Environ. 2020, 242, 111706.
- Brum-Bastos, V.; Long, J.; Church, K.; Robson, G.; de Paula, R.R.; Demšar, U. Multi-source data fusion of optical satellite imagery to characterize habitat selection from wildlife tracking data. Ecol. Inform. 2020, 60, 101149.
- Zhang, Q.; Levin, N.; Chalkias, C.; Letu, H.; Liu, D. Nighttime light remote sensing: Monitoring human societies from outer space. In Remote Sensing Handbook, Volume VI; CRC Press: Boca Raton, FL, USA, 2024; pp. 381–421.
- Lee, C.; Lee, C.; Kim, C.-S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384.
- Zhang, L.; Shen, P.; Peng, X.; Zhu, G.; Song, J.; Wei, W.; Song, H. Simultaneous enhancement and noise reduction of a single low-light image. IET Image Process. 2016, 10, 840–847.
- Cai, R.; Chen, Z. Brain-like retinex: A biologically plausible retinex algorithm for low light image enhancement. Pattern Recognit. 2023, 136, 109195.
- Wang, R.; Zhang, Q.; Fu, C.-W.; Shen, X.; Zheng, W.-S.; Jia, J. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6849–6857.
- Wang, Y.-N.; Jiang, Z.; Liu, C.; Li, K.; Men, A.; Wang, H.; Chen, X. Shedding light on images: Multi-level image brightness enhancement guided by arbitrary references. Pattern Recognit. 2022, 131, 108867.
- Fu, Z.; Yang, Y.; Tu, X.; Huang, Y.; Ding, X.; Ma, K.-K. Learning a simple low-light image enhancer from paired low-light instances. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22252–22261.
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646.
- Coltuc, D.; Bolon, P.; Chassery, J.-M. Exact histogram specification. IEEE Trans. Image Process. 2006, 15, 1143–1152.
- Ibrahim, H.; Kong, N.S.P. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 1752–1758.
- Jobson, D.J.; Rahman, Z.-U.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
- Ma, J.; Fan, X.; Ni, J.; Zhu, X.; Xiong, C. Multi-scale retinex with color restoration image enhancement based on Gaussian filtering and guided filtering. Int. J. Mod. Phys. B 2017, 31, 1744077.
- Wang, S.; Zheng, J.; Hu, H.-M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
- Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96.
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993.
- Ying, Z.; Li, G.; Gao, W. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv 2017, arXiv:1711.00591.
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.-P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
- Gu, Z.; Li, F.; Lv, X.-G. A detail preserving variational model for image Retinex. Appl. Math. Model. 2019, 68, 643–661.
- Jia, F.; Wong, H.S.; Wang, T.; Zeng, T. A reflectance re-weighted retinex model for non-uniform and low-light image enhancement. Pattern Recognit. 2023, 144, 109823.
- Wang, W.; Yan, D.; Wu, X.; He, W.; Chen, Z.; Yuan, X.; Li, L. Low-light image enhancement based on virtual exposure. Signal Process. Image Commun. 2023, 118, 117016.
- Yu, S.-Y.; Zhu, H. Low-illumination image enhancement algorithm based on a physical lighting model. IEEE Trans. Circuits Syst. Video Technol. 2017, 29, 28–37.
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
- Zhang, X.; Shen, P.; Luo, L.; Zhang, L.; Song, J. Enhancement and noise reduction of very low light level images. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 2034–2037.
- Jiang, X.; Yao, H.; Zhang, S.; Lu, X.; Zeng, W. Night video enhancement using improved dark channel prior. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 553–557.
- Wei, X.; Lin, X.; Li, Y. DA-DRN: A degradation-aware deep Retinex network for low-light image enhancement. Digit. Signal Process. 2024, 144, 104256.
- Wang, M.; Li, J.; Zhang, C. Low-light image enhancement by deep learning network for improved illumination map. Comput. Vis. Image Underst. 2023, 232, 103681.
- Al Sobbahi, R.; Tekli, J. Comparing deep learning models for low-light natural scene image enhancement and their impact on object detection and classification: Overview, empirical evaluation, and challenges. Signal Process. Image Commun. 2022, 109, 116848.
- Tang, H.; Zhu, H.; Fei, L.; Wang, T.; Cao, Y.; Xie, C. Low-illumination image enhancement based on deep learning techniques: A brief review. Photonics 2023, 10, 198.
- Wang, L.-W.; Liu, Z.-S.; Siu, W.-C.; Lun, D.P. Lightening network for low-light image enhancement. IEEE Trans. Image Process. 2020, 29, 7984–7996.
- Xu, K.; Yang, X.; Yin, B.; Lau, R.W. Learning to restore low-light images via decomposition-and-enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2281–2290.
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789.
- Tang, G.; Ni, J.; Chen, Y.; Cao, W.; Yang, S.X. An improved CycleGAN-based model for low-light image enhancement. IEEE Sens. J. 2023, 24, 21879–21892.
- Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
- Giusti, E.; Williams, G.H. Minimal Surfaces and Functions of Bounded Variation; Springer: Berlin/Heidelberg, Germany, 1984; Volume 80.
- Park, S.; Yu, S.; Moon, B.; Ko, S.; Paik, J. Low-light image enhancement using variational optimization-based retinex model. IEEE Trans. Consum. Electron. 2017, 63, 178–184.
- Ren, X.; Yang, W.; Cheng, W.-H.; Liu, J. LR3M: Robust low-light enhancement via low-rank regularized retinex model. IEEE Trans. Image Process. 2020, 29, 5862–5876.
- Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946.
- Han, J.; Zhang, S.; Fan, N.; Ye, Z. Local patchwise minimal and maximal values prior for single optical remote sensing image dehazing. Inf. Sci. 2022, 606, 173–193.
- Xing, L.; Qu, H.; Xu, S.; Tian, Y. CLEGAN: Toward low-light image enhancement for UAVs via self-similarity exploitation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14.
- Yao, Z.; Fan, G.; Fan, J.; Gan, M.; Chen, C.P. Spatial-frequency dual-domain feature fusion network for low-light remote sensing image enhancement. IEEE Trans. Geosci. Remote Sens. 2024, 62.
- Ma, K.; Zeng, K.; Wang, Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356.
- Loh, Y.P.; Chan, C.S. Getting to know low-light images with the exclusively dark dataset. Comput. Vis. Image Underst. 2019, 178, 30–42.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
- Wang, Y.-F.; Liu, H.-M.; Fu, Z.-W. Low-light image enhancement via the absorption light scattering model. IEEE Trans. Image Process. 2019, 28, 5679–5690.
- Xu, J.; Hou, Y.; Ren, D.; Liu, L.; Zhu, F.; Yu, M.; Wang, H.; Shao, L. STAR: A structure and texture aware retinex model. IEEE Trans. Image Process. 2020, 29, 5022–5037.
- Zhang, Q.; Nie, Y.; Zheng, W.S. Dual illumination estimation for robust exposure correction. Comput. Graph. Forum 2019, 38, 243–252.
- Liang, D.; Li, L.; Wei, M.; Yang, S.; Zhang, L.; Yang, W.; Du, Y.; Zhou, H. Semantically contrastive learning for low-light image enhancement. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; pp. 1555–1563.
- Zhang, F.; Li, Y.; You, S.; Fu, Y. Learning temporal consistency for low light video enhancement from single images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4967–4976.
- Cui, Z.; Li, K.; Gu, L.; Su, S.; Gao, P.; Jiang, Z.; Qiao, Y.; Harada, T. You only need 90k parameters to adapt light: A light weight transformer for image enhancement and exposure correction. arXiv 2022, arXiv:2205.14871.
- Liu, R.; Ma, L.; Ma, T.; Fan, X.; Luo, Z. Learning with nested scene modeling and cooperative architecture search for low-light vision. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5953–5969.
- Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; pp. 1398–1402.
- Lai, W.-S.; Huang, J.-B.; Hu, Z.; Ahuja, N.; Yang, M.-H. A comparative study for single image blind deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1701–1709.
- Han, Y.; Cai, Y.; Cao, Y.; Xu, X. A new image fusion performance metric based on visual information fidelity. Inf. Fusion 2013, 14, 127–135.
- Zhang, L.; Zhang, L.; Bovik, A.C. A feature-enriched completely blind image quality evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591.
- Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6.
- Yan, J.; Li, J.; Fu, X. No-reference quality assessment of contrast-distorted images using contrast enhancement. arXiv 2019, arXiv:1904.08879.
Method | PSNR | SSIM | VIF | LRSD |
---|---|---|---|---|
ALSM | 21.27 | 0.82303 | 0.20698 | 6.0224 |
Brain | 16.5766 | 0.85562 | 0.22094 | 10.614 |
DUAL | 16.6699 | 0.87681 | 0.19858 | 9.2933 |
Robust | 15.465 | 0.72866 | 0.21274 | 18.1203 |
Scl | 14.6052 | 0.77429 | 0.23229 | 14.849 |
LIE | 14.9849 | 0.8010 | 0.19854 | 6.7417 |
RUAS | 15.9718 | 0.7842 | 0.20195 | 10.0014 |
STA | 18.4145 | 0.73836 | 0.21797 | 18.3026 |
STAR | 16.8849 | 0.76211 | 0.21319 | 8.7696 |
ZERO | 18.7519 | 0.87647 | 0.20472 | 12.131 |
Ours | 22.1443 | 0.92009 | 0.23778 | 5.74 |
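The PSNR column in the table above follows the standard full-reference definition. A minimal implementation, assuming images scaled to [0, 1]:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    ref = np.asarray(reference, dtype=np.float64)
    est = np.asarray(estimate, dtype=np.float64)
    mse = np.mean((ref - est) ** 2)       # mean squared error
    if mse == 0:
        return float("inf")               # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better: each 10 dB gain corresponds to a tenfold drop in mean squared error, so the gap between 22.14 dB (Ours) and 21.27 dB (ALSM) reflects roughly a 20% lower MSE.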
Method | NIQE | PIQE | Bright | CEIQ |
---|---|---|---|---|
ALSM | 3.1888 | 31.6198 | 0.49279 | 3.5693 |
Brain | 2.5048 | 27.8303 | 0.36544 | 3.358 |
DUAL | 3.5307 | 27.9626 | 0.39528 | 3.4052 |
Robust | 3.7204 | 38.3927 | 0.40241 | 3.3238 |
LIE | 3.1356 | 27.8832 | 0.5917 | 3.5142 |
RUAS | 3.1172 | 35.4116 | 0.3582 | 3.2458 |
Scl | 3.217 | 28.1019 | 0.33698 | 3.3529 |
STA | 2.7025 | 27.8341 | 0.51926 | 3.3394 |
STAR | 2.7456 | 39.337 | 0.25584 | 2.8332 |
ZERO | 3.1866 | 27.2919 | 0.40401 | 3.453 |
Ours | 2.5403 | 27.3841 | 0.56353 | 3.5467 |
Method | NIQE | PIQE | Bright | CEIQ |
---|---|---|---|---|
ALSM | 3.263 | 31.8025 | 0.48968 | 3.2719 |
Brain | 3.1619 | 30.2193 | 0.36418 | 3.09 |
DUAL | 3.4855 | 28.4695 | 0.38247 | 3.1919 |
Robust | 4.0469 | 51.1791 | 0.38045 | 3.0203 |
IAT | 2.9487 | 35.3282 | 0.4617 | 3.1470 |
RUAS | 3.4795 | 36.1626 | 0.3983 | 2.6241 |
Scl | 2.6517 | 31.1319 | 0.31757 | 2.9648 |
STA | 3.5878 | 40.6811 | 0.4963 | 3.0705 |
STAR | 3.7209 | 50.5329 | 0.26959 | 2.7589 |
ZERO | 2.7131 | 30.1898 | 0.38913 | 3.0747 |
Ours | 2.6831 | 28.1988 | 0.54649 | 3.3419 |
Method | SSIM | PSNR | VIF | LRSD |
---|---|---|---|---|
ALSM | 0.7967 | 21.78 | 0.6657 | 10.1907 |
Brain | 0.8080 | 15.85 | 0.7287 | 3.7916 |
DUAL | 0.7492 | 16.85 | 0.6581 | 4.4561 |
Robust | 0.8217 | 15.68 | 0.8367 | 9.7995 |
IAT | 0.8672 | 25.93 | 0.8510 | 5.2405 |
RUAS | 0.7245 | 17.43 | 0.6768 | 6.0751 |
Scl | 0.6737 | 16.77 | 0.8580 | 8.4665 |
STA | 0.8020 | 22.41 | 0.8355 | 10.1857 |
LIE | 0.7906 | 17.96 | 0.8097 | 8.5303 |
ZERO | 0.7567 | 19.43 | 0.8148 | 5.7520 |
Ours | 0.8879 | 24.99 | 0.8380 | 2.9407 |
Method | 600 × 400 | 960 × 680 | 1024 × 1024 | 1500 × 1500 |
---|---|---|---|---|
DUAL | 6.02 | 19.11 | 41.21 | 91.74 |
Scl | 7.20 | 19.39 | 31.13 | 66.68 |
STA | 0.44 | 1.17 | 1.87 | 3.99 |
ZERO | 6.19 | 16.34 | 26.24 | 56.01 |
RUAS | 0.24 | 0.59 | 0.91 | 1.92 |
IAT | 1.58 | 4.36 | 7.31 | 15.79 |
LIE | 0.98 | 2.60 | 4.43 | 9.51 |
ALSM | 21.36 | 36.89 | 58.39 | 112.19 |
Brain | 10.36 | 23.68 | 36.58 | 75.54 |
Robust | 14.22 | 42.49 | 65.19 | 141.14 |
STAR | 11.15 | 29.65 | 45.61 | 98.65 |
Ours | 10.23 | 26.28 | 47.65 | 92.35 |
Key Parameter | Function
---|---
 | Coefficient of the fidelity term in the model; low sensitivity
 | Smoothing term coefficient for the illumination weighting map; low sensitivity
 | Smoothing term coefficient; relatively high sensitivity
 | Sparsity constraint term for noise suppression; moderate sensitivity
 | Affects the overall value of the transmission map, thereby influencing the contrast
 | Adjusts the degree of suppression and enhancement in over-exposed regions
 | Directly determines the degree of illumination compensation
Zhao, X.; Huang, L.; Li, M.; Han, C.; Nie, T. Atmospheric Scattering Model and Non-Uniform Illumination Compensation for Low-Light Remote Sensing Image Enhancement. Remote Sens. 2025, 17, 2069. https://doi.org/10.3390/rs17122069