Feature Scalar Field Grid-Guided Optical-Flow Image Matching for Multi-View Images of Asteroid
Abstract
1. Introduction
2. Methods
2.1. Problem Definition
- (1) Pixels undergo large position changes.
- (2) Pixels undergo large rotations, and individual pixels move by different amounts because they lie at different distances from the image rotation center.
- (3) Adjacent pixels undergo large scale changes because the distance and viewing angle of the camera center relative to the imaged area vary.
2.2. Improved Optical-Flow-Tracking Algorithm
2.3. Scalar Field Grid Construction and Interim Image Extraction
2.3.1. Scalar Field Grid Construction
- (1) Detect key-points in the image and extract SIFT feature descriptors for them.
- (2) Use a fast nearest-neighbor search algorithm to obtain the matching relationships between descriptors.
- (3) Combine the random sample consensus (RANSAC) [28] algorithm with the fundamental matrix to eliminate erroneous matching point pairs.
- (4) Remove overlapping points and points that lie too close to one another (a code sketch of steps (1)–(4) follows this list).
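As a rough illustration, the following Python sketch chains steps (1)–(4) with OpenCV. The ratio-test value, the RANSAC reprojection threshold, and the min_dist spacing used to discard near-duplicate points are our own assumptions, not settings reported in this paper.

```python
import cv2
import numpy as np

def reliable_matching_points(img1, img2, ratio=0.75, min_dist=5.0):
    """Steps (1)-(4): SIFT detection, fast NN matching, RANSAC filtering."""
    # (1) Detect key-points and extract SIFT descriptors.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # (2) Fast approximate nearest-neighbour search (FLANN kd-tree),
    # pruned with Lowe's ratio test.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # (3) RANSAC + fundamental matrix to reject erroneous pairs.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inliers.ravel() == 1
    pts1, pts2 = pts1[keep], pts2[keep]

    # (4) Drop points that overlap or crowd an already accepted point
    # (a simple O(n^2) scan; fine for a few thousand points).
    idx, accepted = [], []
    for i, p in enumerate(pts1):
        if all(np.linalg.norm(p - q) >= min_dist for q in accepted):
            idx.append(i)
            accepted.append(p)
    return pts1[idx], pts2[idx], F
```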
- (1) Based on the x and y coordinates of the homonymous points in the two images, calculate the coordinate differences Δx and Δy for each point pair.
- (2) Based on xi, yi, Δxi (i = 1, 2, 3, …, n), use the Ordinary Kriging interpolation algorithm to generate a raster file with the same size and resolution as the initial image; this raster is the x-direction scalar field grid.
- (3) Based on xi, yi, Δyi (i = 1, 2, 3, …, n), use the Ordinary Kriging interpolation algorithm in the same way to generate the y-direction scalar field grid (see the sketch after this list).
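A minimal sketch of the grid construction, assuming the PyKrige implementation of Ordinary Kriging and a spherical variogram (the variogram model is our choice; the paper does not name one here):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

def scalar_field_grids(pts_cur, pts_next, width, height):
    """Interpolate per-pixel Δx and Δy grids from sparse homonymous points.

    pts_cur, pts_next: (n, 2) arrays of matched (x, y) coordinates.
    Returns x- and y-direction scalar field grids at image resolution.
    """
    x, y = pts_cur[:, 0].astype(float), pts_cur[:, 1].astype(float)
    dx = (pts_next[:, 0] - pts_cur[:, 0]).astype(float)  # Δx_i per point
    dy = (pts_next[:, 1] - pts_cur[:, 1]).astype(float)  # Δy_i per point

    # Grid with the same size and resolution as the initial image.
    gridx = np.arange(width, dtype=float)
    gridy = np.arange(height, dtype=float)

    grid_dx, _ = OrdinaryKriging(x, y, dx, variogram_model="spherical") \
        .execute("grid", gridx, gridy)
    grid_dy, _ = OrdinaryKriging(x, y, dy, variogram_model="spherical") \
        .execute("grid", gridx, gridy)
    return np.asarray(grid_dx), np.asarray(grid_dy)
```

Kriging every pixel of a large frame is costly; interpolating a coarser lattice and upsampling it to image resolution yields a practically identical grid at a fraction of the cost.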
2.3.2. Interim Image Extraction
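Assuming the interim image is obtained by resampling the next image through the two scalar field grids, so that only small residual motion is left for the tracker to handle, a sketch reusing the grids from the previous example might look like this:

```python
import cv2
import numpy as np

def extract_interim_image(next_img, grid_dx, grid_dy):
    """Warp the next image toward the current image's geometry.

    Each interim pixel (x, y) samples the next image at
    (x + Δx(x, y), y + Δy(x, y)), leaving only residual motion.
    """
    h, w = next_img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs + grid_dx.astype(np.float32)
    map_y = ys + grid_dy.astype(np.float32)
    return cv2.remap(next_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```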
2.4. Optical-Flow Tracking
- (1) Extract feature corner points. When observing the motion of an object, the local motion information within the observation window is limited by the window's size. As a result, motion along an image gradient within the window, such as an object edge, cannot be tracked unambiguously; this is known as the aperture problem [29]. Optical-flow tracking is also unreliable in areas of uniform texture. To ensure accurate tracking, feature points must therefore be extracted from the image before optical-flow tracking. In this paper, the Features from Accelerated Segment Test (FAST) algorithm [30] is used to extract feature corner points from the current image, forming Coordinate Point Set 1.
- (2) Based on Coordinate Point Set 1 obtained in the first step, estimate the coordinates of each point on the interim image one by one. This forward tracking yields Coordinate Point Set 2, the points that are successfully tracked in the interim image. After this step, update Coordinate Point Set 1 accordingly and record the correspondence between homonymous points in Coordinate Point Sets 1 and 2.
- (3) Based on Coordinate Point Set 2 obtained in the second step, estimate the coordinates of each point back on the current image one by one. This backward tracking yields Coordinate Point Set 3. Record the correspondence between homonymous points in Coordinate Point Sets 2 and 3.
- (4) Calculate the Euclidean distance between homonymous points in Coordinate Point Sets 1 and 3, as illustrated in Figure 4. If the distance is below the specified threshold, the optical-flow tracking result is deemed accurate and the matching point pair is retained; otherwise, the pair is considered erroneous and removed. In this paper, the distance threshold is set to 1 pixel. After this step, Coordinate Point Sets 1 and 2 retain only the verified matching point pairs.
- (5) Apply the vector field consensus algorithm to the matching point pairs verified by forward and backward tracking in the fourth step to eliminate any remaining erroneous pairs, then update Coordinate Point Sets 1 and 2.
- (6) Based on Coordinate Point Set 2, calculate the coordinates of each point on the next image one by one using Equations (8) and (9). This step restores the tracking coordinates, and the results are denoted as Coordinate Point Set 4.
- (7) Using the reliable matching point pairs obtained in Section 2.3.1, calculate the fundamental matrix between the current image and the next image. With this fundamental matrix, compute the epipolar line in the next image for every point in Coordinate Point Set 1 and record the point-to-line correspondence. Then calculate the Euclidean distance from each point in Coordinate Point Set 4 to its corresponding epipolar line. If the distance exceeds the threshold, the point is considered an erroneous match, as illustrated in Figure 5. In this paper, this threshold is also set to 1 pixel. (A sketch of steps (1)–(7) follows this list.)
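To make the seven steps concrete, the sketch below strings them together, with OpenCV's FAST detector and pyramidal Lucas-Kanade tracker standing in for the paper's improved optical-flow tracker. The vector field consensus step (5) is only marked as a comment, and because Equations (8) and (9) are not reproduced here, step (6) is stood in for by a displacement-grid lookup; those stand-ins and all parameter defaults are our assumptions.

```python
import cv2
import numpy as np

def track_points(cur_img, interim_img, grid_dx, grid_dy, F,
                 fb_thresh=1.0, epi_thresh=1.0):
    """Steps (1)-(7): FAST corners, forward-backward pyramidal LK,
    grid-based coordinate restoration, and an epipolar-distance check."""
    # (1) FAST corners on the current image -> Coordinate Point Set 1.
    fast = cv2.FastFeatureDetector_create()
    p1 = cv2.KeyPoint_convert(fast.detect(cur_img)) \
           .reshape(-1, 1, 2).astype(np.float32)

    # (2) Forward tracking onto the interim image -> Set 2.
    p2, st_f, _ = cv2.calcOpticalFlowPyrLK(cur_img, interim_img, p1, None)
    # (3) Backward tracking onto the current image -> Set 3.
    p3, st_b, _ = cv2.calcOpticalFlowPyrLK(interim_img, cur_img, p2, None)

    # (4) Forward-backward consistency: keep pairs whose round trip
    # lands within fb_thresh pixels of the starting corner.
    ok = (st_f.ravel() == 1) & (st_b.ravel() == 1) & \
         (np.linalg.norm((p1 - p3).reshape(-1, 2), axis=1) < fb_thresh)
    p1, p2 = p1[ok].reshape(-1, 2), p2[ok].reshape(-1, 2)

    # (5) Vector field consensus would further prune outliers here (omitted).

    # (6) Restore coordinates on the next image; a displacement-grid
    # lookup stands in for Equations (8) and (9) (an assumption).
    xi = np.clip(p2[:, 0].round().astype(int), 0, grid_dx.shape[1] - 1)
    yi = np.clip(p2[:, 1].round().astype(int), 0, grid_dx.shape[0] - 1)
    p4 = p2 + np.stack([grid_dx[yi, xi], grid_dy[yi, xi]], axis=1)

    # (7) Epipolar check: distance from each restored point to the
    # epipolar line of its Set-1 partner must stay below epi_thresh.
    lines = cv2.computeCorrespondEpilines(p1.reshape(-1, 1, 2), 1, F) \
              .reshape(-1, 3)
    d = np.abs(np.sum(lines[:, :2] * p4, axis=1) + lines[:, 2]) / \
        np.linalg.norm(lines[:, :2], axis=1)
    keep = d < epi_thresh
    return p1[keep], p4[keep]
```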
3. Experiment and Analysis
3.1. Experimental Data
3.2. Comparative Experiment
3.2.1. Comparison with Feature-Matching Algorithms
Here, "≤1 px" and ">1 px" count the matched point pairs with matching error of at most and more than 1 pixel, respectively; MA is the matching accuracy (the ≤1 px share of all matches), and RMS is the root-mean-square matching error in pixels.

| Scene | Method | ≤1 px | >1 px | MA | RMS | Scene | Method | ≤1 px | >1 px | MA | RMS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Scene 1 | Brisk | 10,044 | 1222 | 89.2% | 0.39 | Scene 4 | Brisk | 7097 | 1480 | 82.7% | 0.43 |
| | SURF | 4453 | 481 | 90.3% | 0.35 | | SURF | 2295 | 453 | 83.5% | 0.42 |
| | Akaze | 3301 | 153 | 95.6% | 0.30 | | Akaze | 2666 | 329 | 89.0% | 0.37 |
| | SIFT | 8007 | 158 | 98.1% | 0.28 | | SIFT | 4499 | 204 | 95.7% | 0.30 |
| | Ours | 48,980 | 1202 | 97.6% | 0.26 | | Ours | 31,683 | 660 | 98.0% | 0.25 |
| Scene 2 | Brisk | 11,921 | 1371 | 89.7% | 0.39 | Scene 5 | Brisk | 2325 | 200 | 92.1% | 0.40 |
| | SURF | 3772 | 456 | 89.2% | 0.37 | | SURF | 2423 | 634 | 79.3% | 0.48 |
| | Akaze | 4022 | 159 | 96.2% | 0.29 | | Akaze | 372 | 46 | 89.0% | 0.36 |
| | SIFT | 5927 | 166 | 97.3% | 0.29 | | SIFT | 1894 | 72 | 96.3% | 0.33 |
| | Ours | 46,733 | 177 | 99.6% | 0.22 | | Ours | 62,799 | 773 | 98.8% | 0.33 |
| Scene 3 | Brisk | 20,041 | 2274 | 89.8% | 0.39 | Scene 6 | Brisk | 2014 | 272 | 88.1% | 0.41 |
| | SURF | 6052 | 481 | 92.6% | 0.33 | | SURF | 2487 | 337 | 88.1% | 0.42 |
| | Akaze | 6720 | 150 | 97.8% | 0.25 | | Akaze | 1275 | 99 | 92.8% | 0.33 |
| | SIFT | 11,041 | 104 | 99.1% | 0.24 | | SIFT | 4028 | 57 | 98.6% | 0.25 |
| | Ours | 62,633 | 131 | 99.8% | 0.18 | | Ours | 57,244 | 307 | 99.5% | 0.25 |
3.2.2. Comparison with Optical-Flow Algorithms
3.3. Accuracy Verification
3.3.1. Analysis of Reliable Matching Points on Accuracy
3.3.2. Quantitative Analysis of Accuracy
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Arnold, G.E.; Helbert, J.; Kappel, D. Studying the early solar system—Exploration of minor bodies with spaceborne VIS/IR spectrometers: A review and prospects. In Proceedings of the Infrared Remote Sensing and Instrumentation XXVI Conference, part of the SPIE Optics + Photonics Meeting, San Diego, CA, USA, 20–22 August 2018.
- Matheny, J.G. Reducing the risk of human extinction. Risk Anal. 2007, 27, 1335–1344.
- Zhang, R.; Huang, J.; He, R.; Gen, Y.; Meng, L. The Development Overview of Asteroid Exploration. J. Deep Space Explor. 2019, 6, 417–423, 455.
- Xu, Q.; Wang, D.; Xing, S.; Lan, C. Mapping and Characterization Techniques of Asteroid Topography. J. Deep Space Explor. 2016, 3, 356–362.
- Liu, W.C.; Wu, B. An integrated photogrammetric and photoclinometric approach for illumination-invariant pixel-resolution 3D mapping of the lunar surface. ISPRS J. Photogramm. Remote Sens. 2020, 159, 153–168.
- Peng, M.; Di, K.; Liu, Z. Adaptive Markov random field model for dense matching of deep space stereo images. J. Remote Sens. 2013, 8, 1483–1486.
- Wu, B.; Zhang, Y.; Zhu, Q. A Triangulation-based Hierarchical Image Matching Method for Wide-Baseline Images. Photogramm. Eng. Remote Sens. 2011, 77, 695–708.
- Lohse, V.; Heipke, C.; Kirk, R.L. Derivation of planetary topography using multi-image shape-from-shading. Planet. Space Sci. 2006, 54, 661–674.
- Lowe, D.G. Distinctive image features from scale-invariant key-points. Int. J. Comput. Vis. 2004, 60, 91–110.
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
- Cui, P.; Shao, W.; Cui, H. 3-D Small Body Model Reconstruction and Spacecraft Motion Estimation during Fly-Around. J. Astronaut. 2010, 31, 1381–1389.
- Ke, Y.; Sukthankar, R. PCA-SIFT: A more distinctive representation for local image descriptors. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), Washington, DC, USA, 27 June 2004; p. II.
- Lan, C.; Geng, X.; Xu, Q.; Cui, P. 3D Shape Reconstruction for Small Celestial Body Based on Sequence Images. J. Deep Space Explor. 2014, 1, 140–145.
- Liu, X.; Wu, Y.; Wu, F.; Gu, Y.; Zheng, R.; Liang, X. 3D Asteroid Terrain Model Reconstruction Based on Geometric Method. Aerosp. Control Appl. 2020, 46, 51–59.
- Baker, S.; Matthews, I. Lucas-Kanade 20 years on: A unifying framework. Int. J. Comput. Vis. 2004, 56, 221–255.
- Rodriguez, M.P.; Nygren, A. Motion Estimation in Cardiac Fluorescence Imaging with Scale-Space Landmarks and Optical Flow: A Comparative Study. IEEE Trans. Biomed. Eng. 2015, 62, 774–782.
- Granillo, O.D.M.; Zamudio, Z. Real-time Drone (UAV) trajectory generation and tracking by Optical Flow. In Proceedings of the IEEE International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE), Cuernavaca, Mexico, 24–27 November 2018; pp. 38–43.
- Bakir, N.; Pavlov, V.; Zavjalov, S.; Volvenko, S.; Gumenyuk, A.; Rethmeier, M. Novel metrology to determine the critical strain conditions required for solidification cracking during laser welding of thin sheets. In Proceedings of the 9th International Conference on Beam Technologies and Laser Application (BTLA), Saint Petersburg, Russia, 17–19 September 2018.
- Debei, S.; Aboudan, A.; Colombatti, G.; Pertile, M. Lutetia surface reconstruction and uncertainty analysis. Planet. Space Sci. 2012, 71, 64–72.
- Chen, W.; Sun, T.; Chen, Z.; Ma, G.; Qin, Q. Optical Flow Based Super-resolution of Chang'E-1 CCD Multi-view Images. Geomat. Inf. Sci. Wuhan Univ. 2014, 39, 1103–1108.
- Weinzaepfel, P.; Revaud, J.; Harchaoui, Z.; Schmid, C. DeepFlow: Large displacement optical flow with deep matching. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 1385–1392.
- Brox, T.; Malik, J. Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 500–513.
- Liu, B.; Chen, X.; Guo, L. A Feature Points Matching Algorithm Based on Harris-Sift Guides the LK Optical Flow. J. Geomat. Sci. Technol. 2014, 31, 162–166.
- Wang, J.; Wang, J.; Zhang, J. Non-rigid Medical Image Registration Based on Improved Optical Flow Method and Scale-invariant Feature Transform. J. Electron. Inf. Technol. 2013, 35, 1222–1228.
- Wang, G.; Tian, J.; Zhu, W.; Fang, D. Non-Rigid and Large Displacement Optical Flow Based on Descriptor Matching. Trans. Beijing Inst. Technol. 2020, 40, 421–426, 440.
- Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI '81), Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679.
- Szeliski, R. Computer Vision: Algorithms and Applications; Springer Nature: Cham, Switzerland, 2010.
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
- Xue, T.; Mobahi, H.; Durand, F.; Freeman, W.T. The aperture problem for refractive motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3386–3394.
- Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In Proceedings of the 9th European Conference on Computer Vision (ECCV 2006), Graz, Austria, 7–13 May 2006; pp. 430–443.
- Rizk, B.; Drouet, D.C.; Golish, D.; DellaGiustina, D.N.; Lauretta, D.S. Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx): OSIRIS-REx Camera Suite (OCAMS) Bundle 11.0, urn:nasa:pds:orex.ocams::11.0. NASA Planet. Data Syst. 2021.
- Nathues, A.; Sierks, H.; Gutierrez-Marques, P.; Ripken, J.; Hall, I.; Buettner, I.; Schaefer, M.; Chistensen, U. DAWN FC2 CALIBRATED CERES IMAGES V1.0, DAWN-A-FC2-3-RDR-CERES-IMAGES-V1.0; NASA Planetary Data System, 2016. Available online: https://pds.nasa.gov/ds-view/pds/viewDataset.jsp?dsid=DAWN-A-FC2-3-RDR-CERES-IMAGES-V1.0 (accessed on 14 December 2023).
- Nathues, A.; Sierks, H.; Gutierrez-Marques, P.; Schroeder, S.; Maue, T.; Buettner, I.; Richards, M.; Chistensen, U.; Keller, U. DAWN FC2 CALIBRATED VESTA IMAGES V1.0, DAWN-A-FC2-3-RDR-VESTA-IMAGES-V1.0; NASA Planetary Data System, 2011. Available online: https://pds.nasa.gov/ds-view/pds/viewDataset.jsp?dsid=DAWN-A-FC2-3-RDR-VESTA-IMAGES-V1.0 (accessed on 14 December 2023).
- Alcantarilla, P.F.; Nuevo, J.; Bartoli, A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. In Proceedings of the British Machine Vision Conference (BMVC 2013), Bristol, UK, 2013.
- Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary robust invariant scalable key-points. In Proceedings of the 2011 International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2548–2555.
- Pock, T.; Urschler, M.; Zach, C.; Beichel, R.; Bischof, H. A duality based algorithm for TV-L1-Optical-Flow image registration. In Proceedings of the 10th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2007), Brisbane, Australia, 29 October–2 November 2007; p. 511.
- Kroeger, T.; Timofte, R.; Dai, D.X.; Van Gool, L. Fast Optical Flow Using Dense Inverse Search. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 471–488.
- Bouguet, J.Y. Pyramidal Implementation of the Affine Lucas Kanade Feature Tracker: Description of the Algorithm; Intel Corporation, 2001.
- Hu, Y.L.; Song, R.; Li, Y.S. Efficient Coarse-to-Fine PatchMatch for Large Displacement Optical Flow. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5704–5712.
| Scene | Scene 1 | Scene 2 | Scene 3 | Scene 4 | Scene 5 | Scene 6 |
|---|---|---|---|---|---|---|
| Current image | (image) | (image) | (image) | (image) | (image) | (image) |
| Next image | (image) | (image) | (image) | (image) | (image) | (image) |
| Asteroid | Bennu | Bennu | Bennu | Bennu | Vesta | Ceres |
| Characteristics | Large terrain changes | Large displacement | Rotation + large displacement | Rotation + large displacement | Uneven displacement | Uneven displacement |
| Scene | Key-Points | Method | ≤1 px | >1 px | MA | RMS | Scene | Key-Points | Method | ≤1 px | >1 px | MA | RMS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Scene 1 | 8007 | TV-L1 | 0 | 8007 | | | Scene 4 | 4499 | TV-L1 | 0 | 4499 | | |
| | | DeepFlow | 0 | 8007 | | | | | DeepFlow | 0 | 4499 | | |
| | | DIS | 3191 | 4816 | 39.9% | 0.54 | | | DIS | 0 | 4499 | | |
| | | LDOP | 1271 | 6736 | 15.9% | 0.57 | | | LDOP | 0 | 4499 | | |
| | | PyrLK | 7276 | 481 | 93.8% | 0.24 | | | PyrLK | 1956 | 2433 | 44.6% | 0.25 |
| | | CPM | 7799 | 208 | 97.4% | 0.42 | | | CPM | 4359 | 140 | 96.9% | 0.42 |
| | | Ours | 7519 | 57 | 99.2% | 0.25 | | | Ours | 4249 | 54 | 98.7% | 0.23 |
| Scene 2 | 5927 | TV-L1 | 0 | 5927 | | | Scene 5 | 1894 | TV-L1 | 2 | 1892 | 0.1% | 0.48 |
| | | DeepFlow | 0 | 5927 | | | | | DeepFlow | 5 | 1889 | 0.3% | 0.52 |
| | | DIS | 132 | 5795 | 2.2% | 0.58 | | | DIS | 30 | 1864 | 1.6% | 0.58 |
| | | LDOP | 0 | 5927 | | | | | LDOP | 32 | 1862 | 1.7% | 0.55 |
| | | PyrLK | 1426 | 2743 | 34.2% | 0.21 | | | PyrLK | 1265 | 287 | 81.5% | 0.40 |
| | | CPM | 5797 | 130 | 97.8% | 0.41 | | | CPM | 1808 | 86 | 95.5% | 0.42 |
| | | Ours | 5678 | 13 | 99.8% | 0.22 | | | Ours | 1720 | 27 | 98.5% | 0.28 |
| Scene 3 | 11,041 | TV-L1 | 0 | 11,041 | | | Scene 6 | 4028 | TV-L1 | 16 | 4012 | 0.4% | 0.53 |
| | | DeepFlow | 0 | 11,041 | | | | | DeepFlow | 18 | 4010 | 0.4% | 0.53 |
| | | DIS | 6844 | 4197 | 62.0% | 0.45 | | | DIS | 17 | 4011 | 0.4% | 0.48 |
| | | LDOP | 6307 | 4734 | 57.1% | 0.47 | | | LDOP | 17 | 4011 | 0.4% | 0.52 |
| | | PyrLK | 10,723 | 22 | 99.8% | 0.17 | | | PyrLK | 9 | 3135 | 0.3% | 0.53 |
| | | CPM | 10,878 | 163 | 98.5% | 0.41 | | | CPM | 3725 | 303 | 92.5% | 0.45 |
| | | Ours | 10,767 | 13 | 99.9% | 0.17 | | | Ours | 3870 | 9 | 99.8% | 0.26 |
| Scene | Interval | ≤1 px | >1 px | MA | RMS |
|---|---|---|---|---|---|
| Scene 1 | 50 | 48,678 | 1064 | 97.9% | 0.26 |
| | 100 | 47,426 | 1129 | 97.7% | 0.26 |
| | 200 | 43,933 | 1171 | 97.4% | 0.25 |
| | 300 | 39,574 | 1360 | 96.7% | 0.25 |
| | 400 | 26,060 | 1790 | 93.6% | 0.25 |
| Scene 5 | 50 | 59,793 | 1328 | 97.8% | 0.32 |
| | 100 | 59,919 | 1341 | 97.8% | 0.32 |
| | 200 | 58,878 | 1646 | 97.3% | 0.32 |
| | 300 | 56,676 | 1634 | 97.2% | 0.33 |
| | 400 | 50,724 | 1639 | 96.9% | 0.33 |
| Scene 6 | 50 | 56,083 | 415 | 99.3% | 0.25 |
| | 100 | 55,524 | 445 | 99.2% | 0.26 |
| | 200 | 52,028 | 641 | 98.8% | 0.28 |
| | 300 | 50,942 | 836 | 98.4% | 0.31 |
| | 400 | 41,341 | 1258 | 97.0% | 0.32 |
Scene | Scene 1 | Scene 2 | Scene 3 | Scene 4 | Scene 5 | Scene 6 | ALL |
---|---|---|---|---|---|---|---|
MA | 99.06% | 99.56% | 100% | 98.44% | 99.99% | 100% | 99.51% |
RMSE | 0.23 | 0.27 | 0.18 | 0.37 | 0.22 | 0.26 | 0.25 |