Geometric- and Optimization-Based Registration Methods for Long-Wave Infrared Hyperspectral Images
Abstract
1. Introduction
- Considering the temperature variations in the scene from one acquisition to the other, the most appropriate and robust components among temperature, emissivity, and radiance features of hyperspectral pixels are investigated to convert the 3D hyperspectral images to 2D images.
- In order to solve the local misalignment problems in global and GPS-IMU-based registrations, two blockwise methods are developed. While the geometric-based solution searches for the best local homography from the keypoints in the neighborhood of each block, the alternative optimization-based solution takes the global transformation as the initial estimation for each block and iteratively improves the homography with respect to the similarity of the transformed and reference blocks.
- In order to account for the variations in the radiance data due to the temperature changes in the scene, the performances of different metrics in multimodal image registration, namely the geometric mean square error between the keypoints and their correspondences (geometric MSE), structural similarity index (SSIM) [37], and mutual information (MI) [38], are examined.
- The proposed method with local refinements is compared with manual and GPS-IMU-based registrations along with the state-of-the-art image registration methods by using the hyperspectral LWIR images captured on the same and different days.
2. Related Work
2.1. Hyperspectral Image Registration Methods
2.2. State-of-the-Art Image Registration Methods
3. Experimental Dataset
4. Proposed Registration Methods
- 3D–2D Conversion: The hyperspectral images with two spatial and one spectral dimension, namely HSI1 and HSI2, are transformed to 2D maps in this stage. Different conversions based on temperature and emissivity components in the LWIR range are proposed and compared.
- Blockwise Refinement: The global planar projective transformation is refined over blocks to align the detail edges, objects, and contrast in local regions by using the proposed geometry-based or optimization-based methods.
- Pixelwise Refinement: Blocking artifacts that can occur due to the individual mapping of neighbor blocks are further refined by investigating the best homography among different candidates for each pixel at this final stage.
- 2D Mosaicing and Outputs: Given the 2D images, one image is geometrically transformed to the coordinate system of the other (reference) image by using the estimated transforms for each pixel. The resulting mosaic image and the similarity metrics, namely mutual information (MI) and structural similarity index (SSIM), between the transformed and reference images are returned as outputs for visual inspection and performance evaluation.
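The two output metrics can be sketched in NumPy as below. This is a minimal illustration, not the paper's implementation: the MI estimate is histogram-based with an assumed bin count, and the SSIM is a single-window simplification of the sliding-window SSIM of [37].

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information (in bits) between two grayscale images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                      # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginal of a
    p_b = p_ab.sum(axis=0, keepdims=True)         # marginal of b
    nz = p_ab > 0                                 # avoid log(0)
    return float((p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])).sum())

def global_ssim(a, b, L=1.0):
    """Single-window SSIM over the whole image (no sliding window), for
    float images with dynamic range L."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))
```

An image compared with itself gives SSIM of 1 and maximal MI, while two unrelated images score lower on both.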
4.1. 3D–2D Conversions
4.1.1. Brightness Temperature Estimation for Hyperspectral Pixels
- The radiance values of the hyperspectral pixel are assembled into a spectral vector for each pixel of the hyperspectral image, L(x, y) = [L(x, y, λ₁), …, L(x, y, λ_N)]ᵀ (Equation (2)).
- Planck curves [47] for different temperatures are generated for the spectral range of the LWIR camera, from a minimum temperature, Tmin, to a maximum temperature, Tmax, with a temperature step size, ΔT (Equation (3)): B(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/(λkT)) − 1).
- The mean square error (MSE) between the generated Planck curve for each temperature and the radiance of the hyperspectral pixel is computed. The temperature giving the minimum MSE is assigned as the brightness temperature of the hyperspectral pixel (Equation (4)): T_B(x, y) = argmin_T (1/N) Σᵢ (L(x, y, λᵢ) − B(λᵢ, T))².
- The algorithm takes the Tmin, Tmax, and ΔT values along with the hyperspectral image as inputs. The 2D image of the estimated brightness temperatures for each pixel is returned as the output.
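The brightness-temperature search above can be sketched as follows; the spectral range, temperature bounds, and step size used here are assumed example values, not the paper's settings.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607e-34   # Planck constant [J s]
C = 2.99792e8     # speed of light [m/s]
KB = 1.38065e-23  # Boltzmann constant [J/K]

def planck_radiance(wavelengths_m, T):
    """Blackbody spectral radiance at temperature T [K] (Planck's law [47])."""
    lam = np.asarray(wavelengths_m)
    return (2 * H * C ** 2 / lam ** 5) / np.expm1(H * C / (lam * KB * T))

def brightness_temperature(pixel_radiance, wavelengths_m,
                           t_min=260.0, t_max=340.0, dt=0.1):
    """Return the temperature whose Planck curve has minimum MSE
    against the pixel's radiance spectrum."""
    temps = np.arange(t_min, t_max + dt, dt)
    curves = np.stack([planck_radiance(wavelengths_m, t) for t in temps])
    mse = ((curves - pixel_radiance) ** 2).mean(axis=1)  # MSE per candidate T
    return float(temps[int(np.argmin(mse))])
```

Applying `brightness_temperature` to every pixel spectrum yields the 2D brightness-temperature map used for registration.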
4.1.2. Average Spectral Energy and PCA Components of Radiance Spectra as 2D Maps
4.1.3. Average Spectral Energy and PCA Components of Emissivity Signal as 2D Maps
- The given hyperspectral image is first separated into its temperature and emissivity components by using a temperature–emissivity separation (TES) algorithm [48].
- The average emissivity energy for each pixel is then computed as the mean of the squared emissivity values over the spectral bands, yielding a 2D map.
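The average-energy and PCA conversions of Sections 4.1.2 and 4.1.3 can be sketched as below, under the assumption that "energy" denotes the mean squared spectral value per pixel and that PCA is taken over the band covariance of the cube:

```python
import numpy as np

def average_energy_map(cube):
    """Mean squared spectral value per pixel of an (H, W, B) cube."""
    return (cube.astype(np.float64) ** 2).mean(axis=2)

def pca_component_map(cube, component=0):
    """Project each pixel spectrum onto the `component`-th principal axis
    of the band covariance matrix, giving a 2D map."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                       # center the spectra
    cov = X.T @ X / (X.shape[0] - 1)          # B x B band covariance
    _, vecs = np.linalg.eigh(cov)             # eigenvalues ascending
    axis = vecs[:, ::-1][:, component]        # sort to descending order
    return (X @ axis).reshape(h, w)
```

The same two functions apply to either the radiance cube or the emissivity cube returned by the TES step.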
4.2. Global Pose Estimation Based on Keypoint Matching
4.3. Blockwise Refinement
4.3.1. Geometric-Based Local Refinement
- First, divide the reference image into nonoverlapping blocks. Denote the resulting blocks by their horizontal and vertical block indices, as illustrated in Figure 3.
- For each block, find the spatial (Euclidean) distances of all matched keypoints to the center of that block, and select a fixed number of matched keypoints closest to that center. As an example, the selected closest keypoints are illustrated with solid circles in Figure 3.
- For each 4-point combination of the selected points,
- Derive a homography matrix by using their corresponding matches [44].
- Form the transformed block by finding the corresponding position in the other image for each pixel position inside the block with the given homography matrix and by performing a bilinear interpolation.
- Compute the distance (with respect to a metric) between the transformed block and the reference block.
- Assign the homography matrix that gives the minimum distance among all the homography matrices, including also the inverse of the global homography, as the homography of the block.
- The ultimate transformed image is formed by concatenating the transformed image blocks obtained with the resulting homographies.
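The block warping and candidate selection in the steps above can be sketched as follows. This is a simplified NumPy illustration: the candidate list, block coordinates, and the use of plain MSE in place of the configurable distance metric are all assumptions, and `H_inv` maps reference-frame coordinates into the source image.

```python
import numpy as np

def warp_block(src_img, H_inv, y0, x0, bh, bw):
    """Resample a (bh, bw) block whose reference-frame top-left corner is
    (y0, x0): map each block pixel through H_inv into src_img and
    interpolate bilinearly."""
    ys, xs = np.mgrid[y0:y0 + bh, x0:x0 + bw]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(bh * bw)])
    mapped = H_inv @ pts
    mx, my = mapped[0] / mapped[2], mapped[1] / mapped[2]
    h, w = src_img.shape
    x0i = np.clip(np.floor(mx).astype(int), 0, w - 2)
    y0i = np.clip(np.floor(my).astype(int), 0, h - 2)
    fx = np.clip(mx - x0i, 0.0, 1.0)
    fy = np.clip(my - y0i, 0.0, 1.0)
    top = src_img[y0i, x0i] * (1 - fx) + src_img[y0i, x0i + 1] * fx
    bot = src_img[y0i + 1, x0i] * (1 - fx) + src_img[y0i + 1, x0i + 1] * fx
    return (top * (1 - fy) + bot * fy).reshape(bh, bw)

def best_block_homography(src_img, ref_block, y0, x0, candidates):
    """Pick the candidate homography whose warped block is closest (MSE)
    to the reference block; `candidates` would include the homographies
    from the 4-point combinations plus the inverse global homography."""
    scores = [((warp_block(src_img, Hc, y0, x0, *ref_block.shape)
                - ref_block) ** 2).mean() for Hc in candidates]
    return candidates[int(np.argmin(scores))]
```

When the source and reference agree in a block, the identity candidate wins over, say, a translated candidate.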
4.3.2. Optimization-Based Local Refinement
- First, divide the reference image into nonoverlapping blocks. Denote the resulting blocks by their horizontal and vertical block indices, respectively.
- For each block,
- Initialize the homography H with the global homography estimate.
- Form the transformed block by finding the corresponding position in the other image for each pixel position inside the block with the given homography matrix and by performing a bilinear interpolation.
- Compute the distance between the transformed block and the reference block as the cost function.
- Compute the gradient and Hessian of the cost function with respect to the homography parameters.
- Update H with a quasi-Newton step, H ← H − (∇²f)⁻¹∇f, where ∇f and ∇²f denote the gradient and Hessian of the cost function.
- If the number of iterations is smaller than the maximum number of iterations and the change in the cost function is greater than a threshold value, then accept the updated homography and go to step ii. Otherwise, finish the iteration.
- The ultimate homography for the block is the one obtained at the last iteration.
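The iteration structure above can be sketched generically over a parameter vector p (in the paper, the free entries of the block homography, scored by the block-similarity cost). The finite-difference step, iteration limit, threshold, and ridge regularization below are assumed values, not the paper's settings.

```python
import numpy as np

def quasi_newton(f, p0, max_iter=100, eps=1e-3, tol=1e-10):
    """Newton-style minimization of a generic cost f(p): estimate the
    gradient and Hessian by central finite differences, take the update
    p <- p - Hessian^{-1} gradient, and stop when the cost change is
    below a threshold or the iteration limit is reached."""
    p = np.asarray(p0, dtype=float)
    n = len(p)
    f_prev = f(p)
    for _ in range(max_iter):
        g = np.zeros(n)
        Hm = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = eps
            g[i] = (f(p + ei) - f(p - ei)) / (2 * eps)
            for j in range(n):
                ej = np.zeros(n); ej[j] = eps
                Hm[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                            - f(p - ei + ej) + f(p - ei - ej)) / (4 * eps ** 2)
        # Small ridge term guards against a singular Hessian estimate
        p = p - np.linalg.solve(Hm + 1e-9 * np.eye(n), g)
        f_new = f(p)
        if abs(f_prev - f_new) < tol:
            break
        f_prev = f_new
    return p
```

On a convex quadratic cost the update converges in a single step, which is the ideal case the block-similarity cost approximates near the global-homography initialization.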
4.4. Pixelwise Refinement
- Apply the homographies H1, H2, …, HM to the original image, resulting in a set of transformed images, L1(x, y), L2(x, y), …, LM(x, y).
- Compute the SSIM map between each transformed image and the reference image, R(x, y), resulting in the SSIM maps S1(x, y), S2(x, y), …, SM(x, y). Note that Si(x, y) indicates the similarity of the pixels R(x, y) and Li(x, y) with respect to SSIM.
- Apply an averaging filter to the resulting SSIM maps in order to also include the effect of neighboring pixels in the similarity computation.
- For each pixel (x, y),
- Find the highest score among S1(x, y), S2(x, y), …, SM(x, y).
- Assuming that Sk(x, y) is the highest score, assign the pixel value of the transformed image, Lk(x, y), as the value of the final registered image at (x, y).
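The per-pixel selection above can be sketched as follows, assuming the transformed images and SSIM maps are already computed; the 3×3 averaging window is an assumed parameter.

```python
import numpy as np

def box_filter(img, k=3):
    """Simple k x k averaging filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def pixelwise_fusion(transformed, score_maps, k=3):
    """For each pixel, take the value from the transformed image whose
    smoothed similarity score is highest."""
    smoothed = np.stack([box_filter(s, k) for s in score_maps])  # (M, H, W)
    best = np.argmax(smoothed, axis=0)                           # winner index map
    stacked = np.stack(transformed)
    h, w = best.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return stacked[best, yy, xx]
```

In a toy case where one candidate scores best on the left half and another on the right half, the fused image takes its pixels from the corresponding winners (up to the smoothing boundary).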
4.5. Outputs: 2D Mosaic and Objective Similarity Metrics
5. Results
- First, the performances of the proposed 3D–2D conversions are discussed and the best conversions for the global pose estimation are determined in Section 5.1.
- This is followed by the description of controlled experiments conducted to select the design parameters for the proposed blockwise local refinement in Section 5.2.
- The improvements of the pixelwise refinement in comparison with the blockwise refinement are then discussed in Section 5.3.
- The next subsection, Section 5.4, compares the performances of the proposed geometric-based and optimization-based local refinements.
- Finally, the improvements of the proposed registration method are revealed with respect to the conventional approaches, such as manual and direct georeferencing-based registration, and state-of-the-art image registration methods, including deep learning-based approaches, in Section 5.5.
5.1. Selection of 3D to 2D Conversion Method for Global Pose Estimation
5.2. Selection of Design Parameters and Distance Metrics for Blockwise Local Refinement
5.3. Comparison of Blockwise and Pixelwise Local Refinements
5.4. Comparison of Geometric-Based and Optimization-Based Local Refinements
5.5. Improvements of the Proposed Refinements with Respect to the Baseline and State-of-the-Art Methods
6. Discussion
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Brown, L.G. A survey of image registration techniques. ACM Comput. Surv. 1992, 24, 325–376. [Google Scholar] [CrossRef]
- Liao, R. Deformable Image Registration for Hyperspectral Images. Master’s Thesis, Dept. of Electrical and Systems Eng., McKelvey School of Engineering, Washington University, St. Louis, MO, USA, May 2019. [Google Scholar]
- Veste, S. Registration in Hyperspectral and Multispectral Imaging. Master’s Thesis, Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway, January 2017. [Google Scholar]
- Mukherjee, A.; Velez-Reyes, M.; Roysam, B. Interest Points for Hyperspectral Image Data. IEEE Trans. Geosci. Remote Sens. 2009, 47, 748–760. [Google Scholar] [CrossRef]
- Goncalves, H.; Corte-Real, L.; Goncalves, J.A. Automatic Image Registration Through Image Segmentation and SIFT. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2589–2600. [Google Scholar] [CrossRef] [Green Version]
- Sima, A.A.; Buckley, S.J. Optimizing SIFT for Matching of Short Wave Infrared and Visible Wavelength Images. Remote Sens. 2013, 5, 2037–2056. [Google Scholar] [CrossRef] [Green Version]
- Yi, Z.; Zhiguo, C.; Yang, X. Multi-spectral remote image registration based on SIFT. Electron. Lett. 2008, 44, 107–108. [Google Scholar] [CrossRef]
- Vural, M.F.; Yardımcı, Y.; Temizel, A. Registration of Multispectral Satellite Images with Oriented-Restricted SIFT. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009. [Google Scholar]
- Ordóñez, A.; Heras, D.B.; Argüello, F. SURF-Based Registration for Hyperspectral Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 63–66. [Google Scholar]
- Rzhanov, Y.; Pe'eri, S. Pushbroom-Frame Imagery Co-Registration. Mar. Geod. 2012, 35, 141–157. [Google Scholar] [CrossRef]
- Dorado-Munoz, L.P.; Velez-Reyes, M.; Mukherjee, A.; Roysam, B. A Vector SIFT Detector for Interest Point Detection in Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4521–4533. [Google Scholar] [CrossRef]
- Dorado-Muñoz, L.P.; Velez-Reyes, M.; Roysam, B.; Mukherjee, A. Interest point detection for hyperspectral imagery. In Proceedings of the SPIE Defense; SPIE-Intl Soc Optical Eng: Bellingham, WA, USA, 2009; Volume 7334, p. 73340O. [Google Scholar]
- Kern, J.P.; Pattichis, M.; Stearns, S.D. Registration of image cubes using multivariate mutual information. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003. [Google Scholar]
- Luo, X.; Guo, L.; Yang, Z. Registration of Remote Sensing Hyperspectral Imagery Using Mutual Information and Stochastic Optimization. Remote Sens. Technol. Appl. 2006, 21, 61–66. [Google Scholar]
- Hasan, M.; Pickering, M.; Robles-Kelly, A.; Zhou, J.; Jia, X. Registration of hyperspectral and trichromatic images via cross cumulative residual entropy maximization. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 3 December 2010; pp. 2329–2332. [Google Scholar]
- Zhou, Y.; Rangarajan, A.; Gader, P.D. Nonrigid Registration of Hyperspectral and Color Images with Vastly Different Spatial and Spectral Resolutions for Spectral Unmixing and Pansharpening. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1571–1579. [Google Scholar]
- Zhou, Y.; Rangarajan, A.; Gader, P.D. An Integrated Approach to Registration and Fusion of Hyperspectral and Multi-spectral Images. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3020–3033. [Google Scholar] [CrossRef]
- Pluim, J.P.W.; Maintz, J.B.A.; Viergever, M.A. Mutual-information-based registration of medical images: A survey. IEEE Trans. Med. Imaging 2003, 22, 986–1004. [Google Scholar] [CrossRef]
- Kim, B.H.; Kim, M.Y.; Chae, Y.S. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications. Sensors 2017, 18, 60. [Google Scholar] [CrossRef] [Green Version]
- Verstockt, S.; Van Hoecke, S.; Tilley, N.; Merci, B.; Sette, B.; Lambert, P.; Hollemeersch, C.-F.; Van de Walle, R. Hot Topics in Video Fire Surveillance. In Video Surveillance; InTech: London, UK, 2011. [Google Scholar]
- Koz, A.; Çalışkan, A.; Aydın, A.; Alatan, A. Registration of MWIR-LWIR Band Hyperspectral Images. In Proceedings of the 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016. [Google Scholar]
- Yang, K.; Pan, A.; Yang, Y.; Zhang, S.; Ong, S.H.; Tang, H. Remote Sensing Image Registration Using Multiple Image Features. Remote Sens. 2017, 9, 581. [Google Scholar] [CrossRef] [Green Version]
- Ma, J.; Zhou, H.; Zhao, J.; Gao, Y.; Jiang, J.; Tian, J. Robust Feature Matching for Remote Sensing Image Registration via Locally Linear Transforming. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6469–6481. [Google Scholar] [CrossRef]
- Abbas, M.; Saleem, S.; Subhan, F.; Bais, A. Feature points-based image registration between satellite imagery and aerial images of agricultural land. Turk. J. Electr. Eng. Comput. Sci. 2020, 28, 1458–1473. [Google Scholar] [CrossRef]
- Mostafa, M.M.R.; Hutton, J. Direct Positioning and Orientation Systems: How Do They Work? What Is the Attainable Accuracy? In Proceedings of the American Society of Photogrammetry and Remote Sensing Annual Meeting, St. Louis, MO, USA, 24–27 April 2001. [Google Scholar]
- Cramer, M.; Stallmann, D.; Haala, N. Direct Georeferencing Using GPS/Inertial Exterior Orientations for Photogrammetric Applications. In Proceedings of the XIXth International Society for Photogrammetry and Remote Sensing Congress, Amsterdam, The Netherlands, 16–23 July 2000. [Google Scholar]
- Yuan, X.; Zhang, X. Theoretical Accuracy of Direct Georeferencing with Position and Orientation System in Aerial Photogrammetry. In Proceedings of the XXIst International Society for Photogrammetry and Remote Sensing Congress, Beijing, China, 3–11 July 2008. [Google Scholar]
- Efe, U.; Ince, K.G.; Alatan, A.A. DFM: A Performance Baseline for Deep Feature Matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Virtual, 25 June 2021; pp. 4284–4293. [Google Scholar]
- Rocco, I.; Cimpoi, M.; Arandjelovic, R.; Torii, A.; Pajdla, T.; Sivic, J. NCNet: Neighbourhood Consensus Networks for Estimating Image Correspondences. IEEE Trans. Pattern Anal. Mach. Intell. 2020. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Q.; Sattler, T.; Leal-Taixe, L. Patch2Pix: Epipolar-Guided Pixel-Level Correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021. [Google Scholar]
- DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-Supervised Interest Point Detection and Description. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
- Dusmanu, M.; Rocco, I.; Pajdla, T.; Pollefeys, M.; Sivic, J.; Torii, A.; Sattler, T. D2-Net: A Trainable CNN for Joint Description and Detection of Local Features. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA; pp. 8084–8093. [Google Scholar]
- Ma, J.; Zhao, J.; Guo, H.; Jiang, J.; Zhou, H.; Gao, Y. Locality Preserving Matching. Int. J. Comput. Vis. 2019, 127, 512–531. [Google Scholar] [CrossRef]
- Sarlin, P.-E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning Feature Matching with Graph Neural Networks. In Proceedings of the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4937–4946. [Google Scholar]
- DeTone, D.; Malisiewicz, T.; Rabinovich, A. Deep image homography estimation. In Proceedings of the RSS Workshop: Limits and Potentials of Deep Learning in Robotics, Ann Arbor, MI, USA, 18 June 2016. [Google Scholar]
- Nguyen, T.; Chen, S.W.; Shivakumar, S.S.; Taylor, C.J.; Kumar, V. Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model. IEEE Robot. Autom. Lett. 2018, 3, 2346–2353. [Google Scholar] [CrossRef] [Green Version]
- Viola, P.; Wells, W.M., III. Alignment by Maximization of Mutual Information. Int. J. Comput. Vis. 1997, 24, 137–154. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. SURF: Speeded Up Robust Features. Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Hackwell, J.A.; Warren, D.W.; Bongiovi, R.P.; Hansel, S.J.; Hayhurst, T.L.; Mabry, D.J.; Sivjee, M.G.; Skinner, J.W. LWIR/MWIR imaging hyperspectral sensor for airborne and ground-based remote sensing. In Proceedings of the Imaging Spectrometry II; SPIE-Intl Soc Optical Eng: Bellingham, WA, USA, 1996; Volume 2819, pp. 102–108. [Google Scholar]
- Pipia, L.; Aragüés, F.P.; Tardà, A.; Martinez, L.; Palà, V.; Arbiol, R. Thermal Airborne Spectrographic Imager for Temperature and Emissivity Retrieval. In Proceedings of the 3rd International Symposium on the Recent Advances in Quantitative Remote Sensing, Valencia, Spain, 27 September–1 October 2010. [Google Scholar]
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press (CUP): Cambridge, UK, 2004. [Google Scholar]
- Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Read. Comput. Vis. 1987, 24, 726–740. [Google Scholar] [CrossRef]
- Koz, A.; Soydan, H.; Duzgun, H.S.; Alatan, A.A. A local extrema based method on 2D brightness temperature maps for detection of archaeological artifacts. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2016; pp. 92–95. [Google Scholar]
- Smith, R.B. Computing the Planck Function. 2003. Available online: http://yceo.yale.edu/sites/default/files/files/ComputingThePlanckFunction.pdf (accessed on 23 June 2021).
- Gillespie, A.; Rokugawa, S.; Matsunaga, T.; Cothern, J.S.; Hook, S.; Kahle, A.B. A Temperature and Emissivity Separation Algorithm for Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Images. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1113–1126. [Google Scholar]
- Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151. [Google Scholar]
- Terriberry, T.B. Adaptive motion compensation without blocking artifacts. In Proceedings of the Visual Information Processing and Communication VI, San Francisco, CA, USA, 10–12 February 2015; Volume 9410. [Google Scholar] [CrossRef]
- Vedaldi, A.; Fulkerson, B.M. VLFeat: An open and portable library of computer vision algorithms. In Proceedings of the 18th ACM international conference on Multimedia, Firenze, Italy, 25–29 October 2010. [Google Scholar]
- Matlab Hyperspectral Toolbox by Isaac Gerg. Available online: https://github.com/isaacgerg/matlabHyperspectralToolbox/tree/master/hyperspectralToolbox (accessed on 23 June 2021).
Study | Image 1 Spectral Range | Image 2 Spectral Range | Class
---|---|---|---
Rzhanov et al. [10] | RGB image | VNIR (377–1041 nm) | feature-based
Sima et al. [6] | RGB image | SWIR (1300–2500 nm) | feature-based
Dorado-Munoz [11] | VNIR (400–970 nm) | VNIR (400–970 nm) | feature-based
Mukherjee et al. [4] | VNIR + SWIR (357–2576 nm) | VNIR + SWIR (357–2576 nm) | feature-based
Gonçalves et al. [5] | VNIR + SWIR (400–2500 nm) | VNIR + SWIR (400–2500 nm) | feature-based
Hasan et al. [15] | RGB image (440, 540, 570 nm) | VNIR image (430–630 nm) | optimization-based
Zhou et al. [16] | RGB image | VNIR (430–860 nm) | optimization-based
Steven et al. [20] | RGB image | Broadband thermal (LWIR) sensor image | feature-based
Kim et al. [19] | Broadband thermal (MWIR) sensor image | Broadband thermal (LWIR) sensor image | feature-based
Koz et al. [21] | MWIR (3500–4900 nm) | LWIR (7800–11,200 nm) | feature-based
Methods | Year | Feature Detection | Feature Description | Feature Matching | Geometric Estimation | Input | Output
---|---|---|---|---|---|---|---
DFM [28] | 2021 | + | + | + | | Image Pair | Putative Matches
NCNet [29] | 2020 | + | + | + | | Image Pair | Putative Matches
Patch2Pix [30] | 2020 | + | + | + | | Image Pair | Putative Matches
SuperPoint [31] | 2018 | + | + | | | Image | Features + Descriptors
D2-Net [32] | 2019 | + | + | | | Image | Features + Descriptors
LPM [33] | 2018 | | | + | | Features + Descriptors | Putative Matches
SuperGlue [34] | 2020 | | | + | | Features + Descriptors | Putative Matches
Deep Homog. Est. [35] | 2016 | | | | | Image Pair | Homography
Unsuper. Deep Homog. [36] | 2018 | | | | | Image Pair | Homography
Abbreviation | Spectral Range | No. of Bands | Capturing Day | Capturing Time | Capturing Height |
---|---|---|---|---|---|
LWIR1a | 7.6–13.5 µm | 128 | 20 August 2014 | 18:05 | 500 m |
LWIR1b | 7.6–13.5 µm | 128 | 20 August 2014 | 16:35 | 500 m |
LWIR1c | 7.6–13.5 µm | 128 | 12 August 2014 | 18:18 | 500 m |
LWIR2a | 7.6–13.5 µm | 128 | 20 August 2014 | 18:05 | 500 m |
LWIR2b | 7.6–13.5 µm | 128 | 20 August 2014 | 16:35 | 500 m |
LWIR2c | 7.6–13.5 µm | 128 | 12 August 2014 | 18:18 | 500 m |
LWIR3a | 8.0–11.5 µm | 32 | 12 August 2014 | Not provided | 2000 ft |
LWIR3b | 8.0–11.5 µm | 32 | 12 August 2014 | Not provided | 2500 ft |
LWIR3c | 8.0–11.5 µm | 32 | 19 August 2014 | Not provided | 2500 ft |
Pairs | S/D | BT | ER | PCA1R | PCA2R | Ee | PCA1e | PCA2e |
---|---|---|---|---|---|---|---|---|
LWIR1a–LWIR1b | S | 86/176 (49%) | 89/163 (57%) | 89/173 (51%) | 80/167 (48%) | 60/130 (46%) | 109/175 (62%) | 71/156 (50%) |
LWIR1a–LWIR1c | D | 11/81 (14%) | 15/104 (14%) | X | X | 15/87 (17%) | 13/101 (13%) | X |
LWIR2a–LWIR2b | S | 25/117 (21%) | 27/119 (23%) | X | X | X | X | X |
LWIR2a–LWIR2c | D | 9/89 (10%) | 11/89 (12%) | 13/96 (14%) | X | X | 25/103 (24%) | X |
LWIR3a–LWIR3b | S | 88/134 (66%) | 94/138 (68%) | 101/142 (71%) | X | 20/50 (40%) | X | X |
LWIR3a–LWIR3c | D | 39/91 (43%) | 41/83 (49%) | 45/104 (43%) | 14/53 (26%) | X | X | X |
Mutual information (MI):

Pairs | S/D | BT | ER | PCA1R | PCA2R | Ee | PCA1e | PCA2e
---|---|---|---|---|---|---|---|---
LWIR1a–LWIR1b | S | 0.52 | 0.56 | 0.59 | 0.95 | 0.66 | 0.85 | 0.68
LWIR1a–LWIR1c | D | 0.26 | 0.29 | X | X | 0.40 | 0.40 | X
LWIR2a–LWIR2b | S | 0.50 | 0.59 | X | X | X | X | X
LWIR2a–LWIR2c | D | 0.50 | 0.62 | 0.64 | X | X | 0.94 | X
LWIR3a–LWIR3b | S | 1.07 | 1.10 | 1.10 | X | 0.25 | X | X
LWIR3a–LWIR3c | D | 0.82 | 0.84 | 0.86 | 0.30 | X | X | X
SSIM:

Pairs | S/D | BT | ER | PCA1R | PCA2R | Ee | PCA1e | PCA2e
---|---|---|---|---|---|---|---|---
LWIR1a–LWIR1b | S | 0.18 | 0.22 | 0.22 | 0.25 | 0.12 | 0.17 | 0.17
LWIR1a–LWIR1c | D | 0.08 | 0.06 | X | X | 0.04 | 0.07 | X
LWIR2a–LWIR2b | S | 0.12 | 0.18 | X | X | X | X | X
LWIR2a–LWIR2c | D | 0.07 | 0.08 | 0.08 | X | X | 0.14 | X
LWIR3a–LWIR3b | S | 0.31 | 0.32 | 0.32 | X | 0.05 | X | X
LWIR3a–LWIR3c | D | 0.29 | 0.25 | 0.28 | 0.04 | X | X | X
SSIM for same-day pairs (1a–1b, 2a–2b, 3a–3b) and different-day pairs (1a–1c, 2a–2c, 3a–3c):

Method | 1a–1b | 2a–2b | 3a–3b | 1a–1c | 2a–2c | 3a–3c
---|---|---|---|---|---|---
Baseline Methods | | | | | |
Global | 0.1833 | 0.1244 | 0.3091 | 0.0772 | 0.1054 | 0.2624
Manual | 0.1687 | 0.1384 | 0.1933 | 0.0368 | 0.1122 | 0.1302
Georeferencing-based | 0.3111 | 0.3029 | - | 0.0119 | 0.0031 | -
State-of-the-Art Methods | | | | | |
SuperGlue | 0.2255 | 0.1466 | 0.2787 | 0.0620 | 0.0617 | 0.2233
D2-Net | 0.1788 | 0.1064 | 0.2455 | 0.0325 | 0.0447 | 0.1371
LPM | 0.1873 | 0.1465 | 0.3130 | 0.0388 | X | 0.2322
Proposed Refinements | | | | | |
Geometric-based | 0.4223 | 0.3704 | 0.3811 | 0.1097 | 0.1404 | 0.3000
Optimization-based | 0.3603 | 0.2569 | 0.3382 | 0.0996 | 0.1676 | 0.3118
MI for same-day pairs (1a–1b, 2a–2b, 3a–3b) and different-day pairs (1a–1c, 2a–2c, 3a–3c):

Method | 1a–1b | 2a–2b | 3a–3b | 1a–1c | 2a–2c | 3a–3c
---|---|---|---|---|---|---
Baseline Methods | | | | | |
Global | 0.5300 | 0.4814 | 1.0665 | 0.2453 | 0.6479 | 0.8039
Manual | 0.5317 | 0.5132 | 0.9375 | 0.2531 | 0.5642 | 0.6376
Georeferencing-based | 0.5532 | 0.4777 | - | 0.1648 | 0.4383 | -
State-of-the-Art Methods | | | | | |
SuperGlue | 0.5376 | 0.4914 | 1.0460 | 0.2134 | 0.5664 | 0.7702
D2-Net | 0.4852 | 0.4634 | 0.9663 | 0.2668 | 0.5794 | 0.7070
LPM | 0.5292 | 0.4851 | 1.0818 | 0.2176 | X | 0.7680
Proposed Refinements | | | | | |
Geometric-based | 0.7745 | 0.6162 | 1.1075 | 0.2501 | 0.6761 | 0.8384
Optimization-based | 0.7260 | 0.5569 | 1.1030 | 0.2592 | 0.6762 | 0.8428
[Image patch comparison for Patches 1–4: the reference patch versus the results of Global, Manual, Georeferencing-based, SuperGlue, D2-Net, LPM, and the proposed Geometric- and Optimization-Based Refinements; patch images not reproduced here.]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Koz, A.; Efe, U. Geometric- and Optimization-Based Registration Methods for Long-Wave Infrared Hyperspectral Images. Remote Sens. 2021, 13, 2465. https://doi.org/10.3390/rs13132465