Remote Sensing
  • Technical Note
  • Open Access

5 December 2022

Multi-Sensor Fusion of SDGSAT-1 Thermal Infrared and Multispectral Images

1. State Key Laboratory of Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 500 Yu Tian Road, Shanghai 200083, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
3. Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
4. International Research Center of Big Data for Sustainable Development Goals (CBAS), Beijing 100094, China
This article belongs to the Special Issue Multi-Sensor Fusion Technology in Remote Sensing: Datasets, Algorithms and Applications

Abstract

Thermal infrared imagery plays an important role in a variety of fields, such as surface temperature inversion and urban heat island effect analysis, but its coarse spatial resolution severely restricts further applications. Data fusion combines data from multiple sensors, and the fused information often yields better results than any single sensor alone. Since multi-resolution analysis is considered an effective method of image fusion, we propose an MTF-GLP-TAM model to combine the thermal infrared (30 m) and multispectral (10 m) information of SDGSAT-1. Firstly, the multispectral bands most relevant to the thermal infrared bands are identified. Secondly, to obtain better performance, the high-resolution multispectral bands are histogram-matched with each thermal infrared band. Finally, the spatial details of the multispectral bands are injected into the thermal infrared bands with an MTF Gaussian filter and an additive injection model. Despite the lack of spectral overlap between the thermal infrared and multispectral bands, the fused image improves the spatial resolution while maintaining the thermal infrared spectral properties, as shown by subjective and objective experimental analyses.

1. Introduction

Thermal infrared (TIR) imaging can determine the nature, state, and change patterns of ground objects by measuring the differences in infrared properties reflected or radiated by the ground. In addition to its importance to global energy transformations and sustainable development, it has been extensively researched in the fields of surface temperature inversion, urban heat island effect, forest fire monitoring, prospecting, and geothermal exploration [1,2]. Due to the limitations of remote sensors, the thermal infrared band generally has a coarser spatial resolution than the visible band, which reduces its accuracy. As a result, improving the spatial resolution of thermal infrared images is of great significance and value.
In order to produce synthetic TIR images, TIR images can be fused with reflection bands of higher spatial resolution using image fusion techniques. Most current image fusion methods assume that there is a significant correlation between panchromatic (PAN) and multispectral (MS) images, and data fusion of PAN and MS images has been widely used to create fused images with higher spatial and spectral resolution [3]. Although several image fusion methods are available for MS and PAN images, only a few in the literature claim to be applicable to TIR and reflection data, for example, pixel block intensity modulation [4], nonlinear transform and multivariate analysis [5], and optimal scaling factor [6]. There are two problems with the current methods: (i) since the TIR spectral range is far from the reflectance spectral range, the correlation between TIR and reflectance data is generally weak, resulting in blurred images or significant spectral distortions; (ii) current TIR and reflection data fusion models are subject to strong subjective influences on parameter selection for different scenes.
Previous research on PAN and MS image fusion methods revealed that multi-resolution analysis (MRA) is widely used due to its high computational efficiency and excellent fusion performance. MRA methods primarily use wavelet transforms, Laplacian pyramids, etc. The aim is to extract spatial-structure information that depends on spatial resolution and inject it into the lower-resolution image to enhance its spatial detail. Inspired by this, we propose a Generalized Laplacian Pyramid model with a Modulation Transfer Function matched filter (MTF-GLP-TAM) to fuse the thermal infrared band (30 m) and multispectral band (10 m) information of SDGSAT-1. SDGSAT-1 carries three payloads: a thermal infrared spectrometer, a glimmer imager, and a multispectral imager. The round-the-clock synergistic observation performed by these three payloads provides short-time-phase, high-resolution, and high-precision image data for the fine portrayal of human traces, offshore ecology, the urban heat island effect, and polar environments. The multispectral imager (MII) is one of its main optical payloads, containing seven bands, mainly in 380~900 nm, with a spatial resolution of 10 m. The thermal infrared spectrometer (TIS) collects three thermal infrared bands with a spatial resolution of 30 m [7,8,9,10]. The lower resolution of the thermal infrared bands compared with the MS bands limits their further application. Thus, it is necessary to integrate SDGSAT-1 MS images with TIS images, preserving the spatial information of the multispectral bands while maintaining the spectral properties of the three thermal infrared bands in the fused images.
This paper is organized as follows: In Section 2, we review different methods for remote sensing image fusion and analyze the applicability of these methods on thermal infrared and multispectral data fusion. In Section 3, we describe the whole framework of the image fusion algorithm in detail. In Section 4, we compare the results of the proposed method with those of other methods in a comprehensive manner, and we select several scenes to demonstrate its performance after fusion. In Section 5, we discuss the fusion performance of the proposed algorithm on Landsat series satellite images and compare it with those of other advanced algorithms. Finally, Section 6 summarizes the main conclusions.

3. Methodologies

We refined the method based on multi-resolution analysis and applied it to multispectral and thermal infrared remote sensing image fusion. The contribution of the multispectral images to the spatial detail of the final fusion product is obtained by calculating the difference between the higher-resolution multispectral images and their low-pass components. The method extracts spatial details through the multi-scale decomposition of the high-spatial-resolution multispectral images and injects them into the thermal infrared bands, which are up-sampled to the multispectral image size. The main advantages of fusion techniques based on multi-resolution analysis are as follows: (1) good temporal coherence; (2) strong spectral consistency; and (3) robustness to aliasing under appropriate conditions. The flow chart of our proposed algorithm is shown in Figure 1. Specifically, the fusion algorithm in this paper can be decomposed into the following sequential steps: (1) up-sample the thermal infrared image to the dimensions of the multispectral image; (2) calculate the low-pass component of the multispectral image with filters designed for an R-fold sampling ratio; (3) calculate the injection gain; and (4) inject the extracted details.
Figure 1. The entire flow chart of the proposed algorithm, where MS image indicates multispectral image and TIR image indicates thermal infrared image. Circles with a straight or diagonal cross in the middle indicate additive combination or multiplicative injection, respectively.
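As a concrete illustration of the low-pass/detail-extraction step, the sketch below builds a Gaussian filter whose gain at the low-resolution sensor's Nyquist frequency matches a given MTF value and extracts one GLP detail plane. The function names, the bilinear resampling, and the example Nyquist gain of 0.3 are our own illustrative assumptions, not code from the paper:

```python
import numpy as np
from scipy import ndimage

def mtf_sigma(ratio, nyquist_gain):
    """Standard deviation (in high-res pixels) of a Gaussian whose frequency
    response equals `nyquist_gain` at the low-res Nyquist frequency u = 1/(2*ratio).
    Solves exp(-2 * pi**2 * sigma**2 * u**2) = nyquist_gain for sigma."""
    return ratio * np.sqrt(-2.0 * np.log(nyquist_gain)) / np.pi

def glp_details(ms, ratio, nyquist_gain=0.3):
    """High-frequency details of `ms` that a sensor with the given MTF
    cannot resolve at 1/ratio of the resolution (one GLP level)."""
    sigma = mtf_sigma(ratio, nyquist_gain)
    low = ndimage.gaussian_filter(ms, sigma=sigma)
    # Decimate and re-expand so the low-pass component lives on the coarse
    # grid, as in one level of a (generalized) Laplacian pyramid.
    low = ndimage.zoom(ndimage.zoom(low, 1.0 / ratio, order=1), ratio, order=1)
    return ms - low
```

For SDGSAT-1, the sampling ratio R would be 3 (10 m MS against 30 m TIR).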
The thermal infrared image has three bands, denoted $\mathrm{TIR}_k$, $k = 1, 2, 3$. The multispectral image has seven bands, denoted $\mathrm{MS}_n$, $n = 1, 2, \ldots, 7$. By computing correlation coefficients, we find the multispectral band with the maximum correlation with the thermal infrared bands, which we denote as MS. The goal of our algorithm is to inject the high-resolution details from the MS image into the three thermal infrared bands. Accordingly, the fusion process is described by Equation (1).
$\widehat{\mathrm{TIR}}_k = \widetilde{\mathrm{TIR}}_k + F_k\left[\delta\mathrm{MS}\right]$ (1)
where subscript $k$ (ranging from 1 to $B$) indicates the spectral band and $B$ is the number of TIR bands. $\widetilde{\mathrm{TIR}}_k$ and $\widehat{\mathrm{TIR}}_k$ are the $k$th channels of the TIR image up-sampled to the MS size and of the fused product, respectively. $\delta\mathrm{MS}$ denotes the MS details, obtained as the difference between the histogram-matched MS image $\breve{\mathrm{MS}}$ and its low-resolution version $\breve{\mathrm{MS}}_L$, as shown in Equation (2). $\breve{\mathrm{MS}}$ is the result of histogram-matching MS to each TIR band, as shown in Equation (3), where $\mu$ denotes the mean of an image and $\sigma$ its standard deviation. Finally, $F_k[\cdot]$ are the functions that modulate the injection of the MS details into the TIR bands; together with the method used for producing $\breve{\mathrm{MS}}_L$, they distinguish the different fusion approaches.
$\delta\mathrm{MS} = \breve{\mathrm{MS}} - \breve{\mathrm{MS}}_L$ (2)
$\breve{\mathrm{MS}} = \left(\mathrm{MS} - \mu_{\mathrm{MS}}\right) \cdot \dfrac{\sigma_{\widetilde{\mathrm{TIR}}_k}}{\sigma_{\mathrm{MS}_L}} + \mu_{\widetilde{\mathrm{TIR}}_k}$ (3)
In fact, Equation (1) is a generalization of the MRA approach, where each band is treated independently. Almost all classical approaches employ a linear function $F_k[\cdot]$, obtained through the pointwise multiplication (indicated by $\circ$) of the MS details by a coefficient matrix $G_k$, as in Equation (4). Different approaches correspond to different ways of obtaining the low-pass component $\breve{\mathrm{MS}}_L$ and of defining the form of $G_k$.
$F_k\left[\delta\mathrm{MS}\right] = G_k \circ \delta\mathrm{MS}$ (4)
There are usually two forms of $G_k$. Global gain coefficients: for all $k = 1, 2, 3$, $G_k = \mathbf{1}$, a matrix of appropriate size with all elements equal to one; this is the so-called additive injection scheme. Pixel-level gain coefficients: for all $k = 1, 2, 3$, $G_k = \widetilde{\mathrm{TIR}}_k / \breve{\mathrm{MS}}_L$; in this case, the details are weighted by the ratio between the up-sampled thermal infrared image and the low-pass-filtered multispectral image in order to reproduce the local intensity contrast of the multispectral image in the fused image. However, the local intensity contrast of the multispectral image does not reflect the true thermal contrast, so we use global gain coefficients here.
In this paper, we use the classical MTF-GLP-based model, which relies on MTF Gaussian filters for detail extraction and an additive injection model, where $G_k = \mathbf{1}$ for each $k = 1, 2, 3$. Therefore, the final fusion equation of the TIR image and the MS image is shown in Equation (5). To better show the spectral changes of the three thermal infrared bands after fusion, the three bands $B_1$, $B_2$, and $B_3$ are pseudo-colored according to the RGB channels, yielding the color image $\widehat{\mathrm{TIR}}_{PC}$.
$\widehat{\mathrm{TIR}}_k = \widetilde{\mathrm{TIR}}_k + \left(\breve{\mathrm{MS}} - \breve{\mathrm{MS}}_L\right)$ (5)
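Under the additive scheme, Equation (5) amounts to the following per-band sketch. This is a simplified illustration under our own assumptions (bilinear resampling, a fixed filter width standing in for the TIS MTF), and it uses the fact that the histogram matching of Equation (3) is affine, so the detail term reduces to a gain applied to the MS high frequencies:

```python
import numpy as np
from scipy import ndimage

def fuse_band(tir, ms, ratio=3, sigma=1.5):
    """Additive MTF-GLP fusion of one low-resolution TIR band with the
    most-correlated high-resolution MS band (both 2-D float arrays)."""
    tir_up = ndimage.zoom(tir, ratio, order=1)           # step 1: up-sample TIR to MS size
    ms_low = ndimage.gaussian_filter(ms, sigma=sigma)    # step 2: MTF-matched low-pass
    ms_low = ndimage.zoom(ndimage.zoom(ms_low, 1.0 / ratio, order=1),
                          ratio, order=1)                # decimate + re-expand (GLP level)
    # Histogram matching (Eq. 3) is affine, so (MS_matched - MS_matched_L)
    # equals a gain times (MS - MS_L):
    detail = (ms - ms_low) * (tir_up.std() / ms_low.std())
    return tir_up + detail                               # G_k = 1: additive injection
```

Applying `fuse_band` to each of the three TIR bands with its most-correlated MS band, then mapping the results to RGB, would reproduce the pseudo-colored fused product described above.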

4. Experiment and Results

4.1. Test Data and Fusion Methods for Comparison

The TIR and MS images taken in Shanghai on 8 April 2022 and in Beijing on 3 May 2022 were used as examples to evaluate the fusion algorithm proposed in this paper. The specific parameters of the TIR and MS images are given in Table 1. The size of the real thermal infrared image was 336 × 336 pixels, and the size of the corresponding multispectral image was 1008 × 1008 pixels. In the simulation experiments, we down-sampled the TIR and MS images by a factor of three, obtaining images of 112 × 112 pixels (90 m resolution) and 336 × 336 pixels (30 m resolution), respectively. After fusion, we obtained 30 m resolution thermal infrared images, which were compared with the real 30 m resolution thermal infrared data before degradation to obtain quantitative results.
Table 1. SDGSAT-1 TIS/MII main technical specifications.
To better illustrate the superiority of the MTF-GLP-TAM model, we applied five other types of image fusion methods to the test data: Gram–Schmidt (GS) expansion [26]; P + XS [28]; MTF-GLP-HPM [29]; OSF [6]; and SRT [14]. Among them, GS is a typical component substitution (CS) method; P + XS is a variational method; and MTF-GLP-HPM is a multi-resolution analysis method using pixel-level gain coefficients. Because our algorithm is based on the MTF-GLP model and is applied to TIR and MS band images, we named it MTF-GLP-TAM. OSF and SRT are two newer algorithms that have been successfully applied to multi-sensor image fusion for satellites such as KOMPSAT-3A, Landsat7, and Landsat8.

4.2. Evaluation Criteria

The three thermal infrared bands were pseudo-colored after resolution enhancement in order to evaluate the images fused using the different methods. Using pseudo-colored thermal infrared images, we can better assess the variation in spatial details and the spectral distortion in the fused images. In addition to the subjective evaluation, we also used several objective quality evaluation indexes to assess the quality of the fused images.
For real datasets without high-resolution TIR images available as references, fusion performance is usually evaluated at reduced resolution. The evaluation at degraded resolution follows Wald's protocol [30]. The original real TIR image is used as the reference image. The low-resolution TIR and MS images obtained by downscaling are then fused using a fusion algorithm. Finally, the fused image is compared with the original TIR image using the quality evaluation indexes to complete the objective evaluation at reduced resolution. In this paper, different fusion methods were analyzed objectively and quantitatively at reduced resolution using the commonly used metrics CC, SAM, RMSE, UIQI, and ERGAS.
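The reduced-resolution evaluation loop under Wald's protocol can be sketched as follows. The bilinear degradation used here is our own simplification; a stricter protocol would degrade the inputs with an MTF-matched filter before decimation:

```python
import numpy as np
from scipy import ndimage

def evaluate_reduced_resolution(tir, ms, fuse, metric, ratio=3):
    """Wald's protocol: degrade both inputs by `ratio`, fuse them, and
    score the result against the original TIR image as reference."""
    tir_low = ndimage.zoom(tir, 1.0 / ratio, order=1)  # e.g. 30 m -> 90 m
    ms_low = ndimage.zoom(ms, 1.0 / ratio, order=1)    # e.g. 10 m -> 30 m
    fused = fuse(tir_low, ms_low)                      # back at the TIR scale
    return metric(tir, fused)
```

Any of the five metrics below can be passed as `metric`, and `fuse` is any function mapping a low-resolution TIR band and an MS band to a fused band of the original TIR size.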
(1) Cross Correlation (CC)
CC is a spatial evaluation metric describing the geometric distortion of the fused image and is defined as
$CC = \dfrac{\sigma_{H,\widehat{H}}}{\sigma_H \, \sigma_{\widehat{H}}}$
where $H$ denotes the reference TIR image, $\widehat{H}$ denotes the fused sharpened image, $\sigma_{H,\widehat{H}}$ denotes the covariance of $H$ and $\widehat{H}$, and $\sigma_H$ and $\sigma_{\widehat{H}}$ are the standard deviations of $H$ and $\widehat{H}$, respectively. The closer the value of CC is to 1, the better the performance of the sharpening algorithm.
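A minimal implementation of CC, assuming the two images are passed as arrays of equal shape:

```python
import numpy as np

def cc(h, h_hat):
    """Cross correlation between reference image h and fused image h_hat."""
    h, h_hat = h.ravel().astype(float), h_hat.ravel().astype(float)
    # np.cov uses ddof=1 by default, so match it in the standard deviations.
    return float(np.cov(h, h_hat)[0, 1] / (h.std(ddof=1) * h_hat.std(ddof=1)))
```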
(2) Spectral Angle Mapper (SAM)
The SAM is a spectral quality index defined as
$SAM\left(u_H^i, u_{\widehat{H}}^i\right) = \arccos\left(\dfrac{\left\langle u_H^i, u_{\widehat{H}}^i \right\rangle}{\left\| u_H^i \right\|_2 \left\| u_{\widehat{H}}^i \right\|_2}\right)$
where $u_H^i$ and $u_{\widehat{H}}^i$ represent the spectral vectors corresponding to the $i$th pixel of images $H$ and $\widehat{H}$, respectively; $\langle u_H^i, u_{\widehat{H}}^i \rangle$ represents their dot product; and $\| \cdot \|_2$ represents the $\ell_2$ norm. Note that the spectral angle distortion of the TIR image is obtained by averaging the spectral angle values over all pixels of the image. The closer the SAM value is to 0, the smaller the spectral distortion caused by the sharpening algorithm.
(3) Root Mean Squared Error (RMSE)
The RMSE is used to quantify the amount of distortion for each pixel in the fused image. The root mean square error between the fused image and the reference image is defined as
$RMSE\left(H, \widehat{H}\right) = \sqrt{\dfrac{\left\| H - \widehat{H} \right\|^2}{\lambda N}}$
where $\lambda$ is the number of pixels and $N$ denotes the number of bands. The RMSE equals 0 when there is no deviation between the fused image and the reference image.
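The RMSE above can be computed directly, since averaging the squared differences over all entries of a (bands, rows, cols) array divides by both the pixel count and the band count:

```python
import numpy as np

def rmse(h, h_hat):
    """Root mean squared error over all pixels and bands."""
    diff = h.astype(float) - h_hat.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```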
(4) Relative Dimensionless Global Error in Synthesis (ERGAS)
The ERGAS represents the spatial and spectral differences between the two images of the fused image and the reference image, i.e., a parameter indicating the overall quality of the fused image, and is defined as
$ERGAS\left(H, \widehat{H}\right) = \dfrac{100}{R} \sqrt{\dfrac{1}{N} \displaystyle\sum_{k=1}^{N} \left(\dfrac{RMSE_k}{\mu_k}\right)^2}$
where $\mu_k$ denotes the mean value of the $k$th band of the reference image and $R$ denotes the ratio of the linear resolutions of the MS and TIR images. The closer ERGAS is to 0, the better the performance of the fusion algorithm.
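A sketch of ERGAS for a band-first image stack, where `ratio` plays the role of R:

```python
import numpy as np

def ergas(h, h_hat, ratio):
    """ERGAS for images shaped (bands, rows, cols)."""
    n = h.shape[0]
    # Per-band RMSE and per-band mean of the reference image.
    band_rmse = np.sqrt(((h - h_hat) ** 2).reshape(n, -1).mean(axis=1))
    mu = h.reshape(n, -1).mean(axis=1)
    return float(100.0 / ratio * np.sqrt(np.mean((band_rmse / mu) ** 2)))
```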
(5) Universal Image Quality Index (UIQI)
The UIQI is a similarity index that captures both spectral and spatial distortions. The covariance, variances, and mean values of the fused and reference images all affect the value of the UIQI; it therefore measures correlation loss, luminance distortion, and contrast distortion. The closer the value of UIQI is to 1, the better the quality of the fused image.
$UIQI = \dfrac{\sigma_{H,\widehat{H}}}{\sigma_H \sigma_{\widehat{H}}} \cdot \dfrac{2 \mu_H \mu_{\widehat{H}}}{\mu_H^2 + \mu_{\widehat{H}}^2} \cdot \dfrac{2 \sigma_H \sigma_{\widehat{H}}}{\sigma_H^2 + \sigma_{\widehat{H}}^2}$
where $H$ denotes the reference TIR image; $\widehat{H}$ denotes the fused sharpened image; $\sigma_{H,\widehat{H}}$ denotes the covariance of $H$ and $\widehat{H}$; $\sigma_H$ and $\sigma_{\widehat{H}}$ are the standard deviations of $H$ and $\widehat{H}$, respectively; and $\mu_H$ and $\mu_{\widehat{H}}$ are the mean values of $H$ and $\widehat{H}$, respectively.
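The three factors of UIQI (correlation, luminance, contrast) map directly to code:

```python
import numpy as np

def uiqi(h, h_hat):
    """Universal Image Quality Index: correlation x luminance x contrast."""
    h, h_hat = h.ravel().astype(float), h_hat.ravel().astype(float)
    cov = np.cov(h, h_hat)[0, 1]
    s1, s2 = h.std(ddof=1), h_hat.std(ddof=1)  # match np.cov's ddof=1
    m1, m2 = h.mean(), h_hat.mean()
    return float((cov / (s1 * s2))
                 * (2 * m1 * m2 / (m1 ** 2 + m2 ** 2))
                 * (2 * s1 * s2 / (s1 ** 2 + s2 ** 2)))
```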

4.3. Simulation Data Experiment

The results of the different fusion algorithms tested on the simulated data are shown in Figure 2, Figure 3, Figure 4 and Figure 5. Four groups of scenes were selected for the experiment: ports, residential areas, airports, and mountains. The large image on the left is the TIR triple-band pseudo-color image used as the reference image. The small images on the right are zoomed-in views of the local details within the red and green boxes. The images below the red- and green-boxed vignettes are the differences between the corresponding regions and the GT images, where GT indicates the original 30 m resolution thermal infrared image used as the reference. The spectral distortion caused by the GS algorithm was particularly noticeable, as its fusion resulted in a larger color difference from the reference image. Compared with the GS algorithm, the P + XS algorithm showed some improvement in spectral fidelity, but the spatial quality was still not satisfactory; its a priori assumptions are not applicable to the fusion of TIR and MS images, so the fused image did not achieve better results. MTF-GLP-HPM was similar to the method in this paper in terms of fusion results, and both are essentially MRA methods. The main difference is that the former adopts a multiplicative injection scheme to reproduce the local intensity contrast of the MS image in the fused image. However, the multiplicative injection scheme produced some distorted details when applied to TIR image fusion, such as the black area in the lower right corner of the red box in Figure 3. The OSF- and SRT-based algorithms fused images with clearer ground details, but their retention of thermal infrared spectral properties was low.
Moreover, because these methods require scene-dependent parameter choices, their performance was very unstable, as can be seen from the airport data in Figure 4.
Figure 2. Comparison of fusion results of different algorithms on simulation data (the original image was taken in Shanghai on 8 April 2022; scene: port).
Figure 3. Comparison of fusion results of different algorithms on simulation data (the original image was taken in Beijing on 3 May 2022; scene: residential area).
Figure 4. Comparison of fusion results of different algorithms on simulation data (the original image was taken in Beijing on 3 May 2022; scene: airport).
Figure 5. Comparison of fusion results of different algorithms on simulation data (the original image was taken in Beijing on 3 May 2022; scene: mountains).
Table 2 gives the objective quality evaluation metrics of the algorithm used in this paper and the five other classes of fusion algorithms at reduced resolution. Consistent with the subjective analysis, the evaluation metrics show that the performance of the GS algorithm was much lower than those of the other algorithms. The performance of the two MRA-based algorithms was similar and better than that of the variational algorithm. The OSF and SRT algorithms performed better on the mountain data; their poor performance on the airport data confirms their lack of scene generalization due to the uncertainty of their parameters. Overall, the method used in this paper achieved optimal values for nearly all quality evaluation metrics. This further demonstrates that the MTF-GLP-TAM algorithm can achieve a trade-off between good spectral fidelity and spatial clarity when fusing TIR and MS images. To better illustrate the universality of our algorithm, we selected 36 scenes covering different time periods. Since the RMSE provides the standard error between the fused image and the reference image, it measures both the spatial and the spectral distortion contained in the fused image, which usually matches the visual evaluation results. We plotted the RMSE for the different algorithms, and the results are shown in Figure 6. The figure shows that the RMSE of the method proposed in this paper was minimal in most scenarios.
Table 2. Objective evaluation indexes of the method in this paper and several advanced fusion algorithms.
Figure 6. Comparison of RMSE results of different algorithms on simulation data.

4.4. Real-Data Experiment

Figure 7, Figure 8, Figure 9 and Figure 10 show the fusion results of the different fusion algorithms on real data. The large image on the left side shows the multispectral grayscale image used for fusion. The small images on the right side show the local details of the different algorithms after fusion. For better visual comparison before and after fusion, we used the result of the three-fold bilinear up-sampling of the TIR image as the reference image; this up-sampled image is also the initial input of the thermal infrared image before fusion. Such simple interpolation does not introduce additional spatial information, so the comparison better reflects the ground detail gained in the fused image. By observing the local details, we can see that the fused images are more informative, such as some details of the ground buildings in Figure 7 and the textures on the airport runways in Figure 8. The GS and OSF algorithms maintained high spatial performance, but their fused images all showed spectral distortion. The P + XS algorithm maintained good spectral performance, but the fused images were obviously blurred. The algorithm proposed in this paper provided the best visual results among all the fusion algorithms compared.
Figure 7. The results of different fusion methods for the Forbidden City area in Beijing are shown.
Figure 8. The results of different fusion methods for Beijing Capital International Airport are shown.
Figure 9. The results of different fusion methods for Shanghai Hongqiao Airport are shown.
Figure 10. The results of different fusion methods are shown for the Dripping Lake area in Shanghai.

5. Discussion

In this section, the application of our proposed algorithm to other satellites is discussed, and two advanced multi-sensor fusion algorithms are selected for visual comparison. The results of the experiments are shown in Figure 11, Figure 12 and Figure 13, where Figure 11a shows the thermal infrared data taken by the Landsat8 TIRS payload and Figure 11b shows the panchromatic band data taken by the OLI payload; Figure 12a and Figure 13a show the thermal infrared data taken by the Landsat9 TIRS payload, and Figure 12b and Figure 13b show the panchromatic band data taken by the OLI payload; and Figure 11c–e, Figure 12c–e and Figure 13c–e show the fusion results of OSF, SRT, and our proposed algorithm, respectively. The OSF algorithm has been successfully applied to the Landsat8 and KOMPSAT-3A satellites, and it controls the trade-off between spatial details and thermal information through an optimal scaling factor. The SRT algorithm is mainly applied to the Landsat7 satellite and uses sparse representation techniques to fuse the panchromatic and thermal infrared bands. From the experimental results, we can see that the OSF fusion algorithm yielded clearer ground details but retained the spectral properties of the thermal infrared bands poorly, as seen in the circular building in Figure 11 and the airport runway in Figure 13. This approach controls the trade-off between spatial detail and thermal information by introducing a scaling factor, with the disadvantage that the optimal scale factor needs to be re-estimated for each set of images, as the optimal value changes with the scenario. However, it performs well in specific application scenarios, for example, when more spatial detail is needed for military applications of remote sensing, which can be achieved by increasing the scale factor.
The SRT fusion result was visually close to that of our proposed algorithm, but the SRT algorithm requires human judgment of the best fusion parameters for each scene. In addition, we calculated some statistics of the grayscale values of the thermal infrared images before and after fusion, mainly the maximum, minimum, mean, and standard deviation; the results are shown in Table 3.
Figure 11. Experimental results of different fusion algorithms on Landsat8 data: (a) Landsat8 thermal infrared band image resolution of 100 m; (b) Landsat8 panchromatic band image resolution of 15 m; (c) fusion results of OSF algorithm; (d) fusion results of SRT algorithm; (e) experimental results of our proposed algorithm.
Figure 12. Experimental results of different fusion algorithms on Landsat9 data: (a) Landsat9 thermal infrared band image resolution of 100 m; (b) Landsat9 panchromatic band image resolution of 15 m; (c) fusion results of OSF algorithm; (d) fusion results of SRT algorithm; (e) experimental results of our proposed algorithm.
Figure 13. Experimental results of different fusion algorithms on Landsat9 data: (a) Landsat9 thermal infrared band image resolution of 100 m; (b) Landsat9 panchromatic band image resolution of 15 m; (c) fusion results of OSF algorithm; (d) fusion results of SRT algorithm; (e) experimental results of our proposed algorithm.
Table 3. Summary of some statistics of thermal infrared images before and after fusion.
Overall, the method proposed in this paper achieved the best results in terms of both subjective visual evaluation and objective statistical metrics. This performance stems from the contribution of the multispectral images to the spatial detail of the final fusion results, obtained by calculating the difference between the high-resolution images and their low-pass components. The Laplacian pyramid we employ is defined by the differences between successive levels of the Gaussian pyramid. The Gaussian filter can be tuned to simulate the sensor modulation transfer function by matching its gain at the Nyquist frequency. This facilitates the extraction of details from high-resolution images that cannot be captured by thermal infrared sensors due to their low spatial resolution and can effectively improve the performance of the fusion algorithm. In this case, the only parameter characterizing the entire distribution is the standard deviation of the Gaussian, which is determined from sensor-based information (usually the amplitude response value at the Nyquist frequency provided by the manufacturer or obtained from in-orbit measurements).
Using this image fusion method, the thermal infrared band can be improved from 30 m resolution to 10 m resolution. Higher spatial resolution thermal infrared remote sensing can better solve many practical environmental problems. Thermal details can be obtained with 10 m resolution in surface temperature inversion. We can fuse images to finely portray the spatial distribution of high-energy sites and residence types in urban areas. The method can also be applied to detect the precise movement conditions of volcanic lava, the radioactive exposure of nuclear power plants, and land cover classification, among others.

6. Conclusions

Thermal infrared images record radiometric information radiated from features that is invisible to the naked eye and use this information to identify features and invert surface parameters (e.g., temperature and emissivity). However, their low spatial resolution severely limits their potential applications. Image fusion techniques can be used to fuse TIR images with higher-spatial-resolution reflectance bands to produce synthetic TIR images, and the multi-sensor fusion of MS and TIR images is a good example of improved observability. In this paper, a fusion algorithm based on the MTF-GLP model is proposed for fusing the TIR images of SDGSAT-1 with its MS images. The fusion method was tested on real images and on simulated images with a three-fold degradation in spatial resolution. Compared with existing image fusion methods, the synthesized TIR images performed better visually and showed little spectral distortion. The proposed method achieved optimal performance in quantitative evaluation metrics including CC, SAM, RMSE, UIQI, and ERGAS. Finally, we successfully applied the algorithm to the fusion of thermal infrared data from Landsat series satellites with panchromatic band data and obtained better results in visual evaluation than several advanced fusion algorithms.

Author Contributions

Conceptualization, L.Q. and X.Z.; methodology, L.Q.; software, L.Q.; validation, L.Q. and X.N.; investigation, X.N.; resources, Z.H. and F.C.; data curation, Z.H. and F.C.; writing—original draft preparation, L.Q.; writing—review and editing, L.Q.; visualization, X.Z.; supervision, L.Q.; project administration, Z.H. and F.C.; funding acquisition, Z.H. and F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Strategic Priority Research Program of the Chinese Academy of Sciences (grant number XDA19010102) and the National Natural Science Foundation of China (grant number 61975222).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank SDG BIG DATA Center and National Space Science Center for providing us with the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, H.; Xie, X.; Liu, E.; Zhou, L.; Yan, L. Application of Infrared Remote Sensing and Magnetotelluric Technology in Geothermal Resource Exploration: A Case Study of the Wuerhe Area, Xinjiang. Remote Sens. 2021, 13, 4989. [Google Scholar] [CrossRef]
  2. Zhou, D.; Xiao, J.; Bonafoni, S.; Berger, C.; Deilami, K.; Zhou, Y.; Frolking, S.; Yao, R.; Qiao, Z.; Sobrino, J.A. Satellite remote sensing of surface urban heat islands: Progress, challenges, and perspectives. Remote Sens. 2018, 11, 48. [Google Scholar] [CrossRef]
  3. Scarpa, G.; Ciotola, M. Full-resolution quality assessment for pansharpening. Remote Sens. 2022, 14, 1808. [Google Scholar] [CrossRef]
  4. Guo, L.J.; Moore, J.M.M. Pixel block intensity modulation: Adding spatial detail to TM band 6 thermal imagery. Int. J. Remote Sens. 1998, 19, 2477–2491. [Google Scholar] [CrossRef]
  5. Jing, L.; Cheng, Q. A technique based on non-linear transform and multivariate analysis to merge thermal infrared data and higher-resolution multispectral data. Int. J. Remote Sens. 2010, 31, 6459–6471. [Google Scholar] [CrossRef]
  6. Oh, K.-Y.; Jung, H.-S.; Park, S.-H.; Lee, K.-J. Spatial Sharpening of KOMPSAT-3A MIR Images Using Optimal Scaling Factor. Remote Sens. 2020, 12, 3772. [Google Scholar] [CrossRef]
  7. Chen, F.; Hu, X.; Li, X.; Yang, L.; Hu, X.; Zhang, Y. Invited research paper on wide-format high-resolution thermal infrared remote sensing imaging technology. China Laser 2021, 48, 1210002. [Google Scholar]
  8. Hu, Z.; Zhu, M.; Wang, Q.; Su, X.; Chen, F. SDGSAT-1 TIS Prelaunch Radiometric Calibration and Performance. Remote Sens. 2022, 14, 4543. [Google Scholar] [CrossRef]
  9. Qi, L.; Li, L.; Ni, X.; Chen, F. On-Orbit Spatial Quality Evaluation of SDGSAT-1 Thermal Infrared Spectrometer. IEEE Geosci. Remote Sens. Lett. 2022, 19, 7507505. [Google Scholar] [CrossRef]
  10. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89. [Google Scholar] [CrossRef]
  11. Fasbender, D.; Tuia, D.; Bogaert, P.; Kanevski, M. Support-based implementation of Bayesian data fusion for spatial enhancement: Applications to ASTER thermal images. IEEE Geosci. Remote Sens. Lett. 2008, 5, 598–602. [Google Scholar] [CrossRef]
  12. Jung, H.-S.; Park, S.-W. Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images. Sensors 2014, 14, 24425–24440. [Google Scholar] [CrossRef] [PubMed]
  13. Han, L.; Shi, L.; Yang, Y.; Song, D. Thermal physical property-based fusion of geostationary meteorological satellite visible and infrared channel images. Sensors 2014, 14, 10187–10202. [Google Scholar] [CrossRef] [PubMed]
  14. Jin, H.S.; Han, D. Multisensor fusion of Landsat images for high-resolution thermal infrared images using sparse representations. Math. Probl. Eng. 2017, 2017, 2048098. [Google Scholar] [CrossRef]
  15. Ahrari, A.; Kiavarz, M.; Hasanlou, M.; Marofi, M. Thermal and Visible Satellite Image Fusion Using Wavelet in Remote Sensing and Satellite Image Processing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 11–15. [Google Scholar] [CrossRef]
  16. Chen, Y.; Shi, K.; Ge, Y.; Zhou, Y.N. Spatiotemporal remote sensing image fusion using multiscale two-stream convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2021, 60, 4402112. [Google Scholar] [CrossRef]
  17. Pan, Y.; Pi, D.; Chen, J.; Meng, H. FDPPGAN: Remote sensing image fusion based on deep perceptual patchGAN. Neural Comput. Appl. 2021, 33, 9589–9605. [Google Scholar] [CrossRef]
  18. Azarang, A.; Kehtarnavaz, N. Image fusion in remote sensing: Conventional and deep learning approaches. Synth. Lect. Image Video Multimed. Process. 2021, 10, 1–93. [Google Scholar]
  19. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2565–2586. [Google Scholar] [CrossRef]
  20. Xu, Q.; Zhang, Y.; Li, B. Recent advances in pansharpening and key problems in applications. Int. J. Image Data Fusion 2014, 5, 175–195. [Google Scholar] [CrossRef]
  21. Filippidis, A.; Jain, L.C.; Martin, N. Multisensor data fusion for surface land-mine detection. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 2000, 30, 145–150. [Google Scholar] [CrossRef]
  22. Huang, P.S.; Tu, T.M. A target fusion-based approach for classifying high spatial resolution imagery. In Proceedings of the IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, Greenbelt, MD, USA, 27–28 October 2003; pp. 175–181. [Google Scholar]
  23. Vivone, G.; Dalla Mura, M.; Garzelli, A.; Restaino, R.; Scarpa, G.; Ulfarsson, M.O.; Alparone, L.; Chanussot, J. A new benchmark based on recent advances in multispectral pansharpening: Revisiting pansharpening with classical and emerging pansharpening methods. IEEE Geosci. Remote Sens. Mag. 2020, 9, 53–81. [Google Scholar] [CrossRef]
  24. Chavez, P.; Sides, S.C.; Anderson, J.A. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens. 1991, 57, 295–303. [Google Scholar]
  25. Shah, V.P.; Younan, N.H.; King, R.L. An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1323–1335. [Google Scholar] [CrossRef]
  26. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 4 January 2000. [Google Scholar]
  27. Aiazzi, B.; Alparone, L.; Baronti, S.; Selva, M. Twenty-five years of pansharpening. In Signal and Image Processing for Remote Sensing; CRC Press: Boca Raton, FL, USA, 2012; pp. 533–548. [Google Scholar]
  28. Ballester, C.; Caselles, V.; Igual, L.; Verdera, J.; Rougé, B. A variational model for P+ XS image fusion. Int. J. Comput. Vis. 2006, 69, 43–58. [Google Scholar] [CrossRef]
  29. Vivone, G.; Restaino, R.; Dalla Mura, M.; Licciardi, G.; Chanussot, J. Contrast and error-based fusion schemes for multispectral image pansharpening. IEEE Geosci. Remote Sens. Lett. 2013, 11, 930–934. [Google Scholar] [CrossRef]
  30. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
