Article

The Effectiveness of Pan-Sharpening Algorithms on Different Land Cover Types in GeoEye-1 Satellite Images

DIST—Department of Science and Technology, Parthenope University of Naples, Centro Direzionale, Isola C4, 80143 Naples, Italy
* Author to whom correspondence should be addressed.
J. Imaging 2023, 9(5), 93; https://doi.org/10.3390/jimaging9050093
Submission received: 19 March 2023 / Revised: 27 April 2023 / Accepted: 28 April 2023 / Published: 30 April 2023
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)

Abstract

In recent years, the demand for very high geometric resolution satellite images has increased significantly. Pan-sharpening techniques, which belong to the family of data fusion techniques, make it possible to increase the geometric resolution of multispectral images using panchromatic imagery of the same scene. However, choosing a suitable pan-sharpening algorithm is not trivial: many are available, none is universally recognized as the best for every type of sensor, and they can provide different results depending on the investigated scene. This article focuses on the latter aspect: analyzing pan-sharpening algorithms in relation to different land covers. A dataset of GeoEye-1 images is selected, from which four study areas (frames) are extracted: one natural, one rural, one urban and one semi-urban. The type of each study area is determined by the quantity of vegetation it contains, assessed with the normalized difference vegetation index (NDVI). Nine pan-sharpening methods are applied to each frame and the resulting pan-sharpened images are compared by means of spectral and spatial quality indicators. Multicriteria analysis makes it possible to identify the best-performing method for each specific area, as well as the most suitable one when different land covers coexist in the analyzed scene. Brovey transformation fast supplies the best results among the methods analyzed in this study.

1. Introduction

In order to increase the accuracy of a measurement, it is sometimes possible to merge data obtained from different sensors [1]. These techniques, called data fusion techniques, achieve better accuracies than those obtainable using a single sensor alone [2].
Data fusion techniques applied in the field of remote sensing allow the integration of data from multiple sensors and hence the production of more coherent, accurate and useful information than that provided by any single sensor [3]. They are increasingly widespread in the related technical and scientific fields, such as Earth observation and the accurate monitoring of the dynamics affecting the territory and the environment [4]. Such techniques have become fundamental for many studies and applications concerning, for example, the effects of climate change [5], desertification processes [6], deforestation [7], coastal erosion [8], burned area recognition [9], the phases of urban development [10], seismic damage evaluation [11] and cultural heritage preservation [12]. In fact, it is undeniable that the information from any single type of sensor remains partial, while the integration of heterogeneous data allows us to highlight specificities and details that would not be detected by sectoral analysis [13]. Moreover, data acquired from optical sensors can be integrated with data of a different nature, such as LiDAR [14], SAR [15] and microwave [16] data, which are useful for better object identification and classification.
Among the data fusion techniques, those relating to pan-sharpening play a particularly important role: they make it possible to merge the high geometric resolution of panchromatic images with the high spectral resolution of the multispectral bands [17]. As is known, to reduce the effects of noise on the signal [18], the sensors that acquire information in the panchromatic band operate over a relatively broad spectral range (including the entire visible range and sometimes the near infrared), which allows high geometric resolutions to be reached under conditions certainly better than those obtainable for the multispectral bands. In fact, as the bandwidth decreases, the size of the ground resolution cell must be increased to optimize the signal-to-noise ratio; it follows that the multispectral images have a lower geometric resolution than the panchromatic data [19,20]. Pan-sharpening techniques allow us to overcome this limit, so that multispectral data with the same pixel size as the panchromatic data are obtained. Many algorithms useful for this purpose are present in the literature, and the results obtainable from their application differ: consequently, a careful evaluation of the characteristics of the images derived from the pan-sharpening applications is required [21]. In 2015, Vivone et al. [22] provided an effective analysis and comparison of the classic pan-sharpening methods belonging to the component substitution and multiresolution analysis families.
Pan-sharpened images provide the highest level of detailed information and are also useful to support the multi-representation of geographical data [23]. For this reason, pan-sharpening studies have advanced considerably in recent years: an edge-adaptive pan-sharpening method was proposed in Rahmani et al.'s work to enforce spectral fidelity away from the edges [24], and more recently, Masi et al. proposed the use of convolutional neural networks (CNNs) for pan-sharpening [25]. Lately, deep learning techniques have been investigated by several researchers. In some cases, they start from and adapt super-resolution (SR) [26,27,28], a technique that enhances minute details of the features in an image, thereby improving the image's spatial information. Rohith and Kumar [29] tested and analyzed ten state-of-the-art SR techniques based on deep learning using ten different publicly available datasets; in addition, they proposed a new method based on the integration of SR with a band-dependent spatial detail (BDSD) algorithm [30]. Xiong et al. [31] designed a loss function suitable for pan-sharpening and a four-layer convolutional neural network capable of adequately extracting spectral and spatial features from the original source images; the designed loss function does not require a reference fused image, thus avoiding the preprocessing of the data to generate training samples. Jones et al. [32] introduced the normalized difference vegetation index into the CNN, so the spectral distortions produced by pan-sharpening were reduced by taking the normalized difference ratio of the spectral bands. In 2020, Vivone et al. [33] revisited pan-sharpening with classical and emerging methods, focusing mainly on non-deep learning methods, while in 2022, Deng et al. [34] provided a more detailed comparison of deep learning methods.
The most recent studies thus turn their attention to both deep learning approaches and land covers. In fact, pan-sharpening is also applied for land cover mapping [35,36] and, vice versa, pan-sharpening methods are often tested on different land covers [37]. A pan-sharpening algorithm can provide different results in relation to the investigated scene, so it becomes of fundamental importance to test the methods and find the most performing ones [38].
This paper focuses on the latter aspect: analyzing pan-sharpening techniques in relation to different land covers. A GeoEye-1 dataset is analyzed, from which four different areas are extracted. Each area, which we call a frame, presents a different land cover and is classified as urban, semi-urban, rural or natural, depending on the quantity of vegetation it contains. For each frame, nine pan-sharpening techniques (intensity-hue-saturation, intensity-hue-saturation fast, Brovey transformation, Brovey transformation fast, Gram–Schmidt mode 1, Gram–Schmidt fast, Gram–Schmidt mode 2, smoothing filter-based intensity modulation, high-pass filter) are applied and compared to find the most performing one in terms of spectral similarity with the original multispectral images and spatial correlation with the panchromatic one. In order to assess the reliability of the methods, four indices are taken into account, particularly two spectral indices (UIQI and ERGAS) and two spatial indices (Zhou index and spatial ERGAS). Finally, a multicriteria analysis is carried out to find the best performing algorithm for each type of frame, as well as the most suitable one considering the co-presence of different land covers in the analyzed scene.
All the operations are carried out using free and open-source software, namely Quantum GIS version 3.10.3 [39] and SAGA GIS 2.3.2 [40].

2. Materials and Methods

2.1. Dataset and Study Areas

For this paper, a GeoEye-1 dataset is chosen. Formerly known as OrbView-5, the GeoEye-1 satellite was launched on 6 September 2008 as a next-generation high-resolution imaging mission. GeoEye-1 images are used in a wide variety of applications, such as cartography and location-based services, risk management, environmental monitoring and natural resources, defense and national security [41].
The GeoEye-1 imaging system is a push-broom system that supplies panchromatic (PAN) and multispectral (MS) images, as reported in Table 1 [42].
The spectral response associated with the GeoEye-1 MS and PAN sensors is shown in Figure 1.
The chosen dataset is localized in the Campania region (Italy), as shown in Figure 2.
Particularly, it concerns the stretch of land extending from the mouth of the river Volturno in the south to the city of Mondragone in the north. The images extend from the coastal zone in the west to the inland areas in the east, which are rich in crops. Varied land covers are therefore present, passing from urban to rural environments. The dataset is georeferenced in the UTM/WGS84 (Zone 33 N) coordinate system. The four study areas chosen for this article are extracted from the same GeoEye-1 imagery, as shown in Figure 3 and in detail in Figure 4.
The frames chosen for this study have an area of 0.25 km² each (500 m × 500 m) and can be described as follows:
  • Frame 1—A natural area: it presents a forest (green area) and bare soil, with no man-made features (included between coordinates East 408,000 m–408,500 m and North 4,553,500 m–4,554,000 m);
  • Frame 2—A rural area: it mainly presents two kinds of cultivated areas, prevalently covered by vegetation and only slightly man-made (included between coordinates East 415,900 m–416,400 m and North 4,548,400 m–4,548,900 m);
  • Frame 3—A semi-urban area: it presents a mix of land covers, such as houses, vegetation and bare soil, and is moderately man-made (included between coordinates East 409,500 m–410,000 m and North 4,545,400 m–4,545,900 m);
  • Frame 4—An urban area: it presents few green areas, mostly represented by trees, and a typical urban land cover with houses, and is strongly man-made (included between coordinates East 407,300 m–407,800 m and North 4,551,900 m–4,552,400 m).
In this way, according to the approach proposed by Meng et al. [43], we have samples that are emblematic of the typical thematic surface features present in the study area.

2.2. Classification

The study areas are classified into urban, semi-urban, rural and natural based on the quantity of vegetation present in them. For this purpose, the normalized difference vegetation index (NDVI) is applied, the formula of which is expressed as follows [44]:
$NDVI = \dfrac{NIR - RED}{NIR + RED}$   (1)
NDVI highlights the vegetated areas with respect to the bare soil, so that the vegetation is represented with higher brightness values than the rest.
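As an illustration only (the study itself computes the index with QGIS/SAGA GIS raster tools), the band-wise calculation can be sketched in NumPy; the array names nir and red are placeholders for the GeoEye-1 near-infrared and red rasters:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Compute NDVI = (NIR - RED) / (NIR + RED) for two co-registered bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on pixels where both bands are null
    return np.where(denom == 0, 0.0, (nir - red) / denom)
```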
The classification is therefore carried out by applying the maximum likelihood classification (MLC), a supervised classification technique [45] employing training sites to estimate the statistical characteristics of the classes, which are used to evaluate the probability that a pixel belongs to a given class [46]. MLC is applied directly to the NDVI.

2.3. Pan-Sharpening Methods

In the literature, there are many pan-sharpening techniques useful for fusing the high spectral resolution of MS with the high spatial resolution of PAN [47,48]. The following nine algorithms are applied in this study: intensity-hue-saturation (IHS), IHS fast (IHSF), Brovey transformation (BT), Brovey transformation fast (BTF), Gram–Schmidt mode 1 (GS1), Gram–Schmidt fast (GSF), Gram–Schmidt mode 2 (GS2), smoothing filter-based intensity modulation (SFIM) and high-pass filter (HPF). The characteristics of the methods are reported below.

2.3.1. Intensity-Hue-Saturation

The IHS method is based on switching from the RGB (red-green-blue) to the IHS (intensity-hue-saturation) color model [49]. The intensity component, which is a synthetic panchromatic image (S), is used to fuse PAN and MS data according to the fusion framework called generalized IHS (GIHS) [50], where the intensity component is given by:
$S = \dfrac{1}{n} \sum_{k=1}^{n} MS_k$   (2)
where n represents the number of multispectral bands and $MS_k$ is the k-th multispectral image.
The pan-sharpened multispectral images are produced using the following formula:
$MS'_k = MS_k + PAN - S$   (3)
where $MS'_k$ is the k-th pan-sharpened image.
By analyzing the spectral response of the original dataset, weights can be introduced to calculate S [51]. This is the so-called IHS fast (IHSF), where S is obtained as follows [52]:
$S = \dfrac{1}{\sum_{k=1}^{n} w_k} \sum_{k=1}^{n} w_k \cdot MS_k$   (4)
where $w_k$ is the weight of the k-th multispectral band.
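A minimal NumPy sketch of the GIHS/IHSF injection described by Equations (2)–(4) is reported below; it assumes the MS stack has already been resampled to the PAN pixel size, and the function and variable names are illustrative (the study itself relies on GIS tools):

```python
import numpy as np

def gihs_pansharpen(ms: np.ndarray, pan: np.ndarray, weights=None) -> np.ndarray:
    """Generalized IHS fusion.
    ms: array of shape (n_bands, H, W) with the MS bands resampled to the PAN grid.
    pan: array of shape (H, W).
    weights=None reproduces plain IHS (S = simple mean of the bands);
    a sequence of per-band weights reproduces the 'fast' IHSF variant."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    if weights is None:
        s = ms.mean(axis=0)                        # Equation (2)
    else:
        w = np.asarray(weights, dtype=np.float64)
        s = np.tensordot(w, ms, axes=1) / w.sum()  # Equation (4)
    return ms + (pan - s)                          # Equation (3), applied band-wise
```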

2.3.2. Brovey Transformation

The Brovey transformation (BT) was developed to visually increase the contrast in the low and high ends of an image’s histogram and thus change the original scene’s radiometry [53]. The BT pan-sharpened image can be computed as [54]:
$MS'_k = \dfrac{MS_k}{S} \cdot PAN$   (5)
where S is the synthetic panchromatic image.
As for IHS, when weights are introduced to calculate S, this approach is called Brovey transformation fast (BTF).
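Under the same assumptions as the IHS sketch above (MS bands already resampled to the PAN grid, illustrative names), the multiplicative Brovey injection of Equation (5) can be sketched as follows:

```python
import numpy as np

def brovey_pansharpen(ms, pan, weights=None, eps=1e-12):
    """Brovey fusion: MS'_k = (MS_k / S) * PAN.
    weights=None gives BT; per-band weights give the 'fast' BTF variant."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    if weights is None:
        s = ms.mean(axis=0)
    else:
        w = np.asarray(weights, dtype=np.float64)
        s = np.tensordot(w, ms, axes=1) / w.sum()
    return ms / (s + eps) * pan   # eps guards against division by zero
```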

2.3.3. Gram–Schmidt Transformation

The Gram–Schmidt pan-sharpening method is based on the mathematical approach of the same name, by applying the orthonormalization of a set of vectors; particularly, in the case of images, each band (panchromatic or multispectral) corresponds to one vector [55]. To apply the Gram–Schmidt transformation (GST), the first step is to create a lower resolution panchromatic image from the multispectral band images (S). GST is performed to orthogonalize and decorrelate S and the MS bands. Particularly, S is used as the first band in the Gram–Schmidt process. At the end of the transformation, the PAN takes the place of S and the inverse GST is performed to produce the enhanced spatial resolution multispectral digital image [56]. The fused bands are obtained as follows:
$MS'_k = MS_k + g_k \left( PAN - S \right)$   (6)
where $g_k$ is the gain, given by:
$g_k = \dfrac{cov(MS_k, S)}{var(S)}$   (7)
where $cov(MS_k, S)$ is the covariance between the initial k-th multispectral image and the synthetic image, and $var(S)$ is the variance of S.
Different versions of GST are available, depending on the way S is generated. The simplest way to produce the synthetic image is supplied by Equation (2): in this case, the method is named GS mode 1 (GS1). If weights are introduced, the method is referred to as Gram–Schmidt fast (GSF) [57].
Another possibility is to degrade the panchromatic image by applying a smoothing filter. The degraded image (D) is then used as follows:
$MS'_k = MS_k + g_k \left( PAN - D \right)$   (8)
This method is known as Gram–Schmidt mode 2 (GS2).
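The band-wise injection shared by the three Gram–Schmidt variants (Equations (6)–(8)) can be sketched as below; the full orthogonalization and inverse transform of the original method are omitted, the synthetic image S is passed in explicitly (mean or weighted mean of the MS bands for GS1/GSF, a low-pass degraded PAN for GS2), and all names are illustrative:

```python
import numpy as np

def gs_injection(ms, pan, synthetic):
    """Gram-Schmidt-style injection: MS'_k = MS_k + g_k * (PAN - S),
    with g_k = cov(MS_k, S) / var(S) computed per band."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    s = synthetic.astype(np.float64)
    out = np.empty_like(ms)
    var_s = s.var()
    for k in range(ms.shape[0]):
        g_k = ((ms[k] - ms[k].mean()) * (s - s.mean())).mean() / var_s  # Equation (7)
        out[k] = ms[k] + g_k * (pan - s)                                # Equation (6)
    return out
```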

2.3.4. Smoothing Filter-Based Intensity Modulation

This technique was developed by Liu and is based on the concept that, by using a ratio between a PAN and its low-pass filtered image (D), spatial details can be modulated to a co-registered lower resolution multispectral image without altering its spectral properties and contrast [58].
In this case, the gains can be considered as:
$g_k = \dfrac{MS_k}{D}$   (9)
The fused images are produced as follows:
$MS'_k = MS_k + \dfrac{MS_k}{D} \left( PAN - D \right)$   (10)
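A possible sketch of the SFIM fusion of Equation (10) is reported below; the smoothing window size is an assumption here, since the kernel used by the GIS implementation is not stated in this section:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_pansharpen(ms, pan, size=5):
    """SFIM: MS'_k = MS_k + (MS_k / D) * (PAN - D), where D is a low-pass
    (smoothed) version of PAN; 'size' is the assumed smoothing window."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    d = uniform_filter(pan, size=size)   # low-pass filtered PAN (the D of Equation (10))
    d = np.where(d == 0, 1e-12, d)       # guard against division by zero
    return ms + (ms / d) * (pan - d)
```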

2.3.5. High-Pass Filter

The high-pass filter method (HPF) was introduced by Chavez and Bowell [59]. According to Vivone et al. [22], the high-frequency component of the PAN image can be extracted by applying a smoothing filter to the PAN image and subtracting the filtered result (D) from the PAN itself; this component is then injected into the MS bands as follows:
$MS'_k = MS_k + \left( PAN - D \right)$   (11)

2.4. Quality Assessment

To evaluate the quality of the pan-sharpened data, various indices are available in the literature [60], particularly those investigating the spectral correlation with the MS images and the spatial similarity with PAN [61]. In this paper, the universal image quality index (UIQI) and the erreur relative globale adimensionnelle de synthèse (ERGAS) are adopted as indicators of the spectral correlation between the original MS bands and the fused ones; the Zhou index (ZI) and the spatial ERGAS (S-ERGAS) are used to determine the spatial similarity between PAN and the fused images. The adopted indices are briefly reported below.

2.4.1. Universal Image Quality Index (UIQI)

This index takes into account three components and is obtained as follows [62]:
$UIQI = \dfrac{cov(MS_k, MS'_k)}{\sqrt{var(MS_k)\,var(MS'_k)}} \cdot \dfrac{2\,\overline{MS_k}\;\overline{MS'_k}}{\overline{MS_k}^{\,2} + \overline{MS'_k}^{\,2}} \cdot \dfrac{2\sqrt{var(MS_k)\,var(MS'_k)}}{var(MS_k) + var(MS'_k)}$   (12)
where $cov(MS_k, MS'_k)$ is the covariance between the initial k-th multispectral image and the corresponding pan-sharpened image; $var(MS_k)$ is the variance of $MS_k$; $var(MS'_k)$ is the variance of $MS'_k$; $\overline{MS_k}$ is the mean value of $MS_k$ and $\overline{MS'_k}$ is the mean value of $MS'_k$.
The range of UIQI is [−1, 1]: values close to 1 indicate a good performance of the pan-sharpening technique [63].
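For clarity, the three-factor form of Equation (12) can be computed band by band as in the following sketch (illustrative names; the original and fused bands are assumed co-registered on the same grid):

```python
import numpy as np

def uiqi(ms_k, fused_k):
    """Universal Image Quality Index between an original MS band and its fused counterpart."""
    x = ms_k.astype(np.float64).ravel()
    y = fused_k.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    correlation = cov / np.sqrt(vx * vy)           # first factor: correlation
    luminance = 2 * mx * my / (mx**2 + my**2)      # second factor: mean comparison
    contrast = 2 * np.sqrt(vx * vy) / (vx + vy)    # third factor: contrast comparison
    return correlation * luminance * contrast
```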

2.4.2. Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS)

It quantifies the spectral quality of the fused images with the following formula [64]:
$ERGAS = 100 \cdot \dfrac{h}{l} \cdot \sqrt{\dfrac{1}{n} \sum_{k=1}^{n} \left( \dfrac{RMSE(MS'_k)}{\mu_k} \right)^{2}}$   (13)
where h is the spatial resolution of the reference image (PAN); l is the spatial resolution of the original multispectral images ($MS_k$); n is the number of spectral bands and $\mu_k$ is the mean of the k-th band of the original image. RMSE is the root mean square error for the k-th band between the fused band ($MS'_k$) and the original band ($MS_k$), obtained as follows [65]:
$RMSE = \sqrt{\dfrac{1}{M \cdot N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( MS_k(i,j) - MS'_k(i,j) \right)^{2}}$   (14)
where $MS_k(i,j)$ represents the pixel value in the original (reference) image; $MS'_k(i,j)$ is the pixel value in the corresponding fused image; i and j identify the pixel position in each image and M and N are, respectively, the number of rows and the number of columns of each image. Low values of ERGAS suggest a likeness between original and fused bands.
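A sketch of Equations (13) and (14) follows; the original and fused stacks are assumed to be on the same grid, and for GeoEye-1 the resolution ratio h/l is 0.5/2 = 0.25:

```python
import numpy as np

def ergas(ms, fused, ratio=0.25):
    """ERGAS between original and fused MS stacks of shape (n_bands, H, W).
    'ratio' is h/l, i.e. the PAN resolution divided by the MS resolution."""
    ms = ms.astype(np.float64)
    fused = fused.astype(np.float64)
    n = ms.shape[0]
    acc = 0.0
    for k in range(n):
        rmse_k = np.sqrt(np.mean((ms[k] - fused[k]) ** 2))  # Equation (14)
        acc += (rmse_k / ms[k].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / n)                 # Equation (13)
```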

2.4.3. Zhou’s Spatial Index (ZI)

As a first step, the high-frequency information of PAN and $MS'_k$ is extracted by using a high-frequency Laplacian filter:
$Laplacian\ kernel = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$   (15)
As a result, the high-pass PAN ($HP_{PAN}$) and the high-pass $MS'_k$ ($HP_{MS'_k}$) are obtained and used to calculate ZI as follows [66]:
$ZI = \dfrac{cov(HP_{PAN}, HP_{MS'_k})}{\sqrt{var(HP_{PAN})\,var(HP_{MS'_k})}}$   (16)
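Equations (15) and (16) translate into the following sketch, using SciPy's convolution (the boundary handling of the GIS implementation is not specified here and is therefore an assumption):

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=np.float64)   # Equation (15)

def zhou_index(pan, fused_k):
    """Zhou's spatial index: correlation between the high-pass filtered PAN
    and the high-pass filtered fused band."""
    hp_pan = convolve(pan.astype(np.float64), LAPLACIAN)
    hp_ms = convolve(fused_k.astype(np.float64), LAPLACIAN)
    x, y = hp_pan.ravel(), hp_ms.ravel()
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return cov / np.sqrt(x.var() * y.var())               # Equation (16)
```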

2.4.4. Spatial ERGAS (S-ERGAS)

By introducing a spatial RMSE, it is possible to redefine ERGAS as a spatial index [67]. The spatial RMSE is obtained as follows:
$Spatial\ RMSE = \sqrt{\dfrac{1}{M \cdot N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( PAN(i,j) - MS'_k(i,j) \right)^{2}}$   (17)

3. Results and Discussion

3.1. Classification Results

The application of NDVI generates a synthetic band, as shown in Figure 5.
By applying the MLC, two classes (vegetation/non-vegetation) are identified, and a regular grid of half a kilometer is applied to the classified dataset, as shown in Figure 6.
The four categories are identified based on the percentage of vegetation pixels according to the following thresholds:
  • Natural area: 75–100%;
  • Rural area: 50–75%;
  • Semi-urban area: 25–50%;
  • Urban area: 0–25%.
Figure 7 shows the results of the classification on the four selected frames.

3.2. Pan-Sharpening Results

Table 2 shows the results of the fusions for frame 1.
In frame 1, the HPF method presents the best results in terms of spectral correlation with the original images, since it provides the highest values of UIQI and the lowest ERGAS. Good results are also provided by SFIM, BTF and IHSF. The UIQI values vary significantly between bands in each method. In terms of spatial similarity, BT, BTF, IHSF and GS1 present the best results. Typically, the band that provides the highest UIQI has the lowest ZI and vice versa, as can be seen, for example, in HPF, GS2, GSF and IHSF. Overall, we can state that BTF and IHSF are the most performing methods in frame 1.
Table 3 shows the results of the fusions for frame 2.
In frame 2, HPF, SFIM and GS2 provide the best results in terms of spectral correlation with the original images; in particular, HPF is the most performing one in terms of both UIQI and ERGAS. BT and BTF show the greatest spatial similarity, although the S-ERGAS values do not vary much from method to method. GS1 presents a particular case, since it provides the lowest S-ERGAS values but also the lowest ZI; this demonstrates the importance of taking into account different indices. Overall, it can be said that HPF, SFIM and GS2 are the most performing methods for frame 2. This area is characterized by a lower variability in terms of features compared to frame 1; in particular, it presents two uniformly cultivated zones that supply the highest values of UIQI, because in this case the pan-sharpening application does not introduce a high level of shape enhancement, so the similarity between the original MS image and the fusion products remains high. This effect is more evident in the NIR band due to the higher reflectance in the presence of both soil and vegetation [68].
Table 4 shows the results of the fusions for frame 3.
In frame 3, HPF and GSF are the best methods in terms of spectral correlation, followed by IHSF and BTF. Exceptional results are presented by BT in terms of spatial similarity, but BTF, IHSF and GSF also provide good results. SFIM, HPF and GS2 present the worst results in spatial similarity due to the relatively high variability of the images compared with frames 1 and 2: these methods use a low-pass filter to degrade the PAN, so the borders of the features are less defined [69]. Overall, BTF, IHSF and GSF are the most performing methods in frame 3.
Table 5 shows the results of the fusions for frame 4.
In frame 4, IHSF and HPF are the best methods in terms of spectral correlation. Additionally, GSF and BTF provide good results. BT is the most performing technique in terms of spatial similarity, followed by BTF, IHSF and GSF. Overall, IHSF is the best method in frame 4. This area presents the greatest variability in terms of features, so the methods that apply the low-pass filter (HPF, SFIM and GS2) do not perform well in spatial terms. Frame 4 can be seen as the opposite situation with respect to frame 2: the former is a completely urbanized area, including mostly buildings, roads and few trees, while the latter includes two very large homogeneous cultivated areas with very few variations.
As already stated in other studies, comparing the frames shows that it is not possible to choose a pan-sharpening technique a priori, since each method performs in a different way in relation to the land cover [70,71].
What emerges from this study can be summarized as follows:
  • Weighted methods always perform better than the respective unweighted techniques in terms of spectral correlation;
  • Weighted methods tend to maintain the ZI and S-ERGAS values of their respective unweighted methods;
  • Low-pass filter-based techniques perform quite well on land covers with low variability, but tend to perform poorly on variegated land covers;
  • Low-pass filter-based techniques never present the best performance in terms of spatial similarity with PAN.
As already stated in other studies [72], usually, when pan-sharpening is applied, the better the image spatial quality, the worse the image spectral quality, and vice versa. In order to find a compromise, in this paper we apply the multicriteria analysis [73] approach proposed by Alcaras et al. in 2021 [74], which consists of the following steps (a minimal computational sketch is given after the list):
  • A ranking of the methods is made for each indicator, assigning a score from 1 to 9.
  • The ranks obtained for the two spectral indicators are then averaged, as are those for the two spatial indicators.
  • A general ranking is obtained by averaging the two results.
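A minimal sketch of this scoring scheme is given below; the tie handling and the function layout are assumptions written from the three steps above, and the column order follows the tables (UIQIM, ERGAS, ZIM, S-ERGAS):

```python
import numpy as np

def multicriteria_score(values, higher_is_better=(True, False, True, False)):
    """Rank the methods (1 = best) for each indicator, average the two spectral
    ranks and the two spatial ranks separately, then average the two means.
    values: array of shape (n_methods, 4) ordered as [UIQIM, ERGAS, ZIM, S-ERGAS].
    Returns one score per method; lower means a better overall position.
    Ties are not handled in this sketch."""
    values = np.asarray(values, dtype=np.float64)
    ranks = np.empty_like(values)
    for j, better_high in enumerate(higher_is_better):
        order = np.argsort(-values[:, j]) if better_high else np.argsort(values[:, j])
        ranks[order, j] = np.arange(1, values.shape[0] + 1)
    spectral = ranks[:, :2].mean(axis=1)   # UIQIM and ERGAS ranks
    spatial = ranks[:, 2:].mean(axis=1)    # ZIM and S-ERGAS ranks
    return (spectral + spatial) / 2.0
```

Called once per frame with the aggregate values reported in Tables 2–5, sorting these scores produces a ranking of the kind reported in Table 6.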
Finally, the general ranking of the methods for each frame is shown in Table 6, where rank 1 is assigned to the method in the first position, rank 2 to the method in the second position, and so on.
To better understand the performances of each method, Figure 8 shows the trend of the pan-sharpening algorithms in each frame.
The results show excellent performances of the BTF algorithm in the most vegetated areas, i.e., the rural and natural areas. In the semi-urban area, the best results are achieved by IHSF, while in the urban area the most efficient algorithm is GSF. In general, the "fast" methods are the most reliable, especially BTF, which is the only method to consistently rank in the top three in all frames. The IHS method does not present good results in areas where the vegetation is higher than 50% (rural and natural), and, on the contrary, SFIM does not present good results in areas with a low vegetation rate, i.e., below 50% (semi-urban and urban).
Finally, we would like to underline that each of the indicators used manages to highlight the level of spectral or spatial quality of the pan-sharpened image. There are no studies in the literature that identify an indicator as the most performing; for this reason, four indicators (i.e., UIQI, ERGAS, ZI, S-ERGAS) are considered in our experiments. The multicriteria analysis approach adopted is believed to strike the right balance by bringing the different indicators into play.
Accordingly, a visual inspection of the resulting images is useful to evaluate the color preservation quality and the spatial improvements in object representation [75]. Figure 9, Figure 10, Figure 11 and Figure 12 show the RGB composition of the multispectral pan-sharpened images obtained for the least and best-performing methods in each frame (according to Table 6) compared with the corresponding initial RGB image composition.
By means of visual inspection, it is evident that, in general, the fusion process leads to an enhancement of the geometric resolution.
From the analysis of frame 1, it is evident that the application of the IHS algorithm (the least performing) generates darker colors and a yellowing of the RGB image, although the enhanced geometric resolution is still appreciable. The result produced by applying the BTF method (the best in this frame) instead generates an RGB image with colors closer to those of the initial RGB composition.
The geometries of the second frame are different compared to the first one, but the scene still shows a highly vegetated area. The IHS method (in this case, the worst one) again generates darkened images and presents some areas in which the grid typical of the MS image is visible. The grid completely disappears in the RGB image produced using the BTF method (the best in this frame), which presents brighter colors in its RGB composition.
The detail of frame 3 shows a reduced presence of vegetation compared to the first two frames. In the RGB composition of the SFIM method (in this case, the least performing), the pixels that form the road present the typical grid of the MS image, while adequately preserving the color of the scene. On the other hand, the square pattern is not visible in the RGB composition of the IHSF method (the most performing of frame 3).
The detail relating to frame 4 presents a football field and the roof of a church: the results obtained are satisfactory; as can be seen, the lines of the football field are clearly visible in the pan-sharpened images, which is not possible in the RGB composition of the initial images. The SFIM method (the least performing in this frame) presents the grid typical of MS images along the edges of the roof of the church, which, on the contrary, cannot be found in the RGB composition of the GSF method (the best in this frame). However, the dome of the church presents the peculiar square pattern in both the SFIM and the GSF results.

4. Limitations and Future Research Directions

The methods analyzed in this article fall within the classical approaches to pan-sharpening and highlight how the performances differ in relation to the type of land cover. However, the limitations of the approach are evident, and we can identify at least three shortcomings.
Firstly, the analysis is conducted for a single type of image, GeoEye-1: even if the methods could be applied to other types of sensors operating in the same bands (such as IKONOS and Pléiades), the acquisition wavelength ranges may be slightly different. Consequently, the results should be properly analyzed and evaluated for the products of each sensor type.
Secondly, the methods are applied to four frames, chosen according to the amount of vegetation present in each of them. The analysis of the results would be much more robust if the methods were applied to a greater number of samples.
Finally, the pan-sharpening techniques analyzed are classical ones and do not cover the most recent pan-sharpening trends, which generally rely on deep learning and neural network techniques.
In order to overcome the three limitations identified above, we will conduct a larger study in the future that will cover more types of high-resolution satellite images, analyze a greater number of samples, and also include the most recent pan-sharpening trends. Particularly, approaches to synthesize high-resolution MS based on deep learning and CNN will be considered, analyzing and comparing innovative methods such as those proposed by Jeong and Kim [76], Xu et al. [77] and Liu et al. [78].

5. Conclusions

In this paper, nine pan-sharpening techniques (IHS, IHSF, BT, BTF, GS1, GSF, GS2, SFIM, HPF) are compared in relation to four specific areas. Each area has different land covers and responds in a distinct way to the pan-sharpening applications. Particularly, the following representative zones are considered: natural area, rural area, semi-urban area and urban area. The frames are selected from a GeoEye-1 dataset localized in the Campania region (Italy), north of Naples.
The classification of each area is based on the amount of vegetation present, evaluated in practice as the percentage of pixels classified as vegetation after the application of NDVI and MLC.
To evaluate the performance of each method, visual, spectral and spatial analyses are carried out. Visual analysis is performed considering RGB true color composition, while spectral and spatial analyses are based on quality indices calculation: UIQI and ERGAS for spectral inspection and ZI and S-ERGAS for spatial similarity evaluation.
Each quality index provides a different indication, and for this reason, a multicriteria analysis is carried out to identify the best algorithm in each area.
The performance of each pan-sharpening method differs in relation to the considered frame; in other words, the selected frames do not supply the same ranking of method performances.
The introduction of the weights to define the synthetic panchromatic image in some methods (BTF, IHSF, GSF) allows us to enhance the resulting performance in terms of spectral correlation, and to maintain the spatial similarity to the panchromatic image ensured by the respective unweighted methods (BT, IHS, GS).
Pan-sharpening methods based on low-pass filter application (GS2, SFIM, HPF) do not provide optimal results for the selected frames. However, SFIM supplies good performances for the natural area, which presents a homogeneous land cover, while giving poor results for variegated land covers, i.e., the urban and semi-urban areas. Similarly poor results are achieved by the GS2 algorithm in urban and semi-urban areas. HPF trends are in line with the previous two methods, but it generally provides better performances. In particular, the similarity of the fused products resulting from these methods with the PAN image is low.
The results of this study suggest that, in relation to GeoEye-1 images, the best algorithms for pan-sharpening are: BTF for rural areas, as well as for natural areas; IHSF for semi-urban areas; and GSF for urban areas. When the analyzed scenes show the co-presence of different types of areas, the most effective method is BTF, as it is able to provide acceptable results even when it is not the most performing one. Considering the variability of the areas that may occur, as well as the specificity of the used sensors, i.e., acquisition bands and spectral response, a comparison of different pan-sharpening methods is recommended, and the multicriteria approach adopted in this article is useful to select the most performing one.
To include all of the abovementioned considerations in the pan-sharpening process, automating the comparison of different approaches is suggested to facilitate and support the user in selecting the most performing algorithm. To reduce the processing and calculation time, a first selection of the algorithms to be compared can be performed by taking into account the results of this study; in fact, our experiments already highlight the performance of some of the most efficient and widespread methods.
Finally, future work will investigate convolutional neural network algorithms based on deep learning to implement pan-sharpening in high and very high resolution images.

Author Contributions

The authors contributed equally to this work. E.A. and C.P. conceived the article and designed the experiments; E.A. conducted the bibliographic research and C.P. organized data collection; E.A. carried out experiments on pan-sharpening applications and C.P. performed the quality tests; all authors took part in the analyses and in writing the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study’s data are available upon request from the corresponding authors for academic research and noncommercial purposes only. Restrictions apply to derivative images and models trained using the data, and proper referencing is required.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Carvalho Filho, J.G.N.; Carvalho, E.Á.N.; Molina, L.; Freire, E.O. The impact of parametric uncertainties on mobile robots velocities and pose estimation. IEEE Access 2019, 7, 69070–69086. [Google Scholar] [CrossRef]
  2. Hall, D.L.; Llinas, J. An introduction to multisensor data fusion. Proc. IEEE 1997, 85, 6–23. [Google Scholar] [CrossRef] [Green Version]
  3. Kong, L.; Peng, X.; Chen, Y.; Wang, P.; Xu, M. Multi-sensor measurement and data fusion technology for manufacturing process monitoring: A literature review. Int. J. Extrem. Manuf. 2020, 2, 022001. [Google Scholar] [CrossRef]
  4. Stateczny, A.; Bodus-Olkowska, I. Sensor data fusion techniques for environment modelling. In Proceedings of the 2015 16th International Radar Symposium (IRS), Dresden, Germany, 24–26 June 2015; pp. 1123–1128. [Google Scholar] [CrossRef]
  5. Keenan, T.F.; Davidson, E.; Moffat, A.M.; Munger, W.; Richardson, A.D. Using model-data fusion to interpret past trends, and quantify uncertainties in future projections, of terrestrial ecosystem carbon cycling. Glob. Change Biol. 2012, 18, 2555–2569. [Google Scholar] [CrossRef] [Green Version]
  6. Fernández Prieto, D. Change detection in multisensor remote-sensing data for desertification monitoring. In Proceedings of the Third International Symposium on Retrieval of Bio- and Geophysical Parameters from SAR Data for Land Applications, Sheffield, UK, 11–14 September 2001; Volume 475. [Google Scholar]
  7. Schultz, M.; Clevers, J.G.; Carter, S.; Verbesselt, J.; Avitabile, V.; Quang, H.V.; Herold, M. Performance of vegetation indices from Landsat time series in deforestation monitoring. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 318–327. [Google Scholar] [CrossRef]
  8. Ge, L.; Li, X.; Wu, F.; Turner, I.L. Coastal erosion mapping through intergration of SAR and Landsat TM imagery. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium-IGARSS, Melbourne, VIC, Australia, 21–26 July 2013; pp. 2266–2269. [Google Scholar] [CrossRef]
  9. Sakellariou, S.; Cabral, P.; Caetano, M.; Pla, F.; Painho, M.; Christopoulou, O.; Sfougaris, A.; Dalezios, N.; Vasilakos, C. Remotely sensed data fusion for spatiotemporal geostatistical analysis of forest fire hazard. Sensors 2020, 20, 5014. [Google Scholar] [CrossRef] [PubMed]
  10. Hu, Y.; Jia, G.; Pohl, C.; Feng, Q.; He, Y.; Gao, H.; Feng, J. Improved monitoring of urbanization processes in China for regional climate impact assessment. Environ. Earth Sci. 2015, 73, 8387–8404. [Google Scholar] [CrossRef]
  11. Baiocchi, V.; Brigante, R.; Dominici, D.; Milone, M.V.; Mormile, M.; Radicioni, F. Automatic three-dimensional features extraction: The case study of L’Aquila for collapse identification after April 06, 2009 earthquake. Eur. J. Remote Sens. 2014, 47, 413–435. [Google Scholar] [CrossRef]
  12. Lasaponara, R.; Masini, N. Satellite Remote Sensing: A New Tool for Archaeology; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; p. 16. [Google Scholar] [CrossRef] [Green Version]
  13. Mandanici, E.; Bitelli, G. Preliminary comparison of sentinel-2 and landsat 8 imagery for a combined use. Remote Sens. 2016, 8, 1014. [Google Scholar] [CrossRef] [Green Version]
  14. Bork, E.W.; Su, J.G. Integrating LIDAR data and multispectral imagery for enhanced classification of rangeland vegetation: A meta analysis. Remote Sens. Environ. 2007, 111, 11–24. [Google Scholar] [CrossRef]
  15. Amani, M.; Salehi, B.; Mahdavi, S.; Granger, J.; Brisco, B. Wetland classification in Newfoundland and Labrador using multi-source SAR and optical data integration. GIScience Remote Sens. 2017, 54, 779–796. [Google Scholar] [CrossRef]
  16. Mateo-Sanchis, A.; Piles, M.; Muñoz-Marí, J.; Adsuara, J.E.; Pérez-Suay, A.; Camps-Valls, G. Synergistic integration of optical and microwave satellite data for crop yield estimation. Remote Sens. Environ. 2019, 234, 111460. [Google Scholar] [CrossRef] [PubMed]
  17. Ehlers, M.; Klonus, S.; Johan Åstrand, P.; Rosso, P. Multi-sensor image fusion for pansharpening in remote sensing. Int. J. Image Data Fusion 2010, 1, 25–45. [Google Scholar] [CrossRef]
  18. Parente, C.; Santamaria, R. Increasing geometric resolution of data supplied by Quickbird multispectral sensors. Sens. Transducers 2013, 156, 111. [Google Scholar]
  19. Wang, Z.; Ziou, D.; Armenakis, C.; Li, D.; Li, Q. A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402. [Google Scholar] [CrossRef]
  20. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312. [Google Scholar] [CrossRef] [Green Version]
  21. Garzelli, A.; Nencini, F.; Alparone, L.; Aiazzi, B.; Baronti, S. Pan-sharpening of multispectral images: A critical review and comparison. In Proceedings of the IGARSS 2004. 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; Volume 1. [Google Scholar] [CrossRef]
  22. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2565–2586. [Google Scholar] [CrossRef]
  23. Falchi, U. IT tools for the management of multi-representation geographical information. Int. J. Eng. Technol. 2018, 7, 65–69. [Google Scholar] [CrossRef] [Green Version]
  24. Rahmani, S.; Strait, M.; Merkurjev, D.; Moeller, M.; Wittman, T. An adaptive IHS pan-sharpening method. IEEE Geosci. Remote Sens. Lett. 2010, 7, 746–750. [Google Scholar] [CrossRef] [Green Version]
  25. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. 2016, 8, 594. [Google Scholar] [CrossRef] [Green Version]
  26. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307. [Google Scholar] [CrossRef] [Green Version]
  27. Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.H.; Liao, Q. Deep learning for single image super-resolution: A brief review. IEEE Trans. Multimed. 2019, 21, 3106–3121. [Google Scholar] [CrossRef] [Green Version]
  28. Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; Wu, W. Feedback network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3867–3876. [Google Scholar]
  29. Rohith, G.; Kumar, L.S. Super-Resolution Based Deep Learning Techniques for Panchromatic Satellite Images in Application to Pansharpening. IEEE Access 2020, 8, 162099–162121. [Google Scholar] [CrossRef]
  30. Vivone, G. Robust band-dependent spatial-detail approaches for panchromatic sharpening. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6421–6433. [Google Scholar] [CrossRef]
  31. Xiong, Z.; Guo, Q.; Liu, M.; Li, A. Pan-sharpening based on convolutional neural network by using the loss function with no-reference. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 897–906. [Google Scholar] [CrossRef]
  32. Jones, E.G.; Wong, S.; Milton, A.; Sclauzero, J.; Whittenbury, H.; McDonnell, M.D. The impact of pan-sharpening and spectral resolution on vineyard segmentation through machine learning. Remote Sens. 2020, 12, 934. [Google Scholar] [CrossRef] [Green Version]
  33. Vivone, G.; Dalla Mura, M.; Garzelli, A.; Restaino, R.; Scarpa, G.; Ulfarsson, M.O.; Chanussot, J. A new benchmark based on recent advances in multispectral pansharpening: Revisiting pansharpening with classical and emerging pansharpening methods. IEEE Geosci. Remote Sens. Mag. 2020, 9, 53–81. [Google Scholar] [CrossRef]
  34. Deng, L.J.; Vivone, G.; Paoletti, M.E.; Scarpa, G.; He, J.; Zhang, Y.; Plaza, A. Machine Learning in Pansharpening: A benchmark, from shallow to deep networks. IEEE Geosci. Remote Sens. Mag. 2022, 10, 279–315. [Google Scholar] [CrossRef]
  35. Jawak, S.D.; Luis, A.J. Improved land cover mapping using high resolution multiangle 8-band WorldView-2 satellite remote sensing data. J. Appl. Remote Sens. 2013, 7, 073573. [Google Scholar] [CrossRef]
  36. Wang, P.; Zhang, L.; Zhang, G.; Bi, H.; Dalla Mura, M.; Chanussot, J. Superresolution land cover mapping based on pixel-, subpixel-, and superpixel-scale spatial dependence with pansharpening technique. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4082–4098. [Google Scholar] [CrossRef]
  37. Li, H.; Jing, L.; Tang, Y. Assessment of pansharpening methods applied to WorldView-2 imagery fusion. Sensors 2017, 17, 89. [Google Scholar] [CrossRef] [PubMed]
  38. Medina, A.; Marcello, J.; Rodriguez, D.; Eugenio, F.; Martin, J. Quality evaluation of pansharpening techniques on different land cover types. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 5442–5445. [Google Scholar] [CrossRef]
  39. QGIS. Available online: https://www.qgis.org/en/site/about/index.html (accessed on 2 September 2021).
  40. SAGA GIS. Available online: http://www.saga-gis.org/en/index.html (accessed on 2 September 2021).
  41. Planetek Italia–GeoEye-1. Available online: https://www.planetek.it/prodotti/tutti_i_prodotti/geoeye_1 (accessed on 2 September 2021).
  42. Eo Portal–GeoEye-1. Available online: https://earth.esa.int/web/eoportal/satellite-missions/g/geoeye-1 (accessed on 2 September 2021).
  43. Meng, X.; Xiong, Y.; Shao, F.; Shen, H.; Sun, W.; Yang, G.; Zhang, H. A large-scale benchmark data set for evaluating pansharpening performance: Overview and implementation. IEEE Geosci. Remote Sens. Mag. 2020, 9, 18–52. [Google Scholar] [CrossRef]
  44. Rouse, J.W., Jr.; Haas, R.H.; Deering, D.W.; Schell, J.A.; Harlan, J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; NASA: Washington, DC, USA, 1974; No. E75-10354. [Google Scholar]
  45. Alcaras, E.; Amoroso, P.P.; Parente, C.; Prezioso, G. Remotely Sensed Image Fast Classification and Smart Thematic Map Production. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 46, 43–50. [Google Scholar] [CrossRef]
  46. Sisodia, P.S.; Tiwari, V.; Kumar, A. Analysis of supervised maximum likelihood classification for remote sensing image. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Delhi, India, 24–27 September 2014; pp. 1–4. [Google Scholar] [CrossRef]
  47. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L.M. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef] [Green Version]
  48. Pushparaj, J.; Hegde, A.V. Evaluation of pan-sharpening methods for spatial and spectral quality. Appl. Geomat. 2017, 9, 1–12. [Google Scholar] [CrossRef]
  49. Carper, W.J.; Lillesand, T.M.; Kiefer, R.W. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Rem. S. 1990, 56, 459–467. [Google Scholar]
  50. Tu, T.M.; Su, S.; Shyu, H.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186. [Google Scholar] [CrossRef]
  51. Parente, C.; Pepe, M. Influence of the weights in IHS and Brovey methods for pan-sharpening WorldView-3 satellite images. Int. J. Eng. Technol 2017, 6, 71–77. [Google Scholar] [CrossRef] [Green Version]
  52. Tu, T.M.; Huang, P.S.; Hung, C.L.; Chang, C.P. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
  53. Švab, A.; Oštir, K. High-resolution image fusion: Methods to preserve spectral and spatial resolution. Photogramm. Eng. Remote Sens. 2006, 72, 565–572. [Google Scholar] [CrossRef]
  54. Pohl, C.; Van Genderen, J.L. Review article multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19, 823–854. [Google Scholar] [CrossRef] [Green Version]
  55. Karakus, P.; Karabork, H. Effect of pansharpened image on some of pixel based and object based classification accuracy. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; Volume XLI-B7. [Google Scholar] [CrossRef]
  56. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6011875, 4 January 2000. [Google Scholar]
  57. Maurer, T. How to pan-sharpen images using the Gram-Schmidt pan-sharpen method-a recipe. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 1, W1. [Google Scholar] [CrossRef] [Green Version]
  58. Liu, J.G. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472. [Google Scholar] [CrossRef]
  59. Chavez, P.S., Jr.; Bowell, J.A. Comparison of the spectral information content of Landsat Thematic Mapper and SPOT for three different sites in the Phoenix, Arizona region. Photogramm. Eng. Remote Sens. 1988, 54, 1699–1708. [Google Scholar]
  60. Karathanassi, V.; Kolokousis, P.; Ioannidou, S. A comparison study on fusion methods using evaluation indicators. Int. J. Remote Sens. 2007, 28, 2309–2341. [Google Scholar] [CrossRef]
  61. Shahdoosti, H.R.; Ghassemian, H. Fusion of MS and PAN images preserving spectral quality. IEEE Geosci. Remote Sens. Lett. 2014, 12, 611–615. [Google Scholar] [CrossRef]
  62. Wang, Z.; Bovik, A.C. A universal image quality index. Signal Process. Lett. IEEE 2002, 9, 81–84. [Google Scholar] [CrossRef]
  63. Nikolakopoulos, K.; Oikonomidis, D. Quality assessment of ten fusion techniques applied on Worldview-2. Eur. J. Remote Sens. 2015, 48, 141–167. [Google Scholar] [CrossRef]
  64. Wald, L. Quality of High Resolution Synthesised Images: Is There a Simple Criterion. In Proceedings of the Third Conference Fusion of Earth Data: Merging Point Measurements, Raster Maps and Remotely Sensed Images, Sophia Antipolis, France, 26–28 January 2000; pp. 99–103. [Google Scholar]
  65. Ranchin, T.; Wald, L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation. Photogramm. Eng. Remote Sens. 2000, 66, 49–61. [Google Scholar]
  66. Zhou, J.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757. [Google Scholar] [CrossRef]
  67. Lillo-Saavedra, M.; Gonzalo, C.; Arquero, A.; Martinez, E. Fusion of multispectral and panchromatic satellite sensor imagery based on tailored filtering in the Fourier domain. Int. J. Remote Sens. 2005, 26, 1263–1268. [Google Scholar] [CrossRef]
  68. Pleniou, M.; Koutsias, N. Sensitivity of spectral reflectance values to different burn and vegetation ratios: A multi-scale approach applied in a fire affected area. ISPRS J. Photogramm. Remote Sens. 2013, 79, 199–210. [Google Scholar] [CrossRef]
  69. Lee, J.; Lee, C. Fast and efficient panchromatic sharpening. IEEE Trans. Geosci. Remote Sens. 2009, 48, 155–163. [Google Scholar] [CrossRef]
  70. Cánovas-García, F.; Pesántez-Cobos, P.; Alonso-Sarría, F. fusionImage: An R package for pan-sharpening images in open source software. Trans. GIS 2020, 24, 1185–1207. [Google Scholar] [CrossRef]
  71. Alcaras, E.; Della Corte, V.; Ferraioli, G.; Martellato, E.; Palumbo, P.; Parente, C.; Rotundi, A. Comparison of different pan-sharpening methods applied to IKONOS imagery. Geogr. Tech. 2021, 16, 198–210. [Google Scholar] [CrossRef]
  72. Amolins, K.; Zhang, Y.; Dare, P. Wavelet based image fusion techniques—An introduction, review and comparison. ISPRS J. Photogramm. Remote Sens. 2007, 62, 249–263. [Google Scholar] [CrossRef]
  73. Dodgson, J.S.; Spackman, M.; Pearman, A.; Phillips, L.D. Multi-Criteria Analysis: A Manual; Department for Communities and Local Government: London, UK, 2009.
  74. Alcaras, E.; Parente, C.; Vallario, A. Automation of Pan-Sharpening Methods for Pléiades Images Using GIS Basic Functions. Remote Sens. 2021, 13, 1550. [Google Scholar] [CrossRef]
  75. Saroglu, E.; Bektas, F.; Musaoglu, N.; Goksel, C. Fusion of multisensory sensing data: Assessing the quality of resulting images. ISPRS Arch. 2004, 35, 575–579. [Google Scholar]
  76. Jeong, D.; Kim, Y. Deep learning based pansharpening using a Laplacian pyramid. Asian Conf. Remote Sens. ACRS 2019, 40, 1–8. [Google Scholar]
  77. Xu, S.; Zhang, J.; Zhao, Z.; Sun, K.; Liu, J.; Zhang, C. Deep gradient projection networks for pan-sharpening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 20–25 June 2021; pp. 1366–1375. [Google Scholar] [CrossRef]
  78. Liu, Q.; Meng, X.; Shao, F.; Li, S. Supervised-unsupervised combined deep convolutional neural networks for high-fidelity pansharpening. Inf. Fusion 2023, 89, 292–304. [Google Scholar] [CrossRef]
Figure 1. Spectral response of GeoEye-1 MS and PAN sensors as stated by the European Space Agency [42].
Figure 2. The geolocalization of the study areas in Italy.
Figure 3. Location of the study areas and the GeoEye-1 dataset.
Figure 4. The study areas, from the left: frame 1—(a) natural area; frame 2—(b) rural area; frame 3—(c) semi-urban area; frame 4—(d) urban area.
Figure 5. NDVI applied to the GeoEye-1 dataset.
Figure 6. The result of the classification: green pixels identify vegetation.
Figure 7. The classified areas, from the left: natural area (80.67% of vegetation); rural area (52.44% of vegetation); semi-urban area (41.16% of vegetation); urban area (17.59% of vegetation).
Figure 8. Trends of the pan-sharpening algorithms in each frame.
Figure 9. A detail of frame 1: on the left, the RGB composition of the MS images; in the middle, the RGB composition of IHS pan-sharpening; on the right, the RGB composition of BTF pan-sharpening.
Figure 10. A detail of frame 2: on the left, the RGB composition of the MS images; in the middle, the RGB composition of IHS pan-sharpening; on the right, the RGB composition of BTF pan-sharpening.
Figure 11. A detail of frame 3: on the left, the RGB composition of the MS images; in the middle, the RGB composition of SFIM pan-sharpening; on the right, the RGB composition of IHSF pan-sharpening.
Figure 12. A detail of frame 4: on the left, the RGB composition of the MS images; in the middle, the RGB composition of SFIM pan-sharpening; on the right, the RGB composition of GSF pan-sharpening.
Table 1. Characteristics of GeoEye-1 images.
Bands | Wavelength (nm) | Resolution (m)
Panchromatic | 450–800 | 0.5
Band 1—Blue | 450–510 | 2
Band 2—Green | 510–580 | 2
Band 3—Red | 655–690 | 2
Band 4—Near Infrared (NIR) | 780–920 | 2
Table 2. Pan-sharpening quality indices values for frame 1.
Method | Bands | UIQI | UIQIM | ERGAS | ZI | ZIM | S-ERGAS
BT | Blue | 0.812 | 0.842 | 7.458 | 0.950 | 0.923 | 6.250
BT | Green | 0.885 | – | – | 0.964 | – | –
BT | Red | 0.929 | – | – | 0.849 | – | –
BT | NIR | 0.743 | – | – | 0.930 | – | –
BTF | Blue | 0.656 | 0.820 | 4.280 | 0.978 | 0.928 | 6.551
BTF | Green | 0.829 | – | – | 0.990 | – | –
BTF | Red | 0.953 | – | – | 0.907 | – | –
BTF | NIR | 0.842 | – | – | 0.838 | – | –
IHS | Blue | 0.783 | 0.794 | 9.952 | 0.922 | 0.860 | 6.717
IHS | Green | 0.746 | – | – | 0.947 | – | –
IHS | Red | 0.708 | – | – | 0.905 | – | –
IHS | NIR | 0.937 | – | – | 0.664 | – | –
IHSF | Blue | 0.758 | 0.849 | 4.390 | 0.983 | 0.882 | 6.674
IHSF | Green | 0.789 | – | – | 0.990 | – | –
IHSF | Red | 0.894 | – | – | 0.975 | – | –
IHSF | NIR | 0.954 | – | – | 0.578 | – | –
GS1 | Blue | 0.904 | 0.825 | 9.553 | 0.914 | 0.899 | 6.672
GS1 | Green | 0.836 | – | – | 0.946 | – | –
GS1 | Red | 0.676 | – | – | 0.908 | – | –
GS1 | NIR | 0.884 | – | – | 0.826 | – | –
GSF | Blue | 0.827 | 0.860 | 4.981 | 0.983 | 0.777 | 6.748
GSF | Green | 0.827 | – | – | 0.989 | – | –
GSF | Red | 0.801 | – | – | 0.986 | – | –
GSF | NIR | 0.984 | – | – | 0.149 | – | –
GS2 | Blue | 0.861 | 0.877 | 4.841 | 0.956 | 0.860 | 7.363
GS2 | Green | 0.840 | – | – | 0.960 | – | –
GS2 | Red | 0.848 | – | – | 0.956 | – | –
GS2 | NIR | 0.957 | – | – | 0.568 | – | –
SFIM | Blue | 0.745 | 0.851 | 4.018 | 0.963 | 0.896 | 7.059
SFIM | Green | 0.865 | – | – | 0.946 | – | –
SFIM | Red | 0.965 | – | – | 0.823 | – | –
SFIM | NIR | 0.828 | – | – | 0.852 | – | –
HPF | Blue | 0.845 | 0.898 | 3.797 | 0.959 | 0.851 | 7.085
HPF | Green | 0.859 | – | – | 0.956 | – | –
HPF | Red | 0.931 | – | – | 0.915 | – | –
HPF | NIR | 0.956 | – | – | 0.574 | – | –
Table 3. Pan-sharpening quality indices values for frame 2.
Method | Bands | UIQI | UIQIM | ERGAS | ZI | ZIM | S-ERGAS
BT | Blue | 0.895 | 0.891 | 7.215 | 0.939 | 0.905 | 4.975
BT | Green | 0.914 | – | – | 0.947 | – | –
BT | Red | 0.943 | – | – | 0.833 | – | –
BT | NIR | 0.812 | – | – | 0.903 | – | –
BTF | Blue | 0.664 | 0.837 | 4.226 | 0.979 | 0.917 | 5.152
BTF | Green | 0.790 | – | – | 0.988 | – | –
BTF | Red | 0.933 | – | – | 0.903 | – | –
BTF | NIR | 0.962 | – | – | 0.800 | – | –
IHS | Blue | 0.830 | 0.837 | 9.283 | 0.924 | 0.872 | 5.132
IHS | Green | 0.772 | – | – | 0.947 | – | –
IHS | Red | 0.767 | – | – | 0.917 | – | –
IHS | NIR | 0.977 | – | – | 0.700 | – | –
IHSF | Blue | 0.764 | 0.846 | 4.232 | 0.982 | 0.892 | 5.204
IHSF | Green | 0.748 | – | – | 0.990 | – | –
IHSF | Red | 0.881 | – | – | 0.978 | – | –
IHSF | NIR | 0.991 | – | – | 0.619 | – | –
GS1 | Blue | 0.991 | 0.919 | 6.928 | 0.546 | 0.634 | 4.947
GS1 | Green | 0.966 | – | – | 0.705 | – | –
GS1 | Red | 0.990 | – | – | 0.376 | – | –
GS1 | NIR | 0.729 | – | – | 0.910 | – | –
GSF | Blue | 0.755 | 0.808 | 5.324 | 0.982 | 0.874 | 5.237
GSF | Green | 0.760 | – | – | 0.990 | – | –
GSF | Red | 0.745 | – | – | 0.981 | – | –
GSF | NIR | 0.972 | – | – | 0.544 | – | –
GS2 | Blue | 0.925 | 0.940 | 3.093 | 0.931 | 0.876 | 5.682
GS2 | Green | 0.906 | – | – | 0.937 | – | –
GS2 | Red | 0.935 | – | – | 0.934 | – | –
GS2 | NIR | 0.993 | – | – | 0.702 | – | –
SFIM | Blue | 0.869 | 0.930 | 2.986 | 0.940 | 0.869 | 5.630
SFIM | Green | 0.909 | – | – | 0.914 | – | –
SFIM | Red | 0.967 | – | – | 0.786 | – | –
SFIM | NIR | 0.975 | – | – | 0.836 | – | –
HPF | Blue | 0.923 | 0.948 | 2.655 | 0.933 | 0.844 | 5.570
HPF | Green | 0.913 | – | – | 0.930 | – | –
HPF | Red | 0.959 | – | – | 0.893 | – | –
HPF | NIR | 0.996 | – | – | 0.620 | – | –
Table 4. Pan-sharpening results achieved for frame 3.
Method | Bands | UIQI | UIQIM | ERGAS | ZI | ZIM | S-ERGAS
BT | Blue | 0.793 | 0.828 | 7.271 | 0.963 | 0.946 | 6.134
BT | Green | 0.840 | – | – | 0.973 | – | –
BT | Red | 0.890 | – | – | 0.927 | – | –
BT | NIR | 0.787 | – | – | 0.923 | – | –
BTF | Blue | 0.762 | 0.832 | 5.969 | 0.974 | 0.936 | 7.015
BTF | Green | 0.830 | – | – | 0.987 | – | –
BTF | Red | 0.897 | – | – | 0.945 | – | –
BTF | NIR | 0.838 | – | – | 0.837 | – | –
IHS | Blue | 0.812 | 0.835 | 7.859 | 0.954 | 0.929 | 8.120
IHS | Green | 0.786 | – | – | 0.970 | – | –
IHS | Red | 0.837 | – | – | 0.953 | – | –
IHS | NIR | 0.904 | – | – | 0.840 | – | –
IHSF | Blue | 0.812 | 0.852 | 5.836 | 0.979 | 0.929 | 7.250
IHSF | Green | 0.804 | – | – | 0.986 | – | –
IHSF | Red | 0.872 | – | – | 0.972 | – | –
IHSF | NIR | 0.920 | – | – | 0.777 | – | –
GS1 | Blue | 0.832 | 0.835 | 8.053 | 0.952 | 0.930 | 8.398
GS1 | Green | 0.812 | – | – | 0.969 | – | –
GS1 | Red | 0.789 | – | – | 0.963 | – | –
GS1 | NIR | 0.905 | – | – | 0.838 | – | –
GSF | Blue | 0.827 | 0.862 | 5.893 | 0.978 | 0.897 | 7.488
GSF | Green | 0.828 | – | – | 0.985 | – | –
GSF | Red | 0.832 | – | – | 0.982 | – | –
GSF | NIR | 0.962 | – | – | 0.641 | – | –
GS2 | Blue | 0.816 | 0.843 | 6.611 | 0.926 | 0.881 | 8.590
GS2 | Green | 0.818 | – | – | 0.924 | – | –
GS2 | Red | 0.818 | – | – | 0.932 | – | –
GS2 | NIR | 0.919 | – | – | 0.740 | – | –
SFIM | Blue | 0.765 | 0.821 | 6.407 | 0.934 | 0.881 | 8.628
SFIM | Green | 0.827 | – | – | 0.902 | – | –
SFIM | Red | 0.890 | – | – | 0.836 | – | –
SFIM | NIR | 0.804 | – | – | 0.854 | – | –
HPF | Blue | 0.844 | 0.874 | 5.429 | 0.908 | 0.856 | 8.289
HPF | Green | 0.840 | – | – | 0.909 | – | –
HPF | Red | 0.894 | – | – | 0.864 | – | –
HPF | NIR | 0.918 | – | – | 0.743 | – | –
Table 5. Pan-sharpening results achieved for Frame 4.
Method | Bands | UIQI | UIQIM | ERGAS | ZI | ZIM | S-ERGAS
BT | Blue | 0.781 | 0.842 | 7.488 | 0.954 | 0.952 | 5.056
BT | Green | 0.853 | – | – | 0.969 | – | –
BT | Red | 0.892 | – | – | 0.948 | – | –
BT | NIR | 0.842 | – | – | 0.935 | – | –
BTF | Blue | 0.762 | 0.844 | 6.839 | 0.968 | 0.943 | 5.798
BTF | Green | 0.848 | – | – | 0.981 | – | –
BTF | Red | 0.896 | – | – | 0.953 | – | –
BTF | NIR | 0.868 | – | – | 0.872 | – | –
IHS | Blue | 0.810 | 0.846 | 7.731 | 0.954 | 0.940 | 6.319
IHS | Green | 0.802 | – | – | 0.967 | – | –
IHS | Red | 0.858 | – | – | 0.962 | – | –
IHS | NIR | 0.913 | – | – | 0.878 | – | –
IHSF | Blue | 0.817 | 0.860 | 6.650 | 0.975 | 0.938 | 6.100
IHSF | Green | 0.820 | – | – | 0.982 | – | –
IHSF | Red | 0.877 | – | – | 0.970 | – | –
IHSF | NIR | 0.926 | – | – | 0.827 | – | –
GS1 | Blue | 0.856 | 0.843 | 7.960 | 0.953 | 0.954 | 6.242
GS1 | Green | 0.834 | – | – | 0.966 | – | –
GS1 | Red | 0.821 | – | – | 0.969 | – | –
GS1 | NIR | 0.859 | – | – | 0.927 | – | –
GSF | Blue | 0.851 | 0.863 | 6.756 | 0.974 | 0.946 | 6.176
GSF | Green | 0.844 | – | – | 0.981 | – | –
GSF | Red | 0.842 | – | – | 0.979 | – | –
GSF | NIR | 0.913 | – | – | 0.848 | – | –
GS2 | Blue | 0.826 | 0.826 | 8.042 | 0.915 | 0.905 | 13.206
GS2 | Green | 0.821 | – | – | 0.914 | – | –
GS2 | Red | 0.801 | – | – | 0.930 | – | –
GS2 | NIR | 0.857 | – | – | 0.861 | – | –
SFIM | Blue | 0.723 | 0.800 | 8.407 | 0.912 | 0.873 | 9.166
SFIM | Green | 0.806 | – | – | 0.877 | – | –
SFIM | Red | 0.856 | – | – | 0.853 | – | –
SFIM | NIR | 0.817 | – | – | 0.850 | – | –
HPF | Blue | 0.832 | 0.868 | 6.648 | 0.913 | 0.865 | 8.075
HPF | Green | 0.838 | – | – | 0.906 | – | –
HPF | Red | 0.883 | – | – | 0.872 | – | –
HPF | NIR | 0.921 | – | – | 0.771 | – | –
Table 6. Ranking of the pan-sharpening methods in each frame.
Method | Rural | Semi-Urban | Urban | Natural
BT | 2 | 5 | 4 | 2
BTF | 1 | 3 | 3 | 1
IHS | 9 | 6 | 7 | 9
IHSF | 5 | 1 | 2 | 4
GS1 | 7 | 7 | 5 | 6
GSF | 8 | 2 | 1 | 8
GS2 | 4 | 8 | 8 | 7
SFIM | 6 | 9 | 9 | 3
HPF | 3 | 4 | 6 | 5

