Article

An Approach for the Pan Sharpening of Very High Resolution Satellite Images Using a CIELab Color Based Component Substitution Algorithm

by Alireza Rahimzadeganasl 1, Ugur Alganci 2 and Cigdem Goksel 2,*
1 Science and Technology Institute, Istanbul Technical University, Maslak, Istanbul 34469, Turkey
2 Department of Geomatics Engineering, Faculty of Civil Engineering, Istanbul Technical University, Maslak, Istanbul 34469, Turkey
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(23), 5234; https://doi.org/10.3390/app9235234
Submission received: 2 October 2019 / Revised: 28 November 2019 / Accepted: 29 November 2019 / Published: 1 December 2019
(This article belongs to the Special Issue Advances in Image Processing, Analysis and Recognition Technology)

Abstract:
Recent very high spatial resolution (VHR) remote sensing satellites provide high spatial resolution panchromatic (Pan) images in addition to multispectral (MS) images. The Pan sharpening process has a critical role in image processing tasks and geospatial information extraction from satellite images. In this research, a CIELab color-based component substitution Pan sharpening algorithm is proposed for Pan sharpening of Pleiades VHR images. The proposed method was compared with state-of-the-art Pan sharpening methods, such as IHS, Ehlers, NNDiffuse and GIHS. The selected study region included ten test sites, each of them representing complex landscapes with various land categories, to evaluate the performance of the Pan sharpening methods across varying land surface characteristics. The spatial and spectral performance of the Pan sharpening methods was evaluated with eleven accuracy metrics and visual interpretation. The results of the evaluation indicated that the proposed CIELab color-based method reached promising results and improved spectral and spatial information preservation.

1. Introduction

Earth observation satellites with very high resolution (VHR) optical sensors provide a multispectral (MS) image and a panchromatic (Pan) image that are acquired simultaneously, in order to provide an essential trade-off between spectral and spatial resolution, which is an important consideration for optical satellite sensors due to their physical limitations [1,2]. Spectral diversity is important for modeling the spectral characteristics of different land cover/use classes and identifying them; on the other hand, spatial information is crucial for identifying spatial details and geometric characteristics. To meet these requirements, the Pan image provides high spatial resolution with a single, wide spectral band, whereas the MS image provides several spectral bands in different sections of the electromagnetic spectrum with low spatial resolution.
The fusion of Pan and MS images that are acquired over the same area from the single or multiple satellite system is referred to as Pan sharpening. The main aim of Pan sharpening is to create a high-resolution MS image, having the spatial resolution of Pan but preserving the spectral characteristics of MS [3]. Unlike the challenging problem of multi-sensor data fusion, single sensor Pan sharpening does not need image-to-image registration, as the Pan and MS sensors are mounted on the same platform and the images are acquired simultaneously with well-matching viewing geometry [4]. Several earth observation satellites, such as Geo-Eye, OrbView, QuickBird, WorldView, Pléiades and Spot, have this capability, and bundle (PAN+MS) products from these systems can be used directly as the input for Pan sharpening.
An ideal Pan sharpening algorithm leads to the best performance in spatial and spectral domains by keeping the spatial resolution of a Pan image and preserving the spectral characteristics of an MS image. Launching of VHR sensors led to the appearance of diverse Pan sharpening methods in recent decades [5,6,7]. In addition, Pan sharpening is a primary image enhancement step for many remote sensing applications, such as object detection [8], change detection [9], image segmentation and clustering [10,11], scene interpretation and visual image analysis [12]. Commonly, image fusion can be classified into three levels—pixel level, feature level and decision or knowledge level—while the Pan sharpening is categorized as a sub-pixel level process [13,14].
Pan sharpening algorithms can be divided into four groups: (1) rationing methods; (2) injection-based methods; (3) model-based methods; and (4) component substitution (CS) methods. Of these, CS algorithms are more practical because of their calculation speed and performance compatibility. The CS methods can be categorized into four classes according to the transform matrix used in the algorithm: principal component analysis (PCA) [15,16], intensity-hue-saturation (IHS) [7,17], Gram–Schmidt (GS) [18,19] and generalized component substitution (GCS) [5,20]. The common and general limitation of all CS-based methods is distortion in spectral characteristics when compared to the original MS image [21,22].
This research proposes a robust CS method for Pan sharpening of Pleiades VHR satellite images with the aim of enhanced spatial resolution and reduced spectral distortion. The principle of the proposed method is similar to the IHS method, but a uniform CIELab color space based on the spectral response of the human eye is used instead of the IHS color space [23]. The CIELab color space has been used for different image processing tasks. Wirth and Nikitenko, 2010 [24], investigated the performance of the CIELab color space in the application of unsharp masking and fuzzy morphological sharpening algorithms. In the study of [25], content-based image retrieval (CBIR) experiments were used to evaluate the performance of CIELab and three other color spaces (RGB, CIELuv and HSV) in an image retrieval process. In addition, the CIELab color space has been used to support different image segmentation tasks [26,27]. In a previous Pan sharpening study, color normalization based on the CIELab color space aided an image fusion algorithm in sharpening a Hyperion hyperspectral image with an Ikonos Pan image using a spectral mixing-based color preservation model [28]. In another study, a remote sensing image fusion technique using the CIELab color space was proposed by Jin et al. [29]. In that study, the authors improved the performance of image fusion techniques by combining the non-subsampled shearlet transform and a pulse coupled neural network. However, this approach is computationally complicated and lacks evaluation on a specific satellite dataset.
Although the CIELab method has been used in different image processing tasks applied to natural and satellite images, evaluation of its performance in the Pan sharpening process is limited, and to our knowledge there is not yet a detailed evaluation of this method for VHR image Pan sharpening that considers different landscape characteristics and uses spatial and spectral metrics in addition to visual interpretation. This research focused on proposing a robust, CIELab-based Pan sharpening approach and aimed to fill the abovementioned gap in the detailed investigation and accuracy assessment of CIELab-based Pan sharpening in the literature. Results from the proposed method were compared with the results from six well-known methods: the Ehlers, generalized IHS (GIHS), IHS, Gram–Schmidt, HCS and NNDiffuse methods. Pleiades satellite images of ten test sites with different landscape characteristics were comparatively evaluated to check the spatial and spectral performance of the proposed method with quantitative accuracy metrics. In addition, visual interpretation-based analyses were performed on the results in order to discuss the performance of the methods. The results illustrate the advantages of using a uniform color space for the Pan sharpening application.

2. State of the Art

A brief review of the state-of-the-art methods used for comparative analysis with respect to the proposed method is presented in Table 1.

3. Methodology and Accuracy Assessment

This research proposes a new Pan sharpening method that relies on the CIELab transform, which modifies the color components of the images used for Pan sharpening of VHR satellite images. The flowchart of the proposed method is given in Figure 1, and the details of the proposed method are described in the subsections below.

3.1. Multispectral Image Transform to CIELab Color System

The RGB color system was designed in such a way that it includes nearly all primary colors and can be comprehended by human vision. Nevertheless, it is difficult to work with RGB color due to the strong correlation between its components [29]. In this study, a uniform and complete color model, the CIELab color system, was used. In this uniform color system, a variation in the coordinates of the color component produces the same amount of variation in the luminance and saturation components [10]. Besides, this color space is designed to model human vision, unlike the RGB and CMYK (cyan, magenta, yellow, black) color spaces [38]. The CIELab color system was used in the proposed Pan sharpening approach in order to reduce the spectral distortion while maintaining the color perception of human vision [39].
The design of the CIELab color system is based on Hering’s theory, which indicates that only red (R), green (G), blue (B) and yellow (Y) are unique among the thousands of colors used to characterize the hue component [40]. Although other colors can be produced from these unique colors (for example, it is possible to obtain orange by mixing red and yellow), R, G, B and Y can be described only with their own names. R, G, B and Y, together with black (Bk) and white (W), constitute a color system with six basic color properties and three opponent pairs: R/G, Y/B and Bk/W. The opponency idea arises from observations of color attributes, which show that no color can be characterized using both blue and yellow, or both red and green, together [41]. A blue shade of yellow does not exist. These three opponent pairs are represented in the form of a three-dimensional color space, as illustrated in Figure 2. In this figure, the vertical axis L* represents the luminance, in which perfect black is represented by 0 and perfect white is represented by 100. The a* and b* axes, which are perpendicular to luminance, indicate chromaticity and stand for redness/greenness and yellowness/blueness, respectively. Positive values represent redness (for the a* component) and yellowness (for the b* component), whereas greenness and blueness are denoted by negative values.
The L*, a* and b* values are computed from XYZ values. The XYZ system, which is based on the RGB color space, was developed by the International Commission on Illumination, CIE (Commission internationale de l’éclairage), in the 1920s and standardized in 1931. The difference between RGB and XYZ lies in the light sources. The R, G and B elements are real light sources with known characteristics, whereas the X, Y and Z elements are three theoretical sources, selected in such a way that all visible colors can be defined as a combination of only positive amounts of the three primary sources [10].
Occasionally, colorimetric calculations with the use of color matching functions produce negative lobes. This problem can be solved by transforming the real light sources to these theoretical sources. In this color space, the red, green and blue primaries are more saturated than any spectral RGB. The X, Y and Z components represent red, green and blue, respectively. RGB to XYZ and its reverse transformation can be performed with the following equations:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124564 & 0.3575761 & 0.1804375 \\ 0.2126729 & 0.7151522 & 0.0721750 \\ 0.0193339 & 0.1191920 & 0.9503041 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 3.2404542 & -1.5371385 & -0.4985314 \\ -0.9692660 & 1.8760108 & 0.0415560 \\ 0.0556434 & -0.2040259 & 1.0572252 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}.$$
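The two transforms above are plain per-pixel matrix multiplications. A minimal NumPy sketch using the matrix given in the text (the function names are illustrative, not from the paper):

```python
import numpy as np

# RGB -> XYZ matrix as given in the text (sRGB primaries, D65 white).
RGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])
# Its inverse reproduces the XYZ -> RGB coefficients in the text.
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def rgb_to_xyz(img):
    """Convert an (H, W, 3) linear RGB image, scaled to [0, 1], to XYZ."""
    return img @ RGB_TO_XYZ.T

def xyz_to_rgb(img):
    """Inverse transform from XYZ back to linear RGB."""
    return img @ XYZ_TO_RGB.T
```

A round trip `xyz_to_rgb(rgb_to_xyz(img))` recovers the input up to floating point error, which is the property the forward/inverse pair above relies on.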
The Lab components are calculated by the following equations [23]:
$$L = 116\,F_Y - 16$$
$$a = 500\,(F_X - F_Y)$$
$$b = 200\,(F_Y - F_Z)$$
where
$$F_X = \left(\frac{X}{X_n}\right)^{1/3} \quad \text{if} \quad \frac{X}{X_n} > \left(\frac{24}{116}\right)^3$$
and
$$F_X = \frac{841}{108}\left(\frac{X}{X_n}\right) + \frac{16}{116} \quad \text{if} \quad \frac{X}{X_n} \le \left(\frac{24}{116}\right)^3,$$
where $X_n$ is the tristimulus value of a perfect white object color stimulus, i.e., the light reflected from a perfect diffuser under the chosen illuminant. The $F_Y$ and $F_Z$ values are calculated in the same way as $F_X$.
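The piecewise XYZ-to-Lab transform described above can be sketched as follows in NumPy. This is a minimal illustration, not the authors' implementation; the D65 white point values are an assumption:

```python
import numpy as np

def f(t):
    """Piecewise function from the CIELab equations: cube root above the
    (24/116)^3 threshold, linear below it."""
    t = np.asarray(t, dtype=float)
    thresh = (24.0 / 116.0) ** 3
    return np.where(t > thresh,
                    np.cbrt(t),
                    (841.0 / 108.0) * t + 16.0 / 116.0)

def xyz_to_lab(X, Y, Z, white=(0.95047, 1.0, 1.08883)):
    """Convert XYZ values to L*, a*, b* relative to a reference white
    (assumed D65 here)."""
    Xn, Yn, Zn = white
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b
```

By construction, the reference white itself maps to L* = 100 with a* = b* = 0, matching the axis conventions of Figure 2.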

3.2. Pan-Sharpening

In the Pan sharpening procedure, the MS image should be resampled to the pixel size of the Pan image before converting it to the CIELab color space. In this research, the bicubic interpolation method was used to resample the 2 m resolution Pleiades MS images to 50 cm resolution to match the pixel size of the Pleiades Pan image. This resampled dataset was used in all Pan sharpening methods evaluated in this research, including the proposed one. After converting the MS image from RGB to CIELab space, the Pan sharpening process continues by replacing the L* component with the Pan image. Unlike the method proposed in [29], there is no need for a color space conversion of the Pan image in the proposed method, which leads to lower computational cost and less data distortion. Before replacing the L* band of the MS image with the Pan image, there is a histogram matching step that can be considered preprocessing. After resampling the MS image to the same size as the Pan image and converting its color space, the histogram of the Pan image has to be matched with the histogram of the L* component in order to minimize spectral differences [42]. For the histogram matching task, mean and standard deviation normalization was used [43]:
$$Pan_{HM} = \left(Pan - \mu_{Pan}\right)\frac{\sigma_{I}}{\sigma_{Pan}} + \mu_{I},$$
where $Pan_{HM}$ stands for the histogram matched Pan image, $\mu$ stands for the mean and $\sigma$ represents the standard deviation. After these preprocessing steps, the L* component is replaced with the histogram matched Pan image. The Pan sharpened image is then produced by applying the inverse CIELab conversion to the Pan*a*b* image, which results in a new MS image with high spatial resolution.
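The histogram matching equation and the substitution step can be sketched as follows; the `rgb_to_lab` and `lab_to_rgb` helpers in the comments are hypothetical names for the conversions of Section 3.1, not functions defined in the paper:

```python
import numpy as np

def match_histogram(pan, lum):
    """Mean/standard deviation normalization of the Pan image to the
    statistics of the L* component, per the equation above."""
    return (pan - pan.mean()) * (lum.std() / pan.std()) + lum.mean()

# Hypothetical end-to-end component substitution step:
#   L, a, b   = rgb_to_lab(ms_resampled)   # upsampled MS in CIELab
#   pan_hm    = match_histogram(pan, L)    # match Pan to L* statistics
#   sharpened = lab_to_rgb(pan_hm, a, b)   # inverse transform -> sharpened MS
```

After matching, the Pan image has exactly the mean and standard deviation of the L* component, which is what minimizes the spectral differences introduced by the substitution.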

3.3. Accuracy Assessment

Several metrics have been proposed to assess the accuracy of Pan sharpened images using a precise, high-resolution MS image as a reference. In this research, the first seven metrics provided in Table 2 were used for the spectral quality assessment, while the latter four metrics were used for the spatial quality assessment of the results. Although the metrics provide important quantitative insights into algorithm performance, a qualitative assessment of the color preservation quality and of the spatial improvements in object representation is also required. Thus, the results obtained from the Pan sharpening algorithms were also evaluated by visual inspection.
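As a rough illustration of how a few of the spectral quality metrics in Table 2 are typically computed, the sketch below implements the standard definitions of RMSE, SAM and ERGAS; these are common formulations, not necessarily the exact ones used by the authors:

```python
import numpy as np

def rmse(reference, fused):
    """Root-mean-square error between the reference MS and fused image."""
    return np.sqrt(np.mean((reference.astype(float) - fused.astype(float)) ** 2))

def sam(reference, fused, eps=1e-12):
    """Mean spectral angle (radians) over all pixels of (H, W, B) images."""
    ref = reference.reshape(-1, reference.shape[-1]).astype(float)
    fus = fused.reshape(-1, fused.shape[-1]).astype(float)
    num = np.sum(ref * fus, axis=1)
    den = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1) + eps
    return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))

def ergas(reference, fused, ratio=4):
    """ERGAS: relative dimensionless global error, where `ratio` is the
    MS-to-Pan pixel size ratio (4 for Pleiades 2 m / 0.5 m)."""
    bands = reference.shape[-1]
    acc = 0.0
    for k in range(bands):
        r = reference[..., k].astype(float)
        f_ = fused[..., k].astype(float)
        acc += np.mean((r - f_) ** 2) / (r.mean() ** 2)
    return 100.0 / ratio * np.sqrt(acc / bands)
```

All three metrics reach their ideal value (zero) when the fused image equals the reference, which is why lower values indicate better spectral preservation in Tables A1, A3 and A5.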

4. Experimental Results

4.1. Dataset

The primary products of Pléiades satellite imagery were used for the experimental analysis of the proposed method. The Pléiades program, initiated by CNES (the French space agency), is the optical Earth imaging component of the French–Italian ORFEO program (Optical and Radar Federated Earth Observation). The Pléiades constellation consists of two satellites with VHR optical sensors: Pléiades 1A, launched on 17 December 2011, and Pléiades 1B, launched on 2 December 2012. Both satellites provide 0.5 m spatial resolution for the Pan sensor and 2 m for the MS sensor, with a 20 km swath width and 12 bit radiometric resolution [44].
The dataset used in this research consists of three Pleiades image scenes covering different landscape characteristics (Table 3). The locations of the scenes are provided in Figure 3. Ten sub frames were selected from these image scenes in order to evaluate the performance of the Pan sharpening methods under varying landscape characteristics and seasonal conditions (Table 4).
Four sub frames cover rural areas that contain forests with different types of trees of varying heights and barren roads between them, as well as mountains and agricultural fields. Three sub frames cover urban areas that include complex buildings, roads and highways. Finally, three sub frames cover suburban areas that include complex buildings, factories, vegetation, roads, trees and bare soil, allowing a precise survey of the effects of the proposed method on the Pan sharpening of vegetation, impervious and soil surfaces simultaneously.

4.2. Performing the Algorithm and Accuracy Assessment

To measure the performance of the Pan sharpening results using the metrics presented in Section 3.3, the Wald protocol was used, due to the lack of a reference high-resolution MS image [53]. According to the Wald protocol, all Pan sharpening experiments were performed on degraded datasets, produced by decreasing the spatial resolution of the original dataset (reducing the MS and Pan images to 8 m and 2 m, respectively). The Pan sharpening results obtained in this way can be compared with the original MS images in the accuracy assessment procedure. In this paper, six Pan sharpening methods and eleven accuracy indexes were evaluated to perform a comparative accuracy assessment of the proposed method. The numerical results of the accuracy indexes are presented in Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6. The visuals belonging to the Pan sharpening results of the ten frames are presented in Figure A1, Figure A2, Figure A3, Figure A4, Figure A5, Figure A6, Figure A7, Figure A8, Figure A9 and Figure A10. In each figure, parts a and b are the original Pléiades MS and Pan images, respectively. Parts c, d and e are the Pan sharpened results from the CIELab, GIHS and GS methods, respectively. The Pan sharpened images from the HCS, IHS, NNDiffuse and Ehlers methods are shown in parts f–i, respectively.
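The degradation step of the Wald protocol can be sketched as follows; block averaging is used here as a simple stand-in for whatever low-pass filtering and decimation were actually applied, and the workflow comment names are illustrative:

```python
import numpy as np

def degrade(img, factor=4):
    """Reduce spatial resolution by block-averaging an (H, W) or (H, W, B)
    image by an integer factor (a simple proxy for low-pass + decimation)."""
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
    img = img[:h, :w]
    shape = (h // factor, factor, w // factor, factor) + img.shape[2:]
    return img.reshape(shape).mean(axis=(1, 3))

# Hypothetical Wald-protocol workflow:
#   ms_low  = degrade(ms, 4)    # 2 m  -> 8 m
#   pan_low = degrade(pan, 4)   # 0.5 m -> 2 m
#   fused   = pansharpen(ms_low, pan_low)   # any method under test
#   score   = rmse(ms, fused)   # compare against the original 2 m MS
```

Because the fused product of the degraded pair lands back at the original MS resolution, the original MS image can serve as the reference for all eleven indexes.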

4.3. Experimental Results from Rural Areas

Figure A1, Figure A2, Figure A3 and Figure A4 belong to the Pan sharpening results of the rural test sites (frame F1 from the D1 dataset, frames F5 and F6 from D2 and frame F10 from D3). Each figure shows a representative part of the whole image focusing on rural areas and presents a visual comparison of the different Pan sharpening techniques.
Visual comparison of the Pan sharpening methods reveals that the spatial resolution of the MS images improved significantly with all methods. As for spectral information, parts c, e and h show that the CIELab, GS and NNDiffuse methods preserve the spectral characteristics better, specifically for the bands belonging to the visible region. The color-based visual interpretation of vegetated and forest areas in Figure A2, Figure A3 and Figure A4 shows that the Pan sharpened and original MS images are very similar to each other for the GS and CIELab methods. Similar comments can be made for the NNDiffuse and CIELab methods in Figure A1. On the other hand, visual comparison of part a with parts d, f, g and i reveals that the remaining four methods were not able to preserve the spectral characteristics of vegetated and forest areas. In particular, the IHS, Ehlers and HCS methods inherited the high frequency impact over the vegetated area and could not preserve the original spectral/color information for the first test site. In addition, the result of the HCS method is more blurred than the others. The GIHS and, in some cases, the NNDiffuse methods preserved the color information better than IHS, Ehlers and HCS; nevertheless, observable spectral distortion is apparent in their resulting products. The GS method performs well on vegetation, except in part e of Figure A1, while its results are not satisfactory on pathways and their surroundings. In addition, obvious distortions are apparent in the shadowed areas. Detailed investigation of Figure A3 (e) reveals an obvious distortion in the snowy parts of the frame for almost all methods except the proposed CIELab, resulting in a nearly blue color instead of the white snow color. The CIELab method, however, was able to preserve the texture and keep the small variances in color when compared to the original MS image.
Besides, visual interpretation of the CIELab Pan sharpening results (part c in all figures) demonstrated that the use of this color space for Pan sharpening could help to distinguish different tree types and vegetation from each other in the absence of a NIR band.
The seven quality metrics presented in Table 2 were used for the spectral quality assessment of the Pan sharpening results. Numerical results from these metrics for the rural frames (F1, F5, F6 and F10) are presented in Table A1. The metric values were calculated band by band, and the average values of the three bands were used in the accuracy assessment procedure. Numerical results of the ERGAS, RASE, RMSE and SAM metrics indicated that the proposed CIELab method produced better results than the remaining methods and was followed by the GS method for most of the metrics. The metric-based results were in line with the visual interpretation. The CIELab method also provided the highest accuracies according to the QAVG and PSNR metrics. In addition, the proposed method achieved a value of 1 for the SSIM metric, which is the best possible value. Moreover, the IHS method provided the worst results for all quality indexes in Table A1. Lastly, the Ehlers, GIHS and HCS methods provided lower accuracies in some cases. The unstable behavior of these methods across different scenes is another problem that should be considered.
To assess the spatial quality of the Pan sharpened images, the CC, Zhou, Sobel RMSE and spatial ERGAS indexes presented in Section 3.3 were calculated by comparing the Pan image and the intensity component of the Pan sharpened images. Numerical results of these metrics are presented in Table A2. According to the comparative evaluation, the Pan sharpened image from the proposed CIELab method provided the highest spatial CC and Zhou values and the lowest SRMSE and SP ERGAS values. These results indicate that the proposed method has the best spatial performance among all methods tested. The Ehlers, IHS and HCS methods provided the lowest spatial performances according to the values presented in Table A2.
As a result, the proposed CIELab method provided the best performance for the rural scenes based on the visual interpretation and spectral and spatial quality metrics results.

4.4. Experimental Results from Urban Areas

Figure A5 through Figure A7 belong to the Pan sharpening results of the urban test sites (frames F2, F4 and F9 from the D1, D2 and D3 datasets, respectively). Each figure shows a representative part of the whole image focusing on buildings and roads, and presents a visual comparison between the different Pan sharpening techniques on the differently sized and oriented buildings and roads in the urban areas.
Visual comparison results for the urban areas revealed that all the Pan sharpened images inherited the high spatial information from the Pan image, as with the results for the rural areas. Roads and buildings could be better identified in all Pan sharpened images compared to the original MS image. As for spectral information, Figure A5 c,e,h shows that the CIELab, GS and NNDiffuse methods preserved the spectral characteristics and color information in urban areas. In particular, the color information from the buildings with brick roofs is similar to the original MS image. Visual comparison of Figure A5 part a with parts d, f, g and i illustrates that the IHS, HCS, GIHS and Ehlers methods were not able to preserve the original spectral characteristics of buildings as well as the other three approaches did. In particular, the Ehlers, HCS and IHS methods provided blurred and smoggy results with faded and pale colors. Parts g and i of Figure A6 and Figure A7 confirm that the IHS and Ehlers methods provide the worst visual results among all methods tested. Part e in Figure A6 and Figure A7 reveals the weakest side of the GS method; that is, its poor performance in the Pan sharpening of white tones. White colors tend to appear blueish in the results of this method. It is obvious from part f in Figure A6 and Figure A7 that the HCS method provided the most blurred result. The GIHS and NNDiffuse methods produced acceptable results in comparison with the other methods (except the CIELab method). Detailed investigation of parts d and h of Figure A6 and Figure A7 shows that the NNDiffuse method produces distortion in the shadowed areas and the GIHS method has poor performance in vegetated areas and trees. Visual interpretation of Figure A5(c), Figure A6(c) and Figure A7(c) reveals that the proposed CIELab method preserved the spectral properties of the original MS image better than the other methods.
Numerical results of the spectral quality assessment of the Pan sharpened images belonging to the urban test sites (F2, F4 and F9) are presented in Table A3. The metric values demonstrate that the CIELab method provided the most promising results among all Pan sharpening methods used in this research. This method presented the lowest values for the ERGAS, RASE, RMSE and SAM metrics and the highest values for the QAVG, PSNR and SSIM metrics (again, the highest possible value was obtained for SSIM). The HCS and IHS methods provided the worst results for most of the metrics. Once again, second rank for spectral quality was obtained by the GS method for most of the metrics.
Table A4 presents the spatial quality metric results calculated from the Pan image and the intensity component of the Pan sharpened images for the urban test sites. Similar to the rural test sites, the proposed CIELab method provided the highest CC and Zhou values alongside the lowest SRMSE and SP ERGAS values for the urban images, which demonstrates its high spatial quality. In particular, there is a large gap between the SRMSE and SP ERGAS values of the CIELab method and those of the other methods. The Ehlers, IHS and HCS methods performed worst on the spatial indexes, which is consistent with the visual results. Consequently, the proposed CIELab method provided the best performance for the urban test sites (frames F2, F4 and F9) as well, based on the visual interpretation and the spectral/spatial quality metrics.

4.5. Experimental Results from Suburban Areas

Figure A8, Figure A9 and Figure A10 present representative portions of the original images and the Pan sharpening results of the suburban test sites from frames F3, F7 and F8, respectively.
Similar to the urban and rural areas, visual comparison of the original MS and Pan sharpened images of this category revealed that all the Pan sharpened images contained higher spatial information than the original MS image and benefited from the detail level of the Pan image. However, the visual performance for the suburban areas was variable, unlike for the urban and rural areas. Results from frame F3 (Figure A8) show that all methods had acceptable performance except Ehlers and IHS. Nevertheless, a small amount of distortion in vegetation and shadowed areas is apparent in the results of the NNDiffuse and HCS methods.
Figure A9 illustrates the effectiveness of the CIELab method in the Pan sharpening process. There is an obvious color distortion in all methods except the proposed CIELab method. Parts e, g and i demonstrate a similar, green dominant color distortion in the results of the GS, IHS and Ehlers methods, while the other three methods, presented in parts d, f and h, show a purple dominant color distortion. These color distortions are apparent for all surface types, including roads, roofs and other objects. CIELab was the only method that provided acceptable performance for this test frame. The Ehlers and IHS methods could not provide good performance for the last test frame either, as is observable in parts g and i of Figure A10. Parts d and f show that the results of GIHS and HCS are blurred and not acceptable. Green and white tone distortion is obvious in the result of GS (part e of Figure A10). The CIELab method again showed the best performance in this frame. Apart from some distortion in shadowed areas, NNDiffuse provided the most similar results to the original MS image after CIELab.
Numerical results of the spectral quality assessment of the Pan sharpened images belonging to the suburban areas (frames F3, F7 and F8) are presented in Table A5. Once again, the CIELab method provides the highest values for the QAVG, PSNR and SSIM metrics, while it achieves the lowest values for the ERGAS, RASE, RMSE and SAM metrics. The GS method ranks second again, as in the urban and rural test sites, achieving better numeric values for most of the metrics. Similar to the previous test site results, the Ehlers and IHS methods have the worst performance among all tested methods.
The spatial quality assessment of the Pan sharpening results for frames F3, F7 and F8 is presented in Table A6. The proposed CIELab method again showed an unrivaled performance on the spatial quality metrics. For the CC and Zhou indexes, the highest correlation values, indicating high spatial quality, were provided by the CIELab method. Moreover, the lowest SRMSE and SP ERGAS values achieved by the proposed method are further evidence of its high spatial quality. The Ehlers, HCS and IHS methods present the lowest CC and Zhou values with the highest SRMSE and SP ERGAS values, which indicates that their spatial performance is as poor as their spectral performance.

4.6. Thematic Accuracy Evaluation with Spectral Index

The performance of the conventional and proposed Pan sharpening methods was evaluated with several quality indices and visual interpretation in this research. However, the effect of Pan sharpening on information extraction processes such as image classification, segmentation and index-based analysis is another important concern, which requires conservation of the spectral properties of the MS image after Pan sharpening. One of the indirect methods frequently used to evaluate this is to apply a spectral index to the MS and Pan sharpened images and investigate their consistency. As only the visible bands of the images were used in this research, the visible atmospherically resistant index (VARI) proposed by Gitelson et al., 2002 [54], was used for the evaluation, as it uses all visible bands in its calculation.
$$VARI = \frac{Green - Red}{Green + Red - Blue}.$$
The test site F3 was selected for this evaluation, as it is one of the most challenging sites in the dataset due to its complex and heterogeneous land cover characteristics. The VARI index was applied to both the MS and the CIELab Pan sharpened images, and a binary classification was performed using the same threshold to map the manmade and natural lands in the region. According to the results presented in Figure 4, the same level of information extraction could be achieved with the CIELab Pan sharpened image; it even provided better thematic representation, with better geometric representation of the objects and less of the salt and pepper effect observed in the vegetated areas located in the north and south parts of the image.
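The VARI computation and the threshold-based binary classification described above can be sketched as follows; the zero threshold is an illustrative assumption, not the value used in the paper:

```python
import numpy as np

def vari(red, green, blue, eps=1e-12):
    """Visible atmospherically resistant index (Gitelson et al., 2002):
    (Green - Red) / (Green + Red - Blue), with eps guarding division by zero."""
    red, green, blue = (np.asarray(x, dtype=float) for x in (red, green, blue))
    return (green - red) / (green + red - blue + eps)

def binary_map(index_img, threshold=0.0):
    """Threshold the index image into a binary natural/manmade map
    (illustrative threshold; applied identically to MS and fused images)."""
    return index_img > threshold
```

Applying the same threshold to the VARI maps of the MS and Pan sharpened images, as done for test site F3, makes any disagreement between the two binary maps a direct measure of spectral inconsistency introduced by the fusion.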

4.7. Overall Comments

When the numerical results from the seven spectral metrics for the three groups of test sites (Table A1, Table A3 and Table A5) were evaluated, the proposed CIELab method behaved consistently across metrics and land categories and ranked first among all methods. The other Pan sharpening methods, in contrast, produced varying accuracies and did not behave consistently across metrics, or even across images for the same metric. For example, the GS method generally ranked second for the ERGAS metric across the test sites; for the RASE metric, however, GS ranked second only for some images, while GIHS took second place for the remaining ones. The same holds for the poorest results: no single method can be identified as the worst for all test sites and all metrics. The consistency of the proposed method also extends to the spatial metrics. The CIELab method had the best spatial performance for all ten test sites, while the second through seventh rankings varied across metrics and images. Overall, the proposed method produced the best results with respect to the spectral and spatial quality metrics and the visual interpretation for ten sites with different landscape characteristics. Moreover, it provided efficient spectral conservation according to a comparative evaluation based on binary classification of the VARI index. An overall ranking is provided in Table 5, based on expert judgement that jointly considers the quantitative, metric-based results and the visual interpretation results for the different landscapes in both the spectral and spatial domains. Lastly, to check the consistency of the spectral quality indices across image bands, these indices were calculated band by band for test site F3 (Table A7).
According to this evaluation, the averaged values of each index agree with the band-based calculations, and the indices behave consistently across image bands in most cases.
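The VARI-based spectral conservation check mentioned above can be sketched in a few lines; this is a minimal illustration assuming three-band RGB arrays scaled to [0, 1], and the threshold value is illustrative only (it is not the threshold used in the paper).

```python
import numpy as np

def vari(rgb):
    """Visible Atmospherically Resistant Index, (G - R) / (G + R - B)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (g - r) / (g + r - b + 1e-12)  # epsilon guards against a zero denominator

def vegetation_agreement(ms, sharpened, threshold=0.1):
    """Fraction of pixels whose binary VARI class (vegetation / not) matches
    between the original MS image and the Pan-sharpened image."""
    return np.mean((vari(ms) > threshold) == (vari(sharpened) > threshold))
```

A high agreement fraction indicates that the thresholded vegetation map derived from the Pan-sharpened image reproduces the one derived from the original MS image, i.e., the fusion preserved the spectral relationships that VARI depends on.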

5. Conclusions

This research proposed an effective, component substitution-based Pan sharpening method that uses the CIELab color space for Pan sharpening of VHR Pléiades satellite images. Ten test sites with different landscape characteristics were selected to evaluate the performance of the proposed method against six common Pan sharpening algorithms: GS, HCS, IHS, EHLERS, NNDiffuse and GIHS. The comparative evaluation results from Pléiades VHR images support the conclusion that the proposed CS algorithm is powerful and outperforms the other Pan sharpening methods according to the spectral and spatial accuracy assessment procedures and the visual interpretation. In addition, the results indicated that the proposed method was comparatively consistent, while the performance of the other methods varied with the land surface characteristics of the region. For the RMSE metric, for example, the best values among all ten sites were obtained for forest and vegetated areas, whereas Pan sharpening in urban areas produced coarser metric values, illustrating the impact of land characteristics on the performance of Pan sharpening algorithms. The characteristics of the CIELab color space led to Pan sharpened images with brightness characteristics similar to those of the original MS image. Substituting only the L* component helps preserve the spectral information of the original MS image and the spatial detail of the Pan image. A further improvement of the CIELab-based method could be its implementation for Pan sharpening of satellite images with more than three bands. In addition, further studies are planned to evaluate the performance of CIELab in the fusion of satellite images from different sources. Lastly, other accuracy assessment approaches, such as comparisons of classification and segmentation results of Pan sharpened images, could also support future investigations.
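The component-substitution idea at the core of the method can be sketched as follows. This is a simplified illustration, not the authors' exact implementation: it assumes linear RGB values in [0, 1] (actual Pléiades processing involves sensor-specific radiometry), and it matches the Pan band to the L* component with simple mean/standard-deviation adjustment before substitution.

```python
import numpy as np

# Linear RGB (D65) <-> XYZ matrices; white point taken as XYZ of RGB = (1, 1, 1).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
M_INV = np.linalg.inv(M)
WHITE = M @ np.ones(3)

def _f(t):
    d = 6 / 29
    return np.where(t > d**3, np.cbrt(t), t / (3 * d * d) + 4 / 29)

def _f_inv(u):
    d = 6 / 29
    return np.where(u > d, u**3, 3 * d * d * (u - 4 / 29))

def rgb_to_lab(rgb):
    """rgb: (..., 3) array in [0, 1], assumed linear. Returns CIELab."""
    xyz = rgb @ M.T / WHITE
    fx, fy, fz = _f(xyz[..., 0]), _f(xyz[..., 1]), _f(xyz[..., 2])
    return np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=-1)

def lab_to_rgb(lab):
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    fy = (L + 16) / 116
    xyz = np.stack([_f_inv(fy + a / 500), _f_inv(fy), _f_inv(fy - b / 200)], axis=-1)
    return (xyz * WHITE) @ M_INV.T

def cielab_pansharpen(ms_up, pan):
    """ms_up: MS image resampled to the Pan grid, (H, W, 3); pan: (H, W)."""
    lab = rgb_to_lab(ms_up)
    L = lab[..., 0]
    # Match the Pan band's mean/std to L* before substituting it.
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12) * L.std() + L.mean()
    lab[..., 0] = pan_matched
    return np.clip(lab_to_rgb(lab), 0.0, 1.0)
```

Because only the L* (lightness) component is replaced, the a* and b* chromaticity components of the MS image pass through unchanged, which is the mechanism behind the spectral preservation discussed above.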

Author Contributions

Conceptualization, A.R., U.A. and C.G.; methodology, A.R., U.A. and C.G.; formal analysis, A.R.; investigation, A.R.; data curation, A.R.; writing—original draft preparation, A.R., U.A. and C.G.; writing—review and editing, A.R. and U.A.; visualization, A.R.; supervision, U.A. and C.G.; project administration, C.G.

Funding

This research received no external funding.

Acknowledgments

The authors greatly appreciate ITU CSCRS for providing the VHR Pléiades images. The authors would also like to acknowledge the many useful contributions of Elif Sertel.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Result and comparison of the proposed Pan sharpening method for the F1 (zoomed) area, which is represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Figure A2. Result and comparison of the proposed Pan sharpening method for the F5 area (zoomed), which is represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Figure A3. Result and comparison of the proposed Pan sharpening method for the F6 area (zoomed), which is represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Figure A4. Result and comparison of the proposed Pan sharpening method for the F10 area (zoomed), which is represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Table A1. Numeric results of spectral quality metrics of the Pan-sharpened images produced by selected algorithms for rural test sites (blue: highest accuracy; red: lowest accuracy).
| Site | Method | ERGAS | QAVG | RASE | RMSE | SAM | PSNR | SSIM |
|------|--------|-------|------|------|------|-----|------|------|
| F1 | Ehlers | 8.35973 | 0.70345 | 31.03779 | 0.00169 | 3.64069 | 55.45895 | 0.99423 |
| F1 | GS | 3.53280 | 0.72884 | 12.85721 | 0.00070 | 2.17343 | 63.81712 | 0.99854 |
| F1 | GIHS | 9.70735 | 0.68332 | 54.94083 | 0.00299 | 3.71061 | 50.49886 | 0.96170 |
| F1 | HCS | 3.30104 | 0.72804 | 13.12935 | 0.00071 | 3.07602 | 62.93191 | 0.99913 |
| F1 | IHS | 10.74394 | 0.67747 | 60.82506 | 0.00331 | 3.74898 | 49.61512 | 0.95743 |
| F1 | CIELab | 2.07704 | 0.83561 | 11.24036 | 0.00061 | 1.33832 | 69.28116 | 1 |
| F1 | NNDiffuse | 6.01008 | 0.68762 | 29.32491 | 0.00159 | 1.68020 | 55.95203 | 0.99065 |
| F5 | Ehlers | 7.62863 | 0.67932 | 30.11675 | 0.00136 | 2.76553 | 57.32630 | 0.99653 |
| F5 | GS | 3.43803 | 0.70832 | 15.06442 | 0.00069 | 1.96874 | 64.58066 | 0.99909 |
| F5 | GIHS | 4.68240 | 0.70403 | 20.21354 | 0.00091 | 1.71451 | 60.78961 | 0.99738 |
| F5 | HCS | 6.04601 | 0.67023 | 24.66611 | 0.00111 | 1.64450 | 59.06045 | 0.98721 |
| F5 | IHS | 7.58878 | 0.66419 | 32.80574 | 0.00148 | 2.31403 | 56.58346 | 0.98436 |
| F5 | CIELab | 3.1501 | 0.81033 | 12.37956 | 0.00056 | 1.25643 | 75.04835 | 1 |
| F5 | NNDiffuse | 6.05444 | 0.69039 | 25.02772 | 0.00117 | 1.64355 | 59.53192 | 0.99658 |
| F6 | Ehlers | 12.35511 | 0.68485 | 44.42476 | 0.00133 | 5.60838 | 57.50393 | 0.99650 |
| F6 | GS | 4.95180 | 0.73240 | 18.30183 | 0.00059 | 2.46646 | 66.21170 | 0.99918 |
| F6 | GIHS | 8.96652 | 0.71253 | 36.05239 | 0.00108 | 3.84187 | 59.31775 | 0.99551 |
| F6 | HCS | 5.56829 | 0.71320 | 22.52386 | 0.00068 | 3.70496 | 63.40357 | 0.98920 |
| F6 | IHS | 12.47236 | 0.68690 | 50.19509 | 0.00151 | 5.41973 | 56.44320 | 0.98278 |
| F6 | CIELab | 4.53026 | 0.83580 | 16.05962 | 0.00048 | 1.86814 | 66.34172 | 1 |
| F6 | NNDiffuse | 7.74954 | 0.69166 | 32.23618 | 0.00097 | 2.70472 | 60.28956 | 0.99680 |
| F10 | Ehlers | 9.22107 | 0.61404 | 36.48889 | 0.00316 | 4.44999 | 49.99698 | 0.98567 |
| F10 | GS | 4.78197 | 0.63099 | 18.07030 | 0.00157 | 2.86307 | 56.10088 | 0.99517 |
| F10 | GIHS | 6.60284 | 0.62945 | 32.99550 | 0.00286 | 2.41971 | 50.87109 | 0.97682 |
| F10 | HCS | 9.96965 | 0.57631 | 40.17887 | 0.00348 | 2.10395 | 49.16024 | 0.98781 |
| F10 | IHS | 9.49649 | 0.60973 | 47.55454 | 0.00412 | 3.41879 | 47.69635 | 0.96361 |
| F10 | CIELab | 4.3611 | 0.6478 | 17.14994 | 0.00149 | 1.77264 | 57.55494 | 1 |
| F10 | NNDiffuse | 4.46541 | 0.63385 | 17.77395 | 0.00154 | 1.50089 | 56.24451 | 0.99540 |
Table A2. Numeric results of spatial quality metrics of the Pan-sharpened images produced by select algorithms for the rural test sites (blue: highest accuracy; red: lowest accuracy).
| Site | Method | CC | Zhou's SP | SRMSE | Sp-ERGAS |
|------|--------|----|-----------|-------|----------|
| F1 | Ehlers | 0.876695 | 0.939608 | 0.006527 | 26.540923 |
| F1 | GS | 0.970317 | 0.973490 | 0.002298 | 25.222656 |
| F1 | GIHS | 0.978149 | 0.979959 | 0.005213 | 25.699874 |
| F1 | HCS | 0.939252 | 0.963691 | 0.00293 | 25.531640 |
| F1 | IHS | 0.869487 | 0.929827 | 0.006592 | 26.040642 |
| F1 | CIELab | 0.982436 | 0.998617 | 5.19E-08 | 17.345011 |
| F1 | NNDiffuse | 0.944022 | 0.967511 | 0.003237 | 24.580730 |
| F5 | Ehlers | 0.926486 | 0.92948 | 0.006241 | 26.658944 |
| F5 | GS | 0.989994 | 0.98932 | 0.003323 | 25.503829 |
| F5 | GIHS | 0.990875 | 0.989946 | 0.004153 | 25.881689 |
| F5 | HCS | 0.869709 | 0.914225 | 0.00467 | 26.457399 |
| F5 | IHS | 0.927104 | 0.989560 | 0.006261 | 26.460988 |
| F5 | CIELab | 0.996861 | 0.994414 | 2.22E-08 | 4.9495726 |
| F5 | NNDiffuse | 0.971652 | 0.989410 | 0.004303 | 25.740781 |
| F6 | Ehlers | 0.891415 | 0.929872 | 0.006123 | 29.084361 |
| F6 | GS | 0.989992 | 0.989450 | 0.002473 | 26.047215 |
| F6 | GIHS | 0.989997 | 0.989974 | 0.004425 | 28.173902 |
| F6 | HCS | 0.912241 | 0.972654 | 0.002903 | 26.946861 |
| F6 | IHS | 0.95928 | 0.969886 | 0.006141 | 29.301514 |
| F6 | CIELab | 0.997273 | 0.996194 | 3.40E-08 | 11.968807 |
| F6 | NNDiffuse | 0.969999 | 0.979946 | 0.003062 | 27.901263 |
| F10 | Ehlers | 0.923442 | 0.919671 | 0.014214 | 28.001920 |
| F10 | GS | 0.989619 | 0.989061 | 0.008828 | 26.488351 |
| F10 | GIHS | 0.971429 | 0.969959 | 0.008738 | 26.263696 |
| F10 | HCS | 0.799403 | 0.694767 | 0.012855 | 28.756548 |
| F10 | IHS | 0.914264 | 0.929730 | 0.014267 | 27.019798 |
| F10 | CIELab | 0.997258 | 0.996956 | 2.69E-08 | 8.0781533 |
| F10 | NNDiffuse | 0.929999 | 0.929405 | 0.0067663 | 26.362095 |
Figure A5. Result and comparison of the proposed Pan sharpening method for the F2 test frame, zoomed areas, which are represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Figure A6. Result and comparison of the proposed Pan sharpening method for the F4 test frame, zoomed areas, which are represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Figure A7. Result and comparison of the proposed Pan sharpening method for the F9 test frame, zoomed areas, which are represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Table A3. Numeric results of spectral quality metrics of the Pan-sharpened images produced by select algorithms for urban test sites (blue: highest accuracy; red: lowest accuracy).
| Site | Method | ERGAS | QAVG | RASE | RMSE | SAM | PSNR | SSIM |
|------|--------|-------|------|------|------|-----|------|------|
| F2 | Ehlers | 11.86921 | 0.53991 | 47.33854 | 0.00401 | 4.71053 | 47.94083 | 0.97725 |
| F2 | GS | 5.23770 | 0.58263 | 22.61594 | 0.00195 | 2.64855 | 55.16107 | 0.99458 |
| F2 | GIHS | 7.34063 | 0.57639 | 33.87054 | 0.00287 | 2.46540 | 50.84869 | 0.98449 |
| F2 | HCS | 15.71881 | 0.48541 | 62.43752 | 0.00529 | 2.73988 | 45.53622 | 0.97461 |
| F2 | IHS | 11.04969 | 0.54971 | 51.03739 | 0.00432 | 3.46087 | 47.28736 | 0.95942 |
| F2 | CIELab | 4.22351 | 0.68629 | 20.75741 | 0.00176 | 1.36201 | 65.10167 | 1 |
| F2 | NNDiffuse | 9.43220 | 0.56867 | 49.55085 | 0.00420 | 3.40818 | 47.54411 | 0.96330 |
| F4 | Ehlers | 22.84143 | 0.52356 | 89.68352 | 0.00405 | 7.95294 | 47.84821 | 0.95598 |
| F4 | GS | 12.57253 | 0.42386 | 65.56001 | 0.00296 | 6.13381 | 50.56968 | 0.96821 |
| F4 | GIHS | 15.81385 | 0.63109 | 90.10525 | 0.00407 | 3.15646 | 47.80746 | 0.94837 |
| F4 | HCS | 23.98628 | 0.28239 | 121.71057 | 0.00550 | 8.28181 | 45.19589 | 0.93994 |
| F4 | IHS | 19.50133 | 0.38961 | 111.13086 | 0.00502 | 8.99373 | 45.98577 | 0.92838 |
| F4 | CIELab | 6.86913 | 0.70144 | 26.93926 | 0.00122 | 1.89738 | 58.29474 | 1 |
| F4 | NNDiffuse | 19.72400 | 0.63077 | 98.22609 | 0.00441 | 6.13952 | 50.53176 | 0.97519 |
| F9 | Ehlers | 9.89595 | 0.51189 | 39.61836 | 0.00495 | 4.27519 | 46.10242 | 0.97015 |
| F9 | GS | 4.96528 | 0.55533 | 18.29249 | 0.00256 | 2.20216 | 51.30320 | 0.99273 |
| F9 | GIHS | 5.40401 | 0.53562 | 24.49466 | 0.00306 | 1.92496 | 50.27892 | 0.98486 |
| F9 | HCS | 10.02627 | 0.47659 | 39.67119 | 0.00496 | 2.02550 | 46.09085 | 0.93648 |
| F9 | IHS | 9.55051 | 0.51220 | 43.30832 | 0.00541 | 3.25101 | 45.32893 | 0.92951 |
| F9 | CIELab | 4.2936 | 0.6402 | 17.16686 | 0.00215 | 1.12493 | 54.36654 | 1 |
| F9 | NNDiffuse | 8.25412 | 0.58294 | 18.91789 | 0.01112 | 1.80691 | 39.08057 | 0.94984 |
Table A4. Numeric results of spatial quality metrics of the Pan-sharpened images produced by select algorithms for urban test sites (blue: highest accuracy; red: lowest accuracy).
| Site | Method | CC | Zhou's SP | SRMSE | Sp-ERGAS |
|------|--------|----|-----------|-------|----------|
| F2 | Ehlers | 0.923124 | 0.939587 | 0.027718 | 28.484374 |
| F2 | GS | 0.985504 | 0.968697 | 0.008643 | 25.816466 |
| F2 | GIHS | 0.986114 | 0.989954 | 0.012513 | 26.611553 |
| F2 | HCS | 0.937058 | 0.949630 | 0.019802 | 21.018918 |
| F2 | IHS | 0.913914 | 0.929653 | 0.027772 | 27.734556 |
| F2 | CIELab | 0.996129 | 0.995455 | 7.17E-08 | 6.2709053 |
| F2 | NNDiffuse | 0.960443 | 0.961942 | 0.012853 | 22.490210 |
| F4 | Ehlers | 0.820014 | 0.939655 | 0.017949 | 31.953533 |
| F4 | GS | 0.989999 | 0.989842 | 0.009369 | 26.949381 |
| F4 | GIHS | 0.992001 | 0.989953 | 0.012168 | 28.452035 |
| F4 | HCS | 0.775576 | 0.688826 | 0.018171 | 33.941348 |
| F4 | IHS | 0.919941 | 0.989666 | 0.018000 | 31.259808 |
| F4 | CIELab | 0.997280 | 0.996187 | 5.44E-08 | 5.8158174 |
| F4 | NNDiffuse | 0.914590 | 0.888001 | 0.012253 | 27.737860 |
| F9 | Ehlers | 0.924923 | 0.949622 | 0.024424 | 28.179705 |
| F9 | GS | 0.989237 | 0.982593 | 0.014115 | 26.132656 |
| F9 | GIHS | 0.978923 | 0.989960 | 0.015724 | 26.434713 |
| F9 | HCS | 0.860205 | 0.794738 | 0.023803 | 28.880067 |
| F9 | IHS | 0.912546 | 0.969662 | 0.024479 | 27.533331 |
| F9 | CIELab | 0.99848 | 0.998002 | 5.26E-08 | 4.4340045 |
| F9 | NNDiffuse | 0.929067 | 0.958606 | 0.021917 | 25.926934 |
Figure A8. Result and comparison of the proposed Pan sharpening method for the F3 test frame, zoomed areas, which are represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Figure A9. Result and comparison of the proposed Pan sharpening method for the F7 test frame, zoomed areas, which are represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Figure A10. Result and comparison of the proposed Pan sharpening method for the F8 test frame, zoomed areas, which are represented in a true color (RGB) combination. (a) MS; (b) Pan; (c) Lab; (d) GIHS; (e) GS; (f) HCS; (g) IHS; (h) NNDiffuse; (i) Ehlers.
Table A5. Numeric results of spectral quality metrics of the Pan-sharpened images produced by select algorithms for suburban test sites.
| Site | Method | ERGAS | QAVG | RASE | RMSE | SAM | PSNR | SSIM |
|------|--------|-------|------|------|------|-----|------|------|
| F3 | Ehlers | 10.98399 | 0.63217 | 43.08140 | 0.00301 | 4.48411 | 50.43809 | 0.98554 |
| F3 | GS | 5.51194 | 0.66128 | 23.69334 | 0.00154 | 2.84298 | 56.80727 | 0.99542 |
| F3 | GIHS | 6.97787 | 0.65746 | 33.45619 | 0.00233 | 2.40080 | 52.63436 | 0.98413 |
| F3 | HCS | 9.58868 | 0.62070 | 38.29194 | 0.00267 | 2.67967 | 51.46174 | 0.99176 |
| F3 | IHS | 10.61170 | 0.63204 | 50.97203 | 0.00356 | 3.52966 | 48.97725 | 0.97134 |
| F3 | CIELab | 4.14506 | 0.77127 | 20.06662 | 0.00140 | 1.84331 | 67.07440 | 1 |
| F3 | NNDiffuse | 6.93153 | 0.68589 | 30.69262 | 0.00214 | 2.67079 | 53.38321 | 0.99047 |
| F7 | Ehlers | 9.66602 | 0.58842 | 38.60687 | 0.00252 | 5.34836 | 51.98700 | 0.98848 |
| F7 | GS | 4.43979 | 0.64207 | 28.36296 | 0.00183 | 2.15986 | 58.92781 | 0.99695 |
| F7 | GIHS | 4.86275 | 0.61769 | 19.90348 | 0.00130 | 2.39510 | 57.74171 | 0.99608 |
| F7 | HCS | 6.21293 | 0.59860 | 24.68755 | 0.00161 | 6.57336 | 55.87073 | 0.99479 |
| F7 | IHS | 9.99170 | 0.58440 | 40.96218 | 0.00267 | 5.54742 | 51.47263 | 0.98461 |
| F7 | CIELab | 3.3780 | 0.72134 | 17.43123 | 0.00114 | 1.88890 | 68.89373 | 1 |
| F7 | NNDiffuse | 6.00374 | 0.60227 | 28.30174 | 0.00184 | 2.87237 | 54.68403 | 0.98907 |
| F8 | Ehlers | 7.61492 | 0.68151 | 29.46707 | 0.00217 | 3.58878 | 53.26597 | 0.99194 |
| F8 | GS | 4.02694 | 0.70121 | 14.48671 | 0.00107 | 2.77359 | 59.43332 | 0.99737 |
| F8 | GIHS | 5.76109 | 0.70009 | 27.24384 | 0.00201 | 2.20337 | 53.94734 | 0.98655 |
| F8 | HCS | 6.82371 | 0.68186 | 27.87873 | 0.00205 | 1.32168 | 53.74725 | 0.99579 |
| F8 | IHS | 8.13416 | 0.68201 | 38.54293 | 0.00284 | 2.97812 | 50.93381 | 0.97920 |
| F8 | CIELab | 3.5084 | 0.7133 | 13.48278 | 0.00099 | 1.20401 | 60.05712 | 1 |
| F8 | NNDiffuse | 5.70992 | 0.69899 | 14.95033 | 0.00110 | 1.31369 | 59.15970 | 0.99740 |
Table A6. Numeric results of spatial quality metrics of the Pan-sharpened images produced by select algorithms for suburban test sites.
| Site | Method | CC | Zhou's SP | SRMSE | Sp-ERGAS |
|------|--------|----|-----------|-------|----------|
| F3 | Ehlers | 0.922299 | 0.949657 | 0.012645 | 28.478359 |
| F3 | GS | 0.989999 | 0.979878 | 0.007121 | 26.395064 |
| F3 | GIHS | 0.988664 | 0.989955 | 0.00808 | 26.561586 |
| F3 | HCS | 0.942661 | 0.979506 | 0.01034 | 26.580025 |
| F3 | IHS | 0.920211 | 0.929723 | 0.012687 | 27.542710 |
| F3 | CIELab | 0.996151 | 0.996340 | 3.22E-08 | 7.7739934 |
| F3 | NNDiffuse | 0.940385 | 0.972273 | 0.00683 | 26.669404 |
| F7 | Ehlers | 0.940410 | 0.929782 | 0.013524 | 30.304598 |
| F7 | GS | 0.989997 | 0.989956 | 0.008056 | 28.497749 |
| F7 | GIHS | 0.973639 | 0.978996 | 0.008367 | 29.121990 |
| F7 | HCS | 0.968484 | 0.962481 | 0.009486 | 29.539885 |
| F7 | IHS | 0.939801 | 0.964973 | 0.013591 | 30.416119 |
| F7 | CIELab | 0.998630 | 0.996223 | 1.66E-08 | 3.1299142 |
| F7 | NNDiffuse | 0.996187 | 0.989929 | 0.008685 | 28.273530 |
| F8 | Ehlers | 0.918887 | 0.919399 | 0.009181 | 26.935703 |
| F8 | GS | 0.989998 | 0.989753 | 0.005542 | 25.948199 |
| F8 | GIHS | 0.972152 | 0.989942 | 0.005759 | 25.943732 |
| F8 | HCS | 0.800374 | 0.700073 | 0.006954 | 27.084421 |
| F8 | IHS | 0.921523 | 0.995543 | 0.009236 | 26.447253 |
| F8 | CIELab | 0.997412 | 0.997655 | 2.26E-08 | 8.2907935 |
| F8 | NNDiffuse | 0.989999 | 0.969353 | 0.004198 | 26.011647 |
Table A7. Band-by-band calculation results of spectral quality metrics belonging to test site F3.
| Metric | Band | Ehlers | GS | GIHS | HCS | IHS | CIELab | NNDiffuse |
|--------|------|--------|----|------|-----|-----|--------|-----------|
| ERGAS | Band 1 | 13.23542 | 6.28494 | 7.53986 | 11.51714 | 14.68644 | 6.19922 | 9.43229 |
| ERGAS | Band 2 | 10.86379 | 5.49824 | 6.73039 | 10.62880 | 10.07883 | 4.07797 | 7.66444 |
| ERGAS | Band 3 | 6.52447 | 4.08371 | 4.97720 | 5.58341 | 7.75358 | 2.07075 | 2.81246 |
| ERGAS | Average | 10.98399 | 5.51194 | 6.97787 | 9.58868 | 10.6117 | 4.14506 | 6.93153 |
| QAVG | Band 1 | 0.86547 | 1.02826 | 0.95928 | 1.02387 | 0.87788 | 1.0607 | 1.03723 |
| QAVG | Band 2 | 0.57607 | 0.65720 | 0.69449 | 0.52406 | 0.56826 | 0.70848 | 0.60802 |
| QAVG | Band 3 | 0.48901 | 0.30277 | 0.36560 | 0.31617 | 0.45130 | 0.54621 | 0.41064 |
| QAVG | Average | 0.63217 | 0.66128 | 0.65746 | 0.62070 | 0.63204 | 0.77127 | 0.68589 |
| RASE | Band 1 | 49.91752 | 30.04186 | 39.39898 | 42.79105 | 54.69452 | 25.33830 | 34.82911 |
| RASE | Band 2 | 46.79656 | 22.35778 | 35.26309 | 37.59033 | 52.26805 | 21.10608 | 31.65890 |
| RASE | Band 3 | 34.19874 | 18.59234 | 25.44349 | 36.19742 | 45.77826 | 13.75556 | 25.58183 |
| RASE | Average | 43.08140 | 23.69334 | 33.45619 | 38.29194 | 50.97203 | 20.06662 | 30.69262 |
| RMSE | Band 1 | 0.00833 | 0.00450 | 0.00525 | 0.00625 | 0.01033 | 0.00413 | 0.00525 |
| RMSE | Band 2 | 0.00040 | 0.00010 | 0.00186 | 0.00156 | 0.00042 | 0.00008 | 0.00106 |
| RMSE | Band 3 | 0.00031 | 0.00002 | 0.00001 | 0.00009 | 0.00001 | 0.00001 | 0.00009 |
| RMSE | Average | 0.00301 | 0.00154 | 0.00233 | 0.00267 | 0.00356 | 0.00140 | 0.00214 |
| SAM | Band 1 | 4.86237 | 3.61186 | 3.40619 | 3.10669 | 4.08712 | 2.22633 | 3.13166 |
| SAM | Band 2 | 4.42003 | 2.42682 | 2.50814 | 2.87424 | 3.16997 | 1.82082 | 2.55769 |
| SAM | Band 3 | 3.95193 | 2.35714 | 1.78239 | 2.20364 | 3.05983 | 1.53200 | 2.21219 |
| SAM | Average | 4.48411 | 2.84298 | 2.4008 | 2.67967 | 3.52966 | 1.84331 | 2.67079 |
| PSNR | Band 1 | 56.42695 | 60.52591 | 54.16646 | 54.15527 | 51.81649 | 69.90153 | 57.27461 |
| PSNR | Band 2 | 48.08904 | 59.45342 | 51.82179 | 51.08538 | 47.59585 | 67.71639 | 57.20947 |
| PSNR | Band 3 | 47.47481 | 50.38406 | 50.63450 | 49.28219 | 46.97046 | 65.36066 | 45.98168 |
| PSNR | Average | 50.43809 | 56.80727 | 52.63436 | 51.46174 | 48.97725 | 67.07440 | 53.38321 |
| SSIM | Band 1 | 0.99888 | 0.99702 | 0.99417 | 0.99251 | 0.98071 | 1 | 0.99051 |
| SSIM | Band 2 | 0.98443 | 0.99539 | 0.98407 | 0.99175 | 0.97231 | 1 | 0.99048 |
| SSIM | Band 3 | 0.97331 | 0.99385 | 0.97415 | 0.99108 | 0.96142 | 0.9999 | 0.99042 |
| SSIM | Average | 0.98554 | 0.99542 | 0.98413 | 0.99176 | 0.97134 | 1 | 0.99047 |
List A1. Definition of Terms in Table 2
RMSE: MN is the image size; PS(i, j) and MS(i, j) represent the pixel digital number (DN) at the (i, j)-th position of the Pan-sharpened and MS images, respectively.
ERGAS: d_h/d_l represents the ratio between the pixel sizes of the high resolution and low resolution images (e.g., 1/4 for Pléiades data), and n is the number of bands. RMSE_i represents the root mean square error of band i.
SAM: The spectral vector V = {V_1, V_2, …, V_n} stands for the reference MS image pixels and V̂ = {V̂_1, V̂_2, …, V̂_n} stands for the Pan-sharpened image pixels; both have L components.
RASE: μ represents the mean of the b-th band, b is the number of bands, and RMSE represents the root mean square error.
PSNR: L represents the number of gray levels in the image; MN is the image size; I_r(i, j) is the pixel value of the reference image and I_p(i, j) is the pixel value of the Pan-sharpened image. A higher PSNR value indicates more similarity between the reference MS and Pan-sharpened images.
QAVG: x̄ and ȳ are the means of the reference and Pan-sharpened images, respectively; σ_xy is the covariance, and σ_x² and σ_y² are the variances. As QI can only be applied to one band, the average value of three or more bands (QAVG) is used as a global spectral quality index for multi-band images. QI values range between −1 and 1; a higher value indicates more similarity between the reference and Pan-sharpened images.
SSIM: μ stands for the mean and σ for the standard deviation; I_r and I_p represent the reference and Pan-sharpened images, respectively. C1 and C2 are two constants needed to keep the index from dividing by zero; they depend on the dynamic range of the pixel values. A higher index value indicates a better Pan sharpening result.
CC: C_{r,f} is the cross-correlation between the reference and fused images, while C_r and C_f are the correlation coefficients of the reference and fused images, respectively.
SRMSE: The edge magnitude (M) is calculated from the spectral distance of the horizontal and vertical (M_x and M_y) edge intensities.
Sp-ERGAS: d_h/d_l represents the ratio between the pixel sizes of the MS and Pan images, and n is the number of bands. The spatial RMSE is defined as

$$\mathrm{Spatial\ RMSE}=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(\mathrm{Pan}(i,j)-\mathrm{PS}(i,j)\right)^{2}},$$

where MN is the image size, and Pan(i, j) and PS(i, j) represent the pixel digital number (DN) at the (i, j)-th position of the Pan and Pan-sharpened images, respectively.
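As an executable companion to several of these definitions, a minimal sketch follows; it assumes float images scaled to a common range (e.g., [0, 1]), and the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between reference and Pan-sharpened images."""
    return np.sqrt(np.mean((ref - fused) ** 2))

def ergas(ref, fused, ratio=0.25):
    """ERGAS; ratio is d_h/d_l, e.g., 1/4 for Pléiades Pan vs. MS pixel sizes."""
    terms = [(rmse(ref[..., b], fused[..., b]) / ref[..., b].mean()) ** 2
             for b in range(ref.shape[-1])]
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def sam(ref, fused):
    """Mean spectral angle (degrees) between per-pixel spectral vectors."""
    num = np.sum(ref * fused, axis=-1)
    den = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    cos = np.clip(num / np.maximum(den, 1e-12), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cos)))

def psnr(ref, fused, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means more similar."""
    return 10.0 * np.log10(peak ** 2 / np.mean((ref - fused) ** 2))

def cc(ref, fused):
    """Correlation coefficient between the two (flattened) images."""
    return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]
```

For the spectral metrics, `ref` would be the original MS image; for spatial metrics such as CC, the comparison is made against the Pan image (or edge maps derived from it), as described above.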

References

  1. Aplin, P.; Atkinson, P.M.; Curran, P.J. Fine spatial resolution satellite sensors for the next decade. Int. J. Remote Sens. 1997, 18, 3873–3881. [Google Scholar] [CrossRef]
  2. Pohl, C.; van Genderen, J. Remote Sensing Image Fusion: A Practical Guide; CRC Press: Boca Raton, FL, USA, 2016; ISBN 9781498730020. [Google Scholar]
  3. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2565–2586. [Google Scholar] [CrossRef]
  4. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312. [Google Scholar] [CrossRef]
  5. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS+Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  6. Amro, I.; Mateos, J.; Vega, M.; Molina, R.; Katsaggelos, A.K. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP J. Adv. Signal Process. 2011, 1, 1–22. [Google Scholar] [CrossRef]
  7. Chien, C.-L.; Tsai, W.-H. Image fusion with no gamut problem by improved nonlinear IHS transforms for remote sensing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 651–663. [Google Scholar] [CrossRef]
  8. Mohammadzadeh, A.; Tavakoli, A.; Zoej, M.J. V Road extraction based on fuzzy logic and mathematical morphology from pan-sharpened IKONOS images. Photogramm. Rec. 2006, 21, 44–60. [Google Scholar] [CrossRef]
  9. Souza, C.; Firestone, L.; Silva, L.M.; Roberts, D. Mapping forest degradation in the Eastern Amazon from SPOT 4 through spectral mixture models. Remote Sens. Environ. 2003, 87, 494–506. [Google Scholar] [CrossRef]
  10. Rahimzadeganasl, A.; Sertel, E. Automatic building detection based on CIE LUV color space using very high resolution pleiades images. In Proceedings of the 25th Signal Processing and Communications Applications Conference (SIU), Antalya, Turkey, 15 May 2017; pp. 6–9. [Google Scholar]
  11. Rahkar Farshi, T.; Demirci, R.; Feizi-Derakhshi, M. Image clustering with optimization algorithms and color space. Entropy 2018, 20, 296. [Google Scholar] [CrossRef]
  12. Laporterie-Dejean, F.; De Boissezon, H.; Flouzat, G.; Lefevre-Fonollosa, M.J. Thematic and statistical evaluations of five panchromatic/multispectral fusion methods on simulated PLEIADES-HR images. Inf. Fusion 2005, 6, 193–212. [Google Scholar] [CrossRef]
  13. Pohl, C. Challenges of remote sensing image fusion to optimize earth observation data exploration. Eur. Sci. J. 2013, 4, 355–365. [Google Scholar]
  14. Zhang, J. Multi-source remote sensing data fusion: Status and trends. Int. J. Image Data Fusion 2010, 1, 5–24. [Google Scholar] [CrossRef]
  15. Yang, S.; Wang, M.; Jiao, L. Fusion of multispectral and panchromatic images based on support value transform and adaptive principal component analysis. Inf. Fusion 2012, 13, 177–184. [Google Scholar] [CrossRef]
  16. Chavez, P.S., Jr.; Kwarteng, A.Y. Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348. [Google Scholar]
  17. Carper, W.J.; Lillesand, T.M.; Kiefer, R.W. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sensing 1990, 56, 459–467. [Google Scholar]
  18. Laben, C.; Brower, B. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. United States Patent US6011875A, 4 January 2000. [Google Scholar]
  19. Aiazzi, B.; Baronti, S.; Selva, M.; Alparone, L. Enhanced gram-schmidt spectral sharpening based on multivariate regression of MS and pan data. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Denver, CO, USA, 31 July–4 August 2006; pp. 3806–3809. [Google Scholar]
  20. Choi, J.; Yu, K.; Kim, Y. A new adaptive component-substitution-based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309. [Google Scholar] [CrossRef]
  21. Maurer, T. How to pan-sharpen images using the Gram-Schmidt pan-sharpen method—A recipe. In Proceedings of the ISPRS Hannover Workshop 2013, Hanover, Germany, 21–24 May 2013; Volume XL, pp. 21–24. [Google Scholar]
  22. Grochala, A.; Kedzierski, M. A method of panchromatic image modification for satellite imagery data fusion. Remote Sens. 2017, 9, 639. [Google Scholar] [CrossRef]
  23. Schanda, J. Colorimetry: Understanding the CIE System; Wiley-Interscience: Hoboken, NJ, USA, 2007; ISBN 9780470049044. [Google Scholar]
  24. Wirth, M.; Nikitenko, D. The effect of colour space on image sharpening algorithms. In Proceedings of the CRV 2010—7th Canadian Conference on Computer and Robot Vision, Ottawa, ON, Canada, 31 May–2 June 2010; pp. 79–85. [Google Scholar]
  25. Singha, M.; Hemachandran, K. Performance analysis of color spaces in image retrieval. Assam Univ. J. Sci. Technol. 2011, 7. [Google Scholar]
  26. Ganesan, P.; Rajini, V.; Sathish, B.S.; Shaik, K.B. CIELAB color space based high resolution satellite image segmentation using modified fuzzy c-means clustering. MAGNT Res. Rep. 2014, 2, 199–210. [Google Scholar]
  27. Bora, D.J.; Gupta, A.K.; Khan, F.A. Comparing the performance of L*A*B* and HSV color spaces with respect to color image segmentation. Int. J. Emerg. Technol. Adv. Eng. 2015, 5, 192–203. [Google Scholar]
  28. Baisantry, M.; Khare, A. Pan sharpening for hyper spectral imagery using spectral mixing-based color preservation model. J. Indian Soc. Remote Sens. 2017, 45, 743–748. [Google Scholar] [CrossRef]
  29. Jin, X.; Zhou, D.; Yao, S.; Nie, R.; Yu, C.; Ding, T. Remote sensing image fusion method in CIELab color space using nonsubsampled shearlet transform and pulse coupled neural networks. J. Appl. Remote Sens. 2016, 10, 025023. [Google Scholar] [CrossRef]
  30. Shettigara, V.K. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set. Photogramm. Eng. Remote Sens. 1992, 58, 561–567. [Google Scholar]
  31. Pohl, C.; Van Genderen, J.L. Multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19, 823–854. [Google Scholar] [CrossRef]
  32. Ehlers, M.; Madden, M. FFT-enhanced IHS transform method for fusing high-resolution satellite images. ISPRS J. Photogramm. Remote Sens. 2007, 61, 381–392. [Google Scholar]
  33. Sun, W.; Chen, B.; Messinger, D. Nearest-neighbor diffusion-based pansharpening algorithm for spectral images. Opt. Eng. 2014, 53, 013107. [Google Scholar] [CrossRef]
  34. Perona, P.; Malik, J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 629–639. [Google Scholar] [CrossRef]
  35. Tu, T.; Su, S.; Shyu, H.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186. [Google Scholar] [CrossRef]
  36. Li, H.; Jing, L.; Tang, Y.; Liu, Q.; Ding, H.; Sun, Z.; Chen, Y. Assessment of pan-sharpening methods applied to WorldView-2 image fusion. Sensors 2017, 17, 89. [Google Scholar] [CrossRef]
  37. Padwick, C.; Deskevich, M.; Pacifici, F.; Smallwood, S. WorldView-2 pan-sharpening. ASPRS 2010, 48, 26–30. [Google Scholar]
  38. León, K.; Mery, D.; Pedreschi, F.; León, J. Color measurement in L∗A∗B∗ units from RGB digital images. Food Res. Int. 2006, 39, 1084–1091. [Google Scholar] [CrossRef]
  39. Hammond, D.L. Validation of LAB color mode as a nondestructive method to differentiate black ballpoint pen inks. J. Forensic Sci. 2007, 52, 967–973. [Google Scholar] [CrossRef] [PubMed]
  40. Hubel, D. David Hubel’s Eye, Brain, and Vision. Available online: http://hubel.med.harvard.edu/book/b44.htm (accessed on 25 August 2019).
  41. Gilchrist, A.; Nobbs, J. Colorimetry, theory. In Encyclopedia of Spectroscopy and Spectrometry, 3rd ed.; Lindon, J.C., Tranter, G.E., Koppenaal, D.W., Eds.; Academic Press: Oxford, UK, 2017; pp. 328–333. ISBN 978-0-12-803224-4. [Google Scholar]
  42. Yuan, D.; Elvidge, C.D. Comparison of relative radiometric normalization techniques. ISPRS J. Photogramm. Remote Sens. 1996, 51, 117–126. [Google Scholar] [CrossRef]
  43. Dou, W.; Chen, Y. An improved IHS image fusion method. In Proceedings of The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Calgary, AB, Canada, 5–8 August 2008; ISPRS: Hanover, Germany, 2008; Volume XXXVII, pp. 1253–1256. [Google Scholar]
  44. Airbus, Pleiades Products. 2017. Available online: http://www.intelligence-airbusds.com/en/3027-pleiades-50-cmresolution-products (accessed on 5 September 2017).
  45. Jagalingam, P.; Vittal, A. A Review of quality metrics for fused image. Aquat. Procedia 2015, 4, 133–142. [Google Scholar] [CrossRef]
  46. Helmy, A.K.; El-Tawel, G.S. An integrated scheme to improve pan-sharpening visual quality of satellite images. Egypt. Inform. J. 2015, 16, 121–131. [Google Scholar] [CrossRef]
  47. Naidu, V.P.S. Discrete cosine transform based image fusion techniques. J. Commun. Navig. Signal Process. 2012, 1, 35–45. [Google Scholar]
  48. Otazu, X.; González-Audícana, M.; Fors, O.; Núñez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385. [Google Scholar] [CrossRef]
  49. Li, S.; Kwok, J.T.; Wang, Y. Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images. Inf. Fusion 2002, 3, 17–23. [Google Scholar] [CrossRef]
  50. Zhou, J.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757. [Google Scholar] [CrossRef]
  51. Ashraf, S.; Brabyn, L.; Hicks, B.J. Image data fusion for the remote sensing of freshwater environments. Appl. Geogr. 2012, 32, 619–628. [Google Scholar] [CrossRef]
  52. Gonzalo-Martin, C.; Lillo, M. Balancing the spatial and spectral quality of satellite fused images through a search algorithm. InTechOpen 2011. [Google Scholar] [CrossRef] [Green Version]
  53. Wald, L. Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Les Presses de l’Ecole des Mines: Paris, France, 2002. [Google Scholar]
  54. Gitelson, A.A.; Stark, R.; Grits, U.; Rundquist, D.; Kaufman, Y.; Derry, D. Vegetation and soil lines in visible spectral space: A concept and technique for remote estimation of vegetation fraction. Int. J. Remote Sens. 2002, 23, 2537–2562. [Google Scholar] [CrossRef]
Figure 1. CIELab image Pan sharpening flowchart.
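The component substitution pipeline summarized in the Figure 1 flowchart can be sketched in code. The following is a minimal illustration of the general CIELab substitution idea only (convert the upsampled MS image to CIELab, replace the lightness channel with a statistics-matched Pan band, convert back), not the authors' exact implementation: the sRGB/D65 conversion constants are the standard ones, all function names are our own, and the paper's preprocessing steps (registration, resampling, histogram operations) are omitted.

```python
import numpy as np

# Standard sRGB (D65) <-> CIELab conversion constants.
_M_FWD = np.array([[0.4124564, 0.3575761, 0.1804375],
                   [0.2126729, 0.7151522, 0.0721750],
                   [0.0193339, 0.1191920, 0.9503041]])
_M_INV = np.linalg.inv(_M_FWD)
_WHITE = np.array([0.95047, 1.0, 1.08883])  # D65 reference white

def rgb_to_lab(rgb):
    """rgb: array of shape (..., 3), floats in [0, 1]."""
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    xyz = lin @ _M_FWD.T / _WHITE                     # normalized XYZ
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def lab_to_rgb(lab):
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    fy = (L + 16) / 116
    f = np.stack([fy + a / 500, fy, fy - b / 200], axis=-1)
    xyz = np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4 / 29)) * _WHITE
    lin = xyz @ _M_INV.T
    srgb = np.where(lin > 0.0031308,
                    1.055 * np.clip(lin, 0, None) ** (1 / 2.4) - 0.055,
                    12.92 * lin)
    return np.clip(srgb, 0, 1)

def cielab_pansharpen(ms_rgb, pan):
    """ms_rgb: MS image upsampled to Pan resolution, (H, W, 3) in [0, 1];
    pan: panchromatic band, (H, W) in [0, 1]."""
    lab = rgb_to_lab(ms_rgb)
    L = lab[..., 0]
    # Match the Pan band's mean/std to the lightness channel, then substitute.
    lab[..., 0] = (pan - pan.mean()) / (pan.std() + 1e-12) * L.std() + L.mean()
    return lab_to_rgb(lab)
```

Substituting only the lightness channel is what motivates the approach: the a and b channels, which carry the chromatic information, are left untouched, so spectral distortion is limited to what leaks through the L substitution.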
Figure 2. CIELab color space [41].
Figure 3. The locations of the Pléiades image scenes overlaid on Google Earth©.
Figure 4. Comparison of visible atmospherically resistant index (VARI) results extracted from multispectral (MS) and CIELab Pan sharpened images of test site F3. (a) Original MS, (b) threshold applied VARI of MS, (c) zoomed region from VARI of MS, (d) original CIELab, (e) threshold applied VARI of CIELab and (f) zoomed region from VARI of CIELab.
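The VARI comparison in Figure 4 rests on a simple band ratio, VARI = (G − R) / (G + R − B) [54], computed per pixel and then thresholded into a vegetation mask. A minimal sketch (function names are our own, and the 0.1 threshold below is an arbitrary placeholder, not the value used in the paper):

```python
import numpy as np

def vari(red, green, blue, eps=1e-12):
    """Visible atmospherically resistant index: VARI = (G - R) / (G + R - B)."""
    red, green, blue = (np.asarray(b, dtype=float) for b in (red, green, blue))
    return (green - red) / (green + red - blue + eps)

def vegetation_mask(red, green, blue, threshold=0.1):
    """Binary vegetation map from thresholded VARI (placeholder threshold)."""
    return vari(red, green, blue) > threshold
```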
Table 1. A brief review of state-of-the-art Pan sharpening methods.
| Method | Description | References |
| --- | --- | --- |
| IHS | The total brightness of a color is represented by the intensity channel, while the wavelength property and the purity of the color are represented by hue and saturation, respectively. | [30,31] |
| EHLERS (FFT-enhanced IHS transform) | The fundamental idea is to modify the Pan image so that it more closely resembles the intensity component of the MS image. The method uses FFT (fast Fourier transform) filtering for partial, rather than entire, replacement of the intensity component. | [32] |
| NNDiffuse (nearest-neighbor diffusion-based) | This method treats each pixel spectrum in the Pan sharpened image as a weighted linear combination of the spectra of its neighboring superpixels. The algorithm is controlled by factors such as intensity smoothness (σ), spatial smoothness (σ_s) and the pixel size ratio. | [33,34] |
| GIHS (generalized IHS) | Direct application of the IHS method requires many multiplication and addition operations, which makes the Pan sharpening operation computationally inefficient. GIHS provides a computationally efficient alternative that does not require a coordinate transformation. | [35] |
| Gram-Schmidt | This method uses Gram-Schmidt orthogonalization to convert the original low-resolution MS bands, which are linearly independent vectors, into a set of orthogonal vectors. The first vector in the orthogonal space is the simulated Pan image, produced by a weighted aggregation of the original MS bands. | [21,22] |
| Hyperspherical Color Sharpening (HCS) | HCS was designed for WorldView-2 imagery but can be applied to any MS data containing three or more bands. The approach transforms the original color space into a hyperspherical color space. | [36,37] |
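The GIHS entry in Table 1 can be made concrete: the fusion reduces to computing the intensity I as the band average and adding the difference (Pan − I) to every MS band, with no forward/inverse color space transform. A minimal sketch (function and argument names are our own; any histogram matching of Pan to I is omitted):

```python
import numpy as np

def gihs_pansharpen(ms, pan):
    """Generalized IHS fusion: inject (Pan - I) into every MS band.

    ms:  (H, W, B) multispectral image upsampled to the Pan grid.
    pan: (H, W) panchromatic band on the same grid.
    """
    intensity = ms.mean(axis=-1)        # I = average of the MS bands
    detail = pan - intensity            # spatial detail to inject
    return ms + detail[..., np.newaxis] # same additive correction per band
```

By construction, the band average of the fused image equals the Pan band, which is exactly the substitution the IHS family performs, obtained here with one subtraction and one addition per pixel.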
Table 2. The description of all accuracy indices (definition of terms provided in List A1).
| Quality Metric | Description | Formula | Desired Value | Reference |
| --- | --- | --- | --- | --- |
| RMSE | Root mean square error; measures the variation in DN values between the Pan sharpened and reference images. | $\mathrm{RMSE}=\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(\mathrm{MS}(i,j)-\mathrm{PS}(i,j)\right)^{2}}$ | Lower (near zero) | [45] |
| ERGAS | Relative dimensionless global error synthesis; summarizes the normalized average error of each band of the result image. | $\mathrm{ERGAS}=100\,\frac{d_h}{d_l}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\mathrm{RMSE}(i)}{\mathrm{Mean}(i)}\right)^{2}}$ | Lower (near zero) | [45] |
| SAM | Spectral angle mapper; represents the spectral similarity between the Pan sharpened and reference MS images using the average spectral angle. | $\mathrm{SAM}=\cos^{-1}\left(\frac{\langle v,\hat{v}\rangle}{\lVert v\rVert\,\lVert\hat{v}\rVert}\right)$ | Lower (near zero) | [36] |
| RASE | Relative average spectral error; characterizes the average performance of a Pan sharpening algorithm across the spectral bands. | $\mathrm{RASE}=\frac{100}{\mu}\sqrt{\frac{1}{b}\sum_{i=1}^{b}\mathrm{RMSE}(i)^{2}}$ | Lower (near zero) | [46] |
| PSNR | Peak signal-to-noise ratio; a widely used metric for comparing a distorted (Pan sharpened) image with the original (reference) image. | $\mathrm{PSNR}=20\log_{10}\frac{L}{\sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j)-I_p(i,j)\right)^{2}}}$ | Higher | [47] |
| QAVG | The average quality index; models the difference between reference and Pan sharpened images as a combination of three factors: loss of correlation, luminance distortion and contrast distortion. Since QI applies to a single band, the average over three or more bands (QAVG) gives a global spectral quality index for multiband images. | $QI=\frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{\left(\sigma_x^{2}+\sigma_y^{2}\right)\left(\bar{x}^{2}+\bar{y}^{2}\right)}$ | Higher (close to 1) | [48] |
| SSIM | Structural similarity index; compares local patterns (luminance, contrast and structure) using the means and standard deviations of the two images. | $\mathrm{SSIM}=\frac{\left(2\mu_{I_r}\mu_{I_p}+C_1\right)\left(2\sigma_{I_rI_p}+C_2\right)}{\left(\mu_{I_r}^{2}+\mu_{I_p}^{2}+C_1\right)\left(\sigma_{I_r}^{2}+\sigma_{I_p}^{2}+C_2\right)}$ | Higher | [2] |
| CC | Correlation coefficient between the Pan image and the intensity component of the Pan sharpened image; assesses spatial quality. | $CC=\frac{2C_{rf}}{C_r+C_f}$ | Higher (close to 1) | [49] |
| ZHOU | Zhou's spatial index; extracts high frequency information from both the Pan and Pan sharpened images with a Laplacian high-pass filter, then averages the correlation coefficients between the filtered Pan image and each band of the Pan sharpened image. | Laplacian kernel: $\begin{bmatrix}-1&-1&-1\\-1&8&-1\\-1&-1&-1\end{bmatrix}$ | Higher (close to 1) | [50] |
| SRMSE | Sobel-based RMSE; compares the absolute edge magnitudes of the Pan and Pan sharpened images, computed with 3 × 3 horizontal and vertical Sobel kernels. The RMSE is then calculated between the two edge magnitude images. | $M=\sqrt{M_x^{2}+M_y^{2}}$, where $M_x=\begin{bmatrix}-1&0&1\\-2&0&2\\-1&0&1\end{bmatrix}\ast\mathrm{image}$ and $M_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\1&2&1\end{bmatrix}\ast\mathrm{image}$ | Lower (near zero) | [51] |
| Sp-ERGAS | Spatial ERGAS; assesses the spatial quality of the Pan sharpened image using a spatial RMSE. | $\text{Sp-ERGAS}=100\,\frac{d_h}{d_l}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{\mathrm{Spatial\_RMSE}(i)}{\mathrm{Pan}(i)}\right)^{2}}$ | Lower (near zero) | [52] |
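Several of the spectral metrics in Table 2 are straightforward to compute directly. The sketch below follows the table's formulas with numpy only; function and argument names are our own, `ratio` stands for d_h/d_l, and RASE uses the global mean μ of the reference image as in the table.

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between reference and Pan sharpened images."""
    return np.sqrt(np.mean((np.asarray(ref, float) - np.asarray(fused, float)) ** 2))

def ergas(ref, fused, ratio):
    """ERGAS for (H, W, B) images; ratio = d_h / d_l (Pan/MS pixel size ratio)."""
    terms = [(rmse(ref[..., b], fused[..., b]) / ref[..., b].mean()) ** 2
             for b in range(ref.shape[-1])]
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def rase(ref, fused):
    """Relative average spectral error, normalized by the global mean of ref."""
    terms = [rmse(ref[..., b], fused[..., b]) ** 2 for b in range(ref.shape[-1])]
    return 100.0 / ref.mean() * np.sqrt(np.mean(terms))

def psnr(ref, fused, peak):
    """Peak signal-to-noise ratio in dB; peak = L (e.g. 255 for 8-bit data)."""
    e = rmse(ref, fused)
    return np.inf if e == 0 else 20.0 * np.log10(peak / e)
```

For Pléiades data (0.5 m Pan, 2 m MS), `ratio` would be 0.25; identical inputs yield ERGAS and RASE of zero and an infinite PSNR, the ideal values indicated in the table.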
Table 3. The description of three datasets.
| No | Location | Acquisition Date | Platform |
| --- | --- | --- | --- |
| D1 | Istanbul | 2017-04-09 | PHR 1A |
| D2 | Izmir | 2015-12-04 | PHR 1B |
| D3 | Aydin | 2018-04-10 | PHR 1B |
Table 4. The description of ten selected data frames.
| Image | Location | Type | Objects |
| --- | --- | --- | --- |
| F1 | Istanbul | Rural | Trees, vegetation, bare roads, bare soil |
| F2 | Istanbul | Urban | Buildings, roads, squares, trees, bare soil |
| F3 | Istanbul | Suburban | Vegetation, roads and highways, buildings, bare soil, industry buildings, green fields |
| F4 | Izmir | Urban | Buildings, roads, vegetation, bare soil |
| F5 | Izmir | Rural | Agricultural fields, bare roads |
| F6 | Izmir | Rural | Trees, mountain, bare roads |
| F7 | Izmir | Suburban | Industry buildings, agricultural fields, roads, bare soil |
| F8 | Aydin | Suburban | Agricultural fields, roads, bare roads, trees, buildings, bare soil |
| F9 | Aydin | Urban | Buildings, roads, trees, bare soil, green fields |
| F10 | Aydin | Rural | Trees, vegetation, agricultural fields, buildings, roads, bare roads |
Table 5. The overall relative ranking of the methods evaluated.
| Method | Rural (Spectral) | Rural (Spatial) | Urban (Spectral) | Urban (Spatial) | Suburban (Spectral) | Suburban (Spatial) |
| --- | --- | --- | --- | --- | --- | --- |
| CIELab | Good | Good | Good | Good | Good | Moderate |
| GIHS | Moderate | Moderate | Moderate | Moderate | Poor | Poor |
| GS | Good | Good | Moderate | Moderate | Poor | Moderate |
| HCS | Poor | Moderate | Poor | Poor | Poor | Poor |
| IHS | Poor | Poor | Poor | Poor | Moderate | Poor |
| NNDiffuse | Moderate | Moderate | Moderate | Poor | Poor | Moderate |
| Ehlers | Poor | Poor | Poor | Poor | Moderate | Poor |

Share and Cite

Rahimzadeganasl, A.; Alganci, U.; Goksel, C. An Approach for the Pan Sharpening of Very High Resolution Satellite Images Using a CIELab Color Based Component Substitution Algorithm. Appl. Sci. 2019, 9, 5234. https://doi.org/10.3390/app9235234