Article

Satellite Image Pansharpening Using a Hybrid Approach for Object-Based Image Analysis

Brian Alan Johnson, Ryutaro Tateishi and Nguyen Thanh Hoan
Center for Environmental Remote Sensing (CEReS), Chiba University, 1-33 Yayoi-cho, Inage, Chiba 263-8522, Japan
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2012, 1(3), 228-241; https://doi.org/10.3390/ijgi1030228
Submission received: 3 August 2012 / Revised: 25 September 2012 / Accepted: 9 October 2012 / Published: 16 October 2012

Abstract

Intensity-Hue-Saturation (IHS), Brovey Transform (BT), and Smoothing-Filter-Based-Intensity Modulation (SFIM) algorithms were used to pansharpen GeoEye-1 imagery. The pansharpened images were then segmented in Berkeley Image Seg using a wide range of segmentation parameters, and the spatial and spectral accuracy of image segments was measured. We found that pansharpening algorithms that preserve more of the spatial information of the higher resolution panchromatic image band (i.e., IHS and BT) led to more spatially-accurate segmentations, while pansharpening algorithms that minimize the distortion of spectral information of the lower resolution multispectral image bands (i.e., SFIM) led to more spectrally-accurate image segments. Based on these findings, we developed a new IHS-SFIM combination approach, specifically for object-based image analysis (OBIA), which combined the better spatial information of IHS and the more accurate spectral information of SFIM to produce image segments with very high spatial and spectral accuracy.


1. Introduction

Recent years have seen an increase in the number of satellites that acquire high spatial resolution imagery. The latest generation of these satellites (e.g., GeoEye-1, WorldView-2, Pléiades 1A) acquires imagery at a very high spatial resolution for a panchromatic (PAN) band (around 0.5 m) and at a lower resolution (around 2 m) for several multispectral (MS) bands. For these types of remote sensing data that contain PAN and MS bands of different spatial resolutions, image fusion methods referred to as “pansharpening” methods are often performed to increase the resolution of the MS bands using the PAN band [1]. Pansharpening allows for the high level of spatial information in the PAN band to be combined with the more detailed spectral information in the MS bands. A large number of pansharpening algorithms have been developed, and [2] provides an overview and comparison of many of the frequently-used algorithms. In general, some pansharpening algorithms, such as Intensity-Hue-Saturation (IHS) transformation [3] and the Brovey Transform (BT) [4], preserve almost all of the spatial details in the PAN image but distort the MS information to some degree [5,6], while others like the Smoothing-Filter-Based Intensity Modulation (SFIM) [7] preserve more accurate MS information at the cost of some reduction in spatial information [6]. In addition, some pansharpening methods incorporate an adjustable filter or parameter that allows users to control this tradeoff between color distortion and spatial resolution enhancement (e.g., [6,7,8,9,10]).
A large number of metrics have been developed to evaluate the spatial and/or spectral quality of different pansharpening techniques, and [2] review the most commonly-used metrics. These evaluation metrics all involve pixel-based calculations, but in many cases high resolution images are classified using an object-based image analysis (OBIA) approach rather than a pixel-based approach due to the higher classification accuracy often achieved by the OBIA approach [11,12,13]. In the OBIA approach, an image is first segmented into homogeneous regions called “image segments” or “image objects”, and then classified using the spectral (e.g., mean, standard deviation) and non-spectral (e.g., size, shape, etc.) attributes of these image segments [14]. Since classification variables in the OBIA approach are calculated for image segments rather than individual pixels, it should therefore be more appropriate to evaluate the quality of pansharpening algorithms using segment-based evaluation metrics rather than pixel-based metrics if the pansharpened image is intended for OBIA. These segment-based evaluation metrics should take into account the spatial and spectral accuracy of image segments.
One common method to estimate the spatial accuracy of image segments is to calculate the similarity in position, size, and/or shape between image segments and manually-digitized reference polygons [15]. These types of segment-based spatial accuracy measures have been used in past remote sensing studies for optimizing segmentation parameters [16] and for comparing different segmentation algorithms [17], but to our knowledge no studies have compared the spatial accuracy of segmentations using different pansharpening algorithms to see which lead to more spatially-accurate segmentations. In addition, while a large number of segment-based spatial accuracy metrics exist, there is a lack of segment-based spectral accuracy metrics, probably because it is assumed that segments with a higher spatial accuracy should also have more accurate spectral information. This is likely to be true for a set of segmentations of a single image. However, to compare segmentations produced by multiple pansharpening algorithms (and thus multiple images with different levels of spectral distortion), it is necessary to include a spectral accuracy measure because a more spatially-accurate segmentation may not necessarily be more spectrally-accurate.
In this study, we evaluated the effect that three commonly-used pansharpening techniques—IHS, BT, and SFIM—had on the spatial and spectral quality of image segmentations. As previously mentioned, IHS and BT are both excellent at preserving spatial details from the PAN image, while SFIM minimizes the distortion of spectral information from the MS image, so a comparison should be useful to see which characteristic, if either, is most useful for OBIA. For each pansharpened image, a wide range of segmentation parameters were tested, and the spatial accuracy of segments was calculated for each segmentation parameter. Like past studies, we assumed that, for an individual pansharpened image, segmentations with higher spatial accuracy should also be more spectrally-accurate (and thus did not calculate spectral accuracy for every segmentation parameter). However, once the most spatially-accurate segmentations of each pansharpened image were identified, we calculated the spectral accuracy of these segmentations to allow for comparisons of spectral distortion to be made among the three algorithms. We anticipated that IHS and BT pansharpening would produce more spatially-accurate segmentations since they better preserve spatial information, while SFIM would better preserve the spectral information of image segments. In past research, pixel-based evaluation metrics have indicated that the IHS, BT, and SFIM algorithms work well in combination [6], so another objective of this study was to develop a hybrid approach specifically for OBIA that can incorporate both the rich spatial information of IHS and BT and the accurate spectral information of SFIM.

2. Pansharpening Equations

IHS is one of the most commonly-used pansharpening methods in remote sensing due to its efficiency and ease of implementation [5]. The original IHS pansharpening algorithm was designed for only three spectral bands [2], but [18] provides an efficient n-band IHS-like algorithm. Using this IHS-like algorithm for a 4-band image with blue (B), green (G), red (R), and near infrared (NIR) spectral bands, pansharpened MS values are calculated as
B'ihs = B + δ,  G'ihs = G + δ,  R'ihs = R + δ,  NIR'ihs = NIR + δ  (1)
δ = PAN − I  (2)
I = α1·B + α2·G + α3·R + α4·NIR  (3)
where B'ihs is the pansharpened value for the blue band, B is the digital number (DN) value of the pixel in the original MS band (radiance or reflectance values can be used alternatively), and PAN is the DN of the pixel in the PAN image. α1 to α4 can be determined based on the spectral response curve of a sensor’s PAN and MS bands [6]. Alternatively, a simple average can be used in place of α1 to α4, i.e., I = (B + G + R + NIR)/4, or α1 to α4 can be determined through multivariate regression [18,19]. Since the MS bands are used for calculation of δ, the theoretical spatial resolution of the sharpened image is not exactly equal to that of the PAN image, but satisfactory resolution can be achieved in practice [6]. We chose to test IHS pansharpening in this study for its ability to preserve the spatial information of the PAN image, which may be beneficial for image segmentation purposes.
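To make the n-band IHS-like calculation concrete, the following Python/NumPy sketch applies Equations (1)–(3) to a 4-band image. The function name, array layout, and the assumption that the MS bands have already been resampled to the PAN pixel grid are ours, not part of the original study.

```python
# Illustrative sketch of n-band IHS-like pansharpening (Equations (1)-(3)).
# Assumes the MS bands were already resampled to the PAN pixel grid; the
# function name and array layout are assumptions, not the authors' code.
import numpy as np

def ihs_pansharpen(ms, pan, weights):
    """ms: (4, H, W) array of B, G, R, NIR bands; pan: (H, W) PAN band;
    weights: alpha_1..alpha_4 from the sensor's spectral response."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    intensity = (w * ms).sum(axis=0)   # Equation (3): I = alpha_1*B + ... + alpha_4*NIR
    delta = pan - intensity            # Equation (2): delta = PAN - I
    return ms + delta                  # Equation (1): each band' = band + delta

# e.g., with the GeoEye-1 weights reported in Section 3.1:
# sharpened = ihs_pansharpen(ms_upsampled, pan, [0.343, 0.376, 0.181, 0.1])
```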
BT is another fast and efficient pansharpening method that has been used in many remote sensing studies [5]. Like IHS, BT was also originally developed for 3-band imagery, but an n-band BT-like algorithm is given in [8]. For a 4-band image, it can be calculated as
B'bt = B × (PAN / I)  (and similarly for the G, R, and NIR bands)  (4)
where B'bt is the pansharpened value for band B and I is calculated as in Equations (1) to (3). This BT algorithm is similar to the “naïve” mode of the Hyperspherical Color Sharpening (HCS) algorithm recently proposed for pansharpening WorldView-2 imagery [6]. BT is also similar to IHS in that it preserves almost all of the spatial information of the PAN image while distorting the MS information. However, IHS pansharpening is known to result in saturation compression, while BT results in saturation stretch [6]. We chose to test BT in addition to IHS to assess which property, if either, resulted in higher image segmentation quality.
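A corresponding sketch of the n-band BT-like formula in Equation (4) is given below; as before, the names are our own, and a small epsilon is added only to guard against division by zero in very dark areas.

```python
# Illustrative sketch of the BT-like formula in Equation (4); variable names and
# the epsilon safeguard are our own additions, not the authors' implementation.
import numpy as np

def bt_pansharpen(ms, pan, weights, eps=1e-6):
    """ms: (4, H, W) B, G, R, NIR bands resampled to the PAN grid; pan: (H, W)."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    intensity = (w * ms).sum(axis=0)        # same intensity image I as in Equation (3)
    return ms * (pan / (intensity + eps))   # Equation (4): band' = band * PAN / I
```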
As the third pansharpening algorithm in this study, we chose to test SFIM due to its capability to minimize the distortion of MS information (unlike IHS and BT). Also unlike IHS and BT, it can be applied to n-bands without requiring α1 to α4. Instead, a smoothed version of the PAN image, often obtained using a 7 × 7 mean filter [6], is required. For a 4-band image, SFIM can be calculated as
B'sfim = B × (PAN / PANsmooth)  (and similarly for the G, R, and NIR bands)  (5)
where B'sfim is the pansharpened value for band B and PANsmooth is the DN value of the pixel in the PAN band after the 7 × 7 mean filter has been applied. Thus, any change in the spectral information of the MS bands is caused solely by PAN / PANsmooth. This SFIM equation is very similar to the “smart mode” of HCS [6].
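The SFIM calculation in Equation (5) can be sketched in the same way; here SciPy's uniform_filter stands in for the 7 × 7 mean filter, which is one straightforward way to implement the smoothing step.

```python
# Illustrative sketch of SFIM (Equation (5)) using a 7 x 7 mean filter on the PAN
# band; scipy.ndimage.uniform_filter is one possible choice of mean filter.
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_pansharpen(ms, pan, size=7, eps=1e-6):
    """ms: (4, H, W) B, G, R, NIR bands resampled to the PAN grid; pan: (H, W)."""
    pan_smooth = uniform_filter(pan.astype(float), size=size)  # smoothed PAN image
    return ms * (pan / (pan_smooth + eps))  # Equation (5): band' = band * PAN / PAN_smooth
```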

3. Methods

3.1. Study Area and Data

For this study, GeoEye-1 imagery was acquired for a residential area and a forested area in Ishikawa Prefecture, Japan. The imagery consisted of four 2 m resolution MS bands (B, G, R, and NIR bands) and a 0.5 m PAN band. After orthorectifying the images using the provided Rational Polynomial Coefficients (RPCs) and a 5 m resolution LIDAR-derived digital elevation model (DEM), they were pansharpened using the IHS, BT, and SFIM algorithms (Equations (1)–(5)). To calculate I in Equations (1) to (3) for IHS and BT pansharpening, α1 to α4 values (0.343, 0.376, 0.181, and 0.1) were determined based on the spectral response of GeoEye-1’s spectral bands (values obtained from [6]). False color composites of the original and pansharpened images of the residential and forested study areas are shown in Figure 1 and Figure 2, respectively, to allow for a visual comparison of results.
Figure 1. GeoEye-1 image (NIR, R, G bands) of (a) the residential study area. (b) Intensity-Hue-Saturation (IHS), (c) Brovey Transform (BT), and (d) Smoothing-Filter-Based-Intensity Modulation (SFIM) pansharpened images. Yellow polygons in (a) delineate reference tree polygons and black polygons delineate reference building polygons. A standard deviation contrast stretch of 2.0 was used for display purposes.
Figure 2. GeoEye-1 image (NIR, R, G bands) of (a) the forested study area, (b) IHS, (c) BT, and (d) SFIM pansharpened images. Yellow polygons in (a) delineate reference polygons of damaged or killed trees. A standard deviation contrast stretch of 3.0 was used for display purposes.

3.2. Digitizing Reference Polygons of Land Cover Objects of Interest

For each study area, we digitized reference polygons of specific land cover objects of interest, which were used to evaluate the spatial accuracy of image segmentations. The boundaries of these reference polygons closely matched the boundaries of the objects of interest based on detailed visual analysis. For the residential study area, we digitized 30 polygons of individual trees to assess how well each pansharpening algorithm allowed for small features to be segmented, and 30 polygons of buildings to test how well each algorithm allowed for larger features with distinctive shapes to be segmented (reference polygons are shown in Figure 1(a)). In the forested study area, we digitized 30 polygons of damaged or killed Oak trees to test how well each pansharpening algorithm allowed for specific targets to be detected in a vegetation-dominated landscape (reference polygons are shown in Figure 2(a)). These Oak trees were severely damaged or killed due to mass attacks by ambrosia beetles (Platypus quercivorus) carrying the fungus Raffaelea quercivora [20]. Early detection of attacked trees is important to prevent further expansion of Platypus quercivorus [21], so a more accurate segmentation of damaged trees may lead to more accurate detection using OBIA techniques.

3.3. Image Segmentation

Image segmentation was done in BerkeleyImageSeg (BIS; http://www.imageseg.com). BIS’s segmentation algorithm is based on the region-merging approach described in [11], making it computationally similar to eCognition’s (http://www.ecognition.com) “multi-resolution segmentation” algorithm [11] that has been used in many remote sensing studies. Any difference between BIS’s and eCognition’s segmentation algorithms is therefore due to proprietary implementation details not publicly available [16]. There are three user-defined parameters for segmentation: a “Threshold” parameter that controls the relative size of segments, a “Shape” parameter that controls the relative amounts of spectral and spatial information used in the segmentation process, and a “Compactness” parameter that controls how smooth vs. how jagged segment boundaries are [16].
For delineation of individual trees in the residential study area, we tested Threshold parameters from 5 to 40 at a step of 5, and Shape and Compactness parameters of 0.1, 0.3, 0.5, 0.7, and 0.9 for segmenting the pansharpened images (total of 200 segmentations for each pansharpened image). As a baseline for comparison purposes, we also segmented the original PAN image using these segmentation parameters. For delineation of buildings in the residential study area, we tested Threshold parameters of 40–100 at a step of 5 and the same Shape and Compactness parameters used for individual trees (total of 325 segmentations for each pansharpened image). Finally, for delineation of damaged trees in the forested study area, we tested Threshold, Shape, and Compactness parameters equivalent to those used for extracting trees in the residential area. The optimal parameters for segmenting each of these types of land cover were determined quantitatively using the methods described in Section 3.4. As shown in Section 4.1., the most spatially-accurate segmentations were not obtained using the highest or lowest Threshold parameter values tested. For this reason, we did not further expand the parameter search to include lower or higher Threshold parameters.
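As a quick illustration of how the parameter combinations above are counted, the grids can be enumerated as in the short sketch below; the segmentations themselves were run in BIS, whose interface is not reproduced here, and the variable names are our own.

```python
# Enumerating the segmentation parameter grids described above; this only counts
# the combinations, since the segmentations were produced in BerkeleyImageSeg.
from itertools import product

shape_values = [0.1, 0.3, 0.5, 0.7, 0.9]
compactness_values = [0.1, 0.3, 0.5, 0.7, 0.9]

# Trees and damaged trees: Threshold 5-40 at a step of 5
tree_grid = list(product(range(5, 41, 5), shape_values, compactness_values))
# Buildings: Threshold 40-100 at a step of 5
building_grid = list(product(range(40, 101, 5), shape_values, compactness_values))

print(len(tree_grid), len(building_grid))  # 200 325
```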

3.4. Calculating the Spatial Accuracy of Image Segments

The spatial accuracy of each image segmentation was assessed using the D metric (D) [16], which combines measures of oversegmentation (i.e., segments smaller than the land cover objects of interest) and undersegmentation (i.e., segments larger than the land cover objects of interest) into a single value that estimates closeness to an ideal segmentation result (i.e., a perfect match between the reference polygons and image segments). Oversegmentation (OverSegij) is defined as
OverSegij = 1 − area(xi ∩ yj) / area(xi)  (6)
where area(xi ∩ yj) is the area of the geographic intersection of reference polygon xi and image segment yj. Yi* is the subset of segments relevant to reference polygon xi (i.e., segments for which either the centroid of xi is in yj, the centroid of yj is in xi, area(xi ∩ yj)/area(xi) > 0.5, or area(xi ∩ yj)/area(yj) > 0.5). OverSegij values range from 0 to 1, with lower values indicating less oversegmentation. Undersegmentation (UnderSegij) is defined as
UnderSegij = 1 − area(xi ∩ yj) / area(yj)  (7)
UnderSegij also ranges from 0 to 1, and lower values indicate less undersegmentation. D combines these measures of over- and under-segmentation using root mean square error (RMSE), and is given by
D = √[(OverSegij² + UnderSegij²) / 2]  (8)
Lower D values indicate segmentations with a higher spatial accuracy (i.e., less over- and under-segmentation). D was chosen to measure the spatial accuracy of segments in this study due to its good performance in [16] (it frequently indicated an optimal segmentation equivalent or similar to the optimal segmentation indicated by many other metrics) and its speed and ease of implementation (it can be automatically calculated in BIS).
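For readers who wish to reproduce the calculation, the following sketch implements Equations (6)–(8) for a single reference polygon using shapely geometries. The selection rule for Yi* follows the description above, while averaging OverSeg and UnderSeg over the relevant segments before combining them is our reading of the metric rather than BIS's exact implementation.

```python
# Sketch of the D metric (Equations (6)-(8)) for one reference polygon x_i.
# The Y_i* selection rule follows Section 3.4; averaging over the relevant
# segments before combining is an assumption, not BIS's exact implementation.
import numpy as np
from shapely.geometry import Polygon

def d_metric(reference, segments):
    """reference: shapely Polygon x_i; segments: iterable of candidate Polygons y_j."""
    over, under = [], []
    for seg in segments:
        inter = reference.intersection(seg).area
        relevant = (seg.contains(reference.centroid)
                    or reference.contains(seg.centroid)
                    or inter / reference.area > 0.5
                    or inter / seg.area > 0.5)
        if not relevant:
            continue
        over.append(1.0 - inter / reference.area)   # Equation (6): oversegmentation
        under.append(1.0 - inter / seg.area)        # Equation (7): undersegmentation
    over_i, under_i = np.mean(over), np.mean(under)
    return np.sqrt((over_i ** 2 + under_i ** 2) / 2.0)  # Equation (8)
```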

3.5. Calculating the Spectral Accuracy of Image Segments

Once the most spatially-accurate segmentations of each pansharpened image were identified (using D), their spectral accuracy was measured by comparing the spectral characteristics of image segments before and after pansharpening. First, pixels in the original MS bands were upsampled to 0.5 m resolution to match the pixels in the pansharpened images. Then, the mean DN values from the original MS bands and the pansharpened bands were calculated for each image segment polygon. Once image segments contained their original and pansharpened spectral values, the spectral accuracy of the segmentation was measured, band by band, by calculating the RMSE and average bias (BIAS) between the original and pansharpened values. RMSE was calculated as
RMSE = √[(1/n) Σi=1..n (Pansharpi − Originali)²]  (9)
where Pansharpi is the mean DN value in a pansharpened band for segment i, Originali is the mean DN value in the original MS band for segment i, and n is the total number of image segments. For evaluation of spectral distortion using pixel-based methods, RMSE has been considered as a good metric for spectrally-homogeneous regions in an image [22]. Since image segments represent relatively homogeneous image regions, calculating RMSE at the segment level (i.e., comparing the mean values of the original MS and pansharpened MS bands for each segment) should provide a relatively accurate estimate of spectral quality. However, the mean spectral values of a segment are expected to change to some degree due to pansharpening, and in some cases this change may not be due to color distortion but instead indicate useful spectral information added by pansharpening. Thus, the segmentation with the lowest RMSE may not always indicate the segmentation with the highest spectral accuracy. On the other hand, if the mean spectral values of some segments increase, the mean spectral values of other segments should decrease by roughly the same amount if there is little distortion (since pansharpening should not drastically change the radiometric properties of the image as a whole). Thus, an error measure that takes into account the direction of errors is also important for assessing spectral accuracy. For this study, we calculated average bias (BIAS) for this purpose. BIAS is calculated as
BIAS = (1/n) Σi=1..n (Pansharpi − Originali)  (10)
BIAS values range from −∞ to ∞. BIAS values near 0 indicate little over- or underestimation of the spectral values of segments due to pansharpening, making BIAS another good indicator of the spectral accuracy of a segmentation.
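A minimal sketch of the segment-level RMSE and BIAS calculations (Equations (9) and (10)) for one spectral band is shown below. The sign convention for BIAS (pansharpened minus original) is our assumption, chosen so that negative values correspond to the DN reductions reported in Section 4.1.

```python
# Segment-level RMSE (Equation (9)) and BIAS (Equation (10)) for one band, given
# per-segment mean DN values before and after pansharpening. The sign of BIAS
# (pansharpened minus original) is an assumed convention.
import numpy as np

def segment_rmse_bias(original_means, pansharp_means):
    orig = np.asarray(original_means, dtype=float)
    pans = np.asarray(pansharp_means, dtype=float)
    diff = pans - orig
    rmse = float(np.sqrt(np.mean(diff ** 2)))  # Equation (9)
    bias = float(np.mean(diff))                # Equation (10)
    return rmse, bias
```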

4. Results and Discussion

4.1. Spatial and Spectral Accuracy of Image Segments

As was originally anticipated, use of the pansharpening algorithms which preserved more of the spatial information from the PAN band (i.e., IHS and BT) led to more spatially-accurate image segmentations. D values of the most accurate segmentations for each pansharpened image are shown in Table 1. From this table, it is clear that all of the pansharpened images tended to produce more spatially-accurate segmentations than the original PAN image, indicating that the addition of MS information increased spatial accuracy. Of the three pansharpening algorithms, IHS produced the most spatially-accurate segments for all land cover objects of interest, while SFIM produced the least spatially-accurate segments. The worse spatial accuracy of SFIM was likely due to the fact that it degraded the spatial resolution of the PAN image, causing the edges of objects in the pansharpened image to be blurry (leading to less accurate segment boundaries). IHS and BT suffered from spectral distortion, but they seemed to at least preserve sufficient local contrast (i.e., adjacent land cover objects were still relatively easy to distinguish from one another), so this spectral distortion had less of an effect on the spatial accuracy of segmentation than the spatial information degradation of SFIM. The better performance of IHS over BT in all cases suggests that the saturation compression effect of IHS had less of a negative impact than the saturation stretch effect of BT for segmentation purposes.
Table 1. D Metric values (D) of the most spatially-accurate segmentations (i.e., those with the lowest D value) for each pansharpening method, and the Threshold, Shape, and Compactness parameters that produced these segmentations. Lower D indicates a more spatially-accurate segmentation. The pansharpening method with the most accurate segmentation for each land cover of interest is marked with an asterisk (*).
Land Cover of Interest (Study Area) | Pansharpening Method | D | Threshold | Shape | Compactness
Trees (Residential Area) | PAN (No pansharpening) | 0.7666 | 15 | 0.5 | 0.7
Trees (Residential Area) | IHS * | 0.7156 | 10 | 0.9 | 0.9
Trees (Residential Area) | BT | 0.7224 | 15 | 0.5 | 0.1
Trees (Residential Area) | SFIM | 0.7598 | 15 | 0.3 | 0.3
Buildings (Residential Area) | PAN (No pansharpening) | 0.6538 | 50 | 0.7 | 0.9
Buildings (Residential Area) | IHS * | 0.6362 | 65 | 0.9 | 0.9
Buildings (Residential Area) | BT | 0.654 | 60 | 0.9 | 0.9
Buildings (Residential Area) | SFIM | 0.6705 | 45 | 0.9 | 0.7
Damaged Oak Trees (Forested Area) | PAN (No pansharpening) | 0.7594 | 20 | 0.7 | 0.9
Damaged Oak Trees (Forested Area) | IHS * | 0.5856 | 15 | 0.9 | 0.5
Damaged Oak Trees (Forested Area) | BT | 0.6115 | 20 | 0.5 | 0.5
Damaged Oak Trees (Forested Area) | SFIM | 0.6299 | 10 | 0.9 | 0.9
Next, we assessed the spectral accuracy of the most spatially-accurate segmentations of each pansharpened image. As shown in Table 2, for all land cover objects of interest, segmentations of the IHS and BT imagery had high RMSE values and BIAS values well below 0. This indicates that these two pansharpening methods produced segments with a low spectral accuracy, and that they tended to significantly reduce the DN values of image segments. SFIM imagery, on the other hand, produced segmentations with a much lower RMSE and BIAS values much closer to 0, indicating a higher spectral accuracy.
Table 2. RMSE and BIAS of the most spatially-accurate segmentations for each pansharpening method. The pansharpening method with the highest spectral accuracy (i.e., lowest RMSE and BIAS) is marked with an asterisk (*).
Land Cover of Interest (Study Area) | Pansharpening Method | B RMSE | G RMSE | R RMSE | NIR RMSE | B BIAS | G BIAS | R BIAS | NIR BIAS
Trees (Residential Area) | IHS | 115.8 | 115.8 | 115.8 | 115.8 | −79.7 | −79.7 | −79.7 | −79.7
Trees (Residential Area) | BT | 136.8 | 102.6 | 85.5 | 179.2 | −93.3 | −67.5 | −48.8 | −128.7
Trees (Residential Area) | SFIM * | 102.5 | 81.1 | 74.3 | 125.0 | 10.6 | 8.6 | 8.5 | 9.6
Buildings (Residential Area) | IHS | 107.9 | 107.9 | 107.9 | 107.9 | −70.7 | −70.7 | −70.7 | −70.7
Buildings (Residential Area) | BT | 129.7 | 94.9 | 76.1 | 144.2 | −90.9 | −64.1 | −46.3 | −106.9
Buildings (Residential Area) | SFIM * | 95.5 | 77.9 | 71.5 | 103.5 | 28.5 | 23.2 | 21.6 | 29.6
Damaged Oak Trees (Forested Area) | IHS | 75.8 | 75.8 | 75.8 | 75.8 | −70.5 | −70.5 | −70.5 | −70.5
Damaged Oak Trees (Forested Area) | BT | 90.8 | 60.8 | 29.5 | 190.0 | −81.8 | −55.0 | −26.8 | −174.0
Damaged Oak Trees (Forested Area) | SFIM * | 39.2 | 26.4 | 12.7 | 85.8 | −7.9 | −5.3 | −2.8 | −14.7

4.2. IHS-SFIM Combination Approach

Since IHS led to very spatially-accurate image segments and SFIM led to very spectrally-accurate image segments, in this section we propose a new IHS-SFIM hybrid approach that combines the benefits of both pansharpening algorithms to produce a final segmented image with a high spatial and spectral accuracy. To combine the spatially-accurate IHS segment boundaries with the spectrally-accurate information of the SFIM pansharpened imagery, we developed a simple two-step process, shown in Figure 3. First, the segment polygons from the most accurate segmentations of the IHS pansharpened imagery were overlaid onto the SFIM imagery. For the residential study area, the segmentation with the highest spatial accuracy for single trees and the segmentation with the highest spatial accuracy for buildings were overlaid onto the SFIM image, and for the forested study area the segmentation with the highest accuracy for damaged/dead trees was overlaid onto the SFIM image. Next, the mean spectral values for each image segment were extracted from the SFIM image bands, and these mean values replaced the spectral values derived from the IHS image. Thus, the final segmentations consisted of segment boundaries from the IHS image segmentation and spectral information from the SFIM imagery.
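The second (spectral replacement) step of this process can be sketched as follows, assuming the IHS segment polygons have been rasterized to an integer label image aligned with the SFIM bands; this rasterized form is our simplification of the polygon overlay, and the names are illustrative only.

```python
# Sketch of step 2 of the IHS-SFIM combination: extract each segment's mean
# spectral values from the SFIM bands. Assumes the IHS segment polygons have
# been rasterized to an integer label image aligned with the SFIM pixels.
import numpy as np

def hybrid_segment_means(segment_labels, sfim_bands):
    """segment_labels: (H, W) int array of IHS segment IDs (0 = no segment);
    sfim_bands: (4, H, W) SFIM pansharpened bands.
    Returns {segment_id: per-band mean DN from the SFIM image}."""
    means = {}
    for seg_id in np.unique(segment_labels):
        if seg_id == 0:
            continue
        mask = segment_labels == seg_id
        means[seg_id] = sfim_bands[:, mask].mean(axis=1)  # replaces IHS-derived spectra
    return means
```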
Figure 3. True color IHS pansharpened image of (a) a subset of the forested study area, (b) the most spatially-accurate segmentation of the IHS image, and (c) image segments from (b) overlaid on the SFIM pansharpened image to extract more accurate mean spectral values for the segments. Brown areas in the images show trees severely damaged or killed by Raffaelea quercivora. False color imagery (NIR, R, G) is used in place of the true color imagery in (d–f) for visualization purposes. White-colored areas show stressed, damaged, or dead trees, which are clearer in (f).
The spectral accuracies of these final segmentations, shown in Table 3, were much higher than the spectral accuracies of the original IHS segmentations in Table 2. For the most spatially-accurate segmentation of single trees in the residential area (i.e., the IHS segmentation with Threshold, Shape, and Compactness parameters of 10, 0.9, and 0.9), use of the IHS-SFIM hybrid approach led to a relative reduction in RMSE values for the B, G, R, and NIR spectral bands of 19.6% (from 115.8 to 93.0), 37.1% (from 115.8 to 72.8), 42.0% (from 115.8 to 67.2) and 4.7% (from 115.8 to 110.4), respectively. BIAS for each of the four spectral bands was decreased by 86.6%, 89.1%, 89.3%, and 86.7%, respectively. For the most spatially-accurate segmentation of buildings in the residential area, RMSE was reduced by 20.1%, 33.9%, 37.3%, and 13.3%, and BIAS decreased by 63.9%, 69.5%, 70.0%, and 58.6%. Finally, for the most spatially-accurate segmentation of insect-damaged trees in the forested study area, RMSE was reduced by 61.1%, 74.0%, 87.3%, and 18.3%, and BIAS decreased by 88.1%, 92.1%, 95.7%, and 79.6%. These results indicate that, for both study areas and all three land cover types of interest, the proposed IHS-SFIM approach was able to achieve the same spatial accuracy as the most spatially-accurate segmentations (i.e., the IHS segmentations in Table 1), while significantly increasing the spectral accuracy of these segmentations (using SFIM spectral information).
For practical applications like image classification, the spectral and spatial information of image segments in one or more of these final segmentations could be incorporated for analysis. For example, to detect insect-damaged trees in the forested study area, the spectral (mean values for each band) and spatial attributes (e.g., texture, size, shape, etc.) of image segments in the most accurate segmentation could be used as classification variables. On the other hand, a multi-scale classification approach similar to [23,24] may be preferable for mapping trees and buildings in the residential area since trees and buildings are much different in terms of size and shape.
Table 3. RMSE and BIAS of each spectral band using the proposed IHS-SFIM combination approach. The values are similar to those of SFIM in Table 2, while the segmentations have D Metric values equal to those of IHS in Table 1.
Land Cover of Interest (Study Area) | Pansharpening Method | B RMSE | G RMSE | R RMSE | NIR RMSE | B BIAS | G BIAS | R BIAS | NIR BIAS
Trees (Residential Area) | IHS-SFIM | 93.0 | 72.8 | 67.2 | 110.4 | 10.7 | 8.7 | 8.5 | 10.6
Buildings (Residential Area) | IHS-SFIM | 85.3 | 71.3 | 67.7 | 93.5 | 25.5 | 21.5 | 21.2 | 29.3
Damaged Oak Trees (Forested Area) | IHS-SFIM | 29.5 | 19.7 | 9.6 | 61.9 | −8.4 | −5.6 | −3.0 | −14.4
The proposed hybrid approach was used in this study for combining the positive characteristics of two different pansharpening algorithms, but it may also be useful to apply this approach for pansharpening algorithms with adjustable parameters or filters (e.g., [6,7,8,9,10]) that control the tradeoff between spatial and spectral information preservation. For example, segment boundaries can be generated from images pansharpened using parameters/filters that maximize spatial information preservation, and the spectral information of these segments can be derived from images pansharpened using parameters/filters that minimize spectral distortion.

5. Conclusions

In this study, we compared the effects of IHS, BT, and SFIM pansharpening on the spatial and spectral accuracy of image segmentation and proposed a new IHS-SFIM hybrid approach based on the results of these comparisons. IHS and BT tend to preserve the spatial information of the PAN band while distorting the spectral information of the MS bands, while SFIM preserves accurate spectral information from the MS bands while losing some information from the PAN band. We found that IHS and BT pansharpening led to more spatially-accurate segmentation (with IHS producing the most spatially-accurate segmentations), indicating that spatial information preservation was most useful for segmentation purposes. On the other hand, SFIM pansharpening led to segments with more accurate spectral information, which may be more important for applications such as image classification, change detection, or the extraction of vegetation biophysical parameters.
To combine the high spatial accuracy of IHS segments with the high spectral accuracy of SFIM imagery, we proposed a hybrid approach developed specifically for OBIA that involves (i) overlaying the segment boundaries from the IHS image segmentation onto the SFIM image and (ii) deriving the spectral values for image segments (mean DN for each spectral band) from the SFIM imagery. Based on our segment-based calculations of spatial and spectral accuracy, the proposed approach led to higher spatial and spectral accuracy of image segments than the use of a single pansharpening algorithm alone. Since a hybrid approach including two pansharpening methods produced the best results in this study, we recommend that users planning to process images with PAN and MS bands using OBIA pansharpen the imagery themselves (rather than purchase pansharpened imagery directly from the image vendor) so that they can incorporate multiple pansharpening methods for their analysis. It should be noted that we only tested D for measuring the spatial accuracy of image segments, and RMSE and BIAS for measuring the spectral accuracy of image segments. In future studies it may be beneficial to include additional spatial and spectral accuracy measures to see if our findings remain consistent. Finally, since image classification often follows image segmentation in OBIA, future studies are needed to quantitatively assess the impact that different pansharpening algorithms and our proposed hybrid approach have on classification accuracy.

Acknowledgments

This research was supported by the Japan Society for the Promotion of Science (JSPS) Postdoctoral Fellowship for Foreign Researchers. In addition to JSPS, we would like to thank the three anonymous reviewers for their helpful comments.

References and Notes

  1. Schowengerdt, R. Remote Sensing: Models and Methods for Image Processing, 3rd ed; Academic Press: Orlando, FL, USA, 2006; pp. 371–378. [Google Scholar]
  2. Amro, I.; Mateos, J.; Vega, M.; Molina, R.; Katsaggelos, A. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP J. Adv. Sig. Pr. 2011, 79, 1–22. [Google Scholar]
  3. Haydn, R.; Dalke, G.; Henkel, J. Application of the IHS color transform to the processing of multisensory data and image enhancement. In Proceedings of the International Symposium on Remote Sensing of Environment, First Thematic Conference: “Remote Sensing of Arid and Semi-Arid Lands”, Buenos Aires, Argentina, 19–25 January 1982; Volume 1, pp. 599–616.
  4. Gillespie, A.; Kahle, A.; Walker, R. Color enhancement of highly correlated images. II. Channel ratio and “Chromaticity” transformation techniques. Remote Sens. Environ. 1987, 22, 343–365. [Google Scholar]
  5. Tu, T.; Su, S.; Shyu, H.; Huang, P. A new look at IHS-like image fusion methods. Inform. Fusion 2001, 2, 177–186. [Google Scholar] [CrossRef]
  6. Tu, T.; Hsu, C.; Tu, P.; Lee, C. An adjustable pan-sharpening approach for IKONOS/QuickBird/GeoEye-1/WorldView-2 imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 125–134. [Google Scholar] [CrossRef]
  7. Liu, J. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472. [Google Scholar] [CrossRef]
  8. Tu, T.; Lee, Y.; Chang, C.; Huang, P. Adjustable intensity-hue-saturation and brovey transform fusion technique for IKONOS/QuickBird imagery. Opt. Eng. 2005, 44, 116201. [Google Scholar] [CrossRef]
  9. Fasbender, D.; Radoux, J.; Bogaert, P. Bayesian data fusion for adaptable image pansharpening. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1847–1857. [Google Scholar] [CrossRef]
  10. Padwick, C.; Deskevich, M.; Pacifici, F.; Smallwood, S. Worldview-2 Pansharpening. In Proceedings of ASPRS Annual Conference, San Diego, CA, USA, 26–30 April 2010.
  11. Benz, U.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  12. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sensing 2006, 72, 799–811. [Google Scholar]
  13. Myint, S.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar]
  14. Blaschke, T.; Johansen, K.; Tiede, D. Object-Based Image Analysis for Vegetation Mapping and Monitoring. In Advances in Environmental Remote Sensing: Sensor, Algorithms, and Applications, 1st; Weng, Q., Ed.; CRC Press: Boca Raton, FL, USA, 2011; pp. 241–271. [Google Scholar]
  15. Zhang, Y. Evaluation and comparison of different segmentation algorithms. Pattern Recogn. Lett. 1997, 18, 963–974. [Google Scholar] [CrossRef]
  16. Clinton, N.; Holt, A.; Scarborough, J.; Yan, L.; Gong, P. Accuracy assessment measure for object-based image segmentation goodness. Photogramm. Eng. Remote Sensing 2010, 76, 289–299. [Google Scholar]
  17. Neubert, M.; Herold, H.; Meinel, G. Assessing Image Segmentation Quality—Concepts, Methods and Application. In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications, 1st; Blaschke, T., Lang, S., Hay, G., Eds.; Springer: Berlin, Germany, 2008; pp. 769–784. [Google Scholar]
  18. Tu, T.; Huang, P.; Hung, C.; Chang, C. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
  19. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of ms + pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  20. Kubono, T.; Ito, S. Raffaelea quercivora sp. nov. associated with mass mortality of Japanese oak, and the ambrosia beetle (Platypus quercivorus). Mycoscience 2002, 43, 255–260. [Google Scholar]
  21. Uto, K.; Takabayashi, Y.; Kosugi, Y. Hyperspectral Analysis of Japanese Oak Wilt to Determine Normalized Wilt Index. In Proceedings of 2008 IEEE International Geoscience & Remote Sensing Symposium, Boston, MA, USA, 6–11 July 2008; Volume 2, pp. 295–298.
  22. Vijayaraj, V.; Younan, N.; O’Hara, C. Quantitative analysis of pansharpened images. Opt. Eng. 2006, 45, 046202. [Google Scholar] [CrossRef]
  23. Johnson, B. High-resolution urban land-cover classification using a competitive multi-scale object-based approach. Remote Sens. Lett. 2013, 4, 131–140. [Google Scholar] [CrossRef]
  24. Bruzzone, L.; Carlin, L. A multilevel context-based system for classification of very high spatial resolution images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2587–2600. [Google Scholar] [CrossRef]
