Article

Utilizing Pansharpening Technique to Produce Sub-Pixel Resolution Thematic Map from Coarse Remote Sensing Image

Peng Wang, Liguo Wang, Yiquan Wu and Henry Leung
1 College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
3 Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(6), 884; https://doi.org/10.3390/rs10060884
Submission received: 20 April 2018 / Revised: 27 May 2018 / Accepted: 5 June 2018 / Published: 6 June 2018
(This article belongs to the Special Issue Remote Sensing Image Downscaling)

Abstract

Super-resolution mapping (SRM) is a technique for producing a sub-pixel resolution thematic map (SRTM). Soft-then-hard SRM (STHSRM) is an important SRM algorithm due to its simple physical meaning, but errors from the soft classification step may degrade the SRTM it produces. To overcome this problem, the maximum a posteriori probability (MAP) super-resolution then hard classification (MTC) algorithm has been proposed. However, the prior information of the original image is difficult to exploit fully in MTC. To address this issue, a novel method based on pansharpening then hard classification (PTC) is proposed to improve the SRTM. The pansharpening technique is applied to the original coarse image to obtain an improved-resolution image that supplies more prior information, and the SRTM is then derived from the improved-resolution image by hard classification. PTC not only inherits the advantage of MTC of avoiding soft classification errors, but also incorporates more prior information from the original image into the mapping process. Experiments on real remote sensing images show that the proposed method produces higher mapping accuracy than STHSRM and MTC, with a percentage correctly classified (PCC) ranging from 89.62% to 95.92% on the experimental data sets.


1. Introduction

The widespread existence of mixed pixels in coarse multispectral images (MSI) or hyperspectral images (HSI) hinders the accurate extraction of land-cover spatial distribution information [1]. Soft classification [2], including linear spectral mixture analysis [3], nonlinear unmixing [4], support vector machines [5,6], fuzzy c-means classifiers [7], k-nearest neighbor classifiers [8], and artificial neural networks [9], is effective in estimating the proportion of each class within mixed pixels, but it cannot provide any spatial distribution information for the land-cover classes inside those pixels. Super-resolution mapping (SRM) is a postprocessing technique operating on the results of soft classification [10]: it predicts the distribution of land-cover classes at a sub-pixel scale based on the soft classification output.
SRM is based on the spatial dependence theory [11,12,13]. Soft-then-hard SRM (STHSRM) [14] contains two steps: (1) sub-pixel sharpening and (2) class allocation [14]. In sub-pixel sharpening, the proportion of each class in each sub-pixel is estimated by upsampling the soft classification result. In class allocation, class labels are then assigned to the sub-pixels according to these proportions. Back-propagation neural networks [15,16], Hopfield neural networks [17,18], spatial attraction [19,20], kriging [21], indicator cokriging [22,23], and super-resolution algorithms [24,25,26,27,28,29] can be selected as the sub-pixel sharpening method in STHSRM. Common class allocation methods include linear optimization [30], highest-proportion-first arrangement [31], and spatial distribution patterns [32]. To overcome the soft classification errors in STHSRM, maximum a posteriori probability (MAP) super-resolution then hard classification (MTC) was proposed [33].
However, owing to uncertainty in the original coarse image, such as the diversity of the land-cover classes and the limited resolution of the satellite sensor, MTC has difficulty in gathering the full prior information of the original image. Here, the authors propose a novel method based on pansharpening then hard classification (PTC) to improve the sub-pixel resolution thematic map (SRTM). The pansharpening technique is applied to improve the resolution of the original coarse image by supplying more prior information, and the improved-resolution image is then utilized to derive the SRTM by hard classification. The proposed method has the following advantages. First, the pansharpening technique is applied to SRTM production, yielding a new pansharpening-then-classification workflow. Second, PTC inherits the advantage of MTC of avoiding soft classification errors. Third, the proposed method takes more prior information from the original image into account than MTC.
The remainder of this paper is organized as follows. SRM is introduced in Section 2. Section 3 introduces MTC. The proposed PTC is presented in Section 4. Section 5 presents the experimental results and analyses. The conclusions are given in Section 6.

2. Soft-then-Hard Super-Resolution Mapping

Figure 1 shows a simple example illustrating the spatial dependence theory [14,23,25]. Figure 1a shows the soft classification result for Class 1. There are 3 × 3 mixed pixels in Figure 1a, and the proportion of Class 1 is marked on each mixed pixel. The scale factor S indicates the scale ratio between a mixed pixel and its sub-pixels. When the soft classification results are upsampled with a scale factor of S = 2, a mixed pixel is divided into 2 × 2 sub-pixels; a proportion of 0.25 therefore means that 4 × 0.25 = 1 sub-pixel belongs to Class 1. Figure 1b,c describe two possible distributions of the sub-pixels. The principle of spatial dependence indicates that the former is the more plausible one.
Figure 2 shows the flowchart of STHSRM. The soft classification results for each class, taken as inputs, are first upsampled with a scale factor S by an appropriate sub-pixel sharpening method to produce a set of soft-classified images for all classes at fine spatial resolution, each containing the proportion of the corresponding class in every sub-pixel. Class labels are then allocated to the sub-pixels according to these proportions.
A constraint derived from the class fractions is defined as:

$N_k(P) = \mathrm{Round}\left( L_k(P)\, S^2 \right)$  (1)

where $N_k(P)$ is the number of sub-pixels of the kth class within mixed pixel $P$, $L_k(P)$ is the proportion of the kth class for mixed pixel $P$ in the soft classification result, and $\mathrm{Round}(\cdot)$ returns the integer nearest to $L_k(P)\, S^2$.
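As a minimal illustration of Equation (1) (a sketch, not the authors' code), the snippet below counts the sub-pixels assigned to each class within one mixed pixel; the proportions used are hypothetical example values.

```python
import numpy as np

def subpixels_per_class(proportions, S):
    """N_k(P) = Round(L_k(P) * S^2) for every class k of one mixed pixel."""
    return np.rint(np.asarray(proportions, dtype=float) * S**2).astype(int)

# Example: a mixed pixel with three class proportions and scale factor S = 2,
# i.e., 2 x 2 = 4 sub-pixels per mixed pixel.
print(subpixels_per_class([0.25, 0.50, 0.25], S=2))  # -> [1 2 1]
```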

3. MAP Super-Resolution then Hard Classification

Note that STHSRM can be considered a postprocessing operation on the results of soft classification. To alleviate the errors introduced by soft classification, MAP super-resolution then hard classification (MTC) was proposed [33].
The flowchart of MTC is shown in Figure 3. The original coarse image is the input, and the SRTM is the output. An improved-resolution image is derived by upsampling the original coarse image with the MAP super-resolution method, and the SRTM is then obtained by hard classifying this improved-resolution image. MTC differs from STHSRM in that soft classification is avoided.
The endmembers of interest (EOI) [33] are utilized to reduce the complexity of the MAP super-resolution method. The N-FINDR algorithm is used to extract the endmembers [34]. In MTC, N-FINDR is applied to the original image to derive the spectral signatures of the classes of interest (COI), i.e., the classes covering the larger numbers of pixels. These spectral signatures constitute the EOI. The operator mapping low-dimensional data back to the original high-dimensional space, $\Phi$, is the matrix whose columns are the EOI vectors, and the operator mapping the original high-dimensional data to the low-dimensional space is $\Phi_{inv} = (\Phi^{T} \Phi)^{-1} \Phi^{T}$. First, $\Phi_{inv}$ is applied to map the original high-dimensional image into a low-dimensional transformation space. The MAP super-resolution process is then applied in this space, and the super-resolution result is finally mapped back to the original dimensional space with $\Phi$. Reducing the dimensionality of the input data simplifies the MAP super-resolution process. Experiments show that, when full supervision information is available, MTC can obtain a more accurate SRTM than STHSRM [28,29,33].
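The following sketch illustrates the dimensionality-reduction step just described, assuming $\Phi$ is a (bands × number-of-EOI) matrix whose columns are the EOI spectral signatures; the random arrays are placeholders standing in for real imagery.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, n_eoi, n_pixels = 210, 7, 1000

Phi = rng.random((bands, n_eoi))        # EOI spectral signatures as columns
Y_high = rng.random((bands, n_pixels))  # original high-dimensional pixels

# Phi_inv = (Phi^T Phi)^(-1) Phi^T : high-dimensional -> low-dimensional operator
Phi_inv = np.linalg.solve(Phi.T @ Phi, Phi.T)
Y_low = Phi_inv @ Y_high                # work in the low-dimensional space

# ... MAP super-resolution (or pansharpening) would be applied to Y_low here ...

Y_back = Phi @ Y_low                    # map the result back to band space
print(Y_low.shape, Y_back.shape)        # (7, 1000) (210, 1000)
```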

4. The Proposed Method

Although MTC can alleviate the effect of soft classification errors on the SRTM, gathering the full prior information from the original image remains a challenge. To supply more prior information, the pansharpening technique is introduced here.

4.1. Pansharpening Technique

Pansharpening aims at fusing a coarse MSI or HSI with a panchromatic image, simultaneously acquired over the same area, so that the result has the spectral resolution of the former and the spatial resolution of the latter. This is a data fusion problem, since the goal is to combine the spatial details resolved by the panchromatic image and the several spectral bands of the MSI or HSI into a single product [33].
Due to its high fidelity in rendering spatial details and its robustness to misregistration errors and aliasing [35], the component substitution (CS) approach is widely used for pansharpening. CS relies on projecting the MSI or HSI into another space in order to separate the spatial structure from the spectral information in different components. The transformed MSI or HSI can then be enhanced by replacing the component containing the spatial structure with the panchromatic image. The larger the correlation between the panchromatic image and the replaced component, the less spectral distortion is introduced by the fusion. To achieve a good fusion, histogram matching of the panchromatic image to the selected component is performed before the substitution takes place, so that the histogram-matched panchromatic image exhibits the same mean and variance as the component it replaces. The CS-based fusion is completed by applying the inverse spectral transformation to obtain the fused image. Figure 4 shows the flowchart of the CS approach [35]. The general formulation of CS is given by:
$\hat{Y}_b = \tilde{Y}_b + g_b \left( P - \sum_{i=1}^{N} w_i\, \tilde{Y}_i \right)$  (2)

where $b$ ($b$ = 1, 2, …, $N$) indicates the $b$th spectral band, $Y$ is the original low-resolution MSI or HSI, $\hat{Y}$ is the pansharpened image, $\hat{Y}_b$ denotes the $b$th band of the pansharpened image, $\tilde{Y}_b$ represents the $b$th band of the MSI or HSI interpolated to the scale of the panchromatic image, $\mathbf{g} = [g_1, g_2, \ldots, g_N]$ is the vector of injection gains, $P$ is the panchromatic image, and the weight vector $\mathbf{w} = [w_1, \ldots, w_i, \ldots, w_N]^T$ measures the spectral overlap between the spectral bands and the panchromatic image [36].
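A minimal sketch of the fusion rule in Equation (2) is given below: each interpolated band is injected with the detail (the panchromatic image minus the weighted intensity), scaled by its gain. The array shapes, weights, and gains are illustrative assumptions rather than the values used by any particular CS method.

```python
import numpy as np

def cs_fusion(Y_tilde, P, w, g):
    """Equation (2): Y_hat_b = Y_tilde_b + g_b * (P - sum_i w_i * Y_tilde_i).

    Y_tilde: (N, rows, cols) MS/HS bands interpolated to the pan grid;
    P: (rows, cols) panchromatic image; w, g: length-N weights and gains."""
    Y_tilde = np.asarray(Y_tilde, dtype=float)
    w = np.asarray(w, dtype=float)
    g = np.asarray(g, dtype=float)
    intensity = np.tensordot(w, Y_tilde, axes=1)        # sum_i w_i * Y_tilde_i
    return Y_tilde + g[:, None, None] * (P - intensity)
```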
Many pansharpening approaches belong to the CS family, such as principal component analysis (PCA) [36], Gram–Schmidt [37], and intensity-hue-saturation (IHS) [38]. Due to its fast and easy implementation, PCA is employed here. PCA is achieved through a rotation of the original data (i.e., a linear transformation) that yields a set of scalar images called principal components (PCs). The hypothesis underlying its application to pansharpening is that the spatial information (shared by all the channels) is concentrated in the first PC, while the spectral information (specific to each single band) is accounted for by the other PCs. The whole pansharpening process can be described by the general formulation in Equation (2), where the coefficient vectors w and g are derived from the PCA procedure applied to the MSI or HSI.
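The sketch below is one possible reading of the PCA-based component substitution described above (not necessarily the authors' implementation): the data are rotated onto their principal components, the first PC is replaced by the histogram-matched panchromatic image, and the rotation is inverted.

```python
import numpy as np

def pca_pansharpen(Y_tilde, P):
    """Y_tilde: (N, rows, cols) interpolated MS/HS image; P: (rows, cols) pan."""
    N, r, c = Y_tilde.shape
    X = Y_tilde.reshape(N, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Rotation onto the principal components (eigenvectors of the covariance).
    cov = Xc @ Xc.T / Xc.shape[1]
    _, V = np.linalg.eigh(cov)
    V = V[:, ::-1]                                  # PCs by decreasing variance
    pcs = V.T @ Xc                                  # principal-component images
    # Histogram-match the pan image to the first PC (same mean and variance).
    p = np.asarray(P, dtype=float).reshape(-1)
    pcs[0] = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
    # Invert the rotation to obtain the fused image.
    return (V @ pcs + mean).reshape(N, r, c)
```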

4.2. Pansharpening then Hard Classification

To obtain more prior information, the pansharpening technique is utilized to obtain the SRTM. First, the original coarse HSI $Y$ is enhanced by PCA pansharpening, and the improved-resolution image $\hat{Y}$ is derived by Equation (2). To reduce the complexity of the PCA algorithm, the EOI approach developed in [33] is also utilized: N-FINDR is applied to the original image to derive the spectral signatures of the COI, which constitute the EOI. The operator $\Phi$, mapping low-dimensional data back to the original high-dimensional space, is the matrix of EOI column vectors, and the operator mapping high-dimensional data to the low-dimensional space is $\Phi_{inv}$. First, $\Phi_{inv}$ is applied to map the original high-dimensional image into a low-dimensional transformation space; the PCA pansharpening process described above is then applied; finally, $\Phi$ is applied to map the pansharpened result $\hat{Y}$ back into the original dimensional space. Reducing the dimensionality of the input data simplifies the PCA process. The SRTM is then derived by directly hard classifying the image $\hat{Y}$.
PTC is shown as a flowchart in Figure 5. The implementation of PTC includes three steps (a code-level sketch follows the step list):
Step (1)
Utilizing the endmembers of interest (EOI), map the original high-dimensional MSI or HSI into a low-dimensional transformation space.
Step (2)
The original coarse MSI or HSI in the low-dimensional transformation space and a panchromatic image are fused with the PCA pansharpening technique (see Equation (2)) to generate an improved-resolution image.
Step (3)
SRTM is produced by classifying the improved resolution image.
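The following end-to-end sketch ties the three steps together under stated assumptions: Phi holds the EOI signatures as columns, pansharpen stands for any Equation (2)-style fusion (for instance the PCA sketch above), and classify stands in for the SVM hard classifier; none of these are the authors' actual code.

```python
import numpy as np

def ptc(Y_interp, P, Phi, pansharpen, classify):
    """Y_interp: (bands, rows, cols) coarse MS/HS image interpolated to the pan
    grid; P: (rows, cols) panchromatic image; Phi: (bands, n_eoi) EOI columns."""
    bands, r, c = Y_interp.shape
    # Step 1: map the high-dimensional image into the low-dimensional EOI space.
    Phi_inv = np.linalg.solve(Phi.T @ Phi, Phi.T)
    Y_low = (Phi_inv @ Y_interp.reshape(bands, -1)).reshape(-1, r, c)
    # Step 2: fuse with the panchromatic image (Equation (2)).
    Y_sharp = pansharpen(Y_low, P)
    # Map the fused result back to the original band space with Phi.
    Y_back = (Phi @ Y_sharp.reshape(Y_sharp.shape[0], -1)).reshape(bands, r, c)
    # Step 3: hard-classify the improved-resolution image to obtain the SRTM.
    return classify(Y_back)
```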
As can be seen by comparing Figure 3 and Figure 5, PTC utilizes the panchromatic image and therefore supplies more prior information about the scene to the original coarse image than MTC, so a better SRTM can be derived.

5. Experimental Analysis

To validate the performance of the proposed method, HSI from two public data sets are used. The linear optimization technique (LOT) [30] is employed for class allocation in STHSRM. SVM is employed as the hard classifier in both MTC and PTC [39] and as the soft classifier in STHSRM [5]. Since endmember extraction is involved in the SVM soft classification of STHSRM as well as in MTC and PTC, the N-FINDR algorithm is selected as the endmember extraction method in all three methods, so that this source of uncertainty is the same and the comparison is fair. In all experiments, 10% of the samples per class are used to train the SVM and the remaining samples per class are used for testing. For quantitative assessment, the original fine remote sensing image is downsampled by the scale factor S to produce the simulated coarse image, and the mapping results are produced at the size of the original fine image. Since the land-cover classes at the sub-pixel level are known in this simulated setting, a direct evaluation of the techniques is possible, and the results are assessed by comparison with the reference image. To avoid the effect of errors in the acquisition of a real panchromatic image on the SRTM, only the effect of pansharpening itself is considered: the spectral response of the IKONOS satellite is applied to the original remote sensing image to create an appropriate synthetic panchromatic image, following [40,41]. This satellite captures a panchromatic image (0.45–0.90 μm) and four MSI bands (0.45–0.52 μm, 0.52–0.60 μm, 0.63–0.69 μm, and 0.76–0.90 μm) [41].
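As a loose illustration of how such a synthetic panchromatic image could be simulated (a hedged sketch of one plausible recipe, not necessarily the procedure of [40,41]), the bands of the fine image whose centre wavelengths fall inside the IKONOS panchromatic range could simply be averaged:

```python
import numpy as np

def synthetic_pan(Y_fine, band_centres_um, lo=0.45, hi=0.90):
    """Y_fine: (bands, rows, cols) fine image; band_centres_um: per-band centre
    wavelengths in micrometres (hypothetical inputs for illustration)."""
    centres = np.asarray(band_centres_um, dtype=float)
    mask = (centres >= lo) & (centres <= hi)
    return Y_fine[mask].mean(axis=0)   # unweighted average over the pan range
```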
Five methods are tested and compared: bilinear interpolation (BI) [28], bicubic interpolation (BIC) [29], MAP as the sub-pixel sharpening method (MAP) [24], MTC [33], and the proposed PTC. Two experiments are designed with realistic simulated coarse images, each derived by downsampling the original image with a scale factor of S. Traditional classification accuracy assessment, using the percentage correctly classified (PCC), the average accuracy of the classification result (AA), and the Kappa coefficient (Kappa), is utilized to assess the SRTM. All experiments are run on a Pentium(R) dual-core processor (2.20 GHz) with MATLAB R2010.
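For reference, the three indices can be computed from a confusion matrix as in the following sketch, where C[i, j] counts the pixels of reference class i assigned to class j (the formulas are standard; the matrix itself is a placeholder).

```python
import numpy as np

def accuracy_indices(C):
    """Return (PCC, AA, Kappa) from a square confusion matrix C."""
    C = np.asarray(C, dtype=float)
    n = C.sum()
    pcc = np.trace(C) / n                          # percentage correctly classified
    aa = np.mean(np.diag(C) / C.sum(axis=1))       # mean per-class accuracy
    pe = (C.sum(axis=0) @ C.sum(axis=1)) / n**2    # chance agreement
    kappa = (pcc - pe) / (1.0 - pe)
    return pcc, aa, kappa
```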

5.1. Experiment 1

The Washington, DC data set is a hyperspectral remote sensing image with an associated ground truth map, acquired by the airborne HYDICE sensor over an urban site, the Mall in Washington, DC. It contains 1400 × 512 pixels and 210 spectral bands. A 240 × 240 pixel region, which includes shadow, water, road, tree, grass, roof, and trail [42], is tested. The original image is shown in Figure 6a. The simulated coarse image, shown in Figure 6b, is derived by downsampling the original image with a scale factor S of 2, so that each mixed pixel contains 2 × 2 sub-pixels. The synthetic panchromatic image, derived from the spectral response of the IKONOS satellite, is shown in Figure 6c.
Figure 6b shows that spatial distribution information is difficult to obtain from the coarse image. Although the soft classification results for the seven land-cover classes obtained by SVM soft classification, shown in Figure 7, estimate the proportion of each class, the spatial distribution information is still difficult to obtain from them. Thus, the SRTM, which provides the distribution information at a finer spatial resolution, is necessary.
The results of MAP in MTC and of pansharpening in PTC are shown in Figure 8a,b. The pansharpening result is closer to the original image in Figure 6a than the MAP result. The super-resolution reconstruction relative error is used to evaluate the two results; for each class, it is defined as the sum of the absolute reconstruction errors of the pixels of that class in Figure 8a,b divided by the sum of the pixel values of that class in Figure 6a. The pixels of each class are identified with the aid of the reference image in Figure 9a. Table 1 reports the reconstruction errors for the different classes. As shown in Table 1, the reconstruction error of every class is lower for the pansharpening result than for the MAP result, because the pansharpening technique supplies more prior information to the coarse image.
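Under our reading of this definition (a hedged sketch, since the exact per-pixel accounting is not spelled out), the per-class relative error could be computed as follows:

```python
import numpy as np

def class_relative_error(reconstructed, reference, labels, k):
    """reconstructed, reference: (bands, rows, cols) images; labels: (rows, cols)
    class map from the reference image; k: class label of interest."""
    mask = labels == k
    abs_err = np.abs(reconstructed[:, mask] - reference[:, mask]).sum()
    return abs_err / reference[:, mask].sum()
```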
Next, the SRTM is produced by STHSRM, MTC, and PTC, respectively. The SRTMs of the five methods are shown in Figure 9. A visual comparison suggests that the result of the proposed PTC in Figure 9f is the best. Owing to the many soft classification errors in STHSRM, there are obvious burrs along the road and grass boundaries, which appear rough in Figure 9b–e. Although MTC avoids soft classification and improves the SRTM, the prior information of the original image is still not fully utilized, causing misclassification; for example, some sub-pixels belonging to roof are wrongly classified as trail. With the aid of pansharpening, this phenomenon is alleviated in Figure 9f, which shows greater continuity and smoother boundaries for each class, and the result is closer to the reference map because more prior information is utilized.
In addition to the visual comparison, the performance of the five methods in Experiment 1 is quantitatively evaluated by the classification accuracy of each class, the AA, and the PCC, as listed in Table 2. For every class, the accuracy of PTC is superior to that of the other methods. Regarding the overall accuracy, within STHSRM, BI produces a PCC of 76.82%, BIC produces a PCC of 77.47%, and MAP produces a PCC of 78.06%. Because more prior information from the original image is utilized in PTC, the classification accuracy of roof is 88.43%, approximately 3.2% higher than that of MTC, and the classification accuracy of road is 90.31%, a gain of approximately 1.6% over MTC. Overall, PTC has the highest PCC, and its AA shows a better balance among the classes.
The performance of SRTM is affected by the scale factor S. The five methods are therefore tested for two further scale factors, 4 and 6. The PCC and Kappa of the five methods for all three scale factors are shown in Figure 10a,b. As S increases, the PCC and Kappa of all five methods decrease, because a higher S brings more uncertainty into the coarse image. Similar to the results in Table 2, PTC produces higher PCC and Kappa than the other methods.

5.2. Experiment 2

The Pavia data set was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) over the urban test area of Pavia, northern Italy. The whole data set contains 1400 × 512 pixels and 102 bands. A 400 × 400 pixel region, shown in Figure 11a, is selected as the test area [42]. The simulated coarse image shown in Figure 11b is derived by degrading Figure 11a with S = 2. The synthetic panchromatic image is shown in Figure 11c. The soft classification results for the six land-cover classes derived from SVM soft classification are shown in Figure 12. Figure 13a,b show that the pansharpening result in PTC is visually more consistent with the original image in Figure 11a than the MAP result in MTC.
Figure 14 gives the SRTM of STHSRM, MTC, and PTC. Compared with the reference image shown in Figure 14a, some disconnected and cone-shaped patches appear in the STHSRM results in Figure 14b–d owing to the influence of soft classification errors. Figure 14e,f demonstrate that this phenomenon is alleviated: the roof appears more continuous, and the road is smoother. Moreover, the result of the proposed PTC in Figure 14f is visually the most consistent with the reference distribution of land cover.
Table 3 shows the classification accuracy of each class, the AA, and the PCC for the five methods. As in Experiment 1, the classification accuracy of PTC is higher than that of STHSRM and MTC. For example, the classification accuracy of tree in PTC is 97.04%, about 2.4% greater than in MTC, and that of road is 95.31%, a gain of about 2% over MTC. According to the AA, PTC also has the better performance. Figure 15a,b show the PCC and Kappa of the five methods for three scale factors, 2, 5, and 8. Similar to Experiment 1, the PCC and Kappa from PTC are higher than those from STHSRM and MTC.

6. Conclusions

Utilizing a pansharpening technique to produce a sub-pixel resolution thematic map from a coarse remote sensing image (PTC) is proposed in this paper. In the proposed PTC, the original coarse image and a panchromatic image are fused with the help of the pansharpening technique, and the improved-resolution image is utilized to produce the SRTM. The proposed PTC avoids the influence of soft classification errors on the SRTM and takes more prior information from the original image into account. Experiments are conducted to compare the proposed method with STHSRM and MTC, and the results indicate that it produces higher mapping accuracy. For the Pavia data set (S = 2), the classification accuracy of tree in PTC is 97.04%, around 2.4% higher than in MTC; the classification accuracy of road in PTC is 95.31%, a gain of about 2% over MTC; and the overall accuracy (PCC) of PTC is 95.92%, around 1.5% higher than that of MTC. For the Washington, DC data set, the PCC and Kappa from PTC are higher than those from STHSRM and MTC at all tested scale factors.
The performance of PTC depends on the classification algorithm. When the supervision information is not rich, the classifier may produce many errors, and PTC cannot always be better than STHSRM and MTC. Seeking a more effective classification algorithm to reduce such errors is therefore worthwhile. For ease of reading, all acronyms used in this paper are listed in the Abbreviations section.

Author Contributions

P.W. conceived and designed the experiments; L.W. performed the experiments; Y.W. analyzed the data; L.W. contributed reagents/materials/analysis tools; P.W. wrote the paper; H.L. proofread the paper.

Acknowledgments

The work described in this paper is substantially supported by the Fundamental Research Funds for the Central Universities and the National Natural Science Foundation of China (Project Nos. 61675051, 61573183). The authors would like to thank Qunming Wang of Tongji University for providing the relevant data set, and the handling editors and the reviewers for providing valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

Acronym	Definition
MSI	Multispectral image
HSI	Hyperspectral image
SRM	Super-resolution mapping
STHSRM	Soft-then-hard super-resolution mapping
SRTM	Sub-pixel resolution thematic map
MAP	Maximum a posteriori probability
MTC	MAP super-resolution then hard classification
PTC	Pansharpening then hard classification
CS	Component substitution
PCA	Principal component analysis
LOT	Linear optimization technique
EOI	Endmembers of interest
COI	Classes of interest

References

  1. Mura, M.D.; Prasad, S.; Pacifici, F.; Gamba, P.; Chanussot, J.; Benediktsson, J.A. Challenges and opportunities of multimodality and data fusion in remote sensing. Proc. IEEE 2015, 103, 1585–1601. [Google Scholar] [CrossRef]
  2. Villa, A.; Chanussot, J.; Benediktsson, J.A.; Jutten, C. Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution. IEEE J. Sel. Top. Signal Process. 2011, 5, 521–535. [Google Scholar] [CrossRef]
  3. Wang, L.; Liu, D.; Wang, Q. Geometric method of fully constrained least squares linear spectral mixture analysis. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3558–3566. [Google Scholar] [CrossRef]
  4. Halimi, A.; Altmann, Y.; Dobigeon, N.; Tourneret, J.Y. Nonlinear unmixing of hyperspectral images using a generalized bilinear model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4153–4162. [Google Scholar] [CrossRef] [Green Version]
  5. Wang, L.; Jia, X. Integration of soft and hard classification using extended support vector machine. IEEE Geosci. Remote Sens. Lett. 2009, 6, 543–547. [Google Scholar] [CrossRef]
  6. Wang, L.; Liu, D.; Wang, Q. Spectral unmixing model based on least squares support vector machine with unmixing residue constraints. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1592–1596. [Google Scholar] [CrossRef]
  7. Bastin, L. Comparison of fuzzy c-means classification, linear mixture modeling and MLC probabilities as tools for unmixing coarse pixels. Int. J. Remote Sens. 1997, 18, 3629–3648. [Google Scholar] [CrossRef]
  8. Schowengerdt, R.A. On the estimation of spatial-spectral mixing with classifier likelihood functions. Pattern Recognit. Lett. 1996, 17, 1379–1387. [Google Scholar] [CrossRef]
  9. Carpenter, G.M.; Gopal, S.; Macomber, S.; Martens, S.; Woodcock, C.E. A neural network method for mixture estimation for vegetation mapping. Remote Sens. Environ. 1999, 70, 138–152. [Google Scholar] [CrossRef]
  10. Atkinson, P.M. Mapping sub-pixel boundaries from remotely sensed images. In Innovations in GIS; Taylor & Francis: New York, NY, USA, 1997; pp. 166–180. [Google Scholar]
  11. Atkinson, P.M. Sub-pixel target mapping from soft-classified remotely sensed imagery. Photogramm. Eng. Remote Sens. 2005, 71, 839–846. [Google Scholar] [CrossRef]
  12. Niroumand-Jadidi, M.; Vitti, A. Reconstruction of river boundaries at sub-pixel resolution: Estimation and spatial allocation of water fractions. ISPRS Int. J. Geo-Inf. 2017, 6, 383. [Google Scholar] [CrossRef]
  13. Wetherley, E.B.; Roberts, D.A.; McFadden, J. Mapping spectrally similar urban materials at sub-pixel scales. Remote Sens. Environ. 2017, 195, 170–183. [Google Scholar] [CrossRef]
  14. Wang, Q.; Shi, W.; Wang, L. Allocating classes for soft-then-hard sub-pixel mapping algorithms in units of class. IEEE Trans. Geosci. Remote Sens. 2014, 5, 2940–2959. [Google Scholar] [CrossRef]
  15. Nigussie, D.; Zurita-Milla, R.; Clevers, J.G.P.W. Possibilities and limitations of artificial neural networks for subpixel mapping of land cover. Int. J. Remote Sens. 2011, 32, 7203–7226. [Google Scholar] [CrossRef]
  16. Shao, Y.; Lunetta, R.S. Sub-pixel mapping of tree canopy, impervious surfaces, and cropland in the Laurentian great lakes basin using MODIS time-series data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 336–347. [Google Scholar] [CrossRef]
  17. Tatem, A.J.; Lewis, H.G.; Atkinson, P.M.; Nixon, M.S. Super-resolution target identification from remotely sensed images using a Hopfield neural network. IEEE Trans. Geosci. Remote Sens. 2001, 39, 781–796. [Google Scholar] [CrossRef]
  18. Muad, A.M.; Foody, G.M. Impact of land cover patch size on the accuracy of patch area representation in HNN-based super resolution mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 1418–1427. [Google Scholar] [CrossRef]
  19. Mertens, K.C.; Basets, B.D.; Verbeke, L.P.C.; De Wulf, R. A sub-pixel mapping algorithm based on sub-pixel/pixel spatial attraction models. Int. J. Remote Sens. 2006, 27, 3293–3310. [Google Scholar] [CrossRef]
  20. Wang, P.; Wang, L. Soft-then-hard super-resolution mapping based on a spatial attraction model with multiscale sub-pixel shifted images. Int. J. Remote Sens. 2017, 38, 4303–4326. [Google Scholar] [CrossRef]
  21. Verhoeye, J.; De Wulf, R. Land-cover mapping at sub-pixel scales using linear optimization techniques. Remote Sens. Environ. 2002, 79, 96–104. [Google Scholar] [CrossRef]
  22. Jin, H.; Mountrakis, G.; Li, P. A super-resolution mapping method using local indicator variograms. Int. J. Remote Sens. 2012, 33, 7747–7773. [Google Scholar] [CrossRef]
  23. Wang, Q.; Atkinson, P.M.; Shi, W. Indicator cokriging-based subpixel mapping without prior spatial structure information. IEEE Trans. Geosci. Remote Sens. 2015, 53, 309–323. [Google Scholar] [CrossRef]
  24. Zhong, Y.; Wu, Y.; Xu, X.; Zhang, L. An adaptive subpixel mapping method based on MAP model and class determination strategy for hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1411–1426. [Google Scholar] [CrossRef]
  25. Wang, Q.; Shi, W.; Atkinson, P.M. Sub-pixel mapping of remote sensing images based on radial basis function interpolation. ISPRS J. Photogramm. 2014, 92, 1–15. [Google Scholar] [CrossRef]
  26. Ling, F.; Du, Y.; Li, X.; Li, W.; Xiao, F.; Zhang, Y. Interpolation-based super-resolution land cover mapping. Remote Sens. Lett. 2013, 4, 629–638. [Google Scholar] [CrossRef]
  27. Wang, Q.; Shi, W. Utilizing multiple subpixel shifted images in subpixel mapping with image interpolation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 798–802. [Google Scholar] [CrossRef]
  28. Wang, P.; Wang, L.; Chanussot, J. Soft-then-hard subpixel land cover mapping based on spatial-spectral interpolation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1851–1854. [Google Scholar] [CrossRef]
  29. Wang, P.; Wang, L.; Mura, M.D.; Chanussot, J. Using multiple subpixel shifted images with spatial-spectral information in soft-then-hard subpixel mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 13, 1851–1854. [Google Scholar] [CrossRef]
  30. Jia, S.; Qian, Y. Spectral and spatial complexity-based hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3867–3879. [Google Scholar]
  31. Chen, Y.; Ge, Y.; Heuvelink, G.B.M.; Hu, J.; Jiang, Y. Hybrid constraints of pure and mixed pixels for soft-then-hard super-resolution mapping with multiple shifted images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2040–2052. [Google Scholar] [CrossRef]
  32. Ge, Y.; Chen, Y.; Stein, A.; Li, S.; Hu, J. Enhanced sub-pixel mapping with spatial distribution patterns of geographical objects. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2356–2370. [Google Scholar] [CrossRef]
  33. Wang, L.; Wang, P.; Zhao, C. Producing subpixel resolution thematic map from coarse imagery: MAP algorithm-based super-resolution recovery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2290–2304. [Google Scholar] [CrossRef]
  34. Chang, C.; Wu, C.; Tsai, C. Random N-Finder (N-FINDR) endmember extraction algorithms for hyperspectral imagery. IEEE Trans. Image Process. 2011, 20, 641–656. [Google Scholar]
  35. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2585. [Google Scholar] [CrossRef]
  36. Loncan, L.; de Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M.; et al. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46. [Google Scholar] [CrossRef]
  37. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312. [Google Scholar] [CrossRef] [Green Version]
  38. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS+Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  39. Wang, L.; Hao, S.; Wang, Y.; Lin, Y.; Wang, Q. Spatial-spectral information-based semi-supervised classification algorithm for hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3577–3585. [Google Scholar] [CrossRef]
  40. Chavez, P.S., Jr.; Sides, S.C.; Anderson, J.A. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens. 1991, 57, 295–303. [Google Scholar]
  41. Tu, T.M.; Huang, P.S.; Hung, C.L.; Chang, C.P. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
  42. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction. IEEE Trans. Geosci. Remote Sens. 2013, 51, 242–256. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of spatial dependence: (a) the soft classification result for Class 1; (b) possible distribution 1; (c) possible distribution 2.
Figure 2. The flowchart of soft-then-hard super-resolution mapping (STHSRM). SRTM, sub-pixel resolution thematic map.
Figure 3. The flowchart of the MTC (MAP super-resolution then hard classification). MAP, maximum a posteriori probability.
Figure 4. The flowchart of the component substitution (CS) approach.
Figure 5. The flowchart of the pansharpening then hard classification (PTC) method.
Figure 6. Washington, DC data set (a) RGB composites of Data set 1 (bands 65, 52, and 36 for red, green, and blue, respectively); (b) Simulated coarse image ( S = 2 ); (c) Panchromatic image.
Figure 7. Proportion images of the seven classes obtained by spectral unmixing of Data set 1. From left to right: shadow, water, road, tree, grass, roof, and trail.
Figure 8. (a) MAP result in MTC; (b) Pansharpening result in PTC.
Figure 9. SRTM in experiment 1 ( S = 2 ). (a) Reference image; (b) BI result; (c) BIC result; (d) MAP result; (e) MTC result; (f) PTC result.
Figure 10. (a) PCC (%) of the five methods in relation to scale factor S ; (b) Kappa of the five methods in relation to scale factor S .
Figure 11. Pavia data set (a) RGB composites of Data set 2 (bands 102, 56, and 31 for red, green, and blue, respectively); (b) Simulated coarse image ( S = 2 ); (c) Panchromatic image.
Figure 12. Proportion images of the six classes obtained by spectral unmixing of Data set 2. From left to right: shadow, water, road, tree, grass, and roof.
Figure 13. (a) MAP result in MTC; (b) Pansharpening result in PTC.
Figure 14. SRTM in experiment 2 ( S = 2 ). (a) Reference image; (b) BI result; (c) BIC result; (d) MAP result; (e) MTC result; (f) PTC result.
Figure 15. (a) PCC (%) of the five methods in relation to scale factor S ; (b) Kappa of the five methods in relation to scale factor S .
Table 1. Super-resolution reconstruction errors for different classes.
Class	MAP Result	Pansharpening Result
Class 1	3.27%	2.66%
Class 2	4.18%	3.50%
Class 3	2.58%	1.81%
Class 4	2.96%	2.14%
Class 5	2.31%	1.47%
Class 6	1.27%	0.82%
Class 7	1.46%	0.51%
Table 2. Accuracy (%) of five methods in Experiment 1 ( S = 2 ).
Class	BI	BIC	MAP	MTC	PTC
Shadow	73.44	75.03	77.50	78.77	80.13
Water	85.56	88.97	90.49	95.15	95.54
Road	70.55	72.74	75.73	88.75	90.31
Tree	72.45	75.45	77.36	97.47	98.04
Grass	74.70	78.60	82.19	88.86	89.51
Roof	70.67	72.98	75.09	85.23	88.43
Trail	73.88	75.58	77.98	87.16	90.35
AA	74.46	77.05	79.48	88.77	90.33
PCC	76.82	77.47	78.06	88.51	89.62
Table 3. Accuracy (%) of five methods in Experiment 2 (S = 2).
Class	BI	BIC	MAP	MTC	PTC
Shadow	77.59	82.36	82.94	84.28	86.13
Water	95.84	96.29	95.77	97.81	98.54
Road	71.69	74.51	73.23	93.36	95.31
Tree	74.28	75.63	77.36	94.68	97.04
Grass	69.23	71.40	71.71	90.34	92.51
Roof	79.75	82.07	80.81	97.69	98.43
AA	78.06	80.37	80.38	93.02	94.66
PCC	80.45	82.40	82.64	94.48	95.92
