Article

Improving Remote Sensing Image Super-Resolution Mapping Based on the Spatial Attraction Model by Utilizing the Pansharpening Technique

Peng Wang, Gong Zhang, Siyuan Hao and Liguo Wang

1 Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
3 College of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China
4 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(3), 247; https://doi.org/10.3390/rs11030247
Submission received: 24 December 2018 / Revised: 19 January 2019 / Accepted: 23 January 2019 / Published: 26 January 2019

Abstract:
The spatial distribution information of remote sensing images can be derived by the super-resolution mapping (SRM) technique. Super-resolution mapping based on the spatial attraction model (SRMSAM) has been an important SRM method, due to its simplicity and explicit physical meaning. However, the resolution of the original remote sensing image is coarse, and the existing SRMSAM cannot take full advantage of the spatial–spectral information of the original image. To utilize more spatial–spectral information, improving remote sensing image super-resolution mapping based on the spatial attraction model by utilizing the pansharpening technique (SRMSAM-PAN) is proposed. In SRMSAM-PAN, a novel processing path, named the pansharpening path, is added to the existing SRMSAM. The original coarse remote sensing image is first fused with a high-resolution panchromatic image of the same area by the pansharpening technique in the novel path, and the improved image is unmixed to obtain novel fine-fraction images. The novel fine-fraction images from the pansharpening path and the existing fine-fraction images from the existing path are then integrated to produce finer-fraction images with more spatial–spectral information. Finally, the values predicted from the finer-fraction images are utilized to allocate class labels to all subpixels, to achieve the final mapping result. Experimental results show that the proposed SRMSAM-PAN obtains a higher mapping accuracy than the existing SRMSAM methods.


1. Introduction

Due to the variety of land-cover classes and the limitations of sensors, mixed pixels exist widely in any original remote sensing image [1]. Although spectral unmixing [2] can handle mixed pixels by estimating the proportions of land-cover classes within them, it cannot provide any spatial distribution information for remote sensing images. To solve this issue, Atkinson proposed the super-resolution mapping (SRM) technique, which is also named subpixel mapping [3,4]. SRM divides each mixed pixel into subpixels, and transforms the coarse-fraction images into a hard classification image with a higher spatial resolution [5].
In recent years, SRM has developed rapidly. The Hopfield neural network [6,7], back-propagation neural network [8,9], object spatial dependence [10,11], indicator cokriging (ICK) [12,13], point spread function [14,15], and several super-resolution methods [16,17,18] have been successfully utilized in SRM. The above methods belong to the soft-then-hard super-resolution mapping (STHSRM) type, which contains two steps: (1) subpixel sharpening, and (2) class allocation [19]. When addressing a supervised classification problem, another type of algorithm, namely super-resolution then classification (STC) [20,21,22], can be utilized to obtain the spatial distribution of land-cover classes: a fine-resolution image is derived from the original coarse image by an appropriate super-resolution reconstruction method, and the mapping result is then derived directly from the fine-resolution image by classification techniques. However, when full supervision information is unavailable in the classification process, STC is not always superior to STHSRM, so the two types remain distinct. To optimize the mapping result, artificial intelligence algorithms such as particle swarm optimization [23,24], simulated annealing [25], and the genetic algorithm [26] have been utilized as optimization models. In addition, various kinds of auxiliary information, such as sub-pixel-shifted images [27,28,29], light detection and ranging data [30], fused images [31], panchromatic images [32], and shape information [33], have been used to improve SRM performance.
Due to its simplicity, explicit physical meaning, and freedom from prior structure information, super-resolution mapping based on the spatial attraction model (SRMSAM), which belongs to the STHSRM type, has been widely applied. SRMSAM variants differ mainly in how the spatial attraction is computed, as in the subpixel/pixel spatial attraction model (SPSAM) [34], the subpixel/subpixel spatial attraction model (MSPSAM) [23], and the more effective hybrid spatial attraction model (HSAM) [35,36]. However, these SRMSAM methods are applied to coarse-fraction images derived by unmixing the original coarse remote sensing image. Due to the coarse resolution of the original image, it is difficult for the coarse-fraction images to capture its full spatial–spectral information. To solve this issue, improving remote sensing image super-resolution mapping based on the spatial attraction model by utilizing the pansharpening technique (SRMSAM-PAN) is proposed. In SRMSAM-PAN, a novel processing path (the pansharpening path) is added to the existing processing path. The pansharpening technique is utilized to fuse the original coarse remote sensing image with a high-resolution panchromatic image of the same area, deriving the improved image [37], and the novel fine-fraction images are obtained by unmixing the improved image. The two kinds of fine-fraction images from the pansharpening path and the existing processing path are then integrated to produce finer-fraction images with more spatial–spectral information. Finally, the values predicted from the finer-fraction images are used to allocate class labels to each subpixel, to obtain the final mapping result. The experimental results show that the proposed SRMSAM-PAN produces a higher mapping accuracy than the state-of-the-art SRMSAM methods.

2. Theory of Spatial Correlation

The intention of SRM is to obtain the subpixel spatial distribution within mixed pixels by maximizing their spatial correlation [3]. A simple example explaining the theory of spatial correlation is shown in Figure 1. The original coarse remote sensing image contains two classes, Class A and Class B. The coarse-fraction image shown in Figure 1a has mixed pixels, and the proportion of Class A is marked on each mixed pixel. The zoom factor represents the zoom ratio between a mixed pixel and its subpixels. When the coarse-fraction image is upsampled with a zoom factor of 4, each mixed pixel is segmented into 16 subpixels; a value of 0.25 thus means that four subpixels in the central mixed pixel are attributed to Class A. Figure 1b,c describes two possible subpixel spatial distributions. Based on the theory of spatial correlation, the subpixel spatial correlation is greater when subpixels of the same class are clustered together. Therefore, Figure 1b is considered the better distribution.
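To make the idea concrete, the following sketch scores candidate distributions by counting 4-connected subpixel pairs that share a class. The two grids only loosely imitate Figure 1b,c, and the pair-counting score is an illustrative stand-in for the spatial attraction measures introduced later, not a formula from the paper.

```python
import numpy as np

def same_class_pairs(grid):
    """Count 4-connected subpixel pairs sharing a class: a crude
    spatial-correlation score for one candidate distribution."""
    g = np.asarray(grid)
    return int(np.sum(g[:, 1:] == g[:, :-1]) + np.sum(g[1:, :] == g[:-1, :]))

compact   = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
scattered = np.array([[1, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 1]])
print(same_class_pairs(compact), same_class_pairs(scattered))  # 20 16 -> compact wins
```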

3. SRMSAM

The flowchart of the SRMSAM method is shown in Figure 2. Suppose that $S$ is the zoom factor, so that each mixed pixel is segmented into $S \times S$ subpixels. Firstly, $M$ coarse-fraction images $C_m$ ($m = 1, 2, \ldots, M$, where $M$ is the number of land-cover classes) are obtained by unmixing the original coarse remote sensing image. $C_m(P_N)$ is defined as the proportion of the $m$th class in the mixed pixel $P_N$ ($N = 1, 2, \ldots, K$, where $K$ is the number of mixed pixels in the coarse-fraction image $C_m$). Secondly, the fine-fraction images $F_m$ are derived from the coarse-fraction images $C_m$ by the SRMSAM method. The fine-fraction images $F_m$ contain the predicted value $F_m(p_n)$ of the $m$th class for subpixel $p_n$ ($n = 1, 2, \ldots, KS^2$, where $KS^2$ is the total number of subpixels). The result must satisfy two constraints: (1) each subpixel is assigned to exactly one class, and (2) the number $L_m(P_N)$ of subpixels belonging to the $m$th class in the mixed pixel $P_N$ must satisfy Equation (1):
$$L_m(P_N) = \mathrm{Round}\left(C_m(P_N) \times S^2\right) \quad (1)$$
where $\mathrm{Round}(\cdot)$ is the function that returns the nearest integer to $C_m(P_N) \times S^2$.
Finally, the predicted values $F_m(p_n)$ are used to allocate class labels to all subpixels by the class allocation method.
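As a small illustration of the quota in Equation (1), the sketch below computes the per-class subpixel counts for one mixed pixel. The redistribution of any rounding residual is our own assumption, since the paper does not specify a rule for it.

```python
import numpy as np

def subpixel_counts(fractions, S):
    """Per-class subpixel quotas L_m(P_N) for one mixed pixel (Equation (1)).

    fractions : 1-D array of class proportions C_m(P_N) for this pixel.
    S         : zoom factor; the pixel holds S * S subpixels.
    """
    counts = np.round(fractions * S**2).astype(int)
    # Rounding may break the sum-to-S^2 constraint; pushing the residual onto
    # the largest class is an assumed tie-breaking rule, not from the paper.
    counts[np.argmax(fractions)] += S**2 - counts.sum()
    return counts

print(subpixel_counts(np.array([0.25, 0.75]), S=4))  # -> [ 4 12 ]
```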
In SRMSAM, the predicted value $F_m(p_n)$ can be computed by different spatial attraction models, such as the SPSAM, the MSPSAM, and the more effective HSAM. The principles of the three models are introduced below.
The SPSAM model considers the spatial correlation between the central subpixel $p_n$ and the neighboring pixels $P_J$ [34]. The predicted value $F_m^{\mathrm{spsam}}(p_n)$ of the SPSAM model can be written as:
$$F_m^{\mathrm{spsam}}(p_n) = \max \sum_{m=1}^{M} \sum_{n=1}^{KS^2} \sum_{J=1}^{8} o_{mn} \times w_n \times C_m(P_J) \quad (2)$$
$$o_{mn} = \begin{cases} 1, & \text{if subpixel } p_n \text{ belongs to class } m \\ 0, & \text{otherwise} \end{cases} \quad (3)$$
where $C_m(P_J)$ is the proportion of the $m$th class in the $J$th neighboring pixel $P_J$, and $J$ indexes the neighboring pixels. In this paper, the number of neighboring pixels is set to eight [34]. $w_n$ is the weight for the dependence between the central subpixel $p_n$ and the neighboring pixel $P_J$:
$$w_n = \exp\left(-d(p_n, P_J)^2 / \varepsilon_1\right) \quad (4)$$
where $d(p_n, P_J)$ is the Euclidean distance between the central subpixel $p_n$ and the coarse neighboring pixel $P_J$, as shown in Figure 3a, and $\varepsilon_1$ is the exponential model parameter.
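A minimal sketch of the SPSAM attraction for a single subpixel and class follows, assuming the eight neighboring pixel centers and their class-$m$ proportions are already known; the negative exponent in the weight matches Equation (4) as reconstructed here, and the default $\varepsilon_1$ is an assumption.

```python
import numpy as np

def spsam_value(subpixel_xy, neighbor_centers, neighbor_fractions, eps1=1.0):
    """SPSAM attraction of one subpixel toward class m (Equations (2)-(4)).

    subpixel_xy        : (2,) coordinate of the central subpixel p_n.
    neighbor_centers   : (8, 2) centers of the neighboring pixels P_J.
    neighbor_fractions : (8,) class-m proportions C_m(P_J).
    eps1               : exponential model parameter (value is an assumption).
    """
    d = np.linalg.norm(neighbor_centers - np.asarray(subpixel_xy), axis=1)
    w = np.exp(-d**2 / eps1)            # Equation (4): distance-decaying weight
    return float(np.sum(w * neighbor_fractions))
```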
In the MSPSAM model, the spatial correlation between the central subpixel and its neighboring subpixels is utilized to obtain the predicted value $F_m^{\mathrm{mspsam}}(p_n)$ [23]:
$$F_m^{\mathrm{mspsam}}(p_n) = \max \sum_{n=1}^{KS^2} \sum_{j=1}^{8S^2} w_n x_{nj} \quad (5)$$
$$x_{nj} = \begin{cases} 1, & \text{if subpixels } p_n \text{ and } p_j \text{ are assigned to the same land-cover class} \\ 0, & \text{otherwise} \end{cases} \quad (6)$$
where $w_n$ represents the weight of the spatial correlation between the central subpixel $p_n$ and the neighboring subpixel $p_j$, which is given as:
$$w_n = \exp\left(-d(p_n, p_j)^2 / \varepsilon_2\right) \quad (7)$$
As shown in Figure 3b, $d(p_n, p_j)$ stands for the Euclidean distance from the central subpixel $p_n$ to the neighboring subpixel $p_j$, and $\varepsilon_2$ is the exponential model parameter.
HSAM considers the spatial correlations of the above two models at the same time [35,36]. The predicted value $F_m^{\mathrm{hsam}}(p_n)$ of the HSAM model is derived by integrating the predicted value $F_m^{\mathrm{spsam}}(p_n)$ of the SPSAM model and the predicted value $F_m^{\mathrm{mspsam}}(p_n)$ of the MSPSAM model with an appropriate parameter $\theta$:
$$F_m^{\mathrm{hsam}}(p_n) = \theta F_m^{\mathrm{mspsam}}(p_n) + (1 - \theta) F_m^{\mathrm{spsam}}(p_n) \quad (8)$$
Because the HSAM model inherits the advantages of both the SPSAM and MSPSAM models, its final mapping result outperforms the other two models.
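The blend in Equation (8) is a simple convex combination, which the one-line sketch below makes explicit. The default $\theta = 0.5$ is only a placeholder, since the paper leaves $\theta$ data-dependent.

```python
import numpy as np

def hsam_value(f_spsam, f_mspsam, theta=0.5):
    """Hybrid predicted value (Equation (8)) from the two model outputs.

    f_spsam, f_mspsam : arrays of per-subpixel predicted values.
    theta             : blending weight in [0, 1]; 0.5 is an assumed default.
    """
    return theta * np.asarray(f_mspsam) + (1 - theta) * np.asarray(f_spsam)
```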

4. Proposed Method

As shown in Figure 2, the existing SRMSAM methods are all applied to coarse-fraction images derived from the original coarse remote sensing image. Because of the coarse resolution of the original image, it is difficult for the coarse-fraction images to carry its full spatial–spectral information, and the final mapping accuracy of SRMSAM suffers as a result. To supply more spatial–spectral information and improve the mapping accuracy, the SRMSAM-PAN model is proposed.

4.1. Pansharpening Path

In the SRMSAM-PAN model, a novel processing path (the pansharpening path) is added to the existing HSAM model. In this novel path, the resolution of the original coarse remote sensing image is improved by fusing it with a higher spatial resolution panchromatic image of the same area using the pansharpening technique. Pansharpening can be considered a particular data fusion problem that aims to combine the spatial details of the panchromatic image with the spectral bands of the original remote sensing image. The improved image therefore has the high spectral resolution of the original remote sensing image and the high spatial resolution of the panchromatic image. Due to its effective rendering of spatial details and its fast implementation, principal component analysis (PCA) [37] is selected as the pansharpening method in this paper.
Figure 4 gives the flowchart of PCA pansharpening. Firstly, a set of scalar images called principal components is produced by a linear transformation of the original remote sensing image. The spatial information of the original image is collected in the first principal component, while the spectral information is concentrated in the other principal components. Subsequently, the spatial information from the high spatial resolution panchromatic image is used to replace the first principal component. To reduce spectral distortion in the PCA pansharpening process, the panchromatic image is histogram-matched to the first principal component before the replacement takes place, so that the histogram-matched panchromatic image has the same mean and variance as the component it replaces. Finally, the improved image is obtained by applying the inverse linear transformation. The mathematical model of PCA pansharpening is given in Equation (9):
$$\hat{O}_b = \tilde{O}_b + g_b(\mathrm{PAN} - I) \quad (9)$$
where $\mathrm{PAN}$ is the panchromatic image, $b$ ($b = 1, 2, \ldots, B$, where $B$ is the number of spectral bands in the original image) indexes the spectral bands, $O$ is the original coarse remote sensing image, and $\hat{O}$ is the improved image. $\hat{O}_b$ represents the $b$th spectral band of the improved image, $\tilde{O}_b$ is the $b$th spectral band of the original image interpolated to the scale of the panchromatic image, and $\mathbf{g} = [g_1, g_2, \ldots, g_B]$ is the vector of injection gains, while $I$ is given as:
$$I = \sum_{b=1}^{B} y_b \tilde{O}_b \quad (10)$$
where the weight vector $\mathbf{y} = [y_1, y_2, \ldots, y_B]^T$ measures the spectral overlap between the panchromatic image and the spectral bands.
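A minimal sketch of the PCA pansharpening loop described above follows, assuming the multispectral image has already been interpolated to the panchromatic grid. Array names are illustrative, and the mean/variance histogram match is the simple version stated in the text; real toolboxes differ in detail.

```python
import numpy as np

def pca_pansharpen(ms_up, pan):
    """PCA pansharpening sketch (Figure 4): replace PC1 with matched PAN.

    ms_up : (H, W, B) multispectral image interpolated to the PAN grid.
    pan   : (H, W) panchromatic image.
    """
    H, W, B = ms_up.shape
    X = ms_up.reshape(-1, B)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal components of the spectral bands (descending variance).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = Xc @ Vt.T
    # Histogram-match PAN to PC1 (mean and variance only, as in the text).
    p = pan.reshape(-1)
    p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p                      # replace PC1 with the matched PAN
    fused = pcs @ Vt + mean            # inverse linear transformation
    return fused.reshape(H, W, B)
```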

4.2. Implementation of SRMSAM-PAN

In the pansharpening path, the improved image $\hat{O}$ is unmixed to produce the novel fine-fraction images $F_m^{\mathrm{pan}}$, which contain the predicted values $F_m^{\mathrm{pan}}(p_n)$. As shown in the flowchart of the pansharpening path in Figure 5, the novel fine-fraction images are derived in two steps. In the first step, the original coarse remote sensing image $O$ is improved to obtain the improved image $\hat{O}$ by the pansharpening technique. In the second step, the fine-fraction images $F_m^{\mathrm{pan}}$ are obtained by directly unmixing the improved image $\hat{O}$. The predicted values $F_m^{\mathrm{pan}}(p_n)$ of the land-cover classes in the fraction images are calculated from Equation (11):
$$\mathbf{V}^{\hat{O}} = \mathbf{E} \cdot \mathbf{F}_m^{\mathrm{pan}} + \mathbf{n} \quad (11)$$
where $\mathbf{V}^{\hat{O}} = [V_1^{\hat{O}}, V_2^{\hat{O}}, \ldots, V_B^{\hat{O}}]^T$ is the vector of spectral values of the improved image $\hat{O}$, $B$ is the number of spectral bands, $\mathbf{F}_m^{\mathrm{pan}} = [F_m^{\mathrm{pan}}(p_1), F_m^{\mathrm{pan}}(p_2), \ldots, F_m^{\mathrm{pan}}(p_{KS^2})]^T$ is the vector of predicted values $F_m^{\mathrm{pan}}(p_n)$ of the land-cover classes, $KS^2$ is the number of subpixels, $\mathbf{E}$ is the matrix of spectral endmembers, and $\mathbf{n}$ is random noise. In this paper, the least squares support vector machine (LSVM) model [38] is used to seek the optimal estimate under the condition of minimum random noise. Since the resolution of the original coarse remote sensing image is improved, the novel fine-fraction images can contain more of the spatial–spectral information of the original image.
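For orientation, the sketch below unmixes a single pixel of the improved image under the linear model of Equation (11). The paper uses an LSVM unmixing model [38]; the non-negative least squares solver here is a simpler stand-in for the same idea, not the paper's method.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    """Linear unmixing of one pixel under Equation (11), via NNLS.

    spectrum   : (B,) spectral vector of the improved image.
    endmembers : (B, M) endmember matrix E, one column per class.
    """
    fractions, _ = nnls(endmembers, spectrum)     # non-negativity enforced
    s = fractions.sum()
    return fractions / s if s > 0 else fractions  # renormalize to sum to one
```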
Next, the novel fine-fraction images with the predicted values $F_m^{\mathrm{pan}}(p_n)$ from the pansharpening path and the existing fine-fraction images with the predicted values $F_m^{\mathrm{hsam}}(p_n)$ from the existing HSAM model are integrated to produce the finer-fraction images with more accurate predicted values $F_m(p_n)$. The integration is given as:
$$F_m(p_n) = \alpha F_m^{\mathrm{pan}}(p_n) + (1 - \alpha) F_m^{\mathrm{hsam}}(p_n) \quad (12)$$
where $\alpha$ ($0 \le \alpha < 1$) is the weight parameter that balances the influence of the predicted values $F_m^{\mathrm{pan}}(p_n)$ and $F_m^{\mathrm{hsam}}(p_n)$. The class allocation method then uses the more accurate predicted values $F_m(p_n)$ to allocate a class label to each subpixel, deriving the final mapping result. The flowchart of SRMSAM-PAN is shown in Figure 6. The implementation of SRMSAM-PAN can be summarized in the following steps (a sketch of Steps 3–4 follows the list).
Step 1. In the existing path, the coarse-fraction images are derived from the original coarse remote sensing image by spectral unmixing. At the same time, the resolution of the original image is improved by the pansharpening technique in the pansharpening path.
Step 2. The fine-fraction images with the predicted value F m hsam ( p n ) are produced by the HSAM model. Also, the novel fine-fraction images with the predicted value F m pan ( p n ) are derived by unmixing the improved image.
Step 3. The fine-fraction images from the existing path and those from the novel pansharpening path are integrated to produce finer-fraction images with more accurate predicted values $F_m(p_n)$ (see Equation (12)).
Step 4. According to the constraints in Equation (1), the more accurate predicted values $F_m(p_n)$ are used to allocate class labels to each subpixel, obtaining the final mapping result.
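As referenced above, here is a sketch of Steps 3–4 for a single mixed pixel: the two sets of predicted values are blended by Equation (12), and labels are then assigned greedily, highest value first, in the spirit of the HAVF rule [41]. The exact tie-breaking order is an assumption.

```python
import numpy as np

def integrate_and_allocate(f_pan, f_hsam, counts, alpha):
    """Blend two predicted-value sets and allocate labels for one mixed pixel.

    f_pan, f_hsam : (M, S*S) predicted values from the two paths.
    counts        : (M,) per-class subpixel quotas from Equation (1);
                    assumed to sum to S*S so every subpixel gets a label.
    """
    finer = alpha * f_pan + (1 - alpha) * f_hsam          # Equation (12)
    labels = -np.ones(finer.shape[1], dtype=int)
    remaining = np.asarray(counts).copy()
    for idx in np.argsort(finer, axis=None)[::-1]:        # highest values first
        m, n = np.unravel_index(idx, finer.shape)
        if labels[n] == -1 and remaining[m] > 0:
            labels[n] = m
            remaining[m] -= 1
    return labels
```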
Comparing Figure 2 with Figure 6, the two kinds of fine-fraction images from the two paths carry different predicted values. The more spatial–spectral information the pansharpening technique supplies, the higher the mapping accuracy generated by the proposed SRMSAM-PAN.

5. Experiments and Analysis

Three real hyperspectral images were used to test the performance of the proposed SRMSAM-PAN. To avoid the effect of image registration error on the SRMSAM methods, a simulated coarse remote sensing image was produced by downsampling the original fine hyperspectral image [39]. The spectral response function of the IKONOS satellite was used to produce a suitable panchromatic image, in order to consider only the influence of the pansharpening technique on the mapping result and to avoid the impact of errors caused by the acquisition of a real panchromatic image [40]. Highest soft-attribute values assigned first (HAVF) [41] was used as the class allocation method. All experiments were run in MATLAB 2018a (https://www.mathworks.com/).
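A common way to produce such a simulated coarse image is $S \times S$ block averaging, sketched below. The paper does not state the exact degradation filter, so the mean filter here is an assumption.

```python
import numpy as np

def degrade(image, S=4):
    """Simulate the coarse image by S x S block averaging.

    image : (H, W, B) fine hyperspectral image with H and W divisible by S.
    """
    H, W, B = image.shape
    return image.reshape(H // S, S, W // S, S, B).mean(axis=(1, 3))
```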
Four SRMSAM methods were tested and compared: SPSAM [34], MSPSAM [23], HSAM [36], and the proposed SRMSAM-PAN. The mapping accuracy was evaluated quantitatively by the overall accuracy (OA) and the Kappa coefficient (Kappa).
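Both accuracy measures follow from the confusion matrix of the mapped and reference label images; the sketch below shows this standard computation (it is not code from the paper).

```python
import numpy as np

def oa_and_kappa(reference, mapped):
    """Overall accuracy (%) and Kappa coefficient from two label maps."""
    ref = np.asarray(reference).ravel()
    est = np.asarray(mapped).ravel()
    classes = np.unique(np.concatenate([ref, est]))
    cm = np.array([[np.sum((ref == a) & (est == b)) for b in classes]
                   for a in classes], dtype=float)     # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return 100 * oa, (oa - pe) / (1 - pe)
```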

5.1. Experiment 1

The first hyperspectral image was acquired over the Engineering School at the University of Pavia [39]. As shown in Figure 7a, the tested region had 100 × 100 pixels, 103 spectral bands, and 1.3 m spatial resolution. Figure 7a was degraded with $S = 4$ to simulate the coarse image shown in Figure 7b. The panchromatic image shown in Figure 7c was produced using the spectral response of the IKONOS satellite. As shown in Figure 7d, the pansharpening technique was utilized to fuse the coarse remote sensing image and the panchromatic image to produce the improved image, which has the spectral resolution of the former and the spatial resolution of the latter. Visually, the improved image is similar to the original image, so SRMSAM-PAN can use it to obtain a higher mapping accuracy. The weight parameter $\alpha$ was set to 0.6.
As shown in Figure 8a, the reference image contained asphalt, meadows, trees, and bricks. The SRMSAM results of the four methods are given in Figure 8b–e. By visual comparison, SRMSAM-PAN obtained a better mapping result than SPSAM, MSPSAM, and HSAM. For example, the many disconnected patches and obvious burrs in the bricks were alleviated with the aid of the pansharpening technique. Because it is supplied with more spatial–spectral information, SRMSAM-PAN was closer to the reference image than the other three SRMSAM methods.
For quantitative evaluation, we utilized the mapping accuracy (%) of each class and the OA (%) to evaluate the performance of the four methods. As the evaluation results in Table 1 show, the mapping accuracy of SRMSAM-PAN was higher than that of the other three methods. For example, the mapping accuracy of trees increased from 56.32% with HSAM to 72.31% with SRMSAM-PAN. Since more spatial–spectral information was supplied by the pansharpening technique, SRMSAM-PAN achieved the highest OA of 93.87%.

5.2. Experiment 2

In Experiment 2, a hyperspectral image with 102 spectral bands and 1.3 m spatial resolution, which was larger and contained more classes, was used [42]. As shown in Figure 9a, the tested region, covering the two residential areas on both sides of the Ticino river in the city of Pavia, had 400 × 400 pixels. Figure 9a was degraded with $S = 4$ to obtain the simulated coarse image shown in Figure 9b. Figure 9c is the panchromatic image produced by the method described in Experiment 1, and the pansharpening result is shown in Figure 9d. Better SRMSAM results were obtained by using more spatial–spectral information from the pansharpening result. The weight parameter $\alpha$ was set to 0.5.
The reference image in Figure 10a shows six classes: shadow, water, road, tree, grass, and roof. Figure 10b–e gives the mapping results of the four methods. With the help of the pansharpening technique, the mapping result in Figure 10e is more continuous, with smoother boundaries. SRMSAM-PAN was visually closer to the reference image than the other three SRMSAM methods.
The mapping accuracies (%) of each class and the OA (%) of the four methods are reported in Table 2. Similar to Experiment 1, both the per-class mapping accuracies and the OA (%) of SRMSAM-PAN were higher than those of the other three methods. In addition, to test the influence of the zoom factor $S$ on the final mapping results, the four methods were also run with zoom factors of 2 and 8. Figure 11a,b shows the OA (%) and Kappa of the four methods for the three zoom factors. No matter how the zoom factor $S$ changed, the OA (%) and Kappa of SRMSAM-PAN were higher than those of SPSAM, MSPSAM, and HSAM.

5.3. Experiment 3

The third dataset, with 191 bands and 3 m spatial resolution, was collected over a mall in Washington, DC [42]. As shown in Figure 12a, the tested region had 240 × 280 pixels. The coarse image shown in Figure 12b was obtained by downsampling Figure 12a with $S = 4$. Figure 12c,d shows the panchromatic image and the pansharpening result, respectively. The weight parameter $\alpha$ was selected as 0.5.
There were seven classes, including shadow, water, road, tree, grass, roof, and trail, in the reference image shown in Figure 13a. The SRMSAM results of the four methods are given in Figure 13b–e. Many speckle artifacts exist in Figure 13b,c; because the pansharpening technique supplies more spatial–spectral information, this phenomenon was reduced, and the SRMSAM-PAN result was more similar to the reference image.
Consistent with Experiments 1 and 2, we also evaluated the four SRMSAM methods for three zoom factors, i.e., 2, 4, and 8. As the OA (%) and Kappa in Figure 14a,b show, the quantitative scores of SRMSAM-PAN were higher than those of the other three SRMSAM methods.

5.4. Discussion

The weight parameter $\alpha$ ($0 \le \alpha < 1$) balances the influence of the predicted values $F_m^{\mathrm{pan}}(p_n)$ and $F_m^{\mathrm{hsam}}(p_n)$ in the proposed SRMSAM-PAN. To find an appropriate value, Experiment 2 ($S = 4$) and Experiment 3 ($S = 4$) were used to calculate the OA (%) for ten values of $\alpha$ in the range [0, 0.9] at intervals of 0.1; the weight parameter in Experiment 1 was selected in the same way. As the test results in Figure 15a,b show, when $\alpha = 0$, only the HSAM model works, and the predicted value $F_m^{\mathrm{pan}}(p_n)$ from the pansharpening path plays no role. As $\alpha$ increases, the OA (%) improves markedly, because more spatial–spectral information is supplied by the pansharpening path. At the appropriate value $\alpha = 0.5$ in Experiments 2 and 3, the pansharpening path contributed the most spatial–spectral information, resulting in the highest OA (%). However, as $\alpha$ increased further, the predicted value $F_m^{\mathrm{hsam}}(p_n)$ from the existing path contributed less to the overall solution (Equation (12)), and this loss of information degraded the OA (%).
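This selection procedure amounts to a grid search, as sketched below; `run_mapping` and `evaluate_oa` are caller-supplied stand-ins for the SRMSAM-PAN pipeline and the OA computation, not functions defined in the paper.

```python
import numpy as np

def select_alpha(run_mapping, evaluate_oa, alphas=np.arange(0.0, 1.0, 0.1)):
    """Grid search over the weight parameter alpha, mirroring Figure 15.

    run_mapping(alpha) : returns a mapping result for one alpha (assumed).
    evaluate_oa(res)   : returns the OA (%) of that result (assumed).
    """
    scores = [evaluate_oa(run_mapping(a)) for a in alphas]
    best = int(np.argmax(scores))
    return alphas[best], scores[best]
```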
In addition, the final mapping result can also be obtained by pansharpening-then-classification (PTC), which belongs to the STC type. Although SRMSAM-PAN and PTC belong to different types, a comparison between them is worth studying. For a fair comparison, an SVM-based classification method was selected for PTC. The number of training samples was set to 30%, 20%, and 10% per class, with the remaining samples per class used as test samples for the SVM. We named the PTC with 30% training samples PTC1, the PTC with 20% training samples PTC2, and the PTC with 10% training samples PTC3. The proposed SRMSAM-PAN and the three PTC variants were compared in Experiments 2 and 3. As the test results in Figure 16a,b show, when there was abundant supervisory information (i.e., training samples), PTC was superior to the proposed SRMSAM-PAN. Conversely, SRMSAM-PAN obtained a higher OA (%) than PTC in the absence of adequate supervisory information. Because supervisory information is usually produced by human annotation, large amounts of it are often difficult to obtain. Therefore, SRMSAM-PAN is more widely applicable than PTC for coarse remote sensing images.
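For reference, the PTC baseline reduces to per-pixel classification of the pansharpened image, sketched below with scikit-learn's SVM. Default `SVC` hyperparameters are an assumption, as the paper only specifies the training-sample ratios.

```python
import numpy as np
from sklearn.svm import SVC

def ptc(fused, train_mask, train_labels):
    """Pansharpening-then-classification (PTC) baseline sketch.

    fused        : (H, W, B) pansharpened image.
    train_mask   : (H, W) boolean mask of training pixels.
    train_labels : labels of the training pixels, in mask order.
    """
    H, W, B = fused.shape
    X = fused.reshape(-1, B)
    clf = SVC().fit(X[train_mask.ravel()], train_labels)
    return clf.predict(X).reshape(H, W)   # per-pixel hard classification
```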
Finally, the performance of SRMSAM-PAN depends on the pansharpening technique, so it is necessary to test the effect of different pansharpening methods on the proposed method. The band-dependent spatial detail (BDSD) method [43] was selected as an alternative pansharpening method and compared with PCA in Experiments 2 and 3 for three zoom factors, that is, 2, 4, and 8. Figure 17a,b shows the OA (%) of the SRMSAM-PAN results for the two pansharpening methods. Since BDSD is more effective than PCA [37], the OA (%) of BDSD-based SRMSAM-PAN is higher than that of PCA-based SRMSAM-PAN. Hence, a more effective pansharpening method yields a better mapping result.

6. Conclusions

The contribution of this research is to improve the existing super-resolution mapping based on the spatial attraction model with the pansharpening technique, thereby obtaining a more accurate super-resolution mapping result. In the proposed SRMSAM-PAN, the pansharpening technique is first utilized to improve the resolution of the original image in the novel pansharpening path, and the novel fine-fraction images are obtained by unmixing the improved image. The finer-fraction images with more spatial–spectral information are then derived by integrating the novel fine-fraction images and the existing fine-fraction images. Finally, the final mapping result is produced by the class allocation method, according to the values predicted from the finer-fraction images. The experimental results show that the proposed SRMSAM-PAN with an appropriate parameter obtains a better mapping result than the three SRMSAM methods: SPSAM, MSPSAM, and HSAM.
Since the performance of the proposed SRMSAM-PAN is related to the pansharpening technique, a better mapping result can be obtained with a more effective pansharpening method, and developing one is worthwhile future work. Moreover, the appropriate parameter $\alpha$ was selected by multiple tests in this paper; an adaptive method for selecting the most appropriate weight parameter $\alpha$ is worth studying in future work. Finally, we simulated the coarse remote sensing image by downsampling the original fine image, so the performance of the proposed method on real coarse remote sensing images will be studied further.

Author Contributions

Conceptualization, P.W.; Methodology, P.W.; Software, G.Z.; Validation, G.Z., S.H. and L.W.; Formal analysis, S.H.; Investigation, G.Z.; Resources, L.W.; Data curation, P.W.; Writing—original draft preparation, P.W.; Writing—review and editing, G.Z.; Visualization, L.W.; Supervision, G.Z.; Project administration, P.W.; Funding acquisition, P.W.

Funding

This work was supported by the National Natural Science Foundation of China (grant nos. 61801211, 61871218, 61701272, and 61675051), the Fundamental Research Funds for the Central Universities (grant no. 3082017NP2017421), and the National Aerospace Science Foundation of China (grant no. 20185152).

Acknowledgments

The authors would like to thank Dr. Qunming Wang of Tongji University for providing the relevant data set. The authors would like to thank the handling editors and the reviewers for providing valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fisher, P. The pixel: A snare and a delusion. Int. J. Remote Sens. 1997, 18, 679–685. [Google Scholar] [CrossRef]
  2. Wang, L.; Liu, D.; Wang, Q. Geometric method of fully constrained least squares linear spectral mixture analysis. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3558–3566. [Google Scholar] [CrossRef]
  3. Atkinson, P.M. Mapping sub-pixel boundaries from remotely sensed images. In Innovations in GIS; Taylor & Francis: New York, NY, USA, 1997; pp. 166–180. [Google Scholar]
  4. Ma, A.; Zhong, Y.; He, D.; Zhang, L. Multi-objective subpixel land-cover mapping. IEEE Trans. Geosci. Remote Sens. 2018, 56, 422–435. [Google Scholar] [CrossRef]
  5. Atkinson, P.M. Sub-pixel target mapping from soft-classified remotely sensed imagery. Photogramm. Eng. Remote Sens. 2005, 71, 839–846. [Google Scholar] [CrossRef]
  6. Tatem, A.J.; Lewis, H.G.; Atkinson, P.M.; Nixon, M.S. Super-resolution target identification from remotely sensed images using a Hopfield neural network. IEEE Trans. Geosci. Remote Sens. 2001, 39, 781–796. [Google Scholar] [CrossRef]
  7. Muad, A.M.; Foody, G.M. Impact of land cover patch size on the accuracy of patch area representation in HNN-based super resolution mapping. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2012, 5, 1418–1427. [Google Scholar] [CrossRef]
  8. Shao, Y.; Lunetta, R.S. Sub-pixel mapping of tree canopy, impervious surfaces, and cropland in the Laurentian great lakes basin using MODIS time-series data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2011, 4, 336–347. [Google Scholar] [CrossRef]
  9. Nigussie, D.; Zurita-Milla, R.; Clevers, J.G.P.W. Possibilities and limitations of artificial neural networks for subpixel mapping of land cover. Int. J. Remote Sens. 2011, 32, 7203–7226. [Google Scholar] [CrossRef]
  10. Chen, Y.; Ge, Y.; Heuvelink, G.B.M.; An, R.; Chen, Y. Object-based superresolution land-cover mapping from remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 328–340. [Google Scholar] [CrossRef]
  11. Chen, Y.; Zhou, Y.; Ge, Y.; An, R.; Chen, Y. Enhancing land cover mapping through integration of pixel-based and object-based classifications from remotely sensed imagery. Remote Sens. 2018, 10, 77. [Google Scholar] [CrossRef]
  12. Jin, H.; Mountrakis, G.; Li, P. A super-resolution mapping method using local indicator variograms. Int. J. Remote Sens. 2012, 33, 7747–7773. [Google Scholar] [CrossRef]
  13. Wang, Q.; Atkinson, P.M.; Shi, W. Indicator cokriging-based subpixel mapping without prior spatial structure information. IEEE Trans. Geosci. Remote Sens. 2015, 53, 309–323. [Google Scholar] [CrossRef]
  14. Wang, Q.; Atkinson, P.M. The effect of the point spread function on sub-pixel mapping. Remote Sens. Environ. 2017, 193, 127–137. [Google Scholar] [CrossRef]
  15. Chen, Y.; Ge, Y.; An, R.; Chen, Y. Super-resolution mapping of impervious surfaces from remotely sensed imagery with points-of-interest. Remote Sens. 2018, 10, 242. [Google Scholar] [CrossRef]
  16. Wang, P.; Zhang, G.; Kong, Y.; Leung, H. Superresolution mapping based on hybrid interpolation by parallel paths. Remote Sens. Lett. 2019, 10, 149–157. [Google Scholar] [CrossRef]
  17. Wang, Q.; Shi, W.; Atkinson, P.M. Spatiotemporal subpixel mapping of time-series images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5397–5411. [Google Scholar] [CrossRef]
  18. Wang, P.; Wang, L.; Chanussot, J. Soft-then-hard subpixel land cover mapping based on spatial-spectral interpolation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1851–1854. [Google Scholar] [CrossRef]
  19. Wang, Q.; Shi, W.; Wang, L. Allocating classes for soft-then-hard sub-pixel mapping algorithms in units of class. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2940–2959. [Google Scholar] [CrossRef]
  20. Li, F.; Jia, X. Superresolution reconstruction of multispectral data for improved image classification. IEEE Geosci. Remote Sens. Lett. 2009, 5, 798–802. [Google Scholar]
  21. Wang, L.; Wang, P.; Zhao, C. Producing subpixel resolution thematic map from coarse imagery: MAP algorithm-based super-resolution recovery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 2290–2304. [Google Scholar] [CrossRef]
  22. Wang, P.; Wang, L.; Wu, Y.; Leung, H. Utilizing pansharpening technique to produce sub-pixel resolution thematic map from coarse remote sensing image. Remote Sens. 2018, 10, 884. [Google Scholar] [CrossRef]
  23. Wang, Q.; Wang, L.; Liu, D. Particle swarm optimization-based sub-pixel mapping for remote-sensing imagery. Int. J. Remote Sens. 2012, 33, 6480–6496. [Google Scholar] [CrossRef]
  24. Wang, P.; Zhang, G.; Leung, H. Improving super-resolution flood inundation mapping for multispectral remote sensing image by supplying more spectral information. IEEE Geosci. Remote Sens. Lett. 2018. [Google Scholar] [CrossRef]
  25. Zhang, Y.; Ling, F.; Li, X.; Du, Y. Super-resolution land cover mapping using multiscale self-similarity redundancy. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 5130–5145. [Google Scholar] [CrossRef]
  26. Tong, X.; Xu, X.; Plaza, A.; Xie, H.; Pan, H.; Cao, W.; Lv, D. A new genetic method for subpixel mapping using hyperspectral images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 4480–4491. [Google Scholar] [CrossRef]
  27. Wang, Q.; Shi, W.; Atkinson, P.M. Sub-pixel mapping of remote sensing images based on radial basis function interpolation. ISPRS J. Photogramm. 2014, 92, 1–15. [Google Scholar] [CrossRef]
  28. Xu, X.; Zhong, Y.; Zhang, L.; Zhang, H. Sub-pixel mapping based on a MAP model with multiple shifted hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 580–593. [Google Scholar] [CrossRef]
  29. Wang, P.; Wang, L.; Mura, M.D.; Chanussot, J. Using multiple subpixel shifted images with spatial-spectral information in soft-then-hard subpixel mapping. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 13, 1851–1854. [Google Scholar] [CrossRef]
  30. Nguyen, M.Q.; Atkinson, P.M.; Lewis, H.G. Superresolution mapping using Hopfield neural network with LIDAR data. IEEE Geosci. Remote Sens. Lett. 2005, 2, 366–370. [Google Scholar] [CrossRef]
  31. Nguyen, M.Q.; Atkinson, P.M.; Lewis, H.G. Superresolution mapping using a Hopfield neural network with fused images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 736–749. [Google Scholar] [CrossRef]
  32. Nguyen, M.Q.; Atkinson, P.M.; Lewis, H.G. Super-resolution mapping using Hopfield neural network with panchromatic imagery. Int. J. Remote Sens. 2011, 32, 6149–6176. [Google Scholar] [CrossRef]
  33. Thornton, M.W.; Atkinson, P.M.; Holland, D.A. A linearised pixel swapping method for mapping rural linear land cover features from fine spatial resolution remotely sensed imagery. Comput. Geosci. 2007, 33, 1261–1272. [Google Scholar] [CrossRef]
  34. Mertens, K.C.; De Baets, B.; Verbeke, L.P.C.; De Wulf, R. A sub-pixel mapping algorithm based on sub-pixel/pixel spatial attraction models. Int. J. Remote Sens. 2006, 27, 3293–3310. [Google Scholar] [CrossRef]
  35. Ling, F.; Li, X.; Du, Y.; Xiao, F. Sub-pixel mapping of remotely sensed imagery with hybrid intra- and inter-pixel dependence. Int. J. Remote Sens. 2013, 34, 341–357. [Google Scholar] [CrossRef]
  36. Wang, P.; Wang, L. Soft-then-hard super-resolution mapping based on a spatial attraction model with multiscale sub-pixel shifted images. Int. J. Remote Sens. 2017, 38, 4303–4326. [Google Scholar] [CrossRef]
  37. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2585. [Google Scholar] [CrossRef]
  38. Wang, L.; Liu, D.; Wang, Q. Spectral unmixing model based on least squares support vector machine with unmixing residue constraints. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1592–1596. [Google Scholar] [CrossRef]
  39. Wang, P.; Zhang, G.; Leung, H. Utilizing parallel networks to produce sub-pixel shifted images with multiscale spatio-spectral information for soft-then-hard sub-pixel mapping. IEEE Access. 2018, 6, 57485–57496. [Google Scholar] [CrossRef]
  40. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312. [Google Scholar] [CrossRef]
  41. Ge, Y.; Chen, Y.; Stein, A.; Li, S.; Hu, J. Enhanced sub-pixel mapping with spatial distribution patterns of geographical objects. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2356–2370. [Google Scholar] [CrossRef]
  42. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction. IEEE Trans. Geosci. Remote Sens. 2013, 51, 242–256. [Google Scholar] [CrossRef]
  43. Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE pan sharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236. [Google Scholar] [CrossRef]
Figure 1. Example of spatial correlation. (a) Spectral unmixing result of Class A. (b) Probability of Distribution 1. (c) Probability of Distribution 2.
Figure 2. The flowchart of super-resolution mapping based on the spatial attraction model (SRMSAM).
Figure 3. Euclidean distance. (a) Central subpixel $p_n$ and its eight neighboring pixels. (b) Central subpixel $p_n$ and its eight neighboring subpixels.
Figure 4. The flowchart of the principal component analysis (PCA) pansharpening.
Figure 5. The flowchart of the pansharpening path.
Figure 6. The flowchart of improving remote sensing image super-resolution mapping based on the spatial attraction model by utilizing the pansharpening technique (SRMSAM-PAN).
Figure 7. (a) RGB composite of the image (bands 19, 30, and 44 for red, green, and blue, respectively). (b) Coarse image ($S = 4$). (c) Panchromatic image. (d) Pansharpening result.
Figure 8. SRMSAM results in Experiment 1 ($S = 4$). (a) Reference image. (b) Subpixel/pixel spatial attraction model (SPSAM). (c) Subpixel/subpixel spatial attraction model (MSPSAM). (d) Hybrid spatial attraction model (HSAM). (e) SRMSAM-PAN.
Figure 9. (a) RGB composite of the image (bands 102, 56, and 31 for red, green, and blue, respectively). (b) Coarse image ($S = 4$). (c) Panchromatic image. (d) Pansharpening result.
Figure 10. SRMSAM results in Experiment 2 ($S = 4$). (a) Reference image. (b) SPSAM. (c) MSPSAM. (d) HSAM. (e) SRMSAM-PAN.
Figure 11. (a) Overall accuracy (OA, %) of the four methods in relation to the zoom factor $S$. (b) Kappa coefficient (Kappa) of the four methods in relation to the zoom factor $S$.
Figure 12. (a) RGB composite of the image (bands 102, 56, and 31 for red, green, and blue, respectively). (b) Coarse image ($S = 4$). (c) Panchromatic image. (d) Pansharpening result.
Figure 13. SRMSAM results in Experiment 3 ($S = 4$). (a) Reference image. (b) SPSAM. (c) MSPSAM. (d) HSAM. (e) SRMSAM-PAN.
Figure 14. (a) OA (%) of the four methods in relation to the zoom factor $S$. (b) Kappa of the four methods in relation to the zoom factor $S$.
Figure 15. (a) OA (%) of SRMSAM-PAN in relation to the weight parameter $\alpha$ in Experiment 2 ($S = 4$). (b) OA (%) of SRMSAM-PAN in relation to the weight parameter $\alpha$ in Experiment 3 ($S = 4$).
Figure 16. (a) OA (%) of SRMSAM-PAN and pansharpening technique then classification (PTC) in Experiment 2. (b) OA (%) of SRMSAM-PAN and PTC in Experiment 3.
Figure 17. (a) OA (%) of the SRMSAM-PAN results in relation to the band-dependent spatial detail (BDSD) and PCA methods in Experiment 2. (b) OA (%) of the SRMSAM-PAN results in relation to BDSD and PCA in Experiment 3.
Table 1. Mapping accuracy (%) of the four methods ($S = 4$).

Class      SPSAM    MSPSAM   HSAM     SRMSAM-PAN
Meadows    96.37    97.10    97.73    99.13
Asphalt    95.48    97.29    97.47    99.82
Trees      45.13    55.23    56.32    72.31
Bricks     77.18    83.37    83.60    90.30
OA         85.17    88.73    89.20    93.87
Table 2. Mapping accuracy (%) of the four methods ($S = 4$).

Class      SPSAM    MSPSAM   HSAM     SRMSAM-PAN
Shadow     52.46    62.80    65.98    74.57
Water      98.04    98.33    98.35    98.76
Road       79.38    82.97    84.03    89.74
Tree       80.95    83.47    84.52    89.00
Grass      80.51    83.94    85.66    89.41
Roof       85.89    88.63    89.87    92.49
OA         88.52    90.86    92.20    95.11
