Article

Classifying Vegetation Types in Mountainous Areas with Fused High Spatial Resolution Images: The Case of Huaguo Mountain, Jiangsu, China

Dan Chen, Xianyun Fei, Zhen Wang, Yajun Gao, Xiaowei Shen, Tingting Han and Yuanzhi Zhang

1 School of Geomatics and Marine Information, Jiangsu Ocean University, Lianyungang 222002, China
2 Lianyungang Forestry Technical Guidance Station, Lianyungang 222005, China
3 Key Laboratory of Lunar and Deep Space Exploration, National Astronomical Observatory, Chinese Academy of Sciences, Beijing 100101, China
4 School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(20), 13390; https://doi.org/10.3390/su142013390
Submission received: 12 September 2022 / Revised: 14 October 2022 / Accepted: 14 October 2022 / Published: 17 October 2022

Abstract

This study evaluated image fusion quality for vegetation classification in the Kongquegou scenic area on the southern slope of Huaguo Mountain in Lianyungang, Jiangsu Province, China. Four fusion algorithms were used to fuse WorldView-2 multispectral and panchromatic images: the GS (Gram-Schmidt) transform, Ehlers, Wavelet transform, and Modified IHS. Fusion quality was evaluated through visual comparison, quantitative index analysis, and vegetation classification accuracy. The results revealed that the GS and Wavelet transforms produced fusion images with higher spectral fidelity and better overall quality, followed by the Modified IHS and Ehlers methods. In terms of vegetation classification, the Wavelet transform achieved higher accuracy both with spectral information alone and with added spatial structure, and proved suitable for vegetation classification in the selected area. Although the Modified IHS obtained better classification accuracy from spectral features, adding spatial structure to the classification process yielded less improvement and lower robustness. The GS transform offered better spectral fidelity but relatively low vegetation classification accuracy, both with spectral features alone and with combined spectral and spatial features. The Ehlers method produced vegetation classification results similar to those of the GS transform. In all cases, accuracy was significantly higher for the fused images than for the original multispectral image. Overall, the Wavelet transform showed the best vegetation classification results in the study area among the four fusion algorithms.

1. Introduction

Remote sensing image fusion refers to the process of combining multi-source remote sensing data according to certain rules to eliminate information redundancy and thereby broaden the range and effectiveness of applications [1,2,3,4]. High-quality fusion of remote sensing images entails more than simply combining data; it also implies improving the images' spatial resolution while retaining the original spectral information, thereby enhancing the reliability of interpretation [5]. Employing an appropriate fusion method is therefore a critical task in remote sensing image enhancement and a necessary step for obtaining high-precision image classification results.
The increasing availability of high-resolution remote sensing images, in terms of both the amount of data and the variety of sensors, has focused much scholarly attention on high-resolution image fusion, producing a substantial body of relevant research [6,7,8,9,10,11,12,13,14]. Some of this research has demonstrated the effectiveness of high-resolution image fusion in improving classification accuracy. For example, Wang et al. used WorldView-2 remote sensing images to compare five fusion techniques and found a significant improvement in image classification accuracy after fusion [15]. Similarly, Li et al. [16] reported that a fusion method combining the tasseled cap transformation and principal component analysis effectively improved the overall identification of ground objects. Experiments performed by Lu [17] also showed that the accuracy of land information extraction based on the GS (Gram-Schmidt) transform could be improved at an appropriate segmentation scale.
While scholars have demonstrated that image fusion improves image quality and increases classification accuracy, the literature also shows that different fusion techniques offer different enhancement characteristics. The choice of technique should therefore take into account the characteristics of the image sensor and the purpose of the classification [18,19,20,21,22,23,24,25]. For images originating from various sensor types, many studies have examined the effectiveness of fusion techniques in terms of spectral and spatial fidelity. For example, Ren et al., using six fusion methods on GF-2 imagery, showed that NNDiffuse was the most suitable for GF-2 image fusion [26]. Meanwhile, in experiments with BJ-2 and GF-2 images, Fang et al. found that the Pansharp and GS transforms better improved spatial resolution for the BJ-2 images, while Pansharp and HPF performed better for the GF-2 images [27]. Most of these investigations focused on the fidelity of spectral information and gave less consideration to classification accuracy.
In addition, the specific thematic application can affect the classification or interpretation of images, and the requirements for image enhancement may differ accordingly [25,28,29,30,31,32]. When a fused image is used to extract vegetation information in mountainous areas, the requirements for spectral fidelity and spatial structure become much greater because of the strong similarity of spectral characteristics among vegetation types. In this case, an effective image fusion method should be selected according to the classification accuracy of vegetation types in the study area, in addition to the method's spectral fidelity and the accuracy of its spatial information.
This investigation used WorldView-2 images to compare four image fusion methods: the GS (Gram-Schmidt) transform, Ehlers, Modified IHS, and Wavelet transform. Fusion quality was evaluated by vegetation classification accuracy, visual inspection, and quantitative index analysis. The results contribute to the field by providing technical support for image fusion enhancement in the extraction of vegetation information in mountainous areas.

2. Materials and Methods

2.1. Study Area

The Kongquegou scenic spot, located on the southern slope of Huaguo Mountain in Lianyungang City, was selected as the study area. Huaguo Mountain is situated in the northern part of Jiangsu Province, in the transition zone between northern and southern China. It has a temperate monsoon climate with moderate rainfall and abundant sunshine, suitable for plant growth. Many vegetation types found in both northern and southern regions grow in this mountainous area, whose highest peak is 624.4 m above sea level. The vertical zonation of vegetation types is not obvious, and the vegetation consists mainly of artificial forest. The Kongquegou scenic spot extends along a mountain river formed in the main valley line on the southern slope of Huaguo Mountain. Most of the vegetation types in the study area are affected by the terrain, including Quercus acutissima, Castanea mollissima, Pterocarya stenoptera, and others. The chosen area therefore offers good representativeness for vegetation information extraction from remote sensing imagery in a mountainous setting.
Figure 1 shows the location and an image of the study area, which extends from the southern end of Kongquegou to Jiulong Bridge in the north and covers 38.4 hectares.

2.2. Data Preprocessing

The data used in the research comprised WorldView-2 imagery with 1 panchromatic band and 8 multispectral bands. The spatial resolution was 0.5 m for the panchromatic band and 2 m for the 8 multispectral bands, which were red, green, blue, near-infrared 1 and 2, coastal, yellow, and red edge. The imaging date was 31 August 2019, and the image quality was good. In that season, the leaves of all vegetation types in the study area were in a lush green state, and the spectral differences among vegetation types were smaller than at any other point in the growing period. The spectral fidelity requirements for extracting vegetation information were therefore higher.
Before image fusion, atmospheric correction was performed on the 2 m multispectral data, and geometric correction was applied to both the 2 m multispectral (MUL) and 0.5 m panchromatic (PAN) data. The FLAASH model was used for atmospheric correction, a 3D polynomial model was used for geometric correction, and cubic convolution was used for resampling.
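FLAASH and the polynomial geometric model are ENVI workflow operations and are not reproduced here, but the resampling step can be sketched in code. The following is a minimal sketch in which cubic-spline interpolation (order=3 in SciPy) stands in for cubic convolution; array shapes are illustrative assumptions.

```python
# Minimal sketch of the resampling step only: upsampling the 2 m
# multispectral bands to the 0.5 m panchromatic grid. Cubic-spline
# interpolation (order=3) stands in for the cubic convolution used
# in the study.
import numpy as np
from scipy.ndimage import zoom

def upsample_ms_to_pan(ms: np.ndarray, factor: int = 4) -> np.ndarray:
    """Upsample a (bands, rows, cols) multispectral stack by `factor`."""
    return np.stack([zoom(band, factor, order=3) for band in ms])

# Example: 8-band, 2 m MS chip -> 0.5 m grid (factor = 2 m / 0.5 m = 4).
ms = np.random.rand(8, 64, 64)        # stand-in data
ms_up = upsample_ms_to_pan(ms)        # shape (8, 256, 256)
```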

2.3. Study Method

2.3.1. Image Fusion Algorithms

This paper compares the performance of four fusion methods, the GS transform, Wavelet transform, Modified IHS, and Ehlers, when used to fuse images. Each of these methods has distinct characteristics, as shown in Table 1. For example, the GS transform, a high-fidelity remote sensing image fusion method, uses the multispectral image to simulate a panchromatic image, which maintains the consistency of spectral information before and after fusion. The Wavelet transform reduces image noise and detail distortion through a low-entropy, de-correlating decomposition. Because the Modified IHS technique replaces the I component produced by the IHS (intensity, hue, saturation) transformation of an RGB image with the panchromatic image, only 3 multispectral bands can be fused at a time, which makes the fusion effect unstable. The Ehlers method is based on Fourier filtering and the Modified IHS transformation, which effectively combines spectral information with high-frequency information. Because the Modified IHS and Ehlers methods can combine only 3 multispectral bands with the panchromatic image, the optimum bands had to be identified first. We therefore calculated the optimal index factor (OIF); a larger OIF indicates that more information is preserved in the combined bands, and the combination with the largest OIF was used for image fusion. For the image used in this study, the largest OIF value was 236.39 for the combination of bands 5, 7, and 8, corresponding to the red and near-infrared 1 and 2 bands.
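For a band triplet, the OIF is commonly computed as the sum of the three band standard deviations divided by the sum of the absolute pairwise correlation coefficients. The following Python sketch shows this selection under that definition; the data shapes are illustrative assumptions, and only the final result (bands 5, 7, 8 with an OIF of 236.39) comes from the study.

```python
# Sketch of optimal index factor (OIF) selection for a 3-band combination:
# OIF = (sum of band standard deviations) / (sum of |pairwise correlations|).
import itertools
import numpy as np

def oif(bands: np.ndarray, combo: tuple) -> float:
    """bands: (n_bands, rows, cols); combo: indices of 3 bands."""
    flat = [bands[b].ravel() for b in combo]
    std_sum = sum(f.std() for f in flat)
    corr_sum = sum(abs(np.corrcoef(flat[i], flat[j])[0, 1])
                   for i, j in itertools.combinations(range(3), 2))
    return std_sum / corr_sum

def best_combo(bands: np.ndarray):
    """Return the 3-band combination with the largest OIF."""
    combos = itertools.combinations(range(bands.shape[0]), 3)
    return max(((c, oif(bands, c)) for c in combos), key=lambda t: t[1])

bands = np.random.rand(8, 128, 128)   # stand-in 8-band image
print(best_combo(bands))
```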

2.3.2. Vegetation Type Classification Based on Fusion Images

The vegetation in the study area was divided into 9 types based on a combination of how distinctly each type is expressed in the image and the composition of the vegetation, as shown in Table 2 [33,34]. Vegetation type samples for training and validation were plotted by visual delineation combined with field investigation. To ensure that each sample region corresponded to the indicated type, delineation was carried out within larger pure forest patches. To make the field survey efficient and accurate, we developed a Field Sampling Surveying and Management System based on the Global Navigation Satellite System (GNSS), which was used to navigate to and locate each sample quickly and correctly. Field surveying and identification of vegetation types were completed with the assistance of the Lianyungang City Forestry Technology Guidance Station, which has practical requirements for vegetation mapping by remote sensing and whose technicians are familiar with the region's vegetation composition.
To evaluate the quality of the images fused by the different algorithms, image classification was carried out on the multispectral image and on each fused image in turn. For the fused images, vegetation was classified first using spectral features only and then using combined spectral and texture features, so that fusion quality could be analyzed in terms of both spectral information and spatial structure.
All vegetation type classification was carried out with the random forest algorithm based on an object-oriented technique. Segmentation was implemented using a region growing algorithm, and the segmentation parameters were determined through a series of tests: the scale was 55, the shape factor 0.1, and the color factor 0.5. A sketch of this object-oriented classification step is shown below.
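The following Python sketch illustrates the object-oriented random forest step. The region growing segmentation with scale, shape, and color parameters is an eCognition-style operation; scikit-image's SLIC is used here purely as a stand-in segmenter, and all parameter values, data, and labels are illustrative assumptions.

```python
# Sketch of object-oriented random forest classification: segment the
# fused image, compute per-object band means and standard deviations,
# and train/apply a random forest on those object features.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def object_features(image: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Per-object mean and std of each band; image is (rows, cols, bands)."""
    feats = []
    for seg_id in np.unique(segments):
        pixels = image[segments == seg_id]            # (n_pixels, bands)
        feats.append(np.hstack([pixels.mean(0), pixels.std(0)]))
    return np.array(feats)

fused = np.random.rand(256, 256, 8)                   # stand-in fused image
segments = slic(fused, n_segments=500, compactness=0.1, channel_axis=-1)
X = object_features(fused, segments)

# Labels would come from the field-verified training polygons; random
# labels here only keep the sketch runnable.
y = np.random.randint(0, 9, len(X))                   # 9 vegetation types
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
pred = clf.predict(X)
```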
The spectral features used for classification were the means and standard deviations of the object grayscale values on each fused layer. Texture features were extracted using the gray-level co-occurrence matrix (GLCM) computed along the 45-degree direction, as sketched below.
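A minimal sketch of the 45-degree GLCM texture extraction follows. The paper does not list the exact texture statistics used, so contrast, homogeneity, and correlation are shown as typical choices, and the quantization to 32 gray levels is likewise an assumption (scikit-image ≥ 0.19 is assumed for the graycomatrix/graycoprops names).

```python
# Sketch of GLCM texture features along the 45-degree direction for one
# object's grayscale patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features_45(gray_object: np.ndarray, levels: int = 32) -> np.ndarray:
    """Contrast, homogeneity and correlation of a 45-degree GLCM."""
    q = (gray_object / gray_object.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "correlation")])

patch = np.random.rand(32, 32)        # stand-in object patch
print(glcm_features_45(patch))
```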

2.3.3. Fusion Image Quality Evaluation

First, fused image quality was evaluated by visual comparison and quantitative index analysis. Visual comparison can intuitively assess fusion quality in terms of color contrast, sharpness, and texture. The quantitative indices, which measure how much information is preserved and how closely the fused image relates to the multispectral and panchromatic images, comprise the mean gray value, standard deviation, information entropy, and correlation coefficient [18]. The calculation formula for each index is shown in Table 3. In the table, M and N are the numbers of rows and columns of the image; L is the number of gray levels; z(xi, yj) is the gray value at a point of the image; Pi is the ratio of the number of pixels with gray value i to the total number of pixels; and A(xi, yj) and F(xi, yj) are the values of the panchromatic and fused images, with mean values Ā and F̄, respectively.
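The four indices of Table 3 are straightforward to compute; a minimal sketch follows, assuming per-band arrays and base-2 entropy (the logarithm base is not specified in the source).

```python
# Sketch of the Table 3 quantitative indices for one band.
import numpy as np

def mean_and_std(band: np.ndarray):
    """Mean gray value and standard deviation."""
    return band.mean(), band.std()

def information_entropy(band: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy of the gray-level histogram."""
    hist, _ = np.histogram(band, bins=levels)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def correlation_coefficient(F: np.ndarray, A: np.ndarray) -> float:
    """Correlation between a fused band F and the reference band A."""
    return float(np.corrcoef(F.ravel(), A.ravel())[0, 1])
```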
Second, vegetation classification accuracy was used to indicate the quality of each fused image. Validation samples were used to evaluate the classification results, and confusion matrices were produced to quantify accuracy.
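The accuracy measures reported later (overall accuracy, Kappa, producer's and user's accuracy) all derive from the confusion matrix; a minimal sketch with stand-in labels follows.

```python
# Sketch of the accuracy assessment from validation samples: confusion
# matrix, overall accuracy, Kappa, and per-class producer's/user's accuracy.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

y_true = np.array([0, 0, 1, 2, 2, 1, 0, 2])   # stand-in validation labels
y_pred = np.array([0, 1, 1, 2, 2, 1, 0, 0])

cm = confusion_matrix(y_true, y_pred)          # rows: true, cols: predicted
oa = accuracy_score(y_true, y_pred)            # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)
producers = np.diag(cm) / cm.sum(axis=1)       # producer's accuracy (recall)
users = np.diag(cm) / cm.sum(axis=0)           # user's accuracy (precision)
```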

3. Results

3.1. Visual Evaluation

All four fusion procedures produced color images with higher spatial resolution; however, obvious differences in spectral and structural quality emerged in the results. A visual comparison of spectral information preservation, shown in Figure 2, indicates that the image fused by the Wavelet transform had almost the same color as the original multispectral image, whereas the GS transform and Modified IHS produced slightly inferior images and the Ehlers transform yielded the largest deviation.
As for spatial structure, although the PAN image was fused with the multispectral image directly and no information was lost, the algorithms differed in the clarity of spatial detail in the fused images. Images fused by the Wavelet and GS transforms had the clearest spatial details, followed by the Modified IHS, while the image fused by the Ehlers method showed the lowest spatial clarity.
The visual comparison reveals that even though no information was lost from the high spatial resolution PAN image during fusion, distortion of the multispectral information could reduce the clarity of the image's spatial detail and introduce texture blurring, which in turn influences image classification.

3.2. Quantitative Index Analysis

Table 4 displays the calculated results for the quantitative indices. The mean and standard deviation of the gray value reflect the brightness and clarity of the fused image, the information entropy measures the amount of information, and the correlation coefficient reflects spectral fidelity. According to Table 4, the gray value mean and standard deviation of the GS transform fusion image were closest to those of the original multispectral image, followed by the Wavelet transform image. The correlation coefficient indicated that the GS transform effectively preserved the spectral information of the original image while improving resolution. In contrast, the mean and standard deviation of the image obtained by the Ehlers transformation deviated most from the original image; in particular, more spectral information was lost, resulting in serious color distortion after fusion. The information entropy and correlation coefficient of the Wavelet-transformed image were the highest, indicating that the Wavelet transformation provided better spectral fidelity. Overall, the GS and Wavelet transforms demonstrated the best spectral fidelity.

3.3. Fusion Quality Analysis by Image Classification Accuracy

3.3.1. Quality Analysis According to the Overall Accuracy of Image Classification

Table 5 and Table 6 present the evaluation results for vegetation classification accuracy. In Table 5, (1) indicates classification with spectral features only, while (2) denotes classification combining spectral and texture features. As shown in Table 5, for both feature sets, the classification accuracy of all four fused images was significantly higher than that of the original multispectral image, indicating that image fusion effectively combined spectral and spatial structure information to achieve higher classification accuracy for vegetation types. However, a further comparison of vegetation classification accuracy among the four fused images revealed obvious differences caused by the fusion effect on both spectral and spatial information.
When considering only spectral information, the image fused using the Modified IHS demonstrated the highest accuracy for vegetation type classification, followed by the Wavelet transform and Ehlers; the GS transform yielded the poorest result. Thus, while the GS transform showed better spectral fidelity, the Modified IHS and Ehlers methods proved more suitable for vegetation classification after selection of the best bands.
As shown in Table 5, adding texture features to the classification procedure improved classification accuracy. This outcome is in line with previous studies showing that the texture features of trees are particularly prominent in WorldView-2 images, so adding texture features can be an effective way to improve classification accuracy [35,36]. When spectral and texture features were combined, the Wavelet transform fusion image had the best overall classification accuracy, at 78.6%, followed by the Modified IHS fusion image at 77.6%.

3.3.2. Quality Analysis of Vegetation Classification Accuracy from Spectral Characteristics

As can be observed in Table 6, when only the spectral features of the fused images were used to classify vegetation, the Modified IHS fused image produced the highest accuracy for P. stenoptera, C. mollissima, Tea Plantations, and Q. acutissima. Although only 3 bands were included in the Modified IHS fusion, selection by the OIF allowed the fused image to retain most of the spectral information of the 8 multispectral bands, giving it a better ability to distinguish these four vegetation types and hence a better classification result. In comparison, the image fused by the Wavelet transform displayed a similar ability to identify each vegetation type, with slightly lower accuracy for most types, except for P. thunbergii, Cunninghamia lanceolata, P. densiflora, and Phyllostachys spectabilis, for which it produced higher accuracy. To distinguish these types precisely, a fused image must have higher spectral fidelity, larger separability, and more robustness, so this result implies that the Wavelet transform yields fused images of relatively better quality. In terms of producer's and user's accuracy, the GS and Ehlers transforms each extracted only one vegetation type with higher accuracy: Phyllostachys spectabilis and Tea Plantations, respectively. Furthermore, although the GS transform achieved high spectral fidelity, the spectral separability of vegetation types in its image was lower than in the image fused by the Wavelet transform, while in the image fused by Ehlers, both spectral fidelity and separability were relatively low. These results indicate that, to obtain a high-quality fused image, a fusion algorithm must preserve spectral information well while enhancing the image in ways oriented toward the classification aims.

3.3.3. Quality Analysis with Texture Features Incorporated for Classification

Table 5 offers the evaluation results for overall accuracy and the Kappa coefficients, and the producer's and user's accuracy results can be found in Table 6.
The results in Table 5 support the following conclusions: (1) Adding texture features improved classification results for both the multispectral image and the four fused images, but to different degrees. The multispectral image showed the largest improvement, 15.8% in overall accuracy and 17.3% in Kappa, followed by the Wavelet transformation, with improvements of 7.3% and 8.3%, respectively. This outcome illustrates the importance of texture features in vegetation classification using remote sensing images. (2) Overall, where classification accuracy using spectral features alone was lower, adding texture features led to greater improvement, as in the multispectral image and the image fused by the Ehlers transformation. Meanwhile, for the image fused by the Wavelet transform, which already had higher classification accuracy with spectral features alone, incorporating texture features further improved vegetation classification. The Wavelet transform, which preserved both spectral and spatial information best among the four fused images, therefore achieved the best classification results when spectral and spatial features were combined.
Analysis of the texture effect on individual vegetation types by producer's and user's accuracy, as shown in Table 6, supports the following conclusions: (1) Adding texture information did not improve all vegetation classification results, and the texture effect on the fused images involved some degree of uncertainty, especially for user's accuracy. (2) Of the four methods, only the image fused by the Wavelet transformation showed good robustness when texture was added: all of its accuracy results improved except the user's accuracy for P. taeda. (3) For the images fused by the three other algorithms, the influence of texture on vegetation classification was more uncertain: when texture features were added, classification accuracy decreased for some types while increasing to different degrees for others. For the image fused by the Ehlers method, for example, user's accuracy decreased by up to 11.2% for some types while increasing by up to 38.9% for others.
The above analysis shows that adding texture features to the image fused by the Wavelet transform achieved the best outcome in terms of overall accuracy and the robustness of individual vegetation classification results.

4. Discussion

This experiment compared four methods for fusing a WorldView-2 image. Visual comparison and quantitative index evaluation revealed that the GS and Wavelet transforms better retained the spectral information of the image. Nevertheless, in terms of vegetation classification, even though the Modified IHS and Ehlers methods retained only 3 bands of the multispectral image, they yielded superior results compared to the GS transform.
When only spectral features were used for vegetation classification, the GS transform's result after fusion was inferior to those of the Modified IHS and Ehlers. Two possible reasons may explain this. (1) The 8-band information retained by the GS transform may have reduced classifier performance through information redundancy; because the Modified IHS and Ehlers retained only 3 bands through band selection, the dimensionality reduction decreased this redundancy. (2) Although the Modified IHS and Ehlers retained only 3 bands of information, the optimal band combination retained the spectral information most suitable for vegetation classification.
Even though adding texture features significantly improved the overall classification accuracy of the four fusion methods, classification accuracy for some vegetation types decreased. In the Ehlers fusion image, for example, user's accuracy fell by up to 11.2% after texture features were added. There are two possible reasons for this. First, texture information for trees is very prominent in WorldView-2 images, but some critical bands were screened out during band selection. Second, the poorer spectral fidelity of the 3-band combination may have indirectly affected the performance of its texture features.
Conceivably, higher vegetation classification accuracy depends not only on spectral fidelity but also on the image enhancement algorithm. For example, while the GS transform's fused image had high spectral fidelity, its feature dimensionality was high and the features were highly correlated, which increased the complexity of the classifier and led to an unsatisfactory classification effect [37,38,39,40]. Meanwhile, although the Modified IHS and Ehlers optimized the spectral bands according to vegetation classification characteristics, their lower spectral fidelity may have affected spatial structure and destabilized the image texture features. Compared to the other three methods, the Wavelet transform applied a de-correlation step before image fusion, and its fused image had better quality and achieved higher vegetation classification accuracy. Obtaining a suitable fused image therefore requires attention to both spectral fidelity and effective feature optimization according to the image and vegetation growth characteristics.
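To make the wavelet fusion scheme concrete, the following is a minimal sketch of detail-substitution pan-sharpening for one band: the approximation coefficients of the (already upsampled) multispectral band are kept and the detail coefficients are taken from the panchromatic image. The paper's exact fusion strategy, wavelet, and decomposition depth are not specified, so 'db2' and two levels are assumptions.

```python
# Sketch of wavelet-based pan-sharpening for a single band under a
# detail-substitution scheme: MS approximation + PAN high-frequency detail.
import numpy as np
import pywt

def wavelet_fuse(ms_up: np.ndarray, pan: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    ms_coeffs = pywt.wavedec2(ms_up, wavelet, level=level)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=level)
    # approximation from the MS band, detail coefficients from the PAN image
    fused = [ms_coeffs[0]] + list(pan_coeffs[1:])
    return pywt.waverec2(fused, wavelet)

pan = np.random.rand(256, 256)        # stand-in PAN image
ms_band = np.random.rand(256, 256)    # one MS band already on the PAN grid
fused_band = wavelet_fuse(ms_band, pan)
```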

5. Conclusions

Based on WorldView-2 imagery, this investigation explored four image fusion methods: the GS transform, Ehlers, Modified IHS, and Wavelet transform. Our analysis of visual and quantitative indicators, together with the extraction accuracy of mountain vegetation information, supports the following conclusions:
(1)
Judging by visual inspection and the quantitative indicators, the GS and Wavelet transforms provided the best spectral fidelity and image fusion effect, followed by the Modified IHS, while the Ehlers method had the worst spectral fidelity.
(2)
In terms of the classification accuracy of vegetation types, the images fused by the GS and Ehlers transforms had relatively low accuracy when only spectral features were used, and adding texture features did not make the effect more robust. The image produced by the Modified IHS technique had lower spectral fidelity but higher vegetation classification accuracy than the image fused by the GS transform. The image fused by the Wavelet transform yielded better classification results than the other techniques, both with spectral features alone and with combined spectral and texture features; this method was therefore the most suitable for vegetation classification in the study area.
(3)
The classification accuracy of the fused image showed significant improvement compared to the original multispectral image.
(4)
In general, establishing definite aims before implementing enhancements in the image fusion process improved the classification accuracy of the fused image.

Author Contributions

Conceptualization, X.F. and Y.Z.; methodology, D.C.; software, Z.W.; validation, D.C., X.S. and T.H.; formal analysis, D.C. and X.F.; investigation, D.C.; resources, Y.G.; data curation, D.C.; writing—original draft preparation, D.C.; writing—review and editing, X.F. and Y.Z.; visualization, Z.W.; supervision, X.F.; project administration, X.F.; funding acquisition, X.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Key Laboratory of Coastal Salt Marsh Ecology and Resources, Ministry of Natural Resources (KLCSMERMNR2021102), the National Natural Science Foundation of China (NSFC No. 31270745), the Key subject of “Surveying and Mapping Science and Technology” of Jiangsu Ocean University, (KSJOU), the Postgraduate Research & Practice Innovation Program of Jiangsu Ocean University (KYCX2021-024), and the Strategic Priority Research Program of Chinese Academy of Sciences (XDB 41000000).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

http://www.mapcore.com.cn/index.php?id=106 (accessed on 11 September 2022); https://discover.maxar.com/ (accessed on 11 September 2022).

Acknowledgments

WorldView-2 satellite imagery data are highly appreciated.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, L.-P.; Chen, F.-H.; Xie, Y.-W. Advances of Multisource Remote Sensing Image Fusion Techniques in China. Remote Sens. Technol. Appl. 2011, 22, 116–122.
  2. Li, F. Assessment of Multisource Remote Sensing Image Fusion by Several Dissimilarity Methods. J. Phys. Conf. Ser. 2021, 2031, 012016.
  3. Hue, S.W.; Korom, A.; Seng, Y.W.; Sihapanya, V.; Phimmavong, S.; Phua, M.H. Land Use and Land Cover Change in Vientiane Area, Lao PDR Using Object-Oriented Classification on Multi-Temporal Landsat Data. Adv. Sci. Lett. 2017, 23, 11340–11344.
  4. Moonon, A.U.; Hu, J.; Li, S. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation. Sens. Imaging 2015, 16, 1–18.
  5. Zhou, K.; Ming, D.; Lv, X.; Fang, J.; Wang, M. CNN-Based Land Cover Classification Combining Stratified Segmentation and Fusion of Point Cloud and Very High-Spatial Resolution Remote Sensing Image Data. Remote Sens. 2019, 11, 2065.
  6. Usharani, A.; Bhavana, D. Deep Convolution Neural Network Based Approach for Multispectral Images. Int. J. Syst. Assur. Eng. Manag. 2021, 1–10.
  7. Liu, Y.; Chang, M.; Xu, J. High-Resolution Remote Sensing Image Information Extraction and Target Recognition Based on Multiple Information Fusion. IEEE Access 2020, 8, 121486–121500.
  8. Zitouni, A.; Benkouider, F.; Chouireb, F.; Belkheiri, M. Classification of Textured Images Based on New Information Fusion Methods. IET Image Process. 2019, 13, 1540–1549.
  9. Gross, H.N.; Schott, J.R. Application of Spectral Mixture Analysis and Image Fusion Techniques for Image Sharpening. Remote Sens. Environ. 1998, 63, 85–94.
  10. Švab, A.; Oštir, K. High-Resolution Image Fusion: Methods to Preserve Spectral and Spatial Resolution. Photogramm. Eng. Remote Sens. 2006, 72, 565–572.
  11. Li, P.; Wang, Z. Investigation of Image Fusion between High-Resolution Image and Multi-Spectral Image. Geo-Spat. Inf. Sci. 2003, 6, 31–34.
  12. Zhou, W.; Wang, F.; Wang, X.; Tang, F.; Li, J. Evaluation of Multi-Source High-Resolution Remote Sensing Image Fusion in Aquaculture Areas. Appl. Sci. 2022, 12, 1170.
  13. Sun, Y.; Liu, J.; Yang, J.; Xiao, Z.; Wu, Z. A Deep Image Prior-Based Interpretable Network for Hyperspectral Image Fusion. Remote Sens. Lett. 2021, 12, 1250–1259.
  14. Javan, F.D.; Samadzadegan, F.; Mehravar, S.; Toosi, A.; Khatami, R.; Stein, A. A Review of Image Fusion Techniques for Pan-Sharpening of High-Resolution Satellite Imagery. ISPRS J. Photogramm. Remote Sens. 2021, 171, 101–117.
  15. Wang, M.; Fei, X.-Y.; Xie, H.-Q.; Liu, F.; Zhang, H. Study of Fusion Algorithms with High Resolution Remote Sensing Image for Urban Green Space Information Extraction. Bull. Surv. Mapp. 2017, 36–40.
  16. Li, C.; Lin, H.; Sun, H. WorldView-2 Images Based Urban Green Space Information Extraction. J. Northwest. For. Univ. 2014, 29, 155–160.
  17. Lu, C. Research on Object-Oriented Information Extraction Technology Based on WorldView-2 Image. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2012.
  18. Li, J.; Zhou, Y.; Li, D.-R. Local Histogram Matching Filter for Remote Sensing Image Data Fusion. Acta Geod. Cartogr. Sin. 1999, 12, 226–231.
  19. Yu, J.; Zhou, Y.; Wang, S.X.; Wang, L.C. Evaluation and Analysis on Image Fusion of ETM. Remote Sens. Technol. Appl. 2007, 22, 733–738.
  20. Liu, K.; Fu, J.; Li, F. Evaluation Study of Four Fusion Methods of GF-1 Pan and Multi-Spectral Images. Remote Sens. Technol. Appl. 2015, 30, 980–986.
  21. Liang, L.; Huang, W.; Zhang, R.; Lin, G.; Peng, J.; Liang, C. Sentinel-2 Satellite Image Fusion Method and Quality Evaluation Analysis. Remote Sens. Technol. Appl. 2019, 34, 612–621.
  22. Gong, J.; Yang, X.; Han, D.; Zhang, D.; Jin, H.; Gao, Z. Research and Evaluation of Beijing-1 Image Fusion Based on ImageSharp Algorithm. Second Int. Conf. Space Inf. Technol. 2007, 6795, 555–561.
  23. Mani, V.R.S. A Survey of Multi Sensor Satellite Image Fusion Techniques. Int. J. Sens. Sens. Netw. 2020, 8, 1–10.
  24. Ghimire, P.; Deng, L.; Nie, J. Effect of Image Fusion on Vegetation Index Quality—A Comparative Study from Gaofen-1, Gaofen-2, Gaofen-4, Landsat-8 OLI and MODIS Imagery. Remote Sens. 2020, 12, 1550.
  25. Xiao, X.; Xie, J.; Niu, J.; Cao, W. A Novel Image Fusion Method for Water Body Extraction Based on Optimal Band Combination. Traitement Signal 2020, 37, 195–207.
  26. Ren, J.; Yang, W.; Yang, X.; Deng, X.; Zhao, H.; Wang, F.; Wang, L. Optimization of Fusion Method for GF-2 Satellite Remote Sensing Images Based on the Classification Effect. Earth Sci. Res. J. 2019, 23, 163–169.
  27. Fang, W.; Yang, W.; Wang, J.; Chen, A. Chinese High-Resolution Satellite Pixel Level Image Fusion and Its Quality Evaluation. Surv. Mapp. Sci. 2021, 46, 73–80.
  28. Cheng, S.; Yang, Y.; Li, Y. Study on Classification Based on Image Fusion with Curvelet Transform. Remote Sens. GIS Data Processing Appl. Innov. Multispectral Technol. Appl. 2007, 6790, 134–139.
  29. Du, H.; Cao, Y.; Zhang, F.; Lv, J.; Deng, S.; Lu, Y.; He, S.; Zhang, Y.; Yu, Q. A Classification Method of Building Structures Based on Multi-Feature Fusion of UAV Remote Sensing Images. Earthq. Res. Adv. 2021, 1, 100069.
  30. Yuan, L.; Zhu, G. Research on Remote Sensing Image Classification Based on Feature Level Fusion. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 3.
  31. Gao, H.; Chen, W. Image Classification Based on the Fusion of Complementary Features. J. Beijing Inst. Technol. 2017, 26, 197–205.
  32. Yang, H.; Xi, X.; Wang, C.; Xiao, Y. Study on Fusion Methods and Quality Assessment of Pléiades Data. Remote Sens. Technol. Appl. 2014, 29, 476–481.
  33. Liu, F.; Huang, Z. Vegetation Regionalization in Jiangsu Province. Acta Phytoecol. Geobot. Sin. 1987, 11, 226–233.
  34. Meng, Y. Study on the Flora and Main Plant Communities of Yuntai Mountain in Jiangsu Province. Ph.D. Thesis, Nanjing Agricultural University, Nanjing, China, 2016.
  35. Liu, H.; An, H.; Wang, B.; Zhang, Q. Tree Species Classification Using WorldView-2 Images Based on Recursive Texture Feature Elimination. J. Beijing For. Univ. 2015, 37, 53–59.
  36. Jiang, Y.; Zhang, L.; Yan, M.; Qi, J.; Fu, T.; Fan, S.; Chen, B. High-Resolution Mangrove Forests Classification with Machine Learning Using WorldView and UAV Hyperspectral Data. Remote Sens. 2021, 13, 1529.
  37. Shi, S.; Bi, S.; Gong, W.; Chen, B.; Chen, B.; Tang, X.; Qu, F.; Song, S. Land Cover Classification with Multispectral LiDAR Based on Multi-Scale Spatial and Spectral Feature Selection. Remote Sens. 2021, 13, 4118.
  38. Li, Z.; Chen, Z.; Cheng, Q.; Duan, F.; Sui, R.; Huang, X.; Xu, H. UAV-Based Hyperspectral and Ensemble Machine Learning for Predicting Yield in Winter Wheat. Agronomy 2022, 12, 202.
  39. Yang, S.; Gu, L.; Li, X.; Jiang, T.; Ren, R. Crop Classification Method Based on Optimal Feature Selection and Hybrid CNN-RF Networks for Multi-Temporal Remote Sensing Imagery. Remote Sens. 2020, 12, 3119.
  40. Demarchi, L.; Kania, A.; Ciężkowski, W.; Piórkowski, H.; Oświecimska-Piasko, Z.; Chormański, J. Recursive Feature Elimination and Random Forest Classification of Natura 2000 Grasslands in Lowland River Valleys of Poland Based on Airborne Hyperspectral and LiDAR Data Fusion. Remote Sens. 2020, 12, 1842.
Figure 1. Study area.
Figure 2. Image fusion results.
Table 1. Principles of the four image fusion methods.

| Method | Principle | Features |
|---|---|---|
| GS (Gram-Schmidt) transform | The multispectral image is used to simulate a panchromatic image; a multi-dimensional linear orthogonal transformation is applied to the simulated panchromatic and multispectral images; the first component of the orthogonal transformation is replaced by the high spatial resolution panchromatic image; finally, the inverse GS transform is carried out. | The number of fusion bands is not limited and the spectral information of the image is well maintained, but the method takes somewhat longer. |
| Wavelet transform | The panchromatic image and the resampled multispectral image are decomposed by wavelets into a low-frequency part and a series of high-frequency parts. These parts are fused according to their respective fusion strategies, and the inverse wavelet transform then produces the fused image. | The spectral information is well preserved, but the spatial resolution is not improved significantly. |
| Ehlers | The panchromatic image is first sharpened by FFT filtering; the spectrum is then transformed by IHS, the I component is replaced with the sharpened panchromatic component, and the result is finally inverted back to RGB. | The spectrum is well maintained, but the method takes a long time and only 3 bands can be fused at a time. |
| Modified IHS | The image is converted from RGB color space to IHS color space; the I component is replaced by the high-resolution image while the H and S components are kept; the fusion is completed through the inverse IHS transform. | Only 3 bands can be fused at a time, and the fusion effect is unstable. |
Table 2. Mountainous vegetation classification system and sampling results.

| Vegetation Category | Training/Validation Samples | Image Features |
|---|---|---|
| Pinus thunbergii | 165/193 | After being artificially planted, Pinus thunbergii is distributed in semi-natural form in mountain forest where the slope is relatively steep and the soil is poor. The brightness is darker, the texture is rougher than that of other pines, and the shadows are more pronounced. |
| Cunninghamia lanceolata | 93/144 | Similar to the pine texture features, but denser and slightly taller than P. thunbergii in the study area. |
| Phyllostachys spectabilis | 40/51 | The color is darker, with smoother and finer texture features; the features are obvious. |
| Castanea mollissima | 200/242 | The leaves are wide, mostly pruned, and randomly arranged, with bright colors and obvious spectral characteristics. |
| Quercus acutissima | 65/96 | Widely distributed in the mountain forest with strong adaptability; the dominant tree species in the study area. The leaves are smaller than those of C. mollissima, the trees are densely arranged, and the branches grow naturally. The image color is darker, with shadows of tall trees, but the tree crowns are not obvious, showing a "broken" appearance. |
| Pinus densiflora | 24/33 | Similar in characteristics to P. thunbergii, with a small canopy and small shadows; few in number in the study area and more randomly distributed. |
| Pinus taeda | 49/53 | After being artificially planted, Pinus taeda is distributed in the mountain forest in a semi-natural form. Similar in characteristics to P. thunbergii, but with a significantly larger and taller canopy. |
| Pterocarya stenoptera | 13/18 | The spectral characteristics are similar to those of C. mollissima, with bright colors, in stripes along the river. |
| Tea Plantations | 232/265 | Distributed in strip-shaped terraces, with regular, long and narrow shapes and obvious features. |
Table 3. Calculation formulas of the quantitative indices.

| Quantitative Index | Formula |
|---|---|
| Mean of gray value | $\bar{z} = \dfrac{\sum_{i=1}^{M}\sum_{j=1}^{N} z(x_i, y_j)}{M \times N}$ |
| Standard deviation | $\sigma = \sqrt{\dfrac{\sum_{i=1}^{M}\sum_{j=1}^{N} \left( z(x_i, y_j) - \bar{z} \right)^2}{M \times N}}$ |
| Information entropy | $E = \sum_{i=0}^{L-1} P_i \log \dfrac{1}{P_i}$ |
| Correlation coefficient | $\rho = \dfrac{\sum_{i}\sum_{j} \left[ F(x_i, y_j) - \bar{F} \right] \left[ A(x_i, y_j) - \bar{A} \right]}{\sqrt{\sum_{i}\sum_{j} \left[ F(x_i, y_j) - \bar{F} \right]^2 \times \sum_{i}\sum_{j} \left[ A(x_i, y_j) - \bar{A} \right]^2}}$ |
Table 4. Calculated mean values of the quantitative indicators.

| Image | Average Value | Standard Deviation | Information Entropy | Correlation Coefficient |
|---|---|---|---|---|
| MUL | 355.128 | 79.819 | 2.9477 | - |
| PAN | 251.356 | 52.395 | 2.9477 | - |
| Ehlers | 534.649 | 107.443 | 3.1219 | 0.53 |
| GS transform | 355.066 | 75.523 | | 0.7824 |
| Modified IHS | 249.970 | 56.750 | 3.2776 | 0.5581 |
| Wavelet | 250.832 | 60.249 | 3.5216 | 0.8426 |
Table 5. Image classification accuracy.

| | MUL (1) | MUL (2) | Ehlers (1) | Ehlers (2) | GS (1) | GS (2) | Modified IHS (1) | Modified IHS (2) | Wavelet (1) | Wavelet (2) |
|---|---|---|---|---|---|---|---|---|---|---|
| OA 1 | 0.508 | 0.666 | 0.674 | 0.763 | 0.650 | 0.712 | 0.726 | 0.776 | 0.710 | 0.783 |
| Kappa | 0.444 | 0.627 | 0.637 | 0.735 | 0.610 | 0.678 | 0.695 | 0.750 | 0.677 | 0.759 |

1 OA represents the overall accuracy; (1) denotes classification with spectral features only and (2) with combined spectral and texture features.
Table 6. Vegetation classification accuracy.

| Method | Metric | P. stenoptera | C. mollissima | Tea Plantations | P. taeda | P. thunbergii | P. densiflora | Cunninghamia lanceolata | Q. acutissima | Phyllostachys spectabilis |
|---|---|---|---|---|---|---|---|---|---|---|
| Wavelet | PA * | 0.595 | 0.532 | 0.660 | 0.722 | 0.608 | 0.853 | 0.609 | 0.560 | 0.788 |
| | UA * | 0.706 | 0.451 | 0.723 | 0.824 | 0.701 | 0.843 | 0.725 | 0.422 | 0.663 |
| Wavelet + texture | PA | 0.673 | 0.628 | 0.675 | 0.725 | 0.641 | – | 0.677 | 0.813 | 0.956 |
| | UA | 0.760 | 0.513 | 0.792 | 0.781 | 0.756 | 0.920 | 0.723 | 0.638 | 0.805 |
| GS transform | PA | 0.853 | 0.535 | 0.692 | 0.555 | 0.477 | 0.807 | 0.466 | 0.368 | 0.751 |
| | UA | 0.417 | 0.378 | 0.604 | 0.715 | 0.654 | 1 | 0.531 | 0.371 | 0.846 |
| GS + texture | PA | 0.502 | 0.602 | 0.701 | 0.664 | 0.595 | 0.807 | 0.461 | 0.508 | 0.611 |
| | UA | 1 | 0.398 | 0.944 | 0.757 | 0.655 | 0.536 | 0.437 | 0.491 | 0.804 |
| Modified IHS | PA | 0.788 | 0.595 | 0.781 | 0.783 | 0.479 | 0.832 | 0.566 | 0.540 | 0.722 |
| | UA | 0.786 | 0.454 | 0.825 | 0.632 | 0.632 | 0.810 | 0.671 | 0.628 | 0.635 |
| Modified IHS + texture | PA | 0.512 | 0.604 | 0.820 | 0.703 | 0.549 | 0.999 | 0.677 | 0.686 | 0.699 |
| | UA | 0.627 | 0.554 | 0.743 | 0.740 | 0.763 | 0.742 | 0.750 | 0.609 | 0.878 |
| Ehlers | PA | 0.522 | 0.577 | 0.739 | 0.722 | 0.589 | 0.808 | 0.398 | 0.470 | 0.431 |
| | UA | 0.438 | 0.373 | 0.865 | 0.657 | 0.577 | 0.679 | 0.624 | 0.414 | 0.611 |
| Ehlers + texture | PA | 0.623 | 0.586 | 0.741 | 0.759 | 0.656 | 0.920 | 0.663 | 0.625 | 0.631 |
| | UA | 0.422 | 0.504 | 0.753 | 0.587 | 0.749 | 0.788 | 0.772 | 0.803 | 0.733 |

* PA represents the producer's accuracy, and UA represents the user's accuracy.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

