Article

A Study on the Evolution of Forest Landscape Patterns in the Fuxin Region of China Combining SC-UNet and Spatial Pattern Perspectives

1 School of Geomatics, Liaoning Technical University, Fuxin 123000, China
2 Dalian Rongkepower Co., Ltd., Dalian 116025, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(16), 7067; https://doi.org/10.3390/su16167067
Submission received: 19 June 2024 / Revised: 1 August 2024 / Accepted: 12 August 2024 / Published: 17 August 2024

Abstract

During the vegetation growing season, forest is more distinguishable from other background features in remote sensing images, and forest areas show prominent, well-defined characteristics. However, methods based on deep convolutional neural networks tend to overlearn forest features in the forest extraction task, leaving considerable room for improvement in extraction speed. In this paper, a convolutional neural network model for forest extraction from remote sensing images is proposed by incorporating spatial and channel reconstruction convolution into the U-Net model. The network achieved an extraction accuracy of 81.781% in intersection over union (IoU), 91.317% in precision, 92.177% in recall, and 91.745% in F1-score, with a maximum improvement of 0.442% in precision compared with the classical U-Net network. In addition, the model's forest extraction speed was improved by about 6.14 times. On this basis, we constructed a forest land dataset with high intra-class diversity and fine-grained scale by selecting Sentinel-2 images of Northeast China. Using the Fuxin region of Liaoning province, China, as the study area, we obtained the spatial and temporal evolution of forest cover from 2019 to 2023. We also analyzed the evolution of the forest landscape pattern in the Fuxin region over the same period based on the morphological spatial pattern analysis (MSPA) method. The results show that the core area of the forest landscape in the Fuxin region has increased, while the non-core area has decreased. The SC-UNet method proposed in this paper can realize high-precision and rapid extraction of forest over a wide area, and can also provide a basis for evaluating the effectiveness of ecosystem restoration projects.

1. Introduction

Forests are important for maintaining biodiversity, safeguarding ecological security, and guaranteeing the supply of ecosystem services [1,2,3,4]. The forest landscape pattern is the spatial distribution and combination of forest spatial units of different sizes, shapes, and types. The study of forest landscape patterns is an important field of forest ecology: it focuses not only on forest structure but also on the dynamic changes of forest ecological processes, and it is of great significance for developing forest management and conservation strategies and for sustainable development [5,6]. Because forests are widely distributed in space and time and highly heterogeneous, determining forest coverage through empirical judgment or visual interpretation of remote sensing data can yield an accurate spatial and temporal distribution, but it is inefficient and cannot meet the needs of forest area survey tasks at larger regional scales [7].
As remote sensing technology continues to evolve, research on forest extraction and monitoring has been deepening. Currently, there are two main types of forest extraction methods based on remote sensing images: (1) spectral index extraction methods; and (2) deep learning extraction methods. The spectral index methods realize forest extraction based on the spectral information in the imagery; although they can effectively utilize the spectral information to make the extraction more accurate, such forest indices are complex to construct [8,9,10,11]. Machine learning methods represented by supervised classification have realized rapid, high-accuracy forest extraction in specific sample areas, but their universality is not strong enough to meet the needs of large-scale forest extraction tasks [12,13]. In recent years, deep learning, as an important branch of machine learning, has been increasingly applied in the field of computer vision [14,15,16,17,18,19].
The fully convolutional network (FCN), a convolutional neural network composed entirely of convolution operations, can achieve pixel-level semantic segmentation and classification. At the same time, the high-resolution spatial information of the image is preserved, the model parameters are reduced, and the algorithm efficiency is improved [20,21,22,23,24]. Ronneberger et al. [25] proposed the U-Net network, consisting of an encoder and a decoder, which combines the high-resolution features obtained by the encoder with the up-sampled output features of the decoder through a skip connection structure to improve the model performance. Due to its high extraction accuracy and low computational complexity, U-Net has been widely used in the field of remote sensing feature interpretation [26,27,28,29,30,31].
During the vegetation growth period, forest land is more distinguishable from the other background features in remote sensing images, and the forest land features are more obvious, showing clear regional characteristics [32,33,34,35]. However, deep convolutional neural network (DCNN)-based methods tend to overlearn the forest features in the forest extraction task, leaving considerable room for improvement in extraction speed. DCNNs are also trained on large amounts of data; the long training time consumes substantial computing power and produces a large number of redundant parameters [36,37,38]. The development of spatial and channel reconstruction convolution (SCConv) makes it possible to reduce redundant computation and promote the learning of representative features [39,40,41].
The main contribution of this paper is a forest extraction model (SC-UNet) that combines SCConv with U-Net. Firstly, we dramatically accelerate the forest land extraction model by combining SCConv with the classical U-Net target extraction network, realizing rapid and accurate extraction of multi-temporal forest information. The spatio-temporal evolution of forest land in the study area during 2019–2023 was then analyzed. Finally, the evolution of the forest landscape patterns in the study area was explored based on morphological spatial pattern analysis (MSPA).
The objectives of this study were: (1) considering that forest land is easily distinguishable from other features in remote sensing images but is typically extracted slowly, to develop a forest extraction model (SC-UNet) combining SCConv and U-Net that realizes accurate forest extraction on a small sample dataset; (2) to explore the spatial and temporal evolution of forest land in the study area during 2019–2023; and (3) to evaluate the changes in the ecological landscape patterns of forest in the study area.
The rest of the paper is organized as follows. Section 2 describes the study area and data preparation. Section 3 explains the construction method for the forest extraction model (SC-UNet) combining SCConv with U-Net and the MSPA model, detailing the spatial and channel reconstruction convolution process and the theory of MSPA. Section 4 presents the experimental results. Section 5 discusses the change of the multi-year spatial and temporal evolution of forest land in the study area and the change of forest evolution in the study area in terms of ecological landscape patterns, as well as the shortcomings of this study and the potential directions of future work. Section 6 summarizes and concludes the present work.

2. Study Region and Data

2.1. Overview of the Study Region

The geographic location of the Fuxin area (center coordinates: 42°10′ N, 122°00′ E) is shown in Figure 1. The Fuxin area is located in the northwest of Liaoning province, China, in the northern part of the Horqin Sandy Land, east of the Liaohe River plain, west of the Nurulhu Mountain Range, and south of YiWulvu Mountain, and forms a transition zone between the Inner Mongolian grassland and the stony mountains of North China. The geomorphological pattern is high in the northwest and low in the southeast. The Fuxin area has many hills and mountains, accounting for 58% of the total area, with sandy land accounting for 19% and plains accounting for 23%. The climate belongs to the semi-arid continental monsoon climate of the northern temperate zone; precipitation is relatively high, although its spatial and temporal distribution is extremely uneven. The simultaneous occurrence of rain and heat provides favorable conditions for the growth of vegetation in the region. Therefore, spatial and temporal variations in the forest landscape patterns of the Fuxin area are of great significance to the carbon stock and the evolution of the ecological pattern.

2.2. Research Information

The study period was 2019–2023, and two time-phases of Sentinel-2 imagery were used as the data source to explore the spatial and temporal changes of forest and landscape patterns in the study area. Two hundred and eighteen RGB three-band images from the Sentinel-2 satellites covering northeastern China were used in the dataset production process. In the study of forest landscape pattern changes in the Fuxin region, we used 8 Sentinel-2 true-color images covering the Fuxin region in 2019 and 2023. The Sentinel-2 satellites, operated by the European Space Agency, each carry a multispectral imaging sensor and provide open, freely accessible data (https://dataspace.copernicus.eu/). The spatial resolution of the imagery is 10 m, and all the data have <10% cloud cover and were acquired during the vegetation growing season (June to August), preprocessed with radiometric, geometric, and atmospheric corrections [41,42,43,44,45].

3. Research Methodology

3.1. Spatial and Channel Reconstruction Convolution

Convolution has shown excellent performance in a variety of computer vision tasks, but remote sensing images are more complex than natural images, which results in a lot of redundant information that affects the extraction accuracy in forest extraction tasks. To reduce the effect of this redundancy on the precision of forest extraction, we introduced the spatial and channel reconstruction convolution (SCConv) module into the model [46,47]. The aim was to reduce redundant computation and facilitate the learning of representative features. The module consists of two units: a spatial reconstruction unit (SRU) and a channel reconstruction unit (CRU). The overall structure of the SCConv module is shown in Figure 2.
As can be seen in Figure 2, for the intermediate input features X in the bottleneck residual block, the spatially refined features are first obtained by the SRU, and the channel-refined features are then obtained using the CRU. In the separation stage, the SRU performs spatial normalization on the input features X by subtracting the mean of the features and dividing by their standard deviation. During normalization, the variance of the spatial pixels is measured for each batch and channel [48]. The normalized correlation weights are given in Equation (1) [48]:
$$W_i = \frac{\gamma_i}{\sum_{j=1}^{c} \gamma_j}, \quad i, j = 1, 2, \ldots, c$$
where $W_i$ is the pixel weight, and $\gamma_i$ and $\gamma_j$ are the trainable parameters in the SRU used to measure the variance of the spatial pixels in each batch and channel, respectively. This module not only separates the informative features from the less informative ones, but also weights and reconstructs them to enhance the representative features and suppress the redundant features in the spatial dimension.
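The weighting in Equation (1) can be sketched in a few lines. The following NumPy illustration is a simplified assumption of how the per-channel weights might be computed; the function name and example γ values are hypothetical, not taken from the SCConv implementation:

```python
import numpy as np

def sru_weights(gamma):
    """Normalize the trainable scale parameters gamma (one per channel)
    into weights W_i = gamma_i / sum_j gamma_j, as in Equation (1)."""
    gamma = np.asarray(gamma, dtype=float)
    return gamma / gamma.sum()

# Channels with larger gamma (higher measured spatial variance) receive
# proportionally larger weights, highlighting the informative features.
w = sru_weights([0.1, 0.3, 0.6])
```

In the SRU these weights are then used to gate the normalized feature maps, enhancing informative channels and suppressing redundant ones.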
The CRU first divides the channels of the spatially weighted reconstructed features ($X$) acquired by the SRU into two parts and then utilizes 1 × 1 convolution to compress the channels of the feature map, in order to improve its computational efficiency. In addition, the feature channels of the CRU can be effectively controlled by introducing the compression ratio $r$ to balance the computational cost. After the segmentation and compression operations, the spatially weighted reconstructed features ($X$) can be divided into an upper part $X_{up}$ and a lower part $X_{low}$. In the transformation stage, $X_{up}$ is input into the upper transformation stage as a "rich feature extractor". Efficient and low-computational-cost group-wise convolution (GWC) and point-wise convolution (PWC) are used to extract the features, as shown in Equation (2):
$$Y_{up} = M_{G} X_{up} + M_{P1} X_{up}$$
where $M_{G}$ and $M_{P1}$ are the learnable weights for GWC and PWC, respectively, and $X_{up}$ and $Y_{up}$ are the feature maps of the upper input and output, respectively. $X_{low}$ is fed into the lower transformation stage, as shown in Equation (3):
$$Y_{low} = M_{P2} X_{low} \cup X_{low}$$
where $M_{P2}$ is the learnable weight of the PWC, $\cup$ denotes the Concat module, and $X_{low}$ and $Y_{low}$ are the lower input and output feature maps, respectively. In the fusion stage, the output features $Y_{up}$ and $Y_{low}$ of the upper and lower transformation stages are adaptively merged using a simplified SKNet method [49]. The fusion of the feature maps is achieved through global average pooling and channel attention; in the SCConv module, all the parameters are concentrated in the transformation stage, which significantly reduces the number of parameters compared to the traditional convolution operation.
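The split–transform–fuse flow of the CRU described above can be sketched as follows. This is a heavily simplified NumPy illustration under stated assumptions: the feature-map size, the split ratio, and the replacement of the group-wise convolution by a plain 1 × 1 channel mix are all ours, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 4, 8, 8          # illustrative feature-map size (assumption)
ALPHA = 0.5                # split ratio between the upper and lower parts
C_UP = int(C * ALPHA)      # channels routed to the "rich feature" path
C_LOW = C - C_UP
W_G  = 0.1 * rng.standard_normal((C, C_UP))           # stands in for GWC weights
W_P1 = 0.1 * rng.standard_normal((C, C_UP))           # PWC weights, upper path
W_P2 = 0.1 * rng.standard_normal((C - C_LOW, C_LOW))  # PWC weights, lower path

def conv1x1(x, w):
    """A 1x1 convolution is a pure channel mix: x (C_in,H,W), w (C_out,C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def cru(x):
    x_up, x_low = x[:C_UP], x[C_UP:]                  # split stage
    # upper path, Equation (2): GWC + PWC (both reduced to 1x1 mixes here)
    y_up = conv1x1(x_up, W_G) + conv1x1(x_up, W_P1)
    # lower path, Equation (3): PWC output concatenated with its input
    y_low = np.concatenate([conv1x1(x_low, W_P2), x_low], axis=0)
    # fusion stage: SKNet-style channel attention from global average pooling
    s = np.stack([y_up.mean(axis=(1, 2)), y_low.mean(axis=(1, 2))])  # (2, C)
    beta = np.exp(s) / np.exp(s).sum(axis=0)          # softmax over branches
    return beta[0][:, None, None] * y_up + beta[1][:, None, None] * y_low

y = cru(rng.standard_normal((C, H, W)))   # output keeps the input channel count
```

The point of the sketch is the data flow: both branches emit feature maps of the full channel count, and the pooled channel descriptors decide, per channel, how much each branch contributes to the fused output.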

3.2. SC-UNet Forest Extraction Model

By using a feature extraction network and an up-sampling network, a DCNN can quickly and accurately extract forest land information from remote sensing images. The encoder part of the forest land extraction model (SC-UNet) proposed in this paper uses the VGG network model; notably, in our study, we replace the last convolution module of each layer of the VGG network with SCConv. This reduces the model parameters and avoids redundant information that would negatively affect the accuracy and speed of the model, as shown in Figure 3. The decoder part of SC-UNet retains that of the classical U-Net network [25]. Because forest land differs from the background features in remote sensing imagery during the growing season, the standard DCNN structure is affected by interference from redundant information in the feature extraction process, and it is difficult to realize high-precision extraction of forest under the conditions of few samples and low computational cost. In the proposed approach, the SCConv module is introduced into the encoder part of the model in place of the standard 3 × 3 convolution, making the model suitable for accurate extraction of forest features from remote sensing images with high foreground–background discrimination. Moreover, the SC-UNet network model, which integrates spatial and channel reconstruction convolution modules, is easier to train than U-Net and achieves a higher extraction accuracy under the conditions of few samples and low computational cost. In the decoder part of the SC-UNet model, in order to keep the same distribution of inputs at each layer during training, the model uses a batch normalization (BN) module [50,51,52]. During up-sampling (up-sampling convolution, UpConv) [53,54,55], feature maps are concatenated with the same-sized feature maps from the down-sampling path through skip connections to achieve a better reconstruction.

3.3. A Morphology-Based Approach to Analyzing Forest Spatial Patterns

Distinguished from the traditional patch-centered approach for landscape element identification, MSPA is an image analysis method that measures, identifies, and segments the spatial patterns of regional land-use raster images [56,57,58]. Figure 4 shows the specific computational steps, from the binary image input to the output of seven different morphological types.
Firstly, the core ( $X_1$ ) and islet ( $X_2$ ) classes are calculated. The connector lines (A) and boundary lines (B) are then calculated, and the loop ( $X_3$ ) and bridge ( $X_4$ ) classes are successively obtained from the connector lines. The perforation ( $X_5$ ) and edge ( $X_6$ ) classes are then obtained from the boundary lines, and finally the branch ( $X_7$ ) class is calculated. In this study, the 8-neighborhood moving window method was used to analyze the regional image, and the foreground was divided into seven non-overlapping categories of landscape elements, as shown in Table 1. In turn, a spatial pattern distribution map of the forest landscape based on morphological classification was constructed. This method overcomes some of the limitations of the traditional landscape pattern indices and can intuitively identify the type and structure of each landscape element from the spatial aggregation pattern. Based on the GuidosToolbox 3.0 software platform, the quantitative and visualization methods of morphology have been further extended: morphological identification of forest landscape types, morphology-based landscape fragmentation analysis, aggregation analysis of forest spatial distribution based on Euclidean distance, and regional ecological planning and design can now be carried out. This makes it possible to analyze the temporal and spatial changes of forest land in the region from the multiple perspectives of morphology and landscape ecology.
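The core/non-core distinction that MSPA draws with an 8-neighborhood moving window can be illustrated with a toy sketch. The function below is a hypothetical simplification of ours: it only separates core pixels from the rest of the foreground, whereas GuidosToolbox derives all seven classes:

```python
import numpy as np

def core_mask(fg):
    """fg: 2D 0/1 integer array (1 = forest). A pixel is 'core' if it and
    all 8 of its neighbors are forest (a 3x3 all-ones moving window)."""
    p = np.pad(fg, 1, constant_values=0)   # background beyond the image border
    core = np.ones_like(fg)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            # AND together the window's 9 shifted views of the padded image
            core &= p[1 + di : 1 + di + fg.shape[0],
                      1 + dj : 1 + dj + fg.shape[1]]
    return core

forest = np.ones((5, 5), dtype=int)
forest[0, :] = 0                  # a background strip along one side
core = core_mask(forest)
# Only pixels at least one cell away from background (and from the image
# border, because of the zero padding) remain core.
```

Non-core forest pixels would then be further partitioned (edge, perforation, bridge, etc.) by examining how they connect core regions and background, which is the part this sketch omits.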

3.4. Accuracy Evaluation Metrics

To objectively evaluate the accuracy of the proposed method in the task of forest land extraction from remote sensing images, the SC-UNet results were assessed with established deep learning performance metrics. Firstly, in order to maximally distinguish forest from background in each image, the spatial distribution of forest land obtained on the test set using the proposed method was binarized, and its comparison with the experts' accurately labeled sample labels was used as the assessment basis. Four performance metrics are used to evaluate the results [60]: precision, recall, intersection over union (IoU), and F1-score, as defined in Equations (4)–(7). The IoU is the ratio of the intersection to the union of the predicted and true values; the higher this index, the higher the agreement between the extracted results and the expert-labeled results. Precision is the proportion of pixels extracted as forest land that are truly forest land. Recall is the proportion of true forest land pixels in the test set that are correctly extracted. To evaluate the performance of the forest extraction model more comprehensively, we also use the F1-score, which balances the effects of precision and recall.
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$IoU = \frac{TP}{TP + FP + FN}$$
$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
where TP is the number of pixels of forest land correctly extracted by the model, FP is the number of pixels in the image wrongly extracted as forest land, and FN is the number of pixels of forest land that are not extracted by the model.
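Equations (4)–(7) can be computed directly from the TP/FP/FN pixel counts. The following pure-Python sketch (the function name is ours) shows the four metrics on a tiny hypothetical pair of flattened 0/1 masks:

```python
def forest_metrics(pred, truth):
    """Compute precision, recall, IoU, and F1 (Equations (4)-(7)) from
    flat 0/1 prediction and ground-truth masks of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    iou = tp / (tp + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, iou, f1

# Example: 3 true positives, 1 false positive, 1 false negative.
p, r, iou, f1 = forest_metrics([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0])
# precision = 3/4, recall = 3/4, IoU = 3/5, F1 = 3/4
```

Note that IoU is always the strictest of the four: its denominator counts both kinds of error, so it never exceeds precision or recall.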

4. Experiments and Analysis

4.1. Experimental Data

In this study, multi-year Sentinel-2 10 m resolution multispectral images of northeastern China were utilized to construct an optical image-based forest dataset for the high-precision forest extraction task in the study area.
(1)
Although forest differs greatly from the background features in growing-season remote sensing images, glare, mountain shadows, interfering clouds, and sparse stands at the time of image acquisition produce high intra-class diversity in the imaged forests, because forest is mostly distributed in mountainous and hilly terrain. Therefore, to enhance the accuracy and robustness of the proposed SC-UNet method for forest extraction from Sentinel-2 multispectral images under different natural geographic conditions, acquisition times, and imaging conditions, we acquired images with different imaging times, imaging regions, and natural geographic environments in northeastern China to produce the forest land dataset, as shown in Table 2. In Table 2, the color images are local views of the Sentinel-2 satellite imagery, and the binary images are the corresponding sample labels, in which white denotes forest land and black denotes background.
(2)
The original Sentinel-2 multispectral images are large; in order to improve the iterative training speed of the model, the images were cropped to a 512 × 512 pixel size in this study. The dataset covers different background complexities, image brightnesses, and cloud interference. A total of 9220 samples were produced, including 5320 positive samples and 3900 negative samples without forest land labeling. The dataset was divided into training, validation, and test samples in the ratio of 0.8:0.1:0.1 [60].
(3)
The edge lines of the forest in the imagery were manually labeled using the Labelme module to produce a label set. The sample set and its corresponding label set were combined to form the remote sensing image forest land extraction dataset for deep learning.
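The 0.8:0.1:0.1 split described in step (2) might be implemented as follows; the random seed and function name are illustrative assumptions, not details from the paper:

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle the samples reproducibly and cut them into
    train/validation/test partitions according to the given ratios."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

train, val, test = split_dataset(range(9220))
# 9220 samples -> 7376 training, 922 validation, 922 test
```

Shuffling before cutting matters here: without it, samples from the same Sentinel-2 scene would cluster in one partition and bias the evaluation.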

4.2. Model Training

In our study, we used the training set to train SC-UNet and compared it with the classical U-Net. The training time of the proposed model was 7.5 h, and the optimal training parameters of the model are shown in Table 3. "Epochs" represents the number of complete passes of the training set through the model, "batch_size" is the number of samples the model learns from in the same batch, "train loss" and "val loss" denote the loss between the network outputs and the sample labels on the training and validation sets, respectively, and the learning rate is the parameter that determines the step size of each iteration. The model reached the optimal parameter settings at the 85th epoch.
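Reaching the "optimal parameter settings at the 85th epoch" presumably amounts to keeping the checkpoint with the lowest validation loss. A minimal hypothetical sketch (the loss values below are invented, not from Table 3):

```python
def best_epoch(val_losses):
    """val_losses: list of per-epoch validation losses.
    Returns the 1-based index of the epoch with the lowest loss."""
    return min(range(len(val_losses)), key=val_losses.__getitem__) + 1

# The checkpoint saved at the returned epoch would be the one evaluated.
epoch = best_epoch([0.9, 0.5, 0.6])   # lowest loss at the second epoch
```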

4.3. Experimental Results

We evaluated the robustness of the SC-UNet method proposed in this paper for forest extraction based on 532 accurately labeled test samples containing forest land in the forest land dataset. The classical U-Net model and the method proposed in this paper were used to conduct forest extraction experiments in the test set. The sample labels accurately labeled by experts were used as the ground-truth values, so as to obtain the mean values of the four performance evaluation indices (Table 4).
As shown in Table 4, the proposed SC-UNet forest extraction model, which fuses spatial and channel reconstruction convolution, slightly outperforms the classical U-Net model in the four performance metrics, while its prediction speed is much faster. In addition, Table 5 demonstrates the effect of the two methods on forest land extraction from Sentinel-2 images. It can be seen that, in the case of a simple remote sensing image background, the proposed SC-UNet model performs better than the U-Net model in terms of extraction accuracy and speed, due to the reduced interference of redundant information with forest extraction, and it realizes highly robust extraction of multi-scale forest features, making it more suitable for practical application scenarios.

5. Discussion

We analyzed the spatial and temporal changes of forest land in the Fuxin region for 2019–2023 based on the SC-UNet results in Section 5.1. In Section 5.2, we assessed the change of forest land in the Fuxin region from the perspective of forest landscape patterns. Finally, in Section 5.3, we discuss the limitations of this study and potential future research directions.

5.1. Spatial and Temporal Changes of Forest Land in the Fuxin Region

The forest coverage of the Fuxin region was extracted from the Sentinel-2 multispectral images with a spatial resolution of 10 m to obtain the spatio-temporal change of forest land in the region during 2019–2023, as shown in Figure 5 and Figure 6. As can be seen from Figure 5, the distribution of forest land in the Fuxin region is uneven, with more forest in the west and south and less in the east and north. The reason for this distribution is that the northern part of the Fuxin region is close to the sandy area, where the ecological environment is fragile and not conducive to forest growth. In addition, the forest cover of the Fuxin region increased during 2019–2023: the area covered by forest land grew from 921.80 km² in 2019 to 931.73 km² in 2023. The northeastern part of the Fuxin region showed a greater increase in forest land during this five-year period because ecosystem restoration projects, such as the construction of windbreak forests, have been widely carried out in the sandy areas there to resist the problems brought by wind and sand.

5.2. Evolution of the Spatio-Temporal Patterns of Forest Landscape Categories in the Fuxin Region Based on MSPA

We classified the forest patch types in the Fuxin region based on the MSPA method and obtained the forest landscape type changes of the seven morphological types in 2019 and 2023, as shown in Figure 7 and Figure 8. With regard to forest area, as shown in Section 5.1, the forest area in the Fuxin region increased during the five-year period from 2019 to 2023. In terms of the overall spatial pattern, the seven structural categories of the MSPA method did not change much spatially, and the core area was mainly concentrated in the western and southern forest areas of the Fuxin region, where the distribution of forest patches is characterized by large areas and strong stability. Although the core area is concentrated in these large areas, forest fragmentation and morphological transformation have also occurred in some areas, where core patches have been transformed into bridge and edge areas. Compared with 2019, the overall amount of the core landscape type increased in 2023, while the Perforation, Bridge, Islet, Edge, and Branch types decreased to different degrees. The non-core area is mainly concentrated in the Zhangwu sandy area in the northeastern part of the Fuxin region, which is unfavorable for forest growth due to its more fragile ecological environment. Although ecosystem restoration projects have continued in this area in recent years and the forest land there has increased, the forest landscape pattern still shows fragmentation.

5.3. Limitations and Future Research

In this study, we developed the SC-UNet model, which incorporates spatial and channel reconstruction convolution. We also constructed a high intraclass diversity forest land dataset using Sentinel-2 10 m resolution multispectral satellite remote sensing imagery, which covers most of northeastern China. Based on this dataset, the proposed SC-UNet forest land extraction model was trained and evaluated for accuracy. However, we found that the method still has some limitations. Firstly, the method relies on a large number of forest land samples accurately labeled by experts, and the quality of the sample labeling directly affects the accuracy of the forest land extraction. In addition, the texture and color of forests of different heights and species in the optical imagery do have some differences. In our case, even though the number of samples was high, there were fewer forest samples with lower heights, such as shrubs, and with distributions close to those of farmland and building land. This may have affected the accuracy of the model training and forest extraction. In addition, the forest land dataset built in this study was mainly based on multi-year Sentinel-2 satellite remote sensing images covering the northeastern region of China, which mainly includes coniferous forest and deciduous broad-leaved forest, due to the natural geographic conditions. Therefore, the extraction of tropical and subtropical forests may need to be improved. In our future work, we plan to expand the sample sources to tropical and subtropical forests. In addition, the effect of the spatial resolution of the optical images on the details of forest extraction should not be ignored, and in future studies, we will aim to test and analyze forest extraction using other high-resolution satellite images (e.g., GF-1, GF-2, GF-7, ZY-3). In future studies, we will also try to utilize Google Earth Engine (GEE) [61,62]. 
In terms of the analysis of spatial and temporal changes in forest land, this study has some shortcomings regarding the driving factors of these changes, as only two time points, 2019 and 2023, were analyzed for the spatial distribution of forest land in the study area. Forest land change is a product of the coupled influence of natural factors and human activities [63]. In future work, we will monitor forest land change in the study area with multi-temporal, long time series data to explore the driving factors behind these changes.

6. Conclusions

In this study, we selected the Fuxin area of Liaoning province in northeastern China as the study area and used multi-year Sentinel-2 10 m resolution multispectral satellite images covering it. We constructed a forest land dataset with high intraclass diversity, accounting for light intensity, sparseness of tree cover, and cloud interference. Because forest land is clearly distinguishable from background features in remote sensing images during the vegetation growing season, we integrated spatial and channel reconstruction convolution into the U-Net model to build a forest land extraction model (SC-UNet) applicable to medium- and high-resolution remote sensing images. Compared with the classical U-Net model, the proposed SC-UNet improved precision by 0.442% and F1-score by 0.058%, and accelerated forest land extraction by about 6.14 times. Using SC-UNet, we produced forest distribution maps of the Fuxin region for 2019 and 2023 and analyzed their spatial and temporal changes; we also analyzed changes in the forest landscape pattern over this five-year period using the MSPA method. The results show that the forest land area in the Fuxin region increased by about 9.93 km2 from 2019 to 2023. In terms of overall spatial pattern, the seven forest landscape structure categories changed little spatially, and the core area remained concentrated in the southwestern part of the Fuxin region, where forest patches are large and highly stable. The non-core area is mainly concentrated in the Zhangwu sandy area in the northeastern part of the Fuxin region, where the more fragile ecological environment is less conducive to forest growth. Although ecosystem restoration projects have continued in this area in recent years and the forest area has increased, the forest landscape pattern still shows a trend toward fragmentation.
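The area statistics above follow directly from pixel counting on the 10 m resolution classification maps, where each pixel covers 100 m². A minimal sketch of this conversion; the pixel count used here is hypothetical, chosen only to match the reported ~9.93 km² increase:

```python
# Each Sentinel-2 10 m pixel covers 10 m x 10 m = 100 m^2.
PIXEL_AREA_M2 = 10 * 10

def forest_area_km2(n_forest_pixels: int) -> float:
    """Convert a count of forest-classified pixels to an area in km^2."""
    return n_forest_pixels * PIXEL_AREA_M2 / 1e6

# A ~9.93 km^2 increase corresponds to ~99,300 newly forest-classified pixels
# (hypothetical count, for illustration only).
print(forest_area_km2(99_300))  # 9.93
```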

Author Contributions

Conceptualization, F.W. and F.Y.; methodology, F.W. and F.Y.; software, F.W.; validation, F.W.; writing—original draft preparation, F.W.; writing—review and editing, F.W., F.Y. and Z.W.; visualization, F.W. and Z.W.; project administration, F.W. and F.Y.; funding acquisition, F.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Education Department Project of Liaoning Province (LJ2020JCL006), the Discipline Innovation Team Project of Liaoning Technical University (LNTU20TD-27), and the Key Laboratory of Land Satellite Remote Sensing Application, Ministry of Natural Resources of the People’s Republic of China (KLSMNR-202107).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available from the European Space Agency Copernicus Data Space at https://dataspace.copernicus.eu/.

Acknowledgments

We are very grateful to all the reviewers, institutions, and study authors for their help and advice on our work.

Conflicts of Interest

Author Zixue Wang was employed by the company Dalian Rongkepower Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Peng, X.; He, G.; She, W.; Zhang, X.; Wang, G.; Yin, R.; Long, T. A Comparison of Random Forest Algorithm-Based Forest Extraction with GF-1 WFV, Landsat 8 and Sentinel-2 Images. Remote Sens. 2022, 14, 5296. [Google Scholar] [CrossRef]
  2. Zhu, H.; Zhang, B.; Song, W.; Xie, Q.; Chang, X.; Zhao, R. Forest Canopy Height Estimation by Integrating Structural Equation Modeling and Multiple Weighted Regression. Forests 2024, 15, 369. [Google Scholar] [CrossRef]
  3. Zhang, B.; Zhu, H.; Xu, W.; Xu, S.; Chang, X.; Song, W.; Zhu, J. A Fourier–Legendre Polynomial Forest Height Inversion Model Based on a Single-Baseline Configuration. Forests 2024, 15, 49. [Google Scholar] [CrossRef]
  4. De Frenne, P.; Lenoir, J.; Luoto, M.; Scheffers, B.; Zellweger, F.; Aalto, J.; Ashcroft, M.; Christiansen, D.; Decocq, G.; De Pauw, K.; et al. Forest microclimates and climate change: Importance, drivers and future research agenda. Glob. Chang. Biol. 2021, 27, 2279–2297. [Google Scholar] [CrossRef]
  5. Ripple, W.J.; Bradshaw, G.A.; Spies, T.A. Measuring forest landscape patterns in the Cascade Range of Oregon, USA. Biol. Conserv. 1991, 57, 73–88. [Google Scholar] [CrossRef]
  6. Wang, J.; Yang, X. A hierarchical approach to forest landscape pattern characterization. Environ. Manag. 2012, 49, 64–81. [Google Scholar] [CrossRef]
  7. Tariq, A.; Jiango, Y.; Li, Q.; Gao, J.; Lu, L.; Soufan, W.; Almutairi, K.; Habib-ur-Rahman, M. Modelling, mapping and monitoring of forest cover changes, using support vector machine, kernel logistic regression and naive bayes tree models with optical remote sensing data. Heliyon 2023, 9, e13212. [Google Scholar] [CrossRef]
  8. Cross, M.D.; Scambos, T.; Pacifici, F.; Marshall, W.E. Determining effective meter-scale image data and spectral vegetation indices for tropical forest tree species differentiation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2934–2943. [Google Scholar] [CrossRef]
  9. Zhang, X.; Zhang, Y.; Liu, L.; Zhang, J.; Gao, J. Remote sensing monitoring of the subalpine coniferous forests and quantitative analysis of the characteristics of succession in east mountain area of Tibetan Plateau—A case study with Zamtange county. Agric. Sci. Technol. 2011, 12, 926–930. [Google Scholar]
  10. Franklin, S.E.; Hall, R.J.; Moskal, L.M.; Maudie, A.J.; Lavigne, M.B. Incorporating texture into classification of forest species composition from airborne multispectral images. Int. J. Remote Sens. 2000, 21, 61–79. [Google Scholar] [CrossRef]
  11. Cheng, K.; Wang, J. Forest type classification based on integrated spectral-spatial-temporal features and random forest algorithm—A case study in the Qinling mountains. Forests 2019, 10, 559. [Google Scholar] [CrossRef]
  12. Xie, G.; Niculescu, S. Mapping and monitoring of land cover/land use (LCLU) changes in the Crozon peninsula (Brittany, France) from 2007 to 2018 by machine learning algorithms (support vector machine, random forest, and convolutional neural network) and by post-classification comparison (PCC). Remote Sens. 2021, 13, 3899. [Google Scholar] [CrossRef]
  13. Haq, M.A.; Rahaman, G.; Baral, P.; Ghosh, A. Deep learning based supervised image classification using UAV images for forest areas classification. J. Indian Soc. Remote Sens. 2021, 49, 601–606. [Google Scholar] [CrossRef]
  14. Chai, J.; Zeng, H.; Li, A.; Ngai, E.W.T. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
  15. Dhanya, V.G.; Subeesh, A.; Kushwaha, N.L.; Vishwakarma, D.K.; Kumar, T.N.; Ritika, G.; Singh, A.N. Deep learning based computer vision approaches for smart agricultural applications. Artif. Intell. Agric. 2022, 6, 211–229. [Google Scholar] [CrossRef]
  16. Moshayedi, A.J.; Roy, A.S.; Kolahdooz, A.; Shuxin, Y. Deep learning application pros and cons over algorithm deep learning application pros and cons over algorithm. EAI Endorsed Trans. AI Robot. 2022, 1, 7. [Google Scholar]
  17. Ghasemi, Y.; Jeong, H.; Choi, S.H.; Park, K.B.; Lee, J.Y. Deep learning-based object detection in augmented reality: A systematic review. Comput. Ind. 2022, 139, 103661. [Google Scholar] [CrossRef]
  18. Chang, X.; Zhang, B.; Zhu, H.; Song, W.; Ren, D.; Dai, J. A Spatial and Temporal Evolution Analysis of Desert Land Changes in Inner Mongolia by Combining a Structural Equation Model and Deep Learning. Remote Sens. 2023, 15, 3617. [Google Scholar] [CrossRef]
  19. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [Google Scholar] [CrossRef]
  20. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  21. Wang, J.; Song, L.; Li, Z.; Sun, H.; Sun, J.; Zheng, N. End-to-end object detection with fully convolutional network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15849–15858. [Google Scholar]
  22. Wu, S.; Shi, J.; Chen, Z. HG-FCN: Hierarchical grid fully convolutional network for fast VVC intra coding. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5638–5649. [Google Scholar] [CrossRef]
  23. Wang, S.; Liu, C.; Zhang, Y. Fully convolution network architecture for steel-beam crack detection in fast-stitching images. Mech. Syst. Signal Process. 2022, 165, 108377. [Google Scholar] [CrossRef]
  24. Zhu, H.; Zhang, B.; Chang, X.; Song, W.; Dai, J.; Li, J. A Study of Sandy Land Changes in the Chifeng Region from 1990 to 2020 Based on Dynamic Convolution. Sustainability 2023, 15, 12931. [Google Scholar] [CrossRef]
  25. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; proceedings, part III 18. Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  26. Su, Z.; Li, W.; Ma, Z.; Gao, R. An improved U-Net method for the semantic segmentation of remote sensing images. Appl. Intell. 2022, 52, 3276–3288. [Google Scholar] [CrossRef]
  27. John, D.; Zhang, C. An attention-based U-Net for detecting deforestation within satellite sensor imagery. Int. J. Appl. Earth Obs. Geoinf. 2022, 107, 102685. [Google Scholar] [CrossRef]
  28. Meena, S.R.; Soares, L.P.; Grohmann, C.H.; Westen, C.; Bhuyan, K.; Singh, R.P.; Floris, M.; Catani, F. Landslide detection in the Himalayas using machine learning algorithms and U-Net. Landslides 2022, 19, 1209–1229. [Google Scholar] [CrossRef]
  29. Alsabhan, W.; Alotaiby, T.; Dudin, B. Detecting buildings and nonbuildings from satellite images using U-Net. Comput. Intell. Neurosci. 2022, 2022, 4831223. [Google Scholar] [CrossRef] [PubMed]
  30. Wang, H.; Miao, F. Building extraction from remote sensing images using deep residual U-Net. Eur. J. Remote Sens. 2022, 55, 71–85. [Google Scholar] [CrossRef]
  31. Yan, C.; Fan, X.; Fan, J.; Wang, N. Improved U-Net remote sensing classification algorithm based on Multi-Feature Fusion Perception. Remote Sens. 2022, 14, 1118. [Google Scholar] [CrossRef]
  32. Zhang, R.; Zhu, D. Study of land cover classification based on knowledge rules using high-resolution remote sensing images. Expert Syst. Appl. 2011, 38, 3647–3652. [Google Scholar] [CrossRef]
  33. Chen, B.; Xia, M.; Huang, J. Mfanet: A multi-level feature aggregation network for semantic segmentation of land cover. Remote Sens. 2021, 13, 731. [Google Scholar] [CrossRef]
  34. Navin, M.S.; Agilandeeswari, L. Comprehensive review on land use/land cover change classification in remote sensing. J. Spectr. Imaging 2020, 9, a8. [Google Scholar] [CrossRef]
  35. Kupidura, P. The Comparison of Different Methods of Texture Analysis for Their Efficacy for Land Use Classification in Satellite Imagery. Remote Sens. 2019, 11, 1233. [Google Scholar] [CrossRef]
  36. Liu, M.; Shi, J.; Li, Z.; Li, C.; Zhu, J.; Liu, S. Towards better analysis of deep convolutional neural networks. IEEE Trans. Vis. Comput. Graph. 2016, 23, 91–100. [Google Scholar] [CrossRef]
  37. Xu, R.; Samat, A.; Zhu, E.; Li, E.; Li, W. Unsupervised Domain Adaptation with Contrastive Learning-Based Discriminative Feature Augmentation for RS Image Classification. Remote Sens. 2024, 16, 1974. [Google Scholar] [CrossRef]
  38. Lu, Y.; Li, H.; Zhang, C.; Zhang, S. Object-Based Semi-Supervised Spatial Attention Residual UNet for Urban High-Resolution Remote Sensing Image Classification. Remote Sens. 2024, 16, 1444. [Google Scholar] [CrossRef]
  39. Han, Y.; Guo, J.; Yang, H.; Guan, R.; Zhang, T. SSMA-YOLO: A Lightweight YOLO Model with Enhanced Feature Extraction and Fusion Capabilities for Drone-Aerial Ship Image Detection. Drones 2024, 8, 145. [Google Scholar] [CrossRef]
  40. Wu, Q.; Feng, D.; Cao, C.; Zeng, X.; Feng, Z.; Wu, J.; Huang, Z. Improved Mask R-CNN for Aircraft Detection in Remote Sensing Images. Sensors 2021, 21, 2618. [Google Scholar] [CrossRef] [PubMed]
  41. Li, S.; Ganguly, S.; Dungan, J.L.; Wang, W.; Nemani, R.R. Sentinel-2 MSI radiometric characterization and cross-calibration with Landsat-8 OLI. Adv. Remote Sens. 2017, 6, 147. [Google Scholar] [CrossRef]
  42. Gašparović, M.; Jogun, T. The effect of fusing Sentinel-2 bands on land-cover classification. Int. J. Remote Sens. 2018, 39, 822–841. [Google Scholar] [CrossRef]
  43. Lamquin, N.; Woolliams, E.; Bruniquel, V.; Gascon, F.; Gorroño, J.; Govaerts, Y.; Leroy, V.; Lonjou, V.; Alhammoud, B.; Barsi, J.A.; et al. An inter-comparison exercise of Sentinel-2 radiometric validations assessed by independent expert groups. Remote Sens. Environ. 2019, 233, 111369. [Google Scholar] [CrossRef]
  44. Su, W.; Zhang, M.; Jiang, K.; Zhu, D.; Huang, J.; Wang, P. Atmospheric correction method for Sentinel-2 satellite imagery. Acta Opt. Sin. 2018, 38, 0128001. [Google Scholar] [CrossRef]
  45. Yin, F.; Lewis, P.E.; Gómez-Dans, J.L. Bayesian atmospheric correction over land: Sentinel-2/MSI and Landsat 8/OLI. Geosci. Model Dev. 2022, 15, 7933–7976. [Google Scholar] [CrossRef]
  46. Li, J.F.; Wen, Y.; He, L.H. SCConv: Spatial and channel reconstruction convolution for feature redundancy. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE Press: New York, NY, USA, 2023; pp. 6153–6162. [Google Scholar]
  47. Chen, C.F.; Oh, J.; Fan, Q.; Pistoia, M. SC-Conv: Sparse-complementary convolution for efficient model utilization on CNNs. In Proceedings of the 2018 IEEE International Symposium on Multimedia (ISM), Taichung, Taiwan, 10–12 December 2018; pp. 97–100. [Google Scholar]
  48. Wu, Y.; He, K. Group Normalization. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Volume 3, pp. 3–19. [Google Scholar]
  49. Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; IEEE Press: New York, NY, USA, 2019; pp. 510–519. [Google Scholar]
  50. Ogundokun, R.O.; Maskeliunas, R.; Misra, S.; Damaševičius, R. Improved CNN based on batch normalization and Adam optimizer. In Proceedings of the International Conference on Computational Science and Its Applications, Malaga, Spain, 4–7 July 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 593–604. [Google Scholar]
  51. Lange, S.; Helfrich, K.; Ye, Q. Batch normalization preconditioning for neural network training. J. Mach. Learn. Res. 2022, 23, 1–41. [Google Scholar]
  52. Yu, X.; Zheng, Z.; Meng, L.; Li, L. Scene Recognition of Remotely Sensed Images Based on Bayes Adjoint Batch Normalization; Geomatics and Information Science of Wuhan University: Wuhan, China, 2023. [Google Scholar]
  53. Yang, X.; Zhu, Y.; Guo, Y.; Zhou, D. An image super-resolution network based on multi-scale convolution fusion. Vis. Comput. 2022, 38, 4307–4317. [Google Scholar] [CrossRef]
  54. Hu, H.; Chen, Y.; Xu, J.; Borse, S.; Cai, H.; Porikli, F.; Wang, X. Learning implicit feature alignment function for semantic segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 487–505. [Google Scholar]
  55. Ren, S.; Zhao, N.; Wen, Q.; Han, G.; He, S. Unifying Global-Local Representations in Salient Object Detection with Transformers. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2870–2879. [Google Scholar] [CrossRef]
  56. Soille, P.; Vogt, P. Morphological segmentation of binary patterns. Pattern Recognit. Lett. 2009, 30, 456–459. [Google Scholar] [CrossRef]
  57. Vogt, P.; Riitters, K. GuidosToolbox: Universal digital image object analysis. Eur. J. Remote Sens. 2017, 50, 352–361. [Google Scholar] [CrossRef]
  58. Xiong, C.; Wu, Z.; Zeng, Z.Y.; Gong, J.Z.; Li, J.T. Spatiotemporal evolution of forest landscape pattern in Guangdong-HongKong-Macao Greater Bay Area based on “Spatial Morphology-Fragmentation-Aggregation”. Acta Ecol. Sin. 2023, 43, 3032–3044. [Google Scholar]
  59. Pan, Y.; Jiao, S.; Hu, J.; Guo, Q.; Yang, Y. An Ecological Resilience Assessment of a Resource-Based City Based on Morphological Spatial Pattern Analysis. Sustainability 2024, 16, 6476. [Google Scholar] [CrossRef]
  60. Zhu, H.; Zhang, B.; Song, W.; Dai, J.; Lan, X.; Chang, X. Power-Weighted Prediction of Photovoltaic Power Generation in the Context of Structural Equation Modeling. Sustainability 2023, 15, 10808. [Google Scholar] [CrossRef]
  61. Hu, L.; Xu, N.; Liang, J.; Li, Z.; Chen, L.; Zhao, F. Advancing the Mapping of Mangrove Forests at National-Scale Using Sentinel-1 and Sentinel-2 Time-Series Data with Google Earth Engine: A Case Study in China. Remote Sens. 2020, 12, 3120. [Google Scholar] [CrossRef]
  62. Yang, L.; Driscol, J.; Sarigai, S.; Wu, Q.; Chen, H.; Lippitt, C.D. Google Earth Engine and Artificial Intelligence (AI): A Comprehensive Review. Remote Sens. 2022, 14, 3253. [Google Scholar] [CrossRef]
  63. Li, L.; Zhu, L.; Xu, N.; Liang, Y.; Zhang, Z.; Liu, J.; Li, X. Climate Change and Diurnal Warming: Impacts on the Growth of Different Vegetation Types in the North–South Transition Zone of China. Land 2022, 12, 13. [Google Scholar] [CrossRef]
Figure 1. Geographic location map of Fuxin region.
Figure 2. Structure of the spatial and channel reconstruction convolution module.
Figure 3. Schematic diagram of the SC-UNet model structure.
Figure 4. MSPA computational principle and computational steps.
Figure 5. Forest distribution map for the Fuxin region in 2019.
Figure 6. Forest distribution map for the Fuxin region in 2023.
Figure 7. Distribution of forest landscape patterns in the Fuxin region in 2019.
Figure 8. Distribution of forest landscape patterns in the Fuxin region in 2023.
Table 1. Definition of the landscape types and the calculation formulas for MSPA [59].
MSPA Class | Calculation Formula | Description
Core | X1 = T_{d,s}[EDT(X)] | X1: large aggregations of green pixels lying at least a certain distance from the boundary. T: threshold calculation; d: distance; s: size parameter; EDT: Euclidean distance transform; X: pixels of the input image.
Islet | X2 = X \ R_X(X1) | X2: disconnected green pixels whose aggregates are too small to qualify as core. R_X(X1): reconstruction of X by dilation, starting from the core area X1.
Loop | A = δ_s^{X\X1}[SKEL_{X1}(X)]; X3 ⊆ A, connecting a core class X1 to itself | X3: narrow, corridor-like green pixels connecting the same core class. A: connecting region; δ_s: dilation by distance s; SKEL_{X1}(X): skeleton of X anchored in the core region X1.
Bridge | X4 ⊆ A, connecting at least two different core classes X1 | X4: non-core green pixels connecting at least two different core classes, with narrow corridor characteristics.
Perforation | B = T_{0<d≤s}[EDT(X)] \ (X1 ∪ X2 ∪ X3 ∪ X4); X5: boundary pixels of B lying less than s from an interior (hole) boundary | X5: the transition area between the core class and non-green patches enclosed within it, i.e., the inner fringe of the green space.
Edge | X6 = B \ X5 | X6: the junction area between the core class and the main (exterior) non-green area.
Branch | X7 = X \ (X1 ∪ X2 ∪ X3 ∪ X4 ∪ X5 ∪ X6) | X7: non-core green pixels with only one end connected to an edge, bridge, loop, or perforation class.
MSPA: Morphological spatial pattern analysis.
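The class definitions in Table 1 are built from a small set of morphological primitives: Euclidean distance transform thresholding and reconstruction from connected components. As a rough sketch of the core/islet split only — not the GuidosToolbox implementation used in the paper — assuming a binary forest mask and SciPy:

```python
import numpy as np
from scipy import ndimage

def core_edge_islet(forest: np.ndarray, s: int = 1):
    """Simplified MSPA-style split of a binary forest mask.

    Core  = forest pixels farther than s from the background (EDT threshold);
    Islet = forest components containing no core pixel;
    Other = remaining forest pixels (edges, bridges, etc., not separated here).
    """
    edt = ndimage.distance_transform_edt(forest)   # EDT(X)
    core = edt > s                                 # T_{d,s}[EDT(X)]
    labels, _ = ndimage.label(forest)              # connected components of X
    # Components reachable from a core pixel (reconstruction by dilation)
    core_labels = np.unique(labels[core])
    connected = np.isin(labels, core_labels) & forest
    islet = forest & ~connected                    # X \ R_X(X1)
    other = connected & ~core
    return core, islet, other

mask = np.zeros((7, 7), bool)
mask[1:6, 1:6] = True   # a 5x5 patch -> contains a 3x3 core
mask[0, 6] = True       # an isolated pixel -> islet
core, islet, other = core_edge_islet(mask)
```

GuidosToolbox additionally distinguishes loops, bridges, perforations, edges, and branches via skeletonization and boundary analysis, which this sketch omits.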
Table 2. A dataset for forest land extraction from remote sensing images in different states.
Normal Image | Glare | Sparse Forest Images | Clouds Interfering with the Image | Negative Sample Image
Table 3. Optimal parameters for model training.
Epochs | Batch_Size | Train Loss | Val Loss | Learning Rate
85 | 8 | 0.073 | 0.081 | 1 × 10−6
Table 4. Comparison of the forest land extraction results from Sentinel-2 images.
Model | IoU/% | Precision/% | Recall/% | F1/% | Prediction Speed (s)
U-Net | 83.543 | 90.875 | 92.512 | 91.687 | 0.15
SC-UNet | 81.781 | 91.317 | 92.177 | 91.745 | 0.021
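The four accuracy measures in Table 4 follow the standard confusion-matrix definitions for binary segmentation. A minimal sketch; the toy masks below are illustrative, not the paper's data:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """IoU, precision, recall, and F1 for a binary forest mask.

    pred, truth: boolean arrays of the same shape (True = forest pixel).
    """
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1

# Toy 2x2 example: tp=1, fp=1, fn=1 -> IoU=1/3, P=R=F1=0.5
pred = np.array([[True, True], [False, False]])
truth = np.array([[True, False], [True, False]])
print(segmentation_metrics(pred, truth))
```

Note that IoU and F1 weight the same error counts differently, which is why SC-UNet can trail U-Net on IoU while edging it on F1 in Table 4.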
Table 5. Comparison of the Sentinel-2 image forest extraction results.
Original Image | Ground Truth | U-Net | SC-UNet
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
