Article

TENet: A Texture-Enhanced Network for Intertidal Sediment and Habitat Classification in Multiband PolSAR Images

1 Institut für Meereskunde, Universität Hamburg, 20146 Hamburg, Germany
2 Key Laboratory of Network Information System Technology (NIST), Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
3 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
4 National Geomatics Center of China, Beijing 100036, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(6), 972; https://doi.org/10.3390/rs16060972
Submission received: 14 January 2024 / Revised: 27 February 2024 / Accepted: 6 March 2024 / Published: 10 March 2024

Abstract:
This paper proposes a texture-enhanced network (TENet) for intertidal sediment and habitat classification using multiband multipolarization synthetic aperture radar (SAR) images. The architecture introduces the texture enhancement module (TEM) into the UNet framework to explicitly learn global texture information from SAR images. The study sites are chosen from the northern part of the intertidal zones in the German Wadden Sea. Results show that the presented TENet model is able to map the intertidal surface types in detail, including land, seagrass, bivalves, bright sands/beach, water, sediments, and thin coverage of vegetation or bivalves. To further assess its performance, we quantitatively compared the results of the TENet model with those of different semantic segmentation models for the same areas of interest. The TENet model achieves finer classification accuracies and shows great potential in providing more precise class locations.

1. Introduction

Intertidal zones are the transitional areas between terrestrial and marine environments [1,2]. The surfaces of these areas are usually of different kinds, including vegetation, mussel beds, and different types of sediments. Under the influence of tidal processes, the intertidal zones are endowed with dynamic conditions and high species diversity and productivity [3,4].
The intertidal ecosystems play a crucial role in ecological systems, including carbon storage [5], nutrient cycling [6], storm protection [7], and serving as nursery grounds for marine organisms [8]. Additionally, the intertidal regions can be reclaimed for various commercial and recreational purposes [9]. However, intertidal zones are also recognized as environmentally delicate and sensitive areas, which can be influenced by warming waters, coastal erosion, species invasion, coastal development, and many other factors [10]. Recently, the intertidal zones have been exposed to growing anthropogenic threats, including tourism, excessive fishing, oil and gas production, and high nutrient loads [11], which makes continuous monitoring and frequent classification of surface types mandatory.
The understanding of valuable coastal environments is constrained by a lack of data. In situ observations of intertidal zones are scarce due to the inaccessibility of most areas [12]. Satellite remote sensing offers a valuable resource for monitoring coastal zones [13,14]. High-resolution multispectral remote sensing data obtained from satellite-borne optical sensors have been successfully employed for sediment and habitat classification on intertidal flats [15,16]. However, the use of these sensors is constrained to daylight conditions and can be hindered by cloud cover and atmospheric haze [11]. The synthetic aperture radar (SAR), operating as an active microwave sensor, stands out among existing remote sensors due to its ability to acquire high-resolution imagery with multiple bands and polarizations, independent of weather and light conditions. These advantages make SAR an ideal tool for studying intertidal zones [17].
Most of the current approaches employ traditional machine learning algorithms to construct SAR classification schemes [12,18,19], which often require significant time and effort to define and manually extract features. Moreover, existing models tend to categorize the surfaces of intertidal zones into a limited number of types, which often fails to meet practical demands and requirements.
In recent years, deep learning frameworks have shown great potential in the field of remote sensing applications, exhibiting impressive capabilities and performance [20,21,22,23,24,25,26]. There exist already some data-driven models based on optical images for sediment and habitat classification [27,28]. However, there is still very little research reported on data-driven approaches for classifying intertidal sediments and habitats using SAR images.
So far, several methods have been specifically designed for semantic segmentation tasks in SAR images. Wu et al. [29] developed a model based on a fully convolutional network (FCN) [30] with a transfer learning strategy, which proved to be effective in achieving accurate polarimetric SAR (PolSAR) scene segmentation with limited training data. We refer to this model as TL-FCN in this paper. Wang et al. [31] proposed HR-SARNet, a deep neural network specifically designed for scene segmentation using high-resolution SAR data. The aforementioned methods have demonstrated effectiveness in accomplishing SAR semantic segmentation tasks. For intertidal sediment and habitat classification, we can further consider the specific characteristics of intertidal areas to enhance the classification results.
The utilization of multiband and multipolarization SAR data has been employed for the classification of intertidal zones in some studies [32,33]. Gade et al. [34] proposed an approach applied to multifrequency, copolarized spaceborne imaging radar-C/X band (SIR-C/X) SAR images for the classification of sediments. Geng et al. [35] examined the tidal impacts on the polarimetric characteristics of mudflats and aquaculture farms by analyzing C-band Radarsat-2 (RS2) and X-band TerraSAR-X (TSX) SAR data. Wang et al. [12] introduced an FCDK-RF classification scheme for intertidal sediments and habitats, which was validated using SAR data from different bands (L-band, C-band, and X-band), separately. These studies have demonstrated the usefulness and effectiveness of incorporating multiband and multipolarization SAR data for accurate intertidal classification using traditional machine learning algorithms. Based on these findings, we have considered and incorporated multiband and multipolarization SAR data in our data-driven approach. Specifically, in our work, we integrated full-polarization RS2 and ALOS PALSAR-2 (ALOS2) data to classify sediments and habitats in intertidal flats.
In the intertidal zones, texture features have proven to be highly valuable in the classification of sediments [36] and habitats [37]. Texture features contain both local structural and global statistical features [38]. Deep learning models are highly effective in extracting local structural features such as boundaries, smoothness, and coarseness [39]. However, there is a lack of well-defined systems to extract and utilize global statistical texture information for convolutional neural network (CNN)–based semantic segmentation. Zhu et al. [40] demonstrated that easily computable textural features have the potential for general applicability in various computer vision tasks related to image classification. Therefore, taking inspiration from their research, this paper introduces a texture-enhanced network, referred to as TENet, taking statistical texture information into consideration for the classification of intertidal sediments and habitats in SAR imagery.
The subsequent sections of this paper are structured as follows: In Section 2, we present the whole processing chain diagram and the structural details of the TENet model. Section 3 provides an overview of the dataset, including the area of interest and the SAR data used herein. Section 4 shows the verification methods employed and comprehensive analyses of the model results. Finally, in Section 5, we conclude this paper with insightful discussions.

2. System Architecture

Our classification framework is split into two parts. The first part involves the preprocessing of SAR data and optical sensor data. The second part involves utilizing the processed outcomes to feed the TENet model and obtain test results. In this section, we first give a detailed flow diagram of the proposed method. Next, we explain two involved classic polarimetric decomposition algorithms. Lastly, we introduce the proposed TENet model.

2.1. Processing Procedure

Figure 1 depicts the data processing workflow in this paper. During the preprocessing stage of the PolSAR images, we initially execute radiometric calibration on RS2 and ALOS2 SAR data to convert the image pixel values from a digital number (DN) to a standard geophysical measurement unit of radar backscatter. Next, the polarization scattering matrices [S] are converted into coherency matrices [T]. Following that, a multilooking operation is performed to mitigate speckle noise. Afterwards, slant-to-ground-range conversion and terrain correction are carried out to eliminate geometry-dependent radiometric distortions. To further reduce speckle, a refined Lee polarization filter with a window size of 7 × 7 (range direction × azimuth direction) is employed. Finally, polarimetric decomposition algorithms are applied to extract information on backscattering mechanisms. All the preprocessing operations are performed on each channel separately.
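The core of this preprocessing chain (scattering matrix to coherency matrix, followed by multilooking) can be sketched in NumPy. This is an illustrative sketch, not the implementation used here: the function name, the Pauli basis convention, and the boxcar look window are our own assumptions.

```python
import numpy as np

def pauli_coherency(S_hh, S_hv, S_vh, S_vv, looks=(2, 2)):
    """Form the 3x3 coherency matrix [T] from a full-pol scattering
    matrix [S] via the Pauli target vector, then multilook it with a
    boxcar average over (azimuth x range) blocks. Illustrative only."""
    s_xv = 0.5 * (S_hv + S_vh)                  # reciprocity: average cross-pol
    k = np.stack([S_hh + S_vv,                  # Pauli target vector k_p
                  S_hh - S_vv,
                  2.0 * s_xv], axis=-1) / np.sqrt(2.0)
    # Outer product k k^H gives a 3x3 Hermitian matrix per pixel.
    T = k[..., :, None] * np.conj(k[..., None, :])
    # Multilook: average over non-overlapping blocks to reduce speckle.
    ry, rx = looks
    H2 = (T.shape[0] // ry) * ry
    W2 = (T.shape[1] // rx) * rx
    T = T[:H2, :W2].reshape(H2 // ry, ry, W2 // rx, rx, 3, 3).mean(axis=(1, 3))
    return T
```

Each multilooked pixel of `T` is then a Hermitian, positive semidefinite estimate of the local coherency matrix, which is the input to the decompositions described next.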
Rational polynomial coefficient (RPC) orthorectification and resizing procedures are performed on the classification data obtained from SPOT-4 and Landsat 8. Finally, both the SAR data and classification data are reprojected to the Universal Transverse Mercator (UTM) zone 32N system, and a georegistration is applied under the same geographical coordinates. Lastly, the processed SAR data and classification map are divided into training sets and testing sets for the TENet model.

2.2. SAR Polarimetric Decomposition

During the preprocessing procedure, two classic polarimetric decomposition algorithms are involved: Cloude-Pottier (CP) and Freeman-Durden (FD). The fundamental principle driving polarimetric decomposition techniques is to analyze and interpret the polarimetric characteristics of radar signals [41]. These decomposition techniques aim to decompose the radar return signal into different scattering mechanisms, enabling the extraction of important information about the observed targets or scenes.
Eigenvalues and eigenvectors are employed in CP polarimetric decomposition to describe the predominant scattering mechanisms exhibited by individual targets [42]. For the analysis of CP polarimetric decomposition, three main parameters with physical interpretations can be utilized: entropy H, alpha angle α, and anisotropy (AN).
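As a sketch, the three CP parameters can be computed from the eigen-decomposition of each pixel's coherency matrix roughly as follows; the eigenvalue-weighted mean alpha and the small numerical guards (`eps`) are our own choices, following the textbook definitions rather than any specific implementation in this paper.

```python
import numpy as np

def cloude_pottier(T):
    """Entropy H, mean alpha angle (degrees), and anisotropy AN from a
    per-pixel 3x3 coherency matrix T of shape (..., 3, 3)."""
    eps = 1e-12
    w, v = np.linalg.eigh(T)                      # ascending eigenvalues
    w = np.clip(w.real, 0.0, None)[..., ::-1]     # descending, non-negative
    v = v[..., ::-1]                              # reorder eigenvector columns
    p = w / (w.sum(axis=-1, keepdims=True) + eps)  # pseudo-probabilities
    H = -(p * np.log(p + eps) / np.log(3.0)).sum(axis=-1)
    # Alpha angle of each eigenvector from its first Pauli component.
    alpha_i = np.degrees(np.arccos(np.clip(np.abs(v[..., 0, :]), 0.0, 1.0)))
    alpha = (p * alpha_i).sum(axis=-1)            # probability-weighted mean
    AN = (w[..., 1] - w[..., 2]) / (w[..., 1] + w[..., 2] + eps)
    return H, alpha, AN
```

For a fully depolarized target (equal eigenvalues) this yields H = 1, while a single dominant surface scatterer gives H ≈ 0 and a small alpha.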
The FD polarimetric decomposition is a physically derived model that can be employed to describe the polarimetric backscatter from naturally existing scatterers [43]. For the FD decomposition method, we can extract three orthogonal scattering components, including volume scatter (indicated as vol), double-bounce scatter (indicated as dbl), and first-order Bragg scatter (indicated as odd).

2.3. Texture Enhancement UNet

2.3.1. The Overall Architecture

Figure 2 displays the whole network structure of the TENet model, which is built on the UNet model. This network draws inspiration from the Statistical Texture Learning Network (STL-Net), which is used for semantic segmentation in computer vision [40].
Recent deep learning models for semantic segmentation primarily emphasize the acquisition of high-level features, potentially overlooking abundant details and structural information in PolSAR images. To address this limitation, researchers have introduced skip connections to retain low-level features. However, the extracted low-level features tend to exhibit low quality, particularly when dealing with SAR images affected by speckle noise. Consequently, the model tends to gradually disregard the ambiguous texture details during the training process. Traditional methods for SAR image classification in the intertidal zone often incorporate global textural information, such as intensity histograms. Inspired by this idea, TENet is introduced, utilizing a texture enhancement module (TEM) to effectively preserve and transfer improved low-level features.
As shown in Figure 2, for the TENet, the encoding phase consists of four convolutional layers, while the decoding phase consists of four corresponding deconvolutional layers. The input images have dimensions of 512 pixels for both width and height, while the number of channels can vary depending on the experimental settings. The decoder directly receives the feature maps from the corresponding encoder layer, except for the first layer. We designate the value of F as 64. After the decoding phase, a 1 × 1 convolution is used to generate the prediction mask. Further details regarding the TEM will be presented in the following part of this section.

2.3.2. Texture Enhancement Module

The TEM is adopted to improve the texture details by capturing the global distribution of low-level features. The inputs to the TEM are the feature maps obtained from the initial encoding layer, and its outputs are directly connected to the corresponding decoding layer.
We attempted to concatenate the output of the TEM with the encoding feature maps and then sent them to the decoding layer; however, the results turned out to be worse. A possible explanation is that applying the TEM operation to the encoding feature maps, which already contain local textural information, may result in redundancy. The output of the TEM is already capable of capturing sufficient low-level details without adding noise, making the concatenation unnecessary and potentially detrimental to the overall performance. Therefore, we do not use the plain skip connection in the TEM.
The input feature maps of the TEM are represented as $\mathbf{A} \in \mathbb{R}^{H \times W \times C}$, where H, W, and C denote the height, width, and number of channels of the feature maps, respectively. To obtain the global average feature map $\mathbf{g} \in \mathbb{R}^{1 \times 1 \times C}$, global average pooling is applied to each feature map. The cosine similarity between $\mathbf{g}$ and $\mathbf{A}$ is then computed for every pixel in the feature map and is denoted as follows:

$$S_{i,j} = \frac{\mathbf{g} \cdot \mathbf{A}_{i,j}}{\|\mathbf{g}\|_2 \, \|\mathbf{A}_{i,j}\|_2}$$

where $\mathbf{A}_{i,j}$ represents the feature vector of each pixel and $S_{i,j}$ its cosine similarity, with $i \in [1, W]$, $j \in [1, H]$.
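A minimal NumPy sketch of this similarity computation (the function name and the stabilizing epsilon are our own):

```python
import numpy as np

def similarity_map(A):
    """Cosine similarity S between each pixel feature A[i, j, :] and the
    global-average feature g. A has shape (H, W, C); S has shape (H, W)."""
    g = A.mean(axis=(0, 1))                            # global average pooling
    num = (A * g).sum(axis=-1)                         # g . A_ij per pixel
    den = np.linalg.norm(A, axis=-1) * np.linalg.norm(g) + 1e-12
    return num / den                                   # values lie in [-1, 1]
```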
After reshaping $\mathbf{S}$ into $\mathbb{R}^{H \times W}$, it is divided into N levels, denoted as $\mathbf{L} = \{L_1, L_2, \ldots, L_N\}$, by equally partitioning N points between the minimum and maximum values of $\mathbf{S}$, so that the N levels span the full range of values in $\mathbf{S}$.
To obtain the quantization encoding matrix $\mathbf{E} \in \mathbb{R}^{H \times W \times N}$, the values $S_i$ ($i \in [1, H \times W]$) are quantized with N functions:

$$E_{i,n} = \begin{cases} 1 - |L_n - S_i| & \text{if } -\dfrac{0.5}{N} \le L_n - S_i < \dfrac{0.5}{N} \\ 0 & \text{otherwise} \end{cases}$$

where n indexes the nth quantization level $L_n$. In a departure from the conventional argmax operation or the binary-based one-hot encoding approach, this quantization encoding strategy circumvents the issue of gradient disappearance that may arise during backpropagation. With the quantization encoding matrix $\mathbf{E}$ at hand, we can proceed to produce the quantization counting map $\mathbf{C}$.
Specifically, we concatenate each level $L_n$ with the normalized count of $\mathbf{E}$ over all pixels to obtain the quantization counting map $\mathbf{C} \in \mathbb{R}^{N \times 2}$:

$$\mathbf{C}_n = \mathrm{Concat}\left( L_n,\ \frac{\sum_{i=1}^{H W} E_{i,n}}{\sum_{n=1}^{N} \sum_{i=1}^{H W} E_{i,n}} \right)$$

where $\mathrm{Concat}$ refers to the concatenate operation in the channel dimension.
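The quantization encoding and counting steps can be illustrated as follows. This is a NumPy sketch of the two equations above, not the network code itself; the flattening order and the epsilon guard are our own assumptions.

```python
import numpy as np

def quantization_count(S, N=8):
    """Quantization encoding E and counting map C from a similarity map S.
    E has shape (H*W, N); C has shape (N, 2), pairing each level with its
    normalized pixel count."""
    s = S.reshape(-1)                                   # flatten to H*W values
    levels = np.linspace(s.min(), s.max(), N)           # N levels spanning S
    d = levels[None, :] - s[:, None]                    # L_n - S_i per (i, n)
    inside = (d >= -0.5 / N) & (d < 0.5 / N)            # quantization condition
    E = np.where(inside, 1.0 - np.abs(d), 0.0)          # soft one-hot encoding
    counts = E.sum(axis=0)
    C = np.stack([levels, counts / (counts.sum() + 1e-12)], axis=1)
    return E, C
```

In the network, C is subsequently lifted by the MLP and fused with the global feature g, as described next.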
Since $\mathbf{C}$ encodes the relative statistics of $\mathbf{A}$, we can obtain the absolute statistics $\mathbf{D}$ by further concatenating the global average feature $\mathbf{g}$. To this end, $\mathbf{g}$ is upsampled to $\mathbb{R}^{N \times C}$, and $\mathbf{C}$ is passed through a multilayer perceptron (MLP) to increase its dimension. The final $\mathbf{D}$ can be expressed as follows:

$$\mathbf{D} = \mathrm{Concat}(\mathrm{MLP}(\mathbf{C}), \mathbf{g})$$
Similar to traditional histogram equalization methods, the statistics map $\mathbf{D}$ requires the reconstruction of quantization levels. To achieve this, we employ a graph reasoning module that performs relation reasoning using a graph convolutional network (GCN) in the interaction space:

$$\mathbf{X} = \mathrm{Softmax}\left(\phi_1(\mathbf{D})^{T} \cdot \phi_2(\mathbf{D})\right)$$

$$\mathbf{L} = \phi_3(\mathbf{D}) \cdot \mathbf{X}$$

where $\mathbf{L}$ denotes the reconstructed levels, and the operations $\phi_1$, $\phi_2$, and $\phi_3$ are performed using $1 \times 1$ convolutions. The GCN structure here is designed to work as a self-attention mechanism. The final output is obtained by projecting the quantization encoding map $\mathbf{E}$ onto $\mathbf{L}$ through matrix multiplication:

$$\mathbf{R} = \mathbf{L} \cdot \mathbf{E}$$

The final output of the TEM is $\mathbf{R}$ reshaped to $\mathbb{R}^{C_2 \times H \times W}$.

3. Dataset

3.1. Study Area

As shown in Figure 3, the study area is situated in the northern region of the German Wadden Sea, specifically between the islands of Amrum and Föhr. This region is part of the largest coherent intertidal area globally, known as the Wadden Sea, spanning over 500 km along the North Sea coast of the Netherlands, Denmark, and Germany [44].
The dominant surface types in this area are sediments, which can be either pure or mixed with mud. The distribution of these sediments is strongly influenced by local hydrodynamic forces. Additionally, the region also includes vegetated areas and bivalve beds, primarily consisting of Pacific oysters, cockles, and blue mussels. To achieve a fine-grained classification, the surfaces in the study area are categorized into seven types: land, water, sediments, bright sands/beach, bivalves, seagrass, and thin coverage of vegetation or bivalves. The thin coverage of vegetation or bivalves denotes areas where the vegetation or bivalves are sparsely distributed or occupy a smaller portion of the overall area.
Figure 4 illustrates a classification map of the study area. The classification for the bright sands/beach, sediments, and bivalves is based on Landsat 8 OLI data collected between 2014 and 2016. The classification of seagrass and bivalves is based on SPOT-4 data obtained in August 2015 and April 2016. The information derived from Landsat 8 OLI is refined through a manual postprocessing step that involves adjusting the data using the identified locations of the bivalves in the SPOT-4 data. This postprocessing step ensures that the derived information aligns more accurately with the actual locations of the bivalves.
The entire classification map, with dimensions of 1187 × 1699 pixels, was divided into three sections for our study. The areas shown in Figure 4a,b were allocated for training purposes, while Figure 4c was reserved for testing. This division ensured that approximately 70% of the data were used for training, while the remaining 30% were used for testing, thereby preventing any data leakage in the testing process.
It is worth noting that the bivalves, seagrass, and thin coverage classes are very limited in both the training and testing sites. Additionally, the beach class includes only a small number of samples in the testing sites, despite accounting for a relatively higher proportion in the training site. These imbalances in data distribution and the differences between the training and testing datasets may potentially have an adverse impact on the final classification results.

3.2. PolSAR Data

Two Single-Look Complex (SLC) SAR images were used for the study area, with pixel sizes ranging from 1 m × 1 m to 5 m × 5 m. Figure 5 displays VV-polarization images obtained from RS2 and ALOS2 on 24 December 2015 and 29 February 2016, respectively. Further details regarding the SAR data can be found in Table 1. The sediments and habitats on the surface of intertidal zones can only be detected when they are exposed during low tide. Consequently, it is necessary to use only SAR data acquired within a time window of 60 min before and 60 min after low tide [45].
In these images, the beach and seagrass classes are observed as dark patches, which could be attributed to the residual water present, resulting in a smooth surface. Conversely, the bivalves contribute to a rougher intertidal surface, leading to their appearance as bright patches.

4. Experiments

4.1. Experimental Setup

All experiments were conducted in PyTorch on a GeForce RTX 2080 Ti GPU. The models were trained using the Adam optimizer with a momentum of 0.9 and a weight decay of 0.0005. The initial learning rate was set to 0.0001. For our multiclass segmentation task, the common cross-entropy loss was adopted.
The training sites shown in Figure 4a,b were randomly cropped into patches of size 512 × 512. Random flipping augmentation was applied to the training dataset. The models were trained with a batch size of 4 for approximately 10,000 steps. Under this setup, the training of the TENet model typically requires up to 3 h.
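The cropping and flipping augmentation described above can be sketched as follows (the function name and the RNG handling are our own):

```python
import numpy as np

def random_patch(image, mask, size=512, rng=None):
    """Randomly crop a (size x size) training patch from a site and apply
    random horizontal/vertical flips. `image` is (H, W, C); `mask` is (H, W)."""
    rng = rng or np.random.default_rng()
    H, W = mask.shape
    y = rng.integers(0, H - size + 1)                  # random top-left corner
    x = rng.integers(0, W - size + 1)
    img = image[y:y + size, x:x + size].copy()
    msk = mask[y:y + size, x:x + size].copy()
    if rng.random() < 0.5:                             # horizontal flip
        img, msk = img[:, ::-1], msk[:, ::-1]
    if rng.random() < 0.5:                             # vertical flip
        img, msk = img[::-1], msk[::-1]
    return img, msk
```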

4.2. Evaluation Metrics

We assess the semantic segmentation results of different methods using five metrics: per-class F1 score ( F 1 ), mean F1 score ( m F 1 ), mean intersection over union ( m I o U ), average accuracy ( A A ), and overall accuracy ( O A ).
By comparing the reference classifications and prediction results, we can generate the confusion matrix:
$$\mathbf{P} = (p_{ij}) \in \mathbb{N}^{k \times k}$$

where $p_{ij}$ represents the number of pixels that belong to class i and are classified as class j. The total number of classes is denoted as k, which in our case is seven. Notably, $p_{ii}$ represents the number of correctly classified pixels. The average precision (P) and recall (R) can be calculated as follows:
$$P = \frac{1}{k} \sum_{i=1}^{k} \frac{p_{ii}}{\sum_{j=1}^{k} p_{ji}}, \qquad R = \frac{1}{k} \sum_{i=1}^{k} \frac{p_{ii}}{\sum_{j=1}^{k} p_{ij}}$$
The F 1 score is a harmonic mean of P and R, which is particularly useful for imbalanced classes. F 1 can be computed as follows:
$$F1 = \frac{2 \times P \times R}{P + R}$$
The m F 1 is computed by averaging the individual F1 scores for each class.
The m I o U is a stringent metric used for evaluating image segmentation. It considers every pixel and is calculated as follows:
$$\mathrm{mIoU} = \frac{1}{k} \sum_{i=1}^{k} \frac{p_{ii}}{\sum_{j=1}^{k} p_{ij} + \sum_{j=1}^{k} p_{ji} - p_{ii}}$$
The O A is the ratio of correctly classified pixels to the total number of pixels in the testing set. It is given by the following:
$$\mathrm{OA} = \frac{\sum_{i=1}^{k} p_{ii}}{\sum_{i=1}^{k} \sum_{j=1}^{k} p_{ij}}$$
The A A is the average accuracy across all classes:
$$\mathrm{AA} = \frac{1}{k} \sum_{i=1}^{k} \frac{p_{ii}}{\sum_{j=1, j \ne i}^{k} p_{ij} + p_{ii}}$$
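All five average metrics can be computed directly from the confusion matrix. The sketch below follows the equations above: mF1 averages the per-class F1 scores, and AA equals the mean per-class recall (the epsilon guards are our own addition).

```python
import numpy as np

def segmentation_metrics(conf):
    """mF1, mIoU, OA, and AA from a k x k confusion matrix whose entry
    conf[i, j] counts class-i pixels predicted as class j."""
    conf = conf.astype(float)
    eps = 1e-12
    tp = np.diag(conf)                               # correctly classified p_ii
    precision = tp / (conf.sum(axis=0) + eps)        # per-class precision
    recall = tp / (conf.sum(axis=1) + eps)           # per-class recall
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (conf.sum(axis=0) + conf.sum(axis=1) - tp + eps)
    return {"mF1": f1.mean(), "mIoU": iou.mean(),
            "OA": tp.sum() / conf.sum(), "AA": recall.mean()}
```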

4.3. Comparison of Results

We compare our TENet model with two classic semantic segmentation models, Deeplab V3 Plus [46] and UNet [47]. Two frameworks specifically designed for SAR image semantic segmentation, HR-SARNet [31] and TL-FCN [29], are also included as control groups. To ensure a fair comparison, we maintain consistent hyperparameters across all models, including learning rate, batch size, optimizer, etc.
Note that we used an 11-channel input for all models, consisting of 3 CP component channels (H, AN, α), 4 intensity channels (HH, HV, VH, VV) for RS2 data, and 4 intensity channels (HH, HV, VH, VV) for ALOS2 SAR data. (When the reciprocity theorem holds, the HV and VH channels can be substituted with their average value, reducing the four polarimetric channels to three; this can be tested, for instance, by applying the GLRT criterion devised in [48].) This configuration was chosen based on the results of our ablation study.
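Assembling the 11-channel input can be sketched as below; the per-channel standardization is a common, assumed preprocessing choice rather than one stated in the text.

```python
import numpy as np

def stack_input(cp, rs2_int, alos2_int):
    """Assemble the 11-channel model input: 3 CP components (H, AN, alpha)
    + 4 RS2 intensities + 4 ALOS2 intensities, concatenated along the
    channel axis, then standardized per channel (assumed choice)."""
    x = np.concatenate([cp, rs2_int, alos2_int], axis=-1)   # (H, W, 11)
    mean = x.mean(axis=(0, 1), keepdims=True)
    std = x.std(axis=(0, 1), keepdims=True) + 1e-12
    return (x - mean) / std
```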
Table 2 presents the quantitative results obtained by different models. In general, we observe that our TENet model achieves the highest performance in terms of m F 1 , m I o U , and A A , and is only marginally (0.30%) below Deeplab V3 Plus in terms of O A .
Due to the limited number of training and testing pixel samples for classes of seagrass, bivalves, beach, and thin coverage, their metrics are not as good as those of land, water, and sediments. The features of the low-accuracy classes are also more difficult to learn.
Considering the extreme sample imbalance of our sediment and habitat classification task, we find that the A A metric is more suitable for assessing the model’s capabilities, compared with the overall accuracy O A . In fact, the visualization results presented in Figure 6 indicate that the Deeplab V3 Plus model tends to overfit on the land, water, and sediments classes.
Compared with the basic framework UNet, our proposed TENet model demonstrates improvements in mean metrics m F 1 , m I o U , A A , and O A by 1.02%, 1.02%, 0.82%, and 0.91%, respectively. When contrasted with the basic skip connections in UNet, integrating the TEM enhanced the model’s performance, providing clear evidence of the TEM’s effectiveness over plain skip connections.
It is worth mentioning that the basic UNet model outperforms the more complex HR-SARNet and TL-FCN models in terms of mF1, mIoU, and AA. This observation suggests that our data may not match the very-high-resolution conditions HR-SARNet was designed for, and that the additional FCN branches in TL-FCN do not contribute significantly to our task.
To intuitively demonstrate the superiority of TENet, we present the prediction masks of the testing sites in Figure 6. A comparison between the UNet baseline and our proposed model reveals that TENet exhibits the ability to predict relatively smoother but more precise results. This improved performance can be attributed to the effect of the TEM.
We note that RS2 and ALOS2 images are acquired at different times during various tide cycles, leading to variations in environmental backgrounds (such as wind speed and weather conditions) and water levels within the same area. These factors, in addition to the frequency difference, can also impact the radar backscatter recorded by the RS2 and ALOS2 sensors.

4.4. Ablation Study

We comprehensively evaluated our TENet model on different input sources. The same set of hyperparameters is adopted across all experiments for a fair comparison.

4.4.1. Multiband Input

We first investigated the multiband design choices of the RS2 (C band) and ALOS2 (L band) data. For each band, we concatenated the three CP components and four intensity channels as input. There are five design choices in Table 3 and Figure 7.
RS2 and ALOS2 denote the two different 7-channel SAR inputs. The "Training" keyword indicates which data are fed to TENet during training, and the "Testing" keyword indicates which data are used for performance testing. The "+" operation between RS2 and ALOS2 means that they are concatenated in the channel dimension.
The quantitative results are shown in Table 3. The "Training: RS2 + ALOS2, Testing: RS2 + ALOS2" setting obtains the best results in all average metrics, which proves that a combination of SAR data from different bands indeed helps the model in learning. The ALOS2 data performs much better than the RS2 data. A possible explanation is that the L-band ALOS2 sensor emits longer waves with stronger penetration ability, which is important for classifying cover types in the intertidal zone such as thin coverage.
The results of the “Training: ALOS2, Testing: ALOS2” setting are very close to our best results. Adding RS2 data makes the performance of seagrass, bivalves, and thin coverage classes drop slightly. More complex multiband fusion networks can be designed for better feature extraction. When the training and testing data are from different bands, a dramatic drop happens in all the metrics. This result shows that SAR image features of different bands reflect different backscatter mechanisms.
The visualization results of the five design choices are shown in Figure 7. The seagrass, bivalves, and thin coverage classes nearly disappear when training and testing on the RS2 setting, whereas training and testing on the ALOS2 setting yields prediction maps closer to those of our proposed method. The models trained and tested on data from different bands confuse the dominating pixels and lose the ability to detect the bivalves and thin coverage classes.

4.4.2. Multipolarization Input

We investigate the design choices of a combination of multipolarization SAR inputs. For every possible combination, we use both RS2 and ALOS2 data as input. There are seven design choices in Table 4 and Figure 8.
The keyword “I or FD or CP” signifies that we only use either four intensity channels or three FD components or three CP components as the TENet input. The FD and CP components can be combined using band concatenation as “FDCP” or unified with the four intensity channels separately as “FDI” or “CPI”. In the final control group, we utilize intensity channels, FD components, and CP components together as “FDCPI”.
Table 4 presents the quantitative results of the seven comparison groups. Among them, the “CPI” setting demonstrates the best overall performance. In theory, the “FDCPI” setting contains the most comprehensive information compared with other settings. However, it only shows a slight improvement (0.29%) in the sediments class compared with “CPI”. One possible explanation for this is that the concatenation method employed for fusing intensity and polarimetric decomposition information is too simplistic. This information may interfere with each other during feature extraction, leading to limited performance gains. In general, the different combinations of FD components, CP components, and intensity channels enhance the final average metrics compared with using them individually.
Figure 8 illustrates the visualization results of the seven design choices. The “CP” setting demonstrates proficiency in identifying the sediments class but struggles to detect the thin coverage class, aligning with the quantitative results. The “FDCP” setting significantly aids in identifying bivalve areas. One possible explanation is that polarization characteristics can better reflect the features of specific classes, such as bivalves, compared with intensity information. These results further validate the importance of incorporating different polarimetric decomposition components as the model inputs.

5. Discussion

The TENet model can obtain excellent classification results for the classes land, water, and sediments, but it fails to perform effectively in certain specific categories. One potential reason is that it is extremely challenging to learn the intricate characteristics of seagrass and bivalves with a limited amount of training data.
In the future, we can enlarge our dataset and adopt few-shot learning strategies to help achieve better classification results. In addition, more appropriate fusion mechanisms for multiband and multipolarization SAR data can be designed. Additional polarimetric decomposition components can be incorporated to differentiate sediments and habitats in intertidal flats. Furthermore, the combination of multiple sensors, such as SAR and multispectral data, holds great promise for significantly improving the performance of the model and realizing full-time all-weather operation in future intertidal monitoring.

6. Conclusions

In this paper, we propose a TENet model for the classification of sediments and habitats in intertidal zones. We evaluate the proposed model using fully polarimetric SAR data from ALOS2 (L-band) and RS2 (C-band), and our classification results are generated with higher accuracies (where sediments are further classified into bright sands/beach and other sediments, and habitats are further subdivided into bivalves, seagrass, and thin coverage of vegetation or bivalves).
The experimental results show that the incorporation of the TEM improves the performances by explicitly learning global texture information. The TENet model exhibits improved classification accuracies across all average metrics in comparison with the UNet baseline. The visualization results further validate that TENet is capable of offering more precise locations.
The comparative experiments verify the effectiveness and potential of multiband and multipolarization systems for classification tasks in intertidal zones. By considering information from multiple bands, the TENet model can leverage the complementary nature of SAR data to improve classification results; the best results are obtained when ALOS2 and RS2 data are combined. Among the multifrequency inputs, the “CPI” setting achieved the highest average performance of the seven settings tested, and the CP components proved more effective than the FD components in our framework on all average metrics. Overall, our research demonstrates the effectiveness of the TENet model and emphasizes the importance of considering global texture, multiband, and multipolarization SAR information for the accurate classification of sediments and habitats in intertidal zones.
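The TEM itself learns texture statistics end-to-end, but the kind of global texture information it exploits can be illustrated with a classical Haralick-style descriptor. The sketch below computes gray-level co-occurrence (GLCM) contrast for a single pixel offset; the quantization depth and offset are illustrative choices, not the paper's implementation:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Global GLCM contrast for horizontally adjacent pixels.

    Illustrative only: the TEM learns such texture statistics
    end-to-end, whereas this hand-crafted measure fixes the
    quantization and the (0, 1) pixel offset.
    """
    # Quantize the image into `levels` gray levels.
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)
    # Accumulate the co-occurrence matrix for offset (0, 1).
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    # Contrast = expected squared gray-level difference of neighbours.
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))
```

A horizontally uniform texture gives zero contrast, while a checkerboard of two gray levels gives the maximum; SAR textures of different sediment and habitat surfaces fall in between.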

Author Contributions

D.Z., W.W. and M.G. conceived and designed the experiments; D.Z. performed the experiments; M.G. provided the SAR data; W.W., M.G. and H.Z. analyzed the data; W.W. and M.G. aided in revising the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the China Scholarship Council (CSC) for its financial support. This study was supported by the Youth Innovation Promotion Association CAS, grant number 2023137.

Data Availability Statement

The authors do not have permission to share data.

Acknowledgments

The authors would like to thank the European Commission and European Space Agency (ESA) for providing the Sentinel data. The proposed work was conducted during Di Zhang’s Ph.D. period at the University of Hamburg.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Processing diagram for TENet. The first step preprocesses the SAR and optical sensor data; the preprocessing of the SAR images and of the classification maps is displayed in blue and green blocks, respectively. The second step feeds the processed results to the TENet model to obtain the test results. The training and testing phases of TENet are displayed in yellow blocks.
Figure 2. Illustration of the overall architecture of TENet. The spatial resolution of the input SAR image is 512 × 512 pixels. TENet consists of three parts: an encoder to extract features, a decoder to generate the semantic segmentation mask, and a TEM that replaces the plain skip connection in the first layer to incorporate global texture information.
Figure 3. The region of interest (indicated by the blue rectangle in the larger map) is situated along the German North Sea coast, east of the island of Amrum and southwest of the island of Föhr.
Figure 4. The classification map of the study area with dimensions of 1187 × 1699 pixels is divided into three sections: (c) is utilized for training purposes with dimensions of 1187 × 1187 pixels, while (a,b) are reserved for testing with dimensions of 512 × 512 pixels each. The surfaces in the study area are categorized into six types, and their classification information is derived from Landsat 8 OLI data and SPOT-4 data (©Brockmann Consult 2020).
Figure 5. SAR images of the study area captured shortly after the low tide: (a) RS2 VV-polarization channel; (b) ALOS2 VV-polarization channel. RS2: ©MacDonald, Dettwiler and Associates Ltd. 2015; ALOS2: ©JAXA 2016.
Figure 6. Comparison of segmented maps obtained by different models on testing areas. Pink rectangles highlight some locations where the proposed TENet model produces finer segmentation predictions.
Figure 7. Comparison of segmented maps obtained by different training datasets and testing datasets on testing areas. Pink rectangles highlight some locations where the proposed method (Training: RS2+ALOS2, Testing: RS2+ALOS2) produces finer segmentation predictions.
Figure 8. Comparison of segmented maps obtained by different input channels on testing areas. Pink rectangles highlight some locations where the proposed method (Input: CPI channels) produces finer segmentation predictions.
Table 1. SAR acquisition information. Water levels are measured at Wittdün.

| Sensor/Band | Date/Time | Low Tide Time/Water Level | Water Level |
|---|---|---|---|
| RS2/C | 24 December 2015/05:43 UTC | 05:25 UTC/−103 cm | −94 cm |
| ALOS2/L | 29 February 2016/23:10 UTC | 23:46 UTC/−176 cm | −171 cm |
Table 2. Comparing quantitative results of different semantic segmentation models for intertidal sediment and habitat classification. The first seven columns give per-class F1 scores (%).

| Model | Landmask | Seagrass | Bivalves | Beach | Water | Sediments | Thin Coverage | mF1 (%) | mIoU (%) | AA (%) | OA (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DeeplabV3+ | 97.49 | 18.09 | 0.28 | 3.37 | 79.73 | 78.74 | 0.00 | 39.67 | 34.02 | 40.21 | 84.25 |
| UNet | 96.39 | 13.83 | 3.18 | 15.09 | 79.65 | 77.23 | 3.09 | 41.21 | 34.41 | 42.87 | 83.04 |
| HR-SARNet | 96.31 | 18.39 | 10.05 | 3.99 | 78.91 | 78.32 | 0.00 | 40.85 | 34.27 | 41.80 | 83.14 |
| TL-FCN | 95.82 | 9.05 | 9.30 | 16.08 | 80.17 | 77.25 | 0.00 | 41.09 | 34.31 | 42.52 | 83.01 |
| TENet | 97.11 | 18.87 | 2.30 | 18.49 | 79.63 | 77.75 | 1.49 | 42.23 | 35.43 | 43.69 | 83.95 |
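The aggregate columns (mF1, mIoU, AA, OA) reported here follow the usual definitions: mF1 and mIoU average the per-class F1 and intersection-over-union, AA averages the per-class recall, and OA is the fraction of correctly classified pixels. A minimal sketch from a confusion matrix, assuming the conventional averaging (the paper does not spell out its exact formulas):

```python
import numpy as np

def segmentation_metrics(conf):
    """Per-class F1 plus mF1, mIoU, AA and OA from a confusion matrix.

    conf[i, j] = number of pixels of true class i predicted as class j.
    """
    conf = conf.astype(float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp   # predicted as class but wrong
    fn = conf.sum(axis=1) - tp   # missed pixels of class
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    iou = tp / np.maximum(tp + fp + fn, 1e-12)
    recall = tp / np.maximum(tp + fn, 1e-12)
    return {
        "F1": f1,
        "mF1": f1.mean(),
        "mIoU": iou.mean(),
        "AA": recall.mean(),          # average accuracy = mean recall
        "OA": tp.sum() / conf.sum(),  # overall accuracy
    }
```

Because mF1, mIoU, and AA average over classes, rare classes such as thin coverage pull these metrics far below OA, which explains the gap between the last four columns.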
Table 3. Comparing quantitative results of different train and test dataset inputs for intertidal sediment and habitat classification. The seven class columns give per-class F1 scores (%).

| Train Dataset | Test Dataset | Landmask | Seagrass | Bivalves | Beach | Water | Sediments | Thin Coverage | mF1 (%) | mIoU (%) | AA (%) | OA (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RS2 | RS2 | 93.01 | 15.28 | 0.11 | 11.22 | 76.82 | 73.65 | 0.72 | 38.69 | 31.75 | 40.53 | 79.93 |
| ALOS2 | RS2 | 24.32 | 1.13 | 0.00 | 0.09 | 65.93 | 35.33 | 0.00 | 18.11 | 12.16 | 22.81 | 39.92 |
| RS2 | ALOS2 | 87.21 | 0.94 | 0.00 | 2.53 | 0.42 | 8.18 | 0.00 | 14.18 | 11.94 | 23.03 | 47.30 |
| ALOS2 | ALOS2 | 95.61 | 17.90 | 5.70 | 18.15 | 77.77 | 77.87 | 1.91 | 42.13 | 34.48 | 42.29 | 82.73 |
| RS2+ALOS2 | RS2+ALOS2 | 97.11 | 18.87 | 2.30 | 18.49 | 79.63 | 77.75 | 1.49 | 42.23 | 35.43 | 43.69 | 83.95 |
Table 4. Comparing quantitative results of different input combinations of intensity channels (I), FD components (FD), and CP components (CP) for intertidal sediment and habitat classification. The seven class columns give per-class F1 scores (%).

| Input | Landmask | Seagrass | Bivalves | Beach | Water | Sediments | Thin Coverage | mF1 (%) | mIoU (%) | AA (%) | OA (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| I | 95.65 | 9.39 | 7.36 | 13.31 | 78.31 | 76.93 | 2.94 | 40.56 | 33.69 | 41.02 | 82.48 |
| FD | 96.25 | 10.73 | 5.40 | 8.09 | 77.18 | 77.36 | 2.63 | 39.66 | 33.24 | 39.64 | 82.93 |
| CP | 96.34 | 12.35 | 1.56 | 14.94 | 77.82 | 77.89 | 0.00 | 40.13 | 33.70 | 40.76 | 83.51 |
| FDI | 96.14 | 11.25 | 6.69 | 13.58 | 79.41 | 77.43 | 1.71 | 40.89 | 34.17 | 41.97 | 83.06 |
| FDCP | 94.47 | 18.16 | 11.86 | 14.18 | 66.03 | 72.94 | 0.26 | 39.70 | 31.47 | 42.74 | 78.07 |
| FDCPI | 96.34 | 14.44 | 3.09 | 16.13 | 78.97 | 78.04 | 1.03 | 41.15 | 34.40 | 41.99 | 83.55 |
| CPI | 97.11 | 18.87 | 2.30 | 18.49 | 79.63 | 77.75 | 1.49 | 42.23 | 35.43 | 43.69 | 83.95 |

Share and Cite

MDPI and ACS Style

Zhang, D.; Wang, W.; Gade, M.; Zhou, H. TENet: A Texture-Enhanced Network for Intertidal Sediment and Habitat Classification in Multiband PolSAR Images. Remote Sens. 2024, 16, 972. https://doi.org/10.3390/rs16060972

