Article

Multi-Model Synergistic Satellite-Derived Bathymetry Fusion Approach Based on Mamba Coral Reef Habitat Classification

1 College of Oceanography and Space Informatics, China University of Petroleum, Qingdao 266500, China
2 Lab of Marine Physics and Remote Sensing, First Institute of Oceanography, Ministry of Natural Resources, Qingdao 266061, China
3 Technology Innovation Center for Ocean Telemetry, Ministry of Natural Resources, Qingdao 266061, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(13), 2134; https://doi.org/10.3390/rs17132134
Submission received: 15 May 2025 / Revised: 17 June 2025 / Accepted: 19 June 2025 / Published: 21 June 2025
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)

Abstract

As fundamental geophysical information, high-precision shallow water bathymetry provides critical data support for the utilization of island resources and the delimitation of coral reef protection zones. In recent years, the combination of active and passive remote sensing technologies has led to a revolutionary breakthrough in satellite-derived bathymetry (SDB). Optical SDB extracts bathymetry by quantifying light–water–bottom interactions; the apparent differences in the reflectance of different bottom types in specific wavelength bands are therefore a core component of SDB. In this study, a refined classification was performed for the complex seafloor sediment and geomorphic features of coral reef habitats, and a multi-model synergistic SDB fusion approach constrained by coral reef habitat classification, based on the deep learning framework Mamba, was constructed. Errors from a single global model were suppressed by exploiting sediment and geomorphic partitions, as well as the complementary accuracy of different models. Based on Sentinel-2 multispectral remote sensing imagery and Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) active spaceborne lidar bathymetry data, wide-range, high-accuracy coral reef habitat classification results and bathymetry information were obtained for the Yuya Shoal (0–23 m) and Niihau Island (0–40 m). The results showed that the overall Mean Absolute Errors (MAEs) in the two study areas were 0.2 m and 0.5 m, the Mean Absolute Percentage Errors (MAPEs) were 9.77% and 6.47%, respectively, and R2 reached 0.98 in both areas. The estimated error of the SDB fusion strategy based on coral reef habitat classification was reduced by more than 90% compared with classical SDB models and single machine learning methods, thereby improving the capability of SDB in complex geomorphic ocean areas.

1. Introduction

Shallow water bathymetry is spatial data that describes the physical conditions of the oceans and is critical for vessel navigation, ocean protection, farm area management, and marine infrastructure development. Bathymetry also provides key support for the early warning of marine hazards and the exploration of marine resources [1,2,3]. The diversity and dynamics of benthic habitats in shallow waters, such as seagrasses and biological reefs, are of great importance to aquatic ecosystems and human societies [4,5]; these habitats maintain the stability of ecosystems through biogeochemical cycling, energy transfer, and habitat construction, as well as providing breeding grounds and food sources for economic species [6,7,8]. Bathymetry directly determines key environmental parameters, such as light intensity, temperature gradient, and sediment type, which in turn affect the spatial distribution of benthic biomass, community structure, and ecological functions, including photosynthesis and nutrient circulation [9,10,11,12]. Therefore, the acquisition of bathymetric data contributes to understanding the structure and function of ocean ecosystems, providing a scientific basis for evaluating the health status of coral reef ecosystems, protecting ocean environments, and promoting sustainable development.
Although acoustic bathymetry can achieve reliable results with centimeter-level accuracy, it is labor- and material-intensive [13,14]. With significant improvements in the resolution of remote sensing satellite sensors, multi-source data have facilitated the continuous advancement of optical remote sensing for bathymetry. SDB is a technique for inverting bathymetry at multiple scales through various types of satellite observations, and it has become a mainstream tool in bathymetry systems due to its strong coverage capability, time efficiency, low measurement cost, and well-developed series of effective models and methods [15,16,17,18,19,20,21,22]. The rationale for SDB is based on the penetration of visible light into the water column. Light is absorbed and scattered by water molecules, colored dissolved organic matter, suspended particles, and other substances and then reflected to satellite sensors, providing information on depth, the optical properties of the water column, and the bottom sediment. Satellite multispectral and hyperspectral imagery has been widely used in various types of SDB models to estimate shallow water depths [23,24]. However, in situ bathymetric information is crucial for models used in specific regions, and the lack of a priori bathymetric information may limit their applications. In particular, machine learning and deep learning models, which have gained popularity in recent years, typically require a substantial amount of prior bathymetry information to construct models and enhance accuracy [25,26].
The spaceborne single-photon lidar ICESat-2, launched by NASA, has enabled the acquisition of centimeter-scale underwater terrain signals, bringing revolutionary technological breakthroughs for SDB research and filling the gap left by traditional surveying and earlier SDB techniques, which cannot acquire in situ bathymetry data in complex terrain areas [27,28,29,30,31,32]. The combination of ICESat-2 with passive remote sensing data can efficiently generate large-scale seafloor topographic maps, dynamically monitor changes in the geomorphology of inland lakes and coastal zones, and provide time-sensitive, low-cost global data support, promoting the quantitative development of bathymetric surveys of shallow oceans and benthic habitat research [33,34,35,36,37]. Furthermore, the new level 3A data (ATL24 product) will provide automatically processed bathymetry on a global scale. This will be incorporated into global bathymetric mapping products to assess global nearshore morphologic variation [38,39].
Passive optical remote sensing observation has several advantages, including wide coverage, fast update periods, and low cost. Active optical remote sensing, in turn, is characterized by high bathymetric data density and excellent detection accuracy. Combining the two provides an efficient route to large-scale, high-precision bathymetric remote sensing. In optical SDB research, due to the intrinsic differences in the reflectance attenuation properties of various seafloor sediments (e.g., coral reefs and sand) within the visible to near-infrared wavelength bands [40,41], the bathymetry of coral reef regions is highly spatially heterogeneous and environmentally complex. It may be challenging to eliminate the confounding effects of signal heterogeneity caused by sediment heterogeneity when a single model is applied to a whole region. Hence, partitioned models based on seafloor bottom types can significantly improve inversion accuracy and physical interpretability [35,42]. Current deep learning classification methods, such as convolutional neural networks and Transformers, struggle to balance long-range spatial–spectral dependency modeling with computational efficiency, owing to limited local receptive fields or quadratic computational complexity. In contrast, the emerging Mamba model achieves efficient modeling of long sequence data with linear complexity and can adaptively fuse multi-scale spatial and spectral features. It is therefore well suited to precisely capturing the subtle differences between classes in complex seafloor scenarios of coral reef habitats. Additionally, current SDB methods have exhibited a diversified development trend. Compared with traditional single-model methods, multi-model fusion has demonstrated significant performance enhancements by integrating the advantages of various classical SDB models and machine learning algorithms [43].
Accordingly, the primary objective of this study was to utilize a novel deep learning framework, Mamba, to classify the seafloor sediments and geomorphic features of coral reef habitats and to construct an optimal partitioned SDB fusion approach with multiple models based on the classification results. Through the coral reef habitat partitions and the accuracy complementarity of multiple models, the systematic errors generated by a single model might be effectively suppressed. This approach will also reduce the uncertainty of the estimated bathymetric results, enhance the applicability of SDB for complex underwater terrain, and provide a highly robust solution for large-scale bathymetry mapping.

2. Materials and Methods

2.1. Study Area

Two study areas located in different seas were selected for this study, and their locations, along with the corresponding remote sensing data, are shown in Figure 1. The Yuya Shoal in the South China Sea is a large discontinuous coral atoll, approximately 37 km in length from east to west and 19 km at its widest extent from north to south. The atoll is covered with coral reefs, most of which lie 5.5 to 18.3 m below the sea surface. In the middle of the atoll lies a lagoon with water depths of ~45 m. On the east side of the atoll is the wave-swept Xiantou Reef, and on the north side is the Erjiao Reef. On the west side is the Langkou Reef, a long, narrow coral reef oriented east–west.
Niihau Island, the seventh-largest of the Hawaiian Islands, is located in the northwestern part of the archipelago in the central Pacific Ocean and covers an area of ~180 km2. The island is ~29 km long and 10 km wide, with a coastline of around 72 km. Niihau is a flat, mostly arid, lowland island. Because Niihau has been administered as a long-term closed island, the impact of anthropogenic activities on its ecosystem remains negligible.

2.2. Data and Processing

2.2.1. ICESat-2 ATL03 Data

NASA launched the Advanced Topographic Laser Altimeter System (ATLAS) onboard the ICESat-2 platform on 15 September 2018. ATLAS/ICESat-2 employs a highly sensitive single-photon lidar that emits six beams of laser pulses (532 nm) at a repetition frequency of 10 kHz, arranged as three pairs, each containing one strong and one weak beam [44]. The ATL03 data product provides the time, latitude, longitude, and elevation of each photon on the ICESat-2 downlink. This active remote sensing technique, based on a photon-counting system, significantly improves the spatial resolution and vertical accuracy of global surface topography measurements through the statistical analysis of massive photon events. The uncertainty of the ATL03 data is within 4 m horizontally and 10 cm vertically [45].
Since sea waves may cause instantaneous fluctuations in sea surface height, affecting the accuracy of bathymetry measurements, the primary criterion for data selection is a relatively stable sea surface. All of the data were manually inspected to verify reliability. The acquisition times of ICESat-2 data and remote sensing images are usually inconsistent. However, in most shallow sea areas (except for frequently changing estuarine deltas, etc.), seafloor topography varies slowly relative to the time scale of satellite observations (several years). Geomorphological features such as bedrock and coral reefs remain generally stable. Thus, temporal gaps between different datasets typically have minor impacts on SDB research. The ICESat-2 ATL03 data used in this study are shown in Table 1 and were obtained from the NASA Earth Observation Data of the Earth Science Data Systems Program (https://search.earthdata.nasa.gov/, accessed on 20 March 2025).

2.2.2. Sentinel-2 Multispectral Imagery

Sentinel-2 is a high-resolution multispectral imaging satellite that can be utilized for monitoring coastal and other areas. Currently, three satellites, 2A, 2B, and 2C, have been launched, together providing coverage of 13 spectral bands. In this study, considering the spatial resolution requirement for SDB research and the penetration capability of different bands, three bands with a resolution of 10 m, i.e., B2 (490 nm), B3 (560 nm), and B4 (665 nm), as well as the B5 (705 nm) band with a resolution of 20 m (resampled to 10 m), were selected. Blue and green bands penetrate the water column well and can detect relatively deep underwater topographic features. However, sediments such as corals have low reflectance in these bands, mainly due to absorption by photosynthetic and photo-protective compounds [46]. The red and red-edge bands, by contrast, are more sensitive to reflections from sediments such as corals and may provide supplemental signals in very shallow waters, allowing for the effective identification of shallow bottom types and coral boundaries. A higher reflectance at red wavelengths indicates a lack of absorption or the presence of active fluorescence [47]. Moreover, chlorophyll absorption is readily apparent near 675 nm, and the effect of strong near-infrared reflectance is apparent at 700 nm [46]. Despite the strong influence of water absorption, the 705 nm red-edge band, even with its 20 m spatial resolution, is still able to capture the structure of coral reefs in complex underwater environments, and the slightly lower spatial resolution also balances the signal-to-noise ratio of the data.
Sentinel-2 Level-1C products are available from the ESA (https://sentinels.copernicus.eu/sentinel-data-access/sentinel-products/sentinel-2-data-products/collection-1-level-1c, accessed on 11 March 2025). In this study, the atmospheric correction processing of Sentinel-2 imagery utilized the Sen2cor processor provided by the European Space Agency (ESA). The main purpose of Sen2cor is to calibrate the top-of-atmosphere reflectance of Sentinel-2 L1C data in order to generate and deliver bottom-of-atmosphere reflectance products of Sentinel-2 L2A and to achieve a comprehensive calibration of the atmosphere and clouds [48]. The imagery for the two study areas, Yuya Shoal and Niihau Island, was obtained on 4 January 2019 and 28 January 2022, respectively.

2.2.3. Tide Data

For SDB research, in view of the difference in acquisition times between ICESat-2 bathymetric data and optical remote sensing imagery, tidal correction is necessary to avoid introducing additional errors from tidal variations. Tidal correction standardizes all bathymetry data to the instantaneous state at the moment of image acquisition, thereby improving estimation accuracy. Tide data for the study areas were queried according to the acquisition times of the ICESat-2 data and Sentinel-2 data (Yuya Shoal from https://mds.nmdis.org.cn/pages/tidalCurrent.html, Niihau Island from https://tides4fishing.com/, accessed on 28 March 2025). Tide values are calculated from a series of temporal data obtained from mareographs. Tide value variations exhibit a regional pattern, and their characteristics typically show high similarity at small geographical scales. Therefore, tide conditions in the target study areas were inferred from data at the closest stations. The prediction process was adapted from the method of least squares using Foreman's algorithm. The vertical datum is referenced to Mean Lower Low Water (MLLW).
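As an illustration of the arithmetic involved, standardizing lidar depths to the tide stage at image acquisition amounts to removing the difference in water level between the two epochs. The sign convention below (depths positive downward, tide heights above MLLW) is an assumption for illustration, not the authors' exact procedure:

```python
def tide_correct(depths_m, tide_at_lidar_m, tide_at_image_m):
    """Shift lidar depths (positive down, referenced to MLLW) to the tide
    stage at image acquisition.  If the tide was higher at lidar time, the
    lidar saw extra water, so that excess is subtracted.  Sign convention
    is an illustrative assumption."""
    offset = tide_at_lidar_m - tide_at_image_m
    return [d - offset for d in depths_m]
```

For example, a photon depth of 10.0 m measured at a 1.5 m tide stage corresponds to 9.0 m at a 0.5 m image-time tide stage under this convention.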

2.3. Research Methodology

The overall workflow of the multi-model synergistic SDB fusion approach, based on Mamba coral reef habitat classification, is illustrated in Figure 2 and is divided into three main components. The first establishes the Mamba classification model using processed Sentinel-2 imagery and manually interpreted coral reef habitat sample classes. The second processes the active spaceborne lidar data: underwater topographic signals are extracted from the ICESat-2 ATL03 data, with refraction correction and tidal correction applied. The third employs the sediment classification results from the Mamba model and the bathymetric training data to construct multiple bathymetric inversion models; the SDB fusion model is then established from the optimal results for each class. Finally, the fusion model's estimated depths are compared with the bathymetric validation data, and a wide-range, high-precision bathymetric result map is generated.

2.3.1. Coral Reef Habitat Classification Mamba Model

Mamba is a new deep learning architecture that has garnered significant attention over the last two years and is regarded as a powerful challenger to the Transformer. By combining Selective State Space Models (SSMs) and hardware-aware algorithms, Mamba significantly improves data processing efficiency while maintaining or even surpassing the performance of the Transformer [49]. MambaHSI is a new Mamba-based hyperspectral image classification model, marking the first application of the Mamba model in the field of hyperspectral image classification [50]. The model can simultaneously simulate long-range interactions across an image and adaptively integrate spatial and spectral information, allowing it to finely distinguish features with subtle differences between pixels.
The structure of the Mamba-based multispectral coral reef habitat classification model is shown in Figure 3. It is mainly divided into four parts: the input part, the embedding part, the encoding part, and the output part. The inputs are the different bands of the multispectral images and the selected sample classes. The samples are extracted through manual visual interpretation, and a priori classes are delineated based on the characteristics of the different study areas. All samples were divided into model training samples, model validation samples, and samples for accuracy tests, with a ratio of 6:2:2. The embedding part consists of a 1 × 1 convolutional layer, a Batch Normalization (BN) layer, and a Sigmoid-weighted Linear Unit (SiLU) activation function layer, all sequentially connected.
The encoding part consists of three encoder blocks and two average pooling layers connected in an alternating manner. The core of the encoding part is the Encoder block, which contains the Spatial Mamba Block and Spectral Mamba Block, separately performing Mamba processing on the spatial and spectral information of remote sensing images. The spatial and spectral features are then fused with the Spatial–Spectral Fusion Module and optimized by adaptive weighting. The Spatial Mamba and Spectral Mamba blocks employ the same structure, which consists of the Flatten, Mamba, BN Layer, SiLU, and Reshape layers in sequence. The Flatten in Spatial Mamba performs the flattening operation in the spatial dimension and the Flatten in Spectral Mamba performs the flattening operation in the spectral channel dimension to extract spatial and spectral features, respectively.
The output part includes the Segmentation Head and Softmax function, where the Segmentation Head consists of a 1 × 1 convolutional layer, a Batch Normalization (BN) layer, a SiLU activation function layer, and another 1 × 1 convolutional layer, all sequentially connected, which outputs the final classification result. In contrast to the MambaHSI model applied to hyperspectral image classification, this study employed the model on multispectral remote sensing images, which have far fewer bands than hyperspectral images, so no subgrouping of spectral bands is required, and the output is the final image classification result. The classification model was optimized using the Adam algorithm, and the loss function was set to cross-entropy.
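The embedding stage described above (1 × 1 convolution, BN, SiLU) can be sketched in plain numpy: a 1 × 1 convolution over a multispectral image is simply a per-pixel linear map across the channel dimension. This is an illustrative re-implementation under that observation, not the authors' code:

```python
import numpy as np

def silu(x):
    # Sigmoid-weighted Linear Unit: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def conv1x1(x, w, b):
    # x: (H, W, C_in); w: (C_in, C_out); b: (C_out,).
    # A 1x1 convolution is a per-pixel linear map over channels.
    return x @ w + b

def batch_norm(x, eps=1e-5):
    # Normalize each channel over the spatial dimensions
    # (simplified inference-style BN without learned scale/shift).
    mean = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def embed(x, w, b):
    # Embedding part: 1x1 conv -> BN -> SiLU, sequentially connected.
    return silu(batch_norm(conv1x1(x, w, b)))
```

The Segmentation Head follows the same pattern with a second 1 × 1 convolution appended to map features to class scores.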

2.3.2. Multi-Model Synergistic SDB Fusion Approach Based on Coral Reef Habitat Classification

In this study, a multi-model fusion strategy based on coral reef habitat classification is proposed. The core of the approach is to allocate optimal SDB models to the different seafloor sediment and geomorphic partitions. The specific process involves first applying the Mamba model to coral reef habitat classification using multispectral imagery (Section 2.3.1). Then, different classical SDB models and machine learning algorithms are applied to each seafloor sediment and geomorphic partition. The SDB model with the best training accuracy in each class is then selected, and the multi-model synergistic SDB fusion approach is constructed. Ultimately, the fused bathymetric results are formed by mosaicking the optimal bathymetric results from the various classes. The classical models and machine learning algorithms are as follows:
(1) Log-linear model
The multi-band log-linear SDB model was developed based on parameter optimization of the theoretical analytical model [15], and the formula is as follows:
H = a_0 + \sum_{i=1}^{n} a_i \ln\left[ R_{rs}(\lambda_i) - R_v(\lambda_i) \right], \quad i = 1, 2, \ldots, n,

where n denotes the number of wavebands, a_0 and a_i represent the model regression parameters (four bands were selected for parameter calculation in this study), R_{rs}(\lambda_i) represents the processed remote sensing reflectance, and R_v(\lambda_i) is the remote sensing reflectance under the assumption of infinitely deep water.
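Given paired reflectance and lidar depths, the regression parameters of the multi-band log-linear model can be estimated by ordinary least squares. A numpy sketch on synthetic, noiseless data (all values invented for illustration):

```python
import numpy as np

def fit_log_linear(rrs, rv, depths):
    """Fit H = a0 + sum_i a_i * ln(Rrs_i - Rv_i) by ordinary least squares.
    rrs: (N, n_bands) reflectance; rv: (n_bands,) deep-water reflectance;
    depths: (N,) known depths."""
    X = np.log(rrs - rv)                       # log-transformed regressors
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return coef                                # [a0, a1, ..., an]

def predict_log_linear(coef, rrs, rv):
    X = np.log(rrs - rv)
    return coef[0] + X @ coef[1:]
```

On noiseless synthetic data the coefficients are recovered exactly; with real lidar depths the fit is a least-squares approximation.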
(2) Stumpf model
As the multi-band log-linear model requires the specification of remote sensing reflectance in deep water, as well as the determination of multiple regression parameters, Stumpf et al. [16] proposed a two-band log-transformed ratio model:
H = m_1 \frac{\ln\left( n R_{rs}(\lambda_i) \right)}{\ln\left( n R_{rs}(\lambda_j) \right)} - m_0,

where m_1 and m_0 are the model regression parameters, indicating the adjustable slope and the offset at a depth of 0 m, respectively; n is a fixed constant ensuring that the logarithm is positive in all cases, with a range generally in the interval 500–1500 and usually taken as 1000; and i and j represent the blue and green bands, respectively. The model is more robust for optically shallow water in regions with complex bottom variations.
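The two parameters of the band-ratio model can likewise be fitted by least squares against known depths. A numpy sketch on synthetic reflectance (values invented for illustration):

```python
import numpy as np

def fit_stumpf(rrs_blue, rrs_green, depths, n=1000.0):
    """Fit H = m1 * ln(n*Rrs_blue) / ln(n*Rrs_green) - m0 by least squares."""
    ratio = np.log(n * rrs_blue) / np.log(n * rrs_green)
    A = np.column_stack([ratio, -np.ones_like(ratio)])
    (m1, m0), *_ = np.linalg.lstsq(A, depths, rcond=None)
    return m1, m0

def predict_stumpf(m1, m0, rrs_blue, rrs_green, n=1000.0):
    return m1 * np.log(n * rrs_blue) / np.log(n * rrs_green) - m0
```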
(3) Support Vector Regression
Support Vector Regression (SVR) is a regression algorithm based on the support vector machine, offering unique advantages in addressing nonlinear regression problems [51]. It transforms linearly inseparable data in a low-dimensional space into a high-dimensional space for separation through kernel functions, thereby solving the nonlinearity problem in the data. It outperforms neural network models on small sample datasets.
In this study, the processed remote sensing reflectance in four bands was used as the input value, and the bathymetry value was considered as the output value to establish the model. The model kernel function was set to a radial basis kernel function. A grid search method was also employed for parameter optimization, which systematically generated all possible parameter combinations within a specified range of values and evaluated each set of parameters through cross-validation. The penalty coefficient C and the kernel function parameter g needed to be adjusted, as they have essential impacts on model complexity, fit, and generalization ability. Finally, the optimal parameters obtained from the grid search were applied to train the final SVR model.
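The grid search over C and g with cross-validation maps directly onto scikit-learn's GridSearchCV. A sketch on synthetic four-band data (the band values, depth relation, and parameter grid are invented for illustration):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.01, 0.2, size=(200, 4))   # stand-in 4-band reflectance
y = 30.0 * X[:, 1] / (X[:, 0] + X[:, 2]) + rng.normal(0, 0.1, 200)  # toy depth

# Exhaustive grid over the penalty coefficient C and the RBF kernel
# parameter gamma, scored by 5-fold cross-validation.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.1, 1, 10]},
    cv=5,
)
grid.fit(X, y)
best_svr = grid.best_estimator_   # refit on all data with the best C, gamma
```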
(4) Random Forest
The Random Forest (RF) is a widely used machine learning algorithm that consists of multiple decision trees. The algorithm constructs numerous sets of training subsamples with varying differences, and an independent regression tree model is built on each of these subsamples [52]. The splitting rule in RF regression problems is based on minimizing the Mean Squared Error (MSE). An RF can reduce sensitivity to overfitting because each regression tree in the algorithm is trained on a random sample of the training dataset, so that individual trees are not over-trained and the ensemble is less sensitive to anomalous values.
In the present study, the same processed remote sensing reflectance in four bands was used as the input value, and the bathymetry value was used as the output value to establish the model. Parameter optimization was performed using grid search and cross-validation to obtain the optimized parameters, i.e., the number of regression trees and the number of features randomly selected for each tree split.
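The RF parameter optimization described above can be sketched with the same GridSearchCV pattern, searching over the number of trees and the number of features considered per split (synthetic data and grid values are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.uniform(0.01, 0.2, size=(200, 4))   # stand-in 4-band reflectance
y = 25.0 * np.log(X[:, 1] / X[:, 2]) + rng.normal(0, 0.1, 200)  # toy depth

# Cross-validated search over the two RF parameters named in the text:
# n_estimators (number of regression trees) and max_features
# (features randomly considered at each split).
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_features": [1, 2, "sqrt"]},
    cv=3,
)
grid.fit(X, y)
best_rf = grid.best_estimator_
```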
(5) XGBoost
EXtreme Gradient Boosting (XGBoost) is an ensemble machine learning algorithm based on decision trees [53]. The principle is to train CART regression trees using a training set and ground-truth values. By iteratively adding decision trees, the residuals of the preceding model are gradually corrected. When making predictions, the output values of all trees are weighted and accumulated to obtain the final prediction.
In this study, the model was also constructed using the processed four-band remote sensing reflectance and bathymetry values. The model learning rate was set to 0.1, and the maximum depth of each tree was 6. For training, 80% of the samples and 80% of the features of each tree were randomly selected. L2 regularization was added, and an early-stop mechanism was set to dynamically terminate iterations in order to suppress overfitting.
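The configuration above (learning rate 0.1, max depth 6, 80% row and feature subsampling, early stopping) can be sketched with scikit-learn's GradientBoostingRegressor, used here as a widely available stand-in for the xgboost library; note that sklearn's implementation lacks XGBoost's explicit L2 regularization term, so that piece is omitted. All data are synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0.01, 0.2, size=(300, 4))   # stand-in 4-band reflectance
y = 20.0 * X[:, 0] + 10.0 * X[:, 3] + rng.normal(0, 0.05, 300)  # toy depth

model = GradientBoostingRegressor(
    learning_rate=0.1,       # as in the configuration described above
    max_depth=6,             # maximum depth of each tree
    subsample=0.8,           # 80% of samples per tree
    max_features=0.8,        # 80% of features per split
    validation_fraction=0.2, # held-out fraction for early stopping
    n_iter_no_change=10,     # stop when validation loss plateaus
    random_state=0,
)
model.fit(X, y)
```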

2.3.3. DBSCAN Denoising Algorithm

In this study, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), an unsupervised spatial clustering algorithm, was employed to distinguish between underwater terrain photons and noise photons in the ICESat-2 ATL03 data [53]. The core concept of DBSCAN rests on two defined parameters, the neighborhood radius (Eps) and the minimum number of points (MinPts), which are used to aggregate points in a high-density region into arbitrarily shaped clusters through density-reachability and density-connectivity. Photons that do not satisfy the core point condition and are not covered by density propagation are judged to be noise points, enabling cluster division and anomaly separation.
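A toy sketch of this separation using scikit-learn's DBSCAN, where Eps maps to `eps` and MinPts to `min_samples`; the synthetic "photon cloud" (a dense seafloor profile plus sparse background noise) is invented for illustration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic photon cloud: a dense seafloor "line" plus sparse noise photons,
# in (along-track distance, elevation) coordinates.
rng = np.random.default_rng(3)
along_track = rng.uniform(0, 100, 400)
seafloor = np.column_stack([
    along_track,
    -10 + 0.2 * np.sin(along_track / 5) + rng.normal(0, 0.1, 400),
])
noise = np.column_stack([rng.uniform(0, 100, 40), rng.uniform(-30, 0, 40)])
photons = np.vstack([seafloor, noise])

# eps = Eps (neighborhood radius), min_samples = MinPts.
labels = DBSCAN(eps=2.5, min_samples=4).fit_predict(photons)
signal = photons[labels != -1]   # label -1 marks noise photons
```

The dense seafloor photons fall into one or more clusters, while isolated background photons receive the noise label.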
In this study, the MinPts was set to 2–4, and the Eps values were set between 2.5 and 5 based on the characteristics of the different data tracks. After the underwater topographic photons were extracted, a further correction was needed: the ATL03 product does not account for the refraction occurring at the air–seawater interface or the corresponding change in the speed of light, so the measured depth of the underwater photons is deeper than the actual value [27]. Therefore, refraction correction of the obtained underwater photons was necessary to recover the true depth. The Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and goodness of fit (R2) between the processed ICESat-2 lidar bathymetry data and the estimated bathymetry were used as accuracy evaluation indexes in this study.
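The four evaluation indexes have standard definitions and can be computed directly; a small numpy sketch (function name is illustrative):

```python
import numpy as np

def bathy_metrics(estimated, reference):
    """Return (MAE, RMSE, MAPE in %, R^2) between estimated and
    reference depths."""
    e = np.asarray(estimated, dtype=float)
    r = np.asarray(reference, dtype=float)
    mae = np.mean(np.abs(e - r))
    rmse = np.sqrt(np.mean((e - r) ** 2))
    mape = np.mean(np.abs((e - r) / r)) * 100.0   # reference depths must be nonzero
    ss_res = np.sum((r - e) ** 2)
    ss_tot = np.sum((r - r.mean()) ** 2)
    return mae, rmse, mape, 1.0 - ss_res / ss_tot
```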

3. Results

3.1. Results of Coral Reef Habitat Classification

In this study, the seafloor sediment and geomorphology types in the coral reef habitats of the two study areas were discriminated based on visual interpretation. The Yuya Shoal area was divided into five classes: deepwater, lagoon, coral sand, coral detritus, and coral-covered areas. The sediment types and geomorphology around Niihau Island are relatively simple, comprising only three classes: deepwater, coral sand, and coral-covered areas. In addition, since the multispectral remote sensing images were affected by clouds and sea waves, cloud- and wave-covered regions were added as classes to the model. The misjudgment of the remaining types was thus avoided, and the accuracy of both the classification model and the SDB model was improved. Samples containing all classes were selected in the two study areas of Yuya Shoal and Niihau Island, with image pixel counts of 1400 and 1800, respectively. All samples were divided into Mamba model training samples, Mamba model validation samples, and samples for accuracy tests according to the ratio of 6:2:2. After the completion of training, the final coral reef habitat classification results for both areas are shown in Figure 4, and the classification accuracy results are shown in Table 2.
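The 6:2:2 division can be reproduced with a simple shuffled index split; this is a sketch only, since the paper does not specify the exact sampling procedure:

```python
import numpy as np

def split_622(n_samples, seed=0):
    """Shuffle sample indices and split them 60/20/20 into
    train/validation/test index arrays."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_val = int(0.2 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For the 1400 Yuya Shoal pixels this yields 840/280/280 samples.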
Except for the deepwater area, where bottom sediment is not apparent, the coral-covered area is the largest in Yuya Shoal (Figure 4a), showing continuous coverage of the whole atoll. In the western part of the shoal, there are two small lagoons, which are surrounded by coral sand areas with a fan-like extended distribution. On the outside of the northwestern atoll, coral detritus areas are distributed in belts that extend for more than 10 km. In the Niihau Island sea area (Figure 4b), the coral-covered area is also dominant. Continuous coral sand areas exist in the nearshore north-central part of the island and off its southern edge. In addition, coral sand patches are sporadically distributed throughout the coral-covered area.
From the accuracy figures in Table 2, it can be observed that the Mamba model performed well in both study areas, with an overall classification accuracy above 95%. Except for the lagoon area of Yuya Shoal, the classification accuracy reached more than 96% across the different classes. The slightly weaker accuracy of the lagoon area might be attributed to the spectral reflectance characteristics of different lagoon areas being highly similar to those of other classes, e.g., shallow lagoon areas versus coral sand areas and deep lagoon areas versus open deepwater areas, diminishing the sensitivity of the Mamba model to the texture differences. Even so, the classification accuracy of the lagoon area still reached 80%. In general, the Mamba coral reef habitat classification model developed in this study exhibits superior classification performance in complex environments, providing high-confidence baseline data on sediment and geomorphology distribution for coral reef ecosystems.

3.2. Results of Satellite-Derived Bathymetry Fusion Model Construction

3.2.1. ICESat-2/ATL03 Underwater Topography Extraction

A certain amount of known bathymetric information is required to construct the different SDB models described in Section 2.3.2. In this study, the DBSCAN denoising algorithm was employed to separate noise photons from underwater terrain photons in the ICESat-2 ATL03 data. The denoising results for different tracks in the two study areas are presented in Figure 5 and Figure 6. Topographic information above the sea surface is not included in these figures, and some underwater topographic features are absent due to limitations in ICESat-2 detection capability and cloud obscuration.
The Yuya Shoal area exhibited significant spatial heterogeneity and drastic bathymetric gradients due to the complexity of its sediment and geomorphological types. The overall bathymetry spanned 0 to 22.5 m, and shallow water accounted for more than 50% of the total bathymetry values. The high-density ICESat-2 photon point cloud can delineate centimeter-scale fluctuations and variations of coral reef habitats. However, in the peripheral areas and the central large-scale lagoon basin of the atoll, where water is deep and the bottom is unclear, laser pulses suffered attenuation in the water column and reduced reflectance, lowering detection capability, so only fragmented signals could be captured. Consequently, bathymetry information deeper than 20 m was rare. The underwater topography of the seas around Niihau is mostly relatively flat, with wide areas of homogeneous sediment. Favorable water quality and the high reflectance of coral sand sediment in the deepwater area allowed underwater signals to be detected at depths of ~40 m. Bathymetry deeper than 15 m accounted for approximately 20% of the total data.
The underwater topography data detected in both areas were partitioned according to the Mamba classification results. The bathymetric data of each sediment class were divided into 70% training data (a total of 29,022 points in Yuya Shoal and 9958 in Niihau Island) and 30% validation data (a total of 12,379 in Yuya Shoal and 4031 in Niihau Island), ensuring that data were sampled from every bathymetry segment. Histograms of the bathymetric distributions of the training data are shown in Figure 7. The largest amount of bathymetry training data in both study areas was found in coral-covered areas, followed by coral sand areas. The lowest number of training samples was observed in the deepwater area because the 532 nm green laser energy decays exponentially with depth as it propagates through the water column, significantly reducing the number of effective echo photons; precisely separating signal photons from noise photons becomes considerably more difficult under such low signal-to-noise ratio conditions. The five classical models and machine learning methods introduced in Section 2.3.2 were then utilized to construct SDB models for the different bottom types.
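The per-class 70/30 split that draws from every bathymetry segment can be sketched as a depth-stratified partition. The bin count and seed below are illustrative assumptions; the paper does not state how its depth segments were defined for sampling.

```python
import numpy as np

def stratified_split(depths, n_bins=5, train_frac=0.7, seed=0):
    """Split one sediment class's photons 70/30, drawing from every
    depth bin so both sets span the full bathymetric range."""
    rng = np.random.default_rng(seed)
    depths = np.asarray(depths, dtype=float)
    # Equal-width depth bins over this class's depth range
    edges = np.linspace(depths.min(), depths.max(), n_bins + 1)
    bins = np.clip(np.digitize(depths, edges) - 1, 0, n_bins - 1)
    train_idx, val_idx = [], []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        rng.shuffle(idx)
        cut = int(round(train_frac * len(idx)))
        train_idx.extend(idx[:cut])
        val_idx.extend(idx[cut:])
    return np.array(train_idx), np.array(val_idx)
```

Applying this per sediment class (rather than globally) prevents a class whose photons cluster in one depth band from being trained on that band alone.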

3.2.2. Bathymetry Fusion Model Construction

Based on the partitioned bathymetry training data, multiple SDB models for the different sediment and geomorphology types in both study areas were constructed. Scatter plots of estimated depth versus lidar depth values are shown in Figure 8 and Figure 9, along with their MAE, MAPE, RMSE, and R2. In Yuya Shoal, the XGBoost model performed best in the deepwater areas (>10 m), with lower MAE (0.37 m) and MAPE (2.51%) values than the other models, which may be attributed to its superior nonlinear fit to the comparatively simple optical attenuation pattern in deep water. In the lagoon, with depths of 3–18 m, XGBoost again achieved the best estimation accuracy, with an MAE of 0.21 m and a MAPE of 2.83%. In complex benthic environments such as coral detritus areas (<5 m), coral-covered areas (0–23 m), and coral sand areas, the RF model performed optimally, with R2 over 0.97 in all areas, which might be attributable to the effective fusion of heterogeneous reflectance features from multiple sources by its ensemble learning mechanism.
The accuracy indices for Niihau Island further validated this model-selection pattern. The vast majority of depths in the deepwater area were within 20–40 m, and XGBoost similarly had the lowest errors, with an MAE of only 0.31 m and a MAPE of 1.09%. The coral-covered and coral sand areas, within a depth range of 0–30 m, likewise demonstrated the adaptability of the RF method in complex bottom areas, with MAEs of 0.2 m and 0.13 m for the two classes, respectively, and R2 values reaching 0.99.
In addition, comparative analysis of the accuracy results revealed that the classical log-linear and Stumpf SDB models have limitations in coral reef environments, with both exhibiting high error levels. The coral-covered benthic types in particular caused significant errors due to the low reflectance of the bottom sediments. In contrast, the machine learning models were more adaptable in regions with complex variations in bottom type. Consequently, the satellite bathymetry fusion model in this study was constructed from the best-performing model for each seafloor sediment and geomorphology type: the XGBoost model was preferred for deepwater and lagoon areas, while the RF model was employed in complex bottom areas (i.e., coral-covered, coral detritus, and coral sand areas).

3.3. Results of Shallow Water Bathymetry

3.3.1. Accuracy of Bathymetry Estimation

Based on the satellite bathymetry fusion model constructed from the different bottom types in the two study areas, overall bathymetric results were obtained, and their accuracy was assessed using the bathymetry validation data. To compare the accuracy of the fusion SDB model with that of the other models, global SDB models were also established from all classes of training data using the five methods above. Density scatter plots comparing the estimated and lidar bathymetry values were derived (Figure 10 and Figure 11), with the color scale representing the density of the data distribution.
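The four accuracy indexes used throughout this section follow their standard definitions; a compact sketch against the ICESat-2 reference depths is given below (the function name is ours, and depths are assumed strictly positive so MAPE is defined).

```python
import numpy as np

def sdb_metrics(z_true, z_pred):
    """MAE, MAPE (%), RMSE and R^2 of estimated depths against
    lidar reference depths (assumed > 0 for MAPE)."""
    z_true = np.asarray(z_true, dtype=float)
    z_pred = np.asarray(z_pred, dtype=float)
    err = z_pred - z_true
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err) / np.abs(z_true))
    rmse = np.sqrt(np.mean(err**2))
    ss_res = np.sum(err**2)                         # residual sum of squares
    ss_tot = np.sum((z_true - z_true.mean())**2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, mape, rmse, r2
```

Note that MAPE weights shallow-water errors heavily (a 0.2 m error at 1 m depth is 20%), which is why a model can show a favorable MAE yet a large MAPE, as observed for XGBoost in Yuya Shoal.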
For Yuya Shoal (Figure 10), the accuracy at the bathymetry validation points indicated that the fusion bathymetry model was the most accurate, with an overall MAE of 0.2 m and a MAPE of 9.77% in the 0–23 m depth range, reducing the MAPE by more than 70% compared with the classical SDB models. The RMSE was 0.33 m, and R2 reached 0.99. The fusion model also performed well across the different bathymetry ranges. The dense red scatter in the 0–5 m range indicated highly concentrated estimates and an extremely strong correlation between the lidar and estimated depth values. The agreement decreased slightly in the 5–12 m range, with greater dispersion at 12–23 m. Nevertheless, all scatter points were uniformly distributed on both sides of the 1:1 reference line, which represents ideal consistency, and showed a good overall linear trend.
The scatter distributions of the two classical SDB methods were relatively dispersed, with scatter densities mostly below 0.5. Moreover, at depths greater than 15 m, the estimates of both methods displayed significant deviations. Among the machine learning models, the RF and XGBoost models showed good accuracies, likewise with concentrated scatter in shallow water (0–5 m). However, the figure shows that the RF model underestimated depths greater than 11 m: its scatter points lay mainly below the 1:1 line, and several significant outliers were observed at lidar values of ~10 m. The XGBoost model exhibited more dispersed scatter at depths shallower than 13 m; consequently, although its overall MAE and RMSE were favorable, its overall MAPE was relatively large.
For Niihau Island (Figure 11), the fusion bathymetry model again performed excellently. The concentration of bathymetric results in the 0–15 m depth range (density > 0.5) clearly demonstrated the strong agreement between the estimated depth and the lidar depth. As bathymetric values increased, the amount of validation data gradually decreased and the scatter density declined; nevertheless, a good and unbiased overall linear correlation between the estimated and lidar bathymetry values remained. Notably, at depths greater than 30 m, the estimates closely matched the lidar bathymetry values on both sides of the 1:1 line, demonstrating the excellent capability of the fusion model at greater depths. The overall MAE of the fusion model in the 0–40 m depth range was only 0.5 m, and the MAPE was 6.47%, with R2 reaching 0.98. The RMSE was 0.73 m, more than 2 m lower than that of the classical bathymetry models.
The two classical SDB models performed worse, especially the log-linear model, whose densest scatter points (density ~0.6) lay above the 1:1 line; its estimated bathymetry was significantly underestimated at depths greater than 20 m. The scatter points of all three machine learning methods were comparatively concentrated in the 0–12 m depth range. The SVR model underestimated depths greater than 30 m, with greater overall scatter deviation than the RF and XGBoost models, and the figure shows that both the RF and XGBoost models had significant outliers at depths greater than 20 m.
In general, the traditional bathymetry models exhibited the highest errors in the two complex study areas, while the single machine learning models showed limitations in specific bathymetry intervals. Based on high-precision coral reef habitat classification by the Mamba model and the fusion of the optimal models, the overall accuracies in both study areas were significantly improved, enabling high-precision bathymetry estimation in coral reef sea regions.
To assess how model performance varies with depth, the MAEs and MAPEs of the different models were calculated within each depth interval by segmenting the bathymetry at 1 m intervals (Figure 12). This reveals the fluctuation of model accuracy across the different depth regimes, including shallow water, the transition zone, and deep water, thereby refining the evaluation. The results indicated that the fusion bathymetry model proposed in this study retained significant advantages in the stratified validation of the two areas.
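The stratified evaluation amounts to binning validation photons by reference depth and computing per-bin errors; a minimal sketch (our function name, depths assumed > 0 for MAPE) is given below.

```python
import numpy as np

def depth_interval_errors(z_true, z_pred, step=1.0):
    """MAE and MAPE (%) per depth interval of width `step`, keyed by
    (lower, upper) bound; empty intervals are skipped, as for the
    sparse 35-38 m validation segments around Niihau Island."""
    z_true = np.asarray(z_true, dtype=float)
    z_pred = np.asarray(z_pred, dtype=float)
    out = {}
    for lo in np.arange(0.0, z_true.max() + step, step):
        mask = (z_true >= lo) & (z_true < lo + step)
        if not mask.any():
            continue
        err = np.abs(z_pred[mask] - z_true[mask])
        out[(lo, lo + step)] = (err.mean(), 100.0 * np.mean(err / z_true[mask]))
    return out
```

Running this per model and comparing interval-by-interval reproduces the kind of stratified comparison shown in Figure 12.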
Within the 23 bathymetry intervals in Yuya Shoal, the fusion model outperformed the other models in ~82% (19 intervals) of the segments for both the MAE and MAPE indexes. In the remaining four intervals (the 12–13 m, 15–16 m, and 18–20 m transition zones), the maximum error margin was less than 0.6%, illustrating that errors were contained across the full depth domain. For the Niihau Island area (0–40 m), which spans a wide range of depths, the actual validation comprised 37 bathymetry segments owing to the sparse validation points at depths of 35–38 m. The fusion bathymetry model achieved the highest accuracy indexes in 33 of these intervals, more than 86%. In the remaining four segments (18–19 m, 23–24 m, 25–26 m, and 27–29 m), it was quite close to the lowest-error methods, with error differences ranging from 0.02% to 4%.
The validation results of segmentation accuracy revealed that the fusion bathymetry model based on the Mamba coral reef habitat classification employed in this study exhibited good performance in all bathymetry intervals. The proposed model provided stable adaptability over full bathymetric intervals from shallow to deep water, and, in particular, it could maintain low MAPEs in extremely shallow water (≤5 m) and low MAEs in deep water (≥18 m), significantly outperforming the other methods and demonstrating the reliability of the fusion model estimation.

3.3.2. Shallow Water Bathymetry Map

Based on the proposed fusion SDB model and the Mamba coral reef habitat classification results obtained above, wide-area bathymetry maps were produced for the Yuya Shoal and Niihau Island areas (Figure 13). Yuya Shoal (Figure 13a) exhibits the bathymetric variability typical of complex coral reef geomorphology, with shallow shoals of irregular bathymetry distributed throughout the atoll. Two small lagoons with depths of 8–10 m and 10–14 m lie in the western extension belt of the atoll. The coral detritus zone in the northern part of the lagoon is extremely shallow (<2 m), and the patch reef communities within the small lagoon are sporadically distributed, with depths commonly less than 5 m. The central part of the atoll features a variety of sediment types, with coral-covered and coral sand areas interspersed; most depths are below 10 m, whereas depths increase rapidly to more than 20 m at the periphery of the atoll and near the edge of the central large lagoon. The bathymetry map of the waters around Niihau Island reveals a more gradual bathymetric trend (Figure 13b), with depths in the northern part of the island extending uniformly from 0 m to 30 m. The southern part of the island is deeper than the northern part, but depths still increase gradually from 0 m to 40 m.

4. Discussion

4.1. Influence of Different Amounts of Training Bathymetry Data on the Model

The amount of training data is one of the core factors affecting the accuracy of bathymetry estimation. To explore the influence of training sample size on the precision of the different bathymetry models, the Yuya Shoal area was taken as an example, and the training bathymetric samples were reduced to 1/2, 1/4, 1/6, and 1/8 of the original, respectively. Experiments were performed with the proposed fusion model and the other five models, and the accuracy indexes were calculated for each sample size (Figure 14). The results showed that the training sample size affected the different models to markedly different degrees. The fusion model proposed in this study achieved the best accuracy at every sample size, with only a slight increase in MAE, MAPE, and RMSE as the sample size decreased, while R2 remained above 0.95.
For the two classical models, reducing the training samples had essentially no effect on estimation accuracy, possibly because these semi-theoretical, semi-empirical bathymetry models rely primarily on prior knowledge and physical assumptions: while a larger sample can improve fitting through parameter optimization, the fixed model structure imposes an obvious bottleneck on accuracy improvement. Among the three machine learning models, the accuracy of the SVR model decreased only slightly with reduced sample size, but its errors were overall higher than those of the RF and XGBoost models. The narrow fluctuation range of all SVR indexes indicates low sensitivity to the size of the training data. This may be because SVR is based on structural risk minimization: by constructing a global hyperplane in high-dimensional space through kernel mapping, it reduces the model's sensitivity to local sample distributions. RF and XGBoost, as ensemble tree models, are more vulnerable, with performance constrained by sample size; both showed increasing error as the sample size decreased. The figure shows that the RF model had the second-best accuracy after the fusion model when the training data were halved. As the sample size continued to decrease, however, the XGBoost model achieved lower error than the RF model, though it remained inferior to the fusion model, which may relate to the differing sensitivities of RF and XGBoost to sample size. Although the two tree models can fit complex rules to high accuracy given sufficient samples, SVR, with its theory-driven generalization ability, is better suited to data-limited scenarios with high stability requirements.
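The sample-size experiment can be sketched as a subsampling loop that refits a model on progressively smaller random subsets and scores it on the full held-out set. The `fit_predict` callable and the evaluation on the full set are our simplifying assumptions; in the study each model was refit and scored against the fixed validation data.

```python
import numpy as np

def subsample_experiment(X, y, fit_predict,
                         fractions=(1.0, 0.5, 0.25, 1 / 6, 0.125), seed=0):
    """Refit a model on random subsets of the training photons and
    report MAE on the full set for each subset fraction.

    fit_predict(X_train, y_train, X_eval) -> predicted depths; a
    hypothetical hook standing in for any of the SDB models.
    """
    rng = np.random.default_rng(seed)
    results = {}
    for frac in fractions:
        n = max(1, int(len(y) * frac))
        idx = rng.choice(len(y), size=n, replace=False)  # random subset
        y_hat = fit_predict(X[idx], y[idx], X)
        results[frac] = float(np.mean(np.abs(y_hat - y)))
    return results
```

Plotting `results` against `fractions` for each model yields the kind of sensitivity curves summarized in Figure 14.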
The general results show that the bathymetry fusion model based on Mamba coral reef habitat classification in this study demonstrates its robustness and superiority by complementing the advantages of multiple methods and maintaining optimal accuracy across various training data sizes.

4.2. Uncertainty Influence Analysis of the Model and Data

In this study, the Mamba model was implemented for coral reef habitat classification from multispectral remote sensing imagery, and a multi-model SDB fusion model was established on that basis. The essence of multispectral remote sensing imagery lies in its two-dimensional spatial structure coupled with multiband spectral features, whereas the core design of Mamba models focuses on modeling unidirectional or bidirectional long-range dependencies in sequence data. When the model is applied to remote sensing imagery, the spatial and spectral dimensions are converted into sequences through a flattening process [49,50], which may lead to the loss of locally relevant spatial information (e.g., bottom texture and boundary continuity). In addition, the dimensionality of the four-band spectral features used in this study is relatively low, which may make it difficult for the model to adequately discern subtle differences in bottom reflectance properties across the limited spectral bands. Especially in waters of poor quality, where absorption and scattering in the water column are prominent, variation in signal-to-noise ratio among bands may further weaken classification robustness. However, the two study areas selected here exhibited superior water quality conditions, and the classification accuracy demonstrated the excellent performance of the model. Currently, research on shallow sea sediment classification and optical bathymetry inversion is primarily confined to clear coral reef areas. Conducting such research in low-transparency seas, which are often affected by land-based influxes, algal blooms, or sediment resuspension, is challenging, mainly because complex turbid waters strongly attenuate and interfere with optical signals.
Suspended sediments or high concentrations of algae strongly absorb and scatter light, so the light that penetrates the water column and reaches the bottom is extremely weak or even absent. Meanwhile, the strong backscattered signals generated by the water column may completely overwhelm the weak bottom reflection signals, preventing the sensor from effectively detecting the optical information that reflects bottom conditions and water depth. Additionally, current bathymetry models struggle to accurately isolate and quantify the complex effects of water column components, including chlorophyll concentration, colored dissolved organic matter, and suspended sediments. Analogously, the ICESat-2 satellite, which emits visible laser light, has a high penetration capacity in clear water, but its ability to detect underwater topography in turbid waters with low transparency (e.g., nearshore estuaries or waters affected by algal blooms) is severely reduced, probably to less than 5 m. Turbid waters are full of suspended sediments, plankton, and organic particles, which substantially increase the absorption and scattering of the 532 nm laser. Furthermore, when extracting ICESat-2 underwater topography signals with the DBSCAN algorithm, real topography signals can be misclassified as noise under unsuitable parameter settings, or noise can be mistakenly clustered as bathymetry information. Particularly in areas of variable depth or complex bottom variation, the underwater topography signal inherently suffers from localized density inhomogeneities, and suspended material, biological activity, and changes in the optical properties of the water column introduce noise photons that overlap with the true terrain signal features and affect the algorithm's determination of cluster boundaries.
In turn, this leads to errors in the lidar bathymetry values, reducing the accuracy of the bathymetry estimation results. Nevertheless, ICESat-2/ATL03 open-source data currently remain the best and an irreplaceable source of underwater topography information in complex waters, such as remote islands and reefs that are difficult to survey in the field.
Bathymetry fusion models can reduce estimation errors by building partitioned models for different seafloor sediment classes [35,42]. Although a fusion model improves on the accuracy of a single model by applying diverse models, high-precision but fragmented classification results can leave certain classes with little bathymetry training data. Partitioned models constructed in data-sparse regions may produce pseudo-correlated features by fitting noise, and when the training data are spatially uneven, the errors of a fusion model can even exceed those of a single model in low-sample-density regions. Consequently, the amount of known bathymetry data in each class requires consideration when developing sediment-partitioned bathymetry models. In addition, various machine learning and deep learning models have demonstrated excellent performance in SDB applications [18,54], but these black-box models make it difficult to trace bathymetric results back to real physical mechanisms. Water column radiative transfer models, by contrast, are based on rigorous physical principles but require multiple optical properties of the water column and complex calculations. Optimizing radiative transfer model parameters with machine learning or deep learning methods may further enhance the applicability of optical bathymetry remote sensing to various water types, reduce dependence on in situ measurements, and improve the accuracy of satellite-derived bathymetry.

5. Conclusions

In this study, high-precision bathymetry inversion of coral reef seas was accomplished by innovatively integrating the Mamba classification framework with a multi-model SDB fusion approach. The Mamba coral reef habitat classification model demonstrated excellent accuracy in two complex areas, Yuya Shoal and Niihau Island, with overall accuracies exceeding 95% in both cases. For bathymetry estimation, the traditional semi-theoretical, semi-empirical models (the log-linear and Stumpf models) are limited by their physical mechanisms and model structures, leading to higher errors; the single machine learning models (SVR, RF, and XGBoost), despite significantly improving accuracy, suffer from limitations over specific bottom types. In contrast, the satellite bathymetry fusion model based on coral reef habitat classification proposed in this study showed exceptional accuracy in both study areas. Comparison with ICESat-2 lidar underwater topography results indicated that the fusion model achieved an overall MAE of 0.2 m and a MAPE of 9.77% in the 0–23 m depth range of Yuya Shoal, with an RMSE of 0.33 m and an R2 of 0.99, the MAPE being more than 70% lower than that of the classical bathymetry models. In Niihau Island, the overall MAE of the fusion model over the 0–40 m depth range was only 0.5 m, with a MAPE of 6.47%, an RMSE of 0.73 m (more than 2 m lower than that of the classical models), and an R2 of 0.98. The fusion model also improved overall accuracy relative to the three machine learning methods in both study areas. The comparisons of bathymetry-interval accuracy and of different training data sizes further demonstrated the robustness and superiority of the fusion model, which provides large-scale, high-confidence underwater topography results for the complex seafloor sediments and geomorphologies of coral reef habitats.

Author Contributions

X.Z.: Conceptualization, Methodology, Software, Validation, Formal Analysis, and Writing—original draft; Y.M.: Conceptualization and Writing—review and editing; F.Z.: Methodology, Software, and Validation; Z.L.: Conceptualization and Methodology; J.Z.: Conceptualization and Methodology. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (NSFC) [grant numbers 51839002, 41906158] and the Taishan scholar project of Shandong Province [grant number ts20190963].

Data Availability Statement

All original data used in this study are openly available, as described in Section 2.2.

Acknowledgments

We gratefully acknowledge the European Space Agency (ESA) for providing the Sentinel-2 products and the NASA Earth Observation Data of the Earth Science Data Systems Program for distributing the ICESat-2 data. The authors also thank the editor and the anonymous reviewers for their review of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mandlburger, G. A review of active and passive optical methods in hydrography. Int. Hydrogr. Rev. 2022, 28, 8–52. [Google Scholar] [CrossRef]
  2. Cesbron, G.; Melet, A.; Almar, R.; Lifermann, A.; Tullot, D.; Crosnier, L. Pan-European Satellite-derived coastal bathymetry—Review, user needs and future services. Front. Mar. Sci. 2021, 8, 740830. [Google Scholar] [CrossRef]
  3. He, J.; Zhang, S.; Cui, X.; Feng, W. Remote sensing for shallow bathymetry: A systematic review. Earth-Sci. Rev. 2024, 258, 104957. [Google Scholar] [CrossRef]
  4. Wedding, L.; Friedlander, A.; McGranaghan, M.; Yost, R.; Monaco, M. Using bathymetric LiDAR to define nearshore benthic habitat complexity: Implications for management of reef fish assemblages in Hawaii. Remote Sens. Environ. 2008, 112, 4159–4165. [Google Scholar] [CrossRef]
  5. Kutser, T.; Hedley, J.; Giardino, C.; Roelfsema, C.; Brando, V.E. Remote sensing of shallow waters—A 50 year retrospective and future directions. Remote Sens. Environ. 2020, 240, 111619. [Google Scholar] [CrossRef]
  6. Campbell, S.J.; Kartawijaya, T.; Sabarini, E.K. Connectivity in reef fish assemblages between seagrass and coral reef habitats. Aquat. Biol. 2011, 13, 65–77. [Google Scholar] [CrossRef]
  7. Honda, K.; Nakamura, Y.; Nakaoka, M.; Uy, W.H.; Fortes, M.D. Habitat use by fishes in coral reefs, seagrass beds and mangrove habitats in the Philippines. PLoS ONE 2013, 8, e65735. [Google Scholar] [CrossRef]
  8. James, R.K.; Keyzer, L.M.; Van De Velde, S.J.; Herman, P.M.; Van Katwijk, M.M.; Bouma, T.J. Climate change mitigation by coral reefs and seagrass beds at risk: How global change compromises coastal ecosystem services. Sci. Total Environ. 2023, 857, 159576. [Google Scholar] [CrossRef]
  9. Yen, P.P.; Sydeman, W.J.; Hyrenbach, K.D. Marine bird and cetacean associations with bathymetric habitats and shallow-water topographies: Implications for trophic transfer and conservation. J. Mar. Syst. 2004, 50, 79–99. [Google Scholar] [CrossRef]
  10. Dennison, W. Seagrasses: Biology, ecology and conservation. Bot. Mar. 2009, 52, 365–366. [Google Scholar] [CrossRef]
  11. Britton-Simmons, K.H.; Rhoades, A.L.; Pacunski, R.E.; Galloway, A.W.; Lowe, A.T.; Sosik, E.A.; Dethier, M.N.; Duggins, D.O. Habitat and bathymetry influence the landscape-scale distribution and abundance of drift macrophytes and associated invertebrates. Limnol. Oceanogr. 2012, 57, 176–184. [Google Scholar] [CrossRef]
  12. Cameron, M.J.; Lucieer, V.; Barrett, N.S.; Johnson, C.R.; Edgar, G.J. Understanding community-habitat associations of temperate reef fishes using fine-resolution bathymetric measures of physical structure. Mar. Ecol. Prog. Ser. 2014, 506, 213–229. [Google Scholar] [CrossRef]
  13. Costa, B.M.; Battista, T.A.; Pittman, S.J. Comparative evaluation of airborne LiDAR and ship-based multibeam SoNAR bathymetry and intensity for mapping coral reef ecosystems. Remote Sens. Environ. 2009, 113, 1082–1100. [Google Scholar] [CrossRef]
  14. Bio, A.; Gonçalves, J.A.; Magalhães, A.; Pinheiro, J.; Bastos, L. Combining low-cost sonar and high-precision global navigation satellite system for shallow water bathymetry. Estuaries Coasts 2020, 45, 1000–1011. [Google Scholar] [CrossRef]
  15. Lyzenga, D.R. Passive remote sensing techniques for mapping water depth and bottom features. Appl. Opt. 1978, 17, 379–383. [Google Scholar] [CrossRef]
  16. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 2003, 48, 547–556. [Google Scholar] [CrossRef]
  17. Chénier, R.; Faucher, M.A.; Ahola, R. Satellite-derived bathymetry for improving Canadian Hydrographic Service charts. ISPRS Int. J. Geoinf. 2018, 7, 306. [Google Scholar] [CrossRef]
  18. Al Najar, M.; Thoumyre, G.; Bergsma, E.W.; Almar, R.; Benshila, R.; Wilson, D.G. Satellite derived bathymetry using deep learning. Mach. Learn. 2023, 112, 1107–1130. [Google Scholar] [CrossRef]
  19. Almar, R.; Bergsma, E.W.; Thoumyre, G.; Solange, L.C.; Loyer, S.; Artigues, S.; Salles, G.; Garlan, T.; Lifermann, A. Satellite-derived bathymetry from correlation of Sentinel-2 spectral bands to derive wave kinematics: Qualification of Sentinel-2 S2Shores estimates with hydrographic standards. Coast. Eng. 2024, 189, 104458. [Google Scholar] [CrossRef]
  20. Wicaksono, P.; Harahap, S.D.; Hendriana, R. Satellite-derived bathymetry from WorldView-2 based on linear and machine learning regression in the optically complex shallow water of the coral reef ecosystem of Kemujan island. Remote Sens. Appl. 2024, 33, 101085. [Google Scholar] [CrossRef]
  21. Liu, C.; Xie, H.; Xu, Q.; Li, J.; Sun, Y.; Ji, M.; Tong, X. Diffuse attenuation coefficient and bathymetry retrieval in shallow water environments by integrating satellite laser altimetry with optical remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2025, 136, 104318. [Google Scholar] [CrossRef]
  22. Jiang, C.; Chen, Y.; Liu, Y.; Dong, Z.; Tang, Q.; Li, Z. Very high-resolution satellite-derived bathymetry using panchromatic and multispectral image fusion. Appl. Opt. 2025, 64, 2835–2846. [Google Scholar] [CrossRef]
  23. Lowell, K.; Rzhanov, Y. An Empirical Evaluation of the Localised Accuracy of Satellite-Derived Bathymetry and SDB Depth Change. Mar. Geod. 2025, 48, 47–71. [Google Scholar] [CrossRef]
  24. Viaña-Borja, S.; González-Villanueva, R.; Alejo, I.; Stumpf, R.; Navarro, G.; Caballero, I. Satellite-derived bathymetry using Sentinel-2 in mesotidal coasts. Coast. Eng. 2025, 195, 104644. [Google Scholar] [CrossRef]
  25. Quang, N.; Banno, M.; Ha, N. Satellite derived bathymetry using empirical and machine learning approaches: A case study in the highly dynamic coastal water. Coast. Eng. J. 2025, 67, 232–251. [Google Scholar] [CrossRef]
  26. Zhao, Y.; Fang, S.; Wu, Z.; Wu, S.; Chen, H.; Song, C.; Mao, Z.; Shen, W. A novel spatial graph attention networks for satellite-derived bathymetry in coastal and island waters. J. Environ. Manag. 2025, 380, 125034. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) Location of the study areas and distribution of ICESat-2 lidar data; (b) Yuya Shoal; (c) Niihau Island (the background Sentinel-2 images are true-color RGB composites).
Figure 2. Workflow diagram for data processing and research methodology.
Figure 3. Mamba-based model structure for multispectral coral reef habitat classification.
Figure 4. Results of coral reef habitat classification in different study areas: (a) Yuya Shoal and (b) Niihau Island.
Figure 5. Results of underwater topography extraction from ICESat-2 ATL03 data in Yuya Shoal (after refraction correction).
Figure 6. Results of underwater topography extraction from ICESat-2 ATL03 data in Niihau Island (after refraction correction).
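Figures 5 and 6 show seafloor photons after refraction correction. As context, the following is a minimal sketch of the widely used first-order correction for near-nadir spaceborne lidar (not necessarily the exact procedure used in this paper; the refractive-index constants and function name are illustrative assumptions):

```python
import math

# Assumed nominal refractive indices at 532 nm; the real values vary
# slightly with water temperature and salinity.
N_AIR = 1.00029
N_WATER = 1.34116

def refraction_correct_depth(apparent_depth_m):
    """First-order (near-nadir) refraction correction for lidar bathymetry.

    Photon ranging assumes the in-air speed of light, so the apparent depth
    overestimates the true depth; scaling by n_air / n_water (~0.746)
    recovers the corrected depth for a near-vertical beam such as ICESat-2's.
    """
    return apparent_depth_m * N_AIR / N_WATER
```

For a strictly nadir-pointing beam this reduces to multiplying the apparent depth by roughly 0.75; off-nadir geometries additionally require a Snell's-law correction of the slant path.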
Figure 7. Distribution of training data across water depth segments in the different study areas: (a) Yuya Shoal and (b) Niihau Island.
Figure 8. Scatterplot of estimated bathymetry versus lidar bathymetry values for different sediment and geomorphology types in Yuya Shoal (black straight lines are 1:1 lines; the accuracy indexes in the bottom right of the first two columns correspond to the lagoon and coral detritus, respectively).
Figure 9. Scatterplot of estimated bathymetry versus lidar bathymetry values for different sediment and geomorphology types in Niihau Island (black straight lines are 1:1 lines).
Figure 10. Density scatterplot of estimated and lidar bathymetry values for different models in Yuya Shoal (black lines represent 1:1 straight lines and color scale represents scatter density).
Figure 11. Density scatterplot of estimated and lidar bathymetry values for different models in Niihau Island (black lines represent 1:1 straight lines and color scale represents scatter density).
Figure 12. Bathymetry estimation accuracy of different models by depth interval: (a) MAE and (b) MAPE for Yuya Shoal; (c) MAE and (d) MAPE for Niihau Island (some MAPE results are omitted from the figures because of very large errors).
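Figure 12 reports MAE and MAPE per depth interval. A minimal sketch of how such binned error statistics can be computed is given below (the function name, bin edges, and example inputs are illustrative, not the authors' code):

```python
import numpy as np

def accuracy_by_depth_interval(estimated, reference, edges):
    """MAE and MAPE of estimated bathymetry within reference-depth bins.

    estimated, reference: estimated and lidar depths in meters (positive down)
    edges: bin edges in meters, e.g. [0, 5, 10, 15, 20]
    Returns a list of ((lo, hi), MAE, MAPE%) tuples for non-empty bins.
    """
    estimated = np.asarray(estimated, float)
    reference = np.asarray(reference, float)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (reference >= lo) & (reference < hi)
        if not mask.any():
            continue
        err = np.abs(estimated[mask] - reference[mask])
        mae = err.mean()
        # MAPE divides by the reference depth, so it blows up in very
        # shallow water -- one reason extreme MAPE values may be omitted.
        mape = 100.0 * (err / reference[mask]).mean()
        results.append(((lo, hi), mae, mape))
    return results
```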
Figure 13. Satellite-derived bathymetry maps of the study areas: (a) Yuya Shoal and (b) Niihau Island (gray masks denote islands and wave/cloud-covered areas; dark blue masks denote open deep-water areas).
Figure 14. Comparison of bathymetry accuracy in Yuya Shoal with different training sample sizes.
Table 1. ICESat-2 ATL03 dataset.

Yuya Shoal                              Niihau Island
Date      Track Number                  Date      Track Number
20190109  #0179 (gt1l)                  20190119  #0343 (gt3l)
20190202  #0552 (gt1l/gt2l/gt3l)        20190127  #0457 (gt1l/gt2l/gt3l)
20190312  #1124 (gt1l/gt2l/gt3l)        20190527  #0889 (gt2l)
20190504  #0552 (gt1l/gt2l/gt3l)        20190818  #0785 (gt1l/gt2l)
20191008  #0179 (gt1r/gt3r)             20191117  #0785 (gt3r)
20200309  #1124 (gt1r/gt3r)             20200227  #0960 (gt1r/gt2r)
20200906  #1124 (gt3l)                  20200425  #0457 (gt1r)
20201206  #1124 (gt3l)                  20200823  #0899 (gt2l)
20210307  #1124 (gt1r)                  20200917  #1288 (gt2l)
20230402  #0179 (gt2l/gt3l)             20210213  #0785 (gt2r)
20231025  #0552 (gt1l)                  20210723  #0457 (gt1r)
20240330  #0179 (gt2r)                  20211022  #0457 (gt1l)
20240423  #0552 (gt1l)                  20220113  #0343 (gt1l)
                                        20220812  #0785 (gt3r)
Table 2. Accuracy of coral reef habitat classification in different study areas.

Class                     Accuracy Per Class (%)
                          Yuya Shoal    Niihau Island
deepwater                 97.56         96.67
lagoon                    80            /
coral sand                100           96.45
coral detritus            100           /
coral                     98.1          96.69
cloud/wave                100           100
Overall Accuracy (%)      97.55         96.69
Average Accuracy (%)      95.94         97.45
Kappa (%)                 96.82         94.41
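Table 2 summarizes classification quality with overall accuracy, average accuracy, and the Kappa coefficient. A short sketch of how these metrics are conventionally derived from a confusion matrix (assuming per-class accuracy here means producer's accuracy; the function name is illustrative):

```python
import numpy as np

def classification_metrics(cm):
    """Overall accuracy, average per-class accuracy, and Cohen's kappa
    from a confusion matrix (rows: reference class, columns: predicted)."""
    cm = np.asarray(cm, float)
    total = cm.sum()
    overall = np.trace(cm) / total                 # fraction on the diagonal
    per_class = np.diag(cm) / cm.sum(axis=1)       # producer's accuracy per class
    average = per_class.mean()
    # Chance agreement expected from the row/column marginals
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, average, kappa
```

Kappa discounts the agreement expected by chance, which is why it is slightly lower than overall accuracy in both study areas.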
Zhang, X.; Ma, Y.; Zhang, F.; Li, Z.; Zhang, J. Multi-Model Synergistic Satellite-Derived Bathymetry Fusion Approach Based on Mamba Coral Reef Habitat Classification. Remote Sens. 2025, 17, 2134. https://doi.org/10.3390/rs17132134
