Article

Robust Bathymetric Mapping in Shallow Waters: A Digital Surface Model-Integrated Machine Learning Approach Using UAV-Based Multispectral Imagery

Tropical Marine Science Institute, National University of Singapore, Singapore 119227, Singapore
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 3066; https://doi.org/10.3390/rs17173066
Submission received: 1 July 2025 / Revised: 31 August 2025 / Accepted: 1 September 2025 / Published: 3 September 2025


Highlights

What are the main findings?
  • Developed a hybrid UAV-ML bathymetric inversion model integrating multispectral imagery with DSM data.
  • DSM integration disentangles spectral ambiguities and improves bathymetric prediction accuracy.
What is the implication of the main finding?
  • Achieved >20% accuracy improvement (R2 > 0.93) across heterogeneous coastal environments.

Abstract

The accurate monitoring of short-term bathymetric changes in shallow waters is essential for effective coastal management and planning. Machine Learning (ML) applied to Unmanned Aerial Vehicle (UAV)-based multispectral imagery offers a rapid and cost-effective solution for bathymetric surveys. However, models based solely on multispectral imagery are inherently limited by confounding factors such as shadow effects, poor water quality, and complex seafloor textures, which obscure the spectral–depth relationship, particularly in heterogeneous coastal environments. To address these issues, we developed a hybrid bathymetric inversion model that integrates digital surface model (DSM) data—providing high-resolution topographic information—with ML applied to UAV-based multispectral imagery. The model training was supported by multibeam sonar measurements collected from an Unmanned Surface Vehicle (USV), ensuring high accuracy and adaptability to diverse underwater terrains. The study area, located around Lazarus Island, Singapore, encompasses a sandy beach slope transitioning into seagrass meadows, coral reef communities, and a fine-sediment seabed. Incorporating DSM-derived topographic information substantially improved prediction accuracy and correlation, particularly in complex environments. Compared with linear and bio-optical models, the proposed approach achieved accuracy improvements exceeding 20% in shallow-water regions, with performance reaching an R2 > 0.93. The results highlighted the effectiveness of DSM integration in disentangling spectral ambiguities caused by environmental variability and improving bathymetric prediction accuracy. By combining UAV-based remote sensing with the ML model, this study presents a scalable and high-precision approach for bathymetric mapping in complex shallow-water environments, thereby enhancing the reliability of UAV-based surveys and supporting the broader application of ML in coastal monitoring and management.

1. Introduction

The bathymetric mapping of shallow waters is fundamental for coastal management, environmental monitoring, and maritime navigation safety. High-precision depth measurements are essential for optimizing the design and maintenance of hydrotechnical structures, ensuring safe navigation, and supporting ecosystem conservation [1,2]. The availability of accurate and high-resolution bathymetric data allows for the improved modelling of coastal erosion processes, sediment dynamics, and habitat distributions, enabling the implementation of effective mitigation strategies and sustainable coastal development initiatives. Furthermore, bathymetric data contributes to understanding climate change impacts on nearshore environments by providing insights into coastal morphology evolution and vulnerability assessments. The exploration of aquatic environments through remote sensing technologies has expanded our capabilities to estimate water column depth, its constituents, and benthic cover types across vast and often inaccessible terrains.
Traditional remote sensing methods for bathymetric mapping, such as Landsat 8 and Worldview satellite data, have been widely used for bio-optical modelling and water quality estimation [3]. The empirical models, founded on the theory of light radiation attenuation in water, combine theoretical and empirical parameters for the passive optical remote sensing of water depth. These models, and their iterations like the loglinear model by [4,5] and the dual-band model by [6,7,8,9,10,11,12,13,14,15,16], demonstrate the evolving understanding and application of remote sensing in aquatic environments. However, these methods suffer from site-specific dependencies, environmental variability, and limitations in retrieving accurate depth information, particularly in complex nearshore environments [17,18]. The introduction of ML models, as alternatives to conventional empirical and bio-optical models, has significantly improved predictive capabilities, offering a direct correlation between remote sensing image radiance and measured water depth [19,20,21], reducing errors associated with environmental variability. ML models can generalize better across different aquatic environments and provide more robust depth retrieval under diverse conditions.
In smaller targeted areas requiring sub-metre spatial resolution, in regions where cloud cover or water turbidity limit the applicability of optical satellite data, or in locations that demand frequent repeat surveys, UAV-based bathymetric mapping offers a more suitable and flexible alternative to satellite-derived bathymetry (SDB) [20,22,23,24,25,26,27,28,29,30,31,32]. UAV-based photogrammetry, particularly when integrated with ML techniques, has demonstrated remarkable potential in shallow-water bathymetric mapping by overcoming the limitations of traditional sonar and satellite-derived methods. This integration enhances mapping accuracy, improves operational efficiency, and enables adaptability to the complex and heterogeneous conditions of coastal and nearshore environments.
Recent progress in coastal remote sensing has increasingly combined UAVs and USVs with ML and deep learning to improve bathymetric mapping. UAVs equipped with multispectral sensors can rapidly capture high-resolution imagery, while USVs provide dense and accurate multibeam sonar data for model calibration and validation [21,24]. Transformer-based deep learning frameworks, such as BathyFormer [33,34], leverage high-resolution multispectral imagery and self-attention mechanisms to capture both local and global spatial dependencies, thereby improving nearshore bathymetric mapping accuracy. Similarly, visual transformer architectures [35] that integrate active and passive remote sensing data can retrieve precise bathymetry without in situ measurements [36], achieving sub-metre RMSE and ensuring robust performance across varied water depths. In parallel, ensemble ML approaches, such as stacking ensemble models [37,38], have demonstrated superior robustness and cross-regional generalizability compared to single-model algorithms, effectively mitigating the variability in prediction accuracy caused by site-specific spectral and environmental conditions. A comprehensive synthesis of these techniques is provided in the review by Mandlburger [39], which discusses both active and passive optical methods in hydrography. Collectively, these platforms have demonstrated the potential to overcome many of the limitations of traditional empirical and physics-based models, particularly in shallow-water environments. These developments mark an important shift toward hybrid and data-driven strategies for bathymetric inversion, with growing relevance in data-sparse and inaccessible regions.
Despite these advances, ML-based bathymetric mapping remains constrained in complex coastal environments. Environmental factors such as shadow effects, turbidity, and heterogeneous benthic textures often obscure the spectral–depth relationship, reducing predictive accuracy. In tropical waters such as those surrounding Singapore, suspended sediments, fluctuating turbidity, and variable bottom composition (e.g., sand, seagrass, coral, and fine sediments) introduce spectral ambiguities that complicate depth retrieval. Additional variability from tides, weather conditions, and water quality further challenges the reliability and transferability of UAV-based models. These persistent issues underscore the need for supplementary datasets that can provide complementary structural information to enhance bathymetric predictions.
DSMs offer such a complementary dataset by capturing fine-scale elevation variations and structural relief that are not represented in spectral imagery. Complementary research has also introduced synergistic datasets to enhance model robustness and to correct for refraction in optical workflows, addressing systematic vertical errors in underwater DSMs derived from Structure-from-Motion techniques [36,40,41]. Although DSMs alone are limited in turbid environments, due to the complex interactions between water depth, quality variations, and light penetration, they preserve relative topographic patterns that convey important information about seabed morphology. This structural context can mitigate errors caused by shadows, poor water clarity, and benthic heterogeneity, making DSMs a valuable input for bathymetric inversion models. Their integration with multispectral imagery is therefore particularly beneficial in optically complex nearshore environments where traditional methods often fail to maintain accuracy.
In this work, we developed a hybrid bathymetric inversion model that integrates DSM data with ML applied to UAV-based multispectral imagery. The model was trained and validated using multibeam sonar measurements collected from a USV, ensuring robustness and adaptability across diverse underwater terrains. Our case study focused on the waters surrounding Lazarus Island, Singapore, which encompass sandy beach slopes, seagrass meadows, coral reef communities, and fine-sediment seabeds, representing a heterogeneous shallow-water environment. Incorporating DSM-derived topographic information markedly enhanced prediction accuracy and correlation, particularly in optically and structurally complex areas. Compared with conventional linear and bio-optical models, the integrated approach achieved accuracy improvements exceeding 20% in shallow-water regions, with performance reaching an R2 > 0.93. These results demonstrate the value of DSM integration in disentangling spectral ambiguities caused by environmental factors and highlight its significance for developing scalable, high-precision frameworks for bathymetric mapping in complex coastal settings.

2. Materials and Methods

2.1. Study Area

The study area is located in the waters surrounding Lazarus Island (1.226°N, 103.843°E), Singapore, which is a small, ecologically significant area with diverse coastal and marine environments. The water quality around Lazarus Island is influenced by its proximity to Singapore’s busy shipping lanes and industrial activities. High levels of turbidity, caused by sedimentation from dredging and land reclamation, have been noted in studies [42,43,44,45,46]. Industrial and urban runoff contributes to nutrient pollution and leads to occasional algal blooms and eutrophication. Tidal currents play a role in water quality, with strong currents helping to flush pollutants but also leading to sediment resuspension. Under normal conditions, visibility ranges from 2 to 4 m, but can drop below 1 m during periods of high turbidity, heavy rainfall, or strong currents associated with the monsoon seasons [47,48,49,50].
The topographic survey area of interest, highlighted in the blue and green boxes in Figure 1, consists of a sandy beach slope transitioning into seagrass meadows, a mix of isolated coral colonies and rubble, and a fine-sediment seabed. To facilitate comparative analysis, the study area was divided into two distinct regions. Because of the shallow nature of the target area along the shore, the USV could not reach it; in the overlapping area, the water is more accessible than elsewhere and more measured data were available for model training. The areas most accessible to the USV for collecting reference data were therefore designated as the overlapping regions and are included within both study regions, which also ensures the continuity of environmental gradients (substrate and depth) and allows consistent DSM generation. Area 1 features a small artificial embankment composed of small stones, extending from the sandy shoreline into shallow waters. The seabed comprises a mix of sand, scattered seagrass, and coral reefs, with low water turbidity, allowing for better light penetration and improved visibility. Area 2 is a narrow coastal strip where mangrove vegetation along the shore creates irregular shading effects on the water surface. This area has a dense presence of seagrass and mixed coral and rubble reefs, contributing to higher water turbidity, which complicates data acquisition and analysis. The combined effects of high turbidity and shadowing pose significant challenges for remote sensing and underwater feature extraction. The performance of different models in addressing these detection challenges was evaluated across the two study areas, enabling a comparative analysis of their effectiveness in processing and differentiating environmental features.

2.2. Data Source

The study utilized multispectral data collected by a UAV on 5 July 2023, during Singapore’s Southwest Monsoon season (June–September). The UAV employed for this task was the DJI Phantom 4 Multispectral quadrotor (SZ DJI Technology Co., Ltd., Shenzhen, China) [51,52]. This UAV is distinguished by its built-in Real-Time Kinematic (RTK) module, enabling the synchronization of flight control, camera, and RTK clock systems at the microsecond level and ensuring millisecond-level accuracy in imaging time. Additionally, real-time compensation is applied for the position and orientation of each camera lens and antenna, resulting in highly precise positional data for each captured image. The UAV features six 1/2.9-inch CMOS sensors, comprising one colour sensor for visible light imaging and five monochrome sensors for multispectral imaging, each with an effective resolution of 2.08 million pixels. The integrated multispectral imaging system captures data across five bands: red (central wavelength 650 nm), green (560 nm), blue (450 nm), near-infrared (840 nm), and red-edge (730 nm).
The flight conditions were optimal, with clear skies, adequate lighting, and ground wind speed below 4 m/s, meeting the aerial survey requirements. The UAV operated at an altitude of 30 m, covering an area of 6167 m2. The flight plan comprised 44 main flight lines, oriented at a 79-degree angle. The heading overlap rate was set at 80%, the side overlap rate at 70%, and the gimbal tilt angle at 90 degrees. The total flight duration was 28 min, during which 670 original aerial images were captured.
Ground truth bathymetric data were acquired using a USV (Rivertech Ltd., Sofia, Bulgaria) fitted with a remote-controlled mounting platform and an integrated LiDAR/MBES (Multibeam Echo Sounder) system (NORBIT ASA, Trondheim, Norway) [53]. The data collected from the MBES and LiDAR sensors on the USV were subsequently collated and pre-processed with Hypack (version 2022, Xylem, Middletown, CT, USA) software [54] and QGIS (version 3.30.3, QGIS Development Team, Zürich, Switzerland) [55] to generate a merged bathymetric dataset. Unlike the drone, the USV’s data acquisition system requires trials and testing both on land and in the water.

2.3. Integrated Model Construction

To enable accurate and cost-effective bathymetric mapping in shallow coastal environments, we developed an integrated modelling framework that synergizes UAV-derived DSMs, multispectral imagery, and USV-acquired sonar bathymetry through ML regression. The full workflow (Figure 2) comprises three primary phases: (i) data acquisition, (ii) data preprocessing, and (iii) model integration and prediction.

2.3.1. Data Acquisition

In this study, multispectral data were acquired using a DJI Phantom 4 Multispectral UAV, which records reflectance across five spectral bands—blue, green, red, red edge, and near-infrared. Simultaneously, ground-truth bathymetric measurements were obtained via a remote-controlled USV equipped with a multibeam echosounder (MBES), enabling the precise profiling of underwater topography in diverse nearshore conditions in Singapore.

2.3.2. Data Preprocessing

UAV imagery was processed using DJI Terra (version 3.60, SZ DJI Technology Co., Ltd., Shenzhen, China) software to mosaic images and generate multispectral orthomosaics and a DSM via aerial triangulation and 3D photogrammetric reconstruction. RTK-GPS measurements were applied for radiometric and geometric correction to enhance spatial accuracy. The key stages included the following: (1) feature point extraction and matching: the software extracts feature points using algorithms such as the Scale-Invariant Feature Transform (SIFT) and matches them between adjacent images; (2) photogrammetric network construction: a spatial relationship network is established between images using the matched points; (3) integration of positional data (RTK-GPS); (4) global bundle adjustment to optimize camera positions and orientation parameters using all images and matched points; (5) output of refined 3D point clouds and aerial triangulation results; (6) DSM reconstruction: generates a surface elevation model used for image correction; (7) orthorectification: projects each image onto the DSM to eliminate the effects of tilt and distortion; (8) image stitching and blending: selects pixels from the optimal viewing angles, blends overlapping areas, and removes seams. The DSM and orthomosaics achieved high spatial accuracy, with georeferencing RMSE values of 0.013 m and 0.017 m for Study Areas 1 and 2, respectively. Multispectral aerial images were further processed to generate mosaicked images with high horizontal alignment accuracy.
In parallel, MBES data collected by the USV were preprocessed using Hypack software. The pipeline included the following: (1) creation of tidal correction files using Maritime and Port Authority (MPA) tide tables, (2) Total Propagated Uncertainty (TPU) calculation to meet International Hydrographic Organization (IHO) Special Order standards (depths < 40 m), and (3) application of the Combined Uncertainty and Bathymetry Estimator (CUBE) [56] algorithm to produce clean, gridded bathymetric surfaces via maximum likelihood estimation. These processed bathymetric outputs were further refined in QGIS for geospatial alignment.
To prepare inputs for ML, all datasets—including DSM, multispectral bands, and sonar-derived bathymetry—were resampled to a uniform 2 cm pixel resolution. Layer stacking was performed using ENVI 5.7 (NV5 Geospatial Solutions) [57] to generate a multisource composite cube. The resulting five-band multispectral data were converted to ENVI standard format for compatibility with downstream tools such as The Water Colour Simulator (WASI-2D). Due to the UAV’s low flight altitude (~30 m) and stable atmospheric conditions, no atmospheric correction was applied, minimizing reflectance distortion.
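As a rough illustration of this resampling and layer-stacking step (performed in this study with ENVI 5.7, not the code below), the following Python sketch uses rasterio and NumPy with hypothetical file names and an illustrative grid size to assemble a six-band composite cube (five spectral bands plus the DSM):

```python
import numpy as np
import rasterio
from rasterio.enums import Resampling


def read_resampled(path, height, width, band=1):
    """Read one raster band bilinearly resampled onto the common target grid."""
    with rasterio.open(path) as src:
        return src.read(
            band,
            out_shape=(height, width),
            resampling=Resampling.bilinear,
        ).astype(np.float32)


# Hypothetical inputs and grid size; the actual cube was built at a 2 cm pixel size.
H, W = 4000, 3000
spectral = [read_resampled("multispectral_ortho.tif", H, W, band=b) for b in range(1, 6)]
dsm = read_resampled("dsm.tif", H, W)
cube = np.stack(spectral + [dsm], axis=0)  # shape (6, H, W): predictors for the ML model
```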

2.3.3. Model Integration and Depth Estimation

The final integrated regression model was constructed using MATLAB’s (version R2023b, MathWorks Inc., Natick, MA, USA) [58] Regression Learner application. This model ingested the stacked dataset—comprising UAV-derived DSM and multispectral imagery alongside USV-obtained bathymetric ground truth—to predict water depth across the surveyed regions. The ML regression framework enabled the robust mapping of depth gradients in optically complex, shallow-water environments. Predicted outputs were visualized as depth-coded maps, with warm colours indicating shallower areas and cooler tones representing deeper zones (Figure 2).
To evaluate the performance of our method, we conducted experiments on the two designated study areas separately. The bathymetry inversion outputs were validated using multibeam sonar measurements, incorporating 802,987 and 248,284 bathymetric points extracted from the USV platform for the two respective areas.
For data processing, the drone imagery was used to generate depth estimates for the different models. Additionally, the generated DSM was integrated with the five-band spectral cube, forming a six-band cube for application in the proposed Integrated Model.

2.3.4. Model Evaluation

For the Integrated Model, the experiments followed a training and testing procedure in which a random subset of 60% of each prepared dataset was used for training and the remaining 40% for testing. Each data point included co-registered multibeam bathymetry as ground truth, multispectral imagery, and DSM values. This approach allowed for effective model evaluation on unseen data while maintaining balanced input from different data modalities. The final trained model was then applied to the full dataset to generate the bathymetric inversion results.
These inversion results were generated by applying the trained models from the previous sections to broader UAV-covered areas, including regions without direct USV bathymetry.
To assess the accuracy of each bathymetric dataset, we employed the coefficient of determination (R2), Root Mean Square Error (RMSE), and Mean Absolute Error (MAE) as evaluation metrics. The predicted depths were compared against the ground truth measurements, and linear regression scatterplots were generated to calculate the overall R2, MAE, and RMSE values. A higher R2 indicates a stronger correlation between actual and predicted water depths, signifying greater inversion accuracy. A lower RMSE reflects smaller fluctuations in the depth estimation error, while a lower MAE represents smaller absolute differences between the measured and retrieved depths, both indicating improved bathymetric inversion performance.
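A minimal sketch of this evaluation protocol (the 60/40 random split and the R2, RMSE, and MAE metrics), assuming the co-registered samples are already arranged as a predictor matrix and a depth vector, could look as follows:

```python
import numpy as np


def evaluate(y_true, y_pred):
    """Return R2, RMSE and MAE between measured and predicted depths."""
    residuals = y_true - y_pred
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(residuals ** 2))
    mae = np.mean(np.abs(residuals))
    return r2, rmse, mae


def random_split(n_samples, train_fraction=0.6, seed=0):
    """Random 60/40 split of sample indices, as used for model evaluation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(train_fraction * n_samples)
    return idx[:cut], idx[cut:]
```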

2.4. Comparative Models of WASI-2D, Lyzenga, Stumpf, and ML

To evaluate the accuracy of our model in predicting water depths across different shallow-water features, we conducted a comparative analysis using widely adopted linear and bio-optical models, including the bio-optical WASI-2D model, the linear Lyzenga and Stumpf models, and an ML approach. The models were trained and validated using datasets where each point includes UAV multispectral imagery and corresponding USV multibeam bathymetry data as ground truth. This ensured robust ground truth for model learning and validation. Specifically, for data processing, the Stumpf model utilized blue and green spectral bands, while the Lyzenga model, WASI-2D model, and ML-based approach employed a five-band pre-processed spectral cube.

2.4.1. Shallow Bathymetry Inversion Based on WASI-2D Model

WASI-2D is an open-source software tool designed for analyzing the spectral properties of aquatic environments. Built upon established bio-optical models [59], WASI-2D incorporates a 2D module that enables pixel-wise image analysis [60]. For bathymetric retrieval, the WASI tool considers the influence of water-column constituents and seafloor reflectance spectra on water-leaving reflectance. The algorithm iteratively matches observed spectral signatures to precomputed WASI spectra for various depths, minimizing the residual error through a cost function optimization process. The final output provides an optimal depth estimation by identifying the best-fitting spectra for each pixel. A detailed description of WASI can be found in [60].
The remote sensing reflectance in WASI is modelled according to the equations
R_{rs}^{sh}(\lambda) = R_{rs}^{deep}(\lambda) \times \left[ 1 - A_{rs,1} \times \exp\left( -\left( K_d(\lambda) + K_u^{W}(\lambda) \right) \times Z_b \right) \right] + A_{rs,2} \times R_{rs}^{b}(\lambda) \times \exp\left( -\left( K_d(\lambda) + K_u^{B}(\lambda) \right) \times Z_b \right)
The superscript $sh$ indicates shallow water, $deep$ deep water, $b$ the bottom, and $\lambda$ the wavelength. The first term on the right-hand side is the contribution of the water column with depth $Z_b$; the second term represents the contribution of the bottom albedo. Light attenuation is described by the attenuation coefficients $K_d$ for downwelling irradiance, $K_u^{W}$ for upwelling radiance originating from the water layer, and $K_u^{B}$ for upwelling radiance from the bottom surface. These three coefficients are calculated as a function of the sun zenith angle, the viewing direction, and the concentrations of water constituents using equations also derived by [60]. $A_{rs,1}$ and $A_{rs,2}$ are empirical constants.
The WASI algorithm iterates the spectral signatures on a per-pixel basis, trying to fit an optimal spectrum given the constant values of model parameters. Inverse modelling takes place by approximating the remote sensing reflectance ( R r s ) spectra (of each pixel) with suitable WASI spectra for different depths. The best fit with the observed image spectrum is obtained by minimizing a cost function that calculates the correlation between the R r s and the WASI spectra. The inversion algorithm employs the absolute difference function in order to identify an optimal set of fit parameters (depth and seafloor type), which minimize the residual of the cost function [60,61].
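To make the inversion procedure concrete, the sketch below implements the forward model of the equation above together with a brute-force depth search over candidate spectra. The attenuation coefficients, bottom and deep-water spectra, and the constants $A_{rs,1}$ and $A_{rs,2}$ are treated as given inputs with illustrative default values; this is not the calibrated WASI parameterization.

```python
import numpy as np


def rrs_shallow(rrs_deep, rrs_bottom, kd, ku_w, ku_b, z_b, a1=1.16, a2=1.04):
    """Forward shallow-water reflectance model following the equation above.

    Spectral inputs are arrays over wavelength; z_b is depth in metres;
    a1 and a2 stand in for the empirical constants A_rs,1 and A_rs,2.
    """
    water = rrs_deep * (1.0 - a1 * np.exp(-(kd + ku_w) * z_b))
    bottom = a2 * rrs_bottom * np.exp(-(kd + ku_b) * z_b)
    return water + bottom


def invert_depth(rrs_obs, rrs_deep, rrs_bottom, kd, ku_w, ku_b,
                 depths=np.arange(0.0, 10.0, 0.02)):
    """Pick the candidate depth whose modelled spectrum minimises the absolute residual."""
    residuals = [np.sum(np.abs(rrs_obs - rrs_shallow(rrs_deep, rrs_bottom,
                                                     kd, ku_w, ku_b, z)))
                 for z in depths]
    return depths[int(np.argmin(residuals))]
```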

2.4.2. Shallow Bathymetry Inversion Based on Lyzenga Model

The linear band model was originally developed by Lyzenga [62,63]. It improves bathymetric mapping accuracy by incorporating multiple spectral bands and applying a rotational matrix to correct for variations in bottom reflectance. The model was later refined to better account for water quality heterogeneity, enabling it to handle variations in both bottom type and water composition while maintaining reliable depth estimations [63], finally modelling depth as
\hat{h} = h_0 + \sum_{j=1}^{N} h_j X_j
where $h_0$ and each $h_j$ are constants defining a linear relationship with $X_j$ for each of the bands 1 to $N$, with $X_j = \ln(L_j - L_{Wj})$, where $L_j$ is the above-surface radiance in band $j$ and $L_{Wj}$ is the averaged deep-water radiance.
All values of $h_0$ to $h_N$ are determined through a multiple linear regression between a set of known depths and the log-transformed radiances found at those depths. Lyzenga et al. [63] demonstrated that this algorithm should account for heterogeneity in bottom type and water quality and still achieve accurate results.
The model’s effectiveness generally improves with the number of spectral bands used, implying that multispectral imagery (e.g., from five-band sensors) should outperform conventional RGB imagery in heterogeneous water environments.
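A compact illustration of this calibration (a sketch, not the exact implementation used in this study), assuming co-registered band radiances, averaged deep-water radiances, and reference depths are available as NumPy arrays:

```python
import numpy as np


def lyzenga_fit(radiance, deep_radiance, depths):
    """Fit h = h0 + sum_j h_j * ln(L_j - Lw_j) by multiple linear regression.

    radiance:      (n_points, n_bands) above-surface radiances at known depths
    deep_radiance: (n_bands,) averaged deep-water radiance per band
    depths:        (n_points,) reference depths (e.g. from the USV multibeam)
    """
    x = np.log(np.clip(radiance - deep_radiance, 1e-6, None))
    design = np.column_stack([np.ones(len(depths)), x])       # [1, X_1 ... X_N]
    coeffs, *_ = np.linalg.lstsq(design, depths, rcond=None)  # [h0, h1 ... hN]
    return coeffs


def lyzenga_predict(radiance, deep_radiance, coeffs):
    """Apply the fitted coefficients to new log-transformed radiances."""
    x = np.log(np.clip(radiance - deep_radiance, 1e-6, None))
    return coeffs[0] + x @ coeffs[1:]
```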

2.4.3. Shallow Bathymetry Inversion Based on Stumpf Model

The Stumpf et al. [64] model builds upon the simplified one-dimensional radiative transfer solution proposed by Lyzenga et al. but introduces a log transformation to linearize the relationship between spectral band values and depth. Unlike Lyzenga’s multiple linear regression approach, the Stumpf model employs a simple ratio of reflectance between two spectral bands to estimate depth. According to Beer’s Law, log-transformed reflectance decreases linearly with depth, with low-absorption bands attenuating more gradually than high-absorption bands. The ratio of these bands, when log-transformed, should therefore exhibit a near-linear relationship with depth. Depth is modelled as
\hat{h} = h_0 + h_1 \frac{\ln(n \times L_{HI})}{\ln(n \times L_{LO})}
where $h_0$ is an offset for zero depth, $h_1$ is a tunable constant defining the slope of the relationship between the ratio and depth, $L$ is radiance, $HI$ and $LO$ are labels indicating high-absorption and low-absorption bands, respectively, and $n$ is a large constant used to ensure positive log values and a linear response. The value of $n$ was set to 1000 throughout this investigation, as variation in $n$ has been shown to have no significant effect on estimated depth [64]. This method assumes that bottom reflectance variations affect both bands equally or insignificantly relative to depth, resulting in a consistent ratio across different seafloor types.
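A minimal sketch of the band-ratio calibration, again assuming reference depths and the two band radiances are available as arrays (illustrative code, not the study's implementation):

```python
import numpy as np


def stumpf_ratio(band_hi, band_lo, n=1000.0):
    """Log-ratio predictor with n fixed at 1000, as in this study."""
    return np.log(n * band_hi) / np.log(n * band_lo)


def stumpf_fit(band_hi, band_lo, depths, n=1000.0):
    """Least-squares estimate of the zero-depth offset h0 and slope h1."""
    ratio = stumpf_ratio(band_hi, band_lo, n)
    h1, h0 = np.polyfit(ratio, depths, 1)
    return h0, h1


def stumpf_predict(band_hi, band_lo, h0, h1, n=1000.0):
    """Apply the calibrated offset and slope to new band-ratio values."""
    return h0 + h1 * stumpf_ratio(band_hi, band_lo, n)
```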

2.4.4. Shallow Bathymetry Inversion Based on ML Model

Conventional ML methods have proven effective for shallow-water bathymetry mapping [15,19,20], with the Bagged Tree model, introduced by Breiman [65], being a widely used approach. This model applies Bootstrap Aggregating (Bagging) to Decision Tree learners: multiple trees are trained on bootstrapped samples of the training data and their predictions are averaged, which reduces variance and enhances predictive accuracy. This ensemble strategy improves the model’s ability to handle complex bathymetric variations, making Bagged Trees a robust tool for depth estimation in diverse aquatic environments.
In our implementation, we used 30 decision trees (learners). Each tree was trained using bootstrapped samples drawn with replacement from the original training data. The minimum leaf size was set to 8, which controls the smallest number of observations allowed in a tree leaf to prevent overfitting and encourage generalization.
Furthermore, for each split in the decision tree, the algorithm was set to use all available predictors (i.e., ‘Select All’), meaning no random subsetting of features was applied during the split. This configuration aligns with the standard Bagged Tree structure rather than Random Forest, which introduces random feature sampling.
These hyperparameters were selected based on empirical testing and prior studies to balance model complexity and performance. The model was implemented using MATLAB’s Regression Learner, and default options were refined where necessary.
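The model itself was built in MATLAB’s Regression Learner; an analogous configuration in Python/scikit-learn, sketched under the assumption that the stacked predictors and multibeam depths are available as arrays X and y, would look roughly like this:

```python
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor


def build_bagged_trees():
    """Bagged Tree regressor mirroring the reported settings:
    30 bootstrapped trees, minimum leaf size of 8, all predictors at each split."""
    return BaggingRegressor(
        estimator=DecisionTreeRegressor(min_samples_leaf=8),
        n_estimators=30,
        bootstrap=True,      # sample training points with replacement
        max_features=1.0,    # no random feature subsetting (unlike Random Forest)
        random_state=0,
    )


# Example usage (X: (n_points, 6) spectral bands + DSM; y: (n_points,) depths):
# model = build_bagged_trees().fit(X_train, y_train)
# depth_pred = model.predict(X_test)
```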

3. Results

In this section, we present the results of our DSM-integrated hybrid model for the bathymetric depth of Study Area 1, which exhibits low water turbidity, and Study Area 2, which exhibits high water turbidity along with irregular shading on the water surface and complex seafloor textures. These results are then compared with those of the WASI-2D, Lyzenga, Stumpf, and ML models.

3.1. Nearshore Bathymetry Validation for Integrated Model

To evaluate model accuracy, test data from the USV were input into the trained model, generating water depth inversion results. The errors were calculated, and the inversion accuracy was assessed, as summarized in Table 1. For the proposed DSM-integrated hybrid model, the scatter plots in Figure 3a,b illustrate the relationship between the measured and predicted water depths for Study Area 1 and Study Area 2, respectively. The point density in the scatter diagrams is colour-coded, with density increasing from dark to light. The red solid lines represent the 1:1 lines, and the blue dashed lines represent the regression lines, which indicate the overall tendency of the estimated depth.
In Study Area 1, characterized by low water turbidity, the bathymetric inversion results exhibit strong agreement with USV measurements. The scatter plot for this region (Figure 3a) demonstrates a high correlation between predicted and actual depths, with R2 = 0.933, and low errors (MAE = 0.292 m, RMSE = 0.400 m). Only a few localized errors were observed, mainly associated with seagrass and stone patches, along with minor artefacts near the centre of the scene along the seashore. These artefacts likely result from tiling effects in the reflectance mosaic and appear to protrude above the water surface, corresponding to the positive depth values shown in Figure 3.
For Study Area 2, which exhibits high water turbidity, irregular shading, and complex seafloor textures, the proposed Integrated Model also demonstrates a strong correlation between predicted and measured depths. The model achieved a high R2 = 0.969, with minimal errors (MAE = 0.040 m, RMSE = 0.080 m) (Figure 3b, Table 1).
The Boxplots in Figure 4g and Figure 5g indicate that the majority of validation points exhibit residual values of less than 0.3 m and 0.1 m in Study Areas 1 and 2, respectively, suggesting that the generated bathymetry maps are highly reliable for temporal change analysis. The overall correlation (R2 = 0.933 and 0.969 for the two study areas, respectively) confirms the robustness of the proposed approach using drone-based imagery. Furthermore, the error distribution of the predictions effectively generalizes beyond the training patches, demonstrating consistent performance across various seafloor types.

3.2. Comparative Analysis of the Integrated Model with WASI-2D, Lyzenga, Stumpf, and ML Models

We evaluated the performance of the proposed Integrated Model against commonly used bathymetric models—WASI-2D, Lyzenga, Stumpf, and ML with multispectral inputs alone—as well as a DSM-only model, using the same training and test datasets for both study areas. The scatter plots of water depth in the two study areas are shown in Figure 4a–f and Figure 5a–f, while the box plots of biases are displayed in Figure 4g and Figure 5g. The overall accuracy of each model is presented in Table 1.
In Study Area 1, which is characterized by low water turbidity, the accuracy of the models, as indicated by R2 values, follows the order: Integrated Model (0.933) > ML (0.929) > DSM (0.701) > Lyzenga (0.664) > Stumpf (0.423) > WASI-2D (0.401) (Table 1). The retrieval errors (RMSE) range from 0.400 m (Integrated Model) to 1.004 m (Stumpf model). As observed in Figure 4, the Integrated Model produces results similar to the ML model in terms of both R2 values and regression line fitting, indicating a strong correlation between predicted and measured water depths. For both the proposed method and the ML model, the regression lines between estimated and measured depths remain close to the 1:1 line as water depth increases in the two study areas. In contrast, the WASI-2D, Lyzenga, and Stumpf models perform worse than ML, demonstrating a central tendency to underestimate water depth, with greater scatter and significant deviations from the 1:1 reference line. The regression lines of the Integrated Model and ML model align closely with the 1:1 line, suggesting that the relationship between predicted and measured depths remains consistent across the entire depth range, rather than being dependent on specific depth levels.
In Study Area 2, which is characterized by high water turbidity, irregular shading, and complex seafloor textures, the models exhibit the following R2 values: Integrated Model (0.969) > ML (0.842) > Lyzenga (0.612) > DSM (0.481) > Stumpf (0.264) > WASI-2D (0.192). Unlike Study Area 1, where ML and the Integrated Model performed similarly, in Study Area 2 the ML model’s performance dropped to 0.842, whereas the Integrated Model achieved the highest accuracy of 0.969. Specifically, the Integrated Model outperformed ML by 12%, Lyzenga by 37%, Stumpf by 70%, and WASI-2D by 80%. The retrieval errors (RMSE) ranged from 0.080 m (Integrated Model) to 0.450 m (WASI-2D model). As illustrated in the scatter plots (Figure 5a–f), the Integrated Model’s regression line remains closely aligned with the 1:1 line, indicating consistent accuracy across all depth ranges. This further reinforces that the Integrated Model maintains a more stable and reliable performance compared to the ML, Lyzenga, Stumpf, and WASI-2D models.
The results demonstrate that DSM-only predictions capture overall depth trends but lack the fine spectral detail needed for precise estimation, as reflected by relatively high RMSE and MAE values. In contrast, multispectral-only ML predictions perform well in optically simple areas but degrade in shadowed or spectrally ambiguous regions. Importantly, the Integrated Model achieves the highest accuracy across both areas, with more than a 12% accuracy gain in the complex Area 2 compared to spectral-only ML. This indicates that DSM contributes complementary information beyond spatial co-alignment, particularly in environments where depth and topography are correlated.
Additionally, across both regions, DSM-only models consistently produce R2 values between those of ML and empirical models. Although DSM is less reliable for absolute depth prediction, it effectively captures relative elevation variations and structural seabed morphology, which enhances interpretability and strengthens the hybrid inversion.

4. Discussion

The Integrated Model demonstrated superior performance in both Study Area 1 (low turbidity) and Study Area 2 (high turbidity) due to its ability to handle the transition between optically shallow and optically deep waters through DSM integration. This capability makes it particularly well-suited for complex and turbid water environments where traditional empirical and semi-analytical models struggle. When comparing the models, WASI-2D and empirical models such as Lyzenga and Stumpf exhibit lower accuracy because they assume optically shallow waters have a uniform bottom colour and a homogeneous water column—conditions that do not align with the complex characteristics of Study Area 2. This limitation explains the significantly lower accuracy of these models in highly turbid environments.
Additionally, the Integrated Model leverages ML and DSM data, which enables it to adapt dynamically to variations in water depth, substrate composition, and reflectance properties. Unlike empirical methods, which rely on fixed assumptions about water optical properties, the Integrated Model offers greater flexibility and higher accuracy in diverse environmental conditions. Overall, these findings confirm the robustness of the Integrated Model, making it a highly effective tool for bathymetric mapping, especially in areas with complex underwater topography and variable water turbidity.

4.1. Comparison of Water Depth Inversion Results Using Different Models

To compare the results of water depth inversion using different models, the trained models were applied to estimate water depth across the two study areas. As illustrated in Figure 6 and Figure 7, the predicted bathymetric data from each model generally align with the observed trends in water depth variations. The difference (Bias) between the USV measured bathymetry and bathymetric inversion results of different models is also shown in Figure 8 and Figure 9. The areas with USV multibeam coverage are clearly marked in Figure 6e and Figure 7e to facilitate the direct comparison and validation of model predictions.
A comparison of the inversion results indicates that the proposed Integrated Model provided the most accurate predictions, closely reflecting actual water depth changes. In contrast, the conventional linear models (Lyzenga and Stumpf) and the WASI-2D model tended to overestimate water depth in very shallow waters and to underestimate it when the actual depth was around 1 m, although they were still able to capture the general trend of water depth. Due to the lack of actual data for correction, the WASI-2D model is the most affected by shadow and high water turbidity, exhibiting the lowest accuracy and struggling in both shallow and deep waters.
The bathymetric datasets provided critical insight into the morphological complexity of the nearshore zone, highlighting the capacity of high-resolution UAV-based remote sensing to capture subtle seabed features. In Study Area 1 (Figure 6 and Figure 8), the bedforms appeared as coherent, linear structures—characteristics indicative of controlled sediment dynamics. These patterns likely resulted from an artificial gravel beach installed along the coastline, which acts to attenuate wave energy, reduce sediment transport variability, and maintain morphological uniformity.
By contrast, Study Area 2 (Figure 7 and Figure 9) presented a markedly different profile, characterized by discontinuous, irregular bedform geometries consistent with natural depositional processes. The absence of shoreline engineering in this area likely exposed the seabed to more dynamic hydrodynamic forces—such as wave refraction, tidal currents, and wind-driven transport—resulting in spatially variable substrate textures and sediment patchiness.
In addition to geomorphic differences, several environmental factors contributed to reduced depth estimation accuracy in Study Area 2. Notably, shadowing effects cast by coastal wetland vegetation introduced significant radiometric noise into the multispectral UAV imagery. These shadows disrupted the spectral signal critical to optical-based depth retrieval, producing localized underestimation in the regression model. Suboptimal illumination conditions—such as overcast skies or low sun angles—further degraded the spectral reflectance quality, reducing the signal-to-noise ratio essential for resolving subtle depth variations.
Collectively, these findings underscore the dual role of natural geomorphology and environmental acquisition conditions in shaping the fidelity of drone-based bathymetric inversion. They also emphasize the need for adaptive modelling strategies that incorporate ancillary environmental data to improve robustness under complex coastal scenarios.
The challenges of collecting USV-based validation data in Study Area 2 further contributed to the accuracy differences between the two areas. The inaccessibility of this region resulted in a lack of training data in specific small regions, such as the shaded area near the shore that the USV cannot reach, which appears to have had a negative impact on model performance. As a result, models that rely on reference depth measurements, such as ML-based methods, exhibited greater errors.
A comparison between the two study areas reveals that Study Area 1 featured higher image quality, well-defined seabed structures, and a more homogeneous substrate composition. These characteristics allowed ML models to achieve an R2 value of 0.929. After integrating the DSM, accuracy improved slightly to 0.933, indicating that while DSM data added more detailed depth information, the improvement was relatively small in stable and uniform environments.
In Study Area 2, where the coastal structure is more complex, the accuracy of all models decreased compared to Study Area 1. The ML model’s performance dropped to an R2 value of 0.842 in this region. However, after integrating the DSM, accuracy improved significantly, with the R2 increasing to 0.969 and the RMSE decreasing to 0.080. This result highlights the strong adaptability and robustness of the Integrated Model, particularly in environments with high water turbidity, irregular shading, and varied seafloor textures.

4.2. Limitations of the Proposed Integrated Model

While the proposed Integrated Model demonstrates a strong performance on data specific to Lazarus Island, its generalizability to other geographic locations remains a challenge. This limitation arises from inherent variations in optical water types, seabed composition, and water column properties across different sites, which can significantly impact spectral responses and depth retrieval accuracy. As such, direct application of the model to new regions without adaptation is unlikely to yield optimal results [36].
The bathymetric mapping inversion results, derived from measured water depth data and multispectral imagery, confirm that it is possible to approximate real water depth values. However, several external factors, including suspended solids, seabed composition, and image quality, influence the accuracy of inversion models. To further improve bathymetric mapping, future research should focus on incorporating additional spectral information to reduce errors caused by suspended particles, refining correction algorithms for seafloor variations, and enhancing DSM integration techniques to improve predictions in complex underwater environments. Potential strategies for model transferability should also be explored in future research, such as fine-tuning the model with a limited number of site-specific calibration points, enabling localized adjustment while minimizing additional data collection efforts. Alternatively, the use of synthetic datasets to artificially expand the spectral and bathymetric diversity within the training data may improve robustness and adaptability across broader environmental conditions. These strategies represent promising pathways toward developing more transferable and scalable bathymetric mapping models.

5. Conclusions

To improve the accuracy of bathymetric depth estimation, particularly in environments affected by shadow effects, water turbidity, and complex seafloor textures, we developed a robust ML-based bathymetric inversion model that integrates the DSM data with UAV-based multispectral imagery for water depth inversion. To evaluate its effectiveness, we systematically compared its performance with conventional empirical models (Lyzenga and Stumpf), the WASI-2D model, and a standalone ML approach across different underwater conditions.
Our results demonstrate that the DSM-Integrated Model offers notable improvement in accuracy and stability compared to the other approaches. While conventional empirical models tended to overestimate water depth in shallow regions and showed inconsistencies in complex seafloor environments, the DSM-Integrated Model effectively mitigated errors caused by heterogeneous seafloor textures, irregular shading, and high water turbidity. This integration contributed to a significant improvement in accuracy, achieving an R2 of 0.969 and an RMSE of 0.080 m, making it a promising alternative to traditional methods.
Although the ML model alone demonstrated lower errors than the empirical and WASI-2D models, it faced challenges in capturing depth variations in areas with complex seabed structures. On the other hand, the WASI-2D model performed well in optically shallow waters, operating effectively without requiring reference depth measurements—an advantage when measured data is unavailable. However, its accuracy decreased in turbid environments with varying seabed compositions, affecting its reliability in certain conditions.
The integration of DSM data into the ML model proved to be highly effective in stabilizing depth estimations across different conditions, allowing for better generalization beyond training datasets. This hybrid approach bridges the gap between remote sensing reflectance and bathymetric inversion, offering a scalable and adaptable solution for UAV-based bathymetric mapping. These findings underscore the critical role of DSM integration in enhancing the accuracy and reliability of UAV-based bathymetric surveys. They also support the broader adoption of ML techniques for coastal monitoring. When measured depth data is unavailable, the WASI-2D model serves as a viable alternative. However, when reference depth measurements are accessible, ML-based methods, particularly our DSM-Integrated Model, deliver significantly improved accuracy and detail, making them well-suited for diverse and complex seabed environments.
Future research should focus on further optimizing DSM-based models by incorporating additional environmental parameters, such as suspended sediment concentration and substrate classification, to enhance their adaptability to various aquatic environments. Expanding the application of ML techniques for real-time bathymetric monitoring will also be essential for advancing the precision, efficiency, and scalability of bathymetric mapping, ultimately supporting more effective coastal resource management and planning.

Author Contributions

Conceptualization, M.Z.; methodology, M.Z.; software, M.Z.; validation, M.Z.; formal analysis, M.Z.; investigation, M.Z. and A.C.L.; resources, M.Z.; data curation, M.Z., A.C.L., A.E.A., H.T.D. and Y.L.L.; writing—original draft preparation, M.Z.; writing—review and editing, M.Z. and A.C.L.; visualization, M.Z.; supervision, S.K.O.; project administration, S.K.O.; funding acquisition, S.K.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Cities of Tomorrow (CoT) R&D programme (Grant COT-V4-2020-8), Singapore.

Data Availability Statement

Data available on request from the authors.

Acknowledgments

We thank our equipment technical support staff member Tung Yee Wong for the help during the preparation of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xu, N. Detecting Coastline Change with All Available Landsat Data over 1986–2015: A Case Study for the State of Texas, USA. Atmosphere 2018, 9, 107. [Google Scholar] [CrossRef]
  2. Hilton, M.J.; Manning, S.S. Conversion of Coastal Habitats in Singapore: Indications of Unsustainable Development. Environ. Conserv. 2009, 22, 307–322. [Google Scholar] [CrossRef]
  3. Kalybekova, A. A Review of Advancements and Applications of Satellite-Derived Bathymetry. Eng. Sci. 2025, 35, 1541. [Google Scholar] [CrossRef]
  4. Paredes, J.M.; Spero, R.E. Water depth mapping from passive remote sensing data under a generalized ratio assumption. Appl. Opt. 1983, 22, 1134–1135. [Google Scholar] [CrossRef]
  5. Polcyn, F.C.; Lyzenga, D.R. Calculations of Water Depth from ERTS-MSS Data; NASA: Ann Arbor, MI, USA, 1973. [Google Scholar]
  6. Mobley, C.D.; Sundman, L.K.; Davis, C.O.; Bowles, J.H.; Downes, T.V.; Leathers, R.A.; Montes, M.J.; Bissett, W.P.; Kohler, D.D.R.; Reid, R.P.; et al. Interpretation of hyperspectral remote-sensing imagery by spectrum matching and look-up tables. Appl. Opt. 2005, 44, 3576–3592. [Google Scholar] [CrossRef]
  7. Lee, Z.; Carder, K.L.; Mobley, C.D.; Steward, R.G.; Patch, J.S. Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization. Appl. Opt. 1999, 38, 3831–3843. [Google Scholar] [CrossRef]
  8. Philpot, W.D. Bathymetric mapping with passive multispectral imagery. Appl. Opt. 1989, 28, 1569–1578. [Google Scholar] [CrossRef] [PubMed]
  9. Clark, R.K.; Fay, T.H.; Walker, C.L. Bathymetry calculations with Landsat 4 TM imagery under a generalized ratio assumption. Appl. Opt. 1987, 26, 4036_1–4038. [Google Scholar] [CrossRef] [PubMed]
  10. Bramante, J.F.; Raju, D.K.; Sin, T.M. Multispectral derivation of bathymetry in Singapore’s shallow, turbid waters. Int. J. Remote Sens. 2012, 34, 2070–2088. [Google Scholar] [CrossRef]
  11. Bramante, J.F.; Ali, S.M.; Ziegler, A.D.; Sin, T.M. Decadal biomass and area changes in a multi-species meadow in Singapore: Application of multi-resolution satellite imagery. Bot. Mar. 2018, 61, 289–304. [Google Scholar] [CrossRef]
  12. Domazetović, F.; Šiljeg, A.; Marić, I.; Faričić, J.; Vassilakis, E.; Panđa, L. Automated Coastline Extraction Using the Very High Resolution WorldView (WV) Satellite Imagery and Developed Coastline Extraction Tool (CET). Appl. Sci. 2021, 11, 9482. [Google Scholar] [CrossRef]
  13. Lee, Z.; Carder, K.L.; Chen, R.F.; Peacock, T.G. Properties of the water column and bottom derived from Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data. J. Geophys. Res. Ocean. 2001, 106, 11639–11651. [Google Scholar] [CrossRef]
  14. Giardino, C.; Brando, V.E.; Dekker, A.G.; Strömbeck, N.; Candiani, G. Assessment of water quality in Lake Garda (Italy) using Hyperion. Remote Sens. Environ. 2007, 109, 183–195. [Google Scholar] [CrossRef]
  15. Mohamed, H.; Negm, A.; Salah, M.; Nadaoka, K.; Zahran, M. Assessment of proposed approaches for bathymetry calculations using multispectral satellite images in shallow coastal/lake areas: A comparison of five models. Arab. J. Geosci. 2017, 10, 42. [Google Scholar] [CrossRef]
  16. Alevizos, E.; Le Bas, T.; Alexakis, D.D. Assessment of PRISMA Level-2 Hyperspectral Imagery for Large Scale Satellite-Derived Bathymetry Retrieval. Mar. Geod. 2022, 45, 251–273. [Google Scholar] [CrossRef]
  17. Stumpf, R.P.; Pennock, J.R. Calibration of a general optical equation for remote sensing of suspended sediments in a moderately turbid estuary. J. Geophys. Res. Ocean. 1989, 94, 14363–14371. [Google Scholar] [CrossRef]
  18. Louchard, E.M.; Reid, R.P.; Stephens, F.C.; Davis, C.O.; Leathers, R.A.; Valerie, D.T. Optical remote sensing of benthic habitats and bathymetry in coastal environments at Lee Stocking Island, Bahamas: A comparative spectral classification approach. Limnol. Oceanogr. 2003, 48 Pt 2, 511–521. [Google Scholar] [CrossRef]
Figure 1. Survey locations on Lazarus Island: (a,b) top view of the study area, with dashed lines indicating the zoomed-in regions shown in the subsequent panels, and (c) view of the nearshore bay showing the two distinct study areas (Area 1 and Area 2).
Figure 2. Workflow for integrated bathymetric modelling.
Figure 3. Scatter diagrams of water-depth inversion values versus measured values in Area 1 (a) and Area 2 (b); colour indicates point density, increasing from dark to light. The red dashed line represents the 1:1 reference line (perfect agreement between measured and predicted depths), while the solid blue line shows the linear regression fit of the data points.
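As an illustration only (assumed, not the authors' code), the sketch below shows how a density-coloured depth scatter plot with a 1:1 reference line and a linear regression fit, in the style of Figures 3–5, could be produced; all function and variable names are placeholders.

```python
# Illustrative sketch only (assumed, not the authors' code): density-coloured
# scatter of measured vs. predicted depths with the 1:1 reference line and a
# linear regression fit, in the style of Figures 3-5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde, linregress

def depth_scatter(measured, predicted, ax, title):
    # Colour each point by its local density (dark = sparse, light = dense).
    xy = np.vstack([measured, predicted])
    density = gaussian_kde(xy)(xy)
    order = density.argsort()  # draw the densest points last (on top)
    ax.scatter(measured[order], predicted[order], c=density[order], cmap="viridis", s=4)

    # 1:1 reference line: perfect agreement between measured and predicted depths.
    lims = np.array([min(measured.min(), predicted.min()),
                     max(measured.max(), predicted.max())])
    ax.plot(lims, lims, "r--", label="1:1 line")

    # Linear regression fit of predicted against measured depths.
    fit = linregress(measured, predicted)
    ax.plot(lims, fit.intercept + fit.slope * lims, "b-",
            label=f"fit (R$^2$ = {fit.rvalue ** 2:.2f})")

    ax.set_xlabel("Measured depth (m)")
    ax.set_ylabel("Predicted depth (m)")
    ax.set_title(title)
    ax.legend()

# Usage with synthetic depths (for demonstration only):
rng = np.random.default_rng(0)
z_meas = rng.uniform(0.0, 5.0, 2000)
z_pred = z_meas + rng.normal(0.0, 0.3, z_meas.size)
fig, ax = plt.subplots()
depth_scatter(z_meas, z_pred, ax, "Synthetic example")
plt.show()
```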
Figure 4. Scatter diagrams of water-depth inversion values versus measured values using the WASI-2D, Lyzenga, Stumpf, ML, DSM, and integrated models in Area 1 (a–f); colour indicates point density, increasing from dark to light. (g) Boxplots of the biases of the different models in Study Area 1. The red dashed line represents the 1:1 reference line (perfect agreement between measured and predicted depths), while the solid blue line shows the linear regression fit of the data points.
Figure 5. Scatter diagrams of water-depth inversion values versus measured values using the WASI-2D, Lyzenga, Stumpf, ML, DSM, and integrated models in Area 2 (a–f); colour indicates point density, increasing from dark to light. (g) Boxplots of the biases of the different models in Study Area 2. The red dashed line represents the 1:1 reference line (perfect agreement between measured and predicted depths), while the solid blue line shows the linear regression fit of the data points.
Figure 6. Bathymetric inversion results of (a) WASI-2D, (b) Lyzenga, (c) Stumpf, (d) ML, (e) DSM, and (f) Integrated Model for Study Area 1. (g) USV-measured water depth.
Figure 7. Bathymetric inversion results of (a) WASI-2D, (b) Lyzenga, (c) Stumpf, (d) ML, (e) DSM, and (f) Integrated Model for Study Area 2. (g) USV-measured water depth.
Figure 8. The difference (bias) between the USV-measured bathymetry and the bathymetric inversion results of (a) WASI-2D, (b) Lyzenga, (c) Stumpf, (d) ML, (e) DSM, and (f) Integrated Model for Study Area 1. (g) USV-measured water depth.
Figure 9. The difference (bias) between the USV-measured bathymetry and the bathymetric inversion results of (a) WASI-2D, (b) Lyzenga, (c) Stumpf, (d) ML, (e) DSM, and (f) Integrated Model for Study Area 2. (g) USV-measured water depth.
Table 1. Overall accuracy of the models in Study Areas 1 and 2.

                      Study Area 1                      Study Area 2
Model                 R2       RMSE (m)   MAE (m)       R2       RMSE (m)   MAE (m)
WASI-2D               0.4012   1.0315     0.8437        0.1917   0.4558     0.3610
Lyzenga               0.6637   0.9789     0.8567        0.6117   0.3113     0.2469
Stumpf                0.4225   1.0043     0.8542        0.2644   0.3162     0.2520
ML                    0.9288   0.4108     0.3017        0.8420   0.1762     0.1261
DSM                   0.7014   1.8496     1.5637        0.4809   0.7337     0.5447
Integrated Model      0.9325   0.4002     0.2925        0.9693   0.0801     0.0395
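For reference, the following minimal sketch (assumed, not taken from the paper) shows how the accuracy statistics reported in Table 1 can be computed from co-located measured and predicted depths. R2 is taken here as the coefficient of determination; the regression fits shown in Figures 3–5 may instead report the squared correlation coefficient.

```python
# Illustrative sketch only (assumed, not the authors' code): computing the accuracy
# statistics reported in Table 1 (R2, RMSE, MAE) from paired USV-measured and
# model-predicted depths. Array names are placeholders.
import numpy as np

def accuracy_metrics(measured, predicted):
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = predicted - measured

    rmse = np.sqrt(np.mean(residual ** 2))                # root-mean-square error
    mae = np.mean(np.abs(residual))                       # mean absolute error
    ss_res = np.sum(residual ** 2)                        # residual sum of squares
    ss_tot = np.sum((measured - measured.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                            # coefficient of determination
    return r2, rmse, mae

# Usage with synthetic depths (for demonstration only):
rng = np.random.default_rng(1)
z_meas = rng.uniform(0.0, 5.0, 1000)
z_pred = z_meas + rng.normal(0.0, 0.4, z_meas.size)
print("R2 = %.4f, RMSE = %.4f m, MAE = %.4f m" % accuracy_metrics(z_meas, z_pred))
```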
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
