Article

Sensor Synergy in Bathymetric Mapping: Integrating Optical, LiDAR, and Echosounder Data Using Machine Learning

1
Geomatics Engineering Program, Graduate School, Istanbul Technical University, Istanbul 34469, Türkiye
2
Geomatics Engineering Department, Civil Engineering Faculty, Istanbul Technical University, Istanbul 34469, Türkiye
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(16), 2912; https://doi.org/10.3390/rs17162912
Submission received: 22 June 2025 / Revised: 1 August 2025 / Accepted: 20 August 2025 / Published: 21 August 2025
(This article belongs to the Section Environmental Remote Sensing)

Abstract

Bathymetry, the measurement of water depth and underwater terrain, is vital for scientific, commercial, and environmental applications. Traditional methods like shipborne echosounders are costly and inefficient in shallow waters due to limited spatial coverage and accessibility. Emerging technologies such as satellite imagery, drones, and spaceborne LiDAR offer cost-effective and efficient alternatives. This research explores integrating multi-sensor datasets to enhance bathymetric mapping in coastal and inland waters by leveraging each sensor’s strengths. The goal is to improve spatial coverage, resolution, and accuracy over traditional methods using data fusion and machine learning. Gülbahçe Bay in İzmir, Turkey, serves as the study area. Bathymetric modeling uses Sentinel-2, Göktürk-1, and aerial imagery with varying resolutions and sensor characteristics. Model calibration evaluates independent and integrated use of single-beam echosounder (SBE) and satellite-based LiDAR (ICESat-2) during training. After preprocessing, Random Forest and Extreme Gradient Boosting algorithms are applied for bathymetric inference. Results are assessed using accuracy metrics and IHO CATZOC standards, achieving A1 level for 0–10 m, A2/B for 0–15 m, and C level for 0–20 m depth intervals.

1. Introduction

Coastal areas serve as vital meeting points between land and sea, holding significant social, economic, and ecological importance. Approximately 10% of the global population resides in low-lying coastal regions, where human activities intersect with the intensified impacts of climate change-induced natural hazards [1,2]. Consequently, coastal ecosystems face escalating pressures and heightened susceptibility to environmental stressors [3]. Accurate bathymetric information is essential for understanding coastal morphology, near-bottom currents, sediment transport, and ecosystem dynamics, thereby enabling effective planning and management strategies by coastal engineers, researchers, and policymakers [4].
Traditional methods for obtaining bathymetry are often constrained by logistical challenges, high costs, and limited spatial and temporal coverage. This issue is particularly problematic in dynamic coastal environments, where changes due to dredging, scouring, and erosion can render data obsolete before it can be collected and processed. Consequently, approximately half of the world’s shallow coastal waters lack comprehensive survey data [5]. Bathymetric data in coastal areas play a pivotal role across a spectrum of applications, including monitoring, coastal engineering and protection, marine resource management, energy exploration, submarine cable deployment, dredging operations, navigation, sovereignty delineation, scientific research, and conservation efforts [6]. Therefore, considerable time and effort have been invested by the scientific community and responsible authorities worldwide in searching for alternatives to fill this gap.
Remote sensing data have served as a reliable data source for ocean applications, offering significant advantages in terms of cost-effectiveness, scalability, and spatial and temporal coverage [6]. Ocean remote sensing encompasses a broad range of applications, including monitoring sea surface temperature [7], water quality analysis [8], mapping marine ecosystems [9], and detecting environmental hazards such as oil spills or algal blooms [10,11]. A growing area within this field is underwater image captioning, which uses artificial intelligence (AI) to automatically describe scenes captured by submersible cameras. This technology enhances marine biodiversity studies, supports autonomous underwater vehicles (AUVs), and aids in documenting deep-sea environments where human access is limited [12]. Satellite-derived bathymetry (SDB), another application field of ocean remote sensing, has emerged as a promising solution, benefiting from advancements in sensor technology. SDB studies have leveraged remote sensing technology since the 1970s [13]. Over the years, various methods and techniques have been developed to extract seabed depths from satellite imagery, with ongoing efforts to enhance accuracy and reliability in coastal and inland bathymetric data collection [14,15,16]. This development spans a wide range of research, from classical physics-based theories to cutting-edge statistical approaches utilizing AI [6,17,18,19,20,21].
Despite the promising accuracy and ease of application of AI-based algorithms, they require extensive datasets for training and often struggle to generalize across different conditions [22,23,24]. The ongoing need for comprehensive, high-quality training data presents significant challenges, particularly when scaling these methodologies for broader geographical applications [4]. In this context, satellite-based LiDAR data, which actively senses, is considered an alternative due to the economic, physical, and operational challenges associated with obtaining traditional echosounder training data. Additionally, the impact of using satellite and UAV-derived imagery data with different characteristics as input on modeling processes remains an active research question within the scope of SDB.
The term SDB encompasses various techniques that extract water depths using data from space-based sensors. These methods include gravity measurements [25], nearshore wave characteristics [26], stereo or multi-view space or airborne sensors [27], space-based laser [28], and multispectral imagery [29]. Among these, multispectral imagery is the most employed and is closely associated with the concept of SDB. Therefore, throughout the remainder of this paper, SDB will exclusively refer to the use of multispectral imagery for bathymetric analysis.
The innovative approach initially utilized multispectral aerial photography [30], evolving further with the integration of multispectral optical satellite imagery, beginning with Landsat constellations, and later extending to other satellites like Sentinel-2 [31,32,33]. SDB is determined by analyzing the attenuation of radiance concerning depth and wavelength in the water column, employing either empirical or semi-analytical imaging techniques [34]. Empirical methods establish statistical correlations between pixel values in images and actual depth measurements [29]. Analytical models, on the other hand, rely on radiative transfer models and optical characteristics of seawater, such as attenuation coefficients, backscattering, spectral signatures of suspended and dissolved matter, and bottom reflectance [35].
In practical terms, the primary distinction between empirical and semi-analytical methods lies in their implementation complexity. Empirical methods are simpler to deploy but necessitate depth control points within the study area. Moreover, recalculating the relationship between measured depth and reflectance is essential for each use of a different satellite image, even within the same area, and this relationship may only apply effectively to a singular substrate type [36]. Consequently, their utility is limited in regions featuring intricate seabed substrates or dynamically changing turbid conditions. Nonetheless, recent studies suggest that enhancements are feasible through additional processing techniques such as spatial modeling or utilization of multi-temporal images [37,38,39].
On the other hand, semi-analytical approaches do not necessitate depth control points but instead rely on input parameters associated with water and atmospheric properties, which can be theoretically derived or obtained from field measurements. In theory, these approaches could be applied in any location without prior seabed depth measurements. However, they demand a comprehension of the underlying physical model or pre-prepared Look-Up Tables (LUTs), and their implementation requires significantly more intricate modeling, particularly for non-specialists, compared to empirical methods [40].
On the empirical side, a chronological review of empirical SDB studies shows that early work based on linear and logarithmic band ratio algorithms paved the way for current machine learning (ML) algorithms and deep learning (DL) approaches [41,42]. The first use of ML-based SDB mapping was introduced by Ceyhun and Yalcın (2010) [43]; ML-based approaches subsequently gained popularity, first with the support vector machine (SVM) algorithm [18,23], followed by random forest (RF) [18,33,44,45] and XGBoost [46,47,48]. Among these, the use of XGBoost in optical SDB is relatively new, with only two studies conducted for bathymetric depth extraction from Sentinel-2 satellite imagery. Susa’s study highlighted the current use of XGBoost and suggested further investigation of its performance [47]. DL-based SDB mapping is a novel research initiative primarily focused on detecting the local spatial correlation between reflectance information and water depth. Early studies employed artificial neural networks (ANNs) for SDB mapping, achieving significant improvements in accuracy compared to classical models [49,50]. Another study by Dickens and Armstrong utilized recurrent neural networks (RNNs) on Orbview 3 satellite imagery to derive SDB in the Pacific Islands [22]. In a more recent study, convolutional neural networks (CNNs) were used to define the relationship and produce SDB maps at spatial resolutions compatible with multispectral imagery [51]. Wan and Ma (2021) employed a deep belief network with data perturbation (DBN-DP) model on Quickbird and Worldview 2 imagery, reporting R2 correlation and RMSE metrics in comparison with other models used in the study [52]. A newer study published in 2023 compared fundamental empirical models and ML-based methods (RF, SVM, and NN) for SDB mapping in the Ganquan Dao area, with findings indicating higher inversion accuracy with ML-based methods up to a depth of 15 m.
The authors of the study noted that comparative analysis of empirical and ML-based methods at different water depths remains scarce and insufficient [53].
The progression of AI-based bathymetric inversion methods has evolved from machine learning to advanced deep learning architectures. Initial efforts focused on ensemble learning techniques, such as stacking models, which improved shallow water bathymetry estimation by integrating multiple ML base learners to enhance robustness and accuracy [54]. These approaches laid the groundwork for more complex models. Recently, transformer-based deep learning architectures have emerged, offering superior spatial feature extraction and contextual understanding. For instance, BathyFormer, a vision transformer model, demonstrated high accuracy in nearshore bathymetry mapping using multispectral imagery [55]. Models using CNN and ConvLSTM architectures have shown strong performance in reconstructing and forecasting bathymetric changes in dynamic coastal environments [56]. Collectively, these DL-based methodological advancements reflect a shift toward more flexible and scalable data-driven bathymetric mapping workflows that require large datasets for effective training; however, they face challenges in generalizing effectively due to spatial variability, which limits their ability to produce consistent results across different geographic regions [57]. In parallel, a combined use of geometry-based structures and deep learning models has been proposed to estimate bathymetry without relying on in situ depth data. These models leverage satellite or aerial imagery and advanced photogrammetric techniques, such as Structure-from-Motion (SfM) combined with Multi-View Stereo (MVS), to generate digital surface models (DSMs), which are then used as input training data for DL-based methods to improve bathymetric mapping [58,59]. Although these approaches benefit from eliminating the need for in situ measurements, they present their own set of challenges.
These include the need for multiple overlapping images and for both intrinsic and extrinsic sensor calibration; difficulties in generating reliable tie points for image matching, particularly over homogeneous or sandy seabeds; the requirement for accurate ground control points (GCPs) for georeferencing during DSM generation; and significant computational demands, especially when processing very high-resolution imagery or large spatial extents, owing to the complexity of the DL-based processing algorithms. Additionally, their performance is highly dependent on environmental factors such as water surface distortions, water clarity and turbidity, sun glint, and shadows, as is the case for other bathymetric inversion methods.
Both empirical and analytical methods offer the benefit of generating bathymetry across extensive regions with high spatial precision and at a reduced expense compared to shipborne acoustic systems. Nevertheless, they are constrained to optically shallow water environments. The extinction depth of bathymetry extracted via SDB spans from a few meters (e.g., northern Baltic Sea) [60] to 40 m (e.g., Mediterranean Sea) [61], contingent upon factors like sunlight penetration, bottom reflectance, turbidity, location, and season. The spatial resolution of such bathymetric data varies with the optical sensor used, ranging from coarse resolution with platforms like Landsat-8 (approximately 30 m) to medium resolution with SPOT, Sentinel-2 (10 m), and Planetscope (3 m) [62] to very high resolution (50–60 cm) with commercial satellites such as WorldView-3 [62,63].
In the context of SDB, empirical models primarily rely on determining and modeling the correlations between surface reflectance values obtained from remote sensing data and depth information. The depth information used for model training is often provided by single-beam or multi-beam echosounder measurements. The physical and economic challenges associated with collecting these echosounder data, along with their limited coverage, pose difficulties for large-scale studies. As an alternative, the use of spaceborne LiDAR data has been the subject of a limited number of studies. Forfinski-Sarkozi and Parrish (2016) demonstrated the potential of satellite LiDAR for bathymetric measurement at a depth of 10 m using NASA’s high-altitude airborne ICESat-2 simulator, the Multiple Altimeter Beam Experimental LiDAR (MABEL) [64]. Li et al. (2019) [65] combined airborne MABEL data with Landsat-based water classifications to obtain nearshore bathymetric data for Lake Mead; comparisons with in situ data revealed root mean square errors (RMSE) ranging from 1.18 to 2.36 m across four transects. Parrish et al. (2019) [28] evaluated the bathymetric mapping performance of ATLAS over St. Thomas in the U.S. Virgin Islands. After correcting for refraction and the reduction in the speed of light in water, the RMSE, validated against Experimental Advanced Airborne Research LiDAR-B (EAARL-B) data collected by the U.S. Geological Survey (USGS), reached levels between 0.43 and 0.60 m for four laser tracks. However, because it provides only point measurements along the ground tracks of six laser beams, ICESat-2 is insufficient for producing a comprehensive Digital Elevation Model (DEM) of the seafloor, but it has the potential to serve as training data for modeling studies using optical imagery obtained through passive remote sensing.
In this context, bathymetric inference using satellite image-LiDAR fusion has primarily employed basic models such as the Lyzenga method and linear regression, without examining the performance of current machine learning-based algorithms [66,67,68,69]. This study aims to address this research gap. Additionally, the bathymetric performance of our national satellite Göktürk-1 imagery, which has been examined only once in ocean waters in our previous study [70], and of very high-resolution UAV imagery will be evaluated. Past satellite-derived bathymetry studies have mostly been conducted using reflectance values (RGB) obtained from remote sensing data in the visible region. In this research, the potential benefit of adding brightness values derived from tasseled cap transformation (TCT) of the imagery data to the dataset for bathymetric modeling is investigated. Finally, for the first time on a national scale and over national coastlines, a comprehensive analysis is presented, comparing the bathymetric modeling performance of Sentinel-2, Göktürk-1, and UAV-based remote sensing data of different spatial resolutions, in combination with LiDAR-based and single-beam sounding training data, using the latest machine learning-based algorithms.
This research conducts a comparative performance analysis of bathymetric modeling using remote sensing data obtained from different platforms and investigates the synergy of cross-platform data in bathymetric inference methods, addressing a gap that remains significant at the national level and continues to attract international research attention. The research investigates the impact of spatial resolution by utilizing Sentinel-2, Göktürk-1, and UAV imagery; examines the effects of using ICESat-2 and single-beam sounding data independently and together as training data on model performance; and explores the potential contributions of derived data (TCT) from the input remote sensing data to the modeling process. Additionally, the latest ensemble ML-based bathymetric modeling algorithms are comparatively tested. To achieve these goals, the following research topics are emphasized:
-
Investigating the bathymetric modeling performance of ICESat-2 LiDAR as a training dataset coupled with Sentinel-2, Göktürk-1, and UAV imagery data with different spatial resolutions, independently for the study sub-region.
-
Examining the impact of using ICESat-2 LiDAR data independently and its fusion with single-beam echosounding data for the training and validation of bathymetric models over a large region.
-
Exploring the effect of brightness values derived from TCT, in addition to spectral bands of imagery, on bathymetric modeling performance.
-
Investigating the performance of machine learning-based bathymetric modeling algorithms for this region.

2. Study Area and Data

2.1. Study Area

Gülbahçe Bay, located on the vibrant coast of İzmir, Türkiye, stands out as a significant area of interest for scientific research on coastal environmental dynamics. Situated within the Eastern Aegean Sea, Gülbahçe Bay is strategically positioned approximately 20 km southwest of İzmir city center and encompasses an area recognized for its ecological importance and diverse marine habitats (Figure 1). This bay plays a critical role in the region’s ecological balance through its complex network of marine ecosystems, supporting rich biodiversity and providing vital habitat for numerous marine species. Coastal waters are influenced by various environmental factors such as oceanographic processes, anthropogenic activities, and geological features, contributing to their dynamic nature and ecological complexity. The geographical location of Gülbahçe Bay makes the region susceptible to various environmental pressures and human-induced impacts, including urbanization, industrialization, agricultural runoff, and tourism-related activities. Understanding the complex interactions among these factors and their effects on the bay’s ecological health is crucial for effective conservation and sustainable management strategies. Additionally, Gülbahçe Bay serves as a valuable area for scientific research, offering a unique opportunity to study coastal processes, sediment dynamics, water quality changes, and ecosystem resilience in a dynamic coastal environment. Such research plays a significant role in advancing our efforts to understand coastal ecosystems and informs evidence-based management practices aimed at preserving the bay’s ecological integrity, while also supporting the livelihoods of local communities.

2.2. Data

2.2.1. Sentinel-2

The Sentinel-2 satellite constellation comprises the Sentinel-2A and Sentinel-2B satellites, offering 13 bands of multispectral data across a spectrum from Coastal Aerosol to Shortwave Infrared (SWIR). With a combined revisit time of five days, the constellation provides 10 m spatial resolution for the visible and near-infrared (NIR) bands and offers extensive coverage of the Earth’s land surface, inland waters, and oceans. Recent studies have highlighted its significant potential for bathymetry applications across coastal, inland, and open sea waters [71,72]. A Level-1C (L1C) Sentinel-2A image dated 31 August 2021 was selected for this study, primarily due to minimal cloud and sun-glint presence. The image is radiometrically and geometrically corrected, and TOA reflectance values can be obtained from these data with basic coefficient transformations.
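As a minimal sketch of that coefficient transformation: L1C products store TOA reflectance as scaled integers, and dividing by the quantification value (typically 10,000, read from the product metadata) recovers reflectance. The constant below is an assumption standing in for the value parsed from MTD_MSIL1C.xml; newer processing baselines also apply a radiometric offset, which is omitted here.

```python
import numpy as np

# Assumed quantification value; in practice, read from MTD_MSIL1C.xml.
QUANTIFICATION_VALUE = 10000

def dn_to_toa_reflectance(dn: np.ndarray) -> np.ndarray:
    """Convert Sentinel-2 L1C digital numbers to unitless TOA reflectance."""
    return dn.astype(np.float64) / QUANTIFICATION_VALUE

print(dn_to_toa_reflectance(np.array([2500]))[0])  # 0.25
```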

2.2.2. Göktürk-1

Level-2A (L2A) imagery from the Göktürk-1 satellite, captured on 28 February 2019, at 08:38 (UTC+3), was obtained from the Command of Reconnaissance Satellite Operations of the Turkish Air Force. Developed by Türkiye, Göktürk-1 represents a major milestone in the nation’s remote sensing capabilities. Launched on 5 December 2016, it is Türkiye’s first high-resolution Earth observation satellite, designed to provide valuable imagery for a wide range of applications, including environmental monitoring, disaster management, urban planning, and national security. The satellite is equipped with a multispectral sensor offering 2 m spatial resolution and a panchromatic sensor with 0.5 m spatial resolution. Technical specifications of Göktürk-1 are detailed in [70].

2.2.3. Aerial Imagery

Within the scope of the research, the Quantum Systems Trinity F90+, equipped with the MicaSense RedEdge-MX multispectral camera, was used to collect aerial imagery. This fixed-wing UAV is designed for professionals seeking large-area mapping capabilities. It features electric vertical takeoff and landing (eVTOL) capability and offers a flight duration of over 90 min. Because it can be equipped with various sensors, this UAV can provide centimeter-level accuracy using post-processed kinematic (PPK) technology. The RedEdge-MX sensor collects data in multiple spectral bands, including blue, green, red, red edge, and near-infrared (NIR). It provides high-resolution images with a single-band resolution of 1.2 MP and a ground sampling distance (GSD) of 8 cm per pixel/band, enabling precise monitoring and analysis of surface objects and changes. Image collection with the above setup was conducted on 16 July 2021, during calm sea conditions, to feed drone-derived bathymetry (DDB) algorithms.

2.2.4. ICESat-2

ICESat-2, launched in September 2018, introduces a cutting-edge spaceborne LiDAR system designed to accurately measure surface elevations [73]. Serving as the successor to ICESat, which operated from 2003 to 2009, ICESat-2 is outfitted with the Advanced Topographic Laser Altimeter System (ATLAS). Utilizing a photon counting technique [74], ATLAS estimates the range between the telescope and ground-level targets. Each photon’s elevation, backscattered to the detector from a LiDAR pulse, is georeferenced based on its round-trip timing, providing precise data despite only capturing a small fraction of emitted photons. While ICESat-2 primarily focuses on observing elevations of land ice, sea ice, cloud tops, canopy tops, terrain, and water surfaces, its green laser enables the mapping of shallow underwater features.

2.2.5. Single-Beam Echosounder

Being a component of traditional depth data acquisition methods, the single-beam echosounder (SBE) serves as the primary data source for both training and validation in this study. From this dataset, a total of 9000 bathymetric points were randomly selected, focusing on depths ranging from 0 to 20 m. To ensure even representation across depths and prevent biases arising from uneven distributions, a consistent distribution was maintained during the selection step. Of these, 6500 points were used: 4500 were allocated for training the model, while the remaining 2000 were set aside for validation (Figure 2).
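The depth-balanced train/validation split described above can be sketched as a stratified random partition over depth bins. This is an illustrative reconstruction, not the authors' exact procedure; the bin count and random seed are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)  # assumed seed, for reproducibility only

def stratified_depth_split(depths, n_train, n_val, n_bins=20):
    """Partition point indices into train/validation sets while keeping
    an even representation across depth bins (here 0-20 m)."""
    edges = np.linspace(depths.min(), depths.max(), n_bins + 1)[1:-1]
    bins = np.digitize(depths, edges)
    train_idx, val_idx = [], []
    for b in np.unique(bins):
        idx = rng.permutation(np.where(bins == b)[0])
        k_train = round(len(idx) * n_train / (n_train + n_val))
        train_idx.extend(idx[:k_train])
        val_idx.extend(idx[k_train:])
    return np.array(train_idx), np.array(val_idx)

# Synthetic example with the study's point counts (6500 = 4500 + 2000).
depths = rng.uniform(0.0, 20.0, 6500)
tr, va = stratified_depth_split(depths, 4500, 2000)
```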

3. Methodology

3.1. Atmospheric Correction

The multispectral images from Sentinel-2 and Göktürk-1 used in this study were atmospherically corrected using the ATCOR algorithm. This algorithm was chosen over alternatives such as iCOR and ACOLITE due to its ease of use, particularly its ability to directly read metadata and process Göktürk-1 imagery, as well as its comparable performance in bathymetric applications, as demonstrated in previous research [48]. The ATCOR (Atmospheric/Topographic Correction for Satellite Imagery) algorithm leverages the MODTRAN 5 radiative transfer model, precomputed Look-Up Tables (LUTs), and additional atmospheric components derived from the image [75,76]. These LUTs, based on MODTRAN 5, encompass four distinct aerosol models: rural, urban, marine, and desert. Moreover, the water vapor (Wv) parameter is discretized into six values ranging from 0.75 cm to 4.11 cm, adjusted seasonally.
Ozone concentrations are obtained from an accessible database. The determination of the Aerosol Optical Thickness (AOT) parameter incorporates the dense dark vegetation algorithm in conjunction with a user-defined visibility parameter. Users are granted the flexibility to choose the desired aerosol model, while the Wv parameter is determined using the Atmospheric Pre-corrected Differential Absorption (APDA) algorithm. Within the ATCOR algorithm designed for atmospheric correction in turbid and opaque waters, the adjacency effect is mitigated by computing the average reflectance based on neighboring pixels.
The subsequent stage in the process is sun glint correction, a critical step that can significantly impact the error margin, contingent upon factors such as sea surface roughness and viewing angle during image capture. Previous studies have underscored this effect [77]. To address this, we adopted the method proposed by Hedley et al. (2005) [78]. This well-established technique identifies the lowest near-infrared reflectance value within the image and conducts a regression analysis comparing it with the near-infrared reflectance values of all other pixels. Once the regression formula is determined, it is applied to all pixels in visible bands to adjust their reflectance values accordingly.
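The regression-based deglinting step can be sketched as follows: regress each visible band against the NIR band over a sample of optically deep pixels, then subtract the NIR-scaled glint signal. The function name and the synthetic sample mask are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def hedley_deglint(vis_band, nir_band, sample_mask):
    """Sun glint removal following the Hedley et al. (2005) approach.
    vis_band, nir_band: 2-D reflectance arrays.
    sample_mask: boolean mask of optically deep pixels spanning a
    range of glint intensities, used to fit the regression."""
    x = nir_band[sample_mask]
    y = vis_band[sample_mask]
    slope, _ = np.polyfit(x, y, 1)   # regression of visible on NIR
    nir_min = x.min()                # ambient (glint-free) NIR level
    return vis_band - slope * (nir_band - nir_min)
```

Applied to each visible band in turn, this removes the glint component while preserving the band's water-leaving signal.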

3.2. ICESat-2 Preprocessing

ICESat-2 ATL03 data were queried via the OpenAltimetry online portal (https://nsidc.org/data/atl03/versions/7), and the dataset collected on 14 April 2021 was subset and downloaded over our area of interest in HDF5 format from the NSIDC server.
In the initial phase, water surface and seabed photons undergo manual selection using an interactive plot in Python (v3.9), guided by user interpretation. Subsequently, a mean filter is employed to determine the Mean Sea Level (MSL), which is then transformed into Lowest Astronomical Tide (LAT) values via tide reduction techniques. The LAT-corrected MSL is then subtracted from the height values of the designated seabed photons to compute the Apparent Depth values.
Due to the change in the speed of light that occurs at the air–water interface, the Apparent Depth (Da) should successively be corrected. The relationship between the Apparent Depth and the True Depth (Dt) can be computed by the following:
η2 Dt = η1 Da,  (1)
which is derived from Snell’s Law given in Equation (2), which relates the angles of incidence and refraction to the refractive indices of the two mediums and determines the corrected depth by adjusting for the bending of the light as it passes from air into water. This correction helps ensure accurate measurements in bathymetric LiDAR applications.
η1 sin(θ1) = η2 sin(θ2)  (2)
In the above equation, the angle of incidence (the angle between the incoming light ray and the normal to the surface in air) is denoted as “θ1” and is derived from the elevation of the unit pointing vector for each photon. To be precise, “θ1” is calculated as “π/2” minus the reference elevation (ref_elev), a parameter provided in the ATL03 output. “η1” refers to the refractive index of air, “η2” to the refractive index of water, and “θ2” to the angle of refraction (the angle between the refracted light ray and the normal to the surface in water). The refractive index of seawater (S = 35 PSU) at atmospheric pressure, a temperature of 20 °C, and a wavelength of 540 nm is 1.34116 [79]. Hence, by taking η1 = 1 (since the refractive index of air is very close to that of a vacuum) and η2 = 1.34 (an approximate value for seawater), we can solve Equation (1) for the only unknown: Dt.
Theoretically, the incidence angle of ICESat-2 beams would cause bending of light in water and associated geolocation shifts. However, because the off-nadir pointing angle is very small (limited to 1.8° over the entire mission, and typically up to 0.46°) [79], refraction induces at most about 9 cm of horizontal offset at a depth of 30 m [28]. Since the maximum depth in our selection is 10 m and the ground sampling distances (GSD) of both Göktürk-1 (2 m) and the aerial imagery (8 cm) are larger than the potential offset at 10 m, the horizontal shift due to the refraction of seawater is small enough to be ignored in this study.
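The correction chain of Equations (1) and (2) can be sketched in a few lines; the function name and example values below are illustrative, and for the near-nadir ICESat-2 geometry the depth scaling reduces to Da · η1/η2.

```python
import math

N_AIR, N_WATER = 1.0, 1.34  # approximate refractive indices used in Eq. (1)

def true_depth(apparent_depth, ref_elev):
    """Correct an ICESat-2 apparent depth for refraction at the air-water
    interface. ref_elev (rad) is the ATL03 pointing-vector elevation, so
    theta1 = pi/2 - ref_elev. Returns (true depth in m, theta2 in deg)."""
    theta1 = math.pi / 2 - ref_elev
    theta2 = math.asin(N_AIR * math.sin(theta1) / N_WATER)  # Snell's law, Eq. (2)
    return apparent_depth * N_AIR / N_WATER, math.degrees(theta2)

dt, theta2 = true_depth(13.4, math.radians(89.5))  # near-nadir beam example
print(round(dt, 2))  # 10.0
```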
After applying the above-mentioned corrections, a total of 3356 points were obtained for the depths up to 10 m, out of which 2356 were selected for training and 1000 for testing (Figure 3).

3.3. Tasseled Cap Transformation

Optical bathymetry models exploit the effect of light’s exponential attenuation in water on the radiance values measured in remotely sensed images. The decrease in light intensity as it traverses water follows the exponential decay described by the Beer–Lambert law, which is directly employed by these models. The Beer–Lambert law expresses the attenuation of light in water as the following Equation (3):
I = I0 × e^(−βD)  (3)
where “e” is the base of natural logarithms, “I” is the intensity of light at a certain depth, “I0” is the intensity of light instantly after entering the water (i.e., light reflected from the water surface is discarded), “β” is the attenuation coefficient, and “D” is the distance that light travels through water.
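Equation (3) and its inversion, D = ln(I0/I)/β, which is the log-linear relation that classical empirical SDB models exploit, can be sketched as:

```python
import math

def intensity_at_depth(i0, beta, d):
    """Beer-Lambert attenuation, Eq. (3): I = I0 * exp(-beta * D)."""
    return i0 * math.exp(-beta * d)

def depth_from_intensity(i0, i, beta):
    """Invert Eq. (3) for distance: D = ln(I0 / I) / beta."""
    return math.log(i0 / i) / beta

# Round trip: attenuate over 10 m, then recover the distance.
i = intensity_at_depth(1.0, 0.15, 10.0)
print(round(depth_from_intensity(1.0, i, 0.15), 6))  # 10.0
```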
We use TCT brightness as an input feature for bathymetric inversion because it captures the overall reflectance of the water column, which is influenced by both water depth and bottom type in optically shallow environments. As depth increases, the attenuation of light reduces the surface reflectance, leading to a measurable decrease in brightness. This makes TCT brightness a physically meaningful proxy for depth. Additionally, using TCT brightness helps reduce dimensionality and noise by summarizing information from multiple spectral bands, which is beneficial for machine learning models. These brightness values are derived through Tasseled Cap Transformation (TCT), a widely used technique in remote sensing that facilitates the extraction of crucial information from multispectral satellite imagery.
The transformative capability of TCT in generating orthogonal components offers valuable insights into the spectral characteristics of both land and sea surfaces, enabling a diverse array of remote sensing applications such as land cover classification tasks, vegetation monitoring, biomass estimation, and vegetation health assessment. Initially introduced by Kauth and Thomas (1976), TCT has become a widely utilized technique for transforming multispectral satellite imagery into three orthogonal components known as brightness, greenness, and wetness [80]. These components serve to capture specific aspects of the land surface’s reflectance characteristics. TCT operates by linearly combining the spectral bands of a multispectral image to derive the brightness, greenness, and wetness components. The brightness component reflects the overall magnitude of reflectance, indicating the surface’s general brightness level. It is sensitive to variations in lighting conditions, making it valuable for detecting areas with differing degrees of light and shadow. The greenness component is indicative of vegetation vigor and density, offering insights into vegetation health and abundance. Meanwhile, the wetness component detects variations in surface moisture content, including water bodies, wetlands, or soil moisture levels.
In environmental studies, TCT aids in the detection and monitoring of water bodies, identification of wetlands, and analysis of surface moisture conditions. Moreover, in marine sciences, TCT plays a crucial role in extracting valuable information about coastal and marine environments, including the detection of coastal features, water quality monitoring, coral reef health assessment, identification of coastal pollution, and mangrove mapping. Mangroves, for instance, thrive in soft, moist soils and saltwater surroundings, experiencing periodic inundation by tides. In such cases, a TCT-based approach proves more suitable than traditional vegetation indices, especially for marine applications.
In our study, we leverage TCT in the marine domain to enhance SDB accuracy by incorporating pixel brightness values alongside reflectance values. The brightness component in TCT can be calculated using Equation (4), where the coefficients preceding the brackets represent Tasseled Cap Coefficients, and the numbers within the brackets denote the wavelength intervals suitable for specific terms of the equation [80].
Brightness = 0.3037 × [450:520] + 0.2793 × [520:600] + 0.4743 × [630:690] + 0.5585 × [760:900] + 0.5082 × [1150:1750] + 0.1863 × [2080:2350].
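A minimal sketch of Equation (4) follows. The mapping of the listed wavelength intervals to Sentinel-2 MSI bands (B2, B3, B4, B8, B11, B12) is our assumption based on the band centers and is not part of the original coefficient set [80]:

```python
import numpy as np

# Tasseled Cap brightness coefficients from Equation (4); the Sentinel-2
# band assignment below is an assumption inferred from the wavelength ranges.
TCT_BRIGHTNESS = {"B2": 0.3037, "B3": 0.2793, "B4": 0.4743,
                  "B8": 0.5585, "B11": 0.5082, "B12": 0.1863}

def brightness(bands):
    """Per-pixel TCT brightness from a dict of band name -> reflectance array."""
    return sum(c * np.asarray(bands[b], dtype=float)
               for b, c in TCT_BRIGHTNESS.items())
```

Applied pixel-wise to atmospherically corrected reflectance, this yields the auxiliary brightness layer fed into the bathymetry models.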

3.4. Machine Learning Based Bathymetry Extraction Models

Significant advancements in SDB have been observed through the integration of machine learning algorithms, including Random Forest (RF) and Extreme Gradient Boosting (XGBoost) [33,72]. These methods have demonstrated their efficacy in extracting bathymetric information from remotely sensed satellite imagery. In accordance with the findings of Gülher and Alganci (2023), our study exclusively employs the Random Forest and XGBoost algorithms for this purpose [48].

3.4.1. Random Forest

The RF algorithm is an ensemble learning technique that combines multiple decision trees to construct a robust prediction model [81]. It functions by generating numerous decision trees and aggregating their outcomes to make predictions. To introduce an element of randomness and prevent overfitting while enhancing algorithm performance, a random subset of features is selected for each tree. Additionally, Bootstrap Aggregating (bagging) is employed to create diverse datasets from the original data, facilitating the creation of distinct decision trees.
With its versatile capabilities, RF finds application across various domains such as classification, regression, image recognition, anomaly detection, bioinformatics, and environmental sciences. In the domain of SDB, RF utilizes a range of input variables derived from satellite imagery, including spectral bands, texture indices, and spatial features, to estimate bathymetric depths [44]. By capturing intricate non-linear relationships between these input variables and bathymetric depths, RF produces precise and dependable predictions.
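A minimal RF regression sketch in the spirit described above; the reflectance features and depth targets are synthetic and illustrative, not drawn from the study’s dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical features: e.g., blue and green reflectance plus TCT brightness.
X = rng.uniform(0.01, 0.2, size=(500, 3))
# Synthetic depths loosely tied to a band-ratio-like signal, plus noise.
y = 10.0 * np.log(X[:, 0] / X[:, 1]) ** 2 + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])          # bagged trees on random feature subsets
pred = model.predict(X[400:])        # averaged tree outputs = depth estimates
```

Each calibration point (from SBE or ICESat-2) supplies one (features, depth) pair; prediction over every image pixel then produces the bathymetric surface.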

3.4.2. Extreme Gradient Boosting

In similar application domains as RF, XGBoost emerges as another potent machine learning algorithm applied in SDB [82]. It operates on the principles of gradient boosting, sequentially integrating weak learners to establish a robust predictive model. With components such as regularization and distributed computing, XGBoost mitigates overfitting risks, enhances model generalization, and facilitates the management of extensive datasets. Its proficiency in handling large and intricate datasets renders it particularly suitable for analyzing high-resolution satellite imagery. Through iterative optimization of the model’s performance, XGBoost adeptly captures complex relationships between input variables and bathymetry.
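A gradient boosting sketch of the same workflow; scikit-learn’s GradientBoostingRegressor is used here as a stand-in that follows the same sequential weak-learner principle, since the study itself uses the XGBoost library, and the data are synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0.01, 0.2, size=(400, 3))                  # synthetic reflectance
y = 5.0 * (X[:, 0] - X[:, 1]) + rng.normal(0, 0.05, 400)   # synthetic depth signal

# Shallow trees added sequentially, each fitting the residuals of the last.
gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=1)
gbm.fit(X[:300], y[:300])
r2 = gbm.score(X[300:], y[300:])     # R^2 on held-out samples
```

XGBoost adds regularization terms and distributed computation on top of this basic boosting loop, which is what makes it attractive for large imagery datasets.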

3.4.3. Hyperparameter Tuning

The hyperparameters of both models constructed for bathymetry extraction were optimized using the “Grid Search” approach, and the results are shown in Table 1.
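A sketch of the grid search step; the parameter grid below is hypothetical, as the actual search space and the selected values are those reported in Table 1:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 200)

# Exhaustive search over a small, illustrative grid with 3-fold CV.
grid = {"n_estimators": [100, 300], "max_depth": [5, 10]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid,
                      cv=3, scoring="neg_mean_absolute_error")
search.fit(X, y)
best = search.best_params_           # combination with the lowest CV MAE
```

The same pattern applies to the XGBoost model with its own grid (e.g., learning rate and tree depth).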

4. Results

4.1. Sub-Region Analysis with ICESat-2 LiDAR

Initial analysis was performed on the sub-region of the study area, where aerial imagery and Sentinel-2 and Göktürk-1 satellite images were available. ICESat-2 LiDAR data up to the 10 m depth interval were used for training and testing. Bathymetry extraction was carried out with the RF and XGBoost algorithms coupled with the designated dataset and band pairs in the visible spectrum (B: blue, G: green, R: red). The verification metrics for the 0–10 m depth range are given in Table 2. In the second run of the models, pixel brightness values obtained through the TCT and Sentinel-2 MSI tandem were also fed in as an auxiliary dataset; the corresponding results are presented alongside those of the first run in the same table, marked with ‘(B)’.
According to the results, both Göktürk-1 and Sentinel-2 satellite images trained with ICESat-2 LiDAR data provided sub-meter accuracy in depth inversion. The blue–green band pair performed better than the other two combinations for both satellite images. The addition of TCT data notably improved both accuracy and correlation for Sentinel-2 and Göktürk-1, with a particularly noticeable MAE reduction for the Göktürk-1 image. For the aerial imagery, the results indicate very high accuracy, with sub-decimeter MAE across all band pair combinations; the lower flight altitude of the platform appears to reduce atmospheric effects, and the addition of TCT did not improve the results in this case. Overall, ICESat-2 effectively trained all sensors for bathymetric inversion over the 0–10 m depth interval, meeting the A1 category of zone of confidence (CATZOC) level [83].
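The verification metrics used throughout the results can be computed as follows (a straightforward sketch; the CATZOC category is then assigned by thresholding the error statistics against the IHO standard):

```python
import numpy as np

def verification_metrics(y_true, y_pred):
    """RMSE, MAE and R^2 between reference and predicted depths."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return rmse, mae, 1.0 - ss_res / ss_tot
```

The same three numbers are reported for every sensor, band pair, and depth interval, which keeps the comparisons in Tables 2 and 3 directly commensurable.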

4.2. Full Region Analysis with Fusion Approach

For the full-region analysis, the Göktürk-1 blue–green band pair and TCT data obtained from the Sentinel-2 image were used as satellite-based data sources, based on their performance in the sub-region analysis. The inversion was performed with the RF algorithm. In the full-region analysis, SBE data were incorporated into both the training and testing phases, with testing conducted across various depth intervals. Examination of the accuracy and correlation metrics demonstrates that integrating ICESat-2 LiDAR data significantly enhances model performance within the 0–10 m depth range across the broader region (Table 3). This improvement is evidenced by reductions in RMSE and MAE compared with a model utilizing only SBE data. The data fusion approach achieved bathymetric inversion performance consistent with the A1 CATZOC standard, as indicated by the MAE metric. Beyond error reduction, the observed increase in correlation metrics indicates a more accurate representation of the inversion relationship when SBE and ICESat-2 data are fused. The hexbin plots in Figure 4a,b reveal a nearly linear relationship, particularly within the 0–7 m depth range. However, when the analysis is extended to the 0–15 m and 0–20 m depth intervals (incorporating additional SBE data into the training phase), the performance of the bathymetric inversion declines significantly, as indicated by increased RMSE and MAE values. Despite this degradation, the results still meet the A2/B and C CATZOC standards based on the MAE metric. Correlation metrics also exhibit a downward trend at these greater depths, as shown by the correlation plots in Figure 4c,d, which demonstrate a marked reduction in correlation within the 10–20 m depth range. Based on the performance of the trained models, final bathymetric maps were generated exclusively for the 0–10 m and 0–15 m depth ranges. These maps are presented in Figure 5 and Figure 6, respectively.
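The fusion used for training reduces to a union of the two depth sources, with ICESat-2 restricted to its reliable shallow range. A minimal sketch with hypothetical point arrays:

```python
import numpy as np

# Hypothetical calibration points: each row is (easting, northing, depth).
sbe = np.array([[510.0, 4240.0, 12.5],
                [512.0, 4241.0, 14.0],
                [514.0, 4242.0, 6.8]])
icesat2 = np.array([[509.0, 4239.5, 3.4],
                    [509.5, 4239.8, 5.1],
                    [511.0, 4240.5, 11.2]])

# Use ICESat-2 returns only within the 0-10 m range validated in Section 4.1.
icesat2_shallow = icesat2[icesat2[:, 2] <= 10.0]

# Training fusion: the union of both sources, with ICESat-2 filling the
# very-shallow coastal gaps in the SBE coverage.
training = np.vstack([sbe, icesat2_shallow])
```

The fused point set is then sampled against the imagery features exactly as in the single-source case.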

5. Discussion

The main objective of our research is to investigate cross-platform synergy in remote sensing based bathymetry extraction. Four data sources were under the spotlight to achieve that goal: (1) ICESat-2, providing the seed data for training and validating the models in use; (2) Göktürk-1, delivering the SI required for SDB; (3) aerial imagery, delivering the SI required for DDB; and (4) Sentinel-2, benchmarking the performance of Göktürk-1 and supplying pixel brightness values. As given in Equation (4), the calculation of pixel brightness values requires bands that fall within the predefined ranges of the electromagnetic spectrum, and unfortunately Göktürk-1 is limited in that regard. Therefore, Sentinel-2 is exploited as the free data source offering the necessary bands at the highest resolution. The fundamental findings of Gülher and Alganci (2023) [48], which provided a comprehensive analysis of the performance of empirical methods and atmospheric correction pairs, are followed. The images utilized in the study are ATCOR-corrected, and only the previously identified top-performing algorithms, namely RF and XGBoost, are implemented.
ICESat-2 is designated as the primary seed data for the 0–10 m depth range, complemented by the available SBE data at this depth interval. Bathymetry extraction of the seabed between the 10 and 20 m contours was undertaken using the SBE data only. Gülbahçe Bay is renowned for its clear waters, apart from seasonal turbulence, and its gradually increasing depths, making it a suitable testbed for our research. The multispectral imagery of Göktürk-1 is processed to produce the SDB and benchmarked against that of Sentinel-2. Additionally, we introduced pixel brightness values obtained from the Sentinel-2 imagery via TCT as an auxiliary dataset in the bathymetry extraction algorithms and monitored their contribution, establishing cross-platform cooperation and data fusion. The verification process is conducted by monitoring three metrics (RMSE, MAE, and R²) and by adhering to the IHO standards established to determine the quality level of bathymetric data.
The performance of ICESat-2 as training and testing data in bathymetric inversion with different satellite- and aerial-based remote sensing data was found to be very promising in the sub-region analysis. Especially when used for training with aerial imagery, the inversion error can be reduced to a few centimeters, with very high correlation. The role of ICESat-2 data in enhancing bathymetric inversion when combined with SBE data was also evident in the full-region analysis: the fusion of ICESat-2 and SBE improved the R² correlation from 0.46 to 0.95 and reduced the errors by up to 39%. These results show that ICESat-2 data can satisfactorily act as primary training data down to 10 m depth, the maximum depth of the data available for this study region. Moreover, it can improve the performance of bathymetric inversion when fused with other bathymetric data sources such as SBE for large-area analysis beyond its coverage capacity in terms of both depth and spatial distribution. As can be observed in Figure 2 and Figure 3, even with few footprints and limited data coverage, ICESat-2 satisfactorily filled the spatial gaps in the SBE data, especially in very shallow coastal regions, thereby improving the spatial distribution of the training dataset in addition to the performance gain.
Transferability and spatial performance over large spatial extents are current and common limitations of empirical-, ML-, and DL-based methods. When comparing ML model performance across different regions, our previous studies on Horseshoe Island, Antarctica, demonstrated that the Random Forest (RF) and XGBoost algorithms consistently yielded efficient and comparable results [48,70]. These studies utilized a similar optical satellite image configuration but benefited from higher-quality multibeam echosounder data. The Antarctic region presents distinct environmental conditions, including unique seasonal dynamics, varying atmospheric properties (e.g., aerosol optical depth and air quality), reduced illumination (an optically darker water column), and differing seabed characteristics. The main observable difference is that, while RF and XGBoost provided identical performance in the current research, the performance of XGBoost was comparatively lower in the Antarctic studies; this can be interpreted as XGBoost being more severely affected by the lack of illumination, although further evidence is needed. Still, it is worth mentioning that ML- and DL-based SDB is influenced by several factors, such as water quality, wave structure, salinity, and illumination conditions related to seasonal differences, which are not directly investigated in this study and still affect the transferability of the methods. Further studies are planned to investigate ways of adapting models originally trained on data from other regions to a specific location through domain adaptation strategies that incorporate local water quality parameters.
The integration of TCT pixel brightness values into the bathymetry extraction process once again proved to be a significant contributor across all tested categories, with performance improvements comparable to those demonstrated in the study by Gülher and Alganci [70]. The contribution is most significant when processing Göktürk-1 imagery and least significant with aerial imagery. This shortfall can be attributed to the substantial difference in spatial resolution between Sentinel-2 and aerial imagery. Utilizing pixel brightness values obtained from a source with higher spatial resolution could enhance the results in future studies. Another factor might be that aerial imagery is less influenced by atmospheric effects. If this assumption is confirmed, it could lead to a precise identification of the contribution of pixel brightness values in SDB/DDB as helping to reduce the lingering atmospheric effects even after atmospheric correction procedures.
The study by Gülher and Alganci (2023) [70], the first exercise focusing on the capabilities of Göktürk-1 in bathymetry applications, underlined the relatively poor performance of Göktürk-1 imagery compared with the Sentinel-2 and Landsat-8 constellations and attributed this shortcoming to Göktürk-1 being designed for reconnaissance missions rather than environmental observation. The 11-bit radiometric resolution of Göktürk-1, as opposed to the 12-bit radiometric resolution of both Landsat-8 and Sentinel-2, was suspected of preventing the higher SDB accuracy that its high spatial resolution could otherwise offer. In this research, however, Göktürk-1 evidently outperforms Sentinel-2 for every algorithm and band pair. This outcome adds a new dimension to the discussion by indicating that the one-bit deficit in radiometric resolution is negligible when the focus is on shallow waters benefitting from good sun illumination. Göktürk-1 multispectral imagery is endorsed by this research as a reliable source for scientific and cartographic purposes up to the 10 m depth contour, where the top ‘Category of Zone of Confidence’ (CATZOC) level of A1 is achieved. Despite all the findings provided and insights gained, we should bear in mind that Göktürk-1 imagery potentially has more to offer in SDB applications.
Lastly, the results of this study demonstrate a high level of reliability for bathymetric inversion within the 0–10 m depth range. In contrast, the 0–15 and 0–20 m intervals exhibit reduced accuracy due to a noticeable decline in correlation. However, the limited availability of ICESat-2 data for these deeper intervals in the study area restricts our ability to fully assess its potential contribution to improving bathymetric inversion at these depths.

6. Conclusions

Coastal regions are vital to the planet’s ecological balance but are increasingly threatened by human activity and pollution, necessitating ongoing monitoring. Bathymetry plays a key role in marine research and navigation safety, especially in shallow coastal waters where traditional survey methods are often impractical. Satellite-derived bathymetry (SDB) offers a promising alternative, though its accuracy depends on factors like sensor quality, atmospheric correction, and environmental conditions. Recognized by the IHO (S-44, 2022) as an official data source, SDB is increasingly used by nautical chart producers. On 17 May 2024, the IHO also released its first “Satellite-Derived Bathymetry Best Practice Guide,” offering methods and guidance for applying SDB in shallow coastal areas. While SDB has been studied for decades, continued research is essential to refine its application [83]. This research presents an approach that synergistically uses ICESat-2 LiDAR data with echosounder data for training and testing, and combines remote sensing data from different platforms for efficient shallow-water bathymetric inversion. The results demonstrate that, with the ever-growing ICESat-2 coverage, the limitation stemming from the need for seed data in empirical applications is diminished to a great extent. ICESat-2 can effectively support the training and validation of bathymetric models using both satellite and aerial multispectral imagery, enabling high-resolution, seamless seabed mapping.

Author Contributions

Conceptualization, E.G. and U.A.; methodology, E.G. and U.A.; validation, E.G. and U.A.; formal analysis, E.G.; writing—original draft preparation, E.G. and U.A.; visualization, E.G. and U.A.; supervision, U.A.; project administration, U.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Projects Department of Istanbul Technical University (project number: MGA-2025-46486).

Data Availability Statement

The datasets and codes generated during this study will be available from the corresponding author upon reasonable request after the completion of the project.

Acknowledgments

The authors would like to extend their gratitude to the Office of Navigation, Hydrography, and Oceanography of the Turkish Navy and the Reconnaissance Satellite Battalion Command of the Turkish Air Forces for providing the SBE and Göktürk-1 datasets, respectively.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Neumann, B.; Vafeidis, A.T.; Zimmermann, J.; Nicholls, R.J. Future Coastal Population Growth and Exposure to Sea-Level Rise and Coastal Flooding—A Global Assessment. PLoS ONE 2015, 10, e0118571. [Google Scholar] [CrossRef]
  2. McMichael, C.; Dasgupta, S.; Ayeb-Karlsson, S.; Kelman, I. A Review of Estimating Population Exposure to Sea-Level Rise and the Relevance for Migration. Environ. Res. Lett. 2020, 15, 123005. [Google Scholar] [CrossRef]
  3. Melet, A.; Teatini, P.; Le Cozannet, G.; Jamet, C.; Conversi, A.; Benveniste, J. Earth Observations for Monitoring Marine Coastal Hazards and Their Drivers. Surv. Geophys. 2020, 41, 1489–1534. [Google Scholar] [CrossRef]
  4. Viaña-Borja, S.P.; González-Villanueva, R.; Alejo, I.; Stumpf, R.P.; Navarro, G.; Caballero, I. Satellite-Derived Bathymetry Using Sentinel-2 in Mesotidal Coasts. Coast. Eng. 2025, 195, 104644. [Google Scholar] [CrossRef]
  5. International Hydrographic Organization. C-55 Status of Hydrographic Surveying and Charting Worldwide; International Hydrographic Bureau: Monte Carlo, Monaco, 2021. [Google Scholar]
  6. Jawak, S.D.; Vadlamani, S.S.; Luis, A.J. A synoptic review on deriving bathymetry information using remote sensing technologies: Models, methods and comparisons. Adv. Rem. Sens. 2015, 4, 147–162. [Google Scholar] [CrossRef]
  7. Bai, Z.; Sun, Z.; Fan, B.; Liu, A.-A.; Wei, Z.; Yin, B. Multiscale Spatio-Temporal Attention Network for Sea Surface Temperature Prediction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 5866–5877. [Google Scholar] [CrossRef]
  8. Sun, Y.; Wang, D.; Li, L.; Ning, R.; Yu, S.; Gao, N. Application of Remote Sensing Technology in Water Quality Monitoring: From Traditional Approaches to Artificial Intelligence. Water Res. 2024, 267, 122546. [Google Scholar] [CrossRef]
  9. Evagorou, E.; Hasiotis, T.; Petsimeris, I.T.; Monioudi, I.N.; Andreadis, O.P.; Chatzipavlis, A.; Christofi, D.; Kountouri, J.; Stylianou, N.; Mettas, C.; et al. A Holistic High-Resolution Remote Sensing Approach for Mapping Coastal Geomorphology and Marine Habitats. Remote Sens. 2025, 17, 1437. [Google Scholar] [CrossRef]
  10. Jiang, Z.; Zhang, J.; Ma, Y.; Mao, X. Research on Remote Sensing Quantitative Inversion of Oil Spills and Emulsions Using Fusion of Optical and Thermal Characteristics. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 8472–8489. [Google Scholar] [CrossRef]
  11. Wang, S.; Qin, B. Application of Optical Remote Sensing in Harmful Algal Blooms in Lakes: A Review. Remote Sens. 2025, 17, 1381. [Google Scholar] [CrossRef]
  12. Li, H.; Li, L.; Wang, H.; Zhang, W.; Ren, P. Underwater Image Captioning with AquaSketch-Enhanced Cross-Scale Information Fusion. IEEE Trans. Geosci. Remote Sens. 2025, 63, 1–18. [Google Scholar] [CrossRef]
  13. Mavraeidopoulos, A.K.; Oikonomou, E.; Palikaris, A.; Poulos, S. A hybrid bio-optical transformation for satellite bathymetry modeling using sentinel-2 imagery. Remote Sens. 2019, 11, 2746. [Google Scholar] [CrossRef]
  14. Traganos, D.; Poursanidis, D.; Aggarwal, B.; Chrysoulakis, N.; Reinartz, P. Estimating Satellite-Derived Bathymetry (SDB) with the Google Earth Engine and Sentinel-2. Remote Sens. 2018, 10, 859. [Google Scholar] [CrossRef]
  15. Jagalingam, P.; Akshaya, B.; Hegde, A.V. Bathymetry mapping using Landsat 8 satellite imagery. Procedia Eng. 2015, 116, 560–566. [Google Scholar] [CrossRef]
  16. Lambert, S.E.; Parrish, C.E. Refraction correction for spectrally derived bathymetry using UAS imagery. Remote Sens. 2023, 15, 3635. [Google Scholar] [CrossRef]
  17. Ashphaq, M.; Srivastava, P.K.; Mitra, D. Review of near-shore satellite derived bathymetry: Classification and account of five decades of coastal bathymetry research. J. Ocean Eng. Sci. 2021, 6, 340–359. [Google Scholar] [CrossRef]
  18. Duan, Z.; Chu, S.; Cheng, L.; Ji, C.; Li, M.; Shen, W. Satellite-derived bathymetry using Landsat-8 and Sentinel-2A images: Assessment of atmospheric correction algorithms and depth derivation models in shallow waters. Opt. Express 2022, 30, 3238–3261. [Google Scholar] [CrossRef]
  19. Gao, J. Bathymetric mapping by means of remote sensing: Methods, accuracy and limitations. Prog. Phys. Geogr. Earth Environ. 2009, 33, 103–116. [Google Scholar] [CrossRef]
  20. Kutser, T.; Hedley, J.D.; Giardino, C.; Roelfsema, C.M.; Brando, V.E. Remote sensing of shallow waters—A 50-year retrospective and future directions. Remote Sens. Environ. 2020, 240, 111619. [Google Scholar] [CrossRef]
  21. Turner, I.L.; Harley, M.D.; Almar, R.; Bergsma, E.W.J. Satellite optical imagery in coastal engineering. Coast. Eng. 2021, 167, 103919. [Google Scholar] [CrossRef]
  22. Dickens, K.; Armstrong, A. Machine Learning of Derived Bathymetry and Coastline Detection. SMU Data Sci. Rev. 2019, 2, 4. [Google Scholar]
  23. Misra, A.; Ramakrishnan, B. Assessment of coastal geomorphological changes using multi-temporal Satellite-Derived Bathymetry. Cont. Shelf Res. 2020, 207, 104213. [Google Scholar] [CrossRef]
  24. Moeinkhah, A.; Shakiba, A.; Azarakhsh, Z. Assessment of regression and classification methods using remote sensing technology for detection of coastal depth (case study of bushehr port and kharg island). J. Indian Soc. Remote Sens. 2019, 47, 1019–1029. [Google Scholar] [CrossRef]
  25. Watts, A.B.; Tozer, B.; Harper, H.; Boston, B.; Shillington, D.J.; Dunn, R. Evaluation of Shipboard and Satellite-derived Bathymetry and Gravity Data over Seamounts in the Northwest Pacific Ocean. J. Geophys. Res. Solid Earth 2020, 125, e2020JB020396. [Google Scholar] [CrossRef]
  26. Almar, R.; Bergsma, E.W.J.; Gawehn, M.A.; Aarninkhof, S.G.J.; Benshila, R. High-frequency Temporal Wave-pattern Reconstruction from a few Satellite Images: A New Method towards Estimating Regional Bathymetry. J. Coast. Res. 2020, 95, 996–1000. [Google Scholar] [CrossRef]
  27. Hodúl, M.; Chénier, R.; Faucher, M.A.; Ahola, R.; Knudby, A.; Bird, S. Photogrammetric Bathymetry for the Canadian Arctic. Mar. Geod. 2020, 43, 23–43. [Google Scholar] [CrossRef]
  28. Parrish, C.E.; Magruder, L.A.; Neuenschwander, A.L.; Forfinski-Sarkozi, N.; Alonzo, M.; Jasinski, M. Validation of ICESat-2 ATLAS Bathymetry and Analysis of ATLAS’s Bathymetric Mapping Performance. Remote Sens. 2019, 11, 1634. [Google Scholar] [CrossRef]
  29. Salameh, E.; Frappart, F.; Almar, R.; Baptista, P.; Heygster, G.; Lubac, B.; Raucoules, D.; Almeida, L.P.; Bergsma, E.W.J.; Capo, S.; et al. Monitoring Beach Topography and Nearshore Bathymetry Using Spaceborne Remote Sensing: A Review. Remote Sens. 2019, 11, 2212. [Google Scholar] [CrossRef]
  30. Lyzenga, D.R. Passive remote sensing techniques for mapping water depth and bottom features. Appl. Opt. 1978, 17, 379–383. [Google Scholar] [CrossRef]
  31. Caballero, I.; Stumpf, R.P. Retrieval of Nearshore Bathymetry from Sentinel-2A and 2B satellites in South Florida Coastal Waters. Estuar. Coast. Shelf Sci. 2019, 226, 106277. [Google Scholar] [CrossRef]
  32. Evagorou, E.; Mettas, C.; Agapiou, A.; Themistocleous, K.; Hadjimitsis, D. Bathymetric Maps from Multi-temporal Analysis of Sentinel-2 Data: The Case Study of Limassol, Cyprus. Adv. Geosci. 2018, 45, 397–407. [Google Scholar] [CrossRef]
  33. Sagawa, T.; Yamashita, Y.; Okumura, T.; Yamanokuchi, T. Satellite derived bathymetry using machine learning and multi-temporal satellite images. Remote Sens. 2019, 11, 1155. [Google Scholar] [CrossRef]
  34. Laporte, J.; Dolou, H.; Avis, J.; Arino, O. Thirty Years of Satellite Derived Bathymetry: The Charting Tool That Hydrographers Can No Longer Ignore. Int. Hydrogr. Rev. 2020, 30, 129–154. [Google Scholar] [CrossRef]
  35. Capo, S.; Lubac, B.; Marieu, V.; Robinet, A.; Bru, D.; Bonneton, P. Assessment of the Decadal Morphodynamic Evolution of a Mixed Energy Inlet using Ocean Color Remote Sensing. Ocean Dyn. 2014, 64, 1517–1530. [Google Scholar] [CrossRef]
  36. Heege, T.; Hartman, K.; Wettle, M. Effective Surveying Tool for Shallow-Water Zones. Satellite-Derived Bathymetry. Hydro International. 2017. Available online: https://www.hydro-international.com/content/article/effective-surveying-tool-for-shallow-water-zones (accessed on 5 May 2025).
  37. Caballero, I.; Stumpf, R.P. Towards Routine Mapping of Shallow Bathymetry in Environments with Variable Turbidity: Contribution of Sentinel-2A/B Satellites Mission. Remote Sens. 2020, 12, 451. [Google Scholar] [CrossRef]
  38. Cahalane, C.; Magee, A.; Monteys, X.; Casal, G.; Hanafin, J.; Harris, P. A Comparison of Landsat 8, RapidEye and Pleiades Products for Improving Empirical Predictions of Satellite-Derived Bathymetry. Remote Sens. Environ. 2019, 233, 111414. [Google Scholar] [CrossRef]
  39. Da Silveira, C.B.L.; Strenzel, G.M.R.; Maida, M.; Araujo, T.C.M.; Ferreira, B.P. Multiresolution Satellite-Derived Bathymetry in Shallow Coral Reefs: Improving Linear Algorithms with Geographical Analysis. J. Coast. Res. 2020, 36, 1247–1265. [Google Scholar] [CrossRef]
  40. Guzinski, R.; Spondylis, E.; Michalis, M.; Tusa, S.; Brancato, G.; Minno, L.; Hansen, L.B. Exploring the Utility of Bathymetry Maps Derived with Multispectral Satellite Observations in the Field of Underwater Archaeology. Open Archaeol. 2016, 2, 243–263. [Google Scholar] [CrossRef]
  41. Lyzenga, D.R. Shallow-Water Bathymetry Using Combined Lidar and Passive Multispectral Scanner Data. Int. J. Remote Sens. 1985, 6, 115–125. [Google Scholar] [CrossRef]
  42. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of Water Depth with High-Resolution Satellite Imagery over Variable Bottom Types. Limnol. Oceanogr. 2003, 48, 547–556. [Google Scholar] [CrossRef]
  43. Ceyhun, Ö.; Yalçın, A. Remote Sensing of Water Depths in Shallow Waters via Artificial Neural Networks. Estuar. Coast. Shelf Sci. 2010, 89, 89–96. [Google Scholar] [CrossRef]
  44. Manessa, M.D.M.; Kanno, A.; Sekine, M.; Haidar, M.; Yamamoto, K.; Imai, T.; Higuchi, T. Satellite-Derived Bathymetry Using Random Forest Algorithm and Worldview-2 Imagery. Geoplanning 2016, 3, 117–126. [Google Scholar] [CrossRef]
Figure 1. Google Earth view of the study region along with nautical chart.
Figure 2. Distribution of SBE data of different depth ranges along with closer looks.
Figure 3. The distribution of obtained LiDAR points together with the aerial imagery extent.
Figure 4. Hexbin correlation for different training data and depth intervals.
Figure 5. Final bathymetry map produced for 0–10 m depth interval and with RF method.
Figure 6. Final bathymetry map produced for 0–15 m depth interval and with RF method.
Table 1. Hyperparameter set for ML-based models.

RF                        | XGBoost
bootstrap: True           | objective: 'reg:squarederror'
ccp_alpha: 0.32           | base_score: 0.5
criterion: squared_error  | booster: 'gbtree'
max_features: 1.0         | tree_method: 'exact'
min_samples_leaf: 2       | colsample_bynode: 2
min_samples_split: 4      | colsample_bytree: 2
n_estimators: 200         | learning_rate: 0.3201
oob_score: False          | n_estimators: 200
random_state: 45          | num_parallel_tree: 2
warm_start: True          | predictor: 'auto'
Table 2. The QC metrics for the bathymetry extraction based on RF and XGBoost for the 0–10 m depth range. Each cell lists the value without / with (B).

Random Forest — Gkt-1 / Gkt-1(B)
Band Pair  | RMSE (m)    | MAE (m)     | R²
Blue-Green | 0.58 / 0.21 | 0.27 / 0.13 | 0.88 / 0.94
Green-Red  | 0.71 / 0.21 | 0.45 / 0.13 | 0.89 / 0.94
Blue-Red   | 0.72 / 0.23 | 0.47 / 0.14 | 0.86 / 0.93

Random Forest — S-2 / S-2(B)
Blue-Green | 0.59 / 0.35 | 0.32 / 0.28 | 0.85 / 0.92
Green-Red  | 0.75 / 0.42 | 0.48 / 0.28 | 0.86 / 0.91
Blue-Red   | 0.78 / 0.42 | 0.47 / 0.28 | 0.84 / 0.91

Random Forest — Aerial / Aerial(B)
Blue-Green | 0.17 / 0.16 | 0.06 / 0.05 | 0.96 / 0.96
Green-Red  | 0.18 / 0.16 | 0.07 / 0.05 | 0.96 / 0.96
Blue-Red   | 0.18 / 0.16 | 0.05 / 0.05 | 0.96 / 0.96

XGBoost — Gkt-1 / Gkt-1(B)
Blue-Green | 0.58 / 0.21 | 0.27 / 0.13 | 0.87 / 0.94
Green-Red  | 0.71 / 0.22 | 0.46 / 0.13 | 0.89 / 0.94
Blue-Red   | 0.72 / 0.24 | 0.47 / 0.14 | 0.88 / 0.93

XGBoost — S-2 / S-2(B)
Blue-Green | 0.59 / 0.35 | 0.32 / 0.28 | 0.85 / 0.91
Green-Red  | 0.75 / 0.42 | 0.48 / 0.28 | 0.86 / 0.91
Blue-Red   | 0.78 / 0.42 | 0.47 / 0.28 | 0.84 / 0.91

XGBoost — Aerial / Aerial(B)
Blue-Green | 0.17 / 0.16 | 0.06 / 0.05 | 0.96 / 0.96
Green-Red  | 0.18 / 0.16 | 0.07 / 0.05 | 0.96 / 0.96
Blue-Red   | 0.18 / 0.16 | 0.05 / 0.05 | 0.96 / 0.96
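The RMSE, MAE, and R² columns of Table 2 follow their standard definitions. A minimal sketch of the computation, using hypothetical reference and predicted depths rather than the study data:

```python
import numpy as np

def qc_metrics(true_depth, pred_depth):
    """Return RMSE (m), MAE (m), and R^2 between reference and predicted depths."""
    t = np.asarray(true_depth, dtype=float)
    p = np.asarray(pred_depth, dtype=float)
    err = p - t
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((t - t.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2

# Hypothetical check-point depths (m), for illustration only.
ref = [2.0, 4.0, 6.0, 8.0]
est = [2.1, 3.8, 6.3, 7.9]
rmse, mae, r2 = qc_metrics(ref, est)
print(f"RMSE={rmse:.3f} m, MAE={mae:.3f} m, R2={r2:.3f}")
```

The same three numbers per band pair and sensor populate the table above; `sklearn.metrics` offers equivalent `mean_squared_error`, `mean_absolute_error`, and `r2_score` helpers.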
Table 3. The QC metrics for the bathymetry extraction based on RF for different depth ranges and training data setups.

Information / Depth Range (m) | 0–10 (a)  | 0–10 (b)      | 0–15 (c)      | 0–20 (d)
Training Data Amount          | 10 K      | 10 K          | 11 K          | 12 K
Training Data Source          | SBE Only  | SBE and LiDAR | SBE and LiDAR | SBE and LiDAR
RMSE (m)                      | 0.77      | 0.67          | 1.41          | 2.60
MAE (m)                       | 0.53      | 0.32          | 0.88          | 1.73
R²                            | 0.46      | 0.95          | 0.88          | 0.73
Pearson-R                     | 0.68      | 0.97          | 0.97          | 0.85
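Table 3 adds the Pearson correlation alongside the error metrics, evaluated separately per depth interval. A hedged sketch of how such a per-interval evaluation could be assembled (the arrays and noise model are hypothetical, not the study's SBE/LiDAR data):

```python
import numpy as np

def interval_pearson(true_depth, pred_depth, d_min, d_max):
    """Pearson-R between reference and predicted depths, restricted to
    reference depths falling inside the interval [d_min, d_max] (m)."""
    t = np.asarray(true_depth, dtype=float)
    p = np.asarray(pred_depth, dtype=float)
    mask = (t >= d_min) & (t <= d_max)
    return float(np.corrcoef(t[mask], p[mask])[0, 1])

# Hypothetical reference depths and model estimates with pseudo-error (m).
rng = np.random.default_rng(0)
true_d = rng.uniform(0.0, 20.0, 1000)
pred_d = true_d + rng.normal(0.0, 1.0, 1000)

for lo, hi in [(0, 10), (0, 15), (0, 20)]:
    r = interval_pearson(true_d, pred_d, lo, hi)
    print(f"{lo}-{hi} m: Pearson-R = {r:.2f}")
```

Slicing by the reference depth, as here, mirrors the table's interval columns; masking on predicted depth instead would bias the sample toward over- or under-estimated points.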

Share and Cite

Gülher, E.; Alganci, U. Sensor Synergy in Bathymetric Mapping: Integrating Optical, LiDAR, and Echosounder Data Using Machine Learning. Remote Sens. 2025, 17, 2912. https://doi.org/10.3390/rs17162912