Article

Shallow Water Bathymetry Mapping from ICESat-2 and Sentinel-2 Based on BP Neural Network Model

1 School of Remote Sensing and Geomatics Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China
3 School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454000, China
* Author to whom correspondence should be addressed.
Water 2022, 14(23), 3862; https://doi.org/10.3390/w14233862
Submission received: 31 October 2022 / Revised: 22 November 2022 / Accepted: 24 November 2022 / Published: 27 November 2022
(This article belongs to the Special Issue Application of Ocean Colour Remote Sensing in Turbidity Monitoring)

Abstract
Accurate shallow water bathymetry data are essential for coastal construction and management, marine traffic, and shipping. With the development of remote sensing satellites and sensors, the satellite-derived bathymetry (SDB) method has been widely used in shallow water areas. However, traditional satellite bathymetry requires in-situ bathymetric data. The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2), carrying the Advanced Topographic Laser Altimeter System (ATLAS), provides a new technical tool and makes up for the shortcomings of traditional bathymetric methods in shallow waters. In this study, a new method is proposed to automatically detect photons reflected from the shallow seafloor in ICESat-2 altimetry data, and two satellite bathymetry models were trained to obtain shallow water depth from Sentinel-2 satellite images. First, sea surface and seafloor signal photons from ICESat-2 were detected in the Oahu (U.S. Hawaiian Islands) and St. Thomas (U.S. Virgin Islands) sampling areas to obtain water depths along the ground tracks. The results show that the RMSE is between 0.35 and 0.71 m and the R2 is greater than 0.92 when compared with in-situ airborne LiDAR bathymetry (ALB) data. Second, the ICESat-2 bathymetric points from Oahu Island were used to train a Back Propagation (BP) neural network model and obtain the SDB. The RMSE is between 0.97 and 1.43 m and the R2 is between 0.90 and 0.96, which are better than the multi-band ratio model with an RMSE of 1.03–1.57 m and an R2 of 0.89–0.95. The results show that the BP neural network model can effectively improve bathymetric accuracy compared with the traditional multi-band ratio model. This approach can obtain shallow water bathymetry more easily, without in-situ bathymetric data. Therefore, it can be applied widely, using freely available ICESat-2 and Sentinel-2 data, to shallow water areas such as coastal zones, islands, and inland water bodies.

1. Introduction

As the interaction zone between sea and land, or between islands or coral reefs and their surrounding environment, shallow water provides the basic physical environment for the sustainability and biodiversity of marine and coastal ecosystems [1,2]. Underwater topographic surveying in shallow water areas is a fundamental marine surveying and mapping task used to obtain three-dimensional coordinates of seafloor topographic points, and its core is bathymetry [3]. Water depth is an important topographic element in the shallow sea. Bathymetry in shallow water areas is important for coastal construction; marine safety; resource survey and development; fisheries and marine industries; marine transportation and shipping; environmental protection and management; and the study and management of island coastal zones [4,5,6]. In some areas, shallow water bathymetry changes over time due to seawater erosion and sediment transport. Therefore, acquiring high-resolution shallow water bathymetric data accurately, efficiently and at low cost is the goal of ocean surveying and mapping.
Traditional shallow water bathymetry is mainly measured by shipborne single-beam or multi-beam echosounders [7,8,9], airborne LiDAR [10,11,12], SAR [13,14], optical remote sensing [15,16,17] and other methods, but none of these can balance measurement accuracy, efficiency and cost at the same time [18]. For example, shipborne sonar measurements are highly accurate, but they are time consuming and costly, of limited use in very shallow areas, and difficult to deploy near the coast [19,20,21]. Airborne LiDAR systems are limited by the difficulty of system development and the complexity of data processing; only commercial software provided by vendors can be used, which is costly and time consuming [22].
With the development of remote sensing technology, satellite remote sensing data are increasingly used for bathymetry. Water depth retrieval using multispectral satellite images from satellites such as WorldView, Landsat and Sentinel is called satellite-derived bathymetry (SDB), which determines a mathematical model (linear, polynomial, exponential, etc.) from the radiative transfer formula and uses the model to calculate the depth. However, SDB accuracy is generally lower than that of active surveys, due to errors in satellite image correction and in-situ bathymetric data, as well as the limitations of the mathematical models. The development of machine learning and deep learning may improve bathymetric accuracy. For example, the Back Propagation (BP) neural network has been used in the field of remote sensing image processing to improve retrieval quality.
In order to reduce the consumption of manpower and time, improve measurement accuracy, and overcome the limitations of terrain and instruments, active bathymetry technology based on LiDAR is increasingly used for shallow water bathymetry. As the successor of ICESat, the ICESat-2 laser altimetry satellite was launched in September 2018 and carries the Advanced Topographic Laser Altimeter System (ATLAS), which provides new data for shallow water bathymetry. Compared to conventional shallow water bathymetry methods, ICESat-2 can achieve large-area, periodic and low-cost acquisition of shallow water bathymetry data, which partly compensates for the lack of bathymetric information in some areas. In recent years, the use of ICESat-2 bathymetry, instead of in-situ bathymetry data, combined with satellite images for shallow water bathymetry has been extensively studied. Compared with traditional single-data bathymetry methods, this approach has better performance in terms of accuracy and universality. Moreover, for some remote areas it is easier to invert the water depth by simply downloading ICESat-2 products and satellite images online [23]. For example, Forfinski-Sarkozi et al. [24] used the ICESat-2 altimeter simulator MABEL (Multiple Altimeter Beam Experimental LiDAR) to evaluate the bathymetric capability in detail, and the results showed that MABEL can measure water depths up to 8 m with a root mean square error of 0.7 m. Hsu et al. [25] combined ICESat-2 and Sentinel-2 data to measure the shallow water depth of six islands and reefs in the South China Sea; the RMSE of ICESat-2 bathymetry is 0.26–0.61 m when compared with airborne laser data, and the RMSE of the SDB is 0.50–0.90 m. Zhang et al. [26] trained four typical models with ICESat-2 bathymetric points and multispectral images and produced bathymetric maps of Coral Island, Ganquan Island and Lingshang Reef in the Xisha Islands, China, where the RMSE of ICESat-2 bathymetry is less than 0.52 m and the R2 is greater than 0.93, and the average RMSE of the SDB is 0.16 m with an average R2 of 0.90. For the Oahu area (U.S. Hawaiian Islands), Liu et al. [27] developed a downscaled bathymetric mapping approach (DBMA) that uses the water depth from multitemporal Landsat-8 data to calibrate the empirical model for high spatial resolution imagery (e.g., Sentinel-2A/B, GaoFen-1/2, ZiYuan-3, and WorldView-2). The results showed that DBMA provided high accuracy for depths ranging from 0 to 12 m in clear waters (0 to 5 m in turbid waters), with a root mean square error (RMSE) smaller than 2 m. For the St. Thomas area, Parrish et al. [28] analyzed the performance of ICESat-2 bathymetric mapping near the U.S. Virgin Islands and demonstrated that ATLAS has a maximum depth mapping capability of nearly one Secchi depth, for water depths up to 38 m, with agreement within 0.43–0.60 m root mean square error over a 1 m grid resolution.
However, these studies of bathymetry based on ICESat-2 and Sentinel-2 have the following problems: (1) ICESat-2 bathymetry has been studied in only a few individual regions, and the methods and results may not be applicable to other regions. (2) For satellite-derived bathymetry, the traditional mathematical models do not perform well in terms of model training and result accuracy. In this study, first, a method is proposed to extract bathymetric signals directly from ICESat-2 ATL03 data and obtain bathymetric points in the Oahu and St. Thomas sampling areas; the ICESat-2 bathymetry method is thus compared and validated in two completely different coastal environments. Second, a BP neural network model is trained using the ICESat-2 bathymetry points in the Oahu region and used to generate bathymetric maps from Sentinel-2 images, which are compared with the multi-band ratio model. Finally, the obtained bathymetric results are validated, and their performance is compared and analyzed using in-situ bathymetric data.

2. Materials and Methods

2.1. Study Areas and Data

2.1.1. Study Areas and In-Situ Bathymetric Data

The first study area is south of St. Thomas in the Virgin Islands of the eastern Caribbean Sea (Figure 1). St. Thomas is a volcanic island with a sparsely vegetated, rugged mountain range running east to west. The in-situ data are high-precision airborne LiDAR bathymetry data, which were collected by the USGS and NOAA in March 2014 using the second-generation Experimental Advanced Airborne Research LiDAR (EAARL-B). EAARL-B is an airborne pulsed laser ranging system for measuring ground elevation, vegetation canopy and coastal topography, which records the time difference between the emission and reception of the laser beam. The aircraft flew at a speed of approximately 55 m per second over the target area, at an altitude of approximately 300 m. The laser ranging system has a root mean square error of 13.5 cm in the vertical direction [29]. Airborne laser bathymetry data from this area are used to evaluate the bathymetric accuracy and penetration performance of ICESat-2.
The other study area is located in the west of Oahu (Figure 2), one of the eight islands of the Hawaiian archipelago. The island covers 1574 km2, ranking third in the Hawaiian archipelago, and is home to Pearl Harbor and Honolulu, the largest city in the Pacific Ocean. The coastline is winding, with many coral reefs along the coast, and the water is clear [30]. This area has shallow water LiDAR bathymetry data from SHOALS, provided by the University of Hawaii Coastal Geology Group website, which are used to verify and analyze the bathymetric accuracy of ICESat-2 and Sentinel-2.

2.1.2. ICESat-2 Data

The ICESat-2 satellite, launched in September 2018, is equipped with the Advanced Topographic Laser Altimeter System (ATLAS). It uses photon-counting LiDAR and an auxiliary system to calculate the time taken by the laser from emission to return and to determine the geodetic latitude and longitude of each photon. The instrument pulse frequency is 10 kHz, the nominal ground speed is approximately 7000 m/s, and the footprints on the ground are spaced at approximately 0.7 m [31]. The three pairs of laser beams from ATLAS form six ground tracks. The two ground tracks on the left and right of the same pair are separated by about 90 m, and different pairs of ground tracks are separated by about 3.3 km [32].
ICESat-2 provides 21 data products divided into four levels: Level-1, Level-2, Level-3A and Level-3B. The data used in this study are the Level-2 product ATL03 (Global Geolocated Photon Data). ATL03 data provide time, latitude, longitude, ellipsoidal height, and along-track flight distance for each photon from ATLAS. Elevations are corrected for several geophysical effects, such as atmospheric delays, solid Earth tides and system pointing deviations. However, the bathymetric errors caused by the refraction effect have not been corrected [33,34].
ATL03 data provide a confidence level for each photon event (land, ocean, sea ice, land ice, inland water) selected as a signal. The confidence level ranges from 0 to 4, where 0 is noise, 1 is a buffered photon that the algorithm classifies as background, 2 is low confidence, 3 is medium confidence and 4 is high confidence. The higher the confidence level, the more likely the photon is a signal. The algorithm assumes that noise photons follow a Poisson distribution and detects signal photons as outliers of that distribution. In principle, the ATL03 labelling results could be used to remove noise and identify the surface and bottom of the water. However, due to attenuation and scattering effects in water, the distribution of signal and noise photons underwater is very different from that in the atmosphere. Many signal photons may therefore be labelled as noise while many noise photons are retained, so the denoising results provided with ATL03 should be used with caution. Therefore, this study proposes a method to automatically detect signal photons in the raw ATL03 data and determine the surface and bottom of the water, in place of the labelling results provided with the ATL03 product.
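As a point of reference, the per-photon attributes described above can be read from an ATL03 granule as sketched below. This is not the authors' code; the HDF5 dataset names (e.g., gt1r/heights/h_ph) follow the public ATL03 data dictionary and should be verified against the product version actually used.

```python
# Minimal sketch of reading per-photon attributes from an ATL03 granule with h5py.
# Dataset names are taken from the ATL03 data dictionary for one ground track's
# "heights" group; check them against the product version in use.
import h5py

def read_atl03_photons(filename, beam="gt1r"):
    """Return along-track distance, longitude, latitude, ellipsoidal height
    and signal confidence for all photons of one beam."""
    with h5py.File(filename, "r") as h5:
        heights = h5[f"{beam}/heights"]
        photons = {
            "dist_along": heights["dist_ph_along"][:],  # along-track distance (m)
            "lon": heights["lon_ph"][:],
            "lat": heights["lat_ph"][:],
            "h": heights["h_ph"][:],                    # ellipsoidal height (m)
            "conf": heights["signal_conf_ph"][:],       # per-surface-type confidence flags
        }
    return photons
```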

2.1.3. Sentinel-2 Satellite Image

The Sentinel-2A and Sentinel-2B polar-orbiting satellites were launched in June 2015 and March 2017, respectively, and each carries a high-resolution MultiSpectral Instrument (MSI) at an altitude of 786 km. The MSI covers 13 spectral bands with a swath width of up to 290 km and ground resolutions of 10 m, 20 m and 60 m, depending on the band. The revisit period of a single satellite is 10 days, and the two satellites are complementary, so a complete image of the Earth's equatorial region can be acquired every 5 days; for the higher latitudes of Europe, this cycle takes only 3 days. The mission is mainly used to monitor the land environment and can provide information on the growth of terrestrial vegetation, soil coverage, and the environment of inland rivers and coastal areas [35,36].
The Level-1C images of Sentinel-2A and Sentinel-2B used in this study are available free of charge from the ESA Sentinel science data hub. The images are geometrically projected in UTM/WGS84 (Universal Transverse Mercator/World Geodetic System 84) and provide top-of-atmosphere reflectance. Level-1C images are geometrically corrected orthophotos that have not been atmospherically corrected. Therefore, it is necessary to process the Level-1C data into Level-2A data, which provide bottom-of-atmosphere reflectance after atmospheric correction. In this study, the Level-1C images are processed to Level-2A using the Sen2Cor plugin [37], which is embedded in the free Sentinel application platform SNAP [38]. Sen2Cor is based on scene classification and look-up tables from the libRadtran radiative transfer model. It is primarily designed for terrestrial areas but is the default Level-2A processor for atmospheric correction [39,40].
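A minimal sketch of the band extraction step is given below, assuming rasterio is used to read the Level-2A 10 m bands and that the digital numbers are converted to reflectance with the common 1/10000 scale factor; the file names are placeholders, not the actual granules used in the paper.

```python
# Sketch (not the authors' code) of reading Sentinel-2 L2A surface reflectance
# for the blue, green and red 10 m bands with rasterio.
import numpy as np
import rasterio

BAND_FILES = {
    "blue": "T04QFJ_B02_10m.jp2",   # hypothetical file names
    "green": "T04QFJ_B03_10m.jp2",
    "red": "T04QFJ_B04_10m.jp2",
}

def read_reflectance(path, scale=10000.0):
    """Read one Sentinel-2 L2A band and return reflectance as float32."""
    with rasterio.open(path) as src:
        dn = src.read(1).astype(np.float32)
        transform, crs = src.transform, src.crs
    return dn / scale, transform, crs

reflectance = {name: read_reflectance(path)[0] for name, path in BAND_FILES.items()}
```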

2.2. Methods

2.2.1. Shallow Water Bathymetry Mapping

The main work of this study is first to collect the ICESat-2 laser altimetry data and Sentinel-2 imagery over the study areas. Next, a histogram of the elevation distribution is applied to the ICESat-2 point cloud to find and extract the sea surface photons, and multiple moving median filtering is performed on the remaining photons to find seafloor signal photons. Accurate ICESat-2 bathymetric points are then obtained by calculating the elevation difference between the surface and seafloor signal photons and applying a refraction correction. The Sentinel-2 images are pre-processed to obtain the reflectance of each band. The band reflectance and water depth of the ICESat-2 bathymetry points are used to train the multi-band ratio model and the BP neural network model to invert the water depth from Sentinel-2 images. Finally, the water depths detected by ICESat-2 and the SDB derived from Sentinel-2 are validated using in-situ bathymetric data. Figure 3 shows the workflow of this study.

2.2.2. Detection of ICESat-2 Bathymetry Points

When the ICESat-2 satellite operates, the ATLAS instrument emits laser pulses at a fixed frequency. The corresponding ATL03 product contains the time and space information of all photon point clouds. The point cloud profile obtained along the flight direction is shown in Figure 4.
Although the ATLAS laser beam can penetrate the water surface, most photon reflections come from the sea surface. The raw data are divided into multiple spatial segments with an aggregation distance of 1000 m along the flight path, and a two-component Gaussian function is fitted to the elevation histogram of each spatial segment. Because large numbers of photons accumulate near the surface and the seafloor, bimodal histograms are formed in most spatial segments; the primary peak is close to the sea surface and the secondary peak is close to the seafloor, and the primary peak is used to determine the sea surface elevation [25]. The vertical bin size of the elevation histogram is an empirical value of 0.1 m, selected according to the Sturges formula [25,41] in Equation (1):
$$C = \frac{R}{1 + \log_2 n}$$
where C is the size of the vertical bin, R is the vertical variation range, and n is the number of samples. The two-component Gaussian fitting function is as follows (Equation (2)):
$$h = a_1 e^{-\frac{(x - \mu_1)^2}{\sigma_1^2}} + a_2 e^{-\frac{(x - \mu_2)^2}{\sigma_2^2}}$$
where $a_1$ and $a_2$ are the peak heights (the numbers of photons in the elevation interval), $\mu_1$ and $\mu_2$ are the expected values corresponding to the peaks (the elevation values of the peaks), and $\sigma_1$ and $\sigma_2$ are the standard deviations of the peaks (related to the full width at half maximum). Photons with elevations within $\mu_1 \pm 3\sigma_1$ are considered to be the ocean surface.
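A sketch of this sea-surface detection step for one 1000 m segment is given below, using numpy and scipy; the initial-guess logic for the two-component Gaussian fit is a simplification for illustration rather than the authors' exact implementation.

```python
# Build a 0.1 m elevation histogram for one along-track segment and fit the
# two-component Gaussian of Equation (2); photons within mu1 +/- 3*sigma1 of
# the primary peak are flagged as sea-surface photons.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-((x - mu1) ** 2) / s1 ** 2)
            + a2 * np.exp(-((x - mu2) ** 2) / s2 ** 2))

def detect_sea_surface(elev, bin_size=0.1):
    """Return (mu1, sigma1) of the primary peak and a mask of surface photons."""
    bins = np.arange(elev.min(), elev.max() + bin_size, bin_size)
    counts, edges = np.histogram(elev, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # crude initial guess: tallest bin as the primary (surface) peak,
    # a deeper elevation as the secondary (seafloor) peak
    i1 = np.argmax(counts)
    p0 = [counts[i1], centers[i1], 0.5,
          0.3 * counts[i1], centers[i1] - 5.0, 1.0]
    popt, _ = curve_fit(two_gaussian, centers, counts, p0=p0, maxfev=10000)
    a1, mu1, s1, a2, mu2, s2 = popt
    surface = np.abs(elev - mu1) <= 3 * abs(s1)
    return mu1, abs(s1), surface
```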
The photons on and above the ocean surface are removed, and the remaining photons are those reflected underwater, most of which are reflected from the seafloor. Because the elevations of the seafloor signal photons generally lie in the middle of these remaining photons, multiple median filtering is used to detect them. Since the seafloor topography and the density of the photon point cloud differ between areas, two different median filtering schemes are used in the two study areas, so that noise points are removed while the true benthic signal photons are retained and the final results achieve higher accuracy and resolution.
Standard median filtering retains only one bathymetric point in each filter window, so many true bathymetric signals are filtered out and very few bathymetric points remain. This study uses an improved median filtering method that introduces a filtering step size: the start position of the next filter window is not the end position of the previous window, but the start position of the previous window plus the filter step size. Adjacent filtering windows therefore overlap, so that true bathymetric signal points filtered out in one window have a chance to be retained in the next. In the Oahu sampling area, the number of photon points is used to define the filter window size and step size; in the St. Thomas sampling area, the along-track flight distance is used instead. The window size and step size are empirical values obtained from multiple experiments. For example, in the Oahu area, 25 photon points are used as the filter window and 5 photon points as the filter step. The median filter is applied twice in the same experiment to remove noise. The remaining photons are those reflected from the seafloor, and their elevations are calculated.
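The overlapping-window median filtering can be sketched as follows for the photon-count variant used in the Oahu area (window of 25 photons, step of 5); the St. Thomas variant replaces the photon-count window with an along-track distance window, and in the paper the filter is applied twice.

```python
# Overlapping-window median filter: each window retains the photon closest to
# the window median; because windows overlap by (window - step) photons,
# more true seafloor points survive than with non-overlapping windows.
import numpy as np

def seafloor_median_filter(dist, elev, window=25, step=5):
    """dist, elev: along-track distance and elevation of sub-surface photons,
    sorted by distance. Returns the retained seafloor candidate photons."""
    n = elev.size
    keep = np.zeros(n, dtype=bool)
    for start in range(0, max(n - window, 0) + 1, step):
        idx = np.arange(start, min(start + window, n))
        med = np.median(elev[idx])
        keep[idx[np.argmin(np.abs(elev[idx] - med))]] = True
    return dist[keep], elev[keep]
```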

2.2.3. Refraction Correction

The propagation speed of photons in air is approximately 2.997 × 10^8 m/s. When photons from ATLAS enter the ocean and continue to travel in the water, they are refracted at the air–water interface: in seawater the speed drops to approximately 2.235 × 10^8 m/s and the photons change direction. Refraction therefore introduces a horizontal offset and a vertical offset relative to the photon geolocation given in the ATL03 product, because the geographic coordinates in ATL03 are calculated assuming propagation in a single air medium. The lower speed means that the assumed photon path is longer than the actual path, making the bottom of the water appear deeper than it actually is [23,42]. Therefore, the refraction error must be corrected.
Figure 5 is a schematic diagram of the refraction of the laser beam at the air–water interface. The red points are the positions obtained under the straight-line propagation assumption, and the green point is the real photon position after refraction correction. Because the satellite observes the Earth almost vertically, assuming an average orbital altitude of 496 km and a ground beam spacing of 3.3 km, the actual maximum value of $\theta_1$ in normal pointing mode is only arcsin(3.3 km/496 km) ≈ 0.38°. The horizontal correction (≈0.003D) is therefore negligible, causing a maximum horizontal offset of 9 cm at a depth of 30 m [18,23,43]. Therefore, only the refraction correction in the vertical direction needs to be calculated.
The law of refraction can be expressed as follows:
$$\frac{n_2}{n_1} = \frac{\sin \theta_1}{\sin \theta_2}$$
where $n_1$ (≈1.00029) and $n_2$ (≈1.34116) are the refractive indices of air and water, respectively, and $\theta_1$ and $\theta_2$ are the incidence and refraction angles. Although the refractive index of seawater varies with the temperature and salinity of the water and the wavelength of the laser beam, the resulting error is negligible here. Using trigonometric relationships, the law of refraction can also be expressed as follows:
$$\frac{n_2}{n_1} = \frac{S}{R}$$
where R is the actual distance the laser beam travels underwater with refraction, and S is the theoretical distance the laser beam would travel underwater if no refraction occurred. Therefore, the water depth after refraction correction is calculated as follows:
$$Z = D \cdot \frac{n_1}{n_2}$$
where Z is the water depth after refraction correction and D is the water depth without refraction correction. The uncorrected water depth D is obtained by calculating the difference between the sea surface photon elevation and the seafloor photon elevation.
Finally, tidal heights obtained from a tide website were used to convert the vertical datum from WGS84 to Mean Lower Low Water (MLLW).
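A compact sketch of the refraction correction (Equation (5)) together with the tidal datum shift is given below; the tide height argument is a placeholder for the value queried from the tide website, and the sign convention of the datum shift is an assumption made for illustration.

```python
# Refraction correction of ICESat-2 depths (Equation (5)) plus a simple
# tide offset to shift the depths toward the MLLW datum.
N_AIR, N_WATER = 1.00029, 1.34116

def refraction_corrected_depth(surface_elev, seafloor_elev, tide_height=0.0):
    """Uncorrected depth D is the surface-minus-seafloor elevation difference;
    corrected depth is Z = D * n1 / n2 (about 0.746 * D)."""
    d_uncorrected = surface_elev - seafloor_elev
    z = d_uncorrected * N_AIR / N_WATER
    return z - tide_height  # assumed sign: subtract tide height above MLLW

# example: a 10 m uncorrected depth becomes roughly 7.46 m after correction
print(refraction_corrected_depth(0.0, -10.0))
```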

2.2.4. Satellite-Derived Bathymetry Based on BP Neural Network Model

The multilayer feedforward network trained with the BP algorithm is currently the most widely used neural network and is called the BP neural network. Its main strength is its ability to map complex nonlinear functional relationships. A BP neural network is a self-adaptive nonlinear dynamic system composed of interconnected neuron models, thresholds, activation functions, and so on. The basic structure of the multilayer feedforward neural network, from front to back, is an input layer, one or more hidden layers, and an output layer. Figure 6 shows the satellite-derived bathymetry structure based on the BP neural network model. The errors generated during training are propagated backward according to the back propagation principle. BP neural networks are widely used in the field of remote sensing image processing [44,45].
The learning process of this neural network consists of two stages: forward propagation of the signal and backward propagation of the error. In forward propagation, the input samples are passed in from the input layer, processed layer by layer through the hidden layers, and then passed to the output layer. If the actual output of the output layer does not match the desired output, the error is propagated backward, layer by layer, from the output layer through the hidden layers to the input layer. The error is apportioned to all neurons in each layer to obtain the error signal of each layer, which is used to adjust the weights of the neurons in that layer. The weights are continuously adjusted by cycling through the input and output sample sets, in order to reduce the output errors of all input samples to a satisfactory accuracy. Training takes a specified error as the target and continues to adjust and update the weights from the output layer to the input layer until the target error is reached and the iterative updates stop.
For the training of this network, the reflectance of the red, green and blue bands of the Sentinel-2 image at the ICESat-2 bathymetry points is used as the input layer, and the water depth obtained by ICESat-2 is used as the output layer. The number of nodes in the hidden layer is 10. The training data account for 70% of the input data, the validation data for 15%, and the test data for 15%. The activation function is the sigmoid function, and the training function is trainlm, which uses the Levenberg-Marquardt algorithm and has the largest memory requirement but the fastest training speed. The maximum number of training epochs is 1000, the learning rate is 0.001, and the training target accuracy is 10^-7.
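A sketch of an equivalent BP neural network regression in Python with scikit-learn is shown below; scikit-learn does not provide the Levenberg-Marquardt (trainlm) optimizer, so L-BFGS is used as a stand-in, and the 70/15/15 split is simplified to a 70/30 train/test split.

```python
# BP-style feedforward regression: inputs are blue/green/red reflectance at
# ICESat-2 bathymetric points, the target is the ICESat-2 water depth.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def train_bpnn(band_reflectance, depth):
    """band_reflectance: (n, 3) array of B/G/R reflectance; depth: (n,) metres."""
    x_train, x_test, y_train, y_test = train_test_split(
        band_reflectance, depth, test_size=0.3, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(10,),  # one hidden layer, 10 nodes
                         activation="logistic",     # sigmoid activation
                         solver="lbfgs",            # stand-in for Levenberg-Marquardt
                         max_iter=1000,
                         tol=1e-7)
    model.fit(x_train, y_train)
    return model, model.score(x_test, y_test)       # R2 on held-out points
```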
In addition, satellite-derived bathymetry establishes a mapping between the radiant energy of a water sampling unit and the water depth, based on the radiative transfer equation or an empirical equation, and uses the grey value (or reflectance) of the optical remote sensing image to calculate the water depth of each pixel. In general, the reflectance of the water surface decreases as the water depth increases. However, the attenuation of shorter wavelength bands is smaller than that of longer wavelength bands, which makes the logarithmic ratio of the reflectances of different bands highly correlated with the water depth. Lyzenga et al. proposed a band ratio model using LiDAR bathymetry data combined with satellite remote sensing imagery, which links the water surface remote sensing reflectance of two bands (blue and green) to the water depth through the logarithmic ratio of the high-absorption band to the low-absorption band, as shown in Equation (6) [46,47,48,49]. A linear model is then built between the ratio and the water depth.
$$R = \frac{\ln [ n R_w(\lambda_i) ]}{\ln [ n R_w(\lambda_j) ]}$$
where R is the band ratio, n is a constant, and $R_w(\lambda_i)$ and $R_w(\lambda_j)$ are the reflectances of bands $\lambda_i$ and $\lambda_j$.
In recent years, Tian [50] improved the traditional logarithmic transformation model by using a separate adjustment factor for each band, and the results show a significant improvement in bathymetric inversion capability [4]. The logarithmic transformation relationship in the improved model is shown in Equation (7).
$$R = \frac{\ln [ m R_w(\lambda_i) ]}{\ln [ n R_w(\lambda_j) ]}$$
At present, the commonly used linear regression models for remote sensing water depth mainly include the single-factor model, the two-factor model and the multi-factor model. The model used in this study is the multi-factor model (Equation (8)).
$$Z = a_1 R_1 + a_2 R_2 + a_3 R_3 + b$$
where $a_1$, $a_2$, $a_3$ and b are regression coefficients, and $R_1$, $R_2$ and $R_3$ are the results of Equation (7) with the blue-green, blue-red and green-red band pairs as inputs, respectively.
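The multi-band ratio model of Equations (7) and (8) can be sketched as follows; the adjustment factors m = n = 1000 are placeholder values, since the paper tunes them experimentally for each image date.

```python
# Log-ratio features (Equation (7)) for the blue/green, blue/red and green/red
# pairs, followed by the multi-factor linear regression of Equation (8).
import numpy as np
from sklearn.linear_model import LinearRegression

def log_ratio(rw_i, rw_j, m=1000.0, n=1000.0):
    return np.log(m * rw_i) / np.log(n * rw_j)

def ratio_features(blue, green, red):
    """Stack R1 (blue/green), R2 (blue/red) and R3 (green/red) as inputs."""
    return np.column_stack([log_ratio(blue, green),
                            log_ratio(blue, red),
                            log_ratio(green, red)])

def train_mbr(blue, green, red, depth):
    reg = LinearRegression().fit(ratio_features(blue, green, red), depth)
    return reg  # reg.coef_ gives a1, a2, a3; reg.intercept_ gives b
```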

3. Results and Analysis

3.1. ICESat-2 Bathymetric Points

Since the latitude and longitude of the ICESat-2 bathymetric photon points and the reference data points cannot be exactly the same, natural neighbor interpolation is used to interpolate the reference data to the positions of the ICESat-2 bathymetric photon points, without extrapolating beyond the data coverage. Natural neighbor interpolation is expected to perform well even when the interpolation points lie at the boundaries of the interpolated dataset, since it is computed using area-based weights [23,51].
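A sketch of this matching step is shown below; SciPy does not ship a natural neighbor interpolator, so linear barycentric interpolation via griddata is used here as a stand-in, which likewise does not extrapolate outside the convex hull of the reference points.

```python
# Interpolate the reference ALB depths to the ICESat-2 photon locations.
import numpy as np
from scipy.interpolate import griddata

def sample_reference_depths(ref_xy, ref_depth, icesat_xy):
    """ref_xy: (m, 2) ALB point coordinates; ref_depth: (m,) depths;
    icesat_xy: (n, 2) ICESat-2 bathymetric point coordinates."""
    matched = griddata(ref_xy, ref_depth, icesat_xy, method="linear")
    valid = ~np.isnan(matched)  # drop ICESat-2 points with no surrounding reference data
    return matched, valid
```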
The acquisition dates of ICESat-2 ATL03 data in the Oahu area are 8 September 2019, 14 June 2020 and 5 December 2020, and the acquisition dates in the St. Thomas area are 22 November 2018 and 21 February 2019. This study uses the methods in Section 2.2.2 and Section 2.2.3 to derive nearshore shallow water depths from ICESat-2 data for three dates (four ground tracks) on western Oahu and two dates (six ground tracks) on St. Thomas. The ICESat-2 bathymetry performance is verified and analyzed by comparing the ICESat-2 bathymetry points with the airborne LiDAR bathymetry data in each area. The evaluation metrics include mean absolute error (MAE), mean relative error (MRE), root mean square error (RMSE), and coefficient of determination (R2).
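A small helper (a sketch, not the authors' code) for computing these four metrics is shown below; MRE is taken as the mean absolute error divided by the mean reference depth, as defined in the text, and R2 is computed as the coefficient of determination.

```python
# Evaluation metrics used to compare estimated and reference water depths.
import numpy as np

def bathymetry_metrics(estimated, reference):
    err = estimated - reference
    mae = np.mean(np.abs(err))
    mre = mae / np.mean(reference)              # mean absolute error / mean depth
    rmse = np.sqrt(np.mean(err ** 2))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                  # coefficient of determination
    return {"MAE": mae, "MRE": mre, "RMSE": rmse, "R2": r2}
```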
Figure 7 and Figure 8 show the raw ICESat-2 photons and processing results for the Oahu area on 8 September 2019, and Figure 9 and Figure 10 show the raw photons and processing results for the St. Thomas area on 21 February 2019. Figure 11 shows the signal photons reflected from the water surface and bottom detected from the four ICESat-2 ground tracks in the Oahu area, and Figure 12 shows those detected from the six ground tracks in the St. Thomas area; the blue points are signal photons reflected from the sea surface, and the black points are signal photons reflected from the seafloor. Figure 13 shows the correlation between the ICESat-2 bathymetric points obtained in the two study areas and the airborne LiDAR bathymetry data. Table 1 and Table 2 show the comparison results for the four ground tracks in the Oahu area and the six ground tracks in the St. Thomas area, respectively. For the four ground tracks in the Oahu area, MAE is between 0.31 m and 0.51 m, MRE (defined as mean absolute error divided by mean water depth) is between 2% and 8%, RMSE is between 0.35 m and 0.58 m, and R2 is greater than 0.92 for all four tracks. For the six ground tracks in the St. Thomas area, MAE is between 0.26 m and 0.52 m, MRE values are all no more than 3%, RMSE is between 0.37 m and 0.71 m, and R2 is greater than 0.99 for all six tracks.
In general, the bathymetric data obtained from the ICESat-2 satellite agree well with the in-situ bathymetric dataset. Comparing the ICESat-2 bathymetry results for the Oahu and St. Thomas sampling areas, the maximum detection depth in the Oahu sampling area does not exceed 30 m and the measurement error is larger, whereas the St. Thomas sampling area has a maximum detection depth of approximately 35 m, much lower error and 99% correlation with the validation data. At the same depth, the relative error of bathymetry in the St. Thomas area is smaller, the correlation is higher, and the bathymetric results are better. This is mainly because the ICESat-2 photon points reflected from the seafloor in the Oahu area are much less dense and unevenly distributed, so filtering with an along-track distance window retains more noise points there and the photon-count window gives better results. In contrast, the ICESat-2 photon points reflected from the seafloor in the St. Thomas area are denser and more evenly distributed, and better results are obtained with the along-track distance window.
The results show that ICESat-2 satellite altimetry data can provide high accuracy shallow water bathymetry. They can be used as training data for shallow water bathymetry from remote sensing images and provide a reference for areas where bathymetry data are lacking.

3.2. Satellite-Derived Bathymetry with Sentinel-2 Imagery

In the Oahu sampling area, the ICESat-2 bathymetry points and the preprocessed Sentinel-2 images are used to train the BP neural network model, which is compared with the multi-band ratio model. The two trained models are used to derive bathymetry from the Sentinel-2 images and are compared with airborne LiDAR bathymetry data to verify their accuracy.
Although the maximum water depth detected by ICESat-2 in the sampling area near Oahu is about 30 m, more than 92% of the points have a water depth of less than 20 m. Therefore, only results for water depths between 0 m and 20 m are retained when calculating water depths from the Sentinel-2 images. Figure 14 shows the satellite-derived bathymetry results for the models trained using ICESat-2 bathymetry points; the Sentinel-2 images were acquired on 10 December 2019, 30 December 2019, 28 January 2022 and 22 February 2022. Figure 15 and Table 3 show the comparison of the SDB with the in-situ bathymetry data, where MBR is the multi-band ratio model, BPNN is the BP neural network model, N is the number of SDB bathymetric points, RMSE is the root mean square error and R2 is the coefficient of determination.
For the multi-band ratio model, the RMSE is between 1.03 m and 1.57 m with an average of 1.40 m, and R2 is between 0.89 and 0.95 with an average of 0.92. For the BP neural network model, the RMSE is between 0.97 m and 1.43 m with an average of 1.22 m, and R2 is between 0.90 and 0.96 with an average of 0.93. Among the SDB results for the four Sentinel-2 dates, the highest accuracy is obtained for the image of 2019/12/10. Because this image has more cloud coverage in the study area, the cloud-covered portion is filtered out during processing, resulting in fewer SDB points than for the other three dates. In contrast, the accuracy of the SDB derived from the 2019/12/30 image is worse than for the other three dates: it has the largest number of points N, and its R2 of 0.89 is the lowest of the four dates.
Although both models require parameters to be set during training, the parameter settings in the multi-band ratio model directly affect the final computational results, whereas in the BP neural network model the parameters only affect the training of the model. In terms of bathymetric accuracy, the BP neural network results have lower RMSE and higher R2. However, as far as the models themselves are concerned, the multi-band ratio model explicitly reflects the mapping relationship between water depth and remote sensing images, while the BP neural network model only provides the final output. Overall, the BP neural network model has higher validation accuracy and better fits the relationship between the ICESat-2 bathymetry points and the reflectance of the multispectral images, so it performs better than the multi-band ratio model in this study.

4. Discussion

4.1. ICESat-2 Bathymetric Error

From Table 1 and Table 2, the MAE, MRE, RMSE and R2 of the ICESat-2 bathymetry results for the Oahu Island area are all worse than those for St. Thomas Island. It is worth noting that the poorer validation accuracy of the ICESat-2 bathymetry points in the Oahu area may be largely attributable to the in-situ bathymetric data. The airborne LiDAR bathymetry (ALB) data for this area were acquired before 2004, while the ICESat-2 bathymetric data were acquired after 2018, so some changes in seafloor elevation during this interval are inevitable. In nearshore shallow water areas, terrestrial water bodies constantly transport debris particles and various dissolved nutrients to the ocean, and in the marginal sea, large amounts of terrigenous sediments and biogenic sediments formed by upper-ocean organisms settle and accumulate on the bottom. This may cause some minor changes to the seafloor. In addition, the mobility of water bodies and sea level changes may also contribute to the differences.
In contrast, the ALB data for the St. Thomas area were acquired in 2014, so the seafloor elevation has changed less and the validation results are more accurate. It can be seen from Figure 7 and Figure 8 that the sea surface fluctuation in the sampling area near Oahu is relatively large and the oscillation is obvious, with a sea surface amplitude of up to 0.34 m, whereas the sea surface fluctuation in the St. Thomas sampling area is relatively small; this may be one of the sources of the ICESat-2 water depth error. In addition, errors in the airborne LiDAR bathymetry data themselves can affect the validation of the ICESat-2 bathymetry results. The difference in maximum detectable water depth between the two study areas may be due to the water quality environment of the study areas.

4.2. Satellite Bathymetric Error

This study uses ICESat-2 bathymetry points in the Oahu study area to train the models and derive the SDB from Sentinel-2 images. Firstly, according to the law of error propagation, the SDB error is directly related to the error of the ICESat-2 bathymetry points. Secondly, the image quality of Sentinel-2 also affects the results; we found that not all image dates yield SDB with good accuracy. Image pre-processing from L1C to L2A is important for satellite-derived bathymetry, where sun glint and whitecaps can seriously affect the reflectance obtained from Sentinel-2, and their residuals can cause bathymetric errors [38,52,53,54]. The local water clarity, water column conditions and substrate type at the time of image acquisition also affect the accuracy of bathymetry [37,55,56,57]. Thirdly, the ALB data for the Oahu area were collected before 2004, while the Sentinel-2 images were collected after 2018, and this time interval can affect the validation results. In addition, to accurately match the geographic locations of the SDB points and the reference ALB points, this study interpolates the reference data, which introduces additional errors into the verification results.
It is worth noting that when training the multi-band ratio model, the values of the ratio parameters m and n in Equation (7) have some impact on the accuracy of the results. In this study, the values of m and n are different for each Sentinel-2 image date used to train the model, and the values are obtained through multiple experiments. Although the improved Equation (7) has two ratio parameters, the training accuracy is higher when the values of m and n are the same or similar.

4.3. Error Correction

Refraction correction and tidal correction are essential for bathymetry with the ICESat-2 dataset. According to the refraction correction formula (Equation (5)), the refraction correction is about 25.42% of the uncorrected water depth; for the maximum water depth of 35 m in this study, the correction is about 8.9 m. From the tide website, the tidal heights in the Oahu Island sampling area at the times of ICESat-2 data acquisition on 2019/09/08, 2020/06/14 and 2020/12/05 were 0.38 m, 0.13 m and 0.25 m, and the tidal heights in the St. Thomas area on 2018/11/22 and 2019/02/21 were 0.23 m and 0.05 m. The refraction correction therefore significantly improves the accuracy of the ICESat-2 bathymetry points, and the tidal correction reduces the bathymetric datum differences between the ICESat-2 bathymetry and the ALB data.
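For reference, the 25.42% figure and the 8.9 m correction follow directly from Equation (5) and the refractive indices given in Section 2.2.3:

$$1 - \frac{n_1}{n_2} = 1 - \frac{1.00029}{1.34116} \approx 0.2542, \qquad 0.2542 \times 35\ \mathrm{m} \approx 8.9\ \mathrm{m}$$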

5. Conclusions

In this study, a method for automatically detecting signal photons reflected from the sea surface and seafloor in ICESat-2 data is proposed. The ICESat-2 bathymetry points are used to train a BP neural network model and to estimate shallow water depth from Sentinel-2 images. The ICESat-2 bathymetry experiments in the Oahu and St. Thomas regions demonstrate the feasibility of bathymetry using ICESat-2 data in shallow water areas. The satellite-derived bathymetry experiment in the Oahu region further validates the effectiveness of fused ICESat-2 and Sentinel-2 bathymetry. The RMSE is between 0.97 and 1.43 m and the R2 is between 0.90 and 0.96, which are better than the multi-band ratio model with an RMSE of 1.03–1.57 m and an R2 of 0.89–0.95. The better performance of the BP neural network in satellite-derived bathymetry is demonstrated by comparing the multi-band ratio model and the neural network model.
In general, the bathymetric method in this study is an active–passive fusion method combining ICESat-2 and Sentinel-2, which avoids the limitations of traditional methods by replacing in-situ bathymetric data with ICESat-2 bathymetric data. It can acquire bathymetric data more effectively for shallow water areas where no in-situ data are available, and it has very important applications in shallow water areas such as islands, coasts and inland lakes, especially in remote areas.

Author Contributions

Methodology, X.G. and S.J.; validation, X.J.; data curation, X.J.; writing—original draft preparation, X.G.; writing—review and editing, S.J.; funding acquisition, S.J. and X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Strategic Priority Research Program Project of the Chinese Academy of Sciences, grant number XDA23040100; Jiangsu Marine Science and Technology Innovation Project, grant number JSZRHYKJ202202; Jiangsu Postgraduate Practice Innovation Program Project, grant number SJCX22_0372.

Data Availability Statement

ICESat-2 data can be downloaded from the NASA Earthdata Search website (https://search.earthdata.nasa.gov/search?q=ATL03, accessed on 3 November 2021) and the NSIDC website (https://nsidc.org/data/data-access-tool/ATL03/versions/5, accessed on 3 November 2021); Sentinel 2 images can be downloaded from the ESA Copernicus Data Center (https://scihub.copernicus.eu/dhus/#/home, accessed on 3 March 2022).

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. The authors declare no conflict of interest.

References

  1. Nicholls, R.J.; Cazenave, A. Sea-Level Rise and Its Impact on Coastal Zones. Science 2010, 328, 1517–1520. [Google Scholar] [CrossRef] [PubMed]
  2. Hoegh-Guldberg, O.; Mumby, P.J.; Hooten, A.J.; Steneck, R.S.; Greenfield, P.; Gomez, E.; Harvell, C.D.; Sale, P.F.; Edwards, A.J.; Caldeira, K.; et al. Coral Reefs Under Rapid Climate Change and Ocean Acidification. Science 2007, 318, 1737–1742. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Zhao, J.H.; Ouyang, Y.Z.; Wang, A.X. Status and Development Tendency for Seafloor Terrain Measurement Technology. Acta Geod. Cartogr. Sin. 2017, 46, 1786–1794. [Google Scholar] [CrossRef]
  4. Ma, Y.; Zhang, J.; Zhang, J.Y. Progress in Shallow Water Depth Mapping from Optical Remote Sensing. Advances in Marine Science 2018, 36, 331–351. [Google Scholar] [CrossRef]
  5. Xu, X.; Chen, Y.N.; Ma, Z.L. A Bathymetric Extraction Approach Through Refraction and Inversion from Overlapped Orthoimages. Hydrogr. Surv. Charting 2019, 39, 18–21. [Google Scholar] [CrossRef]
  6. McCombs, M.P.; Mulligan, R.P.; Boegman, L. Offshore wind farm impacts on surface waves and circulation in Eastern Lake Ontario. Coast. Eng. 2014, 93, 32–39. [Google Scholar] [CrossRef]
  7. Liu, Y.J. Application of single-beam and multi-beam bathymetric systems for underwater topographic surveys in shallow waters. Surv. World 2021, 3, 4–6. [Google Scholar]
  8. Shi, L. Analysis of comparison between single sounding system and shallow multi-beam sounding system used in bathymetric surveying. Heilongjiang Hydraul. Sci. Technol. 2018, 46, 32–34. [Google Scholar] [CrossRef]
  9. Zhang, W.Q.; Zhou, W.J.; Ju, Z.X.; Lin, X.F. Analysis of the combined application of single- and multi-beam systems for shallow area measurements. China Water Transp. Sci. Technol. Waterw. 2018, 5, 64–67. [Google Scholar] [CrossRef]
  10. Legleiter, C.J.; Overstreet, B.T.; Glennie, C.L.; Pan, Z.; Fernandez-Diaz, J.C.; Singhania, A. Evaluating the capabilities of the CASI hyperspectral imaging system and Aquarius bathymetric LiDAR for measuring channel morphology in two distinct river environments. Earth Surf. Process. Landf. 2016, 41, 344–363. [Google Scholar] [CrossRef]
  11. Parker, H.; Sinclair, M. The successful application of Airborne LiDAR Bathymetry surveys using latest technology. In Proceedings of the 2012 Oceans, Yeosu, Republic of Korea, 21–24 May 2012; pp. 1–4. [Google Scholar] [CrossRef]
  12. Ramnath, V.; Feygels, V.; Kalluri, H.; Smith, B. CZMIL (Coastal Zone Mapping and Imaging Lidar) bathymetric performance in diverse littoral zones. In Proceedings of the OCEANS 2015, Washington, DC, USA, 19–22 October 2015; pp. 1–10. [Google Scholar] [CrossRef]
  13. Wang, X.Z. Research on SAR Remote Sensing Imaging Mechanism and Inversion of Typical Shallow Water Topography. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 10 June 2018. [Google Scholar]
  14. Fu, B.; Huang, W.G.; Zhou, C.B.; Yang, J.S.; Shi, A.Q.; Li, D.L. Simulation study of sea bottom topography mapping by spaceborne SAR. Haiyang Xuebao 2001, 1, 35–42. [Google Scholar]
  15. Al Najar, M.; Benshila, R.; El Bennioui, Y.; Thoumyre, G.; Almar, R.; Bergsma, E.W.J.; Delvit, J.-M.; Wilson, D.G. Coastal Bathymetry Estimation from Sentinel-2 Satellite Imagery: Comparing Deep Learning and Physics-Based Approaches. Remote Sens. 2022, 14, 1196. [Google Scholar] [CrossRef]
  16. McCarthy, M.J.; Otis, D.B.; Hughes, D.; Muller-Karger, F.E. Automated high-resolution satellite-derived coastal bathymetry mapping. Int. J. Appl. Earth Obs. Geoinf. 2022, 107, 102693. [Google Scholar] [CrossRef]
  17. Duan, Z.; Chu, S.; Cheng, L.; Ji, C.; Li, M.; Shen, W. Satellite-derived bathymetry using Landsat-8 and Sentinel-2A images: Assessment of atmospheric correction algorithms and depth derivation models in shallow waters. Opt. Express 2022, 30, 3238. [Google Scholar] [CrossRef] [PubMed]
  18. Cao, B.C.; Fang, Y.; Jiang, Z.Z.; Gao, L.; Hu, H.Y. Water Depth Measurement from the Fusion of ICESat-2 Laser Satellite and Optical Remote Sensing Image. Hydrogr. Surv. Charting 2020, 40, 21–25. [Google Scholar] [CrossRef]
  19. Hu, Y.H. Research on bathymetry technology based on multibeam sonar system. Sci. Technol. Inf. 2018, 16, 36–37. [Google Scholar] [CrossRef]
  20. Tang, Q.H.; Chen, Y.L.; Lu, B.; Wen, W.; Ding, J.S. Comparison of Sounding Accuracy Between Multi-beam Sonar Systems EM1002S and GeoSwath. Coast. Eng. 2013, 32, 56–64. [Google Scholar]
  21. Wang, H.L. Establishment for The Model of Submarine Terrain with Sonar System and DGPS. J. Earth Sci. Environ. 1998, 2, 65–68. [Google Scholar]
  22. Cao, B.C. A Study of Remotely-Sensed Data Processing in Bathymetry. Ph.D. Thesis, Information Engineering University, Zhengzhou, China, 22 December 2017. [Google Scholar]
  23. Ranndal, H.; Christiansen, P.S.; Kliving, P.; Andersen, O.B.; Nielsen, K. Evaluation of a Statistical Approach for Extracting Shallow Water Bathymetry Signals from ICESat-2 ATL03 Photon Data. Remote Sens. 2021, 13, 3548. [Google Scholar] [CrossRef]
  24. Forfinski-Sarkozi, N.A.; Parrish, C.E. Analysis of MABEL Bathymetry in Keweenaw Bay and Implications for ICESat-2 ATLAS. Remote Sens. 2016, 8, 772. [Google Scholar] [CrossRef] [Green Version]
  25. Hsu, H.-J.; Huang, C.-Y.; Jasinski, M.; Li, Y.; Gao, H.; Yamanokuchi, T.; Wang, C.-G.; Chang, T.-M.; Ren, H.; Kuo, C.-Y.; et al. A semi-empirical scheme for bathymetric mapping in shallow water by ICESat-2 and Sentinel-2: A case study in the South China Sea. ISPRS J. Photogramm. Remote Sens. 2021, 178, 1–19. [Google Scholar] [CrossRef]
  26. Zhang, X.; Chen, Y.; Le, Y.; Zhang, D.; Yan, Q.; Dong, Y.; Han, W.; Wang, L. Nearshore bathymetry based on ICESat-2 and multispectral images: Comparison between Sentinel 2, Landsat 8, and testing Gaofen-2. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2449–2462. [Google Scholar] [CrossRef]
  27. Liu, Y.; Zhao, J.; Deng, R.; Liang, Y.; Gao, Y.; Chen, Q.; Xiong, L.; Liu, Y.; Tang, Y.; Tang, D. A downscaled bathymetric mapping approach combining multitemporal Landsat-8 and high spatial resolution imagery: Demonstrations from clear to turbid waters. ISPRS J. Photogramm. Remote Sens. 2021, 180, 65–81. [Google Scholar] [CrossRef]
  28. Parrish, C.E.; Magruder, L.A.; Neuenschwander, A.L.; Forfinski-Sarkozi, N.; Alonzo, M.; Jasinski, M. Validation of ICESat-2 ATLAS Bathymetry and Analysis of ATLAS’s Bathymetric Mapping Performance. Remote Sens. 2019, 11, 1634. [Google Scholar] [CrossRef] [Green Version]
  29. Fredericks, X.; Kranenburg, C.; Nagle, D.B. EAARL-B Submerged Topography: Saint Thomas, U.S. Virgin Islands, 2014; USGS: Reston, VA, USA, 2014. [Google Scholar] [CrossRef]
  30. Ustin, S. Classification of benthic composition in a coral reef environment using spectral unmixing. J. Appl. Remote Sens. 2007, 1, 011501. [Google Scholar] [CrossRef]
  31. Jasinski, M.; Stoll, J.; Hancock, D.; Robbins, J.; Nattala, J.; Pavelsky, T.; Morrison, J.; Ondrusek, M.; Parrish, C.; Jones, B.; et al. Algorithm Theoretical Basis Document (ATBD) for Inland Water Data Products ATL13, Version 4; NASA: Washington, DC, USA, 2021. [Google Scholar] [CrossRef]
  32. Neumann, T.; Brenner, A.; Hancock, D.; Robbins, J.; Saba, J.; Harbeck, K.; Gibbons, A.; Lee, J.; Luthcke, S.; Rebold, T.; et al. ATLAS/ICESat-2 L2A Global Geolocated Photon Data, Version 4; NASA National Snow and Ice Data Center Distributed Active Archive Center: Boulder, CO, USA, 2021; Available online: https://nsidc.org/sites/default/files/atl03-v004-userguide.pdf (accessed on 1 July 2021).
  33. Jasinski, M.F.; Stoll, J.D.; Cook, W.B.; Ondrusek, M.; Stengel, E.; Brunt, K. Inland and Near-Shore Water Profiles Derived from the High-Altitude Multiple Altimeter Beam Experimental Lidar (MABEL). J. Coast. Res. 2016, 76, 44–55. [Google Scholar] [CrossRef]
  34. Li, Y.; Gao, H.; Jasinski, M.F.; Zhang, S.; Stoll, J.D. Deriving High-Resolution Reservoir Bathymetry From ICESat-2 Prototype Photon-Counting Lidar and Landsat Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7883–7893. [Google Scholar] [CrossRef]
  35. Drusch, M.; del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  36. Caballero, I.; Stumpf, R.P. On the use of Sentinel-2 satellites and lidar surveys for the change detection of shallow bathymetry: The case study of North Carolina inlets. Coast. Eng. 2021, 169, 103936. [Google Scholar] [CrossRef]
  37. Main-Knorn, M.; Pflug, B.; Louis, J.; Debaecker, V.; Müller-Wilm, U.; Gascon, F. Sen2Cor for Sentinel-2. Proc. SPIE 2017, 10427, 1042704. [Google Scholar] [CrossRef]
  38. Ma, Y.; Xu, N.; Liu, Z.; Yang, B.; Yang, F.; Wang, X.H.; Li, S. Satellite-derived bathymetry using the ICESat-2 lidar and Sentinel-2 imagery datasets. Remote Sens. Environ. 2020, 250, 112047. [Google Scholar] [CrossRef]
  39. Casal, G.; Monteys, X.; Hedley, J.; Harris, P.; Cahalane, C.; McCarthy, T. Assessment of empirical algorithms for bathymetry extraction using Sentinel-2 data. Int. J. Remote Sens. 2019, 40, 2855–2879. [Google Scholar] [CrossRef]
  40. Warren, M.; Simis, S.; Martinez-Vicente, V.; Poser, K.; Bresciani, M.; Alikas, K.; Spyrakos, E.; Giardino, C.; Ansper, A. Assessment of atmospheric correction algorithms for the Sentinel-2A MultiSpectral Imager over coastal and inland waters. Remote Sens. Environ. 2019, 225, 267–289. [Google Scholar] [CrossRef]
  41. Sturges, H.A. The Choice of a Class Interval. J. Am. Stat. Assoc. 1926, 21, 65–66. [Google Scholar] [CrossRef]
  42. Mobley, C. The Optical Properties of Water. In Handbook of Optics, 2nd ed.; McGraw-Hill: New York, NY, USA, 1995; Volume 1. [Google Scholar]
  43. Neuenschwander, A.L.; Magruder, L.A. The Potential Impact of Vertical Sampling Uncertainty on ICESat-2/ATLAS Terrain and Canopy Height Retrievals for Multiple Ecosystems. Remote Sens. 2016, 8, 1039. [Google Scholar] [CrossRef] [Green Version]
  44. Lu, G.; Xu, D.; Meng, Y. Dynamic Evolution Analysis of Desertification Images Based on BP Neural Network. Comput. Intell. Neurosci. 2022, 2022, 5645535. [Google Scholar] [CrossRef]
  45. Cerrada, M.; Zurita, G.; Cabrera, D.; Sánchez, R.-V.; Artés, M.; Li, C. Fault diagnosis in spur gears based on genetic algorithm and random forest. Mech. Syst. Signal Process. 2016, 70–71, 87–103. [Google Scholar] [CrossRef]
  46. Lyzenga, D.R. Passive remote sensing techniques for mapping water depth and bottom features. Appl. Opt. 1978, 17, 379–383. [Google Scholar] [CrossRef]
  47. Klonowski, W.M.; Fearns, P.R.C.S.; Lynch, M.J. Retrieving key benthic cover types and bathymetry from hyperspectral imagery. J. Appl. Remote Sens. 2007, 1, 011505. [Google Scholar] [CrossRef]
  48. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 2003, 48, 547–556. [Google Scholar] [CrossRef]
  49. Ji, Q. Research on Water Depth Inversion Method of Multispectral Remote Sensing Image. Ph.D. Thesis, Shanghai Ocean University, Shanghai, China, 30 May 2021. [Google Scholar]
  50. Tian, Z. Study on Bathymetry Inversion Models Using Multispectral or Hyperspectral Data and Bathyorographical Mapping Technology. Ph.D. Thesis, Shandong University of Science and Technology, Qingdao, China, 10 June 2015. [Google Scholar]
  51. Amidror, I. Scattered data interpolation methods for electronic imaging systems: A survey. J. Electron. Imaging 2002, 11, 157–176. [Google Scholar] [CrossRef] [Green Version]
  52. Hedley, J.D.; Roelfsema, C.; Brando, V.; Giardino, C.; Kutser, T.; Phinn, S.; Mumby, P.J.; Barrilero, O.; Laporte, J.; Koetz, B. Coral reef applications of Sentinel-2: Coverage, characteristics, bathymetry and benthic mapping with comparison to Landsat 8. Remote Sens. Environ. 2018, 216, 598–614. [Google Scholar] [CrossRef]
  53. Kay, S.; Hedley, J.D.; Lavender, S. Sun Glint Correction of High and Low Spatial Resolution Images of Aquatic Scenes: A Review of Methods for Visible and Near-Infrared Wavelengths. Remote Sens. 2009, 1, 697–730. [Google Scholar] [CrossRef] [Green Version]
  54. Hedley, J.D.; Harborne, A.R.; Mumby, P.J. Technical note: Simple and robust removal of sun glint for mapping shallow-water benthos. Int. J. Remote Sens. 2005, 26, 2107–2112. [Google Scholar] [CrossRef]
  55. Casal, G.; Harris, P.; Monteys, X.; Hedley, J.; Cahalane, C.; McCarthy, T. Understanding satellite-derived bathymetry using Sentinel 2 imagery and spatial prediction models. GIScience Remote Sens. 2020, 57, 271–286. [Google Scholar] [CrossRef]
  56. Caballero, I.; Stumpf, R.P. Retrieval of nearshore bathymetry from Sentinel-2A and 2B satellites in South Florida coastal waters. Estuarine, Coast. Shelf Sci. 2019, 226, 106277. [Google Scholar] [CrossRef]
  57. Vahtmäe, E.; Kutser, T. Airborne mapping of shallow water bathymetry in the optically complex waters of the Baltic Sea. J. Appl. Remote Sens. 2016, 10, 025012. [Google Scholar] [CrossRef]
Figure 1. Map of the St. Thomas Island study area, which includes the ICESat-2 ground tracks. From left to right: 20181122GT3R, 20181122GT2R, 20190221GT2L, 20181122GT1R, 20181122GT1L, 20190221GT1L.
Figure 2. Map of the Oahu Island study area, which includes the ICESat-2 ground tracks. From left to right: 20200614GT2L, 20201205GT1L, 20190908GT1R, 20190908GT1L.
Figure 3. The workflow of shallow water bathymetry combining with ICESat-2 and Sentinel-2.
Figure 4. ATL03 photon point cloud cross-section.
Figure 5. Illustration of the refraction of the laser beam at the air–sea interface (adapted from [23,28]).
Figure 6. Satellite-derived bathymetry structure based on BP neural network model.
Figure 7. The raw photon point cloud of ATL03 for 20190908GT1R in the Oahu area.
Figure 8. Signal photons on the sea surface (Green) and seafloor (Red) detected from ATL03 for 20190908GT1R and signal photons after refraction correction (Blue) in the Oahu area.
Figure 9. The raw photon point cloud of ATL03 for 20190221GT1L in the St. Thomas area.
Figure 10. Signal photons on the sea surface (Green) and seafloor (Red) detected from ATL03 for 20190221GT1L and signal photons after refraction correction (Blue) in the St. Thomas area.
Figure 11. Detection results of reflected photons on the sea surface and seabed in the Oahu area; (a) 20190908GT1L; (b) 20190908GT1R; (c) 20200614GT2L; (d) 20201205GT1L.
Figure 12. Detection results of reflected photons on the sea surface and seabed in the St. Thomas area; (a) 20181122GT1L; (b) 20181122GT1R; (c) 20181122GT2R; (d) 20181122GT3R; (e) 20190221GT1L; (f) 20190221GT2L.
Figure 13. Comparison of ICESat-2 bathymetry and airborne LiDAR bathymetry data from the sampling area of Oahu (Left) and St. Thomas (Right).
Figure 14. SDB derived from four Sentinel-2 images by multiband ratio model and BP neural network model. The multi-band ratio models are as follows.
Figure 15. Comparison of water depths derived from four Sentinel-2 images acquired in the Oahu area with ALB data. The multi-band ratio models are as follows: (a) 2019/12/10; (b) 2019/12/30; (c) 2022/01/28; (d) 2022/02/22. The BP neural network models are as follows: (e) 2019/12/10; (f) 2019/12/30; (g) 2022/01/28; (h) 2022/02/22.
Table 1. Validation of ICESat-2 bathymetry results in Oahu area.
Date        Ground Track   MAE (m)   MRE   RMSE (m)   R2
2019/09/08  1L             0.32      8%    0.35       0.92
2019/09/08  1R             0.31      7%    0.39       0.97
2020/06/14  2L             0.47      2%    0.58       0.98
2020/12/05  1L             0.51      7%    0.55       0.93
Table 2. Validation of ICESat-2 bathymetry results in St. Thomas area.
Date        Ground Track   MAE (m)   MRE   RMSE (m)   R2
2018/11/22  1L             0.37      3%    0.54       0.99
2018/11/22  1R             0.32      2%    0.43       0.99
2018/11/22  2R             0.52      3%    0.71       0.99
2018/11/22  3R             0.34      1%    0.41       0.99
2019/02/21  1L             0.26      2%    0.37       0.99
2019/02/21  2L             0.41      2%    0.54       0.99
Table 3. Validation results of SDB in the Oahu area.
Date         | MBR: N    RMSE (m)   R2   | BPNN: N   RMSE (m)   R2
2019/12/10   | 87065     1.03       0.95 | 88322     0.97       0.96
2019/12/30   | 134134    1.49       0.89 | 128952    1.43       0.90
2022/01/28   | 129175    1.57       0.92 | 130786    1.18       0.93
2022/02/22   | 133228    1.51       0.92 | 133228    1.29       0.94
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
