1. Introduction
The wet/dry shoreline, also called the high-water line, is defined as the maximum runup limit on a rising tide: seaward of this line the beach is still wet, while landward of it the sand is dry [1]. This shoreline is affected by the wind, wave, runup, setup, current, and tidal conditions at the moment of observation. Detecting and predicting changes in the position of the wet/dry shoreline on the time scale of hours or less is essential for beach risk management and for coastal researchers [2,3,4,5].
The wet/dry shoreline was selected as the best indicator of beach inundation among forty-five candidate shoreline indicators [6]. This indicator is well suited for imagery-based research that requires a stable and repeatable inundation metric. McCurdy [7] and McBeth [8] studied the wet/dry shoreline and concluded that there was an insignificant difference between the water line of the previous high tide and the wet/dry shoreline in the studied imagery. Stafford [9] confirmed this finding, attributing it to the stable nature of the wet/dry shoreline over a tidal cycle. Furthermore, Dolan [10] stated that the wet/dry shoreline is a stable shoreline indicator and is less sensitive to the tidal stage than the instantaneous runup limit. Thus, the wet/dry shoreline is a shoreline definition well suited to the goal of measuring and predicting coastal inundation.
Relative to elevation proxies or tidal datums, such as Mean High Water (MHW), previous literature [1] has noted that the wet/dry shoreline is generally not a stable indicator for measuring shoreline change because of its dependence on the tide, water level, and runup, and because of the subjectivity of its delineation. This is not a concern for the present paper, however, since we are not monitoring wet/dry shoreline change or erosion over a lengthy period, i.e., days or longer. Instead, this paper uses the wet/dry shoreline to calibrate and train an AI model that creates a time series of the position of the wet/dry shoreline which, in future research, will feed an AI model predicting coastal inundation at a time scale of hours or less.
Additionally, using the wet/dry shoreline is critical for the operational application of this research. We interviewed multiple beach managers, and they are most interested in short-term predictions represented by the position of the wet/dry shoreline. From an operational point of view, they are not looking for a long-term model; they want to know how far the runup will reach on the beach in the next couple of hours to couple of days. They need short-term predictions to determine whether beach access roads should be closed and whether lifeguard stands should stay on the beach during the next inundation event. A prediction of the wet/dry shoreline, rather than of the average water level at a tide gauge, will be most helpful for beach managers making such decisions. These are important decisions on beach access that protect ecological, biological, and economic resources [11,12,13]. Thus, the wet/dry shoreline is the best indicator to satisfy the needs of beach managers since it captures the current metocean conditions.
Given the significant benefits of detecting the wet/dry shoreline, there is a need for an automated method to detect it from remote sensing imagery [14]. Traditional approaches rely on semi-automatic software applied to satellite imagery [15,16,17,18]. These methods are capable of identifying the wet/dry shoreline, but they are not as accurate as newer machine learning and deep learning approaches [11]. Many recent studies combine machine learning or deep learning with satellite imagery [19,20,21,22]. These approaches obtain more accurate shoreline predictions, but their main limitation is the use of satellite imagery itself. Satellite imagery has many benefits, primarily the ability to collect multi-band imagery [23,24,25]. Its disadvantage is that data are only available at a location when a satellite passes over that specific area and there is no cloud cover at that time. Moreover, the area of interest may not be covered by any open-source satellite.
A great option to overcome these challenges and obtain fast and timely access to the study area is to use UAS for data collection [26,27,28]. UAS provide fast and timely data acquisition of the wet/dry shoreline conditions. Additionally, given their proximity to the ground, UAS allow the collection of very high-resolution imagery focused on the region of interest [29]. Imagery collected at lower altitudes has a finer ground sample distance (GSD), and the positional error of features predicted within the image is significantly lower than for satellite imagery, on the order of a few centimeters for UAS imagery. This ensures a more accurate wet/dry shoreline location, which is essential if the imagery is used to track the shoreline's evolution over time and to calibrate predictive models. As with satellite and aerial imagery, all UAS data can be georeferenced [30,31,32], providing precise coordinates for the shoreline, including its vertical position referenced to a terrestrial datum.
Another challenge in detecting the wet/dry shoreline is consistent labeling. When using raw UAS imagery, it is difficult to label the wet/dry shoreline consistently since only a small portion of the beach is visible in each image. We found that if the images were labeled first and georeferenced afterward, we often obtained inconsistent labels and a discontinuous wet/dry shoreline. To solve this problem, we used orthomosaic data, which ensures consistent labeling of the wet/dry shoreline along the beach and thus improves the deep learning model performance [33,34,35]. This is discussed in greater detail in the data processing and data preparation section (Section 2.2).
Detecting the wet/dry shoreline is an edge detection problem. The HED (Holistically-Nested Edge Detection) [36], VGG (Visual Geometry Group) [37], DexiNed [38], and Deep Hybrid Net [39] architectures have shown excellent results in similar applications. This paper proposes using a modified HED architecture to create a model that generalizes across locations. The method applies CLAHE (Contrast Limited Adaptive Histogram Equalization) as a pre-processing step to adjust and normalize the images' contrast, allowing the detection of the wet/dry shoreline under a wide range of lighting conditions and beach characteristics. Based on our results with wet/dry shoreline detection from nine UAS flights at different locations in Texas and Florida, we believe this model can be generalized to a wide variety of beach conditions.
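To illustrate the contrast-limiting idea behind CLAHE, the sketch below implements a simplified, tile-based variant in NumPy: each tile's histogram is clipped, the excess is redistributed, and the tile is equalized via its cumulative distribution. The bilinear blending between tiles of full CLAHE is omitted, and the tile grid and clip limit are illustrative values, not the settings used in this work.

```python
import numpy as np

def clahe_simplified(gray, tiles=(8, 8), clip_limit=0.01):
    """Simplified tile-based adaptive histogram equalization for a uint8
    grayscale image (no inter-tile blending, unlike full CLAHE)."""
    h, w = gray.shape
    out = np.empty_like(gray)
    th, tw = h // tiles[0], w // tiles[1]
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            r0, c0 = i * th, j * tw
            r1 = h if i == tiles[0] - 1 else r0 + th
            c1 = w if j == tiles[1] - 1 else c0 + tw
            tile = gray[r0:r1, c0:c1]
            hist = np.bincount(tile.ravel(), minlength=256).astype(float)
            # Clip the histogram and redistribute the excess (contrast limiting).
            limit = max(clip_limit * tile.size, 1.0)
            excess = np.maximum(hist - limit, 0.0).sum()
            hist = np.minimum(hist, limit) + excess / 256.0
            # Map intensities through the normalized cumulative distribution.
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) / (cdf[-1] - cdf.min() + 1e-9) * 255.0
            out[r0:r1, c0:c1] = cdf[tile].astype(np.uint8)
    return out
```

In practice, a library implementation (e.g., OpenCV's `createCLAHE`) applied to the luminance channel of the RGB imagery would be the typical choice.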
This paper proposes two major contributions: (1) creating a generalizable wet/dry shoreline detection model that performs well at locations that were not part of the model training, and (2) computing the elevation of the predicted wet/dry shoreline. Creating a generalized model is only possible because of the use of CLAHE (Contrast Limited Adaptive Histogram Equalization) as a pre-processing step, the use of high-resolution UAS imagery from multiple locations along the Texas and Florida coasts, and training with imagery covering a large variety of atmospheric conditions. Computing the wet/dry shoreline elevations is only possible because we use orthomosaic and DSM (Digital Surface Model) data created from the raw UAS imagery of each flight. This allowed us to train the neural network on high-resolution georeferenced data and, once the wet/dry shoreline was predicted, to georeference the predictions back using ArcGIS Pro. Combining the AI-predicted imagery with the DSM then yields the elevation of the wet/dry shoreline. This paper is the first to propose this additional step of computing the elevation of the wet/dry shoreline. This method, combined with the collection of imagery over a broad range of metocean conditions, allows the creation of a time series of wet/dry shoreline elevations and enables the analysis and prediction of total water level and inundation for a location, in addition to average water levels.
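As a minimal sketch of the elevation-extraction idea (not the ArcGIS Pro workflow used here), the hypothetical helper below samples a co-registered DSM array at the pixels of a predicted shoreline mask; the nodata convention is an assumption, standing in for the null values assigned where water was removed.

```python
import numpy as np

def shoreline_elevations(edge_mask, dsm, nodata=-9999.0):
    """Return (rows, cols, elevations) for every predicted shoreline pixel,
    skipping DSM cells flagged as nodata (e.g., removed water)."""
    rows, cols = np.nonzero(edge_mask)   # pixel coordinates of the shoreline
    z = dsm[rows, cols]                  # elevation at each shoreline pixel
    valid = z != nodata
    return rows[valid], cols[valid], z[valid]
```

A time series of shoreline elevation per flight could then be built, for example, from summary statistics of the returned elevations.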
2. Study Area and Dataset
The research goal of this article is to create a generalizable wet/dry shoreline detection model that performs well under various atmospheric and geological conditions at different locations. For this reason, it was necessary to have multiple study areas. We gathered UAS imagery from four study areas: Packery Channel, Mustang Island SP (State Park), Fish Pass, and Little Saint George Island. The first three locations are in Texas, while the last is a State Reserve in Florida.
Figure 1 shows the locations of the different study areas along the Gulf of Mexico. Nine UAS flights collected data at the study areas in 2017, 2018, 2019, and 2020. Even when multiple flights took place in the same area, they covered different regions and study area lengths. UAS data were collected under different lighting, wind, and weather conditions and over different beach geomorphologies and sediment types across the study areas. The data used in this study were collected and processed by the Measurement Analytics Lab (MANTIS) at the Conrad Blucher Institute for Surveying and Science (CBI). The Texas locations are on the northernmost part of North Padre Island and at two sites along Mustang Island. The sand composition of all three sites is very similar, consisting of well-sorted fine sediments with uniform properties along the 26 km length of Mustang Island [40] and continuing onto the northernmost part of North Padre Island. The sediments are 86.9% quartz, 9.4% feldspar, and 3.7% rock fragments [41]. All Florida locations are along the 15 km of Gulf-facing beach of Little Saint George Island, composed of medium-fine sediments of over 99% quartz sand [42]. These compositional differences give the Florida sands a lighter color than the Texas sands, as can be seen in Figure 2 and Figure 3. Another difference between the Florida and Texas study sites is that cars are allowed to drive on the Texas beaches, resulting in tire marks, while driving is not allowed at the Florida study site. Tire marks make detecting the wet/dry shoreline more challenging since they create edges and lines on the sand. Furthermore, access to Little Saint George Island is very limited (by boat only), while the Texas beaches are continuously visited, resulting in people and other related objects appearing in the Texas images but not in the Florida images. At all study sites, the morphology of the beaches is modified by events such as high-water events, high wave heights with long wave periods, and high winds, all potentially contributing to a changing wet/dry shoreline. When these forcings subside, the water does not reach as far, creating a new wet/dry shoreline closer to the water. This challenges the detection of the wet/dry shoreline since multiple shorelines may be visible within the same image. In Figure 2, it can be observed that half of the sand area of the beach contains evidence of past wet/dry shorelines. Using images that differ significantly from each other, in location and in recent metocean conditions, enables the training of a more general model.
2.1. Data Collection
UAS was the method selected to collect imagery data because of its flexibility in choosing the collection date, time, and location. Compared to satellite imagery, this flexibility was important since we were looking for a variety of oceanographic conditions, allowing us to evaluate the model's generalization with a greater diversity of data. Additionally, UAS can collect georeferenced high-resolution RGB imagery at a relatively small scale. Accurately georeferenced imagery allows comparing the labeled and predicted wet/dry shorelines with a metric directly relevant to the study of beach dynamics and inundation.
Figure 4 shows a sample of the beach imagery diversity from Texas and Florida.
Analyzing the images in
Figure 4, one will notice significant differences in the imagery saturation, luminosity, and overall beach conditions. In
Figure 4d, car tire marks can be observed. This adds complexity to training the deep learning edge detection model since the tire marks add more edges to the images that do not represent the wet/dry shoreline and must be ignored by the detection algorithm.
By looking in detail at
Figure 4e, two candidate wet/dry shorelines can be identified: one at the bottom of the image and one at the top. For this work, the correct wet/dry shoreline is the one close to the bottom because it is the most recent. Similarly, in
Figure 4a, the image has two edges to distinguish between. Another challenge in the dataset is that sometimes both a wet/dry shoreline and a waterline are present, as shown in
Figure 4b,e. The waterline could also confuse the neural network, as it has a clear edge.
Figure 4c is characterized by very different light conditions than the rest of the dataset. Additionally, it contains plant debris that has washed ashore on the sand. This all adds to the complexity of identifying the shoreline.
Figure 4d,e show another common challenge, where there are objects on the beach close to the shoreline. In the case of
Figure 4d, a surveying target can be observed. In the case of
Figure 4e, there is a car close to the shoreline. Thus, it is necessary to develop a robust model to predict the wet/dry shoreline that can ignore objects on the beach.
2.1.1. UAS and Camera Used
The imagery used in this research was collected using five UAS with five RGB single-frame cameras to further increase the model’s generalization.
Table 1 describes the UAS and cameras used. The table shows that this research combines quadcopters and fixed-wing UAS. We used the diversity of UAS and cameras to increase the model’s generalization.
Table 2 describes the properties of the cameras used. The table shows a combination of fixed-focus and autofocus lenses with focal lengths ranging from 8.8 to 35 mm. The camera resolutions range from 20 to 42 MP (megapixels), and the maximum image dimensions range from 4000 × 4000 to 7952 × 5304 pixels. The imagery used as input for the neural network was therefore taken by significantly different cameras with different properties. This will allow determining whether the proposed model works well independently of the cameras' photogrammetric properties.
2.1.2. Atmospheric and Oceanic Conditions during Data Collection
Figure 5, and
Table 3 document the range of metocean conditions for our UAS flights. This study used imagery from nine UAS flights, but the figures only include data for six of the flights because there was no station collecting atmospheric condition data near Little Saint George Island in Florida.
Figure 5 is a wind rose representing the wind direction and speed for the UAS flights in Texas. The wind direction is indicated by the vector direction, while the wind speed is the length of the vector. The Texas beaches are along the Coastal Bend, and their orientations with respect to north/south are similar and displayed in the background image behind the wind rose graphic. Both wind direction and speed impact the reach of the water on a beach. The wind speeds ranged between 6 m/s and 8 m/s, with wind directions onshore for all but one case (flight 5).
The second forcing that is important to the wet/dry shoreline's location is the water level, as recorded by a nearby tide gauge. NOAA (National Oceanic and Atmospheric Administration) computes each water level value by collecting 181 continuous measurements at a 1 Hz frequency and then processing and averaging them [44]. The higher the average water level during a flight, the further up the beach the wet/dry shoreline will reach.
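The averaging step can be sketched as follows; the 181-sample, 1 Hz window comes from the NOAA procedure cited above, while the function name and the omission of NOAA's screening of outlier samples are simplifications for illustration.

```python
import numpy as np

def six_minute_water_level(samples_1hz):
    """Average 181 one-second (1 Hz) water-level samples centered on the
    reporting timestamp into a single water-level value (meters)."""
    samples = np.asarray(samples_1hz, dtype=float)
    if samples.size != 181:
        raise ValueError("expected 181 one-second samples")
    return float(samples.mean())
```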
Table 3 reports the average water level relative to the NAVD88 datum during each UAS flight. During our flights, the water level varied from 0.191 m for flight 6 to 0.484 m for flight 1.
The third important condition is the significant wave height, because the vertical height reached by the runup is similar to the significant breaking wave height [44]. During events with larger significant wave heights, the wet/dry shoreline will be located further up the beach.
Table 3 also reports the significant wave height for the different UAS flights. Note that for flight 6, significant wave height information was unavailable for the location and time of the UAS flight. Moreover, there was no relevant wave information for the two Florida locations, as the closest National Data Buoy Center buoy was too far from the study beaches. As shown in
Table 3, the significant wave height was smaller for flights 3, 4, and 5, measuring 0.23 m for flight 5 and 0.79 m for flights 3 and 4, compared to 1.06 m for flight 6 and 1.37 m for flight 2.
Table 4 summarizes the specific information for the different UAS flights, the flight’s location, date, and flight time.
Table 5 summarizes the study area length, number of images for each flight, flight height, and GSD for all flights. The study area length varies from 118 m to 17.5 km, and the flight heights range from 10 m to 120 m. The GSD depends on the flight height and the camera: the smallest GSD of 0.25 cm is associated with the lowest-altitude flight, while the largest GSD of 1.91 cm is for flight 6, flown at an altitude of 85 m.
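The dependence of GSD on flight height and camera can be made explicit with the standard pinhole-camera relation; the parameters in the example below are hypothetical, not those of the cameras in Table 2.

```python
# GSD = (altitude * sensor width) / (focal length * image width),
# converted here from meters to cm per pixel.
def ground_sample_distance(altitude_m, focal_length_mm,
                           sensor_width_mm, image_width_px):
    """Return the ground sample distance in cm per pixel."""
    gsd_m = (altitude_m * sensor_width_mm / 1000.0) / (
        focal_length_mm / 1000.0 * image_width_px)
    return gsd_m * 100.0

# e.g., a hypothetical 8.8 mm lens on a 13.2 mm-wide, 5472-pixel-wide
# sensor flown at 100 m altitude:
gsd = ground_sample_distance(100.0, 8.8, 13.2, 5472)
```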
2.2. Data Processing and Data Preparation
Each UAS flight captured a set of overlapping images, which were processed using structure-from-motion (SfM) photogrammetry in Agisoft Metashape to create point clouds, elevation models, and orthomosaics. Most of the water was removed before processing to obtain a higher degree of accuracy for the SfM products; the areas where water was removed were assigned null values.
All the UAS flights had GCPs (Ground Control Points); their number depended on the length of the study area (Table 5). The GCPs were surveyed using RTK (Real-Time Kinematic) GNSS (Global Navigation Satellite Systems) tied into the Texas DOT and Florida DOT Real-Time Networks (RTN), respectively. The mean RMSE in Table 6 represents the overall RMSE (the average of the three per-axis RMSEs) based on the differences between the surveyed X, Y, and Z coordinates of the GCPs and the corresponding coordinates reconstructed by the SfM photogrammetry solution for each respective GCP.
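The mean RMSE described above can be sketched as follows, assuming the surveyed and SfM-reconstructed GCP coordinates are given as N × 3 arrays:

```python
import numpy as np

def mean_gcp_rmse(surveyed_xyz, reconstructed_xyz):
    """Average of the per-axis (X, Y, Z) RMSEs between surveyed and
    SfM-reconstructed GCP coordinates."""
    diff = np.asarray(surveyed_xyz, float) - np.asarray(reconstructed_xyz, float)
    per_axis_rmse = np.sqrt((diff ** 2).mean(axis=0))  # [RMSE_X, RMSE_Y, RMSE_Z]
    return float(per_axis_rmse.mean())
```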
For this research, using orthomosaic data is important since this provides consistent georeferenced and accurately rectified imagery. Manually labeling images individually would create inconsistencies in the location of the wet/dry shoreline and difficulties in the learning process for the deep learning model, so it is better to combine the images into an orthomosaic. ArcGIS Pro was then used to label the orthomosaic images from a single flight all at once.
The initial step was to load the orthomosaic raster and manually draw a vector along the wet/dry shoreline. The second step was to widen the wet/dry shoreline vector to 30 cm using a buffer. The third step was to create a new raster for the labeled imagery using the extract-by-mask tool, with the original orthomosaic as the input raster and the buffer as the feature mask; this new raster contained the same data as the original imagery, except that the wet/dry shoreline was now marked in black. The fourth step was to create a polygon feature class selecting the wet/dry shoreline area of the orthomosaic; the polygon marked where to split the orthomosaic into unscaled, non-overlapping 512 × 256 pixel sub-images, the selected input dimensions for the neural network. The fifth step was to use the split raster geoprocessing tool to split the original orthomosaic and the labeled raster identically: each was divided using the dimensions of the previously created polygon, and the resulting sub-images were exported from ArcGIS Pro. Since the orthomosaics contained some null values, some of the output images contained null values as well, represented by black pixels. Images with null values were removed from the dataset because the transition to black pixels creates an artificial edge, which would introduce unphysical difficulties when training an edge detection algorithm. For most flights, we were able to output datasets without null-value images, and all images without null values were used in the study. Finally, the sixth step consisted of writing and running a Python script to rotate the imagery to a consistent orientation. At this point, the wet/dry shoreline label was represented in black, while the rest of the labeled image was white.
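The null-value filtering and rotation steps above can be sketched as follows; the all-black null-pixel test and the landscape-orientation rule are illustrative assumptions, not the exact logic of the script used in this work.

```python
import numpy as np

def filter_and_orient(tiles, null_value=0):
    """Drop split tiles containing null (black) pixels inherited from the
    orthomosaic, then rotate each kept tile to a consistent orientation."""
    kept = []
    for tile in tiles:  # each tile: (H, W, 3) uint8 array
        # A fully black pixel marks a null cell from the masked-out water;
        # its transition would create an artificial edge, so drop the tile.
        if np.all(tile == null_value, axis=-1).any():
            continue
        if tile.shape[0] > tile.shape[1]:
            tile = np.rot90(tile)  # enforce a consistent wide orientation
        kept.append(tile)
    return kept
```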
5. Conclusions
This research shows how to create a model that predicts the wet/dry shoreline and computes its elevation using UAS imagery. The model generalizes to several locations in Texas and Florida. To test its generalization performance, nine UAS flights were performed at four locations along the Gulf of Mexico in Texas and Florida over a period of four years. The flights were performed under various conditions on beaches characterized by different morphologies. Despite the diversity of these conditions, the proposed model generalized and performed well at all locations based on the accuracy of the computed wet/dry shoreline elevations. To further evaluate generalization, the model was successively trained on eight of the flight datasets while being tested on the ninth (unseen) one. Considering all combinations, the mean absolute elevation difference between the labeled and AI-predicted wet/dry shorelines was 2.1 cm, with a mean standard deviation of 2.2 cm. These results show the model's success in computing the wet/dry shoreline elevation at unseen locations. The next step for this research, and its ultimate goal, is to create an accurate time series of total water levels to complement tide gauge measurements and help better predict, prepare for, and manage coastal inundation events.
Future work on this project will include more locations, increasing beach diversity and further improving the model's generalization performance. Additionally, the proposed method will be tested on satellite and stationary camera data. Although stationary cameras will not contribute to the variety of locations, they allow measuring wet/dry shoreline elevations over a broad range of metocean conditions. These additions would increase the model's robustness. Further, we would like to explore which factors affect the sensitivity of the horizontal error in shoreline delineation.