Article

Bridging Disciplines with Photogrammetry: A Coastal Exploration Approach for 3D Mapping and Underwater Positioning

by
Ali Alakbar Karaki
*,
Ilaria Ferrando
,
Bianca Federici
and
Domenico Sguerso
Geomatics Laboratory, Department of Civil, Chemical and Environmental Engineering (DICCA), University of Genoa, 16145 Genoa, Italy
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(1), 73; https://doi.org/10.3390/rs17010073
Submission received: 13 October 2024 / Revised: 14 December 2024 / Accepted: 24 December 2024 / Published: 28 December 2024

Abstract:
Conventional methodologies often struggle in accurately positioning underwater habitats and elucidating the complex interactions between terrestrial and aquatic environments. This study proposes an innovative methodology to bridge the gap between these domains, enabling integrated 3D mapping and underwater positioning. The method integrates UAV (Uncrewed Aerial Vehicles) photogrammetry for terrestrial areas with underwater photogrammetry performed by a snorkeler. The innovative aspect of the proposed approach relies on detecting the snorkeler positions on orthorectified images as an alternative to the use of GNSS (Global Navigation Satellite System) positioning, thanks to an image processing tool. Underwater camera positions are estimated through precise time synchronization with the UAV frames, producing a georeferenced 3D model that seamlessly joins terrestrial and submerged landscapes. This facilitates the understanding of the spatial context of objects on the seabed and presents a cost-effective and comprehensive tool for 3D coastal mapping, useful for coastal management to support coastal resilience.

1. Introduction

The management of coastal areas poses significant challenges for several disciplines [1,2,3,4], such as socio-ecology, morphodynamics, infrastructure management, and environmental studies, due to the dynamic interaction between the land and sea environments [1,5,6,7]. This complexity is further intensified by anthropogenic activities and climate change impacts, e.g., storms, floods, rising sea level, and coastal erosion [8,9,10]. Traditional surveying techniques and methodologies often struggle to capture the intricate relationships between terrestrial and submerged elements [6,11], requiring interdisciplinary collaboration to comprehensively understand and address the coastal environment and to assess its resilience [4,12,13,14,15,16,17].
In the scientific literature, the concept of coastal resilience refers both to a system's response to changes and to the goal of enhancing resilience through managed decision-making [8,18,19,20]. However, since coastal resilience involves several disciplines and types of expertise (e.g., environmental, ecological, physical, social [8,21,22]), it demands an interdisciplinary approach [1,23,24,25,26] to ensure a comprehensive understanding of this environment in all its dimensions [27,28,29,30]. A key element linking the various disciplines is the collection and analysis of reliable data [23,26,31,32,33,34,35,36], with particular reference to geospatial and hydro-spatial data [37], to generate a 3D model or a Digital Twin (DT) of coastal areas. Thus, the need arises to seamlessly integrate datasets from terrestrial and marine environments into a common reference frame to effectively represent the coastal environment [14,38,39,40,41,42] and to provide a better understanding of the interaction between human behavior and environmental processes [41,42,43,44,45].
This raises the question of how to build a methodology that integrates the wide range of existing survey techniques expressing both topographic and bathymetric attributes [11,35,46,47,48,49,50,51]. Data collected from emerged and submerged environments rely on distinct equipment and techniques [2,24,25,35,51,52,53], each with its own accuracy and spatial resolution [32,46,54,55,56,57,58,59], as well as its own type of output data. For instance, an accurate bathymetric survey can be achieved using echosounders, resulting in precise bathymetric mapping of the seabed [10,33,53], but such data alone, although essential, may not be enough to provide a comprehensive assessment of ecological habitats [26,32,56,60,61]. In this context, photogrammetry represents a promising interdisciplinary solution [11,35,62,63,64,65]. Photogrammetry encompasses terrestrial and underwater domains, each with unique characteristics and applications [23,47,66]. Terrestrial photogrammetry, often facilitated by Uncrewed Aerial Vehicles (UAVs), offers high-resolution data for mapping the land surface [31,34,56,58], while underwater photogrammetry relies on submerged cameras to explore the hidden depths of the aquatic environment [6,67]. The integration of these two domains presents a unique opportunity to generate a unified geospatial and hydro-spatial dataset for comprehensive analysis of coastal areas [37].
In recent years, photogrammetry has advanced significantly thanks to the development of techniques such as SLAM (Simultaneous Localization and Mapping) and SfM (Structure from Motion) [68,69,70,71,72]. However, the most critical aspect of underwater photogrammetry is the accurate positioning of the underwater model. The main technique involves scaling the model together with camera calibration to ensure accurate measurements [3,47,56,60,73,74,75,76,77,78,79,80], thereby facilitating the monitoring of temporal changes [26,32,34,35,54,57,59,61]. Despite the widely recognized value of this scaling approach, a major challenge persists: the absence of georeferencing in the collected underwater data. Indeed, such data are often expressed in a separate local system because of the high costs of underwater positioning systems. A commonly adopted approach for underwater positioning is the use of underwater targets measured by a GNSS (Global Navigation Satellite System) receiver extending from the seabed to above the water surface [3,32,52,81]. The applicability of this approach strongly depends on the water depth and sea current conditions. The lack of georeferencing causes a spatial disconnection between terrestrial and aquatic environments [80].
To address this challenge, we present a methodology to seamlessly integrate photogrammetry in emerged and submerged areas, bridging the gap between these domains. This is achieved thanks to a collaborative data acquisition between a UAV performing the photogrammetric survey of the emerged area and a snorkeler equipped with an underwater camera moving on the water surface. The core focus is on georeferencing underwater features by accurately determining the snorkeler's camera position without relying on GNSS. This is achieved by processing UAV-captured images into orthorectified images, which serve as a spatial reference to compute the snorkeler's camera position. Consequently, the underwater camera is georeferenced within the same spatial frame as the terrestrial data. This innovative integration of UAV and underwater photogrammetry facilitates a seamless merging of land and underwater photogrammetric data into a unified and georeferenced 3D model. A georeferenced virtual model of the emerged and submerged areas is the result of the method, with particular attention to accurately representing underwater marine habitats in shallow water areas. The method highlights the contribution of photogrammetry to interdisciplinary research, propels the boundaries of geospatial analysis, and unlocks new possibilities for sustainable coastal management and resource evaluation.

2. Materials and Methods

This section outlines the proposed methodology for integrating land and underwater photogrammetry, the employed instruments and equipment for data acquisition, and the study area where the first experimentation was carried out.

2.1. Method

Our innovative methodology seamlessly integrates emerged and underwater photogrammetry into a unified workflow, starting from the definition of a common reference system. Terrestrial and aerial photogrammetric surveys can be easily georeferenced by GNSS measurements performed by the UAV itself or via GCPs (Ground Control Points). Underwater environments, however, present unique challenges for georeferencing, due to technical limitations or the high costs of underwater positioning systems. To overcome these challenges, our proposed methodology was developed with multiple objectives. Its primary goal is to merge the underwater and land models into a unified representation with a cost-effective solution. Additionally, this allows us to evaluate the feasibility of localizing dynamic control points from images instead of GNSS, offering a viable alternative where GNSS has limitations, for instance due to electromagnetic interference.
The methodology relies on defining a fixed reference frame for both environments, where sea level serves as the zero level for altitude and for estimating the underwater camera position during the UAV survey within the external reference frame. As introduced before, the external reference frame may be realized by the UAV GNSS positioning and/or by the GCP positions. In this approach, a snorkeler equipped with a fixed underwater camera, mounted on the chest and oriented in the nadiral direction, conducts the underwater photogrammetric survey. An easily identifiable target, aligned with the underwater camera, is placed on the snorkeler's back so as to be clearly visible from the UAV. Thus, the snorkeler's position during the underwater photogrammetry survey, represented by the target center, can be determined with respect to the external reference frame. This requires consistent intersection between the snorkeler and UAV paths, ensuring that the snorkeler is detected in multiple images during the UAV flight. Consequently, the snorkeler serves as a dynamic control point with respect to the UAV, while the UAV acquires the photogrammetric data and detects the snorkeler in multiple locations, so as to connect emerged and submerged data (Figure 1).
The UAV captured aerial images in parallel with the snorkeler, who simultaneously acquired underwater videos. Both instruments were synchronized with a known starting time. At specific moments, the UAV captured the snorkeler while the snorkeler recorded underwater features. By matching the timestamps, accurate to a fraction of a second, it was possible to associate each UAV frame with its corresponding underwater frame based on timing.
Let tUAV represent the timestamp when the UAV captures the snorkeler’s position and tUW denote the timestamp when the snorkeler’s camera records an underwater frame. The underwater video is captured at a known frame rate, FPSUW, measured in frames per second. The corresponding underwater frame at time tUAV is calculated by Equation (1):
Frame_UW + 1 = (t_UAV − t_start) · FPS_UW
where t_start is the start time of the underwater video. For example, if the UAV captures the snorkeler at 11:02:00 AM and the underwater video begins at 11:00:00 AM with a frame extraction rate of 2 frames per second, then 240 frames have been recorded by the underwater camera (2 min × 2 frames/second = 240 frames) when the snorkeler is captured by the UAV. Thus, the frame captured at 11:02:00 AM by the UAV corresponds to frame 239 of the underwater video, since frame numbering starts from zero. After identifying the UAV frame corresponding to the underwater frame, the corresponding frame positions can be determined.
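The timestamp-to-frame mapping of Equation (1) can be sketched in a few lines of Python (function and variable names are illustrative, not the authors' actual script):

```python
# Map a UAV capture timestamp to the corresponding underwater frame index,
# following Equation (1): Frame_UW + 1 = (t_UAV - t_start) * FPS_UW.
# Frame numbering starts at zero, hence the final "- 1".
from datetime import datetime

def underwater_frame_index(t_uav: str, t_start: str, fps_uw: float) -> int:
    """Return the zero-based underwater frame index matching a UAV timestamp."""
    fmt = "%H:%M:%S"
    elapsed = (datetime.strptime(t_uav, fmt)
               - datetime.strptime(t_start, fmt)).total_seconds()
    return round(elapsed * fps_uw) - 1

# Worked example from the text: extraction at 2 fps, video starts at 11:00:00,
# UAV captures the snorkeler at 11:02:00 -> 240 frames elapsed -> index 239.
print(underwater_frame_index("11:02:00", "11:00:00", 2.0))  # 239
```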
Once the UAV images were synchronized with the underwater frames, aerial triangulation was performed to determine the precise position and orientation of the UAV during image acquisition. Aerial triangulation involves solving the collinearity equations, which establish a mathematical relationship between the image points, ground points, and the camera position and orientation parameters. This process ensures accurate georeferencing and relative orientation of the captured images with the measured GCPs. The collinearity equations are expressed as follows (Equation (2)):
x = x0 − f · [r11(X − XS) + r12(Y − YS) + r13(Z − ZS)] / [r31(X − XS) + r32(Y − YS) + r33(Z − ZS)]
y = y0 − f · [r21(X − XS) + r22(Y − YS) + r23(Z − ZS)] / [r31(X − XS) + r32(Y − YS) + r33(Z − ZS)]
where x and y are the image coordinates (in pixels); x0 and y0 are the principal point coordinates of the camera; f is the focal length; X, Y, Z are the ground coordinates of the object point; XS, YS, ZS are the camera projection center coordinates in the external reference frame; and rij are the elements of a 3 × 3 rotation matrix (computed from roll, pitch, and yaw angles). Using these equations, each image orientation was determined in the reference frame established by the GCPs. Aerial triangulation was performed using a Bundle Block Adjustment (BBA) algorithm, which minimizes errors by simultaneously optimizing the camera parameters, GCP locations, and image tie points. Tie points are points seen in three or more images; they introduce redundancy because the same unknown ground coordinates appear in the collinearity equations of every image that observes them. Naturally, tie points must be static points on land, seen at different times in different images. This is why the UAV flight path is strategically planned to ensure both adequate coverage of the area and sufficient overlap of the images for precise 3D reconstruction. Each image must include a balanced composition of land and water, facilitating seamless computation of the relative positions and orientations of the images themselves.
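As an illustration, the collinearity projection can be written compactly by rotating the ground-to-camera vector into the camera frame. This is a minimal sketch with illustrative names and sign conventions; the actual BBA optimizes these equations over all images and tie points simultaneously:

```python
import numpy as np

def project_point(ground_pt, cam_center, R, f, x0, y0):
    """Project a ground point into image coordinates via the collinearity equations.

    ground_pt, cam_center: (X, Y, Z); R: 3x3 rotation matrix (from roll,
    pitch, yaw); f: focal length; (x0, y0): principal point coordinates.
    """
    # Vector from projection center to ground point, rotated into camera axes.
    d = R @ (np.asarray(ground_pt, dtype=float) - np.asarray(cam_center, dtype=float))
    x = x0 - f * d[0] / d[2]
    y = y0 - f * d[1] / d[2]
    return x, y

# Sanity check with an identity rotation: a point directly below the camera
# projects onto the principal point.
R = np.eye(3)
x, y = project_point((100.0, 200.0, 0.0), (100.0, 200.0, 80.0), R, 24.0, 0.0, 0.0)
```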
After the UAV and snorkeler frames were collected, a standard photogrammetric processing workflow for UAV-derived data was performed to generate a 3D model and a DSM (Digital Surface Model), both georeferenced using the measured GCPs.
Since one of the primary objectives is to generate an orthophoto covering the entire study area, including its water surface, and since the orthophoto depends on the DSM, careful attention was devoted to the DSM generation process. As the water surface is considered the zero-level reference in this study, this assumption needed to be reflected in the DSM. The original DSM over the water area showed distortions, resulting in a non-flat surface, which conflicted with our assumption of a flat water level. To address this, a Python script was employed to mask the area covered by water and assign a uniform height value of zero to the surface. Once a reliable DSM was generated, a comprehensive orthophoto was created. From the orthophoto, individual orthorectified images were derived for each frame involved in the data processing. These orthorectified images were corrected for distortions based on the DSM and georeferenced to the primary model orthophoto, ensuring accurate integration into the photogrammetric dataset.
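The water-flattening step can be sketched as follows. In practice the water mask would be rasterized from a digitized shoreline polygon (e.g., with GDAL or rasterio); this minimal NumPy sketch, with illustrative names and toy values, shows only the assignment of the zero level:

```python
import numpy as np

def flatten_water(dsm: np.ndarray, water_mask: np.ndarray) -> np.ndarray:
    """Return a DSM copy where all water cells are forced to the zero level."""
    out = dsm.copy()
    out[water_mask] = 0.0
    return out

# Toy 3x3 DSM: the right column is water, distorted by small non-zero heights.
dsm = np.array([[5.0, 2.0, 0.3],
                [4.0, 1.5, -0.2],
                [3.0, 1.0, 0.1]])
water = np.zeros_like(dsm, dtype=bool)
water[:, 2] = True  # water polygon rasterized to a boolean mask
flat = flatten_water(dsm, water)
print(flat[:, 2])  # water cells are now exactly 0, land cells untouched
```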
Since some of the orthorectified images captured the snorkeler, we were able to estimate the X-Y coordinates of the target on his back, and hence of the camera center, from the orthorectified images. By referencing the time, we can identify the specific underwater frame at that instant and determine the corresponding X-Y coordinates for that frame. A Python script was developed to sequentially display each orthophoto, allowing the operator to navigate and click on the target center. This triggers the automatic storage of the X-Y coordinates of the clicked point, labeled with the frame name, in a text file. The target coordinates are assumed to represent the camera center, so that they reflect the underwater frame position. In this way, we were able to establish a connection between the underwater frames, their corresponding orthorectified images, and their georeferenced coordinates. The underwater photogrammetric data processing relied on the computed camera coordinates: for each frame, the camera position (easting and northing coordinates) was used in the data processing. The error estimation of the positioning methodology was based on the Bundle Block Adjustment (BBA) of the underwater model, relying on the known coordinates of the frames. An essential step for underwater processing was the camera calibration, performed using a chessboard pattern to estimate the camera parameters. The data processing then leads to a georeferenced underwater point cloud and a 3D model (Figure 2). The accuracy of the obtained 3D model is evaluated only in the X-Y plane (horizontal accuracy); verification of depth (Z-axis) is limited at this stage.
Data from both the UAV and the underwater camera were processed using Agisoft Metashape [82], a powerful photogrammetry software package that facilitated the generation of accurate and georeferenced 3D models. The accuracy of this model is evaluated via the least-squares adjustment.

2.2. Equipment

The land survey was conducted using the DJI Mini 2 UAV, a compact and agile drone weighing under 250 g, which makes it compliant with regulations in many areas. It is equipped with a high-resolution camera featuring a 12 MP sensor, an f/2.8 aperture, a focal length equivalent to 24 mm, and a sensor size of 6.17 × 4.65 mm. The frames were captured automatically, following a pre-drawn path for the UAV. The chosen flight height was 80 m, with a front overlap of 80% and a side overlap of 70%.
The STONEX S850+ GNSS receiver, able to receive signals from the GPS (Global Positioning System), GLONASS, and Galileo satellite constellations, was used to measure natural reference points. Since the study area includes several artificial structures, using natural points as GCPs rather than placed targets provides greater flexibility, and these points can possibly serve as benchmarks for future surveys. The STONEX S850+ GNSS device was used in NRTK (Network Real Time Kinematic) mode, connected to the Regione Liguria GNSS positioning service [83] to obtain the differential corrections. This well-established positioning technique allowed us to achieve a spatial accuracy within a few cm. A total of 18 GCPs were measured, as shown in Figure 3.
For the underwater survey, the GoPro Hero 10 Black, a rugged action camera designed for underwater exploration, was used. The camera's capabilities include 4K video at up to 120 frames per second (fps), a 24.4 mm (35 mm-equivalent) focal length, and a sensor size of 6.40 × 5.60 mm, allowing for the capture of detailed imagery in aquatic environments. The camera's robustness and advanced features made it an ideal and cost-effective tool for our underwater photogrammetry process. Videos were recorded during the underwater survey with HyperSmooth mode; frames were then extracted to ensure sufficient overlap between consecutive images (which strongly depends on the variation in the snorkeler's speed) and to obtain comprehensive coverage of the underwater environment. The data acquisition started simultaneously for both the UAV and the snorkeler. The UAV, operating along a predefined flight plan, systematically acquired nadiral images. A total of 128 images were collected by the UAV during the mission. The GoPro attached to the snorkeler collected two separate videos.
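As a sketch of the frame-extraction logic (the extraction itself can be performed with tools such as FFmpeg or OpenCV), the following illustrative function selects which native video frames to keep for a chosen extraction rate:

```python
def frames_to_extract(duration_s: float, native_fps: float, target_fps: float):
    """Indices of native video frames to keep for a target extraction rate.

    Keeps the native frame nearest to each target sampling instant, giving
    roughly uniform spacing (and thus overlap) when the snorkeler's speed
    is steady.
    """
    n_out = int(duration_s * target_fps)
    step = native_fps / target_fps  # native frames per extracted frame
    return [round(i * step) for i in range(n_out)]

# A 120 fps GoPro video (2 s shown here) extracted at 2 fps keeps every
# 60th native frame.
print(frames_to_extract(2.0, 120.0, 2.0))  # [0, 60, 120, 180]
```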

2.3. Study Area

To demonstrate the applicability of the proposed methodology and to validate its effectiveness, an application was carried out on a case study. The Silent Bay in Sestri Levante (Liguria, Italy) was selected on the basis of several relevant considerations: (1) the possibility of flying to acquire a photogrammetric survey within legal/social constraints; (2) the water transparency and the seabed depth facilitated optimal underwater acquisition; (3) the presence of notable underwater features, including rocky formations, sandy areas, and sea grass, holding significant ecological value by providing habitat and sustenance for diverse marine species. These factors make the bay an ideal location for a case study applying the interdisciplinary approach (Figure 4).
Given the large dimension of the bay (approximately 400 m in width and 300 m in length), the snorkeler recorded videos in two distinct locations while the UAV collected frames across all the emerged area of the bay, capturing the snorkeler in various locations.

3. Results

The implementation of the proposed methodology yielded noteworthy outcomes, demonstrating its adaptability in effectively integrating underwater and terrestrial data. The most significant result is the realization of a coherent and georeferenced 3D model for both terrestrial and aquatic landscapes.

3.1. Topographic Model

The results obtained from a unified orthophoto are presented in Section 3.1.1. Subsequently, Section 3.1.2 discusses the user-friendly Python 3.11 application designed to compute the coordinates of the snorkeler during the underwater survey.

3.1.1. Orthophoto as Bridge

As outlined in the methodology workflow, the starting point was the generation of an orthophoto for the entire study area, representing the topographic and water surfaces. The orthophoto was produced following the Agisoft Metashape 2.1.4 photogrammetric workflow, starting from the relative orientation of the 128 acquired frames covering an area of 0.151 km2. The coordinates of 18 GCPs, expressed in the ETRF2000-2008.0 reference frame with UTM zone 32N projection (EPSG: 6707), were inserted, achieving an average error of 3 cm. The outputs included a colored point cloud, a 3D model, and a DSM with a ground resolution of 10.4 cm/pixel. The DSM was processed to obtain a flat water surface representing the zero-level reference (Figure 5).
Using the processed DSM, an orthophoto with a ground resolution of 2.6 cm/pixel (Figure 6) was the final product of the photogrammetric processing.
The orthophoto generated from all the UAV-captured frames serves multiple purposes: some frames contribute to building the comprehensive model, while others facilitate the detection of the snorkeler and the estimation of his position. In this context, the unified orthophoto covering the entire study area was segmented into individual orthorectified images, resulting in a corrected and georeferenced image for each UAV frame (Figure 7). The orthorectified images capturing the snorkeler can be used to estimate his position within the same terrestrial reference frame, i.e., ETRF2000-2008.0 with UTM zone 32N projection.

3.1.2. User-Friendly Application

Having obtained several orthorectified images capturing the snorkeler in various locations, a tool was required to extract the snorkeler's coordinates. To address this need, we developed a user-friendly Python application that simplifies the extraction of the target coordinates from the images. The user can sequentially recall the orthorectified images by providing the appropriate file path, zoom in on the snorkeler, and click on the target on his back. The application then automatically extracts the target coordinates and saves them to a separate text file along with the corresponding frame name (Figure 8).
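The georeferencing step behind such a click tool reduces to converting a clicked pixel into world coordinates using the orthorectified image's origin and ground sampling distance. The sketch below assumes a north-up image with a known upper-left corner; names and values are illustrative, not the authors' actual application (in practice the click itself would be captured with an image-display library such as matplotlib):

```python
def pixel_to_world(col, row, origin_e, origin_n, pix_size):
    """Convert a clicked pixel (col, row) in a georeferenced, north-up
    orthorectified image to world coordinates (easting, northing).

    origin_e, origin_n: world coordinates of the upper-left image corner;
    pix_size: ground sampling distance in metres (e.g., 0.026 m/pixel here).
    Northing decreases with row because image rows grow downward; the 0.5
    offset targets the pixel center.
    """
    easting = origin_e + (col + 0.5) * pix_size
    northing = origin_n - (row + 0.5) * pix_size
    return easting, northing

# Example: a click at pixel (1250, 830) of an image whose upper-left corner
# sits at hypothetical UTM coordinates (500000 E, 4900000 N).
e, n = pixel_to_world(1250, 830, 500000.0, 4900000.0, 0.026)
```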
Additionally, a Python script was developed to correlate the orthorectified images with the corresponding underwater frames using time synchronization, as detailed in Section 2.1. The computed coordinates are then assigned to their respective underwater frames, ensuring accurate georeferencing. This process guarantees accurate positioning of the snorkeler, enabling the precise integration of terrestrial and underwater photogrammetric data.

3.2. Underwater Photogrammetric Model

This section presents the results for the underwater model, with particular attention to the outcomes of different camera calibration methods in Section 3.2.1. Furthermore, Section 3.2.2 highlights the results of georeferenced underwater photogrammetry based on the computed camera positions.

3.2.1. Camera Calibration

Camera calibration represents an essential step in underwater photogrammetry. Our experience revealed that using the automatic calibration of Agisoft Metashape 2.1.4 in the underwater context led to significant challenges, particularly the inability to reconstruct a realistic underwater model. The relative orientation of the underwater frames produced a distorted, spiral-like representation of the sea bottom, attributable to inaccurate camera calibration parameters. To overcome this issue, we performed a manual underwater calibration using a chessboard and applied the resulting parameters during the image processing (Figure 9).
This approach effectively mitigates refraction and distortion effects in the images, ensuring the accuracy of the underwater photogrammetric reconstruction and providing reliable results that are essential for accurate modeling and analysis.
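The parameters estimated from a chessboard calibration feed a standard lens distortion model. The following sketch (illustrative, not the authors' code) applies the Brown-Conrady model to normalized image coordinates, showing how radial (k1, k2) and tangential (p1, p2) coefficients act during image correction:

```python
def apply_distortion(x, y, k1, k2, p1, p2):
    """Apply the Brown-Conrady model to normalized image coordinates.

    k1, k2: radial distortion coefficients; p1, p2: tangential coefficients --
    the kind of parameters a chessboard calibration estimates.
    """
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity; a positive k1
# pushes points radially outward (barrel-type displacement).
print(apply_distortion(0.1, 0.2, 0.0, 0.0, 0.0, 0.0))
```

Undoing this distortion (the correction applied to the underwater frames) requires inverting the model numerically, which calibration toolkits such as OpenCV perform internally.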

3.2.2. Underwater Georeferenced Model

As previously mentioned, 41 positions of the snorkeler were computed. However, not all of these positions were usable, as in some cases the snorkeler was passing over deep water areas, making it impossible to orient the underwater images within the model. This highlights a limitation of the methodology, as it depends on depth and water transparency for constructing underwater models. After filtering the data, 21 useful positions were retained, where clear underwater visibility allowed for accurate image orientation. From these 21 positions, we could evaluate the error via BBA (Table 1).
The average error for the x coordinate is 0.068 m, while the y coordinate exhibits a slightly higher value of 0.093 m, resulting in a total mean error of 0.080 m. The standard deviation reveals greater variability in the y direction (0.09 m) compared to the x direction (0.05 m). The error distribution also shows a wider range for the y direction, with a maximum error of 0.34 m, whereas the x axis reaches a maximum error of 0.16 m. The minimum error across both axes remains low, starting at 0.01 m for both x and y, indicating that some measurements are highly accurate. This distribution suggests that, while overall errors remain relatively small, the process might face more difficulty maintaining accuracy along the y axis. The total error is evaluated in the histogram reported in Figure 10.
The histogram of overall error presents a frequency distribution of error values ranging from approximately 0.025 m to 0.200 m. The highest frequency of errors falls within the 0.025 m to 0.050 m range. As the error increases, there is a notable decline in frequency, with moderate frequencies between 0.075 m and 0.100 m, and a further decrease as the error exceeds 0.100 m. This distribution shows that while the majority of data points exhibit small errors, some outliers are present. To further explore the source of this variability, the data were categorized into two groups based on the snorkeler's position in the orthophoto: “M” for positions in the middle area, and “B” for positions near the boundary of the orthorectified image used to estimate the snorkeler's position. Statistical analysis (Table 2) shows that frames in category “M” tend to have lower and less dispersed errors, with a narrower range of values. In contrast, category “B” exhibits both a higher mean error and greater variability, indicating more pronounced inaccuracies and potential outliers. A scatter plot of the x and y errors by category further supports these findings, with category “B” displaying a wider spread of errors compared to the more tightly clustered errors of category “M” (Figure 11).
This suggests that frames categorized as “B” are less accurate, potentially due to frame distortions between the middle and the boundaries of the orthorectified images. Following the error evaluation, the underwater 3D models were constructed from the collected images and georeferenced using the coordinates computed from the orthorectified images. These 3D models captured various features on the seabed, including rocky regions, sandy areas, and the presence of both healthy and dead seagrass. This detailed representation of the underwater environment highlights the method's capability to accurately capture and represent diverse seabed characteristics, providing valuable insights for coastal management and conservation efforts.

3.3. Merged Model

After obtaining the land and underwater models georeferenced in the same reference frame, they were integrated into a unified entity respecting the correct geographical location. This enables the comprehensive analysis and interpretation of the merged dataset. In this context, we could identify the positions of sandy areas, infrastructure, and healthy and dead seagrass (Figure 12).

3.4. Virtual Model

Following our methodology, the integration of land and underwater photogrammetry results in the creation of a georeferenced 3D model. Beyond providing a geometric representation, this model accurately reflects the true environmental features and characteristics of the study area. Furthermore, it enables the development of virtual reality (VR) models using Unreal Engine [84]. In Unreal Engine, we created a virtual environment that accurately represents the real-world geometry, seamlessly integrating both terrestrial and underwater landscapes (Figure 13).
Thus, we transformed static 3D models into dynamic virtual representations, providing an immersive platform for visualizing and exploring the coastal landscape. This advancement significantly improves our ability to effectively analyze and disseminate complex spatial data, supporting applications in coastal management, environmental monitoring, and educational outreach (Figure 14).

4. Discussion

Our methodology represents a significant advancement in integrating land and underwater photogrammetry, offering a comprehensive and unified approach to surveying coastal environments. This innovative technique addresses several challenges in underwater photogrammetry, including camera calibration, georeferencing issues, and the limitations of underwater positioning systems, by combining UAVs with a snorkeler equipped with an underwater camera. The UAV's ability to detect the snorkeler in multiple images using a target system allows us to achieve a positioning accuracy in the order of 8 cm for the snorkeler. This level of accuracy is a promising starting point, considering the challenging nature of positioning a dynamic GCP from orthorectified images. For instance, the dynamic GCP, moving over the water surface, can lead to inaccuracies in the orthophoto reconstruction due to water movement and surface undulations. Additionally, the target attached to the snorkeler is vulnerable to minor movements caused by waves. Despite these challenges, the methodology demonstrates significant potential for improving the integration of terrestrial and underwater photogrammetric data. Furthermore, this approach effectively reconstructs both underwater and emerged environments within a single 3D model, accurately positioning underwater habitats and features in the same reference frame as terrestrial features. This holistic approach provides a comprehensive understanding of the coastal environment, highlighting the spatial relations between underwater and terrestrial features. Moreover, the methodology is cost-effective, as it avoids the need for expensive underwater positioning systems by leveraging existing technologies in innovative ways.
However, despite the advantages, the methodology presents challenges and limitations. The reliance on visual targets for positioning, coupled with the effect of water clarity, lighting conditions and depth, can influence the quality of underwater images. Furthermore, the accuracy of the unified model depends on precise synchronization between the UAV and the snorkeler camera, and on the correct calibration of the underwater camera. Additionally, image distortions correction and accurate georeferencing in the orthophoto stage are crucial elements to achieve appreciable results.
Future research should focus on enhancing the integration of topographic and underwater photogrammetry by incorporating additional technologies. A promising direction is the integration of underwater acoustic sensors, such as Multi-Beam Echo Sounders (MBES), for bathymetric data collection. By combining underwater photogrammetry with MBES data, more detailed and accurate 3D models of underwater environments can be achieved. Finally, replacing the snorkeler with a USV (Unmanned Surface Vehicle) for underwater photogrammetry can be beneficial: by equipping the USV with GNSS technology, the positioning accuracy obtained from orthorectified images could be directly assessed. This comparison would facilitate the validation and potential improvement of the dynamic GCPs' accuracy.

5. Conclusions

The proposed methodology highlights the dynamic convergence of topographic and underwater photogrammetry, showing the interaction between two distinct environments. Employing a UAV equipped with a high-resolution camera and a snorkeler equipped with an underwater camera, a comprehensive georeferenced 3D model is generated, spanning both terrestrial and aquatic landscapes. The challenges encountered in this methodology include: (1) addressing underwater image distortion; (2) synchronizing time-based data between land and underwater cameras; and (3) ensuring the correct relative orientation of underwater images. The success of this methodology depends on the seamless integration of multiple components, including precise UAV flight path planning, innovative underwater data acquisition methods, and meticulous image processing techniques. Despite the complexity of measuring in underwater environments, the achievement of accurate and georeferenced results clearly shows the robustness of the approach. Through the generation of orthorectified images and georeferenced models, this methodology demonstrates its accuracy and underlines its applicability within diverse interdisciplinary contexts. By providing a detailed account of the challenges and solutions encountered, this study contributes to the evolving field of geospatial analysis and modeling. Moreover, the development of streamlined tools and applications for computing the relative position and orientation of images, alongside data extraction, highlights the potential for future advancements in underwater photogrammetry. In this regard, this methodology expands the horizons of spatial analysis by bridging the gap between aerial and submerged perspectives. As technology evolves, this research paves the way for further innovation in geospatial methodologies, enabling comprehensive landscape assessment across both terrestrial and aquatic domains.
The integration of aerial and underwater photogrammetry highlights the multidisciplinary potential of geospatial science and unlocks new avenues of exploration and discovery in both spatial domains.

Author Contributions

Conceptualization, A.A.K. and D.S.; methodology, A.A.K., I.F. and D.S.; data collection, A.A.K.; analysis and interpretation of results, A.A.K., I.F. and D.S.; draft manuscript preparation, A.A.K.; review and editing, A.A.K., B.F., I.F. and D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

We would like to thank Stefano Cunietti for performing the UAV flight and Mattia Scovenna for the underwater photogrammetric survey.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The UAV (in red) and snorkeler (in green) paths and their intersections (in the black circles) over the study area.
Figure 2. Methodology steps: (1) UAV and underwater image acquisition; (2) generation of orthophoto over the study area; (3) snorkeler position extraction; (4) UAV and underwater frames time synchronization; (5) underwater photogrammetry processing; (6) seamless 3D georeferenced model. The arrows indicate the sequence of steps and data flow between aerial (green) and underwater processes (red) and time synchronization (blue).
Figure 3. Distribution of GCPs (1 to 18) over the study area (UAV orthophoto as background).
Figure 4. Study area location: the red dot indicates the study area, Sestri Levante, located on the eastern coast of the Liguria region (Italy).
Figure 5. Original DSM with surface water distortion (a) and modified DSM with flat water surface equivalent to zero (b).
Figure 6. The 3D model and orthophoto.
Figure 7. From left to right: UAV image, orthorectified image, and orthorectified image overlaid on the DSM. The DSM is color-coded by elevation, from blue at the lowest elevations (zero) to red at the highest (as shown in Figure 5).
Figure 8. The interactive Python application for extracting snorkeler coordinates and frames from the orthophotos.
Figure 9. Manual camera calibration with parameters estimated from.
Figure 10. Histogram showing the frequencies of the overall error (expressed in meters) of the x and y coordinates.
Figure 11. Scatter plot showing the x and y errors over two categories: “M” in blue and “B” in green, referring to the middle or boundary snorkeler positions within the orthophoto, respectively.
Figure 12. Merged model (emerged and submerged) with georeferenced underwater features: (a) rocky and sandy patches; (b) seagrass; (c) dead seagrass.
Figure 13. Virtual model of Silence Bay in Sestri Levante (Italy).
Figure 14. Virtual model showing the underwater seagrass.
Table 1. Error analysis over the x and y coordinates and the total error, in meters.

Error [m]            x      y      Total
Mean                 0.068  0.093  0.08
Standard deviation   0.05   0.09   0.05
Minimum              0.01   0.01   0.02
Maximum              0.16   0.34   0.19
Table 2. Error analysis over the categories expressing the snorkeler’s position within the orthophoto.

Error [m]            M      B
Mean                 0.04   0.11
Standard deviation   0.03   0.04
Minimum              0.02   0.05
Maximum              0.1    0.19