Sensors
  • Article
  • Open Access

15 November 2025

An Investigation into the Registration of Unmanned Surface Vehicle (USV)–Unmanned Aerial Vehicle (UAV) and UAV–UAV Point Cloud Models

Department of Soil and Water Conservation, National Chung Hsing University, Taichung 402, Taiwan
Author to whom correspondence should be addressed.
Sensors 2025, 25(22), 6992; https://doi.org/10.3390/s25226992
This article belongs to the Special Issue Remote Sensing and UAV Technologies for Environmental Monitoring

Abstract

This study explores the integration of point cloud data obtained from unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) to address limitations in photogrammetry and to create comprehensive models of aquatic environments. The UAV platform (AUTEL EVO II) employs structure-from-motion (SfM) photogrammetry using optical imagery, while the USV (equipped with a NORBIT iWBMS multibeam sonar system) collects underwater bathymetric data. UAVs commonly face constraints in battery life and image-processing capacity, making it necessary to merge smaller UAV point clouds into larger, more complete models. The USV-derived bathymetric data are integrated with UAV-derived surface data to construct unified terrain models that include both above-water and underwater features. This study evaluates three coordinate transformation (CT) methods—4-parameter, 6-parameter, and 7-parameter—across three study areas in Taiwan to assess their effectiveness in registering USV–UAV and UAV–UAV point clouds. For USV–UAV integration, all CT methods improved alignment accuracy compared with results without CT, achieving decimeter-level precision. For UAV–UAV integrations, the 7-parameter method provided the best accuracy, especially in areas with low terrain roughness such as rooftops and pavements, while improvements were less pronounced in areas with high roughness such as tree canopies. These findings demonstrate that the 7-parameter CT method offers an effective and straightforward approach for accurate point cloud integration from different platforms and sensors.

1. Introduction

With the rapid development of terrestrial laser scanning (TLS), unmanned aerial vehicle (UAV) photogrammetry, and UAV light detection and ranging (LiDAR), point cloud models have become essential tools for describing and analyzing object shapes []. Point cloud models are now widely utilized across various fields, including environmental monitoring, autonomous navigation, archeological site recording, and urban modeling. In many of these applications, integrating point cloud data from different scanners or sensors is essential to produce complete and accurate 3D models. This work is crucial because of the variations in coordinate systems, resolutions, and measurement accuracies. Proper registration or fusion harmonizes these discrepancies by aligning data to a consistent reference frame, ensuring precise spatial relationships and comprehensive 3D models. This process mitigates the errors and inconsistencies arising from diverse sensor characteristics, enhancing the overall quality and usability of the fused point cloud models.
In the past 10 years, research on the registration of point cloud models has commonly included studies on TLS-TLSs and TLS-UAVs. For example, ref. [] presented a practical framework for the integration of UAV-based photogrammetry and TLS in open-pit mine areas. The results show that TLS-derived point clouds can be used as ground control points (GCPs) in mountainous areas or high-risk environments where it is difficult to conduct a global navigation satellite system (GNSS) survey. The framework achieved decimeter-level accuracy for the generated digital surface model (DSM) and digital orthophoto map. Ref. [] proposed an efficient registration method based on a genetic algorithm for the automatic alignment of two terrestrial laser scanning (TLS) point clouds (TLS–TLS) and the alignment between TLS and unmanned aerial vehicle (UAV)–LiDAR point clouds (TLS–UAV LiDAR). The experimental results indicate that the root-mean-square error (RMSE) of the TLS–TLS registration is 3–5 mm, and that of the TLS–UAV LiDAR registration is 2–4 cm. Ref. [] discussed creating virtual environments from 3D point-cloud data suitable for immersive and interactive virtual reality. Both TLS (LiDAR-based) and UAV photogrammetric point clouds were utilized. The UAV point clouds were generated using optical imagery processed through structure-from-motion (SfM) photogrammetry. These datasets were merged using a custom algorithm that identifies data gaps in the TLS dataset and fills them with data from the UAV photogrammetric model. The result demonstrated an RMSE accuracy of approximately 5 cm. Ref. [] designed a method for the global refinement of TLS point clouds on the basis of plane-to-plane correspondences. The experimental results show that the proposed plane-based matching algorithm efficiently finds plane correspondences in partial overlapping scans, providing approximate values for global registration, and indicating that an accuracy better than 8 cm can be achieved. Ref. [] extracted key points with stronger expression, expanded the use of multi-view convolutional neural networks (MVCNNs) in point cloud registration, and adopted a graphics processing unit (GPU) to accelerate the matrix calculation. The experimental results demonstrated that this method significantly improves registration efficiency while maintaining an RMSE accuracy of 3 to 4 cm. Ref. [] used a feature-level point cloud fusion method to process point cloud data from TLS and UAV LiDAR. The results show that the tally can be achieved quickly and accurately via feature-level fusion of the two point cloud datasets. Ref. [] established a high-precision, complete, and realistic bridge model by integrating UAV image data and TLS point cloud data. The integration of UAV image point clouds with TLS point clouds is achieved via the iterative closest point (ICP) algorithm, followed by the creation of a triangulated irregular network (TIN) model and texture mapping via Context Capture 2023 software. The geometric accuracies of the integrated model in the X, Y, and Z directions are 1.2 cm, 0.8 cm, and 0.9 cm, respectively. Other related research on point cloud registration or fusion from TLS-TLS or TLS-UAV over the past ten years has been included [,,,,,,,,,,,,,,].
According to the references, the fusion of TLS–TLS point clouds currently achieves millimeter-level accuracy, while the fusion of TLS–UAV point clouds attains centimeter-level accuracy. Compared with studies on TLS–TLS and UAV–TLS data integration, research specifically addressing the precise registration between UAV-derived point clouds from different flight missions or sensors (e.g., photogrammetry and LiDAR) remains limited. Ref. [] presented a novel procedure for fine registration of UAV-derived point clouds by aligning planar roof features. The experimental results demonstrated an average error of 9 cm from the reference distances.
The integration of UAV-UAV point clouds is critical for photogrammetry. Affordable UAVs used for this purpose are often constrained by limited battery capacity and software limitations in processing large volumes of imagery, which makes it challenging to generate comprehensive point cloud models. This challenge can be addressed by merging smaller, individual UAV point cloud models into larger, more complete models. In addition, while the previous studies on TLS-TLS, UAV-TLS, and UAV-UAV point cloud fusion have yielded many high-accuracy results, most of these studies rely on selected GCPs or check points for evaluation. While the use of GCPs is a common practice for evaluating registration accuracy, relying solely on a few discrete points may not fully capture spatial variations. Therefore, in this study, we also evaluate the UAV–UAV coordinate transformation results by incorporating multiple verification zones—such as rooftops, pavements, tree canopies, and grasslands—to provide a more spatially representative assessment of accuracy. This approach will provide a more robust and credible assessment than relying solely on GCPs. On the other hand, unmanned surface vehicle (USV) technology has been widely used in recent years, including for depth measurements in ports, ponds, and reservoirs. This provides important insights into sediment accumulation and the inspection of underwater structures in these areas [,,,]. If USVs are combined with UAVs, the integration of the USV-UAV point cloud model can be used to construct a comprehensive aquatic environment, including both above-water and underwater terrains, which is crucial for the further understanding of sediment transportation and ecological environments. While some studies have addressed UAV and USV data integration, research specifically dealing with the registration of USV–UAV point cloud data for this purpose is still limited. Due to the limited research on UAV-UAV and USV-UAV point cloud fusion, this study aims to address two key issues. First, the results of the USV and UAV point cloud registration are investigated to construct a comprehensive terrain model of the aquatic environment, which will be followed by accuracy evaluation. Second, we explore the outcomes of fusing point clouds from two UAV-SfM (structure from motion) datasets to overcome the current limitations of low-cost drones in terms of the flight range and the number of images processed by photogrammetry software. The entire integration of the USV-UAV and UAV-UAV point cloud models was conducted via 4-parameter, 6-parameter, and 7-parameter coordinate transformation methods (CT methods), which are all essential for precise geodetic coordinate conversions and aligning different geospatial datasets. All the point cloud fusion results were uploaded to the Pointbox website, allowing anyone to easily access and view the research findings.

2. Methods

2.1. UAVs and USVs

This study uses SfM-based photogrammetry to construct point cloud models from UAV imagery. SfM photogrammetry reconstructs 3D scenes from collections of 2D images captured from different viewpoints, combining a series of computer vision and photogrammetric methods to recover both scene structure and camera motion from overlapping images []. The point cloud models in this study were generated with Pix4Dmapper 4.8.4 software, which applies SfM and multi-view stereo (MVS) algorithms to reconstruct 3D surfaces from overlapping 2D images []. The SfM process typically includes key steps such as feature extraction and matching, camera pose estimation, bundle adjustment, and 3D reconstruction. In the feature extraction and matching stage, key feature points are detected from UAV images, usually at distinct textures or edges (e.g., building corners or road markings). The scale-invariant feature transform (SIFT) algorithm [] is used to extract and match corresponding feature points across multiple images. In the camera pose estimation step, the geometric relationships of collinearity and epipolar geometry are used to compute the UAV camera positions, motion directions, and intrinsic parameters, thereby constructing a sparse 3D point cloud structure. To minimize projection errors caused by image misalignment or overlap inconsistencies, bundle adjustment is performed in Pix4Dmapper to refine image orientation and improve reconstruction accuracy. The objective function of the bundle adjustment can be expressed as follows []:
$$\min \sum_{i=1}^{n} \left\| p_i - P_i(X_i, R_i, T_i) \right\|^2$$
where $p_i$ is the observed image point, $P_i(\cdot)$ is the projection model applied to the 3D coordinates $X_i$, $R_i$ is the rotation matrix, and $T_i$ is the translation vector. Finally, the MVS algorithm [] generates a dense point cloud and reconstructs the 3D surface by estimating depth maps from feature correspondences across multiple overlapping images. These algorithms are well established; therefore, only a concise description is provided here, as the main focus of this study lies in the registration of UAV and USV point clouds, rather than the image-reconstruction procedure.
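To make the objective in Eq. (1) concrete, the sketch below assembles reprojection residuals for a simple pinhole camera. This is a minimal illustration only, not the Pix4Dmapper implementation: the function names (project, reprojection_residuals) and the single shared focal length are simplifying assumptions.

```python
# Minimal sketch of the bundle-adjustment residuals of Eq. (1),
# assuming a pinhole camera without lens distortion (illustrative only).
import numpy as np

def project(X, R, T, f):
    """Project a 3D point X into the image of a camera with rotation R,
    translation T, and focal length f (pinhole model)."""
    Xc = R @ X + T                 # transform the point into the camera frame
    return f * Xc[:2] / Xc[2]      # perspective division to image coordinates

def reprojection_residuals(points_3d, cameras, observations, f):
    """Stack the residuals p_i - P_i(X_i, R_i, T_i) over all observations.
    `observations` maps (camera index, point index) -> observed 2D point."""
    res = []
    for (cam_idx, pt_idx), p_obs in observations.items():
        R, T = cameras[cam_idx]
        res.append(p_obs - project(points_3d[pt_idx], R, T, f))
    return np.concatenate(res)

# Bundle adjustment minimizes the sum of squared residuals, e.g. with
# scipy.optimize.least_squares over the camera poses and 3D point coordinates.
```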
The purpose of USV depth sounding is to calculate the water depth via the speed and time difference of the sound waves traveling through water. The depth sounding technology adopted in this study is the multibeam echo sounding system. When a USV equipped with a multibeam echo sounder travels on the water surface, it emits a sound wave toward the bottom. When the sound wave hits the bottom, it reflects back. Once the receiver receives the reflected sound wave, it can calculate the travel time of the sound wave, thereby determining the water depth $h$, as shown below []:
$$h = \tfrac{1}{2}(v \times t) + k + d$$
where $v$ is the speed of sound in water, $t$ is the round-trip travel time of the sound wave, $k$ is a constant correction factor used to compensate for the system time delay between signal transmission and reception, and $d$ is the draft of the vessel, representing the vertical distance between the water surface and the transducer position. The inclusion of $k$ and $d$ ensures that the calculated water depth reflects the true distance from the water surface to the seabed. The system used in this study was a NORBIT iWBMS multibeam echo sounder (NORBIT Subsea, Trondheim, Norway), operating at a frequency range of 200–700 kHz with a depth accuracy of approximately ±5 cm. In addition, the USV system (NORBIT iWBMS) included a built-in time synchronization function between the GNSS receiver and the multibeam sonar sensors, ensuring that all depth measurements were temporally aligned with the GNSS positioning data. Combined with the UAV GNSS, this setup provided a consistent spatio-temporal reference framework for both datasets. Because the UAV and USV surveys were conducted on the same day under calm conditions and within a small study area, additional extrinsic calibration was not required.
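As a simple numerical illustration of Eq. (2), the snippet below computes a depth from a two-way travel time. The sound speed, delay correction, and draft values are illustrative assumptions, not survey values from this study.

```python
# Minimal sketch of the depth relation in Eq. (2); all numbers are illustrative.
def water_depth(v, t, k=0.0, d=0.0):
    """Depth from the water surface to the bottom (Eq. 2).
    v: sound speed in water [m/s]; t: two-way travel time [s];
    k: system delay correction [m]; d: transducer draft [m]."""
    return 0.5 * v * t + k + d

# Example: v ~ 1500 m/s, 8 ms round-trip time, 0.3 m draft -> about 6.3 m depth
print(water_depth(1500.0, 0.008, k=0.0, d=0.3))
```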

2.2. Coordinate Transformation

The CT strategies for this study include 4-parameter, 6-parameter, and 7-parameter strategies. The 4-parameter CT is used for simple adjustments, such as translating and rotating maps with a uniform scale [,]. The 6-parameter CT handles cases where differential scaling and rotation are needed [,]. The 7-parameter Helmert transformation is a commonly used 3D coordinate transformation model that consists of three translation parameters, three rotation parameters, and one uniform scale factor [,]. It is widely applied in the registration of point cloud data obtained from different platforms, such as UAVs and USVs, to achieve spatial alignment under a single global scale. In contrast, the 4-parameter and 6-parameter transformations are typically applied to 2D datasets, where the parameters include translations, rotation, and (in the case of the 4-parameter model) a scale factor. Although the 7-parameter model assumes uniform scaling, future studies could explore extending this model by introducing additional scale factors to form 8- or 9-parameter transformations for anisotropic scaling and enhanced registration flexibility. The 4-parameter CT model can be expressed as follows []:
$$X = a x - b y + c, \qquad Y = b x + a y + d$$
where $(x, y)$ are the original coordinates and $(X, Y)$ are the transformed coordinates; $c$ and $d$ are the translation parameters, while $a$ and $b$ jointly represent the rotation and uniform scale. The six-parameter CT model can be expressed as follows []:
$$X = a x + b y + c, \qquad Y = d x + e y + f$$
where $(x, y)$ are the original coordinates and $(X, Y)$ are the transformed coordinates; $c$ and $f$ are the translation parameters, and $a$, $b$, $d$, and $e$ represent the rotation parameters and scale adjustments. The 7-parameter CT model between any two Cartesian systems can be written as follows []:
$$\begin{bmatrix} X_G \\ Y_G \\ Z_G \end{bmatrix} = (1 + dm) \begin{bmatrix} 1 & \gamma & -\beta \\ -\gamma & 1 & \alpha \\ \beta & -\alpha & 1 \end{bmatrix} \begin{bmatrix} X_L \\ Y_L \\ Z_L \end{bmatrix} + \begin{bmatrix} T_X \\ T_Y \\ T_Z \end{bmatrix}$$
where $(X_L, Y_L, Z_L)$ are the original coordinates and $(X_G, Y_G, Z_G)$ are the transformed coordinates; $T_X$, $T_Y$, and $T_Z$ are the translation parameters; $\alpha$, $\beta$, and $\gamma$ are the rotation parameters; and $dm$ is the scale-correction parameter. In this study, the 4-parameter, 6-parameter, and 7-parameter CT solutions are all determined through a least-squares adjustment.
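As a concrete illustration of this least-squares estimation, the sketch below sets up the 4-parameter (Eq. 3) and 6-parameter (Eq. 4) models as linear systems solved from control-point pairs. It is a minimal example under the assumption of well-distributed control points; the function names are illustrative and do not come from the software used in this study.

```python
# Minimal sketch: least-squares estimation of the 2D CT parameters of
# Eqs. (3) and (4) from control-point pairs (illustrative, not the study's code).
import numpy as np

def fit_4_parameter(src, dst):
    """Estimate (a, b, c, d) of the similarity transform
    X = a*x - b*y + c,  Y = b*x + a*y + d.
    src, dst: (n, 2) arrays of control-point coordinates."""
    x, y = src[:, 0], src[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.zeros((2 * len(src), 4))
    A[0::2] = np.column_stack([x, -y, ones, zeros])   # X equations
    A[1::2] = np.column_stack([y,  x, zeros, ones])   # Y equations
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params  # a, b, c, d

def fit_6_parameter(src, dst):
    """Estimate (a, b, c, d, e, f) of the affine transform
    X = a*x + b*y + c,  Y = d*x + e*y + f."""
    x, y = src[:, 0], src[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.zeros((2 * len(src), 6))
    A[0::2] = np.column_stack([x, y, ones, zeros, zeros, zeros])
    A[1::2] = np.column_stack([zeros, zeros, zeros, x, y, ones])
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params  # a, b, c, d, e, f

# Usage sketch: params = fit_4_parameter(source_xy, target_xy), then apply
# Eq. (3) with the estimated parameters to transform the full point cloud.
```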
In Equations (3)–(5), each parameter represents a distinct geometric element of the transformation. In the 4-parameter and 6-parameter models, the coefficients ($a$, $b$, $d$, $e$) describe the combined effects of rotation and scale in the horizontal plane, while the remaining terms ($c$, $d$, $f$) represent translations along the X and Y directions. For the 7-parameter Helmert transformation (Equation (5)), $T_X$, $T_Y$, and $T_Z$ denote translations; $\alpha$, $\beta$, and $\gamma$ are the rotation angles about the X-, Y-, and Z-axes; and $dm$ is the uniform scale factor. The rotations follow the right-hand rule, where positive angles correspond to counter-clockwise rotation about each axis, and are applied in the X–Y–Z order to convert coordinates from the local system $(X_L, Y_L, Z_L)$ to the global system $(X_G, Y_G, Z_G)$. This convention is consistent with the standard geodetic Helmert model commonly used in 3D coordinate conversion.
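For completeness, a minimal sketch of applying the 7-parameter Helmert model of Eq. (5) is given below. It assumes the small-angle, linearized rotation matrix with the sign convention shown above; the function name and example values are illustrative only.

```python
# Minimal sketch: applying the linearized 7-parameter Helmert transform (Eq. 5).
import numpy as np

def helmert_7_parameter(points, tx, ty, tz, alpha, beta, gamma, dm):
    """Transform local (X_L, Y_L, Z_L) coordinates to the global frame.
    points: (n, 3) array; alpha, beta, gamma in radians; dm: scale correction."""
    R = np.array([[1.0,    gamma, -beta],
                  [-gamma, 1.0,    alpha],
                  [beta,  -alpha,  1.0]])
    T = np.array([tx, ty, tz])
    return (1.0 + dm) * points @ R.T + T

# Example: a pure translation plus a small uniform scale correction
local = np.array([[100.0, 200.0, 10.0]])
print(helmert_7_parameter(local, 2.0, -1.5, 0.3, 0.0, 0.0, 0.0, 1e-4))
```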

3. Study Area and Data

This paper includes three study areas. Their locations and aerial images are shown in Figure 1. Study area 1 is located in the Cien retention basin area in Zhongpu Township, Chiayi County. The area comprises three ponds, and this study uses the central pond as a case example for the registration of USV and UAV point cloud data. The USV was employed to survey the bathymetry of the pond bottom, whereas the UAV was used to survey the terrain around this pond. Study area 2 is located at the buildings of the Department of Soil and Water Conservation (SWC) of National Chung Hsing University (NCHU), Taichung City, and study area 3 is located at the Sijiaolin Stream within Dongshi Forestry Culture Park in Taichung City. There are several soil and water conservation structures in the Sijiaolin Stream. We test the registration results of the USV-derived and UAV-derived point cloud models in study area 1 and those of the UAV-derived point cloud models in study areas 2 and 3.
Figure 1. Left image: Taiwan topography and the locations of study areas 1–3. Right: Aerial orthophotos of study areas 1–3.
For each study area, control points were selected based on geometric distinctiveness, clear visibility in both UAV and USV datasets, and adequate spatial coverage. Points were arranged as uniformly as possible within the overlap or boundary zones to minimize spatial bias, while collinear configurations were avoided to maintain geometric stability. All control points corresponded to sharp and easily identifiable terrain features—such as embankment corners, building edges, and spillway intersections—ensuring repeatable and reliable registration across datasets. The control points were selected as comprehensively as possible within each study area, considering the geometric and visibility constraints of both UAV and USV datasets. Although some regions offered limited distinctive features due to vegetation or water-surface reflections, the selected points provided sufficient spatial coverage and geometric stability for reliable registration.
The USV system used in study area 1 is the NORBIT iWBMS multibeam sonar system [] mounted on an unmanned surface vehicle from Chen Kai Technology (Figure 2a). Additionally, the drone used in study areas 1–3 is the AUTEL EVO II, as shown in Figure 2b. The trajectories of the USV and UAV missions in study areas 1–3 are shown in Figure 3. In Figure 3a, the red dots and green dots represent the UAV and USV mission trajectories, respectively. The photos over study area 1 were taken on 4 July 2023, at an altitude of 80 m, with a forward overlap of 70% and a side overlap of 60%, totaling 151 photos. The USV survey in study area 1 was also conducted on 4 July 2023, yielding a total of 502,521 point cloud data points. In Figure 3b, the trajectories with red and yellow dots represent Models 2-1 and 2-2, respectively. The aerial photography dates for the two models were 23 October 2023 and 24 October 2023. The flight altitude was 30 m, with a forward overlap of approximately 70% and a side overlap of approximately 60%. The number of aerial photos taken was 53 for Model 2-1 and 99 for Model 2-2. In Figure 3c, the trajectories with red, yellow, and blue dots represent Models 3-1, 3-2, and 3-3, respectively. The UAV aerial photography date for study area 3 was 4 March 2024. The flight altitude was 100 m, with a forward overlap of approximately 80% and a side overlap of approximately 70%. The number of aerial photos taken for Models 3-1 to 3-3 was 43, 106, and 55, respectively. Although there are three point cloud models in study area 3, this paper analyzes only the registration results of Models 3-1 and 3-2. All of the registrations of the USV-derived and UAV-derived point cloud models in the three study areas are implemented with 4-parameter, 6-parameter, and 7-parameter CT methods.
Figure 2. (a) USV system of Chen Kai Technology. (b) AUTEL EVO II.
Figure 3. (a) UAV trajectories (red lines) of study area 1, with an inset on the right showing the USV trajectories (green lines). (b) UAV trajectories of study area 2, with red and yellow dots representing Models 2-1 and 2-2, respectively. (c) UAV trajectories of study area 3, with red, yellow, and blue lines representing Models 3-1, 3-2, and 3-3, respectively.

4. Results and Discussion

4.1. The Results for Study Area 1: USV–UAV Point Cloud Registration

Since USV and UAV technologies are used for measuring underwater and ground terrain, respectively, their point cloud models do not overlap. However, we selected 6 control points and 6 check points in areas where the two point cloud models are in close proximity, and where features are distinct, such as embankment gaps and spillway corners. In Figure 4, the red cross points and red circles indicate the control and check points, respectively, with the control and check points being alternately distributed. All the control points are utilized with the 4-parameter, 6-parameter, and 7-parameter CT methods to estimate the unknown parameter values using the least squares approach. Then, the horizontal coordinate of the USV-derived point cloud model is transformed to a new horizontal coordinate consistent with that of the UAV. For the elevation coordinates, since the 4-parameter and 6-parameter CTs are 2D transformations, we use a single control point at the spillway corner to adjust the elevation coordinates of the USV to those of the UAV.
Figure 4. The topography of the farm pond measured by the USV, with red cross points (1a–1f) and red circles indicating control and check points used for UAV/USV point cloud fusion.
Figure 5 shows a portion of the fusion results of the USV–UAV point clouds in study Area 1. The Pointbox links for all the point cloud fusion results are provided in the caption of Figure 5. The spatial resolutions of the USV and UAV point cloud models shown in the links are 10 cm and 3 cm, respectively. In Figure 5, without CT, there are noticeable gaps between the UAV-based and USV-based point clouds (Figure 5a). However, after applying CT, no matter whether the 4-parameter, 6-parameter, or 7-parameter CT methods are used, the gaps are reduced (Figure 5b–d). Table 1 shows the coordinate differences at each control point before and after applying CT. Since the 4-parameter and 6-parameter CT methods do not account for the Z-direction (elevation), the standard deviations for these methods, as presented in Table 1, are calculated solely in the horizontal plane. In contrast, the standard deviation for the 7-parameter CT method is computed in 3D space. Furthermore, Table 1 reveals that the standard deviations obtained using the 4-parameter, 6-parameter, and 7-parameter CT methods range from 31 to 34 cm, demonstrating decimeter-level accuracy. Since the Z-direction component in the 4-parameter and 6-parameter CT methods is treated as a single translation, we further compared their performance with the 7-parameter CT method in terms of Z-direction coordinate transformation. To do so, we analyzed the elevation differences of all control and check points before and after applying CT. The results are presented in Figure 6. The standard deviations of the differences for the cases without the CT method, with the 4-parameter CT method, with the 6-parameter CT method, and with the 7-parameter CT method—comprising both control and check points—are 17.5 cm, 16.3 cm, 12.9 cm, and 14.6 cm, respectively. These statistics indicate that the application of the CT method for point cloud fusion results in a reduction in standard deviation, with the 6-parameter method yielding the best results. This can be attributed to the fact that both the control and check points are located at the water surface interface, ensuring elevation consistency. As a result, the advantages of the 3D 7-parameter transformation are less pronounced compared to those of the 2D 6-parameter transformation. Additionally, the standard deviation results show that although the use of CT methods led to improved outcomes compared to non-CT methods, the improvement was not substantial. The primary reason for this is that, due to the lack of overlapping point clouds from the USV and UAV missions, control and check points can only be selected from the boundary areas, which may introduce selection errors.
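For readers who wish to reproduce statistics of this kind, the sketch below computes the standard deviation of control-point residuals in 2D (for the 4- and 6-parameter methods) or 3D (for the 7-parameter method). The exact statistic behind Table 1 is not spelled out in the text, so treating the residuals as point-wise Euclidean distances is an assumption of this sketch, and the array names are illustrative.

```python
# Minimal sketch of control-point residual statistics (assumed definition:
# standard deviation of point-wise 2D or 3D distances between datasets).
import numpy as np

def residual_std(reference, transformed, use_z=False):
    """Standard deviation of point-wise distances between reference (e.g. UAV)
    and transformed (e.g. USV) control-point coordinates, shape (n, 3)."""
    dims = 3 if use_z else 2
    d = np.linalg.norm(reference[:, :dims] - transformed[:, :dims], axis=1)
    return d.std()
```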
Figure 5. The partial fusion results of USV-UAV point clouds in study area 1: (a) without the CT method, (b) with the 4-parameter CT method, (c) with the 6-parameter CT method, and (d) with the 7-parameter CT method. The entire point cloud fusion results for (ad) can be found at the following four links (accessed on 30 August 2024): https://www.pointbox.xyz/clouds/66b32be2817a42b2255567e2, https://www.pointbox.xyz/clouds/66b3257c817a4296c65567da, https://www.pointbox.xyz/clouds/66b328eb817a4256e35567de, https://www.pointbox.xyz/clouds/66b32b84817a42e34f5567e0.
Table 1. Statistics of the coordinate differences at control points among the different CT methods (unit: cm) in study area 1.
Figure 6. The height differences (vertical bars) between UAV and USV point clouds at the control and check points in study area 1: (a) without the CT method, (b) with the 4-parameter CT method, (c) with the 6-parameter CT method, and (d) with the 7-parameter CT method. The light purple and dark purple bars indicate positive and negative values, respectively.
Overall, although the point cloud fusion results of the USV–UAV integration achieved only decimeter-level accuracy, the results still provide valuable insights into local sediment transport and the ecological environment in retention basins.

4.2. Results for Study Area 2: UAV/UAV Point Cloud Fusion

Study area 2 was located at the buildings of the Department of SWC on the NCHU campus. We tested the 4-parameter, 6-parameter, and 7-parameter CT methods for the point cloud registration of Models 2-1 and 2-2. Seven control points were selected at the corners of the building for easy identification, and their distribution is shown in Figure 7. In addition, we did not select specific check points in this area but instead chose three verification zones, which are the red frames shown in Figure 7. The verification zones include rooftops, tree canopies, and pavements. The 4-parameter, 6-parameter, and 7-parameter CT methods use the 7 control points to solve for unknown parameter values. We then transform the Model 2-2 coordinate system to the Model 2-1 coordinate system. Since the 4-parameter and 6-parameter CTs are both 2D transformations, we use one control point at the rooftop corner to adjust the Model 2-2 elevation. Figure 8 shows a portion of the registration results (rooftop) of point cloud Models 2-1 and 2-2 in study area 2. Without the use of the CT method, the difference between Models 2-1 and 2-2 is quite pronounced (Figure 8a). However, when the 4-parameter and 6-parameter CT methods are used, the gap between the two models is significantly reduced (Figure 8b,c), although some minor differences are still evident. With the 7-parameter method, the two models are almost merged into one (Figure 8d). The Pointbox links for the complete point cloud registration results shown in Figure 8 are summarized in the figure caption. The spatial resolution is 3 cm. Table 2 presents the coordinate differences at each control point before and after applying the CT methods. As in Table 1, the standard deviations for the 4-parameter and 6-parameter CT methods are calculated solely in the horizontal plane, whereas that for the 7-parameter CT method is computed in 3D space. Furthermore, Table 2 reveals that the standard deviations obtained using the 4-parameter, 6-parameter, and 7-parameter CT methods range from 4 to 6 cm, demonstrating centimeter-level accuracy. Since the Z-direction component in the 4-parameter and 6-parameter CT methods is treated as a single translation, we further compared their performance with the 7-parameter CT method in terms of Z-direction coordinate transformation, as shown in Figure 9. Figure 9 shows that without the use of the CT method, the difference between Models 2-1 and 2-2 at the control points reaches several meters. After applying the 4-parameter or 6-parameter CT methods, the difference decreases to several tens of centimeters, and with the use of the 7-parameter CT method, it further decreases to several centimeters.
Figure 7. Aerial orthophoto of study area 2, with the red cross points (2a–2g) and frames indicating the control points and verification zones used for UAV point cloud integration. The verification zones include rooftops, tree canopies, and pavements.
Figure 8. The partial integration results of UAV-based point cloud Models 2-1 and 2-2 in study area 2: (a) without the CT method, (b) with the 4-parameter CT method, (c) with the 6-parameter CT method, and (d) with the 7-parameter CT method. The entire point cloud integration results for (ad) can be found at the following four links (accessed on 30 August 2024): https://www.pointbox.xyz/clouds/66b37dfb817a428b49556804, https://www.pointbox.xyz/clouds/66b330a2817a42933b5567e6, https://www.pointbox.xyz/clouds/66b33288817a42ba785567e8, https://www.pointbox.xyz/clouds/66b33446817a425bbb5567ea.
Table 2. Statistics of the coordinate differences at control points among the different CT methods (unit: cm) in study area 2.
Figure 9. The elevation differences (vertical bars) between point cloud Models 2-1 and 2-2 at the control points in study area 2: (a) without the CT method, (b) with the 4-parameter CT method, (c) with the 6-parameter CT method, and (d) with the 7-parameter CT method. The light purple and dark purple bars indicate positive and negative values, respectively.
To evaluate the CT results for study area 2 more comprehensively, we assessed the accuracy using the verification zones. First, we used GMT 6.5 software [] to grid the elevations of the verification zones (with a grid spacing of 10 cm), including the zones in Models 2-1 and 2-2, both before and after applying CT. We then calculated and statistically analyzed the elevation differences at each grid point and computed the correlation coefficients. Table 3 presents the statistics of the elevation differences between Model 2-1 and Model 2-2 across the various verification zones. The table shows that, without the use of the CT method, the mean elevation difference in any verification zone reaches several meters. In terms of standard deviation, due to the flat terrain of the pavement, only the verification zone corresponding to the pavement produces a relatively good result, with a standard deviation of 4.9 cm, whereas the standard deviations for the rooftop and tree canopy verification zones are in the meter range. After applying the 4-parameter or 6-parameter CT method, both the mean value and standard deviation in the three verification zones are significantly reduced, with the mean value ranging from approximately −8 to −37 cm, and the standard deviation decreasing to around 1 to 37 cm. After the 7-parameter CT method is applied, the mean values and standard deviations in the rooftop and pavement verification zones further decrease. The mean values are between −3.8 and −6.5 cm, and the standard deviations range from 0.6 to 0.9 cm, indicating that the 7-parameter method provides highly accurate point cloud integration results for these zones. However, for the verification zone of the tree canopy, the results are comparable to those obtained with the 4-parameter or 6-parameter methods, with average values and standard deviations of approximately 16 and 36 cm, respectively. This is likely because tree canopies are areas with very high terrain roughness; even minor displacements in planar coordinates can result in significant elevation differences. Thus, even with the 7-parameter CT method, the accuracy cannot be reduced to the centimeter level at the verification zone of the tree canopy.
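The verification-zone comparison can be summarized programmatically as follows. This sketch uses SciPy gridding as a stand-in for the GMT 6.5 workflow described above and keeps the 10 cm grid spacing from the text; the zone bounds, array layout, and function names are illustrative assumptions.

```python
# Minimal sketch of the verification-zone comparison: grid each model's
# elevations at 10 cm spacing, then compare the grids (illustrative only).
import numpy as np
from scipy.interpolate import griddata

def grid_elevation(points, bounds, spacing=0.10):
    """Grid scattered (x, y, z) points onto a regular grid.
    bounds: (xmin, xmax, ymin, ymax) of the verification zone."""
    xmin, xmax, ymin, ymax = bounds
    xg, yg = np.meshgrid(np.arange(xmin, xmax, spacing),
                         np.arange(ymin, ymax, spacing))
    return griddata(points[:, :2], points[:, 2], (xg, yg), method='linear')

def compare_zones(model_a, model_b, bounds, spacing=0.10):
    """Mean, standard deviation, and correlation of elevation differences
    between two gridded point cloud models over one verification zone."""
    za = grid_elevation(model_a, bounds, spacing)
    zb = grid_elevation(model_b, bounds, spacing)
    mask = ~np.isnan(za) & ~np.isnan(zb)          # compare only common cells
    diff = za[mask] - zb[mask]
    corr = np.corrcoef(za[mask], zb[mask])[0, 1]
    return diff.mean(), diff.std(), corr
```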
Table 3. Statistics of the elevation differences among the different verification zones and CT methods (unit: cm) in study area 2.
Table 4 shows the correlation coefficients between Model 2-1 and Model 2-2 in each verification zone under different CT methods. It shows that the use of the CT methods yields the best results in the verification zone of the rooftop. The correlation coefficient improves from 0.44 without CT to over 0.9, and with the 7-parameter CT method, it even reaches 0.99. The same results are observed in the verification zone of the pavement. However, in the verification zone of the tree canopy, the results with and without the CT method are similar, ranging from 0.78 to 0.83, due to the very high terrain roughness of the tree canopy. Overall, in the study of point cloud integration in study area 2, the results of the 7-parameter CT method are significantly better than those of the 4-parameter and 6-parameter CT methods. Although the 7-parameter CT method performs best, Table 3 shows that the 4-parameter and 6-parameter CT methods can still achieve centimeter-level accuracy or even better on rooftops and pavements. This is likely due to the even distribution of control points in this study area.
Table 4. The correlation coefficients of Models 2-1 and 2-2 at different verification zones using different CT methods.

4.3. Results for Study Area 3: UAV/UAV Point Cloud Fusion

Study area 3 was located at the Sijiaolin Stream within the Dongshi Forestry Culture Park. We tested the 4-parameter, 6-parameter, and 7-parameter CT methods for the point cloud integration of Model 3-1 and Model 3-2. Six control points were selected at the walkways, which are easy to identify. The distribution of all control points is shown in Figure 10. We additionally used three verification zones (with a grid spacing of 10 cm), which are the red frames shown in Figure 10, to evaluate the point cloud integration results. The verification zones include tree canopies, pavements, and grasslands. The 4-parameter, 6-parameter, and 7-parameter CT methods use the 6 control points to solve for unknown parameter values. Then, we transform the Model 3-2 coordinate system to the Model 3-1 coordinate system. Since both the 4-parameter and the 6-parameter CT are 2D transformations, we used one stable control point to adjust the elevation of Model 3-2. Figure 11 shows a portion of the integration results of Models 3-1 and 3-2. Figure 11a,c,e,g show the structures of the fishways, whereas Figure 11b,d,f,h depict the structures of the wildlife passages. Without the use of the CT method, the difference between Models 3-1 and 3-2 is quite pronounced (Figure 11a,b). However, when the 4-parameter and 6-parameter CT methods are used, the gap between the two models is significantly reduced (Figure 11c–f), although some minor differences are still evident. With the 7-parameter method, the two models are almost merged into one (Figure 11g,h). In addition, the estimated scale factors obtained from the 7-parameter CT were close to unity (within ±0.001), confirming that the two UAV point cloud models were consistent in physical scale after alignment to the same reference frame defined by the UAV GNSS positioning. Since the area of the Sijiaolin Stream is mostly forested, to visualize the integration results of Models 3-1 and 3-2 in Figure 11, we have represented the forest and grassy areas of Models 3-1 and 3-2 in red and blue, respectively. The Pointbox links of the entire point cloud integration results are summarized in the caption of Figure 11. The spatial resolution of the two UAV point cloud models shown in the Pointbox links is 3 cm.
Figure 10. Aerial orthophoto of study area 3, with the red cross points (3a–3f) and frames indicating the control points and verification zones used for UAV point cloud fusion. The verification zones include the tree canopy, pavement, and grassland.
Figure 11. The partial integration results of point cloud Models 3-1 and 3-2 in study area 3: (a,b) without the CT method, (c,d) with the 4-parameter CT method, (e,f) with the 6-parameter CT method, and (g,h) with the 7-parameter CT method. The entire point cloud fusion results for (ad) can be found at the following four links (accessed on 30 August 2024): https://www.pointbox.xyz/clouds/66b37adb817a4275f75567fc, https://www.pointbox.xyz/clouds/66b37cd3817a42f55d5567fe, https://www.pointbox.xyz/clouds/66b37dcc817a428987556800, https://www.pointbox.xyz/clouds/66b37de0817a423873556802.
Table 5 presents the coordinate differences at each control point before and after applying the CT methods. As in Tables 1 and 2, the standard deviations for the 4-parameter and 6-parameter CT methods are calculated solely in the horizontal plane, whereas that for the 7-parameter CT method is computed in 3D space. Table 5 reveals that the standard deviations obtained using the 4-parameter, 6-parameter, and 7-parameter CT methods range from 5 to 7 cm, demonstrating centimeter-level accuracy. Since the Z-direction component in the 4-parameter and 6-parameter CT methods is treated as a single translation, we further compared their performance with the 7-parameter CT method in terms of Z-direction coordinate transformation, as shown in Figure 12. Figure 12 shows that without the use of the CT method, the difference at the control points between Model 3-1 and Model 3-2 reaches 10–20 m. After applying the 4-parameter or 6-parameter CT methods, the difference decreases to 1–2 m, and with the use of the 7-parameter CT method, it further decreases to several centimeters. Table 6 presents the statistics of the elevation differences between Model 3-1 and Model 3-2 in different verification zones. The table shows that, without CT, the mean values and standard deviations for all three verification zones are the largest. After the 4-parameter or 6-parameter CT methods were applied, both the mean values and standard deviations decreased. Among these zones, the reductions in the verification zones of the pavement and grassland are more significant than those in the verification zone of the tree canopy. After the 7-parameter CT method was used, the mean value was similar to that obtained with the 4-parameter and 6-parameter CT methods, but the standard deviation was further reduced, reaching 33.9 cm, 11.8 cm, and 4.3 cm for the verification zones of the tree canopy, pavement, and grassland, respectively. Compared with the verification zones of the pavement and grassland, the result for the verification zone of the tree canopy is slightly worse because of the very high terrain roughness.
Table 5. Statistics of the coordinate differences at control points among the different CT methods (unit: cm) in study area 3.
Figure 12. The elevation differences (vertical bars) between point cloud Models 3-1 and 3-2 at the control points in study area 3: (a) without the CT method, (b) with the 4-parameter CT method, (c) with the 6-parameter CT method, and (d) with the 7-parameter CT method. The light purple and dark purple bars indicate positive and negative values, respectively.
Table 6. Statistics of the elevation differences among the different verification zones and CT methods (unit: cm) in study area 3.
Table 7 shows the correlation coefficients between Model 3-1 and Model 3-2 in each verification zone under different CT methods. Without the CT method, the correlation coefficient between the two models in the verification zone of the tree canopy is relatively low, at only 0.79, whereas in the other two verification zones, it exceeds 0.9. After the CT method is applied, the correlation coefficients between the two models in all the verification zones significantly increase. In the verification zone of the canopy, the 7-parameter method yields the best correlation coefficient, reaching 0.97. In the other verification zones, all three CT methods yielded consistent correlation coefficients of 0.99. Overall, in the study of point cloud integration in study area 3, the results of the 7-parameter CT method are significantly better than those of the 4-parameter and 6-parameter CT methods. Compared to study area 2, the results of the 4-parameter and 6-parameter CT methods are noticeably worse than those of the 7-parameter CT method. This is likely due to the less even distribution of control points in this area.
Table 7. The correlation coefficients of Models 3-1 and 3-2 at different verification zones using different CT methods.

5. Conclusions

This study investigates the integration of point cloud data from UAVs and USVs to generate comprehensive terrain models in aquatic environments, and from UAVs to develop continuous terrain models in urban and stream areas. The research includes three study sites: a retention basin in Chiayi County (study area 1), the Department of SWC at NCHU (study area 2), and the Sijiaolin Stream (study area 3). Point cloud models were generated using UAV and USV technologies, followed by integration of the USV-UAV and UAV-UAV data using 4-parameter, 6-parameter, and 7-parameter CT methods. These methods were employed to align disparate datasets and produce accurate geospatial models.
The results of the USV-UAV integration demonstrate that CT methods effectively reduce discrepancies between the USV and UAV point clouds, achieving decimeter-level accuracy across all three CT methods (4-, 6-, and 7-parameter) in study area 1. Among these, the 6-parameter CT method delivered the best performance, with a standard deviation of 12.9 cm in elevation differences. However, the overall improvement in standard deviation remained modest, primarily due to the non-overlapping nature of the point clouds and potential errors in control point selection. These findings emphasize the importance of precise control point selection and highlight the inherent challenges in integrating heterogeneous point cloud datasets. Overall, this study provides valuable insights into sediment transport and ecological conditions in retention basins.
The USV–UAV integration benefited from the built-in GNSS–sonar time synchronization in the USV system, which ensured temporal consistency between the datasets. However, for larger-scale or highly dynamic environments, detailed sensor calibration and synchronization procedures remain essential to minimize systematic errors.
The UAV-UAV integration results show that, regardless of the CT method (4-, 6-, or 7-parameter), the differences at the control points before and after transformation consistently achieved centimeter-level accuracy. When validation was performed using verification zones, significant improvements were observed in reducing elevation discrepancies and achieving better point cloud alignment, particularly in areas with less terrain roughness, such as pavements and rooftops. In study area 2, the CT methods achieved an accuracy of 1 to 3 cm for rooftops, and less than 1 cm for pavements. In study area 3, the CT methods resulted in accuracies of 11 to 19 cm for pavements and 4 to 14 cm for grasslands. These findings align with previous research. However, in areas with more complex terrain, such as tree canopies, accuracy improvements were less pronounced, although the 7-parameter CT method still outperformed the others. In general, the 7-parameter CT method consistently provided superior results compared to the 4-parameter and 6-parameter methods, in both study areas 2 and 3. Notably, the 4- and 6-parameter methods yielded results closer to the 7-parameter method in study area 2, while in study area 3, the 4- and 6-parameter methods performed significantly worse. This discrepancy is attributed to the more even distribution of control points in study area 2 compared to study area 3.
The key contributions of this study are as follows: In USV-UAV integration research, this paper is the first to explore the acquisition of both underwater and above-water terrain data using USV-UAV systems and to perform point cloud model registration. These findings provide a valuable reference for future efforts in integrating UAV and USV point cloud data, facilitating more accurate and comprehensive modeling of diverse terrains, particularly in aquatic and adjacent environments. In the UAV-UAV research, this study evaluates the accuracy of different CT methods for registering UAV-based point cloud models. Notably, replacing traditional check points with verification zones may provide a more representative and comprehensive assessment.
While numerous studies on TLS-TLS, TLS-UAV, and UAV-UAV point cloud integration (e.g., [,,,,,,,]) report high accuracy, many rely on commercial or custom software for data processing, which involves complex computational procedures. Additionally, point cloud fusion results are often confined to specific commercial software platforms, limiting their broader applicability and dissemination. In contrast, this paper employs widely used CT methods, offering a simpler, more accessible approach to merging point cloud models. The results demonstrate that this method achieves high accuracy while remaining straightforward. Furthermore, all point cloud fusion results from this study are available on the Pointbox website, a free platform for viewing point cloud models, facilitating easy access and sharing of the research findings.

Author Contributions

Y.-S.H. is the PI of the projects leading to this paper. Y.-S.H. and Y.-S.Y. wrote the manuscript, and Y.-H.C. and Y.-S.Y. helped with computation and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by grants from the Agency of Rural Development and Soil and Water Conservation, MOA of Taiwan under grants ARDSWC-112-057 and ARDSWC-112-059.

Institutional Review Board Statement

Not applicable. This study did not require ethical approval.

Data Availability Statement

The UAV and USV data were generated and processed by groups from National Chung Hsing University and National Cheng Kung University, respectively. The data are available from the corresponding author upon reasonable request.

Acknowledgments

The program for calculating the 4-parameter, 6-parameter, and 7-parameter CTs in this study was partially provided by RealWorld Engineering Consultants, Inc.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cheng, L.; Chen, S.; Liu, X.; Xu, H.; Wu, Y.; Li, M.; Chen, Y. Registration of Laser Scanning Point Clouds: A Review. Sensors 2018, 18, 1641. [Google Scholar] [CrossRef]
  2. Tong, X.; Liu, X.; Chen, P.; Liu, S.; Luan, K.; Li, L.; Liu, S.; Liu, X.; Xie, H.; Jin, Y.; et al. Integration of UAV-Based Photogrammetry and Terrestrial Laser Scanning for the Three-Dimensional Mapping and Monitoring of Open-Pit Mine Areas. Remote Sens. 2015, 7, 6635–6662. [Google Scholar] [CrossRef]
  3. Yan, L.; Tan, J.; Liu, H.; Xie, H.; Chen, C. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm. Sensors 2017, 17, 1979. [Google Scholar] [CrossRef] [PubMed]
  4. Bolkas, D.; Chiampi, J.; Chapman, J.; Pavill, B.F. Creating a virtual reality environment with a fusion of sUAS and TLS point-clouds. Int. J. Image Data Fusion 2020, 11, 136–161. [Google Scholar] [CrossRef]
  5. Pavan, N.L.; dos Santos, D.R.; Khoshelham, K. Global Registration of Terrestrial Laser Scanner Point Clouds Using Plane-to-Plane Correspondences. Remote Sens. 2020, 12, 1127. [Google Scholar] [CrossRef]
  6. Li, C.; Xia, Y.; Yang, M.; Wu, X. Study on TLS Point Cloud Registration Algorithm for Large-Scale Outdoor Weak Geometric Features. Sensors 2022, 22, 5072. [Google Scholar] [CrossRef]
  7. Panagiotidis, D.; Abdollahnejad, A.; Slavík, M. 3D point cloud fusion from UAV and TLS to assess temperate managed forest structures. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102917. [Google Scholar] [CrossRef]
  8. Li, J.; Peng, Y.; Tang, Z.; Li, Z. Three-Dimensional Reconstruction of Railway Bridges Based on Unmanned Aerial Vehicle–Terrestrial Laser Scanner Point Cloud Fusion. Buildings 2023, 13, 2841. [Google Scholar] [CrossRef]
  9. Guo, L.; Wu, Y.; Deng, L.; Hou, P.; Zhai, J.; Chen, Y. A Feature-Level Point Cloud Fusion Method for Timber Volume of Forest Stands Estimation. Remote Sens. 2023, 15, 2995. [Google Scholar] [CrossRef]
  10. Paris, C.; Kelbe, D.; van Aardt, J.; Bruzzone, L. A Novel Automatic Method for the Fusion of ALS and TLS LiDAR Data for Robust Assessment of Tree Crown Structure. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3679–3693. [Google Scholar] [CrossRef]
  11. Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 73–80. [Google Scholar] [CrossRef]
  12. Hansch, R.; Weber, T.; Hellwich, O. Comparison of 3D interest point detectors and descriptors for point cloud fusion. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 57–64. [Google Scholar] [CrossRef]
  13. Fryskowska, A.; Walczykowski, P.; Delis, P.; Wojtkowska, M. ALS and TLS data fusion in cultural heritage documentation and modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 147–150. [Google Scholar] [CrossRef]
  14. Persad, R.A.; Armenakis, C. Automatic co-registration of 3D multi-sensor point clouds. ISPRS J. Photogramm. Remote Sens. 2017, 130, 162–186. [Google Scholar] [CrossRef]
  15. Li, W.; Wang, C.; Zai, D.; Huang, P.; Liu, W.; Wen, C.; Li, J. A Volumetric Fusing Method for TLS and SFM Point Clouds. IEEE J. Sel. Top. Appl. Earth. Obs. Remote Sens. 2018, 11, 3349–3357. [Google Scholar] [CrossRef]
  16. Dai, W.; Yang, B.; Liang, X.; Dong, Z.; Huang, R.; Wang, Y.; Li, W. Automated fusion of forest airborne and terrestrial point clouds through canopy density analysis. ISPRS J. Photogramm. Remote Sens. 2019, 156, 94–107. [Google Scholar] [CrossRef]
  17. Urbančič, T.; Roškar, Ž.; Kosmatin Fras, M.; Grigillo, D. New Target for Accurate Terrestrial Laser Scanning and Unmanned Aerial Vehicle Point Cloud Registration. Sensors 2019, 19, 3179. [Google Scholar] [CrossRef]
  18. Zang, Y.; Yang, B.; Li, J.; Guan, H. An Accurate TLS and UAV Image Point Clouds Registration Method for Deformation Detection of Chaotic Hillside Areas. Remote Sens. 2019, 11, 647. [Google Scholar] [CrossRef]
  19. Cucchiaro, S.; Fallu, D.J.; Zhang, H.; Walsh, K.; Van Oost, K.; Brown, A.G.; Tarolli, P. Multiplatform-SfM and TLS Data Fusion for Monitoring Agricultural Terraces in Complex Topographic and Landcover Conditions. Remote Sens. 2020, 12, 1946. [Google Scholar] [CrossRef]
  20. Son, S.W.; Kim, D.W.; Sung, W.G.; Yu, J.J. Integrating UAV and TLS Approaches for Environmental Management: A Case Study of a Waste Stockpile Area. Remote Sens. 2020, 12, 1615. [Google Scholar] [CrossRef]
  21. Han, Y.; Sun, H.; Lu, Y.; Zhong, R.; Ji, C.; Xie, S. 3D Point Cloud Generation Based on Multi-Sensor Fusion. Appl. Sci. 2022, 12, 9433. [Google Scholar] [CrossRef]
  22. Terryn, L.; Calders, K.; Bartholomeus, H.; Bartolo, R.E.; Brede, B.; D’hont, B.; Disney, M.; Herold, M.; Lau, A.; Shenkin, A.; et al. Quantifying tropical forest structure through terrestrial and UAV laser scanning fusion in Australian rainforests. Remote Sens. Environ. 2022, 271, 112912. [Google Scholar] [CrossRef]
  23. Quintero Bernal, D.F.; Kern, J.; Urrea, C. A Multimodal Fusion System for Object Identification in Point Clouds with Density and Coverage Differences. Processes 2024, 12, 248. [Google Scholar] [CrossRef]
  24. Alicandro, M.; Di Angelo, L.; Di Stefano, P.; Dominici, D.; Guardiani, E.; Zollini, S. Fast and Accurate Registration of Terrestrial Point Clouds Using a Planar Approximation of Roof Features. Remote Sens. 2022, 14, 2986. [Google Scholar] [CrossRef]
  25. Specht, C.; Świtalski, E.; Specht, M. Application of an autonomous/unmanned survey vessel (ASV/USV) in bathymetric measurements. Pol. Marit. Res. 2017, 24, 36–44. [Google Scholar] [CrossRef]
  26. Rowley, J. Autonomous Unmanned Surface Vehicles (USV): A Paradigm Shift for Harbor Security and Underwater Bathymetric Imaging. In Proceedings of the OCEANS 2018 MTS/IEEE, Charleston, SC, USA, 22–25 October 2018; pp. 1–6. [Google Scholar] [CrossRef]
  27. Specht, M.; Specht, C.; Szafran, M.; Makar, A.; Dąbrowski, P.; Lasota, H.; Cywiński, P. The Use of USV to Develop Navigational and Bathymetric Charts of Yacht Ports on the Example of National Sailing Centre in Gdańsk. Remote Sens. 2020, 12, 2585. [Google Scholar] [CrossRef]
  28. Shih, H.C.; Lee, C.M.; Ho, M.K.; Kuo, C.Y.; Liao, T.S.; Chen, C.P.; Yeh, T.K. Monitoring and risk assessment of Taoyuan ponds using an unmanned surface vehicle with multibeam echo sounder, ground-penetrating radar, and electrical resistivity tomography. Geomat. Nat. Hazards Risk 2024, 15, 2323598. [Google Scholar] [CrossRef]
  29. Wang, Y.; Yuan, Y.; Lei, Z. Fast SIFT Feature Matching Algorithm Based on Geometric Transformation. IEEE Access 2020, 8, 88133–88140. [Google Scholar] [CrossRef]
  30. Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157. [Google Scholar] [CrossRef]
  31. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  32. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 519–528. [Google Scholar] [CrossRef]
  33. Hughes Clarke, J.E. Multibeam Echosounders. In Submarine Geomorphology; Micallef, A., Krastel, S., Savini, A., Eds.; Springer Geology; Springer: Cham, Switzerland, 2008. [Google Scholar] [CrossRef]
  34. Deakin, R. A Note on the Bursa-Wolf and Molodensky—Badekas Transformations; Technical Report; School of Mathematical and Geospatial Sciences, RMIT University: Melbourne, Australia, 2006. [Google Scholar]
  35. Kumi-Boateng, B.; Ziggah, Y.Y. Horizontal coordinate transformation using artificial neural network technology—A case study of Ghana geodetic reference network. J. Geomat. 2017, 11, 1–11. [Google Scholar]
  36. Xiong, X.; Qin, K. Linearly Estimating All Parameters of Affine Motion Using Radon Transform. IEEE Trans. Image Process. 2014, 23, 4311–4321. [Google Scholar] [CrossRef] [PubMed]
  37. Ioannidou, S.; Pantazis, G. Helmert Transformation Problem. From Euler Angles Method to Quaternion Algebra. ISPRS Int. J. Geo-Inf. 2020, 9, 494. [Google Scholar] [CrossRef]
  38. Yang, R.; Deng, C.; Yu, K.; Li, Z.; Pan, L. A New Way for Cartesian Coordinate Transformation and Its Precision Evaluation. Remote Sens. 2022, 14, 864. [Google Scholar] [CrossRef]
  39. Ghilani, C. Adjustment Computations: Spatial Data Analysis; John Wiley and Sons Inc.: New York, NY, USA, 2010; pp. 464–470. [Google Scholar] [CrossRef]
  40. Cai, J.; Grafarend, E.W. Systematical Analysis of the Transformation Between Gauss-Krueger-Coordinate/DHDN and UTM-Coordinate/ETRS89 in Baden-Württemberg with Different Estimation Methods. In Geodetic Reference Frames; International Association of Geodesy Symposia; Drewes, H., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; Volume 134. [Google Scholar] [CrossRef]
  41. NORBIT iWBMS Datasheet. Available online: https://norbit.com/subsea/products/#wbms (accessed on 28 August 2024).
  42. Wessel, P.; Luis, J.F.; Uieda, L.; Scharroo, R.; Wobbe, F.; Smith, W.H.F.; Tian, D. The Generic Mapping Tools Version 6. Geochem. Geophys. Geosyst. 2019, 20, 5556–5564. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
