A New Sensor System for Accurate 3D Surface Measurements and Modeling of Underwater Objects

Abstract: A new underwater 3D scanning device based on structured illumination and designed for continuous capture of object data in motion for deep-sea inspection applications is introduced. The sensor permanently captures 3D data of the inspected surface and generates a 3D surface model in real time. Sensor velocities up to 0.7 m/s are directly compensated while capturing camera images for the 3D reconstruction pipeline. The accuracy results of static measurements of special specimens in a water basin with clear water show the high accuracy potential of the scanner in the sub-millimeter range. Measurement examples with a moving sensor show the significance of the proposed motion compensation and the ability to generate a 3D model by merging individual scans. Future application tests in offshore environments will show the practical potential of the sensor for the desired inspection tasks.


Introduction
Three-dimensional acquisition of objects underwater is gaining importance in the field of inspection. Industrial facilities, such as offshore wind turbine foundations, oil and gas pipeline systems, underwater structures, and other objects, including anchor chains, are regularly required to be measured underwater [1,2]. Other objects of interest for underwater 3D measurements include archaeological sites on the seabed [3][4][5][6][7], sunken shipwrecks [8], plant and coral growth [9], and the biomass or size of fish populations [10][11][12]. Several contactless methods have been applied to perform 3D reconstruction underwater. Examples include techniques using sonar systems, laser scanning, time-of-flight (ToF) cameras, and photogrammetry.
Whereas sonar systems [13][14][15][16] provide fast measurements of large areas over long distances, their measurement accuracy is low compared to that of optical sensors.
Mariani et al. introduced a new ToF-based system called UTOFIA [17], providing an accuracy of approximately 10 cm at a 14 m distance. A commercially available ToF system is provided by 3D at Depth [18].
Laser scanning systems [1,2,19,20] provide fast acquisition, long ranges, high measurement accuracy, and robustness against water turbidity. Our aim was to investigate the application of a new stereo scanning system supported by structured illumination for precise, continuous 3D capture and modeling of underwater structures at medium-range measurement distances (up to 3 m) and fields of view of about 1 m².

Materials and Methods
In the remainder of this paper, the development of an appropriate 3D sensor system is described first; then, results are presented, as well as examples of measurements and the construction of 3D models of the captured objects.

Hardware Setup
We developed a sensor system based on structured light illumination for underwater 3D inspection of industrial structures according to the requirements of potential users. Its targeted application is the inspection of oil and gas pipelines, offshore wind turbine foundations, anchor chains, and other technical structures. It can be mounted on a remotely operated vehicle (ROV) and controlled remotely from a vessel. It captures up to sixty 3D scans per second at velocities up to 1 m/s. It has a field of view of about 1 m² and captures 3D data at distances between approximately 1.6 m and 2.4 m. The system was built as a laboratory setup (see Figure 1). The sensor system, called UWS (underwater sensor), was developed according to the requirements within a funded research project (see acknowledgements). The sensor system consists of:

• Two monochrome measurement cameras arranged in a stereo setup for 3D data acquisition;
• A projection unit producing structured illumination patterns;
• A color camera for navigation and additional visual odometry data capture;
• Two high-power LED flash units, providing synchronized homogenous illumination for the color camera;
• A fiberoptic gyro (FOG) inertial measurement unit (IMU) for motion estimation to support global reconstruction tasks using SLAM algorithms;
• Underwater housings for each camera and the projection unit (the IMU is in the housing of the color camera);
• An electronic control and interface box.
An additional necessary piece of equipment is a PC workstation for control, storage of the measurement data, and power supply, which can be positioned on the vessel and is connected to the UWS. The cameras and lenses are commercially available parts. The projection unit is a Fraunhofer IOF development consisting mainly of an LED light source, a rotating slide (GOBO wheel) and motor, a projection lens, and control electronics. The projection unit generates a structured light pattern in the form of an aperiodic sinusoidal fringe pattern (see [40]).

3D Data Generation
Measurement data are generated from the stereo camera image stream supported by the aperiodic fringe projection of the GOBO unit. The common principle of triangulating corresponding points in the camera images using the calibrated camera geometries is applied (see [41]). We use advanced pinhole modeling, which takes into account the refraction of the vision rays at the interfaces between the different media (air, glass, and water).
The corresponding point search is realized using the generation of structured temporal patterns by the GOBO-based projection unit. Typically, ten consecutive images are used to form the temporal pattern of grey values at one pixel. Correspondence is found by choosing the highest correlation value along the epipolar lines (see [40]).
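The temporal correspondence search described above can be illustrated with a minimal sketch: for one pixel in the left camera, the sequence of grey values over the ten pattern images is compared with the candidate sequences along the epipolar line in the right camera using zero-mean normalized cross-correlation. The function names and data here are illustrative, not the authors' implementation:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length
    temporal grey-value sequences (1.0 = perfect match)."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def best_match(ref_seq, candidate_seqs):
    """Index of the candidate pixel (along the epipolar line) whose
    temporal pattern correlates best with the reference pixel."""
    scores = [zncc(ref_seq, c) for c in candidate_seqs]
    return max(range(len(scores)), key=scores.__getitem__)

# Ten grey values at one left-camera pixel (n = 10 pattern images).
ref = [10, 50, 20, 80, 30, 60, 40, 90, 25, 70]
# Candidate sequences along the epipolar line; the second is the true
# correspondence (same pattern, different gain/offset).
cands = [[5] * 10, [2 * x + 3 for x in ref], ref[::-1]]
print(best_match(ref, cands))
```

Because ZNCC is invariant to affine grey-value changes, the correctly corresponding pixel scores 1.0 even under different illumination gain.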

Geometric Modeling
To obtain optimal geometric 3D sensor modeling, an extensive theoretical geometrical analysis of the optical sensor components (cameras, lenses, and glass cover) and the additional important conditions and features was performed.
Furthermore, refraction of the vision rays at the interfaces between air and glass and between glass and water has to be considered in the geometric camera modeling. This was done using an extended pinhole model (see [42,43]). Simulations revealed that the systematic measurement error of our stereo system with the actual geometric parameters remained small over the complete measurement distance (2.0 m ± 0.4 m), even if a simple pinhole model including distortion correction was used.
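The refraction step that the extended pinhole model must account for is Snell's law at a flat port. A minimal vector-form sketch (a generic textbook formulation, not the model of [42,43]; the refractive index 1.33 for water is a common approximation):

```python
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a flat interface with unit normal n
    (pointing toward the incoming ray), from medium n1 into medium n2.
    Returns the refracted unit direction, or None on total internal
    reflection."""
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    f = r * cos_i - math.sqrt(k)
    return tuple(r * di + f * ni for di, ni in zip(d, n))

# A vision ray hitting the port at 45 deg in air refracts to ~32.1 deg
# in water (asin(sin 45 / 1.33)); a ray along the optical axis is unchanged.
t = refract((math.sin(math.pi / 4), 0.0, -math.cos(math.pi / 4)),
            (0.0, 0.0, 1.0), 1.0, 1.33)
print(math.degrees(math.asin(t[0])))  # ≈ 32.1
```

This bending of every off-axis ray is why a naive in-air pinhole calibration produces systematic depth errors underwater.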
As a result of the camera model analysis, an extended pinhole camera model was selected as a 3D sensor model, leading to minimal systematic measurement errors. A detailed description of the geometric property analysis is given in [43].

Sensor Calibration
One precondition for the 3D measurement process is the calibration of the sensor system. The parts of the UWS to be calibrated are:

• The 3D scanning unit, consisting of two monochrome measurement cameras arranged in a stereo array;
• The color camera (to be calibrated with respect to one of the stereo cameras);
• The IMU (to be calibrated with respect to the color camera).
In the following section, the calibration of the 3D sensor is briefly described. According to the theoretical analysis mentioned in Section 2.1.3, we selected the extended pinhole model with distortion correction function.
Sensor calibration was performed in a water basin using a set of ArUco [44] and circle markers in a near-planar arrangement (see Figure 2). The marker boards were placed on the floor of the basin, and the sensor was positioned in certain orientations with respect to the scene using a gantry system. The image data were recorded continuously in video mode. Hence, more data than necessary (from approximately ten different sensor positions) were recorded. Calibration points were selected automatically using our own software tool, and calibration parameters were calculated using the commercially available BINGO bundle adjustment software [45].
Calibration was evaluated using consecutively performed static measurements of given specimens, such as a ball bar and a plane-normal with calibrated measures. After analysis of the evaluation measurements of the ball bar, a 3D error-compensation function was generated and used as the final part of the calibration. Results of the evaluation measurements are documented in Section 3.
Calibration of the color camera was performed analogously using BINGO software, whereas IMU calibration was performed according to the method introduced by Furgale et al. [46].
An extensive survey of different calibration procedures for underwater systems was conducted by Shortis [47].

Data Recording in Movement

Effects of Sensor Movement on Measurement Data
The relative movement between recording camera and observed measurement object leads to a shift of the pixels representing a given object point. When this shift exceeds one or two pixels per sequence recording, it leads to smearing of the image points comparable to the application of an average filter operator. Therefore, it leads to errors in 3D measurement.
If these errors are to be corrected, the typical application scenarios of the sensor must be considered. Let us consider the case of a linear sensor motion with constant velocity v, image sequence length n, and 2D image frame rate f. For simplicity, and due to the lack of exact a priori information concerning the object distance of a mapped point, we assume a constant object distance, d. Additionally, the virtual principal distance, c, the pixel size, ps, and the binning factor, b, are influencing parameters. The pixel shift, ∆s (in camera pixels), is:

∆s = (v · n · c) / (f · d · ps · b). (1)

Let us consider the case of fast sensor motion and a high 2D image frame rate f. Let v = 1 m/s, c = 17 mm, ps = 0.009 mm (binning mode), d = 2 m, and f = 900 Hz. Additionally, we assume a motion direction mainly in the Y direction of the camera coordinate system (perpendicular to the optical axis of the system). Let us consider a ten-image (n = 10) sequence recording to obtain one 3D scan. The image recording time is approximately 10/900 Hz ≈ 11 ms. The actual movement, ∆S, of the sensor in this time is 11 mm. An object point at a 2 m distance "moves" in the image plane by ∆s ≈ 10.4 px per sequence, i.e., approximately one pixel per image in the sequence. Using a frame rate of f = 300 Hz instead, we already have a 31-pixel shift per sequence and a three-pixel shift per image.
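The worked numbers above can be reproduced with a one-line helper, assuming the pixel shift follows ∆s = v·n·c/(f·d·ps·b), a form reconstructed to be consistent with the numerical examples in the text (10.4 px at 900 Hz, 31 px at 300 Hz):

```python
def pixel_shift(v, n, c, f, d, ps, b=1):
    """Approximate image-plane shift (in pixels) of an object point at
    distance d during an n-image sequence recorded at frame rate f while
    the sensor moves laterally at velocity v. c is the virtual principal
    distance, ps the (binned) pixel pitch, b the binning factor.
    All lengths in metres, v in m/s, f in Hz."""
    return (v * n * c) / (f * d * ps * b)

# Parameters from the text: v = 1 m/s, n = 10, c = 17 mm, d = 2 m,
# ps = 0.009 mm (binning mode).
print(pixel_shift(1.0, 10, 0.017, 900, 2.0, 9e-6))  # ≈ 10.5 px per sequence
print(pixel_shift(1.0, 10, 0.017, 300, 2.0, 9e-6))  # ≈ 31.5 px per sequence
```

The one-pixel-per-image threshold at 900 Hz is exactly why the sensor uses such a high 2D frame rate.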

Motion Compensation
Motion compensation is achieved by shifting the entire image according to the "true" motion direction (Equation (1)) with respect to the x- and y-axes of the image coordinate system. Let α be the motion direction relative to the x-axis. Then:

∆x = cos(α) · ∆s; ∆y = sin(α) · ∆s. (2)

Hence, each image (n = 1) must be shifted by −∆s = (−∆x, −∆y). In order to compensate for the sensor motion, Equation (1) can be used to construct a correction function for the 2D images. The parameters c, f, n, ps, and b are known, whereas v, d, and α must be estimated. For simplicity, d should be set to a constant value obtained by estimating the average distance of the object measurement points. This assumption can be made because the points on the object surface are at a similar distance from the sensor, and the distance change within one scan sequence (a few milliseconds) is negligible.
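A minimal sketch of the per-image correction of Equation (2); that the offset grows linearly across the sequence follows from the constant-velocity assumption, and the function name is illustrative:

```python
import math

def compensation_offset(ds_per_image, alpha, i):
    """Offset (-dx, -dy) applied to the i-th image of a sequence to undo
    a per-image pixel shift ds_per_image along motion direction alpha
    (radians, measured from the image x-axis), per Equation (2)."""
    dx = math.cos(alpha) * ds_per_image * i
    dy = math.sin(alpha) * ds_per_image * i
    return (-dx, -dy)

# With ~1.05 px shift per image (f = 900 Hz example) and motion along x,
# the 6th image of the sequence is shifted back by about 5.25 px.
print(compensation_offset(1.05, 0.0, 5))
```

Applying this offset to every image maps a moving object point onto the same pixel throughout the sequence, so the temporal correlation sees a static pattern again.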
This kind of motion compensation is valid for observations in which all object points lie close to a virtual plane in space, perpendicular to the main observation direction of the sensor. When the sensor is tilted by an angle τ, a systematic distance shift arises from the lower to the upper image regions, and the shift of distinct image lines must be treated differently depending on τ.

Motion Velocity and Direction Estimation
The estimation of the sensor motion can be realized using several methods. The simplest method is the use of constants for v and d according to predefined control inputs (for example, if the velocity of the carrying ROV is known) and a default standard value for d. Alternatively, data obtained by the IMU and current 3D measurement data from the last scan (real-time measurement) can be used for estimation. This would require additional calculation, which must be provided with low latency. This kind of estimation has the advantage of quick parameter modifications to actual ROV speed and direction changes.
In this work, we estimated the full six-degree-of-freedom sensor trajectory using color camera and IMU data. The developed motion compensation assumes the simplified case of a constant object distance. The position of the sensor is extrapolated by visual-inertial odometry [48] from the average pixel shift ∆s = (∆x, ∆y) over all structured image points of the color camera, combined with IMU data and iterative closest point (ICP) alignment of the 3D data [49]. The calculated pixel shift, ∆s, is transformed by back projection to the expected shift in the rectified images of the measurement cameras for correlation determination, using a standard distance and the color camera frame rate.
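The averaging step can be sketched as a toy stand-in for the visual-inertial odometry described above: mean the per-frame flow vectors of tracked color-camera points, read off the direction α, and invert the shift relation (Equation (1) with n = 1) to a lateral velocity. The inversion assumes the constant-velocity, constant-distance model; all numbers are the paper's example parameters:

```python
import math

def estimate_motion(flow, f_frame, d, c, ps):
    """Estimate the mean pixel shift per frame, the motion direction
    alpha, and a lateral velocity from sparse optical-flow vectors
    (dx, dy). d: assumed object distance, c: principal distance,
    ps: pixel pitch (all in metres), f_frame: frame rate in Hz."""
    n = len(flow)
    mx = sum(v[0] for v in flow) / n
    my = sum(v[1] for v in flow) / n
    ds = math.hypot(mx, my)        # mean pixel shift per frame
    alpha = math.atan2(my, mx)     # motion direction in the image plane
    v = ds * ps * d * f_frame / c  # inverse of the shift relation, n = 1
    return ds, alpha, v

# Three tracked points drifting ~1 px/frame along x at 900 Hz, d = 2 m:
ds, alpha, v = estimate_motion([(1.0, 0.0), (1.2, 0.0), (0.8, 0.0)],
                               900, 2.0, 0.017, 9e-6)
print(ds, alpha, v)  # ≈ 1.0 px, 0.0 rad, ≈ 0.95 m/s
```

In the real system, this image-based estimate is fused with IMU data and ICP alignment rather than used alone.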
Using the above assumptions and the approximated motion velocity and direction, the recorded object points in motion in the same image sequence are mapped onto the same image point. Then, all subsequent calculation steps, such as filtering, correlation, triangulation, etc. (see [40]), are applied in the same way as for static measurements. To avoid the additional performance requirement for shifting of images, the correlation determination is extended by offset ∆s. This means that the temporal correlation calculation is processed without performance degradation.

3D Model Generation
Using the initial estimation of the motion trajectory by visual-inertial odometry, a registration strategy with several steps is used for further refinement of the trajectory.
In preparation for the registration, the 3D data are filtered to reduce the size of the 3D point cloud and achieve an equalized spatial distribution of the chosen points.
Every 3D point cloud is registered sequentially against its predecessors using an iterative closest point (ICP) algorithm, with the goal of local optimization of the trajectory and improvement of the resulting 3D map. For this, a metascan created from a sliding window of registered preceding scans is used to provide more structure during ICP registration. The search radius is selected to be quite small (a few centimeters), depending on the velocity of the sensor, to reduce the risk of gross registration errors caused by the limited field of view and the expected weak geometric structure of the scene.
Because the remaining residual errors accumulate and the drift of visual-inertial odometry is not completely eliminated, a second registration with a continuous-time ICP method is performed in the next step. The basic idea is that the error of the trajectory in the temporal proximity of a considered pose is negligible. The trajectory is then split into subsections, and several successive 3D scans around a chosen reference scan are combined to form a partial map. These partial maps are again registered against their predecessors. The change in pose of a reference scan is then distributed over the poses between two reference scans to maintain the continuity of the trajectory. For small changes, a linear distribution (translation) or SLERP (rotation) is sufficient. To correct the accumulated drift, loops are detected and closed. For this purpose, the poses of the aggregated submaps are optimized, and the changes are subsequently distributed analogously to the individual poses. The resulting map is provided for live visualization during the measurement process. Postprocessing of the data to create the final 3D point cloud is performed with continuous-time SLAM [50].
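The distribution step (linear in translation, SLERP in rotation) can be sketched as follows. This is a generic formulation with quaternions in (w, x, y, z) order, not the authors' implementation, and composing the fractional corrections onto the actual poses is omitted:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1 = tuple(-b for b in q1)
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-8:                    # nearly identical rotations
        return q0
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def fractional_corrections(dt, dq, n):
    """Weight the full correction (translation dt, rotation quaternion dq)
    of a reference scan over the n + 1 poses between two reference scans:
    pose i receives the fraction i/n of the correction, which keeps the
    trajectory continuous."""
    qid = (1.0, 0.0, 0.0, 0.0)
    return [(tuple(i / n * c for c in dt), slerp(qid, dq, i / n))
            for i in range(n + 1)]

# A 90-degree correction about z, distributed over 3 poses: the middle
# pose receives half the translation and a 45-degree rotation.
dq90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
corr = fractional_corrections((1.0, 0.0, 0.5), dq90, 2)
print(corr[1])
```

Distributing the correction instead of jumping the reference pose avoids discontinuities that ICP on the next partial map would otherwise have to absorb.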

Evaluation Measurements with Static Sensor
The quality of calibration determines the magnitude of systematic measurement error. The systematic error is analyzed in all regions in the measurement volume by means of given specimens with a known geometry. For evaluation, we used length measurements of a calibrated ball bar with a spherical diameter of 100 mm, a center-point distance of 500 mm, and flatness measurements of a plane normal with a surface of 800 mm × 100 mm. The ball bar and plane normal were placed in different regions of the measurement volume at distances of 1.5 m to 2.4 m from the sensor in 100 mm steps. Figure 3 shows the specimen and the sensor device in a basin during evaluation measurement of the specimen.
Characteristic quantities used for quality evaluation are the length measurement error, l_e; the flatness error, f_e (see [34,51]); and their variation over the measurement volume, as well as the noise of the 3D measurement data. Noise was determined as the standard deviation of the 3D point distances from a fitted plane (for the plane normal) or a fitted sphere surface (for the ball bar).
Measurements were realized as follows. The specimens were placed on the floor of the basin. The sensor was carried by a triaxial gantry system and placed at different heights (distances) over the measurement objects. At least ten consecutive 3D measurements of the complete scene were performed for each height. Every 3D scan provided 3D measurement data on the sphere surfaces, the sphere distance, and the plane-normal surface. This procedure was repeated by a lateral shift of the UWS with respect to the scene to realize a modified location of the specimen in the vision field.
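The noise metric used above (standard deviation of point distances from a fitted sphere) can be sketched with a generic algebraic least-squares sphere fit; this is a textbook formulation solved via the normal equations, not the authors' evaluation software:

```python
import math

def solve4(a, b):
    """Gaussian elimination with partial pivoting for a 4x4 system."""
    n = 4
    a = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for cc in range(col, n + 1):
                a[r][cc] -= f * a[col][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][cc] * x[cc]
                              for cc in range(r + 1, n))) / a[r][r]
    return x

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve
    2*cx*x + 2*cy*y + 2*cz*z + k = x^2 + y^2 + z^2,
    with k = r^2 - cx^2 - cy^2 - cz^2, via the normal equations."""
    ata = [[0.0] * 4 for _ in range(4)]
    atb = [0.0] * 4
    for x, y, z in points:
        row = (2 * x, 2 * y, 2 * z, 1.0)
        rhs = x * x + y * y + z * z
        for i in range(4):
            atb[i] += row[i] * rhs
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    cx, cy, cz, k = solve4(ata, atb)
    r = math.sqrt(k + cx * cx + cy * cy + cz * cz)
    return (cx, cy, cz), r

def noise_std(points, center, r):
    """Noise metric: standard deviation of point distances from the
    fitted sphere surface."""
    res = [math.dist(p, center) - r for p in points]
    mu = sum(res) / len(res)
    return math.sqrt(sum((e - mu) ** 2 for e in res) / (len(res) - 1))

# Six exact points on a sphere with center (1, 2, 3) and radius 5:
pts = [(6, 2, 3), (-4, 2, 3), (1, 7, 3), (1, -3, 3), (1, 2, 8), (1, 2, -2)]
center, radius = fit_sphere(pts)
print(center, radius, noise_std(pts, center, radius))
```

On real scan data, the residual standard deviation rather than the fit itself is the quantity reported as noise.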
The first results of the length measurement of the ball bar showed a significant correlation between measurement distance and length measurement error, namely a proportional dependence. Hence, an appropriate error-compensation function in the 3D object space (Equation (3)) was generated according to a virtually defined sensor axis (x0, y0, z). The parameters were found heuristically by analysis of the first measurement data and were set to x0 = 490 mm, y0 = −100 mm, z0 = 1950 mm, and k = −1.5 × 10⁻⁶. Refined evaluation data were obtained by additional independent measurements. A more detailed description concerning the determination of a 3D compensation function is given by Bräuer-Burchardt et al. [52].

Table 1 documents the results of the static underwater measurements of the ball bar (sphere diameters and distance between sphere center points). Refinement includes a 3D correction function according to Equation (3) applied to the whole set of 3D measurement points.

Figure 4 shows the results of the ball-bar length measurement depending on measurement distance. A drift of the length values is clearly visible, whereas refinement reduces this effect drastically. Figure 5 shows three examples (1.7 m, 2.0 m, and 2.4 m) of the plane-normal measurements. Here, the flatness deviation is quite low up to a 2 m measurement distance and then rises slowly. Refinement did not yield an improvement here because the error-compensation function omits a Z-coordinate correction; too few data were available to achieve a meaningful estimation of the systematic error function.

Table 2 shows the determined noise of the 3D measurement points on the surfaces of sphere 1 and sphere 2 and on the surface of the plane normal depending on object distance. As expected, noise increases with longer measurement distance. ¹ Standard deviation values on spheres are obtained from two to four independent measurements. ² Standard deviation values on planes are obtained from ten independent surface positions.

Measurements with Moved Sensor
First measurements with a moving sensor were performed in air in the laboratory. A constant linear motion with known velocity (0.2 m/s and 1.0 m/s) of the sensor was realized using a traversing track.
Video images were corrected using the known parameters for c, f, ps, and b and the estimated quantities d = d0 = 2 m, v = 0.2 m/s (1.0 m/s), and α = 0°. Figure 6 shows the resulting 3D point clouds of the measured objects (a cylinder, two ball bars, and a plaster bust) without and with motion compensation at a velocity of 0.2 m/s and a 375 Hz 2D frame rate. Results at 1.0 m/s velocity were not completely satisfying and should be improved in the future.

Subsequent experiments concerning motion compensation were performed in the water basin. Two different velocities were realized: 0.1 m/s and 0.7 m/s. The maximal sensor velocity of 1 m/s has not yet been tested underwater. Motion compensation (MC) was realized manually (for reference), as well as automatically, using the implemented compensation algorithm. Figure 7 shows the results of manual and automatic motion compensation at 0.7 m/s velocity compared to the measurement result without motion compensation (left) with the example of a plastic pipe.

Table 3 documents the standard deviations of the 3D points from the fitted sphere and cylinder shapes, respectively. Motion compensation provides a significant reduction in the standard deviation. Automatic motion compensation gives results comparable to manual compensation.

Measurement Example of 3D Model Generation by Registration of Consecutive Scans
Additional dynamic recordings were carried out using a triaxial gantry system. A pipeline was placed in the water, as well as the ball-bar specimen and the plane normal. The underwater scene is shown in the left image of Figure 8. Tests were performed with two sensor velocities of about 0.1 m/s and 0.7 m/s. The sensor distance to the measurement objects was consistently approximately 2.0 m. The measurements were carried out in clear water at a temperature of about 14 °C. There was a negligible amount of stray light from the very small windows of the hangar.
The plastic pipe of about 7 m length on the floor of the basin was scanned in continuous motion of the sensor. Consecutive scans were merged according to the description in Section 2.3. The result is shown in the right image of Figure 8. The generated point cloud is composed of 680 individual scans. The point cloud is colored according to height.

Discussion
In this paper, we presented a new underwater 3D scanning device designed for offshore (up to 1000 m depth) inspection applications. The sensor captures the surface of objects continuously and builds up a 3D model of the scanned object in real time by moving along the object with velocities up to 0.7 m/s. The sensor was tested in a water basin under clear water measurement conditions.
The presented results of the new structured-light-based underwater 3D sensor mark a milestone in the development of such sensor systems concerning the achievable measurement accuracy. Results of static measurements of the specimens show that the systematic measurement error is in the same range of accuracy as 3D air scanners of comparable measurement volume. Additionally, using the 3D error compensation function, the systematic error decreased to below 0.2 mm, which is approximately 1/5000 of the measurement volume dimension. This is more accurate than some comparable air scanners. All these results were obtained in clear water under near-optimal extrinsic conditions. Hence, these results show the high accuracy potential of the device and not the real performance in possibly harsh conditions in real inspection applications offshore.
Compared to previously presented setups or devices based on the same 3D data generation principles [32][33][34], our new system provides a considerably larger measurement volume, longer object distance, and shorter exposure time (see Table 4). These features, together with the developed motion compensation method, make application with a fast continuously moving sensor possible. The detected random errors are in the expected range. For precise measurements at high velocities, motion must be taken into account. The presented compensation via displacement in the image space already achieves a significant improvement in the application. This was shown by experiments with a traversing track in air and a gantry system for different speeds in water. Furthermore, the 3D output data, improved by motion compensation, allows for finer registration and thus the recording of more accurate 3D models. Experiments with a sensor velocity of 1 m/s must be conducted before offshore measurements can be performed.
The main task in future work is the extension of our experiments to offshore measurements under typical application conditions. These experiments should also analyze the influences of water turbidity, temperature, and salinity. Scattering effects of the particles in the water should also be studied. In order to simulate application at a 1000 m depth before real offshore experiments, we plan to test the UWS sensor system in an appropriate compression chamber.
Another task of future work is to realize an appropriate calibration method that can be effectively performed offshore, even under rough weather conditions.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality agreement.