Article

Vision-Based Localization System Suited to Resident Underwater Vehicles

Petar Trslić, Anthony Weir, James Riordan, Edin Omerdic, Daniel Toal and Gerard Dooly

1 Centre for Robotics & Intelligent Systems, University of Limerick, V94 T9PX Limerick, Ireland
2 School of Computing, Engineering, and Physical Sciences, University of the West of Scotland, Glasgow G72 0AG, UK
* Author to whom correspondence should be addressed.
Sensors 2020, 20(2), 529; https://doi.org/10.3390/s20020529
Submission received: 9 December 2019 / Revised: 10 January 2020 / Accepted: 16 January 2020 / Published: 18 January 2020
(This article belongs to the Special Issue Autonomous Underwater Vehicle Navigation)

Abstract

In recent years, we have seen significant interest in the use of permanently deployed resident robotic vehicles for commercial inspection, maintenance and repair (IMR) activities. This paper presents a concept, and a demonstration through offshore trials, of a low-cost, low-maintenance navigational marker that can eliminate drift in the vehicle's INS solution when the vehicle is close to the IMR target. The subsea localisation marker system is fixed in place on the resident field asset and is used by on-vehicle machine vision algorithms for pose estimation, facilitating high-resolution registration in the world coordinate frame at a high refresh rate. The paper presents an evaluation of the system during trials in the North Atlantic Ocean in January 2019. System performance and the propagation of position error are examined and estimated, and the effect of intermittent vision-based position updates on the Kalman filter and the onboard INS solution is discussed. Experimental results for a commercial state-of-the-art inertial navigation system operating in pure inertial mode are presented for comparison.

1. Introduction

In recent years, we have seen the conceptual introduction of resident robotic platforms to underwater inspection, maintenance and repair (IMR) activities within the oil and gas sector. This is driven by cost-saving initiatives to reduce the levelized cost of energy (LCOE) for the sector and by the effort to compete with emerging sectors such as hydraulic fracturing. Traditionally, subsea IMR activities are carried out using a work-class hydraulic remotely operated vehicle (ROV). These vehicles are tethered and controlled directly from a surface support vessel and generally incur significant expenditure associated with the cost of the vessel and crew. The concept of permanently deployed ROV systems has been in the literature for many years [1]; however, only in recent years have these systems been introduced into commercial marine activities offshore. IKM Subsea [2] was one of the first commercial entities to bring this from the proof-of-concept prototype stage to a fully operational system. IKM developed its fully electric Merlin UCV ROV, modifying its manipulator protocols and umbilical connectors for shore-based remote piloting and long-term deployment [3]. This was developed and deployed under a breakthrough ten-year contract for Statoil's Visund and Snorre B field assets, where one resident ROV system was included for onsite deployment [4]. The outputs from this and similar commercial projects [5,6] justified the use of resident robots on underwater infrastructure projects, driving a change in how offshore operators manage IMR activities. Resident systems allow for fewer personnel offshore and for shared onshore resources, thereby reducing costs and enabling more efficient operations. Additionally, one of the primary benefits of this approach was shown to be the extended operational weather window achievable by having systems deployed on the seabed at all times.
This extended weather capability has significant benefits, not just for oil and gas but also for the offshore wind sector, which has seen the rollout of many large infrastructure projects in European waters in recent years. However, within offshore wind and oil and gas, multiple assets are typically spread across hundreds of square kilometres of seabed, so a resident vehicle capable of performing inspections across these distributed assets is required. This mobility is being addressed through developments in resident autonomous underwater vehicles (AUVs) [7,8]. However, there are many open concerns for resident AUV systems suited to offshore wind, including major technical building blocks such as autonomy, power, communications, navigation accuracy, and connectivity [9].

2. Hardware and Experimental Setup

The experimental setup for the trials, performed during January 2019 in the North Atlantic Ocean, is shown in Figure 1. The system consists of a control cabin, a launch and recovery system (LARS), a tether management system (TMS), and the remotely operated vehicle. The ROV used for the trials was a Sub-Atlantic Comanche work-class ROV, one of the standard ROVs used throughout the offshore sector. The ROV was deployed from the vessel using the launch and recovery system with a corresponding cage-type TMS. For the initial experiments, the tether management system was deployed to the seabed, acting as a 'fixed subsea structure'. The light beacons were fixed to the TMS, creating an active light beacon marker that was used to provide high-refresh-rate, long-range visual pose estimation for the ROV.

2.1. Light Marker and Camera

The system for visual pose estimation consists of an IDS uEye machine vision camera with a Sony 1/1.8″ CMOS sensor (IMX265) and a fixed-focus Lensagon BM4518S lens. The camera is enclosed in a subsea housing with a flat port. Four conventional subsea LED lights (Bowtech LED-K-3200-DC) were used as the light marker. The lights, standard throughout the sector, are shown in Figure 2. The four lights were attached to the tether management system, creating a unique light marker with four beacons. To minimise the localization error, the camera must be precalibrated; to achieve the highest precision, best practice is to calibrate the camera on site. Once calibrated, the camera is ready for image acquisition, and the acquired images are sent to a topside PC where the image processing is performed. The camera calibration and image acquisition process for ROV localization, as well as the hardware setup, are presented in detail in [24,25].
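As an illustration of this calibration and undistortion step, the sketch below uses OpenCV's standard pinhole calibration; the board geometry, file names, and paths are placeholders rather than values from the trials.

```python
# A minimal sketch of the on-site calibration and undistortion step using
# OpenCV's pinhole model. Board geometry, file names, and paths are
# placeholders, not values from the trials.
import glob

import cv2
import numpy as np

PATTERN = (6, 9)  # inner corners of a 7 x 10 chessboard panel

# 3D corner positions on the planar calibration panel (z = 0), in square units.
template = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
template[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for fname in glob.glob("calib/*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_pts.append(template)
        img_pts.append(corners)
        size = gray.shape[::-1]

# K holds the intrinsic parameters, dist the lens distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

# Every frame acquired afterwards is undistorted before further processing.
frame = cv2.imread("frame.png")
undistorted = cv2.undistort(frame, K, dist)
```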

2.2. The Navigation System

The heart of the ROV navigation system is the state-of-the-art PHINS 6000 inertial navigation system. The system is coupled with a Nortek 500 DVL, a depth sensor, a Teledyne Ranger 2 ultra-short baseline (USBL) system, and an Okeanus GPSR-3015G differential GPS unit. The technical specifications of the navigation system components are given in Table 1. The navigation filter within the commercial INS unit is an extended Kalman filter (EKF), which fuses the internal and external sensor data. The GPS unit and the USBL are used for absolute navigation measurements and position drift corrections. Additionally, the GPS unit is used for the initial alignment of the fibre optic gyroscopes (FOGs) within the INS unit and for navigation while the vehicle is on the surface, whereas the USBL provides the absolute position underwater. Without the USBL, the ROV position estimate drifts over time underwater.
Section 3 further describes the PHINS 6000 inertial system: its components, operating modes, and the propagation of the position error in pure inertial mode.

3. Inertial Navigation System PHINS 6000

The commercial inertial navigation system PHINS 6000 was used during the trials. The system represents a gold standard within marine robotics navigation and consists of three major components:
  • An inertial measurement unit (IMU) consisting of three fibre optic gyroscopes and three accelerometers,
  • An inertial navigation system (INS) resolving the inertial measurements and updating position, velocity, and attitude,
  • An extended Kalman filter (EKF) for optimal integration of external and internal sensor measurements.
Figure 3 shows a functional block diagram of the PHINS Kalman filter. If no external sensor data are fed in, the output of the INS unit is based purely on the IMU measurements. These data are used to update the coefficients of the error equations, which in turn are used to update the covariance matrix. The INS estimates are compared with the observed external sensor measurements, and the differences are used as feedback to the INS. When no external sensor measurement is received, the Kalman filter still provides error bound estimates. A software switch separates the external sensors from the EKF, making it possible to operate the system in different modes with various combinations of external sensor data; a toy illustration of this behaviour follows below.
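The following one-dimensional filter is a minimal sketch, not the proprietary PHINS model: during prediction-only (pure inertial) operation the covariance, and hence the reported error bound, only grows, while an external fix such as a USBL position pulls the estimate back and shrinks the bound. All noise values are illustrative.

```python
# Toy 1-D filter (not the proprietary PHINS model): without external
# measurements the covariance, and hence the reported error bound, only
# grows; an external fix (e.g., USBL) shrinks it again. Values illustrative.
import numpy as np

Q = 0.05**2   # process noise from integrating the IMU, per second
R = 0.30**2   # USBL measurement noise variance

def predict(x, P, v, dt):
    # Dead-reckoning step: integrate velocity, inflate uncertainty.
    return x + v * dt, P + Q * dt

def update(x, P, z):
    # External fix: the Kalman gain blends prediction and measurement.
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 0.25**2                    # initial position [m] and its variance
for t in range(1, 601):                # 10 min at 1 Hz
    x, P = predict(x, P, v=0.5, dt=1.0)
    if t % 120 == 0:                   # intermittent absolute position fix
        x, P = update(x, P, z=0.5 * t)
    error_bound = np.sqrt(P)           # the error bound the filter reports
```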

3.1. PHINS Operating Modes

The navigation system PHINS 6000 operates with various external sensors such as GPS, USBL/LBL, a depth sensor, and DVL, and runs in different modes depending on the combination of external sensors used. The GPS is typically used for the initial heading alignment process while on the ship's deck and to acquire an absolute, geo-referenced position while the ROV is on the surface.
While operating underwater, there are three PHINS operating modes based on the external sensors used:
  • INS + USBL/LBL + (DVL),
  • INS + DVL,
  • INS (Pure inertial).
The best pose estimation performance is achieved when PHINS operates in USBL + INS mode. The USBL provides an absolute position, while the INS reduces the noise on the USBL position measurements. As shown in Table 2, the PHINS position accuracy in this mode is three times better than the accuracy of a standalone USBL system. Throughout the trials, the measured standard deviation of the ROV position in this mode remained between 0.25 m and 0.3 m. Additionally, the DVL can be used in conjunction with the USBL and INS measurements to provide additional precision, especially in the case of intermittent USBL signal loss.
Without a USBL/LBL absolute position fix, the position estimate drifts over time. The DVL + INS operating mode minimises this drift through continuous speed-over-ground measurements; in this mode, the position drift of the PHINS 6000 is approximately 0.1% of the travelled distance. Pure inertial mode, without any external sensor aiding, is considered the worst-case scenario. The position is then estimated purely by integrating the IMU data, so the error depends on the quality of the FOGs and accelerometers.

3.2. Propagation of Errors in Pure Inertial Mode

Position estimation accuracy in the pure inertial mode depends on two factors: (1) the accuracy of the IMU measurements, i.e., the accuracy of the accelerometers and FOGs, and (2) the initial position, velocity, and attitude errors obtained after the alignment/position fix. Since the IMU data are integrated over time, the position error contained within the INS error covariance matrix propagates with time as well. Therefore, assuming the same initial position error, the propagation of the position error in the pure inertial mode is a function of time only. In practice, the initial position error varies mostly with the quality of the position fix. Over a few days of ROV operations, the position error based on the USBL position fix was between 0.23 m and 0.57 m. Factors that may influence the quality of the position fix include the strength of the USBL signal, partial occlusion of the USBL responder, subsea noise pollution, and the distance between transponder and receiver.
The propagation of the position error in the pure inertial mode was further studied. Figure 4 shows the propagation of the measured position error during three tests of various durations, performed in the North Atlantic Ocean off the coast of Ireland. Each test started with PHINS operating in the INS + USBL mode. As shown in the magnified section, the initial position error was between 0.25 m and 0.33 m, depending on the test. After the initial position fix was acquired, the INS was set to operate in the pure inertial mode.
The continuous lines show the recorded position error propagation over time. During the tests, the ROV was manually piloted while performing a general visual inspection of the seabed. At the end of each test, PHINS was switched back to INS + USBL mode, which reduced the position error to its initial value. Although the ROV paths during the tests differed, the error propagation behaved almost identically: the position error is approximately 2 m after 2 min, 12 m after 5 min, and 40 m after 9 min. Since PHINS is a commercial system, its error propagation model is not published, so it was necessary to measure the error propagation and model the error propagation function. The dashed lines in Figure 4 present the corresponding 3rd-order error propagation functions calculated using least-squares polynomial curve fitting. If a vision-based position fix is acquired at any time during a mission, this function can be used to predict the position and position error estimates, assuming PHINS continues to operate in the pure inertial mode.
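A minimal sketch of this fitting step is shown below, assuming NumPy; the sample points are illustrative values read off Figure 4, not the raw trial logs.

```python
# Least-squares 3rd-order fit of the measured pure-inertial position error
# versus time, as described above. The sample points are illustrative values
# read off Figure 4, not the raw trial logs.
import numpy as np

t_min = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 7.0, 9.0])     # minutes in pure inertial
err_m = np.array([0.3, 0.7, 2.0, 4.5, 12.0, 24.0, 40.0])  # position error [m]

coeffs = np.polyfit(t_min, err_m, deg=3)   # 3rd-order polynomial coefficients
err_model = np.poly1d(coeffs)

# Predicted error bound t minutes after a (vision-based) position fix;
# Section 3.2 limits the model's validity to roughly 6 min.
print(err_model(6.0))
```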
In the next section, the performance of the vision-based pose estimation system used during the trials is compared with the USBL data. If comparable, the visually estimated pose data could be used as a PHINS position update, and the estimated position error could then be reduced using intermittent position fixes.

Additional Considerations

Position error estimates provided by PHINS are based on theoretical models of the INS accelerometers and gyroscopes and are therefore not direct physical measurements. Although the relative difference in the initial position error was small across the three tests, the difference grows rapidly over longer periods of time, as shown in Figure 4. For that reason, the derived error propagation function is considered to describe the behaviour of the system's error propagation correctly for up to 6 min.

4. Visual Pose Estimation

A visual position estimation method for subsea localization is described in this section. Since the method is an upgrade of the method used during earlier field trials, a more detailed explanation of the image acquisition, camera calibration, and pose estimation procedure can be found in [25]. Prior to pose estimation, the camera was calibrated on site using a calibration panel with a 7 × 10 chessboard pattern. The calibration algorithm assumes a pinhole camera model [26]. The calculated lens distortion and camera intrinsic parameters are then used to correct image distortion. The method estimates the relative position between the camera and a marker of known size, shown in Figure 2, using a single camera. The proposed solution consists of four conventional light beacons mounted on the tether management system, forming an active light navigation marker. For the experiment, the TMS was deployed to the seabed, acting as a permanently fixed subsea structure. It is assumed that the absolute position of the subsea structure is known in the world frame; therefore, the absolute position of the light marker is measured and known.

4.1. Image Processing

Figure 5 shows an image captured during dry testing of the algorithm in the laboratory. As shown in the figure, light conditions during this test were more complex than during the trials (Figure 2), since at depths greater than 30 m sunlight reflections off metal parts are weak. The light marker with four light beacons is in the centre of the frame. Reflections of the marker beacons are visible on the floor and on the cabinet to the right of the marker. The marker light is slightly scattered, and multiple light sources are on the ceiling of the room. The image is first undistorted and blurred to partially soften reflections and scattered light (b,c). The image is then binarized (d), and objects with fewer than P pixels are removed (e), where the size of P is determined experimentally. In the next step, the image is morphologically closed (f), which fills the gaps between two objects separated by fewer than N pixels. As the marker moves further from the camera, the distance between the light beacons mapped into the camera frame is reduced; therefore, N must at all times be smaller than the distance between the two closest light beacons in the camera frame. In the last step, the centres of mass of the remaining objects are calculated (g) and the roundness of the objects is checked. The four roundest objects are then chosen as candidates for pose estimation (h). A sketch of this pipeline is given below.
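The sketch below assumes OpenCV; P_MIN and N_PIX stand in for the experimentally tuned values P and N, and all thresholds and file names are illustrative.

```python
# Sketch of the beacon-detection pipeline above, assuming OpenCV. P_MIN and
# N_PIX stand in for the experimentally tuned P and N; all values illustrative.
import cv2
import numpy as np

P_MIN, N_PIX = 50, 15
K, dist = np.load("K.npy"), np.load("dist.npy")   # intrinsics from calibration

img = cv2.undistort(cv2.imread("frame.png"), K, dist)            # (b) undistort
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (9, 9), 0)                         # (c) blur
_, binary = cv2.threshold(blur, 240, 255, cv2.THRESH_BINARY)     # (d) binarize

# (e) remove objects with fewer than P pixels
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] < P_MIN:
        binary[labels == i] = 0

# (f) morphological closing bridges gaps narrower than N pixels
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (N_PIX, N_PIX))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# (g, h) keep the four roundest blobs as beacon candidates
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

def roundness(c):
    area, perimeter = cv2.contourArea(c), cv2.arcLength(c, True)
    return 4.0 * np.pi * area / perimeter**2 if perimeter > 0 else 0.0

beacons = sorted(contours, key=roundness, reverse=True)[:4]
centres = [cv2.minEnclosingCircle(c)[0] for c in beacons]        # beacon centres
```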

4.2. Pose Estimation

Given the coordinates of the light beacons $B^{M}$ in the marker frame, their corresponding image coordinates $I_{B}$, and the intrinsic camera parameters $K$ acquired during the calibration process, the extrinsic camera parameters are calculated as

$$[R_{Cam}^{M},\ t_{Cam}^{M}] = E(B^{M}, I_{B}, K),$$

where $B^{M}$ is an $M \times 3$ matrix with at least $M = 4$ coplanar points, $I_{B}$ is the corresponding $M \times 3$ matrix, $R_{Cam}^{M}$ is a 3D rotation matrix, and $t_{Cam}^{M}$ is a 3D translation vector.
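The function $E(\cdot)$ is not named in the text; one possible realisation, sketched below, is OpenCV's planar PnP solver. The beacon spacing and the detected image points are placeholders, not the trial geometry.

```python
# E(.) is not named in the text; one possible realisation is OpenCV's planar
# PnP solver, sketched here. Beacon spacing and image points are placeholders.
import cv2
import numpy as np

# B^M: beacon coordinates in the marker frame (coplanar, z = 0), in metres.
B_M = np.array([[0.0, 0.0, 0.0],
                [1.2, 0.0, 0.0],
                [1.2, 0.8, 0.0],
                [0.0, 0.8, 0.0]], dtype=np.float32)

# I_B: matching image coordinates of the four detected beacons, e.g. the
# `centres` from the detection sketch, in the same order as the rows of B_M.
I_B = np.array([[612.0, 340.2], [688.5, 341.0],
                [687.9, 402.7], [611.8, 401.9]], dtype=np.float32)

K, dist = np.load("K.npy"), np.load("dist.npy")   # intrinsics from calibration

# IPPE is a PnP variant dedicated to four or more coplanar points.
ok, rvec, t = cv2.solvePnP(B_M, I_B, K, dist, flags=cv2.SOLVEPNP_IPPE)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix and translation vector
```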
Figure 6 shows the relevant homogeneous transformations between the ROV and the world frame. To validate the performance of the vision pose estimation, the ROV position was measured simultaneously with the USBL system. The USBL position measurements are considered the ground truth position data.
The output of the visual pose estimation is the homogeneous transformation between the light marker and the camera, $H_{M}^{Cam}$, consisting of the relative position vector $t_{Cam}^{M}$ and orientation matrix $R_{Cam}^{M}$. The position of the ROV in the world frame is calculated as

$$H_{world}^{ROV} = H_{Cam}^{ROV}\, H_{M}^{Cam}\, H_{world}^{M},$$

where the transformation matrix $H_{Cam}^{ROV}$ describes the relative position and orientation between the ROV and the camera frame, and $H_{M}^{Cam}$, the transformation between the camera and the light marker, is acquired using the visual pose estimation. Since the TMS with the light beacons represents a permanently deployed subsea structure, the position and orientation of the marker in the world frame, $H_{world}^{M}$, were measured prior to the experiment; they are therefore known and assumed not to change over time.
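A sketch of this chain as 4 × 4 homogeneous transforms is given below; the file names for the surveyed transforms are placeholders, and the composition follows the equation above.

```python
# Sketch of the transformation chain above. Each H maps homogeneous points
# from the subscript frame to the superscript frame; H_cam_rov (camera
# mounting) and H_world_m (marker pose) are surveyed offline, H_m_cam comes
# from the visual pose estimate. File names are placeholders.
import numpy as np

def make_H(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.ravel(t)
    return H

H_m_cam = make_H(R, t)                # marker -> camera, from the PnP solver
H_cam_rov = np.load("H_cam_rov.npy")  # camera -> ROV, from the mounting survey
H_world_m = np.load("H_world_m.npy")  # world -> marker, measured at deployment

# H_world_rov maps world-frame points into the ROV frame (cf. the equation).
H_world_rov = H_cam_rov @ H_m_cam @ H_world_m
```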
This method showed excellent pose/position estimation when the distance between the marker and the camera is less than 3.5 m, and sufficient accuracy for navigation when the position is estimated from greater distances, out to 10 m from the marker [25]. As shown in Figure 7, during the visual pose estimation process, a relative heading between the marker and the camera maps as a perspective distortion of the marker in the camera projection plane. If the distortion is not detected correctly, the rotation matrix $R_{Cam}^{M}$ contains angle measurement errors, and from a longer distance those relatively small angle errors can produce a significant error in the pose estimation. Since the orientation of the marker in the world frame is known, and the ROV attitude is measured with the high-precision IMU, the relative orientation between the camera and the marker, $R_{r\,Cam}^{M}$, can be derived and used to compensate for the angle measurement error.
The corrected position of the marker in the camera coordinate frame is calculated as

$$p_{Cam}^{M} = t_{M}^{Cam}\, R_{r\,Cam}^{M}.$$
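A sketch of this correction is given below. The helper attitude_from_imu() is hypothetical, and the exact rotation composition depends on the frame conventions used, so this illustrates the idea rather than reproducing the paper's implementation.

```python
# Sketch of the angle correction above: the noisy vision-estimated rotation
# is replaced by a relative rotation derived from the IMU attitude and the
# surveyed marker orientation. attitude_from_imu() is a hypothetical helper;
# the exact composition depends on frame conventions.
import numpy as np

R_world_rov = attitude_from_imu()       # ROV attitude from the FOG IMU
R_rov_cam = np.load("R_rov_cam.npy")    # fixed camera mounting rotation (surveyed)
R_world_m = np.load("R_world_m.npy")    # marker orientation in world (surveyed)

# IMU-derived relative rotation between camera and marker (illustrative chain):
R_r = R_rov_cam.T @ R_world_rov.T @ R_world_m

# Re-express the vision translation with the IMU-derived rotation
# (t is the translation vector from the PnP solver):
p_cam_m = np.ravel(t) @ R_r
```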
As discussed in the previous section, the position estimation accuracy in pure inertial mode depends on the accuracy of the accelerometers and FOGs and on the accuracy of the initial position measurement. While the accuracy of the IMU data depends on the quality of the sensor and thus cannot be improved, a more accurate visual pose estimate directly improves the accuracy of the initial position measurement.

5. Results

The methods presented in Section 3 and Section 4 were used for position estimation and position error propagation prediction in a real-world test. Prior to the test, the TMS with the light-based marker was deployed to the seabed. Since the INS operates in the world frame, it was necessary to determine the marker position in the world frame.
After the TMS deployment, the ROV was docked and latched into the TMS. Based on the known ROV position within the TMS when docked and latched, the ROV navigation system was used to measure the TMS position and orientation in the world frame. The position and orientation of the light marker in the TMS frame had been measured prior to deployment; thus, the position of the light marker in the world frame was derived.

5.1. Visual Pose Estimation—Static Test

Figure 8 shows the ROV position estimation along the $X_{M}^{ROV}$, $Y_{M}^{ROV}$, and $Z_{M}^{ROV}$ axes, and the relative heading $\alpha_{M}^{ROV}$, in the marker reference frame. The coordinate frames were shown previously in Figure 7. During the test, the ROV was approximately 7.1 m from the light marker. The ROV held position while the heading was changed in increments of 5 degrees, for a total change of 20 degrees, as shown in Figure 8. The depth of the vehicle was constant. The continuous line presents the PHINS measurements of position and angles. The INS operated in its highest precision mode with all external sensors in use (USBL, DVL, depth sensor). The PHINS measurements are considered ground truth, with a position standard deviation during the test between 0.2 m and 0.3 m; the heading standard deviation was 0.04 deg, while the roll and pitch standard deviations were 0.001 deg. The dotted line shows the visual pose estimation measurements. As shown, the visually estimated pose data are noisy and contain significant errors within $R_{Cam}^{M}$. After the rotation matrix is replaced with $R_{r\,Cam}^{M}$, the new position is calculated. The dashed line presents the updated ROV position based on the fusion of the PHINS relative angle measurements and the camera-estimated position.
As shown in Figure 9, the position error is significantly reduced. As expected, the IMU angle measurements outperform the visually estimated relative orientation: the measurements contain less noise, and the updated position is more accurate. Figure 10 shows the position error distribution and mean values along the XYZ axes with a fitted normal distribution curve. The graphs in the left column present the distribution of the visually estimated relative pose error before the position correction; the right column shows the error distribution after the IMU angle measurements were used for the position correction. As shown, the mean error and standard deviation are significantly reduced. The biggest improvement is achieved in $X_{M}^{ROV}$ and $Y_{M}^{ROV}$, since those estimates depend mostly on the perspective distortion of the light marker in the camera frame, and thus on the relative angle measurements.
The error along the $Z_{M}^{ROV}$ axis is reduced as well; however, the improvement is less noticeable due to the small estimation error prior to the position correction. Since the distance from the marker along the z-axis is estimated by calculating the relative distance between the light beacon image coordinates, the position estimate along the z-axis is less prone to measurement errors. In addition, the error on the z-axis is less affected by errors in the relative heading $\alpha_{M}^{Cam}$, as shown in Figure 7b, since for relative heading errors of $\alpha_{M}^{Cam} = \pm 10$ deg, the distance $c \approx b$.
The experiment showed that the visually estimated pose data are comparable with the data acquired by PHINS operating in its highest precision mode. The vision system performed well in good visibility at up to 10 m from the target using the light-beacon-based position marker, with a standard deviation of less than 0.5 m.

5.2. Visual Pose Estimation—Dynamic Test

The ROV position and relative heading in the M frame during a dynamic test are shown in Figure 11. The test begins with the ROV placed approximately 6 m from the light marker along the $Z_{M}^{ROV}$ axis and approximately 2 m along the $X_{M}^{ROV}$ axis. After the initial ROV position is measured, the vehicle is sent to a position approximately two metres from the marker ($Z_{M}^{ROV} = 2$ m), aligned with the marker frame along the $X_{M}^{ROV}$ axis. The ROV depth was constant throughout the experiment. As the vehicle approaches the marker, at around 25 s and a distance of $Z_{M}^{ROV} \approx 3.5$ m from the marker, the camera-estimated position and heading (dotted line) start to overlap with the PHINS position (continuous line). While the static test in Figure 8 may suggest that the camera-estimated relative heading has a constant offset from the PHINS heading, Figure 11 shows that the offset changes with the distance from the marker: as the ROV gets closer to the marker, the camera-based pose estimation becomes more accurate.
However, to achieve more accurate camera pose estimation throughout the whole distance range, the IMU angle measurements have to be used. As shown in Figure 12, the position error is reduced in all axes, with the most significant improvement achieved in $X_{M}^{ROV}$ and $Y_{M}^{ROV}$, as expected. Figure 13 shows a series of images captured during the dynamic test as seen from the image acquisition camera. The measurement noise caused by partial light marker occlusion, most visible in the heading measurement in Figure 11, is significantly reduced after the camera pose estimation is corrected using the IMU angle measurements. The problems associated with light marker coverage have been addressed and discussed in more detail in [25]; an example of partial light marker occlusion by the ROV tether is shown in Figure 13 at time T = 42 s.

5.3. Visual Pose Estimation for INS Position Update

The experiment simulates a hypothetical scenario of an autonomous vehicle completing a docking manoeuvre at a docking station within a resident field. The initial conditions are as follows:
  • The vehicle has no USBL onboard and is travelling at cruising speed with a navigation system based solely on the INS.
  • The vehicle's DVL is disabled during the trial, so that the INS integration drift accumulates faster than it would over a 10 minute operation with DVL aiding.
  • The vehicle operates in a structured environment.
  • Approximately halfway to the docking station there is a known landmark which can be recognized by the onboard vision system, and whose position and orientation are known in the world frame.
The experiment starts at time T0 with the ROV docked and the navigation system running in USBL + INS mode, so the INS position standard deviation is low. After the position STD reached a minimum value of 0.26 m, PHINS was switched to operate in pure inertial mode, simulating USBL signal loss. The ROV was then piloted for approximately 10 minutes in the area around the TMS. Since the propagation of the position error in pure inertial mode does not depend on travelled distance or speed, the ROV was piloted in relatively close proximity to the deployed TMS to reduce the risk of damaging the vehicle. Figure 14 shows the relative northing and easting between the ROV and the light marker during the experiment.
The black dashed line shows the PHINS estimated position throughout the experiment, while the grey shaded area presents the corresponding position standard deviation. During the experiment, the ROV position was simultaneously measured with the USBL system; the blue continuous line shows this ground truth USBL position with its corresponding standard deviation. To avoid noisy measurements and achieve the highest precision, the support vessel with the USBL transponder was positioned in direct line of sight of the ROV at all times. Due to the polynomial growth of the position error, the error propagates slowly at the beginning. As shown in the figure, at time T1, after approximately 5 min, the PHINS position standard deviation is around 10 m, reaching over 40 m after 10 min at T4. At time T4, the PHINS system was switched back to INS + USBL mode, so the position error standard deviation (grey shaded area) dropped to its initial value and the ROV position was corrected. The ROV was then docked at T5, and the experiment finished.
Throughout the experiment, the position of the ROV was visually estimated twice: between times T1 and T2, representing the ROV passing a known landmark, and between T3 and T5, upon reaching the docking station. The magnified section of Figure 14 between T3 and T5 is shown in Figure 15, comparing the PHINS data, USBL data, and visually estimated pose data. The figure shows that the visually estimated pose (red continuous line) is comparable with the USBL data (blue continuous line). Therefore, in case of USBL signal loss while PHINS operates in pure inertial mode, the visual pose estimation data could be used as an intermittent INS position update to reduce the position error and to allow the subsea vehicle to transit towards another known landmark or an area covered by the USBL signal.
The influence of an intermittent, vision-based position update on the position error estimate was further investigated. Since the PHINS 6000 does not provide a designated input for a visually acquired position update, the visually estimated ROV positions between T1–T2 and T3–T5 could not be fed into the system's Kalman filter. However, the known initial position and the error propagation function modelled in Section 3 allow for estimation of the family of possible position trajectories and the corresponding position error.
Figure 16 shows a family of possible position estimation trajectories between times T2 and T4 (continuous red lines) for the case where the ROV position was updated between T1 and T2 and the ROV continued to operate in pure inertial mode until T4. A specific position trajectory could not be simulated since the exact mathematical model of the PHINS 6000 EKF, the sensor error models, and the raw IMU sensor data are not available. The red shaded area presents the estimated position STD between T2 and T4, in the case of a position update at T2, for the trajectory which aligns with the ground truth data. A sketch of this construction is given below.
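The sketch below illustrates how such a family can be constructed from the fitted error propagation polynomial of Section 3 (err_model from the earlier fitting sketch); the sampling scheme is illustrative, not the authors' exact procedure.

```python
# Sketch of the trajectory-family construction: after a vision fix the drift
# clock restarts, and the fitted 3rd-order polynomial (err_model from the
# earlier sketch) bounds the possible drift. Sampling scheme is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def trajectory_family(p_fix, t_min, err_model, n=50):
    """Each member drifts in a random direction with a magnitude bounded by
    the fitted error propagation polynomial."""
    members = []
    for _ in range(n):
        heading = rng.uniform(0.0, 2.0 * np.pi)  # drift direction
        scale = rng.uniform(0.2, 1.0)            # fraction of the error bound
        drift = scale * err_model(t_min)
        members.append(p_fix + np.c_[drift * np.cos(heading),
                                     drift * np.sin(heading)])
    return np.array(members)   # shape: (n, len(t_min), 2) northing/easting

t_min = np.linspace(0.0, 8.0, 33)                # minutes after the T2 fix
family = trajectory_family(np.array([0.0, 0.0]), t_min, err_model)
```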

6. Discussion and Conclusions

This paper presents experimental results of a vision-based localization system for resident ROVs/AUVs which can be utilised to eliminate drift error from an on-vehicle inertial navigation system. The proposed system is built around standard equipment found throughout the ROV sector. The vision-estimated pose based on active light marker recognition showed good overall performance and is comparable with the measured USBL ground truth position. While the INS operates without USBL/LBL, the visually estimated pose could be used as a position fix, thus reducing the error caused by position drift.
The system was tested in the North Atlantic Ocean during trials in January 2019. The propagation of the position error in pure inertial mode was measured and modelled as a function of time and the initially estimated position error. This function was used to simulate a family of possible position trajectories in the case that a vision-based position fix is acquired during the test. The vision-based pose estimation method, with IMU relative angle correction, was used to determine the relative pose between the ROV and a deployed subsea asset. The results showed that the proposed system performs well and can improve ROV/AUV localization underwater. The performance of the vision system depends on the water turbidity: in clear water it provides up to a 10 m range and results in a low-cost, stable localization platform that, unlike acoustic navigation systems, is not prone to noise pollution. In high water turbidity, the system's operating range becomes limited and UUV navigation must rely purely on acoustic-based pose estimation technology.
A high-precision INS with DVL aiding provides a strong platform; however, with the complexities involved in AUV-based IMR activities, it is not accurate enough for the transition of resident systems between assets or for close-quarter operations on subsea installations, as the position drift of such a configuration is 0.1% of the travelled distance. In close proximity to the marker, however, visual pose estimation has been shown to be a reliable source of an absolute position fix. Since resident ROVs/AUVs operate in a structured environment, this low-cost solution can provide a more cost-effective alternative to high-maintenance acoustic-based positioning systems.
Future work on this project is investigating the input of a real-time 3D dense reconstruction and tracking system, known as StereoFusion [27], as sensor aiding to the INS to improve subsea navigation.

Author Contributions

P.T. developed and implemented the algorithm; P.T., A.W., J.R., E.O., and G.D. worked on hardware integration; P.T., A.W., G.D., J.R., and D.T. organised and contributed to the experimental trials; G.D., E.O., and D.T. contributed to research programme scope development, supervision, and funding acquisition; P.T. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon works supported by Science Foundation Ireland and industry partners Shannon Foynes Port Company and the Commissioners of Irish Lights under the MaREI Research Centres Awards 12/RC/2302_P2 & 14/SP/2740, RoboVaaS EU ERA-Net Co-fund award through Irish Marine Institute and EU Horizon 2020 research and innovation programme project EUMarineRobots under Grant Agreement 731103.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AUV    Autonomous underwater vehicle
CMOS   Complementary metal-oxide-semiconductor
DVL    Doppler velocity log
EKF    Extended Kalman filter
FOG    Fibre optic gyroscope
GPS    Global positioning system
IMR    Inspection, maintenance and repair
IMU    Inertial measurement unit
INS    Inertial navigation system
LARS   Launch and recovery system
LBL    Long baseline
LCOE   Levelized cost of energy
LED    Light emitting diode
MRE    Marine renewable energy
OPEX   Operational expenditure
ROV    Remotely operated vehicle
RV     Research vessel
SLAM   Simultaneous localization and mapping
STD    Standard deviation
TMS    Tether management system
USBL   Ultra-short baseline

References

  1. Gilmour, B.; Niccum, G.; O'Donnell, T. Field resident AUV systems—Chevron's long-term goal for AUV development. In Proceedings of the 2012 IEEE/OES Autonomous Underwater Vehicles (AUV), Southampton, UK, 24–27 September 2012; pp. 1–5.
  2. UT2. UT3—Resident ROVs. UT3 2018, 12, 26–31.
  3. MacDonald, A.; Torkilsden, S.E. ROV in residence. Offshore Eng. Mag. 2019, 44, 52–55.
  4. MacDonald, A.; Torkilsden, S.E. IKM Subsea wins contract for Statoil's Visund and Snorre B platforms. Offshore Technol. Mag. 2016.
  5. McPhee, D. Equinor awards Saipem £35m subsea service deal at Njord field. Energy Voice 2019.
  6. Anonymous. Saipem continues with Shell license for subsea robotics development. Energy North. Perspect. Mag. 2019.
  7. Zagatti, R.; Juliano, D.R.; Doak, R.; Souza, G.M.; de Paula Nardy, L.; Lepikson, H.A.; Gaudig, C.; Kirchner, F. FlatFish Resident AUV: Leading the Autonomy Era for Subsea Oil and Gas Operations. In Proceedings of the Offshore Technology Conference, Houston, TX, USA, 30 April–3 May 2018.
  8. Matsuda, T.; Maki, T.; Masuda, K.; Sakamaki, T. Resident autonomous underwater vehicle: Underwater system for prolonged and continuous monitoring based at a seafloor station. Robot. Auton. Syst. 2019, 120, 103231.
  9. Newell, T. Technical Building Blocks for a Resident Subsea Vehicle. In Proceedings of the Offshore Technology Conference, Houston, TX, USA, 30 April–3 May 2018.
  10. Paull, L.; Saeedi, S.; Seto, M.; Li, H. AUV Navigation and Localization: A Review. IEEE J. Ocean. Eng. 2014, 39, 131–149.
  11. Nicosevici, T.; Garcia, R.; Carreras, M.; Villanueva, M. A review of sensor fusion techniques for underwater vehicle navigation. In Proceedings of the Oceans '04 MTS/IEEE Techno-Ocean '04, Kobe, Japan, 9–12 November 2004; Volume 3, pp. 1600–1605.
  12. Majumder, S.; Scheding, S.; Durrant-Whyte, H.F. Multisensor data fusion for underwater navigation. Robot. Auton. Syst. 2001, 35, 97–108.
  13. Subsea Inertial Navigation, iXblue. Available online: https://www.ixblue.com/products/range/subsea-inertial-navigation (accessed on 7 December 2019).
  14. SPRINT—Subsea Inertial Navigation System, Sonardyne. Available online: https://www.sonardyne.com/ (accessed on 7 December 2019).
  15. Balasuriya, B.; Takai, M.; Lam, W.; Ura, T.; Kuroda, Y. Vision based autonomous underwater vehicle navigation: Underwater cable tracking. In Proceedings of the Oceans '97 MTS/IEEE Conference, Halifax, NS, Canada, 6–9 October 1997; Volume 2, pp. 1418–1424.
  16. Ortiz, A.; Simó, M.; Oliver, G. A vision system for an underwater cable tracker. Mach. Vis. Appl. 2002, 13, 129–140.
  17. Carreras, M.; Ridao, P.; Garcia, R.; Nicosevici, T. Vision-based localization of an underwater robot in a structured environment. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; Volume 1, pp. 971–976.
  18. Sivčev, S.; Rossi, M.; Coleman, J.; Dooly, G.; Omerdić, E.; Toal, D. Fully automatic visual servoing control for work-class marine intervention ROVs. Control Eng. Pract. 2018, 74, 153–167.
  19. Cieslak, P.; Ridao, P.; Giergiel, M. Autonomous underwater panel operation by GIRONA500 UVMS: A practical approach to autonomous underwater manipulation. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 529–536.
  20. Hidalgo, F.; Bräunl, T. Review of underwater SLAM techniques. In Proceedings of the 2015 6th International Conference on Automation, Robotics and Applications (ICARA), Queenstown, New Zealand, 17–19 February 2015; pp. 306–311.
  21. Ribas, D.; Ridao, P.; Neira, J. Underwater SLAM for Structured Environments Using an Imaging Sonar; Springer: Berlin, Germany, 2010.
  22. Guth, F.; Silveira, L.; Botelho, S.; Drews, P.; Ballester, P. Underwater SLAM: Challenges, state of the art, algorithms and a new biologically-inspired approach. In Proceedings of the 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, 12–15 August 2014; pp. 981–986.
  23. Köser, K.; Frese, U. Challenges in Underwater Visual Navigation and SLAM. In AI Technology for Underwater Robots; Kirchner, F., Straube, S., Kühn, D., Hoyer, N., Eds.; Springer: Cham, Switzerland, 2020; pp. 125–135.
  24. Trslic, P.; Rossi, M.; Sivcev, S.; Dooly, G.; Coleman, J.; Omerdic, E.; Toal, D. Long term, inspection class ROV deployment approach for remote monitoring and inspection. In Proceedings of the MTS/IEEE OCEANS 2018, Charleston, SC, USA, 22–25 October 2018; pp. 1–6.
  25. Trslic, P.; Rossi, M.; Robinson, L.; O'Donnel, C.W.; Weir, A.; Coleman, J.; Riordan, J.; Omerdic, E.; Dooly, G.; Toal, D. Vision based autonomous docking for work class ROVs. Ocean Eng. 2020, 196, 106840.
  26. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2004.
  27. Rossi, M.; Trslić, P.; Sivčev, S.; Riordan, J.; Toal, D.; Dooly, G. Real-Time Underwater StereoFusion. Sensors 2018, 18, 3936.
Figure 1. Experimental setup used during the offshore trials in the North Atlantic Ocean in January 2019. The system consists of a control cabin, an A-frame launch and recovery system, the ROV, and the corresponding tether management system.
Figure 2. Underwater light used for the light marker attached to the TMS (top left); lights mounted on the TMS forming a light marker (top right), photographed from a rear ROV camera; the light marker observed from approximately 7 m distance (bottom), photographed with the image acquisition camera.
Figure 3. Functional block diagram of the Kalman filter.
Figure 4. The INS position error propagation in pure inertial mode.
Figure 5. Image processing stages. Image acquisition (a); image undistorted (b); image blurred (c); image binarization (d); objects with fewer than P pixels removed (e); image morphologically closed (f); finding centres of remaining objects and calculating object roundness (g); four roundest objects chosen (h).
Figure 6. Homogeneous transformations between the ship, ROV, TMS, and world coordinate frames.
Figure 7. The camera and light marker coordinate systems (a); the relative heading $\alpha_{M}^{Cam}$ between the image acquisition camera and the light marker (b) maps as a perspective distortion of the light beacons in the camera projection plane (c).
Figure 8. Position estimation of the camera in the marker coordinate frame during the static test.
Figure 9. Relative position error of the visually estimated pose before and after correction during the static test.
Figure 10. Position error distribution in the marker frame before (left column) and after correction (right column).
Figure 11. Position estimation of the camera in the marker coordinate frame during the dynamic test.
Figure 12. Relative position error of the visually estimated pose before and after correction during the dynamic test.
Figure 13. The dynamic test. The ROV position is estimated and corrected while the ROV approaches the light marker.
Figure 14. Relative ROV position throughout the experiment. The blue continuous line presents the ROV ground truth USBL position. The PHINS estimated position, while operating in pure inertial mode, is shown with a black dashed line. At time T4, PHINS was switched to operate in INS + USBL mode, and the ROV was docked at T5.
Figure 15. The comparison between PHINS data, USBL data, and the visually estimated pose acquired during the trials.
Figure 16. Family of position estimation trajectories (continuous red lines) from time T2 to T4 in the case of an ROV position update between times T1 and T2.
Table 1. The DVL, USBL, and GPS system technical specifications.

Nortek 500 DVL
  Bottom velocity
    Single ping std @ 3 m/s: 5 mm/s
    Long term accuracy: ±0.2% / ±1 mm/s
    Minimum altitude: 0.3 m
    Maximum altitude: 200 m
    Velocity resolution: 0.01 mm/s
  Current profiling
    Minimum accuracy: 1% of measured value / ±5 mm/s
    Velocity resolution: 1 mm/s

Teledyne Ranger 2 USBL
  Operating range: >6000 m
  System accuracy: 0.2% of slant range
  Position update rate: 1 s

Okeanus DGPS
  Position accuracy, GPS: <15 m
  Position accuracy, DGPS (WAAS): <3 m
  PPS time: ±1 µs

Table 2. The PHINS 6000 INS system technical specifications.

  Position accuracy with USBL/LBL: three times better than USBL/LBL accuracy
  Position accuracy with DVL: 0.1% of travelled distance
  Position accuracy with no aiding (1 min / 2 min): 0.8 m / 3.2 m
  Heading accuracy with GPS: 0.01 deg secant latitude
  Heading accuracy with DVL/USBL/LBL: 0.02 deg secant latitude
  Roll and pitch accuracy: 0.01 deg
  Heave accuracy: 5 cm or 5% (whichever is greater)
