Sensors · Article · Open Access · 16 January 2015

Inertial Sensor Self-Calibration in a Visually-Aided Navigation Approach for a Micro-AUV

1 Systems, Robotics and Vision, Department of Mathematics and Computer Science, University of the Balearic Islands, Cra de Valldemossa, km 7.5, Palma de Mallorca 07122, Spain
2 Balearic Islands Coastal Observing and Forecasting System (SOCIB), Data Center Parc Bit, Naorte, Bloc A, 2op. pta. 3, Palma de Mallorca 07121, Spain
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Inertial Sensors and Systems

Abstract

This paper presents a new solution for underwater observation, image recording, mapping and 3D reconstruction in shallow waters. The platform, designed as a research and testing tool, is based on a small underwater robot equipped with a MEMS-based IMU, two stereo cameras and a pressure sensor. The data given by the sensors are fused, adjusted and corrected in a multiplicative error state Kalman filter (MESKF), which returns a single vector with the pose and twist of the vehicle and the biases of the inertial sensors (the accelerometer and the gyroscope). The inclusion of these biases in the state vector permits their self-calibration and stabilization, improving the estimates of the robot orientation. Experiments in controlled underwater scenarios and in the sea have demonstrated a satisfactory performance and the capacity of the vehicle to operate in real environments and in real time.

2. Platform Description

2.1. Hardware

The Fugu-C structure consists of a cylindrical, transparent, sealed housing containing all of the hardware, plus four propellers, two horizontal and two vertical. The dimensions of the cylinder are ø212 × 209 mm, and the motor supports are 120 mm long. Although the propeller configuration could provide the vehicle with four DOFs (surge, heave, roll and yaw), the vehicle dynamics make it almost passively stable in roll and pitch. Consequently, the robot has three practical DOFs: surge, heave and yaw (x, z, yaw).

Figure 1a shows the 3D CAD model of the vehicle, Figure 1b shows the hardware and the supporting structure, and Figure 1c,d show two views of the final result.

Figure 1. Fugu-C. (a) The CAD model of the vehicle; (b) the hardware and structure; (c,d) two different views of the robot.

Fugu-C can operate as an AUV, but it can also be teleoperated as a remotely operated vehicle (ROV) with a low-cost joystick. Its hardware includes (see Figure 2):

Figure 2. The schematic of the Fugu-C hardware connection.
  • Two IEEE-1394a stereo cameras: one oriented toward the bottom, whose lenses provide a 97° horizontal field of view (HFOV), and another oriented forward with a 66° HFOV.

  • A 95-Wh polymer Li-Ion battery pack. With this battery, Fugu-C has an autonomy of about 3 h, depending on how much processing is done during the mission. For an autonomous mission with self-localization, the autonomy is reduced to two hours.

  • A PC104 board based on an Atom N450 at 1.66 GHz with a 128-GB 2.5-inch SSD.

  • A three-port FireWire 800 PC104 card to connect the cameras to the main computer board.

  • A nano IMU from Memsense, which provides triaxial acceleration, angular rate (gyro) and magnetic field data [28].

  • A power supply management card, which permits turning the system on and off, charging the internal battery and selecting between the external power supply and the internal battery.

  • A microcontroller-based serial motor board that manages some water leak detectors, a pressure sensor and four DC motor drivers.

  • An external buoy containing a USB WiFi antenna for wireless access from external computers.

  • A set of DC-DC converters to supply independent power to the different robot modules.

  • Four geared, brushed DC motors with a nominal speed of 1170 rpm and a torque of 0.54 kg·cm.

A 16-pin watertight socket provides the connectivity for wired Ethernet, external power, battery charging and the WiFi buoy. Note that not all connections are needed during an experiment: if the wired Ethernet is used (tethered ROV), the buoy is not connected; conversely, if the buoy is connected (wireless ROV/AUV), the wired Ethernet remains disconnected.

The autonomy of the robot when operating in AUV mode with the current battery has been calculated taking into account that its average power consumption at maximum processing (all sensors on, including both cameras, and all of the electronics and drivers operative) and with the motors engaged is around 44.5 W, so a full charge would last (96 W·h)/(44.5 W) = 2.15 h. If it were necessary to attach two 10-W LED bulbs to the vehicle enclosure to operate in environments with poor illumination, the internal power consumption would increase to 64.5 W and the battery would last (96 W·h)/(64.5 W) = 1.49 h.
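As a quick sanity check, these endurance figures follow directly from dividing the battery capacity by the average power draw; the short Python sketch below simply reproduces that arithmetic with the values quoted above (the helper function itself is only illustrative).

```python
def autonomy_hours(capacity_wh: float, avg_power_w: float) -> float:
    """Endurance in hours for a given battery capacity and average power draw."""
    return capacity_wh / avg_power_w

# Values reported in the text: ~96 Wh of capacity, average consumption in watts.
print(autonomy_hours(96.0, 44.5))   # ~2.15 h, all sensors and motors active
print(autonomy_hours(96.0, 64.5))   # ~1.49 h, with two 10-W LED bulbs added
```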

The robot can operate in shallow waters up to 10 m in depth. However, another version for deeper waters could be made by thickening the acrylic cylinder.

Experiments showed an average speed in surge of 0.5 m/s and an average speed in heave of 1 m/s.

2.2. Software

All of the software was developed and run using the ROS middleware [25] under GNU/Linux. Using ROS makes software integration and interaction easier, improves maintainability and sharing and strengthens the data integrity of the whole system. The ROS packages installed in Fugu-C can be classified into three categories: internal state management functions, vision functions and navigation functions.

The internal state management functions continuously check the state of the water leak sensors, the temperature sensor and the power level. The activation of any of these three alarms causes the system to publish an alarm on a ROS topic, to terminate all running software and to shut down the horizontal motors, while activating the vertical ones to drive the vehicle quickly up to the surface.
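As an illustration of this safety behaviour, the minimal rospy sketch below monitors hypothetical leak, temperature and battery topics and, when any alarm fires, publishes an alarm message, stops the horizontal thrusters and drives the vertical ones upward. Topic names, message types and thresholds are assumptions made for the example, not the actual Fugu-C interfaces.

```python
#!/usr/bin/env python
# Minimal safety-monitor sketch (hypothetical topic names, types and thresholds).
import rospy
from std_msgs.msg import Bool, Float32

class SafetyMonitor(object):
    def __init__(self):
        self.alarm_pub = rospy.Publisher('safety/alarm', Bool, queue_size=1)
        self.horiz_pub = rospy.Publisher('thrusters/horizontal', Float32, queue_size=1)
        self.vert_pub = rospy.Publisher('thrusters/vertical', Float32, queue_size=1)
        rospy.Subscriber('sensors/water_leak', Bool, self.leak_cb)
        rospy.Subscriber('sensors/temperature', Float32, self.temp_cb)
        rospy.Subscriber('sensors/battery_level', Float32, self.battery_cb)

    def leak_cb(self, msg):
        if msg.data:
            self.trigger('water leak detected')

    def temp_cb(self, msg):
        if msg.data > 60.0:          # assumed over-temperature threshold (deg C)
            self.trigger('over-temperature')

    def battery_cb(self, msg):
        if msg.data < 0.1:           # assumed critical battery fraction
            self.trigger('battery level critical')

    def trigger(self, reason):
        rospy.logerr('Safety alarm: %s', reason)
        self.alarm_pub.publish(Bool(data=True))
        self.horiz_pub.publish(Float32(data=0.0))   # stop horizontal thrusters
        self.vert_pub.publish(Float32(data=1.0))    # full upward thrust to surface

if __name__ == '__main__':
    rospy.init_node('safety_monitor')
    SafetyMonitor()
    rospy.spin()
```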

The vision functions include image grabbing, stereo odometry, 3D point cloud computation and environment reconstruction, and they are explained in Section 3.

The navigation functions include the MESKF for sensor fusion and pose estimation and all of the modules for twist and depth control, and they are detailed in Section 4.

3. The Vision Framework

3.1. Image Grabbing and Processing

In order to save transmission bandwidth, the raw images provided by either stereo camera are encoded using a Bayer filter and sent directly to the computer, where they are processed as follows (a code sketch of these steps is given after the list):

(1)

First, RGB and gray-scale stereo images are recovered by applying a debayer interpolation.

(2)

Then, these images are rectified to compensate for the stereo camera misalignment and the lens and cover distortion; the intrinsic and extrinsic camera parameters for the rectification are obtained in a previous calibration process.

(3)

Optionally, they are downscaled from the original resolution (1024 × 768) to a parameterizable lower resolution (power-of-two divisions); this downsampling is done only if the images need to be monitored from remote computers without saturating the Ethernet link. Normally, a compressed 512 × 384 px image is enough to pilot the vehicle as an ROV without compromising the network capacity. If the vehicle is operating autonomously, this step is not performed.

(4)

Following the rectification, the disparity is computed using OpenCV's block matching algorithm, and the disparity images are finally projected as 3D points.

(5)

Odometry is calculated from the bottom-looking camera with the LibViso2 library. In our experiments, the forward-looking camera has been used only for monitoring and obstacle detection, although it could also be used to compute the frontal 3D structure.
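Taken together, steps (1) to (4) map onto standard OpenCV calls. The sketch below is only illustrative: the Bayer pattern, the rectification maps (precomputed from the calibration) and the block-matching parameters are placeholders, not the actual Fugu-C configuration.

```python
# Minimal sketch of steps (1)-(4) with OpenCV; parameters are placeholders.
import cv2
import numpy as np

def process_stereo_pair(raw_left, raw_right, maps_left, maps_right, Q):
    # (1) Debayer the raw images to RGB (the Bayer pattern here is an assumption).
    left = cv2.cvtColor(raw_left, cv2.COLOR_BayerBG2BGR)
    right = cv2.cvtColor(raw_right, cv2.COLOR_BayerBG2BGR)

    # (2) Rectify using maps precomputed from the calibrated intrinsics/extrinsics
    #     (cv2.stereoRectify + cv2.initUndistortRectifyMap).
    left = cv2.remap(left, maps_left[0], maps_left[1], cv2.INTER_LINEAR)
    right = cv2.remap(right, maps_right[0], maps_right[1], cv2.INTER_LINEAR)

    # (3) Optional power-of-two downscaling when streaming to a remote console.
    # left_small = cv2.pyrDown(left)

    # (4) Disparity by block matching, then reprojection to 3D points.
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(gray_l, gray_r).astype(np.float32) / 16.0
    points_3d = cv2.reprojectImageTo3D(disparity, Q)   # Q from stereoRectify
    return left, right, disparity, points_3d
```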

Figure 3 shows the image processing task sequence performed in Fugu-C.

Figure 3. The successive steps of the on-line image processing task.

3.2. Visual Odometry

Stereo visual odometry is used to estimate the displacement of Fugu-C and its orientation. Many sophisticated visual odometry approaches can be found in the literature [29]. An approach appropriate for real-time applications with a relatively high image frame rate is the one provided by the Library for Visual Odometry 2 (LibViso2) [30]. This approach has shown good performance and limited drift in translation and rotation, both in controlled and in real underwater scenarios [31].

The main advantages of LibViso2 that make the algorithm suitable for real-time underwater applications are threefold: (1) it simplifies the feature detection and tracking process, accelerating the overall procedure; in our system, the odometer can run at 10 fps using images with a resolution of 1024 × 768 pixels; (2) a pure stereo process is used for motion estimation, facilitating its integration and/or reuse with the module that implements the 3D reconstruction; (3) the large number of feature matches found by the library in each consecutive stereo pair makes it possible to deal with high resolution images, which increases the reliability of the motion estimates.
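For intuition only, the following sketch shows a generic way to obtain the frame-to-frame motion from two sets of matched 3D points triangulated by the stereo pair, using SVD-based rigid alignment (Kabsch/Horn). This is not LibViso2's actual minimization of reprojection error, just a compact stand-in that illustrates where the incremental pose estimate comes from.

```python
import numpy as np

def rigid_motion(points_prev, points_curr):
    """Least-squares rigid transform (R, t) mapping points_prev onto points_curr.

    points_prev, points_curr: (N, 3) arrays of matched 3D points triangulated
    from the stereo pair at consecutive frames (matching and outlier rejection,
    e.g. with RANSAC, are assumed to have been done already).
    """
    c_prev = points_prev.mean(axis=0)
    c_curr = points_curr.mean(axis=0)
    H = (points_prev - c_prev).T @ (points_curr - c_curr)   # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Chaining the per-frame increments (as 4x4 homogeneous transforms) yields the
# visual odometry trajectory of the vehicle.
```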

3.3. Point Clouds and 3D Reconstruction

3D models of natural sea-floor or underwater installations with strong relief are a very important source of information for scientists and engineers. Visual photo-mosaics are a highly valuable tool for building those models with a high degree of reliability [32]. However, they take a long time to build and are rarely applicable online. In contrast, the concatenation of successive point clouds registered with accurate pose estimates permits building and viewing the environment online in 3D, but with a lower level of detail. Thus, photo-mosaicking and point cloud concatenation can currently be viewed as complementary instruments for optics-based underwater exploration and study.

Dense stereo point clouds computed from the disparity maps [33] are provided by Fugu-C at the same rate as the image grabber. They are accumulated to build a dense 3D model of the underwater environment, visible online if a high-rate connection between the vehicle and an external computer is established. The concatenation and meshing of the partially reconstructed regions must be done according to the robot pose. Although the odometric position (x, y, z) accumulates drift during relatively long routes, the system can reconstruct, with a high degree of reliability, static objects or short-to-medium stereo sequences, provided no important errors in orientation are accumulated. However, if the orientation of the vehicle diverges from the ground truth, especially in roll and pitch, the successive point clouds will be concatenated with evident misalignments, presenting differences in inclination between them and thus distorting the final 3D result.

In order to increase the reliability of the reconstructed areas in longer trajectories and to minimize the effect of orientation errors on the 3D models, we have implemented a generic ESKF, which corrects the vehicle attitude estimated by the visual odometer about the three rotation axes and minimizes the effect of the gyroscope drift.

5. Experimental Results

The experimental results are organized in experiments in a controlled environment, experiments in the sea and 3D reconstruction.

5.3. 3D Reconstruction

Stereo video sequences grabbed in the Port of Valldemossa in the ROS-bag [25] format were replayed offline to simulate the online construction of the 3D map of the environment by concatenating the successive point clouds. The dense point cloud generation was performed at the same rate as the image grabber (10 frames/s), permitting the reconstruction of the environment in real time. The correction of the vehicle's estimated attitude increases the precision of the assembly of these point clouds, resulting in a realistic 3D view of the scenario where the robot is moving.

Figure 16a,b shows two different 3D views of the marine environment where Experiment 3 was performed. The successive point clouds were registered using the vehicle odometry pose estimates. Figure 16c,d shows two 3D views of the same environment, but registering the point clouds using the vehicle pose estimates provided by the MESKF. In all figures, the starting point and the direction of motion are indicated with a red circle and a red arrow, respectively. A marker was placed on the ground to indicate the starting/end point.

Figure 16. Experiment 3. (a,b) Two different views of the environment reconstructed in 3D, using the odometry to concatenate the point clouds; (c,d) two different 3D views of the environment concatenating the point clouds according to the MESKF estimates. In all cases, the red circle indicates the starting point where the marker was deposited, and the red arrow indicates the direction of motion.

Figure 17a,b shows two different 3D views built during Experiment 4, registering the successive point clouds with the vehicle odometry pose estimates. Figure 17c,d shows two 3D views of the same environments, but registering the point clouds using the vehicle pose estimates provided by the MESKF. Again, in all figures, the starting point and the direction of motion are indicated with a red circle and a red arrow, respectively.

Figure 17. Experiment 4. (a,b) Two different views of the environment reconstructed in 3D, using the odometry to concatenate the point clouds; (c,d) two different 3D views of the environment concatenating the point clouds according to the MESKF estimates. In all cases, the red circle indicates the starting point where the marker was deposited, and the red arrow indicates the direction of motion.

Figure 18 shows three images of Fugu-C navigating in this environment. In Figure 18c, the artificial marker deposited on the sea ground can be observed at the bottom of the image.

Figure 18. Images (a–c) show three different views of Fugu-C navigating in the Port of Valldemossa.

Raw point clouds are expressed with respect to the camera frame; they are then transformed to the global coordinate frame by composing their camera (local) coordinates with the estimated vehicle global coordinates. The 3D maps of Figures 16a,b and 17a,b show clear misalignments between point clouds. These misalignments arise because the estimated roll and pitch of the vehicle are, at certain instants, significantly different from zero. As a consequence, the corresponding point clouds are inclined and/or displaced with respect to those immediately contiguous, and the subsequent point clouds become misaligned with respect to the horizontal plane. This effect is particularly evident in Figure 16a,b, where very few point clouds are parallel to the ground, most of them being displaced and oblique with respect to it.

However, as the vehicle orientations in roll and pitch estimated by the MESKF are all approximately zero, all of the point clouds are nearly parallel to the ground plane, without any significant inclination in pitch/roll or important misalignment, providing a highly realistic 3D reconstruction. Notice how the 3D views shown in Figures 16c,d and 17c,d coincide with the trajectories shown in Figures 11 and 12, respectively.

The video uploaded in [42] shows different perspectives of the 3D map built from the dataset of Experiment 4, registering the point clouds with the odometry (on the left) and with the filter estimates (on the right). Observing the 3D reconstructions from different viewpoints gives a better idea of how sloping, misaligned and displaced with respect to the ground some of the point clouds can be when the roll and pitch values differ from zero. The improvement in the 3D map structure when using the filtered data is evident, as all of the point clouds are placed consecutively, aligned and parallel to the sea ground.
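In code, the registration step described in this section (expressing each raw cloud in the global frame before appending it to the map) reduces to applying the estimated vehicle pose as a rigid transform. A minimal numpy sketch, with the pose given as a rotation matrix and a translation vector assumed to come from the odometer or the MESKF, is:

```python
import numpy as np

def cloud_to_global(points_cam, R_world_cam, t_world_cam):
    """Transform an (N, 3) point cloud from the camera frame to the global frame.

    R_world_cam, t_world_cam: estimated camera-to-world rotation and translation,
    i.e. the vehicle pose composed with the fixed camera mounting (assumed known).
    """
    return points_cam @ R_world_cam.T + t_world_cam

# The global map is then just the concatenation of the transformed clouds, e.g.:
# global_map = np.vstack([cloud_to_global(c, R_k, t_k) for c, (R_k, t_k) in clouds])
```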

6. Conclusions

This paper presents Fugu-C, a prototype micro-AUV especially designed for underwater image recording, observation and 3D mapping in shallow waters or in cluttered aquatic environments. Emphasis has been placed on the vehicle structure, its multiple-layer navigation architecture and its capacity to reconstruct and map underwater environments in 3D. Fugu-C combines some of the advantages of a standard AUV with the characteristics of micro-AUVs, outperforming other micro underwater vehicles in: (1) its ability to image the environment with two stereo cameras, one looking downward and another looking forward; (2) its computational and storage capacity; and (3) the possibility of integrating all of the sensor data in a specially designed MESKF that has multiple advantages.

The main benefits of this filter, and its particularities with respect to other similar approaches, are:

(1)

A general configuration that permits the integration of as many sensors as needed and is applicable to any vehicle with six DOFs.

(2)

It deals with two state vectors, the nominal state and the error state; all nominal orientations are represented as quaternions to prevent singularities in the attitude estimation, while the attitude errors are represented as rotation vectors to avoid singularities in the covariance matrices (see the sketch after this list); the prediction model assumes motion with constant acceleration.

(3)

The nominal state contains the biases of the inertial sensors, which permits a practical compensation of those systematic errors.

(4)

Linearization errors are limited, since the error variables are very small and vary much more slowly than the nominal state.

(5)

This configuration permits the vehicle to navigate by simply integrating the INS data when either the aiding sensor or the filter fails.
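The multiplicative handling of the attitude mentioned in item (2) essentially means that the filter keeps a unit quaternion as the nominal orientation and injects the small rotation-vector error into it after each correction. The sketch below illustrates that injection step generically; it is not the code of the released implementation [26], and the composition order used here is only one of several possible conventions.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def inject_attitude_error(q_nominal, delta_theta):
    """Multiplicatively apply a small rotation-vector error to the nominal quaternion."""
    angle = np.linalg.norm(delta_theta)
    if angle < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        axis = delta_theta / angle
        dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    q = quat_mult(q_nominal, dq)          # composition order is a convention choice
    return q / np.linalg.norm(q)          # renormalize to keep a unit quaternion
```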

Extensive experimental results in controlled scenarios and in the sea have shown that the implemented navigation modules are adequate to maneuver the vehicle without oscillations or instabilities. Experiments have also shown that the designed navigation filter is able to compensate online for the biases of the inertial sensors and to correct errors in the vehicle z coordinate, as well as in the roll and pitch orientations estimated by the visual odometer. These corrections of the vehicle orientation are extremely important when concatenating stereo point clouds to form a realistic 3D view of the environment without misalignments.

Furthermore, the implementation of the MESKF is available to the scientific community in a public repository [26].

Future work is oriented toward the following: (1) the aided inertial navigation approach presented in this paper is unable to correct the vehicle position in (x, y), since it does not use any technique to track environmental landmarks or to adjust the localization by closing loops; the next step will be to use stereo GraphSLAM [43] to correct the robot position estimated by the filter, applying afterwards fine point cloud registration techniques where the clouds overlap; (2) the simple twist and depth PID controllers described in Section 4.1 could be replaced by more sophisticated systems that take into account other considerations, such as external forces, hydrodynamic models and the relation between the vehicle thrusters and its autonomy; one of the points planned for forthcoming work is to find a trade-off between controlling the vehicle with the navigation sensor data only and incorporating a minimal number of structural considerations.

Appendix

A. Velocity Error Prediction

Let \hat{v}_{k+1} be the estimated linear velocity, defined according to Equation (25) as:

\hat{v}_{k+1} = \hat{v}_k + \hat{a}_k \, \Delta t

where \hat{a}_k is the estimated acceleration.

According to Equations (21) and (A1), it can be expressed as:

\hat{v}_{k+1} = \hat{v}_k + \left( \hat{R}_k \, g - a_m + \hat{b}_k \right) \Delta t

and, according to Equations (1) and (A2), as:

v_{k+1} + \delta v_{k+1} = v_k + \delta v_k + \left( \delta R_k \, R_k \, g - a_m + b_k + \delta b_k \right) \Delta t
where all of the estimated variables have been substituted by the sum of their nominal and error values, except the rotation matrix, which has to be denoted as a product of the nominal rotation and the rotation error matrices.

The rotation error matrix δR_k is given by its Rodrigues formula [44], which, for small angles, can be approximated as δR_k ≈ I3×3 + [K]×, where [K]× is the skew-symmetric matrix built from the rotation vector corresponding to δR_k and I3×3 is the 3 × 3 identity matrix.

Consequently, Equation (A3) can be expressed as:

v_{k+1} + \delta v_{k+1} = v_k + \delta v_k + \left( I_{3 \times 3} + [K]_\times \right) R_k \, g \, \Delta t - a_m \, \Delta t + b_k \, \Delta t + \delta b_k \, \Delta t

Segregating the error terms on both sides of Equation (A4), we obtain the expression to predict the error in the linear velocity:

\delta v_{k+1} = \delta v_k + [K]_\times \, R_k \, g \, \Delta t + \delta b_k \, \Delta t = \delta v_k + [K]_\times \, g^L \, \Delta t + \delta b_k \, \Delta t

where g^L = R_k g denotes the gravity vector expressed in the local (vehicle) frame.

Since [K]× contains the vector corresponding to the rotation error δR_k, and the accelerometer bias error is expressed as δa_k in the error state vector, Equation (A5) can also be written as:

\delta v_{k+1} = \delta v_k - \left( g^L \times \delta q_k \right) \Delta t + \delta a_k \, \Delta t
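Translated into code, this final velocity-error expression is a single line; the sketch below only makes the cross-product form explicit, with variable names of our own choosing and g_local standing for R_k g.

```python
import numpy as np

def predict_velocity_error(dv, dq, da, g_local, dt):
    """Propagate the velocity error one step: dv' = dv - (g_local x dq) dt + da dt.

    dv: velocity error, dq: attitude error (rotation vector), da: accelerometer
    bias error, g_local: gravity expressed in the vehicle frame (R_k g).
    """
    return dv - np.cross(g_local, dq) * dt + da * dt
```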

B. Position Error Prediction

Let \hat{p}_{k+1} be the estimated vehicle position, defined according to Equation (23) as:

\hat{p}_{k+1} = \hat{p}_k + \hat{R}_k \, \hat{v}_k \, \Delta t + \tfrac{1}{2} \hat{R}_k \, \hat{a}_k \, \Delta t^2

According to Equations (1) and (A7), it can be expressed as:

p_{k+1} + \delta p_{k+1} = p_k + \delta p_k + R_k \, \delta R_k \left( v_k + \delta v_k \right) \Delta t + \tfrac{1}{2} R_k \, \delta R_k \left( a_k + \delta a_k \right) \Delta t^2

Analogously to Equation (A4), the term δR_k is substituted by its approximation computed from the Rodrigues formula:

p_{k+1} + \delta p_{k+1} = p_k + \delta p_k + R_k \left( I_{3 \times 3} + [K]_\times \right) \left( v_k + \delta v_k \right) \Delta t + \tfrac{1}{2} R_k \left( I_{3 \times 3} + [K]_\times \right) \left( a_k + \delta a_k \right) \Delta t^2

Expanding Equation (A9) and separating the nominal and the error terms on both sides of the expression gives:

p_{k+1} = p_k + R_k \, v_k \, \Delta t + \tfrac{1}{2} R_k \, a_k \, \Delta t^2

for the nominal position, and

\delta p_{k+1} = \delta p_k + R_k \, \delta v_k \, \Delta t + R_k [K]_\times v_k \, \Delta t + R_k [K]_\times \delta v_k \, \Delta t + \tfrac{1}{2} R_k \, \delta a_k \, \Delta t^2 + \tfrac{1}{2} R_k [K]_\times a_k \, \Delta t^2 + \tfrac{1}{2} R_k [K]_\times \delta a_k \, \Delta t^2

for the error in position.

Assuming that the errors are very small between two filter iterations (so that products of two error terms can be neglected), taking into account that [K]× contains the rotation vector error and writing any product by [K]× as a cross product, Equation (A12) can be reformulated as:

\delta p_{k+1} = \delta p_k + R_k \, \delta v_k \, \Delta t - R_k \left( v_k \times \delta q_k \right) \Delta t + \tfrac{1}{2} R_k \, \delta a_k \, \Delta t^2 - \tfrac{1}{2} R_k \left( a_k \times \delta q_k \right) \Delta t^2
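Similarly, the position-error propagation reduces to a few vector operations; the sketch below mirrors the final expression term by term, again with illustrative variable names.

```python
import numpy as np

def predict_position_error(dp, dv, dq, da, R, v, a, dt):
    """Propagate the position error one step, following the expression above.

    dp, dv, dq, da: position, velocity, attitude (rotation vector) and
    accelerometer-bias errors; R, v, a: nominal rotation, body velocity and
    body acceleration at step k.
    """
    return (dp
            + R @ dv * dt
            - R @ np.cross(v, dq) * dt
            + 0.5 * R @ da * dt**2
            - 0.5 * R @ np.cross(a, dq) * dt**2)
```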

Acknowledgments

This work is partially supported by the Spanish Ministry of Economy and Competitiveness under Contracts PTA2011-05077 and DPI2011-27977-C03-02, FEDER Funding and by Govern Balear grant number 71/2011.

Author Contributions

Francisco Bonin-Font and Gabriel Oliver carried out a literature survey, proposed the fundamental concepts of the methods to be used and wrote the whole paper. The mathematical development and coding for the MESKF was carried out by Joan P. Beltran and Francisco Bonin-Font. Gabriel Oliver, Joan P. Beltran and Miquel Massot Campos did most of the AUV platform design and development. Finally, Josep Lluis Negre Carrasco and Miquel Massot Campos contributed to the code integration, to the experimental validation of the system and to the 3D reconstruction methods. All authors revised and approved the final submission.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smith, S.M.; An, P.E.; Holappa, K.; Whitney, J.; Burns, A.; Nelson, K.; Heatzig, E.; Kempfe, O.; Kronen, D.; Pantelakis, T.; et al. The Morpheus ultramodular autonomous underwater vehicle. IEEE J. Ocean. Eng. 2001, 26, 453–465. [Google Scholar]
  2. Watson, S.A.; Crutchley, D.; Green, P. The mechatronic design of a micro-autonomous underwater vehicle. J. Mechatron. Autom. 2012, 2, 157–168. [Google Scholar]
  3. Wick, C.; Stilwell, D. A miniature low-cost autonomous underwater vehicle. Proceedings of the 2001 MTS/IEEE Conference and Exhibition (OCEANS), Honolulu, HI, USA, 5–8 November 2001; pp. 423–428.
  4. Heriot Watt University. mAUV. Available online: http://osl.eps.hw.ac.uk/virtualPages/experimentalCapabilities/Micro%20AUV.php (accessed on 15 November 2014).
  5. Hildebrandt, M.; Gaudig, C.; Christensen, L.; Natarajan, S.; Paranhos, P.; Albiez, J. Two years of experiments with the AUV Dagon—A versatile vehicle for high precision visual mapping and algorithm evaluation. Proceedings of the 2012 IEEE/OES Autonomous Underwater Vehicles (AUV), Southampton, UK, 24–27 September 2012.
  6. Kongsberg Maritime. REMUS. Available online: http://www.km.kongsberg.com/ks/web/nokbg0240.nsf/AllWeb/D241A2C835DF40B0C12574AB003EA6AB?OpenDocument (accessed on 15 November 2014).
  7. Carreras, M.; Candela, C.; Ribas, D.; Mallios, A.; Magi, L.; Vidal, E.; Palomeras, N.; Ridao, P. SPARUS. CIRS: Underwater, Vision and Robotics. Available online: http://cirs.udg.edu/auvs-technology/auvs/sparus-ii-auv/ (accessed on 15 November 2014).
  8. Wang, B.; Su, Y.; Wan, L.; Li, Y. Modeling and motion control system research of a mini underwater vehicle. Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA'2009), Changchun, China, 9–12 August 2009.
  9. Yu, X.; Su, Y. Hydrodynamic performance calculation of mini-AUV in uneven flow field. Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Tianjin, China, 14–18 December 2010.
  10. Liang, X.; Pang, Y.; Wang, B. Chapter 28, Dynamic modelling and motion control for underwater vehicles with fins. In Underwater Vehicles; Intech: Vienna, Austria, 2009; pp. 539–556. [Google Scholar]
  11. Roumeliotis, S.; Sukhatme, G.; Bekey, G. Circumventing dynamic modeling: Evaluation of the error-state Kalman filter applied to mobile robot localization. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Detroit, MI, USA, 10–15 May 1999; pp. 1656–1663.
  12. Allotta, B.; Pugi, L.; Bartolini, F.; Ridolfi, A.; Costanzi, R.; Monni, N.; Gelli, J. Preliminary design and fast prototyping of an autonomous underwater vehicle propulsion system. Inst. Mech. Eng. Part M: J. Eng. Marit. Environ. 2014. [Google Scholar] [CrossRef]
  13. Kelly, J.; Sukhatme, G. Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. Int. J. Robot. Res. 2011, 30, 56–79. [Google Scholar]
  14. Allotta, B.; Pugi, L.; Costanzi, R.; Vettori, G. Localization algorithm for a fleet of three AUVs by INS, DVL and range measurements. Proceedings of the International Conference on Advanced Robotics (ICAR), Tallinn, Estonia, 20–23 June 2011.
  15. Allotta, B.; Costanzi, R.; Meli, E.; Pugi, L.; Ridolfi, A.; Vettori, G. Cooperative localization of a team of AUVs by a tetrahedral configuration. Robot. Auton. Syst. 2014, 62, 1228–1237. [Google Scholar]
  16. Higgins, W. A comparison of complementary and Kalman filtering. IEEE Trans. Aerosp. Electron. Syst. 1975, AES-11, 321–325. [Google Scholar]
  17. Chen, S. Kalman filter for robot vision: A survey. IEEE Trans. Ind. Electron. 2012, 59, 263–296. [Google Scholar]
  18. An, E. A comparison of AUV navigation performance: A system approach. Proceedings of the IEEE OCEANS, San Diego, CA, USA, 22–26 September 2003; pp. 654–662.
  19. Suh, Y.S. Orientation estimation using a quaternion-based indirect Kalman filter With adaptive estimation of external acceleration. IEEE Trans. Instrum. Meas. 2010, 59, 3296–3305. [Google Scholar]
  20. Miller, P.A.; Farrell, J.A.; Zhao, Y.; Djapic, V. Autonomous underwater vehicle navigation. IEEE J. Ocean. Eng. 2010, 35, 663–678. [Google Scholar]
  21. Achtelik, M.; Achtelik, M.; Weiss, S.; Siegwart, R. Onboard IMU and monocular vision based control for MAVs in unknown in- and outdoor environments. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011.
  22. Weiss, S.; Achtelik, M.; Chli, M.; Siegwart, R. Versatile distributed pose estimation and sensor self calibration for an autonomous MAV. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 31–38.
  23. Markley, F.L. Attitude error representations for Kalman filtering. J. Guid. Control Dyn. 2003, 26, 311–317. [Google Scholar]
  24. Hall, J.K.; Knoebel, N.B.; McLain, T.W. Quaternion attitude estimation for miniature air vehicles using a multiplicative extended Kalman filter. Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, CA, USA, 5–8 May 2008; pp. 1230–1237.
  25. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A. ROS: An open source robot operating system. Proceedings of ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009.
  26. Negre, P.L.; Bonin-Font, F. GitHub. Available online: https://github.com/srv/pose_twist_meskf_ros (accessed on 15 November 2014).
  27. Hogue, A.; German, J.Z.; Jenkin, M. Underwater 3D mapping: Experiences and lessons learned. Proceedings of the 3rd Canadian Conference on Computer and Robot Vision (CRV), Quebec City, QC, Canada, 7–9 June 2006.
  28. MEMSENSE nIMU Datasheet. Available online: http://memsense.com/docs/nIMU/nIMU_Data_Sheet_DOC00260_RF.pdf (accessed on 15 November 2014).
  29. Fraundorfer, F.; Scaramuzza, D. Visual odometry. Part II: Matching, robustness, optimization and applications. IEEE Robot. Autom. Mag. 2012, 19, 78–90. [Google Scholar]
  30. Geiger, A.; Ziegler, J.; Stiller, C. StereoScan: Dense 3D reconstruction in real-time. Proceedings of the IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, 5–9 June 2011.
  31. Wirth, S.; Negre, P.; Oliver, G. Visual odometry for autonomous underwater vehicles. Proceedings of the MTS/IEEE OCEANS, Bergen, Norway, 10–14 June 2013.
  32. Gracias, N.; Ridao, P.; Garcia, R.; Escartin, J.; L'Hour, M.; C., F.; Campos, R.; Carreras, M.; Ribas, D.; Palomeras, N.; Magi, L.; et al. Mapping the Moon: Using a lightweight AUV to survey the site of the 17th Century ship “La Lune”. Proceedings of the MTS/IEEE OCEANS, Bergen, Norway, 10–14 June 2013.
  33. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  34. Fossen, T. Guidance and Control of Ocean Vehicles; John Wiley: New York, NY, USA, 1994. [Google Scholar]
  35. Trawny, N.; Roumeliotis, S.I. Indirect Kalman Filter for 3D Attitude Estimation; Technical Report 2005-002; University of Minnesota, Dept. of Computer Science & Engineering: Minneapolis, MN, USA, 2005. [Google Scholar]
  36. Bonin-Font, F.; Beltran, J.; Oliver, G. Multisensor aided inertial navigation in 6DOF AUVs using a multiplicative error state Kalman filter. Proceedings of the IEEE/MTS OCEANS, Bergen, Norway, 10–14 June 2013.
  37. Miller, K.; Leskiw, D. An Introduction to Kalman Filtering with Applications; Krieger Publishing Company: Malabar, FL, USA, 1987. [Google Scholar]
  38. Ahmadi, M.; Khayatian, A.; Karimaghaee, P. Orientation estimation by error-state extended Kalman filter in quaternion vector space. Proceedings of the 2007 Annual Conference (SICE), Takamatsu, Japan, 17–20 September 2007; pp. 60–67.
  39. Bujnak, M.; Kukelova, S.; Pajdla, T. New efficient solution to the absolute pose problem for camera with unknown focal length and radial distortion. Lect. Notes Comput. Sci. 2011, 6492, 11–24. [Google Scholar]
  40. Fischler, M.; Bolles, R. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar]
  41. Moore, T.; Purvis, M. Charles River Analytics. Available online: http://wiki.ros.org/robot_localization (accessed on 15 November 2014).
  42. Bonin-Font, F. Youtube. Available online: http://youtu.be/kKe1VzViyY8 (accessed on 7 January 2015).
  43. Negre, P.L.; Bonin-Font, F.; Oliver, G. Stereo graph SLAM for autonomous underwater vehicles. Proceedings of the 13th International Conference on Intelligent Autonomous Systems, Padova, Italy, 15–19 July 2014.
  44. Ude, A. Filtering in a unit quaternion space for model-based object tracking. Robot. Auton. Syst. 1999, 28, 163–172. [Google Scholar]
