# An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation


## Abstract


## 1. Introduction

## 2. Related Works

In our work, the gray levels are stretched by a power-law transformation with exponent 1.415; in this way, the contrast between object and background is enlarged and the features are highlighted, since the gray levels of the object are increased while those of the background are decreased. Therefore, the improved Otsu TSM achieves more accurate segmentation than the methodology used in [22]. Besides, [22] does not consider the computational time of their method, whereas our proposed segmentation approach keeps a good tradeoff between segmentation precision and computational cost. In [23], classic Otsu thresholding and static thresholding are applied for object detection with a sector scanning sonar. Although the Otsu segmentation method requires several scanline measurements to be collated before obtaining the binary detection, its segmentation result is much cleaner than that of static thresholding; however, features which are farther away, with marginal measurements near the background noise level, cannot be detected by the classic Otsu method. The improved Otsu TSM presented in our work solves this problem, since it is an adaptive thresholding method that finds the best segmentation threshold of an image. The computational cost of the configuration-conjunct threshold segmentation method described in [24] on their low resolution sonar image is 0.371 s, which is about three times that of our improved Otsu TSM (0.117 s). Moreover, their method can only extract linear objects with neat and obvious edges, unlike the objects with different feature forms presented in our work. In this section, on the one hand, a brief state of the art of the underwater SLAM problem is introduced and recent important works in the field of feature detection in sonar images are compared.
On the other hand, the basic functionalities of the three most commonly used map representations are outlined, and their suitability for a priori map localization (computational complexity, reliability, etc.) is surveyed.
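The power-law stretch mentioned above can be sketched in a few lines. This is a minimal numpy version, assuming gray levels normalized to [0, 1] and the exponent 1.415 referenced in this paper; the scaling constant `c` and the clipping are illustrative choices, not the authors' exact implementation:

```python
import numpy as np

GAMMA = 1.415  # exponent of the power-law stretch used in this work

def power_law(img, gamma=GAMMA, c=1.0):
    """Apply s = c * r**gamma to an image with gray levels in [0, 1]."""
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    return np.clip(c * img ** gamma, 0.0, 1.0)

# With gamma > 1, the ratio between a bright (object) pixel and a dark
# (background) pixel grows, i.e. the object/background contrast is enlarged.
obj, bg = 0.8, 0.2
s_obj, s_bg = power_law(obj), power_law(bg)
```

Note that with an exponent above one both values shrink, but the background shrinks proportionally more, which is what enlarges the contrast ratio between object and background.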

#### 2.1. Map Representations

#### 2.2. Simultaneous Localization and Mapping

## 3. An Improved Otsu TSM for Fast Feature Detection

#### 3.1. Side-Scan Sonar Images

#### 3.2. The Proposed Improved Otsu TSM Algorithm

N_{30} has been defined as the number of contours to be found with an area size smaller than 30 pixels. The procedure of the improved Otsu approach is illustrated in Figure 2. At first, the traditional Otsu method [42] is used to calculate the initial segmentation threshold T. Then, the Moore Neighbor contour detection algorithm [43,44] is employed to compute N_{30}. If N_{30} > 300 (640,000/(30 × 300) = 71.1:1), there are still many small bright spots remaining in the segmentation result, and the threshold needs to be improved; the final segmentation threshold T* is then calculated as explained further on. If N_{30} ≤ 300, the final segmentation threshold T* is set to the initial segmentation threshold T, and segmentation is finished. Notice that both values, N_{30} and 300, should be adapted to the characteristics of the sonar images used.
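The initial threshold T in the procedure above comes from the classic Otsu criterion [42], which picks the gray level maximizing the between-class variance of the histogram. A minimal numpy sketch (not the authors' code; the synthetic bimodal image stands in for a sonar frame) might look like:

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Classic Otsu: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # gray-level probabilities
    omega = np.cumsum(p)                       # class-0 probability w0(t)
    mu = np.cumsum(p * np.arange(levels))      # first cumulative moment
    mu_t = mu[-1]                              # global mean gray level
    # between-class variance: sigma_B^2(t) = (mu_t*w0 - mu)^2 / (w0*(1 - w0))
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan                 # avoid division by zero at the ends
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b2))

# Synthetic bimodal "sonar" image: dark background plus bright object pixels.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 500)])
img = np.clip(img, 0, 255).astype(np.uint8)
T = otsu_threshold(img)
```

The returned T falls between the two histogram modes, so thresholding at T isolates the bright object pixels.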

Suppose only the pixels whose gray levels lie in the range [T + 1, …, 255] are considered; the number of pixels with gray level i is denoted as n_{i}, and the total number of pixels is calculated by:

$$N={\sum }_{i=T+1}^{255}{n}_{i}$$

These pixels are divided into two classes C_{0} and C_{1} by a threshold T*. The set C_{0} implies the background pixels with a gray level of [T + 1, …, T*], and C_{1} means those pixels of the foreground object with a gray level of [T* + 1, …, 255]. The probabilities of the gray level distributions for the two classes are the following: w_{0} is the probability of the background and w_{1} is the probability of the object:

$${w}_{0}={\sum }_{i=T+1}^{T*}{p}_{i},\text{ }{w}_{1}={\sum }_{i=T*+1}^{255}{p}_{i},\text{ }{p}_{i}={n}_{i}/N$$

The mean gray levels of the two classes C_{0} and C_{1} are:

$${\mu }_{0}=\frac{1}{{w}_{0}}{\sum }_{i=T+1}^{T*}i{p}_{i},\text{ }{\mu }_{1}=\frac{1}{{w}_{1}}{\sum }_{i=T*+1}^{255}i{p}_{i}$$
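Under the reading that T* is chosen by re-applying the Otsu criterion only to the gray levels above the initial threshold T (consistent with the class definitions above, but a sketch rather than the authors' exact code), the second pass can be written as:

```python
import numpy as np

def otsu_on_range(hist, lo, hi):
    """Otsu criterion restricted to gray levels [lo, hi]: return the split T*
    maximizing w0*w1*(mu0 - mu1)^2 with C0 = [lo..T*] and C1 = [T*+1..hi]."""
    idx = np.arange(lo, hi + 1)
    p = hist[lo:hi + 1].astype(float)
    p /= p.sum()                               # renormalize within the range
    best_t, best_var = lo, -1.0
    for k in range(len(idx) - 1):              # candidate split after idx[k]
        w0 = p[:k + 1].sum()
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (idx[:k + 1] * p[:k + 1]).sum() / w0
        mu1 = (idx[k + 1:] * p[k + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, idx[k]
    return int(best_t)

# Example histogram: residual background spots around level 120 and a bright
# object around level 220, both above an initial threshold T = 100.
hist = np.zeros(256)
hist[115:126] = 200                            # residual background spots
hist[215:226] = 50                             # object pixels
T = 100
T_star = otsu_on_range(hist, T + 1, 255)
```

Because the second pass ignores everything at or below T, the improved threshold T* lands between the residual background spots and the object, which is exactly what removes the small bright spots counted by N_{30}.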

#### 3.3. The Power-Law Transformation

#### 3.4. TSM Results for Side-Scan Sonar Images

#### 3.4.1. TSM Results for High Resolution SSS Image

Algorithm 1: Canny edge detection

1. Smooth the image with a Gaussian filter, h = fspecial(‘gaussian’, [3 3], 0.5);
2. Calculate the gradient’s amplitude and orientation with finite differences for the first partial derivatives;
3. Apply non-maxima suppression;
4. Detect and link the edges with the double threshold method, y = edge(b, ‘canny’, 0.33); the high threshold for Figure 1a is 0.33, and 0.4 times the high threshold is used as the low threshold.
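Steps 2 and 4 of Algorithm 1 can be sketched with plain numpy: finite-difference gradients plus the double-threshold rule with a low threshold of 0.4 times the high one (Gaussian smoothing, non-maxima suppression and edge linking are omitted here for brevity; this is an illustrative re-implementation, not the MATLAB code quoted above):

```python
import numpy as np

def gradient_magnitude(img):
    """Finite-difference approximation of the gradient magnitude."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]      # horizontal first difference
    gy[:-1, :] = img[1:, :] - img[:-1, :]      # vertical first difference
    return np.hypot(gx, gy)

def double_threshold(mag, high):
    """Strong edges >= high; weak edges >= 0.4*high (kept only if linked to
    a strong edge in the full algorithm)."""
    low = 0.4 * high
    return mag >= high, (mag >= low) & (mag < high)

# Vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = gradient_magnitude(img)
strong, weak = double_threshold(mag, high=0.5)
```

On this step edge the gradient magnitude is 1 exactly along the transition column and 0 elsewhere, so the strong mask recovers the edge and the weak mask stays empty.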

The N_{30} returned by the above Canny edge algorithm equals 752, which is bigger than 300. Therefore, our improved Otsu TSM has been applied, and the segmentation result is shown in Figure 5b, with a final threshold T* of 0.6784. In order to detect the centroids of each segmented region, the following morphological operations (Algorithm 2) are applied to Figure 5b.

Algorithm 2: Morphological operations for detecting feature centroids

1. Remove all connected components that have fewer than 30 pixels in Figure 5b;
2. Bridge previously unconnected pixels;
3. Perform dilation using the structuring element ones(3), a 3 × 3 square;
4. Fill the holes in the image;
5. Compute the area size, the centroid and the bounding box of the different contiguous regions;
6. Concatenate the structure array which contains all centroids into a single matrix.
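Steps 1, 5 and 6 of Algorithm 2 (small-component removal and centroid extraction) can be sketched with a plain 4-connected labeling. The paper uses MATLAB's morphological toolbox; this is an illustrative numpy/BFS re-implementation under that assumption, not the authors' code:

```python
import numpy as np
from collections import deque

def components(binary):
    """4-connected components of a binary image via BFS flood fill."""
    h, w = binary.shape
    labels = -np.ones((h, w), dtype=int)
    comps = []
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] < 0:
                q = deque([(i, j)])
                labels[i, j] = len(comps)
                pix = []
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] < 0:
                            labels[ny, nx] = len(comps)
                            q.append((ny, nx))
                comps.append(pix)
    return comps

def centroids(binary, min_area):
    """Drop components smaller than min_area; return centroids as (row, col)."""
    return [tuple(np.mean(c, axis=0)) for c in components(binary) if len(c) >= min_area]

# One 3x3 blob (area 9) and one isolated pixel (area 1); min_area filters the latter.
img = np.zeros((6, 6), dtype=bool)
img[1:4, 1:4] = True
img[5, 5] = True
cents = centroids(img, min_area=2)
```

The area filter plays the role of step 1 (here with a toy minimum of 2 pixels instead of 30), and the mean of each surviving component's pixel coordinates is its centroid, as in step 5.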

#### 3.4.2. TSM Results for Low Resolution SSS Image

N_{15} has been defined as the number of contours to be found with an area size smaller than 15 pixels, and the criterion is N_{15} > 100 (95,076/(15 × 100) = 63.4:1). This assigned ratio of 63.4 is lower than that of 71.1 for Figure 1a, since the proportion of background spots in this low resolution SSS image is higher than in Figure 1a.

The N_{15} computed by the Canny contour detection algorithm is 419, which is bigger than 100. As a result, the proposed improved Otsu TSM has been applied, and the segmentation result is shown in Figure 7b, with a final threshold T* of 0.3529. The morphological operations for marking the centroids of every segmented region within the branch are similar to those for the ship; only in step 1, the parameter is set to 15 to remove all connected components that have fewer than 15 pixels. The red stars ‘*’, shown in Figure 7c,d, mark the centroids of every contiguous region or connected component in the segmentation results of our improved Otsu TSM and the maximum entropy TSM, respectively. The centroid coordinate of the branch detected by our method is (187.3, 115.6), which will be used as a landmark point in the further simulation test of AEKF-based SLAM loop mapping. The confusion matrices of the real centroids and the ones detected by the improved Otsu TSM and the maximum entropy TSM are shown in Table 4 and Table 5, respectively.

#### 3.5. TSM Results for Forward-Looking Sonar Image

N_{40} is defined as the number of contours whose area size is smaller than 40 pixels, and it is computed by the Canny contour detection algorithm. The criterion is N_{40} > 600 (1,789,440/(40 × 600) = 74.6:1). This assigned ratio of 74.6 is higher than that of 71.1 for the former high resolution SSS image of the ship, because the black background proportion in this FLS image is higher than in that SSS image. If the criterion is met, many small bright spots are still left in the segmentation result.

The N_{40} calculated from the above Canny edge detection algorithm is 1341, which is bigger than 600. Thus, our improved Otsu TSM has been applied, and the segmentation result is shown in Figure 10b, with a final threshold T* of 0.5412. The morphological operations for computing the centroids of every segmented region within the body are similar to those for the ship; only in step 1, the parameter is set to 40, in order to remove all connected components that have fewer than 40 pixels, and in step 3, dilation is applied twice.

## 4. The Estimation-Theoretic AEKF-SLAM Approach

#### 4.1. Extended Kalman Filter

w_{k} is the process noise, which obeys a zero-mean Gaussian distribution ${w}_{k}\sim N(0,{Q}_{k})$, with covariance matrix Q_{k}; v_{k} is the observation noise, which also obeys a zero-mean Gaussian distribution ${v}_{k}\sim N(0,{R}_{k})$, with covariance matrix R_{k}.

- Time Update
- Predictor step:
$${\widehat{X}}_{k}^{-}=f({\widehat{X}}_{k-1})$$
$${P}_{k}^{-}={F}_{k}{P}_{k-1}{F}_{k}^{T}+{Q}_{k}$$
F_{k} and H_{k} are the Jacobian matrices of partial derivatives of $f(\cdot )$ and $h(\cdot )$ with respect to X:
$${F}_{k}={\frac{\partial f(X)}{\partial X}|}_{X={\widehat{X}}_{k-1}},\text{ }{H}_{k}={\frac{\partial h(X)}{\partial X}|}_{X={\widehat{X}}_{k}^{-}}$$
The nonlinear functions f and h are linearized using a Taylor series expansion, where terms of second and higher order are omitted.

- Measurement Update
- Calculate the Kalman gain K_{k}: ${K}_{k}={P}_{k}^{-}{H}_{k}^{T}{({H}_{k}{P}_{k}^{-}{H}_{k}^{T}+{R}_{k})}^{-1}$.
- Corrector step: first, update the expected value ${\widehat{X}}_{k}$, ${\widehat{X}}_{k}={\widehat{X}}_{k}^{-}+{K}_{k}[{Z}_{k}-h({\widehat{X}}_{k}^{-})]$; then, update the error covariance matrix ${P}_{k}$, ${P}_{k}={P}_{k}^{-}-{K}_{k}{H}_{k}{P}_{k}^{-}=(I-{K}_{k}{H}_{k}){P}_{k}^{-}$.
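The predictor and corrector equations above can be exercised on a toy linear case (a 1D static state observed directly, so f and h are identities and the Jacobians are trivially 1; a sketch, not the AEKF of Section 4.2):

```python
import numpy as np

def ekf_step(x, P, z, F, H, Q, R, f, h):
    """One EKF cycle: time update (predict) then measurement update (correct)."""
    # Time update
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Measurement update
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))       # corrector: state
    P_new = (np.eye(len(x)) - K @ H) @ P_pred  # corrector: covariance
    return x_new, P_new

# 1D example: uncertain prior (P = 10) corrected by a measurement z = 2.
F = H = np.eye(1)
Q = np.array([[0.01]])
R = np.array([[1.0]])
x = np.array([0.0])
P = np.array([[10.0]])
x, P = ekf_step(x, P, z=np.array([2.0]), F=F, H=H, Q=Q, R=R,
                f=lambda s: s, h=lambda s: s)
```

With a large prior covariance the gain is close to 1, so the corrected state moves most of the way toward the measurement and the covariance shrinks well below its prior value.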

#### 4.2. The Estimation Process of the AEKF-SLAM

In the prediction stage, the proprioceptive sensor data and the robot motion model are utilized to estimate the robot pose. Then, in the update stage, the innovation v_{k} is computed as the difference between the new observation from an exteroceptive sensor and the predicted measurement, and its error covariance is used to calculate the Kalman gain W_{k}. When a landmark is detected for the first time, it is added to the system state vector through the state augmentation stage.

Here, Q_{k} and R_{k} are the covariance matrices of the process noise and the observation errors, respectively.

z_{f} is the vector of measurements of landmarks that have already been detected and stored in the map; z_{n} is the vector of measurements of unseen, new landmarks.

Algorithm 3: Underwater landmark map building based on AEKF-SLAM

1. For $k=1$ to $N$
2. $[{X}_{k}^{-},{P}_{k}^{-}]=Predict({X}_{k-1},{P}_{k-1})$;
3. ${z}_{k}=GetObservations()$;
4. $[{z}_{f},{z}_{n}]=DataAssociation({X}_{k}^{-},{P}_{k}^{-},{z}_{k},{R}_{k})$;
5. $[{X}_{k}^{+},{P}_{k}^{+}]=UpdateMap({X}_{k}^{-},{P}_{k}^{-},{z}_{f},{R}_{k})$;
6. $[{X}_{k}^{+},{P}_{k}^{+}]=AugmentMap({X}_{k}^{-},{P}_{k}^{-},{z}_{n},{R}_{k})$;
7. End for

#### 4.2.1. Prediction Stage

P_{vm} stands for the cross covariance between the robot state and the map landmarks:

$${P}_{a}=\left[\begin{array}{cc}{P}_{v}& {P}_{vm}\\ {P}_{vm}^{T}& {P}_{m}\end{array}\right]$$

The robot pose estimate ${\widehat{X}}_{v}$ has the covariance matrix P_{v}. Supposing the position of the n-th landmark is denoted as ${x}_{{m}_{n}}={({\widehat{x}}_{n},{\widehat{y}}_{n})}^{T}$, the environmental landmarks are described as ${\widehat{X}}_{m}={[{\widehat{x}}_{1},{\widehat{y}}_{1},\mathrm{...},{\widehat{x}}_{n},{\widehat{y}}_{n}]}^{T}$, and their covariance matrix is P_{m}. Note that the initial condition of the state estimate is usually given as ${\widehat{X}}_{a}={\widehat{X}}_{v}=0$ and P_{a} = P_{v} = 0, which means that no landmarks have been observed yet and the initial robot pose defines the base coordinate origin.

The robot pose change X_{δ} is commonly obtained using wheel encoder odometry and a robot kinematic model. Therefore, the prediction state of the system is given by:

$${\widehat{X}}_{a}^{-}=\left[\begin{array}{c}g({\widehat{X}}_{v},{X}_{\delta })\\ {\widehat{X}}_{m}\end{array}\right]$$

F_{v} and Q_{v} are the Jacobian matrices of partial derivatives of the nonlinear model function g with respect to the robot state X_{v} and the robot pose change X_{δ}:

$${F}_{v}=\frac{\partial g}{\partial {X}_{v}},\text{ }{Q}_{v}=\frac{\partial g}{\partial {X}_{\delta }}$$

Since only the robot part of the state changes during the prediction, using the robot covariance P_{v} and its cross-correlations P_{vm}, the prediction covariance matrix ${P}_{a}^{-}$ can be implemented more efficiently as:

$${P}_{a}^{-}=\left[\begin{array}{cc}{F}_{v}{P}_{v}{F}_{v}^{T}+{Q}_{v}{P}_{\delta }{Q}_{v}^{T}& {F}_{v}{P}_{vm}\\ {({F}_{v}{P}_{vm})}^{T}& {P}_{m}\end{array}\right]$$

where P_{δ} denotes the covariance of the pose change X_{δ}.
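The saving comes from the fact that only the robot sub-blocks P_{v} and P_{vm} change while the map block P_{m} is left untouched. A numpy sketch verifying that the block update matches the naive full-matrix prediction, with hypothetical dimensions (a 3D robot pose and a 4-entry map):

```python
import numpy as np

rng = np.random.default_rng(1)
nv, nm = 3, 4                                   # robot dim (x, y, phi), map dim
A = rng.normal(size=(nv + nm, nv + nm))
Pa = A @ A.T                                    # a valid full covariance matrix
Pv, Pvm, Pm = Pa[:nv, :nv], Pa[:nv, nv:], Pa[nv:, nv:]
Fv = rng.normal(size=(nv, nv))                  # robot-motion Jacobian (stand-in)
Qn = np.eye(nv) * 0.1                           # motion noise mapped into robot block

# Naive full-matrix prediction: F differs from identity only in the robot block.
F = np.eye(nv + nm)
F[:nv, :nv] = Fv
Qfull = np.zeros((nv + nm, nv + nm))
Qfull[:nv, :nv] = Qn
Pa_full = F @ Pa @ F.T + Qfull

# Efficient block form: update only P_v and P_vm, leave P_m untouched.
Pv_new = Fv @ Pv @ Fv.T + Qn
Pvm_new = Fv @ Pvm
Pa_block = np.block([[Pv_new, Pvm_new], [Pvm_new.T, Pm]])
```

For a map with many landmarks this replaces an O((nv + nm)³) product with work that is linear in the map size, which is why the block form is preferred in EKF-SLAM implementations.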

#### 4.2.2. Update Stage

When a stored landmark is re-observed, the difference between the actual measurement z_{i} and the predicted one is computed as v_{i}, also called innovation:

$${v}_{i}={z}_{i}-h({\widehat{X}}_{a}^{-})$$

The covariance of the innovation S_{i} is:

$${S}_{i}={H}_{i}{P}_{a}^{-}{H}_{i}^{T}+{R}_{i}$$

It is used to compute the Kalman gain W_{i}:

$${W}_{i}={P}_{a}^{-}{H}_{i}^{T}{S}_{i}^{-1}$$

#### 4.2.3. State Augmentation

When a new landmark is observed for the first time, the sensor returns the measurement z_{new} and its covariance matrix R_{new}, which are measured relative to the robot. The state is augmented by means of the transformation function g_{i}, which converts the polar observation ${z}_{new}={(r,\theta )}^{T}$ to a global Cartesian feature position. It is composed of the current robot pose ${\widehat{X}}_{v}={({\widehat{x}}_{v},{\widehat{y}}_{v},{\widehat{\varphi }}_{v})}^{T}$ and the new observation z_{new}:

$${g}_{i}({\widehat{X}}_{v},{z}_{new})=\left[\begin{array}{c}{\widehat{x}}_{v}+r\mathrm{cos}({\widehat{\varphi }}_{v}+\theta )\\ {\widehat{y}}_{v}+r\mathrm{sin}({\widehat{\varphi }}_{v}+\theta )\end{array}\right]$$
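The transformation g_{i} can be sketched directly, assuming z_new = (r, θ) is a range-bearing measurement in the robot frame and the pose is (x, y, φ). This is a minimal version; a full augmentation would also propagate R_new into the map covariance through the Jacobians of g_{i}:

```python
import numpy as np

def augment(pose, z_new):
    """Convert a polar observation (range r, bearing theta) taken from the
    robot pose (x, y, phi) into a global Cartesian landmark position."""
    x, y, phi = pose
    r, theta = z_new
    return np.array([x + r * np.cos(phi + theta),
                     y + r * np.sin(phi + theta)])

# Robot at the origin heading along +x; landmark 2 m away at a 90-degree
# bearing (to the robot's left) should land at (0, 2) in world coordinates.
lm = augment(pose=(0.0, 0.0, 0.0), z_new=(2.0, np.pi / 2))
```

Once converted, the landmark coordinates are appended to the state vector and the covariance matrix is expanded accordingly, as described above.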

#### 4.3. AEKF-SLAM Loop Map Simulation

## 5. Conclusions and Future Work

#### 5.1. Conclusions

#### 5.2. Future Work

^{2} of working area. As for further improvements of the current study, future work includes:

- Employing other forms of target objects for detection and tracking purposes, and devising parametric feature models for describing general objects; more complex scenarios with multiple distinct features will also be included. Besides, more complicated vehicle models, such as a six-DOF kinematic model, will be investigated. Therefore, as the robot navigates, we can run the proposed feature detection algorithm on the acquired images exactly when a 3D object is detected by the sonar.
- Developing a computationally tractable version of the SLAM map building algorithm which maintains the properties of being consistent and non-divergent. Hierarchical SLAM or sub-mapping methods build local maps of limited size, which bound the covariances and thus the linearization errors. Then, by linking the local maps through a global map or a hierarchy of global maps, AEKF-based SLAM becomes applicable in large environments.
- Considering the application of the unscented KF (UKF) in the field of underwater robotic navigation. As an alternative estimation technique, the UKF does not require calculating derivatives, and it also offers a very effective tradeoff between computational load and estimation accuracy in the case of strongly nonlinear and discontinuous systems [54]. Besides, FastSLAM, which uses Rao-Blackwellised particle filtering (RBPF), will be considered as future work, since it is very suitable for nonlinear motions and performs better than EKF-SLAM at solving the data association problem for detecting loop closures. Afterwards, we will compare the estimation performance of these two methods on the SLAM problem with that of the AEKF considered in this work.
- Incorporating data streams observed from the acoustic and visual sensors to generate a 3D representation of the underwater environment, i.e., the seabed, working environment or artifacts [55]. In our case, we will use the pressure-based depth logger for navigation and the DE340D SSS as perception sensor to obtain horizontal positions of features of interest; by integrating these with the vertical positioning data obtained through the pressure sensor, a subsea 3D map will be created.
- Considering map simplification and transform-based algorithms for the fusion of two maps of different resolutions: one is a large scale, medium resolution map generated using an SSS (in SWARMs T4.1 Large scale 3D mapping); the other is a local 3D high resolution map created by fusing FLS images and visual information. The sonar system used to obtain the large scale map achieves a very high area coverage rate but has a modest resolution: it can detect objects but is insufficient to identify their precise nature. To combine both systems for maximum operational effectiveness, the large scale medium resolution map will be used to trigger detailed investigations of regions of interest using the local 3D high resolution maps.

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Abbreviations

SONAR | SOund Navigation And Ranging |

SLAM | Simultaneous Localization and Mapping |

TSM | Threshold Segmentation Method |

SSS | Side-Scan Sonar |

FLS | Forward-Looking Sonar |

AEKF | Augmented Extended Kalman Filter |

KF | Kalman Filter |

EKF | Extended Kalman Filter |

PF | Particle Filter |

EM | Expectation Maximization |

SIFT | Scale-Invariant Feature Transform |

SURF | Speeded Up Robust Features |

DOF | Degree of Freedom |

CML | Concurrent Mapping and Localization |

RSSI | Received Signal Strength Indication |

AUV | Autonomous Underwater Vehicle |

FPR | False Positive Rate |

PPV | Positive Predictive Value |

RMS | Root Mean Square |

RBPF | Rao-Blackwellised Particle Filtering |

UKF | Unscented Kalman Filter |

## References

- Smith, R.C.; Cheeseman, P. On the representation and estimation of spatial uncertainty. Int. J. Robot. Res.
**1986**, 5, 56–68. [Google Scholar] [CrossRef] - Durrant-Whyte, H.F. Uncertain geometry in robotics. IEEE J. Robot. Automat.
**1988**, 4, 23–31. [Google Scholar] [CrossRef] - Ayache, N.; Faugeras, O. Maintaining representations of the environment of a mobile robot. IEEE Trans. Robot. Automat.
**1989**, 5, 804–819. [Google Scholar] [CrossRef] - Chatila, R.; Laumond, J.P. Position referencing and consistent world modeling for mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; pp. 135–148.
- Smith, R.; Self, M.; Cheeseman, P. Estimating uncertain spatial relationships in robotics. In Autonomous Robot Vehicles; Ingemar, J.C., Gordon, T.W., Eds.; Springer: New York, NY, USA, 1990; pp. 167–193. [Google Scholar]
- Stachniss, C. Class Lecture: Robot Mapping—WS 2013/14 Short Summary. Autonomous Intelligent Systems, University of Freiburg: Freiburg, Germany, 2013. Available online: http://ais.informatik.uni-freiburg.de/teaching/ws13/mapping/ (accessed on 7 July 2016).
- Bailey, T.; Durrant-Whyte, H. Simultaneous localization and mapping: Part II. IEEE Robot. Autom. Mag.
**2006**, 13, 108–117. [Google Scholar] [CrossRef] - Thrun, S. Simultaneous localization and mapping. In Robotics and Cognitive Approaches to Spatial Mapping; Springer: Berlin, Germany, 2008; Volume 38, pp. 13–41. [Google Scholar]
- Guth, F.; Silveira, L. Underwater SLAM: Challenges, state of the art, algorithms and a new biologically-inspired approach. In Proceedings of the 2014 5th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), São Paulo, Brazil, 12–15 August 2014; pp. 981–986.
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell.
**1986**, 8, 679–698. [Google Scholar] [CrossRef] [PubMed] - Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis.
**2004**, 60, 91–110. [Google Scholar] [CrossRef] - Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Proceedings of the ECCV’06: European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Volume 3951, pp. 404–417.
- Palomer, A.; Ridao, P.; Ribas, D. Multibeam 3D underwater SLAM with probabilistic registration. Sensors
**2016**, 16, 560. [Google Scholar] [CrossRef] [PubMed] - Colin, M.M.K.; Mae, L.S.; Yajun, P. Extracting seafloor elevations from side–scan sonar imagery for SLAM data association. In Proceedings of the IEEE 28th Canadian Conference on Electrical and Computer Engineering, Halifax, NS, Canada, 3–6 May 2015; pp. 332–336.
- Lee, S.J.; Song, J.B. A new sonar salient feature structure for EKF-based SLAM. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 5966–5971.
- Ye, X.; Li, P.; Zhang, J. Fully affine invariant matching algorithm based on nonlinear scale space for side scan sonar image. In Proceedings of the 2015 IEEE International Conference on Mechatronics and Automation, Beijing, China, 2–5 August 2015; pp. 2387–2391.
- Allotta, B.; Costanzi, R.; Ridolfi, A.; Pascali, M.A.; Reggiannini, M.; Salvetti, O.; Sharvit, J. ACOUSTIC data analysis for underwater archaeological sites detection and mapping by means of autonomous underwater vehicles. In Proceedings of the IEEE OCEANS 2015 GENOVA, Genova, Italy, 18–21 May 2015; pp. 1–6.
- Hurtos, N.; Cufi, X.; Petillot, Y.; Salvi, J. Fourier-Based registrations for two-dimensional forward-looking sonar image mosaicing. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, 7–12 October 2012; pp. 5298–5305.
- Aykin, M.D.; Negahdaripour, S. On feature matching and image registration for two-dimensional forward-scan sonar imaging. J. Field Robot.
**2013**, 30, 602–623. [Google Scholar] [CrossRef] - Dos Santos, M.M.; Ballester, P.; Zaffari, G.B.; Drews, P.; Botelho, S. A topological descriptor of acoustic images for navigation and mapping. In Proceedings of the 2015 12th Latin American Robotics Symposium and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR), Uberlandia, Brazil, 29–31 October 2015; pp. 289–294.
- Li, M.; Ji, H.; Wang, X.; Weng, L.; Gong, Z. Underwater object detection and tracking based on multi-beam sonar image processing. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; pp. 1071–1076.
- Chew, J.L.; Chitre, M. Object detection with sector scanning sonar. In Proceedings of the 2013 OCEANS-San Diego, San Diego, CA, USA, 23–27 September 2013; pp. 1–8.
- Liu, L.; Bian, H.; Yagi, S.I.; Yang, X. A configuration-conjunct threshold segmentation method of underwater linear object detection for forward-looking sonar. In Proceedings of the Symposium on Ultrasonic Electronics, Tsukuba, Japan, 5–7 November 2015.
- Hahne, D. Mapping with Mobile Robots. Ph.D. Thesis, University of Freiburg, Freiburg, Germany, December 2004. [Google Scholar]
- Zelinsky, W. The Cultural Geography of the United States, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1992. [Google Scholar]
- Siegwart, R.; Nourbakhsh, I.R. Introduction to Autonomous Mobile Robot; Massachusetts Institute of Technology: London, UK, 2004. [Google Scholar]
- Folkesson, J.; Christensen, H.I. Closing the loop with graphical slam. IEEE Trans. Robot.
**2007**, 23, 731–741. [Google Scholar] [CrossRef] - Morenoa, L.; Garridoa, S.; Blancoa, D.; Munozb, M.L. Differential evolution solution to the slam problem. Robot. Auton. Syst.
**2009**, 57, 441–450. [Google Scholar] [CrossRef] - Williams, B.; Cummins, M.; Neira, J.; Newman, P.M.; Reid, I.D.; Tardos, J.D. A comparison of loop closing techniques in monocular slam. Robot. Auton. Syst.
**2009**, 57, 1188–1197. [Google Scholar] [CrossRef] - Dissanayake, G.; Durrant-Whyte, H.; Bailey, T. A computationally efficient solution to the simultaneous localisation and map building (SLAM) problem. In Proceedings of the 2000 IEEE International Conference on Robotics & Automation, San Francisco, CA, USA, 24–28 April 2000; Volume 2, pp. 1009–1014.
- Durrant-Whyte, H.; Majumder, S.; Thrun, S.; Battista, M.; Scheding, S. A Bayesian algorithm for simultaneous localization and map building. In Robotics Research; Springer: Berlin, Germany, 2003; pp. 49–60. [Google Scholar]
- Menegatti, E.; Zanella, A.; Zilli, S.; Zorzi, F.; Pagello, E. Range-Only slam with a mobile robot and a wireless sensor networks. In Proceedings of the 2009 IEEE international conference on robotics and automation (ICRA), Kobe International Conference Center, Kobe, Japan, 12–17 May 2009; pp. 8–14.
- Aulinas, J.; Lladó, X.; Salvi, J.; Petillot, Y.R. Selective submap joining for underwater large scale 6-DOF SLAM. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 2552–2557.
- Ribas, D.; Ridao, P.; Tardós, J.D.; Neira, J. Underwater SLAM in a marina environment. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1455–1460.
- Petillot, Y.; Tena Ruiz, I.; Lane, D.M. Underwater vehicle obstacle avoidance and path planning using a multi-beam forward looking sonar. IEEE J. Ocean. Eng.
**2001**, 26, 240–250. [Google Scholar] [CrossRef] - Cervenka, P.; Moustier, C. Sidescan sonar image processing techniques. IEEE J. Ocean. Eng.
**1993**, 18, 108–122. [Google Scholar] [CrossRef] - Wang, X.; Wang, H.; Ye, X.; Zhao, L.; Wang, K. A novel segmentation algorithm for side-scan sonar imagery with multi-object. In Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics(ROBIO), Sanya, China, 15–18 December 2007; pp. 2110–2114.
- Edward, T.; James, R.; Daniel, T. Automated optimisation of simultaneous multibeam and sidescan sonar seabed mapping. In Proceedings of the 2007 IEEE Conference on Oceans-Europe, Aberdeen, Scotland, UK, 18–21 June 2007; pp. 1–6.
- Deep Vision AB Company. Available online: http://deepvision.se/ (accessed on 7 July 2016).
- ECA Group Company. Available online: http://www.ecagroup.com/en/defence-security (accessed on 7 July 2016).
- Otsu, N. A threshold selection method from gray-level histogram. IEEE Trans. SMC
**1979**, 9, 62–66. [Google Scholar] - Abber, G.G. Contour Tracing Algorithms. Available online: http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/alg.html (accessed on 7 July 2016).
- Kang, L.; Zhong, S.; Wang, F. A new contour tracing method in a binary image. In Proceedings of the 2011 International Conference on Multimedia Technology (ICMT), Hangzhou, China, 26–28 July 2011; pp. 6183–6186.
- Zhang, Y.; Wu, L. Optimal multi-level thresholding based on maximum Tsallis entropy via an artificial bee colony approach. Entropy
**2011**, 13, 841–859. [Google Scholar] [CrossRef] - Flandrin, P.; Rilling, F.; Goncaleves, P. Empirical mode decomposition as a filter bank. IEEE Signal Process. Lett.
**2004**, 11, 112–114. [Google Scholar] [CrossRef] - Ge, G.; Sang, E.; Liu, Z.; Zhu, B. Underwater acoustic feature extraction based on bidimensional empirical mode decomposition in shadow field. In Proceedings of the 3rd International Workshop on Signal Design and Its Applications in Communications (IWSDA), Chengdu, China, 23–27 September 2007; pp. 365–367.
- Desistek Robotik Elektronik Yazilim Company. Available online: http://www.desistek.com.tr/ (accessed on 7 July 2016).
- Leonard, J.; Durrant-Whyte, H. Directed Sonar Sensing for Mobile Robot Navigation. In The Springer International Series in Engineering and Computer Science; Springer US: Boston, MA, USA, 1992. [Google Scholar]
- Kang, J.G.; Choi, W.S.; An, S.Y.; Oh, S.Y. Augmented EKF based SLAM method for improving the accuracy of the feature map. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 3725–3731.
- Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM: A factored solution to the simultaneous localization and mapping problem. In Proceedings of the 2002 American Association for Artificial Intelligence (AAAI-02), Edmonton, AB, Canada, 28 July–1 August 2002; pp. 593–598.
- Ribas, D. Towards Simultaneous Localization & Mapping for an AUV Using an Imaging Sonar. Ph.D. Thesis, Universitat de Girona, Girona, Spain, 2005. [Google Scholar]
- Dissanayake, G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom.
**2001**, 17, 229–241. [Google Scholar] [CrossRef] - Allotta, B.; Caiti, A.; Costanzi, R.; Fanelli, F.; Fenucci, D.; Meli, E.; Ridolfi, A. A new AUV navigation system exploiting unscented Kalman filter. Ocean Eng.
**2016**, 113, 121–132. [Google Scholar] [CrossRef] - Hidalgo, F.; Bräunl, T. Review of Underwater SLAM Techniques. In Proceedings of the 6th International Conference on Automation, Robotics and Applications, Queenstown, New Zealand, 17–19 February 2015; pp. 305–311.

**Figure 4.** (**a**) Traditional Otsu TSM, Th = 0.3216; (**b**) Local TSM, Th = 0.1628; (**c**) Iterative TSM, Th = 0.4238; (**d**) Maximum entropy TSM, Th = 0.6627.

**Figure 5.** (**a**) Canny edge detection after applying the traditional Otsu method, bw = edge(b, ‘canny’, 0.33), N_{30} = 752 > 300; (**b**) Improved Otsu TSM, T = 0.3216, T* = 0.6784; (**c**) Result of the improved Otsu TSM after morphological operations marking the centroids of the obtained regions; (**d**) Result of the maximum entropy TSM after the same morphological operations marking the centroids of the acquired areas.

**Figure 6.** (**a**) Traditional Otsu TSM, Th = 0.1137; (**b**) Local TSM, Th = 0.0941; (**c**) Iterative TSM, Th = 0.2609; (**d**) Maximum entropy TSM, Th = 0.3176.

**Figure 7.** (**a**) Canny contour detection after applying the traditional Otsu method, bw = edge(b, ‘canny’, 0.1255), N_{15} = 419 > 100; (**b**) Improved Otsu TSM, T = 0.1137, T* = 0.3529; (**c**) Result of the improved Otsu TSM after morphological operations marking the centroids of the obtained regions; (**d**) Result of the maximum entropy TSM after the same morphological operations marking the centroids of the acquired areas.

**Figure 8.** The original FLS image comes from [48]; there is a plastic mannequin at the bottom center.

**Figure 9.** (**a**) Traditional Otsu TSM, Th = 0.1176; (**b**) Local TSM, Th = 0.0941; (**c**) Iterative TSM, Th = 0.2990; (**d**) Maximum entropy TSM, Th = 0.4118.

**Figure 10.** (**a**) Canny edge detection after employing the traditional Otsu method, bw = edge(b, ‘canny’, 0.13), N_{40} = 1341 > 600; (**b**) Improved Otsu TSM, T = 0.1176, T* = 0.5412; (**c**) Result of the improved Otsu TSM after morphological operations marking the centroids of the acquired areas; (**d**) Result of the maximum entropy TSM after the same morphological operations marking the centroids of the obtained regions.

**Figure 11.** The flow chart of the SLAM procedure based on an AEKF. Modified after [27].

**Figure 12.**The architecture of the AEKF-SLAM system, as described in [50].

**Figure 13.** (**a**) The robot is observing the centroids of certain parts of the body before loop closure; (**b**) The final AEKF-SLAM loop map where the landmarks are detected by the improved Otsu TSM.

**Figure 14.** (**a**) The robot is observing the centroids of certain parts of the body before loop closure; (**b**) The final AEKF-SLAM loop map where the landmarks are detected by the maximum entropy TSM.

**Table 1.** Confusion matrix of the real ship centroids and those detected by the improved Otsu TSM.

| Real \ Detected | Ship Centroids | Non-Ship Centroids |
|---|---|---|
| Ship Centroids | 2 | 0 |
| Non-Ship Centroids | 4 | 21 |

**Table 2.** Confusion matrix of the real ship centroids and those detected by the maximum entropy TSM.

| Real \ Detected | Ship Centroids | Non-Ship Centroids |
|---|---|---|
| Ship Centroids | 2 | 0 |
| Non-Ship Centroids | 8 | 20 |

**Table 3.**Computational costs of different segmentation methods on Figure 1a.

| Segmentation Method | Computational Time [s] |
|---|---|
| Traditional Otsu TSM | 0.178226 |
| Local TSM | 0.913942 |
| Iterative TSM | 0.289513 |
| Maximum entropy TSM | 1.562499 |
| Improved Otsu TSM | 0.868372 |

**Table 4.** Confusion matrix of the real branch centroids and those detected by the improved Otsu TSM.

| Real \ Detected | Branch Centroids | Non-Branch Centroids |
|---|---|---|
| Branch Centroids | 1 | 0 |
| Non-Branch Centroids | 1 | 13 |

**Table 5.** Confusion matrix of the real branch centroids and those detected by the maximum entropy TSM.

| Real \ Detected | Branch Centroids | Non-Branch Centroids |
|---|---|---|
| Branch Centroids | 1 | 0 |
| Non-Branch Centroids | 7 | 11 |

**Table 6.**Computational costs of different segmentation methods on Figure 1b.

| Segmentation Method | Computational Time [s] |
|---|---|
| Traditional Otsu TSM | 0.120458 |
| Local TSM | 0.261021 |
| Iterative TSM | 0.227290 |
| Maximum entropy TSM | 0.378283 |
| Improved Otsu TSM | 0.241164 |

**Table 7.** Confusion matrix of the real body centroids and those detected by the improved Otsu TSM.

| Real \ Detected | Body Centroids | Non-Body Centroids |
|---|---|---|
| Body Centroids | 5 | 0 |
| Non-Body Centroids | 11 | 26 |

**Table 8.** Confusion matrix of the real body centroids and those detected by the maximum entropy TSM.

| Real \ Detected | Body Centroids | Non-Body Centroids |
|---|---|---|
| Body Centroids | 2 | 3 |
| Non-Body Centroids | 44 | 40 |

**Table 9.**Computational costs of different segmentation methods on Figure 8.

| Segmentation Method | Computational Time [s] |
|---|---|
| Traditional Otsu TSM | 0.244472 |
| Local TSM | 0.941853 |
| Iterative TSM | 0.428126 |
| Maximum entropy TSM | 3.903889 |
| Improved Otsu TSM | 1.452562 |

**Table 10.**The landmark point positions of the ship, branch and body estimated by the AEKF and the true ones detected by the improved Otsu TSM.

| | Ship [m] | Ship [m] | Branch [m] | Body [m] | Body [m] | Body [m] | Body [m] | Body [m] |
|---|---|---|---|---|---|---|---|---|
| True | (53.5, 60.3) | (54.23, 65.39) | (18.73, −11.56) | (−94.98, −66.29) | (−96.69, −66.06) | (−102.12, −61.57) | (−102.41, −70.3) | (−106.55, −81.13) |
| Estimated | (53.66, 60.23) | (54.31, 65.32) | (18.8, −11.49) | (−94.99, −66.35) | (−96.67, −66.12) | (−102.2, −61.59) | (−102.4, −70.34) | (−106.4, −81.44) |

**Table 11.**The landmark point positions of the ship, branch and body estimated by the AEKF and the true ones detected by the maximum entropy TSM.

| | Ship [m] | Ship [m] | Branch [m] | Body [m] | Body [m] |
|---|---|---|---|---|---|
| True | (53.61, 60.18) | (54.22, 65.4) | (18.75, −11.55) | (−99.23, −67.7) | (−97.59, −72.08) |
| Estimated | (54.24, 59.12) | (54.96, 64.35) | (18.62, −11.69) | (−100.1, −65.82) | (−98.58, −70.23) |

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yuan, X.; Martínez, J.-F.; Eckert, M.; López-Santidrián, L. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation. *Sensors* **2016**, *16*, 1148.
https://doi.org/10.3390/s16071148
