Article

Underwater Image Enhancement and Mosaicking System Based on A-KAZE Feature Matching

Department of Electronics and Computer Engineering, Centre for Robotics and Intelligent Systems, University of Limerick, V94 T9PX Limerick, Ireland
*
Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2020, 8(6), 449; https://doi.org/10.3390/jmse8060449
Submission received: 21 May 2020 / Revised: 16 June 2020 / Accepted: 17 June 2020 / Published: 19 June 2020
(This article belongs to the Special Issue Underwater Computer Vision and Image Processing)

Abstract

Feature extraction and matching is a key component of image stitching and a critical step in advancing image reconstruction, machine vision and robotic perception algorithms. This paper presents a fast and robust underwater image mosaicking system based on (2D)2PCA and A-KAZE key-point extraction and optimal seam-line methods. The system utilizes image enhancement as a preprocessing step to improve quality and allow for greater keyframe extraction and matching performance, leading to better quality mosaicking. The application focus of this paper is underwater imaging, and it demonstrates the suitability of the developed system for advanced underwater reconstructions. The results show that the proposed method can address the problems of noise, mismatching and quality issues which are typically found in underwater image datasets. The results demonstrate that the proposed method is scale-invariant and show improvements in processing speed and system robustness over other methods found in the literature.

1. Introduction

Underwater imaging is an important technique used to document and reconstruct biologically or historically important sites that are generally inaccessible to the majority of the public and scientific communities. A variety of advanced technical equipment has been used to acquire underwater imagery, including sonar, thermal and optical cameras and laser systems. In modern underwater imaging, high-quality optical cameras are typically the preferred instrument for a host of computer vision tasks, from scene reconstruction and navigation [1] to intervention tasks [2]. However, there are some critical issues in processing underwater images and in maintaining high levels of image quality and robustness. Environmental and optical noise, wave disturbances, light stability and uniformity, temperature fluctuations and other environmental factors can all affect underwater imaging quality and can make documentation of underwater domains one of the most challenging tasks. Image processing techniques can help researchers in other fields to process data and find the most salient objects and key-points in images. In the underwater environment, optical scattering is one of the main issues that can cause various distortions, including color loss in the underwater image [3].
Research in the area of image processing for underwater domains has been of growing importance and relevance to the blue economy and has also been a focus for the energy production sector. Lu et al. [4] observed that flicker exists in underwater images and proposed a median dark channel prior technique for descattering. Li et al. [3] proposed a system for improving the quality of underwater images. Lu et al. [5] proposed a system for transferring an underwater-style image to a recovered style to restore underwater images using a Multi-Scale Cycle Generative Adversarial Network (MCycle GAN) system. They included a Structural Similarity Index Measure (SSIM) loss for underwater image restoration and designed an adaptive SSIM loss to adapt to underwater image quality using the dark channel prior (DCP) algorithm.
Several new methods have been presented for feature extraction in nonlinear scale spaces. Alcantarilla et al. [6] introduced a multiscale 2D feature detection method in nonlinear scale spaces called KAZE, which means wind in Japanese. Other, more common techniques extract features by building the Gaussian scale space of images at different levels, which smooths the image boundaries. Alcantarilla et al.'s approach describes 2D features in a nonlinear scale space using nonlinear diffusion filtering. This method increases repeatability and distinctiveness compared to the SIFT (scale-invariant feature transform) [7] and SURF (speeded-up robust features) [8] approaches. However, a disadvantage of this approach is that it can be computationally intense.
Image stitching includes image matching, feature matching, bundle adjustment, gain compensation, automatic panorama straightening and multiband blending [9]. A generic image stitching system is very useful but cannot guarantee accuracy for underwater images. Chen et al. [10] proposed a method for UAV image mosaicking with optical flow based on nonrigid matching algorithms, local transformation descriptions and aerial image mosaicking. Zhang et al. [11] used the classic SIFT feature and matching algorithm for the registration of images, in which false matching points are removed by the RANSAC (random sample consensus) algorithm. These systems, however, produce very high mismatch rates on underwater image features.
Nunes et al. [12] presented a mosaicking method for underwater robotic operations such as real-time object detection. This model, called robust and large-scale mosaicking (ROLAMOS), composes visual sequences of the seafloor into mosaics. Elibol et al. [13] proposed an image mosaicking technique for visual mapping in underwater environments using multiple underwater robots, which classifies overlapping image pairs in the trajectories carried out by the robot formation. In an experimental study, Ferentinos et al. [14] used objective computer vision and mosaicking techniques in processing sidescan sonar seafloor images to separate potential ancient shipwreck targets from other seafloor features with similar acoustic signatures.
This paper proposes and investigates a novel method for image processing and an advanced image stitching technique which is suited to the underwater domain and addresses many of the concerns raised over quality, domain suitability and robustness. The paper is organized as follows: Section 2 introduces the preprocessing of underwater images, such as noise removal and image enhancement operations, and presents the proposed method for feature extraction and image matching using principal component analysis and other techniques. Section 3 presents the results of the proposed model and a comparison with other published results. Section 4 discusses the accuracy evaluation and, finally, Section 5 presents the conclusions.

2. Materials and Methods

This paper puts forward an image stitching technique for underwater pipe images based on (1) the Fast Fourier Transform (FFT) for image noise removal, (2) Mix-CLAHE for image enhancement, (3) A-KAZE and (2D)2PCA for image matching and finally (4) the optimal seam-line technique for image stitching. The flow chart of the proposed method is shown in Figure 1 and its main steps are as follows:
  • Underwater noise removal from images by Fast Fourier Transform (FFT) technique.
  • Contrast and intensity of images are increased by Mixture Contrast Limited Adaptive Histogram Equalization (Mix-CLAHE) technique.
  • Important feature extraction and image matching are performed by (2D)2PCA and A-KAZE techniques.
  • Image stitching is carried out by the optimal seam-line method.

2.1. Noise Reduction

Noise is uncorrelated with respect to the image: there is no relationship between the noise values and the image pixel values. Underwater images are frequently corrupted by noise due to several environmental factors, including haze and turbidity. The computational problem for the Fast Fourier Transform (FFT) is to compute the sequence Xk of N complex-valued numbers given another data sequence xn of length N, according to [15]:
$$X_k = \sum_{n=0}^{N-1} x_n e^{-i 2\pi k n / N}, \qquad 0 \le k \le N-1 \tag{1}$$
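As an illustration, the frequency-domain denoising step can be sketched in Python with NumPy. The low-pass mask design below (keeping only a fraction of the lowest frequencies along each axis) is an assumption for illustration; the paper does not specify its exact filter.

```python
import numpy as np

def fft_denoise(img, keep_fraction=0.1):
    """Low-pass filter a grayscale image in the frequency domain.

    Zeroes out high-frequency FFT coefficients, keeping only the
    `keep_fraction` lowest frequencies along each axis (a common FFT
    denoising heuristic; assumed here, not the paper's exact filter).
    """
    f = np.fft.fft2(img)
    rows, cols = f.shape
    r_keep = int(rows * keep_fraction)
    c_keep = int(cols * keep_fraction)

    # Low frequencies live in the corners of the unshifted spectrum;
    # keep them and suppress everything else.
    mask = np.zeros_like(f)
    mask[:r_keep, :c_keep] = 1
    mask[:r_keep, -c_keep:] = 1
    mask[-r_keep:, :c_keep] = 1
    mask[-r_keep:, -c_keep:] = 1
    return np.real(np.fft.ifft2(f * mask))
```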

2.2. Image Enhancement

Currently there is much ongoing research in underwater image enhancement and in the use of algorithms designed around the characteristics of the underwater image, such as low contrast and color cast. Outdoor image enhancement models can be adapted and used for underwater images; however, these methods change the pixel values in either the transform domain or the spatial domain. There are some more advanced models for underwater image enhancement, such as deep learning models, Convolutional Neural Network (CNN) utilisation, transform-domain image enhancement and spatial-domain image enhancement [16]. Many methods have been reviewed in this work for both outdoor and underwater image enhancement. The area of research is broad and includes: Contrast Limited Adaptive Histogram Equalization (CLAHE) [17], Gamma Correction and Generalized Unsharp Masking (GUM) [18], relative global histogram stretching (RGHS) [19], homomorphic and anisotropic filters [20], wavelet-based fusion [21], a wavelet-based perspective enhancement technique [22], a CNN-based underwater image enhancement method [23], UIE-net (Underwater Image Enhancement-net) [24], WaterGAN [25], adapted GANs [26], Wasserstein GAN [27] and others.
The refractive index is an important factor in underwater imaging and has a perceptible impact on underwater imaging sensors. Underwater, even the housing of a camera can act as a lens and cause refraction of light. Jordt et al. [28] proposed a geometrical imaging model for 3D reconstruction with optical cameras to address the geometric effect of refraction. The geometric effect of refraction can be seen in the NIR wavelength as radial distortion. Anwer et al. [29] proposed a time-of-flight correction method to overcome the effect of refraction of light in underwater imaging. Łuczyński et al. [30] presented the pinax model for calibration and rectification of underwater cameras in flat-pane housings. Their model takes the refraction indices of water into account, and it is sufficient to calibrate the camera only once in air before underwater imaging.
In this proposed model, a method called Mixture Contrast Limited Adaptive Histogram Equalization (Mix-CLAHE) [31] has been applied to improve the visibility and contrast of underwater images. The method operates CLAHE on the HSV and RGB color models to generate two images, which are combined by the Euclidean norm.
CLAHE is a form of Adaptive Histogram Equalization (AHE) which limits amplification by clipping the histogram at a user-defined clip limit. The clip limit determines the degree of noise smoothing in the contrast enhancement. Mix-CLAHE mixes the results of CLAHE-RGB and CLAHE-HSV. Mix-CLAHE first normalizes the result of CLAHE-RGB as:
$$[r_{c1}, g_{c1}, b_{c1}] = \left[\frac{R_c}{R_c + G_c + B_c}, \frac{G_c}{R_c + G_c + B_c}, \frac{B_c}{R_c + G_c + B_c}\right], \qquad c = V \cdot S \tag{2}$$
where Red (R), Green (G) and Blue (B) are RGB color model terms and Hue (H), Saturation (S) and Value (V) are HSV color model terms. The result of CLAHE-HSV is then converted to RGB, with the conversion from HSV to RGB denoted by (rc2, gc2, bc2), and the two results are combined using a Euclidean norm as:
$$RGB_n = \left[\sqrt{r_{c1}^2 + r_{c2}^2},\; \sqrt{g_{c1}^2 + g_{c2}^2},\; \sqrt{b_{c1}^2 + b_{c2}^2}\right] \tag{3}$$

2.3. Image Matching

An advanced technique for feature extraction is 2-directional 2-dimensional principal component analysis ((2D)2PCA). In this method, a 2-dimensional PCA is applied along the row direction of the images, and then a second 2-dimensional PCA is applied along the column direction. In the (2D)2PCA technique, size reduction is thus applied to the rows and columns of the images simultaneously [32]. In order to describe the different patterns and angles within one underwater image, the texture attribute can be used, since texture captures the spatial distribution of gray levels and retains information about variations in brightness, orientation and angle. However, the high dimensionality of a texture feature vector limits computational efficiency, so it is necessary to choose a method that combines the representation of texture with dimensionality reduction, making the retrieval and mosaicking algorithm more effective and computationally tractable. 2-directional 2-dimensional principal component analysis is a fast and accurate feature extraction and data representation technique that aims at finding a less redundant and more compact representation of the data, in which a reduced number of components can independently account for the data variation.
To apply this technique to an underwater image A with m rows and n columns, the covariance matrix C can be defined as:
$$C = \frac{1}{M}\sum_{k=1}^{M}\sum_{i=1}^{m}\left(A_k^{(i)} - \bar{A}^{(i)}\right)^{T}\left(A_k^{(i)} - \bar{A}^{(i)}\right) \tag{4}$$
where M is the number of training samples, each an m × n matrix denoted $A_k$ (k = 1, 2, …, M), $\bar{A}$ is the average matrix and C is the covariance matrix; $A_k^{(i)}$ and $\bar{A}^{(i)}$ denote the i-th row vectors of $A_k$ and $\bar{A}$, respectively. Equation (4) is a 2-dimensional PCA operator on the image rows, and another 2-dimensional PCA can be applied on the image columns as:
$$C = \frac{1}{M}\sum_{k=1}^{M}\sum_{j=1}^{n}\left(A_k^{(j)} - \bar{A}^{(j)}\right)\left(A_k^{(j)} - \bar{A}^{(j)}\right)^{T} \tag{5}$$
where $A_k^{(j)}$ and $\bar{A}^{(j)}$ denote the j-th column vectors of $A_k$ and $\bar{A}$, respectively. The eigenvectors corresponding to the q largest eigenvalues of the matrix C in Equation (5) are placed as the columns of a matrix $Z \in \mathbb{R}^{m \times q}$, and likewise the eigenvectors corresponding to the d largest eigenvalues of the row-direction covariance in Equation (4) form $X \in \mathbb{R}^{n \times d}$. Projecting an image matrix A onto Z yields a q × n matrix $Y = Z^T A$, and projecting A onto both Z and X generates a q × d matrix $Y = Z^T A X$. This matrix Y is then used as the extracted feature matrix in the proposed method.
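The row- and column-direction projections described above can be sketched with NumPy as follows; the function name and the use of `eigh` on the two covariance matrices are illustrative choices, not the authors' implementation.

```python
import numpy as np

def two_directional_2dpca(images, q, d):
    """Sketch of (2D)^2PCA feature extraction.

    `images`: array of shape (M, m, n); `q`, `d`: numbers of retained
    column- and row-direction components. Returns Z (m x q), X (n x d)
    and the q x d feature matrices Z^T A_k X.
    """
    A = np.asarray(images, dtype=float)
    diff = A - A.mean(axis=0)             # centred samples, (M, m, n)

    # Column-direction covariance (sum of D D^T): m x m.
    cov_m = np.einsum('kij,klj->il', diff, diff) / len(A)
    # Row-direction covariance (sum of D^T D): n x n.
    cov_n = np.einsum('kij,kil->jl', diff, diff) / len(A)

    # eigh returns eigenvalues in ascending order, so reverse the
    # eigenvector columns to pick the largest q (resp. d).
    _, vec_m = np.linalg.eigh(cov_m)
    _, vec_n = np.linalg.eigh(cov_n)
    Z = vec_m[:, ::-1][:, :q]             # m x q
    X = vec_n[:, ::-1][:, :d]             # n x d

    # Feature matrices Y_k = Z^T A_k X for every sample.
    features = np.einsum('mq,kmn,nd->kqd', Z, A, X)
    return Z, X, features
```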
Alcantarilla et al. [33] also proposed a fast, novel multiscale feature detection approach that exploits the benefits of nonlinear scale spaces, called Accelerated-KAZE (A-KAZE). After the use of Gaussian-based noise removal, the main data in an image can be damaged and blurred. The A-KAZE method instead builds a nonlinear scale space that blurs the image adaptively, reducing noise while preserving important image structure. The nonlinear scale space is built using the fast explicit diffusion (FED) algorithm and the principle of nonlinear diffusion filtering. The image luminance is diffused through the nonlinear partial differential equation that defines the nonlinear scale space. Classic nonlinear diffusion is defined by:
$$\frac{\partial Lum}{\partial t} = \mathrm{div}\left(C(x, y, t)\, \nabla Lum\right) \tag{6}$$
where Lum is the luminance of the image, div is the divergence, ∇ is the gradient operator, t is a scale parameter and C is the conductivity function, which adapts the diffusion to the local image structure. The function C can be either a scalar or a tensor based on the image structure and is defined by:
$$C(x, y, t) = G\left(\left|\nabla Lum_\sigma(x, y, t)\right|\right) \tag{7}$$
where $Lum_\sigma$ and $\nabla Lum_\sigma$ are the Gaussian-smoothed version of the image and its gradient, respectively. Although there are several conductivity functions, the conductivity function G2 favours wide regions and can be expressed as:
$$G_2 = \frac{1}{1 + \left|\nabla Lum_\sigma\right|^2 / \lambda^2} \tag{8}$$
where λ is a contrast factor that controls which edges are preserved or smoothed away. In the A-KAZE algorithm, after the nonlinear scale space is built, the Hessian of each filtered image $Lum^i$ in the scale space is computed for feature detection. The Hessian matrix is defined by:
$$H(Lum^i) = \sigma_{i,norm}^2 \begin{pmatrix} Lum_{xx}^i & Lum_{xy}^i \\ Lum_{xy}^i & Lum_{yy}^i \end{pmatrix} \tag{9}$$
where $\sigma_{i,norm}^2$ is the normalized scale factor for the octave of each image in the nonlinear scale space (i.e., $\sigma_{i,norm} = \sigma_i / 2^{o_i}$, with $o_i$ the octave index). $Lum_{xx}^i$ and $Lum_{yy}^i$ are the horizontal and vertical second-order derivatives, respectively, and $Lum_{xy}^i$ is the cross-partial derivative.
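The conductivity function and the scale-normalized Hessian response can be illustrated with finite differences in NumPy. Using the determinant of the Hessian as the detector response follows the A-KAZE approach; the finite-difference approximation of the derivatives is a simplification for illustration.

```python
import numpy as np

def g2(grad_mag, lam):
    """G2 conductivity: close to 1 in flat regions, small at strong
    edges, so diffusion smooths inside regions but not across edges."""
    return 1.0 / (1.0 + (grad_mag / lam) ** 2)

def hessian_response(lum, sigma_norm):
    """Scale-normalized determinant-of-Hessian response for one
    filtered image of the scale space (finite-difference sketch)."""
    ly, lx = np.gradient(lum)         # first derivatives (rows, cols)
    lyy, lxy = np.gradient(ly)        # lxy approximated by d(ly)/dx
    _, lxx = np.gradient(lx)          # second derivative along x
    return sigma_norm ** 2 * (lxx * lyy - lxy ** 2)
```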
The eigenvectors and the main directions of the eigenvalues are constructed. In this step, the eigenvectors with scale and rotation invariance are extracted based on the first-order differential images.
The A-KAZE algorithm uses a Modified-Local Difference Binary (M-LDB) descriptor to describe the feature points, exploiting gradient and intensity information from the nonlinear scale space. Yang and Cheng [34] introduced the LDB descriptor, which follows the same principle as BRIEF [35].

2.4. Image Mosaicking

Image mosaicking is a technology that combines several overlapping images into one large image, comprising the steps of image acquisition, image registration and image fusion [36]. A fast and highly accurate image matching method is necessary to achieve a high-resolution mosaic of underwater images.
After feature and key-point extraction, the system selects the principal pixels and images to place in the final stitched image. In the optimal seam-line method, these pixels should be combined so as to minimize visible seams and ghosting [37]. Considering the stitching of two images a and b, the optimal seam-line energy can be defined as:
$$E_{clo}(x, y) = \frac{1}{N_s}\sum_{i, j \in V}\left|a(x + i, y + j) - b(x + i, y + j)\right| \tag{10}$$
$$E_{str}(x, y) = \min\left(c_a(x, y),\; c_b(x, y)\right) \tag{11}$$
$$E(x, y) = \alpha E_{clo}(x, y)^2 + \beta E_{str}(x, y) \tag{12}$$
where ca and cb represent the changes of the two images in the x and y axis directions, respectively, and V is the neighbourhood of Ns pixels over which the differences are summed. Eclo is the average colour-difference energy in the related neighbourhood, while Estr measures the similarity of the geometrical structure between the images as indicated by their gradients. α and β are weighting factors used to balance the contribution of structural change against colour change.
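A sketch of the combined seam-line energy on the overlap region of two grayscale images might look as follows; the window size, the gradient-based structure term and the weighting factor are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def seam_energy(a, b, beta=0.5, win=1):
    """Per-pixel seam-line energy over the overlap of two aligned
    grayscale float images `a` and `b` of equal shape."""
    # Colour term: mean absolute difference in a (2*win+1)^2 window.
    diff = np.abs(a - b)
    k = 2 * win + 1
    pad = np.pad(diff, win, mode='edge')
    e_clo = np.zeros_like(diff)
    for i in range(k):
        for j in range(k):
            e_clo += pad[i:i + diff.shape[0], j:j + diff.shape[1]]
    e_clo /= k * k

    # Structure term: minimum of the two gradient magnitudes.
    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)
    e_str = np.minimum(grad_mag(a), grad_mag(b))

    # Combined energy (the colour weight is folded to 1 here).
    return e_clo ** 2 + beta * e_str
```

A seam is then traced along the path of minimum accumulated energy in this map, e.g. by dynamic programming.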

3. Results

The proposed model and the test results reported in this paper are based on a sample of underwater pipe images from the online MARIS dataset [38]. The MARIS dataset was acquired underwater near Portofino, Italy, using a stereo vision imaging system. It provides images of cylindrical pipes of different colours submerged at a depth of 10 m. The dataset includes 9600 stereo images in Bayer-encoded format with a resolution of 1292 × 964.
The (2D)2PCA technique has been used for feature extraction. Several feature matrices have been selected and compared against other techniques from the literature. Techniques such as PCA, 2DPCA and SVD, which are used for feature extraction in machine vision applications, serve as baselines on the same dataset for comparison of results. The proposed system demonstrates improved results while utilizing only a few principal components.
Figure 2 shows two input images and the subsequent results of the FFT noise reduction algorithm. As can be seen, the processed images have improved visual clarity and are ready to serve as input to the image enhancement stage.
Figure 3 shows the two images after noise reduction and the results of the image enhancement algorithm based on the Mix-CLAHE technique. Figure 4 shows a comparison between this model and other image enhancement methods [39]. Histogram Equalization (HE) is a traditional technique for adjusting image intensities to enhance contrast. The Integrated Color Model (ICM) converts the image to the HSI color model and stretches the S and I components to improve the brightness and saturation of the output image. Relative Global Histogram Stretching (RGHS) is a colour image enhancement method that improves the visual effect of the image and retains the main information without blind enhancement, accounting for underwater image characteristics such as turbidity, haze and noise. The Unsupervised Color Correction Method (UCM) is based on contrast optimization through selective histogram stretching and the Von Kries hypothesis (VKH). It can raise the brightness of low-brightness components and uses a Rayleigh distribution function to redistribute the input underwater image, combining variations of UCM and ICM to improve image contrast and reduce oversaturation, overenhanced regions and noise introduction. Another algorithm is the Screened Poisson equation for image contrast enhancement (SP), whose output is obtained by applying the Screened Poisson equation to each colour channel [40]. Underwater images can be numerically represented by a feature vector, preferentially in a low-dimensional space in which the most relevant visual aspects are emphasized. Visually, underwater image enhancement and noise removal methods can address typical image damage, including degradations and distortions. In underwater environments, poor illumination causes low energy in the RGB components of underwater images.
Underwater image enhancement methods such as Histogram Equalization (HE), Gray-World Assumption (GWA), White Balancing (WB) and other techniques can directly cause distortions. Thus, in the proposed method, the Mix-CLAHE technique has been used to improve the image and address these degradations.
Figure 5 shows the result of feature extraction and image matching using the (2D)2PCA and A-KAZE algorithms. The results show the features matched between two images with different camera angles from an underwater motion camera video. Figure 6 shows a comparison between this model and another image matching method for underwater images called Oriented FAST and Rotated BRIEF (ORB) [45], which is essentially a fusion of the FAST key-point detector and the Binary Robust Independent Elementary Features (BRIEF) descriptor, with many modifications to enhance performance. As shown, the key-point extraction and matching in the proposed method is very accurate, and this method outperforms the other methods for image mosaicking. The number of match points used for both the proposed method and the ORB method (Figure 6) is 110. The largest angle difference between the first and last image is selected to assess the accuracy of matching and the mismatched points in this comparison.
Figure 7 shows the result of the proposed method for a collection of video frames with different angles and contrasts. The results clearly show that this method can create a good underwater mosaic from frames of the underwater pipe image dataset with large temporal separation. The frames used were acquired at 15 frames per second and, in this first step, one image in every 10 frames was selected for the mosaicking process. Given the large angle difference between images, a slight curvature in the final image is inevitable. This curvature arises from retaining all image pixels in the final image. In the image mosaicking process, this curvature can be reduced in the final image depending on the number of match points.
Figure 8 shows the result of the proposed method for a collection of video frames with greater differences in angle; in this step, one image in every 20 frames was selected, so the frame difference between the first and last image is 60.

4. Discussion

Figure 9 presents underwater pipe images mosaicked by another mosaicking method based on SIFT and random sample consensus (RANSAC) [46] and shows the differences between that method and the proposed model. The proposed technique maintains the accuracy and quality of the mosaic by eliminating the influence of mismatched point pairs in the underwater images. Red rectangles mark obviously mismatched points and regions for the SIFT and RANSAC method, and green rectangles mark differences between the results of the proposed method and the other technique. Accuracy assessment is critical to discern the performance of image matching and mosaicking methods and is key for the use of image mosaicking in underwater smart robots. Note that the input images for this comparison (shown in Figure 9) are the underwater pipe images denoised using the FFT technique.
To evaluate the accuracy, the Root-Mean-Square Error (RMSE) is used. The distance between a point in the first image of each image pair and the corresponding point in the subsequent frame with a different view and angle (the second image of the pair) is [47]:
$$D_i = \sqrt{(U_i - x_i)^2 + (V_i - y_i)^2} \tag{13}$$
where (xi, yi) and (Ui, Vi) are the coordinates of a pair of corresponding points in the first and second image, respectively, and N is the total number of correct matching pairs. The RMSE is defined as:
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(U_i - x_i)^2 + (V_i - y_i)^2\right]} \tag{14}$$
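This RMSE can be computed directly from the matched coordinates, for example:

```python
import numpy as np

def match_rmse(pts1, pts2):
    """RMSE over corresponding point pairs, per the definition above.

    `pts1` holds the (x_i, y_i) and `pts2` the matched (U_i, V_i),
    both as (N, 2) arrays.
    """
    diff = np.asarray(pts2, float) - np.asarray(pts1, float)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```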
To evaluate the accuracy and effectiveness of the proposed method, five image pairs are tested (Figure 5). Table 1 compares the matching accuracy (RMSE), the number of extracted feature points and the correct matching rate on the test images. It can be seen from Figure 9 and Table 1 that the proposed method is more accurate than the other methods. Table 1 also shows that, although the number of feature points extracted by SURF and SIFT is larger than that of the proposed method for each image pair, the matching accuracy of the proposed technique is better. The correct matching rate (CMR) is defined as:
$$CMR = \frac{N_c}{N} \tag{15}$$
where Nc and N are the number of correct matching points and the total number of matching points, respectively.
Finally, as a further evaluation of mosaicking accuracy, the mosaicked results have been compared to the ground truth [48]. The mosaicking accuracy is calculated as the mean difference between the coordinates of corresponding pixels; the average reprojection error in pixels (eM) is defined as:
$$e_M = \frac{1}{N}\sum_{i=1}^{N}\sqrt{\varepsilon_i^T \varepsilon_i}, \qquad \varepsilon_i = x_i - x'_i \tag{16}$$
where N is the number of corresponding pixel pairs and xi and x′i are matching pixels in the mosaicked image and the ground truth, respectively. A large eM indicates deformation in the mosaicking process. Table 2 shows the mosaicking error for the proposed method and other techniques. For this comparison, data 1 and data 2 are images mosaicked from the main dataset with 10- and 20-frame differences between images, respectively.
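Likewise, the average reprojection error can be computed from matched pixel coordinates in the mosaic and the ground truth:

```python
import numpy as np

def mosaic_reprojection_error(mosaic_pts, truth_pts):
    """Average reprojection error in pixels, per the definition above.

    `mosaic_pts` and `truth_pts` are (N, 2) arrays of matching pixel
    coordinates in the mosaicked image and in the ground truth.
    """
    eps = np.asarray(mosaic_pts, float) - np.asarray(truth_pts, float)
    return np.mean(np.sqrt(np.sum(eps ** 2, axis=1)))
```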

5. Conclusions

In this paper, we propose an optimized underwater image mosaicking method based on (2D)2PCA, A-KAZE and the optimal seam-line technique, evaluated against other competing techniques on an underwater pipe image dataset. First, preprocessing is performed using the FFT and Mix-CLAHE methods, followed by feature extraction. Mosaicked images are then generated using the optimal seam-line method for image fusion. The effectiveness of the proposed method has been shown in comparison with other approaches reported in the literature. Through demonstration and a Root-Mean-Square Error estimator, the developed method is shown to give a significant improvement over the comparison systems. Future work will investigate adapting this system to real-time underwater image mosaicking.

Author Contributions

Methodology, I.A.K., G.D. and D.T.; analysis, I.A.K. and G.D.; investigation, G.D.; writing, original draft preparation, I.A.K.; writing, review and editing, G.D. and D.T.; supervision, D.T.; funding acquisition, G.D. All authors have read and agreed to the published version of the manuscript.

Funding

This publication has emanated from research supported by the Science Foundation Ireland under the MaREI and CONFIRM Research Centres (Grant No. SFI/12/RC/2302_P2, SFI/14/SP/2740 and SFI/16/RC/3918), RoboVaaS EU ERA-Net Co-fund award through Irish Marine Institute and EU Horizon 2020 project EUMarineRobots under Grant Agreement 731103.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rossi, M.; Trslić, P.; Sivčev, S.; Riordan, J.; Toal, D.; Dooly, G. Real-time underwater stereofusion. Sensors 2018, 18, 3936.
  2. Sivčev, S.; Rossi, M.; Coleman, J.; Dooly, G.; Omerdić, E.; Toal, D. Fully automatic visual servoing control for work-class marine intervention ROVs. Control Eng. Pract. 2018, 74, 153–167.
  3. Li, Y.; Zhang, Y.; Xu, X.; He, L.; Serikawa, S.; Kim, H. Dust removal from high turbid underwater images using convolutional neural networks. Opt. Laser Technol. 2019, 110, 2–6.
  4. Lu, H.; Li, Y.; Zhang, L.; Serikawa, S. Contrast enhancement for images in turbid water. J. Opt. Soc. Am. A 2015, 32, 886–893.
  5. Lu, J.; Li, N.; Zhang, S.; Yu, Z.; Zheng, H.; Zheng, B. Multi-scale adversarial network for underwater image restoration. Opt. Laser Technol. 2019, 110, 105–113.
  6. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE Features. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 214–227.
  7. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  8. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
  9. Li, M.; Chen, R.; Zhang, W.; Li, D.; Liao, X.; Wang, L.; Pan, Y.; Zhang, P. A stereo dual-channel dynamic programming algorithm for UAV image stitching. Sensors 2017, 17, 2060.
  10. Chen, J.; Xu, Q.; Luo, L.; Wang, Y.; Wang, S. A Robust Method for Automatic Panoramic UAV Image Mosaic. Sensors 2019, 19, 1898.
  11. Zhang, W.; Guo, B.; Li, M.; Liao, X.; Li, W. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow. Sensors 2018, 18, 1214.
  12. Nunes, A.P.; Gaspar, A.R.S.; Pinto, A.M.; Matos, A.C. A mosaicking technique for object identification in underwater environments. Sens. Rev. 2019, 39.
  13. Elibol, A.; Kim, J.; Gracias, N.; Garcia, R. Efficient image mosaicing for multi-robot visual underwater mapping. Pattern Recognit. Lett. 2014, 46, 20–26.
  14. Ferentinos, G.; Fakiris, E.; Christodoulou, D.; Geraga, M.; Dimas, X.; Georgiou, N.; Kordella, S.; Papatheodorou, G.; Prevenios, M.; Sotiropoulos, M. Optimal sidescan sonar and subbottom profiler surveying of ancient wrecks: The ‘Fiskardo’ wreck, Kefallinia Island, Ionian Sea. J. Archaeol. Sci. 2020, 113, 105032.
  15. Mkayes, A.A.; Saad, N.M.; Faye, I.; Walter, N. Image histogram and FFT based critical study on noise in fluorescence microscope images. In Proceedings of the 2016 6th International Conference on Intelligent and Advanced Systems (ICIAS), Kuala Lumpur, Malaysia, 15–17 August 2016; pp. 1–4.
  16. Wang, Y.; Song, W.; Fortino, G.; Qi, L.-Z.; Zhang, W.; Liotta, A. An experimental-based review of image enhancement and image restoration methods for underwater imaging. IEEE Access 2019, 7, 140233–140251.
  17. Zuiderveld, K. Contrast limited adaptive histogram equalization. In Graphics Gems IV; Heckbert, P.S., Ed.; Academic Press Professional: San Diego, CA, USA, 1994; pp. 474–485.
  18. Deng, G. A generalized unsharp masking algorithm. IEEE Trans. Image Process. 2010, 20, 1249–1261.
  19. Huang, D.; Wang, Y.; Song, W.; Sequeira, J.; Mavromatis, S. Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition. In Proceedings of the International Conference on Multimedia Modeling, Bangkok, Thailand, 5–7 February 2019; pp. 453–465.
  20. Prabhakar, C.; Kumar, P.P. Underwater image denoising using adaptive wavelet subband thresholding. In Proceedings of the 2010 International Conference on Signal and Image Processing, Changsha, China, 14–15 December 2010; pp. 322–327.
  21. Khan, A.; Ali, S.S.A.; Malik, A.S.; Anwer, A.; Meriaudeau, F. Underwater image enhancement by wavelet based fusion. In Proceedings of the 2016 IEEE International Conference on Underwater System Technology: Theory and Applications (USYS), Penang, Malaysia, 13–14 December 2016; pp. 83–88.
  22. Vasamsetti, S.; Mittal, N.; Neelapu, B.C.; Sardana, H.K. Wavelet based perspective on variational enhancement technique for underwater imagery. Ocean Eng. 2017, 141, 88–100.
  23. Perez, J.; Attanasio, A.C.; Nechyporenko, N.; Sanz, P.J. A deep learning approach for underwater image enhancement. In Proceedings of the International Work-Conference on the Interplay Between Natural and Artificial Computation, Corunna, Spain, 19–23 June 2017; pp. 183–192.
  24. Wang, Y.; Zhang, J.; Cao, Y.; Wang, Z. A deep CNN method for underwater image enhancement. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1382–1386.
  25. Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot. Autom. Lett. 2017, 3, 387–394.
  26. Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 20–25 May 2018; pp. 7159–7165.
  27. Yu, X.; Qu, Y.; Hong, M. Underwater-GAN: Underwater image restoration via conditional generative adversarial network. In Proceedings of the International Conference on Pattern Recognition, Beijing, China, 20–24 August 2018; pp. 66–75.
  28. Jordt, A.; Köser, K.; Koch, R. Refractive 3D reconstruction on underwater images. Methods Oceanogr. 2016, 15, 90–113.
29. Anwer, A.; Ali, S.S.A.; Khan, A.; Mériaudeau, F. Underwater 3-D scene reconstruction using Kinect v2 based on physical models for refraction and time of flight correction. IEEE Access 2017, 5, 15960–15970. [Google Scholar] [CrossRef]
30. Łuczyński, T.; Pfingsthorn, M.; Birk, A. The Pinax-model for accurate and efficient refraction correction of underwater cameras in flat-pane housings. Ocean Eng. 2017, 133, 9–22. [Google Scholar] [CrossRef]
  31. Hitam, M.S.; Awalludin, E.A.; Yussof, W.N.J.H.W.; Bachok, Z. Mixture contrast limited adaptive histogram equalization for underwater image enhancement. In Proceedings of the 2013 International Conference on Computer Applications Technology (ICCAT), Sousse, Tunisia, 20–22 January 2013; pp. 1–5. [Google Scholar]
  32. Kazerouni, I.; Haddadnia, J. A mass classification and image retrieval model for mammograms. Imaging Sci. J. 2014, 62, 353–357. [Google Scholar] [CrossRef]
33. Alcantarilla, P.F.; Nuevo, J.; Bartoli, A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. In Proceedings of the British Machine Vision Conference, Bristol, UK, 9–13 September 2013. [Google Scholar]
34. Yang, X.; Cheng, K.-T. LDB: An ultra-fast feature for scalable augmented reality on mobile devices. In Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Atlanta, GA, USA, 5–8 November 2012; pp. 49–57. [Google Scholar]
  35. Calonder, M.; Lepetit, V.; Ozuysal, M.; Trzcinski, T.; Strecha, C.; Fua, P. BRIEF: Computing a local binary descriptor very fast. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1281–1298. [Google Scholar] [CrossRef] [Green Version]
  36. Lin, C.-C.; Pankanti, S.U.; Natesan Ramamurthy, K.; Aravkin, A.Y. Adaptive as-natural-as-possible image stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1155–1163. [Google Scholar]
  37. Wang, B.; Li, H.; Hu, W. Research on key techniques of multi-resolution coastline image fusion based on optimal seam-line. Earth Sci. Inform. 2019, 1–12. [Google Scholar] [CrossRef]
  38. Oleari, F.; Kallasi, F.; Rizzini, D.L.; Aleotti, J.; Caselli, S. An underwater stereo vision system: From design to deployment and dataset acquisition. In Proceedings of the OCEANS 2015, Genova, Italy, 18–21 May 2015; pp. 1–6. [Google Scholar]
  39. Wang, Y. Single Underwater Image Enhancement and Color Restoration. Available online: https://github.com/wangyanckxx/Single-Underwater-Image-Enhancement-and-Color-Restoration (accessed on 28 May 2020).
  40. Mangeruga, M.; Bruno, F.; Cozza, M.; Agrafiotis, P.; Skarlatos, D. Guidelines for underwater image enhancement based on benchmarking of different methods. Remote Sens. 2018, 10, 1652. [Google Scholar] [CrossRef] [Green Version]
  41. Hummel, R. Image enhancement by histogram transformation. Comput. Graph. Image Process. 1975, 6, 184–195. [Google Scholar] [CrossRef]
  42. Iqbal, K.; Salam, R.A.; Osman, A.; Talib, A.Z. Underwater Image Enhancement Using an Integrated Colour Model. IAENG Int. J. Comput. Sci. 2007, 34, 2–12. [Google Scholar]
  43. Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A.; Talib, A.Z.H. Enhancing the low quality images using unsupervised colour correction method. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics, Istanbul, Turkey, 10–13 October 2010; pp. 1703–1709. [Google Scholar]
  44. Ghani, A.S.A.; Isa, N.A.M. Underwater image quality enhancement through composition of dual-intensity images and Rayleigh-stretching. SpringerPlus 2014, 3, 757. [Google Scholar] [CrossRef] [Green Version]
  45. Żak, B.; Hożyń, S. Local image features matching for real-time seabed tracking applications. J. Mar. Eng. Technol. 2017, 16, 273–282. [Google Scholar] [CrossRef] [Green Version]
  46. Zhao, J.; Zhang, X.; Gao, C.; Qiu, X.; Tian, Y.; Zhu, Y.; Cao, W. Rapid mosaicking of unmanned aerial vehicle (UAV) images for crop growth monitoring using the SIFT algorithm. Remote Sens. 2019, 11, 1226. [Google Scholar] [CrossRef] [Green Version]
  47. Ma, W.; Wu, Y.; Liu, S.; Su, Q.; Zhong, Y. Remote sensing image registration based on phase congruency feature detection and spatial constraint matching. IEEE Access 2018, 6, 77554–77567. [Google Scholar] [CrossRef]
  48. Tian, Y.; Sun, A.; Luo, N.; Gao, Y. Aerial image mosaicking based on the 6-DoF imaging model. Int. J. Remote Sens. 2020, 41, 74–89. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the proposed method.
Figure 2. Underwater pipe image noise removal: (a) original image and (b) output image.
Figure 3. Underwater pipe image enhancement (Mix-CLAHE): (a) original image after noise reduction and (b) output image.
Figure 4. Comparison of the results of several image enhancement methods: (a) original image; (b) Histogram Equalization [41]; (c) Integrated Color Model [42]; (d) Relative Global Histogram Stretching [19]; (e) Unsupervised Color Correction Method [43]; (f) Rayleigh Distribution [44]; and (g) Screened Poisson [40].
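Among the baselines in Figure 4, histogram equalization [41] is the simplest: it remaps gray levels through the image's normalized cumulative histogram so that a low-contrast range spreads across the full intensity range. The sketch below is an illustrative, minimal pure-Python version operating on a flat list of gray levels; the sample pixel values are made up and this is not the paper's implementation.

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization for a flat list of gray levels."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the gray levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-empty bin
    n = len(pixels)
    # Map each level so the CDF becomes (approximately) linear.
    lut = [
        round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
        for c in cdf
    ]
    return [lut[p] for p in pixels]

# A low-contrast strip of pixels is stretched across the full range.
img = [100, 100, 101, 102, 102, 103]
print(equalize_histogram(img))  # [0, 0, 64, 191, 191, 255]
```

CLAHE (and the Mix-CLAHE variant used in this paper) applies the same idea per tile with a clip limit on the histogram, which avoids the noise amplification a single global remap can cause.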
Figure 5. Image matching: (a) two original images (in each step) and (b) the result of image matching using the (2D)2PCA and A-KAZE algorithms.
Figure 6. Comparison of image matching results (output from step (2)) for 110 points: (a) Oriented FAST and Rotated BRIEF (ORB) and (b) the proposed method.
Figure 7. Mosaic of the underwater pipe dataset using every 10th frame: (a) original images and (b) final mosaicked image.
Figure 8. Mosaic of the underwater pipe dataset using every 20th frame: (a) original images and (b) final mosaicked image.
Figure 9. Underwater pipe mosaicked images: (a) based on SIFT and random sample consensus (RANSAC), with mismatched regions in red, and (b) the proposed model, with differences in green.
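The SIFT-and-RANSAC baseline in Figure 9 relies on RANSAC's consensus step to reject mismatched feature pairs before estimating the inter-frame transform. As background, that step can be sketched in a few lines of plain Python; the translation-only motion model and the synthetic correspondences below are illustrative assumptions, not the actual pipeline compared in the paper (which fits a full homography).

```python
import random

def ransac_translation(src, dst, iters=200, tol=1.0):
    """Estimate a 2D translation mapping src -> dst despite outliers.

    src, dst: equal-length lists of (x, y) correspondences.
    Returns ((dx, dy), inlier_count) for the largest consensus set.
    """
    best_model, best_inliers = None, -1
    rng = random.Random(0)  # fixed seed for repeatability
    for _ in range(iters):
        i = rng.randrange(len(src))  # minimal sample: one pair
        dx = dst[i][0] - src[i][0]
        dy = dst[i][1] - src[i][1]
        # Count correspondences consistent with this hypothesis.
        inliers = sum(
            1 for (sx, sy), (tx, ty) in zip(src, dst)
            if abs(sx + dx - tx) <= tol and abs(sy + dy - ty) <= tol
        )
        if inliers > best_inliers:
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# Synthetic data: true shift (5, -3) plus two gross mismatches.
src = [(0, 0), (1, 2), (4, 1), (3, 3), (2, 5)]
dst = [(5, -3), (6, -1), (9, -2), (40, 40), (-7, 12)]
model, support = ransac_translation(src, dst)
print(model, support)  # (5, -3) 3
```

The two fabricated mismatches never gather more than one supporting pair, so the correct shift wins the vote; the mismatched regions highlighted in red in Figure 9a correspond to cases where such outliers survive into the final transform.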
Table 1. Comparison of Root-Mean-Square Error (RMSE) of test image pairs (Figure 5).

| Image Pair | RMSE, SURF | RMSE, SIFT and RANSAC | RMSE, Proposed | Feature Points, SURF | Feature Points, SIFT and RANSAC | Feature Points, Proposed | CMR (%), SURF | CMR (%), SIFT and RANSAC | CMR (%), Proposed |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.1701 | 0.1532 | 0.1483 | 2483 | 2281 | 1823 | 89 | 91 | 94 |
| 2 | 0.1616 | 0.1239 | 0.1146 | 2849 | 2342 | 1792 | 82 | 93 | 96 |
| 3 | 0.1304 | 0.1255 | 0.1246 | 2294 | 2329 | 2158 | 75 | 87 | 91 |
| 4 | 0.1595 | 0.1177 | 0.1144 | 2410 | 2789 | 2332 | 94 | 92 | 90 |
| 5 | 0.1681 | 0.1200 | 0.1075 | 2983 | 2692 | 1982 | 87 | 92 | 93 |
Table 2. Comparison of mosaicking error.

| Data | SURF | SIFT and RANSAC | Proposed Method |
|---|---|---|---|
| Data 1 | 0.9563 | 0.8958 | 0.8061 |
| Data 2 | 1.6721 | 1.2785 | 1.0930 |

Share and Cite

MDPI and ACS Style

Abaspur Kazerouni, I.; Dooly, G.; Toal, D. Underwater Image Enhancement and Mosaicking System Based on A-KAZE Feature Matching. J. Mar. Sci. Eng. 2020, 8, 449. https://doi.org/10.3390/jmse8060449
