Article

A Fast Image Deformity Correction Algorithm for Underwater Turbulent Image Distortion

Min Zhang, Yuzhang Chen, Yongcai Pan and Zhangfan Zeng

School of Computer Science and Information Engineering, Hubei University, Wuhan 430062, China

* Author to whom correspondence should be addressed.
Sensors 2019, 19(18), 3818; https://doi.org/10.3390/s19183818
Submission received: 31 July 2019 / Revised: 28 August 2019 / Accepted: 2 September 2019 / Published: 4 September 2019
(This article belongs to the Special Issue Photonics-Based Sensors for Environment and Pollution Monitoring)

Abstract

An algorithm that corrects distortion by estimating pixel shifts is proposed for the degradation caused by underwater turbulence. The distorted image is restored and reconstructed through reference frame selection and two-dimensional pixel registration. A support vector machine-based kernel correlation filtering algorithm is proposed and applied to improve the speed and efficiency of the correction. To validate the algorithm, laboratory experiments on a controlled simulation system of turbulent water and field experiments in rivers and the ocean were carried out, and the results were compared with traditional, theoretical model-based, and particle image velocimetry-based restoration and reconstruction algorithms. By subjective visual evaluation, image distortion is effectively suppressed; by objective statistical analysis, the measured values are better than those of the traditional and previously studied restoration and reconstruction algorithms. The proposed method is also much faster than the other algorithms. It can be concluded that the proposed algorithm effectively removes distortion from turbulence-degraded underwater images and provides a potential technique for accurate, real-time underwater target detection.

1. Introduction

With the development of underwater imaging technology, underwater target recognition has been widely used in topographic survey and geomorphological observation [1]. In naturally static water, the scattering and absorption by suspended particles are the main factors causing the degradation of an underwater image [2], limiting the underwater visible range to only a few tens of meters. Earlier studies focused on the degradation model, solving beam transmission and scattering problems to improve image quality and range [3,4]. However, in real water environments such as rivers and oceans, the underwater visible distance decreases severely due to turbulent effects; the nonuniform variation of the light field distribution results in image distortion [5,6], which makes turbulence the most important degradation factor in natural-water imaging. Therefore, it is necessary to study underwater turbulent degradation in depth.
Some scholars have studied the degradation caused by underwater turbulence and the corresponding image recovery processing. Hou et al. [7,8,9,10,11] used an underwater imaging degradation model to analyze the effects of suspended particles, turbulence, and path scattering on underwater optical imaging. Nootz et al. [12,13,14] established laboratory and field underwater turbulence experimental systems, conducted on-site measurements, and quantitatively analyzed the influence of optical turbulence on the resolution of underwater imaging systems. Matt et al. [15,16] established a changeable and reproducible turbulent-environment experimental platform; a Doppler velocimeter and a particle image velocimetry (PIV) system were used to analyze the flow field, and a computational fluid dynamics model was used to compensate the measurement results. Farwell et al. [17,18] studied the intensity and coherence distribution of beams propagating through underwater turbulence based on the ocean turbulence power spectrum model, and performed a large number of numerical calculations.
There are three main lines of research: theoretical calculation from the turbulent structure function and scattering characteristics; experimental systems that simulate turbulence for laboratory and field measurement and analysis; and simulation experiments using the PIV method. Chen et al. have studied all three [19,20], and the results show that the turbulent flow field causes the modulation transfer function (MTF) to decline across the whole spatial frequency range, while path radiation and the fluid medium decrease the modulation contrast at high spatial frequencies. In fact, turbulence affects imaging at both high and low frequencies because of the nonuniformity of the light field, which causes image distortion. Therefore, it is necessary to study image processing methods aimed specifically at image distortion.
Hu et al. [21] proposed a method based on motion-field kernel regression. Holohan et al. [22] proposed the use of adaptive optics (AO) technology for image processing. Wen et al. [23] proposed an underwater image reconstruction method based on motion compensation for high-quality image block selection and denoising. Halder et al. [24] proposed a two-stage image reconstruction method: in the first stage, a blind image quality (BIQ) metric and the K-means clustering algorithm are used to select the reference frame and a clear frame sequence, respectively; in the second stage, pixel registration and two-dimensional interpolation are used to reconstruct the distorted image. Although this method can effectively alleviate the impact of turbulence, its computational complexity is high.
In terms of image distortion elimination, Ahn et al. [25] introduced a convolutional neural network for image distortion classification. Mao [26] proposed a 2D interpolation-based distortion correction technique for bistatic synthetic aperture radar (SAR) polar format image formation. Sun et al. [27] proposed an improved cubic chirplet decomposition method based on linked scatterers to solve the distortion problem for shipborne bistatic ISAR. There are other studies on the elimination of image distortion in different fields [28,29]. From the above, it can be seen that image distortion is generally eliminated by inverse operations based on the cause of the distortion; popular methods include the use of neural networks to identify the type of distortion.
Therefore, in this paper, a self-defined metric is used to select the reference frame and an input sequence of short-exposure images with high clarity. Pixel registration and two-dimensional interpolation algorithms are then used to suppress the distortion, and a kernel correlation filtering algorithm is used to improve the speed and efficiency of the algorithm, reducing the amount of calculation while improving the de-distortion effect.

2. Theory and Methods

2.1. Two-Dimensional Pixel Registration Algorithm

The reference frame and the input frame sequence are selected according to the sharpness values of the captured image frames. The sharpness of an image can be calculated by [24]:
$$B = \left( \frac{1}{I} \sum_{i=1}^{I} \left( \sigma_{\eta}\, \bar{P}(\eta, \phi_i) \right)^{2} \right)^{1/2}, \qquad (1)$$
where $\sigma_{\eta}$ is the mean value of $\bar{P}(\eta, \phi_i)$, $I$ represents the number of directions selected, and $\bar{P}(\eta, \phi_i)$ denotes the expected entropy of the image:
$$\bar{P}(\eta, \phi_i) = \frac{\sum_{r} P(r, \phi_i)}{S}, \qquad (2)$$
$$P(r) = \log_{2} \left( \sum_{m=1}^{R} D_r^{3}(m) \right)^{2}, \qquad (3)$$
where $\eta \in [1, 2, \dots, S]$ represents the size of the image, $\phi_i \in [\phi_1, \phi_2, \dots, \phi_I]$ is the measurement direction, $r$ and $m$ represent the discrete variables of time and frequency, respectively, $R$ is the number of pixels, and $D^{*}(m)$ represents the complex conjugate of $D(m)$.
We define D as the wave structure function of turbulence [30]:
$$D(\rho, z) = 3.603 \times 10^{-7}\, k^{2} z\, \varepsilon^{-1/3} \left( \chi_T / \omega^{2} \right) \rho^{5/3} \left( 0.419\omega^{2} - 0.838\omega + 0.419 \right), \qquad (4)$$
where $k = 2\pi/\lambda$ is the wavenumber, $\lambda$ is the wavelength (530 nm in the calculations of this paper), $\rho$ is the distance between two points on the cross-section perpendicular to the transmission direction, $z$ is the transmission distance, $\varepsilon$ is the dissipation rate of turbulent kinetic energy, $\chi_T$ is the dissipation rate of mean-squared temperature, and $\omega$ is the relative strength of temperature and salinity fluctuations.
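As a quick illustration, the sketch below evaluates Equation (4) numerically. The parameter values (the dissipation rates and the temperature-salinity ratio) are illustrative assumptions, not values measured in this paper.

```python
import numpy as np

# Sketch: evaluate the wave structure function of Eq. (4).
# eps, chi_T and omega below are assumed, illustrative ocean values.
def wave_structure_function(rho, z, eps=1e-5, chi_T=1e-7, omega=-2.5,
                            lam=530e-9):
    k = 2 * np.pi / lam                              # optical wavenumber
    poly = 0.419 * omega**2 - 0.838 * omega + 0.419
    return (3.603e-7 * k**2 * z * eps**(-1 / 3)
            * (chi_T / omega**2) * rho**(5 / 3) * poly)

# Structure function over a 1 m path at 1 mm transverse separation
print(wave_structure_function(rho=1e-3, z=1.0))
```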
The reference frame can be selected as the input frame with the highest sharpness value, and frames with higher sharpness are kept as the input frame sequence for subsequent image processing.
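The selection step can be sketched as follows. Since Equations (1)-(3) require the directional entropy terms, the snippet below substitutes a simple gradient-energy score as a stand-in sharpness measure; treating it as interchangeable with $B$ is an assumption made only for illustration.

```python
import numpy as np

# Sketch: choose the sharpest frame as reference and keep the G
# next-sharpest frames as the input sequence. Gradient energy is used
# here as a stand-in for the directional-entropy metric B of Eq. (1).
def sharpness(frame):
    gy, gx = np.gradient(frame.astype(np.float64))
    return np.mean(gx**2 + gy**2)

def select_frames(frames, G=10):
    scores = np.array([sharpness(f) for f in frames])
    order = np.argsort(scores)[::-1]       # indices, descending sharpness
    reference = frames[order[0]]
    kept = [frames[i] for i in order[1:G + 1]]
    return reference, kept
```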
The pixel shifting of the input frame sequence relative to the reference frame is calculated using the backward mapping method:
$$R_a(a,b) = \frac{1}{G} \sum_{g=1}^{G} Q_a(a,b,g), \qquad R_b(a,b) = \frac{1}{G} \sum_{g=1}^{G} Q_b(a,b,g), \qquad (5)$$
where $R_a$ and $R_b$ represent the mean values of the pixel shift in the horizontal and vertical directions, respectively, $g$ denotes a frame index, and $G$ denotes the total number of reserved frames.
The corrected shift of each pixel in all reserved input frames is derived from:
$$Q_a(a,b,g) = Q_a\!\left(a + R_a^{-1}(a,b),\, b + R_b^{-1}(a,b),\, g\right) + R_a^{-1}(a,b), \qquad Q_b(a,b,g) = Q_b\!\left(a + R_a^{-1}(a,b),\, b + R_b^{-1}(a,b),\, g\right) + R_b^{-1}(a,b), \qquad (6)$$
where $Q_a$ and $Q_b$ are the corrected displacements in the horizontal and vertical directions, and $R_a^{-1}$ and $R_b^{-1}$ represent the inverses of $R_a$ and $R_b$, respectively.
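A minimal sketch of Equations (5) and (6) is given below. The per-frame shift fields are assumed to come from some pixel registration step (e.g., an optical-flow estimator, which is not specified here), and the inverse mapping $R^{-1}$ is approximated by the negated mean shift, which is only valid for small displacements.

```python
import numpy as np

# Qa, Qb: per-frame pixel shifts, shape (G, H, W), from registration
def mean_shift(Qa, Qb):
    return Qa.mean(axis=0), Qb.mean(axis=0)          # Eq. (5)

def correct_shifts(Qa, Qb, Ra, Rb):
    H, W = Ra.shape
    bb, aa = np.meshgrid(np.arange(W), np.arange(H))
    # Approximate the inverse mapping R^-1 by -R (small-shift assumption),
    # then resample each shift field and add the correction, cf. Eq. (6).
    ai = np.clip(np.round(aa - Ra).astype(int), 0, H - 1)
    bi = np.clip(np.round(bb - Rb).astype(int), 0, W - 1)
    return Qa[:, ai, bi] - Ra, Qb[:, ai, bi] - Rb
```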
Then the corrected frames can be restored and reconstructed by:
$$f_1(a,b) = f_g(a + Q_a,\, b + Q_b), \qquad f_g^{\,n+1} = \iint f_g * h_1(a,b)\, e^{j 2\pi a b}\, \mathrm{d}a\, \mathrm{d}b, \qquad (7)$$
$$f_2(a,b) = f_g(a \cos\theta - b \sin\theta + Q_a,\; b \cos\theta + a \sin\theta + Q_b), \qquad f_g^{\,n+1} = P\!\left[ f_g^{\,n} + \sum_{i=1}^{P} \lambda_P \left( g_i - h f_i \right) \right], \qquad (8)$$
where $f_1(a,b)$ represents the restored image, $f_2(a,b)$ represents the reconstructed image, $f_g$ represents the sequence of reserved frames, $\theta$ represents the angle of rotation, and $h$ denotes the Gaussian estimation.
The recovered image can be used as the reference image for the next iteration; through multiple iterations, the de-distortion effect improves.
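The registration core of this iteration can be sketched as follows; the Fourier and Gaussian refinement steps of Equations (7) and (8) are omitted, and `estimate_shifts` stands for an assumed per-pixel registration routine.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Sketch: warp every kept frame toward the reference by its shift field,
# average the warped frames, and reuse the average as the next reference.
def dewarp_iterate(frames, estimate_shifts, n_iter=3):
    ref = frames[0]
    H, W = ref.shape
    aa, bb = np.mgrid[0:H, 0:W].astype(np.float64)
    for _ in range(n_iter):
        warped = []
        for f in frames:
            Qa, Qb = estimate_shifts(f, ref)     # per-pixel shift vs. ref
            coords = np.stack([aa + Qa, bb + Qb])
            warped.append(map_coordinates(f, coords, order=1,
                                          mode='nearest'))
        ref = np.mean(warped, axis=0)            # restored estimate f1
    return ref
```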

2.2. Support Vector Machine-Based Kernel Correlation Filtering Algorithm

When finding the optimal solution of Equations (7) and (8), a regularization constraint can be used to limit the iteration process; its main idea is to overcome the mathematical ill-conditioning of the minimization problem. The constraint algorithm in this paper combines the idea of the kernelized correlation filter (KCF) algorithm.
The regularization is expressed as follows [31]:
$$f(x) = \min_{n} \sum_{i=1}^{M} \left( x_i - \left( n_i^{T} x + z \right) \right)^{2} + \xi \left\| n_w \right\|^{2}, \qquad (9)$$
where $x$ and $x_i$ represent the original image and the observed image, $\min_{n}$ denotes minimization over $n$, $\xi$ represents the regularization factor, and $\|n_w\|^{2}$ represents the penalty term.
As a result, in the process of solving Equations (7) and (8), the goal of Equation (9) is to find the best approximate solution of $f(x) = n^{T} x + z$, which can be characterized by the interval (margin) of the classifier.
The distance between a point in the sample space and the classification hyperplane is calculated as follows:
$$r = \frac{\left| n^{T} x + z \right|}{\| n \|}, \qquad (10)$$
The distance between the support vectors and the hyperplane is called the “interval” (margin) of the support vector machine (SVM), which can be expressed as follows:
$$r = \frac{2}{\| n \|}, \qquad (11)$$
The principle of a support vector machine is to maximize the interval, i.e., to minimize $\frac{1}{2}\|n\|^{2}$. The constrained optimization problem of linear classification can be expressed as follows:
$$\min_{n,z} \frac{1}{2} \| n \|^{2} \quad \text{s.t.} \quad y_i \left( n^{T} x_i + z \right) \ge 1, \; i = 1, \dots, m, \qquad (12)$$
A Lagrangian function can be constructed by introducing Lagrange multipliers for the constraints:
$$L(n, z, \alpha) = \frac{1}{2} \| n \|^{2} - \sum_{i=1}^{N} \alpha_i \left[ y_i \left( n \cdot x_i + z \right) - 1 \right], \qquad \alpha_i \ge 0, \; i = 1, 2, \dots, N, \qquad (13)$$
Then the extremum can be obtained by setting the partial derivatives to zero:
$$\frac{\partial L}{\partial n} = 0 \;\Rightarrow\; n = \sum_{i=1}^{n} \alpha_i y_i x_i, \qquad \frac{\partial L}{\partial z} = 0 \;\Rightarrow\; \sum_{i=1}^{n} \alpha_i y_i = 0, \qquad (14)$$
$\alpha$ can be obtained by substituting the above two conditions back into the Lagrangian:
$$\max_{\alpha} L(n, z, \alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{n} \alpha_i \alpha_j y_i y_j x_i^{T} x_j \quad \text{s.t.} \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \; i = 1, 2, \dots, n, \qquad (15)$$
In turn, n can be solved as follows:
$$n = \sum_{i=1}^{n} \alpha_i^{*} y_i x_i, \qquad (16)$$
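Equation (16) can be checked numerically with an off-the-shelf SVM: for a linear kernel, scikit-learn's `dual_coef_` stores $\alpha_i y_i$ for the support vectors, so the weighted sum of support vectors reproduces the learned weight vector. The toy data below are assumptions for illustration only.

```python
import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = svm.SVC(kernel='linear', C=1.0).fit(X, y)
n = clf.dual_coef_ @ clf.support_vectors_    # Eq. (16): sum alpha_i y_i x_i
print(np.allclose(n, clf.coef_))             # True for the linear kernel
```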
In the design process of the algorithm, in order to train the least squares classifier with blurred sample images and simplify the computation, a circulant matrix can be constructed. We set:
$$\sum_{i=1}^{n} \alpha_i = \left( X X^{T} + \beta I \right)^{-1}, \qquad (17)$$
where β is the parameter that controls overfitting.
Assuming that $H(x)$ is an $i \times i$ matrix obtained by cyclically shifting a base vector $x$, $X$ can be obtained as:
$$X = H(x) = \begin{pmatrix} x_1 & x_2 & \cdots & x_i \\ x_i & x_1 & \cdots & x_{i-1} \\ \vdots & \vdots & \ddots & \vdots \\ x_2 & x_3 & \cdots & x_1 \end{pmatrix}, \qquad (18)$$
The above matrix can be converted using $I = E^{K} E$, where $E$ is the constructed kernel basis matrix:
$$X^{K} X = E \, \mathrm{diag}\!\left( \hat{x}^{*} \right) \mathrm{diag}\!\left( \hat{x} \right) E^{K}, \qquad (19)$$
Since the middle matrix is diagonal, Equation (19) can be converted to:
$$X^{K} X = E \, \mathrm{diag}\!\left( \hat{x}^{*} \odot \hat{x} \right) E^{K}, \qquad (20)$$
Then the discrete Fourier form of $n$ can be obtained by substituting Equation (20) into Equation (16), which greatly reduces the amount of computation in the training of the least squares classifier.
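The computational saving can be made concrete with a one-dimensional sketch: for a circulant sample matrix, the ridge-regression solve of Equation (17) reduces to element-wise division in the Fourier domain, dropping the cost from roughly O(i^3) to O(i log i). The signal and desired response below are assumed toy inputs; images would use a two-dimensional FFT.

```python
import numpy as np

# Circulant ridge regression in the Fourier domain, cf. Eqs. (17)-(20):
# w_hat = conj(x_hat) * y_hat / (conj(x_hat) * x_hat + beta)
def circulant_ridge(x, y, beta=1e-2):
    xf, yf = np.fft.fft(x), np.fft.fft(y)
    wf = np.conj(xf) * yf / (np.conj(xf) * xf + beta)
    return np.real(np.fft.ifft(wf))           # filter/classifier weights

x = np.random.default_rng(1).normal(size=64)  # training sample
y = np.exp(-np.arange(64) / 4.0)              # desired peaked response
w = circulant_ridge(x, y)
```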
In summary, the kernel correlation matrix constructed in Equation (18) is substituted into Equation (16) to speed up the constrained optimization of the classification in the support vector machine when calculating the best approximate solution to Equation (9). The speed of the algorithm is therefore determined by the constructed kernel matrix. Kernel functions for constructing the matrix include the radial basis function kernel, the dot-product kernel, the weighted kernel, and so on; in this paper, the radial basis function kernel is selected. The accuracy of the algorithm, however, depends on the solution of Equations (7) and (8), where $B$ in Equation (1) determines the input of the algorithm, and the kernels of the restoration and reconstruction algorithms also affect the accuracy. Thus, when the turbulence velocity or the visibility changes, the initial estimation function in Equations (7) and (8) is changed to adapt the algorithm.

3. Experimental Results and Analysis

In order to further verify the effectiveness of the proposed method, the experimental data for this paper were obtained through the laboratory simulation of a turbulent environment and field tests in a real turbulent ocean environment.
Due to the relationship between image distortion and the turbulent velocity field, an image restoration method based on PIV velocity-field measurement is used as a verification method in the laboratory experiment system. Considering the flow-following ability and light-scattering characteristics of tracer particles, common bubbles (which also have the advantage of being nonpolluting) are selected as tracer particles to measure the flow velocity field distribution of the underwater turbulence. The probability density function of the bubble motion displacement can be described by the probability density function of time:
$$f_s(s) = f_t(t) \times \left( \frac{1}{|s'(t_1)|} + \frac{1}{|s'(t_2)|} + \cdots + \frac{1}{|s'(t_n)|} \right) = \frac{1}{t} \times \frac{1}{|v|}, \qquad (21)$$
where $f_t(t)$ is the probability density function of time, a random variable subject to a uniform distribution; $s'(t)$ is the derivative of the relative displacement $s(t)$; $|v|$ is the speed of bubble motion, which can be estimated from the bubble dynamics equation; and $t$ is the exposure time of the image sensor. Then the motion modulation transfer function of the bubble can be calculated by the one-dimensional Fourier transform of the probability density function of the relative displacement:
$$\mathrm{MTF}_{\mathrm{motion}}(f) = \frac{1}{d} \int_{0}^{d} \exp(-i 2\pi f s)\, \mathrm{d}s = \mathrm{sinc}(\pi f d), \qquad (22)$$
Thus, the MTF can be used as a priori knowledge for the image restoration and reconstruction algorithm.
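For illustration, Equation (22) can be evaluated directly; note that NumPy's `sinc` is normalized, `np.sinc(u) = sin(pi u)/(pi u)`, so `np.sinc(f * d)` equals sinc(pi f d) in the unnormalized convention used above. The bubble speed and exposure time below are assumed values.

```python
import numpy as np

v = 0.05            # bubble speed estimate, m/s (assumed)
t = 1 / 60.0        # sensor exposure time, s (assumed)
d = abs(v) * t      # blur path length during the exposure, m

f = np.linspace(0, 5000, 512)              # spatial frequency, cycles/m
mtf_motion = np.abs(np.sinc(f * d))        # Eq. (22), restoration prior
```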
In order to objectively analyze the processing results, this paper adopts no-reference objective evaluation criteria for the quality assessment of image restoration and reconstruction, including the information capacity (IC), blur metric (BM), and gray mean gradient (GMG). IC characterizes the richness of useful image information, BM describes the degree of image distortion, and GMG reflects the image edge information. Larger IC and GMG values together with a smaller BM value denote better restoration and reconstruction. These evaluation criteria have been described in detail in previous articles published by the research team [19,20], and so will not be repeated here.
The BM is defined as follows:
$$BM = \max\left( sD_{\mathrm{vertical}},\, sD_{\mathrm{horizontal}} \right), \quad sD_{\mathrm{vertical}} = \sum_{i,j=1}^{m-1,\,n-1} D_{\mathrm{vertical}}(i,j), \quad sD_{\mathrm{horizontal}} = \sum_{i,j=1}^{m-1,\,n-1} D_{\mathrm{horizontal}}(i,j),$$
$$i \in (0, m-1),\; j \in (0, n-1), \qquad \begin{cases} D_{\mathrm{vertical}} = \left| F(i,j) - F(i-1,j) \right| \\ D_{\mathrm{horizontal}} = \left| F(i,j) - F(i,j-1) \right| \end{cases} \qquad (23)$$
where $D_{\mathrm{vertical}}$ and $D_{\mathrm{horizontal}}$ represent the difference images in the vertical and horizontal directions, $F(i,j)$ is the pixel at coordinate $(i,j)$ on the image plane, and $(m,n)$ is the size of the image. The blur metric is then normalized to the range 0 to 1.
The IC is defined as follows:
$$IC = \log_{2} \left\{ 1 + \frac{\log\left[ p(i,j,d,\theta) \right]}{\log\left[ \max\left( p(i,j,d,\theta) \right) \right]} \right\}, \qquad (24)$$
where $p(i,j,d,\theta)$ represents the correlation between pixels, $i$ and $j$ represent the coordinates of the pixel, $d$ is the imaging distance, and $\theta$ represents the direction of association between the pixels.
The GMG is defined as follows:
$$GMG = \frac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{ \frac{ \left[ f(x, y+1) - f(x, y) \right]^{2} + \left[ f(x+1, y) - f(x, y) \right]^{2} }{2} }, \qquad (25)$$
where $f(x,y)$ denotes the gray value at coordinate $(x,y)$ on the image plane, and $(M,N)$ is the size of the image.
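The BM and GMG criteria translate directly into code, as sketched below; the final normalization of BM to [0, 1] is left to the caller since the normalization constant is not given, and IC (Equation (24)) is omitted because it requires a pixel co-occurrence model $p(i,j,d,\theta)$.

```python
import numpy as np

def blur_metric(F):
    # Eq. (23) before normalization: max of summed gray-level differences
    F = F.astype(np.float64)
    d_vert = np.abs(F[1:, :] - F[:-1, :]).sum()   # |F(i,j) - F(i-1,j)|
    d_horz = np.abs(F[:, 1:] - F[:, :-1]).sum()   # |F(i,j) - F(i,j-1)|
    return max(d_vert, d_horz)

def gray_mean_gradient(F):
    # Eq. (25): RMS of horizontal/vertical forward differences
    F = F.astype(np.float64)
    dx = F[:-1, 1:] - F[:-1, :-1]                 # f(x, y+1) - f(x, y)
    dy = F[1:, :-1] - F[:-1, :-1]                 # f(x+1, y) - f(x, y)
    return np.mean(np.sqrt((dx**2 + dy**2) / 2))
```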

3.1. Laboratory Experiments

An underwater turbulence experiment system is established in this paper. A 532 nm green semiconductor laser is used as the light source, and images are captured by a high-speed CMOS image sensor. The spot size of the laser is 10–20 mm, and its power is 200 mW. The experimental water tank, 150 cm × 34 cm × 33 cm (length × height × width), is made of high-transmittance acrylic plate, so more than 90% of the laser light reaches the target plate. Both the inlet and outlet are 40 mm round holes placed at different heights so that turbulence can be formed with a water pump. The experimental system uses a circulating pump with a maximum head of 5 m and a maximum flow of 7.8 m3/h to provide hydrodynamic power. The laser and sensor are 33 cm away from the target plate. In order to reduce experimental error, the experiment was carried out in a dark environment. The three-dimensional structure of the experimental system is shown in Figure 1.
The Reynolds number (Re) is used to determine whether the fluid is in a turbulent state: if Re > 4000, the flow is turbulent. The flow rate of the water body is controlled by the flow meter and the pump valve; the pump drives the flow of water, and the valve controls the size of the flow. Turbulence occurs when the inlet flow reaches a certain speed, so turbulence of different strengths is obtained by controlling the water flow velocity at the inlet of the tank. The flow meter reads the velocity in real time, from which the turbulent Reynolds number and turbulence intensity are calculated to ensure that the sample images are obtained in a turbulent environment.
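A back-of-the-envelope check of the turbulence criterion for the 40 mm inlet is sketched below, assuming water properties at roughly 20 °C; the chosen inlet speeds are illustrative.

```python
# Pipe-flow Reynolds number Re = rho * v * D / mu for the 40 mm inlet
rho = 998.0      # water density, kg/m^3 (approx. 20 degC, assumed)
mu = 1.0e-3      # dynamic viscosity, Pa*s (assumed)
D = 0.040        # inlet diameter, m (from the tank description)

def reynolds(v):
    return rho * v * D / mu

for v in (0.1, 0.5, 5.0):                      # inlet speeds, m/s
    print(v, reynolds(v), reynolds(v) > 4000)  # turbulent when Re > 4000
```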
The training platform for this algorithm is as follows: the operating system is Ubuntu 14.04 (Canonical Ltd., London, UK), the CPU is a Core i7-9700K (up to 4.9 GHz) (Intel, Santa Clara, CA, USA), and the graphics card is an ASUS DUAL RTX2070-O8G-EVO (ASUS, Taipei, Taiwan). The programming is performed in MATLAB R2017b (MathWorks, Natick, MA, USA). If the computer configuration is lowered or improved, the run time will increase or decrease accordingly. If the image resolution increases, the number of training-window traversals in the SVM increases, and the run time increases accordingly. The sample images selected in this paper are cropped to a resolution of 800 × 600, and the scale factor for super-resolution reconstruction is set to 3.

3.1.1. Microturbulent Environment

When the water velocity at the inlet is 5 m/s, the target object is photographed 60 times by a charge-coupled device (CCD) sensor within 5 s. The captured image sequences are processed and compared by the proposed algorithm along with traditional blind restoration (BD) [32], projection onto convex sets reconstruction (POCS) [33], the semi-blind restoration and reconstruction method based on the turbulent degradation model (M−SB) [19], the total variation image super-resolution reconstruction technique based on the L1 norm (M−TV) [34], and a restoration and reconstruction method based on the PIV method (PIV−RR) [35]. The sample image taken is shown in Figure 2, with restored and reconstructed results shown in Figure 3. The evaluation values for the images are listed and compared in Table 1. Table 2 shows the processing time of the algorithms.
It can be seen that the traditional BD method relieves a certain degree of blurring but introduces a large ringing effect, which is improved by the M−SB method; however, the distortion of the image is not improved. The POCS and M−TV methods can improve the image resolution and sharpness, but the distortion of the image is likewise not improved. Evidently, image restoration and reconstruction algorithms have a significant effect in terms of improving resolution and deblurring, but they are not suited to image distortion, which demonstrates the necessity of the algorithm proposed in this paper. With the method in this paper, the distortion is significantly reduced; the PIV−RR method also has a certain effect on image distortion.
As can be seen from Table 1, the BM values of the M−SB and PIV−RR methods are smaller than those obtained by the other methods. Although the BM value of the method proposed in this paper is larger than those of these two methods, it is not much larger. The IC value of the M−TV method is the largest, followed by that of the proposed method, while the PIV−RR method has a small value. The GMG values of both PIV−RR and the proposed method are larger than those of the other methods. It can be concluded that the proposed algorithm is not as good at deblurring as targeted image restoration, but it has advantages over the PIV−RR method in reconstruction.
As can be seen from Table 2, the method proposed in this paper has obvious advantages in terms of the processing time.

3.1.2. Strong Turbulence Environment

When the water velocity of the inlet reaches 25 m/s, as can be seen in Figure 4, the degree of distortion of the image is greatly increased. The results after image restoration and reconstruction are shown in Figure 5. The evaluation values for the images are listed and compared in Table 3, while the processing time is compared in Table 4.
It can be seen from Figure 5 that the traditional BD method has no obvious ringing effect, but the image is more blurred, which is the same as with the M−TV method. It can also be seen that the distortion of the image is not improved by the POCS and M−SB methods. In the case of strong turbulence, the method proposed in this paper obviously performs better than the PIV−RR method.
As can be seen from Table 3, the BM values of the M−SB and PIV−RR methods are smaller, while the proposed method has a larger value. The IC value of the proposed method is the largest, while the PIV−RR method has a small value. The GMG values of both PIV−RR and the proposed method are larger than for the other methods. It can be seen from Table 4 that the proposed method also has an obvious advantage in terms of processing time.
As a result, it can be concluded that, from a subjective point of view, the method proposed in this paper performs better than the other methods in terms of image distortion. Objectively speaking, the proposed method performs relatively poorly at deblurring, but is stronger in image resolution and sharpness improvement than the reconstruction methods. Compared to the PIV−RR method in particular, the two methods are comparable in the case of microturbulence; under strong turbulence, however, the method proposed in this paper is subjectively and obviously better. From the perspective of processing speed, the proposed method has an obvious advantage.

3.2. Field Tests

Tests in a turbulent water environment were carried out in the Yangtze River and the South China Sea. Sample images were captured by a packaged underwater imaging system: the laser, operating at 465–470 nm, and the CMOS image sensor are enclosed in a waterproof tank, and images captured by the sensor are transferred to an image processing module. The attenuation coefficient of the water is assumed to be a constant that does not change with wavelength over the observation range and can be measured by:
$$K = -\frac{1}{z} \ln \frac{E(z)}{E(0)}, \qquad (26)$$
where $z$ is the depth, $E(z)$ is the irradiance at depth $z$, and $E(0)$ is the irradiance at the surface. Figure 6 shows the schematic diagram of the experimental system, with physical properties listed in Table 5.
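A minimal sketch of Equation (26) follows; the irradiance readings are placeholders chosen only to be consistent with the attenuation of about 2.9 m−1 listed in Table 5, not field data.

```python
import numpy as np

E0 = 100.0    # surface irradiance, arbitrary units (assumed)
Ez = 5.5      # irradiance measured at depth z (assumed)
z = 1.0       # depth, m

K = -np.log(Ez / E0) / z                 # Eq. (26); approx. 2.9 m^-1
print(K)
```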
The sample images and processed results are shown in Figure 7 and Figure 8. The evaluation values for the images are listed and compared in Table 6 and Table 7, and the processing times of the algorithms are compared in Table 8 and Table 9. Following the laboratory results, the traditional BD and POCS methods are no longer used for comparison in the field experiments.
The experimental results in the river are similar to those obtained under strong turbulence, while the results in the ocean are closer to the microturbulence case, which corroborates the laboratory experiments and verifies the robustness of the proposed algorithm.
In order to further verify the validity of the algorithm, two sets of the TURBID dataset [36] with a turbidity of I10 are used for image enhancement and comparison with other recent underwater image enhancement methods. In recent years, research on underwater image enhancement has mainly focused on mathematical methods such as estimation [37,38,39,40], fusion [41], and color correction [42,43,44], as well as combinations with deep neural networks [45,46,47]. In this paper, Accurate Image Super-Resolution Using Very Deep Convolutional Networks (VDSR) [48] is chosen for comparison as a deep neural network method. Zhang et al. [49] proposed a medium transmission estimation method for underwater images based on a joint prior distribution, which is also added for comparison. A future research direction is to introduce neural networks and adopt newer datasets, such as the Underwater Image Enhancement Benchmark Dataset (UIEBD) [50,51].
The processing results are shown in Figure 9 and the evaluation results are given in Table 10.
It can be seen from the experimental results that the VDSR and PIV methods are inferior, while Zhang's method and the proposed method show good results, especially on the Chlorophyll dataset. According to the paper that proposed the datasets, the added turbidity mainly affects scattering; therefore, Zhang's method, which models light scattering, and the proposed method, which considers scattering displacement, achieve better recovery effects. The VDSR method, with its uncertain training process, and the PIV method, which depends on measured parameters, cannot achieve good results. It is noted that VDSR runs faster than the proposed method, but this is after training, and the training time is not included. The method proposed in this paper can be used for both de-distortion and de-blurring, so it is more broadly applicable.

4. Conclusions

Combining pixel registration and SVM−KCF algorithms, an underwater turbulence-degraded image deformity correction algorithm based on pixel displacement estimation is proposed in this paper. Experimental verification was carried out in a laboratory-simulated turbulent environment and through field tests in a river and the ocean. Compared with traditional image recovery algorithms, the proposed algorithm effectively suppresses distortion and obtains better objective evaluation indices in both micro- and strong-turbulence environments, and it has an obvious advantage in processing time. Therefore, it can be concluded that the proposed algorithm effectively suppresses the image distortion caused by underwater turbulence while significantly reducing processing time, which provides theoretical and technical support for real-time underwater imaging detection.

Author Contributions

Conceptualization, Y.C. and Z.Z.; methodology, Y.C. and M.Z.; software, Y.C. and M.Z.; validation, M.Z.; formal analysis, Y.C., M.Z., and Z.Z.; investigation, M.Z.; resources, Y.P.; data curation, M.Z.; writing—original draft preparation, M.Z.; writing—review and editing, Y.C., Z.Z., and Y.P.; visualization, M.Z.; supervision, Y.P.; project administration, Y.P.; funding acquisition, Y.P.

Funding

This work was supported by Project 61806076 of the National Natural Science Foundation of China, and by Projects 201710512051 and 201810512051 of the Innovation and Entrepreneurship Training Program for College Students in Hubei Province.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yan, W.; Na, L.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; Sun, M. An imaging-inspired no-reference underwater color image quality assessment metric. Comput. Electr. Eng. 2017. [Google Scholar] [CrossRef]
  2. Nnolim, U.A. Smoothing and enhancement algorithms for underwater images based on partial differential equations. J. Electron. Imaging 2017, 26, 023009. [Google Scholar] [CrossRef]
  3. Yu, Y.; Liu, F. System of remote-operated-vehicle-based underwater blurred image restoration. Opt. Eng. J. Soc. Photo Opt. Instrum. Eng. 2007, 46, 16002. [Google Scholar] [CrossRef]
  4. Hou, W.; Gray, D.J.; Weidemann, A.D.; Fournier, G.R.; Forand, J.L. Automated Underwater Image Restoration and Retrieval of Related Optical Properties; IGARSS: Barcelona, Spain, 2007. [Google Scholar]
  5. Bi, Y.; Xu, X.; Chua, S.Y.; Chow, E.M.T.; Wang, X. Underwater Turbulence Detection Using Gated Wavefront Sensing Technique. Sensors 2018, 18, 798. [Google Scholar] [CrossRef] [PubMed]
  6. Islam, Z.; Faruque, S.; Ahamed, M.M. Experimental Investigation of Underwater Turbulence Effect on BER for Orthogonal OOK Modulation; EIT: Milwaukee, WI, USA, 2013. [Google Scholar]
  7. Hou, W.; Lee, Z.; Weidemann, A.D. Why does the Secchi disk disappear? An imaging perspective. Opt. Express 2007, 15, 2791–2802. [Google Scholar]
  8. Hou, W.; Gray, D.J.; Weidemann, A.D.; Arnone, R.A. Comparison and validation of point spread models for imaging in natural waters. Opt. Express 2008, 16, 9958–9965. [Google Scholar] [Green Version]
  9. Hou, W. A simple underwater imaging model. Opt. Lett. 2009, 34, 2688–2690. [Google Scholar]
  10. Hou, W.; Jarosz, E.; Woods, S.; Goode, W.; Weidemann, A. Impacts of underwater turbulence on acoustical and optical signals and their linkage. Opt. Express 2013, 21, 4367–4375. [Google Scholar] [CrossRef]
  11. Hou, W.; Woods, S.; Jarosz, E.; Goode, W.; Weidemann, A. Optical turbulence on underwater image degradation in natural environments. Appl. Opt. 2012, 51, 2678–2686. [Google Scholar] [CrossRef]
  12. Nootz, G.; Hou, W.; Dalgleish, F.R.; Rhodes, W.T. Determination of flow orientation of an optically active turbulent field by means of a single beam. Opt. Lett. 2013, 38, 2185–2187. [Google Scholar]
  13. Nootz, G.; Jarosz, E.; Dalgleish, F.R.; Hou, W. Quantification of optical turbulence in the ocean and its effects on beam propagation. Appl. Opt. 2016, 55, 8813–8820. [Google Scholar]
  14. Nootz, G.; Matt, S.; Kanaev, A.; Judd, K.P.; Hou, W. Experimental and numerical study of underwater beam propagation in a Rayleigh-Bénard turbulence tank. Appl. Opt. 2017, 56, 6065–6072. [Google Scholar] [CrossRef] [PubMed]
  15. Matt, S.; Hou, W.; Woods, S.; Goode, W.; Jarosz, E.; Weidemann, A. A Novel Platform to Study the Effect of Small-scale Turbulent Density Fluctuations on Underwater Imaging in the Ocean. Methods Oceanogr. 2017, 11, 39–58. [Google Scholar] [CrossRef]
  16. Matt, S.; Hou, W.; Goode, W.; Hellman, S. Introducing SiTTE: A controlled laboratory setting to study the impact of turbulent fluctuations on light propagation in the underwater environment. Opt. Express 2017, 25, 5662–5683. [Google Scholar] [CrossRef] [PubMed]
  17. Farwell, N.; Korotkova, O. Intensity and coherence properties of light in oceanic turbulence. Opt. Commun. 2012, 285, 872–875. [Google Scholar] [CrossRef]
  18. Farwell, N.H.; Korotkova, O. Multiple Phase-Screen Simulation of Oceanic Beam Propagation; SPIE: Bellingham, WA, USA; University of Miami: Coral Gables, FL, USA, 2015. [Google Scholar]
  19. Chen, Y.; Yang, W.; Tan, H.; Yang, Y.; Hao, N.; Yang, K. Image enhancement for LD based imaging in turbid water. Opt. Int. J. Light Electron Opt. 2015, 127, 517–521. [Google Scholar] [CrossRef]
  20. Chen, Y.; Yang, K. MAP-regularized robust reconstruction for underwater imaging detection. Opt. Int. J. Light Electron Opt. 2013, 124, 4514–4518. [Google Scholar] [CrossRef]
  21. Hu, W.; Xie, Y.; Zhang, W.; Tan, Y. Removing water fluctuation via motion field-based kernel regression. J. Inf. Comput. Sci. 2014, 11, 5289–5296. [Google Scholar] [CrossRef]
  22. Holohan, M.L.; Dainty, J.C. Low-order adaptive optics: A possible use in underwater imaging? Opt. Laser Technol. 1997, 29, 51–55. [Google Scholar] [CrossRef]
  23. Wen, Z.; Lambert, A.; Donald, F.; Li, H. Bispectral analysis and recovery of images distorted by a moving water surface. Appl. Opt. 2010, 49, 6376–6384. [Google Scholar] [CrossRef]
  24. Halder, K.K.; Paul, M.; Tahtali, M.; Anavatti, S.G.; Murshed, M. Correction of geometrically distorted underwater images using shift map analysis. J. Opt. Soc. Am. A 2017, 34, 666–673. [Google Scholar] [CrossRef] [PubMed]
  25. Ahn, N.; Kang, B.; Sohn, K.A. Image Distortion Detection using Convolutional Neural Network. arXiv 2018, arXiv:1805.10881v1. [Google Scholar]
  26. Mao, D. Bistatic SAR Polar Format Image Formation: Distortion Correction and Scene Size Limits. Master’s Thesis, Wright State University, Dayton, OH, USA, 2017. [Google Scholar]
  27. Sun, S.; Jiang, Y.; Yuan, Y.; Hu, B.; Yeo, T.S. Defocusing and distortion elimination for shipborne bistatic ISAR. Remote Sens. Lett. 2016, 7, 523–532. [Google Scholar] [CrossRef]
  28. Wang, X.; Liu, C.; Qi, Y.; Zhuang, Q. Image Distortion Detection of Head-Mounted Display Based on Optical Transform Function. Laser Optoelectron. Prog. 2018, 55, 081205. [Google Scholar] [CrossRef]
  29. Zhu, L.; Zhang, Y.; Wang, S.; Yuan, H.; Kwong, S.; Ip, H.H.S. Convolutional Neural Network Based Synthesized View Quality Enhancement for 3D Video Coding. IEEE Trans. Image Process. 2018, 27, 5365–5377. [Google Scholar] [CrossRef] [PubMed]
  30. Lu, L.; Ji, X.; Yahya, B. Wave structure function and spatial coherence radius of plane and spherical waves propagating through oceanic turbulence. Opt. Express 2014, 22, 27112. [Google Scholar] [CrossRef] [PubMed]
  31. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-Speed Tracking with Kernelized Correlation Filters. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 583–596. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. You, Y.; Kaveh, M. Blind image restoration by anisotropic regularization. IEEE Trans. Image Process. 1999, 8, 396–407. [Google Scholar] [PubMed]
  33. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. Siam Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef]
  34. Jin, X.J.; Deng, Z.L. Super resolution reconstruction based on L1-norm and orthogonal gradient operator. J. Appl. Opt. 2012, 33, 305–312. [Google Scholar]
  35. Chen, Y.; Guo, Y.; Pan, Y. Measurement and Analysis of Turbulence Degradation in Underwater Laser Imaging Using the Particle Image Velocimetry (PIV) Method. Lasers Eng. 2019, 44, 81–97. [Google Scholar]
  36. Duarte, A.; Codevilla, F.; Gaya, J.D.; Botelho, S.S.C. A dataset to evaluate underwater image restoration methods. In Proceedings of the OCEANS 2016 Shanghai, Shanghai, China, 10–13 April 2016; IEEE: Piscataway, NJ, USA, 2016. [Google Scholar]
  37. Fu, X.; Fan, Z.; Ling, M. Two-step approach for single underwater image enhancement. In Proceedings of the 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 6–9 November 2017; pp. 789–794. [Google Scholar]
  38. Drews, P.L., Jr.; Nascimento, E.R.; Botelho, S.S.C.; Campos, M.F.M. Underwater depth estimation and image restoration based on single images. IEEE Comput. Graph. Appl. 2016, 36, 24–35. [Google Scholar] [CrossRef] [PubMed]
  39. Li, C.-Y.; Guo, J.-C.; Cong, R.-M.; Pang, Y.-W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef] [PubMed]
  40. Peng, Y.; Cosman, P. Underwater image restoration based on image blurriness and light absorption. IEEE Trans. Image Process. 2017, 26, 1579–1594. [Google Scholar] [CrossRef] [PubMed]
  41. Ancuti, C.; Ancuti, C.O.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 81–88. [Google Scholar]
  42. Ghani, A.; Isa, N. Underwater image quality enhancement through integrated color model with Rayleigh distribution. Appl. Soft Comput. 2015, 27, 219–230. [Google Scholar] [CrossRef]
  43. Li, C.; Guo, J.; Guo, C.; Cong, R.; Gong, J. A hybrid method for underwater image correction. Pattern Recognit. Lett. 2017, 94, 62–67. [Google Scholar] [CrossRef]
  44. Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE Signal Process. Lett. 2018, 25, 323–327. [Google Scholar] [CrossRef]
  45. Sun, X.; Liu, L.; Li, Q.; Dong, J.; Lima, E.; Yin, R. Deep pixel-to-pixel network for underwater image enhancement and restoration. IET Image Process. 2019, 13, 469–474. [Google Scholar] [CrossRef]
  46. Lu, H.; Li, Y.; Uemura, T.; Kim, H.; Serikawa, S. Low illumination underwater light field images reconstruction using deep convolutional neural networks. Future Gener. Comput. Syst. 2018, 82, 142–148. [Google Scholar] [CrossRef]
  47. Sun, X.; Liu, L.; Dong, J. Underwater image enhancement with encoding-decoding deep CNN networks. In Proceedings of the 2017 IEEE SmartWorld, San Francisco, CA, USA, 4–8 August 2017. [Google Scholar]
  48. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. arXiv 2015, arXiv:1511.04587v2. [Google Scholar]
  49. Zhang, M.; Peng, J. Underwater Image Restoration Based on A New Underwater Image Formation Model. IEEE Access 2018. [Google Scholar] [CrossRef]
  50. Berman, D.; Levy, D.; Avidan, S.; Treibitz, T. Underwater single image color restoration using haze-lines and a new quantitative dataset. arXiv 2018, arXiv:1811.01343. [Google Scholar]
  51. Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark and beyond. arXiv 2019, arXiv:1901.05495. [Google Scholar]
Figure 1. Stereoscopic structure diagram of laboratory experimental system.
Figure 2. Sample image.
Figure 3. Restoration and reconstruction results: (a) result of BD; (b) result of POCS; (c) result of M−SB; (d) result of M−TV; (e) result of PIV−RR; (f) result of the proposed method in this paper.
Figure 4. Sample image.
Figure 5. Restoration and reconstruction results: (a) result of BD; (b) result of POCS; (c) result of M−SB; (d) result of M−TV; (e) result of PIV−RR; (f) result of the proposed method in this paper.
Figure 6. The framework of the underwater image detecting system for field tests.
Figure 7. Sample image, restoration and reconstruction results: (a) sample image; (b) result of M−SB; (c) result of M−TV; (d) result of PIV−RR; (e) result of the proposed method in this paper.
Figure 8. Sample image, restoration and reconstruction results: (a) sample image; (b) result of M−SB; (c) result of M−TV; (d) result of PIV−RR; (e) result of the proposed method in this paper.
Figure 9. Restoration and reconstruction results (size 3630 × 2723 pixels) for Chlorophyll (upper row) and Deep blue (bottom row): (a) result of VDSR; (b) result of Zhang's method; (c) result of PIV−RR; (d) result of the proposed method.
Table 1. Comparison of evaluation results.

Metric   Original   BD        POCS      M-SB      M-TV      PIV-RR    Proposed
BM       0.5745     0.3765    0.4910    0.1785    0.2765    0.1643    0.2568
IC       7.1401     8.0018    10.8352   9.3782    11.4782   8.7352    11.2806
GMG      158852     398102    902367    1298701   1176529   1782997   1865205
Table 2. Comparison of algorithm running time (min).

Algorithm   Time
BD          11.5
POCS        8.3
M−SB        5.2
M−TV        2.5
PIV−RR      2.9
Proposed    0.9
Table 3. Comparison of evaluation results.

Metric   Original   BD        POCS      M-SB      M-TV      PIV-RR    Proposed
BM       0.2917     0.1833    0.2783    0.0382    0.1782    0.0922    0.1892
IC       5.5011     3.0283    6.3872    8.9862    7.1382    4.9012    9.0023
GMG      559964     789389    743878    930690    375410    783229    799371
Table 4. Comparison of algorithm running time (min).

Algorithm   Time
BD          5.3
POCS        4.8
M−SB        5.2
M−TV        1.9
PIV−RR      2.3
Proposed    1.1
Table 5. Physical properties of the experimental system.

Parameter                           Value
Water attenuation (t)               2.9 m−1
LD power (P0)                       1 W
Operating voltage (V)               12 V
Angle of viewing (θ)                90°
Distance between LD and CCD (d0)    1 cm
Table 6. Comparison of evaluation results.

Metric   Original   M-SB      M-TV      PIV-RR    Proposed
BM       0.8032     0.3891    0.5710    0.2835    0.3978
IC       4.8923     12.9011   9.7893    8.3081    15.8923
GMG      28934      974033    440219    1098364   1063774
Table 7. Comparison of evaluation results.

Metric   Original   M-SB      M-TV      PIV-RR    Proposed
BM       0.4903     0.0671    1.9032    0.0212    0.0298
IC       2.9903     11.2341   4.2088    7.0023    18.9032
GMG      57891      520301    309872    908891    943374
Table 8. Comparison of algorithm running time (min).

Algorithm   Time
M−SB        4.5
M−TV        2.4
PIV−RR      3.7
Proposed    0.5
Table 9. Comparison of algorithm running time (min).

Algorithm   Time
M−SB        6.3
M−TV        1.5
PIV−RR      2.5
Proposed    1.3
Table 10. Evaluation results and running time (min).

Image         Metric   VDSR      Zhang     PIV−RR    Proposed
Deep blue     BM       0.8539    0.3748    0.6847    0.3184
              IC       5.2342    9.3435    6.2343    9.2344
              Time     0.5       1.3       2.1       0.9
Chlorophyll   BM       0.8374    0.1342    0.5890    0.1193
              IC       3.9877    5.2342    6.2342    11.3454
              Time     0.3       2.5       1.8       0.5
