Article

Underwater Degraded Image Restoration by Joint Evaluation and Polarization Partition Fusion

1 School of Mechanical Engineering, Dalian Jiaotong University, Dalian 116028, China
2 Dalian Advanced Robot Sensing and Control Technology Innovation Center, Dalian 116028, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(5), 1769; https://doi.org/10.3390/app14051769
Submission received: 12 January 2024 / Revised: 19 February 2024 / Accepted: 19 February 2024 / Published: 21 February 2024
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)

Abstract: Underwater images suffer from contrast degradation, reduced clarity, and information attenuation. Traditional methods estimate a single global degree of polarization; however, targets in water often have complex polarization properties. In low polarization regions, whose polarization is similar to that of the background, traditional methods struggle to distinguish target from non-target regions. This paper therefore proposes a joint evaluation and partition fusion method. First, histogram stretching is used to preprocess the two orthogonally polarized images, increasing contrast and enhancing detail information. The target is then partitioned according to the per-pixel values of the polarization image, and the low and high polarization target regions are extracted based on their polarization values. The low polarization region is recovered using the polarization difference method, and the high polarization region is recovered by jointly estimating multiple optimization metrics. Finally, the low and high polarization regions are fused. Subjectively, the experimental results are fully restored and information is retained completely: our method recovers the low polarization region, effectively removes the scattering effect, and increases image contrast. Objectively, the evaluation indexes EME, Entropy, and Contrast show that our method performs significantly better than the comparison methods, confirming the feasibility of the proposed algorithm in specific underwater scenarios.

1. Introduction

Underwater image restoration technology is widely used in marine archaeology, deep-sea exploration, marine equipment manufacturing, marine ecological protection, and marine biology research [1,2]. The degradation of underwater images hinders further exploration of the oceans, driving the development of underwater image restoration and detection technology. As light propagates underwater, absorption reduces transmission, so the captured image becomes blurred, colors are distorted, and image contrast and brightness are reduced [3,4]. To solve the problems of underwater image blurring and information attenuation, many scholars have proposed methods to improve the quality of underwater imaging. Bazeille et al. enhanced the contrast of underwater images through preprocessing steps such as wavelet denoising and color balancing [5]. He et al. proposed an image enhancement algorithm based on the dark channel prior (DCP) [6]. Galdran et al. optimized the DCP algorithm using inverse red channel and blue-green channel minimization, and introduced saturation information to subtract the effect of the active light source [7].
Traditional intensity-camera-captured underwater images cannot obtain accurate and reliable underwater image information, and cannot meet the actual needs of scientific researchers and explorers. In contrast, underwater image restoration based on polarization information is widely used nowadays [8]. Polarization imaging is able to separate the target and backscattered light from the target to obtain multi-dimensional information. Polarization is one of the intrinsic properties of light, and the underwater image recovery method based on polarization information usually collects polarized images in the same scene, combines the polarization information, and separates the background light and the target light, so as to achieve the purpose of recovering underwater images.
Combining the polarization information, Schechner proposed a physical model-based algorithm to invert the underwater image degradation process; it takes into account the causes of image degradation and estimates the parameters of the underwater imaging model, which improves the visibility of degraded images to some extent. However, this method assumes uniform illumination and therefore has some limitations [9,10]. Since then, more and more researchers have improved upon this model. Traditional methods recover highly polarized objects poorly, so Huang proposed a method based on curve fitting to estimate the transmittance and background light [11]. With this method, highly polarized objects in water can be well recovered; however, because it introduces a large number of nonlinear calculations, the processing time increases substantially even though the results are optimized. Hu proposed a transmittance correction method [12] that uses a simple polynomial fit to correct the transmittance of highly polarized objects; its processing time is significantly shorter than that of Huang et al.'s method. Recently, Li et al. recovered images based on the low-rank property of polarization information, breaking through the assumption about backscattered light in traditional polarization imaging methods [13].
Traditional methods are globally optimal for the entire image, but not for each target within it, so some targets may receive no enhancement or may even be degraded. Zhang et al. extracted the connected domains of low and high polarization targets by rotating the polarization angle, realizing the transition from global to local optimal estimation [14]. Recently, Li et al. also performed multiple partitions based on the polarization value of a target and recovered each partition separately, but the method uses too many partitions, which degrades the quality of the final fused image [15]. Meanwhile, Li et al. [16] proposed a polarization parameter partitioning optimization recovery method for degraded underwater images, which allows the recovery of multiple targets, optimizes the extraction of different targets, and solves the problem that some targets cannot be adequately recovered in the overall image. However, that method has shortcomings in recovering low polarization regions. The traditional approach estimates the polarization of the whole image, but targets in water often have complex polarization properties. For low polarization regions, whose polarization is similar to that of the backscattered light, traditional recovery cannot distinguish target from non-target regions; the recovered image is darkened and may be indistinguishable from the background. Wang et al. proposed a non-uniform illumination active underwater polarization imaging method based on complex polarization characteristics [17]. Its recovery effect is remarkable, and the EME index of the recovered image improves significantly.
However, when there is a large gap between the polarization characteristics of the target regions, the recovery effect of that method is not satisfactory: neither the low nor the high polarization region is fully recovered. The method proposed in this paper addresses this problem. The aim of our article is to recover the overall region efficiently when the target polarization characteristics are complex.
In summary, the organization of this paper is as follows. In Section 2.1, we describe the physical model of the imaging system and introduce the overall flow of the algorithm, which lays the foundation for the subsequent sections. In Section 2.2, we preprocess Imin and Imax. In Section 2.3, we analyze the high and low polarization regions, and in Section 2.4, we recover the low polarization regions using polarization differencing. In Section 2.5, we estimate the target polarization using a joint evaluation, and fuse the high and low polarization regions. The experimental scenario and environment are described in Section 2.6. The experimental results are presented in Section 3. Finally, the paper is summarized in Section 4.

2. Underwater Degraded Image Restoration Methods

2.1. Underwater Polarization Image Restoration Model

Most current methods are based on the Jaffe–McGlamery model and improve and optimize this basic underwater physical model. The imaging system in this paper consists of an active light source, a polarizer, the detection target, turbid water, an analyzing polarizer, and a polarization camera. From the known scattering properties of light, the information received by the polarization imaging model consists of a target light component, a forward-scattering component, and a backward-scattering component. Target light is light emitted by the active light source and reflected from the surface of the target object. Forward-scattered light is light reflected from the target that is scattered by impurities in the water before entering the camera. Scattering is the cause of blurred images and the loss of contrast. Therefore, constructing an underwater polarization image clarity model can effectively separate the forward- and backward-scattering information in the image and recover the target light, as shown in Figure 1a. Since the intensity of forward-scattered light is extremely low compared to backscattered light, including it in the model would significantly increase the complexity of the algorithm without much effect on image quality. This paper therefore focuses on the relationship between the target light and the backscattered light in underwater polarization imaging.
Based on the active light underwater polarization imaging schematic, we know that the total optical signal I received by the system is expressed as:
I(x, y) = S(x, y) + B(x, y)  (1)
where I ( x , y ) is the image obtained by the underwater polarization imaging system, which is the original information received by the camera, S ( x , y ) is the reflected light from the target, and B ( x , y ) is the backscattered light. The goal of underwater image restoration is to solve S ( x , y ) to overcome the effect of the backscattered light B ( x , y ) in the original image I ( x , y ) .
In this paper, the polarization information of light is described by the Stokes vector (parameters I, Q, U, V), whose components can be expressed as follows:
I = I(0^\circ, 0) + I(90^\circ, 0)  (2)
Q = I(0^\circ, 0) - I(90^\circ, 0)  (3)
U = I(45^\circ, 0) - I(135^\circ, 0)  (4)
V = I(45^\circ, \pi/2) - I(135^\circ, \pi/2)  (5)
where I denotes the total light intensity, the sum of the light intensity components in the 0° and 90° polarization directions; Q denotes the difference between the light intensity components in the 0° and 90° polarization directions; U denotes the difference between the linearly polarized light components in the 45° and 135° polarization directions; and V is the difference between the left- and right-circularly polarized light. In this paper, we mainly use linearly polarized light for the original information acquisition and do not acquire circularly polarized light; since only the degree of linear polarization is used in the subsequent calculations, we assume the circular polarization parameter V = 0. This assumption has a negligible effect on the subsequent image restoration and can be ignored. According to the definition of the degree of polarization, the degree of linear polarization is expressed as follows:
P = \frac{\sqrt{Q^2 + U^2}}{I}  (6)
The equations for the fitted curves of light intensity at different polarization angles are as follows:
I(\alpha) = \left( I + Q \cos 2\alpha + U \sin 2\alpha \right) / 2  (7)
where α is the angle between the polarization direction and the reference direction. Using the multi-channel real-time polarization detection system to obtain images at the four polarization angles 0°, 45°, 90°, and 135°, the Stokes parameters I, Q, and U can be solved. Substituting these into Equation (7) gives the light intensity image I(α) at any polarization angle. From I(α), the two polarization images with the maximum and minimum light intensities can be obtained; their polarization angles are orthogonal, and they are written as I_max(x, y) and I_min(x, y) according to their intensities, as shown in Figure 1b.
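As a concrete illustration, the Stokes computation and the extraction of the orthogonal maximum/minimum intensity pair can be sketched as follows (a minimal NumPy sketch, not the authors' code; function names are illustrative):

```python
import numpy as np

def stokes_from_angles(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle images (Eqs. (2)-(4))."""
    I = i0 + i90
    Q = i0 - i90
    U = i45 - i135
    return I, Q, U

def dolp(I, Q, U, eps=1e-12):
    """Degree of linear polarization (Eq. (6)); V is assumed to be 0."""
    return np.sqrt(Q**2 + U**2) / (I + eps)

def i_max_min(I, Q, U):
    """Extremal-intensity pair of the fitted curve (Eq. (7)):
    I(alpha) = (I + Q cos 2a + U sin 2a) / 2, whose extrema over alpha
    are (I +/- sqrt(Q^2 + U^2)) / 2, at orthogonal angles."""
    amp = np.sqrt(Q**2 + U**2)
    return (I + amp) / 2.0, (I - amp) / 2.0
```

Note that I_max + I_min reproduces the total intensity I, and (I_max − I_min)/(I_max + I_min) reproduces P, consistent with the derivation above.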
Combined with Equation (1), the two polarized images with the maximum and minimum light intensities can also be represented by backscattered light and light reflected from the target:
I_{max}(x, y) = S_{max}(x, y) + B_{max}(x, y)  (8)
I_{min}(x, y) = S_{min}(x, y) + B_{min}(x, y)  (9)
S_max(x, y) and B_max(x, y) in Equation (8) are the target reflected light and backscattered light at the maximum light intensity, respectively. Similarly, S_min(x, y) and B_min(x, y) in Equation (9) are the target reflected light and backscattered light at the minimum light intensity, respectively.
Based on the definition of the degree of polarization, the polarization of the backscattered light P_scat can be derived as follows:
P_{scat} = \frac{B_{max}(x, y) - B_{min}(x, y)}{B_{max}(x, y) + B_{min}(x, y)}  (10)
Similarly, the target light polarization P_tar can be expressed as Equation (11):
P_{tar} = \frac{S_{max}(x, y) - S_{min}(x, y)}{S_{max}(x, y) + S_{min}(x, y)}  (11)
Adding Equations (8) and (9) yields the total light intensity of the image, denoted I_total; the coordinates (x, y) are omitted for concise representation:
I_{total} = I_{max} + I_{min} = B + S  (12)
Subtracting Equation (9) from Equation (8) and substituting Equations (10) and (11) yields:
I_{max} - I_{min} = P_{scat} \cdot B + P_{tar} \cdot S  (13)
Therefore, according to Equations (12) and (13), we can solve for the target reflected light S and the backscattered light B:
S = \frac{I_{min}(1 + P_{scat}) - I_{max}(1 - P_{scat})}{P_{scat} - P_{tar}}  (14)
B = \frac{I_{max}(1 - P_{tar}) - I_{min}(1 + P_{tar})}{P_{scat} - P_{tar}}  (15)
After the above derivation, according to Equations (14) and (15), the recovered image can be derived from the original underwater polarization information by solving for P_tar, P_scat, I_min, and I_max.
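A minimal sketch of this inversion, assuming NumPy arrays and P_scat ≠ P_tar (the function name is illustrative, not from the paper):

```python
import numpy as np

def separate_target_backscatter(i_max, i_min, p_scat, p_tar, eps=1e-12):
    """Recover target light S (Eq. (14)) and backscatter B (Eq. (15))
    from the orthogonal intensity pair; assumes p_scat != p_tar."""
    denom = (p_scat - p_tar) + eps
    S = (i_min * (1 + p_scat) - i_max * (1 - p_scat)) / denom
    B = (i_max * (1 - p_tar) - i_min * (1 + p_tar)) / denom
    return S, B
```

As a consistency check, S + B equals I_max + I_min pixel-wise, matching Equation (12).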
We aim to recover the overall region efficiently when the target polarization characteristics are complex. This paper builds further on reference [16]. First, we use histogram stretching to preprocess the two orthogonally polarized images, which increases the image contrast and enhances detail information. Then, the target is partitioned according to the per-pixel values of the polarization image, and the low and high polarization target regions are extracted based on their polarization values. The low polarization region is recovered using the polarization difference method, and the high polarization region is recovered using the joint estimation of multiple optimization metrics. Finally, the low and high polarization regions are fused, as shown in Figure 2.

2.2. Imin, Imax Pre-Processing

As turbidity increases, the target region S(x, y) becomes blurred, the interference B(x, y) caused by the turbid medium grows, and the ratio (sharpness) γ = S(x, y)/B(x, y) decreases, resulting in low image contrast. As in reference [18], imaging quality in turbid environments can be improved by the polarization orthogonal image contrast enhancement preprocessing used in this paper. However, because a polarization relationship exists between the two orthogonally polarized images, enhancing the contrast of each image independently may destroy this relationship, so the polarization characteristics would no longer be preserved.
P(x, y) = \frac{I_{max}(x, y) - I_{min}(x, y)}{I_{max}(x, y) + I_{min}(x, y)}  (16)
The degree of polarization is the most important and basic polarization parameter, and Equation (16) is its definition [18]; it takes the same form as the Michelson contrast, which uses the maximum and minimum luminances. So, to maintain the polarization relationship, we only need to process one of the orthogonal images with the contrast enhancement method and then obtain the other image from the polarization relationship between the two. In this paper, histogram stretching is performed on the polarized orthogonal image: narrow gray-level intervals of the image's histogram are stretched toward both ends, increasing the contrast of the whole image and achieving image enhancement.
A = \min f(x, y)  (17)
B = \max f(x, y)  (18)
I_{min}^{pro}(x, y) = \begin{cases} 0, & f(x, y) \le A \\ \dfrac{255}{B - A}\left(f(x, y) - A\right), & A < f(x, y) < B \\ 255, & f(x, y) \ge B \end{cases}  (19)
where A is the minimum gray level, B is the maximum gray level, f(x, y) is the original I_min image, and I_{min}^{pro}(x, y) is the stretched image.
I_{max}^{pro}(x, y) = \frac{1 + P(x, y)}{1 - P(x, y)} I_{min}^{pro}(x, y)  (20)
There exists an intrinsic polarization relationship between the original orthogonally polarized image pair I_min, I_max, as shown in Equation (16); the degree of polarization P(x, y) can be obtained from Equation (6), and the same polarization relationship should be maintained between the preprocessed images. In addition, as noted in reference [18], because information attenuation in turbid media is large, the "darkest" (I_min) image suffers less backscattered light than the "brightest" (I_max) image, so target light accounts for the largest fraction of it. It is therefore more effective to apply contrast-enhancement preprocessing directly to I_min; the preprocessed I_max is then obtained from Equation (20).
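The preprocessing step can be sketched as follows (an assumed NumPy implementation, names illustrative; the stretch assumes a non-constant input image):

```python
import numpy as np

def stretch(img):
    """Histogram stretching (Eq. (19)): map [min, max] linearly onto [0, 255]."""
    a, b = float(img.min()), float(img.max())
    return np.clip((img - a) * 255.0 / (b - a), 0.0, 255.0)

def preprocess_pair(i_min, p, eps=1e-12):
    """Stretch only I_min, then rebuild I_max from the degree of
    polarization P via Eq. (20), so the polarization relation is kept."""
    i_min_pro = stretch(i_min)
    i_max_pro = (1 + p) / (1 - p + eps) * i_min_pro
    return i_min_pro, i_max_pro
```

Because I_max_pro is derived from I_min_pro through Equation (20) rather than stretched independently, the per-pixel degree of polarization of the pair is preserved by construction.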

2.3. Analysis and Extraction of Low and High Polarization Regions

The traditional method makes a global estimate of polarization; however, targets in water often have complex polarization characteristics, and global estimation is likely to give poor restoration, with some areas even degraded. In reality, a recovered target does not have only one polarization characteristic, so recovery with the traditional method is flawed. For the low polarization region, whose polarization is similar to that of the backscattered light, the traditional method cannot differentiate target from background: the recovered image becomes darker, or even matches the background, leaving the low polarization region indistinguishable, as shown in Figure 3.
There is a clear gap between the polarization of the high and low polarization regions of the target; for example, metal regions have high polarization values, while plastic and similar regions have low values. Exploiting this characteristic, the high and low polarization regions are screened out by polarization partitioning. The polarization image is obtained from the Stokes vector according to Equation (6); each pixel of this image represents the polarization value of the corresponding point, and a reasonable partition of polarization values is established to extract the high and low polarization regions of the target. Figure 3b shows a pseudo-color map of polarization, in which the two kinds of regions are easy to filter and distinguish according to the differences in their polarization values, which facilitates the extraction of different regions.

2.4. Polarization Difference Recovery in Low Polarization Region

Firstly, the partitioned, pre-extracted images are subjected to maximum filtering; maximum filtering removes dark spots in the image and increases its brightness, enhancing the target outline and facilitating target extraction.
g_{choose}(x, y) = \max_{(s, t) \in \Omega(x, y)} I_{choose}^{c}(s, t)  (21)
where I_choose^c is the image of the selected partition, Ω(x, y) is the window centered at the pixel, and c indexes the three image channels. The image is then binarized with the Otsu method according to the probability of the variance distribution, and the target coordinate information is extracted. Next, edge detection is performed with the Sobel operator, and finally a closing operation realizes the extraction of multiple regions.
The Otsu method, also known as the maximum between-class variance method, determines the threshold for binary segmentation of an image; it divides an image into background and foreground according to its gray-scale characteristics. Since variance measures the uniformity of the gray-scale distribution, the larger the between-class variance between background and foreground, the larger the difference between the two parts of the image. The method is simple to understand, fast to compute, and applicable to most image segmentation problems.
Given a grayscale image of size M × N, let x be the gray value, y the gradient value, and K the number of gray (and gradient) levels. The occurrence probability of the pair n_xy is given by Equation (22):
P_{xy} = \frac{n_{xy}}{M \times N}  (22)
The probabilities of background A and target B are shown in Equations (23) and (24).
P_A = \sum_{x=1}^{s}\sum_{y=1}^{t} P_{xy}  (23)
P_B = \sum_{x=s+1}^{K}\sum_{y=t+1}^{K} P_{xy}  (24)
where s and t are the gray-level and gradient-level segmentation thresholds, respectively. The mean vectors U_A and U_B of background A and target B are given by Equations (25) and (26), respectively.
U_A = (U_{Ax}, U_{Ay})^T = \left( \sum_{x=1}^{s}\sum_{y=1}^{t} \frac{x P_{xy}}{P_A}, \; \sum_{x=1}^{s}\sum_{y=1}^{t} \frac{y P_{xy}}{P_A} \right)^T  (25)
U_B = (U_{Bx}, U_{By})^T = \left( \sum_{x=s+1}^{K}\sum_{y=t+1}^{K} \frac{x P_{xy}}{P_B}, \; \sum_{x=s+1}^{K}\sum_{y=t+1}^{K} \frac{y P_{xy}}{P_B} \right)^T  (26)
The total mean vector U of the image is as in Equation (27):
U = (U_x, U_y)^T = \left( \sum_{x=1}^{K}\sum_{y=1}^{K} x P_{xy}, \; \sum_{x=1}^{K}\sum_{y=1}^{K} y P_{xy} \right)^T  (27)
The between-class dispersion matrix is given in Equation (28):
S(s, t) = P_A (U_A - U)(U_A - U)^T + P_B (U_B - U)(U_B - U)^T  (28)
The difference between the background and target classes is proportional to the dispersion measure, so the optimal threshold (s*, t*) is the one that maximizes the dispersion, as in Equation (29):
\mathrm{tr}\, S(s^*, t^*) = \max_{s,\,t} \mathrm{tr}\left( S(s, t) \right)  (29)
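For reference, the classic one-dimensional Otsu search can be sketched as below; the two-dimensional gray/gradient variant described above extends this with a second axis (an illustrative NumPy sketch, not the authors' implementation):

```python
import numpy as np

def otsu_threshold(gray):
    """Classic 1-D Otsu: exhaustively pick the gray level that maximizes
    the between-class variance w0*w1*(mu0 - mu1)^2."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                     # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()     # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

The 2-D version replaces the scalar means with the mean vectors of Equations (25)-(27) and maximizes the trace of Equation (28) over the pair (s, t).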
The Sobel operator is a gradient-magnitude edge detection operator. The basic idea is to compute a weighted average of the image with convolutional templates to detect edges, realizing edge detection through a first-order differential calculation. Let f(x, y) denote an image; its gradient at the point (x, y) is defined as in Equation (30):
\nabla f(x, y) = (G_x, G_y)^T = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right)^T  (30)
where |∇f(x, y)| denotes the magnitude of the gradient, defined as in Equation (31):
\left| \nabla f(x, y) \right| = \left[ \left( \frac{\partial f}{\partial x} \right)^2 + \left( \frac{\partial f}{\partial y} \right)^2 \right]^{1/2}  (31)
The Sobel operator contains convolutional templates in both horizontal and vertical directions, as shown below.
\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}
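A minimal sketch of applying the 3×3 Sobel templates to obtain the gradient magnitude (a naive valid-region convolution for clarity; production code would use an optimized separable filter):

```python
import numpy as np

# Sobel derivative templates: KX responds to horizontal changes, KY to vertical
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_magnitude(img):
    """First-order gradient magnitude |grad f| over the valid interior
    (output is (H-2) x (W-2)); sign conventions do not affect magnitude."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * KX).sum()
            gy[i, j] = (patch * KY).sum()
    return np.sqrt(gx**2 + gy**2)
```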
As seen in the previous section, recovering low polarization regions with the traditional method is flawed; the polarization difference method overcomes this by differencing two orthogonally polarized images, so low polarization regions are well recovered. Therefore, this paper uses the polarization difference method for the low polarization region. Assume that the target light and the background scattered light are both perfectly linearly polarized, and acquire two orthogonally polarized images to difference. The active light source emits a beam, which is first polarized by a linear polarizer P1 and then illuminates the target T. Another linear polarizer P2 placed in front of the detector acts as the analyzer, as shown in Figure 4. When the polarization direction of P2 is parallel to that of P1, the intensity image parallel to the incident polarization is obtained, denoted I_co. When P2 is rotated so that its polarization direction is orthogonal to that of P1, the intensity image orthogonal to the incident polarization is obtained, denoted I_cross [19]. Polarization differential imaging is the difference of these two mutually perpendicular polarization states: I_difference = I_co − I_cross.

2.5. Joint Estimation of Target Light Polarization

Cosine similarity, also known as cosine distance, measures the difference between two images using the cosine of the angle between two vectors in a vector space. A cosine value close to 1 indicates the angle tends to 0 and the two vectors are more similar; a value close to 0 indicates the angle tends to 90° and they are less similar. When each image is represented as a vector, the similarity of two pictures is characterized by computing the cosine distance: the algorithm transforms each image's feature set into a vector in a high-dimensional space, and the cosine of the angle between two such vectors effectively determines the similarity of the two images.
\cos\theta = \frac{\sum_{i=1}^{n} S_i B_i}{\sqrt{\sum_{i=1}^{n} S_i^2} \, \sqrt{\sum_{i=1}^{n} B_i^2}}  (32)
EME (Edge Model Estimation) expresses the degree of local gray-scale variation of an image; the stronger the local variation, the more detailed the image representation and the higher the calculated EME value [20].
EME = \frac{1}{K_1 K_2} \sum_{l=1}^{K_2} \sum_{k=1}^{K_1} 20 \log \frac{I^{\omega}_{max;\,k,l}}{I^{\omega}_{min;\,k,l}}  (33)
According to Equation (33), the image is divided into K1 × K2 small regions, and the logarithm of the ratio of the largest to the smallest gray value in each region is averaged; the result is the evaluation value. EME is often used to measure image quality: the larger the EME value, the better the image quality.
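The block-wise EME computation can be sketched as follows (an assumed implementation; `eps` guards against zero minima and is not part of Equation (33)):

```python
import numpy as np

def eme(img, k1=4, k2=4, eps=1e-6):
    """EME (Eq. (33)): mean over K1 x K2 blocks of
    20 * log10(block max / block min)."""
    h, w = img.shape
    bh, bw = h // k2, w // k1          # block height / width
    total, n = 0.0, 0
    for r in range(k2):
        for c in range(k1):
            block = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            total += 20.0 * np.log10((block.max() + eps) / (block.min() + eps))
            n += 1
    return total / n
```

A flat image scores about 0, while an image with strong local variation scores high, matching the interpretation above.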
A common method is to estimate the target light polarization by iterating EME over the range 0 to 1 in steps of 0.01. However, estimating the target light polarization by EME alone has drawbacks. In many cases, when the target light polarization is set too low, the image is subjectively poor, with little structural or detail similarity to the original image, yet it still has a high EME value; this interferes with accurately estimating the target light polarization by EME, as shown in Figure 5.
To this end, this paper introduces a method for jointly estimating the target light polarization using cosine similarity and EME. Comparing the target light (S) and the backscattered light (B), the target light polarization is first pre-estimated using cosine similarity, iterating over the range 0 to 1 with a step size of 0.01. This initial screening excludes cases where the estimate has low structural and detail similarity to the original image but a high EME value. After screening, EME is iterated within the remaining candidate range, in steps of 0.01, to determine the final estimate. The specific process is shown in Figure 6.
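The joint search can be sketched as below. This is a hedged sketch, not the authors' code: the similarity threshold `sim_floor` and the pluggable `eme_fn` scoring hook are assumptions introduced for illustration, not values from the paper.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity (Eq. (32)) on flattened images."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def joint_estimate_p_tar(i_max, i_min, p_scat, eme_fn, sim_floor=0.5):
    """Sweep P_tar over (0, 1) in 0.01 steps; prescreen candidates by
    cosine similarity between the recovered target light S (Eq. (14))
    and the raw intensity, then pick the EME-maximizing survivor."""
    i_total = i_max + i_min
    best_p, best_score = None, -np.inf
    for p_tar in np.arange(0.01, 1.0, 0.01):
        if abs(p_scat - p_tar) < 1e-6:
            continue  # Eq. (14) is singular when P_scat == P_tar
        S = (i_min * (1 + p_scat) - i_max * (1 - p_scat)) / (p_scat - p_tar)
        if cosine_similarity(S, i_total) < sim_floor:
            continue  # prescreen: low structural similarity to the original
        score = eme_fn(np.clip(S, 0, None))
        if score > best_score:
            best_p, best_score = p_tar, score
    return best_p
```

The prescreen discards the degenerate candidates that would otherwise win on the EME score alone, which is exactly the failure mode Figure 5 illustrates.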

2.6. Experimental Environment

The experimental environment is shown in Figure 7; the main components are a polarization camera, a polarized light source, a linear polarizer, a water tank, and a target. The camera is a TRIO50S-QC color polarization camera equipped with a Sony IMX250MYR image sensor, a four-channel division-of-focal-plane polarization camera that simultaneously acquires polarized images at four polarization angles (0°, 45°, 90°, and 135°), with a maximum image size of 2448 × 2048 pixels. This simultaneous acquisition avoids the poor recovery caused by time delay and the irregular movement of scattering particles. In the experiment, the target was illuminated by an active light source (Rawray 150 W LED lamp) through a USP-50C0.4-38 linear polarizer, and the target was placed in a transparent acrylic tank (transmittance 96%). The inside of the tank was covered with a black coating to avoid interference from ambient light outside the lab. To simulate the turbidity of natural water, skimmed milk was quantitatively added to the water (1200:5), and the turbidity was varied accordingly, as shown in Table 1. Finally, the optimized experimental images were obtained by the method proposed in this paper.

3. Experimental Results and Discussion

This paper uses three indicators for objective evaluation. In the experimental results, Entropy values fall within 10, EME values within 20, and Contrast values within 50. Because these ranges differ widely, to facilitate lateral comparison and make the results more intuitive, the values of each evaluation indicator were normalized by percentage occupancy within each group of experiments (to the range 0 to 1). EME is defined in Equation (33); the better the restoration quality, the larger the EME value. Image information Entropy is also a common index for image quality evaluation, reflecting the richness of image information from the perspective of information theory; the larger the Entropy, the clearer the image. For Contrast, a larger value means the image outline is clearer and the detail information is richer [21].
In Experiment 1, objectively, the EME value of Wang's method is quite high, but its Entropy and Contrast values are the lowest. The Entropy and Contrast values of our method are enhanced significantly, and its EME is 2.1 times that of the original image, so our method recovers the image information effectively. Subjectively, our method also gives the best results, recovering both the low and high polarization regions well. In Experiment 2, Wang's method has the highest EME value and ours the second highest, but Wang's Entropy and Contrast values are both lower than ours; subjectively, the other methods fail to recover the low polarization region, and our method is better. In Experiment 3, the EME values of Wang's method and our method are both significantly enhanced, increasing by 1.06 and 1.004 times, respectively. Our method has the highest Entropy value, better than Wang's, and in Contrast, both methods improve significantly and comparably. In Experiment 4, Wang's method has the highest EME value and Tali's method [22] the second highest, but subjectively our method is better; the EME value of our method is 1.05 times that of the original image, better than Schechner's method, and its Entropy and Contrast values are much higher than those of the other methods, as shown in Table 2 and Figure 8.
In reality, target composition is complex: a target rarely has only one kind of polarization characteristic, so when the traditional method is used to recover images, some low polarization regions are poorly recovered. The method in this paper has a clear advantage over other methods when a target carries complex polarization information, because it optimizes the recovery of the high and low polarization regions separately. In a low polarization region, the target's polarization is similar to that of the backscattered light, so traditional recovery cannot distinguish the target from the background, and the recovered region often becomes darker or even indistinguishable from the background. Our method instead recovers the low polarization region with the polarization difference method, recovers the high polarization region by jointly estimating multiple optimization indexes, and finally fuses the two regions. Several groups of experiments were conducted; Figure 9 is a three-dimensional, disaggregated bar chart of the four groups of results. Subjectively, the experimental results as a whole are fully restored and the information is retained completely: our method recovers the low polarization region well, effectively removes the scattering effect, and enhances the visual quality of the image. Objectively, the evaluation indexes EME, Entropy, and Contrast show that our method performs significantly better than the other methods, which confirms the feasibility of this paper's algorithm for application in specific underwater scenarios.
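The partitioned recovery described above can be sketched as follows. This is an illustrative sketch rather than the paper's exact formulation: the partition threshold `p_thresh`, the backscatter polarization `p_scat`, the background value `a_inf`, and the Schechner-style descattering used for the high-polarization branch are all assumptions introduced for the example.

```python
import numpy as np

def partition_fusion(i_par, i_perp, p_thresh=0.3, a_inf=0.8, p_scat=0.6):
    """Sketch of partitioned recovery from two orthogonal polarization images:
    polarization difference for the low-polarization region, Schechner-style
    descattering for the high-polarization region, then mask-based fusion."""
    i_par = i_par.astype(float)
    i_perp = i_perp.astype(float)
    total = i_par + i_perp + 1e-6

    # Degree of polarization per pixel; thresholding it partitions the scene.
    dop = np.abs(i_par - i_perp) / total
    low_mask = dop < p_thresh

    # Low-polarization region: the polarization difference suppresses the
    # partially polarized backscatter that masks the target.
    low_rec = np.clip(i_par - i_perp, 0.0, None)

    # High-polarization region: estimate the backscatter from the difference
    # image and invert the degradation (classic descattering form).
    backscatter = np.clip((i_par - i_perp) / p_scat, 0.0, None)
    trans = np.clip(1.0 - backscatter / a_inf, 1e-3, 1.0)
    high_rec = (total - backscatter) / trans

    # Fuse the two recoveries according to the partition masks.
    fused = np.where(low_mask, low_rec, high_rec)
    return fused, dop
```

In the full method the two branches are further refined (histogram stretching as preprocessing and joint estimation of the target-light polarization from multiple metrics); the sketch only shows the partition-then-fuse control flow.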

4. Conclusions

In this paper, we propose an underwater degraded image restoration method based on joint evaluation and polarization partition fusion. Underwater targets are complex in composition and exhibit complex polarization properties. In a low polarization region, the target's polarization is similar to that of the backscattered light, so the target cannot be distinguished from the background; our method effectively solves this problem. As the multiple experiments summarized in Figure 9 show, the EME, Entropy, and Contrast values of our method are significantly better than those of the other methods compared in this paper. The proposed method can therefore effectively recover degraded underwater images, increase image contrast, and improve image quality. These results provide a research basis for underwater polarization image restoration and enhancement in specific scenarios, although the robustness and adaptive ability of the algorithm still need to be improved. Applying deep learning for further refinement is our aim for future work [23]. The running speed of our algorithm is also not yet ideal and can be improved in the future.

Author Contributions

C.C., Y.F. and R.L. proposed the idea of the paper. C.C., H.C., S.Z. and M.W. helped manage the annotation group and helped clean the raw annotation. C.C. conducted all experiments and wrote the manuscript. C.C., Y.F. and R.L. revised and improved the text. C.C., Y.F. and R.L. are the people in charge of this project. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by [Science and Technology Foundation of State Key Laboratory] grant number [2022-JCJQ-L8-015-0201], [Liaoning Provincial Department of Education Scientific research funding project] grant number [LJKZ0475] and [Dalian High-Level Talent Innovation Support Program] grant number [2022RJ03].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yuan, X.; Guo, L.X.; Luo, C.T.; Zhao, X.T.; Yu, C.T. A Survey of Target Detection and Recognition Methods in Underwater Turbid Areas. Appl. Sci. 2022, 12, 4898. [Google Scholar] [CrossRef]
  2. Hu, K.; Weng, C.H.; Zhang, Y.W.; Jin, J.L.; Xia, Q.F. An Overview of Underwater Vision Enhancement: From Traditional Methods to Recent Deep Learning. J. Mar. Sci. Eng. 2022, 10, 241. [Google Scholar] [CrossRef]
  3. Guo, J.C.; Li, C.Y.; Guo, C.L.; Chen, S.J. Research progress of underwater image enhancement and restoration methods. J. Image Graph. 2017, 22, 273–287. [Google Scholar] [CrossRef]
  4. Zhao, Y.Q.; Dai, H.M.; Shen, L.H. Review of underwater polarization clear imaging methods. Infrared Laser Eng. 2020, 49, 43–53. [Google Scholar] [CrossRef]
  5. Bazeille, S.; Quidu, I.; Jaulin, L.; Malkasse, J.-P. Automatic underwater image pre-processing. In Proceedings of the CMM’06, Brest, France, 16–19 October 2006. [Google Scholar]
  6. He, K.M.; Sun, J.; Tang, X.O. Single image haze removal using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009. [Google Scholar] [CrossRef]
  7. Galdran, A.; Pardo, D.; Picón, A. Automatic Red-Channel underwater image restoration. J. Vis. Commun. Image Represent. 2015, 26, 132–145. [Google Scholar] [CrossRef]
  8. Zhang, H.J.; Gong, J.R.; Ren, M.Y.; Zhou, N.; Wang, H.T.; Meng, Q.G.; Zhang, Y. Active Polarization Imaging for Cross-Linear Image Histogram Equalization and Noise Suppression in Highly Turbid Water. Photonics 2023, 10, 145. [Google Scholar] [CrossRef]
  9. Schechner, Y.Y.; Karpel, N. Recovery of underwater visibility and structure by polarization analysis. IEEE J. Ocean. Eng. 2005, 30, 102–119. [Google Scholar] [CrossRef]
  10. Schechner, Y.Y.; Karpel, N. Clear underwater vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 19 July 2004. [Google Scholar] [CrossRef]
  11. Huang, B.J.; Liu, T.G.; Hu, H.F.; Han, J.F.; Yu, M.X. Underwater image recovery considering polarization effects of objects. Opt. Express 2016, 24, 9826–9838. [Google Scholar] [CrossRef] [PubMed]
  12. Hu, H.F.; Zhao, L.; Huang, B.J.; Li, X.B.; Wang, H.; Liu, T.G. Enhancing Visibility of Polarimetric Underwater Image by Transmittance Correction. IEEE Photonics J. 2017, 9, 6802310. [Google Scholar] [CrossRef]
  13. Li, H.X.; Zhu, J.P.; Deng, J.X.; Guo, F.Q.; Zhu, J.P. Visibility enhancement of underwater images based on polarization common-mode rejection of a highly polarized target signal. Opt. Express 2022, 30, 43973–43986. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, H.J.; Zhou, N.; Meng, Q.G.; Guo, F.Q.; Ren, M.Y.; Wang, H.T. Local optimum underwater polarization imaging enhancement based on connected domain prior. J. Opt. 2022, 24, 10570. [Google Scholar] [CrossRef]
  15. Li, R.H.; Zhang, S.H.; Cai, C.; Xu, Y.; Cao, H. Underwater polarization image restoration based on a partition method. Opt. Eng. 2023, 62, 068103. [Google Scholar] [CrossRef]
  16. Li, R.H.; Cai, C.Y.; Zhang, S.H.; Xu, Y.H.; Cao, H.T. Polarization parameter partition optimization restoration method for underwater degraded image. Opt. Precis. Eng. 2023, 31, 3010. [Google Scholar] [CrossRef]
  17. Wang, J.J.; Wan, M.J.; Cao, X.Q.; Zhang, X.J.; Gu, G.H.; Chen, Q. Active non-uniform illumination-based underwater polarization imaging method for objects with complex polarization properties. Opt. Express 2022, 30, 46926–46943. [Google Scholar] [CrossRef] [PubMed]
  18. Li, X.B.; Hu, H.F.; Zhao, L.; Wang, H.; Yu, Y.; Wu, L. Polarimetric image recovery method combining histogram stretching for underwater imaging. Sci. Rep. 2018, 8, 12430. [Google Scholar] [CrossRef] [PubMed]
  19. Shi, C.Y.; Zhu, Z.W.; Yin, G.F.; Gao, X.H.; Wang, Z.M.; Zhang, S.; Zhou, Z.H.; Hu, X.Y. Measurement of Submicron Particle Size Using Scattering Angle-Corrected Polarization Difference with High Angular Resolution. Photonics 2023, 10, 1282. [Google Scholar] [CrossRef]
  20. Miao, Y.; Sowmya, A. New image quality evaluation metric for underwater video. IEEE Signal Process. Lett. 2014, 21, 1215–1219. [Google Scholar] [CrossRef]
  21. Agaian, S.S.; Panetta, K.; Grigoryan, A.M. Transform-based image enhancement algorithms with performance measure. IEEE Trans. Image Process. 2001, 10, 367–382. [Google Scholar] [CrossRef] [PubMed]
  22. Treibitz, T.; Schechner, Y.Y. Active polarization descattering. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 385–399. [Google Scholar] [CrossRef] [PubMed]
  23. Han, P.; Li, X.; Liu, F.; Cai, Y.; Yang, K.; Yan, M.; Sun, S.; Liu, Y.; Shao, X. Accurate passive 3D polarization face reconstruction under complex conditions assisted with deep learning. Photonics 2022, 9, 924. [Google Scholar] [CrossRef]
Figure 1. (a) Schematic diagram of underwater polarization imaging of active light; (b) polarization orthogonal images.
Figure 2. Overall flowchart.
Figure 3. (a) The traditional method of low-polarized region recovery defects; (b) pseudo-color plot of polarization (the scale range on the right is the polarization value calculated by Equation (6)).
Figure 4. Polarization difference diagram.
Figure 5. (a) Using traditional EME method to estimate Ptar. The estimation results have low structural and detail similarity, but have high EME values; (b) EME single estimated target light polarization curve, where the horizontal coordinate represents the value of polarizability, and the vertical coordinate represents the value of EME.
Figure 6. Flowchart for joint estimation of target optical polarization.
Figure 7. Experimental environment diagram.
Figure 8. Final experimental results.
Figure 9. Three-dimensional bar graph of the experimental results.
Table 1. Working environment.

Camera sensor resolution: 2248 × 2048 px, 5.0 MP
Operating temperature: −20 to 50 °C
Active light: Rawray 150 W LED light
Linearly polarized light: USP-50C0.4-38
High-transparency acrylic tank: 40 cm × 40 cm × 50 cm, over 96% light transmission
Software platform: MATLAB 2020a
Table 2. Final experimental data (red is the maximum value, blue is the minimum value; ↑ indicates that a larger value of the evaluation index means better image quality).

Underwater Experiment 1 — Joint Evaluation Indicators
Method               EME ↑    Entropy ↑  Contrast ↑
Origin image         6.026    5.180      3.216
Schechner's method   12.912   5.412      24.318
Tali's method        20.443   6.312      32.048
Wang's method        23.734   4.777      15.181
Our method           18.727   6.751      45.058

Underwater Experiment 2 — Joint Evaluation Indicators
Method               EME ↑    Entropy ↑  Contrast ↑
Origin image         4.504    5.305      2.253
Schechner's method   7.941    4.748      2.812
Tali's method        8.349    5.799      9.790
Wang's method        11.214   5.692      11.673
Our method           8.867    6.255      25.987

Underwater Experiment 3 — Joint Evaluation Indicators
Method               EME ↑    Entropy ↑  Contrast ↑
Origin image         2.528    4.738      1.202
Schechner's method   4.916    3.866      0.803
Tali's method        3.902    4.808      4.491
Wang's method        5.226    4.210      4.654
Our method           5.067    5.193      4.615

Underwater Experiment 4 — Joint Evaluation Indicators
Method               EME ↑    Entropy ↑  Contrast ↑
Origin image         2.130    5.363      2.242
Schechner's method   3.614    4.490      2.034
Tali's method        4.902    4.955      4.648
Wang's method        5.239    4.550      6.585
Our method           4.365    5.560      7.029

Cai, C.; Fan, Y.; Li, R.; Cao, H.; Zhang, S.; Wang, M. Underwater Degraded Image Restoration by Joint Evaluation and Polarization Partition Fusion. Appl. Sci. 2024, 14, 1769. https://doi.org/10.3390/app14051769
