3.1. Experiment Settings
To verify the effectiveness of the proposed algorithm, we chose sea images of three different illumination types as test data: a uniform illumination image, an elliptical non-uniform illumination image, and a partial elliptical non-uniform illumination image. The test images were panchromatic (image size: 1024 × 1024) and were acquired by our self-developed high-resolution camera mounted on an airborne platform; the imaging sensor of the camera was manufactured by DALSA (Canada).
All verification tests were run on a PC with Windows 7, an Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70 GHz, and MATLAB R2015a. The convergence threshold ε for the iteration stopping criterion was set to 0.001.
We also compared the results of our method with those of six other state-of-the-art image enhancement methods: HE, SSR [10], MSR [11], the Naturalness-Preserved Enhancement (NPE) algorithm [8], the Simultaneous Reflectance and Illumination Estimation (SRIE) model [19], and the Probabilistic method for Image Enhancement (PIE) [13]. These six methods cover the typical categories discussed in the introduction: HE-based, masking-based, Retinex-based, and total-variation enhancement methods.
3.2. Sea Image Enhancement
Typical images selected in this paper are shown in Figure 5a, Figure 6a and Figure 7a, where Figure 5a is a uniform illumination image, Figure 6a is an elliptical non-uniform illumination image, and Figure 7a is a partial elliptical non-uniform illumination image. Comparisons of the enhanced images produced by the different methods are shown in Figures 5–7.
In Figure 5a, Figure 6a and Figure 7a, the sea surface images are dominated by surface reflectivity: their contrast is low and the texture features are very faint. Under uniform lighting, the image is little affected by illumination, and the relative differences in texture information remain significant even when the exposure intensity is weak. For images formed under non-uniform illumination, we regard the sea surface ripples as background information. When the strong forward scattering of particles in the water vapor (the large component) couples with this background information (the small component), the relative differences between the texture features to be observed become smaller, making image enhancement more difficult.
For the uniform lighting scene, all of the enhancement methods in the experiment effectively raised the global background gray level of the image. However, for the SRIE and PIE methods, the differences between texture features were not effectively stretched after the global brightness enhancement, and the two results were essentially the same (see Figure 5f,g). We attribute this to over-estimation of the illumination component by both methods: each estimated illumination component contained a certain amount of sea texture information, distorting the reflectance component. The contrast value of the HE method (Figure 5b) was the largest among all the methods, but HE accounts for neither noise nor illumination. In a uniform illumination environment, a large amount of image noise was amplified along with the image texture. At the same time, the HE method stretched the gray levels globally, making the bright areas brighter and the dark areas darker, which hindered observation of dark-area details. Both the SSR and MSR methods produced a low average luminance on the uniformly illuminated sea image (Figure 5c,d), and, as with the HE method, the dark-area details were not significant. The NPE method performed relatively well: exploiting the relative differences in texture information under uniform illumination, it obtained an ideal layered separation of wave information and local information, applied the corresponding enhancement to each layer, and synthesized the final result. The gray value range of the NPE-enhanced image (Figure 5e) was appropriate; however, image noise was also enhanced as high-frequency information, and areas with little texture information remained dark, making them hard to observe next to the surrounding bright areas.
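For reference, the Retinex-based baselines above estimate the illumination by Gaussian smoothing and take the log-ratio of image to illumination as the reflectance. The following is a minimal single-scale Retinex sketch; the Gaussian scale sigma and the final min-max normalization are our own illustrative choices, not the exact settings of [10]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr(image, sigma=80.0, eps=1e-6):
    """Single-scale Retinex: reflectance = log(I) - log(Gaussian * I)."""
    img = image.astype(np.float64) + eps
    illumination = gaussian_filter(img, sigma=sigma)  # smooth illumination estimate
    reflectance = np.log(img) - np.log(illumination + eps)
    # Min-max stretch back to [0, 255] for display (illustrative choice).
    r_min, r_max = reflectance.min(), reflectance.max()
    return ((reflectance - r_min) / (r_max - r_min + eps) * 255.0).astype(np.uint8)

# Demo on a synthetic horizontal gradient standing in for a sea image.
enhanced = ssr(np.tile(np.linspace(1.0, 255.0, 64), (64, 1)))
```

MSR differs only in averaging such reflectance maps over several sigma values, which is why both share the illumination-estimation weakness discussed above.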
For the non-uniform illumination images shown in Figure 6a and Figure 7a, the subjective appearance of the images produced by the different enhancement methods differed considerably. Each method showed specific behaviors on the elliptical non-uniform illumination image (Figure 6) and the partial elliptical non-uniform illumination image (Figure 7).
Although the images enhanced by the HE method (Figure 6b and Figure 7b) had a uniform gray distribution, the stretching transformation simultaneously amplified the global brightness difference of the image, reducing texture detail at the center of the halo (bright area) and at the edge of the halo (dark area). The images enhanced by the SSR method (Figure 6c and Figure 7c) had a more appropriate gray mean value than those enhanced by the MSR method (Figure 6d and Figure 7d), but inaccurate illumination estimation was evident in both, particularly in the global non-uniformity remaining after enhancement. As under uniform illumination, the SRIE method (Figure 6f and Figure 7f) and the PIE method (Figure 6g and Figure 7g) over-estimated the illumination component under non-uniform illumination. The result of NPE (Figure 6e and Figure 7e) was worse than under uniform illumination, mainly because the texture information in the image was weakened by the heavy water vapor and the halo, so multi-scale layering could no longer separate the detail information at each frequency.
Our model constrains the illumination by modeling the halo under different illumination conditions and solving the resulting model. At the same time, we considered the characteristics of the various noise sources and constrained the noise information, so the final enhanced image is simultaneously free of illumination and noise effects. Therefore, whether in the uniform illumination environment of Figure 5 or the non-uniform illumination environments of Figure 6 and Figure 7, the sea surface image enhanced by our method maintained a stable gray mean value and gray range, the various types of noise were effectively suppressed, and the texture features and details of each area were preserved as far as possible. In subjective human observation, our enhancement result was the softest and most delicate.
To verify the correctness of the texture enhanced by our algorithm, we use the image shown in Figure 6a as an illustration. It contains a distinctive ship-wake texture that is more significant and regular than the surrounding water. We first cropped the partial wake image and applied a contrast stretching operation to express the feature more clearly; the same region of the enhanced image was cropped in the same way, and the comparison between the original and enhanced patches is shown in Figure 8. Comparing the wake details in the two images of Figure 8 shows that the enhanced texture information is essentially accurate.
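The contrast stretching used to make the cropped wake patch visible can be sketched as a simple percentile-based mapping; the percentile limits here are illustrative, since the paper does not specify its stretching parameters:

```python
import numpy as np

def stretch(crop, low_pct=1, high_pct=99):
    """Percentile-based contrast stretch for a cropped grayscale patch."""
    lo, hi = np.percentile(crop, [low_pct, high_pct])
    # Map [lo, hi] to [0, 1], clip outliers, then rescale to 8-bit range.
    out = np.clip((crop.astype(np.float64) - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

# Demo: a low-contrast patch (gray values 100-140) fills the full range after stretching.
patch = stretch(np.tile(np.linspace(100.0, 140.0, 32), (32, 1)))
```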
3.3. Objective Quality Assessments
To investigate the differences between the algorithms, in addition to the subjective visual comparison we selected three objective quality assessments to evaluate the performance of each algorithm. Since the purpose of sea image enhancement is to obtain a suitable gray value range and a clear texture expression, we chose the image mean value M, the contrast value C, and the DEAV (Developed Edge Algorithm Value, from the point sharpness method [22]) D as objective evaluation indices. For an image of size m × n, they are defined as

M = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} P(i,j),

C = \frac{1}{mns} \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{s} \left[ P(i,j) - P_k(i,j) \right]^2,

D = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \sum_{k=1}^{s} \left| \frac{dP_k}{dx_k} \right|,

where P(i,j) is the gray value of each pixel in the enhanced image, P_k(i,j) is the gray value of its k-th neighboring pixel, s is the number of neighboring points, dP is the amplitude of the grayscale change between a pixel and a neighbor, and dx is the distance increment between the pixels.
Among these evaluation indices, the mean value represents the global gray level of an image, the contrast represents the difference between bright and dark across the whole image, and the DEAV value represents the intensity of gray-level change. The mean value should be neither too large nor too small; about half of the quantization maximum is most appropriate. An image with a high contrast value appears more vivid and prominent, with a stronger visual sense of change. A larger DEAV value means sharper gray-level changes, more significant edges and texture features, and higher image clarity. We used these indices to evaluate the original images and the enhanced images of each method in Figures 5–7; the evaluation results are shown in Figure 9.
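The three indices above can be computed with a short script. The choice of 4-connected neighbours (s = 4 in effect, counting each adjacent pair once) and unit pixel spacing (dx = 1) are illustrative assumptions on our part, not necessarily the settings of [22]:

```python
import numpy as np

def image_indices(P):
    """Mean M, contrast C, and DEAV D for a grayscale image P of size m x n.

    Assumes 4-connected neighbours and unit pixel spacing (dx = 1);
    these are illustrative choices, not necessarily those of the paper.
    """
    P = P.astype(np.float64)
    m, n = P.shape
    M = P.mean()                        # global gray level
    dh = np.abs(np.diff(P, axis=1))     # grayscale change dP, horizontal neighbours
    dv = np.abs(np.diff(P, axis=0))     # grayscale change dP, vertical neighbours
    pairs = dh.size + dv.size
    C = (np.sum(dh ** 2) + np.sum(dv ** 2)) / pairs   # mean squared neighbour difference
    D = (np.sum(dh) + np.sum(dv)) / (m * n)           # mean |dP/dx| with dx = 1
    return M, C, D

# Demo on a 2 x 2 checkerboard, the highest-contrast image possible.
M, C, D = image_indices(np.array([[0.0, 255.0], [255.0, 0.0]]))
```

On real images, an enhancement that sharpens edges raises D, while one that merely retains a strong bright/dark illumination difference raises C, matching the discussion of Figure 9 below.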
Figure 9 shows that the average gray value of our enhanced images always stayed between 120 and 150, with no over- or under-exposure. This is due to our model's ability to remove the illumination in general, so that the sea reflectance was maintained at that order of magnitude. For the contrast evaluation, the contrast of our method was relatively small compared with the non-variational methods (HE, SSR, MSR, NPE), because the contrast is computed globally: the methods with large contrast values retained too much of the illumination, so the difference between their bright and dark areas was large. Because this exaggerated the global sense of change, the images looked unnatural and local observability deteriorated. In the DEAV evaluation, our method had the highest DEAV value for the non-uniform illumination images (70.2 for the elliptical case and 66.9 for the partial elliptical case), indicating the best detail retention and the highest sharpness among the enhanced images. For the uniform illumination image, our method scored lower than the HE method, mainly because the HE method also enhanced the noise: a large amount of non-uniform noise was amplified and mistaken for image detail changes.
3.5. Convergence Rate and Computational Time
It is worth mentioning that the convergence rate and the computational time in this paper depend on the chosen iteration stopping threshold ε and on the input image size. When verifying the relationship between the computational time and the iteration threshold ε, we selected three threshold values: 0.1, 0.05, and 0.001. Among these, 0.001 is the threshold that guarantees the optimal enhancement effect; the image quality degrades significantly when the threshold is 0.1; and 0.05 is an intermediate value. The three thresholds thus represent three typical enhancement results while clearly reflecting the change in computational time. We illustrate this with the typical iteration example of Figure 2. The relationships between the iteration number and ε, and between the iteration number and the image size, are shown in Figure 12. In addition, the computational times for the different values of ε and image sizes are listed in Table 1.
Figure 12 shows that the iterations converge steadily, and that a smaller image size yields higher iteration efficiency. Table 1 likewise shows that a smaller iteration stopping threshold leads to a longer computation time, because a smaller error between the actual image and the reflectance image requires more iterations. The other images show the same iteration characteristics and are therefore not included here.
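The ε-controlled stopping rule described above can be sketched generically. The toy contraction step below is a placeholder for the actual variational solver, which the paper does not detail here:

```python
import numpy as np

def iterate_until_converged(step, x0, eps=0.001, max_iter=10000):
    """Run a fixed-point iteration until the relative change drops below eps."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = step(x)
        # Relative change between successive iterates as the stopping criterion.
        change = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12)
        x = x_new
        if change < eps:
            return x, k
    return x, max_iter

# Toy contraction: each step halves the distance to the fixed point (all-ones vector).
x, iters_loose = iterate_until_converged(lambda v: 0.5 * (v + 1.0),
                                         np.zeros(4), eps=0.05)
_, iters_tight = iterate_until_converged(lambda v: 0.5 * (v + 1.0),
                                         np.zeros(4), eps=0.001)
```

As in Table 1, tightening ε from 0.05 to 0.001 increases the iteration count (and hence the computation time) while bringing the iterate closer to the converged solution.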