Article

Dual Histogram Equalization Algorithm Based on Adaptive Image Correction

1 School of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China
2 Guangxi Collaborative Innovation Centre for Earthmoving Machinery, Guangxi University of Science and Technology, Liuzhou 545006, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 10649; https://doi.org/10.3390/app131910649
Submission received: 31 August 2023 / Revised: 20 September 2023 / Accepted: 21 September 2023 / Published: 25 September 2023

Abstract
For the visual measurement of moving arm holes under complex working conditions, a histogram equalization algorithm can be used to improve image contrast. To lessen the problems of brightness shift, over-enhancement, and gray-level merging that occur with the traditional histogram equalization algorithm, a dual histogram equalization algorithm based on adaptive image correction (AICHE) is proposed. To prevent luminance shifts during equalization, the AICHE algorithm preserves the average luminance of the input image by improving upon the Otsu algorithm and using the improved method to split the histogram. The AICHE algorithm then applies a local grayscale correction algorithm to prevent the over-enhancement and gray-level merging problems that arise with the traditional algorithm. It is experimentally verified that the AICHE algorithm can significantly improve the histogram segmentation effect and enhance contrast and detail information while preserving the average brightness of the input image, so the image quality is significantly increased.

1. Introduction

In the industrial field, machine vision systems applied in practice inevitably encounter environmental problems (e.g., light, fog, smoke, dust), imaging equipment problems, lighting problems, and other factors that result in the acquisition of low-quality, low-contrast images. Such images are not conducive to subsequent image processing, so image enhancement is necessary. The main methods of image enhancement include histogram equalization, homomorphic filtering [1], Retinex theory-based enhancement algorithms, and deep learning methods. For homomorphic filtering, Gong et al. [2] proposed a method based on combination and addition in HSV space; however, this work only addressed underground mine images and had certain efficiency limitations. Enhancement methods based on Retinex theory perform poorly on high-brightness images (such as hazy photos) and produce visible halo artifacts at the boundaries between light and dark regions of the image, which is not conducive to industrial measurement. Deep learning technologies can increase image quality, but there are still issues with data availability and the generalization of deep learning systems [3].
The histogram equalization method is widely used because it is fast, simple, and effective. Histogram equalization takes the histogram statistics of the pixel values of the input image and then distributes them evenly, which is effective for image enhancement. However, the traditional histogram equalization algorithm [4] can lead to the brightness of the image being offset due to over-stretching, which results in poor enhancement; it can also lead to a loss of detail information and over-enhancement due to gray-level merging. These image quality problems detract from the success of image processing and hinder the extraction of target information from the image. Therefore, histogram equalization algorithms have been improved through various methods [5,6,7,8].
To solve the problem of mean luminance shift, Kim proposed the bi-histogram equalization (BBHE) algorithm [9], which divides the input image histogram into two sub-histograms based on the mean gray value of the input image, equalizes them separately, and finally merges them. Later, many other scholars improved such algorithms. Wang et al. proposed the dualistic sub-image histogram equalization (DSIHE) algorithm [10], which divides the image into two sub-histograms based on the median of the gray levels, instead of the mean, and equalizes them separately. The recursive sub-image histogram equalization (RSIHE) algorithm [11] and the recursive mean-separate histogram equalization (RMSHE) algorithm [12] improve upon DSIHE and BBHE, respectively. Chen et al. [13] proposed minimum mean brightness error bi-histogram equalization (MMBEBHE), which determines the unique separation point by testing all intensity values and selecting the one that minimizes the difference between the average input brightness and the average output brightness. He et al. [14] proposed an infrared image enhancement method combining improved L-C saliency detection and dual-region histogram equalization to improve the visual effect of infrared images and highlight detail information; the foreground and background regions are obtained by adaptive segmentation of the saliency map using the K-means algorithm. Although the K-means algorithm works well when the sample data are dense and the classes are well separated, the K value is difficult to estimate, and blind determination of the K value leads to inaccurate segmentation results.
The principle of all these methods is to calculate a suitable threshold to split the original histogram and then equalize each sub-histogram separately. These methods can protect the average brightness of the input image, but they have limitations: if a segmented sub-histogram is too narrow, the enhancement effect is weak, and if it is too wide, noise, artifacts, and other defects are amplified. To solve the problem of image detail loss, some scholars proposed adaptive (local) histogram equalization (AHE) for image contrast enhancement, but the algorithm is complex, has a long running time, and generates considerable noise and block effects, so it was improved to produce the contrast-limited adaptive histogram equalization (CLAHE) algorithm [15].
In recent years, to mitigate the problems of average brightness change and image detail loss due to gray-level merging during equalization, Stark [16] proposed adaptive histogram equalization, the idea of which is to segment the image, perform histogram equalization for each region separately, and finally merge the multiple local maps; this protects certain detail information but also introduces noise. To improve the over-enhancement problem, Maitra et al. [17] proposed a pre-processing algorithm for pectoral muscle detection and suppression that uses contrast-limited adaptive histogram equalization to enhance the contrast of digital mammograms. Bi-histogram equalization with a plateau limit (BHEPL) [18] uses the average intensity of each sub-histogram as the plateau limit. Aquino-Morínigo et al. [19] proposed a bi-histogram equalization algorithm using two plateau limits (BHE2PL). Singh et al. [20] proposed an image enhancement technique based on exposure values, called exposure-based sub-image histogram equalization (ESIHE), which divides the clipped histogram into two parts using a pre-computed exposure threshold. Paul [21] proposed a tri-histogram equalization technique with three plateau limits for digital image enhancement, which uses separation threshold parameters to initially divide the histogram of the input image into three sub-histograms. Huang [22] proposed an image enhancement strategy, contrast-limited dynamic quadri-histogram equalization (CLDQHE), to overcome the drawbacks of over-enhancement and over-smoothing that exist in traditional histogram equalization methods. Although these algorithms perform well in contrast improvement, they fail to maintain brightness and preserve fine structures.
Hence, this study proposes a dual histogram equalization algorithm based on adaptive image correction (AICHE) for image enhancement in the machine vision measurement of moving arm holes under complex working conditions. With AICHE, the global histogram is divided into two sub-histograms to solve the problem of mean luminance shift, and the sub-histograms are then clipped with two plateau limits to avoid over-enhancement of the image. Next, to prevent the over-enhancement and gray-level merging problems, grayscale correction is performed using a local grayscale correction algorithm, so that histogram equalization improves image contrast and protects image detail information while maintaining the average brightness of the input image.

2. Histogram Equalization

The main idea of the histogram equalization algorithm is to extend the probability density function (PDF) of the gray levels in the whole image and remap the gray levels of the pixels in the original image. First, the histogram of the original image F is normalized and its cumulative histogram is constructed. The conversion formula is mainly composed of the cumulative distribution function (CDF). Then, the cumulative histogram is quantized to the gray level of the output image. The three steps of the algorithm are detailed as follows:
Count the percentage of pixels for each gray value to obtain the PDF of the histogram:
$PDF(i) = \frac{n_i}{n}, \quad i = 0, 1, 2, \ldots, k$
where $i$ is the gray level of the input image, $n$ is the total number of pixels in the input image, and $n_i$ is the number of pixels with gray level $i$.
Accumulate the PDF of each gray level to obtain the CDF of the histogram:
$CDF(i) = \sum_{j=0}^{i} PDF(j)$
where CDF is the cumulative distribution function.
Quantize the CDF and map it to the output image:
$F(i) = start + (end - start) \times CDF(i)$
where start and end denote the minimum and maximum gray levels of the mapping interval, respectively.
Based on the CDF, the traditional histogram equalization algorithm selectively enhances the gray levels that occupy more pixels and extends the distribution range of gray levels. However, it will over-enhance the gray levels with higher frequency, and it will merge the gray levels with fewer pixel points, resulting in the loss of details, which is also the drawback of traditional histogram equalization.
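For reference, the three steps above can be written compactly in code. The following is a minimal NumPy sketch of Equations (1)–(3) for an 8-bit grayscale image; the function name and the default mapping interval [0, 255] are illustrative choices, not part of the original paper.

```python
import numpy as np

def global_histogram_equalization(img, start=0, end=255):
    """Minimal global HE following Equations (1)-(3) for an 8-bit image."""
    img = np.asarray(img, dtype=np.uint8)
    # Equation (1): PDF(i) = n_i / n
    hist = np.bincount(img.ravel(), minlength=256)
    pdf = hist / img.size
    # Equation (2): CDF(i) = sum of the PDF up to gray level i
    cdf = np.cumsum(pdf)
    # Equation (3): quantize the CDF onto the output gray-level interval
    mapping = np.round(start + (end - start) * cdf).astype(np.uint8)
    return mapping[img]
```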

3. Proposed AICHE Transformation

In this study, we propose the AICHE algorithm to segment the image into two sub-histograms of target and background by improving upon the Otsu method, and then perform the histogram equalization process separately, which ensures that the average brightness of the original image will not be shifted. Additionally, the algorithm segments the histogram of the image based on the adaptive threshold. This effectively avoids the phenomenon of image over-enhancement and also prevents detail loss to a certain extent. The flowchart of the AICHE algorithm is shown in Figure 1.

3.1. Histogram Segmentation

For most images, the distribution of pixel grayscale is not uniform, and the average brightness of the image will be shifted during the equalization process. This can be solved using histogram segmentation. When segmenting the grayscale histogram, if the segmented sub-histogram is too narrow, the equalization effect of the image will be reduced, and if it is too wide, it will lead to excessive enhancement and the loss of details. Therefore, the selection of the threshold value is extremely important, and the improper selection of the threshold value will directly lead to the degradation of the image quality after equalization.
First, suppose a threshold $t$ is the segmentation point, and the image is divided into target region A and background region B according to gray level, where region A consists of pixels with gray values in the interval $[MIN, t]$ and region B consists of pixels with gray values in the interval $[t+1, MAX]$. Then, the proportions of class A and class B, $q_A(t)$ and $q_B(t)$, are
$q_A(t) = \sum_{i=MIN}^{t} PDF(i), \quad q_B(t) = \sum_{i=t+1}^{MAX} PDF(i)$
where $MIN$ and $MAX$ denote the initial and final gray levels of the histogram, respectively, and $PDF(i)$ denotes the probability that the gray value is $i$.
The class means $\mu_A(t)$ and $\mu_B(t)$ can be calculated as follows:
$\mu_A(t) = \sum_{i=MIN}^{t} \frac{i \cdot PDF(i)}{q_A(t)}, \quad \mu_B(t) = \sum_{i=t+1}^{MAX} \frac{i \cdot PDF(i)}{q_B(t)}$
where $\mu_A(t)$ and $\mu_B(t)$ denote the mean gray levels of class A and class B, respectively.
Then, the global mean $\mu$ can be calculated as follows:
$\mu = q_A(t)\,\mu_A(t) + q_B(t)\,\mu_B(t)$
where μ is the average grayscale of the input image.
The inter-class variance $\sigma_T^2$ is defined as
$\sigma_T^2 = q_A(t)\,[\mu_A(t) - \mu]^2 + q_B(t)\,[\mu_B(t) - \mu]^2$
The traditional Otsu algorithm is simple, convenient, and not affected by the brightness of the image. It sets the threshold at which the variance between the target and background gray levels reaches its maximum value as the optimal segmentation threshold: $K_{Otsu} = \arg\max_t \sigma_T^2$. The smaller the distance between each pixel in the two regions and its class center, the better the pixel cohesion in each region. The traditional Otsu algorithm is less effective in segmentation because it does not consider pixel spatial correlation. To help measure the goodness of pixel cohesion, the distance $d^2(t)$ between the two class centers is defined and calculated as follows:
$d^2(t) = (\mu_A(t) - \mu_B(t))^2$
where $d^2(t)$ is a distance metric. The class variances $\sigma_A^2$ and $\sigma_B^2$ are calculated as follows:
$\sigma_A^2(t) = \frac{1}{q_A(t)} \sum_{i=MIN}^{t} (i - \mu_A(t))^2\, PDF(i)$
$\sigma_B^2(t) = \frac{1}{q_B(t)} \sum_{i=t+1}^{MAX} (i - \mu_B(t))^2\, PDF(i)$
where $\sigma_A^2$ and $\sigma_B^2$ denote the variances of the target and background regions, respectively.
Obviously, the smaller the variances $\sigma_A^2$ and $\sigma_B^2$, the better the segmentation effect; on this basis, a new threshold-finding criterion $G(t)$ is obtained:
$G(t) = \frac{q_A(t)\,q_B(t)\,d^2(t)}{\sigma_A^2(t) + \sigma_B^2(t)} = \frac{q_A(t)\,q_B(t)\,(\mu_A(t) - \mu_B(t))^2}{\sigma_A^2(t) + \sigma_B^2(t)}$
The value of $t$ at which $G(t)$ takes its maximum is the optimal threshold. Therefore, $K_{out}$ is obtained as follows:
$K_{out} = \arg\max_t G(t)$
where $K_{out}$ is the optimal threshold value.
According to the threshold $K_{out}$, the histogram is divided into two sub-histograms, where the first part covers the gray levels $i \in [0, K_{out}]$ and the second part covers $i \in [K_{out}+1, L]$.
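As a concrete illustration, the threshold search of Equations (4)–(12) can be sketched as follows. This is not the authors' implementation; the function name, the exhaustive search over all candidate thresholds, and the small constant added to the denominator to avoid division by zero are assumptions made for this sketch.

```python
import numpy as np

def improved_otsu_threshold(img):
    """Sketch of the modified Otsu criterion: maximize
    G(t) = qA*qB*(muA-muB)^2 / (sigmaA^2 + sigmaB^2)."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    pdf = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_g = 0, -np.inf
    for t in range(1, 255):
        qA, qB = pdf[:t + 1].sum(), pdf[t + 1:].sum()
        if qA == 0 or qB == 0:
            continue  # skip degenerate splits
        muA = (levels[:t + 1] * pdf[:t + 1]).sum() / qA
        muB = (levels[t + 1:] * pdf[t + 1:]).sum() / qB
        varA = (((levels[:t + 1] - muA) ** 2) * pdf[:t + 1]).sum() / qA
        varB = (((levels[t + 1:] - muB) ** 2) * pdf[t + 1:]).sum() / qB
        g = qA * qB * (muA - muB) ** 2 / (varA + varB + 1e-12)
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```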

3.2. Adaptive Local Grayscale Correction

The input image is divided into two sub-histograms according to the algorithm above, and histogram equalization is performed on each sub-histogram to correct the image brightness offset. In addition, a new histogram assignment algorithm is used in this study to solve the image over-enhancement and gray-level merging problems. The algorithm is divided into two parts: image over-enhancement suppression and local gray-level correction.

3.2.1. Image Over-Enhancement Suppression

In the equalization process, gray levels with higher frequencies tend to be over-enhanced, whereas gray levels with lower frequencies are merged, leading to a loss of image details. Therefore, the AICHE algorithm suppresses over-enhanced gray levels by setting a threshold T for each of the two sub-histograms. The procedure is as follows (an illustrative code sketch is given after the list).
  • First, let the input image be F, and obtain the sets $F_A$ and $F_B$ of non-zero bins in the two sub-histograms:
    $F_A = \{F(i) \mid F(i) \ne 0,\ i \in [MIN, K_{out}]\}, \quad F_B = \{F(i) \mid F(i) \ne 0,\ i \in [K_{out}+1, MAX]\}$
    where $i$ is the gray level of the image, and $F_A$ and $F_B$ denote the non-zero bins of the two sub-histograms, respectively.
  • One-dimensional median filtering of $F_A$ and $F_B$ is performed, and the thresholds $T_A$ and $T_B$ of the two sub-histograms are calculated as follows:
    $T_A = T_{MA} \times \frac{K_{out} - MIN}{MAX - MIN}, \quad T_B = T_{MB} \times \frac{MAX - K_{out}}{MAX - MIN}$
    where $T_{MA}$ and $T_{MB}$ denote the peaks of the two median-filtered sub-histograms, respectively.
  • The image $P_S$ is obtained by independently equalizing the two sub-histograms according to Equations (1)–(3); the equalization mapping is
    $P_S(i) = \begin{cases} MIN + (K_{out} - MIN) \times \dfrac{\sum_{j=MIN}^{i} PDF(j)}{N_A}, & i \le K_{out} \\ (K_{out} + 1) + (MAX - K_{out} - 1) \times \dfrac{\sum_{j=K_{out}+1}^{i} PDF(j)}{N_B}, & i > K_{out} \end{cases}$
    where $PDF(j) = n_j / n$, $n_j$ is the number of pixels with gray level $j$, $P_S(i)$ is the equalized value at gray level $i$, and $N_A$ and $N_B$ are the total numbers of gray levels in region A and region B, respectively.
  • After clipping the equalized histogram according to Equation (16), the image $P_T$ is obtained:
    $P_T(i) = \begin{cases} T_A, & i \le K_{out} \text{ and } P_S(i) \ge T_A \\ T_B, & i > K_{out} \text{ and } P_S(i) \ge T_B \\ P_S(i), & \text{otherwise} \end{cases}$
    where $P_T(i)$ denotes the clipped value at gray level $i$.
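The procedure above can be sketched in code as follows. Note that this is only one plausible reading of Equations (13)–(16): here each sub-histogram is clipped at its plateau limit before equalization (in the style of CLAHE), whereas the paper clips the equalized result; the helper name, the SciPy median filter, and the assumption that the gray range starts at 0 are all choices made for this sketch.

```python
import numpy as np
from scipy.signal import medfilt

def clipped_dual_equalization(img, k_out, min_g=0, max_g=255):
    """Sketch of Section 3.2.1: plateau limits T_A/T_B, clipping, and
    independent equalization of the two sub-histograms."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=max_g + 1).astype(float)

    def sub_equalize(lo, hi):
        h = hist[lo:hi + 1].copy()
        nz = h[h > 0]
        # plateau limit: peak of the median-filtered non-zero bins,
        # scaled by the relative width of this sub-range (Eqs. (13)-(14))
        t = medfilt(nz, kernel_size=3).max() * (hi - lo) / (max_g - min_g)
        h = np.minimum(h, t)                 # suppress over-enhanced gray levels
        cdf = np.cumsum(h) / h.sum()
        return np.round(lo + (hi - lo) * cdf).astype(np.uint8)

    # combined lookup table; assumes min_g == 0 so it aligns with gray values
    lut = np.concatenate([sub_equalize(min_g, k_out),
                          sub_equalize(k_out + 1, max_g)])
    return lut[img]
```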

3.2.2. Local Gray Level Correction

To solve the problem that gray levels are merged after equalization, the AICHE algorithm corrects the equalized image. First, gradient values are obtained by convolving the input image and the equalized image with the Sobel operator, and the locations where the gradient value is clearly reduced are found. Second, the gray values at those locations are modified with reference to the original image to restore the local gradient and protect the image detail information. The specific process is as follows (a code sketch is given after the list).
  • The gradient matrices $D_{in}$ and $D_{HE}$ of the input image and the equalized image are obtained by convolving the images F and $P_T$ with Sobel operators in four directions. The gradient matrices are calculated as follows:
    $D_{in} = (D_{0°} \ast F)^2 + (D_{180°} \ast F)^2 + (D_{45°} \ast F)^2 + (D_{135°} \ast F)^2$
    $D_{HE} = (D_{0°} \ast P_T)^2 + (D_{180°} \ast P_T)^2 + (D_{45°} \ast P_T)^2 + (D_{135°} \ast P_T)^2$
    where $D_{0°}$, $D_{45°}$, $D_{135°}$, and $D_{180°}$ denote the convolution factors in the four directions of 0°, 45°, 135°, and 180°, respectively. The four convolution factors are
    $D_{0°} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad D_{180°} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \quad D_{45°} = \begin{bmatrix} -2 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 2 \end{bmatrix}, \quad D_{135°} = \begin{bmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{bmatrix}$
  • Local grayscale correction of the image $P_T$ is conducted according to Equation (19) to enhance the local information of the image:
    $x_{out}(i,j) = \begin{cases} x_{mainHE}(i,j) + (x_{in}(i,j) - x_{main}(i,j)), & D_{HE}(i,j) < D_{in}(i,j) \\ x_{HE}(i,j), & D_{HE}(i,j) \ge D_{in}(i,j) \end{cases}$
    where $x_{out}(i,j)$ is the grayscale value of the center pixel of the output image, $x_{in}(i,j)$ and $x_{HE}(i,j)$ denote the center pixels of image F and image $P_T$, respectively, and $x_{main}(i,j)$ and $x_{mainHE}(i,j)$ are the grayscale averages of the pixels in a 5 × 5 window centered at $(i,j)$ in the input image and the equalized image, respectively.
  • The final image is the output.
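The correction step can be sketched as follows, assuming SciPy for the convolutions and the 5 × 5 local means; the function name and the clipping to [0, 255] are assumptions, and the kernel signs follow the reconstruction of the directional operators given above.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def local_gray_correction(f_in, p_t):
    """Sketch of Equations (17)-(19): restore local contrast where the
    equalized image P_T has a lower gradient than the input image F."""
    f_in = np.asarray(f_in, dtype=float)
    p_t = np.asarray(p_t, dtype=float)
    kernels = [
        np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),   # 0 degrees
        np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]),   # 180 degrees (as labeled in the paper)
        np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),   # 45 degrees
        np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),   # 135 degrees
    ]
    d_in = sum(convolve(f_in, k) ** 2 for k in kernels)   # gradient of input
    d_he = sum(convolve(p_t, k) ** 2 for k in kernels)    # gradient of equalized image
    mean_in = uniform_filter(f_in, size=5)                # 5 x 5 local means
    mean_he = uniform_filter(p_t, size=5)
    # where the gradient dropped, re-inject the input's local deviation
    # on top of the equalized local mean; otherwise keep the equalized value
    out = np.where(d_he < d_in, mean_he + (f_in - mean_in), p_t)
    return np.clip(out, 0, 255).astype(np.uint8)
```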
Figure 2 shows the image processing effect and the corresponding grayscale histograms at each stage of the HE algorithm and the AICHE algorithm. Although the HE algorithm improves the image contrast, the image becomes over-enhanced because of over-stretching. After histogram segmentation, the average brightness of the image is protected, but gray-level merging and the loss of image details still occur. After the adaptive local gray-level correction, the average brightness of the input image is protected while the contrast and detail information are enhanced, and the image quality is significantly improved.

4. Analysis of Algorithm Results

4.1. Improved Image Segmentation Effect of Otsu Algorithm

Figure 3 shows the comparison between the improved Otsu algorithm and other image segmentation algorithms in three scenarios. Unlike the traditional Otsu and K-means algorithms, the improved Otsu algorithm can segment the image reasonably well to obtain a more complete moving arm profile. The improved Otsu algorithm segmentation can show more details of the image and optimize the segmentation effect.

4.2. AICHE Algorithm Effect

To demonstrate the effectiveness of the AICHE algorithm, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 simulate environments with insufficient light, fog, and smoke and compare the image enhancement effects of seven histogram equalization algorithms with those of the AICHE algorithm. These include the classical algorithms HE, BBHE, and CLAHE and several more advanced algorithms: BHEPL, RSIHE, ESIHE, and MMBEBHE.

4.3. Objective Evaluation Indicators

Four objective evaluation indicators are selected in this study, which are detailed below.

4.3.1. Structure Similarity Index Measure

Structure similarity index measure (SSIM) is a metric used to compare the similarity of two images. The SSIM value is mainly based on three characteristics: structure, luminance, and contrast. Luminance is measured by the average gray value; contrast is measured by the gray standard deviation; and structure is measured by the correlation coefficient. The calculation method is as follows:
$\mu = \frac{1}{N} \sum_{i=1}^{N} x_i$
$\sigma = \left( \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \mu)^2 \right)^{1/2}$
where $\mu$ is the average gray value, $\sigma$ is the gray standard deviation, and C is the correlation coefficient.
SSIM is consistent with human visual characteristics in evaluating image quality. Its value falls in the range of [0, 1], where a higher value indicates a stronger similarity between the two images, reflecting higher image quality. Its calculation formula is as follows:
$SSIM(X, Y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$
where X and Y denote the input image and output image, respectively; $\sigma_x$ and $\sigma_y$ are the standard deviations of images X and Y, respectively; $\mu_x$ and $\mu_y$ are the grayscale averages of images X and Y, respectively; $\sigma_{xy}$ is the covariance of the two images; and $C_1$ and $C_2$ are small constants that stabilize the division.
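For clarity, a single-window (global) evaluation of the SSIM formula can be written as follows; the constants C1 = (0.01 × 255)² and C2 = (0.03 × 255)² follow common practice for 8-bit images and are assumptions, as the paper does not state the values it used.

```python
import numpy as np

def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM between two grayscale images of the same size."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)          # sample variances
    cov_xy = ((x - mu_x) * (y - mu_y)).sum() / (x.size - 1)
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```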

4.3.2. Peak Signal-to-Noise Ratio

Peak signal-to-noise ratio (PSNR) can be used to compare the contrast enhancement effect of images. PSNR is a measure of image quality based on the mean square error (MSE), which expresses the average of the squared differences between two images over all pixels and is calculated as follows:
$MSE = \frac{1}{N} \sum_{i} \sum_{j} \left| X(i,j) - Y(i,j) \right|^2$
PSNR is defined on the basis of MSE, and its value is greater than zero. The larger the value, the less distortion in the output image, the higher the contrast, and the more obvious the enhancement effect; it is calculated as follows:
$PSNR = 10 \log_{10} \left[ \frac{(2^B - 1)^2}{MSE} \right]$
where B is the number of bits per gray level (B = 8 for the grayscale images used here, so the peak value is 255); N is the total number of pixels; $X(i,j)$ and $Y(i,j)$ denote the input image and output image, respectively; and MSE is the mean square error.
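The two formulas above translate directly into code; the sketch below assumes 8-bit images (B = 8), and the handling of MSE = 0 is an added safeguard not discussed in the paper.

```python
import numpy as np

def psnr(x, y, bits=8):
    """PSNR in dB between two same-sized grayscale images."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mse = np.mean((x - y) ** 2)            # mean square error
    if mse == 0:
        return float('inf')                # identical images
    peak = 2 ** bits - 1                   # 255 for 8-bit images
    return 10 * np.log10(peak ** 2 / mse)
```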

4.3.3. Absolute Mean Brightness Error

The absolute mean brightness error (AMBE) indicates the absolute difference between the average brightness of the input image and that of the resulting image, and it is used to measure how well the original brightness is maintained. AMBE is non-negative; the smaller the AMBE value, the better the brightness preservation. It is calculated as follows:
$AMBE = \left| Q(X) - Q(Y) \right|$
where X and Y denote the input image and the resultant image, respectively, and Q ( X ) and Q ( Y ) are the average brightness values of the input image and the resultant image, respectively.
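AMBE amounts to a one-line computation; the sketch below assumes Q(·) is the mean gray value, as stated above.

```python
import numpy as np

def ambe(x, y):
    """Absolute mean brightness error between input and result images."""
    return abs(float(np.mean(x)) - float(np.mean(y)))
```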

4.3.4. Information Entropy

Information entropy (E) is used to measure the information richness of an image and is non-negative. The larger the value of E, the more information and detail are present in the image. However, an excessively large value of E may also indicate significant noise in the image. It is calculated as follows:
$E = -\sum_{i=0}^{255} p_i \log p_i$
where p i denotes the proportion of pixels with gray value i in the image.
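The entropy formula can be computed as follows; the logarithm base (base 2 here, giving entropy in bits) is an assumption, since the paper writes log without specifying a base.

```python
import numpy as np

def entropy(img):
    """Information entropy of an 8-bit grayscale image."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins: 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```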

4.4. Evaluation Results

In this study, the performance of each algorithm is measured using the image evaluation metrics SSIM, PSNR, AMBE, and E, together with the computation time.
As can be seen in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, the HE algorithm shows an obvious change in image brightness and loss of detail. Under low-light conditions, the BBHE, RSIHE, and MMBEBHE algorithms can protect the average brightness but produce uneven equalization owing to unreasonable histogram segmentation, which handles details poorly and causes local distortion. CLAHE can protect image details but introduces a large amount of noise, especially in Figure 10 and Figure 13, where the visual effect is significantly reduced. The AICHE algorithm improves the image contrast while protecting the average brightness of the image; it does not introduce excessive noise, and, to a certain extent, it retains the original shape of the input histogram.
As shown in Table 1 and Table 2, the AICHE algorithm has the highest PSNR value, and its SSIM value is closest to 1. The image information entropy in Table 3 reflects the richness of details in the image. The information entropy of the image processed by CLAHE clearly exceeds that of the original image, which indicates that noise is introduced into the image and a block effect is produced. Apart from the CLAHE algorithm, the AICHE algorithm has the highest information entropy value, indicating that the gray-level merging phenomenon is effectively avoided during equalization, which protects the image information. ESIHE cannot maintain the average image brightness well, which is also clearly reflected by the AMBE values in Table 4.
Based on the evaluation results, the AICHE algorithm proposed herein has good image enhancement effects in all three working conditions; it can protect the average brightness of the image and reduce the merging of gray levels on the basis of improving the image contrast, and it has good robustness. However, as shown in Table 5, the computational time of the AICHE algorithm is relatively long.

5. Conclusions

In this study, the AICHE algorithm is proposed for image enhancement under complex working conditions. The problems of average brightness shift, image over-enhancement, and gray-level merging in the traditional histogram equalization process are effectively solved. The algorithm improves upon the traditional Otsu method to segment the image histogram, solving the problem of mean luminance shift, and adaptively obtains thresholds to suppress over-represented gray levels and avoid over-enhancement. It then uses the local gray-level correction method to avoid the gray-level merging problem. According to the experimental analysis, the adaptive local correction method can effectively avoid image over-enhancement and detail loss, and it enhances image contrast and detail information. Compared with the other improved algorithms, the AICHE algorithm significantly increases the PSNR and information entropy and makes fuller use of the gray levels while avoiding the introduction of noise; its SSIM is closer to 1, and its visual effect is better.
Because the method prioritizes measurement accuracy, it sacrifices some time efficiency, so it is better suited to image enhancement tasks in industrial environments where accuracy matters more than speed. Improving the time efficiency will be a key issue in future research.

Author Contributions

Methodology, B.L. and S.J.; software, B.Y.; resources, S.Y. and D.Z.; writing—original draft preparation, B.Y.; writing—review and editing, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangxi Science and Technology Major Special Project, grant number AA22068064; Guangxi Science and Technology Programs, grant number AD22080042; Guangxi Key R&D Program Projects, grant number AB22035066.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We appreciate the support of the Guangxi College Students Innovation and Entrepreneurship Program.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tarawneh, A.S.; Hassanat, A.B.; Elkhadiri, I.; Chetverikov, D.; Almohammadi, K. Automatic gamma correction based on root-mean-square-error maximization. Int. Conf. Comput. Inf. Technol. 2020, 1, 448–452. [Google Scholar]
  2. Gong, Y.; Xie, X. Research on coal mine underground image recognition technology based on homomorphic filtering method. Coal Sci. Technol. 2023, 51, 241–250. [Google Scholar]
  3. Chen, Z.; Pawar, K.; Ekanayake, M.; Pain, C.; Zhong, S.; Egan, G.F. Deep learning for image enhancement and correction in magnetic resonance imaging-state-of-the-art and challenges. J. Digit. Imaging 2023, 36, 204–230. [Google Scholar]
  4. Castleman, K.R. Digital Image Processing; Prentice Hall Press: Upper Saddle River, NJ, USA, 1996. [Google Scholar]
  5. Ding, C.; Luo, Z.; Hou, Y.; Chen, S.; Zhang, W. An effective method of infrared maritime target enhancement and detection with multiple maritime scene. Remote Sens. 2023, 15, 3623. [Google Scholar]
  6. Zhou, J.; Pang, L.; Zhang, D.; Zhang, W. Underwater image enhancement method via multi-interval subhistogram perspective equalization. IEEE J. Ocean. Eng. 2023, 48, 474–488. [Google Scholar]
  7. Yuan, Z.; Zeng, J.; Wei, Z.; Jin, L.; Zhao, S.; Liu, X.; Zhang, Y.; Zhou, G. Clahe-based low-light image enhancement for robust object detection in overhead power transmission system. IEEE Trans. Power Deliv. 2023, 38, 2240–2243. [Google Scholar]
  8. Dyke, R.M.; Hormann, K. Histogram equalization using a selective filter. Vis. Comput. 2022, 69, 284–302. [Google Scholar]
  9. Kim, Y.-T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8. [Google Scholar]
  10. Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75. [Google Scholar]
  11. Sim, K.S.; Tso, C.P.; Tan, Y.Y. Recursive sub-image histogram equalization applied to gray scale images. Pattern Recognit. Lett. 2007, 28, 1209–1221. [Google Scholar]
  12. Chen, S.-D.; Ramli, A.R. Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Trans. Consum. Electron. 2003, 49, 1301–1309. [Google Scholar] [CrossRef]
  13. Chen, S.-D.; Ramli, A.R. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 2003, 49, 1310–1319. [Google Scholar] [CrossRef]
  14. He, Z.; Zeng, X.; Deng, C. Infrared image enhancement based on local entropy-lc and dual-area histogram equalization. Infrared Technol. 2023, 45, 582–598. [Google Scholar]
  15. Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193–200. [Google Scholar] [CrossRef]
  16. Stark, J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process. 2000, 9, 889–896. [Google Scholar] [CrossRef]
  17. Maitra, I.K.; Nag, S.; Bandyopadhyay, S.K. Technique for preprocessing of digital mammogram. Comput. Methods Programs Biomed. 2012, 107, 175–188. [Google Scholar] [CrossRef]
  18. Ooi, C.H.; Isa, N.A.M. Adaptive contrast enhancement methods with brightness preserving. IEEE Trans. Consum. Electron. 2010, 56, 2543–2551. [Google Scholar] [CrossRef]
  19. Aquino-Morínigo, P.B.; Lugo-Solís, F.R.; Pinto-Roa, D.P.; Ayala, H.L.; Noguera, J.L.V. Bi-histogram equalization using two plateau limits. Signal Image Video Process. 2017, 11, 857–864. [Google Scholar] [CrossRef]
  20. Singh, K.; Kapoor, R. Image enhancement using exposure based sub image histogram equalization. Pattern Recognit. Lett. 2014, 36, 10–14. [Google Scholar] [CrossRef]
  21. Paul, A. Adaptive tri-plateau limit tri-histogram equalization algorithm for digital image enhancement. Vis. Comput. 2023, 39, 297–318. [Google Scholar]
  22. Huang, Z.; Wang, Z.; Zhang, J.; Li, Q.; Shi, Y. Image enhancement with the preservation of brightness and structures by employing contrast limited dynamic quadri-histogram equalization. Optik 2021, 226, 165877. [Google Scholar] [CrossRef]
Figure 1. Overall flowchart of AICHE algorithm.
Figure 2. Image enhancement effect of AICHE algorithm: (a) original image; (b) effect of HE algorithm; (c) effect of histogram segmentation; and (d) effect of adaptive local grayscale correction.
Figure 3. Comparison of segmentation effect between improved Otsu algorithm and other image segmentation algorithms: (a) original image; (b) segmentation effect of traditional Otsu algorithm; (c) segmentation effect of the K-means algorithm; and (d) segmentation effect of improved Otsu algorithm.
Figure 4. The simulation results (above) of the ‘scene 1’ image are presented along with its corresponding histogram (below).
Figure 5. The simulation results (above) of the ‘scene 2’ image are presented along with its corresponding histogram (below).
Figure 6. The simulation results (above) of the ‘scene 3’ image are presented along with its corresponding histogram (below).
Figure 7. The simulation results (above) of the ‘scene 4’ image are presented along with its corresponding histogram (below).
Figure 8. The simulation results (above) of the ‘scene 5’ image are presented along with its corresponding histogram (below).
Figure 9. The simulation results (above) of the ‘scene 6’ image are presented along with its corresponding histogram (below).
Figure 10. The simulation results (above) of the ‘scene 7’ image are presented along with its corresponding histogram (below).
Figure 11. The simulation results (above) of the ‘scene 8’ image are presented along with its corresponding histogram (below).
Figure 12. The simulation results (above) of the ‘scene 9’ image are presented along with its corresponding histogram (below).
Figure 13. The simulation results (above) of the ‘scene 10’ image are presented along with its corresponding histogram (below).
Table 1. SSIM of different algorithms.

Image | HE | BBHE | CLAHE | BHEPL | RSIHE | ESIHE | MMBEBHE | AICHE
Scene 1 | 0.35355 | 0.81761 | 0.65532 | 0.96649 | 0.89335 | 0.76816 | 0.87611 | 0.97822
Scene 2 | 0.56202 | 0.82984 | 0.66354 | 0.86973 | 0.89335 | 0.90575 | 0.83181 | 0.91081
Scene 3 | 0.40267 | 0.83725 | 0.47892 | 0.88366 | 0.87225 | 0.70318 | 0.87699 | 0.89607
Scene 4 | 0.65454 | 0.94258 | 0.65375 | 0.77875 | 0.95223 | 0.87196 | 0.86033 | 0.95223
Scene 5 | 0.58603 | 0.84615 | 0.63251 | 0.75727 | 0.88304 | 0.90293 | 0.84839 | 0.92898
Scene 6 | 0.57518 | 0.72178 | 0.63395 | 0.85655 | 0.81686 | 0.83812 | 0.85081 | 0.89026
Scene 7 | 0.79064 | 0.89123 | 0.63185 | 0.95572 | 0.89463 | 0.9434 | 0.89365 | 0.95629
Scene 8 | 0.70979 | 0.88585 | 0.76452 | 0.77475 | 0.81018 | 0.89923 | 0.71782 | 0.91879
Scene 9 | 0.90019 | 0.94821 | 0.91324 | 0.96226 | 0.97794 | 0.96686 | 0.92514 | 0.97873
Scene 10 | 0.68766 | 0.77379 | 0.69663 | 0.83925 | 0.78948 | 0.83556 | 0.70165 | 0.85255
Average value | 0.62222 | 0.84942 | 0.67242 | 0.86444 | 0.87833 | 0.863515 | 0.83827 | 0.926293
Standard deviation | 0.16538 | 0.07077 | 0.11052 | 0.07907 | 0.06007 | 0.08025 | 0.07265 | 0.04078
Note: The top averages produced by the compared algorithms are marked in bold font.
Table 2. PSNR of different algorithms.

Image | HE | BBHE | CLAHE | BHEPL | RSIHE | ESIHE | MMBEBHE | AICHE
Scene 1 | 9.7051 | 28.2200 | 13.4224 | 32.8191 | 27.2202 | 20.3642 | 30.8145 | 33.1691
Scene 2 | 11.0532 | 18.8172 | 12.1940 | 17.7522 | 18.7340 | 22.4839 | 23.4246 | 31.9793
Scene 3 | 7.6270 | 13.9420 | 11.5169 | 22.6969 | 10.3618 | 12.1808 | 23.5157 | 27.3033
Scene 4 | 9.6131 | 13.2657 | 17.2237 | 11.5263 | 21.4071 | 20.5795 | 14.7804 | 20.5795
Scene 5 | 8.7144 | 14.1542 | 17.9418 | 11.8057 | 21.8398 | 18.4647 | 14.3072 | 23.1845
Scene 6 | 9.2463 | 12.6208 | 10.5965 | 11.0304 | 10.7381 | 12.3334 | 9.8361 | 12.7580
Scene 7 | 17.4424 | 27.7650 | 11.7832 | 30.9384 | 22.9990 | 30.1157 | 26.4948 | 31.3333
Scene 8 | 9.0751 | 25.2620 | 19.9886 | 10.4082 | 25.1913 | 16.6951 | 9.2755 | 21.3871
Scene 9 | 21.2349 | 24.7633 | 6.8659 | 26.1318 | 21.5225 | 27.5689 | 23.6395 | 29.8354
Scene 10 | 8.813 | 23.4973 | 20.3784 | 9.4208 | 22.6432 | 16.3077 | 9.1325 | 25.8520
Average value | 11.2524 | 20.2307 | 14.1911 | 18.4529 | 20.2657 | 19.7093 | 18.5221 | 25.7381
Standard deviation | 4.43957 | 6.34223 | 4.46301 | 9.02647 | 5.60078 | 5.89009 | 7.96312 | 6.37072
Note: The top averages produced by the compared algorithms are marked in bold font.
Table 3. E of different algorithms.

Image | Original Image | HE | BBHE | CLAHE | BHEPL | RSIHE | ESIHE | MMBEBHE | AICHE
Scene 1 | 6.6795 | 6.2585 | 6.4329 | 7.3721 | 6.6075 | 6.4146 | 6.5399 | 6.4329 | 6.6100
Scene 2 | 6.9335 | 6.6133 | 6.5720 | 7.4798 | 0.8336 | 6.6434 | 6.7688 | 6.5831 | 6.7754
Scene 3 | 6.4832 | 6.0482 | 6.0529 | 7.2479 | 6.1414 | 5.9924 | 6.1180 | 5.9747 | 6.2263
Scene 4 | 5.8144 | 5.7052 | 5.6333 | 6.6725 | 5.7407 | 5.7453 | 5.7456 | 5.6913 | 5.7850
Scene 5 | 5.5746 | 5.0785 | 4.9332 | 6.3464 | 5.0707 | 4.8746 | 5.1037 | 5.0313 | 5.1829
Scene 6 | 6.5849 | 6.2653 | 6.2540 | 7.2336 | 6.3560 | 6.2520 | 6.3487 | 6.2918 | 6.3663
Scene 7 | 7.4980 | 7.2396 | 7.2908 | 7.9450 | 7.4068 | 7.2882 | 7.3781 | 7.3107 | 7.4299
Scene 8 | 6.0384 | 5.1384 | 5.4682 | 6.3106 | 5.5317 | 5.4245 | 5.3574 | 5.7334 | 5.5863
Scene 9 | 7.6653 | 7.4882 | 7.4852 | 7.8956 | 7.5763 | 7.5308 | 7.5508 | 7.4800 | 7.5963
Scene 10 | 6.0335 | 5.0356 | 5.1394 | 6.1589 | 5.2051 | 5.1465 | 5.2067 | 5.1965 | 5.2112
Average value | 6.53053 | 6.08708 | 6.12619 | 7.06624 | 5.64698 | 6.13123 | 6.21177 | 6.17257 | 6.27696
Standard deviation | 0.69256 | 0.86921 | 0.85574 | 0.69431 | 1.89171 | 0.87371 | 0.86821 | 0.81473 | 0.84929
Note: The top two averages produced by the compared algorithms are marked in bold font.
Table 4. AMBE of different algorithms.

Image | HE | BBHE | CLAHE | BHEPL | RSIHE | ESIHE | MMBEBHE | AICHE
Scene 1 | 72.3203 | 8.4028 | 20.4267 | 4.1236 | 3.8192 | 19.3066 | 6.1641 | 1.9737
Scene 2 | 63.5495 | 20.7392 | 22.7322 | 10.680 | 7.7807 | 12.5410 | 13.5527 | 6.6932
Scene 3 | 92.8464 | 24.4811 | 32.1225 | 17.3286 | 18.9348 | 52.7854 | 13.7168 | 12.1253
Scene 4 | 60.7812 | 4.2255 | 12.7312 | 52.0211 | 0.7013 | 31.6171 | 1.6792 | 0.6925
Scene 5 | 68.3038 | 17.6009 | 29.8796 | 42.2594 | 21.6477 | 13.0248 | 12.8289 | 12.6386
Scene 6 | 71.6894 | 30.1702 | 23.7526 | 20.1931 | 17.8950 | 32.4487 | 21.0208 | 17.6621
Scene 7 | 20.4280 | 2.1106 | 20.0978 | 2.4444 | 1.2151 | 0.9639 | 4.3422 | 0.7946
Scene 8 | 67.4038 | 27.0251 | 18.7464 | 55.4875 | 22.5769 | 41.5077 | 64.2064 | 15.7625
Scene 9 | 9.6527 | 5.9391 | 19.8077 | 6.2182 | 5.4492 | 4.6846 | 4.6402 | 4.2561
Scene 10 | 68.8450 | 34.6771 | 47.8343 | 32.3713 | 32.0415 | 26.2767 | 64.6110 | 31.2214
Average value | 59.58201 | 17.53716 | 24.8131 | 24.31272 | 13.20614 | 23.51565 | 20.67623 | 10.382
Standard deviation | 25.12541 | 11.71418 | 9.79701 | 19.97632 | 10.78832 | 16.48196 | 23.76145 | 9.61480
Note: The top averages produced by the compared algorithms are marked in bold font.
Table 5. Time of different algorithms.

Image | HE | BBHE | CLAHE | BHEPL | RSIHE | ESIHE | MMBEBHE | AICHE
Scene 1 | 3.25 | 4.78 | 10.36 | 4.98 | 2.25 | 3.46 | 4.23 | 7.35
Scene 2 | 2.33 | 3.73 | 11.17 | 4.03 | 2.56 | 1.77 | 3.21 | 7.24
Scene 3 | 2.28 | 6.96 | 11.22 | 17.17 | 1.73 | 2.56 | 6.75 | 8.83
Scene 4 | 3.27 | 5.56 | 10.26 | 5.52 | 3.26 | 3.75 | 4.89 | 7.89
Scene 5 | 2.27 | 6.72 | 9.39 | 6.28 | 1.79 | 2.36 | 4.95 | 7.93
Scene 6 | 1.82 | 3.28 | 11.19 | 3.98 | 2.57 | 2.23 | 3.43 | 6.23
Scene 7 | 4.59 | 6.89 | 13.25 | 5.57 | 3.89 | 4.89 | 5.69 | 10.36
Scene 8 | 1.11 | 4.77 | 9.25 | 6.73 | 1.75 | 1.27 | 5.49 | 7.69
Scene 9 | 1.74 | 4.41 | 11.30 | 3.26 | 2.31 | 1.15 | 2.82 | 5.53
Scene 10 | 2.217 | 6.400 | 11.128 | 7.49 | 2.54 | 2.07 | 4.82 | 8.35
Average value | 2.4877 | 5.35 | 10.8519 | 5.501 | 2.465 | 2.551 | 4.628 | 7.74
Standard deviation | 0.98344 | 1.35159 | 1.38965 | 1.43893 | 0.68844 | 1.16997 | 1.22275 | 1.33549
Note: The top averages produced by the compared algorithms are marked in bold font.
