Highlight Removal Emphasizing Detail Restoration

In existing highlight removal methods, research on highlights on metal surfaces is relatively limited. Therefore, this paper proposes a new, simple, and effective method for removing highlights from metal surfaces that better restores image details. The approach is also highly effective in everyday real-world highlight scenarios. Specifically, we first separate the image's illumination space based on the Retinex model and generate a highlight mask using the mean-plus-standard-deviation method. Then, based on the mask, we transform the original image and the image at the corresponding mask position to the V channel of the HSV space, achieving the effective elimination of highlights. To enhance the details of the restored image, this paper introduces an adaptive Laplacian sharpening operator and gradient fusion for detail enhancement at the highlight removal positions. Finally, a highlight-free image with well-preserved details is obtained. In the experimental phase, we validate the proposed method on a real weld seam highlight dataset and a real-world highlight dataset. Compared with existing methods, the proposed method achieves superior qualitative and quantitative results.


Introduction
In metal images, most pictures exhibit noticeable highlights [1]. Highlights not only cause significant visual disturbance but also, owing to the discontinuity between highlight and non-highlight pixel areas, greatly reduce the image's detail information. Welding, as a widely used joining method in industry, may result in an unstable structure if the quality of the weld seam is inadequate. Through weld seam detection, potential risks can be effectively reduced, ensuring the safety of the connected parts. The timely identification and repair of weld seam defects can prevent product failures during use, thereby reducing the costs of later maintenance and replacement. Effective detection methods allow quality issues to be eliminated before product delivery, enhancing production efficiency and reducing maintenance and repair costs. In automated welding processes, timely and accurate detection can quickly reject nonconforming products, ensuring the continuity and stability of the production process. Therefore, highlight removal is crucial for the accurate detection of weld seams. Over the past years, researchers have extensively studied highlight removal [2,3].
The existing highlight removal algorithms are primarily divided into two categories: those based on multiple images and those based on a single image. Methods based on multiple images include perspective transformation [4,5] and variations in light source position [6,7]. Single-image-based highlight removal algorithms encompass color segmentation [8], pixel-space processing [9,10], color-space transformations [11,12], and repair-model-based approaches [13] (utilizing deep learning methods).
However, most of the above methods overlook the presence of specular pixels and operate under the assumption of perfect diffuse reflection. For instance, these methods have limitations in experimental scenarios with a single background, overly restrictive scene conditions, and generally low ambient light. In the real world, most object surfaces exhibit both diffuse and specular reflections. Consequently, when these algorithms are applied to images of real-world scenes, the results are often unsatisfactory [7,8]. These methods typically cover everyday scenes [14], metal surfaces [15], and special scenarios [16]. Therefore, a method that can be applied to real-world scenes and also perform highlight removal in certain special scenarios becomes particularly important.
Additionally, the restoration of image details involves recovering texture, edges, and subtle variations, which define the characteristics of the captured scene. This enhancement not only improves the aesthetics of the image but also enriches its interpretability and practicality. The importance of removing highlights while simultaneously enhancing image details cannot be overlooked: it can reveal hidden information, enrich visual perception, and provide more accurate and meaningful image data for various fields [17]. Therefore, while eliminating highlights from real-world images, restoring and preserving their detail information is particularly important.
In the summary of previous approaches by Khan [18] and others, the importance of identifying highlight regions is emphasized: detecting highlight positions directly influences the final results of highlight removal. Through a systematic analysis of existing highlight removal algorithms, this paper focuses on detecting and removing highlights in individual images while preserving their detail information, aiming to improve the accuracy of highlight removal and the fidelity of the final results. A highlight removal method suitable for various real-world scenarios is therefore proposed. In summary, the main contributions of this paper are as follows: This paper introduces a widely applicable highlight removal method that effectively eliminates highlights, enhances image visibility, and improves image details.
Because significant image detail is lost after highlight removal, this paper proposes a method using an adaptive Laplacian sharpening operator and gradient fusion to enhance the details of the restored image.
Finally, this paper validates the proposed method on a real metal weld seam highlight dataset and an everyday highlight image dataset. Compared to traditional methods, the proposed approach removes highlights more effectively and better showcases the restored details of the weld seam. In the final comparative experiments, this method achieved superior results.

Related Works
In the context of color-space-based highlight removal, Yang [8] proposed a two-dimensional Ch-CV space, which effectively reflects the correspondence between the diffuse and specular reflection components. The method involves segmenting highlight regions and separating reflections for each segmented area. While using color segmentation to remove highlights in a single image is a viable strategy, the algorithm exhibits limited robustness for images with complex textures and often requires manual detection of specular regions. Yang [11] proposed a color space based on H-S, separating reflection components by adjusting the saturation of specular pixels to the value of diffuse-only pixels with the same chromaticity. Ramos [12], building on the dichromatic reflection model, transformed a color image to the YCbCr color space and performed histogram matching to eliminate highlights.
In the context of pixel-space-based highlight removal, Shen [9] and colleagues improved the SF image to obtain the MSF (modified specular-free) image. They then determined the specular and diffuse reflection components based on the difference between this image and the original. Through iteration, the primary color and chromaticity differences were established, and highlights were eventually removed using the least squares method. The following year, Shen [10] and colleagues, building upon this work, performed pixel-level reflection adjustments for highlight restoration, using the smooth color transition of the region surrounding the highlight as a reference. Yang [19] and colleagues estimated the maximum chromaticity of the image's diffuse reflection based on the SF image principle. They then used this value as the guidance value for a bilateral filter applied to the maximum chromaticity image, thereby separating the reflection component. Akashi [20] introduced a framework that combines non-negative matrix factorization (NNMF) and sparsity constraints to simultaneously estimate the dominant color and separate the diffuse reflection component. Yamamoto [3] and colleagues added high-intensity filters to the specular and diffuse reflection components to detect pixels with separation errors; they replaced the separation results of erroneous pixels with those of other reference pixels, achieving a secondary optimization of the resulting image. Zhang [21] used orthogonal decomposition to remove highlights and, based on clustering results, separated diffuse reflection from specular reflection. Liu [22] leveraged the global chromaticity characteristics of reflection-separated pixels to obtain oversaturated images without highlights, then restored saturation based on the respective characteristics of the two reflection components, thereby eliminating highlights. Zhao [23] and colleagues, considering the structural similarity between the reflection component and the highlight image and the local correlation of chromaticity in the diffuse reflection component, achieved highlight removal by jointly addressing chromaticity compensation and local structure. Souza [24] and colleagues, building on previous pixel clustering, analyzed the distribution patterns of chromaticity extremes in the chromaticity space to cluster pixels, and finally separated the reflection component based on the intensity ratio of each cluster. Xia [25] and colleagues used a globally optimized method based on the dichromatic reflection model to remove highlights. This method corrects the hue and saturation of the highlighted region to estimate the chromaticity of diffuse reflection and uses convex optimization with dual regularization to estimate the coefficients of the diffuse and specular reflection.
Utilizing a multi-image approach for highlight removal, Imai [5] and colleagues, based on the dichromatic reflection model and spectral prediction of image data, proposed three multi-light-source highlight detection methods and simultaneously estimated the emission spectrum of each light source. Wang [6] and colleagues captured four images from four specific points to form a dataset and reconstructed specular-free images from it. Iwata [7] and colleagues proposed a method for video imaging: by adjusting the capture rate of a moving camera and the flash rate of a flash unit, they generate images with minimum and maximum illuminance, which are then used to determine specular-free images. Haghighat [4] and colleagues proposed an R-D-based strategy that categorizes images from multiple views into specular and diffuse reflection components, using two different transformations to distinguish diffuse from specular reflection data and a progressive approach to eliminate specular reflection from the diffuse data in stages. Takechi [2] revealed the low-rank structure of images captured at different viewpoints under various light source directions and expressed the separation of diffuse and specular reflection as a low-rank approximation of a third-order tensor. Jiao [16] and colleagues first captured document images from multiple viewpoints, merged and analyzed them to detect highlight regions, and finally used a patching algorithm to repair the highlight areas; however, this approach is only suitable for document images. Huo [26] and colleagues, through exposure correction, generated multiple low-dynamic-range (LDR) images with different exposures, used a highlight detection algorithm to identify highlight regions, and synthesized a high-dynamic-range (HDR) image from the differently exposed LDR images. Shah [27] and colleagues utilized multi-viewpoint images based on the continuity of image frames, replacing pixels at highlight positions with pixels from adjacent frames to achieve highlight removal.

Method of This Article
The flowchart of the proposed method is shown in Figure 1; it mainly consists of the following steps. First, the Retinex model is applied to the input image, separating it into an illumination space and a reflection space. Next, the illumination image is processed based on the sum of the mean and standard deviation of its pixels, the highlight positions are separated, and the corresponding mask is generated. Then, based on this mask, the specular reflection in this area is separated. The advantage of this approach is that only the highlight region is processed separately, avoiding the color distortion caused by global processing. Given that there are noticeable processing artifacts between the highlight and non-highlight regions after highlight removal, this paper introduces a compensation function to reduce the post-processing discontinuity. Finally, because significant detail is lost at the original highlight positions after highlight removal, this paper uses an adaptive Laplacian operator and gradient fusion to enhance the image, yielding the final highlight-free output.

This paper utilizes the illumination component separated based on the Retinex model and employs the mean-and-standard-deviation method to isolate the highlight mask. Building upon the obtained highlight mask, the specular reflection is separated. The advantage of this approach lies in improving on the traditional global processing of highlight images, thus avoiding color bias after separation. Additionally, to enhance the detail information after highlight removal, this paper introduces gradient fusion and a Laplacian local enhancement algorithm. In the processing logic of Figure 1, V represents the final output image, I represents the input highlight image, S(•) represents the specular reflection component extraction function, G(•) represents the smoothing function used primarily for a smooth transition of the edges after processing the masked region, R(•) represents the highlight extraction function proposed in this paper based on the Retinex model, and L(•) represents the gradient information of the image.


Highlight Removal
The accurate detection of highlight positions is crucial for highlight removal and directly affects the image restoration results and the efficiency of highlight removal algorithms. This paper is inspired by the Retinex model [28], which suggests that a digital image can be represented as the pixel-wise product of illumination and reflection components. In this context, the illumination component reflects the ambient light information, while the reflection component reflects the detail information of the image. The relationship between them can be expressed as I_c(x, y) = R_c(x, y) · L_c(x, y), where I_c(x, y) represents the pixel at position (x, y) in the image, c ∈ {R, G, B} represents the color channel, and R_c(x, y) and L_c(x, y) represent the reflection and illumination component information at the corresponding pixel position. Based on this theory, this paper separates the highlight image into reflection and illumination components and performs highlight localization based on the illumination component, as shown below.
The highlight threshold is the sum of the mean and standard deviation of the illumination component: R_m = (1/n)∑ L_i + √((1/n)∑(L_i − μ_L)²), where R_m represents the extracted highlight mask threshold, n = W × H is the total number of pixels (W is the width and H is the height of the image), L_i is the i-th pixel value in the illumination component, and μ_L is its mean. Adding the mean and standard deviation together yields the mask for the highlight region.
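As a concrete illustration, the illumination split and the mean-plus-standard-deviation mask described above can be sketched in a few lines of NumPy. The box-blur illumination estimate and the function names here are our own simplifications, not the paper's exact implementation:

```python
import numpy as np

def retinex_split(image, ksize=15):
    """Approximate the Retinex split I = R * L: the illumination L is a
    box-blurred (low-pass) copy of the image, and R = I / L."""
    img = image.astype(np.float64) + 1e-6          # avoid division by zero
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(ksize) / ksize                # separable box blur
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    L = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)
    return img / L, L

def highlight_mask(illumination):
    """Mean-plus-standard-deviation threshold on the illumination
    component: pixels above R_m = mean(L) + std(L) are flagged."""
    L = illumination.astype(np.float64)
    r_m = L.mean() + L.std()
    return (L > r_m).astype(np.uint8), r_m
```

For a grayscale image containing one very bright blob, the threshold lands between the flat background and the blob, so the mask isolates only the blob.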
Subsequently, within the mask region extracted by our approach, the specular reflection is separated using the dichromatic reflection model. In contrast to traditional methods, this paper processes only the region identified by highlight localization rather than the entire image, which mitigates the color bias issues encountered in previous methods. The dichromatic reflection model is I(x) = D(x) + S(x) = m_d(x)Λ(x) + m_s(x)Γ(x), where x = (x, y) denotes the image's horizontal and vertical coordinates and I(x) = (I_r(x), I_g(x), I_b(x)) represents the intensities of the three channels of a color image. D(x) and S(x) denote the diffuse and specular reflection components, respectively, Λ(x) represents the diffuse reflection chromaticity, Γ(x) represents the specular reflection chromaticity, and m_d(x) and m_s(x) are the coefficients of these two components.
Based on the inference from reference [12] and Formula (4), the following formula can be derived: Î(x) = I(x) − I_min(x) + Ī_min, where Î(x) is the approximate illumination-free image separated in the RGB space, I_min(x) is the minimum pixel value across the RGB channels, and Ī_min is the mean of these minimum pixel values.
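A minimal sketch of this approximate specular-free image, under our reading of the offset term as the global mean of the channel minima:

```python
import numpy as np

def approx_specular_free(rgb):
    """I_hat(x) = I(x) - I_min(x) + mean(I_min): subtract each pixel's
    channel minimum (which carries the achromatic specular part under
    the dichromatic model) and add back the global mean as an offset."""
    img = rgb.astype(np.float64)                   # H x W x 3 array
    i_min = img.min(axis=2, keepdims=True)         # per-pixel channel minimum
    return img - i_min + i_min.mean()
```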
However, because metal surfaces are mostly smooth and lack proper diffuse reflectance chromaticity [15], the highlight removal process in this paper is as follows. First, based on the separated highlight mask, the following operation is performed, where γ represents the conversion of the image to the HSV space, I(x) is the original RGB image, I_min(x) is the minimum value among the RGB channels, R_m is the highlight mask separated with Formula (3), and • indicates that the operation is applied only to the masked area. I_v(x) represents the V channel in the HSV color space of the original image, from which the minimum pixel is finally separated. The highlight removal step then follows, where γ⁻¹ represents the conversion from the HSV space back to the RGB space; adding the threshold information R_m calculated in this paper allows a more accurate judgment during highlight removal.
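The masked V-channel correction can be approximated as below. This is a hedged stand-in: the paper's actual update uses explicit HSV round-trips (γ and γ⁻¹), whereas here we exploit the fact that V is simply the RGB maximum and rescale masked pixels directly:

```python
import numpy as np

def suppress_highlight_v(rgb, mask):
    """Inside the highlight mask, rescale each pixel so that its V value
    (the RGB maximum, i.e. the V channel of HSV) drops to its channel
    minimum, approximating the diffuse-only intensity under the
    dichromatic model.  Simplified illustration only."""
    img = rgb.astype(np.float64)
    v = img.max(axis=2)                            # HSV value channel
    i_min = img.min(axis=2)
    scale = i_min / np.maximum(v, 1e-6)            # per-pixel shrink factor
    out = img.copy()
    m = mask.astype(bool)
    out[m] = img[m] * scale[m][:, None]            # apply only inside mask
    return out
```

Scaling (rather than subtracting) preserves the hue of each pixel while pulling its brightness down to the diffuse estimate.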
However, after processing with our method, a sense of disconnection was observed at the edge of the mask. Therefore, we introduced a compensation factor applied to the mask, reducing the processing traces at the restoration location. In the compensation formula, x and y represent the coordinates of the corresponding pixels, and i and j index the convolution kernel; a 5 × 5 convolution kernel achieves the best restoration effect. The repair effect of highlight removal with this compensation coefficient is shown in Figure 2. Because the compensated pixels are mainly located at the edges of the highlights, certain images may not exhibit significant improvements. For comparison, columns (c) and (d) in Figure 2 were selected to compare the performance metrics, with the specific results shown in Figure 3. The horizontal axis represents the input images A–E in Figure 2a, and the vertical axis represents the PSNR (peak signal-to-noise ratio) values of the images. The comparison shows that the PSNR values of the images improved significantly after the addition of the compensation function.
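The compensation step can be sketched as a soft cross-fade driven by a box-blurred mask. The cross-fade form is our assumption, since the text specifies only that a 5 × 5 kernel is convolved over the mask region:

```python
import numpy as np

def blend_mask_edges(restored, original, mask, k=5):
    """Convolve the binary mask with a k x k box kernel to get a soft
    weight map, then cross-fade the restored and original images so the
    mask boundary transitions smoothly instead of abruptly."""
    m = mask.astype(np.float64)
    pad = k // 2
    padded = np.pad(m, pad, mode="edge")
    kern = np.ones(k) / k                          # separable box kernel
    soft = np.apply_along_axis(lambda r: np.convolve(r, kern, "valid"), 1, padded)
    soft = np.apply_along_axis(lambda c: np.convolve(c, kern, "valid"), 0, soft)
    return soft * restored + (1.0 - soft) * original
```

Deep inside the mask the weight is 1 (fully restored), far outside it is 0 (original), and within about k/2 pixels of the boundary the two are mixed, which is what removes the visible seam.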


Detail Restoration
After repairing the highlight regions with the method above, there was often a severe loss of detail in these areas, which significantly affects weld seam detection. Therefore, the paper introduces global detail and fine detail restoration. First, in the HSV color space, the V channel of the repaired image is decomposed using guided filtering [29] (V = V_s + V_t). Then, based on the texture layer V_t, weight parameters are applied to the structural layer V_s for global detail enhancement. The paper improves the Laplacian global image enhancement operator, refining image sharpness to enhance the overall detail information. In the sharpening formula, V_s(x, y) represents the structural layer at the corresponding pixel. γ and Λ are two adjustable parameters with default values of 0.1 and 0.75, where γ controls sharpness and Λ restores the brightness of edges. In the final Laplacian operator, this paper introduces the weight Λ to control visual sharpness (this parameter requires Λ > 1); the higher the value of Λ, the sharper the result. To adapt to images of different scenes, this paper proposes an adaptive weight Λ computed from the texture layer V_t(x, y) at the corresponding pixel. After applying this global optimization, details were still missing in the highlight removal area, so this paper uses maximum gradient fusion [30] to restore the fine details of the image. The horizontal gradient G_x is obtained through convolution, G_x = S_x ⊗ V, where ⊗ denotes convolution, V is the target image, and S_x is the Sobel operator. Similarly, the vertical gradient G_y is calculated with G_y = S_y ⊗ V, where S_y = S_x^T. After obtaining the horizontal and vertical gradients, the image gradient magnitude is defined as G(x, y) = √(G_x(x, y)² + G_y(x, y)²), where G_x(x, y) and G_y(x, y) are the pixel values of the horizontal and vertical gradient maps, respectively. Through the above equations, the gradient information of the original image and of the image detected and repaired in the previous section can be obtained separately. The visibility of image details is closely related to the magnitude of the gradients, so this paper extracts the maximum gradient value G_max(x, y) across the images. Since structural similarity evaluates the perceptual similarity between two images, this paper calculates the similarity between the maximum gradient map G_max and the gradient map G_vs of the globally enhanced image: SSIM(G_max, G_vs) = ((2 μ_Gmax μ_Gvs + c_1)(2 σ_GmaxGvs + c_2)) / ((μ_Gmax² + μ_Gvs² + c_1)(σ_Gmax² + σ_Gvs² + c_2)), where μ_Gmax, μ_Gvs, σ_Gmax, σ_Gvs, and σ_GmaxGvs represent the local means, local standard deviations, and covariance of G_max and G_vs, and c_1 and c_2 are stabilizing constants. The gradient quality L is then obtained by averaging the quality map, where H and W are the height and width of the image. After obtaining the gradient quality, the texture is pyramid-blended with the globally enhanced image to obtain the final experimental results, as shown in Figure 4.
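One concrete reading of the weighted Laplacian sharpening applied to the structural layer V_s is sketched below; the exact combination of γ and Λ in the paper's formula is our reconstruction, using the standard 4-neighbour discrete Laplacian:

```python
import numpy as np

def laplacian_sharpen(vs, gamma=0.1, lam=0.75):
    """Sharpen the structural layer with a 4-neighbour discrete
    Laplacian: V_s' = V_s - gamma * lam * Laplacian(V_s).  gamma and
    lam mirror the paper's stated defaults (0.1 and 0.75); the adaptive
    version would derive lam from the texture layer V_t instead."""
    f = vs.astype(np.float64)
    p = np.pad(f, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return f - gamma * lam * lap
```

Flat regions have zero Laplacian and pass through unchanged, while peaks and edges (negative Laplacian at maxima) are boosted, which is the sharpening effect described above.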
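The Sobel gradient and maximum-gradient fusion steps can be sketched as follows, as a plain NumPy stand-in for the convolutions in the text:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)

def _conv3(img, k):
    """'Same'-size 2-D filtering with a 3 x 3 kernel via shifted sums."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def gradient_magnitude(v):
    """G_x = S_x (*) V and G_y = S_y (*) V with S_y = S_x^T, combined as
    G = sqrt(G_x^2 + G_y^2)."""
    gx = _conv3(v, SOBEL_X)
    gy = _conv3(v, SOBEL_X.T)
    return np.sqrt(gx ** 2 + gy ** 2)

def max_gradient(a, b):
    """Pixel-wise maximum-gradient map G_max over two candidate images."""
    return np.maximum(gradient_magnitude(a), gradient_magnitude(b))
```

Because the Sobel kernels are antisymmetric, the sign convention (convolution vs. correlation) does not affect the magnitude, which is all the fusion step uses.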


Results
This paper conducted the final experiments using a real weld seam image dataset for validation. The dataset comprises 533 weld seam images polluted by highlights. Additionally, 1000 highlight images from the SHIQ [13] dataset were used. In this section, the universality of the proposed method is validated on these datasets. Figures 5 and 6 present experimental comparisons, including the methods of Akashi [20], Fu [31], Shen [10], Shen [14], Yamamoto [3], Lin [32], and the proposed method. The images in Figure 5 are from the highlight-polluted weld seams collected in this paper, while the images in Figure 6 are from the SHIQ dataset. Table 1 lists the compared methods.

Table 1. These methods are replaced with abbreviations later in this article.

Qualitative Evaluation
To conduct a qualitative assessment, this paper categorized the collected highlight images into four major classes: metal, plastic, glass, and decorative items. These images were sourced from the SHIQ dataset and the dataset of real-world weld seam images collected in this paper. After contrasting the images in each category, three reference metrics were employed to evaluate the quality of the highlight-removed images: the MSE (mean squared error), measuring the average squared difference between two images at each pixel; the PSNR (peak signal-to-noise ratio), assessing the ratio of signal to noise; and the SSIM (structural similarity index measure), comparing the brightness, contrast, and structure of the images. Together, these metrics help determine the effectiveness of the highlight removal methods.
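The three metrics can be computed as below. The single-window SSIM is a simplification of the usual locally windowed average; the standard stabilizing constants for 8-bit images are assumed:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def global_ssim(a, b, c1=6.5025, c2=58.5225):
    """Single-window SSIM: compares luminance, contrast, and structure
    through the means, variances, and covariance of the two images."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

Identical images give MSE 0, infinite PSNR, and SSIM exactly 1; a uniform brightness shift lowers both PSNR and SSIM, which is why the three metrics are reported together.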
Figure 5 illustrates the comparison of the highlight removal effects between this paper's approach and the others on metal surfaces. In Figure 5b, the highlights are not effectively removed, and many artifacts are present. In Figure 5d,f,g, the image quality degrades significantly after highlight removal. In Figure 5e, although the highlights were removed, noticeable artifacts appear at the transitions in the highlight region, with a severe loss of detail. Figure 5h represents a deep learning method, showing noticeable color discrepancies in the restoration; while this method effectively removed the highlights, it also exhibited certain artifacts. Table 2 presents the mean performance metrics of the different methods on these three images, with the optimal result marked in bold.
Figure 6 shows the results of highlight removal on a plastic surface. It can be observed that most methods produced some degree of color deviation on the plastic surface. This was mainly because the existing methods failed to constrain the highlight area and therefore mistakenly identified the white background as highlights. Additionally, some methods left a pseudo-shadow at the original highlight position after removing the highlights, indicating an incomplete elimination. Both issues are clearly visible in Figure 6e. The approach proposed in this paper performed better at suppressing pseudo-shadows and reducing color deviations in the highlight areas than the existing methods. Table 3 presents the average performance metrics for the images in Figure 6, with the optimal result marked in bold.
Figure 7 compares the results of highlight removal on glass surfaces. The existing methods achieved relatively balanced highlight removal on glass, but some color deviation was still observed, which indicates an area for optimization in our future work. In Table 4, the optimal result is marked in bold.
Figure 8 shows the results of highlight removal on the surface of decorative items. In environments with rich colors, most methods exhibited unnatural color transitions, as seen in Figure 8e. Additionally, in Figure 8g, there is significant overall color deviation in the restored image. Moreover, many methods did not handle the details of highlight removal well, as shown in Figure 8d,g. In Table 5, the optimal result is marked in bold.

Quantitative Evaluation
In the quantitative evaluation, to further demonstrate the universality and effectiveness of our approach, this paper evaluated both the SHIQ dataset and the collected weld seam dataset, which together comprised 1533 highlight images.
Tables 6 and 7 list the performance metrics of our method and the existing methods on these two datasets. It can be seen that our method achieved a significant performance improvement over the existing traditional highlight removal methods. Although our method fell slightly short of the deep learning methods on the SHIQ dataset, its actual visual effect was better; moreover, the deep learning methods could only handle images of up to 512 × 512 pixels, and image details were severely lost after their processing. The optimal method is highlighted in bold in Tables 6 and 7, and the values are the means over all processed images.

Runtime
Runtime is an important criterion for determining whether an algorithm can process images in real time. This paper selected highlight images at different resolutions to measure the runtime of each algorithm. All the methods mentioned in this paper were run on the same computer using MATLAB (2021b), configured with an i7-11800H CPU (Intel, Santa Clara, CA, USA), a GTX 2060 GPU, and 16 GB of memory.
Table 8 reports the runtimes. Images at three resolutions were used for the comparison: 256 × 256, 512 × 512, and 1280 × 720. The table shows the runtime at each resolution and the final average runtime, measured in seconds. An asterisk (*) indicates that an image could not be processed at the given resolution.
For the deep learning method, which can only handle images of up to 512 × 512 pixels, the time reported at the corresponding resolution is the training time of the model; the final average processing time is the per-image processing time after the model has been trained.
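The runtime protocol above can be reproduced with a simple wall-clock harness. The sketch below is illustrative only: `remove_highlights` is a hypothetical placeholder, not the paper's pipeline, and the three shapes mirror the resolutions compared in Table 8.

```python
import time
import numpy as np

def remove_highlights(img):
    """Hypothetical stand-in for a highlight removal pipeline; the real method
    (illumination separation, masking, V-channel replacement) would go here."""
    return np.clip(img, 0.0, 0.95)

# Measure per-image runtime at the three resolutions used in Table 8.
timings = {}
for h, w in [(256, 256), (512, 512), (720, 1280)]:
    img = np.random.default_rng(0).random((h, w, 3))
    start = time.perf_counter()
    remove_highlights(img)
    timings[f"{w}x{h}"] = time.perf_counter() - start

for res, sec in timings.items():
    print(f"{res}: {sec:.4f} s")
```

In practice, each resolution would be timed over many images and averaged, as done for the final column of Table 8.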

Detail Analysis
Information hidden in image details is often crucial, and clear image details are essential for practical applications such as object detection and tracking. Therefore, an excellent highlight removal algorithm should be able to enhance image details. This paper compared the detail information of the images after highlight removal, as shown in Figure 9. The restored detail was compared between the algorithms from references [10,32], which performed well in the comparisons above, and the algorithm proposed in this paper.
As shown in Figure 9, the highlight removal method proposed in this paper excelled in detail restoration compared to the existing methods, which is particularly evident in the weld seam images. In various other environments, the proposed approach also outperformed the existing methods in terms of detail.
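The detail enhancement step rests on Laplacian sharpening. The sketch below shows the basic (non-adaptive) form of the operator: subtracting a scaled Laplacian from the image boosts edges and fine structure. The fixed `strength` parameter is an assumption of this sketch; the method in this paper adapts the sharpening strength locally and fuses gradients, which is not reproduced here.

```python
import numpy as np

# 3x3 Laplacian kernel used for unsharp-style detail enhancement.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def convolve3x3(img, kernel):
    """Minimal 'same'-size 3x3 convolution with edge padding."""
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * pad[i:i + h, j:j + w]
    return out

def sharpen(img, strength=0.5):
    """Subtract the scaled Laplacian to boost edges and fine detail.
    A fixed strength is used here; an adaptive operator varies it locally."""
    return np.clip(img - strength * convolve3x3(img, LAPLACIAN), 0.0, 1.0)
```

Because the Laplacian is zero in flat regions, sharpening leaves uniform areas untouched while amplifying the contrast across edges, which is why it restores detail at former highlight positions without disturbing the rest of the image.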

The application scenarios of this method therefore include welding and other industrial production settings. Welding plays a vital role in industrial production, and weld quality is key to ensuring structural integrity. Inspecting welds can effectively reduce potential risks and ensure the safety of the connection points. The timely identification and repair of welding defects can prevent product failures during use, thereby reducing the cost of later maintenance and replacement. Effective inspection methods allow quality problems to be eliminated before the product leaves the factory, improving production efficiency. In automated welding processes, timely and accurate detection helps to quickly reject nonconforming products and ensures the continuity and stability of the production workflow.


Conclusions
This paper proposes a widely applicable highlight removal method that emphasizes detail restoration. The method effectively removes highlights and restores the detail information in areas of the image contaminated by highlights. Compared with existing methods, the proposed method better preserves both the overall and fine-grained detail information of the image, and the repaired images are more suitable for subsequent recognition operations. However, the method still has shortcomings: the saturation of certain processed images is affected, and residual artifacts remain in some types of images. These issues need to be considered and improved upon. With the increasing maturity of deep learning algorithms, future highlight removal algorithms should also evolve in the direction of deep learning.

Figure 1. The framework of the proposed method.


Figure 2. The highlight restoration effect of the proposed method in this paper: (a) the input image, (b) the highlight pixels, (c) without the compensation coefficient, (d) using the compensation coefficient.


Figure 3. Performance metrics before and after compensation.


Figure 4. Effect after detail repair: (a) the input image, (b) the gradient image, (c) the enlarged details; the red box is before enhancement, and the blue box is after enhancement.


Table 2. Mean performance metrics of the images in Figure 5.

Table 3. Mean performance metrics of the images in Figure 6.

Table 4. Mean performance metrics of the images in Figure 7.


Table 5. Mean performance metrics of the images in Figure 8.

Table 6. Performance metrics on the real welding seam dataset.

Table 7. Performance metrics on the SHIQ dataset.