Enhancement of Asymmetrically Color-Cast Sandstorm Image Using Saturation-Based Color Correction and Hybrid Transmission Network

Abstract: The images discussed in this manuscript show atmospheric conditions of smog, sandstorm, and dust. They were taken in various environments and exhibit features such as dimness or a color cast. A smoggy image has a greenish or bluish color veil, and a sandstorm image has a yellowish or reddish color veil caused by sand particles. Various methods have been used to enhance dust-laden images. However, if the color-cast components are not considered during enhancement, the enhanced image acquires a new, artificial color veil that did not appear in the input image, because a color-veiled image does not have uniform color channels: certain channels are attenuated by sand particles. Therefore, this paper proposes a saturation-based color-balancing method to correct colors cast asymmetrically by the attenuation of color channels. Moreover, because the balanced image still contains dust and the distribution of hazy components is asymmetrical, a dehazing procedure is needed to enhance the image. This work used the original image and a reversed image to train the hybrid transmission network and generate the image's transmission map. Objective and subjective assessment procedures were used to compare the performance of the proposed method with that of other methods, and the assessments show that the proposed method's performance is superior.


Introduction
The images discussed in this paper have diverse features, with a hazy appearance or a color veil caused by various atmospheric circumstances. Hazy and dusty images are dim and unclear, and a sandstorm image carries a yellowish or reddish color veil because certain color channels are attenuated by sand particles. Moreover, because a sandstorm image has low resolution and, in certain environments, severely depleted color channels, it presents a challenge for computer vision and image recognition. Therefore, a sandstorm-image-enhancement procedure is needed. Because sandstorm images and dusty images are formed through a similar process, enhancing either requires a dehazing procedure. However, existing dehazing methods have no color-balancing techniques; consequently, the enhanced image contains a new artificial color cast that was not visible in the color-veiled input image. To enhance the sandstorm image naturally, a color-balancing procedure is therefore needed.
Hazy-image-enhancement methods can be divided into two broad categories: machine-learning-based methods and non-machine-learning-based algorithms.
Numerous studies have applied non-machine-learning-based algorithms to enhance hazy images. He et al. proposed a dehazing algorithm using the dark channel prior (DCP) [1]. This method is widely applied for dehazing; however, because it uses a constant kernel size to estimate the transmission map, the enhanced image shows a block-like artifact. Meng et al. used a boundary-constrained transmission map to enhance hazy images [2]; their algorithm compensates for the DCP method using boundary constraints. Because this method has no color-balancing procedure, if the image has a color cast, the enhanced image will have an artificial color. Narasimhan et al. proposed a dehazing method using the image's scene depth map, generated under different weather conditions [3]. Narasimhan et al. also presented a hazy-image-enhancement algorithm based on changes in scene color under varying atmospheric conditions [4]; although this method enhances hazy images, its effect weakens as depth increases. Zhao et al. enhanced hazy images using a transmission map with a pixel- and patch-wise method to compensate for the edge regions of the existing transmission map [5]; in this method, if the enhanced image is too dark, an exposure procedure is applied [5]. Tarel et al. proposed an image-enhancement method using white balance, atmospheric-veil inference, and corner-preserving smoothing [6]. Nasseeba et al. enhanced hazy images using a depth-estimation module that refines the transmission map with median filtering, a color-analysis module based on the gray-world assumption, and a visibility-restoration module that adjusts the transmission map [7]. Schechner et al. proposed a hazy-image-enhancement algorithm using polarization [8]. Hong et al. enhanced hazy images using adaptive gamma correction with a saturation increase [9]; however, this method uses only a constant value that is not image-adaptive. Al-Ameen proposed a dusty-image-enhancement method using a tri-threshold with gamma correction [10]; although it enhances dusty images with a color cast, its constant value is unsuitable for varied image conditions. Shi et al. enhanced dusty images using a normalized gamma transform and contrast-limited adaptive histogram equalization [11]. This method enhances color-cast dusty images; however, because it balances the color components by mean-shifting the color ingredients, an artificial color cast can appear in the enhanced image. Cheng et al. enhanced sand-dust images using blue-channel compensation [12]. This method enhances color-cast sandstorm images suitably; however, if a color channel of the image is severely depleted, a new artificial color veil can appear. Cheng et al. also proposed a sandstorm-image-enhancement algorithm using the blue channel prior and white balance [13]. Gao et al. established a sand-dust-image-enhancement method using the blue channel prior and a color-balancing method [14]. This method enhances color-cast sand-dust images sufficiently; however, for greatly attenuated images, because certain color channels are depleted, the enhanced image can show a new color veil. Shi et al. proposed a sandstorm-image-enhancement algorithm using a compensated dark channel prior [15]. This method also mean-shifts the color ingredients to correct the color cast; however, a newly cast color can be seen in the enhanced image.
Furthermore, many studies have focused on enhancing hazy images with machine learning. Zhu et al. enhanced hazy images using a color attenuation prior and by training a depth map [16]. Ren et al. enhanced hazy images using two multi-scale convolutional neural networks: one generating a transmission map and the other refining it [17]. Although their method enhances hazy images suitably, because the training images were taken in the daytime, nighttime images are not sufficiently enhanced. Wang et al. enhanced hazy images using an atmospheric illumination prior and a multi-scale convolutional neural network [18]. This method estimates the transmission map effectively; however, in some images, the sky region of the transmission map is not well estimated [18]. Lee enhanced sandstorm images using an image-adaptive eigenvalue and a brightness-adaptive dark channel network [19]. Santra et al. improved hazy images using a transmittance map and environmental illumination [20]. This method enhances hazy images; however, for images taken at night, the enhanced image shows artifacts because the synthetic training images do not cover nighttime environments. Yu et al. enhanced hazy images using ensemble learning with a two-branch neural network [21]. Zhou et al. improved hazy images using robust polarization and neural networks [22]; however, since this method cannot estimate certain atmospheric conditions, the enhanced image shows artifacts in some cases [22]. Zhang et al. enhanced hazy images using a pyramid channel-based feature attention network, which has three modules: three-scale feature extraction, pyramid channel-based feature attention, and a reconstruction module that extracts diverse characteristics of the image [23]. Machine-learning-based dehazing methods are sometimes superior to non-machine-learning algorithms; however, creating a hazy dataset is a difficult task, and synthetic images cannot cover the various circumstances required to train the neural network sufficiently.
The sandstorm image has a color veil due to the attenuation of color components. Therefore, to enhance the sandstorm image naturally, the asymmetrically cast color needs a color-balancing procedure. Moreover, because the color-balanced image merely looks like a dusty image without a color veil, and because the distribution of hazy particles is asymmetrical, an image-adaptive dehazing procedure is also needed to enhance the image sufficiently. This paper therefore proposes an image-adaptive color-balancing method and a dehazing method.


Saturation-Based Asymmetric Color-Channel Compensation
The sandstorm image has a certain color veil, either reddish or yellowish, due to the attenuation of the color channel, and the distribution of the color channels is asymmetrical. To balance the asymmetrical color components of an image naturally, parameters that capture the image's characteristics are needed. Hong et al. [9] enhanced hazy images using gamma correction on the value channel of the HSV domain together with an increase in the saturation channel. Because a hazy image is merely dim and has no color cast, such variations in saturation can create a new color cast. The saturation of an image indicates how the color is mixed and whether the color is deep or light. The sandstorm image has a yellowish or reddish color cast. If the hue channel were adjusted to balance the image's color, the image colors would change, producing an artificial effect. However, because the saturation expresses how the color is mixed, balancing a color-cast image should adaptively control the saturation.
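The link between color purity and HSV saturation can be made concrete with a small sketch (Python is used here purely for illustration; it is not part of the proposed method). In the HSV model, saturation is S = (max − min)/max over the RGB components, so a yellowish-cast pixel is far more saturated than a neutral, haze-gray pixel:

```python
import colorsys

def saturation(r, g, b):
    """HSV saturation of a normalized RGB pixel: S = (max - min) / max."""
    return colorsys.rgb_to_hsv(r, g, b)[1]

# A yellowish sandstorm-like tone (strong red/green, weak blue) is highly
# saturated; a neutral gray, haze-like tone has zero saturation.
cast_pixel = saturation(0.8, 0.6, 0.2)   # 0.75
gray_pixel = saturation(0.5, 0.5, 0.5)   # 0.0
```

This is why the color veil can be read off (and acted on) through the saturation channel alone, without touching the hue.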
Figure 1 provides an overview of the color-balancing procedure used in the proposed method. Figure 1a is an asymmetrically color-cast sandstorm image; Figure 1b shows the variation in saturation (variation in the circle's position: the brown dotted arrow and circle mark the saturation position of the yellowish-cast sandstorm image; the black dotted arrow and blue circle mark the saturation position of the color-balanced sandstorm image); Figure 1c is the color-balanced image. As shown in Figure 1a-c, changing the image saturation can compensate for the image's color veil. Because the color-cast image can be balanced by a change of saturation, this work proposes an image-adaptive color-balancing algorithm based on variations in saturation.
The sandstorm image has two types of color cast: reddish (yellowish), caused by sand particles, or greenish (bluish), caused by smog particles. A reddish color-veiled image has the highest mean value in the red channel and, due to channel attenuation, lower mean values in the blue and green channels. A greenish color-veiled image has the highest mean value in the green channel and, due to channel attenuation, lower mean values in the red and blue channels. Accordingly, to enhance reddish and orange color-cast sandstorm images, this paper proposes the following color-balancing parameters:

ratio_rb = m(I_r) − m(I_b), (1)
ratio_rg = m(I_r) − m(I_g), (2)

where ratio_rb and ratio_rg are the differences between the average values of the red and blue channels and of the red and green channels, respectively, and m(·) is the average operator. If the sandstorm image has a reddish or yellowish color veil, the mean value of its red channel is larger than that of the other color channels; therefore, ratio_rb and ratio_rg are always greater than zero. The ratios are combined as

ratio_RY = (ratio_rb + ratio_rg) · ω, (3)

where ratio_RY is the ratio for a reddish or yellowish sandstorm image and ω is a controlling parameter set according to the image condition by Equation (4). If the image has a heavy reddish or yellowish color cast, the blue channel of the sandstorm image is depleted and its average value is the lowest; however, because the average value of the reversed blue channel is then the highest, the weight remains fairly uniform. Likewise, if the image has a light color veil, the average values of the color channels are fairly uniform, and so are those of the reversed channels. Therefore, through Equation (4), the ratio adapts to the image conditions.
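The mean-difference ratios above can be sketched in a few lines (a minimal numpy illustration, assuming the image is an H × W × 3 RGB array in [0, 1]; the function name is ours, and `omega` stands in for the image-adaptive weight of Equation (4), whose exact definition is not reproduced here):

```python
import numpy as np

def reddish_ratio(img, omega=0.5):
    """Sketch of Eqs. (1)-(3): mean-channel differences for a reddish or
    yellowish cast. `omega` is a placeholder for the adaptive weight."""
    m_r, m_g, m_b = img.reshape(-1, 3).mean(axis=0)
    ratio_rb = m_r - m_b                   # Eq. (1)
    ratio_rg = m_r - m_g                   # Eq. (2)
    return (ratio_rb + ratio_rg) * omega   # Eq. (3)

# A synthetic reddish-cast image yields a positive ratio; a neutral gray
# image yields exactly zero, since all channel means coincide.
reddish = np.zeros((4, 4, 3))
reddish[..., 0], reddish[..., 1], reddish[..., 2] = 0.8, 0.5, 0.2
```

For the reddish example, ratio_rb = 0.6 and ratio_rg = 0.3, so the combined ratio is 0.45 with `omega` = 0.5.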

Color-Compensation Measures for Greenish or Bluish Images
A reddish color-cast image has imbalanced color channels, with the red channel more abundant than the others. Meanwhile, if the image has a greenish or bluish color veil, the average value of the green channel is higher than those of the other channels; that is, the green-channel average exceeds the red-channel average. Therefore, if ratio_rb or ratio_rg is less than or equal to zero, to enhance the greenish color-cast image, this paper uses the differences between the channel averages:

ratio_gr = m(I_g) − m(I_r), (5)
ratio_gb = m(I_g) − m(I_b), (6)

where ratio_gr and ratio_gb are the differences between the average values of the green channel and the red and blue channels, respectively.
ratio_GB = (ratio_gr + ratio_gb) · ω, (7)

where ratio_GB is the ratio for a greenish or bluish color-cast image.

Color Compensation Using Image-Adaptive Measures
The ratios obtained by Equations (1)-(7) are applied in Equation (8) to balance the image based on saturation, where S_p is the saturation channel enhanced by the proposed method, S is the saturation channel of the input image, and ratio_φ, φ ∈ {RY, GB}, is the image-adaptive ratio of the greenish or reddish color-cast image obtained by Equations (1)-(7). Hong et al. [9] enhance hazy images by increasing the saturation with a constant value, which is not image-adaptive, whereas ratio_φ adapts to the image. Using Equations (1)-(8), even when a color channel of the image is depleted, the enhanced image attains naturally balanced color channels.
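A per-pixel sketch of this saturation-based balancing follows. Since the exact form of Equation (8) is not reproduced in this text, the multiplicative update S_p = S · (1 + ratio), the clamping, and the function name below are all assumptions made for illustration only:

```python
import colorsys

def balance_pixel(r, g, b, ratio):
    """Adjust only the HSV saturation of one RGB pixel (values in [0, 1]).
    The update S_p = S * (1 + ratio) is an ASSUMED stand-in for Eq. (8);
    a negative ratio lowers saturation and weakens a color veil."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    s_p = min(max(s * (1.0 + ratio), 0.0), 1.0)  # keep S_p in [0, 1]
    return colorsys.hsv_to_rgb(h, s_p, v)
```

With ratio = −1, a yellowish pixel such as (0.8, 0.6, 0.2) collapses to neutral gray (0.8, 0.8, 0.8) while its value channel is preserved, mirroring Figure 1's observation that moving the saturation alone can compensate the color veil.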
Figure 2 shows balanced results for a color-cast image and a non-color-cast image. Figure 2b shows the color-balanced image obtained by Hong et al.'s [9] method. The image improved by Hong et al. [9] still has a shifted color because this method increases saturation by a constant value to enhance hazy images. Meanwhile, the proposed color-balancing algorithm, shown in Figure 2c, performs well for both heavily color-cast and non-color-cast images thanks to its image-adaptive saturation variations. Therefore, the proposed color-balancing algorithm is suitable for enhancing sandstorm images.


Hybrid Transmission Network
The color-balanced image has hazy features, similar to a dim image. Moreover, because haze particles are distributed asymmetrically, a dehazing procedure is required to enhance the image. Existing dehazing methods usually use the dark channel prior (DCP) [1], which is useful for estimating the transmission map of a single image. However, because a constant kernel size is used to estimate the image's dark regions, the estimated map produces a block-like artifact. Meanwhile, because a convolutional neural network (CNN) uses various kernel sizes, it can generate a transmission map naturally; therefore, this work estimates the transmission map with a CNN. Training a neural network requires varied data, and because acquiring real transmission maps is a challenging task, a synthetic dataset is used. However, because a synthetic dataset does not contain all image circumstances, the enhanced image can contain artifacts. The transmission map is defined as [1]

t(x) = e^(−β d(x)), (9)

where β is a scattering parameter and d(x) is the depth map of the image. The transmission map changes with β and shows diverse features depending on whether β is low or high. Therefore, this work generates a suitable transmission map through variation of β and calls the result the ground truth transmission map:

t_i(x) = e^(−β_i d(x)), (10)
t_g(x) = (1/N) ∑_{i=1}^{N} t_i(x), (11)

where t_g(x) is the ground truth transmission map, N is the number of β_i values, t_i(x) is the i-th transmission map, and β_i ∈ [0.5, 2) at 0.1 intervals. The ground truth transmission map generated by Equations (10) and (11) has diverse features. To generate an image-adaptive transmission map, this work uses a hybrid transmission map based on the theories of the dark channel prior (DCP) [1] and the bright channel prior (BCP) [24]. The DCP estimates the dark regions of an image; however, if the image has bright regions, such as sky regions, the estimated regions remain bright rather than dark, and an image enhanced with the estimated DCP shows an artificial effect. Because the BCP [24] estimates the bright regions, hybridizing the DCP [1] and BCP [24] yields a more natural transmission map. Therefore, this work uses a hybrid transmission map with the DCP [1] and BCP [24]. The dark channel, bright channel, and transmission map are obtained as

I_d(x) = min_{y∈Ω(x)} ( min_{c} I_c(y)/A_c ), (12)
I_b(x) = max_{y∈Ω(x)} ( max_{c} I_c(y)/A_c ), (13)
t(x) = 1 − I_d(x), (14)

where I_d(x) is the dark channel, I_b(x) is the bright channel, Ω(x) is the patch used to estimate the dark or bright region, A_c is the backscatter light, c ∈ {r, g, b}, and t(x) is the transmission map obtained by reversing the dark channel (or, analogously, from the bright channel). In Equations (12)-(14), a fixed kernel size is applied to estimate dark or bright regions, and because the transmission map is estimated by reversing the dark or bright channel, the enhanced image can acquire a blocked effect from the constant kernel size. Lee [19] designed a neural network applying the DCP theory [1]. Therefore, to estimate a transmission map without blocked phenomena, this work uses a multi-scale CNN, which has diverse kernel sizes, together with the DCP [1] and BCP [24] theories. In the brief design of the neural network, l_t_d(x) is the transmission layer obtained by applying the DCP theory [1], l_t_b(x) is the transmission layer obtained by applying the BCP theory [24], l_d(x) is the dark channel layer with minimum pooling, l_b(x) is the bright channel layer with maximum pooling, cat(·) is the concatenation layer, and the rectified linear unit (ReLU) [25] is used as the activation function after each convolution layer and post-arithmetic operation. This work applies the theories of the DCP [1] and BCP [24] through minimum pooling and maximum pooling, respectively. Moreover, to capture various image characteristics, multi-scale convolutional neural networks are applied.
Figure 3 provides an overview of the proposed neural networks and the individual networks. Figure 3a provides an overview of the proposed method, while Figure 3b shows the networks of the dark channel: brown is the minimum pooling layer, sky blue is the convolution layer, green is the up-sampling layer, and dark blue is the concatenation layer. This network has 10 convolution layers, 2 minimum pooling layers, 3 up-sampling layers, and 4 concatenation layers. Figure 3c shows the networks of the multi-scale bright channel: yellow is the maximum pooling layer, sky blue is the convolution layer, dark blue is the concatenation layer, and green is the up-sampling layer. This network has 8 convolution layers, 2 maximum pooling layers, 2 up-sampling layers, and 2 concatenation layers. Figure 3d shows the hybrid network: sky blue indicates the convolution layer and dark blue the concatenation layer. This network has 2 convolution layers and 1 concatenation layer. Moreover, a yellow rectangular shape is shown in Figure 3b-d.
Figure 4 shows ground truth transmission maps, the transmission map generated by the proposed algorithm, and existing transmission maps. In the existing methods established by He et al. [1], Santra et al. [20], Ren et al. [17], Meng et al. [2], and Zhao et al. [5], the bright region is too dark or too bright; however, the proposed algorithm estimates both bright and dark regions suitably. Therefore, the proposed algorithm is competitive in terms of transmission map estimation.
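The ground-truth map construction can be sketched directly (a minimal numpy illustration, assuming an H × W depth map with larger values meaning farther scenes; combining the per-β maps by simple averaging is our assumption about how the diverse maps are merged):

```python
import numpy as np

def ground_truth_transmission(depth):
    """Sketch of Eqs. (9)-(11): t_i(x) = exp(-beta_i * d(x)) for
    beta_i in [0.5, 2.0) at 0.1 intervals (N = 15), averaged into t_g(x)."""
    betas = np.arange(0.5, 2.0, 0.1)                   # the beta_i grid
    t_i = np.exp(-betas[:, None, None] * depth[None])  # one map per beta
    return t_i.mean(axis=0)                            # averaged t_g(x)
```

Near pixels (depth close to 0) give transmission close to 1, while distant pixels give transmission close to 0, matching the exponential model of Equation (9).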

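A patch-based sketch of the dark and bright channels of Equations (12) and (13) follows; `img` is assumed to be already normalized by the backscatter light A_c, and the last function shows how the network-side minimum pooling can be expressed through ordinary max pooling (min_pool(x) = −max_pool(−x)), which is how a dark-channel-style layer is commonly realized in frameworks that ship only max pooling:

```python
import numpy as np

def dark_and_bright_channels(img, patch=3):
    """Sketch of Eqs. (12)-(13): per-pixel minimum/maximum over the color
    channels and over a patch x patch neighborhood Omega(x).
    `img` is an H x W x 3 array, assumed pre-divided by A_c."""
    chan_min = img.min(axis=2)              # min over c in {r, g, b}
    chan_max = img.max(axis=2)              # max over c in {r, g, b}
    pad = patch // 2
    lo = np.pad(chan_min, pad, mode="edge")
    hi = np.pad(chan_max, pad, mode="edge")
    h, w = chan_min.shape
    dark = np.empty((h, w))
    bright = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            dark[y, x] = lo[y:y + patch, x:x + patch].min()
            bright[y, x] = hi[y:y + patch, x:x + patch].max()
    return dark, bright

def min_pool_2x2(x):
    """Minimum pooling via max pooling: min_pool(x) = -max_pool(-x)."""
    h, w = x.shape
    return -(-x).reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

Reversing the dark channel as in Equation (14), t(x) = 1 − I_d(x), then gives the prior-based transmission estimate that the hybrid network refines.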

The Training Environment Set
The color-balanced image has diverse features, such as dusty or hazy appearances. Therefore, to enhance hazy images suitably, the training dataset should also be diverse. To train the neural network, this work used the D-HAZY dataset [27], which contains 1449 original images with synthetic hazy images and depth maps. During training, 10% of the 1449 images were used for validation and 90% for training. Moreover, a hybrid loss function combining the mean squared error (MSE) and the structural similarity index measure (SSIM) [28] was applied:

L_mse = (1/n) ∑ e², (15)
SSIM = ((2 µ_t µ_G + C_1)(2 σ_tG + C_2)) / ((µ_t² + µ_G² + C_1)(σ_t² + σ_G² + C_2)), (16)
L_ssim = 1 − SSIM, (17)
L = L_mse + L_ssim, (18)

where L_mse is the MSE loss function, e is the error, L_ssim is the SSIM loss function, µ_t is the average intensity of the target image, µ_G is the average intensity of the generated image, σ_tG is the correlation coefficient, σ_t is the standard deviation of the target image, σ_G is the standard deviation of the generated image, and C_1 and C_2 are constant values. Using Equation (18), the loss value can be adjusted more suitably because SSIM [28] and MSE indicate the similarity between two objects in different ways; the Adam optimizer [29] is used. The training batch size and validation batch size were set to 8, while the learning rate and weight decay were, respectively, 0. Figure 5 shows the variation in the loss function and the accuracy during training; both gradually converge. Additionally, the detection in adverse weather nature (DAWN) dataset [30], which has 323 natural sandstorm images, was used to validate the trained model.
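The hybrid loss can be sketched as follows (a simplification: global image statistics are used in place of the windowed SSIM of [28], and the C_1, C_2 values below are the conventional defaults for unit dynamic range, not values taken from this paper):

```python
import numpy as np

def hybrid_loss(target, generated, c1=0.01**2, c2=0.03**2):
    """Sketch of the MSE + SSIM hybrid loss: L = L_mse + (1 - SSIM),
    with SSIM computed from global means, variances, and covariance."""
    t = target.ravel()
    g = generated.ravel()
    l_mse = np.mean((t - g) ** 2)
    mu_t, mu_g = t.mean(), g.mean()
    var_t, var_g = t.var(), g.var()
    cov_tg = np.mean((t - mu_t) * (g - mu_g))
    ssim = (((2 * mu_t * mu_g + c1) * (2 * cov_tg + c2))
            / ((mu_t**2 + mu_g**2 + c1) * (var_t + var_g + c2)))
    return l_mse + (1.0 - ssim)
```

For identical images, both terms vanish, so the loss is zero; the MSE term penalizes pixel-wise error while the SSIM term penalizes structural dissimilarity, which is why combining them adjusts the loss "in different ways".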


Image Recovery
The sandstorm image has color-casted characteristics due to the color of sand particles.In order to improve this phenomenon, this work proposed a color-balancing algorithm based on saturation.The balanced image seems hazy.Therefore, to enhance the image, this work used the CNN with a hybrid transmission map.This section, using the color-balanced image and generated transmission map, recovered the image as follows [1,4,[31][32][33]: where J c (x) is the enhanced image; x is the pixel location; I c B (x) is the color-balanced image obtained using the proposed method; t p (x) is the generated transmission map; t 0 sets 0.1 to prohibit divided 0; A c B is the backscatter light of the balanced image obtained by He et al. [1] method.Moreover, to refine the enhanced image, this work applied a guided image filter [34] as follows: where J c G (x) is guided filtered image; G f {•} is the guided filter; K is kernel size, set as 16; eps was set as 0.1 2 ; J c en (x) is the refined enhanced image; ratio was set as 5. Figure 6 shows the color-balanced image, transmission map, and enhanced image obtained by the methods of He et al. [1] and Santra et al. [20].Figure 6b shows the colorbalanced image; Figure 6c,d shows the transmission map and enhanced image obtained by He et al. [1] using Figure 6b. Figure 6e,f shows the transmission map and enhanced image obtained by Santra et al. [20] method using Figure 6b. Figure 6g,h shows the transmission map and enhanced image obtained by the proposed algorithm using Figure 6b.The enhanced images obtained by He et al. [1] and Santra et al. [20] contain an artificial effect due to the transmission map.Meanwhile, the enhanced image obtained by the proposed algorithm has no artificial effect. was set as 0.1 ;  () is the refined enhanced image;  was set as 5.
Figure 6 shows the color-balanced image, transmission map, and enhanced image obtained by the methods of He et al. [1] and Santra et al. [20]. Figure 6b shows the color-balanced image; Figure 6c,d shows the transmission map and enhanced image obtained by He et al. [1] using Figure 6b; Figure 6e,f shows the transmission map and enhanced image obtained by Santra et al.'s [20] method using Figure 6b; and Figure 6g,h shows the transmission map and enhanced image obtained by the proposed algorithm using Figure 6b. The enhanced images obtained by He et al. [1] and Santra et al. [20] contain an artificial effect due to the transmission map, whereas the enhanced image obtained by the proposed algorithm has no artificial effect.

Experimental Results and Discussion
The proposed algorithm balances the color-casted sandstorm image, and the balanced image retains a hazy characteristic. To enhance the hazy image, this work applied a CNN-based dehazing algorithm. This section shows that the proposed algorithm is suitable for enhancing sandstorm images. The assessment procedure is divided into two categories: a subjective assessment and an objective assessment. Moreover, because the sandstorm image has a casted color, the subjective assessment is divided into two branches, color correction and image enhancement, through comparison with state-of-the-art methods.


Subjective Assessment
The sandstorm image has a yellowish or reddish color cast. Therefore, to assess the enhanced sandstorm image subjectively, two procedures are required: color balancing and image enhancement. Accordingly, this work was divided into two branches, color correction and image enhancement, and compared with state-of-the-art methods such as those of Al Ameen [10], Shi et al. [11], Shi et al. [15], Gao et al. [14], Ren et al. [17], He et al. [1], Meng et al. [2], Santra et al. [20], Zhao et al. [5], Hong et al. [9], and Yu et al. [21]. Moreover, to conduct comparisons in various environments, the detection in adverse weather nature (DAWN) dataset [30] was used, which contains 323 natural sandstorm and dust storm images.

Color Correction
This section shows how the proposed method balances image color compared with state-of-the-art methods, such as those of Al Ameen [10], Shi et al. [11], Shi et al. [15], and Hong et al. [9], using the DAWN dataset [30].
Figures 7 and 8 compare the color-balancing effect with that obtained by state-of-the-art methods. Shi et al.'s [11,15] methods contain a color-balancing procedure; however, the color-balanced image has a bluish artificial effect because these methods balance the color channels by shifting the means of the color components. The color-balanced image obtained by Al Ameen's [10] method has a yellowish or reddish casted color because this method uses a constant value and is not image-adaptive. Because Hong et al. [9] enhance hazy images by increasing saturation, if the image contains a color cast, the balanced image still contains a color-shifted effect, which may thicken as the saturation increases. In contrast, the color-balanced image obtained by the proposed method has no color-casted effect.
Figures 9 and 10 show a performance comparison of the proposed method and state-of-the-art methods. He et al.'s [1] and Meng et al.'s [2] methods enhance hazy images; however, when applied to color-casted images, the enhanced image has an artificial color because these methods have no color-compensation procedure. Shi et al.'s [11,15] algorithms enhance sandstorm images, even those containing a color veil; however, because they balance the color channels by shifting the means of the color components, the results sometimes have an artificial bluish color. Gao et al. [14] enhanced the sandstorm image, but the enhanced image seems dim because of the transmission map. Al Ameen [10] enhanced the sandstorm image with a lightly casted color because this method uses a constant value and is not image-adaptive. Ren et al.'s [17] and Santra et al.'s [20] methods enhance hazy images using a CNN; however, these methods have no color-compensation procedure, and the enhanced image shows a color shift. Moreover, the image enhanced by Hong et al. [9] also has a casted color because this method does not balance color adaptively but increases the saturation of the image, so the enhanced image has a thicker casted color veil. Yu et al. [21] enhanced hazy images; however, because this method has no color-compensation procedure, the enhanced image has color-shift components. Zhao et al. [5] improved a lightly color-casted sandstorm image; however, because this method has no suitable color-correction procedure, the enhanced image still contains reddish or orange color-cast ingredients. Meanwhile, the image enhanced by the proposed algorithm has no shifted color and no artificial effect. Therefore, the proposed algorithm is suitable for application in the sandstorm image-enhancement field.

Objective Assessment
The proposed algorithm balances the color-casted sandstorm image, and Figures 7 and 8 show that its performance compares favorably with state-of-the-art methods. Moreover, the dehazing procedure used by the proposed method is superior to state-of-the-art methods in subjective terms. This section assesses how suitable the proposed method is for the enhancement of sandstorm images. To objectively assess the enhanced image, this work used two metrics: the natural image quality evaluator (NIQE) [35] and the underwater image quality measure (UIQM) [36]. The NIQE [35] metric indicates how natural an image is: the lower the NIQE score, the more natural the enhanced image and the better its quality. Meanwhile, the UIQM [36] score shows how well an image is enhanced in terms of its contrast, colorfulness, and sharpness: the higher the UIQM score, the better enhanced the image. Moreover, to assess the generated transmission map, the SSIM [28] and MSE metrics are used.
Table 1 shows how similar each transmission map is to the ground truth. According to the SSIM [28] score, the transmission map obtained by Ren et al. [17] is more similar to the ground truth image than those obtained using He et al.'s [1], Santra et al.'s [20], Zhao et al.'s [5], and Meng et al.'s [2] methods, although it is dimmer. Meanwhile, according to the MSE score, the transmission map used in He et al.'s [1] method is more similar to the ground truth image than those of Ren et al. [17], Santra et al. [20], Zhao et al. [5], and Meng et al. [2]. The transmission map obtained by the proposed method is more similar to the ground truth image than those of He et al. [1], Santra et al. [20], Ren et al. [17], Meng et al. [2], and Zhao et al. [5] according to both the SSIM [28] and MSE scores.
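For reference, the two map-comparison metrics can be sketched as follows. This is an illustrative simplification: the SSIM below uses whole-image statistics, whereas the metric cited in the paper [28] averages the same expression over local windows, so scores will differ from a full implementation.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two transmission maps (lower is more similar)."""
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, L=1.0):
    """Whole-image SSIM (higher is more similar).

    Uses global means, variances, and covariance instead of the usual
    sliding-window average; L is the dynamic range of the maps.
    """
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2      # standard stabilizing constants
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + C1) * (2 * cov + C2)) /
                 ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2)))
```

A map compared against itself yields an MSE of 0 and an SSIM of 1, the best possible scores under both metrics.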
Table 1. The comparison of transmission maps through the SSIM [28] and MSE metrics with state-of-the-art methods, shown in Figure 4 and on the D-Hazy dataset [27] (PM is the proposed method).

Tables 2 and 3 show the NIQE [35] scores for Figures 9 and 10. The lower the NIQE score, the better enhanced and more natural the image. The NIQE score obtained by He et al. [1] is higher than that obtained by Gao et al. [14] in some images because He et al.'s [1] method has no color-compensation procedure. Gao et al. [14] obtained a higher NIQE score than Al Ameen [10], though the image enhanced by Gao et al.'s [14] method has a smaller color shift than that obtained using Al Ameen's [10] method. Meng et al.'s [2] method has a lower NIQE score than Gao et al.'s [14] method, although the enhanced image contains a casted color. Shi et al. [15] obtained a lower NIQE score than Meng et al. [2] because Shi et al. [15] used a color-compensation procedure. Ren et al. [17] obtained a higher NIQE score than Shi et al. [15] because Ren et al.'s [17] method contains no color-compensation procedure; for the same reason, Shi et al. [11] obtained a lower NIQE score than Ren et al. [17] in some images. Santra et al. [20] obtained a higher NIQE score than Shi et al. [11] in some images because Santra et al.'s [20] method contains no color-compensation procedure. Hong et al. [9] obtained a higher NIQE score than Shi et al. [11] because Shi et al. [11] used a color-balancing procedure. Yu et al. [21] obtained a higher NIQE score than Zhao et al. [5] and Shi et al. [11] because Shi et al.'s [11] method contains a color-compensation procedure. However, an image with a shifted color can still obtain a lower NIQE score than a non-color-shifted image; therefore, the NIQE score is not an absolute but a referenceable measure. The proposed method has a lower NIQE score than the other methods.

Tables 4 and 5 compare the performance of the enhanced images obtained with state-of-the-art methods and the proposed method through the UIQM [36] score; a higher score denotes a better-enhanced image. He et al. [1] obtained a higher UIQM score than Gao et al. [14], although He et al.'s [1] method contains no color-compensation procedure. Gao et al.'s [14] method obtained a lower UIQM score than Al Ameen's [10] method, although the image enhanced using Al Ameen's [10] method has a casted color. Meng et al. [2] obtained a lower UIQM score than Al Ameen [10] because Meng et al.'s [2] method has no color-compensation procedure. Shi et al. [15] obtained a higher UIQM score than Meng et al. [2] because Shi et al. [15] used a color-compensation procedure. Ren et al. [17] obtained a lower UIQM score than Shi et al. [15] because Ren et al.'s [17] method has no color-compensation procedure. Shi et al. [11] obtained a lower UIQM score than Ren et al. [17], although Shi et al.'s [11] method contains a color-compensation procedure. Santra et al. [20] obtained a higher UIQM score than Shi et al. [11] in some images, although Santra et al.'s [20] method contains no color-compensation procedure and the enhanced image has a casted color. Hong et al. [9] obtained a lower UIQM score than Shi et al. [11] because Shi et al.'s [11] method contains a color-compensation procedure. Zhao et al. [5] obtained a higher UIQM score than Yu et al. [21] and Gao et al. [14], although Gao et al.'s [14] method contains a color-compensation procedure. However, an image with a casted color can obtain a higher UIQM score than one without; therefore, the UIQM is likewise not an absolute but a referenceable measure. The image enhanced by the proposed method has a higher UIQM score than the other methods.

Tables 6 and 7 compare the enhanced images obtained with state-of-the-art methods and the proposed method through the NIQE [35] and UIQM [36] scores averaged over Figures 9 and 10 and the DAWN dataset [30]. Table 6 shows the averaged NIQE scores. The existing dehazing methods contain no color-compensation procedure, yet the NIQE score of a color-casted image is sometimes lower than that of a non-color-casted image; likewise, the UIQM score of an enhanced image with a casted color can be higher than that of a non-color-casted image. Therefore, the NIQE and UIQM metrics are not absolute but referenceable measures. The NIQE score of the proposed method was lower, and its UIQM score higher, than those obtained for the other methods.

Conclusions
The sandstorm image has an asymmetrically casted color, such as yellowish or reddish, due to the color-channel attenuation caused by sand particles. If the color-casted components are not considered when enhancing the sandstorm image, the enhanced image has an artificial color. Therefore, this work balanced the image using a saturation-based color-correction algorithm for asymmetrically casted colors. The balanced image contains no color veil but still seems hazy. Moreover, as the distribution of the haze ingredients is asymmetrical, a dehazing procedure was needed to enhance the hazy image; therefore, this work obtained a transmission map based on a CNN with hybrid theories, namely the dark-channel prior and the bright-channel prior. The enhanced image has no artificial effect and appears natural. The contribution of this work is a color-correction algorithm based on saturation that uses the differences among the average values of the color channels in sandstorm images with various color casts. Moreover, this method can easily and widely compensate images, even when a color channel is rare due to strong attenuation, and by using the hybrid transmission map, the proposed algorithm enhances images naturally, even when they contain regions that are too bright or too dark. The next aim of this work is to enhance images naturally in low-light circumstances and thick, hazy environments by pursuing image-adaptive measures to balance the color and estimate the transmission map.

Figure 1.
Figure 1. Overview of the color-balancing procedure: (a) sandstorm image; (b) overview of the color-balancing procedure with [16] (blue and brown circles with brown and black dotted arrows are variations of saturation); (c) color-balanced image.

Figure 2.
Figure 2. The performance comparison of color-balancing algorithms: (a) sandstorm image with asymmetrically color-casted or non-color-casted images; (b) improved image obtained by Hong et al. [9]; (c) color-balanced image obtained by the proposed method.

Symmetry 2023, 15, x FOR PEER REVIEW
Figure 3 provides an overview of the proposed neural networks and the individual networks. Figure 3a provides an overview of the proposed method, while Figure 3b shows the networks of the dark channel: brown is the minimum pooling layer, sky blue is the convolution layer, green is the up-sampling layer, and dark blue is the concatenate layer. This network has 10 convolution layers, 2 minimum pooling layers, 3 up-sampling layers, and 4 concatenate layers. Figure 3c shows the networks of the multi-scale bright channel: yellow is the maximum pooling layer, sky blue is the convolution layer, dark blue is the concatenate layer, and green is the up-sampling layer. This network has 8 convolution layers, 2 maximum pooling layers, 2 up-sampling layers, and 2 concatenate layers. Figure 3d shows the hybrid network: sky blue indicates the convolution layer, and dark blue the concatenate layer. This network has 2 convolution layers and 1 concatenate layer. Moreover, the yellow rectangular shapes shown in Figure 3b-d indicate the grouping of unit layers, where 1/2 and x2 indicate variations of size (downsizing by 1/2, upsizing by x2), and the number below the layers indicates the channel size. The networks partially applied a U-net [26] architecture with multi-scale resolution to obtain the various image characteristics.
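The dark-channel and bright-channel quantities that these two sub-networks are built around can be illustrated directly. This is a minimal sketch assuming the usual definitions (a per-pixel minimum or maximum over the color channels and a local patch); the default patch size of 15 is a common choice in the dark-channel-prior literature, not a value taken from this paper.

```python
import numpy as np

def _patch_filter(plane, patch, reduce_fn):
    """Apply reduce_fn (np.min or np.max) over a patch x patch window at each pixel."""
    pad = patch // 2
    padded = np.pad(plane, pad, mode='edge')     # replicate borders
    out = np.empty_like(plane)
    H, W = plane.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = reduce_fn(padded[i:i + patch, j:j + patch])
    return out

def dark_channel(img, patch=15):
    """Dark channel: minimum over color channels, then over a local patch."""
    return _patch_filter(img.min(axis=2), patch, np.min)

def bright_channel(img, patch=15):
    """Bright channel: maximum over color channels, then over a local patch."""
    return _patch_filter(img.max(axis=2), patch, np.max)
```

By construction, the dark channel never exceeds the bright channel at any pixel, which is what lets the two priors bracket the transmission from below and above.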

Figure 3.
Figure 3. The hybrid transmission networks: (a) overview of the hybrid transmission network; (b) transmission network of the dark channel; (c) transmission network of the bright channel; (d) the hybrid network.

Figure 4 shows ground-truth transmission maps, the transmission map generated by the proposed algorithm, and the existing transmission maps. In the existing methods established by He et al. [1], Santra et al. [20], Ren et al. [17], Meng et al. [2], and Zhao et al. [5], the bright region is too dark or too bright; however, the transmission map generated by the proposed algorithm estimates bright and dark regions suitably. Therefore, the proposed algorithm is competitive in terms of transmission map estimation.

Figure 4.
Figure 4. The comparison of transmission maps: (a) input; (b) ground-truth transmission map; (c) transmission map developed by Zhao et al. [5]; (d) transmission map developed by He et al. [1]; (e) transmission map developed by Meng et al. [2]; (f) transmission map developed by Santra et al. [20]; (g) transmission map developed by Ren et al. [17]; (h) transmission map developed by the proposed method.


where C1 and C2 are constant values. Using Equation (18), the loss value can be adjusted more suitably because both SSIM [28] and MSE indicate the similarity between two objects in different ways, and the Adam optimizer [29] is used. Moreover, the training batch size and validation batch size were set to 8, the learning rate was 0.0001, and 20 epochs were set. During training, each epoch uses 163 iterations with 8 batches, i.e., 1304 images, so over 20 epochs, 163 iterations, and 8 batches, 26,080 images are trained. During validation, each epoch uses 18 iterations with 8 batches, i.e., 144 images, so over 20 epochs, 18 iterations, and 8 batches, approximately 2880 images are used for validation. Moreover, to show accuracy during training, the SSIM [28] measure is used. The hardware environment was an Intel® Core™ i7-8700 CPU @ 3.20 GHz, 32 GB RAM, a 12 GB GeForce RTX 2060, and a 6 GB GeForce GTX 1660 Super.
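Since Equation (18) itself is not reproduced in the text, the sketch below assumes a common weighted combination of the two similarity terms, c1 · (1 − SSIM) + c2 · MSE; the weights c1 and c2 and the whole-image SSIM are illustrative simplifications, not the authors' exact formulation.

```python
import numpy as np

def hybrid_loss(pred, target, c1=0.5, c2=0.5):
    """Assumed hybrid loss: c1 * (1 - SSIM) + c2 * MSE.

    SSIM here uses whole-image statistics for brevity; a training
    implementation would average it over local windows instead.
    """
    C1, C2 = 0.01 ** 2, 0.03 ** 2                  # SSIM stabilizing constants
    mu_p, mu_t = pred.mean(), target.mean()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    ssim = ((2 * mu_p * mu_t + C1) * (2 * cov + C2)) / \
           ((mu_p ** 2 + mu_t ** 2 + C1) * (pred.var() + target.var() + C2))
    mse_val = np.mean((pred - target) ** 2)
    return float(c1 * (1.0 - ssim) + c2 * mse_val)
```

Combining the two terms this way penalizes both structural dissimilarity (through SSIM) and pixel-wise error (through MSE), which is the rationale the text gives for the hybrid loss; a perfect prediction drives both terms, and hence the loss, to zero.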

Figure 5.
Figure 5. The variation in loss value and accuracy: (a) loss value; (b) accuracy.


Figure 6.
Figure 6. The comparison of the enhanced image with the transmission map: (a) input; (b) color-balanced image obtained by the proposed method; (c) transmission map obtained by He et al. [1]; (d) enhanced image obtained by He et al. [1]; (e) transmission map obtained by Santra et al. [20]; (f) enhanced image obtained by Santra et al. [20]; (g) transmission map obtained by the proposed method; (h) enhanced image obtained by the proposed method (the transmission maps and enhanced images of the comparison algorithms use the color-balanced image (b)).


Table 2.
Table 2. The comparison of enhanced images through the NIQE [35] metric with state-of-the-art methods in Figure 9 (a lower score denotes a better-enhanced image; PM is the proposed method).

Table 3.
Table 3. The comparison of enhanced images through the NIQE [35] metric with state-of-the-art methods in Figure 10 (a lower score denotes a better-enhanced image; PM is the proposed method).

Table 4.
Table 4. The comparison of enhanced images through the UIQM [36] metric with state-of-the-art methods in Figure 9 (a higher score denotes a better-enhanced image; PM is the proposed method).

Table 5.
Table 5. The comparison of enhanced images through the UIQM [36] metric with state-of-the-art methods in Figure 10 (a higher score denotes a better-enhanced image; PM is the proposed method).

Table 6.
Table 6. The comparison of enhanced images through the averaged NIQE [35] metric with state-of-the-art methods in Figures 9 and 10 and the DAWN dataset [30] (a lower score denotes a better-enhanced image; PM is the proposed method).

Table 7.
Table 7. The comparison of enhanced images through the averaged UIQM [36] metric with state-of-the-art methods in Figures 9 and 10 and the DAWN dataset [30] (a higher score denotes a better-enhanced image; PM is the proposed method).