Article

Experimental Evaluation of PSO Based Transfer Learning Method for Meteorological Visibility Estimation

by Wai Lun Lo 1,*, Henry Shu Hung Chung 2 and Hong Fu 3
1 Department of Computer Science, Chu Hai College of Higher Education, 80 Castle Peak Road, Castle Peak Bay, Tuen Mun, Hong Kong 999077, China
2 Department of Electrical Engineering, City University of Hong Kong, Hong Kong 999077, China
3 Department of Mathematics and Information Technology, The Education University of Hong Kong, Hong Kong 999077, China
* Author to whom correspondence should be addressed.
Atmosphere 2021, 12(7), 828; https://doi.org/10.3390/atmos12070828
Submission received: 30 April 2021 / Revised: 23 June 2021 / Accepted: 25 June 2021 / Published: 28 June 2021
(This article belongs to the Special Issue Vision under Adverse Weather Conditions)

Abstract

Estimation of meteorological visibility from image characteristics is a challenging problem in the research of meteorological parameter estimation. Meteorological visibility can be used to indicate the transparency of the atmosphere, an indicator that is important for transport safety. This paper summarizes the outcomes of an experimental evaluation of a Particle Swarm Optimization (PSO) based transfer learning method for meteorological visibility estimation. It proposes a modified transfer learning method for visibility estimation that uses PSO feature selection. Image data were collected at a fixed location with a fixed viewing angle. The database images went through a pre-processing step of gray-averaging to provide information on static landmark objects for the automatic extraction of effective regions from the images. Effective regions are extracted from the image database, and the image features are then extracted by a pre-trained neural network. A subset of the image features is selected by the PSO method to form the image feature vector for each effective sub-region. The image feature vectors are then used to estimate the visibilities of the images using Multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method gives an accuracy of more than 90% for visibility estimation, and that the proposed method is effective and robust.

1. Introduction

Meteorological visibility can be used to indicate the transparency of the air and the quality of the atmosphere, and this parameter is an important indicator for sea and air transport [1]. The accuracy of visibility measurement is affected by a number of external factors such as airlight, the objects available, light scattering and absorption, and suspended particles in the air [2,3]. The classical approach to Meteorological Optical Range (MOR) [4] measurement is the manual evaluation method, in which a well-trained meteorological observer visually estimates the largest visible distance. The accuracy of the manual evaluation method is subject to the number of reference targets available in the environment [5] and the judgement of the observer [6]. Another popular visibility measurement method is the visibility meter method, in which the meter estimates the visible distance by measuring the atmospheric transmittance or extinction coefficient [5]. The forward-scattering and back-scattering methods [7] are among the most popular metering methods, and most visibility meters employ the forward-scattering method as it gives reasonable accuracy and performance at lower cost. However, forward-scattering visibility meters with high accuracy are very expensive, require specialized installation and calibration skills, and usually give good accuracy over a relatively short range only. Therefore, there has been much past research on visibility estimation based on digital image analysis methods. With the advancement of image processing and machine learning methods, this approach can give reasonable accuracy at relatively low cost.
Huang [8] proposed a new approach to restore the visibility of outdoor digital images in the presence of haze, fog and sandstorms. Farhan [9] proposed a novel deep neural network method for enhancing visibility under foggy conditions. Ling [10] presented a deep transmission network for dehazing and enhancing image quality. Ju [11] proposed a fast single-image defogging technique and an Atmospheric Scattering Model (ASM) to overcome the problems of non-uniform illumination and multiple scattering. Zhu [12] proposed a regression model to forecast the visibility at Urumqi International Airport. Li [13] proposed an intelligent digital method to estimate visibility using a Generalized Regression Neural Network (GRNN).
Chen [14] proposed a novel Radial Basis Function (RBF) neural network method to eliminate haze while retaining the visible structure edges and the image brightness. Chaabani [15] proposed a deep learning method with feature extraction and a Support Vector Machine (SVM) to enable safer driving in foggy weather. Palvanov [16] proposed a Deep Hybrid Convolutional Neural Network (DHCNN) to estimate visibility under heavy fog. Yang [17] proposed a deep learning method to estimate relative atmospheric visibility from digital images. Choi [18] proposed a new method to estimate visibility from closed-circuit television (CCTV) images under sea fog. Ren [19] proposed multi-scale convolutional neural networks to remove haze from images. Lu [20] proposed a hierarchical sparse representation method to estimate image visibility.
Li [21] proposed a Deep Convolutional Neural Network (DCNN) method to estimate visibility with insufficient visibility-labeled data. Outay [22] proposed to use an AlexNet Deep Convolutional Neural Network (DCNN) with a Support Vector Machine (SVM) classifier to estimate visibility in foggy weather. Zhang [23] proposed a visibility prediction method using multimodal fusion. Palvanov [24] reviewed the latest research results on visibility estimation. Lo [25] proposed to use a Multiple Support Vector Regression (MSVR) model and deep learning methods to estimate visibility.
Malm [26] proposed the use of cameras to monitor visibility impairment. De Bruine [27] studied the impact of precipitation evaporation on the atmospheric aerosol distribution. Hautière [28] proposed a method for automatic fog detection and estimation of visibility distance through use of an onboard camera. Yang [29] proposed a method for single-image deraining via visibility-enhanced recurrent wavelet learning. Cheng [30] proposed a variational approach to atmospheric visibility estimation in fog and haze. Chaabani [31] proposed a neural network approach to visibility range estimation under foggy weather conditions.

2. Methodology

2.1. Related Work

Some of the past research on deep neural network methods for visibility estimation has studied network optimization, network performance improvement, shortening of the computation time [12,13] and accuracy improvement. However, accuracy improvement may also increase the computation load [17]. Fusion methods for increasing the adaptability of the extracted features have been proposed [21], but redundant features would affect the training efficiency and estimation accuracy. This paper proposes a visibility estimation algorithm for effective feature extraction that minimizes redundant information for meteorological visibility estimation.
For the transfer learning approach, Li [13] proposed to use a Convolutional Neural Network (CNN) (AlexNet) to extract image features from low-resolution webcam images, with the visibilities estimated by a Generalized Regression Neural Network (GRNN). However, the estimation range is relatively narrow, and the estimation accuracy is only about 61.8%. In order to improve the estimation accuracy, past research selected image subregions based on prerequisite landmark target information and human judgement [25]. A single subregion is used in that method and the overall accuracy is about 87%.
In [13], the paper attempted to identify relevant features from the large number of features extracted by a pre-trained Convolutional Neural Network (CNN) model (AlexNet). However, the test accuracy was very low (61.8%), the prediction error was high, and reference objects were required for classification. The case of using the whole image as reference was also considered in [13]. It was found that subregions which focus on selected landmark objects could give better accuracy compared with the prediction results obtained from the whole image.
In [25], subregion images were divided into different classes according to visibility range, and the features were imported into different SVMs for visibility estimation [25]. However, the subregions were selected based on human judgement of landmark objects, and some important image information or areas may have been ignored. The feature values at the output layer of the VGG16 network were used for SVM parameter estimation. The case of using the whole image without fusion of estimates as reference was also considered in [25], in which the estimation accuracy was again about 87%, whereas the method proposed in [32] could give accuracy up to 90%.
In [17], the model could be adapted to a small practical dataset with sparsely distributed visibilities, but it incurred a high computation load and a long training time. The method employed manual annotations instead of sensors, and the visibility assessment range is relatively narrow.
In [24], Palvanov proposed a deep integrated convolutional neural network (VisNet) to estimate image visibility from webcam weather images. The training is quite time-consuming, as the method has a preprocessing stage and several integrated convolutional neural network (CNN) streams, and the calculation time is relatively long.
In order to solve the problem of automatic subregion selection, a novel deep learning neural network method for meteorological visibility estimation based on an image feature fusion method was proposed in [32], which can find the most effective image subregions through image pre-processing and a gray-level averaging process. In [32], the pre-processing gray-weighted averaging was performed in the first step and the coordinates of the effective subregions were located. The subregions' image features were then extracted by deep learning neural networks (the VGG-16 network [33], VGG-19 network [33], ResNet_50 network [34] and DenseNet network [35]). The subregions' regression models and visibility estimates were obtained by Support Vector Machine (SVM) methods. The overall visibility estimate of the whole digital image was then obtained by a fusion method.
However, in [32], all the image features generated by the pre-trained Artificial Neural Networks (the VGG-16, VGG-19, DenseNet and ResNet_50 networks) are used for the next stage of SVM modelling. A large feature vector dimension increases the computation load significantly, especially in matrix calculations, and the training time also increases. It is expected that not all the image features at the output layer of the ANN provide effective information for the estimation of visibility. Therefore, this paper proposes a modification of the algorithm in [32] which reduces the dimension of the feature vector. The evaluation in [32] was based on a dataset provided by the Hong Kong Observatory (HKO), which measured the visibilities with sensors and a camera installed at fixed locations selected by the HKO. In this paper, we evaluate the effectiveness of the proposed method by using a self-built visibility monitoring system set up at the selected campus site.
The procedures of the proposed method are summarized in Figure 1 and Figure 2. For the evaluation process shown in Figure 2, image pre-processing is first performed for all the images in the database and a comprehensive gray-average image Ḡ[I] is obtained. After applying the adaptive threshold algorithm to the gray-average image, the locations of the effective subregions (Rjk) are determined. Image features are then extracted from the effective subregions by the pre-trained deep learning neural network. In Figure 2, Rjk represents the kth subregion of the jth image, and Fjk is the feature vector generated from the kth subregion of the jth image. A subset of the image features at the last convolution output layer of the ANN is selected according to the PSO particle Xi, and the feature vectors (Fjk) of the sub-regions are generated. From the feature vectors of each sub-region, the Support Vector Regression models of the subregions are derived by the support vector machine (SVM) method, and the visibility estimates of the subregions (vjk) are obtained. The visibility weighted fusion method is then used to derive the final overall visibility estimate (vj) of the whole image.

2.2. Particle Swarm Optimization (PSO) Approach

For the feature extraction process, feature vectors are extracted by using the VGG-16 [33], VGG-19 [33], ResNet_50 [34] and DenseNet [35] networks. However, we can expect that not all the feature values at the output layers of the above networks are useful for visibility estimation. Therefore, we propose to use the particle swarm optimization (PSO) [36,37,38] method to select a subset of the output layers' feature values to form the feature vectors. For the ith particle of the PSO method, Xi = [Xi1, Xi2, …, Xik, …, Xin] represents a subset of the whole set of feature values generated at the last convolution output layer of the ANN. The mean absolute error (MAE) is used as the PSO fitness function (objective value) for the feature value selection process:
Φ(Xi) = (1/N) Σj=1…N |ej| = (1/N) Σj=1…N |vj − v̂j|    (1)
where N is the total number of testing data samples, ej is the absolute error for the jth image, and v̂j and vj are the estimated and actual visibility of the jth image, respectively. For the ith PSO particle Xi, the feature vectors of each subregion (Fjk) are generated for the images. The overall visibility v̂j of the jth image is estimated based on the method in Figure 2. The fitness value of Xi is then determined by Equation (1), and the binary PSO method [39] is used to select the best candidate Xi. The position (Xi) and velocity (Vi) of the ith particle are updated by:
Xi = [Xi1, Xi2, …, Xik, …, Xin],    Xik ∈ {0,1}    (2)
Vi = [Vi1, Vi2, …, Vik, …, Vin]    (3)
Vik(t + 1) = ωVik(t) + C1 r1k (Pik − Xik) + C2 r2k (Pgk − Xik)    (4)
Xik(t + 1) = Xik(t) + Vik(t + 1),    i = 1…Np    (5)
where:
  • Pi and Φibest are the particle's own best position and its objective value.
  • Pg and Φgbest are the best position found by the whole swarm and its objective value.
  • ω is the inertia weight; C1 and C2 are the cognitive and social acceleration coefficients.
  • r1k and r2k are random numbers in [0,1]; Np is the population size.
  • Xik ∈ {0,1}; Xik = 1 means the kth feature value is selected in the ith particle. The total number of "1"s in the Xi vector is kept constant at 80% of the maximum feature dimension (n).
It is expected that the set of feature vectors maps to the visibility value vj. The step-by-step procedure for the PSO method is summarized as follows.
  • Initialize the particles’ velocities Vi and positions Xi.
  • Update the particles' velocities Vi and positions Xi by Equations (4) and (5).
  • Compare the estimated visibility v̂j with the actual visibility vj for all the database images (j = 1…N). Compute the objective value Φ(Xi) for each particle Xi by Equation (1).
  • Update the best position Pi and the best objective value Φibest for each particle Xi.
  • Update the global best position Pg and the global best objective value Φgbest.
  • Repeat steps 2 to 5 until the maximum number of generations is reached.
The fitness function of the PSO process is designed to select an optimal subset of the feature values at the last convolution output layer of the ANN. The selected feature values will be used to form the feature vectors for the visibility estimation.
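As a concrete illustration, a minimal Python sketch of this binary PSO feature-selection loop is given below. The fitness evaluator (evaluate_mae here) is only a placeholder for the full per-subregion SVR and fusion pipeline of Figure 2, and the top-k re-binarization used to keep exactly 80% of the feature values selected is our assumption for enforcing the constraint on Xi; the paper itself follows the binary PSO of [39].

```python
import numpy as np

def evaluate_mae(mask, features, visibilities):
    """Placeholder fitness (Equation (1)): MAE of a trivial estimator on the
    selected feature subset. In the paper, v_hat would come from the fused
    per-subregion SVR estimates of Figure 2."""
    selected = features[:, mask.astype(bool)]
    v_hat = selected.mean(axis=1)              # stand-in for the SVR estimate
    return np.mean(np.abs(visibilities - v_hat))

def binary_pso(features, visibilities, n_particles=20, n_gen=50,
               keep_ratio=0.8, w=0.7, c1=1.5, c2=1.5, seed=None):
    rng = np.random.default_rng(seed)
    n = features.shape[1]
    k = int(keep_ratio * n)                    # number of "1"s kept constant
    X = np.zeros((n_particles, n), dtype=int)  # particle positions, Equation (2)
    for i in range(n_particles):
        X[i, rng.choice(n, k, replace=False)] = 1
    V = rng.uniform(-1, 1, (n_particles, n))   # particle velocities, Equation (3)
    P = X.copy()                               # personal best positions
    P_fit = np.array([evaluate_mae(x, features, visibilities) for x in X])
    g, g_fit = P[P_fit.argmin()].copy(), P_fit.min()   # global best
    for _ in range(n_gen):
        r1 = rng.random((n_particles, n))
        r2 = rng.random((n_particles, n))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # Equation (4)
        score = X + V                          # Equation (5), then re-binarize:
        X = np.zeros_like(X)                   # keep the k highest scores so the
        for i in range(n_particles):           # 80% selection constraint holds
            X[i, np.argsort(score[i])[-k:]] = 1
        fit = np.array([evaluate_mae(x, features, visibilities) for x in X])
        better = fit < P_fit
        P[better], P_fit[better] = X[better], fit[better]
        if fit.min() < g_fit:
            g, g_fit = X[fit.argmin()].copy(), fit.min()
    return g, g_fit
```

Here, features would hold the ANN output-layer feature values of the database images (one row per image), and visibilities the corresponding visibility meter readings.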

2.3. Database Construction

This paper uses the image database provided by the self-built visibility monitoring system; the images were collected at the campus site of Chu Hai College of Higher Education. The image database is composed of 6048 images collected by an automatic digital camera with a fixed viewing angle at the visibility measurement station of the campus site. The visibility monitoring system operated from 9 a.m. to 6 p.m., July 2020 to November 2020, and the visibilities at the campus site were measured by a computer-controlled visibility meter. The block diagram and the equipment setup for the visibility monitoring system are shown in Figure 3. Samples of the database images are shown in Figure 4.

2.4. Method Overview

In this paper, 6048 images with a fixed viewing angle were used as the experimental database. Due to the interference from moving objects in the images (e.g., ship hulls and clouds in the sky), the images need to be pre-processed before the image features are extracted from the effective regions. After extracting the features from the effective subregions, feature values are selected by the PSO method and the resulting feature vectors are imported into the Support Vector Regression (SVR) models for training and evaluation. Compared with the general approach, the proposed method gives more efficient and accurate visibility estimation by using the information from the effective subregions.

2.4.1. Image Preprocessing

Due to the interference from moving objects such as ship hulls and the clouds in the sky during image collection, the moving objects change their shapes and positions across the images. Therefore, the images need to be preprocessed to identify the locations of the effective subregions before feature extraction. First, gray-weighted averaging is applied to all the images in the database and a comprehensive gray average image Ḡ[I] is obtained. After determining the threshold value, the images in the database were adaptively segmented to obtain the subregion images. Samples of database images with interference from moving objects are shown in Figure 5. As the viewing angle of the camera was fixed while the background objects in the image (e.g., ship hulls and clouds) changed with time, the background moving objects would interfere with the visibility evaluation and cause errors in the visibility estimation. In order to obtain accurate visibility estimation, it was necessary to eliminate these interferences and background objects from the images. Therefore, it was preferable to extract the effective subregions with the redundant background moving objects removed from the images.
In Figure 5, the main components of the images are the sky, buildings and the vessels in the water area. However, only buildings and stationary objects are relevant for visibility estimation, while the water area and sky cannot provide useful information. Buildings and fixed landmark objects can be considered static objects in the image, while moving vessels or clouds in the sky are dynamic objects. Visibility can be considered as the largest distance that can be observed from the viewing point. As the moving objects (e.g., sky, water and vessels) change in position or shape with time, they should not be used as references for accurate visibility assessment. Therefore, we should filter out or remove the dynamic objects so as to extract the effective area with static objects, which can provide useful image features for visibility estimation. The effective subregions cover the buildings or stationary structures (e.g., islands) at different landmark distances from the viewing point. In this paper, we use the gray average weighted image to locate the effective subregions of the images.
The gray-level average weighted image of the image database was obtained by applying gray-level weighted averaging to each pixel of the images in the database. Images with 1920 × 1080 pixels were used in this paper. The average gray level of the pixels at coordinate (xn, ym) (n = 1…nmax, m = 1…mmax) is given by Equation (6):
ḡ(xn, ym) = (1/N) Σj=1…N gj(xn, ym)    (6)
where gj(xn, ym) is the gray level of the jth image at coordinate (xn, ym) (j = 1…N). An example of the comprehensive gray average image is shown in Figure 6. The images in the database are sorted according to their collection time, so these sequences of images provide the moving trajectories of the moving objects. After extensive simulations, it was found that moving objects can be filtered out by performing gray-level averaging over these sequences of images. Fixed or static objects remain in the comprehensive gray-weighted average image, and the image features in the effective subregions with static objects can then be extracted for visibility estimation. By performing gray-level average processing on the images in the entire database, the overall gray-level average image is obtained.
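A minimal sketch of this averaging step is given below, assuming the database images are stored as files sorted by collection time; the file pattern and function name are illustrative.

```python
import glob
import numpy as np
from PIL import Image

def comprehensive_gray_average(pattern="database/*.jpg"):
    """Compute the comprehensive gray average image (Equation (6)).
    Averaging the time-sorted database suppresses moving objects
    (vessels, clouds) while static landmarks remain visible."""
    paths = sorted(glob.glob(pattern))     # sorted by collection time
    acc = None
    for p in paths:
        g = np.asarray(Image.open(p).convert("L"), dtype=np.float64)
        acc = g if acc is None else acc + g
    return acc / len(paths)               # the comprehensive image G[I]
```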

2.4.2. Effective Subregions’ Extraction

After obtaining the comprehensive gray average image Ḡ[I] of the image database, it is necessary to extract the effective area in the images Ij. Image segmentation is a popular method for extracting feature regions of an image, and the adaptive threshold segmentation method is used here to extract the effective subregions [32]. For different subregions of the image, it can calculate different thresholds adaptively. Figure 7 shows the histogram of the gray-level averaged image [37]. We used the gray-level distribution of the histogram to determine the threshold value.
After applying the adaptive threshold algorithm, we found the best threshold value (185), as shown in Figure 8. According to the selected threshold value, the coordinates of the highest and lowest points of the effective area were located. As shown in Figure 6, parallel lines were used to divide the effective area into subregions. The image features are then extracted as described in the next section. In summary, the step-by-step procedure for the effective area extraction is as follows.
  • Apply gray-level averaging to all the images in the database to derive the comprehensive image Ḡ[I].
  • Generate the grayscale distribution of Ḡ[I] and search the gray-level distribution curve from 0 to find the local maxima pk (peaks) of the distribution curve. The local minimum between the highest peak p1 and the second highest peak p2 is selected as the threshold gray level (δ).
  • Apply the adaptive threshold segmentation algorithm to Ḡ[I] to obtain an output image. Scan the output image from top to bottom to find the y level (ymin) at which the pixels' gray levels start to exceed the threshold. Scan the output image from bottom to top to find the y level (ymax) at which the gray levels exceed the threshold.
  • The image area S with gray level higher than δ is then equally subdivided into subregions Si (e.g., Nr = 5). Effective subregions are then extracted from the results of step 3.
In step 4, the image area S is divided into Nr subregions. The selection of Nr should ensure that each subregion matches (or exceeds) the ANN's input size requirement, so that the neural network can be fully utilized and accurate estimation can be achieved by an effective fusion process. A small value of Nr (Nr ≤ 3) reduces the level of fusion and the estimation accuracy, and the proposed method becomes similar to a non-fusion, single-image approach. Too high a value of Nr causes the subregion images to be smaller than the input requirement (224 × 224) for the ANN's feature extraction, so the pre-trained ANN may not be fully utilized. With an image width of 1920, as described in Section 3, Nr can be chosen between 5 and 8. In this paper, Nr is selected as 5 to ensure that the subregion images are larger than the ANNs' input requirements, the ANNs can be fully utilized, and accurate estimation results can be achieved.
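The following Python sketch summarizes steps 1 to 4 under the stated choices; the simple peak and valley search on the raw histogram is our assumption (the paper reports a best threshold of δ = 185 for its database), and the effective area is cut into Nr vertical strips as in Figure 6.

```python
import numpy as np

def extract_subregions(gray_avg, n_regions=5):
    """Locate the effective area of the comprehensive gray average image by
    histogram thresholding, then split it into n_regions vertical strips."""
    hist, _ = np.histogram(gray_avg, bins=256, range=(0, 256))
    # local maxima (peaks) of the gray-level distribution
    peaks = [i for i in range(1, 255)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])  # two highest peaks
    delta = p1 + int(np.argmin(hist[p1:p2 + 1]))  # valley between them = threshold
    mask = gray_avg > delta
    rows = np.where(mask.any(axis=1))[0]
    y_min, y_max = int(rows[0]), int(rows[-1])    # vertical extent of area S
    bounds = np.linspace(0, gray_avg.shape[1], n_regions + 1, dtype=int)
    return [(y_min, y_max, int(bounds[k]), int(bounds[k + 1]))
            for k in range(n_regions)]            # (top, bottom, left, right)
```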

2.4.3. Region Feature Extraction and Visibility Evaluation

After segmenting the sub-regions, the features of the sub-regions need to be extracted to provide input variables for the subsequent visibility estimation. A sub-region image contains a large amount of information, including features related to visibility estimation and redundant information due to interference. If all the information of the sub-region image were directly mapped to the visibility value, the amount of computation for training would be too large, and the redundant information might cause errors in the final results. Therefore, feature extraction is an indispensable step for the sub-region images. Feature extraction is one of the most basic operations in image processing: it treats the image as a data set and the pixels' features as its elements, and it aims at finding the elements that best represent the data characteristics. Depending on the needs of image classification and parameter evaluation, the method and dimension of feature extraction can differ greatly. In this paper, four common deep learning feature extraction networks based on the Keras platform were used: the VGG-16, VGG-19, DenseNet and ResNet_50 networks. The extracted features were used as the input variables for the Support Vector Regression model, and visibility evaluation was then performed on the effective subregions. However, as the number of feature values at the output layer of the above ANNs is still large, the computation load of the visibility estimation algorithm is quite high. Therefore, we propose to use the PSO method to find the optimal subset of the feature values at the output convolution layer of the ANN for visibility estimation.
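A sketch of the per-subregion feature extraction and regression step is shown below, using ResNet_50 on the Keras platform with global average pooling as one plausible reading of the "last convolution output layer"; the pooling choice, the 224 × 224 resized input batch and the function names are assumptions, and mask is the binary selection vector produced by the PSO step.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVR

# Pre-trained feature extractor; pooling="avg" yields one feature vector
# per subregion image from the last convolutional block.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def region_features(batch):
    """batch: (n, 224, 224, 3) float array of subregion images,
    already resized to the ANN input requirement."""
    return extractor.predict(preprocess_input(batch.copy()), verbose=0)

def train_region_svr(batch, visibilities, mask):
    """Fit one SVR model for a subregion, using only the
    PSO-selected feature values."""
    F = region_features(batch)[:, mask.astype(bool)]
    return SVR(kernel="rbf").fit(F, visibilities)
```

One such SVR model is trained per effective subregion, and their outputs (vjk) are later fused into the overall estimate vj.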

3. Experiment Results and Analysis

3.1. Data and Equipment

In order to evaluate the method in this paper, the experiments were conducted on servers with a 2.6 GHz Intel i7-8700 CPU and 32 GB of memory. The image resolution is 1920 × 1080 pixels. The ground-truth visibility for each training image comes from the visibility meter. The visibility monitoring system collected 6696 images during the data collection period. Images with too low a light intensity (e.g., at dawn and in the evening), out-of-focus images, and images taken at instants when the visibility meter returned error messages were removed from the database. The final database consists of a total of 6048 images. We selected 4536 images randomly as the training set and the remaining 1512 images as the test set. The distribution of visibility in the database is shown in Table 1 and Figure 9.

3.2. Result and Analysis

In the experiments, the training set images were used for model training, and the estimated visibility values of the test set images were collected and analyzed for performance evaluation. In order to evaluate the performances of the four neural networks, image features were extracted from the different networks and the selection of feature values was optimized by the PSO algorithm. Feature vectors were then formed and imported into the support vector machine for regression analysis. The results of the different networks are presented in Section 3.2.1, and the results for different visibility ranges in Section 3.2.2.

3.2.1. Comparison of Different Feature Extraction Networks

Figure 10 shows the test results of the visibility evaluation for these four networks, and Table 2 lists the accuracies (goodness of fit) of the four network models. The PSO method reduces the dimension of the feature vectors while the accuracy is kept at about 90%.
From Figure 10, we can see that the test results of each network are quite good, with the accuracies of their visibility estimates all above 90%. Even though all four networks achieved accuracies above 90%, it can be seen from the figure that the data points of the DenseNet and ResNet_50 networks are more concentrated around the datum line, which shows that using the DenseNet and ResNet_50 networks to extract image features is more robust and stable.

3.2.2. Performance Comparison of Different Feature Extraction Networks

Table 3 shows the experimental results of the different networks for different visibility ranges. The ResNet_50 and DenseNet networks are more sensitive to image attenuation and can extract valid features from images at different levels, which increases the extraction rate of valid features and thus makes them more sensitive in acquiring valid features for the visibility estimation regression. The ResNet_50 network is recommended for image feature extraction, especially in the low visibility range, as it is more stable; it also shows greater stability and robustness in the other ranges. This is partly because more low-visibility image samples were included in the image database, and low-visibility images contain more sensitive image structure and detail.
Static objects observed in different visibility ranges can be located by performing gray-level averaging on images within each visibility range. Gray averaging filters out the moving objects, and objects beyond the visibility range do not appear in the image. We can therefore detect the observable objects for different visibility ranges by using gray averaging and the threshold segmentation method [32]. To facilitate the gray-averaging preprocessing step, the images can be sorted into different visibility ranges and subregions extracted for the training of the SVR models. The estimated visibilities obtained by the multiple SVR models are fused together to provide a more accurate piecewise approximation of the mapping surface between the image feature values and the visibilities. The PSO method reduces the computation load of the algorithm by optimizing the dimension of the feature vectors.
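For completeness, a hedged sketch of the final fusion step is shown below. The paper describes a "visibility weighted fusion" of the per-subregion SVR estimates without specifying the weights, so the uniform default here is purely an assumption.

```python
import numpy as np

def fuse_estimates(v_jk, weights=None):
    """Fuse per-subregion visibility estimates v_jk into one value v_j.
    With weights=None this degenerates to a plain average; the actual
    weighting scheme of the paper may differ."""
    v_jk = np.asarray(v_jk, dtype=float)
    w = np.ones_like(v_jk) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * v_jk) / np.sum(w))
```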

4. Conclusions

Estimation of meteorological visibility from image characteristics is a challenging problem in the research of meteorological parameter estimation. Meteorological visibility can be used to indicate the weather transparency, and this indicator is important for transport safety. Most of the past research has not focused on effective feature and subregion extraction. This paper proposes a modified PSO method for effective feature extraction and visibility estimation. Effective subregions are determined by gray averaging and the adaptive threshold segmentation method, and the gray-weighted averaging also removes the interference of moving objects. The selection of feature values at the output layer of the ANN is optimized by PSO, and the feature vectors are formed by selecting a subset of the output feature values of the ANN (DenseNet, ResNet_50, VGG-16 and VGG-19). The feature vectors are then imported into the SVR models, and the overall visibility estimate is found by applying the fusion method to the visibility estimates of the sub-regions.
The accuracy of visibility estimation depends on the number of landmark objects available for the overall process; this external factor depends on the environment of the image collection site. This paper proposed a modification of the algorithm in [32] which reduces the dimension of the feature vector, and we evaluated the effectiveness of the proposed algorithm using a self-built visibility meter and digital camera setup at the selected campus site. This paper summarized the outcomes of the experimental evaluation of this PSO-based transfer learning method for meteorological visibility estimation. Image data were collected at a fixed location with a fixed viewing angle. The database images went through a pre-processing step of gray-averaging to provide the information for automatic extraction of the effective regions from the images. Effective subregions were extracted from the image database, and the image features were then extracted by the pre-trained neural networks. A subset of the image features at the output layer of the ANN was selected by the Particle Swarm Optimization (PSO) method, and the image feature vectors for each effective sub-region were generated. The image feature vectors were then used to estimate the visibilities of the images using the Multiple Support Vector Regression (SVR) models. Experimental results show that the proposed method gives an accuracy of more than 90% for visibility estimation and that the proposed method is effective and accurate.

Author Contributions

W.L.L.: conceived and designed the algorithms, analyzed the data, raised funding, managed the project, and wrote and drafted the original manuscript. H.F.: reviewed the manuscript. H.S.H.C.: analyzed the data and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received external funding from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Reference No.: UGC/FDS13/E02/18).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are not publicly available due to restrictions.

Acknowledgments

The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Reference No.: UGC/FDS13/E02/18).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Khademi, S.; Rasouli, S.; Hariri, E. Measurement of the atmospheric visibility distance by imaging a linear grating with sinusoidal amplitude and having variable spatial period through the atmosphere. J. Earth Space Phys. 2016, 42, 449–458.
  2. Zhuang, Z.; Tai, H.; Jiang, L. Changing Baseline Lengths Method of Visibility Measurement and Evaluation. Acta Opt. Sin. 2016, 36, 0201001.
  3. Song, H.; Chen, Y.; Gao, Y. Visibility estimation on road based on lane detection and image inflection. J. Comput. Appl. 2012, 32, 3397–3403.
  4. Liu, N.; Ma, Y.; Wang, Y. Comparative Analysis of Atmospheric Visibility Data from the Middle Area of Liaoning Province Using Instrumental and Visual Observations. Res. Environ. Sci. 2012, 25, 1120–1125.
  5. Minnis, P.; Doelling, D.R.; Nguyen, L.; Miller, W.F.; Chakrapani, V. Assessment of the Visible Channel Calibrations of the VIRS on TRMM and MODIS on Aqua and Terra. J. Atmos. Ocean. Technol. 2008, 25, 385–400.
  6. Chattopadhyay, P.; Ray, A.; Damarla, T. Simultaneous tracking and counting of targets in a sensor network. J. Acoust. Soc. Am. 2016, 139, 2108.
  7. Zhang, J.; Zhang, G.Y.; Sun, G.F.; Su, S.; Zhang, J.L. Calibration Method for Standard Scattering Plate Calibration System Used in Calibrating Visibility Meter. Acta Photonica Sin. 2017, 46, 312003.
  8. Huang, S.C.; Chen, B.H.; Wang, W.J. Visibility Restoration of Single Hazy Images Captured in Real-World Weather Conditions. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 1814–1824.
  9. Farhan, H.; Jechang, J. Visibility Enhancement of Scene Images Degraded by Foggy Weather Conditions with Deep Neural Networks. J. Sens. 2016, 1–9.
  10. Ling, Z.; Fan, G.; Gong, J.; Guo, S. Learning deep transmission network for efficient image dehazing. Multimed. Tools Appl. 2019, 78, 213–236.
  11. Mingye, J.; Zhenfei, G.; Dengyin, Z.; Qin, H. Visibility Restoration for Single Hazy Image Using Dual Prior Knowledge. Math. Probl. Eng. 2017, 2017, 8190182.
  12. Zhu, L.; Zhu, G.D.; Han, L.; Wang, N. The Application of Deep Learning in Airport Visibility Forecast. Atmos. Clim. Sci. 2017, 7, 314–322.
  13. Li, S.Y.; Fu, H.; Lo, W.L. Meteorological Visibility Evaluation on Webcam Weather Image Using Deep Learning Features. Int. J. Comput. Theory Eng. 2017, 9, 455–461.
  14. Chen, B.H.; Huang, S.C.; Li, C.Y.; Kuo, S.Y. Haze Removal Using Radial Basis Function Networks for Visibility Restoration Applications. IEEE Trans. Neural Netw. Learn. Syst. 2017, 99, 1–11.
  15. Chaabani, H.; Werghi, N.; Kamoun, F.; Taha, B.; Outay, F.; Yasar, A.-U.-H. Estimating meteorological visibility range under foggy weather conditions: A deep learning approach. Procedia Comput. Sci. 2018, 141, 478–483.
  16. Palvanov, A.; Cho, Y.I. DHCNN for Visibility Estimation in Foggy Weather Conditions. In Proceedings of the 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS), Toyama, Japan, 5–8 December 2018.
  17. You, Y.; Lu, C.W.; Wang, W.M.; Tang, C.-K. Relative CNN-RNN: Learning Relative Atmospheric Visibility from Images. IEEE Trans. Image Process. 2018, 28, 45–55.
  18. Choi, Y.; Choe, H.-G.; Choi, J.Y.; Kim, K.T.; Kim, J.-B.; Kim, N.-I. Automatic Sea Fog Detection and Estimation of Visibility Distance on CCTV. J. Coast. Res. 2018, 85, 881–885.
  19. Ren, W.; Pan, J.; Zhang, H.; Cao, H.; Yang, M.-H. Single Image Dehazing via Multi-scale Convolutional Neural Networks with Holistic Edges. Int. J. Comput. Vis. 2019, 128, 240–259.
  20. Lu, Z.; Lu, B.; Zhang, H.; Fu, Y.; Qiu, Y.; Zhan, T.; et al. A method of visibility forecast based on hierarchical sparse representation. J. Vis. Commun. Image Represent. 2019, 58, 160–165.
  21. Li, Q.; Tang, S.; Peng, X.; Ma, Q. A Method of Visibility Detection Based on the Transfer Learning. J. Atmos. Ocean. Technol. 2019, 36, 1945–1956.
  22. Outay, F.; Taha, B.; Chaabani, H.; Kamoun, F.; Werghi, N.; Yasar, A.-U.-H. Estimating ambient visibility in the presence of fog: A deep convolutional neural network approach. Pers. Ubiquitous Comput. 2019, 25, 51–62.
  23. Zhang, C.; Wu, M.; Chen, J.Y.; Chen, K.; Zhang, C.; Xie, C.; Huang, B.; He, Z. Weather Visibility Prediction Based on Multimodal Fusion. IEEE Access 2019, 7, 74776–74786.
  24. Palvanov, A.; Cho, Y. VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility. Sensors 2019, 19, 1343.
  25. Lo, W.L.; Zhu, M.M.; Fu, H. Meteorology Visibility Estimation by Using Multi-Support Vector Regression Method. J. Adv. Inf. Technol. 2020, 11, 40–47.
  26. Malm, W.; Cismoski, S.; Prenni, A.; Peters, M. Use of cameras for monitoring visibility impairment. Atmos. Environ. 2018, 175, 167–183.
  27. De Bruine, M.; Krol, M.; van Noije, T.; Le Sager, P.; Röckmann, T. The impact of precipitation evaporation on the atmospheric aerosol distribution in EC-Earth v3.2.0. Geosci. Model Dev. Discuss. 2017, 11, 1–34.
  28. Hautière, N.; Tarel, J.P.; Lavenant, J.; Aubert, D. Automatic fog detection and estimation of visibility distance through use of an onboard camera. Mach. Vis. Appl. 2006, 17, 8–20.
  29. Yang, W.; Liu, J.; Yang, S.; Guo, Z. Scale-Free Single Image Deraining Via Visibility-Enhanced Recurrent Wavelet Learning. IEEE Trans. Image Process. 2019, 28, 2948–2961.
  30. Cheng, X.B.; Yang, G.; Liu, T.; Olofsson, T.; Li, H. A variational approach to atmospheric visibility estimation in the weather of fog and haze. Sustain. Cities Soc. 2018, 39, 215–224.
  31. Chaabani, H.; Kamoun, F.; Bargaoui, H.; Outay, F.; Yasar, A.-U.-H. Neural network approach to visibility range estimation under foggy weather conditions. Procedia Comput. Sci. 2017, 113, 466–471.
  32. Li, J.; Lo, W.L.; Fu, H.; Chung, H.S.H. A Transfer Learning Method for Meteorological Visibility Estimation Based on Feature Fusion Method. Appl. Sci. 2021, 11, 997.
  33. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015.
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
  35. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
  36. Hu, M.; Wu, T.; Weir, J.D. An Adaptive Particle Swarm Optimization with Multiple Adaptive Methods. IEEE Trans. Evol. Comput. 2013, 17, 705–720.
  37. Zhan, Z.H.; Zhang, J.; Li, Y.; Chung, H.S.-H. Adaptive Particle Swarm Optimization. IEEE Trans. Syst. Man Cybern. Part B 2009, 39, 1362–1381.
  38. Han, H.; Lu, W.; Qiao, J. An Adaptive Multi-objective Particle Swarm Optimization Based on Multiple Adaptive Methods. IEEE Trans. Cybern. 2017, 47, 2754–2767.
  39. Cervante, L.; Xue, B.; Zhang, M.; Shang, L. Binary particle swarm optimisation for feature selection: A filter based approach. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, Australia, 10–15 June 2012.
Figure 1. PSO algorithm.
Figure 2. Particle Xi evaluation process.
Figure 3. (a) Block diagram for the visibility monitoring system (108 samples/day × 62 days = 6696). (b) Equipment testing at the campus site.
Figure 4. Samples of the database images.
Figure 5. Image samples with interference from moving objects.
Figure 6. Comprehensive gray average image with subregions.
Figure 7. Gray histogram of the comprehensive image after gray-weighted averaging.
Figure 8. Threshold segmentation image after gray-weighted averaging.
Figure 9. Distribution of data samples.
Figure 10. True and predicted visibility values for the various ANN networks.
Table 1. Distribution of visibilities for the image samples.

                                      Visibility Range (km)
                                      10–14  15–20  21–24  25–30  31–34  35–40
No. of training set sample images       270    527    700    884    915    735
No. of test set sample images           135    264    350    442    458    368
Total                                   405    791   1050   1326   1373   1103
Table 2. Experimental results of the proposed method using different networks (* reference data without PSO selection).

Network      Overall Accuracy (%)
VGG-16       90.26 (* 90.78)
VGG-19       90.32 (* 90.85)
DenseNet     91.46 (* 91.86)
ResNet_50    92.72 (* 92.23)
Table 3. Experimental results of different networks for different visibility ranges.

               Visibility Range (km)
Network        11–20   21–30   31–40
VGG-16 (%)     91.42   90.89   89.66
VGG-19 (%)     91.67   90.93   89.21
DenseNet (%)   94.34   93.41   91.23
ResNet_50 (%)  94.71   93.91   91.32
