Article

Leveraging Artificial Intelligence and Fleet Sensor Data towards a Higher Resolution Road Weather Model

1 IDLab, Faculty of Applied Engineering, University of Antwerp-IMEC, Sint-Pietersvliet 7, 2000 Antwerp, Belgium
2 Royal Meteorological Institute of Belgium, Ringlaan 3, 1180 Brussels, Belgium
3 Verhaert Masters in Innovation, Hogenakkerhoekstraat 21, 9150 Kruibeke, Belgium
4 Inuits-Open Source Innovators, Essensteenweg 31, 2930 Brasschaat, Belgium
* Author to whom correspondence should be addressed.
Sensors 2022, 22(7), 2732; https://doi.org/10.3390/s22072732
Submission received: 15 December 2021 / Revised: 17 February 2022 / Accepted: 31 March 2022 / Published: 2 April 2022
(This article belongs to the Special Issue The Weather and Pollution Sensing for Intelligent Transport Systems)

Abstract

Road weather conditions such as ice, snow, or heavy rain can have a significant impact on driver safety. In this paper, we present an approach to continuously monitor the road conditions in real time by equipping a fleet of vehicles with sensors. Based on the observed conditions, a physical road weather model is used to forecast the conditions for the following hours. This can be used to deliver timely warnings to drivers about potentially dangerous road conditions. To optimally process the large data volumes, we show how artificial intelligence is used to (1) calibrate the sensor measurements and (2) to retrieve relevant weather information from camera images. The output of the road weather model is compared to forecasts at road weather station locations to validate the approach.

1. Introduction

Weather conditions have a significant impact on the accident risk on road networks [1]. These accidents can lead to traffic jams, increased pollution, material costs, and personal injuries. Visibility on the road can be affected by fog or rain showers. The road surface is also affected by weather conditions; for example, snow or freezing rain can cause slippery road conditions. These conditions can create dangerous situations for drivers. Furthermore, they can also impact advanced driver assistance systems [2]. Technologies designed for self-driving vehicles also perform differently under different weather conditions [3].
To forecast dangerous road weather conditions, various dedicated road weather models (RWM) have been developed in the past [4,5,6,7,8,9]. These models are usually run for specific locations with fixed road weather stations (RWS) that provide the necessary observations, such as the road surface temperature. RWS do not always cover the entire road network: when sparsely scattered, they might miss very local phenomena. RWMs can be integrated into intelligent transport systems to increase safety and tackle growing emission and congestion problems [10,11]. In this work, we explore the potential improvement that a vehicle fleet equipped with sensors can bring to road weather models. Previous research has shown potential with regard to the assimilation of mobile road surface temperature observations [12], and various vendors have developed mobile sensors for this purpose [13]. Accurate forecasts of dangerous road conditions allow the distribution of real-time warnings to nearby vehicles to alert drivers, or to inform road management authorities, who can take preventative measures such as salting roads or remedial measures such as snow removal after an event. The Finnish Meteorological Institute recently introduced a system to distribute road weather data amongst vehicles [14], and in [15], the potential of floating car data was also addressed. The potential of vehicle-based observations is continuously increasing [16]: the value of the connected car market tripled between 2012 and 2019 [17] and is expected to grow further towards $212.7 billion by 2027 [18]. A fleet of smart connected vehicles could provide us with fine-grained observations, allowing the prediction of local road weather conditions and the generation of real-time warnings. Furthermore, these observations have the potential to improve numerical weather prediction (NWP) model forecasts [19,20].
A group of industrial stakeholders and researchers consisting of more than twenty partners from seven countries (Belgium, France, Portugal, Romania, Spain, Turkey, and South Korea) initiated the real-time location-aware road weather services composed from multi-modal data (SARWS) project [21]. The SARWS project aims to enable real-time weather services using data originating from large-scale vehicle fleets. This paper is written by the Belgian consortium, which aims to utilise vehicle-fleet observations for real-time local road weather condition detection.
Local measurements are gathered around the city of Antwerp in Belgium from a fleet of postal vans [22]. The vehicles are equipped with various external sensors. The collected data are first centralised and (pre-)processed in the car at the In-Car Smart Sensor Node (ICSSN) before being further distributed to the cloud. To obtain relevant information from the large data volumes, machine learning techniques are leveraged. In this paper, we create a convolutional neural network (CNN) to obtain information about the visibility and precipitation from the camera images, and we explore how neural networks can be used to calibrate the sensor measurements. We also demonstrate how the collected data can be used to enhance the road weather model of the Royal Meteorological Institute of Belgium (RMI) at a high resolution.
The next section presents the materials and methods that are used, together with the corresponding state of the art. The results are presented in Section 3, and the conclusions follow in Section 4.

2. Materials and Methods

2.1. Detection of Precipitation Type and Visibility from Camera Images

As described in Section 1, the ICSSN contains several sensors, which cover different complementary types of perception, to enhance road weather services. To detect precipitation, and more specifically the precipitation type (e.g., rain, snow, …), and visibility (fog), visual perception is investigated as an important sensing modality. Our approach to this multi-task problem of detecting precipitation type and visibility is a smart sensor, WeathercAIm, consisting of a camera combined with edge artificial intelligence (AI) computer vision (CV) processing. This smart sensor is part of the ICSSN and processes the camera images locally.
Recent research on weather detection from images applies both classical CV approaches and, with the rise of deep neural networks (DNNs) and especially CNNs for CV tasks, DNN-based methods [23,24]. DNN-based methods have the advantage that their accuracy increases with the amount of data available. In recent years, street-level images in combination with DNNs have shown the potential of AI for weather detection [23,24,25,26,27]. These related works are mainly theoretical, whereas we also deploy the WeathercAIm in an actual vehicle fleet and tackle the problem together with its more practical side and constraints.
From the practical side, one of the important aspects of the WeathercAIm design is where to deploy the AI model: in the cloud or locally on the ICSSN. Sending images frequently over the (radio) data network to be processed in the cloud would massively increase the network communication traffic; the bandwidth needed would drastically limit the scalability to large vehicle fleets. Moreover, to comply with the General Data Protection Regulation (GDPR), sending and storing data that could contain privacy-sensitive information in the cloud should be avoided. With these constraints, i.e., optimising communication and keeping privacy in mind, the choice for edge AI, where the images are immediately processed locally on the ICSSN, is evident [28,29,30,31]. There is no need for the images to be stored, not even locally. The available computing resources, e.g., processing power and memory, on edge platforms are typically lower than in the cloud [29,30,32]. Thus, edge AI has implications for the AI model that can be used, as it puts tighter constraints on the computing resources. Larger or complex AI architectures, e.g., using multiple parallel or cascaded CNNs and/or combined with other DNNs as in [23,25,26], could make the solution too computationally heavy or memory hungry to fit on an edge platform. The inference (detection processing) time will also be longer for more complex AI architectures. Another specific requirement of the SARWS project is that a prediction is desired for every road segment of 50–250 m, which puts an extra constraint on the inference time. Complex AI architectures will likely take too long to comply with these constraints on an edge platform.
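The bandwidth argument can be made concrete with a back-of-envelope calculation. All numbers below are illustrative assumptions, not figures from the project:

```python
# Compare uploading every camera frame to the cloud vs. uploading only the
# local edge-AI classification result, per vehicle.
frame_bytes = 200_000   # assumed ~200 KB per compressed 1920x1200 frame
label_bytes = 16        # assumed: class id + timestamp + confidence
segment_m = 50          # one prediction per ~50 m road segment
speed_mps = 25          # assumed highway speed, ~90 km/h

frames_per_s = speed_mps / segment_m      # one frame needed per segment
cloud_bps = frames_per_s * frame_bytes    # bytes/s when streaming raw images
edge_bps = frames_per_s * label_bytes     # bytes/s when sending only labels

ratio = cloud_bps / edge_bps              # bandwidth saving factor of edge AI
```

Under these assumptions a single vehicle streaming raw frames would need four orders of magnitude more uplink bandwidth than one sending only classification results, which is what limits cloud processing at fleet scale.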
The proposed design of WeathercAIm is a camera connected to the ICSSN, which runs the precipitation type and visibility AI model besides running and processing all other vehicle sensors. The camera is a Basler puA1920-30uc [33], a low-cost, compact, and low-weight area scan camera. It is placed behind the windshield close to the rear-view mirror. For the machine learning (ML) model, a supervised learning approach is used for multiclass classification, with a classifier based on a CNN, an architecture that has proven its merits in the computer vision field. We use a single CNN handling both precipitation type and visibility detection. The current implementation is the well-known, seasoned ResNet50v2 [34] backbone with a dense layer as the classification head. This was chosen as it is considered a reference for image classification with a good balance between learning capabilities, accuracy, needed compute resources, and inference time. It has been studied extensively in the literature [32,35] and has been shown to work on, and even serve as a reference benchmark for, edge platforms [31,36,37]. It was therefore likely to comply with the constraints discussed above and to be deployable in the field on the ICSSN edge platform, which made it a sensible choice. The CNN is trained via transfer learning, with the backbone network pre-trained on ImageNet [38]. Cross-entropy is used as the loss function, and different optimisers (stochastic gradient descent (SGD) and Nadam) are evaluated. To counter overfitting, image augmentation (i.e., rotation, horizontal flip, translation, and zoom), regularisation, and dropout layers are used. The CNN is implemented in Keras/TensorFlow. For inference and deployment on the actual target/ICSSN, TensorFlow was compiled from source to be able to run on, and use the optimisations of, the target-specific processor (Intel Atom x7-E3950) for optimal performance.
To train and validate ML models, having enough good data (and, in the case of supervised learning, good labelled data) is vital. The better the dataset represents the data in production, the better the performance of the ML model. The weather classes/labels of interest to enhance the road weather services, which need to be represented in the dataset, are:
  • Precipitation: rain, melting snow, snow, hail, no precipitation;
  • Visibility: fog, normal (no fog).
To gather a representative dataset, images were collected during the beginning of the data-gathering campaign, the so-called development phase, with the first 3 ICSSN-equipped vehicles between 23 October 2020 and 31 May 2021. We also performed a dataset survey to search for relevant publicly available weather-related datasets to build an initial model. Finally, the “Adverse Weather Dataset” (AWD) [39] was used for the first validation of the approach, as it covers most dataset requirements (except for the lack of hail). In addition to the weather classes of interest, the dataset also contains other classes such as dusk, overcast, and sunny. These were merged into the weather classes of interest; e.g., an image classified as just sunny corresponds to no precipitation (see the AWD weather class frequency distribution plots in Figure 1).
Although the AWD classes seem quite balanced, in reality, the occurrence of no precipitation is much higher than that of the precipitation types (especially the rarer types such as hail or fog). Therefore, we decided to keep the no-precipitation class split into subclasses, e.g., overcast and sunny, to avoid class imbalance issues during modelling. A simple rule-based system is used after the CNN to map these extra no-precipitation classes to the classes of interest.
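The rule-based post-processing step can be sketched as a simple lookup. The mapping below is an illustrative assumption (in particular, mapping AWD's sleet to melting snow), not the project's published rule set:

```python
# Hypothetical mapping from AWD classes (kept during CNN training) to the
# weather classes of interest for the road weather services.
AWD_TO_TARGET = {
    "rain": "rain",
    "sleet": "melting snow",      # assumption: sleet treated as melting snow
    "snow": "snow",
    "fog": "fog",
    "dusk": "no precipitation",   # extra no-precipitation subclasses are
    "overcast": "no precipitation",  # folded back after classification
    "sunny": "no precipitation",
}

def map_prediction(awd_class: str) -> str:
    """Rule-based step applied after the CNN classifier output."""
    return AWD_TO_TARGET[awd_class]
```

Keeping the subclasses during training balances the classes; the lookup then restores the label set the road weather model expects.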
The results presented in Section 3.1 are from the initial model experiments on the public AWD dataset.

2.2. Calibration of Sensor Measurements

The need for the calibration of low-cost measurements is presented by Williams et al. [40]. The authors state that regular calibration can improve data believability, but manual calibration can be costly. Previous research has applied machine learning to calibrate mobile sensors for urban air quality monitoring [41]. Furthermore, mobile sensor calibration using sensor fusion and mixed models for road weather predictions has proven its value, as shown by Lovén et al. [42]. In this paper, we investigate a completely data-driven calibration of our low-cost sensor measurements. We are currently collecting observations from 15 postal vans. These measurements are environmental observations such as the ambient temperature. Every time the Global Positioning System (GPS) position is updated (on average every 2–3 s), the sensor measurements are collected and sent to the cloud. These sensors provide the input for our applications. Table 1 gives an overview of the external sensors. The camera is mounted behind the windscreen together with the GPS receiver, the thermal image sensor is mounted at the front of the vehicle and aimed at the road surface, and the other sensors are placed in a sensor box on top of the vehicle. The measurements are validated by comparing the observations from our mobile sensors to meteorological forecasts. This is a continuation of our previous research [22,43]. Given the large amounts of generated data, artificial intelligence and machine learning are well suited to improve the sensor accuracy [15]: the accuracy of data-driven approaches rises with the amount of data available, allowing them to outperform statistical methods. For this, we use a fully connected neural network. These networks transform a fixed number of inputs through layers of trainable parameters to produce the output.
As discussed later in Section 3.3, the measurements of the sensors differ from those of the RWS. This can be caused, for example, by sensor placement or vehicle speed. Our mobile sensing fleet covers a large area; however, we cannot calibrate using only fixed weather stations, as they are sparsely spread. We do have the output of weather forecasts at a high resolution of 10 min on a 1 km × 1 km grid. We use these outputs as pseudo labels for our calibration model. These outputs are not the real ground truth but are expected to indicate trends where our measurements are incorrect. As we have a lot of data, we explore the possibilities of data-driven calibration. Our model consists of three fully connected layers with 10 neurons, 64 neurons, and 1 neuron. The first two layers have a Rectified Linear Unit (ReLU) activation function to allow for sparse activation, which can be useful to give less attention to outliers. The input of the model is the measured T2M, RH, and GPS velocity. Two models are created; the output is either the forecasted T2M or RH. Our model is trained using the Adam optimiser and the mean absolute error as loss function, for 20 epochs with a batch size of 500.
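The forward pass of the calibration network described above can be sketched as follows. This is a minimal NumPy sketch, not the authors' Keras implementation; the weights are random placeholders standing in for trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 3 inputs (T2M, RH, GPS velocity) -> 10 -> 64 -> 1.
W1, b1 = rng.normal(size=(3, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 64)), np.zeros(64)
W3, b3 = rng.normal(size=(64, 1)), np.zeros(1)

def relu(x):
    return np.maximum(0.0, x)

def calibrate(batch):
    """Inference-only forward pass of the calibration network."""
    h = relu(batch @ W1 + b1)   # dense layer, 10 units, ReLU
    h = relu(h @ W2 + b2)       # dense layer, 64 units, ReLU
    return h @ W3 + b3          # linear output: calibrated T2M or RH

x = np.array([[12.3, 78.0, 8.5]])  # measured T2M (°C), RH (%), speed (m/s)
y = calibrate(x)                   # one calibrated value per input row
```

In the actual system the same architecture would be trained with Adam against the forecast pseudo labels; only the deterministic forward computation is shown here.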

2.3. Road Weather Model

The Royal Netherlands Meteorological Institute developed an RWM [44] that was used as a basis for the RWM of the RMI. Within this model, further referred to as the “classic RWM”, the road is modeled up to a depth of 30 cm and is divided into 20 asphalt layers. The RWM computes the energy balance at the surface of the road, where incoming and outgoing radiation as well as atmospheric and ground heat fluxes are considered. Changes of state (evaporation, melting, etc.) are also taken into account. Several parameters can be fine-tuned to better match the modeled road surface temperature (RST) with observations, with the possibility of factoring in environmental conditions such as a sky-view factor and the presence of bridges [44]. The initial conditions are updated by performing a data assimilation of the RST observed at RWS. The observed air temperature (T2M) and dew point temperature (TD2M) are also used to correct the NWP forecast at the RWS location.
In order to use the SARWS data to their full potential, we developed an enhanced version of the classic RWM, named after the project: RWM_SARWS. This enhanced RWM not only uses weather observations from the Flemish Road and Traffic Agency RWS but also the near real-time SARWS car sensor data aggregated at the road segment level (∼50 m). These observations are available for the region of Antwerp and are aggregated by Be-Mobile. In addition to the observations, RWM_SARWS uses weather input from an NWP model, which can be selected by the RMI forecasters based on the expected accuracy in given meteorological conditions. This NWP model output is combined with the output of RMI’s operational nowcasting system INCA-BE (Integrated Nowcasting through Comprehensive Analysis Belgium). INCA-BE is an adapted and extended version of the INCA system [45] developed at the national meteorological service of Austria. It generates high-resolution short-term forecasts integrating observational data from weather radars, synoptic stations, and other official networks. Due to the importance of precipitation nowcasts for road weather warnings, we ingest the INCA-BE precipitation information at a high resolution of 10 min in RWM_SARWS.
Every 20 min, RWM_SARWS forecasts are performed for each road segment in Antwerp and its surroundings, which corresponds approximately to 140,000 road segments. Each of these has a length of about 50 m. Since the SARWS project focuses on nowcasting, the forecast range is limited to two hours. RST, T2M, TD2M, and road surface condition as well as the amounts of liquid water, ice, and snow on the road are provided as output.
RWM_SARWS makes use of the observed RST, T2M, and TD2M from RWS and car sensors, with the same correction methods employed as for the classic RWM. A new version of RWM_SARWS is currently under development that will make use of car wiper information and the predictions from the WeathercAIm. When rain is deduced from the car sensor observations, the RWM road surface status can be updated to wet, which can then be taken into account in the forecast for that road segment. This information can be validated when the car is close to the RWS, which measures road surface status and rain state (rain occurring yes/no).

3. Results

3.1. Detection of Precipitation Type and Visibility from Camera Images

We present our experiments to validate our approach by training and testing an initial WeathercAIm model with the Adverse Weather Dataset [39] before deployment. The considered weather classes are fog, dusk, overcast, rain, sleet, snow, and sunny. The dataset was split into a training and a validation set according to an approximate 80/20 grouped stratified split on drive sequence for each weather class. The exception was fog, which was present during only one drive sequence; that sequence was split by taking the first 80% of samples for training and the last 20% for validation. The experiments were run with different hyperparameters for the CNN to train and tune the network: learning rate, dropout rate, batch size, and whether the pre-trained backbone layers should be fixed or fine-tuned. A selection of the results is presented in Table 2 for multiclass weather classification on the 7 classes defined in the dataset, with different validation metrics (accuracy, area under the ROC curve (AUC), and F1 score) for different hyperparameters.
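The grouped stratified split described above can be sketched as follows. This is a simplified illustration on a hypothetical toy table of drive sequences; the actual split code and dataset layout are not published in this form:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical table: drive sequence id -> weather class of that sequence.
sequences = {
    0: "rain", 1: "rain", 2: "rain", 3: "rain", 4: "rain",
    5: "snow", 6: "snow", 7: "snow", 8: "snow", 9: "snow",
}

# Grouped stratified split: within each class, assign ~80% of the drive
# sequences to training and the rest to validation, so no sequence (and
# hence no near-duplicate frame) appears in both sets.
train_seqs, val_seqs = set(), set()
for cls in sorted(set(sequences.values())):
    seqs = [s for s, c in sequences.items() if c == cls]
    rng.shuffle(seqs)
    cut = max(1, int(0.8 * len(seqs)))
    train_seqs.update(seqs[:cut])
    val_seqs.update(seqs[cut:])
```

Splitting on whole drive sequences rather than individual frames prevents leakage of visually near-identical consecutive frames between the training and validation sets.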
Using data augmentation, a pre-trained backbone with fine tuning and SGD (with momentum 0.9) as optimiser gave the best results in our experiments. These results reveal that the WeathercAIm CNN can learn the weather classes with a good accuracy based on the given dataset.
The model was also deployed on the ICSSN in vehicles in the field. Initial results from the field test development phase (running from 23 October 2020 to 31 May 2021) reveal that there is still room for improvement in accuracy on real-life production data. One of the main observations is that the camera does not allow for high performance in changing lighting conditions, producing lower-quality images than the camera images in the AWD dataset:
  • Automatic gain functions take time to settle on initial captures and on big lighting changes, e.g., when coming out of a tunnel.
  • Over- or underexposure occurs in certain outdoor conditions.
  • There is a lot of image noise in low lighting conditions, e.g., during the night, which the WeathercAIm model seems to confuse with rain/snow, as a human might too.
In addition, the different environmental setting (e.g., camera placement, different urban scenes compared to the dataset used) will play a role. Another observation was that special (non-weather) cases are encountered in real life that are not present in the training dataset and need special attention, such as tunnels or a vehicle parked in a garage. To handle these special cases, an extra task could in principle be added by creating extra classification classes for the WeathercAIm CNN classifier. To further process these special cases, the rule-based system can be used and updated, as they are likely to occur only rarely. These observations confirm the need for (re-)training on actual captured images from the field development phase to improve the WeathercAIm model.

3.2. Calibration of Sensor Measurements

In our first experiment, we compare the measured T2M to the T2M forecasted by INCA-BE. First, we preprocess the data by splitting the dataset into training and testing, with the last 25% as the test set. The data are scaled using MinMax scaling. In total, around 4.5 million samples are used as training data and 1.5 million for evaluation. To make our results interpretable, we use the daily average of our metrics: the mean absolute error, mean squared error, and mean bias. As shown in Table 3, our model can correct the temperature. However, the error on our training set is still significant after training. This can be explained by our use of pseudo labels: the period of the labels is 10 min, while our measurements are taken at a 3–4 s interval. This difference results in a fixed error, as the real weather varies locally in space (e.g., due to the effects of traffic, sunlight, etc.). Although the correction algorithm improves the measurements, a bias is still present in the test set. This might be caused by using the last months as evaluation data: as ambient conditions are seasonally bound, evaluation on a different season can lead to a fixed bias. Figure 2 illustrates the error between our predictions and the labels as well as the error between the original measurements and the labels. In this figure, we can see that our correction has a positive effect on accuracy. The largest error occurs around 23 January 2021. This can be caused by sensors that are still operating while the vehicle is, for example, in a depot; in such a case, the measurements occur indoors while the label is the forecasted outdoor temperature. A possible solution could be to filter out values when the vehicle is standing still. However, as our fleet consists of postal delivery vehicles, stops occur too frequently to discard these valuable measurements.
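The three evaluation metrics named above can be expressed compactly. A minimal sketch with hypothetical prediction and label arrays:

```python
import numpy as np

def error_metrics(pred, label):
    """Mean absolute error, mean squared error, and mean bias of pred vs. label.
    In the paper these are averaged per day; here they are computed over one batch."""
    err = pred - label
    return {
        "mae": float(np.mean(np.abs(err))),
        "mse": float(np.mean(err ** 2)),
        "bias": float(np.mean(err)),  # signed: reveals systematic over/underestimation
    }

# Hypothetical corrected T2M predictions vs. INCA-BE pseudo labels (°C).
m = error_metrics(np.array([12.5, 13.0, 11.0]), np.array([12.0, 13.0, 12.0]))
```

The bias is kept signed on purpose: unlike MAE and MSE, it distinguishes a systematic warm offset from a cold one, which is what the seasonal-evaluation discussion above relies on.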
In a second experiment, we train the same architecture to correct the humidity measurements. The data used in this experiment include every measurement from July 2021 until September 2021; in July, we made a change to the sensor box, creating ventilation holes to improve the humidity measurements. These data are preprocessed in the same way as the T2M data, but with random sampling between training and testing, with 25% as the test set. We use random sampling to obtain better coverage of our limited dataset. Figure 3 illustrates the error between the corrected signal and our pseudo labels as well as between the original measurements and the labels of the test set. The results are shown in Table 3. Here, we can see that the accuracy is improved overall. The difference between training and testing is very small, meaning our model generalises well.

3.3. Road Weather Model

The SARWS data, aggregated by Be-Mobile for each road segment, are displayed in Figure 4. This figure clearly shows that various types of roads have been covered by the cars of the project, which means that weather information was collected from several types of environments, e.g., on highways and in the city center.
To assess the quality of the aggregated SARWS data, we compared these with observations from the RWS, an Automatic Weather Station (AWS), the Vlinder network [46], VMM (Flanders Environment Agency) stations, and SYNOP stations that are close enough to the itineraries, that is, less than 500 m away. The AWS never meets this criterion and is thus excluded from the comparisons. Figure 5 shows an example of this comparison for the RWS of Stabroek and the following variables: RST, T2M, and RH. Verification for this RWS was first performed in [43] for a shorter verification period. Until 11 March 2021, the SARWS T2M and RH can originate from the Bpost cars or the INCA outputs. The SARWS and the RWS data show a high correlation (>0.8 for RST, >0.9 for T2M, >0.7 for RH), although there is a positive bias of 1.6 °C for RST, 2.3 °C for T2M, and 3.9% for RH. The RMSE is 5.1 °C for RST, 2.7 °C for T2M, and 11.8% for RH. The correlation (r) between the SARWS data and observations from other weather stations is also generally high (time period: 19 December 2020–1 December 2021; not shown): 12 stations out of 14 with an r > 0.7 for RH, and six stations out of six with an r > 0.9 for T2M. Note that the T2M of these comparisons is not necessarily observed exactly at 2 m.
To assess the impact of car sensor observation integration into the RWM, we performed a first validation of RWM_SARWS for the time period 6 May 2021 (the data assimilation was corrected on this date) to 1 December 2021. RWM_SARWS forecasts for each road segment are performed every 20 min from H + 0 to H + 2 with a time step of 10 min. These forecasts are compared to the ones performed at the RWS of Stabroek (with RWS observation integration) when the distance between the two forecast locations is less than or equal to 500 m. In Figure 6, we show the RMSE computed for the RST forecasts between H + 0 and H + 2, for the time period between 6 May 2021 and 1 December 2021. The median RMSE is below 2 °C.

4. Discussion

In this paper, we presented our research on the use of car fleet sensor data towards real-time road weather warnings and a higher-resolution road weather model. We demonstrated two examples of how AI/ML can be leveraged to process data from a fleet of vehicles and obtain relevant weather information with an improved data quality compared to the raw measurements. First, we showed how convolutional neural networks can be used to extract relevant weather information (visibility and precipitation) from raw camera images. Second, we demonstrated how neural networks can be used to calibrate sensor measurements to reduce the observation errors and improve the data quality.
An updated version of the road weather model, denoted RWM_SARWS, was presented; it was specifically developed at RMI to enrich the road weather modeling with the car data at a high spatio-temporal resolution. The preliminary results show a median RMSE below 2 °C between the RST forecasts performed at an RWS and those performed at nearby observed road segments. This could be partly explained by the errors of the cars' sensors. Further improvement is expected in the future, when the demonstrated ML-based calibration method is used operationally. The information extracted from wipers and camera images will also be put to use in the next version of RWM_SARWS, which is expected to further increase the accuracy of the forecasts.

Author Contributions

Conceptualisation, T.B., S.W., N.D.B., C.T., W.C., J.V.d.B. and M.R.; methodology, T.B., S.W., N.D.B., C.T. and T.C.; software, T.C., T.B., S.W., N.D.B. and C.T.; validation, T.B., W.C., S.W., N.D.B., C.T., J.V.d.B., M.R. and T.C.; formal analysis, T.B., S.W., N.D.B., C.T. and T.C.; investigation, T.B., S.W., N.D.B., C.T. and T.C.; resources, P.H., M.R. and D.S.; data curation, T.C., S.W., N.D.B., C.T. and T.B.; writing—original draft preparation, T.B., S.W., N.D.B., T.C. and C.T.; writing—review and editing, J.V.d.B., M.R., D.S., W.C., S.L., P.H., T.B., S.W., N.D.B., T.C. and C.T.; visualisation, T.C., S.W., N.D.B., C.T. and T.B.; supervision, W.C., P.H., D.S., M.R. and J.V.d.B.; project administration, W.C., P.H., D.S., M.R., S.L. and J.V.d.B.; funding acquisition, P.H., M.R., D.S. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Agentschap Innoveren en Ondernemen (VLAIO) Grant number HBC.2017.0999.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The AWD dataset used is publicly available [39].

Acknowledgments

The SARWS (Real-time location-aware road weather services composed from multi-modal data [21]) Celtic-Next project (October 2018–2021) combines the expertise of commercial partners Verhaert New Products & Services, Be-Mobile, Inuits and bpost with the scientific expertise of research partners imec-IDLab (University of Antwerp) and the Royal Meteorological Institute of Belgium, together with an international consortium with partners in Portugal, France, South-Korea, Turkey, Romania, and Spain. The Flemish project was realized with the financial support of Flanders Innovation & Entrepreneurship (VLAIO, project nr. HBC.2017.0999).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AUC: Area Under the (ROC) Curve
AWD: Adverse Weather Dataset
AWS: Automatic Weather Station
CNN: Convolutional Neural Network
CV: Computer Vision
DNN: Deep Neural Network
GDPR: General Data Protection Regulation
GPS: Global Positioning System
ICSSN: In-Car Smart Sensor Node
INCA: Integrated Nowcasting through Comprehensive Analysis
INCA-BE: INCA Belgium
ML: Machine Learning
NWP: Numerical Weather Prediction
RH: Relative Humidity
RMI: Royal Meteorological Institute of Belgium
RMSE: Root-Mean-Square Error
ROC: Receiver Operating Characteristic
RWM: Road Weather Model
RWS: Road Weather Station
SARWS: Secure and Accurate Road Weather Services
SGD: Stochastic Gradient Descent
T2M: Air temperature at 2 m
TD2M: Dew point temperature at 2 m
RST: Road Surface Temperature
VMM: De Vlaamse Milieumaatschappij (Flanders Environment Agency)
ZAMG: Zentralanstalt für Meteorologie und Geodynamik

References

  1. Malin, F.; Norros, I.; Innamaa, S. Accident risk of road and weather conditions on different road types. Accid. Anal. Prev. 2019, 122, 181–188.
  2. Roh, C.G.; Kim, J.; Im, I. Analysis of Impact of Rain Conditions on ADAS. Sensors 2020, 20, 6720.
  3. Dixit, V.V.; Chand, S.; Nair, D.J. Autonomous vehicles: Disengagements, accidents and reaction times. PLoS ONE 2016, 11, e0168054.
  4. Parmenter, B.; Thornes, J.E. The Use of a Computer Model to Predict the Formation of Ice on Road Surfaces; Research Report; Transport and Road Research Laboratory: Wokingham, UK, 1986.
  5. Chapman, L.; Thornes, J.E.; Bradley, A.V. Modelling of road surface temperature from a geographical parameter database. Part 2: Numerical. Meteorol. Appl. 2001, 8, 421–436.
  6. Crevier, L.P.; Delage, Y. METRo: A new model for road-condition forecasting in Canada. J. Appl. Meteorol. 2001, 40, 2026–2037.
  7. Kangas, M.; Heikinheimo, M.; Hippi, M. RoadSurf: A modelling system for predicting road weather and road surface conditions. Meteorol. Appl. 2015, 22, 544–553.
  8. Yin, Z.; Hadzimustafic, J.; Kann, A.; Wang, Y. On statistical nowcasting of road surface temperature. Meteorol. Appl. 2019, 26, 1–13.
  9. Hippi, M.; Kangas, M.; Ruuhela, R.; Ruotsalainen, J.; Hartonen, S. RoadSurf-Pedestrian: A sidewalk condition model to predict risk for wintertime slipping injuries. Meteorol. Appl. 2020, 27, e1955.
  10. Dey, K.C.; Mishra, A.; Chowdhury, M. Potential of intelligent transportation systems in mitigating adverse weather impacts on road mobility: A review. IEEE Trans. Intell. Transp. Syst. 2014, 16, 1107–1119.
  11. Sukuvaara, T.; Mäenpää, K.; Stepanova, D.; Karsisto, V. Vehicular networking road weather information system tailored for arctic winter conditions. Int. J. Commun. Netw. Inf. Secur. 2020, 12, 281–288.
  12. Karsisto, V.; Lovén, L. Verification of road surface temperature forecasts assimilating data from mobile sensors. Weather Forecast. 2019, 34, 539–558.
  13. Minge, E.; Gallagher, M.; Hanson, Z.; Hvizdos, K. Mobile Technologies for Assessment of Winter Road Conditions; Technical Report; Department of Transportation, Clear Roads Pooled Fund: St. Paul, MN, USA, 2019.
  14. Stepanova, D.; Sukuvaara, T.; Karsisto, V. Intelligent Transport Systems–Road weather information and forecast system for vehicles. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–5.
  15. Hellweg, M.; Acevedo-Valencia, J.W.; Paschalidi, Z.; Nachtigall, J.; Kratzsch, T.; Stiller, C. Using floating car data for more precise road weather forecasts. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–3.
  16. Mahoney, W.P., III; O’Sullivan, J.M. Realizing the potential of vehicle-based observations. Bull. Am. Meteorol. Soc. 2013, 94, 1007–1018.
  17. GSMA mAutomotive. Connected Car Forecast: Global Connected Car Market to Grow Threefold within Five Years. 2013. Available online: https://www.gsma.com/iot/wp-content/uploads/2013/06/cl_ma_forecast_06_13.pdf (accessed on 26 May 2021).
  18. MarketsandMarkets. Connected Car Market. 2019. Available online: https://www.marketsandmarkets.com/Market-Reports/connected-car-market-102580117.html (accessed on 26 May 2021).
  19. Hintz, K.S.; O’Boyle, K.; Dance, S.L.; Al-Ali, S.; Ansper, I.; Blaauboer, D.; Clark, M.; Cress, A.; Dahoui, M.; Darcy, R.; et al. Collecting and utilising crowdsourced data for numerical weather prediction: Propositions from the meeting held in Copenhagen, 4–5 December 2018. Atmos. Sci. Lett. 2019, 20, e921.
  20. Bell, Z.; Dance, S.L.; Waller, J.A. Exploring the characteristics of a vehicle-based temperature dataset for convection-permitting numerical weather prediction. arXiv 2021, arXiv:2105.12526.
  21. Celtic-Next SARWS. Real-Time Location-Aware Road Weather Services Composed from Multi-Modal Data. 2019. Available online: https://www.celticnext.eu/project-sarws/ (accessed on 26 May 2021).
  22. Mercelis, S.; Watelet, S.; Casteels, W.; Bogaerts, T.; Van den Bergh, J.; Reyniers, M.; Hellinckx, P. Towards detection of road weather conditions using large-scale vehicle fleets. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–7.
  23. Ibrahim, M.R.; Haworth, J.; Cheng, T. WeatherNet: Recognising weather and visual conditions from street-level images using deep residual learning. ISPRS Int. J. Geo-Inf. 2019, 8, 549.
  24. Tan, L.; Xuan, D.; Xia, J.; Wang, C. Weather recognition based on 3C-CNN. KSII Trans. Internet Inf. Syst. 2020, 14, 3567–3582.
  25. Palvanov, A.; Cho, Y.I. VisNet: Deep convolutional neural networks for forecasting atmospheric visibility. Sensors 2019, 19, 1343.
  26. Zhao, B.; Li, X.; Lu, X.; Wang, Z. A CNN–RNN architecture for multi-label weather recognition. Neurocomputing 2018, 322, 47–57.
  27. Guerra, J.C.V.; Khanam, Z.; Ehsan, S.; Stolkin, R.; McDonald-Maier, K. Weather Classification: A new multi-class dataset, data augmentation approach and comprehensive evaluations of Convolutional Neural Networks. In Proceedings of the 2018 NASA/ESA Conference on Adaptive Hardware and Systems (AHS), Edinburgh, UK, 6–9 August 2018; pp. 305–310.
  28. Tsigkanos, C.; Avasalcai, C.; Dustdar, S. Architectural Considerations for Privacy on the Edge. IEEE Internet Comput. 2019, 23, 76–83.
  29. Wang, H.; Barriga, L.; Vahidi, A.; Raza, S. Machine Learning for Security at the IoT Edge: A Feasibility Study. In Proceedings of the 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW), Monterey, CA, USA, 4–7 November 2019; pp. 7–12.
  30. Krishnasamy, E.; Varrette, S.; Mucciardi, M. Edge Computing: An Overview of Framework and Applications; Technical Report; PRACE aisbl, 2020. Available online: https://prace-ri.eu/wp-content/uploads/Edge-Computing-An-Overview-of-Framework-and-Applications.pdf (accessed on 17 February 2022).
  31. González, J.P. Machine Learning Edge Devices: Benchmark Report. 2019. Available online: https://tryolabs.com/blog/machine-learning-on-edge-devices-benchmark-report (accessed on 17 February 2022).
  32. Véstias, M.P. A Survey of Convolutional Neural Networks on Edge with Reconfigurable Computing. Algorithms 2019, 12, 154.
  33. Basler. Basler pulse puA1920-30uc Area Scan Camera. 2021. Available online: https://www.baslerweb.com/en/products/cameras/area-scan-cameras/pulse/pua1920-30uc/ (accessed on 7 December 2021).
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity Mappings in Deep Residual Networks. In Computer Vision–ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 630–645.
  35. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516.
  36. MLCommons. Inference: Edge Benchmark v0.7 Results. 2020. Available online: https://mlcommons.org/en/inference-edge-07/ (accessed on 17 February 2022).
  37. Google. Coral Edge TPU Performance Benchmarks. 2021. Available online: https://coral.ai/docs/edgetpu/benchmarks/ (accessed on 17 February 2022).
  38. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
  39. Rawashdeh Research Group. Adverse Weather Dataset. 2019. Available online: http://sar-lab.net/adverse-weather-dataset/ (accessed on 7 December 2021).
  40. Williams, D.E. Low cost sensor networks: How do we know the data are reliable? ACS Sens. 2019, 4, 2558–2565.
  41. Lin, Y.; Dong, W.; Chen, Y. Calibrating low-cost sensors by a two-phase learning approach for urban air quality measurement. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–18.
  42. Lovén, L.; Karsisto, V.; Järvinen, H.; Sillanpää, M.J.; Leppänen, T.; Peltonen, E.; Pirttikangas, S.; Riekki, J. Mobile road weather sensor calibration by sensor fusion and linear mixed models. PLoS ONE 2019, 14, e0211702.
  43. Bogaerts, T.; Watelet, S.; Thoen, C.; Coopman, T.; Van den Bergh, J.; Reyniers, M.; Seynaeve, D.; Casteels, W.; Latré, S.; Hellinckx, P. Enhancement of road weather services using vehicle sensor data. In Proceedings of the 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 8–11 January 2022; pp. 1–6.
  44. Karsisto, V.; Tijm, S.; Nurmi, P. Comparing the performance of two road weather models in the Netherlands. Weather Forecast. 2017, 32, 991–1006.
  45. Haiden, T.; Kann, A.; Wittmann, C.; Pistotnik, G.; Bica, B.; Gruber, C. The Integrated Nowcasting through Comprehensive Analysis (INCA) system and its validation over the Eastern Alpine region. Weather Forecast. 2011, 26, 166–183.
  46. Caluwaerts, S.; Top, S.; Vergauwen, T.; Wauters, G.; De Ridder, K.; Hamdi, R.; Mesuere, B.; Van Schaeybroeck, B.; Wouters, H.; Termonia, P. Engaging schools to explore meteorological observational gaps. Bull. Am. Meteorol. Soc. 2021, 102, E1126–E1132.
Figure 1. AWD weather class frequency distributions.
Figure 2. Overview of corrected and original T2M measurements in comparison to the forecasted T2M by INCA-BE.
Figure 3. Overview of corrected and original humidity measurements of the testing set in comparison to forecasted humidity by INCA-BE.
Figure 4. Blue road segments have observations between 19 December 2020 and 1 December 2021. The Stabroek RWS is highlighted with a marker. Base map and data from OpenStreetMap and the OpenStreetMap Foundation.
Figure 5. RST (top), T2M (middle), and RH (bottom) between 1 February 2021 and 1 December 2021 at the RWS of Stabroek (black) and from the bpost cars (red) for nearby road segments (distance to the RWS of at most 500 m). The scores include the RMSE, bias, and correlation, which are computed from the time series binned on an hourly basis. RST values below −30 °C are considered unrealistic and are not used in the computation of the scores.
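The scores in the Figure 5 caption (RMSE, bias, and correlation on hourly-binned series, after discarding unrealistic RST values below −30 °C) can be reproduced along the following lines. This is an illustrative sketch only: the column names, data layout, and filtering step are assumptions, not the project's actual pipeline.

```python
import numpy as np
import pandas as pd

def hourly_scores(df: pd.DataFrame) -> tuple[float, float, float]:
    """RMSE, bias, and correlation between car and RWS measurements.

    Expects a DataFrame with a DatetimeIndex and two columns:
    'car' (vehicle sensor) and 'rws' (road weather station reference).
    """
    df = df[df["car"] > -30.0]                    # drop unrealistic readings
    hourly = df.resample("1h").mean().dropna()    # bin both series per hour
    diff = hourly["car"] - hourly["rws"]
    rmse = float(np.sqrt((diff ** 2).mean()))
    bias = float(diff.mean())
    corr = float(hourly["car"].corr(hourly["rws"]))
    return rmse, bias, corr
```

Binning before scoring matters here: the fleet reports irregularly, so averaging per hour prevents densely sampled periods from dominating the comparison against the fixed-interval RWS record.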
Figure 6. Box-and-whisker plot of the RMSE between the RWM_SARWS forecasts of RST at the RWS of Stabroek and at nearby observed road segments (≤500 m), computed over the whole forecast length (2 h). The RMSE box extends from the first to the third quartile, while the orange line corresponds to the median. Each whisker extends to at most 1.5 times the interquartile range. The time period ranges from 6 May 2021 to 1 December 2021.
Table 1. External sensors mounted on the vehicle fleet.

In Car                   Sensor Box
GPS module               Gyroscope
Camera                   Accelerometer
Thermal imaging sensor   Temperature sensor
                         Humidity sensor
Table 2. WeathercAIm validation classification metrics for different training parameters.

Optimiser   Learning Rate   Dropout Rate   Batch Size   Loss    Accuracy   AUC     F1
sgd         1 × 10⁻⁶        0.5            64           0.838   0.949      0.997   0.988
sgd         1 × 10⁻⁶        0.5            16           0.866   0.932      0.996   0.97
sgd         1 × 10⁻⁷        0.2            16           0.89    0.928      0.996   0.98
nadam       1 × 10⁻⁶        0.6            16           0.921   0.897      0.991   0.992
nadam       1 × 10⁻⁶        0.7            16           0.907   0.889      0.988   0.988
nadam       1 × 10⁻⁶        0.8            16           0.921   0.889      0.984   0.969
nadam       1 × 10⁻⁶        0.9            16           0.952   0.878      0.982   0.962
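The accuracy and F1 columns in Table 2 follow their standard definitions. As a small, self-contained illustration (the confusion-matrix counts below are made up and unrelated to the WeathercAIm results), both can be computed from binary counts:

```python
def accuracy_f1(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Accuracy and F1 score from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # fraction of positive predictions that are correct
    recall = tp / (tp + fn)      # fraction of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

# Hypothetical counts: 90 true positives, 5 false positives,
# 10 false negatives, 95 true negatives
acc, f1 = accuracy_f1(90, 5, 10, 95)  # acc = 0.925, f1 ≈ 0.923
```

Because F1 is the harmonic mean of precision and recall, it can diverge from accuracy when classes are imbalanced, which is why Table 2 reports both alongside AUC.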
Table 3. Results of corrections on measurements.

                        Mean Squared Error      Mean Absolute Error     Mean Bias
                        Train       Test        Train      Test         Train      Test
Temperature original    3.680 °C²   3.360 °C²   2.916 °C   3.023 °C     −2.780 °C  −3.003 °C
Temperature corrected   1.695 °C²   1.705 °C²   1.285 °C   1.409 °C     0.100 °C   1.077 °C
Humidity original       7.248 %²    7.259 %²    5.718%     5.732%       1.167%     1.176%
Humidity corrected      5.644 %²    5.561 %²    4.262%     4.265%       0.172%     0.182%
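The metrics in Table 3 (mean squared error in squared units, mean absolute error, and mean bias) follow directly from the residuals between measurement and reference. A minimal sketch, using hypothetical readings rather than the project's data:

```python
import numpy as np

def error_metrics(measured, reference):
    """MSE (squared units), MAE, and mean bias of measurements vs. a reference."""
    residual = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    mse = float(np.mean(residual ** 2))      # squared units, e.g. °C² for temperature
    mae = float(np.mean(np.abs(residual)))   # same units as the data
    bias = float(np.mean(residual))          # signed; reveals systematic offsets
    return mse, mae, bias

# Hypothetical car readings vs. reference temperatures
mse, mae, bias = error_metrics([20.1, 18.7, 15.2], [19.5, 19.0, 15.0])
```

The signed bias is the diagnostic to watch: the negative bias of the original temperatures in Table 3 indicates the uncalibrated sensors read systematically colder than the reference, and the correction removes most of that offset.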
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite as: Bogaerts, T.; Watelet, S.; De Bruyne, N.; Thoen, C.; Coopman, T.; Van den Bergh, J.; Reyniers, M.; Seynaeve, D.; Casteels, W.; Latré, S.; et al. Leveraging Artificial Intelligence and Fleet Sensor Data towards a Higher Resolution Road Weather Model. Sensors 2022, 22, 2732. https://doi.org/10.3390/s22072732
