Abstract
Connected and autonomous vehicles (CAVs) have attracted significant attention from industry and academia for research and development towards the on-road realisation of the technology. State-of-the-art CAVs utilise existing navigation systems for mobility and travel path planning. However, reliable connectivity to navigation systems is not guaranteed, particularly in urban road traffic environments with high-rise buildings, nearby roads and multi-level flyovers. To this end, this paper presents TAKEN (Traffic Knowledge-based Navigation) for enabling CAVs in urban road traffic environments. A traffic analysis model is proposed for mining sensor-oriented traffic data to generate a precise navigation path for the vehicle. A knowledge-sharing method is developed for collecting and generating new traffic knowledge from on-road vehicles. CAV navigation is executed using the information enabled by traffic knowledge and analysis. The experimental performance evaluation results attest to the benefits of TAKEN in the precise navigation of CAVs in urban traffic environments.
1. Introduction
Connected and autonomous vehicles (CAVs) are emerging as next-generation transportation, driven by growing connectivity and automation devices in vehicles [1,2]. Most modern cars have a dashboard and connectivity to the internet or personal devices, enabling a wide range of traffic services that ease driving and travel for drivers [3]. Nowadays, the majority of drivers use GPS-based travel pathfinding services, which help in reaching the correct destination and are particularly significant in dense urban road networks [4,5]. Internet-connectivity-enabled traffic services not only improve safety and the travel experience but also result in greener transport, by guiding drivers towards less congested and shorter travel paths [6]. The green transport capability of modern vehicles has attracted government attention around the world towards supporting CAV technology development and implementation in the coming years [7,8]. In line with this, the UK Department for Transport has supported various R&D projects related to the advancement of CAVs [9]. Various enabling technologies have been developed to support the on-road realisation of CAVs for public and private transport [10].
The early advancements in mobility planning for CAVs were largely based on sensor-oriented predictions [11]. For example, a vision-centric steering control and path planning approach was suggested for autonomous vehicles, focusing on polynomial-interpolation-based modelling of the on-road traffic environment [12]. A point-by-point vehicle movement plan and an iterative steering supervision strategy were developed, relying on on-road real-time vision or images. Towards enhancing vision-based path planning for CAVs, manoeuvre-based path planning was suggested, focusing on two levels of information gathering from the on-road traffic environment [13]. At the first level, information gathering focused on the feasibility of manoeuvres given the traffic scenario, whereas at the second level, optimisation of manoeuvres was performed in terms of various performance metrics, including transportation time, traffic rules, fuel consumption and ease of travel. To incorporate complex steering control situations, such as tyre sliding effects, a new autonomous vehicle control system was suggested that considers vehicle slip angles in the kinematic modelling of the path-following traffic environment [14]. The aforementioned advancements relied heavily on sensor-data-centric mobility planning in on-road traffic environments without applying any learning from historical scenarios or data.
Recent advances in machine learning have enabled deep learning techniques to be applied in diverse domains [15,16], including mobility planning for CAVs. For example, deep neural network-based calibration automation has been suggested for improving the accuracy of sensor-centric distance calculation [17]. Specifically, stereo matching and lidar-projection-based path planning for autonomous vehicles were enhanced by modelling a calibration network and loss function. However, the accuracy of the calibration automation approach relies heavily on the accuracy of the autonomous vehicle's sensor data, without considering the overall traffic environment. Towards addressing sensor-based path planning, a reinforcement learning-based car-following model has been suggested, focusing on video frame analysis [18]. In particular, a You Only Look Once (YOLO)-based strategy was developed for identifying leader vehicles and obstacles in the car-following traffic environment. Q-learning and deep Q-learning have been used for autonomous vehicle path planning, considering RGB-depth frames in on-road traffic video analysis. Similarly, for improving the consumer experience in autonomous vehicles, a deep learning-based caching mechanism has been suggested, focusing on edge-based media execution [19]. However, autonomous vehicle decision making was not considered in that media-oriented study. In these machine learning and deep learning-centric CAV investigations, deep learning-enabled traffic knowledge sharing among autonomous vehicles in the same traffic environment is lacking.
Towards this end, this paper presents a deep learning-enabled framework, TAKEN (Traffic Knowledge sharing-based Navigation), for connected and autonomous vehicles. Different autonomy aspects have been considered, such as state estimation, visual perception and path or motion planning, for effectively reducing the dependency of autonomous vehicles on existing navigation systems. The major contributions of the paper can be summarised as follows:
2. Related Work
The advancements in mobility planning for CAVs can be divided into two major sub-themes: sensor-enabled predictions and advanced machine learning-enabled predictions [20,21]. Towards sensor-enabled predictions, a vision-centric steering control and path planning approach was suggested for autonomous vehicles, focusing on polynomial-interpolation-based modelling of the on-road traffic environment [12]. A point-by-point vehicle movement plan and an iterative steering supervision strategy were developed, relying on on-road real-time vision or images. However, vision-centric steering control is limited in terms of overall traffic-knowledge-centric movement planning, which must consider not only neighbouring vehicles but the whole traffic environment over a larger area. Similarly, manoeuvre-based path planning was suggested, focusing on two levels of information gathering from the on-road traffic environment [13]. At the first level, information gathering focused on the feasibility of manoeuvres given the traffic scenario, whereas at the second level, optimisation of manoeuvres was performed in terms of various performance metrics, including transportation time, traffic rules, fuel consumption, and ease of travel. However, manoeuvre-based path planning lacks real-time efficiency owing to its level-wise execution, as some sensor inputs need to be processed immediately rather than waiting for other sensors to calibrate. A new autonomous vehicle control system was suggested that considers vehicle slip angles in the kinematic modelling of the path-following traffic environment [14]. However, the slip-angle-based framework is limited in its applicability to sparse traffic environments due to its dependence on neighbouring vehicles.
Towards advanced machine learning-enabled predictions, deep neural network-based calibration automation has been suggested for improving the accuracy of sensor-centric distance calculation [17]. Specifically, stereo matching and lidar-projection-based path planning for autonomous vehicles were enhanced by modelling a calibration network and loss function. However, the accuracy of the calibration automation approach relies heavily on the accuracy of the autonomous vehicle's sensor data, without considering the overall traffic environment. A reinforcement learning-based car-following model has been suggested, focusing on video frame analysis [18]. Here, a You Only Look Once (YOLO)-based strategy was developed for identifying leader vehicles and obstacles in the car-following traffic environment. Q-learning and deep Q-learning have been used for autonomous vehicle path planning, considering RGB-depth frames in on-road traffic video analysis. Similarly, a deep learning-based caching mechanism has been suggested, focusing on edge-based media execution [19]. A multi-sensor data-fusion algorithm has been developed using an unscented Kalman filter to improve Unmanned Surface Vehicle (USV) operation in a practical environment [22]. A robust fuzzy sliding mode rule has been applied to guide Autonomous Underwater Vehicles (AUVs) [23]. In both sensor-enabled and machine learning-enabled path planning [24,25], learning from historical scenario data and traffic knowledge sharing among autonomous vehicles in the same traffic environment are lacking; this is the core target of this research, detailed in the following sections.
4. Experiments
The TAKEN system uses the Carla simulator to simulate the autonomous vehicle and its environment. Carla is an open-source, transparent tool that closely mirrors the real world. It supports the simulation of different sensors such as LIDAR, RADAR, GNSS, odometry, depth cameras, segmentation cameras and RGB cameras. Users can build environments using the Unreal Engine and associate rules and regulations with them. The Carla simulator provides free access to its various digital assets such as vehicles, pedestrians, buildings and other entities present in the scene. The simulator is characterised by a scalable client-server architecture and enables programmers to attach more than one player to the same server. In addition, it facilitates traffic management, recording and tracing, ROS bridging and Autoware implementation, scenario runners and public contributions.
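For concreteness, the following minimal sketch shows how such a client connects to a running Carla server, spawns a vehicle and attaches a simulated RGB camera using the standard Carla Python API; the host/port, blueprint names, camera placement and output path are illustrative assumptions, not the exact TAKEN configuration.

```python
import carla

# Connect to a Carla server assumed to be listening on localhost:2000.
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a vehicle (the "player") at one of the map's predefined spawn points.
blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter('vehicle.tesla.model3')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach a simulated RGB camera to the vehicle and stream its frames to disk.
camera_bp = blueprints.find('sensor.camera.rgb')
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk('out/%06d.png' % image.frame))
```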
The Carla simulator provides a variety of towns/maps in which the autonomous models can be tested. The working environment registers the set of actions taken by the player and associates the corresponding responses with them. The player, in turn, perceives its environment through various in-built simulated sensors and uses these perceptions as the input for the next action to be taken, as represented in Figure 9 (a minimal sketch of this loop is given after the figure caption). The environment is also populated with non-players such as pedestrians and other vehicles. The TAKEN system seeds such instances and simulates their movements/actions so as to align them with the behavioural models.
Figure 9.
Environment-agent interaction (here, the environment is the Carla simulator server and the agent is a client-side Python script modelling the SDC).
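Building on the setup sketch above, this perceive-act loop can be written as follows, assuming the sensor callback feeds a shared queue; `perceive` and `plan` are hypothetical placeholders for the perception stack and the planner/controllers described in this paper, not Carla API calls.

```python
import queue
import carla

# Filled by sensor callbacks, e.g. camera.listen(measurements.put).
measurements = queue.Queue()

def agent_step(vehicle, perceive, plan):
    """One environment-agent interaction: sense, decide, act."""
    sensor_data = measurements.get()       # latest simulated sensor reading
    state = perceive(sensor_data)          # hypothetical perception stack
    throttle, steer, brake = plan(state)   # hypothetical planner/controller
    vehicle.apply_control(
        carla.VehicleControl(throttle=throttle, steer=steer, brake=brake))
```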
The TAKEN system implements a simulated car capable of driving on its own in the Carla simulator's environment. The system uses several functionalities and add-ons provided by Carla. The proposed model falls under level-three autonomy. Different situations and scenarios commonly encountered while driving, such as the sudden appearance of a pedestrian or a vehicle in front of the car, roadblocks, overtaking, and roundabouts, are addressed in this work using various concepts, namely visual perception, state estimation, planning and controllers. The simulated player can manoeuvre correctly by adhering to traffic rules and regulations in medium traffic conditions. In addition, the paper elaborates on strategies for accounting for sensor failures, navigation-system-specific issues, and the task of knowledge sharing using the centralised/decentralised communication architecture. Table 1 delineates the uniqueness and a general qualitative comparison of the TAKEN system with other existing works.
Table 1.
Comparison of the TAKEN system with other existing works.
5. Results
Figure 10A,B represent the effect of using the Stanley controller on the simulated car model. This module makes use of the navigation system to identify the car's current position in the environment and thus estimates the steering angle. Figure 10A represents the set of waypoints that the model must follow to reach the destination, and Figure 10B represents the path taken by the simulated car guided by the Stanley controller, both in the map coordinate system. The effect of using the PID controller is shown in Figure 10E; the red curve represents the reference speed and the blue curve the actual speed, plotted as speed in km/h against simulator time in seconds (a hedged sketch of both control laws is given after the figure caption). In case of sensor failures or erroneous sensors, control shifts to the AI state estimator, which is used to identify the current position in the environment. The aforementioned modules run in parallel so as to keep them synchronised. Figure 10C,D are the outputs of the AI state estimator obtained during execution. Figure 10C represents the velocity that the vehicle must use as a reference to define the throttle position, while Figure 10D shows how the vehicle abides by the designated path. Here, the blue points represent the predicted values and the red points the actual/true values. This is a normalised representation of the Carla environment whose axes are longitude, latitude and velocity/heading, respectively.
Figure 10.
Overview of the testing and the corresponding results. The reference path is shown in (A) and the traversed path in (B). The velocity and heading predictors are shown in (C) and (D), respectively. The effect of using a PID controller in the designed environment is shown in (E) (maximum speed limited to 30 km/h). The output of object detection is shown in (F) and the lane stability graph in (G). The accuracies of the different machine learning models are shown in (H), with the average loss of the models for custom object detection in (I).
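Concretely, the Stanley controller [31] sets the steering angle from the heading error and the cross-track error as δ = ψ + arctan(k·e/v), while a PID loop tracks the reference speed. The sketch below is a minimal illustration with gains of our own choosing, not the exact parameters used in TAKEN.

```python
import numpy as np

def stanley_steering(heading_error, cross_track_error, speed_mps,
                     k=0.5, max_steer=np.radians(30)):
    # Stanley control law: steer = heading error + arctan(k * e / v).
    steer = heading_error + np.arctan2(k * cross_track_error, max(speed_mps, 1e-3))
    return float(np.clip(steer, -max_steer, max_steer))

class PIDSpeedController:
    """Longitudinal PID tracking a reference speed (gains are illustrative)."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, ref_speed, actual_speed, dt):
        error = ref_speed - actual_speed
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        throttle = self.kp * error + self.ki * self.integral + self.kd * derivative
        return float(np.clip(throttle, 0.0, 1.0))   # throttle command in [0, 1]
```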
The knowledge sharing module in the TAKEN system uses images as raw information for detecting potholes on the road (Figure 10F) and cautioning autonomous vehicles about road conditions. Figure 10G shows that the autonomous vehicle maintained the correct lane during its course of action. For every time step in which the vehicle maintained the correct lane, the agent is rewarded with a value of one; otherwise, it is credited with zero reward. In the execution environment, during events such as overtaking and intersection crossing, the rewards are set to zero. Pretrained convolutional neural network (CNN) models, namely VGG19, Xception, InceptionV3, MobileNet, ResNet152 and ResNet152V2, were used to identify events. Figure 10H represents the accuracy of each model used to design the knowledge sharing module. It is observed that all of these models performed well on the dataset. The main intention of using such a model is to attain a low-dimensional representation of an event by which similarities between different events can be calculated so as to make decisions. Exploiting information accumulated over time to perform better is considered future work.
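To illustrate this event-similarity idea, the hedged sketch below embeds event images with a pretrained MobileNet (one of the CNN backbones listed above) and compares them by cosine similarity; the choice of backbone, input size and any similarity threshold are our assumptions, not details fixed by the paper.

```python
import numpy as np
import tensorflow as tf

# Pretrained CNN as a fixed feature extractor; pooling='avg' yields one
# low-dimensional embedding vector per image (1024-D for MobileNet).
extractor = tf.keras.applications.MobileNet(
    weights='imagenet', include_top=False, pooling='avg')

def embed(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.mobilenet.preprocess_input(
        np.asarray(images, dtype=np.float32))
    return extractor.predict(x, verbose=0)

def event_similarity(emb_a, emb_b):
    # Cosine similarity between two event embeddings; values near 1 suggest
    # the images depict a similar event (e.g., the same pothole).
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-9))
```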
Figure 10I depicts the model's loss during training over 2500 iterations for detecting and recognising objects on our custom dataset as part of the visual perception stack. The resultant model achieved an accuracy of 93% on this dataset. Figure 11 represents the estimation of depth and sectors for every object detected in the frame. For any detected object whose depth is less than 15 m, the autonomous vehicle performs sector analysis to identify where the object of interest lies in the scene, and corresponding actions are taken depending on the result; a sketch of this logic is given after the figure captions below. It is observed that the vehicle comes to a halt when an object is detected in the vehicle's threshold region. Figure 12 is a sample output of the knowledge sharing module, which illustrates the idea of knowledge sharing between the cloud and players for the identification of roads in good and bad condition.
Figure 11.
Estimation results by the TAKEN model with object and depth estimation in (A) and depth and sector estimation in (B).
Figure 12.
Road condition estimation.
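The depth-gated sector analysis described above can be sketched as follows; only the 15 m threshold and the halt-on-object-ahead behaviour come from the results, while the frame width, the three-sector split and the sector-to-action mapping are our assumptions.

```python
def sector_action(detections, depth_threshold_m=15.0, frame_width_px=800):
    """detections: iterable of (x_center_px, depth_m) pairs from the
    perception stack (a hypothetical interface).

    Objects farther than the threshold are ignored; closer objects trigger
    a sector check over the left / centre / right thirds of the frame."""
    sector_width = frame_width_px / 3
    for x_center, depth in detections:
        if depth >= depth_threshold_m:
            continue
        sector = int(x_center // sector_width)  # 0 = left, 1 = centre, 2 = right
        if sector == 1:
            return 'halt'    # object directly ahead within the threshold region
        return 'slow'        # close object off to one side
    return 'proceed'
```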
6. Conclusions
The TAKEN system has implemented various components of an SDC, namely visual perception, state estimation, motion planning and behavioural modelling, using the Carla simulator. The TAKEN system has successfully devised an alternative solution to handle issues faced by self-driving agents due to sensor failures. In addition, the work has introduced how one could infer knowledge from other players in the environment so as to make wiser and quicker decisions. The main motivation behind introducing this concept came from papers such as "Crazyswarm: A large nano-quadcopter swarm" [33], in which the quadcopters collectively estimate state and perform trajectory planning. The drawback of this work is that the knowledge-sharing module has been trained only on a subset of a class: pothole images. Because the training is performed with images and not videos, historical information about series of actions and situations has not been accounted for. We look forward to improving the current work's perception stack by addressing these issues. We plan to achieve this by extracting information from the latent space of time-series action-pair data (that is, pairs of situations and the corresponding actions over time) and estimating the expected action of the player. Deep reinforcement learning is a promising branch of artificial intelligence with the potential to help implement a solution to this problem; it will allow the model to learn on its own through exploration and exploitation. We will also work on scaling up the communication architecture discussed in the TAKEN system and methodology section by providing additional functionalities so that it can aid vehicles in different complex scenarios. Building accurate models with minimal resource consumption remains an open challenge in this work. The result of this work is recorded as a video, which can be viewed at the following link: https://drive.google.com/file/d/1IXyGhBM2OLqZS4HTRtfFpyoeYW-f11aI/view?usp=sharing (accessed on 30 October 2022).
Author Contributions
Conceptualization, N.K.B. and R.F.; methodology, N.K.B., R.F. and A.P.R.; software, N.K.B.; validation, A.P.R., T.R.G. and P.V.; formal analysis, M.M., T.R.G.; investigation, M.M.; resources, R.F.; data curation, A.P.R.; writing—original draft preparation, N.K.B. and R.F.; writing—review and editing, M.M., T.R.G.; visualization, A.P.R., M.S.K. and P.V.; supervision, R.F. and M.M.; project administration, M.M., T.R.G. and M.S.K. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
M. Mahmud is supported by the AI-TOP (2020-1-UK01-KA201-079167) and DIVERSASIA (618615-EPP-1-2020-1-UK-EPPKA2-CBHE-JP) projects funded by the European Commission under the Erasmus+ programme.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| CAV | Connected and Autonomous Vehicle |
| TAKEN | A Traffic Knowledge-based Navigation System for Connected and Autonomous Vehicles |
| SDC | Self Driving Car |
References
- Tan, Z.; Dai, N.; Su, Y.; Zhang, R.; Li, Y.; Wu, D.; Li, S. Human-machine interaction in intelligent and connected vehicles: A review of status quo, issues and opportunities. IEEE Trans. Intell. Transp. Syst. 2021, 23, 1–22.
- Dhawankar, P.; Agrawal, P.; Abderezzak, B.; Kaiwartya, O.; Busawon, K.; Raboaca, M.S. Design and numerical implementation of V2X control architecture for autonomous driving vehicles. Mathematics 2021, 9, 1696.
- Kaiwartya, O.; Abdullah, A.H.; Cao, Y.; Altameem, A.; Prasad, M.; Lin, C.-T.; Liu, X. Internet of vehicles: Motivation, layered architecture, network model, challenges, and future aspects. IEEE Access 2016, 4, 5356–5373.
- Gao, Y.; Jing, H.; Dianati, M.; Hancock, C.M.; Meng, X. Performance analysis of robust cooperative positioning based on GPS/UWB integration for connected autonomous vehicles. IEEE Trans. Intell. Veh. 2022, 1.
- Kaiwartya, O.; Cao, Y.; Lloret, J.; Kumar, S.; Aslam, N.; Kharel, R.; Abdullah, A.H.; Shah, R.R. Geometry-based localization for GPS outage in vehicular cyber physical systems. IEEE Trans. Veh. Technol. 2018, 67, 3800–3812.
- Kumar, N.; Chaudhry, R.; Kaiwartya, O.; Kumar, N.; Ahmed, S.H. Green computing in software defined social internet of vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 22, 3644–3653.
- Mushtaq, A.; Haq, I.U.; Imtiaz, M.U.; Khan, A.; Shafiq, O. Traffic flow management of autonomous vehicles using deep reinforcement learning and smart rerouting. IEEE Access 2021, 9, 51005–51019.
- Menelaou, C.; Timotheou, S.; Kolios, P.; Panayiotou, C.G.; Polycarpou, M.M. Minimizing traffic congestion through continuous-time route reservations with travel time predictions. IEEE Trans. Intell. Veh. 2018, 4, 141–153.
- GOV.UK. Connected and Automated Vehicles: Market Forecast 2020. January 2021. Available online: https://www.gov.uk/government/publications/connected-and-automated-vehicles-market-forecast-2020 (accessed on 20 May 2022).
- Makarfi, A.U.; Rabie, K.M.; Kaiwartya, O.; Adhikari, K.; Nauryzbayev, G.; Li, X.; Kharel, R. Toward physical-layer security for internet of vehicles: Interference-aware modeling. IEEE Internet Things J. 2020, 8, 443–457.
- Kaiser, M.S.; Lwin, K.T.; Mahmud, M.; Hajializadeh, D.; Chaipimonplin, T.; Sarhan, A.; Hossain, M.A. Advances in crowd analysis for urban applications through urban event detection. IEEE Trans. Intell. Transp. Syst. 2017, 19, 3092–3112.
- Piazzi, A.; Bianco, C.L.; Bertozzi, M.; Fascioli, A.; Broggi, A. Quintic G²-splines for the iterative steering of vision-based autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2002, 3, 27–36.
- Glaser, S.; Vanholme, B.; Mammar, S.; Gruyer, D.; Nouveliere, L. Maneuver-based trajectory planning for highly autonomous vehicles on real road with traffic and driver interaction. IEEE Trans. Intell. Transp. Syst. 2010, 11, 589–606.
- Arogeti, S.A.; Berman, N. Path following of autonomous vehicles in the presence of sliding effects. IEEE Trans. Veh. Technol. 2012, 61, 1481–1492.
- Mahmud, M.; Kaiser, M.S.; Hussain, A.; Vassanelli, S. Applications of deep learning and reinforcement learning to biological data. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 2063–2079.
- Mahmud, M.; Kaiser, M.S.; McGinnity, T.M.; Hussain, A. Deep learning in mining biological data. Cogn. Comput. 2021, 13, 1–33.
- Wu, S.; Hadachi, A.; Vivet, D.; Prabhakar, Y. This is the way: Sensors auto-calibration approach based on deep learning for self-driving cars. IEEE Sens. J. 2021, 21, 27779–27788.
- Masmoudi, M.; Friji, H.; Ghazzai, H.; Massoud, Y. A reinforcement learning framework for video frame-based autonomous car-following. IEEE Open J. Intell. Transp. Syst. 2021, 2, 111–127.
- Ndikumana, A.; Tran, N.H.; Kim, K.T.; Hong, C.S. Deep learning based caching for self-driving cars in multi-access edge computing. IEEE Trans. Intell. Transp. Syst. 2020, 22, 2862–2877.
- Muhammad, K.; Ullah, A.; Lloret, J.; Ser, J.D.; de Albuquerque, V.H.C. Deep learning for safe autonomous driving: Current challenges and future directions. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4316–4336.
- Qureshi, K.N.; Idrees, M.M.; Lloret, J.; Bosch, I. Self-assessment based clustering data dissemination for sparse and dense traffic conditions for internet of vehicles. IEEE Access 2020, 8, 10363–10372.
- Liu, W.; Liu, Y.; Bucknall, R. Filtering based multi-sensor data fusion algorithm for a reliable unmanned surface vehicle navigation. J. Mar. Eng. Technol. 2022, 1–17.
- Lakhekar, G.V.; Waghmare, L.M. Robust self-organising fuzzy sliding mode-based path-following control for autonomous underwater vehicles. J. Mar. Eng. Technol. 2022, 1–22.
- Rego, A.; Garcia, L.; Sendra, S.; Lloret, J. Software defined network-based control system for an efficient traffic management for emergency situations in smart cities. Future Gener. Comput. Syst. 2018, 88, 243–253.
- Shah, P.; Kasbe, T. A review on specification evaluation of broadcasting routing protocols in VANET. Comput. Sci. Rev. 2021, 41, 100418.
- Hakak, S.; Gadekallu, T.R.; Maddikunta, P.K.R.; Ramu, S.P.; Parimala, M.; De Alwis, C.; Liyanage, M. Autonomous vehicles in 5G and beyond: A survey. Veh. Commun. 2022, 100551.
- Arikumar, K.S.; Deepak Kumar, A.; Gadekallu, T.R.; Prathiba, S.B.; Tamilarasi, K. Real-time 3D object detection and classification in autonomous driving environment using 3D LiDAR and camera sensors. Electronics 2022, 11, 4203.
- Han, Z.; Yang, Y.; Wang, W.; Zhou, L.; Gadekallu, T.R.; Alazab, M.; Su, C. RSSI map-based trajectory design for UGV against malicious radio source: A reinforcement learning approach. IEEE Trans. Intell. Transp. Syst. 2022.
- Dev, K.; Xiao, Y.; Gadekallu, T.R.; Corchado, J.M.; Han, G.; Magarini, M. Guest editorial special issue on green communication and networking for connected and autonomous vehicles. IEEE Trans. Green Commun. Netw. 2022, 6, 1260–1266.
- Victor, N.; Alazab, M.; Bhattacharya, S.; Magnusson, S.; Maddikunta, P.K.R.; Ramana, K.; Gadekallu, T.R. Federated learning for IoUT: Concepts, applications, challenges and opportunities. arXiv 2022, arXiv:2207.13976.
- Hoffmann, G.M.; Tomlin, C.J.; Montemerlo, M.; Thrun, S. Autonomous automobile trajectory tracking for off-road driving: Controller design, experimental validation and racing. In Proceedings of the 2007 American Control Conference, New York, NY, USA, 9–13 July 2007; pp. 2296–2301.
- Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006.
- Preiss, J.A.; Honig, W.; Sukhatme, G.S.; Ayanian, N. Crazyswarm: A large nano-quadcopter swarm. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017.