Applied Sciences
  • Article
  • Open Access

25 October 2022

Autonomous Short-Term Traffic Flow Prediction Using Pelican Optimization with Hybrid Deep Belief Network in Smart Cities

1 Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
2 Department of Information Systems, College of Sciences and Arts, King Khalid University, Muhayil Asir 61421, Saudi Arabia
3 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
4 Department of Information Systems, College of Computing and Information System, Umm Al-Qura University, Mecca 24382, Saudi Arabia
This article belongs to the Section Transportation and Future Mobility

Abstract

Accurate and timely traffic flow prediction not only allows traffic controllers to evade traffic congestion and guarantee standard traffic functioning, but also assists travelers in planning ahead of schedule and modifying travel routes promptly. Therefore, short-term traffic flow prediction utilizing artificial intelligence (AI) techniques has received significant attention in smart cities. This manuscript introduces an autonomous short-term traffic flow prediction using optimal hybrid deep belief network (AST2FP-OHDBN) model. The presented AST2FP-OHDBN model focuses on high-precision, near-future traffic prediction in smart city environments. It initially normalizes the traffic data using min–max normalization. In addition, the HDBN model is employed for forecasting the traffic flow in the near future, making use of a DBN with an adaptive learning step approach to enhance the convergence rate. To enhance the predictive accuracy of the DBN model, the pelican optimization algorithm (POA) is exploited as a hyperparameter optimizer, which in turn enhances the overall efficiency of the traffic flow prediction process. To verify the enhanced predictive outcomes of the AST2FP-OHDBN algorithm, a wide-ranging experimental analysis was executed. The experimental values report the promising performance of the AST2FP-OHDBN method over recent state-of-the-art DL models, with a minimal average mean-square error of 17.19132 and root-mean-square error of 22.6634.

1. Introduction

As a new form of intellectually complex mechanism, with high interaction and integration among multidimensional heterogeneous physical substances in network environments [1,2], the cyber-physical system (CPS) combines control, computing, and communication technologies to offer a practicable solution and advanced technologies for the new generation of intelligent transportation systems (ITS). It is therefore a key advancement direction of the CPS to resolve the issues of intelligent real-time target control and optimal dispatch in ITS [3]. Meanwhile, the problem of dense large-scale data-computing functions and optimal control-scheduling methods in large-scale ITS is resolved by the rapid advancement of cloud computing (CC) technology. Its basic principle is to distribute computing tasks over a great number of cloud-distributed computers; ITS management departments can then match CC resources to ITS cloud-controlled applications, acquiring storage systems and computers as required [4]. The implementation of CPS and CC technology makes it possible to acquire, transfer, and compute traffic data in practice, and the implementation of the dynamic matrix method and artificial intelligence (AI) methods can forecast traffic data at the next moment in advance [5].
Real-time, precise short-term traffic flow forecasting can provide traffic guidance for traffic participants in selecting a suitable travel route, and aid traffic controllers in devising a fair control strategy for relieving traffic congestion [6]. Traffic flow is time-series data with strong cyclicity and regularity, which forms the basis for precise estimation; however, its uncertainty and randomness raise prediction difficulties. Therefore, short-term traffic flow forecasting is a necessary and challenging task in both research and transport management [7]. Over the last few years, many techniques have been deployed for forecasting short-term traffic flow, such as the autoregressive integrated moving average (ARIMA), fuzzy theory, artificial neural networks (ANN), and the Kalman filter. Such techniques proved helpful in deriving the temporary tendency of traffic flow and forecasting future traffic flow. The literature shows that AI-related methods have been extensively utilized for object analytics and detection in ITS. However, such AI-enabled methods need precise perception [8], and prevailing approaches produce several mistakes at execution time, which may not be applicable for realistic data analytics in ITS.
During the past few decades, several research proposals have been put forward for enhancing self-learning approaches to dynamic and complex applications of the transport system [9]. Self-learning methods for traffic prediction can be broadly split into two categories: nonparametric and parametric. In this context, a nonparametric method termed deep learning (DL) was found to be helpful for traffic flow forecasting with multidimensional features [10]. DL is a subset of machine learning (ML) that depends on the idea of deep neural networks (DNN), and it is broadly utilized for object recognition, data classification, and natural language processing (NLP).
This manuscript introduces an autonomous short-term traffic flow prediction using an optimal hybrid deep belief network (AST2FP-OHDBN) model. The presented AST2FP-OHDBN model initially normalizes the traffic data using min–max normalization. In addition, the HDBN model is employed for forecasting the traffic flow in the near future, using a DBN with an adaptive learning step approach to enhance the convergence rate. To enhance the predictive accuracy of the DBN model, the pelican optimization algorithm (POA) is exploited as a hyperparameter optimizer, which in turn enhances the overall efficiency of the traffic flow prediction process. To verify the enhanced predictive outcomes of the AST2FP-OHDBN algorithm, a wide-ranging experimental analysis is carried out.

3. The Proposed Model

Traffic flow prediction is developed by using present traffic data, past traffic data, and other related statistical data to establish an appropriate mathematical model. An intelligent computation technique can then be employed to make more accurate forecasts of the traffic occurring in the near future. The outcomes offer a realistic foundation for dynamic vehicle guidance and urban traffic control. Assume a given time interval t (e.g., every 15 min), and let V_t denote the total traffic volume in that interval. The short-term traffic flow forecasting problem addressed in this study can then be framed as follows. Given a set of historical and present traffic data D = \{V_t, V_{t-1}, \ldots, V_{t-q}\}, where t represents the current time and q the time lag, the aim of short-term traffic flow prediction is to forecast Y = V_{t+h}, where h \ge 1 denotes the prediction horizon. For instance, h = 1 means predicting the traffic flow at time t + 1. In this article, a new AST2FP-OHDBN technique is proposed for traffic flow prediction in smart city environments. The presented AST2FP-OHDBN model employs min–max normalization to scale the input data, and the HDBN method is employed for forecasting the traffic flow in the near future. Finally, the POA is exploited as a hyperparameter optimizer, which in turn enhances the overall efficiency of the traffic flow prediction process. Figure 1 depicts the overall process of the proposed method.
Figure 1. Overall process of proposed method.
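To make the problem framing concrete, the following minimal Python sketch shows how a traffic series can be split into input windows \{V_t, V_{t-1}, \ldots, V_{t-q}\} and targets V_{t+h}; the helper name `make_windows` and the toy series are illustrative assumptions, not part of the original study.

```python
import numpy as np

def make_windows(series, q=4, h=1):
    """Frame a univariate traffic series into (input, target) pairs.

    Each input holds the q+1 most recent observations
    {V_t, V_{t-1}, ..., V_{t-q}}; the target is V_{t+h}.
    """
    X, y = [], []
    for t in range(q, len(series) - h):
        X.append(series[t - q : t + 1][::-1])  # V_t first, V_{t-q} last
        y.append(series[t + h])
    return np.array(X), np.array(y)

# toy 15-min aggregated counts (veh per 15 min)
flow = np.array([120, 135, 150, 160, 170, 165, 155, 140], dtype=float)
X, y = make_windows(flow, q=3, h=1)
print(X.shape, y.shape)  # (4, 4) (4,)
```

Each row of X is one sample D and each entry of y the corresponding prediction target Y.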

3.1. Data Normalization

At the beginning level, the presented AST2FP-OHDBN model normalizes the traffic data using min–max normalization. The min–max method is used in this work because the linear transformation of the original data falls within the range [0, 1]:

x_{scale} = \frac{x - x_{min}}{x_{max} - x_{min}} \qquad (1)

where x represents the current value, x_{min} and x_{max} are the minimum and maximum values of the attribute, and x_{scale} denotes the normalized value.
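A minimal sketch of the normalization step of Equation (1) in Python; the helper name `min_max_scale` and the toy values are illustrative assumptions, not from the paper.

```python
import numpy as np

def min_max_scale(x):
    """Linearly map x into [0, 1]: x_scale = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

flow = [120.0, 150.0, 180.0, 240.0]
scaled = min_max_scale(flow)
print(scaled)  # 120 -> 0.0, 150 -> 0.25, 180 -> 0.5, 240 -> 1.0
```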

3.2. Traffic Flow Prediction Using HDBN Model

For forecasting the traffic flow in the near future, the HDBN model is derived. The DBN is a vital DL model, and its building block, the restricted Boltzmann machine (RBM), is a generative stochastic artificial neural network that learns a probability distribution over its set of inputs [21]. There are no connections among the neural units within a layer of the RBM; instead, every neural unit in the visible layer (VL) is connected to every neural unit in the hidden layer (HL). The output of each RBM layer is used as the input of the next layer, so the bottom of the DBN implements a multilayer RBM infrastructure. A greedy technique is utilized to train the instance data layer by layer. Figure 2 showcases the infrastructure of the DBN. The DBN is composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer. The parameters obtained by training the first-layer RBM are used as the input of the second-layer RBM, and the parameters of the remaining layers are obtained by analogy. This training procedure is unsupervised. The joint configuration energy of the VL and HL in an RBM is:
E(v, h \mid \theta) = -\sum_{i=1}^{n} a_i v_i - \sum_{j=1}^{m} b_j h_j - \sum_{i=1}^{n}\sum_{j=1}^{m} v_i w_{ij} h_j \qquad (2)
where \theta = \{w_{ij}, a_i, b_j\}; w_{ij} is the connecting weight between visible unit i and hidden unit j, a_i is the bias of the VL neurons, and b_j is the bias of the HL neurons. Once the parameter \theta is set, the joint probability distribution of the VL and HL is obtained from the energy function as in Equation (3), with the normalizing constant given in Equation (4):
P(v, h \mid \theta) = \frac{e^{-E(v, h \mid \theta)}}{Z(\theta)} \qquad (3)

Z(\theta) = \sum_{v, h} e^{-E(v, h \mid \theta)} \qquad (4)
Figure 2. Structure of DBN.
If the state of the VL v is known, the activation probability of the j-th neural unit of the HL h is obtained as:
P(h_j = 1 \mid v, \theta) = \sigma\left(b_j + \sum_i v_i w_{ij}\right) \qquad (5)
If the HL state h is known, the activation probability of the i-th neural unit of the VL v is obtained as:
P(v_i = 1 \mid h, \theta) = \sigma\left(a_i + \sum_j h_j w_{ij}\right) \qquad (6)
where \sigma(x) = 1/(1 + \exp(-x)) is the activation function, termed the sigmoid function; every neuron takes state value one or zero with probability P. In this unsupervised learning method, the aim of training the RBM is to obtain the parameters that maximize the log-likelihood function:
L(\theta) = \sum_{n=1}^{N} \ln P(v^{(n)} \mid \theta) \qquad (7)

\theta^{*} = \arg\max_{\theta} L(\theta) = \arg\max_{\theta} \sum_{n=1}^{N} \ln P(v^{(n)} \mid \theta) \qquad (8)
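The layer-wise RBM training described above is usually carried out with contrastive divergence, as the next subsection notes. The following Python sketch of one CD-1 update ties Equations (5)–(8) together; the function names, layer sizes, and batch are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lr=0.1):
    """One contrastive-divergence (CD-1) update of an RBM.

    v0 : (batch, n_visible) binary data
    W  : (n_visible, n_hidden) weights; a, b : visible/hidden biases.
    """
    # positive phase: P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i w_ij), Eq. (5)
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # negative phase: reconstruct v via Eq. (6), then hidden probabilities again
    pv1 = sigmoid(h0 @ W.T + a)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    # approximate log-likelihood gradient: <v h>_0 - <v h>_1
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

n_vis, n_hid = 6, 4
W = rng.normal(0, 0.1, (n_vis, n_hid))
a, b = np.zeros(n_vis), np.zeros(n_hid)
v = (rng.random((8, n_vis)) < 0.5).astype(float)
W, a, b = cd1_step(v, W, a, b)
print(W.shape)  # (6, 4)
```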
The HDBN model is presented by the use of DBN with an adaptive learning step approach to enhance the convergence rate.
Training the DBN model is difficult, especially in terms of training numerous RBMs. A proper learning rate must therefore be fixed to train the DBN model by contrastive divergence: a comparatively high learning rate results in an unstable training procedure, while a low learning rate leads to a poor convergence rate. To address this issue, the HDBN model uses an adaptive learning step (ALS) to compute an effective learning rate. The step size is modified based on sign changes:
\gamma_{ij}^{new} = u\,\gamma_{ij}^{old} \quad \text{if } \left(\langle v_i h_j\rangle_0 - \langle v_i h_j\rangle_k\right)\left(\langle v_i h_j\rangle_0^{old} - \langle v_i h_j\rangle_k^{old}\right) > 0 \qquad (9)

\gamma_{ij}^{new} = d\,\gamma_{ij}^{old} \quad \text{if } \left(\langle v_i h_j\rangle_0 - \langle v_i h_j\rangle_k\right)\left(\langle v_i h_j\rangle_0^{old} - \langle v_i h_j\rangle_k^{old}\right) < 0 \qquad (10)
where u > 1 denotes the incremental factor of the learning step, d < 1 the decremental factor, and \gamma_{ij}^{old} the individual learning rate of the previous update. If two consecutive updates are in the same direction, the step size is increased, and vice versa. The issues produced by an inappropriate step size are thereby avoided, and the convergence rate of the DBN method is enhanced.
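A minimal sketch of the ALS rule in Equations (9) and (10), assuming per-weight learning rates and using the current and previous CD gradient estimates as the sign signals; the helper name `adaptive_step` and the factor values are illustrative assumptions.

```python
import numpy as np

def adaptive_step(gamma, grad, grad_old, u=1.2, d=0.5):
    """Per-weight adaptive learning step (ALS).

    gamma    : current per-weight learning rates
    grad     : current CD gradient estimate  <v h>_0 - <v h>_k
    grad_old : gradient estimate from the previous update
    The rate grows by u (> 1) when two consecutive updates agree in
    sign, and shrinks by d (< 1) when they disagree; a zero product
    leaves the rate unchanged.
    """
    gamma = np.where(grad * grad_old > 0, u * gamma, gamma)
    gamma = np.where(grad * grad_old < 0, d * gamma, gamma)
    return gamma

gamma = np.full((2, 2), 0.1)
grad = np.array([[0.3, -0.2], [0.1, 0.0]])
grad_old = np.array([[0.2, 0.4], [-0.1, 0.5]])
new_gamma = adaptive_step(gamma, grad, grad_old)
print(new_gamma)  # new rates: [[0.12, 0.05], [0.05, 0.1]]
```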

3.3. Hyperparameter Tuning

At the last stage, the POA is exploited as a hyperparameter optimizer, which in turn enhances the overall efficiency of the traffic flow prediction process. The presented POA is a population-based technique in which pelicans are the members of the population [22]. In population-based techniques, each population member represents a candidate solution, suggesting values for the problem variables based on its position in the search space. Initially, the population members are randomly initialized between the lower and upper bounds of the problem, using Equation (11).
x_{i,j} = l_j + rand \cdot (u_j - l_j), \quad i = 1, 2, \ldots, N, \quad j = 1, 2, \ldots, m \qquad (11)
where x_{i,j} denotes the value of the j-th variable of the i-th candidate solution, N represents the number of population members, m the number of problem variables, rand a random number in the interval [0, 1], l_j the lower bound, and u_j the upper bound of the j-th problem variable. The pelican population in the presented POA is represented by a matrix, named the population matrix, in Equation (12). Each row of this matrix signifies a candidate solution, and each column the proposed values for one problem variable.
X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m} \end{bmatrix} \qquad (12)
where X refers to the population matrix of pelicans and X_i denotes the i-th pelican. In the presented POA, every population member is a pelican, i.e., a candidate solution to the given problem. Thus, the objective function of the given problem is evaluated for each candidate solution, and the attained values are collected in a vector named the objective function vector in Equation (13).
F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1} \qquad (13)
where F stands for the objective function vector and F_i represents the objective function value of the i-th candidate solution.
The presented POA updates the candidate solutions by imitating the strategy and behavior of pelicans when attacking and hunting prey. This hunting strategy comprises two phases:
(i)
Moving to prey (exploration stage).
(ii)
Winging on the water surface (exploitation stage).
Exploration Phase
During the first phase, the pelicans identify the location of the prey and then move toward this identified region. The pelicans' movement toward the prey location is mathematically modeled in Equation (14).
x_{i,j}^{P1} = \begin{cases} x_{i,j} + rand \cdot (p_j - I \cdot x_{i,j}), & F_p < F_i \\ x_{i,j} + rand \cdot (x_{i,j} - p_j), & \text{otherwise} \end{cases} \qquad (14)
where x_{i,j}^{P1} signifies the new status of the i-th pelican in the j-th dimension based on phase 1, I is a number randomly equal to 1 or 2, p_j represents the position of the prey in the j-th dimension, and F_p is its objective function value. The update is then accepted or rejected according to Equation (15).
X_i = \begin{cases} X_i^{P1}, & F_i^{P1} < F_i \\ X_i, & \text{otherwise} \end{cases} \qquad (15)
where X_i^{P1} denotes the new status of the i-th pelican and F_i^{P1} its objective function value based on phase 1.
Exploitation Phase
During the second phase, after the pelicans reach the water surface, they spread their wings on the surface to move the fish upward and then gather the prey into their throat pouch, so that additional fish in the attacked region are caught. Modeling this behavior causes the presented POA to converge to better points in the hunting region, which enhances the local search power and exploitation capability of the POA. Mathematically, the technique must inspect the points in the neighborhood of the pelican position in order to converge to an optimal solution. This hunting behavior of pelicans is mathematically modeled in Equation (16).
x_{i,j}^{P2} = x_{i,j} + R \cdot (1 - t/T) \cdot (2 \cdot rand - 1) \cdot x_{i,j} \qquad (16)
where x_{i,j}^{P2} signifies the new status of the i-th pelican in the j-th dimension based on phase 2, R is a constant equal to 0.2, t is the iteration counter, and T is the maximum number of iterations. The coefficient R(1 - t/T) represents the radius of the neighborhood of the population members: each member searches locally in its neighborhood to converge to an optimal solution. In this phase, an effective update is again used to accept or reject the new pelican position, as shown in Equation (17).
X_i = \begin{cases} X_i^{P2}, & F_i^{P2} < F_i \\ X_i, & \text{otherwise} \end{cases} \qquad (17)
where X_i^{P2} refers to the new status of the i-th pelican and F_i^{P2} its objective function value based on phase 2. After every population member has been updated according to the first and second phases, the best candidate solution so far is updated based on the new status of the population and the objective function values. The technique then enters the next iteration, and the steps of the presented POA based on Equations (14)–(17) are repeated until the end of the run. Finally, the best candidate solution attained over the iterations is reported as a quasi-optimal solution to the given problem. The pseudocode of the POA is given in Algorithm 1.
Algorithm 1: Pseudocode of POA.
1. Input: POA population size (N) and number of iterations (T).
2. Initialize pelican locations and evaluate the objective function.
3. For t = 1 : T
4.     Generate the prey location at random.
5.     For i = 1 : N
6.        Phase 1: Move towards prey (exploration).
7.           For j = 1 : m
8.              Determine the new position in the j-th dimension.
9.           End
10.          Update the i-th population member.
11.       Phase 2: Wing on water surface (exploitation).
12.          For j = 1 : m
13.             Determine the new position in the j-th dimension.
14.          End
15.          Update the i-th population member.
16.    End
17.    Update the best candidate solution.
18. End
19. Output: Best candidate solution attained by POA.
End POA.
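Algorithm 1 can be sketched end to end as follows. This is an illustrative Python implementation under Equations (11)–(17); the random-prey choice, the bound clipping, and the sphere test function are assumptions for the sake of a runnable example, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def poa(objective, lb, ub, n_pop=30, n_iter=100, R=0.2):
    """Minimize `objective` over the box [lb, ub] with the pelican
    optimization algorithm: exploration toward a prey location,
    then local winging around each member."""
    m = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = lb + rng.random((n_pop, m)) * (ub - lb)        # Eq. (11)
    F = np.array([objective(x) for x in X])
    for t in range(1, n_iter + 1):
        # prey: a randomly chosen population member (an assumption)
        p = X[rng.integers(n_pop)]
        Fp = objective(p)
        for i in range(n_pop):
            # Phase 1: move toward prey (exploration), Eq. (14)
            I = rng.integers(1, 3)                     # randomly 1 or 2
            if Fp < F[i]:
                cand = X[i] + rng.random(m) * (p - I * X[i])
            else:
                cand = X[i] + rng.random(m) * (X[i] - p)
            cand = np.clip(cand, lb, ub)
            fc = objective(cand)
            if fc < F[i]:                              # Eq. (15)
                X[i], F[i] = cand, fc
            # Phase 2: wing on the water surface (exploitation), Eq. (16)
            cand = X[i] + R * (1 - t / n_iter) * (2 * rng.random(m) - 1) * X[i]
            cand = np.clip(cand, lb, ub)
            fc = objective(cand)
            if fc < F[i]:                              # Eq. (17)
                X[i], F[i] = cand, fc
    best = int(F.argmin())
    return X[best], F[best]

# sanity check on the sphere function over [-5, 5]^2
x_best, f_best = poa(lambda x: float(np.sum(x ** 2)), [-5, -5], [5, 5])
print(round(f_best, 4))
```

In hyperparameter tuning, `objective` would be the validation error of the HDBN as a function of its hyperparameters, with lb and ub bounding the search ranges.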

4. Results and Discussion

This section inspects the prediction performance of the AST2FP-OHDBN model from distinct aspects. The proposed model is tested on traffic data containing 30 s raw sensor data collected over a duration of 30 days. The traffic data collected during the first 10 days are used as the training set and the remaining 20 days as the testing set. In this experiment, the data groups consist of 15 min of aggregated data in vehicles per 15 min (veh per 15 min); thereby, 96 data groups are available for each day. Before the calculation, the data groups are normalized (as given in Section 3.1), rendering the data in the range of 0 to 1.
Firstly, the mean absolute percentage error (MAPE) analysis of the proposed model is performed. The MAPE is a widely utilized measure for prediction processes: it is defined as the ratio of the sum of the individual absolute errors to the demand (each period separately). Table 1 and Figure 3 offer a detailed fitness value examination of the AST2FP-OHDBN model under varying numbers of iterations. The experimental values show that the AST2FP-OHDBN method achieves effective outcomes with minimal fitness values. For instance, with 10 iterations, the AST2FP-OHDBN model offered best, average, and worst fitness values of 1.996%, 5.237%, and 8.795%, respectively. Meanwhile, with 50 iterations, it provided best, average, and worst fitness values of 0.471%, 1.678%, and 5.046%, respectively. Eventually, with 100 iterations, it offered best, average, and worst fitness values of 0.471%, 1.013%, and 3.421%, respectively.
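The MAPE and RMSE measures used throughout this section can be computed with a short sketch like the following; the toy actual/predicted values are illustrative, not taken from the tables.

```python
import numpy as np

def mape(actual, pred):
    """Mean absolute percentage error (%): the absolute error of each
    period divided by the actual demand of that period, averaged."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs(actual - pred) / actual)

def rmse(actual, pred):
    """Root-mean-square error."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((actual - pred) ** 2)))

actual = [190, 200, 210, 220]
pred = [226, 205, 200, 230]
print(round(mape(actual, pred), 3), round(rmse(actual, pred), 3))  # 7.689 19.5
```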
Table 1. MAPE analysis of AST2FP-OHDBN approach with distinct count of iterations.
Figure 3. MAPE analysis of AST2FP-OHDBN approach with distinct count of iterations.
Table 2 and Figure 4 provide a detailed study of the prediction results of the AST2FP-OHDBN model under a varying index. The experimental values highlight that the AST2FP-OHDBN method attains predictions close to the actual values in each run. For instance, on run-1 with an actual value of 190, the AST2FP-OHDBN model attained a predicted value of 226. Moreover, on run-2 with an actual value of 190, it reached a predicted value of 225. Furthermore, on run-3 with an actual value of 190, it gained a predicted value of 247. Finally, on run-5 with an actual value of 190, it reached a predicted value of 268.
Table 2. Traffic flow analysis of AST2FP-OHDBN approach with distinct runs.
Figure 4. Traffic flow analysis of AST2FP –OHDBN approach (a) Run1, (b) Run2, (c) Run3, (d) Run4, and (e) Run5.
Table 3 demonstrates an extensive comparative study of the AST2FP-OHDBN model with recent models in terms of different measures [23]. Figure 5 illustrates a comparison of the AST2FP-OHDBN method with existing methods in terms of RMSE. The figure shows that the AST2FP-OHDBN technique gains effective outcomes over the other models, with minimal RMSE values under all lags. For example, with lag = 1, the AST2FP-OHDBN technique offered a reduced RMSE of 26.9257, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models offered increased RMSE of 53.8171, 42.6758, 39.5084, 38.0203, and 32.7534, respectively. At the same time, with lag = 5, the AST2FP-OHDBN model gained a lower RMSE value of 23.4312, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models resulted in ineffectual outcomes with higher RMSE values of 98.8703, 58.9394, 55.5521, 53.4518, and 28.7079, respectively.
Table 3. Comparative analysis of AST2FP-OHDBN approach with existing methodologies under various measures.
Figure 5. RMSE analysis of AST2FP-OHDBN approach with existing methodologies.
Figure 6 demonstrates a comparison of the AST2FP-OHDBN approach with existing models in terms of MSE. The figure shows that the AST2FP-OHDBN method gains effective outcomes over the other models, with minimal MSE values under all lags. For example, with lag = 1, the AST2FP-OHDBN approach rendered a reduced MSE of 15.4770, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models offered an increased MSE of 38.3663, 32.3363, 24.7114, 22.3607, and 21.8427, respectively. Meanwhile, with lag = 5, the AST2FP-OHDBN approach acquired a lower MSE value of 19.7508, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models resulted in ineffectual outcomes with higher MSE values of 67.0300, 57.1088, 52.4225, 44.5820, and 24.4347, respectively.
Figure 6. MSE analysis of AST2FP-OHDBN approach with existing methodologies.
A detailed equal coefficient (ECC) inspection of the results of the AST2FP-OHDBN model and the existing models is given in Figure 7. The results demonstrate that the AST2FP-OHDBN model attains enriched results, with higher ECC values. For example, with lag = 1, the AST2FP-OHDBN method reached an increased ECC of 0.9835, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models obtained decreased ECC of 0.9537, 0.9557, 0.9569, 0.9579, and 0.9792, respectively. Likewise, with lag = 5, the AST2FP-OHDBN model attained an improved ECC of 0.9861, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models reached a reduced ECC of 0.9462, 0.9477, 0.9489, 0.9507, and 0.9724, respectively.
Figure 7. ECC analysis of AST2FP-OHDBN approach with existing methodologies.
Table 4 and Figure 8 depict a brief inspection of the AST2FP-OHDBN model and existing methods in terms of running time (RT). The figure shows that the AST2FP-OHDBN model acquires effective outcomes over the other models, with minimal RT values under all lags. For example, with lag = 1, the AST2FP-OHDBN model provided a reduced RT of 3.84 s, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models offered increased RT of 7.26 s, 19.53 s, 11.29 s, 14.25 s, and 9.80 s, respectively.
Table 4. RT analysis of AST2FP-OHDBN approach with existing methodologies.
Figure 8. RT analysis of AST2FP-OHDBN approach with existing methodologies.
Simultaneously, with lag = 5, the AST2FP-OHDBN method gained a lower RT value of 5.07 s, whereas the LS-SVM, GA-LSSVM, PSO-LSSVM, FFO-LSSVM, and hybrid LSSVM models resulted in ineffectual outcomes with higher RT values of 7.89 s, 22 s, 12.24 s, 15.31 s, and 10.44 s, respectively. From the detailed result analysis, it is concluded that the AST2FP-OHDBN model reaches an effective traffic flow forecasting performance.

5. Conclusions

In this article, a novel AST2FP-OHDBN model was proposed for traffic flow prediction in smart city environments. The presented AST2FP-OHDBN model follows a three-stage process: min–max normalization, HDBN-based traffic flow forecasting, and POA-based hyperparameter tuning. Since trial-and-error hyperparameter tuning of the HDBN model is a tedious process, the POA is applied as a hyperparameter optimizer, which considerably enhances the overall efficiency of the traffic flow prediction process. To verify the enhanced predictive outcomes of the AST2FP-OHDBN algorithm, a wide-ranging experimental analysis was executed. The experimental values reported the promising performance of the AST2FP-OHDBN model over recent state-of-the-art DL models, with a minimal average MSE of 17.19132 and RMSE of 22.6634. Therefore, the AST2FP-OHDBN algorithm can be employed to accomplish high-precision, near-future traffic prediction in smart city environments. In the future, hybrid metaheuristics can be designed to enhance prediction outcomes.

Author Contributions

Conceptualization, H.A. and S.S.A.; methodology, N.A. (Naif Alasmari); software, G.P.M.; validation, N.A. (Najm Alotaibi), H.A. and H.M.; formal analysis, H.M.; investigation, N.A. (Naif Alasmari); resources, H.M.; data curation, G.P.M.; writing—original draft preparation, H.A., S.S.A. and N.A. (Naif Alasmari); writing—review and editing, N.A. (Najm Alotaibi); visualization, H.M.; supervision, S.S.A.; project administration, G.P.M.; funding acquisition, H.A., N.A. (Naif Alasmari) and S.S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through General Research Project under grant number (40/43). Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R303), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR37).

Institutional Review Board Statement

This article does not contain any studies with human participants performed by any of the authors.

Data Availability Statement

Data sharing not applicable to this article as no datasets were generated during the current study.

Conflicts of Interest

The authors declare that they have no conflict of interest. The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.

References

  1. Nguyen, D.D.; Rohacs, J.; Rohacs, D. Autonomous flight trajectory control system for drones in smart city traffic management. ISPRS Int. J. Geo-Inf. 2021, 10, 338. [Google Scholar] [CrossRef]
  2. Cui, Q.; Wang, Y.; Chen, K.C.; Ni, W.; Lin, I.C.; Tao, X.; Zhang, P. Big data analytics and network calculus enabling intelligent management of autonomous vehicles in a smart city. IEEE Internet Things J. 2018, 6, 2021–2034. [Google Scholar] [CrossRef]
  3. Azgomi, H.F.; Jamshidi, M. A brief survey on smart community and smart transportation. In Proceedings of the 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), Volos, Greece, 5–7 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 932–939. [Google Scholar]
  4. Kuru, K.; Khan, W. A framework for the synergistic integration of fully autonomous ground vehicles with smart city. IEEE Access 2020, 9, 923–948. [Google Scholar] [CrossRef]
  5. Seuwou, P.; Banissi, E.; Ubakanma, G. The future of mobility with connected and autonomous vehicles in smart cities. In Digital Twin Technologies and Smart Cities; Springer: Cham, Switzerland, 2020; pp. 37–52. [Google Scholar]
  6. Chehri, A.; Mouftah, H.T. Autonomous vehicles in the sustainable cities, the beginning of a green adventure. Sustain. Cities Soc. 2019, 51, 101751. [Google Scholar] [CrossRef]
  7. Chen, X.; Chen, H.; Yang, Y.; Wu, H.; Zhang, W.; Zhao, J.; Xiong, Y. Traffic flow prediction by an ensemble framework with data denoising and deep learning model. Phys. A Stat. Mech. Its Appl. 2021, 565, 125574. [Google Scholar] [CrossRef]
  8. Zhou, T.; Han, G.; Xu, X.; Han, C.; Huang, Y.; Qin, J. A learning-based multimodel integrated framework for dynamic traffic flow forecasting. Neural Process. Lett. 2019, 49, 407–430. [Google Scholar] [CrossRef]
  9. Jia, T.; Yan, P. Predicting citywide road traffic flow using deep spatiotemporal neural networks. IEEE Trans. Intell. Transp. Syst. 2020, 22, 3101–3111. [Google Scholar] [CrossRef]
  10. Gu, Y.; Lu, W.; Xu, X.; Qin, L.; Shao, Z.; Zhang, H. An improved Bayesian combination model for short-term traffic prediction with deep learning. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1332–1342. [Google Scholar] [CrossRef]
  11. Zhang, W.; Yu, Y.; Qi, Y.; Shu, F.; Wang, Y. Short-term traffic flow prediction based on spatio-temporal analysis and CNN deep learning. Transp. A Transp. Sci. 2019, 15, 1688–1711. [Google Scholar] [CrossRef]
  12. Tang, J.; Zhang, X.; Yin, W.; Zou, Y.; Wang, Y. Missing data imputation for traffic flow based on combination of fuzzy neural network and rough set theory. J. Intell. Transp. Syst. 2021, 25, 439–454. [Google Scholar] [CrossRef]
  13. Li, L.; Qin, L.; Qu, X.; Zhang, J.; Wang, Y.; Ran, B. Day-ahead traffic flow forecasting based on a deep belief network optimized by the multi-objective particle swarm algorithm. Knowl.-Based Syst. 2019, 172, 1–14. [Google Scholar] [CrossRef]
  14. Huang, Y.Q.; Zheng, J.C.; Sun, S.D.; Yang, C.F.; Liu, J. Optimized YOLOv3 algorithm and its application in traffic flow detections. Appl. Sci. 2020, 10, 3079. [Google Scholar] [CrossRef]
  15. Chen, C.; Liu, B.; Wan, S.; Qiao, P.; Pei, Q. An edge traffic flow detection scheme based on deep learning in an intelligent transportation system. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1840–1852. [Google Scholar] [CrossRef]
  16. Qu, Z.; Li, H.; Li, Z.; Zhong, T. Short-term traffic flow forecasting method with MB-LSTM hybrid network. IEEE Trans. Intell. Transp. Syst. 2020, 23, 225–235. [Google Scholar]
  17. Feng, X.; Ling, X.; Zheng, H.; Chen, Z.; Xu, Y. Adaptive multi-kernel SVM with spatial–temporal correlation for short-term traffic flow prediction. IEEE Trans. Intell. Transp. Syst. 2018, 20, 2001–2013. [Google Scholar] [CrossRef]
  18. Xia, M.; Jin, D.; Chen, J. Short-Term Traffic Flow Prediction Based on Graph Convolutional Networks and Federated Learning. IEEE Trans. Intell. Transp. Syst. 2022, 1–13. [Google Scholar] [CrossRef]
  19. Lin, G.; Lin, A.; Gu, D. Using support vector regression and K-nearest neighbors for short-term traffic flow prediction based on maximal information coefficient. Inf. Sci. 2022, 608, 517–531. [Google Scholar] [CrossRef]
  20. Chen, Z.; Lu, Z.; Chen, Q.; Zhong, H.; Zhang, Y.; Xue, J.; Wu, C. A spatial-temporal short-term traffic flow prediction model based on dynamical-learning graph convolution mechanism. arXiv 2022, arXiv:2205.04762. [Google Scholar] [CrossRef]
  21. Li, F.; Zhang, J.; Shang, C.; Huang, D.; Oko, E.; Wang, M. Modelling of a post-combustion CO2 capture process using deep belief network. Appl. Therm. Eng. 2018, 130, 997–1003. [Google Scholar] [CrossRef]
  22. Trojovský, P.; Dehghani, M. Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications. Sensors 2022, 22, 855. [Google Scholar] [CrossRef] [PubMed]
  23. Luo, C.; Huang, C.; Cao, J.; Lu, J.; Huang, W.; Guo, J.; Wei, Y. Short-term traffic flow prediction based on least square support vector machine with hybrid optimization algorithm. Neural Process. Lett. 2019, 50, 2305–2322. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
