# A Real-Time Channel Prediction Model Based on Neural Networks for Dedicated Short-Range Communications


## Abstract


## 1. Introduction

#### 1.1. Background and Motivation

- **Theoretical model:** Theoretical models mainly include ray-optical prediction models such as ray-tracing models [4,5] and ray-launching models [6,7]. Ray-tracing models are simulation-based approximations of Maxwell's equations and can depict the channel accurately. However, they suffer from a heavy computational load, which makes it difficult to use them to track real-world variations in real-time.
- **Empirical model:**

#### 1.2. Related Work

#### 1.3. Contributions and Innovations

- The proposed NN model can be used for predicting PL. The PL prediction model outperforms commonly used empirical models, such as the dual-slope distance-breakpoint PL model, the polynomial fitting model, and the GG shadowed fading model.
- The proposed NN model can predict PL in real-time and can be updated in real-time. Therefore, the model not only guarantees real-time performance, but also adapts the PL prediction locally to the actual environment with higher accuracy.
- The proposed NN model can be used for predicting PD. The model predicts the PDR in a future period from the PD situation in the current time slot, and the PD prediction model is updated in real-time as new data are acquired. This model provides higher prediction accuracy than the statistics-based model.

- On the theoretical side, this paper introduces the ANN algorithm into the field of vehicular communication. Although artificial neural network algorithms are advancing rapidly, applying them to vehicular communication, in terms of channel modeling and prediction, is an innovative step.
- In the semi-physical simulation, this paper proposes a prediction model based on an ANN. The model can predict channel parameters, including PL and PDR, based on the measured dataset. The simulation results show that: (1) in the PL offline prediction part, the ANN model is more accurate than the traditional models; (2) in the PL online prediction part, the ANN model fits the dataset better than the traditional models and is more capable of reflecting the details of different time steps; and (3) in the PDR online prediction part, the ANN model achieves a better trade-off between precision and accuracy than the traditional model.

## 2. DSRC Network Sniffer and Data Connection

- Highways
- Local areas
- Residential areas
- State Park and rural areas

## 3. A Novel Path Loss Prediction Model

#### 3.1. Typical Empirical Models for Path Loss Prediction

#### 3.1.1. Dual-Slope Distance-Breakpoint Path Loss Model

- (i) The relative distances in the training set are rounded to the nearest meter, denoted as ${d}_{k}$, $k=\{5,6,\cdots ,199\}$. The corresponding PLs are denoted as $PL({d}_{k})=\{PL{({d}_{k})}_{1},PL{({d}_{k})}_{2},\cdots \}$.
- (ii) Take ${d}_{0}=\{{d}_{{0}_{1}},{d}_{{0}_{2}},\cdots \}$ into Equation (4) to calculate the PL at the reference distance $PL({d}_{0})$.
- (iii) Calculate the average of PLs at each relative distance ${d}_{k}$, denoted as $E\{PL({d}_{k})\}$.
- (iv) For each reference distance ${d}_{{0}_{p}}$, take ${n}_{1}=\{0,0.01,0.02,\cdots ,10\}$ into Equation (3) (subject to ${d}_{0}\le d\le {d}_{b}$) to calculate the optimal ${n}_{opt1}$ and ${X}_{\sigma 1}$, which lead to the minimum RMSE of $E\{PL({d}_{k})\}$ and $P{L}_{DS}({d}_{k})$.
- (v) For each reference distance ${d}_{{0}_{p}}$, take ${n}_{2}=\{0,0.01,0.02,\cdots ,10\}$ into Equation (3) (subject to $d>{d}_{b}$) to calculate the optimal ${n}_{opt2}$ and ${X}_{\sigma 2}$, which lead to the minimum RMSE of $E\{PL({d}_{k})\}$ and $P{L}_{DS}({d}_{k})$.
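The two-slope fitting procedure above can be sketched as a brute-force grid search. The dual-slope form used below (reference loss at ${d}_{0}$ plus $10n{\mathrm{log}}_{10}$ terms on either side of the breakpoint ${d}_{b}$) is a hedged reconstruction of Equation (3), and all function names and inputs are illustrative rather than the paper's exact code.

```python
import numpy as np

def dual_slope_pl(d, d0, db, pl_d0, n1, n2):
    """Hedged reconstruction of the dual-slope model of Equation (3):
    slope n1 up to the breakpoint db, slope n2 beyond it."""
    d = np.asarray(d, dtype=float)
    pl_db = pl_d0 + 10.0 * n1 * np.log10(db / d0)      # PL at the breakpoint
    return np.where(d <= db,
                    pl_d0 + 10.0 * n1 * np.log10(d / d0),
                    pl_db + 10.0 * n2 * np.log10(d / db))

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def fit_dual_slope(d, pl_mean, d0, db, pl_d0):
    """Steps (iv)-(v): sweep n = {0, 0.01, ..., 10} and keep the RMSE minimizer,
    first for d0 <= d <= db, then for d > db with n1 held fixed."""
    grid = np.arange(0.0, 10.01, 0.01)
    near = d <= db
    n1 = min(grid, key=lambda n: rmse(pl_mean[near],
                                      dual_slope_pl(d[near], d0, db, pl_d0, n, n)))
    n2 = min(grid, key=lambda n: rmse(pl_mean[~near],
                                      dual_slope_pl(d[~near], d0, db, pl_d0, n1, n)))
    return float(n1), float(n2)
```

Fitting the second slope with the first one held fixed mirrors the sequential structure of steps (iv) and (v).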

#### 3.1.2. Polynomial Fitting Model

- (i) The relative distances in the training set are rounded to the nearest meter, denoted as ${d}_{k}$, $k=\{5,6,\cdots ,199\}$, while the corresponding PLs are denoted as $PL({d}_{k})=\{PL{({d}_{k})}_{1},PL{({d}_{k})}_{2},\cdots \}$.
- (ii) Calculate the average of PLs at each relative distance ${d}_{k}$, denoted as $E\{PL({d}_{k})\}$.
- (iii) Take $k=\{{k}_{1},{k}_{2},\cdots \}$ into Equation (5) to calculate the approximate curve for order ${k}_{p}$.
- (iv) Calculate the coefficient matrix ${A}_{p}={X}^{-1}Y$ using Equation (7) according to the order ${k}_{p}$.
- (v) For each order ${k}_{p}$, take ${A}_{p}$ into Equation (5) to calculate the optimal ${k}_{opt}$, which generates the minimum RMSE of $E\{PL({d}_{k})\}$ and $P{L}_{PF}({d}_{k})$.
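Steps (iii)–(v) amount to one least-squares fit per candidate order followed by an RMSE comparison. The sketch below uses `np.polyfit` in place of the explicit normal-equation solve ${A}_{p}={X}^{-1}Y$ of Equation (7); the function name and inputs are illustrative.

```python
import numpy as np

def fit_polynomial_pl(d, pl_mean, orders=(1, 2, 3, 4)):
    """Steps (iii)-(v): least-squares fit for each candidate order k_p,
    then keep the order with the minimum RMSE."""
    best = None
    for k in orders:
        coeffs = np.polyfit(d, pl_mean, k)   # coefficient vector A_p (Equation (7))
        pred = np.polyval(coeffs, d)         # PL_PF(d_k) from Equation (5)
        r = float(np.sqrt(np.mean((pl_mean - pred) ** 2)))
        if best is None or r < best[2]:
            best = (k, coeffs, r)
    return best  # (k_opt, A_opt, minimum RMSE)
```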

#### 3.1.3. Generalized Gamma Shadowed Fading Model

- (i) The relative distances in the training set are rounded to the nearest 5 m, denoted as ${d}_{k}$, $k=\{5,10,\cdots ,195\}$. The corresponding PLs are denoted as ${p}_{r}({d}_{k})=\{{p}_{r}{({d}_{k})}_{1},{p}_{r}{({d}_{k})}_{2},\cdots \}$.
- (ii) Calculate the PDF $f({p}_{r})$ of the received signal power at the relative distance ${d}_{k}$.
- (iii) Take $m=\{0.5,0.6,\cdots ,20\}$ and $s=\{0.1,0.2,\cdots ,1\}$ into Equation (8) to calculate the optimal ${m}_{opt}({d}_{k})$ and ${s}_{opt}({d}_{k})$, which lead to the minimum RMSE of $f({p}_{r})$ and ${f}_{GG}({p}_{r}:m,s)$.
- (iv) Putting ${m}_{opt}({d}_{k})$ and ${s}_{opt}({d}_{k})$ into Equation (8), we get the estimated PDF and CDF of the received signal power at the relative distance ${d}_{k}$, denoted as ${\widehat{f}}_{GG}({p}_{r}:{m}_{opt}({d}_{k}))$ and ${\widehat{F}}_{GG}({p}_{r}:{m}_{opt}({d}_{k}))$, respectively.
- (v) The estimated received signal power at the relative distance ${d}_{k}$ is ${\widehat{{p}_{r}}}_{GG}({d}_{k})={\displaystyle \sum {p}_{r}}({d}_{k})\cdot {\widehat{F}}_{GG}({p}_{r}:{m}_{opt}({d}_{k}),{s}_{opt}({d}_{k}))$.
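Equation (8) is not reproduced in this excerpt, so the sketch below uses the Stacy form of the generalized-gamma density as a stand-in for ${f}_{GG}({p}_{r}:m,s)$; $m$ and $s$ are swept over the same grids as in step (iii). Function names and the fixed scale parameter are assumptions for illustration.

```python
import numpy as np
from math import gamma

def gg_pdf(p_r, m, s, scale=1.0):
    """Stacy generalized-gamma density, a stand-in for the GG form of
    Equation (8) (the exact parameterization is defined in the paper)."""
    z = np.asarray(p_r, dtype=float) / scale
    return (s / (scale * gamma(m))) * z ** (m * s - 1.0) * np.exp(-z ** s)

def fit_gg(centers, emp_pdf, scale=1.0):
    """Step (iii): grid-search m in {0.5, 0.6, ..., 20} and s in {0.1, ..., 1}
    for the minimum RMSE against the empirical PDF f(p_r)."""
    best = (None, None, np.inf)
    for m in np.arange(0.5, 20.05, 0.1):
        for s in np.arange(0.1, 1.05, 0.1):
            r = float(np.sqrt(np.mean((emp_pdf - gg_pdf(centers, m, s, scale)) ** 2)))
            if r < best[2]:
                best = (float(m), float(s), r)
    return best  # (m_opt, s_opt, minimum RMSE)
```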

#### 3.2. A Novel Neural Network-Based Path Loss Prediction Model

**Algorithm 1** Development of the Neural Network

**Input:**

1: $dataset=[train[{d}_{k},PL({d}_{k})],validation[{d}_{k},PL({d}_{k})],test[{d}_{k},PL({d}_{k})]]$

2: $par{a}_{hyper}$ = [] // NN hyper-parameters, including the number of hidden layers, nodes per hidden layer, activation function, learning rate, epochs, training function, learning function, etc.

**Output:**

1: $pred$ // predicted $P{L}_{NN}({d}_{k})$ at each relative distance ${d}_{k}$

2: $rmse$ // RMSE of $PL({d}_{k})$ and $P{L}_{NN}({d}_{k})$

3: $ACC$ // accuracy of $PL({d}_{k})$ and $P{L}_{NN}({d}_{k})$

**for** $par{a}_{hyper}^{i}$ in $par{a}_{hyper}$ **do**

$net=train\_validation(train,validation,par{a}_{hyper}^{i})$

$pred=net(test,par{a}_{hyper}^{i})$

$E=RMSE\left(pred,test\right)$

**end for**

$par{a}_{hyper}^{opt}=\mathrm{arg}\mathrm{min}(E)$

$pre{d}^{opt}=net(test,par{a}_{hyper}^{opt})$

$rms{e}^{opt}=RMSE(pre{d}^{opt},test)$

$AC{C}^{opt}=Accuracy(pre{d}^{opt},test)$

**return** $pre{d}^{opt}$, $rms{e}^{opt}$, $AC{C}^{opt}$
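Algorithm 1's selection loop can be sketched as follows, with a polynomial degree standing in for the full hyper-parameter vector (layer sizes, learning rate, epochs, etc.). Following step (iv) of Section 3.2, this sketch scores candidates on the validation set rather than the test set; all names and data are illustrative.

```python
import numpy as np

def rmse(pred, truth):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))

def algorithm_1(train, validation, test, para_hyper):
    """Sketch of Algorithm 1: sweep candidate configurations, keep the
    argmin-RMSE one, then report test-set performance."""
    x_tr, y_tr = train
    x_val, y_val = validation
    x_te, y_te = test
    E = []
    for h in para_hyper:                        # for para_hyper^i in para_hyper do
        net = np.polyfit(x_tr, y_tr, h)         # net = train_validation(train, ...)
        E.append(rmse(np.polyval(net, x_val), y_val))
    h_opt = para_hyper[int(np.argmin(E))]       # para_hyper^opt = argmin(E)
    net = np.polyfit(x_tr, y_tr, h_opt)
    pred = np.polyval(net, x_te)                # pred^opt
    return h_opt, pred, rmse(pred, y_te)        # ..., rmse^opt
```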

- (i) The relative distances in the training set are rounded to the nearest meter, denoted as ${d}_{k}$, $k=\{5,6,\cdots ,199\}$. The corresponding PLs are denoted as $PL({d}_{k})=\{PL{({d}_{k})}_{1},PL{({d}_{k})}_{2},\cdots \}$.
- (ii) Randomly divide the dataset into a training set, a validation set, and a test set in proportions of 60%, 15%, and 25%, respectively.
- (iii) Set the NN parameters: number of nodes in each layer ${N}_{1}=1,{N}_{2}=24,{N}_{3}=1$; activation function $tanh$; learning rate $\eta =0.05$; maximum number of training epochs $epochs=60$; sum-squared error goal $goal=0.001$; back-propagation weight/bias learning function $learngdm$; etc.
- (iv) Train the NN model with the training set, and then tune the model's hyper-parameters with the validation set to reduce overfitting.
- (v) Predict $P{L}_{NN}({d}_{k})$ at each relative distance ${d}_{k}$ with the NN model from (iv), and then calculate the RMSE of $PL({d}_{k})$ and $P{L}_{NN}({d}_{k})$.
- (vi) Repeat steps (iii)–(v) when the scenario changes to obtain the optimal combination of parameters. Within the same scenario, the NN parameters are fixed after the optimal parameters are obtained.
- (vii) Estimate $P{L}_{NN}({d}_{k})$ at each relative distance ${d}_{k}$ with the NN model from (vi), and then calculate the RMSE of $PL({d}_{k})$ and $P{L}_{NN}({d}_{k})$.

- In the Local case, the optimal parameters of the dual-slope model are ${d}_{0}=$ 15 m, ${n}_{1}=$ 1.34, ${n}_{2}=$ 1.36, and that of the PF model is $order=$ 1.
- In the Residence case, the optimal parameters of the dual-slope model are ${d}_{0}=$ 10 m, ${n}_{1}=$ 1.22, ${n}_{2}=$ 1.48, and that of the PF model is $order=$ 1.
- In the Rural case, the optimal parameters of the dual-slope model are ${d}_{0}=$ 12 m, ${n}_{1}=$ 1.64, ${n}_{2}=$ 2, and that of the PF model is $order=$ 2.
- In the Highway case, the optimal parameters of the dual-slope model are ${d}_{0}=$ 54 m, ${n}_{1}=$ 1.88, ${n}_{2}=$ 2.07, and that of the PF model is $order=$ 1.

#### 3.3. A Novel Real-Time Neural Network-Based Path Loss Prediction Model

- $0\le \left|\rho \right|\le 0.20$: the correlation is negligible;
- $0.20<\left|\rho \right|\le 0.60$: the correlation is weak;
- $0.60<\left|\rho \right|\le 0.90$: the correlation is strong;
- $0.90<\left|\rho \right|\le 1.00$: the correlation is very strong.
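These thresholds apply directly to a Pearson correlation coefficient; a minimal sketch (the function name and labels are illustrative):

```python
import numpy as np

def correlation_strength(x, y):
    """Compute the Pearson coefficient rho and classify |rho|
    per the four thresholds listed above."""
    rho = float(np.corrcoef(x, y)[0, 1])
    a = abs(rho)
    if a <= 0.20:
        label = "negligible"
    elif a <= 0.60:
        label = "weak"
    elif a <= 0.90:
        label = "strong"
    else:
        label = "very strong"
    return rho, label
```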

- The updated dataset can effectively predict the trend of PL. The PL prediction curves at different time slots have coherent shapes and trends at the same distance.
- The prediction also varies as the environment changes. In some cases, even when the Tx and Rx are separated by the same distance, the predicted PL curve shows variations. This is due to changes in the environment around the two vehicles, such as obstacles and occlusion.

## 4. A Real-Time Packet Drop Prediction Model

- The value of ${n}_{1}$ should be chosen carefully to balance past data against prediction data. In the model, ${n}_{1}=\{100,120,\cdots ,500\}$.
- The value of ${n}_{2}$ should be selected such that the corresponding prediction time falls within the channel coherence time. In the model, ${n}_{2}=\{2,4,\cdots ,30\}$.
- ${n}_{3}$ is selected to be equal to or less than ${n}_{2}$, resulting in a prediction window covering the transmission times of interest. In the model, ${n}_{3}=0.5\cdot {n}_{2}$, i.e., ${n}_{3}=\{1,2,\cdots ,15\}$.
- Several combinations of ${n}_{1}$, ${n}_{2}$, and ${n}_{3}$ were evaluated in the simulation to obtain good prediction accuracy. A comparison factor is defined as $\alpha =accuracy/MS{E}^{2}$. The set of ${n}_{1}$, ${n}_{2}$, and ${n}_{3}$ yielding the largest $\alpha $ is selected as optimal.
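The selection rule in the last bullet can be sketched as a one-line argmax over candidate $({n}_{1},{n}_{2},{n}_{3})$ triples; the accuracy/MSE values below are illustrative placeholders, not simulation results.

```python
def select_window_sizes(results):
    """Pick the (n1, n2, n3) triple maximizing alpha = accuracy / MSE^2.
    `results` maps (n1, n2, n3) -> (accuracy, mse)."""
    best = max(results.items(), key=lambda kv: kv[1][0] / kv[1][1] ** 2)
    return best[0]
```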

- Increasing the size of the dataset initially enhances the prediction accuracy of the statistics-based model.
- However, enlarging the dataset further requires reaching back to older time slots, whose data are less relevant to the present and future slots; beyond that point, it reduces the accuracy of PD prediction.

## 5. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Mahjoub, H.N.; Tahmasbi-Sarvestani, A.; Gani, S.M.O.; Fallah, Y.P. Composite α − μ Based DSRC Channel Model Using Large Data Set of RSSI Measurements. IEEE Trans. Intell. Transp. Syst. **2018**, 20, 205–217.
2. El, M.E.E.A.; Youssif, A.A.A.; Ghalwash, A.Z. Energy aware and adaptive cross-layer scheme for video transmission over wireless sensor networks. IEEE Sens. J. **2016**, 16, 7792–7802.
3. Carpenter, S.E.; Sichitiu, M.L. Evaluating the Accuracy of Vehicular Channel Models in a Large-Scale DSRC Test. In Proceedings of the 2017 IEEE 14th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), Orlando, FL, USA, 22–25 October 2017; pp. 241–249.
4. Parsons, J.D. The Mobile Radio Propagation Channel, 2nd ed.; John Wiley & Sons: Chichester, UK, 2000.
5. Al-Hourani, A.; Kandeepan, S.; Jamalipour, A. Modeling air-to-ground path loss for low altitude platforms in urban environments. In Proceedings of the 2014 IEEE Global Communications Conference, Austin, TX, USA, 8–12 December 2014; pp. 2898–2904.
6. Azpilicueta, L.; Rawat, M.; Rawat, K.; Ghannouchi, F.M.; Falcone, F. A ray launching-neural network approach for radio wave propagation analysis in complex indoor environments. IEEE Trans. Antennas Propag. **2014**, 62, 2777–2786.
7. Rose, D.M.; Kurner, T. An analytical 3D ray-launching method using arbitrary polygonal shapes for wireless propagation prediction. In Proceedings of the 2014 IEEE 80th Vehicular Technology Conference, Vancouver, BC, Canada, 14–17 September 2014; pp. 1–6.
8. Feuerstein, M.J.; Blackard, K.L.; Rappaport, T.S.; Seidel, S.Y.; Xia, H.H. Path loss, delay spread, and outage models as functions of antenna height for microcellular system design. IEEE Trans. Veh. Technol. **1994**, 43, 487–498.
9. Cheng, L.; Henty, B.E.; Stancil, D.D.; Bai, F.; Mudalige, P. Mobile vehicle-to-vehicle narrow-band channel measurement and characterization of the 5.9 GHz dedicated short range communication (DSRC) frequency band. IEEE J. Sel. Areas Commun. **2007**, 25, 1501–1516.
10. Lee, H.C.; Chen, C.W.; Wei, S.W. Channel estimation for OFDM system with two training symbols aided and polynomial fitting. IEEE Trans. Commun. **2010**, 58, 733–736.
11. Aalo, V.A.; Piboongungon, T.; Iskander, C.-D. Bit-error rate of binary digital modulation schemes in generalized gamma fading channels. IEEE Commun. Lett. **2005**, 9, 139–141.
12. Chan, P.T.H.; Palaniswami, M.; Everitt, D. Neural network-based dynamic channel assignment for cellular mobile communication systems. IEEE Trans. Veh. Technol. **1994**, 43, 279–288.
13. Li, Z.; Zhao, X. BP artificial neural network based wave front correction for sensor-less free space optics communication. Opt. Commun. **2017**, 385, 219–228.
14. Abiodun, C.I.; Azi, S.O.; Ojo, J.S.; Akinyemi, P. Assessment of Path Loss Prediction Models for Wireless Propagation Channels at L-Band Frequency over Different Micro-Cellular Environments of Ekiti State, Southwestern Nigeria. AEU-Int. J. Electron. Commun. **2017**, 11, 1103–1109.
15. O'Shea, T.; Hoydis, J. An Introduction to Deep Learning for the Physical Layer. IEEE Trans. Cogn. Commun. Netw. **2017**, 3, 563–575.
16. Nachmani, E.; Marciano, E.; Lugosch, L.; Gross, W.J.; Burshtein, D.; Be'ery, Y. Deep learning methods for improved decoding of linear codes. IEEE J. Sel. Top. Signal Process. **2018**, 12, 119–131.
17. Liang, F.; Shen, C.; Wu, F. An iterative BP-CNN architecture for channel decoding. IEEE J. Sel. Top. Signal Process. **2018**, 12, 144–159.
18. Ye, H.; Li, G.Y.; Juang, B.H. Power of deep learning for channel estimation and signal detection in OFDM systems. IEEE Wirel. Commun. Lett. **2018**, 7, 114–117.
19. Neumann, D.; Wiese, T.; Utschick, W. Learning the MMSE channel estimator. IEEE Trans. Signal Process. **2018**, 11, 2905–2917.
20. Wang, T.; Wen, C.K.; Wang, H.; Gao, F.; Jiang, T.; Jin, S. Deep learning for wireless physical layer: Opportunities and challenges. China Commun. **2017**, 14, 92–111.
21. Dorner, S.; Cammerer, S.; Hoydis, J.; ten Brink, S. Deep Learning Based Communication Over the Air. IEEE J. Sel. Top. Signal Process. **2018**, 12, 132–143.
22. Sulyman, A.I.; Alwarafy, A.; MacCartney, G.R.; Rappaport, T.S.; Alsanie, A. Directional Radio Propagation Path Loss Models for Millimeter-Wave Wireless Networks in the 28-, 60-, and 73-GHz Bands. IEEE Trans. Wirel. Commun. **2016**, 15, 6939–6947.
23. MacCartney, G.R.; Rappaport, T.S. Rural Macrocell Path Loss Models for Millimeter Wave Wireless Communications. IEEE J. Sel. Areas Commun. **2017**, 35, 1663–1677.
24. Liu, J.; Sheng, M.; Liu, L.; Li, J. Effect of Densification on Cellular Network Performance with Bounded Pathloss Model. IEEE Commun. Lett. **2017**, 21, 346–349.
25. Ostlin, E.; Zepernick, H.J.; Suzuki, H. Macrocell Path-Loss Prediction Using Artificial Neural Networks. IEEE Trans. Veh. Technol. **2010**, 59, 2735–2747.
26. Sasaki, M.; Inomata, M.; Yamada, W.; Kita, N.; Onizawa, T.; Nakatsugawa, M.; Kitao, K.; Imai, T. Path Loss Model Considering Blockage Effects of Traffic Signs up to 40 GHz in Urban Microcell Environments. IEICE Trans. Commun. **2018**, 101, 1891–1902.
27. Sun, S.; Rappaport, T.S.; Thomas, T.A.; Ghosh, A.; Nguyen, H.C.; Kovács, I.Z.; Rodriguez, I.; Koymen, O.; Partyka, A. Investigation of prediction accuracy, sensitivity, and parameter stability of large-scale propagation path loss models for 5G wireless communications. IEEE Trans. Veh. Technol. **2016**, 65, 2843–2860.
28. Sotiroudis, S.P.; Goudos, S.K.; Gotsis, K.A.; Siakavara, K.; Sahalos, J.N. Application of a composite differential evolution algorithm in optimal neural network design for propagation path-loss prediction in mobile communication systems. IEEE Antennas Wirel. Propag. Lett. **2013**, 12, 364–367.
29. Faruk, N.; Popoola, S.I.; Surajudeen-Bakinde, N.T.; Oloyede, A.A.; Abdulkarim, A.; Olawoyin, L.A.; Ali, M.; Calafate, C.T.; Atayero, A.A. Path Loss Predictions in the VHF and UHF Bands within Urban Environments: Experimental Investigation of Empirical, Heuristics and Geospatial Models. IEEE Access **2019**, 7, 77293–77307.
30. Ayadi, M.; Zineb, A.B.; Tabbane, S. A UHF path loss model using learning machine for heterogeneous networks. IEEE Trans. Antennas Propag. **2017**, 65, 3675–3683.
31. Oroza, C.A.; Zhang, Z.; Watteyne, T.; Glaser, S.D. A machine-learning-based connectivity model for complex terrain large-scale low-power wireless deployments. IEEE Trans. Cogn. Commun. Netw. **2017**, 3, 576–584.
32. Gital, A.Y.; Ismail, A.S.; Chiroma, H.; Muaz, S.A. Packet Drop Rate and Round Trip Time Analysis of TCP Congestion Control Algorithm in a Cloud Based Collaborative Virtual Environment. In Proceedings of the 2014 European Modelling Symposium, Pisa, Italy, 21–23 October 2014; pp. 442–447.
33. Liu, W.; Zhao, D.; Zhu, G. End-to-end delay and packet drop rate performance for a wireless sensor network with a cluster-tree topology. Wirel. Commun. Mob. Comput. **2014**, 14, 729–744.
34. Lin, T.L.; Shin, J.; Cosman, P. Packet dropping for widely varying bit reduction rates using a network-based packet loss visibility model. In Proceedings of the 2010 Data Compression Conference, Snowbird, UT, USA, 24–26 March 2010; pp. 445–454.
35. Khelage, P.B.; Kolekar, D.U. Survey and Simulation based Performance Analysis of TCP-Variants in terms of Throughput, Delay and drop Packets over MANETs. Int. J. Sci. Eng. Res. **2014**, 5, 775–786.
36. Esmaeilzadeh, M.; Aboutorab, N.; Sadeghi, P. Joint optimization of throughput and packet drop rate for delay sensitive applications in TDD satellite network coded systems. IEEE Trans. Commun. **2014**, 62, 676–690.
37. Padhy, B.P.; Srivastava, S.C.; Verma, N.K. A wide-area damping controller considering network input and output delays and packet drop. IEEE Trans. Power Syst. **2017**, 32, 166–176.
38. Quevedo, D.E.; Jurado, I. Stability of sequence-based control with random delays and dropouts. IEEE Trans. Autom. Control **2014**, 59, 1296–1302.
39. Quevedo, D.E.; Ostergaard, J.; Nesic, D. Packetized predictive control of stochastic systems over bit-rate limited channels with packet loss. IEEE Trans. Autom. Control **2011**, 56, 2854–2868.

**Figure 1.** Diagram of the dedicated short-range communications (DSRC) network sniffer and data connection.

**Figure 2.** DSRC on-board unit (OBU), GPS antenna, and additional micro-controller (Raspberry Pi) in the DSRC network vehicle experiments.

| Scenario | Maximum (dB) | Minimum (dB) | Median (dB) | Mean (dB) | Standard Deviation (dB) |
|---|---|---|---|---|---|
| Highways | 112 | 72 | 90 | 90.7923 | 7.9492 |
| Local Areas | 112 | 72 | 95 | 94.5932 | 6.7879 |
| Residential Areas | 110 | 58 | 91 | 90.3498 | 6.3856 |
| Rural Areas | 116 | 70 | 85 | 87.6001 | 7.8370 |

| Input Data | Hidden Layer | Output Layer |
|---|---|---|
| ${x}_{i}$: distance; ${y}_{i}$: path loss | ${H}_{j}=f({\displaystyle \sum _{i=1}^{{N}_{1}}{w}_{ij}{x}_{i}}+{b}_{j}^{H})$ | ${O}_{k}={\displaystyle \sum _{j=1}^{{N}_{2}}{w}_{jk}{H}_{j}}+{b}_{k}^{O}$ |

| Squared Error Function | Hidden Layer Weights | Output Layer Weights |
|---|---|---|
| $E=\frac{1}{2}{\{{\displaystyle \sum _{j=1}^{{N}_{2}}{\omega}_{jk}f({\displaystyle \sum _{i=1}^{{N}_{1}}{\omega}_{ij}}{x}_{i})}-{y}_{k}\}}^{2}$ | $\delta {w}_{ij}={\displaystyle \sum _{k=1}^{{N}_{3}}({O}_{k}-{y}_{k}){w}_{jk}(1-{H}_{j}^{2}){x}_{i}}$ | $\delta {w}_{jk}=({O}_{k}-{y}_{k}){H}_{j}$ |
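The forward pass and weight updates of the 1-24-1 network from Section 3.2 ($tanh$ hidden layer, linear output, $\eta =0.05$, 60 epochs) can be exercised end-to-end with a small NumPy sketch. The distance/path-loss data here are synthetic placeholders, not the measured dataset, and the random initialization scale is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2, N3 = 1, 24, 1          # nodes per layer, as in step (iii)
eta, epochs = 0.05, 60

# synthetic, normalized distance -> path-loss-like target (illustrative only)
d = np.linspace(5.0, 199.0, 100)
x = ((d - d.mean()) / d.std()).reshape(-1, 1)          # inputs x_i
y = np.log10(d).reshape(-1, 1)
y = (y - y.mean()) / y.std()                           # targets y_k

w_ij = rng.normal(0.0, 0.1, (N1, N2)); b_h = np.zeros(N2)
w_jk = rng.normal(0.0, 0.1, (N2, N3)); b_o = np.zeros(N3)

losses = []
for _ in range(epochs):
    H = np.tanh(x @ w_ij + b_h)        # H_j = f(sum_i w_ij x_i + b_j)
    O = H @ w_jk + b_o                 # O_k = sum_j w_jk H_j + b_k
    err = O - y                        # (O_k - y_k)
    losses.append(float(np.mean(err ** 2)))
    g_jk = H.T @ err / len(x)          # delta w_jk = (O_k - y_k) H_j
    dH = (err @ w_jk.T) * (1.0 - H ** 2)
    g_ij = x.T @ dH / len(x)           # delta w_ij uses w_jk and tanh' = 1 - H_j^2
    w_jk -= eta * g_jk; b_o -= eta * err.mean(axis=0)
    w_ij -= eta * g_ij; b_h -= eta * dH.mean(axis=0)
```

Plain batch gradient descent is used here; the paper's `learngdm` adds a momentum term on top of the same gradients.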

| Distance (m) | Maximum (dB) | Minimum (dB) | Mean (dB) | Standard Deviation (dB) |
|---|---|---|---|---|
| 10 | 97 | 58 | 80.1250 | 8.0274 |
| 20 | 99 | 74 | 86.2434 | 4.0042 |
| 30 | 106 | 76 | 89.6890 | 5.4820 |
| 40 | 104 | 79 | 91.4143 | 4.7768 |
| 50 | 109 | 84 | 95.2534 | 4.2361 |
| 60 | 102 | 89 | 94.7170 | 3.4828 |
| 70 | 110 | 90 | 96.1111 | 6.4118 |
| 80 | 105 | 92 | 100.0000 | 5.5678 |

| Distance (m) | Maximum (dB) | Minimum (dB) | Mean (dB) | Standard Deviation (dB) |
|---|---|---|---|---|
| 10 | 95 | 72 | 87.3768 | 4.4295 |
| 20 | 106 | 79 | 88.2857 | 4.6142 |
| 30 | 102 | 81 | 88.8382 | 4.3174 |
| 40 | 106 | 80 | 93.0845 | 5.5978 |
| 50 | 104 | 82 | 95.8140 | 4.4468 |
| 60 | 102 | 86 | 95.7778 | 3.5261 |
| 70 | 106 | 85 | 97.6714 | 3.8475 |
| 80 | 111 | 82 | 96.0159 | 5.1334 |
| 90 | 111 | 89 | 99.7500 | 6.6114 |
| 100 | 109 | 89 | 100.6410 | 5.8915 |
| 110 | 112 | 91 | 101.0256 | 4.7099 |
| 120 | 109 | 91 | 101.0882 | 4.1514 |
| 130 | 105 | 91 | 98.0000 | 3.8730 |
| 140 | 111 | 90 | 98.3947 | 4.6646 |
| 150 | 109 | 90 | 102.0000 | 4.3028 |

| Distance (m) | Maximum (dB) | Minimum (dB) | Mean (dB) | Standard Deviation (dB) |
|---|---|---|---|---|
| 10 | 96 | 70 | 83.0457 | 3.8613 |
| 20 | 100 | 76 | 86.2235 | 4.5544 |
| 30 | 100 | 78 | 88.1616 | 4.2047 |
| 40 | 104 | 81 | 89.4088 | 4.7390 |
| 50 | 100 | 84 | 92.9333 | 3.8857 |
| 60 | 100 | 85 | 91.4545 | 4.7616 |
| 70 | 108 | 85 | 95.2857 | 8.2606 |
| 80 | 107 | 84 | 96.4545 | 7.1044 |
| 90 | 113 | 85 | 104.1562 | 6.6338 |
| 100 | 111 | 85 | 101.8293 | 5.9745 |
| 110 | 110 | 92 | 104.3333 | 4.8866 |
| 120 | 116 | 85 | 101.3846 | 9.4122 |
| 130 | 111 | 92 | 103.0000 | 5.1547 |
| 140 | 115 | 97 | 107.3333 | 5.0513 |
| 150 | 114 | 93 | 105.2500 | 6.8772 |
| 160 | 108 | 95 | 101.0000 | 6.5574 |
| 170 | 105 | 98 | 102.2500 | 3.0957 |
| 180 | 110 | 99 | 105.0000 | 4.6904 |
| 190 | 116 | 102 | 109.5000 | 4.5056 |
| 200 | 113 | 104 | 107.0000 | 3.7417 |

| Distance (m) | Maximum (dB) | Minimum (dB) | Mean (dB) | Standard Deviation (dB) |
|---|---|---|---|---|
| 10 | 96 | 74 | 84.9412 | 5.1898 |
| 20 | 94 | 72 | 82.9568 | 3.4555 |
| 30 | 97 | 76 | 84.9444 | 3.4816 |
| 40 | 98 | 76 | 87.8354 | 5.0656 |
| 50 | 104 | 80 | 91.2535 | 5.3068 |
| 60 | 102 | 80 | 94.5312 | 4.2151 |
| 70 | 106 | 80 | 95.9216 | 4.4670 |
| 80 | 111 | 82 | 97.2157 | 4.6173 |
| 90 | 111 | 81 | 99.8387 | 7.5012 |
| 100 | 109 | 78 | 101.1613 | 7.3986 |
| 110 | 112 | 80 | 100.4474 | 6.2545 |
| 120 | 109 | 81 | 100.3143 | 5.7944 |
| 130 | 105 | 79 | 96.9756 | 5.2321 |
| 140 | 111 | 82 | 97.8537 | 5.1940 |
| 150 | 109 | 90 | 101.6757 | 4.6789 |

| PDR | Slot 1 | Slot 2 | Slot 3 | Slot 4 | Slot 5 | Slot 6 | Slot 7 | Slot 8 | Slot 9 | Slot 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.1010 | 0.5141 | 0.7770 | 0.1493 | 0.4073 | 0.1006 | 0.0272 | 0.1346 | 0.0031 | 0.0027 |
| 0.05 | 0.0091 | 0.1227 | 0.0306 | 0.0486 | 0.1841 | 0.4865 | 0.6315 | 0.2689 | 0.0258 | 0.0689 |
| 0.1 | 0.6214 | 0.1364 | 0.0469 | 0.6910 | 0.1469 | 0.0142 | 0.1043 | 0.0094 | 0.0448 | 0.0784 |
| 0.15 | 0.0802 | 0.0544 | 0.0372 | 0.0522 | 0.0814 | 0.1027 | 0.0756 | 0.2819 | 0.0890 | 0.0539 |
| 0.2 | 0.0066 | 0.0091 | 0.0085 | 0.0049 | 0.1055 | 0.1333 | 0.0397 | 0.0154 | 0.0794 | 0.0280 |
| 0.25 | 0.1133 | 0.0036 | 0.0013 | 0.0135 | 0.0253 | 0.0751 | 0.0161 | 0.0187 | 0.0942 | 0.0258 |
| 0.3 | 0.0335 | 0.1364 | 0.0840 | 0.0089 | 0.0085 | 0.0065 | 0.0232 | 0.0237 | 0.0009 | 0.0022 |
| 0.35 | 0.0076 | 0.0021 | 0.0018 | 0.0045 | 0.0035 | 0.0145 | 0.0044 | 0.1576 | 0.0084 | 0.0102 |
| 0.4 | 0.0043 | 0.0102 | 0.0054 | 0.0010 | 0.0123 | 0.0309 | 0.0243 | 0.0731 | 0.0602 | 0.0682 |
| 0.45 | 0.0008 | 0.0084 | 0.0064 | 0.0013 | 0.0013 | 0.0005 | 0.0077 | 0.0018 | 0.0002 | 0.0013 |
| 0.5 | 0.0159 | 0.0004 | 0.0001 | 0.0034 | 0.0042 | 0.0063 | 0.0113 | 0.0022 | 0.4935 | 0.5798 |
| 0.55 | 0.0011 | 0.0004 | 0.0002 | 0.0009 | 0.0121 | 0.0191 | 0.0249 | 0.0020 | 0.0940 | 0.0614 |
| 0.6 | 0.0052 | 0.0019 | 0.0005 | 0.0204 | 0.0077 | 0.0100 | 0.0100 | 0.0107 | 0.0065 | 0.0191 |

| PDR | Slot 1 | Slot 2 | Slot 3 | Slot 4 | Slot 5 | Slot 6 | Slot 7 | Slot 8 | Slot 9 | Slot 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.4128 | 0.1069 | 0.0260 | 0.0297 | 0.9445 | 0.0196 | 0.6496 | 0.7081 | 0.7426 | 0.6915 |
| 0.05 | 0.1707 | 0.1433 | 0.0606 | 0.0776 | 0.0081 | 0.6812 | 0.2323 | 0.1752 | 0.1502 | 0.1875 |
| 0.1 | 0.0599 | 0.0090 | 0.7560 | 0.8014 | 0.0061 | 0.0258 | 0.0444 | 0.0248 | 0.0285 | 0.0190 |
| 0.15 | 0.1215 | 0.5091 | 0.0646 | 0.0125 | 0.0221 | 0.0615 | 0.0377 | 0.0593 | 0.0505 | 0.0532 |
| 0.2 | 0.0942 | 0.0042 | 0.0094 | 0.0092 | 0.0031 | 0.0233 | 0.0062 | 0.0051 | 0.0057 | 0.0050 |
| 0.25 | 0.0099 | 0.0031 | 0.0057 | 0.0044 | 0.0036 | 0.0461 | 0.0016 | 0.0011 | 0.0013 | 0.0009 |
| 0.3 | 0.0453 | 0.0406 | 0.0064 | 0.0216 | 0.0096 | 0.0566 | 0.0183 | 0.0150 | 0.0113 | 0.0271 |
| 0.35 | 0.0038 | 0.0183 | 0.0077 | 0.0038 | 0.0012 | 0.0352 | 0.0008 | 0.0014 | 0.0012 | 0.0012 |
| 0.4 | 0.0573 | 0.1572 | 0.0100 | 0.0031 | 0.0008 | 0.0119 | 0.0026 | 0.0038 | 0.0041 | 0.0074 |
| 0.45 | 0.0079 | 0.0040 | 0.0026 | 0.0129 | 0.0004 | 0.0113 | 0.0040 | 0.0041 | 0.0027 | 0.0063 |
| 0.5 | 0.0045 | 0.0011 | 0.0265 | 0.0058 | 0.0000 | 0.0046 | 0.0001 | 0.0001 | 0.0001 | 0.0001 |
| 0.55 | 0.0099 | 0.0013 | 0.0121 | 0.0008 | 0.0001 | 0.0056 | 0.0007 | 0.0007 | 0.0008 | 0.0004 |
| 0.6 | 0.0023 | 0.0019 | 0.0123 | 0.0173 | 0.0005 | 0.0173 | 0.0016 | 0.0013 | 0.0011 | 0.0005 |

| PDR | Slot 1 | Slot 2 | Slot 3 | Slot 4 | Slot 5 | Slot 6 | Slot 7 | Slot 8 | Slot 9 | Slot 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0.8990 | 0.3699 | 0.6171 | 0.0535 | 0.1934 | 0.0291 | 0.1063 | 0.1026 | 0.0574 | 0.0573 |
| 0.05 | 0.0325 | 0.1297 | 0.0892 | 0.1119 | 0.4130 | 0.7108 | 0.0794 | 0.0963 | 0.4116 | 0.2640 |
| 0.1 | 0.0207 | 0.0057 | 0.0294 | 0.6808 | 0.0185 | 0.0994 | 0.0338 | 0.0294 | 0.0202 | 0.0282 |
| 0.15 | 0.0283 | 0.3187 | 0.1835 | 0.0535 | 0.1708 | 0.0192 | 0.1682 | 0.0929 | 0.0987 | 0.1590 |
| 0.2 | 0.0076 | 0.0153 | 0.0198 | 0.0157 | 0.0994 | 0.0257 | 0.0026 | 0.0602 | 0.2247 | 0.1464 |
| 0.25 | 0.0024 | 0.0060 | 0.0088 | 0.0153 | 0.0154 | 0.0016 | 0.0321 | 0.5370 | 0.0301 | 0.0203 |
| 0.3 | 0.0043 | 0.0506 | 0.0180 | 0.0321 | 0.0113 | 0.0301 | 0.5369 | 0.0097 | 0.0161 | 0.0086 |
| 0.35 | 0.0017 | 0.0195 | 0.0104 | 0.0033 | 0.0174 | 0.0198 | 0.0111 | 0.0111 | 0.0119 | 0.0581 |
| 0.4 | 0.0011 | 0.0765 | 0.0168 | 0.0103 | 0.0421 | 0.0247 | 0.0144 | 0.0105 | 0.0783 | 0.1462 |
| 0.45 | 0.0006 | 0.0020 | 0.0012 | 0.0090 | 0.0011 | 0.0130 | 0.0089 | 0.0006 | 0.0065 | 0.0019 |
| 0.5 | 0.0001 | 0.0012 | 0.0007 | 0.0068 | 0.0018 | 0.0027 | 0.0015 | 0.0239 | 0.0136 | 0.0525 |
| 0.55 | 0.0005 | 0.0025 | 0.0017 | 0.0028 | 0.0105 | 0.0018 | 0.0011 | 0.0173 | 0.0238 | 0.0316 |
| 0.6 | 0.0013 | 0.0023 | 0.0034 | 0.0051 | 0.0052 | 0.0221 | 0.0037 | 0.0085 | 0.0069 | 0.0259 |

| | NN Model | Statistic Model | Improvement |
|---|---|---|---|
| ${n}_{1opt}$ | 140 | 100 | |
| ${n}_{2opt}$ | 20 | 22 | |
| ${n}_{3opt}$ | 10 | 11 | |
| $RMS{E}_{train}$ | 0.0881 | 0.1050 | 16.10% |
| $RMS{E}_{val}$ | 0.0722 | 0.0884 | 18.33% |
| $RMS{E}_{test}$ | 0.0814 | 0.1084 | 24.91% |
| $RMS{E}_{all}$ | 0.0842 | 0.1036 | 18.73% |
| $AC{C}_{train}$ | 0.7067 | 0.3634 | 94.47% |
| $AC{C}_{val}$ | 0.6915 | 0.4186 | 65.19% |
| $AC{C}_{test}$ | 0.6943 | 0.3916 | 77.30% |
| $AC{C}_{all}$ | 0.7013 | 0.3787 | 85.19% |
| $\alpha $ | 13925 | 3288 | |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhang, T.; Liu, S.; Xiang, W.; Xu, L.; Qin, K.; Yan, X. A Real-Time Channel Prediction Model Based on Neural Networks for Dedicated Short-Range Communications. *Sensors* **2019**, *19*, 3541.
https://doi.org/10.3390/s19163541
