## Author Contributions

Conceptualization, E.C. and L.R.; investigation, E.C., L.R., A.T.S. and E.G.; writing—review and editing, E.C., L.R., A.T.S. and E.G. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Borshchev, A. The Big Book of Simulation Modeling: Multimethod Modeling with AnyLogic 6; AnyLogic North America: Chicago, IL, USA, 2013.
- Fujimoto, R. Parallel and Distributed Simulation, 1st ed.; John Wiley & Sons: New York, NY, USA, 2000.
- Page, E.; Buss, A.; Fishwick, P.; Healy, K.; Nance, R.; Paul, R. Web-based Simulation: Revolution or Evolution? ACM Trans. Modeling Comput. Simul. (TOMACS) **2000**, 10, 3–17.
- Fujimoto, R.; Malik, A.; Park, A. Parallel and distributed simulation in the cloud. SCS Modeling Simul. Mag. **2010**, 3, 1–10.
- Jávor, A.; Fur, A. Simulation on the Web with distributed models and intelligent agents. Simulation **2012**, 88, 1080–1092.
- Amoretti, M.; Zanichelli, F.; Conte, G. Efficient autonomic cloud computing using online discrete event simulation. J. Parallel Distrib. Comput. **2013**, 73, 767–776.
- Jafer, S.; Liu, Q.; Wainer, G. Synchronization methods in parallel and distributed discrete-event simulation. Simul. Model. Pract. Theory **2013**, 30, 54–73.
- Padilla, J.; Diallo, S.; Barraco, A.; Lynch, C.; Kavak, H. Cloud-based simulators: Making simulations accessible to non-experts and experts alike. In Proceedings of the 2014 Winter Simulation Conference, Savannah, GA, USA, 7–10 December 2014; pp. 3630–3639.
- Yoginath, S.; Perumalla, K. Efficient parallel discrete event simulation on cloud/virtual machine platforms. ACM Trans. Modeling Comput. Simul. (TOMACS) **2015**, 26, 1–26.
- Padilla, J.; Lynch, C.; Diallo, S.; Gore, R.; Barraco, A.; Kavak, H.; Jenkins, B. Using simulation games for teaching and learning discrete-event simulation. In Proceedings of the 2016 Winter Simulation Conference (WSC), Washington, DC, USA, 11–14 December 2016; pp. 3375–3384.
- Liu, D.; De Grande, R.; Boukerche, A. Towards the Design of an Interoperable Multi-cloud Distributed Simulation System. In Proceedings of the 2017 Spring Simulation Multi-Conference—Annual Simulation Symposium, Virginia Beach, VA, USA, 23–26 April 2017; pp. 1–12.
- Diallo, S.; Gore, R.; Padilla, J.; Kavak, H.; Lynch, C. Towards a World Wide Web of Simulation. J. Def. Modeling Simul. Appl. Methodol. Technol. **2017**, 14, 159–170.
- Shchur, L.; Shchur, L. Parallel Discrete Event Simulation as a Paradigm for Large Scale Modeling Experiments. In Proceedings of the XVII International Conference “Data Analytics and Management in Data Intensive Domains” (DAMDID/RCDL’2015), Obninsk, Russia, 13–16 October 2015.
- Tang, Y.; Perumalla, K.; Fujimoto, R.; Karimabadi, H.; Driscoll, J.; Omelchenko, Y. Optimistic parallel discrete event simulations of physical systems using reverse computation. In Proceedings of the Workshop on Principles of Advanced and Distributed Simulation (PADS’05), Monterey, CA, USA, 1–3 June 2005.
- Ziganurova, L.; Novotny, M.; Shchur, L. Model for the evolution of the time profile in optimistic parallel discrete event simulations. In Proceedings of the International Conference on Computer Simulation in Physics and Beyond, Moscow, Russia, 6–10 September 2015.
- Steinman, J. The WarpIV Simulation Kernel. In Proceedings of the Workshop on Principles of Advanced and Distributed Simulation (PADS 2005), Monterey, CA, USA, 1–3 June 2005.
- Steinman, J. Breathing Time Warp. In Proceedings of the 7th Workshop on Parallel and Distributed Simulation (PADS93), San Diego, CA, USA, 16–19 May 1993.
- Cortes, E.; Rabelo, L.; Lee, G. Using Deep Learning to Configure Parallel Distributed Discrete-Event Simulators. In Artificial Intelligence: Advances in Research and Applications, 1st ed.; Rabelo, L., Bhide, S., Gutierrez, E., Eds.; Nova Science Publishers: Hauppauge, NY, USA, 2018.
- Steinman, J. Discrete-Event Simulation and the Event Horizon. ACM SIGSIM Simul. Dig. **1994**, 24, 39–49.
- Steinman, J. Discrete-Event Simulation and the Event Horizon Part 2: Event List Management. ACM SIGSIM Simul. Dig. **1996**, 26, 170–178.
- Steinman, J.; Nicol, D.; Wilson, L.; Lee, C. Global Virtual Time and Distributed Synchronization. In Proceedings of the 1995 Parallel and Distributed Simulation Conference, Lake Placid, NY, USA, 14–16 June 1995.
- Hinton, G.; Salakhutdinov, R. Reducing the dimensionality of data with neural networks. Science **2006**, 313, 504–507.
- Yu, K.; Jia, L.; Chen, Y.; Xu, W. Deep learning: Yesterday, today, and tomorrow. J. Comput. Res. Dev. **2013**, 50, 1799–1804.
- Jiang, L.; Zhou, Z.; Leung, T.; Li, T.; Fei-Fei, L. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In Proceedings of the Thirty-Fifth International Conference on Machine Learning, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018.
- Hinton, G. A practical guide to training restricted Boltzmann machines. Momentum **2010**, 9, 926.
- Mohamed, A.; Sainath, T.; Dahl, G.; Ramabhadran, B.; Hinton, G.; Picheny, M. Deep belief networks using discriminative features for phone recognition. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011.
- Mohamed, A.; Dahl, G.; Hinton, G. Acoustic modeling using deep belief networks. IEEE Trans. Audio Speech Lang. Process. **2012**, 20, 14–22.
- Huang, W.; Song, G.; Hong, G. Deep architecture for traffic flow prediction: Deep belief networks with multitask learning. IEEE Trans. Intell. Transp. Syst. **2014**, 15, 2191–2201.
- Sarikaya, R.; Hinton, G.; Deoras, A. Application of deep belief networks for natural language understanding. IEEE/ACM Trans. Audio Speech Lang. Process. **2014**, 22, 778–784.
- Movahedi, F.; Coyle, J.; Sejdić, E. Deep belief networks for electroencephalography: A review of recent contributions and future outlooks. IEEE J. Biomed. Health Inform. **2018**, 22, 642–652.
- Hinton, G.; Osindero, S.; Yee-Whye, T. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. **2006**, 18, 1527–1554.
- Cho, K.; Ilin, A.; Raiko, T. Improved learning of Gaussian-Bernoulli restricted Boltzmann machines. In Artificial Neural Networks and Machine Learning—ICANN 2011; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6791, pp. 10–17.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE **1998**, 86, 2278–2324.
- LeCun, Y.; Cortes, C. The MNIST Database of Handwritten Digits. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 22 September 2020).
- Wu, M.; Chen, L. Image Recognition Based on Deep Learning. In Proceedings of the 2015 Chinese Automation Congress (CAC), Wuhan, China, 27–29 November 2015; pp. 542–546.
- Cortes, E.; Rabelo, L. An architecture for monitoring and anomaly detection for space systems. SAE Int. J. Aerosp. **2013**, 6, 81–86.
- Carothers, C.; Bauer, D.; Pearce, S. ROSS: A high-performance, low memory modular time warp system. J. Parallel Distrib. Comput. **2002**, 62, 1648–1669.
- Mubarak, M.; Carothers, C.; Ross, R.; Carns, P. Using massively parallel simulation for MPI collective communication modeling in extreme-scale networks. In Proceedings of the 2014 Winter Simulation Conference, Savannah, GA, USA, 7–10 December 2014; pp. 3107–3118.
- Steinman, J.; Lammers, C.; Valinski, M. A Proposed Open Cognitive Architecture Framework (OpenCAF). In Proceedings of the 2009 Winter Simulation Conference, Austin, TX, USA, 13–16 December 2009.
- Steinman, J.; Lammers, C.; Valinski, M.; Steinman, W. External Modeling Framework and the OpenUTF. Report of WarpIV Technologies. Available online: http://www.warpiv.com/Documents/Papers/EMF.pdf (accessed on 30 September 2020).
- Plauger, P.; Stepanov, A.; Lee, M.; Musser, D. The C++ Standard Template Library; Prentice-Hall PTR, Prentice-Hall Inc.: Upper Saddle River, NJ, USA, 2001.
- Shao, J.; Wang, Y. A new measure of software complexity based on cognitive weights. Can. J. Electr. Comput. Eng. **2003**, 28, 69–74.
- Misra, S. A Complexity Measure based on Cognitive Weights. Int. J. Theor. Appl. Comput. Sci. **2006**, 1, 1–10.
- Kent, E.; Hoops, S.; Mendes, P. Condor-COPASI: High-throughput computing for biochemical networks. BMC Syst. Biol. **2012**, 6, 91.
- Wang, Y.; Jung, Y.; Supinie, T.; Xue, M. A Hybrid MPI–OpenMP Parallel Algorithm and Performance Analysis for an Ensemble Square Root Filter Designed for Multiscale Observations. J. Atmos. Ocean. Technol. **2013**, 30, 1382–1397.
- Zhan, D.; Qian, J.; Cheng, Y. Balancing global and local search in parallel efficient global optimization algorithms. J. Glob. Optim. **2017**, 67, 873–892.
- Grandison, A.; Cavanagh, Y.; Lawrence, P.; Galea, E. Increasing the Simulation Performance of Large-Scale Evacuations Using Parallel Computing Techniques Based on Domain Decomposition. Fire Technol. **2017**, 53, 1399–1438.
- Rumelhart, D.; Hinton, G.; Williams, R. Learning representations by back-propagating errors. Nature **1986**, 323, 533–536.
- Wang, X.; Zhao, Y.; Pourpanah, F. Recent advances in deep learning. Int. J. Mach. Learn. Cybern. **2020**, 11, 747–750.
- Ren, K.; Zheng, T.; Qin, Z.; Liu, X. Adversarial Attacks and Defenses in Deep Learning. Engineering **2020**, 6, 346–360.

**Figure 1.**
Fixed time buckets allow events to be scheduled and processed asynchronously using the concept of a global lookahead.

**Figure 2.**
The implementation of rollback produced by straggler messages and antimessages in time warp (TW).

**Figure 3.**
The event horizon for a single node and the insertion of events on the list.

**Figure 4.**
Example of the breathing time warp (BTW) event-processing cycle with a TW phase, a breathing time buckets (BTB) phase, computing of global virtual time (GVT), and the corresponding commitment of events in five nodes.

**Figure 5.**
Example of handwritten digits from the MNIST handwritten digits database.

**Figure 6.**
Detection by comparison of signals using nominal patterns as the basis to contrast with off-nominal patterns.

**Figure 7.**
Simulation scenario (case study) using two classes of simulation objects (SOs), radars and aircraft, with their respective events and trajectories.

**Figure 8.**
Unified modeling language (UML) schematics of the development with two types of simulation objects (Aircraft and Radar) and two events (i.e., Scan and TestUpdateAttribute). (The symbol * denotes a “many” multiplicity.)

**Figure 9.**
Example of a theater of operations as defined by the rectangle with vertices (A–D).

**Figure 10.**
Different methods in the C programming language adapted to WarpIV to program the case study of Figure 7.

**Figure 11.**
Examples of node configurations with cores and distributed computing elements for the experiments.

**Figure 12.**
Speedup chart for different time and synchronization schemes (BTW, BTB, and TW) for the distributed configurations. It is essential to observe the differences in performance due to the configuration and the time and synchronization scheme for the case study—this graph will be different for other performance measures.

**Figure 13.**
Calculation of the cognitive weights for a program.

**Figure 14.**
Root mean square error and cross-entropy error—training curve for the DBN developed with 21 inputs, 50 neurons in the first hidden layer, 50 neurons in the second hidden layer, 50 neurons in the third hidden layer, and 3 output neurons.

**Figure 15.**
The testing performance of the DBNs built using 21 inputs, 50 neurons in the first hidden layer, 50 neurons in the second hidden layer, 50 neurons in the third hidden layer, and 3 output neurons.

**Table 1.**
Datasets of different shuttle flights (telemetry data from the three main engines) for training.

| Flight Number | Flight Date | Start GMT | End GMT | TCID |
|---|---|---|---|---|
| 133 | 24 February 2011 | 124700 | 150000 | SA133B |
| 132 | 14 May 2010 | 090000 | 111000 | SA132B |
| 131 | 4 May 2010 | 010627 | 045800 | SA131A |
| 128 | 28 August 2009 | 185712 | 210000 | SA128B |
| 126 | 14 October 2008 | 154210 | 190000 | SA126A |

Space Shuttle Main Engine main fuel valve telemetry retrieved—Engine 3: E41T3153A1, E41T3154A1 (β(1), β(2)); Engine 2: E41T2153A1, E41T2154A1 (β(3), β(4)); Engine 1: E41T1153A1, E41T1154A1 (β(5), β(6)).

**Table 2.**
Deep belief network (DBN) architecture and elements of the neurodynamics were built for the case study of the space shuttle.

| Learning Rate | Hidden Layer 1 Neurons | Hidden Layer 2 Neurons | Hidden Layer 3 Neurons | Output Neurons | RBM Mini-Batch Size | RBM Epochs | DBN Mini-Batch Size | DBN Epochs | Weight Cost | Momentum |
|---|---|---|---|---|---|---|---|---|---|---|
| 10^{−6} | 30 | 20 | 10 | 1 | 50 | 50 | 50 | 1 | 0.01 | 0.5 |

**Table 3.**
Example of simulation runs (variations) with different configurations and their respective wall-clock times for the case study depicted in Figure 7, Figure 8, Figure 9 and Figure 10.

| Scheme | Local Nodes | Global Nodes | Wall-Clock Time (s) | Speedup Rel | Speedup Theoretical | Server |
|---|---|---|---|---|---|---|
| BTW | 1 | 1 | 16.5 | 1 | 3 | PC1 |
| | 1 | 2 | 14.1 | 1.2 | 3 | PC1 |
| | 1 | 3 | 12.4 | 1.3 | 3 | PC1 |
| | 1 | 4 | 11.4 | 1.4 | 3 | PC1 |
| | 2 to 4 | 14 | 6.1 | 2.7 | 3 | PC1 |
| | 4 | 8 | 6.5 | 2.6 | 3 | PC1 |
| | 4 | 4 | 9.4 | 1.8 | 3 | |
| | 3 | 3 | 10.5 | 1.6 | 3 | |
| BTB | 1 | 1 | 16.1 | 1 | 3 | PC1 |
| | 1 | 2 | 62.1 | 0.3 | 3 | PC1 |
| | 1 | 3 | 148 | 0.1 | 3 | PC1 |
| | 1 | 4 | 162.6 | 0.1 | 3 | PC1 |
| | 2 to 4 | 14 | 7.7 | 2.1 | 3 | PC1 |
| | 4 | 8 | 6.2 | 2.6 | 3 | PC1 |
| | 4 | 4 | 9.4 | 1.7 | 3 | |
| | 3 | 3 | 10.2 | 1.6 | 3 | |
| TW | 1 | 1 | 17.2 | 1 | 3 | PC1 |
| | 1 | 2 | 13.8 | 1.2 | 3 | PC1 |
| | 1 | 3 | 12.6 | 1.4 | 3 | PC1 |
| | 1 | 4 | 10.9 | 1.6 | 3 | PC1 |
| | 2 to 4 | 14 | 5.9 | 2.9 | 3 | PC1 |
| | 4 | 8 | 6.2 | 2.8 | 3 | PC1 |
| | 4 | 4 | 10 | 1.7 | 3 | |
| | 3 | 3 | 11.4 | 1.5 | 3 | |
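The relative speedups in Table 3 are simply the single-node wall-clock time for a scheme divided by the wall-clock time of each configuration. A minimal sketch of that calculation (Python, used here purely for illustration; the paper's own code targets WarpIV in C/C++):

```python
# Single-node baseline wall-clock times (s) per synchronization scheme,
# taken from Table 3.
serial = {"BTW": 16.5, "BTB": 16.1, "TW": 17.2}

# Two of the fastest configurations from Table 3 (2-to-4 local nodes,
# 14 global nodes), wall-clock time in seconds.
runs = {"BTW": 6.1, "TW": 5.9}

for scheme, t in runs.items():
    speedup = serial[scheme] / t  # relative speedup vs. 1-node run
    print(f"{scheme}: {speedup:.1f}x")  # BTW: 2.7x, TW: 2.9x
```

These values reproduce the "Speedup Rel" column; the theoretical speedup of 3 is an upper bound the configurations approach but do not reach.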

**Table 4.**
Calculation of cognitive weights for the case study.

| File / Method | Cognitive Weights | |
|---|---|---|
| C_Radar.C | 3 | |
| C_Radar::Init() | 41 | |
| C_Radar::Terminate() | 13 | |
| C_Radar::DiscoverFo | 5 | |
| C_Radar::RemoveFo | 5 | |
| C_Radar::UpdateFoAttributes | 5 | |
| C_Radar::ReflectFoAttributes | 5 | |
| C_Radar::Scan() | 2547 | **← Event** |
| C_RandomMotion.C | 5 | |
| C_RandomMotion::Init | 83 | |
| C_RandomMotion::Terminate | 7 | |
| C_RandomMotion::TestUpdateAttribute | 144 | **← Event** |
| C_RandomMotion::RabeloCircle | 16 | |
| S_AirCraft.C | 0 | |
| S_AirCraft::Init() | 9 | |
| S_AirCraft::Terminate() | 7 | |
| S_GroundRadar.C | 0 | |
| S_GroundRadar::Init() | 0 | |
| S_GroundRadar::Terminate() | 7 | |
| Sim.c | | |
| main | 17 | |
| **Total Program Weights** | **2919** | |
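The total program weight in Table 4 is the sum of the individual file and method weights. A quick arithmetic check, as a Python sketch (the grouping mirrors the table rows; Python is used here only for illustration):

```python
# Cognitive weights from Table 4, grouped by source file.
weights = {
    "C_Radar.C": [3, 41, 13, 5, 5, 5, 5, 2547],   # Scan() dominates
    "C_RandomMotion.C": [5, 83, 7, 144, 16],
    "S_AirCraft.C": [0, 9, 7],
    "S_GroundRadar.C": [0, 0, 7],
    "Sim.c": [17],
}

total = sum(sum(methods) for methods in weights.values())
print(total)  # 2919, matching the table's total program weight
```

Note that the two event methods, C_Radar::Scan() (2547) and C_RandomMotion::TestUpdateAttribute (144), account for over 90% of the total complexity.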

**Table 5.**
Example of a vector that defines the parallel distributed discrete-event simulation (PDDES) implementation for the aircraft detection model of Figure 7 with 4 global nodes and 1 local node, using block as the distribution policy, with TW giving the best performance (best wall-clock time).

| **Inputs (21 Input Neurons)** | Value |
|---|---|
| Total Simulation Program Cognitive Weights | 2919 |
| Number of Sim Objects | 6 |
| Types of Sim Objects | 3 |
| Mean Events per Object | 1 |
| STD Events per Simulation Object | 0 |
| Mean Cog Weights of All Objects | 1345 |
| STD Cog Weights of All Objects | 1317 |
| Number of Global Nodes | 4 |
| Mean Local Nodes per Computer | 1 |
| STD Local Nodes per Computer | 0 |
| Mean Number of Cores | 1 |
| STD Number of Cores | 0 |
| Mean Processor Speed | 2.1 |
| STD Processor Speed | 0.5 |
| Mean RAM | 6.5 |
| STD RAM | 1.9 |
| Critical Path % | 0.32 |
| Theoretical Speedup | 3 |
| Local Events/(Local Events + External Events) | 1 |
| Subscribers/(Publishers + Subscribers) | 0.5 |
| Block or Scatter? | 1 |
| **Outputs (3 Output Neurons)** | |
| BTB | 0 |
| BTW | 0 |
| TW | 1 |
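Table 5 can be read as a 21-element input vector plus a one-hot 3-neuron output, where the active output neuron names the recommended synchronization scheme. A minimal decoding sketch (Python, for illustration only; the value order follows the table rows):

```python
# The 21 input values from Table 5, in table-row order.
inputs = [2919, 6, 3, 1, 0, 1345, 1317, 4, 1, 0, 1, 0,
          2.1, 0.5, 6.5, 1.9, 0.32, 3, 1, 0.5, 1]
assert len(inputs) == 21  # one value per input neuron

# One-hot output: the DBN activates one neuron per time-synchronization
# scheme; the maximally active neuron is the recommendation.
outputs = {"BTB": 0, "BTW": 0, "TW": 1}
recommended = max(outputs, key=outputs.get)
print(recommended)  # TW
```

In practice the trained network produces real-valued activations rather than exact 0/1 outputs, so the argmax over the three output neurons selects the scheme.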

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).