Revising the Classic Computing Paradigm and Its Technological Implementations
Abstract
 if [timing relations of] vacuum_tubes
 then Classic_Paradigm;
 else Unsound;
1. Introduction
2. von Neumann’s Ideas
2.1. The “von Neumann Architecture”
2.2. The Model of Computing
 
 The input operand(s) need to be delivered to the processing element;
 The processing must be completely performed;
 The output operand(s) must be delivered to their destination.
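The three stages listed above can be made concrete in a minimal sketch; the function name and the phase durations below are illustrative placeholders (not values from the article), chosen only to make the two transfer stages visible alongside the processing stage:

```python
# Illustrative model of one operation in three stages; the unit phase
# times are hypothetical placeholders, not measurements from the article.

def perform_operation(operands, process,
                      t_deliver_input=1.0,    # stage 1: move operands to the processing element
                      t_processing=1.0,       # stage 2: the processing itself
                      t_deliver_output=1.0):  # stage 3: move the result to its destination
    """Return (result, total_time); total_time counts transfers as well as processing."""
    result = process(*operands)
    total_time = t_deliver_input + t_processing + t_deliver_output
    return result, total_time

result, total = perform_operation((2, 3), lambda a, b: a + b)
print(result, total)  # 5 3.0
```

With equal phase times, two thirds of the total time is spent on data transfer rather than processing — the kind of imbalance the later sections on transfer time examine.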
2.3. The Computer and the Brain
2.4. Timing Relations
2.5. The Synchronous Operating Mode
2.6. Dispersion of the Synchronization
3. Scrutinizing Dispersion
3.1. The Case of EDVAC
3.2. The Case of Integrated Circuits
3.3. The Case of Technology Blocks
3.4. The Need for Communication
3.5. Using New Physical Effect/Technology/Material in the Computing Chain
3.5.1. Quantum Computing
3.5.2. Biomorphic Architectures
3.5.3. Artificial Neural Networks
3.5.4. Using Memristors for Processing
3.5.5. Half-Length Operands vs. Double-Length Ones
3.5.6. The Role of Transfer Time
3.5.7. How the Presence of Transfer Time Was Covered
4. The Time–Space System
4.1. Considering the Transfer Time
4.2. Introducing the Time–Space System
4.3. Validating the Time–Space System
4.4. Scrutinizing the Temporal Behavior
4.5. Computing Efficiency as a Consequence of Temporal Behavior
5. Technical Solutions for the Vacuum-Tube Age
5.1. Method of Identifying Bottlenecks of Computing
5.2. Gate-Level Processing

5.3. Design Aspects
5.4. The Serial Bus
5.5. Distributed Processing
6. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
© 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Végh, J. Revising the Classic Computing Paradigm and Its Technological Implementations. Informatics 2021, 8, 71. https://doi.org/10.3390/informatics8040071