On Implementing Technomorph Biology for Inefficient Computing
Abstract
1. Introduction
2. Notions of Computation
2.1. The Fundamental Task
- input operand(s) need to be delivered to the processing element
- processing must be wholly performed
- output operand(s) must be delivered to their destination (see the timing sketch after this list)
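Taken together, these three stages bound the duration of any elementary operation from below: the two delivery stages cost time even when the processing itself is fast. The following minimal Python sketch (all latency values are illustrative assumptions, not measurements from the paper) makes the consequence explicit; it anticipates the payload-vs-theoretical efficiency discussion in Section 2.6.

```python
# Minimal timing sketch of the fundamental task: every elementary
# computation spends time delivering inputs, processing, and delivering
# outputs. All latencies below are assumed, illustrative values.

def elementary_operation_time(t_in: float, t_proc: float, t_out: float) -> float:
    """Total wall-clock time (s) of one elementary operation."""
    return t_in + t_proc + t_out

# When transfers dominate, the fraction of time spent on actual
# processing (the 'payload' share) shrinks accordingly.
t_total = elementary_operation_time(t_in=2e-9, t_proc=1e-9, t_out=2e-9)
payload_fraction = 1e-9 / t_total
print(f"total = {t_total:.1e} s, payload fraction = {payload_fraction:.0%}")
```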
2.2. Modeling Computation
2.3. Time Windows
2.4. Three-Stage Computing
2.5. Issues in Formulating Efficiency
2.6. Payload vs. Theoretical Efficiency
2.7. Instruction- and Data-Driven Modes
2.8. Connecting Elemental Units
2.9. Proper Sequencing
2.10. Looping Circuits
3. Technical Computing
3.1. Cost Function
3.1.1. Thermal Limit
3.1.2. Word Length
3.1.3. Wrong Execution Time
3.1.4. Central Clock Signal
3.1.5. Dispersion
3.1.6. Generating Square Waves
3.1.7. Resource Utilization
3.2. Hardware/Software Cooperation
3.2.1. Single-Thread View
3.2.2. Communication
3.2.3. Wiring
3.3. Structure vs. Architecture
3.3.1. Single-Processor Performance
3.3.2. Multi- and Many-Core Processors
3.3.3. Memory
3.3.4. Bus
3.4. Accelerating Computing
3.4.1. ‘Multiple Data’ Computing
3.4.2. Special-Purpose Accelerators
3.4.3. New Materials/Technologies for Data Storing
3.4.4. Using Memristors for Processing
3.4.5. Using Mixed-Length Operands
3.5. Mitigating Communication
4. Biological Computing
4.1. State Machine
4.2. Conceptual Operation
4.2.1. Stage ‘Computing’
4.2.2. Stage ‘Delivering’
4.2.3. Stage ‘Relaxing’
4.2.4. Synaptic Control
4.2.5. Operating Diagrams
4.2.6. Classic Stages
4.3. Electrical Description
4.3.1. Hodgkin–Huxley Model
4.3.2. Electrotonic Model
4.3.3. Physical Model
4.4. Timing Relations
5. Biological Learning vs. Machine Learning
5.1. Biological Learning
5.2. Machine Learning
5.3. Comparing Learnings and Intelligences
5.4. Imitating Neural Computations
5.4.1. Using Accelerators
5.4.2. Training ANNs
6. Tendency of Computing Performance
6.1. Energy Consumption
6.2. Computing Efficiency
7. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ngai, J. BRAIN @ 10: A decade of innovation. Neuron 2024, 112, 3003–3006. [Google Scholar] [CrossRef] [PubMed]
- Johnson, D.H. Information theory and neuroscience: Why is the intersection so small? In Proceedings of the 2008 IEEE Information Theory Workshop, Porto, Portugal, 5–9 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 104–108. [Google Scholar] [CrossRef]
- European Union. Human Brain Project. 2018. Available online: https://www.humanbrainproject.eu/en/ (accessed on 26 April 2025).
- Chu, D.; Prokopenko, M.; Ray, J.C. Computation by natural systems. Interface Focus 2018, 8, 20180058. [Google Scholar] [CrossRef]
- Almog, M.; Korngreen, A. Is realistic neuronal modeling realistic? J. Neurophysiol. 2016, 116, 2180–2209. [Google Scholar] [CrossRef]
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
- Végh, J.; Berki, Á.J. Towards generalizing the information theory for neural communication. Entropy 2022, 24, 1086. [Google Scholar] [CrossRef]
- Shannon, C.E. The Bandwagon. IRE Trans. Inf. Theory 1956, 2, 3. [Google Scholar] [CrossRef]
- Nizami, L. Information theory is abused in neuroscience. Cybern. Hum. Knowing 2019, 26, 47–97. [Google Scholar]
- Brette, R. Is coding a relevant metaphor for the brain? Behav. Brain Sci. 2018, 42, e215. [Google Scholar] [CrossRef]
- Schuman, C.D.; Potok, T.E.; Patton, R.M.; Birdwell, J.D.; Dean, M.E.; Rose, G.S.; Plank, J.S. A Survey of Neuromorphic Computing and Neural Networks in Hardware. 2017. Available online: https://arxiv.org/abs/1705.06963 (accessed on 10 September 2024).
- Kandel, E.R.; Schwartz, J.H.; Jessell, T.M.; Hudspeth, A.J. Principles of Neural Science, 5th ed.; McGraw-Hill: New York, NY, USA, 2013. [Google Scholar]
- Young, A.R.; Dean, M.E.; Plank, J.S.; Rose, G.S. A Review of Spiking Neuromorphic Hardware Communication Systems. IEEE Access 2019, 7, 135606–135620. [Google Scholar] [CrossRef]
- van Albada, S.J.; Rowley, A.G.; Senk, J.; Hopkins, M.; Schmidt, M.; Stokes, A.B.; Lester, D.R.; Diesmann, M.; Furber, S.B. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Front. Neurosci. 2018, 12, 291. [Google Scholar] [CrossRef]
- Moradi, S.; Manohar, R. The impact of on-chip communication on memory technologies for neuromorphic systems. J. Phys. D Appl. Phys. 2018, 52, 014003. [Google Scholar] [CrossRef]
- Carbone, J.N.; Crowder, J.A. The great migration: Information content to knowledge using cognition based frameworks. In Biomedical Engineering: Health Care Systems, Technology and Techniques; Suh, S.C., Gurupur, V.P., Tanik, M.M., Eds.; Springer: New York, NY, USA, 2011; pp. 17–46. [Google Scholar]
- Naddaf, M. Europe spent €600 million to recreate the human brain in a computer. How did it go? Nature 2023, 620, 718–720. [Google Scholar] [CrossRef]
- Végh, J. von Neumann’s missing “Second Draft”: What it should contain. In Proceedings of the 2020 International Conference on Computational Science and Computational Intelligence (CSCI’20), Las Vegas, NV, USA, 16–18 December 2020; IEEE Computer Society: Piscataway, NJ, USA, 2020; pp. 1260–1264. [Google Scholar] [CrossRef]
- Backus, J. Can Programming Languages Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs. Commun. ACM 1978, 21, 613–641. [Google Scholar] [CrossRef]
- Végh, J. Why does von Neumann obstruct deep learning? In Proceedings of the 2023 IEEE 23rd International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 20–22 November 2023; pp. 000165–000170. [Google Scholar] [CrossRef]
- Végh, J. Revising the Classic Computing Paradigm and Its Technological Implementations. Informatics 2021, 8, 71. [Google Scholar] [CrossRef]
- Waser, R. (Ed.) Advanced Electronic Materials and Novel Devices; Nanoelectronics and Information Technology; Wiley-VCH: Hoboken, NJ, USA, 2012. [Google Scholar]
- Buzsáki, G. Neural syntax: Cell assemblies, synapsembles, and readers. Neuron 2010, 68, 362–385. [Google Scholar] [CrossRef]
- Buzsáki, G.; Mizuseki, K. The log-dynamic brain: How skewed distributions affect network operations. Nat. Rev. Neurosci. 2014, 15, 264–278. [Google Scholar] [CrossRef]
- Caporale, N.; Dan, Y. Spike Timing–Dependent Plasticity: A Hebbian Learning Rule. Annu. Rev. Neurosci. 2008, 31, 25–46. [Google Scholar] [CrossRef] [PubMed]
- Madl, T.; Baars, B.J.; Franklin, S. The timing of the cognitive cycle. PLoS ONE 2011, 6, e14803. [Google Scholar] [CrossRef]
- Cai, M.; Demmans Epp, C. Exploring the Optimal Time Window for Predicting Cognitive Load Using Physiological Sensor Data. arXiv 2024, arXiv:2406.13793. [Google Scholar] [CrossRef]
- Lindner, B.; Garcia-Ojalvo, J.; Neiman, A.; Schimansky-Geier, L. Effects of noise in excitable systems. Phys. Rep. 2004, 392, 321–424. [Google Scholar] [CrossRef]
- Perkel, D.; Mulloney, B. Electrotonic properties of neurons: Steady-state compartmental model. J. Neurophysiol. 1978, 41, 621–639. [Google Scholar] [CrossRef] [PubMed]
- Smirnova, L.; Caffo, B.S.; Gracias, D.H.; Huang, Q.; Morales Pantoja, I.E.; Tang, B.; Zack, D.J.; Berlinicke, C.A.; Boyd, J.L.; Harris, T.D.; et al. Organoid intelligence (OI): The new frontier in biocomputing and intelligence-in-a-dish. Front. Sci. 2023, 1, 1017235. [Google Scholar] [CrossRef]
- Kunkel, S.; Schmidt, M.; Eppler, J.M.; Plesser, H.E.; Masumoto, G.; Igarashi, J.; Ishii, S.; Fukai, T.; Morrison, A.; Diesmann, M.; et al. Spiking network simulation code for petascale computers. Front. Neuroinform. 2014, 8, 78. [Google Scholar] [CrossRef]
- Végh, J. How Amdahl’s Law limits performance of large artificial neural networks. Brain Inform. 2019, 6, 4. [Google Scholar] [CrossRef]
- US DOE Office of Science. Report of a Roundtable Convened to Consider Neuromorphic Computing Basic Research Needs. 2015. Available online: https://science.osti.gov/-/media/ascr/pdf/programdocuments/docs/Neuromorphic-Computing-Report_FNLBLP.pdf (accessed on 26 April 2025).
- Markovic, D.; Mizrahi, A.; Querlioz, D.; Grollier, J. Physics for neuromorphic computing. Nat. Rev. Phys. 2020, 2, 499–510. [Google Scholar] [CrossRef]
- Mehonic, A.; Kenyon, A.J. Brain-inspired computing needs a master plan. Nature 2022, 604, 255–260. [Google Scholar] [CrossRef]
- von Neumann, J. The Computer and the Brain; Yale University Press: New Haven, CT, USA, 2012. [Google Scholar]
- Koch, C. Biophysics of Computation; Oxford University Press: New York, NY, USA, 1999. [Google Scholar]
- Abbott, L.; Sejnowski, T.J. Neural Codes and Distributed Representations; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
- Lytton, W.W. From Computer to Brain; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
- Sejnowski, T.J. The Computer and the Brain Revisited. IEEE Ann. Hist. Comput. 1989, 11, 197–201. [Google Scholar] [CrossRef]
- Levy, W.B.; Calvert, V.G. Communication consumes 35 times more energy than computation in the human cortex, but both costs are needed to predict synapse number. Proc. Natl. Acad. Sci. USA 2021, 118, e2008173118. [Google Scholar] [CrossRef]
- Végh, J.; Berki, Á.J. On the Role of Speed in Technological and Biological Information Transfer for Computations. Acta Biotheor. 2022, 70, 26. [Google Scholar] [CrossRef]
- Végh, J.; Berki, A.J. Revisiting neural information, computing and linking capacity. Math. Biosci. Eng. 2023, 20, 12380–12403. [Google Scholar] [CrossRef]
- Végh, J. Which scaling rule applies to Artificial Neural Networks. Neural Comput. Appl. 2021, 33, 16847–16864. [Google Scholar] [CrossRef]
- Tsafrir, D. The Context-switch Overhead Inflicted by Hardware Interrupts (and the Enigma of Do-nothing Loops). In Proceedings of the 2007 Workshop on Experimental Computer Science, ExpCS ’07, San Diego, CA, USA, 13–14 June 2007; p. 3. [Google Scholar]
- David, F.M.; Carlyle, J.C.; Campbell, R.H. Context Switch Overheads for Linux on ARM Platforms. In Proceedings of the 2007 Workshop on Experimental Computer Science, ExpCS ’07, San Diego, CA, USA, 13–14 June 2007. [Google Scholar] [CrossRef]
- nextplatform.com. CRAY Revamps Clusterstor for the Exascale Era. 2019. Available online: https://www.nextplatform.com/2019/10/30/cray-revamps-clusterstor-for-the-exascale-era/ (accessed on 10 September 2024).
- Kendall, J.D.; Kumar, S. The building blocks of a brain-inspired computer. Appl. Phys. Rev. 2020, 7, 011305. [Google Scholar] [CrossRef]
- Berger, T.; Levy, W.B. A Mathematical Theory of Energy Efficient Neural Computation and Communication. IEEE Trans. Inf. Theory 2010, 56, 852–874. [Google Scholar] [CrossRef]
- Schrödinger, E. Is life based on the laws of physics? In What is Life?: With Mind and Matter and Autobiographical Sketches; Cambridge University Press: Cambridge, UK, 1992; pp. 76–85. [Google Scholar]
- Végh, J. The non-ordinary laws of physics describing life. BioSystems 2025, in review. [Google Scholar] [CrossRef]
- Quirion, R. Brain Organoids: Are They for Real? 2023. Available online: https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1148127/full (accessed on 26 April 2025).
- von Neumann, J. First draft of a report on the EDVAC. IEEE Ann. Hist. Comput. 1993, 15, 27–75. [Google Scholar] [CrossRef]
- Bell, G.; Bailey, D.H.; Dongarra, J.; Karp, A.H.; Walsh, K. A look back on 30 years of the Gordon Bell Prize. Int. J. High Perform. Comput. 2017, 31, 469–484. [Google Scholar] [CrossRef]
- IEEE. IEEE Rebooting Computing. 2013. Available online: http://rebootingcomputing.ieee.org/ (accessed on 26 April 2025).
- Cadareanu, P.; Reddy C, N.; Almudever, C.G.; Khanna, A.; Raychowdhury, A.; Datta, S.; Bertels, K.; Narayanan, V.; Ventra, M.D.; Gaillardon, P.E. Rebooting Our Computing Models. In Proceedings of the 2019 Design, Automation Test in Europe Conference Exhibition (DATE), Florence, Italy, 25–29 March 2019; pp. 1469–1476. [Google Scholar] [CrossRef]
- Végh, J. Finally, how many efficiencies the supercomputers have? J. Supercomput. 2020, 76, 9430–9455. [Google Scholar] [CrossRef]
- Hameed, R.; Qadeer, W.; Wachs, M.; Azizi, O.; Solomatnikov, A.; Lee, B.C.; Richardson, S.; Kozyrakis, C.; Horowitz, M. Understanding Sources of Inefficiency in General-purpose Chips. In Proceedings of the 37th Annual International Symposium on Computer Architecture, ISCA ’10, Saint-Malo, France, 19–23 June 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 37–47. [Google Scholar] [CrossRef]
- Végh, J. Introducing the Explicitly Many-Processor Approach. Parallel Comput. 2018, 75, 28–40. [Google Scholar] [CrossRef]
- Végh, J. How to Extend Single-Processor Approach to Explicitly Many-Processor Approach. In Advances in Software Engineering, Education, and e-Learning; Arabnia, H.R., Deligiannidis, L., Tinetti, F.G., Tran, Q.N., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 435–458. [Google Scholar]
- Birkhoff, G.; von Neumann, J. The logic of quantum mechanics. Ann. Math. 1936, 37, 823–843. [Google Scholar]
- Cho, A. Tests measure progress of quantum computers. Science 2018, 364, 1218–1219. [Google Scholar] [CrossRef]
- Ruiz-Perez, L.; Garcia-Escartin, J.C. Quantum arithmetic with the quantum Fourier transform. Quantum Inf. Process. 2017, 16, 152. [Google Scholar] [CrossRef]
- Goychuk, I.; Hänggi, P.; Vega, J.L.; Miret-Artés, S. Non-Markovian stochastic resonance: Three-state model of ion channel gating. Phys. Rev. E 2005, 71, 061906. [Google Scholar] [CrossRef] [PubMed]
- Feynman, R.P. Feynman Lectures on Computation; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
- Asanovic, K.; Bodik, R.; Demmel, J.; Keaveny, T.; Keutzer, K.; Kubiatowicz, J.; Morgan, N.; Patterson, D.; Sen, K.; Wawrzynek, J.; et al. A View of the Parallel Computing Landscape. Comm. ACM 2009, 52, 56–67. [Google Scholar] [CrossRef]
- Esmaeilzadeh, H.; Blem, E.; St. Amant, R.; Sankaralingam, K.; Burger, D. Dark Silicon and the End of Multicore Scaling. IEEE Micro 2012, 32, 122–134. [Google Scholar] [CrossRef]
- Shafique, M.; Garg, S. Computing in the dark silicon era: Current trends and research challenges. IEEE Des. Test 2017, 34, 8–23. [Google Scholar] [CrossRef]
- Haghbayan, M.H.; Rahmani, A.M.; Liljeberg, P.; Jantsch, A.; Miele, A.; Bolchini, C.; Tenhunen, H. Can Dark Silicon Be Exploited to Prolong System Lifetime? IEEE Des. Test 2017, 34, 51–59. [Google Scholar] [CrossRef]
- Markov, I. Limits on fundamental limits to computation. Nature 2014, 512, 147–154. [Google Scholar] [CrossRef]
- Bourzac, K. Stretching supercomputers to the limit. Nature 2017, 551, 554–556. [Google Scholar] [CrossRef]
- Service, R.F. Design for U.S. exascale computer takes shape. Science 2018, 359, 617–618. [Google Scholar] [CrossRef]
- Furber, S.; Temple, S. Neural systems engineering. J. R. Soc. Interface 2007, 4, 193–206. [Google Scholar] [CrossRef]
- Wang, C.; Liang, S.J.; Wang, C.Y.; Yang, Z.Z.; Ge, Y.; Pan, C.; Shen, X.; Wei, W.; Zhao, Y.; Zhang, Z.; et al. Beyond von Neumann. Nat. Nanotechnol. 2020, 15, 507. [Google Scholar] [CrossRef]
- Eckert, J.P.; Mauchly, J.W. Automatic High-Speed Computing: A Progress Report on the EDVAC. Technical Report of Work under Contract No. W-670-ORD-4926, Supplement No 4; Moore School Library, University of Pennsylvania: Philadelphia, PA, USA, 1945. [Google Scholar]
- Schlansker, M.; Rau, B. EPIC: Explicitly Parallel Instruction Computing. Computer 2000, 33, 37–45. [Google Scholar] [CrossRef]
- Fuller, S.H.; Millett, L.I. Computing Performance: Game Over or Next Level? Computer 2011, 44, 31–38. [Google Scholar] [CrossRef]
- Ousterhout, J.K. Why Aren’t Operating Systems Getting Faster As Fast As Hardware? 1990. Available online: http://www.stanford.edu/~ouster/cgi-bin/papers/osfaster.pdf (accessed on 10 September 2024).
- Sha, L.; Rajkumar, R.; Lehoczky, J.P. Priority inheritance protocols: An approach to real-time synchronization. IEEE Trans. Comput. 1990, 39, 1175–1185. [Google Scholar] [CrossRef]
- Babaoglu, O.; Marzullo, K.; Schneider, F.B. A formalization of priority inversion. Real-Time Syst. 1993, 5, 285–303. [Google Scholar] [CrossRef]
- Amdahl, G.M. Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities. In Proceedings of the AFIPS Conference Proceedings, Atlantic City, NJ, USA, 18–20 April 1967; Volume 30, pp. 483–485. [Google Scholar] [CrossRef]
- ARM. big.LITTLE Technology. 2011. Available online: https://developer.arm.com/technologies/big-little (accessed on 26 April 2025).
- Ao, Y.; Yang, C.; Liu, F.; Yin, W.; Jiang, L.; Sun, Q. Performance Optimization of the HPCG Benchmark on the Sunway TaihuLight Supercomputer. ACM Trans. Archit. Code Optim. 2018, 15, 11:1–11:20. [Google Scholar] [CrossRef]
- Shepherd, G.M. (Ed.) The Synaptic Organization of the Brain, 5th ed.; Oxford Academic: New York, NY, USA, 2006; Available online: https://medicine.yale.edu/news/yale-medicine-magazine/article/the-synaptic-organization-of-the-brain-5th-ed/ (accessed on 26 April 2025).
- Singh, J.P.; Hennessy, J.L.; Gupta, A. Scaling Parallel Programs for Multiprocessors: Methodology and Examples. Computer 1993, 26, 42–50. [Google Scholar] [CrossRef]
- D’Angelo, G.; Rampone, S. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications. BMC Bioinform. 2014, 15, S2. [Google Scholar] [CrossRef]
- Keuper, J.; Pfreundt, F.J. Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability. In Proceedings of the 2nd Workshop on Machine Learning in HPC Environments (MLHPC), Salt Lake City, UT, USA, 14 November 2016; pp. 1469–1476. [Google Scholar] [CrossRef]
- Luccioni, A.S.; Viguier, S.; Ligozat, A.L. Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model. J. Mach. Learn. Res. 2023, 24, 1–15. [Google Scholar]
- Matheou, G.; Evripidou, P. Architectural Support for Data-Driven Execution. ACM Trans. Archit. Code Optim. 2015, 11, 52:1–52:25. [Google Scholar] [CrossRef]
- Denning, P.J.; Lewis, T. Exponential Laws of Computing Growth. Commun. Acm 2017, 60, 54–65. [Google Scholar] [CrossRef]
- Vetter, J.S.; DeBenedictis, E.P.; Conte, T.M. Architectures for the Post-Moore Era. IEEE Micro 2017, 37, 6–8. [Google Scholar] [CrossRef]
- Nature. In AI, is bigger always better? Nature 2023, 615. [Google Scholar] [CrossRef]
- Smith, B. Reinventing computing. In Proceedings of the International Supercomputing Conference, Seattle, WA, USA, 17–21 June 2007. [Google Scholar]
- Lee, V.W.; Kim, C.; Chhugani, J.; Deisher, M.; Kim, D.; Nguyen, A.D.; Satish, N.; Smelyanskiy, M.; Chennupaty, S.; Hammarlund, P.; et al. Debunking the 100X GPU vs. CPU Myth: An Evaluation of Throughput Computing on CPU and GPU. In Proceedings of the 37th Annual International Symposium on Computer Architecture, ISCA ’10, Saint-Malo, France, 19–23 June 2010; Association for Computing Machinery: New York, NY, USA, 2010; pp. 451–460. [Google Scholar] [CrossRef]
- cortical.io. Third AI Winter Ahead? Why OpenAI, Google et Co Are Heading Towards a Dead-End. 2022. Available online: https://www.cortical.io/blog/third-ai-winter-ahead-why-openai-google-co-are-heading-towards-a-dead-end/ (accessed on 26 April 2025).
- Antolini, A.; Lico, A.; Zavalloni, F.; Scarselli, E.F.; Gnudi, A.; Torres, M.L.; Canegallo, R.; Pasotti, M. A Readout Scheme for PCM-Based Analog In-Memory Computing With Drift Compensation Through Reference Conductance Tracking. IEEE Open J. Solid-State Circuits Soc. 2024, 4, 69–82. [Google Scholar] [CrossRef]
- de Macedo Mourelle, L.; Nedjah, N.; Pessanha, F.G. Interprocess Communication via Crossbar for Shared Memory Systems-on-chip. In Reconfigurable and Adaptive Computing: Theory and Applications; Chapter 5; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar] [CrossRef]
- Beggs, J.M.; Plenz, D. Neuronal Avalanches in Neocortical Circuits. J. Neurosci. 2003, 23, 11167–11177. [Google Scholar] [CrossRef]
- Végh, J. Introducing Temporal Behavior to Computing Science. In Advances in Software Engineering, Education, and e-Learning; Arabnia, H.R., Deligiannidis, L., Tinetti, F.G., Tran, Q.N., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 471–491. [Google Scholar]
- Végh, J. A configurable accelerator for manycores: The Explicitly Many-Processor Approach. arXiv 2016, arXiv:1607.01643. [Google Scholar]
- Mahlke, S.; Chen, W.; Chang, P.; Hwu, W.M. Scalar program performance on multiple-instruction-issue processors with a limited number of registers. In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, Kauai, HI, USA, 7–10 January 1992; Volume 1, pp. 34–44. [Google Scholar] [CrossRef]
- Kneip, A.; Lefebvre, M.; Verecken, J.; Bol, D. IMPACT: A 1-to-4b 813-TOPS/W 22-nm FD-SOI Compute-in-Memory CNN Accelerator Featuring a 4.2-POPS/W 146-TOPS/mm2 CIM-SRAM With Multi-Bit Analog Batch-Normalization. IEEE J. Solid-State Circuits 2023, 58, 1871–1884. [Google Scholar] [CrossRef]
- Chicca, E.; Indiveri, G. A recipe for creating ideal hybrid memristive-CMOS neuromorphic processing systems. Appl. Phys. Lett. 2020, 116, 120501. [Google Scholar] [CrossRef]
- Strukov, D.; Indiveri, G.; Grollier, J.; Fusi, S. Building brain-inspired computing. Nat. Commun. 2019, 10, 4838. [Google Scholar] [CrossRef]
- Wang, C.; Liang, S.J.; Wang, C.Y.; Yang, Z.Z.; Ge, Y.; Pan, C.; Shen, X.; Wei, W.; Zhao, Y.; Zhang, Z.; et al. Scalable massively parallel computing using continuous-time data representation in nanoscale crossbar array. Nat. Nanotechnol. 2021, 16, 1079–1085. [Google Scholar] [CrossRef]
- Pan, X.; Shi, J.; Wang, P.; Wang, S.; Pan, C.; Yu, W.; Cheng, B.; Liang, S.J.; Miao, F. Parallel perception of visual motion using light-tunable memory matrix. Sci. Adv. 2023, 9, eadi4083. [Google Scholar] [CrossRef]
- Wang, S.; Sun, Z. Dual in-memory computing of matrix-vector multiplication for accelerating neural networks. Device 2024, 2, 100546. [Google Scholar] [CrossRef]
- Han, S.; Pool, J.; Tran, J.; Dally, W.J. Learning both Weights and Connections for Efficient Neural Networks. 2015. Available online: https://arxiv.org/pdf/1506.02626.pdf (accessed on 26 April 2025).
- Bengio, E.; Bacon, P.L.; Pineau, J.; Precu, D. Conditional Computation in Neural Networks for Faster Models. 2016. Available online: https://arxiv.org/pdf/1511.06297 (accessed on 30 August 2024).
- Johnston, D.; Wu, S.M.-S. Foundations of Cellular Neurophysiology; MIT Press: Cambridge, MA, USA; London, UK, 1995. [Google Scholar]
- Somjen, G. Sensory Coding in the Mammalian Nervous System; Meredith Corporation: New York, NY, USA, 1972. [Google Scholar] [CrossRef]
- Susi, G.; Garcés, P.; Paracone, E.; Cristini, A.; Salerno, M.; Maestú, F.; Pereda, E. FNS allows efficient event-driven spiking neural network simulations based on a neuron model supporting spike latency. Sci. Rep. 2021, 11, 12160. [Google Scholar] [CrossRef] [PubMed]
- Tschanz, J.W.; Narendra, S.; Ye, Y.; Bloechel, B.; Borkar, S.; De, V. Dynamic sleep transistor and body bias for active leakage power control of microprocessors. IEEE J. Solid State Circuits 2003, 38, 1838–1845. [Google Scholar] [CrossRef]
- Onen, M.; Emond, N.; Wang, B.; Zhang, D.; Ross, F.M.; Li, J.; Yildiz, B.; del Alamo, J.A. Nanosecond protonic programmable resistors for analog deep learning. Science 2022, 377, 539–543. [Google Scholar] [CrossRef]
- Hodgkin, A.L.; Huxley, A.F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef]
- Losonczy, A.; Magee, J. Integrative properties of radial oblique dendrites in hippocampal CA1 pyramidal neurons. Neuron 2006, 50, 291–307. [Google Scholar] [CrossRef]
- Leterrier, C. The Axon Initial Segment: An Updated Viewpoint. J. Neurosci. 2018, 38, 2135–2145. [Google Scholar] [CrossRef]
- Goikolea-Vives, A.; Stolp, H. Connecting the Neurobiology of Developmental Brain Injury: Neuronal Arborisation as a Regulator of Dysfunction and Potential Therapeutic Target. Int. J. Mol. Sci. 2021, 22, 8220. [Google Scholar] [CrossRef]
- Hasegawa, K.; Kuwako, K. Molecular mechanisms regulating the spatial configuration of neurites. Semin. Cell Dev. Biol. 2022, 129, 103–114. [Google Scholar] [CrossRef]
- Végh, J. Dynamic Abstract Neural Computing with Electronic Simulation. 2025. Available online: https://jvegh.github.io/DynamicAbstractNeuralComputing/ (accessed on 6 February 2025).
- Forcella, D.; Zaanen, J.; Valentinis, D.; van der Marel, D. Electromagnetic properties of viscous charged fluids. Phys. Rev. B 2014, 90, 035143. [Google Scholar] [CrossRef]
- McKenna, T.; Davis, J.; Zornetzer, S. Single Neuron Computation; Neural Networks: Foundations to Applications; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
- Huang, C.Y.M.; Rasband, M.N. Axon initial segments: Structure, function, and disease. Ann. N. Y. Acad. Sci. 2018, 1420, 46–61. [Google Scholar] [CrossRef] [PubMed]
- Alonso, L.M.; Magnasco, M.O. Complex spatiotemporal behavior and coherent excitations in critically-coupled chains of neural circuits. Chaos Interdiscip. J. Nonlinear Sci. 2018, 28, 093102. [Google Scholar] [CrossRef]
- Li, M.; Tsien, J.Z. Neural Code-Neural Self-information Theory on How Cell-Assembly Code Rises from Spike Time and Neuronal Variability. Front. Cell. Neurosci. 2017, 11, 236. [Google Scholar] [CrossRef]
- D’Angelo, G.; Palmieri, F. Network traffic classification using deep convolutional recurrent autoencoder neural networks for spatial–temporal features extraction. J. Netw. Comput. Appl. 2021, 173, 102890. [Google Scholar] [CrossRef]
- TOP500. Top500 List of Supercomputers. 2025. Available online: https://www.top500.org/lists/top500/ (accessed on 24 October 2024).
- Aspray, W. John von Neumann and the Origins of Modern Computing; Cohen, B., Aspray, W., Eds.; MIT Press: Cambridge, MA, USA, 1990; pp. 34–48. [Google Scholar]
- Sterling, P.; Laughlin, S. Principles of Neural Design, 1st ed.; The MIT Press: Cambridge, MA, USA; London, UK, 2017. [Google Scholar]
- Antle, M.C.; Silver, R. Orchestrating time: Arrangements of the brain circadian clock. Trends Neurosci. 2005, 28, 145–151. [Google Scholar] [CrossRef]
- Végh, J.; Berki, Á.J. Storing and Processing Information in Technological and Biological Computing Systems. In Proceedings of the 2021 International Conference on Computational Science and Computational Intelligence; Foundations of Computer Science FCS, Las Vegas, NV, USA, 15–17 December 2021; Volume 21, p. FCS4378. [Google Scholar]
- Stone, J.V. Principles of Neural Information Theory; Sebtel Press: Sheffield, UK, 2018. [Google Scholar]
- McKenzie, S.; Huszár, R.; English, D.F.; Kim, K.; Yoon, E.; Buzsáki, G. Preexisting hippocampal network dynamics constrain optogenetically induced place fields. Neuron 2021, 109, 1040–1054.e7. [Google Scholar] [CrossRef]
- Jordan, M.I. Artificial Intelligence—The Revolution Hasn’t Happened Yet. 2019. Available online: https://hdsr.mitpress.mit.edu/pub/wot7mkc1/release/10 (accessed on 26 April 2025).
- Science. Core progress in AI has stalled in some fields. Science 2020, 368, 927. [Google Scholar] [CrossRef]
- Rouleau, N.; Levin, M. Discussions of machine versus living intelligence need more clarity. Nat. Mach. Intell. 2024, 6, 1424–1426. [Google Scholar] [CrossRef]
- Editorial. Seeking clarity rather than strong opinions on intelligence. Nat. Mach. Intell. 2024, 6, 1408. [Google Scholar] [CrossRef]
- Végh, J.; Berki, A.J. Why learning and machine learning are different. Adv. Artif. Intell. Mach. Learn. 2021, 1, 131–148. [Google Scholar] [CrossRef]
- Ho, R.; Horowitz, M. More About Wires and Wire Models. 2019. Available online: https://web.stanford.edu/class/archive/ee/ee371/ee371.1066/lectures/lect_09_1up.pdf (accessed on 26 April 2025).
- Black, C.D.; Donovan, J.; Bunton, B.; Keist, A. SystemC: From the Ground up, 2nd ed.; Springer: New York, NY, USA, 2010. [Google Scholar]
- IEEE/Accellera. Systems Initiative. 2017. Available online: http://www.accellera.org/downloads/standards/systemc (accessed on 26 April 2025).
- Mitra, P. Fitting elephants in modern machine learning by statistically consistent interpolation. Nat. Mach. Intell. 2021, 3, 378–386. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- axios.com. Artificial Intelligence Pioneer Says We Need to Start Over. Available online: https://www.axios.com/2017/12/15/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524 (accessed on 26 April 2025).
- Marcus, G. Deep Learning: A Critical Appraisal. Available online: https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf (accessed on 26 April 2025).
- Cremer, C.Z. Deep Limitations? Examining Expert Disagreement Over Deep Learning. Prog. Artif. Intell. 2021, 10, 449–464. [Google Scholar] [CrossRef]
- semiengineering.com. AI Power Consumption Exploding. 2022. Available online: https://semiengineering.com/ai-power-consumption-exploding/ (accessed on 10 September 2024).
- Xie, S.; Sun, C.; Huang, J.; Tu, Z.; Murphy, K. Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer: Cham, Switzerland, 2018; pp. 318–335. [Google Scholar]
- Xu, K.; Qin, M.; Sun, F.; Wang, Y.; Chen, Y.K.; Ren, F. Learning in the Frequency Domain. arXiv 2020, arXiv:2002.12416. [Google Scholar]
- Simon, H. Why We Need Exascale and Why We Won't Get There by 2020. 2014. Available online: https://www.researchgate.net/publication/261879110_Why_we_need_Exascale_and_why_we_won’t_get_there_by_2020 (accessed on 10 September 2024).
- Végh, J. How Science and Technology Limit the Performance of AI Networks. In Proceedings of the 5th International Conference on Advances in Signal Processing and Artificial Intelligence (ASPAI’ 2023), Tenerife (Canary Islands), Spain, 7–9 June 2023; International Frequency Sensor Association (IFSA) Publishing: Barcelona, Spain, 2023; pp. 90–92. [Google Scholar]
- nature.com. Solving the Big Computing Problems in the Twenty-First Century. 2023. Available online: https://www.nature.com/articles/s41928-023-00985-1.epdf (accessed on 10 September 2024).
- Fuller, S.H.; Millett, L.I. The Future of Computing Performance: Game Over or Next Level? National Academies Press: Washington, DC, USA, 2011. [Google Scholar] [CrossRef]
| Item | Technical | Biological | Notes |
|---|---|---|---|
| Computing units | Single complex processors | Simple cooperating processors | Processor/brain (supercomputer) |
| No. of independent connections | 1 | | Typical bus/axons |
| Versatility | Many operations | One operation | Per trigger |
| Utilization | Almost full | Very low | |
| Power consumption | 200 W (20 MW) | 0.1 W | [41] (supercomputer) |
| Power utilization | Wasteful | Barely minimal | |
| Cost function | Minimizing gates in the implementation | Minimizing time differences when delivering the result | Affects power consumption and operating accuracy |
| Time for elementary processing (s) | Fixed | Variable; decreased to 1/100 by cooperation | |
| Theoretical base | Tends to deviate from theory | Only partly known | [21,42] |
| Time awareness | n/a | Crucial | [42] |
| Operations to imitate the other | up to | up to | Depends on the operation |
| Operand and result type | Mathematically defined; exact, perfect | Physiologically defined; not entirely deterministic | |
| Parallelization | Parallelized sequential | True parallel (native) | [21] |
| Computing stack | Poor | Perfect | [21] |
| Information type | Digitally described by the Shannon model | Spatially and temporally distributed | [7] |
| Communication method | Point-to-point (fast but serial) | Many-to-many (slow but parallel) | [43] |
| Communication time | Seconds up to minutes | 1–10 ms | ANNs and supercomputers [32,42,44] |
| Communication wiring | Strongly simplified, sparse, shared | Unbelievably dense and complex; private | |
| Communication scaling | Very poor | Very good | [32,43,44] |
| Data storage mode | Permanent | Only what is needed; temporal | |
| Data storage form | Separate unit, uniform; also instructions | Different forms; also inherited/local; caching (implies multiple copying) | |
| Amount of data transfer | Fixed (bus width/word width) | Variable (just the needed minimum) | |
| Signal representation | Digital | Analog | Typical |
| Information representation | Mainly amplitude; time suppressed | Mainly time; amplitude suppressed | [7,43] |
| Spiking | Misunderstood (represents computing time) | Represents a synchrony signal | [7] |
| Operating principle | Mainly instruction-driven; partly event- and data-driven | Event- and data-driven | |
| Operating mode | Sequential | Parallel | [21] |
| Control mode | External: clock, programmed; data-controlled | Internal and environmental signals; pre-programmed and learned | Biomorph implementations inherit wrong technical ideas |
| Synchrony | Synchronous | Asynchronous | [21] |
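To make the rows 'Operating principle', 'Operating mode', and 'Synchrony' concrete, the sketch below contrasts a clock-driven loop, which spends effort on every tick whether or not new data arrived, with an event-driven loop, which works only when an input event exists. It is a minimal illustration under assumed names and values, not code from any cited system.

```python
# Contrast of clock-driven (technical) and event-driven (biological-style)
# control modes. All names, inputs, and timings are illustrative assumptions.
import heapq

def clock_driven(samples):
    """Visit every sample on a fixed clock grid: work happens on each tick."""
    ticks = 0
    for value in samples:
        ticks += 1            # the clock advances even if nothing changed
        _ = value * 2.0       # placeholder 'processing' on every tick
    return ticks

def event_driven(events):
    """Process only queued (time, value) events, in temporal order."""
    heapq.heapify(events)     # order events by their time stamp
    handled = 0
    while events:
        _t, _value = heapq.heappop(events)
        handled += 1          # work happens only when an event exists
    return handled

print(clock_driven([0.0] * 1000))              # 1000 ticks, mostly idle work
print(event_driven([(0.5, 2.0), (0.1, 1.0)]))  # 2 events, 2 units of work
```

The asymmetry mirrors the table: the clocked loop pays for synchrony with wasted ticks, while the event queue pays only per delivered operand.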
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).