Review
Peer-Review Record

On Implementing Technomorph Biology for Inefficient Computing

Appl. Sci. 2025, 15(11), 5805; https://doi.org/10.3390/app15115805
by János Végh
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 21 February 2025 / Revised: 25 April 2025 / Accepted: 27 April 2025 / Published: 22 May 2025
(This article belongs to the Special Issue Novel Insights into Parallel and Distributed Computing)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1, The paper appears to be more of a review than an original contribution. A large portion of the content summarizes existing research rather than presenting new ideas, findings, or creative insights. To enhance its value, the paper would benefit from a stronger focus on original work or innovative research that advances the field, rather than reiterating what is already known.

2, Figure 7: The left figure should be labeled as "A".

3, The paper should also reference neuromorphic computing when discussing supercomputer energy and computing efficiency. A comparison between the two could provide valuable insight into how neuromorphic computing approaches energy efficiency differently and potentially more effectively in certain contexts.

4, A reference is needed for ANN, and the full name should be provided before using the abbreviation. 

5, Machine learning, ANN, and neural computations are well-established concepts and not novel or new. The author should propose something original and innovative in this paper to contribute new insights or advancements to the field. Simply discussing these existing concepts without introducing new ideas or proposals does not add significant value to the paper.


Author Response

Many thanks for reviewing the long paper and making valuable suggestions.

> 1, The paper appears to be more of a review than an original contribution. A large portion of the content summarizes existing research rather than presenting new ideas, findings, or creative insights. To enhance its value, the paper would benefit from a stronger focus on original work or innovative research that advances the field, rather than reiterating what is already known.

This paper is a comparison of two drastically different implementations of computing, in which the terms and notions carry identical names but extremely different interpretations and implementations. In addition to the original work on formulating neuronal operation in technical terms, it adds original work on checking and comparing the efficacy of the different implementations, covering the major aspects of the purely technical and the biological implementations, as well as their mixtures. As far as the author knows, no similar comparison based on technical details is available.

> 2, Figure 7: The left figure should be labeled as "A".

Thanks, fixed.

>3, The paper should also reference neuromorphic computing when discussing supercomputer energy and computing efficiency. A comparison between the two could provide valuable insight into how neuromorphic computing approaches energy efficiency differently and potentially more effectively in certain contexts.

It would be a "mission impossible". First, most of the information needed to perform the task is proprietary and not public. Second, changing one aspect also changes others, which are usually not measured. Third, the parameters (including energy efficacy) change due to various other factors. Fourth, as the Gordon Bell Prize Jury expressed, there are no biomorphic architectures among the Top500 supercomputers. Fifth, as mentioned in the MS, the energy consumption share between data processing and data transfer changed, within a decade, from 20:80 to 80:20, simply because of the introduction of GPU computing and the running of AI algorithms. Or consider SpiNNaker: it was built using ideas its developers thought "biomorphic", but because they used non-biomorphic design experience and HW/SW components, it can use only 1% of the available HW for the targeted goal. There is no reasonable definition of efficacy. For the reasonable goal the reviewer suggested, dedicated measurements are needed, and such data (such as the work by D'Angelo, cited) are rarely measured and published (they are not successful in this aspect).

>4, A reference is needed for ANN, and the full name should be provided before using the abbreviation.

Thanks, fixed.

>5, Machine learning, ANN, and neural computations are well-established concepts and not novel or new. The author should propose something original and innovative in this paper to contribute new insights or advancements to the field. Simply discussing these existing concepts without introducing new ideas or proposals does not add significant value to the paper.

The paper is about how the mentioned, indeed well-established, concepts affect the efficiency of computing, and furthermore how some implementation methods violate fundamental principles of computing. That view is new. Formulating the notions and operation of biological computing in the terms and notions of technical computing is discussed here for the first time in the literature.

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript is original, well organized and presented. I have some questions to address to the Authors.

  • It would be significant to stress the relevance of this work if the Authors could expand the introduction with a brief review on digital and analog HW accelerators. Some hints can be [1], [2]. Therefore, to increase the scientific impact of the presented results, I would strongly encourage the Authors to assess and stress accordingly the possible highlights brought by their work for this additional topic.
  • The paper introduces a "generalized computing" framework to compare technical and biological computing. Could you clarify how this concept differs from existing models of hybrid computing, and provide concrete examples of its practical implications?
  • The manuscript discusses inefficiencies in both biological and technical computing. Could you provide a quantitative framework or metric to compare inefficiencies in these two systems beyond qualitative descriptions?
  • The paper emphasizes the importance of communication between computing elements. How do you propose mitigating communication delays in technical computing architectures, and are there any biological strategies that could be realistically adapted?
  • The section on "Biological Learning vs. Machine Learning" suggests fundamental differences in their mechanisms. Could you expand on how these differences impact computational efficiency and the potential limitations of mimicking biological learning in artificial systems?
  • While the paper critiques current computing paradigms, what specific applications or research directions do you foresee benefiting from adopting a more biologically inspired approach to computing?

[1] 10.1109/JSSC.2023.3269098;

[2] 10.1109/OJSSCS.2024.3432468;

 

 

Author Response

> The manuscript is original, well organized and presented. I have some questions to address to the Authors.

Thanks for reviewing the long MS and for the interest expressed by the questions.

> It would be significant to stress the relevance of this work if the Authors could expand the introduction with a brief review on digital and analog HW accelerators. Some hints can be [1], [2]. Therefore, to increase the scientific impact of the presented results, I would strongly encourage the Authors to assess and stress accordingly the possible highlights brought by their work for this additional topic.

A new Subsection 3.4.2 about those HW accelerators was added, together with some references to features of those accelerators.
[2] was very useful for drawing a parallel between the technical and biological implementations of matrix–vector multiplication, as well as in discussing the role of the "time window" in the elementary computation.
The author is especially grateful for the attention called to their Eqs. (12) and (13).

>The paper introduces a "generalized computing" framework to compare technical and biological computing. Could you clarify how this concept differs from existing models of hybrid computing, and provide concrete examples of its practical implications?

Generalized computing formulates von Neumann's model in event-driven and abstract terms.
Both technical and biological computing know events; chained elementary events use them, and even partly parallelized processing units or networked systems use events for synchronization, from sub-elementary operations to complex sequences.
There are many mixtures behind the notion of hybrid computing. Generalized computing is valid for all kinds and combinations of them, provided that they can synchronize each other by properly defined synchrony signals and that they can provide access to the corresponding arguments in their input and output sections.
The reviewer presumably refers to analog/digital or electronic/quantum hybrids, or maybe biological/electronic hybrids. They can cooperate provided they have the corresponding interfaces.
Case A/D: the signals Operand Available, Computing Begin, Computing End, etc., can be logically well defined. The transition interfaces can introduce issues of complexity, precision, computability, and efficiency.
Case E/Q: quantum operations inherently produce probabilistic results; this is why von Neumann did not consider them (the number of correcting bits increases more quickly than the number of payload bits). In the case of quantum computing elements, there are more fundamental problems:
- writing in and reading out arguments requires affecting one system but not the remaining ones;
- producing timing signals is at least challenging: several bits must be set up individually and sequentially and held back until all bits are set up; at the signal "Computing Begin" they can start, and either stop at the computer-generated "End Computing" or self-produce it, all without affecting each other's quantum state;
- somehow the direction of time must be set (there are only quantum STATES, such as UP and DOWN, but no processes such as "Rising" and "Falling"); otherwise, only random readouts can be produced.
Case B/E: biological computing uses data-controlled, time-coded, analog, non-mathematical elementary operations. Technical computing uses instruction-controlled, amplitude-coded, digital, mathematical elementary operations. The interface between those two principles must provide the appropriate conversion between those systems.
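Purely as an illustrative sketch of such a B/E conversion (the latency-coding scheme, window length, and resolution below are assumptions for demonstration, not taken from the MS), a time-coded signal can be mapped onto an amplitude-coded digital value as follows:

```python
# Illustrative time-code -> amplitude-code interface between a biological-style
# (analog, time-coded) and a technical (digital, amplitude-coded) element.
# The constants (10 ms window, 256 levels) are assumptions, not from the MS.

def latency_to_amplitude(spike_latency_ms, window_ms=10.0, levels=256):
    """Map an earlier spike to a larger digital amplitude (latency coding)."""
    clipped = min(max(spike_latency_ms, 0.0), window_ms)  # confine to the time window
    return round((1.0 - clipped / window_ms) * (levels - 1))

print(latency_to_amplitude(0.0))   # earliest spike -> 255 (full scale)
print(latency_to_amplitude(10.0))  # latest spike   -> 0
```

The sketch also shows where the conversion loses information: everything outside the "time window" is clipped, and the continuous latency is quantized to the digital resolution.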


>The manuscript discusses inefficiencies in both biological and technical computing. Could you provide a quantitative framework or metric to compare inefficiencies in these two systems beyond qualitative descriptions?

It is not possible in any reasonable way, just as the resource utilization and the tasks cannot be compared in a reasonable way.
Simply put, the definitions are different: we would be comparing an orange to a cat.
As emphasized, computing inherently has an inefficiency, whether technical or biological.
In biological computing, the elementary units (the neurons) are data, and the way they are wired constitutes the (very simple) instructions.
Only assemblies can produce useful results, with redundancy, replicability, associativity, etc.
Due to the dedicated wiring and parallel operation, all computations have a fixed time (although it is adjusted by learning, and the brain knows its own size and thus the needed/used delivery times).
Events (external or internal) start a new computation, which runs from beginning to end.
In technical computing, the elementary processing units are instructions, and the adjustable wiring delivers data to make (rather complex) operations, even if physically the same super-complex processing unit is used.
For historical reasons, in technical computing the payload efficiency is defined via the 'net' computing time.
The total processing time depends on the implementation. Sequential operation alone degrades the efficiency by orders of magnitude; decreasing the temporal length of the computing operation does the same in the opposite direction. The high-speed connection upscales transfer speed by several orders of magnitude, while super-parallelized transfer downscales it, and arbitration falsifies all timings.
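A toy model may make the asymmetry concrete (the function name and all figures below are illustrative assumptions, not measurements from the MS): payload efficiency as the 'net' computing time divided by the total processing time, once transfer and arbitration are counted in.

```python
# Toy model: payload efficiency of a single technical computing operation.
# All time figures are illustrative assumptions, not measured values.

def payload_efficiency(compute_s, transfer_s, arbitration_s):
    """'Net' computing time divided by the total processing time."""
    total = compute_s + transfer_s + arbitration_s
    return compute_s / total

# A nanosecond-scale core whose operand must cross a shared, arbitrated bus:
eff = payload_efficiency(compute_s=1e-9, transfer_s=50e-9, arbitration_s=100e-9)
print(f"{eff:.4f}")  # transfer and arbitration, not the computation, dominate
```

With these (assumed) numbers the 'net' computing time is well below 1% of the total, which is the kind of orders-of-magnitude degradation the reply refers to.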

> The paper emphasizes the importance of communication between computing elements. How do you propose mitigating communication delays in technical computing architectures, and are there any biological strategies that could be realistically adapted?

Using a high-speed bus with arbitration in huge systems is technological suicide.
The biological strategy is to implement computing and transfer inseparably, to cooperate (no Single Processor Approach), to perform as much parallel activity as possible (see the conceptual graph), and to transfer "as little as possible and as slowly as possible" (as cited in the paper): there is no use in constructing super-fast buses to send obsolete data bits on very wide datapaths after a lengthy arbitration.

The author proposed a hierarchic, network-like communication and a different approach to computing power in
https://www.sciencedirect.com/science/article/pii/S0167819118300577
and
https://link.springer.com/chapter/10.1007/978-3-030-70873-3_31
without raising any interest, although a simulator (actually implementing a Y-86 processor) was also prepared.
The essence is to abandon the Single Processor Approach, as Amdahl suggested in 1968; to abandon single-thread optimization; to use relatively simple processors (they are free) in combination; to forget synchronous operation, pipelining, etc.; to take up again the idea of "big.LITTLE technology"; to connect processors inside and outside chips by network-like but hierarchic buses; to apply "locality" also to data transfer; to introduce two layers of processing levels so that processing and transfer run in parallel; and to use multiple types of memories.

The physical communication delay is defined by the laws of physics, but the technical implementation of transfer can and must be redesigned. Implementing asynchronous operation requires taking a deep breath and redesigning computing and transfer from the ground up. The elementary circuits will, in simple cases, need several times more transistors, but the processors will be implemented using a fraction of the transistors we use today and will consume a fraction of the power.
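As a hedged illustration of why abandoning the Single Processor Approach matters (the formula below is a common textbook-style extension of Amdahl's law with a per-processor overhead term, not the author's published model, and the parameter values are assumptions):

```python
# Amdahl-style speedup with a communication/synchronization overhead that
# grows with the number of processors. Parameter values are assumptions
# chosen for demonstration only.

def speedup(n, parallel_fraction, overhead_per_proc):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n + overhead_per_proc * n)

for n in (10, 100, 1000, 10000):
    print(n, round(speedup(n, parallel_fraction=0.9999, overhead_per_proc=1e-6), 1))
# speedup first rises with n, then falls once the overhead term dominates
```

The turnover illustrates the reply's point: past a certain size, communication (not computation) sets the payload performance of the system.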


>The section on "Biological Learning vs. Machine Learning" suggests fundamental differences in their mechanisms.
Could you expand on how these differences impact computational efficiency and the potential limitations
of mimicking biological learning in artificial systems?

The short reply is: very much.
The real reply would fill a multi-volume book. It depends on what is to be learned and on what hardware. Using (implicitly) the wrong practices of technical computing, one faces the inherent limitations; see
https://braininformatics.springeropen.com/articles/10.1186/s40708-019-0097-2
and
http://link.springer.com/article/10.1007/s00521-021-06456-y

Biological learning is about adjusting time:
https://link.springer.com/article/10.1007/s10441-022-09450-6
So is the technical one:
https://www.mdpi.com/2227-9709/8/4/71
This is not understood in either of those fields.

One of the major problems: technical computing has HW and SW; biology also has a transition between them, called plasticity (which could be implemented as described in the EMPA papers), not to mention genes.
Learning is a multi-level process with different temporal characteristics.
To implement it using inadequate HW and SW, based on biologically irrelevant mathematical methods, without time-awareness and without understanding its principles, is hopeless.
Unfortunately, biology does not understand the elementary neuronal operation and, as a consequence, the operation of the brain. Although it is in its infancy, you may have a look at the site
https://jvegh.github.io/DynamicAbstractNeuralComputing/

>While the paper critiques current computing paradigms, what specific applications
or research directions do you foresee benefiting from adopting a more biologically inspired approach to computing?

Well, until we go back to redesigning and rebooting computing, no significant improvement will happen.
In addition, there is a danger in serving AI needs with solutions that violate computing principles.
In effect, we attempt to force our technical ideas onto biology.
We still have not understood what von Neumann said about timing, nor what Feynman said about building and using computers being an engineering field.
There is, from a distance, some resemblance between technical and biological computing.
Both have developed by the trial-and-error method, and the partial solutions are optimized in their own context.
Transplanting a partial solution into another context is useless.



Reviewer 3 Report

Comments and Suggestions for Authors

The paper addresses the relationship between biological principles and technical computing, highlighting how implementing biological concepts without understanding their context can lead to efficiency losses. The concept of "generalized computation," which includes biological computation and technical computation, is introduced, analyzing their fundamental differences in principles, efficiency, and costs. In addition, the inherent limitations of current computational systems, the trend of computational performance, and the challenges of efficiency in terms of energy and resources are discussed.

The paper deals with an interesting and relevant topic in the context of efficient computing. Drawing inspiration from biological models to design more efficient computational architectures is a necessary approach, especially considering the increasing energy demands of modern computing. Strategies for making computation more efficient are central to the design of future systems, and this paper contributes to that discussion.

The paper presents extensive coverage of the topic, addressing multiple aspects of computing from different perspectives. However, the text is considerably long for a typical scientific article. The depth and breadth of the contents make it more suitable to be structured as a book chapter, where it could be developed in greater detail without the constraints of length.

Although the topic is extremely necessary and relevant, the manuscript does not fully follow the scientific structure expected for publication in an academic journal. The journal's target audience is composed of researchers in the sciences, which implies the need for a more structured presentation aligned with established scientific methodologies. Currently, the paper has a more expository and conceptual orientation, which may hinder the rigorous validation of its assertions.

To improve the adequacy of the manuscript to the scientific standards of the journal, the following changes are suggested:

Structure the paper as a systematic literature review:

Apply an established methodology, such as PRISMA or Kitchenham's guide, to collect, analyze, and synthesize information rigorously.

Justify the selection of sources and provide a methodological framework to support the claims of the study.

 

Define a clear research question to guide the study:

Instead of a lengthy exposition, it is recommended to focus the research on a specific question and answer it through documented evidence.

This would allow a better organization of the content and a more apparent validation of the results.

Reinforce the validity of claims through previous studies and quantitative comparisons:

Include case studies or numerical examples to illustrate computational efficiency differences between technical and biological models.

Incorporate comparative tables or graphs to reinforce the arguments presented.

 

Reduce the length of the manuscript:

Focus content on key findings and reduce the level of detail in sections that do not directly contribute to the research question.

Transfer some of the more explanatory or theoretical content to supplementary material or consider publishing it in another format (e.g., book chapter).

Please consider some of these changes for your article to be considered for the journal.

Author Response

Many thanks for reviewing the long paper and making valuable suggestions.

>The paper addresses the relationship between biological principles and technical computing, highlighting how implementing biological concepts without understanding their context can lead to efficiency losses. The concept of "generalized computation," which includes biological computation and technical computation, is introduced, analyzing their fundamental differences in principles, efficiency, and costs. In addition, the inherent limitations of current computational systems, the trend of computational performance, and the challenges of efficiency in terms of energy and resources are discussed.
>The paper deals with an interesting and relevant topic in the context of efficient computing. Drawing inspiration from biological models to design more efficient computational architectures is a necessary approach, especially considering the increasing energy demands of modern computing. Strategies for making computation more efficient are central to the design of future systems, and this paper contributes to that discussion.
>The paper presents extensive coverage of the topic, addressing multiple aspects of computing from different perspectives. However, the text is considerably long for a typical scientific article. The depth and breadth of the contents make it more suitable to be structured as a book chapter, where it could be developed in greater detail without the constraints of length.

That is correct, but the author was granted permission in advance to publish a long paper. In this way, an audience can be reached that surely could not be reached by a book chapter. It would indeed be worthwhile to publish the extended version as a book.
The author is presently working on a book that approaches the subject from the side of defining biological computing with all its related issues, but in an abstract way.

>Although the topic is extremely necessary and relevant, the manuscript does not fully follow the scientific structure expected for publication in an academic journal. The journal's target audience is composed of researchers in the sciences, which implies the need for a more structured presentation aligned with established scientific methodologies. Currently, the paper has a more expository and conceptual orientation, which may hinder the rigorous validation of its assertions.

That is correct. However, the subject needs a special approach.
As von Neumann pointed out in his classical paper:
"The ideal procedure would be, to take up the specific parts in some definite order, to treat
each one of them exhaustively, and go on to the next one only after the predecessor is completely
disposed of. However, this seems hardly feasible. The desirable features of the various parts, and
the decisions based on them, emerge only after a somewhat zigzagging discussion."
The paper puts together three related but distinct subjects and must cover the needed notions, which sometimes have only their names in common.

> To improve the adequacy of the manuscript to the scientific standards of the journal, the following changes are suggested:
> Structure the paper as a systematic literature review:
> Apply an established methodology, such as PRISMA or Kitchenham's guide, to collect, analyze, and synthesize information rigorously.
> Justify the selection of sources and provide a methodological framework to support the claims of the study.

> Define a clear research question to guide the study:
> Instead of a lengthy exposition, it is recommended to focus the research on a specific question and answer it through documented evidence.

The research question was to review the efficiency of the ad-hoc solutions used in technical and biological computing. As far as the author knows, this is the first paper that discusses biological computing in an abstract way and in technical terms.
This approach makes it possible to point out that practically only the names of the notions are identical and that those solutions have internal consistency. Extracting some aspects of a solution without its context and inserting them into another context is not reasonable.

>This would allow a better organization of the content and a more apparent validation of the results.

>Reinforce the validity of claims through previous studies and quantitative comparisons:

As far as the author knows, the previous studies are based on partly misunderstood notions (such as comparing the elementary operations of those absolutely different implementations, or comparing the billions of dedicated, slow, parallel transfers with a general-purpose, fast, serial connection).
Even within technical computing alone, several efficiency definitions can be and are used, and unfortunately, most of the needed data are not public (they are proprietary).
Because of the very different principles and methods of implementation, a quantitative comparison between biological and technical computing is not possible.
Even comparing a single aspect of efficiency is hard, given that quite different notions define the efficiencies (technical computing is digital, high-speed, deterministic, amplitude-coded, sequential, and synchronized; biological computing is analog, low-speed, non-deterministic, time-coded, parallel, and asynchronous).

> Include case studies or numerical examples to illustrate computational efficiency differences between technical and biological models.

> Incorporate comparative tables or graphs to reinforce the arguments presented.

> Reduce the length of the manuscript:

> Focus content on key findings and reduce the level of detail in sections that do not directly contribute to the research question.

As far as the author knows, this is the first paper that discusses biological operation in technical terms and provides the correct (physically and mathematically established) operation, in this way giving engineers support in understanding what should be mimicked and how it could be done.
For the technical operation, solutions spanning several decades had to be addressed, so the text cannot be complete and contains only minimal details.

> Transfer some of the more explanatory or theoretical content to supplementary material or consider publishing it in another format (e.g., book chapter).

The goal was to call attention to the points where technical computing went wrong concerning efficiency during the past decades, and furthermore to present the correct interpretation of the biological implementation.
Given that the fundamental reason for the inefficiency is applying the theory of computation in a wrong way, the theoretical content should not be transferred to supplementary material.
Thanks for the idea of extending the material into a book chapter; the author is presently working on it.

> Please consider some of these changes for your article to be considered for the journal.

Many thanks; I made use of many of your ideas.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

1, Please check the reference format. Some references are missing journal information

2, Fig. 12 :The state plot is hard to read. Please replot it

3, Check equation 3. The units on both sides are not aligned.

Author Response


Thanks for the careful reading and the valuable comments.

> 1, Please check the reference format. Some references are missing journal information
Sorry, I used the journal's LaTeX bibliography style. Now I have manually edited some bibliography items.


> 2, Fig. 12: The state plot is hard to read. Please replot it

I have added a detailed caption (a concise summary; the explanatory text was really far from the figure). The figure attempts to summarize the operation of a neuron, showing several of its important characteristics, using time as a parameter.

> 3, Check equation 3. The units on both sides are not aligned.

Thanks for noticing that. Indeed, the multiplier with the capacitance was missing.

 

Reviewer 3 Report

Comments and Suggestions for Authors

The article is of interest to the scientific community. In this second round, I compared the newly submitted document with the previous one to comment on the recommendations provided in the first round of review.


My new comments are:
The original article lacked a systematic review structure, presenting a lengthy and somewhat disorganized discussion. This second version remains largely unchanged, continuing with broad, conceptual arguments without adopting a systematic review framework or established methodology such as the PRISMA or Kitchenham guidelines. You may consider other guidelines that allow for a rigorous working methodology.
In the new version, it still lacks a specific research question. The reviews did not limit the focus to a clearly defined research problem based on a specific question.
Furthermore, there are no case studies or numerical examples to support claims about efficiency differences between biological and technical computational models. Please add cases.

The conceptual framework section needs to be summarized. The paper still covers an extensive range of topics, maintaining an unusually long length for a journal article. This paper is suitable as a book chapter.

The original article lacked rigorous validation of its claims, relying heavily on theoretical and conceptual discussion. The new article continues to rely on conceptual arguments without empirical validation or a robust methodological framework to support its claims.

These issues need to be addressed to improve the article for its scientific contribution. However, the topic is very interesting, so some modifications to the way the results are presented are worthwhile.

Author Response

> The article is of interest to the scientific community. In this second round, I compared the newly submitted document with the previous one to comment on the recommendations provided in the first round of review.
> My new comments are:
> The original article lacked a systematic review structure,

Because it is not a systematic review article. It would be a "mission impossible" to compare everything from the first "perceptron" to brain-imitating large conventional computing systems, the expert systems of the second wave of AI to human experts, the operation of humanoid robots to the results of the behavioral sciences, and the attempts at compensating for the technical subtleties of large technical computing systems, primarily for AI, with the details of the operation of biological intelligence, up to considering neural networks, artificial vs. biological.


> presenting a lengthy and somewhat disorganized discussion.

It is lengthy because it publishes the theoretical basis for the temporal operation of various technical computing devices, together with the technical description of biological computing; it is a long invited research paper. The discussion is organized along different lines, not as in a review.

> This second version remains largely unchanged, continuing with broad, conceptual arguments without adopting a systematic review framework or established methodology such as the PRISMA or Kitchenham guidelines.

Well, even if it were a systematic review, this remark would reflect a fundamental misunderstanding of PRISMA:
https://www.bmj.com/content/372/bmj.n71
"PRISMA 2020 is not intended to guide systematic review conduct"
"Systematic reviews serve many critical roles. They can provide syntheses of the state of knowledge in a field, from which future research priorities can be identified; they can address questions that otherwise could not be answered by individual studies; they can identify problems in primary research that should be rectified in future studies; and they can generate or evaluate theories about how or why phenomena occur."
 
 You may consider other guidelines that allow for a rigorous working methodology.
 In the new version, it still lacks a specific research question. The review did not limit its focus to a clearly defined research problem based on a specific question.
 > The research question was to point out how the referred works missed the context
 of the biological ideas they grasped.


Furthermore, there are no case studies or numerical examples to support claims about efficiency differences between biological and technical computational models. Please add cases.
> As the paper emphasizes at several points, especially when connecting the two concrete implementations of computing
to the general theory, some aspects are similar, but in most cases only their names are the same.
It is simply not reasonable to compare the single high-speed sequential transfer bus with mandatory arbitration
to the billions of parallel low-speed buses without arbitration, or to compare the biological neuron's
elementary operation, 'charge integration', with, say, the vector-matrix multiplication of specialized AI processors.
As mentioned in the paper, any comparison might be wrong by plus or minus two orders of magnitude.
For example, as mentioned in the paper, the biologically implemented "in-memory computing"
(implemented as a simple phase-dependent summing) takes less than 0.05 ms, transferring the argument
takes up to 100 ms, and delivering the result to the axon takes about 0.2 ms.
Technically, the same operation, implemented as a vector-matrix
multiplication, is performed in a single clock cycle (i.e., maybe in a few nanoseconds);
delivering the weights takes a few hundred clock cycles (plus more clock cycles to calculate the weights),
i.e., on the order of microseconds; programming the analog weights takes ca. 10 ms;
and transferring those weights in vast computing systems takes up to dozens of minutes.
See Fig. 1 in
https://www.hpcwire.com/2022/05/30/top500-exascale-is-officially-here-with-debut-of-frontier/
following a 2-hour computation, it takes some 45 minutes to collect the results from the several
million processors, due to the arbitration.
In such a table, which biological datum should be compared with which technical datum when defining a merit
for either the efficiency or the computing speed?
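The order-of-magnitude gaps quoted above can be sketched as simple arithmetic. The sketch below uses the illustrative latency figures from the response (they are rough values taken from the text, not measurements); it only shows that, in both implementations, data transfer dominates the budget and the "compute" step is a vanishing fraction of the total:

```python
# Illustrative latency budgets (seconds), taken from the figures quoted above.
# These are order-of-magnitude values from the response text, not measurements.

biological = {
    "in-memory summation": 0.05e-3,   # < 0.05 ms phase-dependent summing
    "argument transfer":   100e-3,    # up to 100 ms
    "result to axon":      0.2e-3,    # about 0.2 ms
}

technical = {
    "vector-matrix multiply":    5e-9,   # one clock cycle, a few ns (assumed clock)
    "weight delivery":           1e-6,   # a few hundred clock cycles, ~microseconds
    "analog weight programming": 10e-3,  # ca. 10 ms
}

for name, ops in (("biological", biological), ("technical", technical)):
    total = sum(ops.values())
    compute = min(ops.values())  # the elementary "computing" step is the fastest one
    print(f"{name}: total ~ {total:.3g} s; "
          f"compute is {compute / total:.2%} of the budget")
```

In both columns the transfer-related steps exceed the elementary operation by well over two orders of magnitude, which is why any single "efficiency" or "speed" merit figure built from such a table depends entirely on which rows one chooses to compare.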

The conceptual framework section needs to be summarized. The paper still covers an extensive range of topics,
maintaining an unusually long length for a journal article. This paper is suitable as a book chapter.
> Biomorph computing grasps many, mostly unrelated, ideas from biology, each of them affecting efficiency. 
Yes, the paper is long, as previously agreed with the editors.


The original article lacked rigorous validation of its claims, relying heavily on theoretical and conceptual discussion. The new article continues to rely on conceptual arguments without empirical validation or a robust methodological framework to support its claims.
> As the article emphasizes, the fundamental issue is violating (or perhaps: not knowing) the fundamental laws of computing.
If you need empirical validation: touch your processor's surface and recall that 20 years ago it did not burn you.
The statements in the paper are not claims; they are the (unfortunately, unknown to the community)
theoretical underpinning of the empirical facts.
Figures 2 and 3 provide the theory; your burned hand provides the empirical validation.


These issues need to be addressed to improve the article for its scientific contribution. However, the topic is very interesting, so some modifications to the way the results are presented are worthwhile.
> Thanks for acknowledging the article's value.

 
