Article
Peer-Review Record

An IPNN-Based Parameter Identification Method for a Vibration Sensor Sensitivity Model

Mathematics 2025, 13(22), 3609; https://doi.org/10.3390/math13223609
by Honglong Li 1, Zhihua Liu 2, Chenguang Cai 2, Kemin Yao 3,*, Jun Pan 3 and Ming Yang 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 14 October 2025 / Revised: 5 November 2025 / Accepted: 8 November 2025 / Published: 11 November 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper introduces an Algorithm-Unrolled Interpretable Physics-Informed Neural Network (IPINN) for identifying parameters in a vibration sensor sensitivity model. The proposed framework, which integrates the algorithm unrolling technique, demonstrates superior generalization capabilities over traditional neural networks. The IPINN results are validated against both simulated and experimental data, and its performance is benchmarked against Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. However, the authors should address the following comments before proceeding with the publication process:

  1. IPINN consists of an input layer, a feedback layer, a feedforward layer, and an output layer. How many neurons are present in each layer?
  2. Although Figure 5 depicts the sinusoidal excitation and the voltage output signals on the same graph with a shared vertical axis (mV), this representation is misleading because the two signals differ fundamentally in their physical meaning. A revised presentation is needed to resolve this misrepresentation.
  3. As noted in Section 4.1, the sinusoidal excitation and voltage output signals were normalized. Please elucidate the specific normalization method employed. Furthermore, an apparent discrepancy is observed in Figure 5, where only the excitation signals appear normalized, while the output signals do not. An explanation for this discrepancy is requested.
  4. The dimensionality of the input data (i.e., the number of data points per entry) is not specified in the paper.
  5. During the training process, the model uses a batch size of 36 despite the dataset containing only 21 samples. While this setting does not cause a program error, it results in the entire dataset being processed as a single batch in each iteration. This practice eliminates the stochasticity of mini-batch gradient descent, which can degrade the training dynamics and model performance.
  6. A discrepancy exists between the model description in Section 3.2 and Table 3. The text outlines four distinct layers, while the table lists two hidden layers. Specifically, which layers in the textual description correspond to the two hidden layers noted in the table?
  7. The number of hidden layers is a critical hyperparameter that significantly influences model performance. For a fair comparison between the IPINN, LSTM, and GRU models, it is essential to control for this variable. However, this paper employs differing numbers of hidden layers across the models, a methodological choice that undermines the validity of the comparative conclusions.
  8. The use of bias terms is a common practice in conventional neural network architectures. Please justify the choice of setting bias=None for the IPINN, LSTM, and GRU models.

Author Response

Thank you very much for your valuable comments on this manuscript. We believe that your comments have strongly improved its quality and increased its research contribution. We have made the corresponding modifications and highlighted the changes at the relevant locations in the revised manuscript. Thank you again for your comments; we respond to each comment in the following pages.

Reviewer#1 Comments

This paper introduces an Algorithm-Unrolled Interpretable Physics-Informed Neural Network (IPINN) for identifying parameters in a vibration sensor sensitivity model. The proposed framework, which integrates the algorithm unrolling technique, demonstrates superior generalization capabilities over traditional neural networks. The IPINN results are validated against both simulated and experimental data, and its performance is benchmarked against Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. However, the authors should address the following comments before proceeding with the publication process:

Comments #1: IPNN consists of an input layer, a feedback layer, a feedforward layer, and an output layer. How many neurons are present in each layer?

Author response 1: Thanks very much for your valuable suggestion. The IPNN network consists of an input layer, a feedback layer, a hidden layer, a feedforward layer, and an output layer, with 1, 2, 2, 1, and 1 neurons, respectively. Correspondingly, we have updated the manuscript by newly adding “…a hidden layer…” and “…with 1, 2, 2, 1, and 1 neurons, respectively…” on lines 186–187 of paragraph 2 in Section 3.2 on page 5.

Comments #2: Although Figure 5 depicts the sinusoidal excitation and the voltage output signals on the same graph with a shared vertical axis (mV), this representation is misleading because the two signals differ fundamentally in their physical meaning. A revised presentation is needed to resolve this misrepresentation.

Author response 2: Thanks very much for your valuable suggestion. We have revised Figure 5 by adding the units “(g)” and “(mV)” separately in the legend to avoid confusion regarding the physical meanings of the input excitation and output signals.

Comments #3: As noted in Section 4.1, the sinusoidal excitation and voltage output signals were normalized. Please elucidate the specific normalization method employed. Furthermore, an apparent discrepancy is observed in Figure 5, where only the excitation signals appear normalized, while the output signals do not. An explanation for this discrepancy is requested.

Author response 3: Thanks very much for your valuable suggestion. The normalization process adopted in this study follows the min–max normalization method, which scales data to the range [−1, 1] as x_norm = 2(x − x_min)/(x_max − x_min) − 1, eliminating the influence of amplitude scaling. We have updated the manuscript by newly adding “Both the excitation and voltage output signals were normalized to the range of [−1, 1] using the min–max normalization method, expressed as x_norm = 2(x − x_min)/(x_max − x_min) − 1, which eliminates the influence of amplitude scaling.” on lines 236–238 of paragraph 3 in Section 4.1 on page 7. Correspondingly, Figure 5 has been updated to ensure consistent normalization.
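The min–max mapping described in this response can be sketched in a few lines of Python; the helper below is illustrative and not code from the paper:

```python
# Hedged sketch of min-max normalization to [-1, 1]:
# x_norm = 2 * (x - x_min) / (x_max - x_min) - 1
def minmax_normalize(signal):
    """Scale a non-constant sequence to the range [-1, 1]."""
    x_min, x_max = min(signal), max(signal)
    span = x_max - x_min
    return [2.0 * (x - x_min) / span - 1.0 for x in signal]

samples = [0.0, 2.5, 5.0, 7.5, 10.0]
print(minmax_normalize(samples))  # -> [-1.0, -0.5, 0.0, 0.5, 1.0]
```

Because the mapping is affine per signal, it removes amplitude scaling while preserving the waveform shape.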

Comments #4: The dimensionality of the input data (i.e., the number of data points per entry) is not specified in the paper.

Author response 4: Thanks very much for your valuable suggestion. We have updated the manuscript by newly adding “Each input sample contained 16,000 time-domain data points for the excitation and output signals used for the model training.” on lines 238–239 of paragraph 3 in Section 4.1 on page 7.

Comments #5: During the training process, the model uses a batch size of 36 despite the dataset containing only 21 samples. While this setting does not cause a program error, it results in the entire dataset being processed as a single batch in each iteration. This practice eliminates the stochasticity of mini-batch gradient descent, which can degrade the training dynamics and model performance.

Author response 5: Thanks very much for your valuable suggestion. Since the dataset contains only 21 samples, the entire dataset was processed as a single batch in each iteration. This deterministic setting was intentionally chosen to ensure stable convergence given the small dataset size, and it did not adversely affect the training performance.
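The full-batch behaviour discussed above can be illustrated with a small batching helper (hypothetical code, not from the study): whenever the requested batch size exceeds the dataset size, each epoch collapses to a single full-batch gradient update.

```python
# Illustrative helper: split sample indices into consecutive batches
# of at most batch_size elements.
def make_batches(n_samples, batch_size):
    idx = list(range(n_samples))
    return [idx[i:i + batch_size] for i in range(0, n_samples, batch_size)]

# A requested batch size of 36 over 21 samples yields one batch of all 21.
batches = make_batches(21, 36)
print(len(batches), len(batches[0]))  # -> 1 21
```

A smaller batch size (e.g. 7) would restore the mini-batch stochasticity the reviewer refers to, at the cost of noisier updates on so few samples.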

Comments #6: A discrepancy exists between the model description in Section 3.2 and Table 3. The text outlines four distinct layers, while the table lists two hidden layers. Specifically, which layers in the textual description correspond to the two hidden layers noted in the table?

Author response 6: Thanks very much for your valuable suggestion. To clarify, the IPINN consists of an input layer, a feedback layer, a hidden layer, a feedforward layer, and an output layer; only this hidden layer is counted in Table 3, and it corresponds to the hidden layer described in Section 3.2. We have corrected the number of hidden layers in Table 3 to “one” accordingly on page 9.

Comments #7: The number of hidden layers is a critical hyperparameter that significantly influences model performance. For a fair comparison between the IPINN, LSTM, and GRU models, it is essential to control for this variable. However, this paper employs differing numbers of hidden layers across the models, a methodological choice that undermines the validity of the comparative conclusions.

Author response 7: Thanks very much for your valuable suggestion. We fully agree that the number of hidden layers is an important hyperparameter affecting model performance. In this study, the number of hidden layers was deliberately set to highlight the structural simplicity and superiority of the IPINN. Specifically, the IPINN employs one hidden layer, while both the LSTM and GRU networks were designed with a greater number of hidden layers to ensure consistency between them. This configuration ensures a fair comparison and demonstrates that the unfolded algorithm-based IPINN can achieve superior performance with fewer layers and parameters, further emphasizing its efficiency and inherent advantages in model learning and generalization. We have also corrected the number of hidden layers of the GRU model in Table 3 to ensure consistency with the actual configuration.

Comments #8: The use of bias terms is a common practice in conventional neural network architectures. Please justify the choice of setting bias=None for the IPINN, LSTM and GRU models.

Author response 8: Thanks very much for your valuable suggestion. In this study, the bias terms were set to None for all three models (IPINN, LSTM, and GRU) to ensure consistency and fairness in comparison. Specifically, the IPINN incorporates physical parameters directly into its network weights, and including additional bias terms will distort the correspondence between network parameters and physical quantities in the state-space model. For the LSTM and GRU models, the bias terms were also removed to maintain the same training conditions and parameter comparability.

Comments #9: The English could be improved to more clearly express the research.

Author response 9: Thanks very much for your valuable suggestion regarding the English language and clarity of the manuscript. We have carefully reviewed the entire manuscript and made comprehensive improvements to enhance readability and clarity. Correspondingly, we have updated the manuscript by newly revising:

  • The “Feedback layer, Feedforward layer, Output layer” was corrected to “Hidden layer, Feedback layer, Feedforward layer” in Table 1;
  • The “[29-31]” was corrected to “[31-33]” on line 78 of paragraph 3 in Section 1 on page 2;
  • The “[32]” was corrected to “[37]” on line 161 of paragraph 1 in Section 3.1 on page 5;
  • The parameters of the hidden layers of the IPNN and GRU were respectively corrected to “1 and 8” in Table 3;
  • The phrase “could be mainly attributed” was corrected to “can be mainly attributed” on line 343 of paragraph 1 in Section 5.3 on page 12;
  • The phrase “maintained constraints consistent with the true physical process” was corrected to “adhered to physical constraints” on line 346 of paragraph 1 in Section 5.3 on page 12;
  • The word “fitted” was corrected to “captured” on line 348 of paragraph 1 in Section 5.3 on page 12;
  • Professor Kemin Yao's email address 258598135@qq.com was corrected to 201910798@gzcc.edu.cn.

Reviewer 2 Report

Comments and Suggestions for Authors

After a careful review of this article, I have several questions and concerns I'd like to raise:

What are the limitations of modeling the sensor as a single-degree-of-freedom system? Could multi-degree or nonlinear models offer better fidelity?

How does the IPNN perform under varying noise levels beyond the 20 dB Gaussian white noise used in simulation?

Can the IPNN framework be adapted to other sensor types (e.g., piezoelectric, MEMS)? What modifications would be needed?

Is a dataset of 21 samples sufficient to demonstrate generalization across frequency ranges? Could more samples improve statistical confidence?

How well does the IPNN handle non-sinusoidal or transient excitations typical in operational environments?

Why were GRU and LSTM chosen as baselines? Would including classical methods (e.g., Kalman filters, ARMA models) strengthen the comparison?

How sensitive is the IPNN’s interpretability to variations in the discretization method or sampling period?

Can the learned weights be traced back to physical parameters in a way that supports sensor diagnostics or fault detection?

 What are the computational requirements for deploying IPNN in embedded systems or real-time calibration setups?

What are the authors’ plans for extending this work — e.g., online learning, adaptive calibration, or integration with digital twins?


Author Response

Thank you very much for your valuable comments on this manuscript. We believe that your comments have strongly improved its quality and increased its research contribution. We have made the corresponding modifications and highlighted the changes at the relevant locations in the revised manuscript. Thank you again for your comments; we respond to each comment in the following pages.

Reviewer#2 Comments

After a careful review of this article, I have several questions and concerns I'd like to raise:

Comments #1: What are the limitations of modeling the sensor as a single-degree-of-freedom system? Could multi-degree or nonlinear models offer better fidelity?

Author response 1: Thanks very much for your valuable suggestion. For the vibration sensor investigated in this study, a single-degree-of-freedom (SDOF) system is sufficient to capture its full-band dynamic characteristics. When modeling a sensor operating over a higher frequency range, multi-degree-of-freedom (MDOF) or nonlinear models can provide higher accuracy and better fidelity. Nevertheless, these models introduce additional parameters and computational complexity, which may reduce model interpretability and increase the difficulty of parameter identification. Correspondingly, we have updated the manuscript by newly adding “Multi-degree-of-freedom or nonlinear models may offer higher accuracy, but at the cost of increased complexity.” on lines 96–98 of paragraph 1 in Section 2.1 on page 3.

Comments #2: How does the IPNN perform under varying noise levels beyond the 20 dB Gaussian white noise used in simulation?

Author response 2: Thanks very much for your valuable suggestion. In this study, a 20 dB Gaussian white noise level was used to represent a typical signal-to-noise ratio in practical measurements. The IPNN inherently exhibits robustness due to its physical constraints and feedback structure, which help suppress random noise. Therefore, 20 dB is considered representative for evaluating the model’s performance under realistic conditions.
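For readers unfamiliar with SNR-calibrated noise, a common way to synthesize Gaussian white noise at a prescribed SNR in dB is sketched below. This is the general technique, not necessarily the exact procedure used in the paper, and all names are illustrative:

```python
import math
import random

def add_awgn(signal, snr_db, rng):
    """Add zero-mean Gaussian noise so that 10*log10(P_signal / P_noise) = snr_db."""
    p_signal = sum(x * x for x in signal) / len(signal)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))  # invert the dB definition
    sigma = math.sqrt(p_noise)
    return [x + rng.gauss(0.0, sigma) for x in signal]

# Example: a 50 Hz sinusoid sampled at 8 kHz for 1 s, corrupted at 20 dB SNR.
rng = random.Random(0)
clean = [math.sin(2.0 * math.pi * 50.0 * t / 8000.0) for t in range(8000)]
noisy = add_awgn(clean, 20.0, rng)
```

At 20 dB the noise power is 1% of the signal power, which is why this level is often taken as a moderate, realistic disturbance.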

Comments #3: Can the IPNN framework be adapted to other sensor types (e.g., piezoelectric, MEMS)? What modifications would be needed?

Author response 3: Thanks very much for your valuable suggestion. In this study, the investigated vibration sensor belongs to a class of MEMS-based sensors. The IPNN framework is flexible, which can also be adapted to other sensor types, such as piezoelectric sensors. Adaptation to other sensor types would primarily require adjusting the physical parameters and training data to match the dynamics of the target sensor.

Comments #4: Is a dataset of 21 samples sufficient to demonstrate generalization across frequency ranges? Could more samples improve statistical confidence?

Author response 4: Thanks very much for your valuable suggestion. Although the dataset contains only 21 samples, they cover the full ISO-specified frequency range, providing representative training. The IPNN leverages physical constraints and feedback structure, allowing it to learn sensor dynamics rather than merely fit data. Therefore, the current dataset is sufficient to demonstrate generalization across frequencies.

Comments #5: How well does the IPNN handle non-sinusoidal or transient excitations typical in operational environments?

Author response 5: Thanks very much for your valuable suggestion. This study focuses on sinusoidal excitations, which are recommended by ISO standards for vibration sensor calibration to systematically investigate the sensor’s frequency response. Although only sinusoidal signals were considered, the IPNN learns the sensor’s underlying physical dynamics rather than fitting specific waveforms, allowing it to generalize to non-sinusoidal or transient excitations in principle.

Comments #6: Why were GRU and LSTM chosen as baselines? Would including classical methods (e.g., Kalman filters, ARMA models) strengthen the comparison?

Author response 6: Thanks very much for your valuable suggestion. The GRU and LSTM were selected as baselines because they are widely used recurrent neural network architectures for modeling temporal sequences. Classical methods such as Kalman filters or ARMA models could provide additional baseline comparisons, but they do not offer the same level of physical interpretability as the IPNN.

Comments #7: How sensitive is the IPNN’s interpretability to variations in the discretization method or sampling period?

Author response 7: Thanks very much for your valuable suggestion. In the IPNN, the network parameters are directly linked to the physical parameters of the sensor’s discrete state-space model, ensuring interpretability. Although variations in the discretization method or sampling period may slightly affect the numerical values of these parameters, the overall physical meaning and structure of the network remain unchanged, and the interpretability of the IPNN remains robust.

Comments #8: Can the learned weights be traced back to physical parameters in a way that supports sensor diagnostics or fault detection?

Author response 8: Thanks very much for your valuable suggestion. In the IPNN, the network weights are directly linked to the physical parameters of the sensor’s discrete state-space model, providing inherent interpretability. While this study focuses on parameter identification and sensitivity modeling, the direct mapping of weights to physical parameters suggests potential for supporting sensor diagnostics or fault detection. Exploration of this application is considered a promising direction for future work.

Comments #9: What are the computational requirements for deploying IPNN in embedded systems or real-time calibration setups?

Author response 9: Thanks very much for your valuable suggestion. The IPNN is designed based on a single-degree-of-freedom linear system with a simple network architecture consisting of only one hidden layer. Compared with deep or recurrent neural networks, its computational load is relatively low, making it suitable for deployment in resource-constrained environments such as embedded systems. Correspondingly, we have updated the manuscript by newly adding “Additionally, the interpretable structure and simple network architecture of the IPNN make it suitable for online learning and deployment in resource-constrained environments, such as embedded systems.” on lines 369–371 of paragraph 1 in Section 6 on page 13.

Comments #10: What are the authors’ plans for extending this work — e.g., online learning, adaptive calibration, or integration with digital twins?

Author response 10: Thanks very much for your valuable suggestion. The current study focuses on offline parameter identification and sensitivity modeling of vibration sensors using the IPNN. Given its interpretable structure and simple network architecture, the IPNN framework could potentially be extended to online learning and adaptive calibration in future work. Correspondingly, we have updated the manuscript by newly adding “…providing a promising direction for adaptive calibration and practical applications.” on line 372 of paragraph 1 in Section 6 on page 13.

Reviewer 3 Report

Comments and Suggestions for Authors

1- Please clarify what is algorithmically new beyond the unrolling state-space equivalence table (Table 1).

2- The paper should include a comparative discussion of recent hybrid frameworks such as PINN–Kalman filters, interpretable graph PINNs, or adaptive-constraint networks (Raissi et al., 2019; Zhao et al., 2025).

3- The experimental dataset is limited to 21 samples. Please justify the sufficiency of this dataset in training a neural model, even with physics priors.

4- Were k-fold cross-validations performed to avoid overfitting? If not, the generalization capability should be re-examined.

5- Although the network parameters correspond to physical coefficients, the manuscript does not provide a quantitative link, such as mapping weight values to stiffness/damping estimates.

6- The authors might enrich the discussion by connecting this framework with SHM-oriented studies, particularly those employing vibration sensors and hybrid optimization, such as [A novel Optimization-Based Damage Detection in Beam Systems Using Advanced Algorithms for Joint-Induced Structural Vibrations; Dynamic Analysis of Two Straight and Curved Beams Connected with Intermediate Vertical Beams Made of Functionally Graded Porous Materials; Vibration analysis of new cosine functionally graded microplates using isogeometric analysis]

7- A sensitivity analysis illustrating how parameter variations affect predicted voltage output or resonance response would improve physical interpretability claims.

8- Section 4.1 mentions Gaussian noise but does not specify the signal-to-noise ratio or its physical relevance.

9- How does the IPNN performance vary with different excitation frequencies or noise levels? Can this framework generalize to nonlinear sensors or multi-axis accelerometers?

Author Response

Thank you very much for your valuable comments on this manuscript. We believe that your comments have strongly improved its quality and increased its research contribution. We have made the corresponding modifications and highlighted the changes at the relevant locations in the revised manuscript. Thank you again for your comments; we respond to each comment in the following pages.

Comments #1: Please clarify what is algorithmically new beyond the unrolling state-space equivalence table (Table 1).

Author response 1: Thanks very much for your valuable suggestion. Beyond the unrolling state-space equivalence provided in Table 1, the algorithmic novelty lies in three aspects: (I) converting the physical sensitivity model into a discrete state-space form to reduce structural simplification errors; (II) constructing an interpretable physical neural network via algorithmic unrolling, where each neuron and weight correspond to specific physical parameters; and (III) optimizing the network using the vibration calibration data by laser interferometry to suppress noise and nonlinear disturbances. This integration enhances parameter interpretability, stability, and robustness, providing a methodological advancement beyond the conventional state-space representation.
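As a rough illustration of step (I), a single-degree-of-freedom model can be stepped in a discrete state-space form of the kind the unrolled network mirrors. The forward-Euler scheme and all parameter values below are illustrative assumptions; the paper’s Eqs. (7)–(8) may use a different discretization:

```python
# Hedged sketch: discretize m*x'' + c*x' + k*x = u(t) with state z = (x, v)
# via forward Euler, z[n+1] = z[n] + T * f(z[n], u[n]).
def sdof_step(z, u, m, c, k, T):
    x, v = z
    a = (u - c * v - k * x) / m   # acceleration from the equation of motion
    return (x + T * v, v + T * a)

# Free decay from a unit initial displacement (no excitation) over 1 s.
m, c, k, T = 1.0, 0.5, 100.0, 1e-3   # illustrative SDOF parameters
z = (1.0, 0.0)
for _ in range(1000):
    z = sdof_step(z, 0.0, m, c, k, T)
```

In an algorithm-unrolled network, each such recurrence step becomes one layer, so the layer weights inherit the physical meaning of m, c, and k.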

Comments #2: The paper should include a comparative discussion of recent hybrid frameworks such as PINN–Kalman filters, interpretable graph PINNs, or adaptive-constraint networks (Raissi et al., 2019; Zhao et al., 2025).

Author response 2: Thanks very much for your valuable suggestion. We have updated the manuscript to include a discussion of relevant physics-informed and hybrid PINN frameworks by newly adding “Additionally, Raissi et al. [29] introduced the foundational PINN framework for solving forward and inverse problems involving nonlinear partial differential equations, highlighting the ability of PINNs to incorporate physical laws into deep learning. Sorrentino et al. [30] combined PINNs with an Unscented Kalman Filter for sensorless joint torque estimation in humanoid robots, illustrating the integration of physics-informed networks with estimation techniques in complex dynamical systems.” on lines 68–74 of paragraph 3 in Section 1 on page 2. Correspondingly, we have updated the manuscript by newly adding the following references:

  1. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. https://doi.org/10.1016/j.jcp.2018.10.045
  2. Sorrentino, I.; Romualdi, G.; Moretti, L.; Traversaro, S.; Pucci, D. Physics-informed neural networks with Unscented Kalman Filter for sensorless joint torque estimation in humanoid robots. IEEE Robot. Autom. Lett. 2025, 10, 728–735. https://doi.org/10.1109/LRA.2025.3562792

Comments #3: The experimental dataset is limited to 21 samples. Please justify the sufficiency of this dataset in training a neural model, even with physics priors.

Author response 3: Thanks very much for your valuable suggestion. Although the dataset contains only 21 samples, they covered the full ISO-specified frequency range, providing representative training. The IPNN leverages physical constraints and feedback structure, allowing it to learn sensor dynamics rather than merely fit data. Therefore, the current dataset is sufficient to demonstrate generalization across frequencies.

Comments #4: Were k-fold cross-validations performed to avoid overfitting? If not, the generalization capability should be re-examined.

Author response 4: Thanks very much for your valuable suggestion. In this study, due to the limited number of samples (21), traditional k-fold cross-validation was not applied. However, the IPNN leverages physical constraints and a feedback structure, which inherently regularizes the network and prevents overfitting. Moreover, the dataset covers the full ISO-specified frequency range, ensuring that the trained model generalizes across the operational frequency spectrum. Therefore, despite the small sample size, the generalization capability is adequately supported by the physical modeling and network design.
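For reference, leave-one-out splitting, the k-fold variant usually preferred at this sample size, can be sketched as follows. The helper is hypothetical; the study itself did not apply cross-validation:

```python
# Illustrative leave-one-out cross-validation splitter for a small dataset.
def leave_one_out(n_samples):
    """Yield (train_indices, test_index) pairs, one fold per sample."""
    for i in range(n_samples):
        train = [j for j in range(n_samples) if j != i]
        yield train, i

folds = list(leave_one_out(21))
print(len(folds), len(folds[0][0]))  # -> 21 20
```

With 21 samples this gives 21 folds of 20 training samples each, a cheap way to re-examine generalization if the physics-based regularization argument alone is not persuasive.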

Comments #5: Although the network parameters correspond to physical coefficients, the manuscript does not provide a quantitative link, such as mapping weight values to stiffness/damping estimates.

Author response 5: Thanks very much for your valuable suggestion. As described in Section 3.2, paragraph 4, lines 200–201 of the revised manuscript, “By combining Eqs. (7) and (8), the unknown parameters of the sensitivity model can be accurately identified.” That is, the parameters to be identified can be solved directly from Eqs. (7) and (8), ensuring the physical interpretability of the network weights without requiring an explicit quantitative mapping.

Comments #6: The authors might enrich the discussion by connecting this framework with SHM-oriented studies, particularly those employing vibration sensors and hybrid optimization, such as [A novel Optimization-Based Damage Detection in Beam Systems Using Advanced Algorithms for Joint-Induced Structural Vibrations; Dynamic Analysis of Two Straight and Curved Beams Connected with Intermediate Vertical Beams Made of Functionally Graded Porous Materials; Vibration analysis of new cosine functionally graded microplates using isogeometric analysis]

Author response 6: Thanks very much for your valuable suggestion. We have updated the manuscript to include a discussion connecting the proposed IPNN framework with SHM-oriented studies employing vibration sensors and hybrid optimization techniques by newly adding: “Furthermore, this framework also shows potential applicability to SHM-oriented studies using vibration sensors and hybrid optimization techniques, as explored in recent studies on beam and microplate structures [34–36].” on lines 83-85 of paragraph 4 in Section 1 on page 2. Correspondingly, we have updated the manuscript by newly adding the following references:

  1. Mansouri, A.; Tiachacht, S.; Ait-Aider, H.; et al. A novel Optimization-Based Damage Detection in Beam Systems Using Advanced Algorithms for Joint-Induced Structural Vibrations. J. Vib. Eng. Technol. 2025, 13, 1–30. https://doi.org/10.1007/s42417-025-02003-4
  2. Jafari-Talookolaei, R.A.; Ghandvar, H.; Jumaev, E.; et al. Dynamic Analysis of Two Straight and Curved Beams Connected with Intermediate Vertical Beams Made of Functionally Graded Porous Materials. Int. J. Struct. Stab. Dyn. 2025, 46, 37–62. https://doi.org/10.1142/S021945542650135X
  3. Khatir, B.; Filali, S.; Belabdeli, S.; et al. Vibration analysis of new cosine functionally graded microplates using isogeometric analysis. Structures 2024, 69, 107467. https://doi.org/10.1016/j.istruc.2024.107467

Comments #7: A sensitivity analysis illustrating how parameter variations affect predicted voltage output or resonance response would improve physical interpretability claims.

Author response 7: Thanks very much for your valuable suggestion. The IPNN inherently links network weights to physical parameters through the discrete state-space model, enabling a direct reflection of parameter variations in the predicted voltage output and resonance behavior. Hence, the proposed framework provides a physically interpretable mapping between model parameters and sensor response, which implicitly captures the parameter sensitivity relationships.

Comments #8: Section 4.1 mentions Gaussian noise but does not specify the signal-to-noise ratio or its physical relevance.

Author response 8: Thanks very much for your valuable suggestion. As described in Section 4.1, paragraph 3, line 230 of the revised manuscript, Gaussian noise with a signal-to-noise ratio of 20 dB was added to the excitation signals to simulate typical environmental vibration disturbances. This level represents a moderate noise condition commonly encountered in practical vibration measurement systems.

Comments #9: How does the IPNN performance vary with different excitation frequencies or noise levels? Can this framework generalize to nonlinear sensors or multi-axis accelerometers?

Author response 9: Thanks very much for your valuable suggestion. In this study, the IPNN was evaluated under sinusoidal excitations across the full ISO-specified frequency range with 20 dB Gaussian noise, representing typical operating conditions. While further simulations under other noise levels or excitations were not performed, the network’s physical constraints and feedback structure provide inherent robustness, supporting generalization across the studied frequency range. The current work focuses on single-degree-of-freedom linear vibration sensors; extending the framework to nonlinear sensors or multi-axis accelerometers can be considered in future work.

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have made the necessary revisions and additions based on the reviewers' comments; acceptance is recommended.

Reviewer 2 Report

Comments and Suggestions for Authors

The authors addressed all the comments.

Reviewer 3 Report

Comments and Suggestions for Authors

Authors have addressed required clarifications and revisions
