Article
Peer-Review Record

Mission Reliability Assessment for the Multi-Phase Data in Operational Testing

by Jianping Hao and Mochao Pei *
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 13 May 2025 / Revised: 19 June 2025 / Accepted: 19 June 2025 / Published: 20 June 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Title: Mission Reliability Assessment for the Multi-phase Data in Operational Testing
Authors: Jianping Hao and Mochao Pei

1. Importance of the article:

- The proposed methodology addresses a critical gap in the reliability assessment of military systems under operational conditions, where data are often scarce and heterogeneous.

- It advances state-of-the-art Bayesian data fusion techniques by embedding covariates, making the models more adaptable and reflective of real-world complexities.

- Its focus on phased, mission-specific analysis aligns well with modern military operational requirements, emphasizing system-wide reliability over individual component metrics.

- The research is timely, relevant, and potentially influential for military testing agencies and reliability engineers seeking robust, practical assessment tools.

2. Strengths of the Article

- Novel Methodology: The integration of maximum entropy principles with reliability constraints and covariate embedding represents a significant innovation, enabling more accurate parameter estimation under non-stationary, heterogeneous data conditions.

- Practical Relevance: The detailed case study on VMLS demonstrates the real-world applicability and effectiveness of the proposed framework. The approach’s flexibility suggests potential for wide adoption across various weapon systems.

- Comprehensive Framework: The combination of phased modeling, Bayesian data fusion, and multi-path reliability assessment provides a thorough strategy for operational testing scenarios that are often complex and data-limited.

- Clear Organization: The paper delineates methodological development, simulation validation, and real-world application, facilitating understanding despite technical complexity.

3. Weaknesses and Areas for Improvement

- Technical Density and Clarity: While the target audience is likely experts in reliability, Bayesian methods, and military systems, some sections contain dense, lengthy sentences that could be simplified for clarity. Shorter sentences and clearer explanations would make the paper more accessible.

- Methodological Detail: Although the innovations are well-described, a more illustrative example or schematic diagram of how covariates are integrated into the Bayesian model would enhance understanding.

- Data Presentation: Figures (though not visible here) should include more detailed captions to elucidate what each visual emphasizes and how it contributes to supporting the conclusions.

- Generalizability: The study focuses primarily on VMLS; a discussion on the broader applicability or potential adaptation to other systems would strengthen its appeal for wider readership.

- Stylistic and Language Refinements: Minor editing for grammatical precision and consistency would improve overall readability.

4. Final Recommendation

Overall, the article is of high quality and makes a meaningful contribution to the field of mission reliability assessment under operational testing conditions.

It is recommended for acceptance pending minor revisions focused on clarity, illustrative explanations, and language editing. These adjustments will enhance readability, ensuring that the research can be comprehensively understood and appreciated by its intended scholarly audience.

 

Comments on the Quality of English Language

The English language in this document is generally clear and appropriately formal for an academic publication. The writing is technically precise, with proper use of terminology related to reliability assessment and Bayesian methods. The sentences are well-structured, contributing to the overall clarity of the complex methodological explanations.

However, there are minor areas where language could be improved for increased readability and flow. For example, some sentences are lengthy and could benefit from simplification or division to enhance clarity. Additionally, there are occasional typographical issues, such as missing conjunctions or punctuation, that could be refined.

Author Response

Comments 1: Technical Density and Clarity: While the target audience is likely experts in reliability, Bayesian methods, and military systems, some sections contain dense, lengthy sentences that could be simplified for clarity. Shorter sentences and clearer explanations would make the paper more accessible.

Response 1: Thank you for your feedback. We have made revisions in multiple places throughout the entire manuscript, with changes marked in red. Details can be found in the attached document.

Comments 2: Data Presentation: Figures (though not visible here) should include more detailed captions to elucidate what each visual emphasizes and how it contributes to supporting the conclusions.

Response 2:  Thank you for your feedback. We have supplemented the description of experimental scenarios in Section 3.2 Field Deployment Case.

Comments 3: Generalizability: The study focuses primarily on VMLS; a discussion on the broader applicability or potential adaptation to other systems would strengthen its appeal for wider readership.

Response 3:  Thank you for your feedback. We have supplemented additional content at the end of the article, as follows: 

The multi-path mission reliability assessment demonstrates that other performance metrics of the system also influence its mission reliability level, and its logic is extensible to other multi-phase systems (e.g., unmanned platforms, electronic warfare systems) with phased mission profiles and heterogeneous data sources.

The work presented in this paper provides a reference methodological framework for studying mission reliability assessment under operational testing conditions. Due to limitations in testing conditions, the covariate-embedded assessment models employed in this study have not yet incorporated additional unknown risk factors. The proposed framework exhibits inherent generalizability. Its core components, such as phased RBD construction, covariate-embedded Bayesian model, and multi-path synthesis, are adaptable to systems beyond VMLS. For instance, phased RBD can map to any equipment with sequential operational modes (e.g., reconnaissance-strike-assessment loops in UAVs), while covariate embedding accommodates diverse environmental drivers (e.g., jamming intensity in electronic warfare systems). The next phase of research will focus on validating this adaptability across domains.

The supplemented materials above have also been highlighted in red within the attached document.

Comments 4: Stylistic and Language Refinements: Minor editing for grammatical precision and consistency would improve overall readability.

Response 4:  Thank you for your notification. We have conducted checks and made improvements regarding grammar and consistency throughout the article; details can be reviewed in the attached document.

Comments 5: It is recommended for acceptance pending minor revisions focused on clarity, illustrative explanations, and language editing. These adjustments will enhance readability, ensuring that the research can be comprehensively understood and appreciated by its intended scholarly audience.

Response 5: Thank you for your feedback. We sought guidance from English instructors within our institution to refine the manuscript, and their inputs have significantly enhanced the quality of our work. Corresponding revisions have been implemented accordingly.

Comments 6: However, there are minor areas where language could be improved for increased readability and flow. For example, some sentences are lengthy and could benefit from simplification or division to enhance clarity. Additionally, there are occasional typographical issues, such as missing conjunctions or punctuation, that could be refined.

Response 6: Thank you for your suggestion. As this recommendation is similar in nature to the preceding ones, we have implemented revisions based on your specific advice accordingly.

Comments 7: Methodological Detail: Although the innovations are well-described, a more illustrative example or schematic diagram of how covariates are integrated into the Bayesian model would enhance understanding.

Response 7:  Thank you for your feedback. In the revised manuscript, we have supplemented Figure 3 and Figure 4, accompanied by concise captions to demonstrate this process.

 

Thank you once again for your valuable comments, which have significantly enhanced the quality of our paper. We acknowledge that this revision may still contain imperfections, and we sincerely welcome your further guidance. We remain fully open to implementing more thorough revisions as needed.

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

The authors present a case study for the reliability assessment of a vehicular missile launch system using reliability block diagrams, combined inputs in a Bayesian framework, and maximum likelihood estimation of parameters through Markov chain Monte Carlo (MCMC) simulation. The authors present an interesting case study of the reliability assessment of the subsystems and an application to the full system-level reliability estimation for mission success. The presented method combines three well-established techniques to form a workflow with the potential for application to other system developments.

The reviewer would like to thank the authors for their contribution to the journal and request some minor considerations for improvement to the submission:

  • The work of Krolo and Bertsche (2003 & 2003) lays out substantial portions of the Bayesian framework for reliability assessment with examples and should be included in the references/literature review.
  • The novelty of this submission should be made clearer to the reader. The block diagram method for reliability can be dated back to the work of Lusser, while the Bayesian framework for reliability was largely established in the 1980s (e.g., Martz & Waller 1982). The application of maximum likelihood estimation and MCMC methods for fitting parameters is similarly well established. Can the authors make the specifics of this contribution clearer?
  • Generally, the English used is understandable and correct but somewhat convoluted and difficult to read. The abstract in particular would benefit from review (e.g. opening sentence is very long).
  • Below table 2, there is a capitalization of “Larger” which should be corrected.

Author Response

Comments 1: The work of Krolo and Bertsche (2003 & 2003) lays out substantial portions of the Bayesian framework for reliability assessment with examples and should be included in the references/literature review.

Response 1: Thank you for your feedback! We have inserted the following content into the Introduction section:

“Krolo and Bertsche [17] proposed a Bayesian framework for reliability demonstration testing, introducing the decrease-factor to quantify the transferability of historical data under varying product conditions. Their approach effectively reduces required sample sizes while mitigating over-optimism in prior distributions.”

The above content is highlighted in red in the revised draft I uploaded.

Comments 2: The novelty of this submission should be made clearer to the reader. The block diagram method for reliability can be dated back to the work of Lusser, while the Bayesian framework for reliability was largely established in the 1980s (e.g., Martz & Waller 1982). The application of maximum likelihood estimation and MCMC methods for fitting parameters is similarly well established. Can the authors make the specifics of this contribution clearer?

Response 2: Thank you for your feedback! We have inserted the following content into the Introduction section:

"While foundational methodologies, including RBD [18], Bayesian reliability frameworks [19], and Markov Chain Monte Carlo (MCMC) [20] parameter estimation, serve as essential tools in this study, the contribution lies in their innovative engineering integration for mission reliability assessment under operational testing constraints. Specifically:

  1. Traditional RBDs are static and function-oriented, while the phased RBD is dynamically constructed based on the mission profile.
  2. While traditional Bayesian frameworks enable multi-source data combination through prior distributions, they rarely quantify the intrinsic sources of heterogeneity. This work extends the Bayesian framework by embedding covariates via a generalized linear model (GLM) and a location-scale regression model, quantitatively characterizing environmental impacts.
  3. Posterior inference employs the No-U-Turn Sampler (NUTS), an adaptive Hamiltonian Monte Carlo (HMC) variant within the MCMC framework. It automates tuning of leapfrog steps via recursive tree doubling, eliminating the inefficient random-walk behavior inherent in traditional MCMC and achieving faster convergence to the target distribution."

The above content is highlighted in red in the revised draft I uploaded.
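As an illustration of the covariate-embedding idea in point 2, a logit-link GLM maps environmental covariates to a phase reliability level. This is a minimal sketch with hypothetical intercept and coefficients, not the manuscript's fitted model:

```python
import math

def logit_reliability(covariates, coefficients, intercept):
    """Map environmental covariates to a reliability value in (0, 1)
    through a logit (inverse-logistic) link, as in a GLM."""
    eta = intercept + sum(b * x for b, x in zip(coefficients, covariates))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical values: baseline log-odds 3.0; temperature and humidity
# effects of -0.04 and -0.02 per unit (illustrative only).
r_mild = logit_reliability([20.0, 40.0], [-0.04, -0.02], 3.0)
r_harsh = logit_reliability([45.0, 90.0], [-0.04, -0.02], 3.0)
# The harsher environment lowers the predicted reliability.
```

The same link structure lets environmental measurements recorded during testing enter the likelihood directly, which is what makes the heterogeneity across test conditions quantifiable.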

Comments 3: Generally, the English used is understandable and correct but somewhat convoluted and difficult to read. The abstract in particular would benefit from review (e.g. opening sentence is very long). Below table 2, there is a capitalization of “Larger” which should be corrected.

Response 3: Thank you for your feedback! We have modified the opening sentence of the abstract in accordance with your guidance as follows:

“Traditional methods for mission reliability assessment under operational testing conditions exhibit some limitations. They include coarse modeling granularity, significant parameter estimation biases, and inadequate adaptability for handling heterogeneous test data. To address these challenges, this study establishes an assessment framework using a vehicular missile launching system (VMLS) as a case study.”

The remaining contents of the paper have also been checked and revised according to your guidance.

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

The manuscript proposes an inferential framework, which integrates the maximum entropy criterion with reliability monotonic decreasing constraints, develops a covariate-embedded Bayesian data fusion model, and employs a multi-path weight adjustment assessment method.
The viability of the approach is demonstrated to be superior in parameter estimation accuracy in simulation and physical testing studies.
Some steps of the inferential framework need to be further justified, elaborated and described, according to the Main Comments below.

Main Comments

  1. On page 6 (and likewise on page 9), R_ik is assigned a (beta) prior distribution, and posterior estimates of these R_ik parameters are then obtained under a monotonicity constraint; these posterior estimates are referred to as Rhat_ik values (which, apparently, are used to smooth the observed reliability observations).
    Then the posterior estimates Rhat_ik (the monotone constrained posterior estimates) are used as data observations in the likelihood function (equation (11)) to define a posterior distribution (equation (12)) of the parameters of a Bayesian logit regression model; this latter posterior distribution (of the parameters of this logit model) is estimated using MCMC.
    Please provide further justification of this two-stage estimation approach, as it might seem odd that you are obtaining a posterior distribution of logit model parameters from data observations which are themselves posterior estimates (i.e., the Rhat_ik values). That is, you seem to be inferring a posterior distribution from another posterior distribution, which is a non-standard or atypical approach to posterior inference.
  2. Posterior inferences are based on the posterior mean and credible intervals of model parameters. Also consider reporting the highest posterior density (HPD) intervals for each model parameter, which have some favorable properties over posterior credible intervals. 

  3. Throughout the manuscript, you mention the use of "the MCMC algorithm" to estimate the given posterior distribution of parameters of interest without describing any details of the associated MCMC algorithm. These details need to be described in the manuscript, including which algorithm (e.g., Gibbs sampler, Metropolis-Hastings algorithm, or slice sampler) was used to sample the full conditional posterior distribution of each model parameter, along with a description of each full conditional posterior distribution. Also, you need to describe the results of MCMC convergence analyses and correspondingly establish MCMC convergence, which can be done using the coda R package.

  4. Be sure to provide a weblink (e.g., GitHub) of the relevant software code, so that readers can reproduce all the equations and results presented in the manuscript.

Details
On pages 13 and 14, you use
Normal(0, 0,100,100, 0)
to denote a multivariate (bivariate?) normal distribution,
but this is non-standard notation which needs to be fixed.

Author Response

Comments 1:

On page 6 (and likewise on page 9), R_ik is assigned a (beta) prior distribution, and posterior estimates of these R_ik parameters are then obtained under a monotonicity constraint; these posterior estimates are referred to as Rhat_ik values (which, apparently, are used to smooth the observed reliability observations). Then the posterior estimates Rhat_ik (the monotone constrained posterior estimates) are used as data observations in the likelihood function (equation (11)) to define a posterior distribution (equation (12)) of the parameters of a Bayesian logit regression model; this latter posterior distribution (of the parameters of this logit model) is estimated using MCMC.
Please provide further justification of this two-stage estimation approach, as it might seem odd that you are obtaining a posterior distribution of logit model parameters from data observations which are themselves posterior estimates (i.e., the Rhat_ik values). That is, you seem to be inferring a posterior distribution from another posterior distribution, which is a non-standard or atypical approach to posterior inference.

Response 1: On pages 9 and 10 of the revised draft, we have added content (including Figure 3 and the text immediately preceding it) to address the issue you raised. These additions are highlighted in red.

Comments 2: Posterior inferences are based on the posterior mean and credible intervals of model parameters. Also consider reporting the highest posterior density (HPD) intervals for each model parameter, which have some favorable properties over posterior credible intervals.

Response 2: Thank you greatly for pointing this out! In fact, this paper consistently uses HPD intervals as they represent a specific form of credible intervals, though we had not explicitly stated it. Following your suggestion, we have added "2.2.2 MCMC-based posterior inference and model validation" spanning pages 13 to 15. Within Step 2 on page 15, we now explicitly describe the computation method for HPD intervals. Consequently, all references to "credible interval" have been replaced with "HPD interval" throughout this paper.
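Sample-based HPD computation follows a standard recipe: the shortest interval containing the target posterior mass. A minimal sketch (not the authors' implementation), assuming a unimodal posterior represented by MCMC draws:

```python
import math
import random

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the posterior draws:
    slide a window of ceil(mass * n) sorted points, keep the narrowest."""
    xs = sorted(samples)
    n = len(xs)
    k = int(math.ceil(mass * n))
    best = min(range(n - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[best], xs[best + k - 1]

random.seed(0)
draws = [random.gauss(0.0, 1.0) for _ in range(20000)]
lo, hi = hpd_interval(draws, 0.95)
# For a symmetric posterior the HPD roughly matches the equal-tailed
# interval, here about (-1.96, 1.96); for skewed posteriors it is shorter.
```

The advantage the reviewer alludes to is visible for skewed posteriors, where the shortest-window interval excludes low-density tail regions that an equal-tailed interval would include.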

Comments 3: Throughout the manuscript, you mention the use of "the MCMC algorithm" to estimate the given posterior distribution of parameters of interest, without describing any details about the associated MCMC algorithm, which need to be described in the manuscript, including the algorithm (e.g., Gibbs sampler, Metropolis Hastings algorithm, or slice sampler, etc.) that was used to sample the full conditional posterior distribution of each model parameter, and describe each full conditional posterior distribution, etc. Also, you need to describe the results of MCMC convergence analyses, and correspondingly establish MCMC convergence, which can be done using the coda R package.

Response 3: Thank you for raising this point! In the revised draft, we have added a dedicated section titled "2.2.2 MCMC-based posterior inference and model validation" to describe the MCMC algorithm. Additionally, content regarding MCMC convergence analysis is now presented on pages 21 and 22. All these additions are highlighted in red.
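The convergence check discussed here (as reported by coda's `gelman.diag`) can be sketched for a single parameter. This simplified version omits chain splitting and is illustrative only, with synthetic draws standing in for real MCMC output:

```python
import math
import random
import statistics

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat): compares between-chain
    and within-chain variance; values near 1 indicate convergence."""
    m, n = len(chains), len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    w = statistics.fmean(statistics.variance(c) for c in chains)
    b = n * statistics.variance(means)          # between-chain variance
    var_hat = (n - 1) / n * w + b / n           # pooled variance estimate
    return math.sqrt(var_hat / w)

random.seed(1)
chains = [[random.gauss(0.0, 1.0) for _ in range(5000)] for _ in range(4)]
rhat = gelman_rubin(chains)  # well-mixed chains give R-hat close to 1
```

Chains stuck around different modes inflate the between-chain term and push R-hat well above 1, which is the signal that more sampling or re-parameterization is needed.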

Comments 4: Be sure to provide a weblink (e.g., GitHub) of the relevant software code, so that readers can reproduce all the equations and results presented in the manuscript.

Response 4: We have uploaded the code associated with this paper to: https://github.com/peimochao/Coder-for-stats

Comments 5:

Details
On pages 13 and 14, you use
Normal(0, 0,100,100, 0)
to denote a multivariate (bivariate?) normal distribution,
but this is non-standard notation which needs to be fixed.

Response 5: The modification has been implemented on page 21 of the revised manuscript. Due to formatting limitations in this document, please refer to the revised manuscript file for visual verification.

Author Response File: Author Response.docx

Reviewer 4 Report

Comments and Suggestions for Authors

see the attached file

Comments for author File: Comments.pdf

Author Response

Comments 1: In what ways does the framework differentiate between continuous, demand-operated, and pulse-operated subsystems, and why is this differentiation important for reliability modeling?

Response 1: Thank you for raising this question! Accordingly, we have refined the content immediately below Figure 1. The revised text (highlighted in red in the draft) reads as follows:

  • Continuous operation systems: Include subsystems C, N, and H. These subsystems operate persistently throughout a phase, accumulating operational exposure metrics and generating time-dependent failure data. Aircraft engines (during flight), conveyor belts, data center servers, and analogous systems also fall within this category. Critical faults in C and H can be immediately detected, while subsystem N requires scheduled inspections. Consequently, the data generated by subsystem N differ from those of C and H: the former produces success/failure data with varying success rates, whereas the latter generate TBF data.
  • Demand-operated systems: Include subsystem F. These systems remain quiescent (standby/dormant) for extended periods and transition to an active state only upon defined triggering events. Their functional execution is characteristically transient, so reliability is assessed via binary success/failure data, with no feasible monitoring or recording of operational time. Fire alarm units, automotive starter motors, and pilot ejection systems are likewise classified within this paradigm.

  • Pulse-operated systems: Include subsystems M1-M3. Such systems undergo exceptionally high peak loads or stresses within vanishingly brief durations (millisecond to second scale), subsequently persisting in low-load or standby states for extended periods. This operational paradigm is characterized by extreme instantaneous power/stress density juxtaposed with markedly diminished average power/stress levels. Electromagnetic railguns, pyrotechnic airbag inflators, and surge protective devices are also classified within this domain. Missiles further fall into the category of non-repairable systems that produce binary outcome data structurally equivalent to that of subsystem F.

This differentiation is critical because different types of systems generate distinct categories of data, necessitating tailored reliability assessment models.

Comments 2: How does the paper introduce a phased reliability block diagram tailored to each mission phase, and what theoretical benefits does this bring to mission reliability analysis?

Response 2: The foundational materials required for introducing a phased reliability block diagram include:

(1) The mission profile of the system-of-systems (SoS) to which the system under test belongs.

(2) The mission phases the test system must undergo to accomplish predefined operational objectives.

(3) The critical subsystems enabling mission execution during each operational phase.

Additionally, as stated in the concluding paragraph of Section 2.1 in the revised manuscript:

"Under operational testing conditions, phased RBD construction shall adhere to the principles of functional aggregation and hierarchical compression, while differentiating phases and subsystems exhibiting distinctly heterogeneous characteristics..."

The adoption of the phased RBD methodology provides visual clarification of phase demarcation boundaries, functional subsystems, data categorization, and operational mission execution pathways.

Comments 3: How is the hierarchical Bayesian data fusion model formulated to combine prior knowledge, historical data, and operational test results?

Response 3: Thank you for your question! In the revised draft (pages 9-12), we have supplemented the modeling process descriptions with added textual explanations and supporting Figures 3 & 4. These additions (highlighted in red) collectively provide a systematic response to your question.

Comments 4: How does the maximum entropy criterion mathematically guide hyperparameter estimation under data uncertainty?

Response 4: The maximum entropy criterion mathematically guides hyperparameter estimation through a constrained optimization problem that:

(1) Maximizes uncertainty preservation. The objective function selects the distribution with the highest information entropy, a metric quantifying the uncertainty inherent in the distribution. This ensures minimal artificial information is introduced into the distribution. (This reflects a fundamental philosophy of distribution estimation: when estimating an unknown distribution, we seek the maximally uncertain estimate compatible with the available information.)

(2) Enforces monotonic decrease of the prior expectation and posterior estimates.
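The monotone-decrease constraint in (2) can be illustrated with the pool-adjacent-violators algorithm (PAVA), the standard least-squares projection onto non-increasing sequences. This is an illustrative stand-in, not the manuscript's entropy-based optimization, and the raw values are made up:

```python
def monotone_decreasing_fit(values):
    """Least-squares projection of a sequence onto the set of
    non-increasing sequences via pool-adjacent-violators: merge
    adjacent blocks whenever an increase violates the constraint."""
    blocks = []  # each block: [mean, count]
    for v in values:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            v2, n2 = blocks.pop()
            v1, n1 = blocks.pop()
            blocks.append([(v1 * n1 + v2 * n2) / (n1 + n2), n1 + n2])
    return [v for v, n in blocks for _ in range(n)]

raw = [0.97, 0.93, 0.95, 0.90, 0.91]  # noisy phase-wise estimates
fit = monotone_decreasing_fit(raw)    # smoothed, non-increasing sequence
```

The smoothed sequence pools the violating neighbors (0.93, 0.95) and (0.90, 0.91) into their averages, giving reliability estimates that can only decrease across phases.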

Comments 5: How are confidence intervals for mission reliability derived, especially under small-sample or censored data conditions?

Response 5: In the revised manuscript, Section 2.2.2 ('MCMC-based Posterior Inference and Model Validation') has been expanded to include detailed descriptions of credible interval computation methods, particularly on page 15. While operational test data inherently exhibit small-sample and censoring challenges, these limitations are effectively mitigated through the combination of historical datasets within our framework.

Comments 6: What assumptions are made in deriving equation 3?

Response 6: The derivation of Equation (3) on page 7 for the Beta distribution's mathematical expectation represents a fundamental result in probability theory. This closed-form expression follows directly from the probability density function's definition through standard integration techniques, without requiring additional assumptions beyond the standard Beta distribution parameter constraints.
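The closed form in question, E[X] = a/(a + b) for X ~ Beta(a, b), is easy to check numerically; the parameter values below are arbitrary:

```python
import random

# E[X] = a / (a + b) for X ~ Beta(a, b), a standard closed-form result.
a, b = 4.0, 2.0
random.seed(0)
draws = [random.betavariate(a, b) for _ in range(100_000)]
mc_mean = sum(draws) / len(draws)
closed_form = a / (a + b)  # = 2/3
# The Monte Carlo mean agrees with the closed form to sampling error.
```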

Comments 7: How are the definitions of "time" and "failure" standardized for operational testing within the framework?

Response 7: In the second paragraph of Section 2.1 ('Construction of Phased RBD'), we have incorporated additional content marked in red text to clarify this as follows:

"The diagram also annotates data types within the operational testing framework. "Time" in time-between-failures (TBF) data is defined as generalized operational time, utilizing the most mission-relevant metric (e.g., hours, mileage for mobility phases). This ensures reliability metrics are meaningful and directly tied to mission execution stress, rather than solely calendar time. "Failure" is strictly defined as an operational mission failure (OMF), which is a critical failure directly preventing the accomplishment of the current phase’s essential objective and leading to mission termination. This OMF definition focuses on failures impacting mission essential functions (MEFs) at the mission-critical level. While other failure classifications exist, including essential function failures (EFF) that affect specific MEFs without necessarily terminating the mission and non-essential function failures (NEFF) impacting performance without MEFs loss, the framework consistently applies the OMF standard for assessing mission reliability across all phases."

Comments 8: How does the framework's performance compare quantitatively to traditional reliability assessment methods in terms of accuracy and precision?

Response 8: In the revised manuscript, Section 3.1 (Simulation Scenario Validation) demonstrates the superiority of our evaluation model over traditional approaches in parameter estimation accuracy and precision using simulated data for Subsystem N. The newly added Section 3.2.4 (Comparative Validation) further validates the model's advantages in goodness-of-fit and predictive accuracy through empirical validation with field chassis data.

Comments 9: How is the methodology validated using both simulation and real-world engineering data?

Response 9: We sincerely appreciate this insightful question. In response, we have systematically expanded Section 3 (Results and Discussion) with substantial additions marked in red text. The simulated data primarily validate our monotonicity-constrained maximum entropy optimization model, which is an important methodological contribution of this work. The real-world data serves to: (1) demonstrate practical implementation of the proposed framework, (2) conduct sensitivity analysis, and (3) enable deeper comparative evaluation against benchmark methods.

Comments 10: In what ways does the Bayesian approach address data scarcity, and what are the observed benefits in real-world military equipment testing scenarios?

Response 10: We gratefully acknowledge your valuable inquiry. In response, we have incorporated the following additions marked in red within the fourth paragraph of Section 1 (Introduction):

“Bayesian methods overcome the limitations of traditional frequentist approaches in operational testing by incorporating domain knowledge, expert judgment, or historical data through prior distributions [11-12]. When updated with sparse, real-world operational testing data, the prior distributions yield posterior distributions quantifying parameters of interest. Key benefits demonstrated in military applications include: significantly reducing required test sample sizes and cost [13], dynamically adapting to varying operational conditions through hierarchical modeling [14], and enabling sequential knowledge accumulation across test phases for continuous refinement [15].  ”
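In the simplest binomial setting, the prior-updating mechanism described above reduces to the conjugate Beta update. The numbers below are hypothetical, not taken from the manuscript:

```python
# Conjugate Beta-Binomial update: encode historical/expert knowledge as
# Beta(a0, b0), then update with a small operational-test sample.
a0, b0 = 18.0, 2.0            # hypothetical prior (mean 0.90)
successes, failures = 4, 1    # sparse operational-test data
a_post, b_post = a0 + successes, b0 + failures
prior_mean = a0 / (a0 + b0)               # 0.90
post_mean = a_post / (a_post + b_post)    # 22/25 = 0.88
# The posterior blends the informative prior with the new evidence,
# which is why far fewer operational trials are needed than in a
# purely frequentist demonstration test.
```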

Comments 11: How does the model quantify and incorporate environmental impacts on subsystem reliability using real operational data?

Response 11: The covariate-embedded Bayesian model recurs throughout this study. By integrating operational test data with supplementary datasets, robust estimation of the environmental effect coefficients within the covariates is achieved. Combining these coefficients with environmental measurements (e.g., temperature, humidity) from operational testing enables quantitative assessment of environmental impacts on system reliability. Section 3.2.3 (Sensitivity Analysis) provides corroborating evidence.

Comments 12: What real-world case studies (such as the VMLS operational test) are used to demonstrate the utility and accuracy of the framework?

Response 12: We have incorporated supplementary content at the conclusion of the revised manuscript specifically addressing the points you raised:

"

(4) The multi-path mission reliability assessment demonstrates that other performance metrics of the system also influence its mission reliability level, and its logic is extensible to other multi-phase systems (e.g., unmanned platforms, electronic warfare systems) with phased mission profiles and heterogeneous data sources.

The work presented in this paper provides a reference methodological framework for studying mission reliability assessment under operational testing conditions. Due to limitations in testing conditions, the covariate-embedded assessment models employed in this study have not yet incorporated additional unknown risk factors. The proposed framework exhibits inherent generalizability. Its core components, such as phased RBD construction, covariate-embedded Bayesian model, and multi-path synthesis, are adaptable to systems beyond VMLS. For instance, phased RBD can map to any equipment with sequential operational modes (e.g., reconnaissance-strike-assessment loops in UAVs), while covariate embedding accommodates diverse environmental drivers (e.g., jamming intensity in electronic warfare systems). The next phase of research will focus on validating this adaptability across domains.

"

Comments 13: How is the proposed framework validated using simulation data, and what are the key findings from these simulations?

Response 13: The simulated data primarily validate our monotonicity-constrained maximum entropy optimization model. Time-between-failure (TBF) sequences were simulated from logistic distributions with known parameters, and multiple (T, Δt) combinations were used to simulate sparse and dense monitoring. Key findings include:

(1) Larger Δt values lead to increased estimation bias for both methods; however, the proposed method exhibits smaller bias.

(2) The proposed method achieves higher accuracy and precision.

(3) The proposed method consistently yields narrower 95% HPD intervals, reflecting enhanced precision.
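A minimal sketch of the simulation setup described in this response, assuming inverse-CDF sampling from a logistic distribution with zero-truncation of TBF values. The parameter values and the interval-aggregation helper are illustrative, not the manuscript's exact design:

```python
import math
import random

def simulate_tbf(mu, s, n, seed=0):
    """Draw n time-between-failure values from a logistic(mu, s)
    distribution via inverse-CDF sampling, truncating at zero since
    TBF must be positive (an illustrative assumption)."""
    rng = random.Random(seed)
    tbf = []
    while len(tbf) < n:
        u = rng.uniform(1e-12, 1.0 - 1e-12)
        draw = mu + s * math.log(u / (1.0 - u))  # logistic inverse CDF
        if draw > 0:
            tbf.append(draw)
    return tbf

def to_interval_counts(tbf, delta_t):
    """Aggregate cumulative failure times into counts per monitoring
    window of width delta_t, mimicking sparse (large delta_t) versus
    dense (small delta_t) monitoring."""
    times, t = [], 0.0
    for gap in tbf:
        t += gap
        times.append(t)
    n_bins = int(times[-1] // delta_t) + 1
    counts = [0] * n_bins
    for ft in times:
        counts[int(ft // delta_t)] += 1
    return counts

samples = simulate_tbf(mu=100.0, s=15.0, n=50)
coarse = to_interval_counts(samples, delta_t=500.0)  # sparse monitoring
fine = to_interval_counts(samples, delta_t=50.0)     # dense monitoring
print(len(samples), sum(coarse), sum(fine))  # 50 50 50
```

Comparing estimates recovered from the coarse versus fine counts against the known logistic parameters is what exposes the Δt-dependent bias reported in finding (1).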

Comments 14: Provide more real-world examples or case studies illustrating the limitations of traditional reliability assessment methods. This will help readers better appreciate the need for your proposed framework.

Response 14: Section 3.2.4 (Comparative Validation) has been newly incorporated in the revised manuscript.

Comments 15: Include a sensitivity analysis to show how changes in key parameters (such as prior distributions or covariate effects) impact the reliability estimates.

Response 15: Section 3.2.3 (Sensitivity Analysis) has been newly incorporated in the revised manuscript.

Comments 16: Make sure all figures (such as the phased reliability block diagram) are high-resolution, clearly labeled, and accompanied by descriptive captions.

Response 16: This issue has been examined and corrected, with Figure 1 replaced in the revised manuscript.

Comments 17: Provide more information on the operational scenarios, data collection process, and specific challenges faced during the VMLS case study.

Response 17: Thank you for your suggestion! In Section 3.2 (Field Deployment Case), we have augmented the opening paragraph with content reflecting combat operational context. Minor refinements were also implemented in Section 3.2.1 (Data Acquisition). While these adjustments enhance the manuscript, they may not be exhaustive. We would greatly appreciate your more in-depth guidance on this section.

We sincerely thank you for your exceptionally thorough and insightful suggestions. Your detailed and actionable feedback has significantly enhanced the quality of our manuscript. Should any deficiencies remain—which we fully acknowledge as a possibility—we would be deeply grateful for your continued guidance. We stand ready to diligently refine this work based on your further recommendations.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The reviewer would like to thank the authors for their work in extending the paper.
Please can a minor change be made to figure 9, which currently has the caption in Chinese characters.

Author Response

Comments 1: Please can a minor change be made to figure 9, which currently has the caption in Chinese characters.

Response 1: We have corrected Figure 9 as requested. The revised one has been uploaded to the attachment. Thank you very much for bringing this to our attention, which was an oversight in our work.

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

The authors addressed nearly all the comments I raised in the previous version of the manuscript. 

In the authors' following response to my comment below:
"Comments 4: Be sure to provide a weblink (e.g., GitHub) of the relevant software code, so that readers can reproduce all the equations and results presented in the manuscript.

Response 4: We have uploaded the code associated with this paper to:   https://github.com/peimochao/Coder-for-stats
"
they did not provide the reference and github weblink in the
Data Availability Statement section of the manuscript.

(This recommendation reflects the journal's Instruction for Authors website: 
https://www.mdpi.com/journal/stats/instructions
in the section on Supplementary Materials, Data Deposit and Software Source Code.)

The manuscript can be accepted for publication after the authors addresses the above requested simple revision.

Author Response

Comments 1: ... they did not provide the reference and github weblink in the Data Availability Statement section of the manuscript.

Response 1:  Thank you for identifying this oversight. We have revised the Data Availability Statement section as follows:

Data Availability Statement: The data that support the findings of this study are available from the corresponding author, M.P., upon reasonable request. The fundamental source code developed with Python 3.13.3 is deposited in a GitHub repository at https://github.com/peimochao/Coder-for-stats.

Reviewer 4 Report

Comments and Suggestions for Authors

No other comments

Author Response

Comments 1: No other comments.

Response 1: We sincerely appreciate your dedicated efforts in enhancing the quality of our manuscript. We look forward to continuing to benefit from your expert guidance in our future scholarly endeavors.
