Article
Peer-Review Record

MTBF-PoL Reliability Evaluation and Comparison Using Prediction Standard MIL-HDBK-217F vs. SN 29500

Electronics 2025, 14(13), 2538; https://doi.org/10.3390/electronics14132538
by Dan Butnicu * and Gabriel Bonteanu
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 28 April 2025 / Revised: 12 June 2025 / Accepted: 17 June 2025 / Published: 23 June 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This study investigates reliability prediction methods for power electronic converters and their components, with a comparative analysis of two standards, MIL-217 and SN 29500. A practical guide is provided for applying reliability estimation handbooks. I have the following questions.
(1) The research gap and contributions of this work are not sufficiently articulated in the introduction. A clearer distinction should be made between existing limitations and the advancements of this paper.
(2) The term "fase" is likely a misspelling and should be corrected to "phase." (Line 179)
(3) A critical issue in reliability handbooks is the validation of data-based prediction methods. Discrepancies among observed, predicted, and demonstrated MTBF raise concerns about accuracy verification.
Long-term operational validation under normal conditions is impractical, while accelerated testing (e.g., thermal cycling) may compromise accuracy due to unrealistic stress conditions. For instance, the SN 29500-predicted MTBF of 746 years (Fig. 5) appears questionable? If this question cannot be definitively addressed in this paper, further investigation in subsequent studies is recommended.

Author Response

Cover Letter - Manuscript ID: electronics-3642690

Dear editors and reviewers,

We sincerely thank you for the valuable feedback on the manuscript entitled “MTBF‐PoL Reliability Evaluation and Comparison using Prediction Standard MIL‐HDBK‐217F vs. SN 29500” (Manuscript ID: electronics-3642690). Your comments are all valuable and helpful for revising and improving our work. Based on the comments we received, careful modifications have been made to the manuscript. All changes are highlighted in the revised manuscript: red for Reviewer 1, green for Reviewer 2, and indigo for Reviewer 3. We hope the new manuscript will meet your standards. Below you will find point-by-point responses to the reviewers’ comments and questions.

Thanks for all the help.

 

Rev. 1 (*Text modifications in the manuscript suggested by Reviewer 1 are in red)

Many thanks to reviewer 1 for detailed comments and crucial suggestions for the manuscript. We have done our best to improve our paper with full respect to your notes.

Comments to the Author

This study investigates reliability prediction methods for power electronic converters and their components, with a comparative analysis of two standards, MIL-217 and SN 29500. A practical guide is provided for applying reliability estimation handbooks. I have the following questions.


(1) The research gap and contributions of this work are not sufficiently articulated in the introduction. A clearer distinction should be made between existing limitations and the advancements of this paper.

Response: The introduction of the paper has been updated to clearly state the existing limitations and the advancements of this paper.

(2) The term "fase" is likely a misspelling and should be corrected to "phase." (Line 179)

Response: Yes, it is indeed a typo. The term has been replaced in the text with “phase”.


(3) A critical issue in reliability handbooks is the validation of data-based prediction methods. Discrepancies among observed, predicted, and demonstrated MTBF raise concerns about accuracy verification. Long-term operational validation under normal conditions is impractical, while accelerated testing (e.g., thermal cycling) may compromise accuracy due to unrealistic stress conditions. For instance, the SN 29500-predicted MTBF of 746 years (Fig. 5) appears questionable? If this question cannot be definitively addressed in this paper, further investigation in subsequent studies is recommended.

 

Response: A 746-year Mean Time Between Failures (MTBF) obtained using the SN 29500 standard can indeed appear questionable, depending on the context:

  1. Interpretation of MTBF

MTBF does not mean a single unit will last 746 years—it represents the average time between failures in a large population of identical components. If a system has an MTBF of 746 years, it means that in a large sample, failures will statistically occur at a rate of one failure per 746 years of cumulative operation.
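To make this interpretation concrete, here is a minimal sketch (Python, assuming the constant-failure-rate exponential model that underlies both handbook predictions); the fleet size and mission time are hypothetical illustration values, and only the 746-year MTBF comes from the prediction.

```python
# Minimal sketch: converting a predicted MTBF into a constant failure rate
# and into an expected number of failures for a fleet (exponential model).
# The fleet size and mission time below are hypothetical illustration values.

HOURS_PER_YEAR = 8760.0

def failure_rate_from_mtbf(mtbf_years: float) -> float:
    """Constant failure rate lambda [failures/hour] implied by an MTBF given in years."""
    return 1.0 / (mtbf_years * HOURS_PER_YEAR)

def expected_failures(mtbf_years: float, units: int, mission_years: float) -> float:
    """Expected number of failures in a population of identical units."""
    lam = failure_rate_from_mtbf(mtbf_years)
    return lam * units * mission_years * HOURS_PER_YEAR

lam = failure_rate_from_mtbf(746.0)
print(f"lambda = {lam:.3e} failures/h  (about {lam * 1e9:.0f} FIT)")
# e.g. 10,000 converters operated for 5 years (illustrative numbers):
print(f"expected failures = {expected_failures(746.0, 10_000, 5.0):.1f}")
```

A 746-year MTBF therefore corresponds to roughly 1.5e-7 failures per hour; no individual unit is expected to survive anywhere near 746 years.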

  2. Practicality of Prediction

SN 29500 is based on industrial reliability data, but predicting such long lifespans can be highly uncertain due to unforeseen aging effects, environmental changes, and technological advancements. Components degrade over time due to wear-out mechanisms (e.g., material fatigue, oxidation, and thermal cycling), which may not be fully accounted for in MTBF calculations.

  3. Environmental and Operational Factors

MTBF predictions assume constant operating conditions, but real-world environments introduce temperature fluctuations, humidity, mechanical stress, and other factors that accelerate failures. If the prediction does not account for mission profiles (actual usage conditions), the estimated MTBF may be overly optimistic.

  4. Industry Perspective

Some experts argue that MTBF is often misinterpreted and should be supplemented with failure rate analysis and lifetime predictions. SN 29500 is widely used in industrial applications, but its assumptions may not always align with real-world failure rates.

A slight text modification was made to reflect the above at the end of chapter 5.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Point 1:

Please use higher resolution of Figure 2.

Point 2:

What does the author mean by saying “simulation PSPICE fase” in line 179 of Page 6? Is it a typo?

Point 3:

Please put the references of the models into title of Figure 3 in line 205, not showing them inside Figure 3. Also list them in the reference.

Point 4:

Please replace “÷” by “/” in lines not limited to 212 and 213.

Point 5:

Please divide the equations in 228, 234 into multiple lines instead of making them too long in one line. Also applies to other long equations like equation 15-19.

Point 6:

Have the reliability calculation results been compared with adequate reliability measurement data? How can it be concluded that SN29500 is more accurate (lines 422-423)? Have the authors calculated the % difference of each standard with respect to field data? It has not been justified why SN29500 provides “a better MTBF” (line 467).

 Point 7:

Please do not abbreviate by “it” in line 437, but naming the observation explicitly. Also edit the quotation marks in this section.

Point 8:

Please add more in discussion section. What the differences in environment factors in these two models bring to the reliability estimation results? Discuss it regarding the observation in this study.

Point 9:

Many of the bullet points in conclusion section needs to be reduced/abbreviate because many of them use general description. Please rewrite the conclusion and make it closely related with the analysis in this study by providing numbers/proofs, etc. All the statements need to be justified. For example, why listing : “identifying the most reliable components early in the design process can reduce costs associated… budget”. It is not relevant to the “reliability evaluation and comparison” topic in this paper. Please delete all the bullet points that are not relevant. The conclusion should be concluding everything that has been discussed and analyzed before instead of piling up irrelevant statements that has never been discussed in former sections. Many other bullet points should also be deleted. Please edit either the conclusion or provide enough discussion/proofs for all the statements.

Author Response

Cover Letter - Manuscript ID: electronics-3642690

Dear editors and reviewers,

We sincerely thank you for the valuable feedback on the manuscript entitled “MTBF‐PoL Reliability Evaluation and Comparison using Prediction Standard MIL‐HDBK‐217F vs. SN 29500” (Manuscript ID: electronics-3642690). Your comments are all valuable and helpful for revising and improving our work. Based on the comments we received, careful modifications have been made to the manuscript. All changes are highlighted in the revised manuscript: red for Reviewer 1, green for Reviewer 2, and indigo for Reviewer 3. We hope the new manuscript will meet your standards. Below you will find point-by-point responses to the reviewers’ comments and questions.

Thanks for all the help.

 

Rev. 2 (*Text modifications in the manuscript suggested by Reviewer 2 are in green)

Many thanks to reviewer 2 for detailed comments and crucial suggestions for the manuscript. We have done our best to improve our paper with full respect to your notes.

Comments to the Author

Point 1:

Please use higher resolution of Figure 2.

Response: Figure 2 has been re-rendered at a higher resolution.

Point 2:

What does the author mean by saying “simulation PSPICE fase” in line 179 of Page 6? Is it a typo?

Response: Yes, it is indeed a typo. The term has been replaced in the text with “phase”.

Point 3:

Please put the references of the models into title of Figure 3 in line 205, not showing them inside Figure 3. Also list them in the reference.

Response: Figure 3 has been modified, and the models are now listed in the References.

Point 4:

Please replace “÷” by “/” in lines not limited to 212 and 213.

Response: Done.

Point 5:

Please divide the equations in 228, 234 into multiple lines instead of making them too long in one line. Also applies to other long equations like equation 15-19.

Response: Done

Point 6:

Have the reliability calculation results been compared with adequate reliability measurement data? How can it be concluded that SN29500 is more accurate (lines 422-423)? Have the authors calculated the % difference of each standard with respect to field data? It has not been justified why SN29500 provides “a better MTBF” (line 467).

Response:

The Siemens SN 29500 reliability prediction standard is often considered more modern and practical than MIL-HDBK-217, particularly for commercial applications. It provides up-to-date failure data: MIL-HDBK-217 was last updated in 1995, so its failure rate models reflect older electronic technologies, whereas SN 29500 incorporates more recent field failure data, making it more relevant for today’s components and manufacturing processes. MIL-HDBK-217 tends to provide pessimistic failure rate predictions, often underestimating the reliability of modern components, while SN 29500 offers more balanced and realistic estimates that align better with failure rates actually observed in commercial environments. SN 29500 also covers modern component types, which keeps its predictions relevant, and many companies prefer it because it matches real-world reliability expectations more closely.

MIL-HDBK-217 is still used in defense and aerospace but is often supplemented with additional reliability models to improve accuracy.

 Point 7:

Please do not abbreviate by “it” in line 437, but naming the observation explicitly. Also edit the quotation marks in this section.

Response: In the text, ”it” was replaced by ”of the difference appeared between the two standards”; the quotation marks were edited as well.

Point 8:

Please add more in discussion section. What the differences in environment factors in these two models bring to the reliability estimation results? Discuss it regarding the observation in this study.

Response:

Environmental factors play a crucial role in reliability estimation, and different prediction standards account for them in different ways. Environmental differences affect reliability predictions through the variability of failure rates: some standards, like MIL-HDBK-217, use environmental multipliers to adjust failure rates based on conditions such as temperature, humidity, and vibration, while others, such as SN 29500, incorporate environmental stress factors differently, often focusing on industrial (e.g., automotive) conditions. There are also differences in environmental assumptions, since standards may assume different baseline environments: MIL-HDBK-217 defines multiple operational environments (e.g., ground fixed, airborne, naval sheltered), each with distinct reliability models, whereas other standards use conversion matrices to adjust predictions when the operating environment changes. Finally, there is the question of adaptability of the predictions: some standards allow custom environmental adjustments, while others rely on predefined conditions. If a product is moved to a harsher environment than initially assumed, the reliability estimate may need to be recalibrated using environmental conversion matrices.
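As a minimal illustration of what an environmental multiplier does to a prediction, the sketch below uses the multiplicative part-stress form λ_p = λ_b · π_T · π_E · π_Q employed by MIL-HDBK-217-style models; the base rate and π values are placeholders for illustration only, not figures taken from either standard.

```python
# Minimal sketch of a MIL-HDBK-217-style part-stress model:
# lambda_p = lambda_b * pi_T * pi_E * pi_Q (multiplicative pi factors).
# All numeric values below are illustrative placeholders, not values
# taken from MIL-HDBK-217F or SN 29500 tables.

from math import prod

def part_failure_rate(lambda_base: float, pi_factors: dict) -> float:
    """Part failure rate [failures/10^6 h]: base rate times all pi factors."""
    return lambda_base * prod(pi_factors.values())

# The same hypothetical part evaluated under two assumed environments:
benign_env = {"pi_T": 1.2, "pi_E": 1.0, "pi_Q": 1.0}   # e.g. benign, fixed ground
harsh_env  = {"pi_T": 1.2, "pi_E": 4.0, "pi_Q": 1.0}   # harsher environment multiplier

lam_benign = part_failure_rate(0.05, benign_env)
lam_harsh  = part_failure_rate(0.05, harsh_env)
print(f"benign: {lam_benign:.3f}  harsh: {lam_harsh:.3f}  ratio: {lam_harsh / lam_benign:.1f}x")
```

Because the environment factor enters multiplicatively, standards that assume different baseline environments or different environment-factor ranges can diverge substantially even when their base failure rates are similar.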

The Discussion chapter was modified according to your suggestions.

 

Point 9:

Many of the bullet points in conclusion section needs to be reduced/abbreviate because many of them use general description. Please rewrite the conclusion and make it closely related with the analysis in this study by providing numbers/proofs, etc. All the statements need to be justified. For example, why listing : “identifying the most reliable components early in the design process can reduce costs associated… budget”. It is not relevant to the “reliability evaluation and comparison” topic in this paper. Please delete all the bullet points that are not relevant. The conclusion should be concluding everything that has been discussed and analyzed before instead of piling up irrelevant statements that has never been discussed in former sections. Many other bullet points should also be deleted. Please edit either the conclusion or provide enough discussion/proofs for all the statements.

Response: Conclusion chapter was modified according to your suggestions.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This paper evaluates the Mean Time Between Failures for DC-DC converters, considering two reliability standards, MIL-217 and SN-29500. The results under the two standards exhibit quite different MTBF values, while both report very similar failure rate distributions for specific devices within the converter. In general, the contributions of this work are not clearly presented, and the circuit under evaluation is quite limited (only a certain DC-DC converter is considered). Furthermore, the presentation needs to be largely improved. My detailed comments are as follows:

1. The MTBF model (Figure 1) must be improved. Though the MTBF model has been widely used and the related terms like infant mortality, useful life and wear-out failures are familiar to some researchers, a more detailed explanation of each region is necessary to guide the readers who are not familiar with the concept.

2. Section 3 A, a detailed explanation of the circuit under test (the chosen DC-DC converter) is required, which includes:
- How are the capacitance and the MOSFET parameters determined? How would different values for the parameters affect the result and conclusion of this work?
- Is there any alternative circuit structure to perform the same DC-DC conversion? If so, please list them and explain why the chosen structure was selected. How would a different structure affect the result and the conclusion?

3. The presentation needs to be largely improved. Specifically, for the many equations included in the paper, the lack of proper descriptions of the notations makes them hard to understand. In addition, the mix of mathematical equations and text also decreases the readability. Finally, how the value of each parameter is determined is not clear. As all the results are based on computations using the equations, the lack of proper explanations for the equations makes the overall results and comparison unconvincing.

4. The contribution is one of my biggest concerns: the paper only reveals that different reliability standards may lead to very different MTBF values, but this is not surprising, as they are based on different models. A more important question is how the different results can be used in the design process. This is not explained. The authors need to explain in more detail how the experiment results can contribute to the electronic design community.

5. I wonder whether the conclusion would stay the same if the two standards were used to evaluate the reliability of other circuits. A discussion related to this is suggested to improve the generalization of this work.

Author Response

Cover Letter - Manuscript ID: electronics-3642690

Dear editors and reviewers,

We sincerely thank you for the valuable feedback on the manuscript entitled “MTBF‐PoL Reliability Evaluation and Comparison using Prediction Standard MIL‐HDBK‐217F vs. SN 29500” (Manuscript ID: electronics-3642690). Your comments are all valuable and helpful for revising and improving our work. Based on the comments we received, careful modifications have been made to the manuscript. All changes are highlighted in the revised manuscript: red for Reviewer 1, green for Reviewer 2, and indigo for Reviewer 3. We hope the new manuscript will meet your standards. Below you will find point-by-point responses to the reviewers’ comments and questions.

Thanks for all the help.

 

Rev. 3 (*Text modifications in the manuscript suggested by Reviewer 3 are in indigo)

This paper evaluates the Mean Time Between Failures for DC-DC converters, considering two reliability standards, MIL-217 and SN-29500. The results under the two standards exhibit quite different MTBF values, while both report very similar failure rate distributions for specific devices within the converter. In general, the contributions of this work are not clearly presented, and the circuit under evaluation is quite limited (only a certain DC-DC converter is considered). Furthermore, the presentation needs to be largely improved. My detailed comments are as follows:

  1. The MTBF model (Figure 1) must be improved. Though the MTBF model has been widely used and the related terms like infant mortality, useful life and wear-out failures are familiar to some researchers, a more detailed explanation of each region is necessary to guide the readers who are not familiar with the concept.

Response:

The bathtub model in reliability engineering describes the failure rate of a product over its lifecycle using a characteristic curve that resembles the shape of a bathtub. This model is widely used to understand and predict system failures, helping engineers design more reliable products.

Phases of the Bathtub Curve

The bathtub curve consists of three distinct phases:

Early Failure (Infant Mortality) Phase: This phase occurs at the beginning of a product's life. The failure rate is initially high due to manufacturing defects, poor material quality, or improper assembly. Over time, defective units are identified and removed, leading to a decreasing failure rate. Burn-in testing, stress screening, and quality control measures help reduce early failures.

Random Failure (Useful Life) Phase:

This is the longest phase, where the failure rate remains relatively constant. Failures occur randomly due to unforeseen external factors such as environmental conditions, accidental damage, or unpredictable component failures. Regular maintenance, redundancy in design, and robust quality assurance help minimize failures.

Wear-Out Failure Phase: As the product ages, its components degrade due to wear and tear, leading to an increasing failure rate. Common causes include material fatigue, corrosion, and aging effects in electronic components. Preventive maintenance, component replacement, and design improvements help extend the lifespan.

Applications of the Bathtub Model

  • Reliability prediction - helps manufacturers estimate product lifespan and failure rates.
  • Maintenance planning - guides preventive maintenance schedules to reduce unexpected breakdowns.
  • Quality control - identifies weak points in production and improves manufacturing processes.
  • Warranty and cost analysis - assists in setting warranty periods and optimizing repair costs.

The bathtub reliability model has a direct impact on product design decisions because it helps engineers anticipate failure rates and optimize durability at every stage of a product’s life cycle.
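As a minimal sketch of the three regions described above, the bathtub-shaped hazard rate is commonly approximated as the sum of a decreasing, a constant, and an increasing Weibull hazard term; the shape and scale parameters below are illustrative only and are not fitted to any component in the paper.

```python
# Minimal sketch of a bathtub-shaped hazard rate built from three Weibull
# hazard terms: beta < 1 (infant mortality, decreasing), beta = 1 (useful
# life, constant), beta > 1 (wear-out, increasing).
# All parameter values are illustrative only.

def weibull_hazard(t: float, beta: float, eta: float) -> float:
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def bathtub_hazard(t_hours: float) -> float:
    infant   = weibull_hazard(t_hours, beta=0.5, eta=50_000.0)     # decreasing term
    useful   = weibull_hazard(t_hours, beta=1.0, eta=2_000_000.0)  # constant term
    wear_out = weibull_hazard(t_hours, beta=5.0, eta=150_000.0)    # increasing term
    return infant + useful + wear_out

for t in (100.0, 10_000.0, 50_000.0, 120_000.0):
    print(f"t = {t:>9.0f} h   h(t) = {bathtub_hazard(t):.3e} failures/h")
```

Handbook predictions such as MIL-HDBK-217F and SN 29500 essentially address the flat, useful-life region of this curve, which is why wear-out mechanisms have to be considered separately.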

Text was updated according to above considerations.

  2. Section 3 A, a detailed explanation of the circuit under test (the chosen DC-DC converter) is required, which includes:
    - How are the capacitance and the MOSFET parameters determined? How would different values for the parameters affect the result and conclusion of this work?
    - Is there any alternative circuit structure to perform the same DC-DC conversion? If so, please list them and explain why the chosen structure was selected. How would a different structure affect the result and the conclusion?

Response:

Reliability prediction results for a MOSFET can have consequences for its parasitic capacitances and overall parameters. Degradation over time: reliability models often consider factors such as temperature, voltage stress, and aging effects, which can cause shifts in parasitic capacitances due to changes in oxide thickness or charge trapping. Electrical performance variations: as reliability decreases, key MOSFET parameters such as threshold voltage, transconductance, and leakage currents may drift, affecting overall circuit functionality. Failure mechanisms affecting capacitances: hot-carrier injection, bias temperature instability, and time-dependent dielectric breakdown can alter capacitances, potentially affecting switching speed and efficiency. Impact on circuit design: if reliability prediction results indicate significant degradation risks, designers may need to compensate with different biasing techniques or alternative transistor configurations.

Text in subchapter 3A was modified accordingly.

  3. The presentation needs to be largely improved. Specifically, for the many equations included in the paper, the lack of proper descriptions of the notations makes them hard to understand. In addition, the mix of mathematical equations and text also decreases the readability. Finally, how the value of each parameter is determined is not clear. As all the results are based on computations using the equations, the lack of proper explanations for the equations makes the overall results and comparison unconvincing.

 

Response: Regarding the parameters, please note that in the text we have highlighted, for each type of passive device/component, the information about how it was used in the equations, and we have also specified the source (e.g., see Table 2 within the SN 29500 standard). Of course, the accumulation of equations with explanations can generate some confusion, but since this work is specific to the reliability of electronic components and circuits, we assume that it will be read mainly by experts in the field. We could have moved the development of the equations out of the text into a separate appendix, but that would have offended the readers who are dedicated to the phenomenon of reliability and to whom, we repeat, this work is especially addressed. Please be lenient with this aspect, especially considering that those who write on reliability are at a disadvantage from the start because of the small number of citations obtained due to the niche character of these types of works. Thank you for your understanding.

 

  4. The contribution is one of my biggest concerns: the paper only reveals that different reliability standards may lead to very different MTBF values, but this is not surprising, as they are based on different models. A more important question is how the different results can be used in the design process. This is not explained. The authors need to explain in more detail how the experiment results can contribute to the electronic design community.

Response:

Comparing different reliability prediction standards can significantly impact the design process of electronic equipment. Each standard has unique methodologies, assumptions, and data sources, so evaluating them side by side helps engineers make informed decisions about durability and failure rates.

The results can be applied as follows:
  • Optimizing design choices: if one standard predicts lower reliability for certain components, designers can reconsider materials, packaging, or redundancy measures to improve performance.
  • Risk assessment and mitigation: comparing standards helps identify potential weak spots early, allowing for proactive measures such as enhanced cooling systems or circuit protection.
  • Cost vs. reliability trade-offs: some standards may predict longer lifespans at a higher cost; understanding these differences aids in selecting cost-effective yet dependable solutions.
  • Regulatory and industry compliance: various sectors adhere to specific reliability benchmarks, and comparing standards ensures compliance with the most relevant requirements.
  • Improving predictive accuracy: if one standard aligns better with real-world failure data, companies can refine their models for future projects.

 

  5. I wonder whether the conclusion would stay the same if the two standards were used to evaluate the reliability of other circuits. A discussion related to this is suggested to improve the generalization of this work.

Response:

The insights gained from comparing reliability prediction standards can often be applied to different product categories, especially if they share similar operational conditions, materials, or failure mechanisms. The comparison can be useful beyond the original product category through failure pattern recognition: comparing standards helps identify recurring failure modes, which is useful for industries dealing with durability concerns, such as automotive or aerospace. It also helps optimize maintenance strategies: by studying reliability data across multiple standards and industries, companies can refine maintenance schedules for various types of equipment. Finally, it improves risk assessments: if one standard is found to provide more realistic predictions, companies can use that insight when evaluating the reliability of unrelated products.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Point 1:

Replied to the author’s response to previous point 6:

The response is not convincing. The reviewer looks for straightforward data comparison, calculation results, test results, instead of stating that since SN29500 contains more “recent data” and SN 29500 provides more “realistic estimates”, so it is superior.  Please provide numbers to justify the statement, otherwise this manuscript is not meaningful enough, because people can easily see SN29500 is more recent so there should have some improvement. By stating SN29500 “aligns better with real-world reliability estimates”, where is the % difference comparison with the field data, how to tell it “aligns better”? With no field data collected, how to tell which one provides better estimate to MTBF? Need to justify with data.

Point 2:

Please improve the resolution in Figure 6 and Figure 7 in page 16.

Point 3:

Thanks for adding a paragraph of the environment factors in discussion. Please use at least a table listing the numbers to explicitly show the comparison of “typical industrial and commercial settings”, “SN 29500 environment factor range”, and “MIL 217 more extreme conditions”. Please do not add new statement with no proofs in the discussion section. The concept might be basic for the reviewer and the authors, but might not be the case to the readers. When stating “typical”, please list the values for “typical”, when stating “extreme”, please list the values for “extreme”. Any statement should be supported by proofs, like references, numbers, calculation, field data, etc. Please make edits accordingly.

Point 4:

Format of equation (8)-(10) needs edits.

Author Response

Open Review – answers to the reviewer nr. 2 (round 2) are in green underlined in manuscript


Comments and Suggestions for Authors

Point 1:

 

Replied to the author’s response to previous point 6:

 

The response is not convincing. The reviewer looks for straightforward data comparison, calculation results, test results, instead of stating that since SN29500 contains more “recent data” and SN 29500 provides more “realistic estimates”, so it is superior.  Please provide numbers to justify the statement, otherwise this manuscript is not meaningful enough, because people can easily see SN29500 is more recent so there should have some improvement. By stating SN29500 “aligns better with real-world reliability estimates”, where is the % difference comparison with the field data, how to tell it “aligns better”? With no field data collected, how to tell which one provides better estimate to MTBF? Need to justify with data.

 

 

Response :

 

We would like to thank the Reviewer for these valuable observations and for highlighting important aspects that indeed deserve careful consideration in the development of the proposed approach. We are pleased to clarify these considerations below.

A comparative analysis [22] between Siemens SN 29500, Telcordia SR-332 and MIL-HDBK-217F demonstrated that SN 29500 provides more realistic estimates of the reliability of modern electronic components. In particular, it was observed that MIL-HDBK-217F can overestimate the failure rate, while SN 29500 better reflects the actual reliability of current commercial components.

We can justify with field data that SN 29500 is more appropriate than MIL-HDBK-217F using Figure 8, which is an excerpt from [22] showing the failure rates for different prediction standards determined for a batch of 1400 components on five printed circuit boards, and the data in Table 6, which compares the ratio of failure rates for the two standards in question with the ratio found in the investigation carried out in this paper.

          

Figure 8. Variation of the failure rate with temperature for different international prediction standards, obtained on a batch of 1400 parts – dotted blue line: MIL 217, solid green line: SN 29500 [22]. (The failure rate λ is denoted in the figure as FR and is measured in [F/10^6 h], i.e., failures per million hours.)

The yellow points correspond to the failure rate values for the MIL standard, and the orange ones to the SN 29500 standard. In Table 6 below, these values and the failure rate ratios for the two standards have been grouped together and compared with the ratio obtained in this work:

Table 6. Failure rates for the two standards investigated (extracted from Figure 8), the ratio between them, and the resulting ratio within our work.

Failure Rate λ [F/10^6 h], according to Prediction Standard:

Temperature                      0 °C       10 °C      20 °C      30 °C      40 °C
MIL-HDBK-217F (1995)             8          9.6        12         15.6       20.3
SN 29500 (2008)                  1.8        2          2.6        3.5        4.9
FR_Mil217 / FR_SN29500 ratio     4.444444   4.8        4.615385   4.457143   4.142857

FR_Mil217 / FR_SN29500 ratio in our work: 7.2889E-07 [FIT] / 1.5298E-07 [FIT] = 4.76461


Therefore, it is observed that the ratio λ(MIL-HDBK-217F) / λ(SN 29500) obtained in this investigation (4.76461) is consistent with the field data (ratios between 4.14285 and 4.8) obtained in [22].

Figure 9. Variation of the ratio λ(MIL-HDBK-217F) / λ(SN 29500) for different temperatures.
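The consistency check above can be reproduced with a few lines of Python; the only inputs are the failure rates read from Figure 8 and the two λ values reported in this work (a minimal sketch only).

```python
# Reproducing the ratio check of Table 6: failure rates read from Figure 8
# (from [22]) for MIL-HDBK-217F and SN 29500, their ratio per temperature,
# and the ratio obtained in this work.

temps_c   = [0, 10, 20, 30, 40]
fr_mil217 = [8.0, 9.6, 12.0, 15.6, 20.3]  # failures per 10^6 h
fr_sn295  = [1.8, 2.0, 2.6, 3.5, 4.9]     # failures per 10^6 h

ratios = [m / s for m, s in zip(fr_mil217, fr_sn295)]
for t, r in zip(temps_c, ratios):
    print(f"{t:>2} degC   FR_Mil217 / FR_SN29500 = {r:.6f}")

ratio_this_work = 7.2889e-07 / 1.5298e-07  # lambda values reported in this paper
print(f"ratio in our work      = {ratio_this_work:.5f}")
print(f"field-data ratio range = {min(ratios):.5f} ... {max(ratios):.5f}")
```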

 

Point 2:

 

Please improve the resolution in Figure 6 and Figure 7 in page 16.

Response:

Thank you for this valuable observation. We have recreated the two figures.

 

Point 3:

 

Thanks for adding a paragraph of the environment factors in discussion. Please use at least a table listing the numbers to explicitly show the comparison of “typical industrial and commercial settings”, “SN 29500 environment factor range”, and “MIL 217 more extreme conditions”. Please do not add new statement with no proofs in the discussion section. The concept might be basic for the reviewer and the authors, but might not be the case to the readers. When stating “typical”, please list the values for “typical”, when stating “extreme”, please list the values for “extreme”. Any statement should be supported by proofs, like references, numbers, calculation, field data, etc. Please make edits accordingly.

 

Response:

Thank you for this valuable observation. Table 5 below lists three sets of values: the typical industrial and commercial settings (the average, moderate conditions found in most industrial and commercial applications), the overall range of environmental factors that can be encountered in various applications, and the extreme conditions addressed by MIL-HDBK-217F, which can be encountered in certain applications such as military systems or those that must perform in extreme environmental conditions.

Table 5. Typical SN 29500 industrial and commercial settings, SN 29500 environment factor range, and MIL-HDBK-217F more extreme conditions.

Parameter                                               Temperature [°C]   Humidity [%]   Vibration [g]   Pressure [bar]   Radiation [W/m²]
Typical industrial and commercial settings - SN 29500   20 … 40            40 … 60        0.1 … 1.0       1.0 … 5.0        1.0 … 10.0
Environmental factors range - SN 29500                  0 … 70             20 … 80        0.01 … 10       0.5 … 10         0.1 … 100
Extreme conditions - MIL-HDBK-217F                      -40 … 80           10 … 90        1.0 … 100       1.0 … 5.0        10 … 1000



Conclusions

SN 29500 provides typical industrial and commercial settings that are more moderate than the extreme conditions in MIL-HDBK-217F. The range of environmental factors in SN 29500 is wider than that in MIL-HDBK-217F, which makes SN 29500 more flexible and adaptable to various environmental conditions. MIL-HDBK-217F provides extreme conditions that are more severe than those in SN 29500, which makes MIL-HDBK-217F more suitable for applications that require performance in extreme environmental conditions.

 

Point 4:

 

Format of equation (8)-(10) needs edits.

Response:

Thank you for your valuable feedback. Done.

 

Author Response File: Author Response.pdf

Round 3

Reviewer 2 Report

Comments and Suggestions for Authors

1) please improve resolution of Figure 8.

2) please do correct IEEE format for referencing an online article or online source. REFERENCE 8, 15, 16,17, 21, 22 are all in wrong format

Author Response

Response for the Reviewer nr. 2 – round 3

Q1: 1) please improve resolution of Figure 8.

Response: Thank you for this valuable observation. We have recreated Figure 8.

Q2: please do correct IEEE format for referencing an online article or online source. REFERENCE 8, 15, 16,17, 21, 22 are all in wrong format

Response: We would like to thank the Reviewer for these valuable observations. We have rewritten all online references in IEEE style.

References

[1] D. Butnicu, "POL DC-DC Converter Output Capacitor Bank’s Reliability Comparison using Prediction Standard MIL-HDBK-217F and SN 29500," 2021 IEEE 27th International Symposium for Design and Technology in Electronic Packaging (SIITME), Timisoara, Romania, 2021, pp. 169-172, doi: 10.1109/SIITME53254.2021.9663431.

[2] Anon., Military Handbook - Reliability Prediction of Electronic Equipment, MIL-HDBK-217F, Notice 2, Feb. 28, 1995.

[3] D. Zhou, Y. Song, Y. Liu and F. Blaabjerg, "Mission profile-based reliability evaluation of capacitor banks in wind power converters," IEEE Trans. Power Electron., vol. 34, no. 5, pp. 4665-4677, May 2019.

[4] D. Zhou, Y. Song, Y. Liu and F. Blaabjerg, "Mission profile-based reliability evaluation of capacitor banks in wind power converters," IEEE Trans. Power Electron., vol. 34, no. 5, pp. 4665-4677, May 2019.

[5] IEC, TR 62380: Reliability Data Handbook, IEC: Geneva, Switzerland, 2006.

[6] Anon. [Online]. Available: http://www.applied-statistics.org/Siemens_SN_29500.html

[7] IEC 61709:2017, Electric Components - Reliability - Reference Conditions for Failure Rates and Stress Models for Conversion, 2017.

[8] Ericsson, "SR-332 Reliability Prediction for Electronic Equipment," Ericsson Telecom Info Depot. [Online]. Available: https://telecom-info.njdepot.ericsson.net/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=SR-332 [Accessed: 06 12 2025].

[9] X. Zhang and I. Bose, "Reliability estimation for individual predictions in machine learning systems: A model reliability-based approach," Decision Support Systems, vol. 186, 2024, art. 114305, doi: 10.1016/j.dss.2024.114305.

[10] L. J. Gullo, "History of the First IEEE Reliability Standard," IEEE Reliability Magazine, vol. 1, no. 2, pp. 17-19, June 2024, doi: 10.1109/MRL.2024.3385733.

[11] G. Jean, "Link Reliability Prediction: Developing models to predict the reliability of external links based on various features," September 2024. [Online]. Available: https://www.researchgate.net/publication/384291221_Link_Reliability_Prediction_Developing_models_to_predict_the_reliability_of_external_links_based_on_various_features#fullTextFileContent

[12] M. G. Pecht and F. R. Nash, "Predicting the reliability of electronic equipment," Proceedings of the IEEE, vol. 82, no. 7, pp. 992-1004, July 1994, doi: 10.1109/5.293157.

[13] D. O. Neacsu, Telecom Power Systems, CRC Press, Dec. 2017, ISBN 9781138099302.

[14] J. Falck, C. Felgemacher, A. Rojko, M. Liserre and P. Zacharias, "Reliability of power electronic systems: an industry perspective," IEEE Ind. Electron. Mag., vol. 12, pp. 24-35, June 2018.

[15] Infineon. [Online]. Available: https://www.infineon.com/dgdl/irf6617pbf.pdf?fileId=5546d462533600a4015355e853f21a17 [Accessed: 06 12 2025].

[16] Infineon. [Online]. Available: https://eu.mouser.com/ProductDetail/Infineon-Technologies/IRF6691?qs=oV44URRYgr3WftVaKeO8aQ%3D%3D&srsltid=AfmBOoq7Qvb-_bLJIr016kXpIoo0g-vn3SRGv3e8nCPUlyrzF1-0vN7I [Accessed: 12 06 2025].

[17] Murata. [Online]. Available: https://www.murata.com/en/global//search/productsearch?cate=cgsubCeramicCapacitors&partno=GRM32ER60J107ME20%23&realtime=1 [Accessed: 12 06 2025].

[18] Anon. [Online]. Available: http://www.nichicon.co.jp/english/products/spice/index.html

[19] D. Butnicu and D. O. Neacsu, "Using SPICE for reliability based design of capacitor bank for telecom power supplies," 2017 IEEE 23rd International Symposium for Design and Technology in Electronic Packaging (SIITME), Constanta, Romania, 2017, pp. 423-426.

[20] D. Butnicu and D. O. Neacsu, "Using SPICE for multiple-constraint choice of capacitor bank for telecom power supplies," 2017 IEEE 23rd International Symposium for Design and Technology in Electronic Packaging (SIITME), 2017.

[21] Revistas. [Online]. Available: https://revistas.unal.edu.co/index.php/dyna/article/view/114851/93169 [Accessed: 12 06 2025].

[22] Applied-statistics. [Online]. Available: http://www.applied-statistics.org/mil-hdbk-217.html [Accessed: 12 06 2025].

 

Author Response File: Author Response.pdf
