Special Issue "Statistics and Modeling in Reliability Engineering"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Engineering Mathematics".

Deadline for manuscript submissions: closed (31 December 2019).

Special Issue Editor

Prof. Dr. Hoang Pham
Guest Editor
Department of Industrial and Systems Engineering, Rutgers University, Piscataway, New Jersey, USA
Interests: reliability engineering; software reliability; statistical inferences; fault-tolerant computing

Special Issue Information

Dear Colleagues,

Growing international competition has increased the need for all industries to produce reliable products at a minimum cost by integrating statistical methods and quantifying the reliability earlier in the design stages. Articles concerning new theoretical research and methods on statistical reliability, applied mathematics in reliability, and optimization are solicited. Preference will be given to papers with real-world applications over purely theoretical papers. Topics of interest include but are not limited to the following:

  • Statistical methods in reliability;
  • Reliability modeling and optimization;
  • Mechanical reliability and life testing;
  • Failure analysis in design;
  • Field data analysis and case studies;
  • Applied mathematics in reliability.

Prof. Hoang Pham
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Mathematics is an international, peer-reviewed, open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

Open Access Article
Cumulative Sum Chart Modeled under the Presence of Outliers
Mathematics 2020, 8(2), 269; https://doi.org/10.3390/math8020269 - 18 Feb 2020
Cited by 2
Abstract
Cumulative sum control charts that are based on estimated control limits are extensively used in practice. Such control limits are often characterized by a Phase I estimation error, and the presence of these errors can shift the location and/or width of the control limits, degrading the performance of the control chart. In this study, we introduce a non-parametric Tukey's outlier detection model in the design structure of a two-sided cumulative sum (CUSUM) chart with estimated parameters for process monitoring. Using Monte Carlo simulations, we studied the estimation effect on the performance of the CUSUM chart in terms of the average run length and the standard deviation of the run length. We found that the new design structure is more stable in the presence of outliers and requires fewer Phase I observations to stabilize the run-length performance. Finally, a numerical example and a practical application of the proposed scheme are demonstrated using a dataset from healthcare surveillance in which the received signal strength of individuals' movement is the variable of interest. The implementation of the classical CUSUM shows that detection of a shift in the Phase II received-signal-strength data is indeed masked/delayed when there are outliers in the Phase I data. By contrast, the proposed chart omits the Phase I outliers and gives a timely signal in Phase II.
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)
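As a rough illustration of the idea the abstract describes, the following Python sketch screens Phase I data with Tukey's fences before estimating the CUSUM control parameters. The fence multiplier, reference value, and all data are illustrative assumptions, not values from the paper:

```python
import numpy as np

def tukey_filter(phase1, k=1.5):
    """Keep Phase I observations inside Tukey's fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(phase1, [25, 75])
    iqr = q3 - q1
    return phase1[(phase1 >= q1 - k * iqr) & (phase1 <= q3 + k * iqr)]

def two_sided_cusum(data, mu0, sigma0, k=0.5):
    """Classic two-sided CUSUM statistics C+ and C- on standardized data."""
    z = (data - mu0) / sigma0
    c_plus, c_minus = [0.0], [0.0]
    for zi in z:
        c_plus.append(max(0.0, c_plus[-1] + zi - k))
        c_minus.append(max(0.0, c_minus[-1] - zi - k))
    return np.array(c_plus[1:]), np.array(c_minus[1:])

rng = np.random.default_rng(42)
phase1 = np.concatenate([rng.normal(0.0, 1.0, 200), [9.0, -9.0, 11.0]])  # 3 gross outliers
clean = tukey_filter(phase1)                  # outliers are screened out
mu0, sigma0 = clean.mean(), clean.std(ddof=1)  # estimated control parameters
phase2 = rng.normal(1.0, 1.0, 50)             # Phase II data with a mean shift
c_plus, c_minus = two_sided_cusum(phase2, mu0, sigma0)
```

Without the filter, the injected outliers would inflate the estimated sigma0 and so widen the chart's effective limits, which is the masking effect the abstract refers to.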

Open Access Article
A New Criterion for Model Selection
Mathematics 2019, 7(12), 1215; https://doi.org/10.3390/math7121215 - 10 Dec 2019
Cited by 2
Abstract
Selecting the best model from a set of candidates for a given set of data is obviously not an easy task. In this paper, we propose a new criterion that, in addition to minimizing the sum of squared errors, imposes a larger penalty when too many coefficients (or estimated parameters) are added to a model fitted from too small a sample in the presence of too much noise. We discuss several real applications that illustrate the proposed criterion and compare its results to some existing criteria based on a simulated dataset and several real datasets, including advertising budget data, newly collected heart blood pressure health data, and software failure data.
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)
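The abstract does not give the paper's criterion in closed form, so the sketch below uses a generic AIC-style penalized sum-of-squares score purely to illustrate the fit-versus-parameters trade-off such criteria encode. The score function, data, and candidate models are all assumptions for illustration:

```python
import numpy as np

def penalized_score(sse, n, k):
    """Generic fit-plus-penalty score, n*log(SSE/n) + 2k (AIC-style stand-in;
    the paper's criterion penalizes extra coefficients more heavily for small
    samples with high noise)."""
    return n * np.log(sse / n) + 2 * k

rng = np.random.default_rng(0)
n = 30
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, n)  # true model: degree-1 polynomial

scores = {}
for degree in range(6):                       # candidate polynomial orders 0..5
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    scores[degree] = penalized_score(float(resid @ resid), n, degree + 1)

best = min(scores, key=scores.get)            # degree with the lowest score
```

Higher-degree fits always shrink the SSE, so without the penalty term the selection would always choose the most complex candidate.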

Open Access Article
Reliability Evaluation for a Stochastic Flow Network Based on Upper and Lower Boundary Vectors
Mathematics 2019, 7(11), 1115; https://doi.org/10.3390/math7111115 - 15 Nov 2019
Abstract
For a stochastic flow network (SFN), given all the lower (or upper) boundary points, the classic problem is to calculate the probability that the capacity vectors are greater than or equal to the lower boundary points (or less than or equal to the upper boundary points). In some practical cases, however, SFN reliability must be evaluated between the lower and upper boundary points at the same time; this evaluation is the focus of this paper. Because of the intricate relationships among the upper and lower boundary points, a decomposition approach is developed to obtain several simplified subsets, and SFN reliability is calculated from these subsets by means of the inclusion–exclusion principle. Two heuristic options are then established to calculate SFN reliability efficiently based on the lower and upper boundary points.
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)
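To make the inclusion–exclusion step concrete, here is a minimal sketch for the classic lower-boundary-point case (the paper's two-sided evaluation builds on this). The two-component network, capacity distributions, and boundary points are invented for illustration, and the result is cross-checked by brute-force enumeration:

```python
import itertools
import numpy as np

# Two independent components, each with capacity 0..3 and a discrete pmf.
pmf = np.array([[0.1, 0.2, 0.3, 0.4],   # component 1
                [0.2, 0.2, 0.3, 0.3]])  # component 2

def p_geq(vec):
    """P(X >= vec) componentwise, for independent components."""
    return float(np.prod([pmf[i, v:].sum() for i, v in enumerate(vec)]))

def reliability_lower(lower_points):
    """P(X >= some lower boundary point) via inclusion-exclusion.
    The componentwise maximum of a subset gives its intersection event."""
    total = 0.0
    for r in range(1, len(lower_points) + 1):
        for subset in itertools.combinations(lower_points, r):
            merged = np.max(subset, axis=0)
            total += (-1) ** (r + 1) * p_geq(merged)
    return total

lowers = [np.array([2, 1]), np.array([1, 3])]
r_ie = reliability_lower(lowers)

# Brute-force check over all 16 capacity vectors.
r_bf = sum(pmf[0, a] * pmf[1, b]
           for a in range(4) for b in range(4)
           if (a >= 2 and b >= 1) or (a >= 1 and b >= 3))
```

The number of inclusion–exclusion terms grows exponentially in the number of boundary points, which is why the paper's decomposition into simplified subsets matters in practice.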

Open Access Article
A Novel System Reliability Modeling of Hardware, Software, and Interactions of Hardware and Software
Mathematics 2019, 7(11), 1049; https://doi.org/10.3390/math7111049 - 04 Nov 2019
Abstract
In the past few decades, a great number of hardware and software reliability models have been proposed to address hardware failures in hardware subsystems and software failures in software subsystems, respectively. The interactions between hardware and software subsystems are often neglected in order to simplify reliability modeling, and hence most existing reliability models assume that the hardware and software subsystems are independent of each other. However, this may not be true in reality. In this study, system failures are classified into three categories: hardware failures, software failures, and hardware–software interaction failures. The main contribution of our research is that we further classify hardware–software interaction failures into two groups: software-induced hardware failures and hardware-induced software failures. A Markov-based unified system reliability model incorporating all three categories of system failures is developed, which provides a novel and practical perspective for defining system failures and further improving reliability prediction accuracy. A numerical example compares the system reliability estimates from the models with and without hardware–software interactions, and also illustrates the impact of changes in the transition parameters on the system reliability prediction.
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)
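A minimal Markov sketch of the failure taxonomy described above: one operational state with separate absorbing states for hardware, software, and hardware–software interaction failures. The failure rates and time horizon are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# States: 0 = operational, 1 = hardware failure, 2 = software failure,
# 3 = hardware-software interaction failure (all failure states absorbing).
lam_h, lam_s, lam_i = 0.002, 0.005, 0.001  # assumed failure rates (per hour)

Q = np.zeros((4, 4))                       # CTMC generator matrix
Q[0, 1], Q[0, 2], Q[0, 3] = lam_h, lam_s, lam_i
Q[0, 0] = -(lam_h + lam_s + lam_i)

def expm(A, terms=60):
    """Matrix exponential via truncated Taylor series (fine for small, tame Q)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t = 100.0
P = expm(Q * t)        # state-transition probabilities over [0, t]
reliability = P[0, 0]  # probability the system is still operational at t
# With a single operational state this reduces to exp(-(lam_h+lam_s+lam_i)*t);
# richer chains (e.g. degraded or recoverable states) need the matrix form.
```

The matrix-exponential form is what makes it easy to add the paper's finer-grained states (software-induced hardware failures, hardware-induced software failures) without changing the computation.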

Open Access Article
Three-Stage Estimation of the Mean and Variance of the Normal Distribution with Application to an Inverse Coefficient of Variation with Computer Simulation
Mathematics 2019, 7(9), 831; https://doi.org/10.3390/math7090831 - 08 Sep 2019
Cited by 2
Abstract
This paper considers two main problems sequentially. First, we estimate both the mean and the variance of the normal distribution under one unified decision framework using Hall's three-stage procedure. We consider a minimum-risk point estimation problem for the variance under a squared-error loss function with linear sampling cost, and then construct a confidence interval for the mean with a preassigned width and coverage probability. Second, as an application, we develop Fortran codes that tackle both the point estimation and confidence interval problems for the inverse coefficient of variation using Monte Carlo simulation. The simulation results show negative regret in the estimation of the inverse coefficient of variation, which indicates that the three-stage procedure provides better estimation than the optimal.
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)
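The paper's simulations are in Fortran and follow Hall's three-stage sampling rules; the sketch below only illustrates the target quantity, the inverse coefficient of variation ρ = μ/σ, with a plain fixed-sample Monte Carlo study. Sample sizes, parameters, and replication counts are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 10.0, 2.0
rho_true = mu / sigma          # inverse coefficient of variation, here 5.0

def inv_cv_estimate(sample):
    """Plug-in estimator of mu/sigma from a normal sample."""
    return sample.mean() / sample.std(ddof=1)

reps, n = 2000, 100
estimates = np.array([inv_cv_estimate(rng.normal(mu, sigma, n))
                      for _ in range(reps)])
# Averaging over replications shows the estimator centers near rho_true;
# the three-stage procedure additionally adapts n to hit a target risk.
```

A fixed n is the baseline the three-stage procedure improves on: it determines the sample size adaptively so that a preassigned interval width and coverage are met without knowing σ in advance.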

Open Access Article
Bayesian Inference of δ = P(X < Y) for Burr Type XII Distribution Based on Progressively First Failure-Censored Samples
Mathematics 2019, 7(9), 794; https://doi.org/10.3390/math7090794 - 01 Sep 2019
Cited by 1
Abstract
Let X and Y follow two independent Burr type XII distributions and let δ = P(X < Y). If X is the stress applied to a certain component and Y is the strength to sustain the stress, then δ is called the stress–strength parameter. In this study, the Bayes estimator of δ is investigated based on a progressively first failure-censored sample. Because neither the estimator nor the posterior distributions have a closed form and the computation is complex, a Markov chain Monte Carlo procedure using the Metropolis–Hastings algorithm via Gibbs sampling is built to collect a random sample of δ from the joint distribution of the progressively first failure-censored sample and the random parameters, and the empirical distribution of this collected sample is used to estimate the posterior distribution of δ. The Bayes estimates of δ under the squared-error, absolute-error, and linear exponential error loss functions are then obtained, and the credible interval of δ is constructed using the empirical distribution. An intensive simulation study is conducted to investigate the performance of these three types of Bayes estimates as well as the coverage probabilities and average lengths of the credible interval of δ. Moreover, the performance of the Bayes estimates is compared with that of the maximum likelihood estimates. An Internet of Things application and a numerical example on the miles to failure of vehicle components are provided for reliability-evaluation purposes.
(This article belongs to the Special Issue Statistics and Modeling in Reliability Engineering)
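For intuition about the stress–strength parameter itself (not the paper's censored-data Bayes machinery), here is a Monte Carlo sketch of δ = P(X < Y) for Burr XII variates, checked against the closed form that holds when both distributions share the first shape parameter. The shape parameters and sample size are assumptions:

```python
import numpy as np

def burr12_sample(c, k, size, rng):
    """Inverse-CDF sampling from Burr XII with F(x) = 1 - (1 + x^c)^(-k)."""
    u = rng.uniform(size=size)
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

rng = np.random.default_rng(1)
c, k1, k2 = 2.0, 3.0, 1.5          # assumed shape parameters
n = 200_000
x = burr12_sample(c, k1, n, rng)   # "stress"
y = burr12_sample(c, k2, n, rng)   # "strength"
delta_mc = np.mean(x < y)

# When X and Y share the first shape c, delta = k1 / (k1 + k2) in closed form.
delta_exact = k1 / (k1 + k2)       # = 2/3 here
```

The closed form disappears once the first shape parameters differ or the sample is progressively first failure-censored, which is why the paper resorts to Metropolis–Hastings within Gibbs sampling.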
