Search Results (54)

Search Parameters:
Keywords = Anderson-Darling estimation

27 pages, 4595 KB  
Article
The Unit Inverse Maxwell–Boltzmann Distribution: A Novel Single-Parameter Model for Unit-Interval Data
by Murat Genç and Ömer Özbilen
Axioms 2025, 14(8), 647; https://doi.org/10.3390/axioms14080647 - 21 Aug 2025
Viewed by 147
Abstract
The Unit Inverse Maxwell–Boltzmann (UIMB) distribution is introduced as a novel single-parameter model for data constrained within the unit interval (0,1), derived through an exponential transformation of the Inverse Maxwell–Boltzmann distribution. Designed to address the limitations of traditional unit-interval distributions, the UIMB model exhibits flexible density shapes and hazard rate behaviors, including right-skewed, left-skewed, unimodal, and bathtub-shaped patterns, making it suitable for applications in reliability engineering, environmental science, and health studies. This study derives the statistical properties of the UIMB distribution, including moments, quantiles, survival, and hazard functions, as well as stochastic ordering, entropy measures, and the moment-generating function, and evaluates its performance through simulation studies and real-data applications. Various estimation methods, including maximum likelihood, Anderson–Darling, maximum product spacing, least-squares, and Cramér–von Mises, are assessed, with maximum likelihood demonstrating superior accuracy. Simulation studies confirm the model’s robustness under normal and outlier-contaminated scenarios, with MLE showing resilience across varying skewness levels. Applications to manufacturing and environmental datasets reveal the UIMB distribution’s exceptional fit compared to competing models, as evidenced by lower information criteria and goodness-of-fit statistics. The UIMB distribution’s computational efficiency and adaptability position it as a robust tool for modeling complex unit-interval data, with potential for further extensions in diverse domains. Full article
(This article belongs to the Section Mathematical Analysis)
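The abstract above lists Anderson–Darling estimation among the methods compared for the UIMB model. As a minimal sketch of that recipe (not the authors' code — the UIMB CDF is not reproduced in this listing, so a one-parameter power distribution F(x) = x^θ on (0, 1) stands in), one can minimize the Anderson–Darling statistic over a grid of candidate parameters:

```python
import math
import random

def ad_statistic(data, cdf):
    """Anderson-Darling statistic A^2 of the sample under a candidate CDF."""
    x = sorted(data)
    n = len(x)
    s = sum((2 * i + 1) * (math.log(cdf(x[i])) + math.log(1.0 - cdf(x[n - 1 - i])))
            for i in range(n))
    return -n - s / n

def ad_estimate(data, grid):
    """Minimum-AD estimate of theta for the stand-in model F(x) = x**theta."""
    return min(grid, key=lambda t: ad_statistic(data, lambda x, t=t: x ** t))

random.seed(1)
theta_true = 2.5
# inverse-transform sampling: F(X) = U  =>  X = U**(1/theta)
sample = [random.random() ** (1.0 / theta_true) for _ in range(500)]
grid = [0.5 + 0.01 * k for k in range(400)]   # candidate theta values in [0.5, 4.5)
theta_hat = ad_estimate(sample, grid)
```

The same grid search works for any unit-interval CDF; in practice a numerical optimizer would replace the grid.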

16 pages, 666 KB  
Article
Bayesian Analysis of the Maxwell Distribution Under Progressively Type-II Random Censoring
by Rajni Goel, Mahmoud M. Abdelwahab and Mustafa M. Hasaballah
Axioms 2025, 14(8), 573; https://doi.org/10.3390/axioms14080573 - 25 Jul 2025
Viewed by 243
Abstract
Accurate modeling of product lifetimes is vital in reliability analysis and engineering to ensure quality and maintain competitiveness. This paper proposes the progressively randomly censored Maxwell distribution, which incorporates both progressive Type-II and random censoring within the Maxwell distribution framework. The model allows for the planned removal of surviving units at specific stages of an experiment, accounting for both deliberate and random censoring events. It is assumed that survival and censoring times each follow a Maxwell distribution, though with distinct parameters. Both frequentist and Bayesian approaches are employed to estimate the model parameters. In the frequentist approach, maximum likelihood estimators and their corresponding confidence intervals are derived. In the Bayesian approach, Bayes estimators are obtained using an inverse gamma prior and evaluated through a Markov Chain Monte Carlo (MCMC) method under the squared error loss function (SELF). A Monte Carlo simulation study evaluates the performance of the proposed estimators. The practical relevance of the methodology is demonstrated using a real data set. Full article
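The abstract's Bayes estimator under squared error loss is simply the posterior mean. As a hedged sketch (complete, uncensored sample only — the paper's progressive Type-II random censoring is omitted — and a parameterization f(x | λ) ∝ x² λ^(-3/2) exp(-x²/(2λ)) assumed so the inverse-gamma prior IG(a, b) is conjugate, making MCMC unnecessary for this toy case):

```python
import math
import random

def sample_maxwell(rng, lam, n):
    """Draw Maxwell variates: X^2 / lam ~ chi-square(3), i.e. three squared normals."""
    return [math.sqrt(lam * sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(3)))
            for _ in range(n)]

def bayes_lambda(data, a, b):
    """Posterior mean of lam under an IG(a, b) prior (the SELF Bayes estimate).

    Likelihood ~ lam^(-3n/2) exp(-sum(x^2)/(2 lam)), so the posterior is
    IG(a + 3n/2, b + sum(x^2)/2) with mean (b + sum(x^2)/2) / (a + 3n/2 - 1).
    """
    n = len(data)
    s2 = sum(x * x for x in data)
    return (b + 0.5 * s2) / (a + 1.5 * n - 1.0)

rng = random.Random(42)
lam_true = 2.0
data = sample_maxwell(rng, lam_true, 1000)
lam_hat = bayes_lambda(data, a=2.0, b=1.0)   # hypothetical prior hyperparameters
```

With censoring, the posterior loses this closed form, which is why the paper resorts to MCMC.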

40 pages, 600 KB  
Article
Advanced Lifetime Modeling Through APSR-X Family with Symmetry Considerations: Applications to Economic, Engineering and Medical Data
by Badr S. Alnssyan, A. A. Bhat, Abdelaziz Alsubie, S. P. Ahmad, Abdulrahman M. A. Aldawsari and Ahlam H. Tolba
Symmetry 2025, 17(7), 1118; https://doi.org/10.3390/sym17071118 - 11 Jul 2025
Viewed by 291
Abstract
This paper introduces a novel and flexible class of continuous probability distributions, termed the Alpha Power Survival Ratio-X (APSR-X) family. Unlike many existing transformation-based families, the APSR-X class integrates an alpha power transformation with a survival ratio structure, offering a new mechanism for enhancing shape flexibility while maintaining mathematical tractability. This construction enables fine control over both the tail behavior and the symmetry properties, distinguishing it from traditional alpha power or survival-based extensions. We focus on a key member of this family, the two-parameter Alpha Power Survival Ratio Exponential (APSR-Exp) distribution, deriving essential mathematical properties including moments, quantile functions and hazard rate structures. We estimate the model parameters using eight frequentist methods: the maximum likelihood (MLE), maximum product of spacings (MPSE), least squares (LSE), weighted least squares (WLSE), Anderson–Darling (ADE), right-tailed Anderson–Darling (RADE), Cramér–von Mises (CVME) and percentile (PCE) estimation. Through comprehensive Monte Carlo simulations, we evaluate the estimator performance using bias, mean squared error and mean relative error metrics. The proposed APSR-X framework uniquely enables preservation or controlled modification of the symmetry in probability density and hazard rate functions via its shape parameter. This capability is particularly valuable in reliability and survival analyses, where symmetric patterns represent balanced risk profiles while asymmetric shapes capture skewed failure behaviors. We demonstrate the practical utility of the APSR-Exp model through three real-world applications: economic (tax revenue durations), engineering (mechanical repair times) and medical (infection durations) datasets. In all cases, the proposed model achieves a superior fit over that of the conventional alternatives, supported by goodness-of-fit statistics and visual diagnostics. 
These findings establish the APSR-X family as a unique, symmetry-aware modeling framework for complex lifetime data. Full article
(This article belongs to the Section Computer)

23 pages, 422 KB  
Article
A Novel Alpha-Power X Family: A Flexible Framework for Distribution Generation with Focus on the Half-Logistic Model
by A. A. Bhat, Aadil Ahmad Mir, S. P. Ahmad, Badr S. Alnssyan, Abdelaziz Alsubie and Yashpal Singh Raghav
Entropy 2025, 27(6), 632; https://doi.org/10.3390/e27060632 - 13 Jun 2025
Viewed by 473
Abstract
This study introduces a new and flexible class of probability distributions known as the novel alpha-power X (NAP-X) family. A key development within this framework is the novel alpha-power half-logistic (NAP-HL) distribution, which extends the classical half-logistic model through an alpha-power transformation, allowing for greater adaptability to various data shapes. The paper explores several theoretical aspects of the proposed model, including its moments, quantile function and hazard rate. To assess the effectiveness of parameter estimation, a detailed simulation study is conducted using seven estimation techniques: Maximum likelihood estimation (MLE), Cramér–von Mises estimation (CVME), maximum product of spacings estimation (MPSE), least squares estimation (LSE), weighted least squares estimation (WLSE), Anderson–Darling estimation (ADE) and a right-tailed version of Anderson–Darling estimation (RTADE). The results offer comparative insights into the performance of each method across different sample sizes. The practical value of the NAP-HL distribution is demonstrated using two real datasets from the metrology and engineering domains. In both cases, the proposed model provides a better fit than the traditional half-logistic and related distributions, as shown by lower values of standard model selection criteria. Graphical tools such as fitted density curves, Q–Q and P–P plots, survival functions and box plots further support the suitability of the model for real-world data analysis. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
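Among the seven methods this abstract compares, maximum product of spacings (MPSE) is perhaps the least familiar: it maximizes the mean log-gap between consecutive CDF values at the order statistics. A minimal sketch for the classical half-logistic base model (scale parameterization and grid search are illustrative choices, not the paper's setup):

```python
import math
import random

def hl_cdf(x, s):
    """CDF of the half-logistic distribution with scale s."""
    e = math.exp(-x / s)
    return (1.0 - e) / (1.0 + e)

def mps_objective(data, s):
    """Sum of log-spacings of F at the order statistics, with F(x_0)=0, F(x_{n+1})=1."""
    x = sorted(data)
    F = [0.0] + [hl_cdf(v, s) for v in x] + [1.0]
    return sum(math.log(max(F[i + 1] - F[i], 1e-300)) for i in range(len(F) - 1))

rng = random.Random(7)
s_true = 1.5
# inverse CDF of the half-logistic: x = s * ln((1 + u) / (1 - u))
data = [s_true * math.log((1 + u) / (1 - u)) for u in (rng.random() for _ in range(400))]
grid = [0.5 + 0.005 * k for k in range(600)]  # candidate scales in [0.5, 3.5)
s_hat = max(grid, key=lambda s: mps_objective(data, s))
```

MPSE is popular in these comparison studies because it stays well-defined even when the likelihood is unbounded.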

20 pages, 5064 KB  
Article
Sine Unit Exponentiated Half-Logistic Distribution: Theory, Estimation, and Applications in Reliability Modeling
by Murat Genç and Ömer Özbilen
Mathematics 2025, 13(11), 1871; https://doi.org/10.3390/math13111871 - 3 Jun 2025
Viewed by 415
Abstract
This study introduces the sine unit exponentiated half-logistic distribution, a novel extension of the unit exponentiated half-logistic distribution within the sine-G family. Using trigonometric transformations, the proposed distribution offers flexible density shapes for modeling asymmetric unit-interval data. We derive its statistical properties, including quantiles, moments, and stress–strength reliability, and estimate parameters via classical methods like maximum likelihood and Anderson–Darling. Simulations and real-world applications to fiber strength and burr datasets demonstrate the superior fit of the proposed distribution over competing models, highlighting its utility in reliability engineering and manufacturing. Full article
(This article belongs to the Section D1: Probability and Statistics)

29 pages, 510 KB  
Article
Statistical Inference and Goodness-of-Fit Assessment Using the AAP-X Probability Framework with Symmetric and Asymmetric Properties: Applications to Medical and Reliability Data
by Aadil Ahmad Mir, A. A. Bhat, S. P. Ahmad, Badr S. Alnssyan, Abdelaziz Alsubie and Yashpal Singh Raghav
Symmetry 2025, 17(6), 863; https://doi.org/10.3390/sym17060863 - 1 Jun 2025
Viewed by 535
Abstract
Probability models are instrumental in a wide range of applications by being able to accurately model real-world data. Over time, numerous probability models have been developed and applied in practical scenarios. This study introduces the AAP-X family of distributions—a novel, flexible framework for continuous data analysis named after authors Aadil Ajaz and Parvaiz. The proposed family effectively accommodates both symmetric and asymmetric characteristics through its shape-controlling parameter, an essential feature for capturing diverse data patterns. A specific subclass of this family, termed the "AAP Exponential" (AAPEx) model, is designed to address the inflexibility of classical exponential distributions by accommodating versatile hazard rate patterns, including increasing, decreasing and bathtub-shaped patterns. Several fundamental mathematical characteristics of the introduced family are derived. The model parameters are estimated using six frequentist estimation approaches, including maximum likelihood, Cramér–von Mises, maximum product of spacing, ordinary least squares, weighted least squares and Anderson–Darling estimation. Monte Carlo simulations demonstrate the finite-sample performance of these estimators, revealing that maximum likelihood estimation and maximum product of spacing estimation exhibit superior accuracy, with bias and mean squared error decreasing systematically as the sample size increases. The practical utility and symmetric–asymmetric adaptability of the AAPEx model are validated through five real-world applications, with special emphasis on cancer survival times, COVID-19 mortality rates and reliability data.
The findings indicate that the AAPEx model outperforms established competitors based on goodness-of-fit metrics such as the Akaike Information Criterion (AIC), Schwarz Information Criterion (SIC), Akaike Information Criterion Corrected (AICC), Hannan–Quinn Information Criterion (HQIC), Anderson–Darling (A*) test statistic, Cramér–von Mises (W*) test statistic and the Kolmogorov–Smirnov (KS) test statistic and its associated p-value. These results highlight the relevance of symmetry in real-life data modeling and establish the AAPEx family as a powerful tool for analyzing complex data structures in public health, engineering and epidemiology. Full article
(This article belongs to the Section Mathematics)

22 pages, 1823 KB  
Article
Heavy Rainfall Probabilistic Model for Zielona Góra in Poland
by Marcin Wdowikowski, Monika Nowakowska, Maciej Bełcik and Grzegorz Galiniak
Water 2025, 17(11), 1673; https://doi.org/10.3390/w17111673 - 31 May 2025
Viewed by 819
Abstract
The research focuses on probabilistic modeling of maximum rainfall in Zielona Góra, Poland, to improve urban drainage system design. The study utilizes archived pluviographic data from 1951 to 2020, collected at the IMWM-NRI meteorological station. These data include 10 min rainfall records and aggregated hourly and daily totals. The study employs various statistical distributions, including Fréchet, gamma, generalized exponential (GED), Gumbel, log-normal, and Weibull, to model rainfall intensity–duration–frequency (IDF) relationships. After testing the goodness of fit using the Anderson–Darling test, Bayesian Information Criterion (BIC), and relative residual mean square error (rRMSE), the GED distribution was found to best describe rainfall patterns. A key outcome is the development of a new rainfall model based on the GED distribution, allowing for the estimation of precipitation amounts for different durations and exceedance probabilities. However, the study highlights limitations, such as the need for more accurate local models and a standardized rainfall atlas for Poland. Full article
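The model-selection step this abstract describes — fitting several candidate distributions to extreme rainfall and ranking them by an information criterion — can be sketched on synthetic data. The snippet below is an illustrative two-candidate comparison only (Gumbel via a method-of-moments fit versus an exponential MLE), not the paper's six-distribution, AD-plus-rRMSE workflow:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def gumbel_fit(xs):
    """Method-of-moments fit: beta = sd*sqrt(6)/pi, mu = mean - gamma*beta."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    beta = sd * math.sqrt(6.0) / math.pi
    return m - EULER_GAMMA * beta, beta

def gumbel_loglik(xs, mu, beta):
    """Gumbel log-likelihood: log f = -log(beta) - z - exp(-z), z = (x-mu)/beta."""
    z = [(x - mu) / beta for x in xs]
    return sum(-math.log(beta) - zi - math.exp(-zi) for zi in z)

def bic(loglik, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L); smaller is better."""
    return k * math.log(n) - 2.0 * loglik

rng = random.Random(3)
# synthetic "annual maxima" from Gumbel(mu=20, beta=5) via the inverse CDF
data = [20.0 - 5.0 * math.log(-math.log(rng.random())) for _ in range(70)]

mu, beta = gumbel_fit(data)
bic_gumbel = bic(gumbel_loglik(data, mu, beta), k=2, n=len(data))

lam = len(data) / sum(data)                      # exponential MLE rate
bic_exp = bic(sum(math.log(lam) - lam * x for x in data), k=1, n=len(data))
```

On data actually generated from a Gumbel law, the Gumbel candidate should achieve the lower BIC, mirroring how the study singled out the GED distribution on observed rainfall.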

26 pages, 517 KB  
Article
Enhanced Estimation of the Unit Lindley Distribution Parameter via Ranked Set Sampling with Real-Data Application
by Sid Ahmed Benchiha, Amer Ibrahim Al-Omari and Ghadah Alomani
Mathematics 2025, 13(10), 1645; https://doi.org/10.3390/math13101645 - 17 May 2025
Cited by 1 | Viewed by 402
Abstract
This paper investigates various estimation methods for the parameters of the unit Lindley distribution (U-LD) under both ranked set sampling (RSS) and simple random sampling (SRS) designs. The distribution parameters are estimated using the maximum likelihood estimation, ordinary least squares, weighted least squares, maximum product of spacings, minimum spacing absolute distance, minimum spacing absolute log-distance, minimum spacing square distance, minimum spacing square log-distance, linear-exponential, Anderson–Darling (AD), right-tail AD, left-tail AD, left-tail second-order, Cramér–von Mises, and Kolmogorov–Smirnov. A comprehensive simulation study is conducted to assess the performance of these estimators, ensuring an equal number of measuring units across both designs. Additionally, two real datasets of items failure time and COVID-19 are analyzed to illustrate the practical applicability of the proposed estimation methods. The findings reveal that RSS-based estimators consistently outperform their SRS counterparts in terms of mean squared error, bias, and efficiency across all estimation techniques considered. These results highlight the advantages of using RSS in parameter estimation for the U-LD distribution, making it a preferable choice for improved statistical inference. Full article
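The efficiency gain from ranked set sampling that this abstract reports can be seen in a few lines of simulation. This sketch uses perfect ranking and a uniform stand-in population rather than unit Lindley draws, so it illustrates the mechanism, not the paper's results:

```python
import random
import statistics

def rss_sample(rng, draw, m):
    """One ranked-set sample of size m: for each rank i, draw m units and keep the i-th smallest."""
    return [sorted(draw(rng) for _ in range(m))[i] for i in range(m)]

rng = random.Random(11)
draw = lambda r: r.random()   # stand-in population; the paper samples from the unit Lindley
m, reps = 5, 2000

rss_means = [statistics.mean(rss_sample(rng, draw, m)) for _ in range(reps)]
srs_means = [statistics.mean([draw(rng) for _ in range(m)]) for _ in range(reps)]

var_rss = statistics.pvariance(rss_means)
var_srs = statistics.pvariance(srs_means)
```

Under perfect ranking the RSS mean is always at least as efficient as the SRS mean of the same size, which is the pattern the paper's estimator comparisons exploit.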

27 pages, 15276 KB  
Article
The Dynamics of Shannon Entropy in Analyzing Climate Variability for Modeling Temperature and Precipitation Uncertainty in Poland
by Bernard Twaróg
Entropy 2025, 27(4), 398; https://doi.org/10.3390/e27040398 - 8 Apr 2025
Viewed by 1187
Abstract
The aim of this study is to quantitatively analyze the long-term climate variability in Poland during the period 1901–2010, using Shannon entropy as a measure of uncertainty and complexity within the atmospheric system. The analysis is based on the premise that variations in temperature and precipitation reflect the dynamic nature of the climate, understood as a nonlinear system sensitive to fluctuations. This study focuses on monthly distributions of temperature and precipitation, modeled using the bivariate Clayton copula function. A normal marginal distribution was adopted for temperature and a gamma distribution for precipitation, both validated using the Anderson–Darling test. To improve estimation accuracy, a bootstrap resampling technique and numerical integration were applied to calculate Shannon entropy at each of the 396 grid points, with a spatial resolution of 0.25° × 0.25°. The results indicate a significant increase in Shannon entropy during the summer months, particularly in July (+0.203 bits) and January (+0.221 bits), compared to the baseline period (1901–1971), suggesting a growing unpredictability of the climate. The most pronounced trend changes were identified in the years 1985–1996 (as indicated by the Pettitt test), while seasonal trends were confirmed using the Mann–Kendall test. A spatial analysis of entropy at the levels of administrative regions and catchments revealed notable regional disparities—entropy peaked in January in the West Pomeranian Voivodeship (4.919 bits) and reached its minimum in April in Greater Poland (3.753 bits). Additionally, this study examined the relationship between Shannon entropy and global climatic indicators, including the Land–Ocean Temperature Index (NASA GISTEMP) and the ENSO index (NINO3.4). 
Statistically significant positive correlations were observed between entropy and global temperature anomalies during both winter (ρ = 0.826) and summer (ρ = 0.650), indicating potential linkages between local climate variability and global warming trends. To explore the direction of this relationship, a Granger causality test was conducted, which did not reveal statistically significant causality between NINO3.4 and Shannon entropy (p > 0.05 for all lags tested), suggesting that the observed relationships are likely co-varying rather than causal in the Granger sense. Further phase–space analysis (with a delay of τ = 3 months) allowed for the identification of attractors characteristic of chaotic systems. The entropy trajectories revealed transitions from equilibrium states (average entropy: 4.124–4.138 bits) to highly unstable states (up to 4.768 bits), confirming an increase in the complexity of the climate system. Shannon entropy thus proves to be a valuable tool for monitoring local climatic instability and may contribute to improved risk modeling of droughts and floods in the context of climate change in Poland. Full article
(This article belongs to the Special Issue 25 Years of Sample Entropy)
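The entropy values above come from numerical integration of fitted densities. A minimal one-dimensional sketch of that computation — differential entropy in bits for a normal marginal (the paper's temperature model), checked against the closed form 0.5·log2(2πeσ²); the bivariate Clayton-copula case adds a second integration dimension:

```python
import math

def normal_pdf(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2.0 * math.pi))

def entropy_bits(pdf, lo, hi, steps=20000):
    """Differential entropy -integral of f(x) log2 f(x) dx, midpoint rule on [lo, hi]."""
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        f = pdf(lo + (k + 0.5) * h)
        if f > 0.0:
            total -= f * math.log(f) * h
    return total / math.log(2.0)

mu, sig = 0.0, 2.0
est = entropy_bits(lambda x: normal_pdf(x, mu, sig), mu - 10 * sig, mu + 10 * sig)
exact = 0.5 * math.log(2.0 * math.pi * math.e * sig ** 2) / math.log(2.0)
```

Agreement with the analytic value validates the quadrature before applying it to densities, such as copula-based joints, that have no closed-form entropy.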

30 pages, 1867 KB  
Article
A New Hybrid Class of Distributions: Model Characteristics and Stress–Strength Reliability Studies
by Mustapha Muhammad, Jinsen Xiao, Badamasi Abba, Isyaku Muhammad and Refka Ghodhbani
Axioms 2025, 14(3), 219; https://doi.org/10.3390/axioms14030219 - 16 Mar 2025
Viewed by 513
Abstract
This study proposes a generalized family of distributions to enhance flexibility in modeling complex engineering and biomedical data. The framework unifies existing models and improves reliability analysis in both engineering and biomedical applications by capturing diverse system behaviors. We introduce a novel hybrid family of distributions that incorporates a flexible set of hybrid functions, enabling the extension of various existing distributions. Specifically, we present a three-parameter special member called the hybrid-Weibull–exponential (HWE) distribution. We derive several fundamental mathematical properties of this new family, including moments, random data generation processes, mean residual life (MRL) and its relationship with the failure rate function, and its related asymptotic behavior. Furthermore, we compute advanced information measures, such as extropy and cumulative residual entropy, and derive order statistics along with their asymptotic behaviors. Model identifiability is demonstrated numerically using the Kullback–Leibler divergence. Additionally, we perform a stress–strength (SS) reliability analysis of the HWE under two common scale parameters, supported by illustrative numerical evaluations. For parameter estimation, we adopt the maximum likelihood estimation (MLE) method in both density estimation and SS-parameter studies. The simulation results indicated that the MLE demonstrates consistency in both density and SS-parameter estimations, with the mean squared error of the MLEs decreasing as the sample size increases. Moreover, the average length of the confidence interval for the percentile and Student’s t-bootstrap for the SS-parameter becomes smaller with larger sample sizes, and the coverage probability progressively aligns with the nominal confidence level of 95%. 
To demonstrate the practical effectiveness of the hybrid family, we provide three real-world data applications in which the HWE distribution outperforms many existing Weibull-based models, as measured by AIC, BIC, CAIC, KS, Anderson–Darling, and Cramér–von Mises criteria. Furthermore, the HWE exhibits strong performance in SS-parameter analysis. Consequently, this hybrid family holds immense potential for modeling lifetime data and advancing reliability and survival analysis. Full article

29 pages, 2519 KB  
Article
Fitting the Seven-Parameter Generalized Tempered Stable Distribution to Financial Data
by Aubain Nzokem and Daniel Maposa
J. Risk Financial Manag. 2024, 17(12), 531; https://doi.org/10.3390/jrfm17120531 - 22 Nov 2024
Cited by 1 | Viewed by 1136
Abstract
This paper proposes and implements a methodology to fit a seven-parameter Generalized Tempered Stable (GTS) distribution to financial data. The nonexistence of the mathematical expression of the GTS probability density function makes maximum-likelihood estimation (MLE) inadequate for providing parameter estimations. Based on the function characteristic and the fractional Fourier transform (FRFT), we provide a comprehensive approach to circumvent the problem and yield a good parameter estimation of the GTS probability. The methodology was applied to fit two heavy-tailed data (Bitcoin and Ethereum returns) and two peaked data (S&P 500 and SPY ETF returns). For each historical data, the estimation results show that six-parameter estimations are statistically significant except for the local parameter, μ. The goodness of fit was assessed through Kolmogorov–Smirnov, Anderson–Darling, and Pearson’s chi-squared statistics. While the two-parameter geometric Brownian motion (GBM) hypothesis is always rejected, the GTS distribution fits significantly with a very high p-value and outperforms the Kobol, Carr–Geman–Madan–Yor, and bilateral Gamma distributions. Full article
(This article belongs to the Special Issue Featured Papers in Mathematics and Finance)
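The core difficulty this abstract describes — a density known only through its characteristic function — is handled by Fourier inversion, f(x) = (1/2π) ∫ e^(-itx) φ(t) dt. The paper uses the fractional Fourier transform; the sketch below uses a plain trapezoid rule instead, and a normal characteristic function as the stand-in (its pdf is known, so the inversion can be checked), not the seven-parameter GTS:

```python
import math

def cf_normal(t, mu=0.0, sig=1.0):
    """Characteristic function of N(mu, sig^2): exp(i*mu*t - sig^2 * t^2 / 2)."""
    return complex(math.cos(mu * t), math.sin(mu * t)) * math.exp(-0.5 * (sig * t) ** 2)

def pdf_from_cf(x, cf, T=40.0, steps=4000):
    """Invert f(x) = (1/2pi) * integral of e^{-itx} phi(t) dt, trapezoid rule on [-T, T]."""
    h = 2.0 * T / steps
    total = 0.0
    for k in range(steps + 1):
        t = -T + k * h
        w = 0.5 if k in (0, steps) else 1.0
        e = complex(math.cos(-t * x), math.sin(-t * x))   # e^{-itx}
        total += w * (e * cf(t)).real
    return total * h / (2.0 * math.pi)

approx = pdf_from_cf(0.0, cf_normal)
exact = 1.0 / math.sqrt(2.0 * math.pi)   # standard normal density at 0
```

The FRFT serves the same purpose far more efficiently when the density must be evaluated on a whole grid of x values, as likelihood-style estimation requires.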

24 pages, 3301 KB  
Article
Statistical Analysis and Several Estimation Methods of New Alpha Power-Transformed Pareto Model with Applications in Insurance
by Meshayil M. Alsolmi, Fatimah A. Almulhim, Meraou Mohammed Amine, Hassan M. Aljohani, Amani Alrumayh and Fateh Belouadah
Symmetry 2024, 16(10), 1367; https://doi.org/10.3390/sym16101367 - 14 Oct 2024
Viewed by 1156
Abstract
This article defines a new distribution using a novel alpha power-transformed method extension. The model obtained has three parameters and is quite effective in modeling skewed, complex, symmetric, and asymmetric datasets. The new approach has one additional parameter for the model. Certain distributional and mathematical properties are investigated, notably reliability, quantiles, moments, skewness, kurtosis, and order statistics, and several approaches of estimation, notably the maximum likelihood, least square, weighted least square, maximum product spacing, Cramér–von Mises, and Anderson–Darling estimators of the model parameters were obtained. A Monte Carlo simulation study was conducted to evaluate the performance of the proposed techniques of estimation of the model parameters. The actuarial measures are computed for our recommended model. At the end of the paper, two insurance applications are illustrated to check the potential and utility of the suggested distribution. Evaluation using four selection criteria indicates that our recommended model is the most appropriate probability model for modeling insurance datasets. Full article

13 pages, 2186 KB  
Article
New Test to Detect Clustered Graphical Passwords in Passpoints Based on the Perimeter of the Convex Hull
by Joaquín Alberto Herrera-Macías, Lisset Suárez-Plasencia, Carlos Miguel Legón-Pérez, Guillermo Sosa-Gómez and Omar Rojas
Information 2024, 15(8), 447; https://doi.org/10.3390/info15080447 - 30 Jul 2024
Viewed by 1310
Abstract
This research paper presents a new test based on a novel approach for identifying clustered graphical passwords within the Passpoints scenario. Clustered graphical passwords are considered a weakness of graphical authentication systems, introduced by users during the registration phase, and thus it is necessary to have methods for the detection and prevention of such weaknesses. Graphical authentication methods serve as a viable alternative to the conventional alphanumeric password-based authentication method, which is susceptible to known weaknesses arising from user-generated passwords of this nature. The test proposed in this study is based on estimating the distributions of the perimeter of the convex hull, based on the hypothesis that the perimeter of the convex hull of a set of five clustered points is smaller than the one formed by random points. This convex hull is computed based on the points that users select as passwords within an image measuring 1920 × 1080 pixels, using the built-in function convhull in Matlab R2018a relying on the Qhull algorithm. The test was formulated by choosing the optimal distribution that fits the data from a total of 54 distributions, evaluated using the Kolmogorov–Smirnov, Anderson–Darling, and Chi-squared tests, thus achieving the highest reliability. Evaluating the effectiveness of the proposed test involves estimating type I and II errors, for five levels of significance α ∈ {0.01, 0.02, 0.05, 0.1, 0.2}, by simulating datasets of random and clustered graphical passwords with different levels of clustering. In this study, we compare the effectiveness and efficiency of the proposed test with existing tests from the literature that can detect this type of pattern in Passpoints graphical passwords. Our findings indicate that the new test demonstrates a significant improvement in effectiveness compared to previously published tests. Furthermore, the joint application of the two tests also shows improvement.
Depending on the significance level determined by the user or system, the enhancement results in a higher detection rate of clustered passwords, ranging from 0.1% to 8% compared to the most effective previous methods. This improvement leads to a decrease in the estimated probability of committing a type II error. In terms of efficiency, the proposed test outperforms several previous tests; however, it falls short of being the most efficient, using computation time measured in seconds as a metric. It can be concluded that the newly developed test demonstrates the highest effectiveness and the second-highest efficiency level compared to the other tests available in the existing literature for the same purpose. The test was designed to be implemented in graphical authentication systems to prevent users from selecting weak graphical passwords, enhance password strength, and improve system security. Full article
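The test statistic behind this paper — the convex hull perimeter of the five clicked points — is easy to reproduce. The paper uses Matlab's convhull (Qhull); the sketch below uses Andrew's monotone chain in pure Python instead, and the ±50 px cluster radius is an illustrative assumption, not the paper's clustering model:

```python
import math
import random

def hull_perimeter(pts):
    """Perimeter of the convex hull of 2-D points (Andrew's monotone chain)."""
    pts = sorted(set(pts))
    cross = lambda o, a, b: (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)]) for i in range(len(hull)))

rng = random.Random(9)
W, H, reps = 1920, 1080, 200

def mean_perimeter(gen):
    return sum(hull_perimeter(gen()) for _ in range(reps)) / reps

random_pts = lambda: [(rng.uniform(0, W), rng.uniform(0, H)) for _ in range(5)]

def clustered_pts():
    cx, cy = rng.uniform(60, W - 60), rng.uniform(60, H - 60)
    return [(cx + rng.uniform(-50, 50), cy + rng.uniform(-50, 50)) for _ in range(5)]

m_rand = mean_perimeter(random_pts)
m_clust = mean_perimeter(clustered_pts)
```

Since a clustered hull fits inside a small box, its perimeter is bounded by that box's perimeter, which is what separates the two empirical perimeter distributions the test compares.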
47 pages, 776 KB  
Article
Bivariate Random Coefficient Integer-Valued Autoregressive Model Based on a ρ-Thinning Operator
by Chang Liu and Dehui Wang
Axioms 2024, 13(6), 367; https://doi.org/10.3390/axioms13060367 - 29 May 2024
Viewed by 1032
Abstract
While overdispersion is a common phenomenon in univariate count time series data, its exploration within bivariate contexts remains limited. To fill this gap, we propose a bivariate integer-valued autoregressive model. The model leverages a modified binomial thinning operator with a dispersion parameter ρ [...] Read more.
While overdispersion is a common phenomenon in univariate count time series data, its exploration within bivariate contexts remains limited. To fill this gap, we propose a bivariate integer-valued autoregressive model. The model leverages a modified binomial thinning operator with a dispersion parameter ρ and integrates random coefficients. This approach combines characteristics from both binomial and negative binomial thinning operators, thereby offering a flexible framework capable of generating count series exhibiting equidispersion, overdispersion, or underdispersion. Notably, our model includes two distinct classes of first-order bivariate geometric integer-valued autoregressive models: one class employs binomial thinning (BVGINAR(1)), and the other adopts negative binomial thinning (BVNGINAR(1)). We establish the stationarity and ergodicity of the model and estimate its parameters using a combination of the Yule–Walker (YW) and conditional maximum likelihood (CML) methods. Furthermore, Monte Carlo simulation experiments are conducted to evaluate the finite sample performances of the proposed estimators across various parameter configurations, and the Anderson–Darling (AD) test is employed to assess the asymptotic normality of the estimators under large sample sizes. Ultimately, we highlight the practical applicability of the examined model by analyzing two real-world datasets on crime counts in New South Wales (NSW) and comparing its performance with other popular overdispersed BINAR(1) models. Full article
(This article belongs to the Section Mathematical Analysis)
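The binomial thinning that underlies the BVGINAR(1) class can be illustrated with a minimal univariate INAR(1) simulation. The paper's bivariate ρ-thinning operator and geometric marginals are not reproduced here; Poisson innovations are an illustrative assumption, chosen because they make the stationary mean easy to verify.

```python
import numpy as np

rng = np.random.default_rng(1)

def binomial_thinning(alpha, x, rng):
    """alpha ∘ x: the sum of x independent Bernoulli(alpha) variables."""
    return rng.binomial(x, alpha)

def simulate_inar1(alpha, lam, n, rng):
    """Univariate INAR(1): X_t = alpha ∘ X_{t-1} + eps_t, eps_t ~ Poisson(lam)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean
    for t in range(1, n):
        x[t] = binomial_thinning(alpha, x[t - 1], rng) + rng.poisson(lam)
    return x

series = simulate_inar1(alpha=0.4, lam=2.0, n=5000, rng=rng)
# The stationary mean of this chain is lam / (1 - alpha) ≈ 3.33.
print(series.mean())
```

Replacing the Bernoulli counting variables with geometric ones gives negative binomial thinning; the paper's ρ-thinning operator interpolates between the two, which is what lets the model cover equi-, over-, and underdispersion.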
16 pages, 3270 KB  
Article
Lifetime Analysis of Dies Manufactured by Conventional Processes and Reconditioned by Deposition Welding Operation
by Daniela Maria Iovanas and Adela-Eliza Dumitrascu
Materials 2024, 17(7), 1469; https://doi.org/10.3390/ma17071469 - 22 Mar 2024
Cited by 1 | Viewed by 1153
Abstract
The refurbishment of dies by the deposition welding of wear areas is an efficient and economical process. The aim of this study was to conduct a comparative analysis of the lifetimes of different types of dies for the manufacturing of wagon wheels. The [...] Read more.
The refurbishment of dies by the deposition welding of wear areas is an efficient and economical process. The aim of this study was to conduct a comparative analysis of the lifetimes of different types of dies for the manufacturing of wagon wheels. The analyzed dies were manufactured by conventional processes (Type I) and reconditioned through a deposition welding procedure using a dedicated electrode (Type II). The Anderson–Darling test was conducted to analyze the goodness of fit of the lifetime data specific to each die type. The maximum likelihood estimation method (MLE) with a 95% confidence interval (CI) was applied in order to estimate the lifetime distribution parameters. It was found that the lifetimes of type II dies were longer than those of type I dies. The mean time to failure (MTTF) recorded for reconditioned dies was 426 min, while the mean time to failure of dies manufactured by conventional processes was approximately 253 min. In addition, a higher hazard rate was observed for type I dies than for type II dies. The results of this analysis emphasize that dies can be restored to their initial operating capacity by successfully using deposition welding procedures that confer a high resistance to operational loads. At the same time, the use of these procedures allows for the sustainable development of resources and waste management. Full article
(This article belongs to the Special Issue Manufacturing Technology, Materials and Methods (Second Edition))
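As a rough illustration of the estimation pipeline described above (an MLE lifetime fit, an MTTF estimate, and an Anderson–Darling goodness-of-fit check), the sketch below fits a Weibull model to synthetic lifetimes. The distribution family, parameters, and sample size are assumptions for illustration only, since the paper's die data are not reproduced here.

```python
import numpy as np
from math import gamma
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic die lifetimes in minutes (illustrative Weibull parameters,
# not the paper's data).
lifetimes = stats.weibull_min.rvs(c=2.5, scale=480, size=60, random_state=rng)

# Maximum likelihood fit with the location fixed at 0, as is usual
# for lifetime data.
shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)

# MTTF of a Weibull(shape, scale) model: scale * Gamma(1 + 1/shape).
mttf = scale * gamma(1 + 1 / shape)

# Monte Carlo Anderson-Darling goodness-of-fit test against the fitted
# Weibull family (scipy.stats.goodness_of_fit, SciPy >= 1.10).
res = stats.goodness_of_fit(stats.weibull_min, lifetimes,
                            known_params={"loc": 0.0},
                            statistic="ad", n_mc_samples=199,
                            random_state=0)
print(f"MTTF estimate: {mttf:.0f} min, AD p-value: {res.pvalue:.3f}")
```

A large AD p-value here means the Weibull family is not rejected for the data, which is the same style of check the study applies before comparing the Type I and Type II MTTF values.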