Search Results (142)

Search Parameters:
Keywords = Bayesian information criteria

35 pages, 11039 KiB  
Article
Optimum Progressive Data Analysis and Bayesian Inference for Unified Progressive Hybrid INH Censoring with Applications to Diamonds and Gold
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(8), 559; https://doi.org/10.3390/axioms14080559 - 23 Jul 2025
Viewed by 157
Abstract
A novel unified progressive hybrid censoring is introduced to combine both progressive and hybrid censoring plans to allow flexible test termination either after a prespecified number of failures or at a fixed time. This work develops both frequentist and Bayesian inferential procedures for estimating the parameters, reliability, and hazard rates of the inverted Nadarajah–Haghighi lifespan model when a sample is produced from such a censoring plan. Maximum likelihood estimators are obtained through the Newton–Raphson iterative technique. The delta method, based on the Fisher information matrix, is utilized to build the asymptotic confidence intervals for each unknown quantity. In the Bayesian methodology, Markov chain Monte Carlo techniques with independent gamma priors are implemented to generate posterior summaries and credible intervals, addressing computational intractability through the Metropolis–Hastings algorithm. Extensive Monte Carlo simulations compare the efficiency and utility of frequentist and Bayesian estimates across multiple censoring designs, highlighting the superiority of Bayesian inference using informative prior information. Two real-world applications utilizing rare minerals from gold and diamond durability studies are examined to demonstrate the adaptability of the proposed estimators to the analysis of rare events in precious materials science. By applying four different optimality criteria to multiple competing plans, an analysis of various progressive censoring strategies that yield the best performance is conducted. The proposed censoring framework is effectively applied to real-world datasets involving diamonds and gold, demonstrating its practical utility in modeling the reliability and failure behavior of rare and high-value minerals. Full article
(This article belongs to the Special Issue Applications of Bayesian Methods in Statistical Analysis)
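
A minimal random-walk Metropolis–Hastings sketch of the kind of gamma-prior posterior sampling the abstract describes. The log-likelihood here is a stand-in (a simple exponential model) rather than the authors' censored inverted Nadarajah–Haghighi likelihood, and the prior hyperparameters and step size are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(theta, a=2.0, b=1.0):
    # Independent Gamma(a, b) prior (shape/rate), log density up to a constant
    if np.any(theta <= 0):
        return -np.inf
    return np.sum((a - 1) * np.log(theta) - b * theta)

def log_lik(theta, data):
    # Placeholder: substitute the censored-data log-likelihood of the
    # inverted Nadarajah-Haghighi model here; an exponential model is used
    # only so the sketch runs end to end.
    lam = theta[0]
    return np.sum(np.log(lam) - lam * data)

def metropolis_hastings(data, n_iter=5000, step=0.1, theta0=np.array([1.0])):
    theta = theta0.copy()
    lp = log_prior(theta) + log_lik(theta, data)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)  # random-walk proposal
        lp_prop = log_prior(prop) + log_lik(prop, data)
        if np.log(rng.uniform()) < lp_prop - lp:                # accept/reject
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

data = rng.exponential(scale=0.5, size=50)
posterior = metropolis_hastings(data)
print(posterior[1000:].mean(axis=0))  # posterior mean after burn-in
```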

26 pages, 6714 KiB  
Article
End-of-Line Quality Control Based on Mel-Frequency Spectrogram Analysis and Deep Learning
by Jernej Mlinarič, Boštjan Pregelj and Gregor Dolanc
Machines 2025, 13(7), 626; https://doi.org/10.3390/machines13070626 - 21 Jul 2025
Viewed by 183
Abstract
This study presents a novel approach to the end-of-line (EoL) quality inspection of brushless DC (BLDC) motors by implementing a deep learning model that combines MEL diagrams, convolutional neural networks (CNNs) and bidirectional gated recurrent units (BiGRUs). The suggested system utilizes raw vibration and sound signals, recorded during the EoL quality inspection process at the end of an industrial manufacturing line. Recorded signals are transformed directly into Mel-frequency spectrograms (MFS) without pre-processing. To remove non-informative frequency bands and increase data relevance, a six-step data reduction procedure was implemented. Furthermore, to improve fault characterization, a reference spectrogram was generated from healthy motors. The neural network was trained on a highly imbalanced dataset, using oversampling and Bayesian hyperparameter optimization. The final classification algorithm achieved high classification accuracy (99%). Traditional EoL inspection methods often rely on threshold-based criteria and expert analysis, which can be inconsistent, time-consuming, and poorly scalable. These methods struggle to detect complex or subtle patterns associated with early-stage faults. The proposed approach addresses these issues by learning discriminative patterns directly from raw sensor data and automating the classification process. The results confirm that this approach can reduce the need for human expert engagement during commissioning, eliminate redundant inspection steps, and improve fault detection consistency, offering significant production efficiency gains. Full article
(This article belongs to the Special Issue Advances in Noises and Vibrations for Machines)
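
A short sketch of how a raw signal can be turned into a Mel-frequency spectrogram with librosa, the kind of transformation the abstract describes as the CNN input. The sampling rate, FFT size, and Mel-band count here are assumed values, not the settings used in the paper:

```python
import numpy as np
import librosa

# Hypothetical parameters: a 10 s vibration record sampled at 20 kHz.
sr = 20_000
signal = np.random.default_rng(1).standard_normal(sr * 10)  # stand-in for a raw sensor trace

# Mel-frequency spectrogram, then conversion to dB for use as a CNN input image.
mel = librosa.feature.melspectrogram(
    y=signal, sr=sr, n_fft=2048, hop_length=512, n_mels=64
)
mel_db = librosa.power_to_db(mel, ref=np.max)
print(mel_db.shape)  # (n_mels, n_frames): one "image" per recording
```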

14 pages, 2612 KiB  
Article
Reassessment Individual Growth Analysis of the Gulf Corvina, Cynoscion othonopterus (Teleostei: Sciaenidae), Using Observed Residual Error
by Eugenio Alberto Aragón-Noriega, José Adán Félix-Ortiz, Jaime Edzael Mendivil-Mendoza, Gilberto Genaro Ortega-Lizárraga and Marcelo Vidal Curiel-Bernal
Animals 2025, 15(14), 2008; https://doi.org/10.3390/ani15142008 - 8 Jul 2025
Viewed by 644
Abstract
Growth is the most influential aspect in demographic species analysis. Collecting data on ages and sizes (such as length and weight) is a fundamental step in growth modeling, particularly in fishery science. Residual analysis plays a crucial role in parameterizing the mathematical models chosen to describe the growth patterns of the species under investigation. Using optimal residual criteria is essential to improving model performance and accuracy. In the present study, the length-at-age data of the Gulf corvina (Cynoscion othonopterus) were evaluated with the Schnute model to obtain the best error type and to establish the most accurate growth pattern. Later, the observed, constant, depensatory, and compensatory variance approaches were tested using the logistic model. The Bayesian information criterion (BIC) was used as the goodness-of-fit test to obtain the best variance approach parametrizing the growth model. The BIC values selected the observed variance as the best approach to parametrize the logistic growth model. The conclusion is that the observed variance approach produces robust results—that is, the observed variance produced the most plausible fits. It is suggested that the observed error structure should be used to estimate individual growth. Full article
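
An illustrative sketch of BIC-based selection between error-variance structures for a logistic growth fit, in the spirit of the comparison the abstract describes. The length-at-age data are synthetic, the error scales are treated as known for brevity, and the parameter count ignores the variance terms; it is not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Synthetic length-at-age data (stand-in for the Gulf corvina observations).
age = np.repeat(np.arange(1, 9), 20)
true_len = 900 / (1 + np.exp(-0.6 * (age - 3.5)))
length = true_len + rng.normal(0, 25 + 5 * age)          # spread grows with age

def logistic(t, Linf, k, t0):
    return Linf / (1 + np.exp(-k * (t - t0)))

def neg_loglik(params, sigma):
    Linf, k, t0 = params
    mu = logistic(age, Linf, k, t0)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (length - mu) ** 2 / sigma**2)

def bic(nll, n_params, n_obs):
    # BIC = k * ln(n) - 2 * ln(L_hat), written in terms of the minimized NLL
    return 2 * nll + n_params * np.log(n_obs)

# (i) constant variance; (ii) "observed" variance: empirical SD within each age class.
sigma_const = np.full_like(length, length.std())
sigma_obs = np.array([length[age == a].std(ddof=1) for a in age])

start = np.array([800.0, 0.5, 3.0])
for name, sig in [("constant", sigma_const), ("observed", sigma_obs)]:
    fit = minimize(neg_loglik, start, args=(sig,), method="Nelder-Mead")
    print(name, "BIC =", round(bic(fit.fun, 3, len(length)), 1))
```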

32 pages, 4701 KiB  
Review
Machine-Learning-Guided Design of Nanostructured Metal Oxide Photoanodes for Photoelectrochemical Water Splitting: From Material Discovery to Performance Optimization
by Xiongwei Liang, Shaopeng Yu, Bo Meng, Yongfu Ju, Shuai Wang and Yingning Wang
Nanomaterials 2025, 15(12), 948; https://doi.org/10.3390/nano15120948 - 18 Jun 2025
Cited by 1 | Viewed by 608
Abstract
The rational design of photoanode materials is pivotal for advancing photoelectrochemical (PEC) water splitting toward sustainable hydrogen production. This review highlights recent progress in the machine learning (ML)-assisted development of nanostructured metal oxide photoanodes, focusing on bridging materials discovery and device-level performance optimization. We first delineate the fundamental physicochemical criteria for efficient photoanodes, including suitable band alignment, visible-light absorption, charge carrier mobility, and electrochemical stability. Conventional strategies such as nanostructuring, elemental doping, and surface/interface engineering are critically evaluated. We then discuss the integration of ML techniques—ranging from high-throughput density functional theory (DFT)-based screening to experimental data-driven modeling—for accelerating the identification of promising oxides (e.g., BiVO4, Fe2O3, WO3) and optimizing key parameters such as dopant selection, morphology, and catalyst interfaces. Particular attention is given to surrogate modeling, Bayesian optimization, convolutional neural networks, and explainable AI approaches that enable closed-loop synthesis-experiment-ML frameworks. ML-assisted performance prediction and tandem device design are also addressed. Finally, current challenges in data standardization, model generalizability, and experimental validation are outlined, and future perspectives are proposed for integrating ML with automated platforms and physics-informed modeling to facilitate scalable PEC material development for clean energy applications. Full article
(This article belongs to the Special Issue Nanomaterials for Novel Photoelectrochemical Devices)

22 pages, 2323 KiB  
Article
Finite Mixture Model-Based Analysis of Yarn Quality Parameters
by Esra Karakaş, Melik Koyuncu and Mülayim Öngün Ükelge
Appl. Sci. 2025, 15(12), 6407; https://doi.org/10.3390/app15126407 - 6 Jun 2025
Viewed by 337
Abstract
This study investigates the applicability of finite mixture models (FMMs) for accurately modeling yarn quality parameters in 28/1 Ne ring-spun polyester/viscose yarns, focusing on both yarn imperfections and mechanical properties. The research addresses the need for advanced statistical modeling techniques to better capture the inherent heterogeneity in textile production data. To this end, the Poisson mixture model is employed to represent count-based defects, such as thin places, thick places, and neps, while the gamma mixture model is used to model continuous variables, such as tenacity and elongation. Model parameters are estimated using the expectation–maximization (EM) algorithm, and model selection is guided by the Akaike and Bayesian information criteria (AIC and BIC). The results reveal that thin places are optimally modeled using a two-component Poisson mixture distribution, whereas thick places and neps require three components to reflect their variability. Similarly, a two-component gamma mixture distribution best describes the distributions of tenacity and elongation. These findings highlight the robustness of FMMs in capturing complex distributional patterns in yarn data, demonstrating their potential in enhancing quality assessment and control processes in the textile industry. Full article
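
A compact EM sketch for a k-component Poisson mixture with AIC/BIC selection, run on synthetic counts standing in for yarn-defect data; the gamma-mixture case for tenacity and elongation follows the same pattern with gamma densities in the E-step. This is illustrative only, not the authors' implementation:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
# Stand-in count data, e.g. thin places per unit length from two spinning regimes.
counts = np.concatenate([rng.poisson(2, 300), rng.poisson(9, 150)])

def fit_poisson_mixture(x, k, n_iter=200):
    """EM for a k-component Poisson mixture; returns weights, rates, log-likelihood."""
    w = np.full(k, 1 / k)
    lam = np.quantile(x, np.linspace(0.2, 0.8, k)) + 0.5   # crude initial rates
    for _ in range(n_iter):
        # E-step: responsibilities
        log_p = np.log(w) + poisson.logpmf(x[:, None], lam)
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        r = np.exp(log_p - log_norm)
        # M-step: update weights and rates
        w = r.mean(axis=0)
        lam = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    return w, lam, log_norm.sum()

n = len(counts)
for k in (1, 2, 3):
    _, _, ll = fit_poisson_mixture(counts, k)
    p = 2 * k - 1                      # k rates + (k - 1) free weights
    print(f"k={k}  AIC={2*p - 2*ll:.1f}  BIC={p*np.log(n) - 2*ll:.1f}")
```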

23 pages, 2098 KiB  
Article
Modeling Time Series with SARIMAX and Skew-Normal and Zero-Inflated Skew-Normal Errors
by M. Alejandro Dinamarca, Fernando Rojas, Claudia Ibacache-Quiroga and Karoll González-Pizarro
Mathematics 2025, 13(11), 1892; https://doi.org/10.3390/math13111892 - 5 Jun 2025
Viewed by 618
Abstract
This study proposes an extension of Seasonal Autoregressive Integrated Moving Average models with exogenous regressors (SARIMAX) by incorporating skew-normal and zero-inflated skew-normal error structures to better accommodate asymmetry and excess zeros in time series data. The proposed framework demonstrates improved flexibility and robustness compared to traditional Gaussian-based models. Simulation experiments reveal that the skewness parameter significantly affects forecasting accuracy, with reductions in mean absolute error (MAE) and root mean square error (RMSE) observed across both positively and negatively skewed scenarios. Notably, in negative-skew contexts, the model achieved an MAE of 0.40 and RMSE of 0.49, outperforming its symmetric-error counterparts. The inclusion of zero-inflation probabilities further enhances model performance in sparse datasets, yielding superior values in goodness-of-fit criteria such as the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). To illustrate the practical value of the methodology, a real-world case study is presented involving the modeling of optical density (OD600) data from Escherichia coli during stationary-phase growth. A SARIMAX(1,1,1) model with skew-normal errors was fitted to 200 time-stamped absorbance measurements, revealing significant positive skewness in the residuals. Bootstrap-derived confidence intervals confirmed the significance of the estimated skewness parameter (α=14.033 with 95% CI [12.07, 15.99]). The model outperformed the classical ARIMA benchmark in capturing the asymmetry of the stochastic structure, underscoring its relevance for biological, environmental, and industrial applications in which non-Gaussian features are prevalent. Full article
(This article belongs to the Special Issue Applied Statistics in Management Sciences)
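
A minimal statsmodels sketch of fitting a SARIMAX(1,1,1) and checking the residuals for skewness. statsmodels only supports Gaussian errors, so the paper's skew-normal and zero-inflated error structures would require a custom likelihood; the series below is simulated, not the OD600 data:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Stand-in for 200 absorbance readings; the real data are not reproduced here.
y = pd.Series(0.9 + 0.002 * np.arange(200)
              + stats.skewnorm.rvs(6, scale=0.01, size=200, random_state=4))

# Gaussian-error SARIMAX(1,1,1); a skew-normal error term is not available here.
res = SARIMAX(y, order=(1, 1, 1)).fit(disp=False)
print("AIC:", res.aic, " BIC:", res.bic)

# Check the residuals for the asymmetry that motivates a skew-normal error term.
alpha, loc, scale = stats.skewnorm.fit(res.resid[1:])
print("estimated residual skewness parameter:", alpha)
```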

26 pages, 743 KiB  
Article
Dependent and Independent Time Series Errors Under Elliptically Contoured Models
by Fredy O. Pérez-Ramirez, Francisco J. Caro-Lopera, José A. Díaz-García and Graciela González Farías
Econometrics 2025, 13(2), 22; https://doi.org/10.3390/econometrics13020022 - 21 May 2025
Viewed by 350
Abstract
We explore the impact of time series behavior on model errors when working under an elliptically contoured distribution. By adopting a time series approach aligned with the realistic dependence between errors under such distributions, this perspective shifts the focus from increasingly complex and challenging correlation analyses to volatility modeling that utilizes a novel likelihood framework based on dependent probabilistic samples. With the introduction of a modified Bayesian Information Criterion, which incorporates a ranking of degrees of evidence of significant differences between the compared models, the critical issue of model selection is reinforced, clarifying the relationships among the most common information criteria and revealing limited relevance among the models based on independent probabilistic samples, when tested on a well-established database. Our approach challenges the traditional hierarchical models commonly used in time series analysis, which assume independent errors. The application of rigorous differentiation criteria under this novel perspective on likelihood, based on dependent probabilistic samples, provides a new viewpoint on likelihood that arises naturally in the context of finance, adding a novel result. We provide new results for criterion selection, evidence invariance, and transitions between volatility models and heuristic methods to calibrate nested or non-nested models via convergence properties in a distribution. Full article

23 pages, 1136 KiB  
Article
Objective Framework for Bayesian Inference in Multicomponent Pareto Stress–Strength Model Under an Adaptive Progressive Type-II Censoring Scheme
by Young Eun Jeon, Yongku Kim and Jung-In Seo
Mathematics 2025, 13(9), 1379; https://doi.org/10.3390/math13091379 - 23 Apr 2025
Viewed by 277
Abstract
This study introduces an objective Bayesian approach for estimating the reliability of a multicomponent stress–strength model based on the Pareto distribution under an adaptive progressive Type-II censoring scheme. The proposed method is developed within a Bayesian framework, utilizing a reference prior with partial information to improve the accuracy of point estimation and to ensure the construction of a credible interval for uncertainty assessment. This approach is particularly useful for addressing several limitations of a widely used likelihood-based approach in estimating the multicomponent stress–strength reliability under the Pareto distribution. For instance, in the likelihood-based method, the asymptotic variance–covariance matrix may not exist due to certain constraints. This limitation hinders the construction of an approximate confidence interval for assessing the uncertainty. Moreover, even when an approximate confidence interval is obtained, it may fail to achieve nominal coverage levels in small sample scenarios. Unlike the likelihood-based method, the proposed method provides an efficient estimator across various criteria and constructs a valid credible interval, even with small sample sizes. Extensive simulation studies confirm that the proposed method yields reliable and accurate inference across various censoring scenarios, and a real data application validates its practical utility. These results demonstrate that the proposed method is an effective alternative to the likelihood-based method for reliability inference in the multicomponent stress–strength model based on the Pareto distribution under an adaptive progressive Type-II censoring scheme. Full article

19 pages, 4399 KiB  
Article
Spike Stall Precursor Detection in a Single-Stage Axial Compressor: A Data-Driven Dynamic Modeling Approach
by Anish Thapa, Jichao Li and Marco P. Schoen
Machines 2025, 13(4), 338; https://doi.org/10.3390/machines13040338 - 21 Apr 2025
Viewed by 417
Abstract
Operational safety and fuel efficiency are critical, yet often conflicting, objectives in modern civil jet engine designs. Optimal efficiency operating conditions are typically close to unsafe regions, such as compressor stalls, which can cause severe engine damage. Consequently, engines are generally operated below peak efficiency to maintain a sufficient stall margin. Reducing this margin through active control requires stall precursor detection and mitigation mechanisms. While several algorithms have shown promising results in predicting modal stalls, predicting spike stalls remains a challenge due to their rapid onset, leaving little time for corrective actions. This study addresses this gap by proposing a method to identify spike stall precursors based on the changing dynamics within a compressor blade passage. An autoregressive time series model is utilized to capture these dynamics and its changes are related to the flow condition within the blade passage. The autoregressive model is adaptively extracted from measured pressure data from a one-stage axial compressor test stand. The corresponding eigenvalues of the model are monitored by utilizing an outlier detection mechanism that uses pressure reading statistics. Outliers are proposed to be associated with spike stall precursors. The model order, which defines the number of relevant eigenvalues, is determined using three information criteria: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Conditional Model Estimator (CME). For prediction, an outlier detection algorithm based on the Generalized Extreme Studentized Deviate (GESD) Test is introduced. The proposed method is experimentally validated on a single-stage low-speed axial compressor. Results demonstrate consistent stall precursor detection, with future application for timely control interventions to prevent spike stall inception. Full article
(This article belongs to the Section Turbomachinery)
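
A brief sketch of BIC-driven autoregressive order selection and extraction of the model roots (eigenvalues) referred to in the abstract, applied to a simulated AR(2) trace standing in for a blade-passage pressure signal; the GESD outlier step and the windowed, adaptive re-estimation are omitted:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

rng = np.random.default_rng(5)
# Stand-in for a blade-passage pressure trace: a simple AR(2) process.
e = rng.standard_normal(2000)
pressure = np.zeros(2000)
for t in range(2, 2000):
    pressure[t] = 1.2 * pressure[t - 1] - 0.5 * pressure[t - 2] + e[t]

# Pick the model order with BIC (the paper also compares AIC and a third criterion).
sel = ar_select_order(pressure, maxlag=12, ic="bic")
order = sel.ar_lags[-1] if sel.ar_lags else 0
print("selected order:", order)

# Fit the chosen AR model and inspect its characteristic roots; tracking how the
# corresponding eigenvalues move over successive windows is what flags a precursor.
res = AutoReg(pressure, lags=order).fit()
print("AR roots (moduli):", np.abs(res.roots))
```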

21 pages, 2029 KiB  
Article
Comparing Frequentist and Bayesian Methods for Factorial Invariance with Latent Distribution Heterogeneity
by Xinya Liang, Ji Li, Mauricio Garnier-Villarreal and Jihong Zhang
Behav. Sci. 2025, 15(4), 482; https://doi.org/10.3390/bs15040482 - 7 Apr 2025
Viewed by 550
Abstract
Factorial invariance is critical for ensuring consistent relationships between measured variables and latent constructs across groups or time, enabling valid comparisons in social science research. Detecting factorial invariance becomes challenging when varying degrees of heterogeneity are present in the distribution of latent factors. This simulation study examined how changes in latent means and variances between groups influence the detection of noninvariance, comparing Bayesian and maximum likelihood fit measures. The design factors included sample size, noninvariance levels, and latent factor distributions. Results indicated that differences in factor variance have a stronger impact on measurement invariance than differences in factor means, with heterogeneity in latent variances more strongly affecting scalar invariance testing than metric invariance testing. Among model selection methods, goodness-of-fit indices generally exhibited lower power compared to likelihood ratio tests (LRTs), information criteria (ICs; except BIC), and leave-one-out cross-validation (LOO), which achieved a good balance between false and true positive rates. Full article

33 pages, 1233 KiB  
Article
Volatility Modelling of the Johannesburg Stock Exchange All Share Index Using the Family GARCH Model
by Israel Maingo, Thakhani Ravele and Caston Sigauke
Forecasting 2025, 7(2), 16; https://doi.org/10.3390/forecast7020016 - 3 Apr 2025
Viewed by 2427
Abstract
In numerous domains of finance and economics, modelling and predicting stock market volatility is essential. Predicting stock market volatility is widely used in the management of portfolios, analysis of risk, and determination of option prices. This study models the volatility of daily Johannesburg Stock Exchange All Share Index (JSE ALSI) stock price data between 1 January 2014 and 29 December 2023. The modelling process incorporated daily log returns derived from the JSE ALSI. The following volatility models were presented for the period: sGARCH(1, 1) and fGARCH(1, 1). The models for volatility were fitted using five unique error distribution assumptions, including Student's t, its skewed version, the generalized error and skewed generalized error distributions, and the generalized hyperbolic distribution. Based on information criteria such as Akaike, Bayesian, and Hannan–Quinn, the ARMA(0, 0)-fGARCH(1, 1) model with a skewed generalized error distribution emerged as the best fit. The chosen model revealed that the JSE ALSI prices are highly persistent with the leverage effect. JSE ALSI price volatility was notably influenced during the COVID-19 pandemic. The forecast over the next 10 days shows a rise in volatility. A comparative study was then carried out with the JSE Top 40 and the S&P 500 indices. Comparison of the FTSE/JSE Top 40, S&P 500, and JSE ALSI return indices over the COVID-19 pandemic indicated higher initial volatility in the FTSE/JSE Top 40 and S&P 500, with the JSE ALSI following a similar trend later. The S&P 500 showed long-term reliability and high rolling returns in spite of short-run volatility, while the FTSE/JSE Top 40 showed more pre-pandemic risk and volatility but reduced levels of rolling volatility after the pandemic; volatility was of similar magnitude for each index, with low correlations among them. These results provide important insights for risk managers and investors navigating the South African equity market. Full article
(This article belongs to the Section Forecasting in Economics and Management)
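
A small sketch with the Python arch package fitting an sGARCH(1, 1) with a skewed, heavy-tailed error law and producing a 10-day variance forecast, echoing the workflow in the abstract. arch does not implement fGARCH or the skewed generalized error distribution, so the skewed Student's t stands in, and the return series is simulated rather than the JSE ALSI data:

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(6)
# Stand-in for daily log returns (in percent); the real series is not reproduced here.
returns = pd.Series(rng.standard_t(df=6, size=2500) * 0.8)

# sGARCH(1, 1) with a skewed heavy-tailed error distribution.
am = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="skewt")
res = am.fit(disp="off")
print("AIC:", res.aic, " BIC:", res.bic)

# 10-day-ahead variance forecast, analogous to the volatility outlook in the abstract.
fc = res.forecast(horizon=10)
print(fc.variance.iloc[-1])
```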

21 pages, 4729 KiB  
Article
Enhancing Hierarchical Classification in Tree-Based Models Using Level-Wise Entropy Adjustment
by Olga Narushynska, Anastasiya Doroshenko, Vasyl Teslyuk, Volodymyr Antoniv and Maksym Arzubov
Big Data Cogn. Comput. 2025, 9(3), 65; https://doi.org/10.3390/bdcc9030065 - 11 Mar 2025
Viewed by 1720
Abstract
Hierarchical classification, which organizes items into structured categories and subcategories, has emerged as a powerful solution for handling large and complex datasets. However, traditional flat classification approaches often overlook the hierarchical dependencies between classes, leading to suboptimal predictions and limited interpretability. This paper addresses these challenges by proposing a novel integration of tree-based models with hierarchical-aware split criteria through adjusted entropy calculations. The proposed method calculates entropy at multiple hierarchical levels, ensuring that the model respects the taxonomic structure during training. This approach aligns statistical optimization with class semantic relationships, enabling more accurate and coherent predictions. Experiments conducted on real-world datasets structured according to the GS1 Global Product Classification (GPC) system demonstrate the effectiveness of our method. The proposed model was applied using tree-based ensemble methods combined with the newly developed hierarchy-aware metric Penalized Information Gain (PIG). PIG was implemented with level-wise entropy adjustments, assigning greater weight to higher hierarchical levels to maintain the taxonomic structure. The model was trained and evaluated on two real-world datasets based on the GS1 Global Product Classification (GPC) system. The final dataset included approximately 30,000 product descriptions spanning four hierarchical levels. An 80-20 train–test split was used, with model hyperparameters optimized through 5-fold cross-validation and Bayesian search. The experimental results showed a 12.7% improvement in classification accuracy at the lowest hierarchy level compared to traditional flat classification methods, with significant gains in datasets featuring highly imbalanced class distributions and deep hierarchies. The proposed approach also increased the F1 score by 12.6%. Despite these promising results, challenges remain in scaling the model for very large datasets and handling classes with limited training samples. Future research will focus on integrating neural networks with hierarchy-aware metrics, enhancing data augmentation to address class imbalance, and developing real-time classification systems for practical use in industries such as retail, logistics, and healthcare. Full article
(This article belongs to the Special Issue Natural Language Processing Applications in Big Data)
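
An illustrative level-weighted entropy function in the spirit of the level-wise adjustment described above. It is a simplified stand-in for the paper's Penalized Information Gain, with assumed per-level weights that halve at each level down the hierarchy:

```python
import numpy as np

def level_weighted_entropy(labels_by_level, weights=None):
    """Weighted sum of label entropies computed at each hierarchy level,
    with heavier weights on upper levels (illustrative, not the paper's exact PIG)."""
    n_levels = len(labels_by_level)
    if weights is None:
        weights = np.array([2.0 ** -i for i in range(n_levels)])
        weights = weights / weights.sum()
    total = 0.0
    for w, labels in zip(weights, labels_by_level):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        total += w * -(p * np.log2(p)).sum()
    return total

# Toy node: six products described at two hierarchy levels (segment, family).
segment = ["food", "food", "food", "beauty", "beauty", "beauty"]
family  = ["dairy", "dairy", "bakery", "hair", "hair", "skin"]
print(level_weighted_entropy([segment, family]))
```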

20 pages, 2739 KiB  
Article
Analysis of Molecular Aspects of Periodontitis as a Risk Factor for Neurodegenerative Diseases: A Single-Center 10-Year Retrospective Cohort Study
by Amr Sayed Ghanem, Marianna Móré and Attila Csaba Nagy
Int. J. Mol. Sci. 2025, 26(6), 2382; https://doi.org/10.3390/ijms26062382 - 7 Mar 2025
Cited by 2 | Viewed by 934
Abstract
Neurodegenerative diseases (NDDs) represent a considerable global health burden with no definitive treatments. Emerging evidence suggests that periodontitis may contribute to NDD through shared inflammatory, microbial, and genetic pathways. A retrospective cohort design was applied to analyze hospital records from 2012–2022 and to determine whether periodontitis independently increases NDD risk when accounting for major cardiovascular, cerebrovascular, metabolic, and inflammatory confounders. Likelihood ratio-based Cox regression tests and Weibull survival models were applied to assess the association between periodontitis and NDD risk. Model selection was guided by Akaike and Bayesian information criteria, while Harrell’s C-index and receiver operating characteristic curves evaluated predictive performance. Periodontitis demonstrated an independent association with neurodegenerative disease risk (HR 1.43, 95% CI 1.02–1.99). Cerebral infarction conferred the highest hazard (HR 4.81, 95% CI 2.90–7.96), while pneumonia (HR 1.96, 95% CI 1.05–3.64) and gastroesophageal reflux disease (HR 2.82, 95% CI 1.77–4.51) also showed significant increases in risk. Older individuals with periodontitis are at heightened risk of neurodegenerative disease, an effect further intensified by cerebrovascular, cardiometabolic, and gastroesophageal conditions. Pneumonia also emerged as an independent pathophysiological factor that may accelerate disease onset or progression. Attention to oral and systemic factors through coordinated clinical management may mitigate the onset and severity of neurodegeneration. Full article
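
A minimal lifelines sketch of the kind of Cox and Weibull survival fits, AIC comparison, and Harrell's C evaluation the abstract mentions. The data frame, column names, and covariates below are hypothetical stand-ins, not the hospital records:

```python
import pandas as pd
import numpy as np
from lifelines import CoxPHFitter, WeibullAFTFitter

rng = np.random.default_rng(7)
n = 500
# Hypothetical cohort table: follow-up time, NDD event flag, and two covariates.
df = pd.DataFrame({
    "years": rng.exponential(6, n),
    "ndd_event": rng.integers(0, 2, n),
    "periodontitis": rng.integers(0, 2, n),
    "age": rng.normal(65, 8, n),
})

cox = CoxPHFitter().fit(df, duration_col="years", event_col="ndd_event")
print("Cox partial AIC:", cox.AIC_partial_, " Harrell's C:", cox.concordance_index_)
print("HR for periodontitis:", cox.hazard_ratios_["periodontitis"])

weib = WeibullAFTFitter().fit(df, duration_col="years", event_col="ndd_event")
print("Weibull AFT AIC:", weib.AIC_)
```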

19 pages, 5256 KiB  
Article
Comparison of Machine Learning Models for Real-Time Flow Forecasting in the Semi-Arid Bouregreg Basin
by Fatima Zehrae Elhallaoui Oueldkaddour, Fatima Wariaghli, Hassane Brirhet, Ahmed Yahyaoui and Hassane Jaziri
Limnol. Rev. 2025, 25(1), 6; https://doi.org/10.3390/limnolrev25010006 - 5 Mar 2025
Cited by 1 | Viewed by 726
Abstract
Morocco is geographically located between two distinct climatic zones: temperate in the north and tropical in the south. This situation is the reason for the temporal and spatial variability of the Moroccan climate. In recent years, the increasing scarcity of water resources, exacerbated by climate change, has underscored the critical role of dams as essential water reservoirs. These dams serve multiple purposes, including flood management, hydropower generation, irrigation, and drinking water supply. Accurate estimation of reservoir flow rates is vital for effective water resource management, particularly in the context of climate variability. The prediction of monthly runoff time series is a key component of water resources planning and development projects. In this study, we employ Machine Learning (ML) techniques—specifically, Random Forest (RF), Support Vector Regression (SVR), and XGBoost—to predict monthly river flows in the Bouregreg basin, using data collected from the Sidi Mohamed Ben Abdellah (SMBA) Dam between 2010 and 2020. The primary objective of this paper is to comparatively evaluate the applicability of these three ML models for flow forecasting in the Bouregreg River. The models’ performance was assessed using three key criteria: the correlation coefficient (R2), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The results demonstrate that the SVR model outperformed the RF and XGBoost models, achieving high accuracy in flow prediction. These findings are highly encouraging and highlight the potential of machine learning approaches for hydrological forecasting in semi-arid regions. Notably, the models used in this study are less data-intensive compared to traditional methods, addressing a significant challenge in hydrological modeling. This research opens new avenues for the application of ML techniques in water resource management and suggests that these methods could be generalized to other basins in Morocco, promoting efficient, effective, and integrated water resource management strategies. Full article
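
A short sketch comparing RF, SVR, and XGBoost regressors on synthetic data, reporting R2 together with a Gaussian-residual approximation of AIC/BIC; an assumed effective parameter count is used, since these learners have no exact parameter count. It illustrates the comparison workflow only, not the Bouregreg basin data or the authors' setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(8)
# Hypothetical predictors (e.g. lagged flows, rainfall) and monthly inflow target.
X = rng.standard_normal((120, 4))
y = X @ np.array([0.8, -0.3, 0.5, 0.1]) + 0.2 * rng.standard_normal(120)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def pseudo_ic(y_true, y_pred, k):
    """Gaussian-residual approximation of AIC/BIC; k is an assumed effective
    parameter count for models without an exact likelihood-based count."""
    n = len(y_true)
    rss = np.sum((y_true - y_pred) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": SVR(C=10.0, epsilon=0.01),
    "XGBoost": XGBRegressor(n_estimators=200, learning_rate=0.05, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    aic, bic = pseudo_ic(y_te, pred, k=X.shape[1] + 1)
    print(f"{name}: R2={r2_score(y_te, pred):.3f}  AIC~{aic:.1f}  BIC~{bic:.1f}")
```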

24 pages, 398 KiB  
Article
Objective Posterior Analysis of kth Record Statistics in Gompertz Model
by Zoran Vidović and Liang Wang
Axioms 2025, 14(3), 152; https://doi.org/10.3390/axioms14030152 - 20 Feb 2025
Viewed by 520
Abstract
The Gompertz distribution has proven highly valuable in modeling human mortality rates and assessing the impacts of catastrophic events, such as plagues, financial crashes, and famines. Record data, which capture extreme values and critical trends, are particularly relevant for analyzing such phenomena. In this study, we propose an objective Bayesian framework for estimating the parameters of the Gompertz distribution using record data. We analyze the performance of several objective priors, including the reference prior, Jeffreys' prior, the maximal data information (MDI) prior, and probability matching priors. The suitability and properties of the resulting posterior distributions are systematically examined for each prior. A detailed simulation study is performed to assess the effectiveness of various estimators based on performance criteria. To demonstrate the practical application of the methodology, it is applied to a real-world dataset. This study contributes to the field by providing a thorough comparative evaluation of objective priors and showcasing their impact and applicability in parameter estimation for the Gompertz distribution based on record values. Full article
(This article belongs to the Special Issue Stochastic Modeling and Its Analysis)
