
Search Results (2,319)

Search Parameters:
Keywords = likelihood estimation maximum

12 pages, 278 KB  
Article
A Transfer Learning Approach to Semiparametric Probit Regression for Interval-Censored Failure Times Data
by Lanxin Cui, Shishun Zhao and Jianhua Cheng
Symmetry 2026, 18(4), 566; https://doi.org/10.3390/sym18040566 - 26 Mar 2026
Abstract
Regression analysis of interval-censored failure time data commonly arises in biomedical studies, particularly when the available sample size is limited. Although many methods have been proposed for the semiparametric probit model with interval-censored data, there does not appear to exist an established approach that effectively borrows information from external sources to improve estimation efficiency. Such external information may arise, for example, in clinical trials where an auxiliary dataset from a related population is available but may differ from the target population in certain aspects, leading to heterogeneity between populations. To address this issue, a sieve maximum likelihood estimation procedure is developed for the semiparametric probit model with interval-censored data, and a transfer learning method is proposed to leverage auxiliary information from a source domain to improve estimation efficiency in the target domain while accounting for population heterogeneity. The proposed approach is based on a penalized likelihood formulation and uses monotone splines to approximate the unknown baseline function, providing flexibility in both modeling and computation. Simulation studies show that the proposed estimator substantially improves estimation accuracy compared with methods that rely solely on the target data, particularly when the target sample size is small. An application to an Alzheimer’s disease dataset further illustrates the practical usefulness of the proposed approach in biomedical studies. Full article
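The core likelihood here can be illustrated on a toy scale. The sketch below (Python, not the authors' code) fits an interval-censored probit model F(t|x) = Φ(h(t) + xβ) by direct maximum likelihood; as a simplifying assumption, the unknown monotone function h(t) is replaced by a crude log-linear stand-in rather than the paper's monotone splines, and no transfer-learning penalty or auxiliary data is used.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
beta_true = 0.8
# True model: F(t|x) = Phi(log t + beta*x), i.e. T = exp(Z - beta*x).
t = np.exp(rng.standard_normal(n) - beta_true * x)

# Interval censoring: each failure time is only known to lie between
# consecutive inspection times on a fixed grid.
grid = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
L = np.full(n, 1e-8)
R = np.full(n, np.inf)
for i in range(n):
    below, above = grid[grid < t[i]], grid[grid >= t[i]]
    if below.size:
        L[i] = below[-1]
    if above.size:
        R[i] = above[0]

def negloglik(par):
    a0, a1, b = par
    slope = np.exp(a1)              # keeps h(t) = a0 + slope*log(t) monotone
    FL = norm.cdf(a0 + slope * np.log(L) + b * x)
    Rc = np.where(np.isinf(R), 1.0, R)   # placeholder, masked out below
    FR = np.where(np.isinf(R), 1.0, norm.cdf(a0 + slope * np.log(Rc) + b * x))
    # Each subject contributes P(L < T <= R | x) = F(R|x) - F(L|x).
    return -np.sum(np.log(np.clip(FR - FL, 1e-12, None)))

res = minimize(negloglik, np.zeros(3), method="Nelder-Mead",
               options={"maxiter": 2000})
print("estimated regression coefficient:", res.x[2])
```

The sieve approach in the paper replaces the log-linear h with a flexible monotone-spline expansion, which this toy deliberately omits.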
28 pages, 11208 KB  
Article
Deep-Sea Target Localization with Entropy Reduction: Sound Ray Bending Correction Based on TOA Time Series Analysis and Joint TOA-AOA Fusion
by Yuzhu Kang, Xiaohong Shen, Haiyan Wang, Yongsheng Yan and Tianyi Jia
Entropy 2026, 28(4), 373; https://doi.org/10.3390/e28040373 - 25 Mar 2026
Abstract
Unlike terrestrial environments, the inhomogeneous distribution of underwater sound speed poses significant challenges for underwater ranging and target localization. In the presence of sound ray bending and sensor node position errors in underwater acoustic sensor networks (UASNs), this paper proposes a joint TOA-AOA deep-sea target localization framework based on sound ray bending correction. From the perspective of information theory and time series analysis, the TOA measurements are time series signals carrying target position information, and the entropy-based analysis quantifies the fundamental limit on localization uncertainty. First, based on the TOA time series measurements and combined with the acoustic propagation characteristics of the deep sea, a sound ray bending correction method is adopted to improve the accuracy of slant range measurement. To enhance target localization accuracy, this paper proposes a two-step WLS closed-form solution based on TOA-AOA. To further reduce localization bias, a maximum likelihood estimation (MLE) method based on the Gauss-Newton algorithm is also derived. Subsequently, the paper derives and analyzes the Cramér-Rao lower bound (CRLB) for target localization, proving theoretically that jointly using TOA-AOA can improve localization accuracy. Simulations verify the performance of the proposed methods. The slant range estimation method based on sound ray bending correction effectively improves range measurement accuracy. The proposed closed-form solution enhances target localization accuracy, achieving the CRLB accuracy. The Gauss-Newton MLE solution can attain the CRLB accuracy under certain localization geometries and further reduces localization bias. Full article
(This article belongs to the Special Issue Time Series Analysis for Signal Processing)
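The Gauss-Newton refinement reduces, in its simplest form, to repeatedly relinearizing the range equations. The sketch below is an assumption-laden toy — constant sound speed, no ray bending, TOA only, invented anchor geometry — not the paper's joint TOA-AOA estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
p_true = np.array([37.0, 62.0])
c = 1500.0                                    # assumed constant sound speed, m/s
toa = np.linalg.norm(anchors - p_true, axis=1) / c + rng.normal(0, 1e-4, 4)

def gauss_newton(p, n_iter=15):
    # Minimize sum_i (toa_i - ||p - s_i|| / c)^2 by repeated linearization.
    for _ in range(n_iter):
        d = np.linalg.norm(p - anchors, axis=1)
        J = (p - anchors) / (c * d[:, None])  # Jacobian of predicted TOA wrt p
        r = toa - d / c                       # TOA residuals, seconds
        p = p + np.linalg.solve(J.T @ J, J.T @ r)
    return p

p_hat = gauss_newton(np.array([50.0, 50.0]))
print(p_hat)
```

In the paper, the two-step WLS closed form would supply the initial point, and the AOA measurements would add rows to the Jacobian.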

20 pages, 502 KB  
Article
Unit Linear Failure Rate Distribution with Applications in Socioeconomic and Reliability Data
by Asmaa S. Al-Moisheer, Khalaf S. Sultan, Mahmoud M. M. Mansour and Heba Nagaty
Symmetry 2026, 18(4), 554; https://doi.org/10.3390/sym18040554 - 24 Mar 2026
Abstract
In this paper, a new probability model is suggested, known as the Unit Linear Failure Rate Distribution (ULFRD), for analysing data expressed on the unit interval (0, 1), e.g., proportions, rates, and normalised indices. The proposed model is a transformation of the classical linear failure rate distribution to a finite domain and offers a variety of shapes that can model diverse hazard rate behaviour, such as the bathtub shapes common in reliability research. Various fundamental statistical features of the distribution are obtained. Parameter estimation is analysed under Type-II censoring, using both maximum likelihood and Bayesian estimation. Bayesian estimates are obtained under symmetric and asymmetric loss functions via a Metropolis–Hastings-within-Gibbs algorithm. The performance of the estimates is assessed via a simulation study across various sample sizes and censoring plans. Lastly, the generalisability of the proposed model is demonstrated with two real datasets from socioeconomic and reliability settings. The findings show that the ULFRD offers a flexible and competitive alternative for modelling bounded data. Full article
(This article belongs to the Section Mathematics)
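Type-II censoring has a simple likelihood structure: with only the r smallest of n lifetimes observed, the remaining n − r units contribute a survival term at the largest observed value. The ULFRD density is not reproduced in the abstract, so the sketch below uses an exponential stand-in, where the censored MLE also has a closed form to check against:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n, r = 50, 35                                # observe the r smallest of n lifetimes
x = np.sort(rng.exponential(scale=2.0, size=n))[:r]

def negloglik(lam):
    # log L = r observed log-densities + (n - r) log-survival terms at x_(r).
    return -(r * np.log(lam) - lam * x.sum() - (n - r) * lam * x[-1])

res = minimize_scalar(negloglik, bounds=(1e-6, 10.0), method="bounded")
lam_closed = r / (x.sum() + (n - r) * x[-1])  # closed-form Type-II censored MLE
print(res.x, lam_closed)
```

For the ULFRD itself the same censored log-likelihood is maximized numerically, since no closed form is available.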

33 pages, 566 KB  
Article
A Semiparametric Single-Index Modelling Approach to Learning Optimal Treatment Regimens with Interval-Censored Data
by Changhui Yuan, Shishun Zhao and Shiying Li
Symmetry 2026, 18(3), 532; https://doi.org/10.3390/sym18030532 - 20 Mar 2026
Abstract
Precision medicine tailored to individual patient characteristics is crucial for improving long-term health outcomes. In survival analysis, a significant challenge for learning an optimal treatment regimen is to handle censoring, which is ubiquitous due to insufficient follow-up or other reasons. While there exist some ready-made methods under right censoring, learning an optimal treatment regimen with the more complicated interval censoring mechanism is still unexplored. To address this significant gap, this work proposes a novel semiparametric single-index modeling method, in which the interaction between the treatment and a single-index combination of covariates is linked through an unknown monotonic function. The proposed approach can capture complex, nonlinear treatment–covariate relationships while maintaining interpretability for clinical decision-making. Our estimation strategy employs sieve maximum likelihood, utilizing monotone splines to approximate the cumulative baseline hazard and B-splines for the unknown link function. To tackle the challenge of maximizing the complicated likelihood, we develop a stable and computationally efficient EM algorithm. The consistency and asymptotic distribution of the resultant estimators are established through the empirical process theory. Simulation studies demonstrate that the proposed approach performs well in finite samples. An application to a clinical trial data set on AIDS highlights the practical utility of the proposed method. Full article
(This article belongs to the Section Mathematics)
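The monotone-spline device used in the estimation can be illustrated outside the EM/sieve machinery: a B-spline with nondecreasing coefficients is itself nondecreasing, so writing the coefficients as cumulative sums of exponentials enforces monotonicity for free. A least-squares toy on synthetic data (not the paper's likelihood):

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(3)
u = np.sort(rng.uniform(-2, 2, 200))
y = np.tanh(u) + rng.normal(0, 0.1, 200)       # monotone signal plus noise

k = 3                                           # cubic B-splines, clamped knots
knots = np.concatenate([[-2.0] * (k + 1), np.linspace(-1.5, 1.5, 5), [2.0] * (k + 1)])
m = len(knots) - k - 1                          # number of basis functions

def coef(par):
    # Nondecreasing coefficients => nondecreasing spline.
    return par[0] + np.concatenate([[0.0], np.cumsum(np.exp(par[1:]))])

def loss(par):
    return np.sum((y - BSpline(knots, coef(par), k)(u)) ** 2)

res = minimize(loss, np.zeros(m), method="Nelder-Mead", options={"maxiter": 5000})
g = BSpline(knots, coef(res.x), k)(np.linspace(-1.9, 1.9, 100))
```

The paper uses the same trick twice over, for the cumulative baseline hazard (monotone splines) and for the unknown link function, inside a sieve likelihood rather than least squares.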

20 pages, 718 KB  
Article
A Self-Determination Perspective in Healthcare: Leader–Member Exchange and Job Satisfaction in an Italian Sample
by Domenico Sanseverino, Alessandra Sacchi and Chiara Ghislieri
Healthcare 2026, 14(6), 794; https://doi.org/10.3390/healthcare14060794 - 20 Mar 2026
Abstract
Background/Objectives: Healthcare professionals operate in complex and demanding environments characterized by high workloads, emotional strain, and organizational pressures that can undermine well-being. According to Self-Determination Theory, the fulfillment of core psychological needs (autonomy, competence, and relatedness) leads to increased job satisfaction, a key indicator of occupational well-being. Additionally, leadership plays a central role in shaping needs-fulfilling environments. Drawing on Leader–Member Exchange Theory (LMX), which emphasizes that high-quality leader-follower relationships foster greater discretion, provide learning opportunities, and build constructive team interactions, this study aimed to examine whether supportive leadership is associated with job satisfaction through the mediation of autonomy, team task cohesion, and perceived training opportunities. Methods: Data were collected from a local health authority in Northern Italy through an anonymous online survey, completed by 697 healthcare professionals, including 546 non-medical healthcare staff (primarily nurses) and 151 physicians. Structural equation modeling with a robust maximum likelihood estimator was employed to test the mediation model, including professional role as a covariate. Results: Higher LMX was positively and directly associated with job satisfaction, through the partial mediation of autonomy, team cohesion, and training opportunities, all positively associated with satisfaction. Team task cohesion showed the strongest associations with both LMX and satisfaction. Physicians reported slightly higher levels of autonomy, training opportunities, and job satisfaction than non-medical professionals. Conclusions: The findings suggest that supportive leadership contributes to healthcare professionals’ job satisfaction both directly and indirectly by contributing to core needs fulfillment. 
Interventions that strengthen relational quality, promote team cohesion, and enhance professional development may help sustain well-being and adaptive functioning in high-demand healthcare environments. Full article
(This article belongs to the Special Issue Job Satisfaction and Mental Health of Workers: Second Edition)
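The mediation logic tested here (with SEM and a robust ML estimator in the paper) can be sketched with a single mediator and a bootstrap for the indirect effect. Everything below is synthetic: the path values 0.5, 0.4, and 0.3 are invented for illustration, not the study's estimates, and only the sample size matches.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 697                                        # sample size matching the study
lmx = rng.normal(size=n)
autonomy = 0.5 * lmx + rng.normal(size=n)                  # a path (assumed)
satisf = 0.3 * lmx + 0.4 * autonomy + rng.normal(size=n)   # c' and b paths (assumed)

def slope(X, y):
    # OLS coefficients with an intercept column prepended.
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    a = slope(lmx[i], autonomy[i])[1]
    b = slope(np.column_stack([lmx[i], autonomy[i]]), satisf[i])[2]
    boot.append(a * b)                         # indirect (mediated) effect a*b
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A full SEM fit would estimate all three mediators simultaneously with latent variables and robust standard errors; the product-of-coefficients bootstrap is the minimal version of the same idea.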

22 pages, 2810 KB  
Article
Economic Policy Uncertainty and Trade Flows: Evidence from the Asia-Pacific Region
by Manh Hung Nguyen, Thi Mai Thanh Tran and Sy An Pham
Economies 2026, 14(3), 99; https://doi.org/10.3390/economies14030099 - 19 Mar 2026
Abstract
Amidst the polycrisis of 2018–2024, Asia-Pacific trade flows exhibited a structural resilience that contrasts with traditional theoretical predictions of severe trade contraction under high uncertainty. This study investigates these resilience dynamics using a structural gravity model estimated via the Poisson Pseudo Maximum Likelihood (PPML) approach. The analysis utilizes a balanced panel of 14 key regional economies (N = 4914), explicitly disaggregated into geographic (ASEAN-6 vs. non-ASEAN) and global value chain (high vs. low GVC intensity) subgroups to capture heterogeneous responses. The empirical results confirm that economic policy uncertainty (EPU) acts as a significant trade friction (β = −3.371), consistent with the wait-to-invest mechanism of real options theory. However, this effect is heterogeneous and significantly mitigated by institutional frameworks. We identify a robust institutional shield effect, where participation in trade agreements effectively neutralizes the adverse transmission of policy shocks (interaction coefficient = 3.396). Furthermore, this study uncovers a structural break during periods of extreme geopolitical conflict, characterized by a convex U-shaped relationship between uncertainty and trade. This pattern provides macro-level evidence of a behavioral shift in regional supply chains from a just-in-time cost-efficiency optimization model to a just-in-case security maximization paradigm, consistent with precautionary inventory accumulation. These findings underscore the critical role of modern trade pacts as institutional credibility anchors and the necessity of adaptive strategies in navigating heightened macroeconomic volatility. Full article
(This article belongs to the Section International, Regional, and Transportation Economics)
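PPML is ordinary Poisson maximum likelihood applied to (possibly continuous, zero-inclusive) trade flows. A minimal IRLS implementation on synthetic data — no fixed effects, invented coefficients, not the paper's gravity specification:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5, -0.3])         # invented for the simulation
y = rng.poisson(np.exp(X @ beta_true))

def ppml(X, y, n_iter=50):
    # Iteratively reweighted least squares for the Poisson log-likelihood;
    # with the canonical log link this is exactly Newton's method.
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu           # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

beta_hat = ppml(X, y)
print(beta_hat)
```

At convergence the score X'(y − μ) vanishes, which is what makes PPML consistent for the gravity equation even when the flows are not integer counts.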

25 pages, 3930 KB  
Article
A Novel Unit Exponential Delay Time Distribution: Theory, Inference and Applications
by Ahmed M. Herzallah, Asmaa S. Al-Moisheer and Khalaf S. Sultan
Mathematics 2026, 14(6), 1029; https://doi.org/10.3390/math14061029 - 18 Mar 2026
Abstract
This paper introduces the Unit Exponential Delay Time Distribution (UEDTD), a two-parameter model for data with support in the unit interval (0,1). The model is derived using two distinct approaches: a transformation method applied to the Exponential Delay Time Distribution (EDTD), which itself arises as the convolution of two independent exponential random variables, and a product-convolution method based on two independent power-function random variables that connects the UEDTD to the Pareto distribution, offering additional interpretability and giving rise to several exact and efficient algorithms for generating random samples. The limit distribution is examined and key statistical properties are derived. The order statistics are discussed and formulated, with asymptotic results for the distribution of extremes. A reparameterization of the model is suggested to improve estimation stability, and the maximum likelihood approach is employed for parameter inference. A simulation study demonstrates the consistency and efficiency of the estimators across various sample sizes and parameter configurations. The practical applicability of the UEDTD is demonstrated through a real-world dataset, where it shows superior performance compared to established unit distributions, confirming the utility of the UEDTD for modeling proportional data in applied research. Full article
(This article belongs to the Section D1: Probability and Statistics)
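The two derivation routes coincide numerically. The abstract does not state the exact transform, so X = exp(−T) is an assumption below; it is consistent with the product-convolution link, because exp(−Exp(λ)) is a power-function variable with CDF x^λ on (0, 1):

```python
import numpy as np

rng = np.random.default_rng(6)
lam1, lam2, n = 2.0, 5.0, 100_000

# Route 1: the delay time T is the convolution (sum) of two independent
# exponentials (the EDTD); X = exp(-T) is an assumed map onto (0, 1).
t = rng.exponential(1 / lam1, n) + rng.exponential(1 / lam2, n)
x_sum = np.exp(-t)

# Route 2: product of two independent power-function variables,
# since U**(1/lam) has CDF x**lam on (0, 1) for uniform U.
x_prod = rng.uniform(size=n) ** (1 / lam1) * rng.uniform(size=n) ** (1 / lam2)

# Both routes share the mean E[X] = lam1/(lam1+1) * lam2/(lam2+1).
theory = lam1 / (lam1 + 1) * lam2 / (lam2 + 1)
print(x_sum.mean(), x_prod.mean(), theory)
```

Route 2 is the kind of exact, efficient sampler the abstract alludes to: two uniforms, two powers, one product.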

11 pages, 264 KB  
Article
Realigning Risk Management Priorities
by Indranil Ghosh
Mathematics 2026, 14(6), 1024; https://doi.org/10.3390/math14061024 - 18 Mar 2026
Abstract
Industry surveys have shown that efforts to thwart, or at least mitigate, strategic, operational, and financial risks are not always adequately aligned: financial risks receive most of the attention, whereas the less-mitigated strategic and operational risks account for most of a company's volatility. The novelty of this article lies in efficiently estimating the total number of possible risk sources for each of three types of risk, namely financial, operational, and strategic. To address the estimation problem, we put forward two approaches to what we call a proportional method, for a more balanced allocation of a company's risk management priorities and resources: a probability distribution is assumed for each of the three types of risk sources, and the number of risk sources is then estimated by maximum likelihood. In addition, we discuss a scenario in which the risk-thwarting probability for any of the three types of risk is unknown, along with the number of risk sources. Full article
(This article belongs to the Special Issue Computational Statistics with Applications)
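The abstract does not spell out the assumed distribution, so the sketch below uses one natural reading purely as an illustration: the number of discovered risk sources of a given type is Binomial(N, p) with known detection probability p, and the unknown total N is estimated by maximizing the likelihood over a grid.

```python
import numpy as np
from scipy.stats import binom

def mle_total(k, p, n_max=1000):
    # Maximize the Binomial(N, p) likelihood of observing k detections over N.
    N = np.arange(k, n_max + 1)
    return int(N[np.argmax(binom.pmf(k, N, p))])

# 12 sources found of each type, but detection is assumed much better for
# financial risks (p = 0.9) than for operational risks (p = 0.45):
print(mle_total(12, 0.9))    # -> 13
print(mle_total(12, 0.45))   # -> 26
```

The asymmetry is the point: the same count of discovered sources implies roughly twice as many total sources when detection is weak, which is the argument for reallocating attention toward the under-mitigated risk types.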
21 pages, 2363 KB  
Article
Probabilistic Modeling of Inter-Vehicle Spacing on Two-Lane Roads: Implications for Safety-Oriented and Sustainable Traffic Operations
by Andrea Pompigna, Giuseppe Cantisani and Giulia Del Serrone
Sustainability 2026, 18(6), 2896; https://doi.org/10.3390/su18062896 - 16 Mar 2026
Abstract
Accurate characterization of inter-vehicle spacing is fundamental for safety assessment and sustainable operation of road networks, particularly on two-lane rural roads where monitoring infrastructure is limited. Unlike temporal headways, vehicle spacing directly reflects physical vehicle interactions and roadway occupancy, making it a more appropriate variable for evaluating collision risk and operational efficiency. This study develops a probabilistic framework for modeling vehicle spacing based on the statistical isomorphism between Event Flows and Linear Fields of Random Points. Using a calibrated microscopic simulation model, spacing distributions are generated for unidirectional traffic over flow rates from 100 to 1300 veh/h. A Pearson Type III distribution is shown to consistently reproduce the observed asymmetry, kurtosis, and non-zero minimum spacing across traffic regimes. Distribution parameters are estimated via maximum likelihood and validated using a heuristic Kolmogorov–Smirnov procedure suitable for large samples. Results demonstrate systematic relationships between spacing distribution parameters and macroscopic traffic variables, enabling estimation of the probability of unsafe spacing conditions from commonly available traffic data. The proposed framework supports sustainability-oriented traffic management by providing a quantitative basis for safety evaluation and operational control without requiring extensive sensing infrastructure. Full article
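The fitting pipeline — Pearson Type III (a gamma with a location shift, accommodating the non-zero minimum spacing), maximum likelihood, and a Kolmogorov-Smirnov check — can be sketched on synthetic spacings; all numbers below are invented stand-ins for the simulated traffic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic spacings (m): skewed, with a non-zero minimum, as in the paper.
spacing = 20.0 + rng.gamma(shape=2.5, scale=30.0, size=2000)

# Pearson Type III is a shifted (three-parameter) gamma.
a, loc, scale = stats.gamma.fit(spacing)
D, _ = stats.kstest(spacing, lambda q: stats.gamma.cdf(q, a, loc, scale))
# After fitting, the textbook KS p-value is no longer valid, which is why
# the paper uses a heuristic Kolmogorov-Smirnov procedure instead.

# Probability of an "unsafe" spacing below some threshold (30 m, invented):
p_unsafe = stats.gamma.cdf(30.0, a, loc, scale)
print(D, p_unsafe)
```

The last line is the operational payoff described in the abstract: once the distribution parameters are tied to flow rate, P(spacing < threshold) can be read off from ordinary traffic counts.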

24 pages, 4894 KB  
Article
Power Load Probabilistic Prediction Based on Multi-Value Quantile Regression and Timing Fusion Ensemble Learning Model
by Yuhang Liu, Fei Mei, Jun Zhang, Xiang Dai and Wen Li
Entropy 2026, 28(3), 329; https://doi.org/10.3390/e28030329 - 16 Mar 2026
Abstract
The core component to ensure the refined and safe operation of distribution network scheduling is 10 kV bus load probabilistic prediction. However, existing probabilistic prediction methods suffer from insufficient dynamic feature extraction and compromised prediction reliability caused by quantile crossing. To address these issues, this paper proposes a 10 kV bus load probabilistic prediction method integrating multi-value quantile regression (MQR) and a temporal fusion ensemble learning model (ELM). Firstly, a temporal fusion ensemble learning model is constructed, which integrates multiple temporal fusion network (TFN) sub-models through a stacking framework to parallel extract multi-dimensional temporal features of loads, effectively enhancing its feature capture capability for complex load data. Secondly, MQR is introduced as the core objective function to synchronously generate multi-quantile load forecasting results, comprehensively depicting the load probability distribution. Finally, a Listwise Maximum Likelihood Estimation (ListMLE) ranking constraint mechanism is embedded, which optimizes quantile ordering through monotonicity constraints, significantly reducing the degree of quantile crossing and improving the interpretability of forecasting results. The results show that the MQR-ELM algorithm achieves a Prediction Interval Coverage Probability of 94.624% (close to the nominal coverage rate of 95%), a Prediction Interval Averaged Width of 588.526, a Crossing Degree Index of only 0.0476, and a Continuous Ranked Probability Score as low as 84.931. All core indicators are significantly superior to those of the comparative algorithms. Full article
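Two ingredients — the pinball (quantile) loss behind multi-quantile regression and the quantile-crossing problem — can be shown in a few lines. The sorting step below is a simple stand-in repair, not the paper's ListMLE ranking constraint, and the data are synthetic:

```python
import numpy as np

def pinball(y, q, tau):
    # Quantile (pinball) loss: its minimizer over constants is the tau-quantile.
    e = y - q
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

rng = np.random.default_rng(8)
y = rng.normal(100.0, 10.0, 5000)              # stand-in "load" observations
taus = [0.1, 0.5, 0.9]
grid = np.linspace(70, 130, 601)
q_hat = np.array([grid[np.argmin([pinball(y, g, t) for g in grid])] for t in taus])

# Independently fitted quantiles can cross in general; sorting
# (rearrangement) is the simplest repair, whereas the paper enforces
# ordering during training via a ListMLE monotonicity constraint.
q_hat = np.sort(q_hat)
print(q_hat)
```

Enforcing the ordering during training, as the paper does, is preferable to post-hoc sorting because the loss itself then penalizes crossing rather than masking it.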

39 pages, 8656 KB  
Article
The Unit Arcsine–Exponential Distribution and Its Statistical Properties with Inference and Application to Reliability Data
by Asmaa S. Al-Moisheer, Khalaf S. Sultan, Moustafa N. Mousa and Mahmoud M. M. Mansour
Axioms 2026, 15(3), 218; https://doi.org/10.3390/axioms15030218 - 15 Mar 2026
Abstract
This paper presents a new continuous model, the Unit Arcsine–Exponential distribution (UASED), a flexible distribution on the unit interval. It is constructed via an exponential-based arcsine-type transformation, allowing it to represent a wide range of shapes for modelling proportions and rates. A number of basic properties are obtained, such as closed-form formulas for the quantile function, moments, and entropy measures. Maximum likelihood and maximum product of spacings methods are developed to estimate the parameters, and their performance is assessed by Monte Carlo simulation, which shows that both methods estimate the parameters reasonably well and remain stable over a variety of parameter settings. To demonstrate the model's practical usefulness, an application to real-world device failure-time reliability data is discussed. The findings indicate that the UASED fits the data well, capturing skewness and tail behavior effectively and competing favorably with existing unit distributions. All in all, the suggested model is a parsimonious alternative for modelling bounded data, with sound inferential characteristics and high practical utility. Full article
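Maximum product of spacings (MPS), the second estimation method used here, maximizes the log of the gaps between consecutive fitted CDF values. The UASED's closed forms are not reproduced in the abstract, so this sketch uses an exponential stand-in and compares MPS against the ordinary MLE:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(9)
x = np.sort(rng.exponential(scale=2.0, size=100))   # true rate 0.5

def neg_log_spacings(rate):
    # Spacings D_i = F(x_(i)) - F(x_(i-1)), padded with F = 0 and F = 1.
    F = expon.cdf(x, scale=1.0 / rate)
    gaps = np.diff(np.concatenate([[0.0], F, [1.0]]))
    return -np.sum(np.log(np.clip(gaps, 1e-300, None)))

mps = minimize_scalar(neg_log_spacings, bounds=(1e-3, 10.0), method="bounded").x
mle = 1.0 / x.mean()                                # MLE for comparison
print(mps, mle)
```

For well-behaved models the two estimators nearly agree, as here; MPS earns its keep when the density is unbounded near a support boundary, a common situation for unit-interval distributions.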

34 pages, 2605 KB  
Article
Quasi-Maximum Exponential Likelihood Estimation of Conditional Quantiles for GARCH Models Based on High-Frequency Augmented Data
by Zhenming Zhang, Shishun Zhao, Jianhua Cheng and Anze Wang
Entropy 2026, 28(3), 326; https://doi.org/10.3390/e28030326 - 13 Mar 2026
Abstract
GARCH models play a fundamental role in modeling time-varying volatility in financial return series. In practice, financial returns are also well known to exhibit heavy-tailed distributions, which naturally motivates the use of quasi-maximum exponential likelihood estimation (QMELE) for accurately capturing tail behavior and risk measures such as Value-at-Risk. At the same time, the increasing availability of intraday high-frequency data has led to the development of high-frequency augmented GARCH models, which incorporate intraday information into conventional low-frequency volatility frameworks. By exploiting transaction-level data recorded at very fine time scales, these models are able to capture intraday volatility dynamics and market microstructure effects that are not reflected in standard low-frequency observations. Against this background, this paper studies conditional quantile estimation for high-frequency augmented GARCH models. We develop QMELE-based estimators for both model parameters and conditional quantiles, and construct an adjusted test statistic for assessing model adequacy. The asymptotic properties of the proposed estimators and test statistic are established, and their finite-sample performance is examined through extensive simulation studies. Empirical applications to three major stock indices demonstrate that augmenting GARCH models with high-frequency information leads to substantial improvements in conditional quantile estimation compared with traditional low-frequency approaches. Full article
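QMELE replaces the Gaussian quasi-likelihood with an exponential (Laplace) one, minimizing Σ_t [log σ_t + |y_t|/σ_t], which is more robust to heavy tails. Below is a minimal GARCH(1,1) sketch on simulated data — no high-frequency augmentation, invented parameters; note QMELE identifies the volatility scale only up to E|ε_t|, so ω and α absorb a scale factor while β is unaffected:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
T = 1000
omega, alpha, beta = 0.1, 0.1, 0.8              # invented GARCH(1,1) parameters
eps = rng.laplace(0.0, 1.0 / np.sqrt(2.0), T)   # unit-variance, heavy-tailed
y = np.zeros(T)
h = omega / (1.0 - alpha - beta)
for t in range(T):
    if t > 0:
        h = omega + alpha * y[t - 1] ** 2 + beta * h
    y[t] = np.sqrt(h) * eps[t]

def qmele_loss(par):
    w, a, b = np.exp(par)                        # positivity via log-parameters
    ht = np.empty(T)
    ht[0] = y.var()
    for t in range(1, T):
        ht[t] = w + a * y[t - 1] ** 2 + b * ht[t - 1]
    s = np.sqrt(ht)
    return np.sum(np.log(s) + np.abs(y) / s)     # exponential quasi-likelihood

start = np.log([0.05, 0.05, 0.7])
res = minimize(qmele_loss, start, method="Nelder-Mead", options={"maxiter": 2000})
w_hat, a_hat, b_hat = np.exp(res.x)
print(w_hat, a_hat, b_hat)
```

The paper's high-frequency augmentation would add realized intraday measures to the volatility recursion; conditional quantiles then follow by scaling the fitted σ_t by a quantile of the innovation distribution.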

33 pages, 4991 KB  
Article
Inference for Upper Record Ranked Set Sampling from Kies Model with k-Cycle Effect
by Zirui Chu, Min Wu, Liang Wang and Yuhlong Lio
Mathematics 2026, 14(6), 979; https://doi.org/10.3390/math14060979 - 13 Mar 2026
Abstract
This study investigates statistical inference for upper record ranked set sampling (URRSS) data from the Kies distribution. In multiple-cycle URRSS settings where the heterogeneity across cycles is non-ignorable, both classical and Bayesian approaches are adopted to estimate the unknown model parameters and associated reliability metrics. Likelihood-based point and interval estimates are derived for these parameters and reliability indices, and the existence and uniqueness of the maximum likelihood estimators for the Kies distribution parameters are rigorously established. Moreover, a hierarchical Bayesian framework is developed to accommodate cycle-specific variability, with a Metropolis–Hastings algorithm embedded within a Gibbs sampler proposed to facilitate posterior computation in complex scenarios. The performance of the suggested methods is assessed through extensive simulation studies, supplemented by two real-world data applications that demonstrate their practical utility. Numerical results show that the proposed estimators perform well overall, with the hierarchical Bayesian approach showing a particular advantage when uncertainty about the cycle effect is present. Full article
(This article belongs to the Section D1: Probability and Statistics)
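The reduced Kies distribution is commonly written with CDF F(x) = 1 − exp(−λ(x/(1−x))^β) on (0, 1) — taken here as an assumption, since the abstract does not restate it. This sketch illustrates only the plain-likelihood core, recovering the parameters from an ordinary random sample, with no ranked-set, record, or cycle structure:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
lam_true, beta_true, n = 1.5, 2.0, 1000
u = rng.uniform(size=n)
t = (-np.log(1.0 - u) / lam_true) ** (1.0 / beta_true)
x = t / (1.0 + t)                              # inverse-CDF sample on (0, 1)

def negloglik(par):
    lam, bet = np.exp(par)                     # positivity via log-parameters
    w = x / (1.0 - x)
    # log-density of F(x) = 1 - exp(-lam * w**bet), using dw/dx = (1-x)^-2.
    logf = (np.log(lam) + np.log(bet) + (bet - 1.0) * np.log(w)
            - 2.0 * np.log(1.0 - x) - lam * w ** bet)
    return -np.sum(logf)

res = minimize(negloglik, np.log([1.0, 1.0]), method="Nelder-Mead")
lam_hat, beta_hat = np.exp(res.x)
print(lam_hat, beta_hat)
```

The URRSS likelihood in the paper replaces the i.i.d. density product with the joint density of upper records across cycles, and the hierarchical Bayesian layer lets λ vary by cycle.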

28 pages, 5589 KB  
Article
A New Approach for Developing Combined Empirical Rainfall-Triggered Landslide Thresholds: Application to São Miguel Island (Azores, Portugal)
by Rui Fagundes Silva, Rui Marques and José Luís Zêzere
Water 2026, 18(6), 673; https://doi.org/10.3390/w18060673 - 13 Mar 2026
Abstract
Landslides, often triggered by intense or prolonged rainfall, pose significant risks to communities and infrastructure. Identifying accurate rainfall thresholds is crucial for predicting landslide events and developing effective early warning systems. This study, conducted on São Miguel Island (Azores), aimed to improve the predictive capability of rainfall thresholds by integrating both rainfall preparatory and rainfall trigger thresholds. Using data from 61 landslide events and rainfall measurements recorded at four stations between 1977 and 2020, the study applied the Generalised Extreme Value (GEV) distribution with Maximum Likelihood Estimation (MLE) to identify the cumulative rainfall–duration pair with the highest return period for each event, thereby establishing a preparatory threshold. The trigger threshold was determined by analysing the rainfall amount recorded on the day of the event while also accounting for the duration of the preparatory rainfall period. The final threshold combines both the preparatory and trigger thresholds, and an event is detected when both thresholds are exceeded. Preparatory thresholds showed similar patterns across the stations, with Sete Cidades and Furnas recording the highest cumulative rainfall values, while Santana and Ponta Delgada exhibited lower thresholds. The trigger thresholds at Furnas reflected the highest daily rainfall intensities. The analysis also indicated that the rainfall intensity required to trigger landslides decreases with increasing durations of the antecedent rainfall. Performance of the thresholds using ROC metrics revealed that the combined threshold outperformed the preparatory threshold alone by reducing false positives (FPs) and improving predictive accuracy. In all cases, the combined threshold demonstrated superior performance in detecting landslide events, highlighting its effectiveness in landslide prediction. 
This study provides a detailed analysis of rainfall thresholds for landslides on São Miguel Island and underscores the advantages of the combined threshold approach for improving landslide prediction and supporting the development of robust early warning systems. Full article
(This article belongs to the Section Hydrogeology)
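The GEV-with-MLE step — fit annual maxima, then convert an observed rainfall total into a return period — is a short computation. The values below are synthetic stand-ins for the station records (44 values to mirror the 1977-2020 span); note scipy's shape convention c = −ξ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
# Synthetic annual-maximum cumulative rainfall (mm) for one fixed duration;
# shape, location, and scale are invented, not the Azores estimates.
annmax = stats.genextreme.rvs(c=-0.1, loc=120.0, scale=35.0, size=44,
                              random_state=rng)

params = stats.genextreme.fit(annmax)          # MLE of (shape, loc, scale)
event_rain = float(np.percentile(annmax, 90))  # rainfall preceding an "event"
p_exceed = stats.genextreme.sf(event_rain, *params)
return_period = 1.0 / p_exceed                 # in years
print(return_period)
```

In the paper this computation is repeated over many cumulative rainfall-duration pairs per landslide event, and the pair with the highest return period defines the preparatory threshold.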

22 pages, 1101 KB  
Systematic Review
Radiomics for Detection and Differentiation of Intrahepatic Cholangiocarcinoma: A Systematic Review and Meta-Analysis
by Zayan Alidina, Illiyun Banani, Umm E. Abiha, Ujala Sultan and Timothy M. Pawlik
Cancers 2026, 18(6), 937; https://doi.org/10.3390/cancers18060937 - 13 Mar 2026
Abstract
Background: Intrahepatic cholangiocarcinoma (ICC) is an aggressive primary liver malignancy with limited survival, largely due to delayed diagnosis, recurrence and limited effective therapeutic options. Radiomics- and artificial intelligence (AI)-based imaging models have emerged as promising tools to improve noninvasive detection and differentiation of ICC. We conducted a systematic review and meta-analysis to evaluate the diagnostic performance of radiomics-based AI models for ICC. Methods: A systematic search of PubMed, Embase, Scopus, and the Cochrane Library was performed from inception through 2025 in accordance with PRISMA guidelines. Studies assessing radiomics- or AI-based models derived from CT, MRI, PET, or ultrasound for differentiation of ICC from other hepatic lesions were included. Pooled sensitivity, specificity, positive likelihood ratio (PLR), and negative likelihood ratio (NLR) were estimated using a bivariate random-effects model. Study quality and risk of bias were assessed using the Radiomics Quality Score (RQS) and QUADAS-2 tools. Results: Twenty retrospective studies comprising 8746 participants were included. Across pooled validation and test datasets, radiomics-based AI models demonstrated a pooled sensitivity of 0.77 (95% CI, 0.69–0.84) and specificity of 0.88 (95% CI, 0.78–0.94) for differentiating ICC from non-ICC hepatic lesions. The pooled PLR was 6.81 (95% CI, 3.51–13.2), and the pooled NLR was 0.23 (95% CI, 0.09–0.61). CT-based models showed higher diagnostic performance compared with MRI and ultrasound. Subgroup and meta-regression analyses identified imaging modality, contrast phase, segmentation strategy, and validation approach as contributors to interstudy heterogeneity. The overall methodological quality demonstrated a mean Radiomics Quality Score (RQS) of 14.0 (range 11–24), corresponding to approximately 39% of the maximum achievable score. 
External validation cohorts were incorporated in 60% of the studies, although adherence to standardized feature reproducibility frameworks varied. Conclusions: Radiomics-based AI models demonstrate clinically meaningful diagnostic accuracy for noninvasive differentiation of ICC and may complement conventional imaging in preoperative evaluation. Prospective, multicenter studies with standardized imaging protocols and rigorous external validation are required before routine clinical adoption. Full article
(This article belongs to the Section Systematic Review or Meta-Analysis in Cancer Research)
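As a sanity check on the pooled numbers: likelihood ratios follow from sensitivity and specificity by the standard formulas. Applied naively to the pooled point estimates they give values close to, but not equal to, the reported 6.81 and 0.23, because the bivariate random-effects model pools the LRs jointly across studies rather than deriving them from the pooled point estimates:

```python
sens, spec = 0.77, 0.88        # pooled point estimates from the review

plr = sens / (1.0 - spec)      # positive likelihood ratio
nlr = (1.0 - sens) / spec      # negative likelihood ratio
print(round(plr, 2), round(nlr, 2))   # -> 6.42 0.26
```

The gap between 6.42 and the reported 6.81 is expected behavior for bivariate meta-analysis, not an arithmetic inconsistency.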
