Search Results (1,824)

Search Parameters:
Keywords = exponential estimate

28 pages, 4529 KB  
Article
SEP-HMM: A Flexible Hidden Markov Model Framework for Asymmetric and Non-Mesokurtic Emission Patterns
by Didik Bani Unggul, Nur Iriawan, Irhamah Irhamah and Andriyas Aryo Prabowo
Mathematics 2026, 14(3), 393; https://doi.org/10.3390/math14030393 - 23 Jan 2026
Abstract
This paper proposes a new Hidden Markov Model (HMM) framework integrated with the Skew Exponential Power (SEP) distribution, named SEP-HMM. The primary advantage of this method is its ability to capture and represent asymmetric and non-mesokurtic emission patterns, which are often encountered in real-world phenomena. This advantage makes it more flexible than well-known HMMs, such as Gaussian-HMM, which are still rigidly based on symmetric and mesokurtic assumptions. We formulate and present its complete algorithm for parameter estimation and hidden state decoding. To test its effectiveness, we run simulations with various scenarios and apply SEP-HMM to real datasets consisting of stock price and temperature datasets. In the simulations conducted, the superiority of SEP-HMM compared to Gaussian-HMM and Skew Normal-HMM is confirmed in most of the replications, both in assessing model fit and in identifying hidden states. This is also supported by the real-case dataset, where SEP-HMM outperforms the benchmark models in all tested metrics. Full article
(This article belongs to the Special Issue Statistics and Data Science)
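As a rough illustration of the kind of framework the abstract describes, the sketch below evaluates an HMM log-likelihood with the forward algorithm and a pluggable emission density. The Gaussian placeholder stands in for the SEP density, whose exact parameterization is not reproduced here; this is an illustrative sketch, not the authors' SEP-HMM implementation.

```python
# Minimal sketch: forward-algorithm log-likelihood for an HMM whose emission
# density is pluggable, so a skewed/heavy-tailed law such as the SEP could be
# swapped in for the Gaussian placeholder used here.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def hmm_loglik(obs, log_pi, log_A, emission_logpdf, params):
    """Forward algorithm in log space.

    obs            : (T,) observations
    log_pi         : (K,) log initial state probabilities
    log_A          : (K, K) log transition matrix
    emission_logpdf: callable(obs_t, params_k) -> log density
    params         : list of K per-state emission parameter dicts
    """
    K = len(params)
    log_b = np.array([[emission_logpdf(x, params[k]) for k in range(K)] for x in obs])
    alpha = log_pi + log_b[0]                      # log alpha_1(k)
    for t in range(1, len(obs)):
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_b[t]
    return logsumexp(alpha)                        # log p(obs | model)

# Gaussian placeholder emission; an SEP log-density would be plugged in instead.
gauss_logpdf = lambda x, p: norm.logpdf(x, loc=p["mu"], scale=p["sigma"])

log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
params = [{"mu": 0.0, "sigma": 1.0}, {"mu": 3.0, "sigma": 2.0}]
obs = np.random.default_rng(0).normal(size=200)
print(hmm_loglik(obs, log_pi, log_A, gauss_logpdf, params))
```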
19 pages, 11982 KB  
Article
A Baseflow Equation: Example of the Middle Yellow River Basins
by Haoxu Tong and Li Wan
Water 2026, 18(2), 280; https://doi.org/10.3390/w18020280 - 21 Jan 2026
Viewed by 54
Abstract
Existing baseflow estimation methods—such as exponential recession models, linear reservoir approaches, and digital filtering techniques—seldom account for anthropogenic disturbances or evapotranspiration-induced streamflow alterations. To address this limitation, a physically based baseflow equation that explicitly integrates human water withdrawals and evapotranspiration losses has been introduced. The governing equation was reformulated from a nonlinear storage–discharge relationship and validated against multi-decadal streamflow records in the Middle Yellow River Basin (MYRB). Results demonstrate that the proposed model accurately reproduces observed recession behavior across diverse sub-basins (NSE ≥ 0.94; RMSE ≤ 152 m³ s⁻¹). By providing reliable baseflow estimates, the equation enables quantitative assessment of eco-hydrological benefits and informs cost-effective water-resource investments. Furthermore, long-term baseflow simulations driven by climate projections offer a scientific basis for evaluating climate-change impacts on regional water security. Full article
(This article belongs to the Special Issue Application of Hydrological Modelling to Water Resources Management)
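As a hedged sketch of the storage–discharge reasoning the abstract refers to (the paper's actual governing equation is not given in this listing), adding withdrawals W(t) and evapotranspiration E(t) to the water balance of an assumed power-law reservoir gives:

```latex
% Mass balance of a reservoir with storage S, discharge Q, withdrawals W and
% evapotranspiration E, under an assumed power-law storage relation S = a Q^b:
%   dS/dt = -Q - E(t) - W(t)
\[
\frac{\mathrm{d}Q}{\mathrm{d}t} \;=\; -\,\frac{Q + E(t) + W(t)}{a\, b\, Q^{\,b-1}},
\qquad\text{which for } b = 1,\; E = W = 0 \text{ reduces to } Q(t) = Q_0\, e^{-t/a}.
\]
```

The classical exponential recession appears as the undisturbed linear-reservoir special case, which is the limitation the proposed equation is meant to relax.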
26 pages, 766 KB  
Article
Regression Extensions of the New Polynomial Exponential Distribution: NPED-GLM and Poisson–NPED Count Models with Applications in Engineering and Insurance
by Halim Zeghdoudi, Sandra S. Ferreira, Vinoth Raman and Dário Ferreira
Computation 2026, 14(1), 26; https://doi.org/10.3390/computation14010026 - 21 Jan 2026
Viewed by 139
Abstract
The New Polynomial Exponential Distribution (NPED), introduced by Beghriche et al. (2022), provides a flexible one-parameter family capable of representing diverse hazard shapes and heavy-tailed behavior. Regression frameworks based on the NPED, however, have not yet been established. This paper introduces two methodological extensions: (i) a generalized linear model (NPED-GLM) in which the distribution parameter depends on covariates, and (ii) a Poisson–NPED count regression model suitable for overdispersed and heavy-tailed count data. Likelihood-based inference, asymptotic properties, and simulation studies are developed to investigate the performance of the estimators. Applications to engineering failure-count data and insurance claim frequencies illustrate the advantages of the proposed models relative to classical Poisson, negative binomial, and Poisson–Lindley regressions. These developments substantially broaden the applicability of the NPED in actuarial science, reliability engineering, and applied statistics. Full article
(This article belongs to the Section Computational Engineering)
23 pages, 659 KB  
Article
Robust Lifetime Estimation from HPGe Radiation-Sensor Time Series Using Pairwise Ratios and MFV Statistics
by Victor V. Golovko
Sensors 2026, 26(2), 706; https://doi.org/10.3390/s26020706 - 21 Jan 2026
Viewed by 56
Abstract
High-purity germanium (HPGe) gamma-ray detectors are core instruments in nuclear physics and astrophysics experiments, where long-term stability and reliable extraction of decay parameters are essential. However, the standard exponential decay analyses of the detector time-series data are often affected by the strong correlations between the fitted parameters and the sensitivity to detector-related fluctuations and outliers. In this study, we present a robust analysis framework for HPGe detector decay data based on pairwise ratios and Steiner's most frequent value (MFV) statistic. By forming point-to-point ratios of background-subtracted net counts, the dependence on the absolute detector response is eliminated, removing the amplitude–lifetime correlation that is inherent to conventional regression. The resulting pairwise lifetime estimates exhibit heavy-tailed behavior, which is efficiently summarized using the MFV, a robust estimator designed for such distributions. For the case study, a long and stable dataset from an HPGe detector was used. These data were gathered during a low-temperature nuclear physics experiment focused on observing the 216 keV gamma-ray line in ⁹⁷Ru. Using measurements spanning approximately 10 half-lives, we obtain a mean lifetime of τ = 4.0959 ± 0.0007 (stat) ± 0.0110 (syst) d, corresponding to a half-life of T₁/₂ = 2.8391 ± 0.0005 (stat) ± 0.0076 (syst) d. These results demonstrate that the pairwise–MFV approach provides a robust and reproducible tool for analyzing long-duration HPGe detector data in nuclear physics and nuclear astrophysics experiments, particularly for precision decay measurements, detector-stability studies, and low-background monitoring. Full article
(This article belongs to the Special Issue Detectors & Sensors in Nuclear Physics and Nuclear Astrophysics)
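A minimal sketch of the pairwise-ratio idea, assuming simple exponential net counts: ratios cancel the absolute amplitude, each pair of time points yields a lifetime estimate, and a robust location summary is taken over the heavy-tailed set. The median below is only a stand-in for Steiner's MFV, and the data are synthetic; this is not the paper's analysis code.

```python
# Hedged sketch: pairwise lifetime estimates from background-subtracted counts,
# summarized with a robust location estimator (median as an MFV stand-in).
import numpy as np

rng = np.random.default_rng(1)
tau_true = 4.10                                        # days (illustrative)
t = np.arange(0.0, 41.0, 0.5)                          # ~10 half-lives of sampling
counts = rng.poisson(5.0e4 * np.exp(-t / tau_true))    # net counts per interval

# Pairwise lifetime estimates: tau_ij = (t_j - t_i) / ln(N_i / N_j) for i < j.
i, j = np.triu_indices(len(t), k=1)
valid = (counts[i] > 0) & (counts[j] > 0) & (counts[i] != counts[j])
tau_pairs = (t[j][valid] - t[i][valid]) / np.log(counts[i][valid] / counts[j][valid])

tau_hat = np.median(tau_pairs)                         # robust summary (MFV stand-in)
print(f"pairwise lifetime estimate: {tau_hat:.4f} d, "
      f"half-life: {tau_hat * np.log(2):.4f} d")
```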
29 pages, 425 KB  
Article
Analysis of Solutions to Nonlocal Tensor Kirchhoff–Carrier-Type Problems with Strong and Weak Damping, Multiple Mixed Time-Varying Delays, and Logarithmic-Term Forcing
by Aziz Belmiloudi
Symmetry 2026, 18(1), 172; https://doi.org/10.3390/sym18010172 - 16 Jan 2026
Viewed by 100
Abstract
In this contribution, we propose and study long-time behaviors of a new class of N-dimensional delayed Kirchhoff–Carrier-type problems with variable transfer coefficients involving a logarithmic nonlinearity. We take into account the dependence of diffusion and damping coefficients on the position and direction, as well as the presence of different types of delays. This class of nonlocal anisotropic and nonlinear wave-type equations with multiple time-varying mixed delays and dampings, of a fairly general form, containing several arbitrary functions and free parameters, is of the following form: ∂²u/∂t² − div(K(‖∇_σ u‖²_{L²(Ω)}) A_σ(x)∇u) + M(‖u‖²_{L²(Ω)}) u − div(ζ(t) A_σ(x)∇u_t) + d₀(t) u_t + D_r(x,t; u_t) = G(u), where u(x,t) is the state function, M and K are the nonlocal Kirchhoff operators and the nonlinear operator G(u) corresponds to a logarithmic source term. The symmetric tensor A_σ describes the anisotropic behavior and processes of the system, and the operator D_r represents the multiple time-varying mixed delays related to the velocity u_t. Our problem, which encompasses numerous equations already studied in the literature, is relevant to a wide range of practical and concrete applications. It not only considers anisotropy in diffusion, but it also assumes that the strong damping can be totally anisotropic (a phenomenon that has received very little mathematical attention in the literature). We begin with the reformulation of the problem into a nonlinear system coupling a nonlocal wave-type equation with ordinary differential equations, with the help of auxiliary functions. Afterward, we study the local existence and some necessary regularity results of the solutions by using the Faedo–Galerkin approximation, combining some energy estimates and the logarithmic Sobolev inequality. Next, by virtue of the potential well method combined with the Nehari manifold, conditions for global in-time existence are given. Finally, subject to certain conditions, the exponential decay of global solutions is established by applying a perturbed energy method. Many of the obtained results can be extended to the case of other nonlinear source terms. Full article
(This article belongs to the Section Mathematics)
12 pages, 248 KB  
Article
Blockwise Exponential Covariance Modeling for High-Dimensional Portfolio Optimization
by Congying Fan and Jacquline Tham
Symmetry 2026, 18(1), 171; https://doi.org/10.3390/sym18010171 - 16 Jan 2026
Viewed by 88
Abstract
This paper introduces a new framework for high-dimensional covariance matrix estimation, the Blockwise Exponential Covariance Model (BECM), which extends the traditional block-partitioned representation to the log-covariance domain. By exploiting the block-preserving properties of the matrix logarithm and exponential transformations, the proposed model guarantees strict positive definiteness while substantially reducing the number of parameters to be estimated through a blockwise log-covariance parameterization, without imposing any rank constraint. Within each block, intra- and inter-group dependencies are parameterized through interpretable coefficients and kernel-based similarity measures of factor loadings, enabling a data-driven representation of nonlinear groupwise associations. Using monthly stock return data from the U.S. stock market, we conduct extensive rolling-window tests to evaluate the empirical performance of the BECM in minimum-variance portfolio construction. The results reveal three main findings. First, the BECM consistently outperforms the Canonical Block Representation Model (CBRM) and the naïve 1/N benchmark in terms of out-of-sample Sharpe ratios and risk-adjusted returns. Second, adaptive determination of the number of clusters through cross-validation effectively balances structural flexibility and estimation stability. Third, the model maintains numerical robustness under fine-grained partitions, avoiding the loss of positive definiteness common in high-dimensional covariance estimators. Overall, the BECM offers a theoretically grounded and empirically effective approach to modeling complex covariance structures in high-dimensional financial applications. Full article
(This article belongs to the Section Mathematics)
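A small sketch of the matrix-log/exponential property the abstract leans on: exponentiating any symmetric log-domain matrix yields a symmetric positive definite covariance, and block-diagonal structure is preserved. The BECM's actual blockwise parameterization and kernel-based similarity measures are not reproduced here.

```python
# Hedged sketch: Sigma = expm(S) is symmetric positive definite for any
# symmetric S, and a block-diagonal S yields a block-diagonal Sigma.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)

def random_symmetric(n):
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2.0

# Parameterize in the log-domain: one symmetric block per asset group.
blocks = [random_symmetric(3), random_symmetric(4)]
S = np.block([
    [blocks[0], np.zeros((3, 4))],
    [np.zeros((4, 3)), blocks[1]],
])

Sigma = expm(S)                                   # guaranteed positive definite
print("min eigenvalue:", np.linalg.eigvalsh(Sigma).min())        # strictly > 0
print("off-diagonal block is zero:", np.allclose(Sigma[:3, 3:], 0.0))
print("round trip ok:", np.allclose(logm(Sigma), S, atol=1e-6))  # log recovers S
```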
31 pages, 13946 KB  
Article
The XLindley Survival Model Under Generalized Progressively Censored Data: Theory, Inference, and Applications
by Ahmed Elshahhat and Refah Alotaibi
Axioms 2026, 15(1), 56; https://doi.org/10.3390/axioms15010056 - 13 Jan 2026
Viewed by 93
Abstract
This paper introduces a novel extension of the classical Lindley distribution, termed the X-Lindley model, obtained by a specific mixture of exponential and Lindley distributions, thereby substantially enriching the distributional flexibility. To enhance its inferential scope, a comprehensive reliability analysis is developed under a generalized progressive hybrid censoring scheme, which unifies and extends several traditional censoring mechanisms and allows practitioners to accommodate stringent experimental and cost constraints commonly encountered in reliability and life-testing studies. Within this unified censoring framework, likelihood-based estimation procedures for the model parameters and key reliability characteristics are derived. Fisher information is obtained, enabling the establishment of asymptotic properties of the frequentist estimators, including consistency and normality. A Bayesian inferential paradigm using Markov chain Monte Carlo techniques is proposed by assigning a conjugate gamma prior to the model parameter under the squared error loss, yielding point estimates, highest posterior density credible intervals, and posterior reliability summaries with enhanced interpretability. Extensive Monte Carlo simulations, conducted under a broad range of censoring configurations and assessed using four precision-based performance criteria, demonstrate the stability and efficiency of the proposed estimators. The results reveal low bias, reduced mean squared error, and shorter interval lengths for the XLindley parameter estimates, while maintaining accurate coverage probabilities. The practical relevance of the proposed methodology is further illustrated through two real-life data applications from engineering and physical sciences, where the XLindley model provides a markedly improved fit and more realistic reliability assessment. By integrating an innovative lifetime model with a highly flexible censoring strategy and a dual frequentist–Bayesian inferential framework, this study offers a substantive contribution to modern survival theory. Full article
(This article belongs to the Special Issue Recent Applications of Statistical and Mathematical Models)
28 pages, 2058 KB  
Article
Tiny Language Model Guided Flow Q Learning for Optimal Task Scheduling in Fog Computing
by Bhargavi K and Sajjan G. Shiva
Algorithms 2026, 19(1), 60; https://doi.org/10.3390/a19010060 - 10 Jan 2026
Viewed by 147
Abstract
Fog computing is one of the rapidly growing platforms with an exponentially increasing demand for real-time data processing. The fog computing market is expected to reach USD 8358 million by the year 2030 with a compound annual growth rate of 50%. The wide adoption of fog computing by industries worldwide is due to advantages such as reduced latency, high operational efficiency, and high-level data privacy. The highly distributed and heterogeneous nature of fog computing leads to significant challenges related to resource management, data security, task scheduling, data privacy, and interoperability. A task typically represents a job generated by an IoT device, and an action indicates the way the scheduler decides to execute it. Task scheduling is one of the prominent issues in fog computing: tasks must be distributed among fog devices so that resources are utilized effectively and the Quality of Service (QoS) requirements of the applications are met. Improper task scheduling leads to increased execution time, overutilization of resources, data loss, and poor scalability; hence, optimal task distribution decisions are needed in a highly dynamic, resource-constrained, heterogeneous fog computing environment. Flow Q learning (FQL) is a reinforcement learning algorithm that uses a flow matching policy for action distribution. It can handle complex forms of data and multimodal action distributions, which makes it suitable for the highly volatile fog computing environment. However, flow Q learning struggles to achieve a proper trade-off between the expressive flow model and a reduction in the Q function, as it relies on a one-step optimization policy that introduces bias into the estimated Q function value. A Tiny Language Model (TLM) is a significantly smaller form of a Large Language Model (LLM) designed to operate in device-constrained environments. It can provide fair and systematic guidance to disproportionately biased deep learning models. In this paper, a novel TLM-guided flow Q learning framework is designed to address the task scheduling problem in fog computing. The neutrality and fine-tuning capability of the TLM are combined with the fast generation ability of the FQL algorithm. The framework is simulated using the Simcan2Fog simulator, considering the dynamic nature of the fog environment under finite and infinite resources. The performance is found to be good with respect to execution time, accuracy, response time, and latency. Further, the results obtained are validated using the expected value analysis method and are found to be satisfactory. Full article
14 pages, 1394 KB  
Article
A Model to Describe the Genetic Potential for Nitrogen Deposition and Estimate Amino Acid Intake in Poultry
by Edney Pereira da Silva, Michele Bernardino de Lima, Rita Brito Vieira and Nilva Kazue Sakomura
Poultry 2026, 5(1), 8; https://doi.org/10.3390/poultry5010008 - 9 Jan 2026
Viewed by 205
Abstract
The maximum protein or nitrogen deposition is commonly used as the basis for modeling the amino acid intake in growing birds. In previous studies, the exponential functions of the nitrogen balance data were used to estimate the theoretical maximum for nitrogen deposition (NDmaxT) as a reference model for the amino acid intake. However, this amino acid intake value is only valid for the period in which the NDmaxT was estimated. Additionally, physiological changes, such as the rapid development of reproductive organs and associated increases in protein deposition that occur in the period before the first egg is laid, should be considered in the models. Thus, this study was conducted to model the daily NDmaxT of pullets and integrate this value into the factorial model to estimate the daily methionine + cysteine (Met+Cys) intake. Our results showed that, up to 63 days of age, the values of NDmaxT obtained via the modeling procedure were 11% higher than the values predicted using the Gompertz function. At 105 days, there was a protein deposition peak from the growth of the reproductive organs, which contributed 14% of the variation in the model at this age. Alongside these factors, the integration of the models enabled daily Met+Cys estimates consistent with the literature; however, the recommendations varied according to the targeted daily protein deposition (50% or 60% of NDmaxT), daily feed intake, and amino acid utilization efficiency. The modeling approach demonstrated here for Met+Cys can be used to model other amino acid requirements and can be extended to other species. Full article
27 pages, 1856 KB  
Article
Waypoint-Sequencing Model Predictive Control for Ship Weather Routing Under Forecast Uncertainty
by Marijana Marjanović, Jasna Prpić-Oršić and Marko Valčić
J. Mar. Sci. Eng. 2026, 14(2), 118; https://doi.org/10.3390/jmse14020118 - 7 Jan 2026
Viewed by 223
Abstract
Ship weather routing optimization has evolved from deterministic great-circle navigation to sophisticated frameworks that account for dynamic environmental conditions and operational constraints. This paper presents a waypoint-sequencing Model Predictive Control (MPC) approach for energy-efficient ship weather routing under forecast uncertainty. The proposed rolling horizon framework integrates neural network-based vessel performance models with ensemble weather forecasts to enable real-time route adaptation while balancing fuel efficiency, navigational safety, and path smoothness objectives. The MPC controller operates with a 6 h control horizon and 24 h prediction horizon, re-optimizing every 6 h using updated meteorological forecasts. A multi-objective cost function prioritizes fuel consumption (60%), safety considerations (30%), and trajectory smoothness (10%), with an exponential discount factor (γ = 0.95) to account for increasing forecast uncertainty. The framework discretises planned routes into waypoints and optimizes heading angles and discrete speed options (12.0, 13.5, and 14.5 knots) at each control step. Validation using 21 transatlantic voyage scenarios with real hindcast weather data demonstrates the method’s capability to propagate uncertainties through ship performance models, yielding probabilistic estimates for attainable speed, fuel consumption, and estimated time of arrival (ETA). The methodology establishes a foundation for more advanced stochastic optimization approaches while offering immediate operational value through its computational tractability and integration with existing ship decision support systems. Full article
(This article belongs to the Special Issue The Control and Navigation of Autonomous Surface Vehicles)
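To make the cost structure concrete, the sketch below enumerates discrete speed plans over a short horizon and scores them with the discounted, weighted objective the abstract quotes (60% fuel, 30% safety, 10% smoothness, γ = 0.95, speeds 12.0/13.5/14.5 knots). The per-stage cost models are invented placeholders, not the paper's neural-network vessel performance models.

```python
# Hedged sketch of the discounted, weighted MPC cost:
# J = sum_k gamma^k (0.60*fuel_k + 0.30*safety_k + 0.10*smoothness_k).
# Weights, discount factor, and speed set come from the abstract; the stage
# cost models below are illustrative assumptions only.
import itertools

GAMMA = 0.95
WEIGHTS = {"fuel": 0.60, "safety": 0.30, "smooth": 0.10}
SPEEDS = (12.0, 13.5, 14.5)   # knots

def stage_cost(speed, prev_speed, wave_height):
    fuel = (speed / 14.5) ** 3                              # placeholder cubic fuel proxy
    safety = max(0.0, wave_height - 4.0) * (speed / 14.5)   # placeholder rough-sea penalty
    smooth = abs(speed - prev_speed) / 2.5                  # placeholder smoothness term
    return (WEIGHTS["fuel"] * fuel + WEIGHTS["safety"] * safety
            + WEIGHTS["smooth"] * smooth)

def best_speed_plan(wave_forecast, current_speed):
    """Exhaustive search over speed sequences on a short prediction horizon."""
    best_plan, best_cost = None, float("inf")
    for plan in itertools.product(SPEEDS, repeat=len(wave_forecast)):
        cost, prev = 0.0, current_speed
        for k, (v, hs) in enumerate(zip(plan, wave_forecast)):
            cost += GAMMA ** k * stage_cost(v, prev, hs)   # discounted stage cost
            prev = v
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan, best_cost

plan, cost = best_speed_plan(wave_forecast=[2.5, 4.8, 5.2, 3.0], current_speed=13.5)
print(plan, round(cost, 3))
```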
46 pages, 1025 KB  
Article
Confidence Intervals for the Difference and Ratio Means of Zero-Inflated Two-Parameter Rayleigh Distribution
by Sasipong Kijsason, Sa-Aat Niwitpong and Suparat Niwitpong
Symmetry 2026, 18(1), 109; https://doi.org/10.3390/sym18010109 - 7 Jan 2026
Viewed by 130
Abstract
The analysis of road traffic accidents often reveals asymmetric patterns, providing insights that support the development of preventive measures, reduce fatalities, and improve road safety interventions. The Rayleigh distribution, a continuous distribution with inherent asymmetry, is well suited for modeling right-skewed data and is widely used in scientific and engineering fields. It also shares structural characteristics with other skewed distributions, such as the Weibull and exponential distributions, and is particularly effective for analyzing right-skewed accident data. This study considers several approaches for constructing confidence intervals, including the percentile bootstrap, bootstrap with standard error, generalized confidence interval, method of variance estimates recovery, normal approximation, Bayesian Markov Chain Monte Carlo, and Bayesian highest posterior density methods. Their performance was evaluated through Monte Carlo simulation based on coverage probabilities and expected lengths. The results show that the HPD method achieved coverage probabilities at or above the nominal confidence level while providing the shortest expected lengths. Finally, all proposed confidence intervals were applied to fatalities recorded during the seven hazardous days of Thailand’s Songkran festival in 2024 and 2025. Full article
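For orientation, here is a minimal sketch of one of the listed interval constructions, the percentile bootstrap, applied to the ratio of means of two zero-inflated samples. The paper's intervals are built on the zero-inflated two-parameter Rayleigh model rather than raw resampling, and the data below are synthetic.

```python
# Hedged sketch: percentile bootstrap CI for the ratio of means of two
# zero-inflated, right-skewed samples (generic illustration only).
import numpy as np

def percentile_bootstrap_ci_ratio(x, y, level=0.95, n_boot=5000, seed=0):
    """Percentile bootstrap CI for mean(x) / mean(y)."""
    rng = np.random.default_rng(seed)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        ratios[b] = xb.mean() / yb.mean()
    alpha = 1.0 - level
    return np.quantile(ratios, [alpha / 2, 1.0 - alpha / 2])

rng = np.random.default_rng(3)
# Zero-inflated Rayleigh-like samples (illustrative, not the Songkran data).
x = np.where(rng.random(120) < 0.3, 0.0, rng.rayleigh(scale=5.0, size=120))
y = np.where(rng.random(150) < 0.2, 0.0, rng.rayleigh(scale=4.0, size=150))
print(percentile_bootstrap_ci_ratio(x, y))
```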
22 pages, 394 KB  
Article
A Fractional Calculus Approach to Energy Balance Modeling: Incorporating Memory for Responsible Forecasting
by Muath Awadalla and Abulrahman A. Sharif
Mathematics 2026, 14(2), 223; https://doi.org/10.3390/math14020223 - 7 Jan 2026
Viewed by 160
Abstract
Global climate change demands modeling approaches that are both computationally efficient and physically faithful to the system's long-term dynamics. Classical Energy Balance Models (EBMs), while valuable, are fundamentally limited by their memoryless exponential response, which fails to represent the prolonged thermal inertia of the climate system—particularly that associated with deep-ocean heat uptake. In this study, we introduce a fractional Energy Balance Model (fEBM) by replacing the classical integer-order time derivative with a Caputo fractional derivative of order α (0 < α ≤ 1), thereby embedding long-range memory directly into the model structure. We establish a rigorous mathematical foundation for the fEBM, including proofs of existence, uniqueness, and asymptotic stability, ensuring theoretical well-posedness and numerical reliability. The model is calibrated and validated against historical global mean surface temperature data from NASA GISTEMP and radiative forcing estimates from IPCC AR6. Relative to the classical EBM, the fEBM achieves a substantially improved representation of observed temperatures, reducing the root mean square error by approximately 29% during calibration (1880–2010) and by 47% in out-of-sample forecasting (2011–2023). The optimized fractional order α = 0.75 ± 0.03 emerges as a physically interpretable measure of aggregate climate memory, consistent with multi-decadal ocean heat uptake and observed persistence in temperature anomalies. Residual diagnostics and robustness analyses further demonstrate that the fractional formulation captures dominant temporal dependencies without overfitting. By integrating mathematical rigor, uncertainty quantification, and physical interpretability, this work positions fractional calculus as a powerful and responsible framework for reduced-order climate modeling and long-term projection analysis. Full article
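A hedged numerical sketch of the kind of model the abstract describes: an implicit L1 finite-difference scheme for a Caputo-fractional energy balance equation C·D^α T = F(t) − λT, the standard fractional generalization of the classical EBM. The forcing and parameter values are illustrative assumptions, not the paper's GISTEMP/AR6 calibration.

```python
# Hedged sketch: L1 scheme for the Caputo-fractional EBM C * D^alpha T = F - lam*T.
import numpy as np
from math import gamma as gamma_fn

def fractional_ebm(alpha, C, lam, forcing, h):
    """Integrate C * D^alpha T = F(t) - lam*T with the implicit L1 scheme."""
    n_steps = len(forcing)
    T = np.zeros(n_steps)
    # L1 weights w_j = (j+1)^(1-alpha) - j^(1-alpha)
    w = (np.arange(1, n_steps) ** (1 - alpha)
         - np.arange(0, n_steps - 1) ** (1 - alpha))
    c = h ** (-alpha) / gamma_fn(2 - alpha)
    for n in range(1, n_steps):
        # memory term: sum_{j=1}^{n-1} w_j (T_{n-j} - T_{n-j-1})
        diffs = (T[n - 1:0:-1] - T[n - 2::-1]) if n > 1 else np.array([])
        hist = np.dot(w[1:n], diffs)
        T[n] = (forcing[n] + C * c * (T[n - 1] - hist)) / (C * c + lam)
    return T

years = np.arange(1880, 2024)
F = 0.04 * (years - 1880)                          # toy linear forcing, W m^-2
T = fractional_ebm(alpha=0.75, C=8.0, lam=1.2, forcing=F, h=1.0)
print(f"warming over record: {T[-1]:.2f} K")
```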
34 pages, 5123 KB  
Article
Comparative Analysis of Tail Risk in Emerging and Developed Equity Markets: An Extreme Value Theory Perspective
by Sthembiso Dlamini and Sandile Charles Shongwe
Int. J. Financial Stud. 2026, 14(1), 11; https://doi.org/10.3390/ijfs14010011 - 6 Jan 2026
Viewed by 583
Abstract
This research explores the application of extreme value theory in modelling and quantifying tail risks across different economic equity markets, with focus on the Nairobi Securities Exchange (NSE20), the South African Equity Market (FTSE/JSE Top40) and the US Equity Index (S&P500). The study aims to recommend the most suitable probability distribution between the Generalised Extreme Value Distribution (GEVD) and the Generalised Pareto Distribution (GPD) and to assess the associated tail risk using the value-at-risk and expected shortfall. To address volatility clustering, four generalised autoregressive conditional heteroscedasticity (GARCH) models (standard GARCH, exponential GARCH, threshold-GARCH and APARCH (asymmetric power ARCH)) are first applied to returns before implementing the peaks-over-threshold and block maxima methods on standardised residuals. For each equity index, the probability models were ranked based on goodness-of-fit and accuracy using a combination of graphical and numerical methods as well as the comparison of empirical and theoretical risk measures. Beyond its technical contributions, this study has broader implications for building sustainable and resilient financial systems. The results indicate that, for the GEVD, the maxima and minima returns of block size 21 yield the best fit for all indices. For GPD, Hill’s plot is the preferred threshold selection method across all indices due to higher exceedances. A final comparison between GEVD and GPD is conducted to estimate tail risk for each index, and it is observed that GPD consistently outperforms GEVD regardless of market classification. Full article
(This article belongs to the Special Issue Financial Markets: Risk Forecasting, Dynamic Models and Data Analysis)
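As a compact illustration of the peaks-over-threshold step, the sketch below fits a GPD to threshold exceedances and plugs the fit into the standard GPD-based VaR and expected-shortfall formulas. It works on raw synthetic losses; the paper first filters returns with GARCH-type models and applies POT to the standardized residuals.

```python
# Hedged sketch: GPD peaks-over-threshold estimates of VaR and expected shortfall.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(4)
losses = rng.standard_t(df=4, size=5000)          # toy heavy-tailed loss series

u = np.quantile(losses, 0.95)                     # threshold (95th percentile)
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0)  # shape xi, scale beta

p = 0.99
n, n_u = len(losses), len(exceedances)
var_p = u + (beta / xi) * (((1 - p) / (n_u / n)) ** (-xi) - 1)   # VaR_p
es_p = var_p / (1 - xi) + (beta - xi * u) / (1 - xi)             # ES_p (valid for xi < 1)
print(f"VaR_{p:.2f} = {var_p:.3f}, ES_{p:.2f} = {es_p:.3f}")
```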
16 pages, 3532 KB  
Article
A Fast Method for Estimating Generator Matrixes of BCH Codes
by Shunan Han, Yuanzheng Ge, Yu Shi and Renjie Yi
Electronics 2026, 15(1), 244; https://doi.org/10.3390/electronics15010244 - 5 Jan 2026
Viewed by 123
Abstract
The existing methods used for estimating generator matrixes of BCH codes, which are based on Galois Field Fourier transforms, need to exhaustively test all the possible codeword lengths and corresponding primitive polynomials. As the codeword length increases, the search space expands exponentially. Consequently, the computational complexity of the estimation scheme becomes very high. To overcome this limitation, a fast estimation method is proposed based on Gaussian elimination. Firstly, the encoded bit stream is reshaped into a matrix according to the assumed codeword length. Then, by using Gaussian elimination, the bit matrix is reduced to upper triangular form. By testing the independent columns of the upper triangular matrix, the assumed codeword length is judged to be correct or not. Simultaneously, by using an augmented matrix, the parity check matrix of a BCH code can be estimated from the simplification result in the procedure of Gaussian elimination. Furthermore, the generator matrix is estimated by using the orthogonality between the generator matrix and parity check matrix. To improve the performance of the proposed method in resisting bit errors, soft-decision data is adopted to evaluate the reliability of received bits, and reliable bits are selected to construct the matrix to be analyzed. Experimental results indicate that the proposed method can recognize BCH codes effectively. The robustness of our method is acceptable for application, and the computation required is much less than that of the existing methods. Full article
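A hedged sketch of the central linear-algebra step: reshape the bit stream by a candidate codeword length, row-reduce over GF(2), and use the presence of dependent columns to accept or reject the candidate length. The toy example uses a (7,4) Hamming code as a stand-in for a short BCH code; soft-decision bit selection and parity-check/generator recovery are not reproduced here.

```python
# Hedged sketch: GF(2) row reduction to detect the codeword length of a linear
# block code from its bit stream via rank deficiency of the reshaped matrix.
import numpy as np

def gf2_row_reduce(M):
    """Return (reduced matrix, pivot columns) of M over GF(2)."""
    M = (M.copy() % 2).astype(np.uint8)
    pivots, row = [], 0
    for col in range(M.shape[1]):
        candidates = np.nonzero(M[row:, col])[0]
        if candidates.size == 0:
            continue
        pr = row + candidates[0]
        M[[row, pr]] = M[[pr, row]]                 # move a pivot row into place
        others = np.nonzero(M[:, col])[0]
        others = others[others != row]
        M[others] ^= M[row]                         # clear the column elsewhere
        pivots.append(col)
        row += 1
        if row == M.shape[0]:
            break
    return M, pivots

def dependent_columns(bits, n):
    """Number of linearly dependent columns when the stream is reshaped by n."""
    rows = len(bits) // n
    B = np.asarray(bits[: rows * n], dtype=np.uint8).reshape(rows, n)
    _, pivots = gf2_row_reduce(B)
    return n - len(pivots)

# Toy check with a (7,4) Hamming code stream as a stand-in for a short BCH code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]], dtype=np.uint8)
rng = np.random.default_rng(5)
msgs = rng.integers(0, 2, size=(200, 4), dtype=np.uint8)
stream = ((msgs @ G) % 2).astype(np.uint8).reshape(-1)
# Expect a clear deficiency (n - k = 3) at the true length n = 7 and none at n = 6.
print(dependent_columns(stream, 7), dependent_columns(stream, 6))
```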
20 pages, 566 KB  
Article
Bayesian and Classical Inferences of Two-Weighted Exponential Distribution and Its Applications to HIV Survival Data
by Asmaa S. Al-Moisheer, Khalaf S. Sultan and Mahmoud M. M. Mansour
Symmetry 2026, 18(1), 96; https://doi.org/10.3390/sym18010096 - 5 Jan 2026
Viewed by 182
Abstract
The paper presents a statistical model based on the two-weighted exponential distribution (TWED) to examine censored Human Immunodeficiency Virus (HIV) survival data. Identifying HIV as a disability, the study endorses an inclusive and sustainable health policy framework through some predictive findings. The TWED provides an accurate representation of the inherent hazard patterns and also improves the modelling of survival data. Parameter estimation is carried out using both classical maximum likelihood estimation (MLE) and a Bayesian approach. Bayesian inference is performed under the general entropy loss and the symmetric squared error loss functions through the Markov Chain Monte Carlo (MCMC) method. Based on the symmetric properties of the inverse of the Fisher information matrix, the asymptotic confidence intervals (ACLs) for the MLEs are constructed. Moreover, two-sided symmetric credible intervals (CRIs) for the Bayesian estimates are constructed from the MCMC output generated with symmetric normal proposals. Simulation studies are used to assess the correctness and reliability of the proposed estimators. Implementing the model on actual HIV data illustrates its usefulness. Altogether, the paper supports the idea that statistics play an essential role in promoting disability-friendly and sustainable research in the field of public health in general. Full article
(This article belongs to the Section Mathematics)