Search Results (194)

Search Parameters:
Keywords = least squares Monte Carlo

31 pages, 1168 KiB  
Article
A Seasonal Transmuted Geometric INAR Process: Modeling and Applications in Count Time Series
by Aishwarya Ghodake, Manik Awale, Hassan S. Bakouch, Gadir Alomair and Amira F. Daghestani
Mathematics 2025, 13(15), 2334; https://doi.org/10.3390/math13152334 - 22 Jul 2025
Viewed by 313
Abstract
In this paper, the authors introduce a transmuted geometric integer-valued autoregressive model with periodicity, designed specifically to analyze epidemiological and public health time series data. The model uses a transmuted geometric distribution as the marginal distribution of the process and captures the varying tail behaviors seen in disease case counts and health data. Key statistical properties of the process, including the conditional mean and conditional variance, are derived, along with estimation techniques such as conditional least squares and conditional maximum likelihood. The ability to provide k-step-ahead forecasts makes this approach valuable for identifying disease trends and planning interventions. Monte Carlo simulation studies confirm the accuracy and reliability of the estimation methods. The effectiveness of the proposed model is demonstrated using three real-world public health datasets: weekly reported cases of Legionnaires’ disease, syphilis, and dengue fever. Full article
(This article belongs to the Special Issue Applied Statistics in Real-World Problems)
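
As a rough illustration of the conditional least squares (CLS) step mentioned in the abstract, the sketch below simulates a plain INAR(1) process with binomial thinning and geometric innovations and recovers the thinning parameter by regressing each count on its predecessor. It is a simplified stand-in under assumed parameter values, not the seasonal transmuted model of the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' seasonal transmuted model): a plain
# INAR(1) process with binomial thinning and geometric innovations, with the
# thinning parameter recovered by conditional least squares (CLS).
rng = np.random.default_rng(0)

def simulate_inar1(n, alpha, p):
    """X_t = alpha o X_{t-1} + eps_t, with eps_t geometric on {0, 1, ...}."""
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        thinned = rng.binomial(x[t - 1], alpha)        # binomial thinning
        x[t] = thinned + (rng.geometric(p) - 1)        # geometric innovation, mean (1-p)/p
    return x

x = simulate_inar1(2000, alpha=0.4, p=0.5)

# CLS: regress X_t on X_{t-1}; the slope estimates alpha,
# the intercept estimates the innovation mean.
A = np.column_stack([np.ones(len(x) - 1), x[:-1]])
coef, *_ = np.linalg.lstsq(A, x[1:], rcond=None)
print("CLS estimates: innovation mean %.3f, alpha %.3f" % (coef[0], coef[1]))
```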

21 pages, 1057 KiB  
Article
Hybrid Sensor Placement Framework Using Criterion-Guided Candidate Selection and Optimization
by Se-Hee Kim, JungHyun Kyung, Jae-Hyoung An and Hee-Chang Eun
Sensors 2025, 25(14), 4513; https://doi.org/10.3390/s25144513 - 21 Jul 2025
Viewed by 210
Abstract
This study presents a hybrid sensor placement methodology that combines criterion-based candidate selection with advanced optimization algorithms. Four established selection criteria—modal kinetic energy (MKE), modal strain energy (MSE), modal assurance criterion (MAC) sensitivity, and mutual information (MI)—are used to evaluate DOF sensitivity and generate candidate pools. These are followed by one of four optimization algorithms—greedy, genetic algorithm (GA), particle swarm optimization (PSO), or simulated annealing (SA)—to identify the optimal subset of sensor locations. A key feature of the proposed approach is the incorporation of constraint dynamics using the Udwadia–Kalaba (U–K) generalized inverse formulation, which enables the accurate expansion of structural responses from sparse sensor data. The framework assumes a noise-free environment during the initial sensor design phase, but robustness is verified through extensive Monte Carlo simulations under multiple noise levels in a numerical experiment. This combined methodology offers an effective and flexible solution for data-driven sensor deployment in structural health monitoring. Unlike conventional Moore–Penrose pseudo-inverses, which yield purely algebraic solutions without physical insight, the U–K formulation incorporates physical constraints derived from partial mode shapes directly into the reconstruction process. The resulting constrained dynamic solution reflects known structural behavior, improves numerical conditioning in underdetermined or ill-posed cases, and ensures that reconstructed responses satisfy dynamic compatibility, thereby reducing artifacts caused by sparse measurements or noise. Compared with unconstrained least-squares solutions, the U–K approach also improves stability and interpretability in practical SHM scenarios. Full article
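
A minimal sketch of criterion-guided greedy selection, assuming a synthetic mode-shape matrix: degrees of freedom are added one at a time so that the determinant of the modal Fisher information of the chosen rows grows fastest. This is only one of the criterion/optimizer pairings the paper considers; the matrix, sensor count, and regularization constant are placeholders.

```python
import numpy as np

# Greedy sensor selection on a hypothetical mode-shape matrix (60 DOFs x 4 modes):
# at each step, pick the DOF whose addition maximizes det(Phi_s^T Phi_s).
rng = np.random.default_rng(1)
phi = rng.normal(size=(60, 4))        # placeholder mode shapes
n_sensors = 8
reg = 1e-9 * np.eye(phi.shape[1])     # small ridge to break early-rank ties

selected = []
for _ in range(n_sensors):
    best_dof, best_det = None, -np.inf
    for dof in range(phi.shape[0]):
        if dof in selected:
            continue
        rows = phi[selected + [dof], :]
        det = np.linalg.det(rows.T @ rows + reg)   # information of candidate layout
        if det > best_det:
            best_dof, best_det = dof, det
    selected.append(best_dof)

print("selected DOFs:", sorted(selected))
```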

23 pages, 5310 KiB  
Article
Prediction of the Calorific Value and Moisture Content of Caragana korshinskii Fuel Using Hyperspectral Imaging Technology and Various Stoichiometric Methods
by Xuehong De, Haoming Li, Jianchao Zhang, Nanding Li, Huimeng Wan and Yanhua Ma
Agriculture 2025, 15(14), 1557; https://doi.org/10.3390/agriculture15141557 - 21 Jul 2025
Viewed by 240
Abstract
Calorific value and moisture content are the key indices used to evaluate the quality and combustion characteristics of Caragana pellet fuel. Calorific value measures the energy released by energy plants during combustion and thus determines energy utilization efficiency. At present, however, the calorific value of solid fuel is still determined in the laboratory by oxygen bomb calorimetry, which seriously hinders large-scale, rapid detection of fuel particles on industrial production lines. In response to this technical challenge, this study proposes using hyperspectral imaging technology combined with various chemometric methods to establish quantitative models for determining the moisture content and calorific value of Caragana korshinskii fuel. A hyperspectral imaging system was used to capture spectral data in the 935–1720 nm range for 152 samples from multiple regions of the Inner Mongolia Autonomous Region. For moisture content and calorific value, three quantitative detection models were established: partial least squares regression (PLSR), random forest regression (RFR), and extreme learning machine (ELM). Monte Carlo cross-validation (MCCV) was chosen to remove outliers from the raw spectral data and improve model accuracy. Four preprocessing methods were applied to the spectral data, with standard normal variate (SNV) preprocessing performing best for the moisture content model and Savitzky–Golay (SG) preprocessing performing best for the calorific value model. Meanwhile, to improve prediction accuracy and reduce redundant wavelength data, four feature extraction methods were chosen: competitive adaptive reweighted sampling (CARS), the successive projections algorithm (SPA), a genetic algorithm (GA), and iteratively retains informative variables (IRIV). These were combined with the three models to build quantitative detection models on the characteristic wavelengths for the moisture content and calorific value of Caragana korshinskii fuel. Finally, a comprehensive comparison of all modeling combinations showed that SNV-IRIV-PLSR was the best for moisture content prediction, with a prediction set determination coefficient (RP2), root mean square error of prediction (RMSEP), and relative percentage deviation (RPD) of 0.9693, 0.2358, and 5.6792, respectively. A moisture content distribution map of the Caragana fuel particles was also established using this model. The SG-CARS-RFR combination was the best for calorific value prediction, with RP2, RMSEP, and RPD of 0.8037, 0.3219, and 2.2864, respectively. This study provides an innovative technical solution for assessing the calorific value and quality of Caragana fuel particles. Full article
(This article belongs to the Section Agricultural Technology)
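
To illustrate the SNV preprocessing plus PLSR step named in the abstract, the sketch below applies row-wise standard normal variate scaling to synthetic spectra with multiplicative scatter and fits a partial least squares regression, reporting RMSEP and RPD. The spectra, target variable, and component count are placeholders, not the study's hyperspectral data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic spectra: one absorption feature whose depth tracks a stand-in
# moisture content, corrupted by per-sample multiplicative scatter (which SNV removes).
rng = np.random.default_rng(2)
n_samples, n_bands = 152, 256
conc = rng.uniform(5, 15, n_samples)                                  # stand-in moisture (%)
base = np.exp(-0.5 * ((np.arange(n_bands) - 120) / 25.0) ** 2)        # absorption band shape
scatter = rng.uniform(0.8, 1.2, n_samples)
X = scatter[:, None] * (1.0 + 0.02 * conc[:, None] * base)
X += rng.normal(scale=0.005, size=(n_samples, n_bands))

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(snv(X), conc, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)

resid = y_te - pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean(resid ** 2))
rpd = y_te.std() / rmsep
print(f"R2p={pls.score(X_te, y_te):.3f}  RMSEP={rmsep:.3f}  RPD={rpd:.2f}")
```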

40 pages, 600 KiB  
Article
Advanced Lifetime Modeling Through APSR-X Family with Symmetry Considerations: Applications to Economic, Engineering and Medical Data
by Badr S. Alnssyan, A. A. Bhat, Abdelaziz Alsubie, S. P. Ahmad, Abdulrahman M. A. Aldawsari and Ahlam H. Tolba
Symmetry 2025, 17(7), 1118; https://doi.org/10.3390/sym17071118 - 11 Jul 2025
Viewed by 221
Abstract
This paper introduces a novel and flexible class of continuous probability distributions, termed the Alpha Power Survival Ratio-X (APSR-X) family. Unlike many existing transformation-based families, the APSR-X class integrates an alpha power transformation with a survival ratio structure, offering a new mechanism for enhancing shape flexibility while maintaining mathematical tractability. This construction enables fine control over both the tail behavior and the symmetry properties, distinguishing it from traditional alpha power or survival-based extensions. We focus on a key member of this family, the two-parameter Alpha Power Survival Ratio Exponential (APSR-Exp) distribution, deriving essential mathematical properties including moments, quantile functions and hazard rate structures. We estimate the model parameters using eight frequentist methods: the maximum likelihood (MLE), maximum product of spacings (MPSE), least squares (LSE), weighted least squares (WLSE), Anderson–Darling (ADE), right-tailed Anderson–Darling (RADE), Cramér–von Mises (CVME) and percentile (PCE) estimation. Through comprehensive Monte Carlo simulations, we evaluate the estimator performance using bias, mean squared error and mean relative error metrics. The proposed APSR-X framework uniquely enables preservation or controlled modification of the symmetry in probability density and hazard rate functions via its shape parameter. This capability is particularly valuable in reliability and survival analyses, where symmetric patterns represent balanced risk profiles while asymmetric shapes capture skewed failure behaviors. We demonstrate the practical utility of the APSR-Exp model through three real-world applications: economic (tax revenue durations), engineering (mechanical repair times) and medical (infection durations) datasets. In all cases, the proposed model achieves a superior fit over that of the conventional alternatives, supported by goodness-of-fit statistics and visual diagnostics. These findings establish the APSR-X family as a unique, symmetry-aware modeling framework for complex lifetime data. Full article
(This article belongs to the Section Computer)
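
The Monte Carlo estimator-comparison protocol described in the abstract can be sketched on a much simpler model. The code below compares maximum likelihood and least squares (fitted to the empirical CDF) estimators of a plain exponential rate by bias and MSE over repeated samples; the APSR-Exp distribution itself and the other six methods are not reproduced, and all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
true_rate, n, reps = 1.5, 50, 2000

def lse_rate(x):
    """Least squares fit of the exponential CDF to plotting positions i/(n+1)."""
    x = np.sort(x)
    pos = np.arange(1, len(x) + 1) / (len(x) + 1)
    obj = lambda lam: np.sum((1 - np.exp(-lam * x) - pos) ** 2)
    return minimize_scalar(obj, bounds=(1e-6, 50), method="bounded").x

mle = np.empty(reps); lse = np.empty(reps)
for r in range(reps):
    sample = rng.exponential(scale=1 / true_rate, size=n)
    mle[r] = 1 / sample.mean()          # MLE of the rate
    lse[r] = lse_rate(sample)

for name, est in [("MLE", mle), ("LSE", lse)]:
    print(f"{name}: bias={est.mean() - true_rate:+.4f}  MSE={np.mean((est - true_rate) ** 2):.4f}")
```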

26 pages, 1556 KiB  
Article
Modified Two-Parameter Ridge Estimators for Enhanced Regression Performance in the Presence of Multicollinearity: Simulations and Medical Data Applications
by Muteb Faraj Alharthi and Nadeem Akhtar
Axioms 2025, 14(7), 527; https://doi.org/10.3390/axioms14070527 - 10 Jul 2025
Viewed by 240
Abstract
Predictive regression models often face a common challenge known as multicollinearity. This phenomenon can distort the results, causing models to overfit and produce unreliable coefficient estimates. Ridge regression is a widely used approach that incorporates a regularization term to stabilize parameter estimates and improve the prediction accuracy. In this study, we introduce four newly modified ridge estimators, referred to as RIRE1, RIRE2, RIRE3, and RIRE4, that are aimed at tackling severe multicollinearity more effectively than ordinary least squares (OLS) and other existing estimators under both normal and non-normal error distributions. The ridge estimators are biased, so their efficiency cannot be judged by variance alone; instead, we use the mean squared error (MSE) to compare their performance. Each new estimator depends on two shrinkage parameters, k and d, making the theoretical analysis complex. To address this, we employ Monte Carlo simulations to rigorously evaluate and compare these new estimators with OLS and other existing ridge estimators. Our simulations show that the proposed estimators consistently minimize the MSE better than OLS and other ridge estimators, particularly in datasets with strong multicollinearity and large error variances. We further validate their practical value through applications using two real-world datasets, demonstrating both their robustness and theoretical alignment. Full article
(This article belongs to the Special Issue Applied Mathematics and Mathematical Modeling)
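
A minimal sketch of the Monte Carlo MSE comparison under strong multicollinearity, using a standard one-parameter ridge estimator as a stand-in: the paper's RIRE1 to RIRE4 estimators use two shrinkage parameters k and d whose forms are not reproduced here, and the design parameters below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, rho, sigma, reps, k = 50, 4, 0.99, 5.0, 2000, 1.0
beta = np.ones(p)
cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)   # equi-correlated predictors
L = np.linalg.cholesky(cov)

mse_ols = mse_ridge = 0.0
for _ in range(reps):
    X = rng.normal(size=(n, p)) @ L.T
    y = X @ beta + rng.normal(scale=sigma, size=n)
    XtX, Xty = X.T @ X, X.T @ y
    b_ols = np.linalg.solve(XtX, Xty)                  # ordinary least squares
    b_ridge = np.linalg.solve(XtX + k * np.eye(p), Xty)  # ridge with shrinkage k
    mse_ols += np.sum((b_ols - beta) ** 2) / reps
    mse_ridge += np.sum((b_ridge - beta) ** 2) / reps

print(f"estimated MSE  OLS: {mse_ols:.3f}   ridge(k={k}): {mse_ridge:.3f}")
```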

13 pages, 771 KiB  
Article
Valuation of Euro-Convertible Bonds in a Markov-Modulated, Cox–Ingersoll–Ross Economy
by Yu-Min Lian, Jun-Home Chen and Szu-Lang Liao
Mathematics 2025, 13(13), 2075; https://doi.org/10.3390/math13132075 - 23 Jun 2025
Viewed by 207
Abstract
This study investigates the valuation of Euro-convertible bonds (ECBs) using a novel Markov-modulated cojump-diffusion (MMCJD) model, which effectively captures the dynamics of stochastic volatility and simultaneous jumps (cojumps) in both the underlying stock prices and foreign exchange (FX) rates. Furthermore, we introduce a Markov-modulated Cox–Ingersoll–Ross (MMCIR) framework to accurately model domestic and foreign instantaneous interest rates within a regime-switching environment. To manage computational complexity, the least-squares Monte Carlo (LSMC) approach is employed for estimating ECB values. Numerical analyses demonstrate that explicitly incorporating stochastic volatilities and cojumps significantly enhances the realism of ECB pricing, underscoring the novelty and contribution of our integrated modeling approach. Full article
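
Since least squares Monte Carlo is the search keyword here, a stripped-down Longstaff–Schwartz sketch may help: it prices a plain American put under geometric Brownian motion, regressing discounted continuation cashflows on a quadratic in the spot price at each exercise date. It is not the paper's regime-switching cojump ECB model, and all parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / steps
disc = np.exp(-r * dt)

# Simulate GBM paths (paths x steps+1, including the initial price)
z = rng.normal(size=(paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((paths, 1), S0), S])

payoff = np.maximum(K - S[:, -1], 0.0)            # value if held to maturity
for t in range(steps - 1, 0, -1):
    itm = (K - S[:, t]) > 0                       # regress only on in-the-money paths
    if itm.sum() < 3:
        payoff *= disc
        continue
    x = S[itm, t]
    y = payoff[itm] * disc                        # continuation value, discounted one step
    coeffs = np.polyfit(x, y, 2)                  # quadratic basis regression
    continuation = np.polyval(coeffs, x)
    exercise = K - x
    payoff[~itm] *= disc
    payoff[itm] = np.where(exercise > continuation, exercise, y)

print(f"LSMC American put price: {disc * payoff.mean():.3f}")
```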

20 pages, 6028 KiB  
Article
Improving Orbit Prediction of the Two-Line Element with Orbit Determination Using a Hybrid Algorithm of the Simplex Method and Genetic Algorithm
by Jinghong Liu, Chenyun Wu, Wanting Long, Bo Yuan, Zhengyuan Zhang and Jizhang Sang
Aerospace 2025, 12(6), 527; https://doi.org/10.3390/aerospace12060527 - 11 Jun 2025
Viewed by 419
Abstract
With the rapidly increasing number of satellites and orbital debris, collision avoidance and reentry prediction are very important for space situational awareness. Precise orbital prediction through orbit determination is crucial for enhancing space safety. Two-line element (TLE) data sets are publicly available to users worldwide; however, they have uneven quality and biases, resulting in exponential growth of orbital prediction errors in the along-track direction. A hybrid algorithm combining the simplex method and a genetic algorithm is proposed to improve orbit determination accuracy using TLEs. The parameters of the algorithm are tuned to achieve the best orbital prediction performance. Six satellites with consolidated prediction format (CPF) ephemerides and four satellites with precise orbit ephemerides (PODs) are chosen to test the performance of the algorithm. Compared with the results of the least-squares method and the simplex method based on Monte Carlo simulation, the new algorithm demonstrates its superiority in orbital prediction, achieving an accuracy improvement of up to 40.25% for 10 days of orbital prediction compared to using only the last two-line element. In addition, six satellites are used to evaluate time efficiency, and the experiments show that the hybrid algorithm is robust and computationally efficient. Full article
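
The hybrid global-plus-simplex idea can be sketched on a toy least-squares fitting problem. Below, SciPy's differential evolution stands in for the genetic algorithm's global search and Nelder–Mead provides the simplex refinement; the objective is a synthetic residual, not a real TLE propagation model, and the bounds and parameters are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(6)
truth = np.array([1.2, -0.7, 0.3])
t = np.linspace(0, 10, 200)
obs = truth[0] * np.sin(t) + truth[1] * np.cos(t) + truth[2] * t \
      + rng.normal(scale=0.05, size=t.size)

def residual_ss(params):
    """Sum of squared residuals between the toy model and observations."""
    model = params[0] * np.sin(t) + params[1] * np.cos(t) + params[2] * t
    return np.sum((model - obs) ** 2)

bounds = [(-5, 5)] * 3
coarse = differential_evolution(residual_ss, bounds, seed=0, maxiter=50, tol=1e-6)
refined = minimize(residual_ss, coarse.x, method="Nelder-Mead")   # simplex polish
print("coarse:", np.round(coarse.x, 4), " refined:", np.round(refined.x, 4))
```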

29 pages, 510 KiB  
Article
Statistical Inference and Goodness-of-Fit Assessment Using the AAP-X Probability Framework with Symmetric and Asymmetric Properties: Applications to Medical and Reliability Data
by Aadil Ahmad Mir, A. A. Bhat, S. P. Ahmad, Badr S. Alnssyan, Abdelaziz Alsubie and Yashpal Singh Raghav
Symmetry 2025, 17(6), 863; https://doi.org/10.3390/sym17060863 - 1 Jun 2025
Viewed by 452
Abstract
Probability models are instrumental in a wide range of applications because they can accurately model real-world data. Over time, numerous probability models have been developed and applied in practical scenarios. This study introduces the AAP-X family of distributions—a novel, flexible framework for continuous data analysis named after authors Aadil Ajaz and Parvaiz. The proposed family effectively accommodates both symmetric and asymmetric characteristics through its shape-controlling parameter, an essential feature for capturing diverse data patterns. A specific subclass of this family, termed the “AAP Exponential” (AAPEx) model, is designed to address the inflexibility of classical exponential distributions by accommodating versatile hazard rate patterns, including increasing, decreasing and bathtub-shaped patterns. Several fundamental mathematical characteristics of the introduced family are derived. The model parameters are estimated using six frequentist estimation approaches, including maximum likelihood, Cramér–von Mises, maximum product of spacings, ordinary least squares, weighted least squares and Anderson–Darling estimation. Monte Carlo simulations demonstrate the finite-sample performance of these estimators, revealing that maximum likelihood estimation and maximum product of spacings estimation exhibit superior accuracy, with bias and mean squared error decreasing systematically as the sample size increases. The practical utility and symmetric–asymmetric adaptability of the AAPEx model are validated through five real-world applications, with special emphasis on cancer survival times, COVID-19 mortality rates and reliability data. The findings indicate that the AAPEx model outperforms established competitors based on goodness-of-fit metrics such as the Akaike Information Criterion (AIC), Schwarz Information Criterion (SIC), corrected Akaike Information Criterion (AICC), Hannan–Quinn Information Criterion (HQIC), Anderson–Darling (A*) test statistic, Cramér–von Mises (W*) test statistic and the Kolmogorov–Smirnov (KS) test statistic with its associated p-value. These results highlight the relevance of symmetry in real-life data modeling and establish the AAPEx family as a powerful tool for analyzing complex data structures in public health, engineering and epidemiology. Full article
(This article belongs to the Section Mathematics)
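
Of the six estimation approaches listed, maximum product of spacings (MPS) is perhaps the least familiar; the sketch below applies it to a plain exponential sample rather than the AAPEx model (whose CDF is not reproduced here). MPS maximizes the sum of log spacings of the fitted CDF evaluated at the ordered data; all parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = np.sort(rng.exponential(scale=1 / 2.0, size=80))    # true rate = 2.0

def neg_log_spacings(rate):
    """Negative log product of spacings for an exponential CDF at the ordered sample."""
    cdf = 1 - np.exp(-rate * x)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    return -np.sum(np.log(np.clip(spacings, 1e-300, None)))

mps = minimize_scalar(neg_log_spacings, bounds=(1e-6, 50), method="bounded")
print(f"MPS rate estimate: {mps.x:.3f}   (MLE for comparison: {1 / x.mean():.3f})")
```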

26 pages, 529 KiB  
Article
A First-Order Autoregressive Process with Size-Biased Lindley Marginals: Applications and Forecasting
by Hassan S. Bakouch, M. M. Gabr, Sadiah M. A. Aljeddani and Hadeer M. El-Taweel
Mathematics 2025, 13(11), 1787; https://doi.org/10.3390/math13111787 - 27 May 2025
Viewed by 396
Abstract
In this paper, a size-biased Lindley (SBL) first-order autoregressive (AR(1)) process is proposed, the so-called SBL-AR(1). Some probabilistic and statistical properties of the proposed process are determined, including the distribution of its innovation process, the Laplace transformation function, multi-step-ahead conditional measures, autocorrelation, and spectral density function. In addition, the unknown parameters of the model are estimated via the conditional least squares and Gaussian estimation methods. The performance and behavior of the estimators are checked through some numerical results by a Monte Carlo simulation study. Additionally, two real-world datasets are utilized to examine the model’s applicability, and goodness-of-fit statistics are used to compare it to several pertinent non-Gaussian AR(1) models. The findings reveal that the proposed SBL-AR(1) model exhibits key theoretical properties, including a closed-form innovation distribution, multi-step conditional measures, and an exponentially decaying autocorrelation structure. Parameter estimation via conditional least squares and Gaussian methods demonstrates consistency and efficiency in simulations. Real-world applications to inflation expectations and water quality data reveal a superior fit over competing non-Gaussian AR(1) models, evidenced by lower values of the AIC and BIC statistics. Forecasting comparisons show that the classical conditional expectation method achieves accuracy comparable to some modern machine learning techniques, underscoring its practical utility for skewed and fat-tailed time series. Full article
(This article belongs to the Special Issue Statistical Simulation and Computation: 3rd Edition)
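
The conditional least squares fit and the conditional-expectation forecast mentioned above can be sketched on a plain stationary AR(1) with skewed innovations; the size-biased Lindley innovation structure is not reproduced, and the parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
phi_true, mu_true, n = 0.6, 5.0, 1000
x = np.empty(n)
x[0] = mu_true
for t in range(1, n):
    # AR(1) with mean-zero, right-skewed innovations (gamma minus its mean)
    x[t] = mu_true + phi_true * (x[t - 1] - mu_true) + rng.gamma(shape=2.0, scale=0.5) - 1.0

# Conditional least squares: regress X_t on X_{t-1}
A = np.column_stack([np.ones(n - 1), x[:-1]])
intercept, phi_hat = np.linalg.lstsq(A, x[1:], rcond=None)[0]
mu_hat = intercept / (1 - phi_hat)

# k-step-ahead conditional expectation: E[X_{t+k} | X_t] = mu + phi^k (X_t - mu)
k = 5
forecast = mu_hat + phi_hat**k * (x[-1] - mu_hat)
print(f"phi_hat={phi_hat:.3f}  mu_hat={mu_hat:.3f}  {k}-step forecast={forecast:.3f}")
```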

28 pages, 3529 KiB  
Article
A Coverage-Based Cooperative Detection Method for CDUAV: Insights from Prediction Error Pipeline Modeling
by Jiong Li, Xianhai Feng, Yangchao He and Lei Shao
Drones 2025, 9(6), 397; https://doi.org/10.3390/drones9060397 - 27 May 2025
Viewed by 341
Abstract
To address the challenges of detection and acquisition caused by trajectory prediction errors during the midcourse–terminal guidance handover phase in cross-domain unmanned aerial vehicles (CDUAV), this study proposes a collaborative multi-interceptor detection coverage optimization method based on predictive error pipeline modeling. Firstly, we employ nonlinear least squares to fit parameters for the motion model of CDUAV. By integrating error propagation theory, we derive a recursive expression for error pipelines under t-distribution and establish a parametric model for the target’s high-probability region (HPR). Next, we analyze target acquisition scenarios during guidance handover and reformulate the collaborative detection problem as a field-of-view (FOV) coverage optimization task on a two-dimensional detection plane. This framework incorporates the target HPR and the seeker detection FOV models, with an objective function defined for coverage optimization. Finally, inspired by wireless sensor network (WSN) coverage strategies, we implement the starfish optimization algorithm (SFOA) to enhance computational efficiency. Simulation results demonstrate that compared to Monte Carlo statistical methods, our parametric modeling approach reduces prediction error computation time from 15.82 s to 0.09 s while generating error pipeline envelopes with 99% confidence intervals, showing superior generalization capability. The proposed collaborative detection framework effectively resolves geometric coverage optimization challenges arising from mismatches between target HPR and FOV morphology, exhibiting rapid convergence and high computational efficiency. Full article
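
A toy evaluation of the field-of-view coverage objective on a two-dimensional detection plane may clarify the setup: here the target high-probability region is taken to be an ellipse, each seeker FOV is a circle, and the covered fraction is estimated by Monte Carlo sampling. These geometric assumptions, and all numbers, are placeholders; the paper's error-pipeline model and starfish optimization algorithm are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

def coverage_fraction(fov_centers, fov_radius, ellipse_axes, n_samples=20000):
    """Fraction of an elliptical HPR covered by circular FOVs, by Monte Carlo."""
    # Uniform points inside the ellipse (affine image of uniform points in a disk)
    theta = rng.uniform(0, 2 * np.pi, n_samples)
    r = np.sqrt(rng.uniform(0, 1, n_samples))
    pts = np.column_stack([ellipse_axes[0] * r * np.cos(theta),
                           ellipse_axes[1] * r * np.sin(theta)])
    # A point is covered if it lies inside any FOV circle
    d2 = ((pts[:, None, :] - fov_centers[None, :, :]) ** 2).sum(axis=2)
    covered = (d2 <= fov_radius ** 2).any(axis=1)
    return covered.mean()

centers = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # three interceptor FOV centers
print("covered fraction:", round(coverage_fraction(centers, 1.2, (3.0, 1.5)), 3))
```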

28 pages, 453 KiB  
Article
Bayesian Tapered Narrowband Least Squares for Fractional Cointegration Testing in Panel Data
by Oyebayo Ridwan Olaniran, Saidat Fehintola Olaniran, Ali Rashash R. Alzahrani, Nada MohammedSaeed Alharbi and Asma Ahmad Alzahrani
Mathematics 2025, 13(10), 1615; https://doi.org/10.3390/math13101615 - 14 May 2025
Viewed by 298
Abstract
Fractional cointegration has been extensively examined in time series analysis, but its extension to heterogeneous panel data with unobserved heterogeneity and cross-sectional dependence remains underdeveloped. This paper develops a robust framework for testing fractional cointegration in heterogeneous panel data, where unobserved heterogeneity, cross-sectional dependence, and persistent shocks complicate traditional approaches. We propose the Bayesian Tapered Narrowband Least Squares (BTNBLS) estimator, which addresses three critical challenges: (1) spectral leakage in long-memory processes, mitigated via tapered periodograms; (2) precision loss in fractional parameter estimation, resolved through narrowband least squares; and (3) unobserved heterogeneity in cointegrating vectors (θi) and memory parameters (ν,δ), modeled via hierarchical Bayesian priors. Monte Carlo simulations demonstrate that BTNBLS outperforms conventional estimators (OLS, NBLS, TNBLS), achieving minimal bias (0.041–0.256), near-nominal coverage probabilities (0.87–0.94), and robust control of Type 1 errors (0.01–0.07) under high cross-sectional dependence (ρ=0.8), while the Bayesian Chen–Hurvich test attains near-perfect power (up to 1.00) in finite samples. Applied to Purchasing Power Parity (PPP) in 18 fragile Sub-Saharan African economies, BTNBLS reveals statistically significant fractional cointegration between exchange rates and food price ratios in 15 countries (p<0.05), with a pooled estimate (θ^=0.33, p<0.001) indicating moderate but resilient long-run equilibrium adjustment. These results underscore the importance of Bayesian shrinkage and spectral tapering in panel cointegration analysis, offering policymakers a reliable tool to assess persistence of shocks in institutionally fragmented markets. Full article
(This article belongs to the Section D1: Probability and Statistics)
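
Only the tapering idea is sketched below: a Hann-tapered periodogram is compared with an untapered one for a highly persistent series (an AR(1) stand-in for a long-memory process), illustrating the spectral-leakage mitigation the abstract refers to. The full Bayesian tapered narrowband least squares estimator is not reproduced, and the series and bandwidth are assumptions.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(10)
n, phi = 4096, 0.95
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]            # persistent AR(1) stand-in

f_box, p_box = periodogram(x, window="boxcar", detrend=False)
f_hann, p_hann = periodogram(x, window="hann", detrend=False)

m = 20                                        # narrow band of lowest nonzero frequencies
print("mean low-frequency power, untapered :", round(p_box[1:m + 1].mean(), 2))
print("mean low-frequency power, Hann taper:", round(p_hann[1:m + 1].mean(), 2))
```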

31 pages, 3674 KiB  
Article
Self-Weighted Quantile Estimation for Drift Coefficients of Ornstein–Uhlenbeck Processes with Jumps and Its Application to Statistical Arbitrage
by Yuping Song, Ruiqiu Chen, Chunchun Cai, Yuetong Zhang and Min Zhu
Mathematics 2025, 13(9), 1399; https://doi.org/10.3390/math13091399 - 24 Apr 2025
Viewed by 498
Abstract
The estimation of drift parameters in the Ornstein–Uhlenbeck (O-U) process with jumps primarily employs methods such as maximum likelihood estimation, least squares estimation, and least absolute deviation estimation. These methods generally assume specific error distributions and finite variances. However, with increasing uncertainty in financial markets, asset prices exhibit characteristics such as skewness and heavy tails, which lead to biases in traditional estimators. This paper proposes a self-weighted quantile estimator for the drift parameters of the O-U process with jumps and establishes its asymptotic normality in large samples under certain assumptions. Furthermore, through Monte Carlo simulations, the proposed self-weighted quantile estimator is compared with least squares, quantile, and power variation estimators, with estimation performance evaluated using metrics such as the mean, standard deviation, and mean squared error (MSE). The simulation results show that the proposed self-weighted quantile estimator performs well across these metrics; for example, it reduces the MSE at the 0.9 quantile by 8.21% and 8.15% for the drift parameters γ and κ, respectively, compared with the traditional quantile estimator. Finally, the proposed estimator is applied to inter-period statistical arbitrage of CSI 300 Index Futures. The backtesting results indicate that the self-weighted quantile method performs well in empirical applications. Full article
(This article belongs to the Special Issue New Trends in Stochastic Processes, Probability and Statistics)
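
As a rough sketch of quantile-based drift estimation, the code below discretizes an O-U-type process with heavy-tailed noise via an Euler scheme and recovers γ and κ by plain median regression of the increments on the current level. The self-weighting scheme and the jump component of the paper are not reproduced, and all parameter values are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
gamma, kappa, sigma, dt, n = 2.0, 1.0, 0.5, 0.01, 5000

# Euler scheme for dX = (gamma - kappa * X) dt + noise, with t(3) noise
# standing in for skewed/heavy-tailed behavior.
x = np.empty(n)
x[0] = gamma / kappa
for t in range(1, n):
    x[t] = x[t - 1] + (gamma - kappa * x[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_t(df=3)

dx = np.diff(x)
X = sm.add_constant(x[:-1])                  # regress increments on [1, X_t]
fit = sm.QuantReg(dx, X).fit(q=0.5)
gamma_hat, kappa_hat = fit.params[0] / dt, -fit.params[1] / dt
print(f"gamma_hat={gamma_hat:.3f}  kappa_hat={kappa_hat:.3f}")
```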

34 pages, 2528 KiB  
Article
Inferences About Two-Parameter Multicollinear Gaussian Linear Regression Models: An Empirical Type I Error and Power Comparison
by Md Ariful Hoque, Zoran Bursac and B. M. Golam Kibria
Stats 2025, 8(2), 28; https://doi.org/10.3390/stats8020028 - 23 Apr 2025
Viewed by 463
Abstract
In linear regression analysis, the independence assumption is crucial, and the ordinary least squares (OLS) estimator, generally regarded as the Best Linear Unbiased Estimator (BLUE), is applied. However, multicollinearity can complicate the estimation of the effects of individual variables, leading to potentially inaccurate statistical inferences. Because of this issue, different types of two-parameter estimators have been explored. This paper compares t-tests for assessing the significance of regression coefficients, including several two-parameter estimators. We conduct a Monte Carlo study to evaluate these methods by examining their empirical type I error and power characteristics, based on established protocols. The simulation results indicate that some two-parameter estimators achieve better power gains while preserving the nominal size at 5%. Real-life data are analyzed to illustrate the findings of this paper. Full article
(This article belongs to the Section Statistical Methods)
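
The empirical type I error protocol can be sketched as follows: one coefficient is truly zero, the predictors are strongly collinear, and we count how often the 5% OLS t-test rejects. The two-parameter ridge-type estimators of the paper are not reproduced, and the design constants are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
n, rho, reps, alpha = 40, 0.95, 3000, 0.05
rejections = 0

for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)   # collinear with x1
    X = np.column_stack([np.ones(n), x1, x2])
    y = 1.0 + 0.5 * x1 + 0.0 * x2 + rng.normal(size=n)          # beta2 = 0 under H0
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    s2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])             # std. error of beta2
    t_stat = beta[2] / se
    if abs(t_stat) > stats.t.ppf(1 - alpha / 2, df=n - X.shape[1]):
        rejections += 1

print(f"empirical type I error: {rejections / reps:.3f} (nominal {alpha})")
```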

15 pages, 2645 KiB  
Article
Establishing Models for Predicting Above-Ground Carbon Stock Based on Sentinel-2 Imagery for Evergreen Broadleaf Forests in South Central Coastal Ecoregion, Vietnam
by Nguyen Huu Tam, Nguyen Van Loi and Hoang Huy Tuan
Forests 2025, 16(4), 686; https://doi.org/10.3390/f16040686 - 15 Apr 2025
Cited by 1 | Viewed by 1477
Abstract
In Vietnam, models for estimating Above-Ground Biomass (AGB) to predict carbon stock are primarily based on diameter at breast height (DBH), tree height (H), and wood density (WD). However, remote sensing has increasingly been recognized as a cost-effective and accurate alternative. Within this context, the present study aimed to develop correlation equations between Total Above-Ground Carbon (TAGC) and vegetation indices derived from Sentinel-2 imagery to enable direct estimation of carbon stock for assessing emissions and removals. In this study, the remote sensing indices most strongly associated with TAGC were identified using principal component analysis (PCA). TAGC values were calculated based on forest inventory data from 115 sample plots. Regression models were developed using Ordinary Least Squares and Maximum Likelihood methods and were validated through Monte Carlo cross-validation. The results revealed that Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index (SAVI), and Near Infrared Reflectance (NIR), as well as three variable combinations—(NDVI, ARVI), (SAVI, SIPI), and (NIR, EVI — Enhanced Vegetation Index)—had strong influences on TAGC. A total of 36 weighted linear and non-linear models were constructed using these selected variables. Among them, the quadratic models incorporating NIR and the (NIR, EVI) combination were identified as optimal, with AIC values of 756.924 and 752.493, R2 values of 0.86 and 0.87, and Mean Percentage Standard Errors (MPSEs) of 22.04% and 21.63%, respectively. Consequently, these two models are recommended for predicting carbon stocks in Evergreen Broadleaf (EBL) forests within Vietnam’s South Central Coastal Ecoregion. Full article
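
A compact sketch of the modeling pipeline only: NDVI computed from red and NIR reflectance, a quadratic regression against carbon stock, and Monte Carlo cross-validation via repeated random splits. The data below are synthetic placeholders, not the 115 field plots, and the split settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(13)
red = rng.uniform(0.02, 0.15, size=115)
nir = rng.uniform(0.20, 0.45, size=115)
ndvi = (nir - red) / (nir + red)                                        # NDVI from band reflectances
tagc = 30 + 120 * ndvi + 40 * ndvi**2 + rng.normal(scale=8, size=115)   # stand-in carbon stock

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
mccv = ShuffleSplit(n_splits=100, test_size=0.3, random_state=0)        # Monte Carlo CV
r2_scores = cross_val_score(model, ndvi.reshape(-1, 1), tagc, cv=mccv, scoring="r2")
print(f"Monte Carlo CV R2: mean={r2_scores.mean():.3f}  sd={r2_scores.std():.3f}")
```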

30 pages, 8061 KiB  
Article
Investment Analysis of Low-Carbon Yard Cranes: Integrating Monte Carlo Simulation and Jump Diffusion Processes with a Hybrid American–European Real Options Approach
by Ang Yang, Ang Li, Zongxing Li, Yuhui Sun and Jing Gao
Energies 2025, 18(8), 1928; https://doi.org/10.3390/en18081928 - 10 Apr 2025
Viewed by 505
Abstract
To realize a green and low-carbon transformation, some ports have in recent years explored sustainable equipment upgrades by adjusting the energy structure of their yard cranes. However, the investment process for hydrogen-powered yard cranes involves multiple uncertainties, and existing valuation methods fail to deal effectively with these dynamic changes and lack scientifically sound decision support tools. To address this problem, this study constructs a multi-factor real options model that integrates the dynamic uncertainties of hydrogen price, carbon price, and technology maturity. A geometric Brownian motion is used to simulate the hydrogen price, a Markov chain model with a jump diffusion term and stochastic volatility is used to simulate the carbon price, and a learning curve method is used to quantify the evolution of technology maturity. To accommodate the long investment cycles of ports, a hybrid American and European option strategy is designed, and the timing and scale of investment are dynamically optimized by Monte Carlo simulation and least squares regression. Based on an empirical analysis of Qingdao Port, the results show that the optimal investment plan for the hydrogen-powered yard crane project under the multi-factor option framework is to use an American-type option to maintain moderate flexibility in the early stage and a European-type option to lock in returns in the later stage. The study provides decision support for the green development of ports and enhances economic returns and carbon emission reduction benefits. Full article
(This article belongs to the Section C: Energy Economics and Policy)
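
Only the price-path inputs are sketched here: a geometric Brownian motion path for the hydrogen price and a jump-diffusion path for the carbon price, which in the paper would feed the least squares Monte Carlo valuation (see the Longstaff–Schwartz sketch earlier in this list). All parameter values are placeholders, and the regime-switching and real-options layers are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(14)
T, steps = 10.0, 10 * 252
dt = T / steps

# Hydrogen price: geometric Brownian motion
mu_h, sig_h, h0 = 0.01, 0.15, 35.0
h = h0 * np.exp(np.cumsum((mu_h - 0.5 * sig_h**2) * dt
                          + sig_h * np.sqrt(dt) * rng.normal(size=steps)))

# Carbon price: GBM plus compound Poisson jumps in the log price.
# At this step size the jump count per step is almost surely 0 or 1, so a
# single scaled normal per step is an adequate approximation of the jump sum.
mu_c, sig_c, c0 = 0.03, 0.25, 60.0
lam, jump_mu, jump_sig = 0.5, 0.05, 0.10          # about 0.5 jumps per year
n_jumps = rng.poisson(lam * dt, size=steps)
jumps = n_jumps * rng.normal(jump_mu, jump_sig, size=steps)
c = c0 * np.exp(np.cumsum((mu_c - 0.5 * sig_c**2) * dt
                          + sig_c * np.sqrt(dt) * rng.normal(size=steps) + jumps))

print(f"final hydrogen price {h[-1]:.2f}, final carbon price {c[-1]:.2f}")
```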