Search Results (479)

Search Parameters: Keywords = interval entropy

27 pages, 4595 KB  
Article
The Unit Inverse Maxwell–Boltzmann Distribution: A Novel Single-Parameter Model for Unit-Interval Data
by Murat Genç and Ömer Özbilen
Axioms 2025, 14(8), 647; https://doi.org/10.3390/axioms14080647 - 21 Aug 2025
Viewed by 204
Abstract
The Unit Inverse Maxwell–Boltzmann (UIMB) distribution is introduced as a novel single-parameter model for data constrained within the unit interval (0,1), derived through an exponential transformation of the Inverse Maxwell–Boltzmann distribution. Designed to address the limitations of traditional unit-interval distributions, the UIMB model exhibits flexible density shapes and hazard rate behaviors, including right-skewed, left-skewed, unimodal, and bathtub-shaped patterns, making it suitable for applications in reliability engineering, environmental science, and health studies. This study derives the statistical properties of the UIMB distribution, including moments, quantiles, survival, and hazard functions, as well as stochastic ordering, entropy measures, and the moment-generating function, and evaluates its performance through simulation studies and real-data applications. Various estimation methods, including maximum likelihood, Anderson–Darling, maximum product spacing, least-squares, and Cramér–von Mises, are assessed, with maximum likelihood demonstrating superior accuracy. Simulation studies confirm the model’s robustness under normal and outlier-contaminated scenarios, with MLE showing resilience across varying skewness levels. Applications to manufacturing and environmental datasets reveal the UIMB distribution’s exceptional fit compared to competing models, as evidenced by lower information criteria and goodness-of-fit statistics. The UIMB distribution’s computational efficiency and adaptability position it as a robust tool for modeling complex unit-interval data, with potential for further extensions in diverse domains. Full article
(This article belongs to the Section Mathematical Analysis)
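
The listing does not reproduce the UIMB density or the exact transform, so the following is only a minimal sketch of the general construction it describes, under the assumption (mine, not the authors') that the unit-interval variable arises as Y = exp(-X), with X the reciprocal of a Maxwell–Boltzmann variate; the scale value is hypothetical.

import numpy as np
from scipy import stats

# Assumed construction for illustration only: M ~ Maxwell-Boltzmann(scale=a),
# X = 1/M is an inverse Maxwell-Boltzmann variate, and Y = exp(-X) lies in (0, 1).
rng = np.random.default_rng(0)
a = 0.8                                   # hypothetical scale parameter
m = stats.maxwell.rvs(scale=a, size=10_000, random_state=rng)
y = np.exp(-1.0 / m)                      # unit-interval sample
print(y.min(), y.max(), y.mean())         # all values fall strictly inside (0, 1)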

16 pages, 1932 KB  
Article
2.5D Deep Learning and Machine Learning for Discriminative DLBCL and IDC with Radiomics on PET/CT
by Fei Liu, Wen Chen, Jianping Zhang, Jianling Zou, Bingxin Gu, Hongxing Yang, Silong Hu, Xiaosheng Liu and Shaoli Song
Bioengineering 2025, 12(8), 873; https://doi.org/10.3390/bioengineering12080873 - 12 Aug 2025
Viewed by 666
Abstract
We aimed to establish non-invasive diagnostic models comparable to pathology testing and explore reliable digital imaging biomarkers to classify diffuse large B-cell lymphoma (DLBCL) and invasive ductal carcinoma (IDC). Our study enrolled 386 breast nodules from 279 patients with DLBCL and IDC, which were pathologically confirmed and underwent 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) examination. Patients from two centers were separated into internal and external cohorts. Notably, we introduced 2.5D deep learning and machine learning to extract features, develop models, and discover biomarkers. Performances were assessed using the area under curve (AUC) and confusion matrix. Additionally, the Shapley additive explanation (SHAP) and local interpretable model-agnostic explanations (LIME) techniques were employed to interpret the model. On the internal cohort, the optimal model PT_TDC_SVM achieved an accuracy of 0.980 (95% confidence interval (CI): 0.957–0.991) and an AUC of 0.992 (95% CI: 0.946–0.998), surpassing the other models. On the external cohort, the accuracy was 0.975 (95% CI: 0.913–0.993) and the AUC was 0.996 (95% CI: 0.972–0.999). The optimal imaging biomarker PET_LBP-2D_gldm_DependenceEntropy demonstrated an average accuracy of 0.923/0.937 on internal/external testing. Our study presented an innovative automated model for DLBCL and IDC, identifying reliable digital imaging biomarkers with significant potential. Full article
(This article belongs to the Section Biosignal Processing)

22 pages, 474 KB  
Article
Fuzzy Multi-Attribute Group Decision-Making Method Based on Weight Optimization Models
by Qixiao Hu, Yuetong Liu, Chaolang Hu and Shiquan Zhang
Symmetry 2025, 17(8), 1305; https://doi.org/10.3390/sym17081305 - 12 Aug 2025
Viewed by 291
Abstract
For interval-valued intuitionistic fuzzy sets featuring complementary symmetry in evaluation relations, this paper proposes a novel, complete fuzzy multi-attribute group decision-making (MAGDM) method that optimizes both expert weights and attribute weights. First, an optimization model is constructed to determine expert weights by minimizing the cumulative difference between individual evaluations and the overall consistent evaluations derived from all experts. Second, based on the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), the improved closeness index for evaluating each alternative is obtained. Finally, leveraging entropy theory, a concise and interpretable optimization model is established to determine the attribute weight. This weight is then incorporated into the closeness index to enable the ranking of alternatives. Integrating these features, the complete fuzzy MAGDM algorithm is formulated, effectively combining the strengths of subjective and objective weighting approaches. To conclude, the feasibility and effectiveness of the proposed method are thoroughly verified and compared through detailed examination of two real-world cases. Full article
(This article belongs to the Section Mathematics)
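
The entropy-based attribute weighting mentioned above follows a standard recipe: column-normalize the decision matrix, compute each attribute's Shannon entropy, and give larger weights to lower-entropy (more discriminating) attributes. A minimal sketch of that classical entropy weight method with crisp, made-up scores; the paper's interval-valued intuitionistic version and its expert-weight optimization are more involved.

import numpy as np

def entropy_weights(X):
    """Classical entropy weight method for an (alternatives x attributes) matrix
    of non-negative scores: lower-entropy columns receive larger weights."""
    m, n = X.shape
    P = X / X.sum(axis=0)                         # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)    # define 0 * log(0) = 0
    e = -(P * logs).sum(axis=0) / np.log(m)       # entropy of each attribute
    d = 1.0 - e                                   # degree of divergence
    return d / d.sum()                            # normalized attribute weights

# Hypothetical decision matrix: 4 alternatives evaluated on 3 attributes.
X = np.array([[0.7, 0.4, 0.9],
              [0.6, 0.5, 0.3],
              [0.8, 0.4, 0.5],
              [0.5, 0.6, 0.7]])
print(entropy_weights(X))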

23 pages, 2357 KB  
Article
Heart Rate Variability Alterations During Delayed-Onset Muscle Soreness-Inducing Exercise—With Piezo2 Interpretation
by Gergely Langmár, Tekla Sümegi, Benjámin Fülöp, Lilla Pozsgai, Tamás Mocsai, Miklós Tóth, Levente Rácz, Bence Kopper, András Dér, András Búzás and Balázs Sonkodi
Sports 2025, 13(8), 262; https://doi.org/10.3390/sports13080262 - 10 Aug 2025
Cited by 1 | Viewed by 798
Abstract
Heart rate variability (HRV) is often modulated by pain; therefore, the objective of this study was to assess whether the induction of delayed-onset muscle soreness (DOMS) is already affected by HRV alterations during exercise, in spite of the fact that pain evolves only post-exercise. An isokinetic dynamometer was used to induce DOMS in this study on 19 young male elite handball players who were subjected to HRV measurements throughout a DOMS-inducing exercise session. The result of this study indicated that the heart rate (HR) dependence of time–frequency domain parameters could be described by an exponential-like function, while entropy showed a V-shaped function, with a minimum “turning point” separated by descending and ascending intervals. The DOMS protocol upshifted the time–frequency domain HRV parameters in the entire HR range, contrary to the sample entropy values that were systematically downshifted, indicative of an upregulated sympathetic tone. The group-averaged HR-dependent sample entropy function showed a nonlinear character under exercise, with lower values for higher DOMS than for the group with lower DOMS below the turning-point HR, and vice versa above it. The differences between the respective HRV(HR) point sets representing the low-DOMS and high-DOMS groups were quantified using a statistical method and found to be significant at the current sample size for all the HRV parameters used. Since oxidative stress is implicated in DOMS, we are the first to report that nonlinear alterations may impact HRV in a HR-dependent manner in DOMS using a Piezo2 interpretation. This finding provides further indirect evidence for an initiating neural microdamage that prevails under DOMS-inducing exercise, and the diagnostic detection of this point may provide control for avoiding further injury risk in sports and exercise activities. Full article
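
Sample entropy, the nonlinear HRV index analyzed above, can be computed from an RR-interval series with a short template-matching routine. A minimal sketch of the standard definition (embedding dimension m, tolerance r = 0.2 × SD by convention; the series below is synthetic, not study data):

import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series: -ln(A/B), where B counts template matches
    of length m and A counts matches of length m+1 (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Hypothetical RR-interval series (seconds) around 0.8 s with small variability.
rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(300)
print(sample_entropy(rr, m=2))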

22 pages, 3409 KB  
Article
Short-Term Prediction Intervals for Photovoltaic Power via Multi-Level Analysis and Dual Dynamic Integration
by Kaiyang Kuang, Jingshan Zhang, Qifan Chen, Yan Zhou, Yan Yan, Litao Dai and Guanghu Wang
Electronics 2025, 14(15), 3068; https://doi.org/10.3390/electronics14153068 - 31 Jul 2025
Viewed by 287
Abstract
There is an obvious correlation between the photovoltaic (PV) output of different physical levels; that is, the overall power change trend of large-scale regional (high-level) stations can provide a reference for the prediction of the output of sub-regional (low-level) stations. The current PV prediction methods have not deeply explored the multi-level PV power generation elements and have not considered the correlation between different levels, resulting in the inability to obtain potential information on PV power generation. Moreover, traditional probabilistic prediction models lack adaptability, which can lead to a decrease in prediction performance under different PV prediction scenarios. Therefore, a probabilistic prediction method for short-term PV power based on multi-level adaptive dynamic integration is proposed in this paper. Firstly, an analysis is conducted on the multi-level PV power stations together with the influence of the trend of high-level PV power generation on the forecast of low-level power generation. Then, the PV data are decomposed into multiple layers using the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and analyzed by combining fuzzy entropy (FE) and mutual information (MI). After that, a new multi-level model prediction method, namely, the improved dual dynamic adaptive stacked generalization (I-Stacking) ensemble learning model, is proposed to construct short-term PV power generation prediction models. Finally, an improved dynamic adaptive kernel density estimation (KDE) method for prediction errors is proposed, which optimizes the performance of the prediction intervals (PIs) through variable bandwidth. Through comparative experiments and analysis using traditional methods, the effectiveness of the proposed method is verified. Full article
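
The final interval-construction step, turning the distribution of forecast errors into prediction intervals via kernel density estimation, can be illustrated with a plain fixed-bandwidth Gaussian KDE. The paper's contribution is a dynamic, variable-bandwidth variant, so this sketch only shows the generic quantile-from-error-density idea on synthetic residuals.

import numpy as np
from scipy.stats import gaussian_kde

def kde_prediction_interval(point_forecast, errors, alpha=0.1, grid=2000):
    """Prediction interval from a KDE of historical forecast errors:
    point forecast plus the (alpha/2, 1 - alpha/2) quantiles of the error density."""
    kde = gaussian_kde(errors)                        # fixed-bandwidth Gaussian KDE
    lo, hi = errors.min() - 3 * errors.std(), errors.max() + 3 * errors.std()
    xs = np.linspace(lo, hi, grid)
    cdf = np.cumsum(kde(xs))
    cdf /= cdf[-1]                                    # numerical CDF of the errors
    q_lo = xs[np.searchsorted(cdf, alpha / 2)]
    q_hi = xs[np.searchsorted(cdf, 1 - alpha / 2)]
    return point_forecast + q_lo, point_forecast + q_hi

rng = np.random.default_rng(2)
errors = rng.normal(0.0, 1.5, size=500)               # hypothetical PV forecast errors (kW)
print(kde_prediction_interval(42.0, errors, alpha=0.1))   # 90% prediction interval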

28 pages, 835 KB  
Article
Progressive First-Failure Censoring in Reliability Analysis: Inference for a New Weibull–Pareto Distribution
by Rashad M. EL-Sagheer and Mahmoud M. Ramadan
Mathematics 2025, 13(15), 2377; https://doi.org/10.3390/math13152377 - 24 Jul 2025
Viewed by 288
Abstract
This paper explores statistical techniques for estimating unknown lifetime parameters using data from a progressive first-failure censoring scheme. The failure times are modeled with a new Weibull–Pareto distribution. Maximum likelihood estimators are derived for the model parameters, as well as for the survival and hazard rate functions, although these estimators do not have explicit closed-form solutions. The Newton–Raphson algorithm is employed for the numerical computation of these estimates. Confidence intervals for the parameters are approximated based on the asymptotic normality of the maximum likelihood estimators. The Fisher information matrix is calculated using the missing information principle, and the delta technique is applied to approximate confidence intervals for the survival and hazard rate functions. Bayesian estimators are developed under squared error, linear exponential, and general entropy loss functions, assuming independent gamma priors. Markov chain Monte Carlo sampling is used to obtain Bayesian point estimates and the highest posterior density credible intervals for the parameters and reliability measures. Finally, the proposed methods are demonstrated through the analysis of a real dataset. Full article
(This article belongs to the Section D1: Probability and Statistics)
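
The confidence-interval machinery described above (asymptotic normality of the MLE plus the delta method) follows a generic numerical recipe. Because the new Weibull–Pareto density is not reproduced in this listing, the sketch below uses an ordinary two-parameter Weibull as a stand-in and treats the optimizer's inverse Hessian as a rough approximation to the inverse observed information.

import numpy as np
from scipy import optimize, stats

def neg_log_lik(theta, data):
    """Negative Weibull log-likelihood, parameterized by log-shape and log-scale
    so the optimizer works on an unconstrained space."""
    k, lam = np.exp(theta)
    return -np.sum(stats.weibull_min.logpdf(data, c=k, scale=lam))

rng = np.random.default_rng(3)
data = stats.weibull_min.rvs(c=1.7, scale=2.5, size=200, random_state=rng)

res = optimize.minimize(neg_log_lik, x0=np.zeros(2), args=(data,), method="BFGS")
k_hat, lam_hat = np.exp(res.x)
# BFGS inverse Hessian ~ asymptotic covariance of the log-parameters;
# the delta method converts standard errors back to the original scale.
se_log = np.sqrt(np.diag(res.hess_inv))
se_k, se_lam = k_hat * se_log[0], lam_hat * se_log[1]
for name, est, s in zip(["shape", "scale"], [k_hat, lam_hat], [se_k, se_lam]):
    print(f"{name}: {est:.3f}, 95% CI ({est - 1.96 * s:.3f}, {est + 1.96 * s:.3f})")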

32 pages, 8958 KB  
Article
A Monte Carlo Simulation Framework for Evaluating the Robustness and Applicability of Settlement Prediction Models in High-Speed Railway Soft Foundations
by Zhenyu Liu, Liyang Wang, Taifeng Li, Huiqin Guo, Feng Chen, Youming Zhao, Qianli Zhang and Tengfei Wang
Symmetry 2025, 17(7), 1113; https://doi.org/10.3390/sym17071113 - 10 Jul 2025
Viewed by 331
Abstract
Accurate settlement prediction for high-speed railway (HSR) soft foundations remains challenging due to the irregular and dynamic nature of real-world monitoring data, often represented as non-equidistant and non-stationary time series (NENSTS). Existing empirical models lack clear applicability criteria under such conditions, resulting in subjective model selection. This study introduces a Monte Carlo-based evaluation framework that integrates data-driven simulation with geotechnical principles, embedding the concept of symmetry across both modeling and assessment stages. Equivalent permeability coefficients (EPCs) are used to normalize soil consolidation behavior, enabling the generation of a large, statistically robust dataset. Four empirical settlement prediction models—Hyperbolic, Exponential, Asaoka, and Hoshino—are systematically analyzed for sensitivity to temporal features and resistance to stochastic noise. A symmetry-aware comprehensive evaluation index (CEI), constructed via a robust entropy weight method (REWM), balances multiple performance metrics to ensure objective comparison. Results reveal that while settlement behavior evolves asymmetrically with respect to EPCs over time, a symmetrical structure emerges in model suitability across distinct EPC intervals: the Asaoka method performs best under low-permeability conditions (EPC ≤ 0.03 m/d), Hoshino excels in intermediate ranges (0.03 < EPC ≤ 0.7 m/d), and the Exponential model dominates in highly permeable soils (EPC > 0.7 m/d). This framework not only quantifies model robustness under complex data conditions but also formalizes the notion of symmetrical applicability, offering a structured path toward intelligent, adaptive settlement prediction in HSR subgrade engineering. Full article
(This article belongs to the Section Engineering and Materials)
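
Of the four empirical models compared above, the Asaoka method has the most compact form: settlement readings taken at a constant time interval are regressed as s_k = b0 + b1*s_{k-1}, and the ultimate settlement is the fixed point b0 / (1 - b1). A minimal sketch on synthetic, exponentially consolidating data (values are made up):

import numpy as np

def asaoka_ultimate_settlement(s):
    """Asaoka's method: fit s_k = b0 + b1*s_{k-1} on equally spaced settlement
    readings and return the fixed point b0 / (1 - b1) as the ultimate settlement."""
    s_prev, s_next = s[:-1], s[1:]
    b1, b0 = np.polyfit(s_prev, s_next, 1)
    return b0 / (1.0 - b1)

# Synthetic settlement record (mm) approaching 120 mm exponentially,
# sampled at a constant time interval.
t = np.arange(0, 30)
s = 120.0 * (1.0 - np.exp(-0.15 * t)) + np.random.default_rng(4).normal(0, 0.3, t.size)
print(asaoka_ultimate_settlement(s))   # should be close to 120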

26 pages, 3112 KB  
Article
Pre-Warning for the Remaining Time to Alarm Based on Variation Rates and Mixture Entropies
by Zijiang Yang, Jiandong Wang, Honghai Li and Song Gao
Entropy 2025, 27(7), 736; https://doi.org/10.3390/e27070736 - 9 Jul 2025
Viewed by 294
Abstract
Alarm systems play crucial roles in industrial process safety. To support tackling the accident that is about to occur after an alarm, a pre-warning method is proposed for a special class of industrial process variables to alert operators about the remaining time to alarm. The main idea of the proposed method is to estimate the remaining time to alarm based on variation rates and mixture entropies of qualitative trends in univariate variables. If the remaining time to alarm is no longer than the pre-warning threshold and its mixture entropy is small enough then a warning is generated to alert the operators. One challenge for the proposed method is how to determine an optimal pre-warning threshold by considering the uncertainties induced by the sample distribution of the remaining time to alarm, subject to the constraint of the required false warning rate. This challenge is addressed by utilizing Bayesian estimation theory to estimate the confidence intervals for all candidates of the pre-warning threshold, and the optimal one is selected as the one whose upper bound of the confidence interval is nearest to the required false warning rate. Another challenge is how to measure the possibility of the current trend segment increasing to the alarm threshold, and this challenge is overcome by adopting the mixture entropy as a possibility measurement. Numerical and industrial examples illustrate the effectiveness of the proposed method and the advantages of the proposed method over the existing methods. Full article
(This article belongs to the Special Issue Failure Diagnosis of Complex Systems)
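
The threshold-selection idea above (interval-estimate the false warning rate for each candidate pre-warning threshold and keep the candidate whose upper bound sits nearest the required rate) can be illustrated with a simple conjugate Beta model for the false-warning probability. This is a generic stand-in with made-up counts, not the paper's estimator for the remaining time to alarm.

import numpy as np
from scipy import stats

def upper_credible_bound(false_warnings, trials, level=0.95, a0=1.0, b0=1.0):
    """Upper bound of an equal-tailed Beta credible interval for the
    false-warning probability under a Beta(a0, b0) prior."""
    post = stats.beta(a0 + false_warnings, b0 + trials - false_warnings)
    return post.ppf(level)

required_rate = 0.05
# Hypothetical evaluation of candidate pre-warning thresholds: each entry is
# (threshold value, false warnings observed, warnings issued).
candidates = [(0.5, 1, 60), (0.6, 3, 80), (0.7, 6, 90), (0.8, 10, 100)]
bounds = [upper_credible_bound(fw, n) for _, fw, n in candidates]
best = min(range(len(candidates)), key=lambda i: abs(bounds[i] - required_rate))
print(candidates[best][0], bounds[best])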

17 pages, 572 KB  
Article
Statistical Analysis Under a Random Censoring Scheme with Applications
by Mustafa M. Hasaballah and Mahmoud M. Abdelwahab
Symmetry 2025, 17(7), 1048; https://doi.org/10.3390/sym17071048 - 3 Jul 2025
Cited by 1 | Viewed by 307
Abstract
The Gumbel Type-II distribution is a widely recognized and frequently utilized lifetime distribution, playing a crucial role in reliability engineering. This paper focuses on the statistical inference of the Gumbel Type-II distribution under a random censoring scheme. From a frequentist perspective, point estimates for the unknown parameters are derived using the maximum likelihood estimation method, and confidence intervals are constructed based on the Fisher information matrix. From a Bayesian perspective, Bayes estimates of the parameters are obtained using the Markov Chain Monte Carlo method, and the average lengths of credible intervals are calculated. The Bayesian inference is performed under both the squared error loss function and the general entropy loss function. Additionally, a numerical simulation is conducted to evaluate the performance of the proposed methods. To demonstrate their practical applicability, a real world example is provided, illustrating the application and development of these inference techniques. In conclusion, the Bayesian method appears to outperform other approaches, although each method offers unique advantages. Full article

22 pages, 1770 KB  
Article
A Logarithmic Compression Method for Magnitude-Rich Data: The LPPIE Approach
by Vasileios Alevizos, Zongliang Yue, Sabrina Edralin, Clark Xu, Nikitas Gerolimos and George A. Papakostas
Technologies 2025, 13(7), 278; https://doi.org/10.3390/technologies13070278 - 1 Jul 2025
Viewed by 575
Abstract
This study introduces Logarithmic Positional Partition Interval Encoding (LPPIE), a novel lossless compression methodology employing iterative logarithmic transformations to drastically reduce data size. While conventional dictionary-based algorithms rely on repeated sequences, LPPIE translates numeric data sequences into highly compact logarithmic representations. This achieves significant reduction in data size, especially on large integer datasets. Experimental comparisons with established compression methods—such as ZIP, Brotli, and Zstandard—demonstrate LPPIE’s exceptional effectiveness, attaining compression ratios nearly 13 times superior to established methods. However, these substantial storage savings come with elevated computational overhead due to LPPIE’s complex numerical operations. The method’s robustness across diverse datasets and minimal scalability limitations underscore its potential for specialized archival scenarios where data fidelity is paramount and processing latency is tolerable. Future enhancements, such as GPU-accelerated computations and hybrid entropy encoding integration, are proposed to further optimize performance and broaden LPPIE’s applicability. Overall, LPPIE offers a compelling alternative in lossless data compression, substantially redefining efficiency boundaries in high-volume numeric data storage. Full article

26 pages, 5143 KB  
Article
Lag-Specific Transfer Entropy for Root Cause Diagnosis and Delay Estimation in Industrial Sensor Networks
by Rui Chen, Shu Liang, Jian-Guo Wang, Yuan Yao, Jing-Ru Su and Li-Lan Liu
Sensors 2025, 25(13), 3980; https://doi.org/10.3390/s25133980 - 26 Jun 2025
Viewed by 447
Abstract
Industrial plants now stream thousands of temperature, pressure, flow rate, and composition measurements at minute-level intervals. These multi-sensor records often contain variable transport or residence time delays that hinder accurate disturbance analysis. This study applies lag-specific transfer entropy (LSTE) to historical sensor logs to identify the instrument that first deviates from normal operation and the time required for that deviation to appear at downstream points. A self-prediction optimization step removes each sensor’s own information storage, after which LSTE is computed at candidate lags and tested against time-shifted surrogates for statistical significance. The method is benchmarked on a nonlinear simulation, the Tennessee Eastman plant, a three-phase separator test rig, and a full-scale blast furnace line. Across all cases, LSTE locates the disturbance origin and reports propagation times that match known process physics, while significantly reducing false links compared to classical transfer entropy. Full article
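
Lag-specific transfer entropy conditions the next target value on the target's own past and on the source at one candidate lag; scanning candidate lags for the largest significant value estimates the propagation delay. A minimal histogram-based sketch on synthetic signals; it omits the self-prediction optimization and surrogate testing that the paper adds.

import numpy as np

def _entropy(*columns, bins=8):
    """Shannon entropy (nats) of the joint histogram of the given columns."""
    hist, _ = np.histogramdd(np.column_stack(columns), bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def transfer_entropy(source, target, lag, bins=8):
    """TE_{source->target}(lag) = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-lag}),
    estimated from binned joint histograms."""
    y_now, y_past = target[lag:], target[lag - 1:-1]
    x_lag = source[:-lag]
    h_y_ypast = _entropy(y_now, y_past, bins=bins)
    h_ypast = _entropy(y_past, bins=bins)
    h_all = _entropy(y_now, y_past, x_lag, bins=bins)
    h_ypast_x = _entropy(y_past, x_lag, bins=bins)
    return (h_y_ypast - h_ypast) - (h_all - h_ypast_x)

# Synthetic example: the target follows the source with a 5-sample delay plus noise,
# so the transfer entropy should peak near lag = 5.
rng = np.random.default_rng(5)
x = rng.standard_normal(5000)
y = np.roll(x, 5) + 0.5 * rng.standard_normal(5000)
print({lag: round(transfer_entropy(x, y, lag), 3) for lag in (1, 3, 5, 7)})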

28 pages, 9823 KB  
Article
Local Entropy Optimization–Adaptive Demodulation Reassignment Transform for Advanced Analysis of Non-Stationary Mechanical Signals
by Yuli Niu, Zhongchao Liang, Hengshan Wu, Jianxin Tan, Tianyang Wang and Fulei Chu
Entropy 2025, 27(7), 660; https://doi.org/10.3390/e27070660 - 20 Jun 2025
Viewed by 291
Abstract
This research proposes a new method for time–frequency analysis, termed the Local Entropy Optimization–Adaptive Demodulation Reassignment Transform (LEOADRT), which is specifically designed to efficiently analyze complex, non-stationary mechanical vibration signals that exhibit multiple instantaneous frequencies or where the instantaneous frequency ridges are in close proximity to each other. The method introduces a demodulation term to account for the signal’s dynamic behavior over time, converting each component into a stationary signal. Based on the local optimal theory of Rényi entropy, the demodulation parameters are precisely determined to optimize the time–frequency analysis. Then, the energy redistribution of the ridges already generated in the time–frequency map is performed using the maximum local energy criterion, significantly improving time–frequency resolution. Experimental results demonstrate that the performance of the LEOADRT algorithm is superior to existing methods such as SBCT, EMCT, VSLCT, and GLCT, especially in processing complex non-stationary signals with non-proportionality and closely spaced frequency intervals. This method provides strong support for mechanical fault diagnosis, condition monitoring, and predictive maintenance, making it particularly suitable for real-time analysis of multi-component and cross-frequency signals. Full article
(This article belongs to the Section Multidisciplinary Applications)
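
The Rényi-entropy criterion used to tune the demodulation parameters is a standard measure of time–frequency concentration: treat the normalized spectrogram as a probability distribution and prefer settings that minimize its Rényi entropy. A minimal sketch of that measure on a plain STFT; the demodulation and energy-reassignment steps of LEOADRT themselves are not reproduced here.

import numpy as np
from scipy import signal

def renyi_entropy_tf(x, fs, alpha=3, nperseg=256):
    """Order-alpha Renyi entropy of a normalized STFT spectrogram; lower values
    indicate a more concentrated time-frequency representation."""
    _, _, Zxx = signal.stft(x, fs=fs, nperseg=nperseg)
    P = np.abs(Zxx) ** 2
    P = P / P.sum()
    return np.log(np.sum(P ** alpha)) / (1.0 - alpha)

# Two test signals at fs = 1 kHz: a pure tone (concentrated) and a fast chirp (spread).
fs = 1000
t = np.arange(0, 2, 1 / fs)
tone = np.sin(2 * np.pi * 80 * t)
chirp = signal.chirp(t, f0=10, t1=2, f1=400)
print(renyi_entropy_tf(tone, fs), renyi_entropy_tf(chirp, fs))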

37 pages, 776 KB  
Article
Fractional Inclusion Analysis of Superquadratic Stochastic Processes via Center-Radius Total Order Relation with Applications in Information Theory
by Mohsen Ayyash, Dawood Khan, Saad Ihsan Butt and Youngsoo Seol
Fractal Fract. 2025, 9(6), 375; https://doi.org/10.3390/fractalfract9060375 - 12 Jun 2025
Viewed by 387
Abstract
This study presents, for the first time, a new class of interval-valued superquadratic stochastic processes and examines their core properties through the lens of the center-radius total order relation on intervals. These processes serve as a powerful tool for modeling uncertainty in stochastic systems involving interval-valued data. By utilizing their intrinsic structure, we derive sharpened versions of Jensen-type and Hermite–Hadamard-type inequalities, along with their fractional extensions, within the framework of mean-square stochastic Riemann–Liouville fractional integrals. The theoretical findings are validated through extensive graphical representations and numerical simulations. Moreover, the applicability of the proposed processes is demonstrated in the domain of information theory by constructing novel stochastic divergence measures and Shannon’s entropy grounded in interval calculus. The outcomes of this work lay a solid foundation for further exploration in stochastic analysis, particularly in advancing generalized integral inequalities and formulating new stochastic models under uncertainty. Full article
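
For orientation, the classical Hermite–Hadamard inequality that such results sharpen states that, for a convex function f on [a, b],

f\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{2},

and the paper replaces f by an interval-valued superquadratic stochastic process ordered by the center–radius total order, with mean-square Riemann–Liouville fractional integrals taking the place of the ordinary integral.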

23 pages, 544 KB  
Article
Estimation of Parameters and Reliability Based on Unified Hybrid Censoring Schemes with an Application to COVID-19 Mortality Datasets
by Mustafa M. Hasaballah, Mahmoud M. Abdelwahab and Khamis A. Al-Karawi
Axioms 2025, 14(6), 460; https://doi.org/10.3390/axioms14060460 - 12 Jun 2025
Cited by 1 | Viewed by 987
Abstract
This article presents maximum likelihood and Bayesian estimates for the parameters, reliability function, and hazard function of the Gumbel Type-II distribution using a unified hybrid censored sample. Bayesian estimates are derived under three loss functions: squared error, LINEX, and generalized entropy. The parameters are assumed to follow independent gamma prior distributions. Since closed-form solutions are not available, the MCMC approximation method is used to obtain the Bayesian estimates. The highest posterior density credible intervals for the model parameters are computed using importance sampling. Additionally, approximate confidence intervals are constructed based on the normal approximation to the maximum likelihood estimates. To derive asymptotic confidence intervals for the reliability and hazard functions, their variances are estimated using the delta method. A numerical study compares the proposed estimators in terms of their average values and mean squared error using Monte Carlo simulations. Finally, a real dataset is analyzed to illustrate the proposed estimation methods. Full article
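
Once MCMC draws from the posterior are in hand, the three loss functions named above give closed-form point estimates: the posterior mean under squared error loss, -(1/c) ln E[exp(-c*theta)] under LINEX loss with shape c, and (E[theta^(-q)])^(-1/q) under general entropy loss with shape q. A minimal sketch applying these formulas to a vector of posterior draws; synthetic gamma draws stand in for the paper's MCMC output, and c and q are arbitrary.

import numpy as np

def bayes_estimates(draws, c=0.5, q=1.0):
    """Bayes point estimates from posterior draws under three loss functions."""
    sel = draws.mean()                                   # squared error loss
    linex = -np.log(np.mean(np.exp(-c * draws))) / c     # LINEX loss, shape c
    gel = np.mean(draws ** (-q)) ** (-1.0 / q)           # general entropy loss, shape q
    return sel, linex, gel

# Synthetic posterior draws for a positive parameter (e.g., a scale parameter).
rng = np.random.default_rng(6)
draws = rng.gamma(shape=4.0, scale=0.5, size=20_000)
print(bayes_estimates(draws, c=0.5, q=1.0))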

16 pages, 1606 KB  
Article
Coherence Analysis of Cardiovascular Signals for Detecting Early Diabetic Cardiac Autonomic Neuropathy: Insights into Glycemic Control
by Yu-Chen Chen, Wei-Min Liu, Hsin-Ru Liu, Huai-Ren Chang, Po-Wei Chen and An-Bang Liu
Diagnostics 2025, 15(12), 1474; https://doi.org/10.3390/diagnostics15121474 - 10 Jun 2025
Viewed by 472
Abstract
Background: Cardiac autonomic neuropathy (CAN) is a common yet frequently underdiagnosed complication of diabetes. While our previous study demonstrated the utility of multiscale cross-approximate entropy (MS-CXApEn) in detecting early CAN, the present study further investigates the use of frequency-domain coherence analysis between systolic blood pressure (SBP) and R-R intervals (RRI) and evaluates the effects of insulin treatment on autonomic function in diabetic rats. Methods: At the onset of diabetes induced by streptozotocin (STZ), rats were assessed for cardiovascular autonomic function both before and after insulin treatment. Spectral and coherence analyses were performed to evaluate baroreflex function and autonomic regulation. Parameters assessed included low-frequency power (LFP) and high-frequency power (HFP) of heart rate variability, coherence between SBP and RRI at low and high-frequency bands (LFCoh and HFCoh), spontaneous and phenylephrine-induced baroreflex sensitivity (BRSspn and BRSphe), HRV components derived from fast Fourier transform, and MS-CXApEn at multiple scales. Results: Compared to normal controls (LFCoh: 0.14 ± 0.07, HFCoh: 0.19 ± 0.06), early diabetic rats exhibited a significant reduction in both LFCoh (0.08 ± 0.04, p < 0.05) and HFCoh (0.16 ± 0.10, p > 0.05), indicating impaired autonomic modulation. Insulin treatment led to a recovery of LFCoh (0.11 ± 0.04) and HFCoh (0.24 ± 0.12), though differences remained statistically insignificant (p > 0.05 vs. normal). Additionally, low-frequency LFP increased at the onset of diabetes and decreased after insulin therapy in most rats significantly, while MS-CXApEn at all scale levels increased in the early diabetic rats, and MS-CXApEnlarge declined following hyperglycemia correction. The BRSspn and BRSphe showed no consistent trend. Conclusions: Coherence analysis provides valuable insights into autonomic dysfunction in early diabetes. The significant reduction in LFCoh in early diabetes supports its role as a potential marker for CAN. Although insulin treatment partially improved coherence, the lack of full recovery suggests persistent autonomic impairment despite glycemic correction. These findings underscore the importance of early detection and long-term management strategies for diabetic CAN. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)
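
Magnitude-squared coherence between beat-to-beat SBP and RRI series is typically computed from Welch-type spectral estimates and then averaged over the LF and HF bands. A minimal sketch with scipy.signal.coherence on evenly resampled synthetic series; the 0.04–0.15 Hz and 0.15–0.4 Hz limits below are the conventional human bands, not the rat-specific bands a study like this would use.

import numpy as np
from scipy import signal

def band_coherence(sbp, rri, fs, band):
    """Mean magnitude-squared coherence between two series over a frequency band."""
    f, cxy = signal.coherence(sbp, rri, fs=fs, nperseg=256)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# Synthetic, evenly resampled (4 Hz) beat-to-beat series sharing a 0.1 Hz rhythm.
fs = 4.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(7)
common = np.sin(2 * np.pi * 0.1 * t)
sbp = 120 + 5 * common + rng.standard_normal(t.size)
rri = 0.8 + 0.02 * common + 0.01 * rng.standard_normal(t.size)
print(band_coherence(sbp, rri, fs, (0.04, 0.15)),   # LF coherence (high here)
      band_coherence(sbp, rri, fs, (0.15, 0.40)))   # HF coherence (near noise level)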
