Search Results (2,302)

Search Parameters:
Keywords = exponential distribution

19 pages, 1179 KB  
Article
Robust Deep Knowledge Tracing with Out-of-Distribution Detection
by Riyan Hasan and Yupei Zhang
AI Educ. 2026, 2(1), 6; https://doi.org/10.3390/aieduc2010006 - 9 Mar 2026
Abstract
Modeling the temporal dynamics of student learning is a central goal in educational data mining. Deep Knowledge Tracing (DKT) has emerged as a key approach, yet existing models are highly sensitive to out-of-distribution (OOD) inputs, such as those arising from curriculum changes, new assessment formats, or behavioral noise, which severely degrade predictive reliability. To address this challenge, we propose Energy-Based Out-of-Distribution Deep Knowledge Tracing (EB-OOD DKT), a unified framework that integrates energy-based uncertainty estimation and contrastive representation learning within a transformer-based DKT architecture. The model computes energy scores via the negative log-sum-exponential of prediction logits, serving as confidence indicators for detecting OOD inputs during inference. Additionally, an InfoNCE-based contrastive loss enhances representation robustness by aligning in-distribution samples and separating OOD cases in latent space. Temporal and behavioral context features, such as normalized response intervals and cumulative attempt counts, are incorporated to enrich cognitive-behavioral modeling. Experiments on four public educational datasets demonstrate consistent improvements in prediction accuracy and OOD detection. EB-OOD DKT provides a promising approach for more reliable student modeling across educational platforms with different content distributions. Full article
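
The energy score described in this abstract is the standard energy-based OOD statistic, i.e. the negative (temperature-scaled) log-sum-exp of the class logits. A minimal NumPy sketch follows; the temperature value and the decision threshold are illustrative assumptions, not parameters reported by the authors.

```python
import numpy as np

def energy_score(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Energy score E(x) = -T * logsumexp(logits / T), computed per sample.

    Lower energy is typically read as higher confidence (in-distribution);
    samples whose energy exceeds a validation-chosen threshold are flagged OOD.
    """
    z = logits / temperature
    # numerically stable log-sum-exp over the class/response dimension
    m = z.max(axis=-1, keepdims=True)
    lse = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -temperature * lse

# toy usage: 3 samples, 5 skill/response logits each
logits = np.random.randn(3, 5)
scores = energy_score(logits)
is_ood = scores > 0.0   # threshold is a placeholder; tune on held-out data
```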

17 pages, 431 KB  
Article
The Gamma Power Generalized Weibull Distribution: Modeling Bibliometric Data
by Arioane Primon Soares, Ryan Novaes Pereira, Fernando A. Peña-Ramírez, Luz Milena Zea Fernández and Renata Rojas Guerra
Stats 2026, 9(2), 26; https://doi.org/10.3390/stats9020026 - 5 Mar 2026
Viewed by 138
Abstract
In this study, we introduce the gamma power generalized Weibull (GPGW) distribution and investigate several of its main mathematical properties. The performance of the maximum likelihood estimators is evaluated through Monte Carlo simulations. The practical relevance of the proposed distribution is illustrated through an application to real bibliometric data, where the GPGW is used to model SCImago Journal Rank (SJR) indicators. In comparison with alternative models commonly employed for lifetime and positive data, the GPGW distribution exhibits strong competitive performance. In particular, in the real data application, it outperforms eleven competing distributions in terms of goodness of fit criteria, including the power generalized Weibull (PGW), the gamma-Nadarajah–Haghighi (GNH), and the exponentiated power generalized Weibull (EPGW) distributions. While inheriting several mathematical features of the EPGW distribution, such as expressions for moments, skewness, and kurtosis, the GPGW offers enhanced flexibility, making it a valuable modeling tool for lifetime data and heavy-tailed positive measurements. Full article
(This article belongs to the Section Statistical Methods)
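
The GPGW family itself is not implemented in standard libraries, but the goodness-of-fit comparison the abstract refers to follows a familiar workflow: fit several candidate lifetime distributions by maximum likelihood and rank them by an information criterion. A rough SciPy sketch with synthetic data standing in for the SJR indicators (the values and the candidate set are assumptions for illustration):

```python
import numpy as np
from scipy import stats

# hypothetical positive-valued data standing in for SJR indicators
data = stats.lognorm.rvs(s=0.8, scale=1.5, size=500, random_state=0)

candidates = {
    "weibull_min": stats.weibull_min,
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "expon": stats.expon,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))  # log-likelihood at the MLE
    aic = 2 * len(params) - 2 * loglik           # Akaike information criterion
    results[name] = aic

for name, aic in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} AIC = {aic:.1f}")
```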

15 pages, 1445 KB  
Article
Federated Learning Method Based on Data Distribution Heterogeneity Grading and Marginal Contribution Calculation
by Jianhua Liu, Weiqing Zhang, Yanglin Zeng and Yao Tong
Appl. Sci. 2026, 16(5), 2413; https://doi.org/10.3390/app16052413 - 2 Mar 2026
Viewed by 159
Abstract
As federated learning scales up in distributed scenarios, training instability and performance degradation caused by data quality issues—such as statistical heterogeneity and noise—have become major bottlenecks for practical deployment. Existing aggregation algorithms have been shown to not adequately account for differences in data importance. This can exacerbate client selection bias and incentive misalignment. As a result, global convergence can slow down and performance can deteriorate. To address this issue, this paper proposes a robust federated learning framework based on data heterogeneity grading and marginal contribution calculation. The objective of this study is to enhance the overall performance of federated learning systems in heterogeneous environments by quantifying data importance. The framework first grades and quantifies the heterogeneity of client data distributions, precisely characterizing data importance while reducing the computational space for Shapley value calculations, effectively lowering its exponential complexity. Subsequently, it integrates client marginal contributions with data distribution heterogeneity to establish a dynamic weighted aggregation mechanism that balances fairness, robustness, and differentiated data quality requirements. Multi-dataset comparative experiments demonstrate that the proposed method achieves consistent gains in model accuracy and convergence under non-IID splits and noisy-label settings. Full article
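
As a rough illustration of the aggregation idea sketched above, the snippet below blends per-client marginal contributions (e.g., leave-one-out validation gains) with a heterogeneity grade into normalized aggregation weights. The blending rule, the alpha knob, and the inverse-grade trust term are illustrative assumptions, not the paper's actual mechanism:

```python
import numpy as np

def aggregation_weights(marginal_contrib, heterogeneity_grade, alpha=0.5):
    """Blend per-client marginal contributions with a data-heterogeneity grade
    into normalized aggregation weights (alpha is an illustrative mixing knob)."""
    mc = np.clip(np.asarray(marginal_contrib, dtype=float), 0.0, None)
    hg = np.asarray(heterogeneity_grade, dtype=float)
    # lower heterogeneity grade -> higher trust; invert and normalize
    trust = 1.0 / (1.0 + hg)
    raw = alpha * mc / (mc.sum() + 1e-12) + (1 - alpha) * trust / trust.sum()
    return raw / raw.sum()

# toy usage: 4 clients, leave-one-out marginal accuracy gains and grades 0..3
weights = aggregation_weights([0.02, 0.00, 0.05, 0.01], [1, 3, 0, 2])
print(weights)  # weights for FedAvg-style aggregation of client updates
```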

28 pages, 21321 KB  
Article
Study on Residual Subsidence Prediction of Goaf in Steeply Inclined Multi-Seam Based on Simulation Analysis
by Jilin Wang, Wan Cao, Zhuo Chen and Shenglin Wu
Appl. Sci. 2026, 16(5), 2328; https://doi.org/10.3390/app16052328 - 27 Feb 2026
Viewed by 165
Abstract
Particle Flow Code (PFC) numerical simulations were adopted to simulate the mining process and the process of goaf collapse and to predict the residual subsidence of abandoned goafs in steeply inclined multi-seam coal mines, taking the No. 101 Coal Mine in the Xishan Mining Area of Urumqi, China, as an example. Scaled physical simulations were also employed to simulate the evolution of voids in the coal–rock mixture in the goaf. The results show that after mining, the roof of shallow coal seams becomes unstable and collapses in the anti-dip direction, causing the materials within the unconsolidated layer to fall and backfill the goaf, which further leads to ground subsidence. The mining of deep coal seams is also accompanied by the overall movement of overlying strata along the dip direction of the coal seams and surface subsidence. The content of voids within the broken coal–rock mass in the goaf tends to decrease with increasing pressure, showing a negative exponential correlation. Based on the observed relationship between displacement and void content obtained from the simulation experiments, it is inferred that the residual displacement under the current conditions of the study area accounts for approximately 10.5% of the total displacement. Combining the results of the PFC simulation and the evolution law of void content, the residual subsidence of the goaf in the study area since mine closure is predicted to range from 0 to 1 m, with a high-value zone distributed in the northeastern part of the study area. Deep goafs within the B7–B11–12 and B14–B18 coal seam groups mainly contribute to the residual subsidence. The distribution of goaf collapse pits, as revealed by field investigation, also verifies the reliability of the prediction results. Full article
(This article belongs to the Section Earth Sciences)
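
The negative exponential relationship between void content and pressure reported above can be fitted with nonlinear least squares. A small SciPy sketch with made-up measurements (the arrays and initial guesses are placeholders, not data from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical pressure (MPa) vs. void-content (%) measurements from a goaf
pressure = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
void_content = np.array([38.0, 31.0, 24.0, 15.0, 11.0, 8.5, 7.0])

def neg_exp(p, a, b, c):
    """Negative exponential decay: void content falls toward an asymptote c."""
    return a * np.exp(-b * p) + c

params, _ = curve_fit(neg_exp, pressure, void_content, p0=(30.0, 0.3, 5.0))
a, b, c = params
print(f"void(p) ≈ {a:.1f}·exp(-{b:.2f}·p) + {c:.1f}")
```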

16 pages, 2068 KB  
Article
A Spatiotemporal-Energy Clustering and Risk Index Model for Rock Fracture Early Warning Using Acoustic Emission Data
by Weijian Liu, Shilei Zhen, Zhongkai Peng, Jianbo Li, Shuai Teng, Zhizeng Zhang, Biqi Yuan and Ziwei Li
Processes 2026, 14(5), 774; https://doi.org/10.3390/pr14050774 - 27 Feb 2026
Viewed by 187
Abstract
To address the challenges of traditional methods for monitoring rock dynamic hazards in mines, which struggle to fully characterize the spatiotemporal heterogeneity of damage evolution and the resulting lag in early warning, this paper proposes a dynamic rock damage classification and fracture early warning model driven by acoustic emission data. Based on an improved dynamic K-means algorithm, this model fuses time dependence, energy intensity, and event spatial density characteristics through exponentially decaying weights to construct a spatiotemporal-energy synergistic clustering framework. Furthermore, a nonlinear coupling model for the comprehensive risk index (RI) is established, combining the static damage variable D with dynamic parameters such as energy release rate, ring count, and spatial clustering, to create a five-level early warning threshold. Experimental results demonstrate that the improved algorithm achieves clustering silhouette coefficients exceeding 0.7 for single-source, multi-source, and complex fracture patterns, and the error between cluster regions and actual fracture distribution is less than 1 mm. The RI model accurately identifies the damage state of the test block and effectively predicts critical instability, significantly improving both timeliness and accuracy. This research overcomes the limitations of traditional static evaluation and provides high-precision technical support for real-time monitoring of hidden rock fractures and prevention and control of mine dynamic hazards. Full article
(This article belongs to the Section Energy Systems)
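
One simple way to realize the exponentially decaying time weights mentioned above is to weight each acoustic-emission event by exp(-λ·age) when clustering. The sketch below is a generic scikit-learn illustration, not the authors' improved dynamic K-means; the decay constant, feature set, and synthetic events are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical acoustic-emission events: time (s), energy, and x, y, z location
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 200))
energy = rng.exponential(1.0, 200)
xyz = rng.normal(0, 10, (200, 3))

lam = 0.05                                  # illustrative decay constant
recency = np.exp(-lam * (t.max() - t))      # newer events get weight near 1

features = np.column_stack([t, energy, xyz])
features = (features - features.mean(0)) / features.std(0)   # standardize

# weight samples by recency so clustering emphasizes the latest activity
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(features, sample_weight=recency)
```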

29 pages, 674 KB  
Article
The Algorithmic Regulator
by Giulio Ruffini
Entropy 2026, 28(3), 257; https://doi.org/10.3390/e28030257 - 26 Feb 2026
Viewed by 861
Abstract
The regulator theorem states that, under certain conditions, any optimal controller must embody a model of the system it regulates, grounding the idea that controllers embed, explicitly or implicitly, internal models of the controlled. This principle underpins neuroscience and predictive brain theories like the Free-Energy Principle or Kolmogorov/Algorithmic Agent theory. However, the theorem is only proven in limited settings. Here, we treat the deterministic, closed, coupled world-regulator system (W,R) as a single self-delimiting program p via a constant-size wrapper that produces the world output string x fed to the regulator. We analyze regulation from the viewpoint of the algorithmic complexity of the output, K(x) (regulation as compression). We define R to be a good algorithmic regulator if it reduces the algorithmic complexity of the readout relative to a null (unregulated) baseline ⌀, i.e., Δ = K(O_{W,⌀}) − K(O_{W,R}) > 0. We then prove that the larger Δ is, the more world-regulator pairs with high mutual algorithmic information are favored. More precisely, a complexity gap Δ > 0 yields Pr((W,R) | x) ≤ C·2^{M(W:R) − Δ}, making low M(W:R) exponentially unlikely as Δ grows. This is an AIT version of the idea that “the regulator contains a model of the world.” The framework is distribution-free, applies to individual sequences, and complements the Internal Model Principle. Beyond this necessity claim, the same coding-theorem calculus singles out a canonical scalar objective and implicates a planner. On the realized episode, a regulator behaves as if it minimized the conditional description length of the readout. Full article

25 pages, 6477 KB  
Article
Characteristics of Thunderstorms in the Hinterland of the Tibetan Plateau and Impact of the Topographic Slope
by Siyu Chen, Chunsong Lu and Jinghua Chen
Remote Sens. 2026, 18(4), 650; https://doi.org/10.3390/rs18040650 - 20 Feb 2026
Viewed by 296
Abstract
Deep convection strongly influences regional water cycles over the Tibetan Plateau (TP), often referred to as the “Asian Water Tower.” Using FY-2E thundercloud observations, we examined the deep convection characteristics over the central TP. Deep convective storms over the TP exhibit pronounced spatiotemporal heterogeneity. The frequency distribution of storm areas follows an exponential pattern in all seasons, and the cloud-top black body temperature (TBB) distribution is negatively skewed, with values concentrated between −40 and −36 °C. Deep convection is most active in summer, with storms that are larger and have colder cloud tops. In spring, storms are less frequent but tend to cover larger areas, whereas autumn is dominated by small- to medium-sized systems. Spatially, the southeastern and southwestern TP are high-frequency centers, with storm occurrence 2–3 times higher than in the northern TP. Associations between deep-convection properties and precipitation vary by season and region. In summer, storm-related precipitation is primarily linked to large storm areas, whereas in autumn it is more strongly associated with storms with lower TBB. In the southwestern TP, precipitation intensity is more strongly related to TBB, whereas in the northwestern TP, it is more sensitive to storm area. Topographic slope also modulates both precipitation and storm properties. Most storm precipitation occurs over slopes ≤14°, and heavy precipitation shows a bimodal dependence on slope, with peaks at 3–4° and 11–13°. Gentle slopes favor storm growth and horizontal expansion; as the slope increases, mean TBB increases, and deep convection weakens. Full article
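
Checking whether storm areas follow an exponential frequency distribution, as reported above, can be done with a one-parameter maximum-likelihood fit and a Kolmogorov–Smirnov test. A SciPy sketch with synthetic areas standing in for the FY-2E-derived sample:

```python
import numpy as np
from scipy import stats

# hypothetical storm-area sample (km^2); real values would come from FY-2E data
areas = stats.expon.rvs(scale=450.0, size=1000, random_state=1)

# fit an exponential distribution with the location fixed at zero
loc, scale = stats.expon.fit(areas, floc=0)
print(f"mean storm area ≈ {scale:.0f} km²")

# Kolmogorov–Smirnov check of the exponential hypothesis
ks_stat, p_value = stats.kstest(areas, "expon", args=(loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```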

29 pages, 9521 KB  
Article
Evolutionary Characteristics and Dynamic Mechanism of the Global Transportation Carbon Emission Spatial Correlation Network
by Yi Liang, Han Liu, Zhaoge Wu, Xiaoduo Wang and Zhaoxu Yuan
ISPRS Int. J. Geo-Inf. 2026, 15(2), 89; https://doi.org/10.3390/ijgi15020089 - 19 Feb 2026
Viewed by 260
Abstract
This study constructs a global transportation carbon emission spatial correlation network via a modified gravity model and explores its evolutionary characteristics and dynamic mechanisms by integrating three-dimensional evolutionary analysis (node, overall, structural) and a temporal exponential random graph model (TERGM). The main findings are as follows: (1) Global transportation carbon emission spatial correlation intensity keeps rising, with improved connectivity and integration, forming three regionally agglomerated correlation poles centered on the United States (America), China (Asia) and major European countries (Europe). (2) Network centrality is distributed asymmetrically: Switzerland, Norway and the United States remain core nodes, while China, Japan and other Asian economies with strong direct correlation radiation are not in the core tier. (3) Evolutionary dynamics stem from the synergistic interaction of multidimensional attributes. ① Economic level positively drives bidirectional connection emission and attraction; economic scale and openness curb emission but boost attraction, while tertiary industry structure inhibits both. ② Only economic level and government efficiency exert significant positive effects on absdiff, fostering network heterophilic attraction. ③ Spatial and institutional proximity in edgecov effectively facilitate connection formation. ④ Endogenous network variables present a collaborative mechanism of reciprocity and transmission, constrained by network density. ⑤ Temporal effects show that the early connection structure forms path dependence, resulting in low dynamic variability and overall network stability. Full article
(This article belongs to the Special Issue Spatial Data Science and Knowledge Discovery)

20 pages, 4390 KB  
Article
Study on Temperature Response Characteristics of Gas Containing Coal at Different Freezing Temperatures
by Qiang Wu, Zhaofeng Wang, Liguo Wang, Shujun Ma, Yongxin Sun, Shijie Li and Boyu Lin
Fuels 2026, 7(1), 11; https://doi.org/10.3390/fuels7010011 - 19 Feb 2026
Viewed by 147
Abstract
In the process of using the freezing method to uncover coal from stone gates, the thermal evolution profiles of the coal body during the freezing process tend to be complex due to the presence of gas and moisture. To investigate the temperature response of coal containing gas under different freezing temperature conditions, a self-developed low-temperature freezing test system for coal containing water and gas was used to conduct freezing and cooling tests at different freezing temperatures (−5 °C to −30 °C). The temperature changes at various measuring points inside the coal over time were monitored in real time, and the temperature distribution, cooling law, and strain evolution process of the coal in the axial and radial directions were analyzed. The experimental results show that the cooling process of the center point of the coal can be divided into four stages: rapid cooling, extremely slow temperature drop, relatively slow cooling, and stable constant temperature. The time required to reach the stable constant temperature stage is inversely proportional to the freezing temperature, and corresponding prediction formulas have been established based on this. The standardized coal briquettes exhibit a gradient distribution characteristic of gradually increasing temperature from outside to inside in both axial and radial directions, with the radial temperature distribution being well matched by an exponential decay model. The strain of coal is affected by both thermal shrinkage and ice-induced expansion. The occurrence time of frost heave is positively correlated with freezing temperature, while the strain of frost heave is negatively correlated with freezing temperature. The axial frost heave effect is significantly stronger than the radial effect, but the radial frost heave occurs slightly earlier than the axial effect. This study reveals the thermal-mechanical coupling response mechanism of gas-containing coal during the low-temperature freezing process, and the research results can provide theoretical support for parameter optimization and engineering application of low-temperature freezing anti-outburst technology. Full article

43 pages, 8869 KB  
Article
Mathematical Modeling of Operational Reliability of Mine Lifting Equipment Based on Censored Data
by Denis A. Zadkov, Nikita V. Martyushev, Boris V. Malozyomov, Anton Y. Demin, Alexander V. Pogrebnoy, Elezaveta E. Kuleshova and Denis V. Valuev
Mathematics 2026, 14(4), 716; https://doi.org/10.3390/math14040716 - 18 Feb 2026
Viewed by 394
Abstract
In this study, a comprehensive mathematical method for modeling the operational reliability of mine hoisting equipment under conditions of incomplete and heavily censored data is developed. The analyzed dataset includes 259 observations collected over a five-year period for six critical components, with the overall level of censoring reaching 62% and exceeding 70% for long life mechanical subsystems. Considering right, left, and interval censoring, the paper proposes a unified statistical procedure that combines empirical estimation of failure rates with parametric identification using Weibull, exponential, normal, and lognormal distributions. Model parameters are estimated using censored data–aware fitting procedures, while model selection is performed based on likelihood-based criteria, supplemented by correlation analysis to assess agreement between empirical and fitted reliability curves. The methodology is implemented computationally in the Mathcad Prime environment and is supplemented with mathematical tools for reconstructing survival curves, analyzing parameter sensitivity, and evaluating robustness at different censoring levels. In addition, an economic optimization model is formulated to determine cost-effective maintenance intervals by minimizing an integral functional that accounts for preventive maintenance, repair, and downtime costs. The results demonstrate that the proposed approach provides stable reliability estimates and reliable forecast intervals, enabling the construction of generalized life cycle curves for individual subsystems. The study establishes a rigorous mathematical basis for the transition from fixed-interval maintenance to adaptive, reliability-oriented maintenance strategies in industrial mine hoisting systems. Full article
(This article belongs to the Special Issue Reliability Analysis and Statistical Computing)
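
The kind of censored-data parametric fitting described above can be reproduced in Python with the lifelines library, which handles right-censored observations directly. A minimal sketch with synthetic durations (the Weibull parameters, sample size, and censoring rate are illustrative; the study itself was implemented in Mathcad Prime):

```python
import numpy as np
from lifelines import WeibullFitter, ExponentialFitter, LogNormalFitter

rng = np.random.default_rng(0)

# hypothetical operating times (hours) for one subsystem, with right-censoring:
# event_observed = 1 for a recorded failure, 0 for a censored observation
durations = rng.weibull(1.6, 259) * 4000.0
event_observed = rng.random(259) > 0.62          # ~62% censored, as in the study

models = [WeibullFitter(), ExponentialFitter(), LogNormalFitter()]
for m in models:
    m.fit(durations, event_observed=event_observed)
    print(type(m).__name__, "AIC =", round(m.AIC_, 1))
```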

21 pages, 3969 KB  
Article
Modelling NO2 Emissions at Eskom’s Coal-Fired Power Station: Application of Statistical Distributions at Arnot
by Mpendulo Wiseman Mamba and Delson Chikobvu
Environments 2026, 13(2), 111; https://doi.org/10.3390/environments13020111 - 17 Feb 2026
Viewed by 477
Abstract
The combustion of coal comes at a heavy price in pollutant emissions. To assist in the planning and management of these emissions and to protect human health, the current study uses relatively heavy-tailed distributions, namely the Weibull, Lognormal and Pareto distributions, to analyse and characterise the distribution of NO2 emissions (in tons) from Arnot, a coal-fired power station of South Africa's power utility, Eskom. Quantile–quantile (QQ) plots and their corresponding derivative plots for the three distributions are used to characterise the statistical distribution of NO2 emissions. The strength of using derivative plots of the three distributions for characterising NO2 emissions from a coal-fuelled power station is that they better capture and explain the behaviour of the data across its different components. Although this method offers flexible ways of characterising data, it is not commonly applied to emissions data, especially NO2 emissions from an Eskom coal-fuelled power station such as Arnot. The choice of distributions in this study is motivated by their ability to accommodate varied tails relative to the exponential distribution. Thus, the tail-heaviness ranking of the distributions from lighter to heavier tail, that is, Weibull, Lognormal and Pareto, is taken into consideration in order to arrive at the best-fitting distribution(s). The Weibull distribution, with a lighter tail than the Exponential distribution, gave the best fit over the Lognormal and Pareto distributions for the main body of the data. The Pareto distribution, however, captures the extreme emission tail behaviour much better than the other two distributions. The Kolmogorov–Smirnov and Vasicek–Song (VS) goodness-of-fit statistics were used to further assess the appropriateness of the fitted distributions. The selection of the Weibull distribution implies that milder high values and less frequent very high NO2 emissions are expected, showing the weakness of such criteria when extremes are present. These findings may be of interest to authorities planning and drafting policies for the reduction and management of emissions, as they can assist in better understanding emission behaviour and in planning to reduce the impact on humans and the environment. They may also assist practitioners in air quality modelling before other, more sophisticated methods are explored. Full article
(This article belongs to the Special Issue Air Pollution in Urban and Industrial Areas, 4th Edition)
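
QQ plots against the Weibull, Lognormal and Pareto references, as used above, can be produced with scipy.stats.probplot. A small sketch with synthetic emission totals; the shape parameters passed via sparams are placeholders that would normally be estimated from the data first:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# hypothetical monthly NO2 emission totals (tons); real data come from Arnot
emissions = stats.weibull_min.rvs(c=1.8, scale=900.0, size=120, random_state=2)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
references = [("Weibull", stats.weibull_min, (1.8,)),
              ("Lognormal", stats.lognorm, (0.5,)),
              ("Pareto", stats.pareto, (3.0,))]
for ax, (name, dist, sparams) in zip(axes, references):
    # probplot draws ordered data against theoretical quantiles of `dist`
    stats.probplot(emissions, dist=dist, sparams=sparams, plot=ax)
    ax.set_title(f"{name} QQ plot")
plt.tight_layout()
plt.show()
```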

17 pages, 2932 KB  
Article
Label-Free Detection of HeLa Cells Activity Excited by Blue LED
by Vera Gradišnik, Darko Gumbarević and Petar Kolar
Sensors 2026, 26(4), 1294; https://doi.org/10.3390/s26041294 - 17 Feb 2026
Viewed by 329
Abstract
This paper investigates a novel optical method that uses a high-responsivity a-Si:H photodiode for label-free detection of luminescence from HeLa cervical cancer cells excited by a blue LED. We examine the energy distribution of the energy-gap density of states (DOS) from the photodiode's long-time transient current, which shows exponential decay kinetics in the HeLa cell reaction. We analysed the transient response of the a-Si:H p-i-n photodiode upon illumination of the analyte with pulsed blue LED light to better understand HeLa cell activity and the fundamental defect kinetics processes in the a-Si:H material. Results suggest that the characteristic very low-level, time-varying light response of HeLa cells is due to chemiluminescence within the cells, resulting from the reaction between nitric oxide (NO) and hydrogen peroxide (H2O2). Given the low signal intensity and noise, we applied a Savitzky–Golay (SG) filter to post-process the data. By reducing noise without attenuating chemiluminescent peaks, the Savitzky–Golay filter enabled accurate, reproducible quantification of the photocurrent response, reflecting the kinetics of cellular reactions. Further studies and more precise measurement instruments are needed for this real-time, label-free, non-destructive method, which applies SG-filtered signal processing to microfluidic optical biosensors. Full article
(This article belongs to the Special Issue Intelligent Microfluidics)
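
The Savitzky–Golay post-processing step is available as scipy.signal.savgol_filter. A tiny sketch on a synthetic decaying photocurrent trace (window length, polynomial order and signal parameters are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

# hypothetical noisy transient photocurrent with slow exponential decay (nA)
t = np.linspace(0, 60, 600)                       # seconds
signal = 5.0 * np.exp(-t / 20.0)                  # decay kinetics
noisy = signal + np.random.default_rng(3).normal(0, 0.2, t.size)

# an odd window length and low polynomial order preserve broad chemiluminescent
# peaks while suppressing high-frequency noise
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)
```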

33 pages, 2814 KB  
Article
A Novel Gompertz-Type Distribution with Applications to Radiological Dose and Pharmacokinetic Data
by Ayşe Metin Karakaş, Fatma Bulut and Sultan Şahin Bal
Mathematics 2026, 14(4), 702; https://doi.org/10.3390/math14040702 - 16 Feb 2026
Viewed by 336
Abstract
This study introduces a novel four-parameter lifetime distribution constructed within the Topp–Leone Power Gompertz framework. Owing to its flexible structure, the proposed model accommodates a wide range of density shapes and hazard-rate patterns, including increasing, decreasing, bathtub-shaped, unimodal, and other non-monotone behaviors. Key distributional properties, including moments, entropy-based measures, quantile-based measures, and order statistics, are derived. Parameter inference is conducted using both likelihood-based and Bayesian approaches, and the finite-sample performance of the resulting estimators is assessed via Monte Carlo simulations. The practical relevance of the proposed distribution is illustrated using two real datasets and benchmarked against several competing lifetime models, including the Gompertz, Power Gompertz, Weibull, Topp–Leone Gompertz, Marshall–Olkin Gompertz, and Exponentiated Gompertz distributions. Overall, the comparative analyses demonstrate the superior fitting performance of the proposed model, highlighting its effectiveness for complex reliability, survival, and pharmacokinetic data. Full article

33 pages, 630 KB  
Article
PID Control for Uncertain Systems with Integral Measurements and DoS Attacks Using a Binary Encoding Scheme
by Nan Hou, Yanshuo Wu, Hongyu Gao, Zhongrui Hu and Xianye Bu
Entropy 2026, 28(2), 225; https://doi.org/10.3390/e28020225 - 15 Feb 2026
Viewed by 238
Abstract
In this paper, an observer-based proportional-integral-derivative (PID) controller is designed for a class of uncertain nonlinear systems subject to integral measurements, denial-of-service (DoS) attacks and bounded stochastic noises under a binary encoding scheme (BES). Parameter uncertainty is modeled by a norm-bounded multiplicative expression. Integral measurements are considered to reflect the delayed signal collection of the sensor. For communication, the BES is applied to the signal transmission from the sensor to the observer and from the controller to the actuator. Random bit flipping, which may be caused by channel noises, is modeled by a stochastic noise. Randomly occurring DoS attacks, which may arise from the shared network and block the transmitted signals entirely, are also taken into account. Three sets of Bernoulli-distributed random variables are adopted to describe the random occurrence of uncertainties, bit flipping and DoS attacks. The aim of this paper is to design an observer-based PID controller which guarantees that the closed-loop system reaches exponential ultimate boundedness in mean square (EUBMS). By virtue of Lyapunov stability theory, stochastic analysis techniques and the matrix inequality method, a sufficient condition is developed for designing the observer-based PID controller such that the closed-loop system achieves EUBMS performance, while the ultimate upper bound of the controlled output is minimized. The gain matrices of the observer-based controller are obtained explicitly by solving an optimization problem with several matrix inequality constraints. Two simulation examples are given that demonstrate the usefulness of the proposed control method. Full article
(This article belongs to the Special Issue Information Theory in Control Systems, 3rd Edition)
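
A generic picture of a binary encoding scheme with Bernoulli bit flipping, as described above, is quantize-to-bits, flip each bit independently with some probability, then decode. The NumPy sketch below is only an illustration of that channel model, not the paper's BES or its observer/LMI design; all numeric choices are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(value, n_bits=8, lo=-1.0, hi=1.0):
    """Uniformly quantize a bounded scalar and return its bits (MSB first)."""
    level = int(round((np.clip(value, lo, hi) - lo) / (hi - lo) * (2**n_bits - 1)))
    return np.array([(level >> k) & 1 for k in reversed(range(n_bits))])

def decode(bits, lo=-1.0, hi=1.0):
    """Map a bit vector back to the quantized scalar value."""
    level = int("".join(map(str, bits)), 2)
    return lo + level / (2**len(bits) - 1) * (hi - lo)

# channel noise flips each transmitted bit independently with probability p
p_flip = 0.05
bits = encode(0.3)
received = bits ^ (rng.random(bits.size) < p_flip).astype(int)
print(decode(bits), "->", decode(received))
```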

28 pages, 2384 KB  
Article
Bayesian Estimation of Spatial Lagged Panel Quantile Regression Model
by Man Zhao, Rushan Huang, Hanfang Li, Youxi Luo and Qiming Liu
Appl. Sci. 2026, 16(4), 1927; https://doi.org/10.3390/app16041927 - 14 Feb 2026
Viewed by 169
Abstract
This paper proposes a Bayesian estimation method for spatial lagged panel quantile models. The proposed model simultaneously considers spatial lag effects of the dependent variable and the quantile regression framework, enabling effective capture of spatial dependence and conditional distribution heterogeneity. The research constructs a Bayesian estimation framework based on the asymmetric Laplace distribution by decomposing the random disturbance term into a combination of normal and exponential distributions, successfully developing a probabilistic model with both thick tail robustness and computational efficiency. On this basis, the study derives the full conditional posterior probability distributions of model parameters and designs a hybrid Markov Chain Monte Carlo (MCMC) sampling algorithm integrating Gibbs sampling and Metropolis–Hastings algorithm for parameter estimation. Numerical simulation experiments demonstrate that, compared with traditional estimation methods, the proposed Bayesian estimation approach exhibits superior estimation accuracy and robustness across different quantiles, with particularly pronounced advantages in small sample and heavy-tailed distribution scenarios. This methodology provides a more reliable theoretical tool for analyzing panel data with spatial dependencies. This method can not only accurately quantify the spatial spillover effect, but also identify the different effects of the same influencing factor at different emission levels, which provides a strong methodological support for formulating differentiated and precise emission reduction policies. Full article
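
The normal–exponential decomposition of the disturbance term mentioned above is the standard location-scale mixture representation of the asymmetric Laplace error used in Bayesian quantile regression; written schematically for quantile level τ (the parameterization below is the usual one and is assumed, not quoted from the paper):

```latex
\varepsilon_{it} \stackrel{d}{=} \theta\, v_{it} + \kappa \sqrt{v_{it}}\, z_{it},
\qquad v_{it} \sim \mathrm{Exp}(1), \quad z_{it} \sim \mathcal{N}(0,1),
\qquad
\theta = \frac{1 - 2\tau}{\tau(1 - \tau)}, \qquad
\kappa^{2} = \frac{2}{\tau(1 - \tau)}.
```

Conditional on v_{it}, the error is Gaussian, which is typically what makes Gibbs updates for the regression coefficients tractable in this setting, with remaining non-conjugate parameters handled by Metropolis–Hastings steps.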
