Search Results (3,292)

Search Parameters:
Keywords = Bayesian estimation

21 pages, 798 KB  
Article
A Bayesian Inference Algorithm for Equipment Software Price Estimation Based on Nonlinear Contribution Models
by Tian Meng and Guoping Jiang
Algorithms 2026, 19(5), 396; https://doi.org/10.3390/a19050396 (registering DOI) - 15 May 2026
Abstract
To address the challenges of difficult value quantification, lack of market benchmarks, and scarcity of historical data for embedded software amidst the intelligent transformation of equipment systems, this study develops a scientific price estimation method based on functional capability contribution. A nonlinear pricing model is constructed to accurately characterize the two-stage evolution of software price: diminishing marginal utility during the mature technology accumulation stage and exponential growth during the technical bottleneck breakthrough stage. To ensure the consistency of pricing logic between hardware and software, a penalty function is innovatively designed to modify the standard likelihood function, effectively transforming practical business logic into a model regularization term. Parameter estimation is achieved by employing a Bayesian inference framework integrated with operational constraints, utilizing Markov Chain Monte Carlo (MCMC) sampling to realize robust posterior inference under small-sample constraints. Empirical analysis demonstrates that the proposed method achieves superior cross-domain data transfer performance compared to traditional baseline models, with a Leave-One-Out Cross-Validation (LOOCV) Mean Absolute Percentage Error (MAPE) of 21.2%. This research provides a practical value-oriented price estimation method for embedded equipment software pricing.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
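The penalty-modified likelihood plus MCMC workflow this abstract describes can be sketched generically. Everything below — the toy quadratic pricing model, the data, the penalty weight, and the random-walk Metropolis sampler — is invented for illustration; the paper's actual pricing model and business constraints are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: "price" y vs. a capability score x (hypothetical stand-in for the
# paper's functional-capability measure).
x = rng.uniform(0.2, 1.0, size=12)
y = 2.0 + 1.5 * x**2 + rng.normal(0.0, 0.1, size=12)

def log_post(theta, lam=10.0):
    """Penalized log-posterior: Gaussian likelihood for y = a + b*x**2 plus a
    quadratic penalty discouraging b < 0 (a stand-in for encoding business
    logic as a regularization term, as the abstract describes)."""
    a, b = theta
    resid = y - (a + b * x**2)
    loglik = -0.5 * np.sum(resid**2) / 0.1**2
    penalty = -lam * min(b, 0.0) ** 2      # active only when b violates b >= 0
    logprior = -0.5 * (a**2 + b**2) / 10.0**2  # weak Gaussian priors
    return loglik + penalty + logprior

# Random-walk Metropolis sampling of the penalized posterior.
theta = np.array([0.0, 0.0])
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.05, size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
post = np.array(samples[1000:])            # drop burn-in
print(post.mean(axis=0))                   # posterior means of (a, b)
```

With a dozen points the posterior means land near the generating values (a ≈ 2, b ≈ 1.5), which is the small-sample regime the abstract targets.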
33 pages, 2272 KB  
Article
Statistical Inference of Stress–Strength Reliability for Multi-State System Based on Exponentiated Pareto Distribution Using Generalized Survival Signature
by Jiaojiao Guo, Jialin Su, Jianhui Li and Tian Guo
Symmetry 2026, 18(5), 846; https://doi.org/10.3390/sym18050846 (registering DOI) - 15 May 2026
Abstract
The stress–strength reliability model is widely applied in various fields such as mechanical engineering, materials science, and aerospace engineering to identify weak links in systems and thereby improve system reliability. This paper analyzes the stress–strength reliability for multi-state systems composed of multi-state components. One of the main contributions is the derivation of a multi-state stress–strength reliability model under combined stresses based on the generalized survival signature theory. In the model analysis, it is assumed that each component of the system is subjected to two different stresses corresponding to two different strengths, and that the stress variables and strength variables are mutually independent and all follow the exponentiated Pareto distribution with the common second shape parameter. Another contribution is the use of maximum likelihood estimation, empirical Bayesian estimation, and weakly informative Bayesian estimation to estimate the variable parameters and the stress–strength reliability under the progressive first-failure censoring scheme. In addition, the asymptotic confidence intervals for the stress–strength reliability model are derived, and the Bayesian credible intervals are constructed based on MCMC sampling. Finally, through MCMC simulation of a three-state consecutive 3-out-of-5: G system, the accuracy of the variable parameters and the stress–strength reliability under the aforementioned point estimation and interval estimation methods is analyzed, and the performance of these estimation methods is compared under different sample sizes. In addition, sensitivity analyses were conducted on the common shape parameter and the hyperparameters of the weakly informative prior distributions. Furthermore, a real data set is applied to illustrate the proposed procedures.
(This article belongs to the Section Mathematics)
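The quantity at the heart of this abstract, R = P(stress < strength), can be checked by simulation for a single component. The sketch below uses illustrative shape parameters, samples the exponentiated Pareto distribution by inverse transform, and compares the Monte Carlo estimate against the closed form that holds for exponentiated families sharing a baseline; it is a minimal one-stress, one-strength case, not the paper's multi-state generalized-survival-signature model.

```python
import numpy as np

rng = np.random.default_rng(1)

def rexp_pareto(alpha, beta, size, rng):
    """Sample the exponentiated Pareto distribution,
    CDF F(x) = [1 - (1 + x)**(-beta)]**alpha, by inverse transform."""
    u = rng.uniform(size=size)
    return (1.0 - u ** (1.0 / alpha)) ** (-1.0 / beta) - 1.0

# One stress X and one strength Y sharing the second shape parameter beta,
# matching the paper's independence assumption (parameter values invented).
a_strength, a_stress, beta = 3.0, 1.5, 2.0
n = 200_000
y = rexp_pareto(a_strength, beta, n, rng)
x = rexp_pareto(a_stress, beta, n, rng)
r_mc = np.mean(x < y)

# For exponentiated distributions with a common baseline CDF G, integrating
# F_X(y) f_Y(y) dy gives P(X < Y) = alpha_Y / (alpha_X + alpha_Y).
r_exact = a_strength / (a_strength + a_stress)
print(r_mc, r_exact)   # the two agree to roughly three decimal places
```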
20 pages, 7592 KB  
Article
Intelligent Elastic Parameter Inversion Method Based on Kernel Density Estimation Within a Bayesian Framework
by Lianqiao Wang, Dameng Liu, Jingbo Yang, Xuebin Yin, Zhenyu Li, Wenchao Xiang, Hao Chang and Siyuan Wei
Processes 2026, 14(10), 1604; https://doi.org/10.3390/pr14101604 - 15 May 2026
Abstract
Seismic inversion is a key technique for quantitative characterization of subsurface elastic parameters and detailed reservoir description. However, due to the limited bandwidth of seismic signals and the strong heterogeneity of complex reservoirs, conventional inversion methods struggle to simultaneously achieve high vertical resolution and lateral continuity. To address these challenges, an intelligent elastic parameter inversion method based on kernel density estimation within a Bayesian framework is proposed. First, kernel density estimation is introduced to augment the training samples, thereby alleviating data scarcity. Second, a hybrid architecture integrating convolutional modules, Mamba, and cross-attention mechanisms is constructed to achieve collaborative modeling of local spatial features and long-range temporal dependencies. The cross-attention mechanism is further employed to adaptively weight and fuse multi-source features, thus enhancing the representation capability of the model. Subsequently, by designing a joint loss function, the strengths of deterministic inversion and data-driven approaches are effectively integrated, ensuring physical consistency while enhancing data adaptability, thereby improving the stability and accuracy of the inversion results. Furthermore, the neural network outputs are used as the initial model for Bayesian inversion to construct a probabilistic inversion framework for elastic parameter inversion. Finally, experimental results demonstrate that the proposed method improves the R2 values of inversion results by more than 8.0% and 5.0% compared with conventional methods in thin interbedded models and real data experiments, respectively.
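The KDE-based sample-augmentation step this abstract mentions can be sketched in a few lines. The toy two-dimensional data below are invented, and `scipy.stats.gaussian_kde` stands in for whatever density estimator the authors actually used.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# A small set of (feature, elastic-parameter) pairs standing in for the
# scarce training samples described in the abstract.
samples = rng.multivariate_normal(
    [2.5, 3.1], [[0.04, 0.02], [0.02, 0.05]], size=40
)

# Fit a joint kernel density estimate over the pairs, then draw synthetic
# pairs from it to augment the training set.
kde = gaussian_kde(samples.T)          # gaussian_kde expects (dims, n)
synthetic = kde.resample(200, seed=3).T  # 200 new (feature, target) pairs
augmented = np.vstack([samples, synthetic])
print(augmented.shape)                 # (240, 2): original + synthetic
```

Because the KDE smooths the empirical distribution, the synthetic pairs stay near the observed data manifold rather than extrapolating, which is what makes this a conservative augmentation strategy.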
19 pages, 3396 KB  
Article
Bayesian Deep Learning and Probabilistic Forecasting of Stock Prices
by Ndivhuwo Nelufhangani and Daniel Maposa
Algorithms 2026, 19(5), 391; https://doi.org/10.3390/a19050391 - 14 May 2026
Abstract
This study investigates the effectiveness of Bayesian probabilistic methods for stock price forecasting on the Johannesburg Stock Exchange by implementing and comparing Gaussian process regression (GPR), Bayesian long short-term memory (Bayesian LSTM), and Bayesian neural networks (BNNs). Using daily open, high, low, close, and volume (OHLCV) data and engineered technical indicators for FirstRand and Discovery from January 2005 to June 2025 (5187 observations), models were trained and evaluated with the mean absolute error (MAE), root mean squared error (RMSE), and mean squared error (MSE). The GPR produced reliable, well-calibrated intervals in relatively stable regimes, but its performance degraded on the more volatile Discovery series. Bayesian LSTM delivered conservative uncertainty estimates with wide predictive intervals but showed the largest point forecast errors. The BNNs achieved the best balance between accuracy and uncertainty quantification, producing the lowest errors for FirstRand and competitive performance for Discovery. Comparative analysis indicates that BNNs are most suitable when point accuracy and calibrated uncertainty are both priorities, GPR is valuable for smaller or more stable data regimes, and Bayesian LSTM is preferable where conservative, risk-conscious intervals are required. This study highlights the practical value of embedding uncertainty into financial forecasts and recommends matching Bayesian model choice to market volatility, data availability, and decision maker risk appetite.
21 pages, 2854 KB  
Article
Bayesian Estimation of Electric Vehicle Conversion Rates by Average Daily Vehicle Kilometers Traveled in South Korea
by Min Woo Byun, Oh Hoon Kwon and Wooseok Do
Appl. Sci. 2026, 16(10), 4837; https://doi.org/10.3390/app16104837 - 13 May 2026
Abstract
This study presents a Bayesian framework for estimating electric vehicle (EV) conversion rates based on average daily vehicle kilometers traveled (ADVKT) in South Korea. Although maximizing the environmental benefits of EVs requires accounting for real-world driving patterns and vehicle usage, the current EV policies in South Korea largely focus on supply expansion and uniform subsidy schemes, with limited consideration of driver behavioral heterogeneity. Using 2023 national vehicle travel statistics and regional-level data, the study applies a Bayesian approach to estimate the posterior probability of EV conversion by ADVKT based on the ADVKT distributions of internal combustion engine vehicles (ICEVs) and EVs, with the overall EV conversion rate serving as the prior probability. The results reveal distinct conversion trends by vehicle type, usage, and region. Non-commercial passenger cars show peak conversion potential in the 70–75 km/day range across all regional classifications, supporting the feasibility of nationwide policies. In contrast, commercial vehicles (e.g., vans and trucks) exhibit more varied patterns, indicating the need for targeted approaches. A simulation-based validation demonstrates that the estimated conversion probabilities closely align with the observed distribution of EVs. These findings provide empirical guidance for distance-based EV subsidy design, charging infrastructure planning, and strategic vehicle targeting in South Korea’s transition to low-emission transport.
(This article belongs to the Special Issue Intelligent Transportation and Mobility Analytics)
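The Bayesian update this abstract describes, class-conditional ADVKT densities as likelihoods with the overall conversion rate as the prior, reduces to a direct application of Bayes' rule. The densities and prior below are illustrative placeholders, not the paper's fitted distributions.

```python
from scipy.stats import norm

# Prior: overall EV share of the fleet (illustrative value).
prior_ev = 0.05
# Likelihoods: ADVKT densities (km/day) among EVs and ICEVs
# (normal shapes and parameters invented for the sketch).
f_ev = norm(loc=55.0, scale=15.0)
f_icev = norm(loc=40.0, scale=20.0)

def p_ev_given_advkt(d):
    """Posterior P(EV | ADVKT = d) by Bayes' rule over the two classes."""
    num = f_ev.pdf(d) * prior_ev
    den = num + f_icev.pdf(d) * (1.0 - prior_ev)
    return num / den

for d in (20.0, 55.0, 75.0):
    print(d, round(p_ev_given_advkt(d), 4))
```

With these placeholder densities the posterior rises with daily distance, which mirrors the paper's finding of peak conversion potential in a mid-to-high ADVKT band.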
30 pages, 2075 KB  
Systematic Review
Human–AI Collaboration in Risk- and Uncertainty-Aware Portfolio Reinforcement Learning: A Critical Review
by Firdaous Khemlichi, Youness Idrissi Khamlichi and Safae Elhaj Ben Ali
Information 2026, 17(5), 476; https://doi.org/10.3390/info17050476 - 13 May 2026
Abstract
Financial markets are characterized by non-stationarity, regime shifts, and complex cross-asset interactions, which challenge traditional portfolio optimization and motivate reinforcement learning (RL) for adaptive decision-making. However, many RL-based approaches remain predominantly return-centric, with risk, uncertainty, and human oversight only weakly integrated, limiting robustness and practical applicability. This review provides a critical synthesis of risk-aware and uncertainty-sensitive reinforcement learning for portfolio optimization from a human–AI collaboration perspective. We analyze major architectural paradigms—including single-agent, hierarchical, multi-agent, and modular systems—together with risk modeling strategies (e.g., reward shaping, constraint-based optimization, and downside risk measures such as CVaR) and probabilistic approaches to uncertainty estimation (e.g., Bayesian neural networks, Monte Carlo dropout, and ensembles). A structured analysis of 57 fully assessed studies reveals that only 5 (9%) explicitly couple uncertainty estimation with risk constraint mechanisms, while 38 (69%) treat risk and uncertainty as structurally independent components. We identify a central structural limitation: risk objectives are rarely conditioned on epistemic uncertainty, while uncertainty estimates seldom influence constraint mechanisms or capital allocation. This decoupling leads to fragmented frameworks that remain difficult to deploy in real financial environments. By integrating architectural design, risk modeling, uncertainty estimation, and evaluation practices, this review proposes a unified, deployment-oriented perspective for developing governance-aligned portfolio decision-support systems.
(This article belongs to the Special Issue Decision Models for Economics and Business Management)
25 pages, 2695 KB  
Article
Robust Pose and Inertial Parameter Estimation of an Unknown Spacecraft Based on a Variational Bayesian Dual Vector Quaternion Extended Kalman Filter
by Shengli Xu, Yangwang Fang and Hanqiao Huang
Entropy 2026, 28(5), 549; https://doi.org/10.3390/e28050549 (registering DOI) - 12 May 2026
Abstract
Accurately determining the parameters of an unmodeled spacecraft is crucial. To estimate the parameters of a non-cooperative spacecraft under measurement uncertainty, an extended Kalman filter based on variational Bayesian inference and dual vector quaternions (VB-DVQEKF) is designed, using a dual quaternion framework to ascertain orientation and position. The system kinematics and dynamics are modeled using dual vector quaternions, yielding a notably concise representation. The method is comprehensive in that it accounts for the coupled interactions between translational and rotational motions. Furthermore, to address uncertainties in the measurements, a variational Bayesian approach is employed for the dependable simultaneous estimation of state parameters and measurement noise covariance. Mathematical simulations are used to verify the proposed VB-DVQEKF, and its robust capabilities are demonstrated through comparisons with several conventional parameter estimation techniques, including the conventional DVQ-EKF and the Sage–Husa adaptive DVQ-EKF (SH-DVQEKF). Quantitative results based on root-mean-square error (RMSE), convergence time, and final estimation error confirm that the proposed VB-DVQEKF achieves the smallest steady-state error among the compared methods and remains stable under white-burst, gradient (drift), and outlier-type measurement anomalies.
(This article belongs to the Section Information Theory, Probability and Statistics)
19 pages, 4456 KB  
Article
Multi-Model Fusion of Lithium Battery SOC Estimation Based on Bayesian Principle
by Funian Hu and Bin Xie
Mathematics 2026, 14(10), 1642; https://doi.org/10.3390/math14101642 - 12 May 2026
Abstract
The battery management system (BMS) is the core of ensuring the safety and performance of new energy vehicles, and real-time high-precision estimation of battery state of charge (SOC) is its key function, which directly affects battery safety, endurance, and service life. Faced with the challenges brought by high energy density and ultra-fast charging technology, lithium-ion batteries exhibit strong nonlinear and time-varying characteristics, making it difficult for existing SOC estimation methods to balance computational efficiency and accuracy. This study proposes a Bayesian-based Hammerstein multi-model (MM) fusion algorithm for accurate lithium battery SOC estimation across a wide temperature range, especially under low-temperature conditions. First, two Hammerstein SOC submodels are constructed: a traditional polynomial Hammerstein model and a TPA-Hammerstein model incorporating the temporal pattern attention mechanism. Second, KV-ADAM is employed for parameter training and identification of the submodels. Finally, a Bayesian weighted fusion strategy is used to dynamically integrate the outputs of the two submodels. The experimental results show that this method significantly improves the accuracy and robustness of SOC estimation, overcomes the limitations of a single model under complex dynamic conditions, provides an effective solution for lithium battery SOC estimation, and helps the safe operation of electric vehicles and the sustainable development of the industry.
(This article belongs to the Special Issue Artificial Intelligence and Algorithms)
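The Bayesian weighted fusion of submodel outputs this abstract describes can be sketched as recursive posterior weighting: each submodel's weight is multiplied by the likelihood of the latest observation under its prediction, then renormalized. The two synthetic "submodels", their noise levels, and the likelihood width below are invented stand-ins for the paper's polynomial-Hammerstein and TPA-Hammerstein models.

```python
import numpy as np

def gauss_pdf(e, sigma=0.02):
    """Gaussian likelihood of a prediction error e (width is an assumption)."""
    return np.exp(-0.5 * (e / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
true_soc = np.linspace(1.0, 0.6, 50)             # synthetic discharge profile
pred_a = true_soc + rng.normal(0, 0.010, 50)     # more accurate submodel
pred_b = true_soc + rng.normal(0, 0.040, 50)     # less accurate submodel

w = np.array([0.5, 0.5])                         # uniform prior weights
fused = []
for k in range(50):
    preds = np.array([pred_a[k], pred_b[k]])
    fused.append(w @ preds)                      # fused SOC estimate
    lik = gauss_pdf(preds - true_soc[k])         # evidence for each submodel
    w = w * lik / (w * lik).sum()                # Bayesian weight update
print(w)   # the weight concentrates on the more accurate submodel
```

The dynamic part is that the weights keep adapting: if operating conditions changed and submodel B became more accurate, its weight would recover over subsequent steps.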
18 pages, 1711 KB  
Article
Analysis of Risk Factors Influencing the Outcomes of Capsizing, Sinking, and Flooding Accidents in Coastal Waters of the Republic of Korea: A Fuzzy Bayesian Network Approach
by Byung-Hwa Song
J. Mar. Sci. Eng. 2026, 14(10), 897; https://doi.org/10.3390/jmse14100897 (registering DOI) - 12 May 2026
Abstract
Capsizing, sinking, and flooding accidents occurring in the coastal waters of the Republic of Korea constitute a persistent marine safety concern, accounting for approximately 17% of total fatalities associated with marine accidents. Previous statistical analyses of accident causation have identified key contributing factors such as adverse weather conditions, improper cargo loading, and deficiencies in vessel maintenance; however, the complex interdependencies among these factors have not been sufficiently quantified. To address this limitation, this study proposes a fuzzy Bayesian network (FBN) model to systematically evaluate and quantify the risk factors associated with capsizing, sinking, and flooding accidents. A total of 164 adjudicated marine accident cases that occurred in Korean coastal waters over a 10-year period (2015–2024) were analyzed (data collection cutoff: 31 December 2024) to estimate prior probabilities for six major causal categories. Conditional probability tables (CPTs) were derived through a structured Delphi survey conducted with marine safety experts possessing more than 10 years of professional experience. To mitigate the subjectivity inherent in expert judgment, triangular fuzzy numbers (TFNs) and centroid-based defuzzification were applied. Sensitivity analysis identified sea state (SI = 0.0155) and cargo loading condition (SI = 0.0125) as the two most influential factors affecting the probability of capsizing. Scenario analysis further revealed that when adverse weather conditions and improper cargo loading occur simultaneously, the probability of capsizing increases to 39.3%, representing a 5.3 percentage point increase compared to the baseline. In addition, the model demonstrated a close agreement with observed accident outcome distributions, with a Kullback–Leibler (KL) divergence of 0.038, indicating differences within 1.3 percentage points across all outcome categories. The findings of this study provide practical implications for targeted marine safety interventions and the prioritization of regulatory measures in the coastal waters of the Republic of Korea.
(This article belongs to the Special Issue Advanced Studies in Marine Data Analysis)
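The TFN aggregation and centroid defuzzification steps mentioned in this abstract follow a standard recipe: average the experts' triangular fuzzy numbers component-wise, then take the centroid of the resulting triangle. The expert judgments below are invented for illustration, not entries from the paper's CPTs.

```python
import numpy as np

# Each expert gives a probability judgment as a triangular fuzzy number
# (lower, modal, upper); the values here are made up.
experts = np.array([
    [0.20, 0.30, 0.45],   # expert 1
    [0.25, 0.35, 0.50],   # expert 2
    [0.15, 0.30, 0.40],   # expert 3
])

l, m, u = experts.mean(axis=0)   # aggregate the TFNs by averaging
crisp = (l + m + u) / 3.0        # centroid of a triangular membership function
print(round(crisp, 4))
```

The centroid of a triangle with vertices at l, m, u on the probability axis is simply their mean, which is why the crisp value reduces to (l + m + u)/3.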
23 pages, 790 KB  
Article
A Novel Distribution on the Unit Interval with Properties and Applications for Electronic Components
by Farrukh Jamal, Mohamed A. Abd Elgawad, Muhammad Imran and Shahid Mohammad
Axioms 2026, 15(5), 359; https://doi.org/10.3390/axioms15050359 - 12 May 2026
Abstract
This paper introduces a novel continuous probability distribution on the unit interval called the unit Jamal distribution and explores its properties. The proposed distribution performs well in modeling bathtub-shaped data, effectively capturing its characteristic hazard rate behavior. Key mathematical characteristics such as moments, the moment generating function, order statistics, entropy, and the quantile function are thoroughly derived. Parameter estimation is conducted using maximum likelihood and Bayesian estimation methods. A simulation study is conducted to evaluate the accuracy of parameter estimates and to examine the distribution’s behavior. Additionally, the applicability of the proposed distribution is demonstrated through analysis of two real-world datasets, allowing for a comparison of its performance against existing models.
16 pages, 1004 KB  
Article
Bayesian Estimation of Vancomycin Exposure and Population Pharmacokinetics in Obese Patients
by Ha-Jin Chun, Hyeon Gyeom Choi, Eun Sook Bang, Jin Ok Kyun, Young Rong Kim, Eun Jin Kim and So Hee Kim
Antibiotics 2026, 15(5), 485; https://doi.org/10.3390/antibiotics15050485 - 11 May 2026
Abstract
Background/Objectives: This study compared the pharmacokinetic (PK)/pharmacodynamic (PD) targets of vancomycin in obese patients using two Bayesian programs to evaluate the accuracy and bias of these estimates. Additionally, population pharmacokinetics in obese patients were compared with those in the general patient population. Methods: Medical records of obese adults [body mass index (BMI) ≥ 30 kg/m2] treated with vancomycin at Ajou University Hospital between 2010 and 2017 were retrospectively reviewed. Patients with peak (Cpeak) and trough (Ctrough) concentrations were included. Vancomycin area under the plasma concentration-time curve (AUC) and Ctrough were estimated using Capcil (ver 6.31) and MwPharm (ver 2.3.1.89) software. Accuracy and bias were analyzed, and population PK parameters were evaluated. Results: A total of 149 cases were analyzed. AUC estimates from one-point sampling using Capcil were significantly lower than the reference values, whereas MwPharm produced estimates closer to the reference with higher accuracy and lower bias. Ctrough estimates from one-point sampling showed high accuracy for both programs. Obese patients exhibited lower volume of distribution, higher total body clearance, and shorter elimination half-life than the general patient population. Conclusions: Vancomycin pharmacokinetics differ in obese patients. Bayesian analysis using MwPharm showed AUC and Ctrough estimates that were closer to the reference values and exhibited lower bias than those obtained with Capcil in this dataset. These findings may support more informed dosing decisions for vancomycin in obese patients.
(This article belongs to the Section Pharmacokinetics and Pharmacodynamics of Drugs)
27 pages, 4278 KB  
Article
Reframing COPD Instability Under GOLD 2026: A Bayesian Multi-Outcome Retrospective EHR Analysis in Primary Care
by José Manuel Helguera Quevedo, Pedro Mesa Rodríguez, Luis Richard Rodríguez, Zaira María Correcher Salvador, José Manuel Paredero Domínguez, Francisco Javier Plaza Zamora, Fernando María Navarro Ros and José David Maya Viejo
J. Respir. 2026, 6(2), 9; https://doi.org/10.3390/jor6020009 (registering DOI) - 11 May 2026
Abstract
Background/Objectives: COPD instability is heterogeneous; GOLD 2026 lowers the prior-year threshold to ≥1 moderate or severe exacerbation. We assessed whether this low-threshold criterion behaves as a high-sensitivity operational signal in primary-care EHRs. Methods: A retrospective multicenter same-window EHR pilot study in two Spanish primary-care centers (n = 106). Predictors and six binary endpoints were aggregated over the same 12-month window: any exacerbation, high-risk history, severe hospitalization, SABA dispensing, SAMA dispensing, and any rescue dispensing. We fitted Bayesian multi-outcome hierarchical logistic models with patient-level random intercepts, cross-endpoint partial pooling, regularizing priors, and missingness indicators. Robustness used prespecified scenarios, 10-fold ELPD cross-validation, and high-missingness exclusion. Results: Any exacerbation occurred in 53/106 patients; high-risk history in 25/106; hospitalization in 16/106; and any rescue dispensing in 65/106. Diagnostics were stable, and posterior predictive checks supported marginal adequacy. Heart failure showed the clearest positive pattern across exacerbation-defined endpoints; reliever-dispensing endpoints showed a distinct care-pathway-sensitive pattern. No scenario improved out-of-sample adequacy. High-missingness exclusion preserved directionality in 120/120 overlapping pairs; the median |ΔlogOR| was 0.061; and 119/120 remained within ±log(1.25). Conclusions: GOLD 2026 “any exacerbation” behaved as a high-sensitivity operational signal in an endpoint-operating-point sense, not as a homogeneous phenotype. Findings are within-window associations, not causal, medication-effect, or prospective prediction estimates; external validation is required.
(This article belongs to the Collection Feature Papers in Journal of Respiration)
26 pages, 6023 KB  
Article
Sparse Spherical Harmonic Component Selection for Gravity Field Modeling
by Nijia Qian, Guobin Chang, Xun Zhang, Yong Feng and Dehu Yang
Remote Sens. 2026, 18(10), 1488; https://doi.org/10.3390/rs18101488 - 9 May 2026
Abstract
Gravity field modeling with spherical harmonics is a fundamental task in physical geodesy. In conventional solutions, all spherical harmonic (SH) components below a prescribed maximum degree and order are typically retained, even though some components may contribute little to the final model. This study investigates SH component selection in gravity field modeling using sparse regularization, specifically the Lasso and adaptive Lasso. A statistical strategy is introduced for incorporating a reference global gravity model by accounting for its variance-covariance information. The resulting L1-norm-regularized estimation problem is solved with an efficient gradient-based algorithm, and the regularization parameter is selected using tailored criteria, including generalized cross-validation and the corrected Akaike information criterion. In addition, an approximate variance-covariance matrix of the estimated parameters is derived analytically from a Bayesian perspective, showing that only the non-zero coefficients contribute to the non-zero covariance structure. Closed-loop simulations based on EGM2008 show that the proposed method achieves modeling accuracy comparable to that of conventional Tikhonov regularization while retaining less than 10% of the SH components. The results demonstrate the feasibility of sparse SH modeling for obtaining compact gravity field representations without substantial loss of accuracy.
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
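The L1-regularized estimation the abstract refers to can be sketched with ISTA, a proximal-gradient method whose proximal step is soft-thresholding; this is one plausible instance of a "gradient-based algorithm", not necessarily the solver the authors used. The design matrix, sparsity pattern, and regularization weight below are illustrative, not the EGM2008 setup.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1. A stands in for a design
# matrix of SH components; the true coefficient vector is sparse.
rng = np.random.default_rng(5)
A = rng.normal(size=(120, 40))
x_true = np.zeros(40)
x_true[[3, 11, 27]] = [1.0, -0.8, 0.6]
b = A @ x_true + rng.normal(0, 0.01, 120)

lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz constant of the gradient
x = np.zeros(40)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x = soft_threshold(x - step * grad, step * lam)
print(np.flatnonzero(np.abs(x) > 1e-6))  # indices of the retained components
```

The soft-thresholding step is what zeroes out weakly supported coefficients, mirroring the paper's result that most SH components can be dropped with little accuracy loss.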
16 pages, 5602 KB  
Article
Tailoring Prevention and Control Strategies for Childhood Tuberculosis: From a Global Analysis of Burden Trends and Inequalities Across Three Age Groups (1990–2021) to Prevention and Control Strategies
by Xiaoming Liu, Howard Takiff, Hui Jiang and Weimin Li
Trop. Med. Infect. Dis. 2026, 11(5), 129; https://doi.org/10.3390/tropicalmed11050129 - 9 May 2026
Abstract
Background: Childhood tuberculosis (TB) is a major but underappreciated threat to human health. Because diagnosis of tuberculosis in children is difficult, there are a lack of accurate global statistics. This study aimed to comprehensively assess the long-term global, regional, and age-specific burden [...] Read more.
Background: Childhood tuberculosis (TB) is a major but underappreciated threat to human health. Because diagnosing TB in children is difficult, accurate global statistics are lacking. This study aimed to comprehensively assess the long-term global, regional, and age-specific burden of childhood TB from 1990 to 2021, to examine its temporal trends and socioeconomic inequalities, and to project future patterns through 2045. Methods: We used incidence and mortality data from the GBD 2021 database for TB in children aged 0–14 years from 1990 to 2021. Children were stratified into three age groups (<5, 5–9, and 10–14 years) and classified by region and Socio-Demographic Index (SDI). Multiple statistical approaches were employed, including average annual percentage change and Bayesian age-period-cohort models, to analyze spatiotemporal trends in disease burden and generate projections for the next 20 years. We used decomposition analysis to separate demographic from epidemiological drivers and concentration indices to quantify socioeconomic inequalities. Results: In 2021, there were an estimated 759,300 incident cases of childhood TB and 70,659 deaths globally. Since 1990, childhood TB incidence and mortality rates have declined at average annual rates of 2.61% and 4.48%, respectively. The SDI showed a significant negative correlation with both incidence and mortality of childhood TB (p < 0.05). In 2021, 78.01% of childhood TB deaths were in children under 5 years of age, and over 80% of global childhood TB deaths occurred in Sub-Saharan Africa. Gains from epidemiological interventions were partly offset by rapid population growth in low-SDI regions. Projections indicate that incidence and mortality will continue to decline through 2045, but not rapidly enough to meet the goal of eliminating childhood TB by 2035.
Conclusions: Global efforts should adopt an age-specific framework that prioritizes universal preventive treatment to eliminate mortality in children under 5 years, and implements active case finding to reduce transmission chains among children 5–14 years. Sustaining the decrease in the TB burdens of low-SDI regions requires international financing strategies attuned to expanding populations to ensure epidemiological success is not erased by demographic growth. Full article
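The average annual percentage change used in trend analyses like the one above is conventionally obtained from a log-linear regression of rate on year. A minimal Python sketch with a synthetic incidence series (the baseline level and decline rate are illustrative, not the study's data):

```python
import numpy as np

def aapc(years, rates):
    """Average annual percent change from a log-linear regression of rate on year."""
    slope, _ = np.polyfit(years, np.log(rates), 1)
    return (np.exp(slope) - 1.0) * 100.0

# Synthetic incidence series declining 2.61% per year from an arbitrary baseline.
years = np.arange(1990, 2022)
rates = 120.0 * 0.9739 ** (years - 1990)   # 0.9739 = 1 - 0.0261

print(round(aapc(years, rates), 2))  # -2.61
```

The exponent of the fitted slope converts the per-year log change back to a percentage, so a slope of ln(0.9739) recovers the built-in 2.61% annual decline exactly.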
22 pages, 625 KB  
Article
Assessment and Management Implications for Chub Mackerel (Scomber japonicus) in the North Pacific: Integrating Length-Based Bayesian and Catch-MSY Models
by Sisi Huang, Famou Zhang, Heng Zhang, Ming Gao and Kangbo Li
Fishes 2026, 11(5), 276; https://doi.org/10.3390/fishes11050276 - 9 May 2026
Abstract
Chub mackerel (Scomber japonicus) is a key pelagic commercial fish species in the North Pacific Ocean and is one of the priority species managed by the North Pacific Fisheries Commission (NPFC). This study assessed the stock status and maximum sustainable yield (MSY) of S. japonicus using the Length-based Bayesian Biomass (LBB) estimator and the Catch-MSY model. The assessment was based on length-frequency data collected from the light purse seine fishery on the high seas of the North Pacific (6061 individuals sampled during April–December, 2019–2024) combined with historical catch data spanning 1995 to 2024. Management implications were further analyzed using the Kobe plot and the Kobe II Strategy Matrix (K2SM). The results showed the following: (1) The LBB model revealed significant interannual variability in the stock status of S. japonicus. Although the stock was within sustainable limits (B/BMSY ≥ 1.0) in 2019, 2021, and 2022, it suffered from overfishing or was heavily overfished in 2020, 2023, and 2024. Notably, integrated data analysis indicated that during 2023–2024, the B/BMSY ratio dropped to 0.51, reflecting a marked deterioration in stock condition. (2) The Catch-MSY model estimated the maximum sustainable yield (MSY) at 278,000 t, with a 95% confidence interval (CI) ranging from 162,000 t to 475,000 t. (3) Based on catch data, the Kobe plot indicated that in the final assessment year, the probability of the stock being in a sustainable state was only 23.2% at the 95% confidence level. The probabilities of being recruitment-overfished, fishing-overfished, and severely overfished were 37.6%, 14.9%, and 24.3%, respectively. With relative biomass B/BMSY < 1 and relative fishing mortality F/FMSY < 1, the stock status in the final assessment year was diagnosed as recruitment overfishing.
(4) Integrating the outputs of the two models, the Kobe plot, and the probabilistic projections of future stock status from the Kobe II Strategy Matrix (K2SM), it is recommended that the total allowable catch (TAC) be set within 60–80% of the baseline TAC during the medium-term management phase (5–10 years), corresponding to a range of 211,300 t to 281,700 t. In the long term, if the stock exhibits positive recovery trends, the TAC could be gradually increased to the MSY level to achieve the maximum sustainable utilization of this fishery resource. Full article
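The Catch-MSY approach rests on a surplus-production (Schaefer) model, under which MSY = rK/4 and the Kobe plot classifies status from B/BMSY and F/FMSY. A minimal Python sketch of the biomass projection and a Kobe-style quadrant label follows; the growth rate, carrying capacity, and catch series are illustrative placeholders, not the paper's estimates.

```python
import numpy as np

def project_biomass(r, K, B0, catches):
    """Schaefer surplus-production dynamics: B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = [B0]
    for c in catches:
        B.append(max(B[-1] + r * B[-1] * (1.0 - B[-1] / K) - c, 1e-9))
    return np.array(B)

def kobe_status(b_ratio, f_ratio):
    """Quadrant label from B/Bmsy and F/Fmsy, as in a Kobe plot."""
    if b_ratio >= 1.0 and f_ratio < 1.0:
        return "sustainable"
    if b_ratio < 1.0 and f_ratio < 1.0:
        return "recruitment overfished (recovering)"
    if b_ratio >= 1.0 and f_ratio >= 1.0:
        return "fishing overfished"
    return "severely overfished"

r, K = 0.6, 1.85e6           # illustrative toy values only
msy = r * K / 4.0            # Schaefer MSY = r*K/4
print(round(msy))            # 277500 with these toy values
B = project_biomass(r, K, B0=0.9 * K, catches=[250_000] * 10)
status = kobe_status(b_ratio=B[-1] / (K / 2.0), f_ratio=0.8)
print(status)
```

In the Schaefer model BMSY = K/2, so the status call divides the final biomass by K/2; real Catch-MSY implementations additionally sample (r, K) pairs from priors and keep those whose trajectories remain feasible given the catch history.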
(This article belongs to the Special Issue Modeling Approach for Fish Stock Assessment)