Search Results (1,638)

Search Parameters:
Keywords = Bayesian framework

26 pages, 2183 KB  
Article
A Bi-Level Intelligent Control Framework Integrating Deep Reinforcement Learning and Bayesian Optimization for Multi-Objective Adaptive Scheduling in Opto-Mechanical Automated Manufacturing
by Lingyu Yin, Zhenhua Fang, Kaicen Li, Jing Chen, Naiji Fan and Mengyang Li
Appl. Sci. 2026, 16(2), 732; https://doi.org/10.3390/app16020732 (registering DOI) - 10 Jan 2026
Abstract
The opto-mechanical automated manufacturing process, characterized by stringent process constraints, dynamic disturbances, and conflicting optimization objectives, presents significant control challenges for traditional scheduling and control approaches. We formulate the scheduling problem within a closed-loop control paradigm and propose a novel bi-level intelligent control framework integrating Deep Reinforcement Learning (DRL) and Bayesian Optimization (BO). An inner DRL agent acts as an adaptive controller, generating control actions (scheduling decisions) by perceiving the system state and learning a near-optimal policy through a carefully designed reward function, while an outer BO loop automatically tunes the DRL’s hyperparameters and reward weights for superior performance. This synergistic BO-DRL mechanism facilitates intelligent and adaptive decision-making. The proposed method is extensively evaluated against standard meta-heuristics, including Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), on a complex 20-job × 20-machine flexible job shop scheduling benchmark specific to opto-mechanical automated manufacturing. The experimental results demonstrate that our BO-DRL algorithm significantly outperforms these benchmarks, achieving reductions in makespan of 13.37% and 25.51% compared to GA and PSO, respectively, alongside higher machine utilization and better on-time delivery. Furthermore, the algorithm exhibits enhanced convergence speed, superior robustness under dynamic disruptions (e.g., machine failures, urgent orders), and excellent scalability to larger problem instances. This study confirms that integrating DRL’s perceptual decision-making capability with BO’s efficient parameter optimization yields a powerful and effective solution for intelligent scheduling in high-precision manufacturing environments. Full article
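The outer BO loop described above can be sketched as a standard Gaussian-process loop with an expected-improvement acquisition. In this minimal sketch the entire inner DRL training run is abstracted as a black-box function `makespan(log_lr)`; the quadratic objective and the single tuned hyperparameter (log learning rate) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from scipy.stats import norm

def makespan(log_lr):
    # Stand-in for "train the DRL agent with this hyperparameter, return makespan".
    return (log_lr + 3.0) ** 2 + 1.0  # pretend optimum at lr = 1e-3

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def bo_minimize(f, bounds=(-6.0, 0.0), n_init=4, n_iter=12, noise=1e-3):
    rng = np.random.default_rng(0)
    X = rng.uniform(bounds[0], bounds[1], n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(bounds[0], bounds[1], 200)
    for _ in range(n_iter):
        # Gaussian-process posterior mean/variance on a candidate grid
        K = rbf(X, X) + noise * np.eye(len(X))
        Kinv = np.linalg.inv(K)
        ks = rbf(grid, X)
        mu = ks @ Kinv @ y
        var = np.clip(1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks), 1e-12, None)
        # Expected improvement (minimization form) picks the next config to try
        best = y.min()
        z = (best - mu) / np.sqrt(var)
        ei = (best - mu) * norm.cdf(z) + np.sqrt(var) * norm.pdf(z)
        x_next = grid[np.argmax(ei)]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

best_lr, best_makespan = bo_minimize(makespan)
```

In the paper's setting, each evaluation of `f` would be a full DRL training-and-evaluation cycle, which is exactly why a sample-efficient surrogate like a GP is used for the outer loop.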
18 pages, 2633 KB  
Article
Prediction of Ammonia Mitigation Efficiency in Sodium Bisulfate-Treated Broiler Litter Using Artificial Neural Networks
by Busra Yayli and Ilker Kilic
Animals 2026, 16(2), 210; https://doi.org/10.3390/ani16020210 (registering DOI) - 10 Jan 2026
Abstract
The increasing demand for poultry meat, driven by its favorable nutritional profile, including low cholesterol and high protein content, has resulted in intensified production volumes and, consequently, elevated ammonia (NH3) emissions. Artificial intelligence-based predictive approaches offer an effective alternative to conventional treatment-oriented methods by enabling faster and more accurate estimation of NH3 removal performance. This study aimed to predict the ammonia removal efficiency of broiler litter generated during a production cycle under controlled laboratory-scale conditions using artificial neural networks (ANNs) trained with different learning algorithms. Four ANN models were developed based on the Levenberg–Marquardt (LM), Fletcher–Reeves (FR), Scaled Conjugate Gradient (SCG), and Bayesian Regularization (BR) algorithms. The results showed that the LM-based model with 12 hidden neurons achieved the highest predictive performance (R2 = 0.9777; MSE = 0.0033; RMSE = 0.0574; MAPE = 0.0833), while the BR-based model with 10 neurons showed comparable accuracy. In comparison with the FR and SCG models, the LM algorithm demonstrated superior predictive accuracy and generalization capability. Overall, the findings suggest that ANN-based modeling is a reliable, data-informed approach for estimating NH3 removal efficiency, providing a potential decision-support framework for ammonia mitigation strategies in poultry production systems. Full article
(This article belongs to the Section Poultry)
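The fit-and-score workflow behind the reported metrics (R², RMSE, MAPE) can be sketched with scikit-learn on synthetic data. Note that scikit-learn's MLP does not offer Levenberg–Marquardt or Bayesian Regularization training (those are MATLAB-style algorithms), so LBFGS is used here purely to illustrate the pipeline; the features and target below are invented stand-ins for the litter measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(42)
# Synthetic stand-in: 4 treatment/environment features -> NH3 removal efficiency
X = rng.uniform(0, 1, (200, 4))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 + 0.01 * rng.normal(size=200)

# 12 hidden neurons, matching the best LM architecture reported in the study
model = MLPRegressor(hidden_layer_sizes=(12,), solver='lbfgs',
                     max_iter=5000, random_state=0).fit(X[:150], y[:150])
pred = model.predict(X[150:])

r2 = r2_score(y[150:], pred)
rmse = mean_squared_error(y[150:], pred) ** 0.5
mape = mean_absolute_percentage_error(y[150:], pred)
```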
20 pages, 1520 KB  
Article
Autonomous Real-Time Regional Risk Monitoring for Unmanned Swarm Systems
by Tianruo Cao, Yuxizi Zheng, Lijun Liu and Yongqi Pan
Mathematics 2026, 14(2), 259; https://doi.org/10.3390/math14020259 - 9 Jan 2026
Abstract
Existing State-of-the-Art (SOTA) methods for situational awareness typically rely on high-bandwidth transmission of raw data or computationally intensive models, which are often impractical for resource-constrained edge devices in unstable communication environments. To address these limitations, this paper introduces a comprehensive framework for Regional Risk Monitoring utilizing unmanned swarm systems. We propose an innovative knowledge distillation approach (SIKD) that leverages both soft label dark knowledge and inter-layer relationships, enabling compressed models to run in real time on edge nodes while maintaining high accuracy. Furthermore, recognition results are fused using Bayesian inference to dynamically update the regional risk level. Experimental results demonstrate the feasibility of the proposed framework. Quantitatively, the proposed SIKD algorithm reduces the model parameters by 52.34% and computational complexity to 44.21% of the original model, achieving a 3× inference speedup on edge CPUs. Furthermore, it outperforms state-of-the-art baseline methods (e.g., DKD and IRG) in terms of convergence speed and classification accuracy, ensuring robust real-time risk monitoring. Full article
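The Bayesian fusion step, where per-node recognition results update a shared regional risk level, reduces to repeated application of Bayes' rule. A minimal sketch, assuming each edge node reports a likelihood vector over three risk levels (the numbers are illustrative, not from the paper):

```python
import numpy as np

def fuse(prior, likelihoods):
    """Sequentially fold node reports into a posterior over risk levels."""
    post = np.array(prior, dtype=float)
    for lik in likelihoods:
        post *= lik          # Bayes: posterior proportional to prior x likelihood
        post /= post.sum()   # renormalize after each report
    return post

prior = [1 / 3, 1 / 3, 1 / 3]         # low / medium / high, initially uniform
reports = [[0.2, 0.3, 0.5],            # node 1 leans "high"
           [0.1, 0.4, 0.5],            # node 2 agrees
           [0.3, 0.4, 0.3]]            # node 3 is ambivalent
posterior = fuse(prior, reports)
```

Because only small likelihood vectors (not raw sensor data) cross the network, this style of fusion fits the low-bandwidth edge setting the paper targets.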
20 pages, 1275 KB  
Article
Fractional Viscoelastic Modeling of Multi-Step Creep and Relaxation in an Aerospace Epoxy Adhesive
by Jesús Gabino Puente-Córdova, Flor Yanhira Rentería-Baltiérrez, José de Jesús Villalobos-Luna and Pedro López-Cruz
Symmetry 2026, 18(1), 130; https://doi.org/10.3390/sym18010130 - 9 Jan 2026
Abstract
Structural adhesives in aeronautical applications are routinely exposed to complex loading histories that generate time-dependent deformation, making accurate prediction of their viscoelastic response essential for reliable assessment of joint integrity. This work presents an integrated experimental and modeling study of the aerospace-grade epoxy adhesive 3M Scotch-Weld EC-2216 using multi-step creep and stress-relaxation tests performed at room temperature and controlled loading rates, combined with fractional viscoelastic modeling. Unlike traditional single-step characterizations, the multi-step protocol employed here captures the cumulative loading effects and fading-memory dynamics that govern the adhesive’s mechanical response. The experimental data were analyzed using fractional Maxwell, Voigt–Kelvin, and Zener formulations. Statistical evaluation based on the Bayesian Information Criterion (BIC) consistently identified the Fractional Zener Model (FZM) as the most robust representation of the stress-relaxation behavior, effectively capturing both the unrelaxed and relaxed modulus. The results demonstrate that EC-2216 exhibits hierarchical relaxation mechanisms and history-dependent viscoelasticity that cannot be accurately described by classical integer-order models. Overall, the study validates the use of fractional operators to represent the broad and hierarchical relaxation spectra typical of toughened aerospace epoxies and provides a rigorous framework for durability assessment and predictive modeling of adhesively bonded structures. Full article
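The BIC-based model selection used to pick the Fractional Zener Model can be illustrated with two competing relaxation fits. This sketch uses a single-exponential versus a stretched-exponential decay as simple stand-ins for integer-order versus fractional relaxation (the paper's actual candidates are the fractional Maxwell, Voigt–Kelvin, and Zener forms); for Gaussian residuals, BIC = n·ln(RSS/n) + k·ln(n).

```python
import numpy as np
from scipy.optimize import curve_fit

def bic(y, y_fit, k):
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# Synthetic relaxation data: stretched decay toward a relaxed plateau
t = np.linspace(0.01, 10, 100)
true = 2.0 * np.exp(-(t / 3.0) ** 0.5) + 0.5
y = true + 0.01 * np.random.default_rng(1).normal(size=t.size)

exp1 = lambda t, a, tau, c: a * np.exp(-t / tau) + c
stretch = lambda t, a, tau, b, c: a * np.exp(-(t / tau) ** b) + c

p1, _ = curve_fit(exp1, t, y, p0=[2, 3, 0.5])
p2, _ = curve_fit(stretch, t, y, p0=[2, 3, 0.5, 0.5])
bic1 = bic(y, exp1(t, *p1), 3)      # classical single-mode fit
bic2 = bic(y, stretch(t, *p2), 4)   # broad-spectrum fit: lower BIC wins
```

The extra parameter is penalized by the k·ln(n) term, so the broad-spectrum model is preferred only when its improvement in fit outweighs that penalty, which mirrors the paper's argument for the FZM.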
30 pages, 399 KB  
Article
Statistical Framework for Quantum Teleportation: Fidelity Analysis and Resource Optimization
by Nueraminaimu Maihemuti, Jiangang Tang and Jiayin Peng
Mathematics 2026, 14(2), 255; https://doi.org/10.3390/math14020255 - 9 Jan 2026
Abstract
This paper establishes a comprehensive statistical framework for analyzing quantum teleportation protocols under realistic noisy conditions. We develop novel mathematical tools to characterize the complete statistical distribution of teleportation fidelity, including both mean and variance, for systems experiencing decoherence and channel imperfections. Our analysis demonstrates that the teleportation fidelity follows a characteristic distribution F ∼ P(F_avg, ΔF²), where the variance ΔF² depends crucially on entanglement quality and channel noise. We derive the optimal resource allocation condition (∂E_ent/∂F)/(∂C_classical/∂F) = β/α that minimizes total resource consumption while achieving target fidelity. Furthermore, we introduce a Bayesian adaptive protocol that enhances robustness against noise through real-time statistical estimation. The theoretical framework is validated through numerical simulations and provides practical guidance for experimental implementations in quantum communication networks. Full article
(This article belongs to the Special Issue Quantum Information, Cryptography and Computation)
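A fidelity distribution with both a mean and a nonzero variance is easy to reproduce by Monte Carlo. The sketch below sends Haar-random qubit states through a dephasing channel of strength p and records the resulting state fidelity; the channel choice and p = 0.3 are illustrative assumptions, not the paper's model. For a dephasing channel, ⟨ψ|ρ'|ψ⟩ = 1 − p + p(2u − 1)² with u the population of |0⟩, so the fidelity genuinely varies over input states.

```python
import numpy as np

rng = np.random.default_rng(7)
p = 0.3            # dephasing strength (illustrative)
n = 100_000

# Haar-random qubit states: normalized complex Gaussian 2-vectors
psi = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)

u = np.abs(psi[:, 0]) ** 2                 # population of |0>
F = 1 - p + p * (2 * u - 1) ** 2           # fidelity after dephasing

F_avg, dF2 = F.mean(), F.var()             # empirical (F_avg, ΔF²)
```

Analytically, u is uniform on [0, 1] for Haar-random qubits, giving F_avg = 1 − 2p/3 and ΔF² = 4p²/45, which the simulation reproduces.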
14 pages, 5439 KB  
Brief Report
Emergence and Phylodynamics of Influenza D Virus in Northeast China Reveal Sporadic Detection and Predominance of the D/Yamagata/2019 Lineage in Cattle
by Hongjin Li, Weiwen Yan, Xinxin Liu, Bing Gao, Jiahuizi Peng, Feng Jiang, Qixun Cui, Che Song, Xianyuan Kong, Hongli Li, Tobias Stoeger, Abdul Wajid, Aleksandar Dodovski, Chao Gao, Maria Inge Lusida, Claro N. Mingala, Dmitry B. Andreychuk and Renfu Yin
Viruses 2026, 18(1), 93; https://doi.org/10.3390/v18010093 - 9 Jan 2026
Abstract
Influenza D virus (IDV), an emerging orthomyxovirus with zoonotic potential, infects diverse hosts, causes respiratory disease, and remains poorly characterized in China despite its global expansion. From October 2023 to January 2025, we collected 563 nasal swabs from cattle across 28 farms in Jilin Province, Northeast China, and identified seven IDV-positive samples (1.2%), recovering two viable isolates (JL/YB2024 and JL/CC2024). Full-genome sequencing revealed complete, stable seven-segment genomes with high nucleotide identity (up to 99.9%) to contemporary Chinese D/Yamagata/2019 strains and no evidence of reassortment. Maximum-likelihood and time-resolved Bayesian phylogenies of 231 global hemagglutinin-esterase-fusion (HEF) sequences placed the Jilin isolates within the East Asian D/Yamagata/2019 clade and traced their most recent common ancestor to approximately 2017 (95% highest posterior density: 2016–2018), suggesting a cross-border introduction likely associated with regional cattle movement. No IDV was detected in parallel surveillance of swine, underscoring cattle as the principal reservoir and amplifying host. Bayesian skyline analysis demonstrated a marked decline in global IDV genetic diversity during 2020–2022, coinciding with livestock-movement restrictions imposed during the COVID-19 pandemic. Collectively, these findings indicate that IDV circulation in China is sporadic and geographically localized, dominated by the D/Yamagata/2019 lineage, and shaped by multiple independent incursions rather than a single emergence. We recommend both incorporating IDV diagnostics into routine bovine respiratory disease surveillance and cattle-import quarantine programs, and adopting a One Health framework to monitor potential human spillover and future viral evolution. Full article
(This article belongs to the Special Issue Emerging and Re-Emerging Viral Zoonoses)
31 pages, 1893 KB  
Article
Board Management Characteristics and Financial Outcomes in Sustainability-Oriented European Companies
by Alexandra-Mădălina Țăran, Grațiela-Georgiana Noja, Mihaela Diaconu, Flavia Barna, Kamal Naser and Marilen-Gabriel Pirtea
Sustainability 2026, 18(2), 657; https://doi.org/10.3390/su18020657 - 8 Jan 2026
Abstract
Achieving long-term company financial performance requires strategic initiatives on the part of board management that can stimulate the sustainable development of companies through green strategies and eco-innovation. The research conducted in this study aims to identify the impact of executive management on financial performance based on the achievement of the sustainable development framework within European companies operating in various industries. An advanced empirical analysis was performed on a cross-sectional dataset based on 4,219 European companies, which was collected for the period of the 2022 fiscal year, considering dimensions such as corporate governance, sustainability, and financial performance. The methodological endeavor was founded on several modern econometric techniques, namely Generalized Structural Equation Modeling (GSEM) and Bayesian Network Analysis through Gaussian Graphical Models (GGMs). The main results highlight that companies having well-structured board management and corporate governance policies aligned with the SDGs facilitate the transition to sustainable economic models, enhancing financial performance, innovations, and long-term sustainable growth. Furthermore, policies should be tailored to emphasize the importance of optimal board management size and continuous professional training for human capital involved in sustainable activities, thus enhancing long-term financial performance. Full article
(This article belongs to the Special Issue Sustainable Innovation, Business Models and Economic Performance)
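The Gaussian graphical model component can be illustrated with scikit-learn's graphical lasso: a sparse estimate of the precision matrix, whose nonzero off-diagonal entries indicate conditional dependence between indicators. The three variables and the chain structure (board quality drives ESG, ESG drives profitability) are invented for illustration, not the paper's data.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
n = 500
board = rng.normal(size=n)                          # board-quality proxy
esg = 0.7 * board + 0.5 * rng.normal(size=n)        # driven by board quality
roa = 0.7 * esg + 0.5 * rng.normal(size=n)          # driven by ESG, not board directly
X = np.column_stack([board, esg, roa])
X = (X - X.mean(0)) / X.std(0)                      # standardize before fitting

ggm = GraphicalLasso(alpha=0.05).fit(X)
prec = ggm.precision_
# In a chain board -> esg -> roa, board and roa are conditionally independent
# given esg, so the (board, roa) precision entry should be shrunk toward zero.
```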
30 pages, 2643 KB  
Review
Computational Design Strategies and Software for Lattice Structures and Functionally Graded Materials
by Delia Alexandra Prisecaru, Oliver Ulerich, Andrei Calin and Georgiana Ionela Paduraru
J. Compos. Sci. 2026, 10(1), 32; https://doi.org/10.3390/jcs10010032 - 8 Jan 2026
Abstract
This study presents a comparative analysis of software platforms and computational methods used in the design of three-dimensional lattice structures and functionally graded materials (FGMs). Through systematic evaluation of 31 computational platforms across seven critical criteria (lattice type support, parametric control, conformal generation, multi-material capabilities, ease of use, FEA integration, and AM compatibility), this review identifies that specialized platforms significantly outperform general-purpose CAD tools, with scores exceeding 30/35 points compared to 15–20/35 for conventional systems. The analysis reveals that implicit and voxel-based representations dominate high-performance applications, while traditional boundary-representation methods approach fundamental limitations for complex lattice generation. Emerging machine learning-driven frameworks demonstrate an 82% reduction in optimization iterations through Bayesian optimization and achieve property prediction speedups of nearly 100× compared to computational homogenization, enabling rapid inverse design workflows previously computationally infeasible. These insights provide researchers with evidence-based guidance for selecting computational approaches aligned with specific manufacturing capabilities and design objectives. Full article
(This article belongs to the Special Issue Lattice Structures)
25 pages, 1075 KB  
Article
Prompt-Based Few-Shot Text Classification with Multi-Granularity Label Augmentation and Adaptive Verbalizer
by Deling Huang, Zanxiong Li, Jian Yu and Yulong Zhou
Information 2026, 17(1), 58; https://doi.org/10.3390/info17010058 - 8 Jan 2026
Abstract
Few-Shot Text Classification (FSTC) aims to classify text accurately into predefined categories using minimal training samples. Recently, prompt-tuning-based methods have achieved promising results by constructing verbalizers that map input data to the label space, thereby maximizing the utilization of pre-trained model features. However, existing verbalizer construction methods often rely on external knowledge bases, which require complex noise filtering and manual refinement, making the process time-consuming and labor-intensive, while approaches based on pre-trained language models (PLMs) frequently overlook inherent prediction biases. Furthermore, conventional data augmentation methods focus on modifying input instances while overlooking the integral role of label semantics in prompt tuning. This disconnection often leads to a trade-off where increased sample diversity comes at the cost of semantic consistency, resulting in marginal improvements. To address these limitations, this paper first proposes a novel Bayesian Mutual Information-based method that optimizes label mapping to retain general PLM features while reducing reliance on irrelevant or unfair attributes to mitigate latent biases. Based on this method, we propose two synergistic generators that synthesize semantically consistent samples by integrating label word information from the verbalizer to effectively enrich data distribution and alleviate sparsity. To guarantee the reliability of the augmented set, we propose a Low-Entropy Selector that serves as a semantic filter, retaining only high-confidence samples to safeguard the model against ambiguous supervision signals. Furthermore, we propose a Difficulty-Aware Adversarial Training framework that fosters generalized feature learning, enabling the model to withstand subtle input perturbations. 
Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on most few-shot and full-data splits, with F1 score improvements of up to +2.8% on the standard AG’s News benchmark and +1.0% on the challenging DBPedia benchmark. Full article
24 pages, 1787 KB  
Article
Uncertainty-Aware Machine Learning for NBA Forecasting in Digital Betting Markets
by Matteo Montrucchio, Enrico Barbierato and Alice Gatti
Information 2026, 17(1), 56; https://doi.org/10.3390/info17010056 - 8 Jan 2026
Abstract
This study introduces a fully uncertainty-aware forecasting framework for NBA games that integrates team-level performance metrics, rolling-form indicators, and spatial shot-chart embeddings. The predictive backbone is a recurrent neural network equipped with Monte Carlo dropout, yielding calibrated sequential probabilities. The model is evaluated against strong baselines including logistic regression, XGBoost, convolutional models, a GRU sequence model, and both market-only and non-market-only benchmarks. All experiments rely on strict chronological partitioning (train ≤ 2022, validation 2023, test 2024), ablation tests designed to eliminate any circularity with bookmaker odds, and cross-season robustness checks spanning 2012–2024. Predictive performance is assessed through accuracy, Brier score, log-loss, AUC, and calibration metrics (ECE/MCE), complemented by SHAP-based interpretability to verify that only pre-game information influences predictions. To quantify economic value, calibrated probabilities are fed into a frictionless betting simulator using fractional-Kelly staking, an expected-value threshold, and bootstrap-based uncertainty estimation. Empirically, the uncertainty-aware model delivers systematically better calibration than non-Bayesian baselines and benefits materially from the combination of shot-chart embeddings and recent-form features. Economic value emerges primarily in less-efficient segments of the market: The fused predictor outperforms both market-only and non-market-only variants on moneylines, while spreads and totals show limited exploitable edge, consistent with higher pricing efficiency. Sensitivity studies across Kelly multipliers, EV thresholds, odds caps, and sequence lengths confirm that the findings are robust to modelling and decision-layer perturbations. 
The paper contributes a reproducible, decision-focused framework linking uncertainty-aware prediction to economic outcomes, clarifying when predictive lift can be monetized in NBA markets, and outlining methodological pathways for improving robustness, calibration, and execution realism in sports forecasting. Full article
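The decision layer described above, fractional-Kelly staking gated by an expected-value threshold, has a compact closed form. The 25% Kelly fraction and 2% EV threshold below are illustrative parameters, not the paper's tuned values.

```python
def stake_fraction(p_win, decimal_odds, kelly_mult=0.25, ev_min=0.02):
    """Fraction of bankroll to stake; zero when the edge is below threshold."""
    b = decimal_odds - 1.0                  # net odds received on a win
    ev = p_win * decimal_odds - 1.0         # expected value per unit staked
    if ev < ev_min:
        return 0.0                           # skip low-edge bets entirely
    kelly = (p_win * b - (1.0 - p_win)) / b  # full-Kelly fraction
    return max(0.0, kelly_mult * kelly)      # scale down for variance control

# Calibrated model says 55% at decimal odds 2.0 (implied 50%): bet 2.5%.
f1 = stake_fraction(0.55, 2.0)
# Model says 50% at odds 1.9 (negative EV): no bet.
f2 = stake_fraction(0.50, 1.9)
```

Scaling full Kelly by a fraction trades growth rate for drawdown control, which is why calibration quality (not just accuracy) matters for the economic results reported above.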
35 pages, 1515 KB  
Article
Bio-RegNet: A Meta-Homeostatic Bayesian Neural Network Framework Integrating Treg-Inspired Immunoregulation and Autophagic Optimization for Adaptive Community Detection and Stable Intelligence
by Yanfei Ma, Daozheng Qu and Mykhailo Pyrozhenko
Biomimetics 2026, 11(1), 48; https://doi.org/10.3390/biomimetics11010048 - 7 Jan 2026
Abstract
Contemporary neural and generative architectures are deficient in self-preservation mechanisms and sustainable stability. In uncertain or noisy situations, they frequently demonstrate oscillatory learning, overconfidence, and structural deterioration, indicating a lack of biological regulatory principles in artificial systems. We present Bio-RegNet, a meta-homeostatic Bayesian neural network architecture that integrates T-regulatory-cell-inspired immunoregulation with autophagic structural optimization. The model integrates three synergistic subsystems: the Bayesian Effector Network (BEN) for uncertainty-aware inference, the Regulatory Immune Network (RIN) for Lyapunov-based inhibitory control, and the Autophagic Optimization Engine (AOE) for energy-efficient regeneration, thereby establishing a closed energy–entropy loop that attains adaptive equilibrium among cognition, regulation, and metabolism. This triadic feedback achieves meta-homeostasis, transforming learning into a process of ongoing self-stabilization instead of static optimization. Bio-RegNet routinely outperforms state-of-the-art dynamic GNNs across twelve neuronal, molecular, and macro-scale benchmarks, enhancing calibration and energy efficiency by over 20% and expediting recovery from perturbations by 14%. Its domain-invariant equilibrium facilitates seamless transfer between biological and manufactured systems, exemplifying a fundamental notion of bio-inspired, self-sustaining intelligence—connecting generative AI and biomimetic design for sustainable, living computation. Bio-RegNet consistently outperforms the strongest baseline HGNN-ODE, improving ARI from 0.77 to 0.81 and NMI from 0.84 to 0.87, while increasing equilibrium coherence κ from 0.86 to 0.93. Full article
(This article belongs to the Special Issue Bio-Inspired AI: When Generative AI and Biomimicry Overlap)
22 pages, 2885 KB  
Article
Classifying National Pathways of Sustainable Development Through Bayesian Probabilistic Modelling
by Oksana Liashenko, Kostiantyn Pavlov, Olena Pavlova, Robert Chmura, Aneta Czechowska-Kosacka, Tetiana Vlasenko and Anna Sabat
Sustainability 2026, 18(2), 601; https://doi.org/10.3390/su18020601 - 7 Jan 2026
Abstract
As global efforts to achieve the Sustainable Development Goals (SDGs) enter a critical phase, there is a growing need for analytical tools that reflect the complexity and heterogeneity of development pathways. This study introduces a probabilistic classification framework designed to uncover latent typologies of national performance across the seventeen Sustainable Development Goals. Unlike traditional ranking systems or composite indices, the proposed method uses raw, standardised goal-level indicators and accounts for both structural variation and classification uncertainty. The model integrates a Bayesian decision tree with penalised spline regressions and includes regional covariates to capture context-sensitive dynamics. Based on publicly available global datasets covering more than 150 countries, the analysis identifies three distinct development profiles: structurally vulnerable systems, transitional configurations, and consolidated performers. Posterior probabilities enable soft classification, highlighting ambiguous or hybrid country profiles that do not fit neatly into a single category. Results reveal both monotonic and non-monotonic indicator behaviours, including saturation effects in infrastructure-related goals and paradoxical patterns in climate performance. This typology-sensitive approach provides a transparent and interpretable alternative to aggregated indices, supporting more differentiated and evidence-based sustainability assessments. The findings provide a practical basis for tailoring national strategies to structural conditions and the multidimensional nature of sustainable development. Full article
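The soft-classification idea, posterior membership probabilities that flag hybrid country profiles instead of forcing a hard label, can be illustrated with a Gaussian mixture. The paper's model is a Bayesian decision tree with penalised splines; a mixture is used here only because it exposes posterior probabilities in the same way, and the three synthetic clusters stand in for the vulnerable/transitional/consolidated profiles.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Synthetic 4-indicator SDG vectors for three development profiles
vulnerable = rng.normal(-2, 0.5, (50, 4))
transitional = rng.normal(0, 0.5, (50, 4))
consolidated = rng.normal(2, 0.5, (50, 4))
X = np.vstack([vulnerable, transitional, consolidated])

gm = GaussianMixture(n_components=3, random_state=0).fit(X)
post = gm.predict_proba(X)                  # soft membership per country
ambiguous = (post.max(axis=1) < 0.9).sum()  # countries without a clear profile
```

With overlapping real-world indicator distributions, `ambiguous` would be substantial, and those borderline posteriors are exactly the "hybrid profiles" the typology highlights.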
18 pages, 1453 KB  
Article
Analysis of Incorporating Market Prices into Stock Assessments for the Japanese Flying Squid (Todarodes pacificus)
by Dong-Jin Kwak, Ji-Hoon Choi and Do-Hoon Kim
Fishes 2026, 11(1), 32; https://doi.org/10.3390/fishes11010032 - 7 Jan 2026
Abstract
This study aimed to evaluate the stock status of the Japanese flying squid (Todarodes pacificus), a critical fishery resource in the waters of Korea, China, and Japan. To achieve this objective, we employed the Bio-Economic Stock Assessment (BESA) model, which integrates catch and market price data to estimate the biological and economic parameters of Japanese flying squid biomass. The assessment results indicated that the current biomass level of Japanese flying squid is below the biomass at Maximum Sustainable Yield (BMSY), suggesting that the stock is overfished. Moreover, the findings from the BESA model were consistent with results obtained from the Monte Carlo Method (CMSY) and Bayesian State-Space (BSS) models, both of which also indicated a collapsed status. Unlike the CMSY and BSS models, which rely on catch and catch per unit effort (CPUE) data, the BESA model utilizes market price data from National Statistics and the Food and Agriculture Organization (FAO), thereby eliminating the need for CPUE standardization. Consequently, the BESA model presents an alternative framework that complements existing assessment methods and enhances the reliability of fishery stock evaluations through its integrated approach, suggesting its potential applicability to the stock assessment of Japanese flying squid in Korea. Full article
(This article belongs to the Special Issue Fish Monitoring and Stock Assessment for Fishery Management)
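The biomass dynamics underlying this family of assessments (BESA, CMSY, BSS) are typically a surplus-production model. The sketch below projects Schaefer dynamics under a sustained catch above MSY and compares the terminal biomass to BMSY = K/2; the values of r, K, and the catch series are illustrative, not estimates for Todarodes pacificus.

```python
import numpy as np

def project(r, K, B0, catches):
    """Schaefer surplus production: B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = [B0]
    for C in catches:
        B_next = B[-1] + r * B[-1] * (1 - B[-1] / K) - C
        B.append(max(B_next, 1e-6))  # floor to keep biomass positive
    return np.array(B)

r, K = 0.6, 1000.0
catches = [170.0] * 20          # sustained catch above MSY = r*K/4 = 150
B = project(r, K, B0=0.9 * K, catches=catches)

B_msy = K / 2
status = B[-1] / B_msy          # B/BMSY < 1 indicates an overfished stock
```

The BESA approach replaces the CPUE index used to fit such a model with market-price information, but the B/BMSY status check at the end is the same.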
32 pages, 3255 KB  
Article
Integrated Blood Biomarker and Neurobehavioural Signatures of Latent Neuroinjury in Experienced Military Breachers Exposed to Repetitive Low-Intensity Blast
by Alex P. Di Battista, Maria Y. Shiu, Oshin Vartanian, Catherine Tenn, Ann Nakashima, Janani Vallikanthan, Timothy Lam and Shawn G. Rhind
Int. J. Mol. Sci. 2026, 27(2), 592; https://doi.org/10.3390/ijms27020592 - 6 Jan 2026
Abstract
Repeated exposure to low-level blast overpressure (BOP) during controlled detonations is an emerging occupational health concern for military breachers and Special Operations Forces personnel, given accumulating evidence that chronic exposure may produce subtle, subclinical neurotrauma. This study derived a latent neuroinjury construct integrating three complementary domains of brain health—post-concussive symptoms, working-memory performance, and circulating biomarkers—to determine whether breachers exhibit coherent patterns of neurobiological alteration. Symptom severity was assessed with the Rivermead Post-Concussion Questionnaire (RPQ), working memory with the N-Back task, and a panel of thirteen neuroproteomic biomarkers was measured, reflecting astroglial activation, neuronal and axonal injury, oxidative stress, inflammatory signaling, and neurotrophic regulation. Experienced Canadian Armed Forces breachers with extensive occupational BOP exposure were compared with unexposed controls. Bayesian latent-variable modeling provided probabilistic evidence for a chronic, subclinical neurobiological signal, with the strongest contributions arising from self-reported symptoms and smaller but consistent contributions from the biomarker domain. Working-memory performance did not load substantively on the latent factor. Several RPQ items and circulating biomarkers showed robust loadings, and the latent neuroinjury factor was elevated in breachers relative to controls (97% posterior probability). The pattern is broadly consistent with subclinical neurobiological stress in the absence of measurable cognitive impairment, suggesting early or compensated physiological alterations rather than overt dysfunction. This multidomain, biomarker-informed framework provides a mechanistically grounded and scalable approach for identifying subtle neurobiological strain in military personnel routinely exposed to repetitive low-level blast. It may offer value for risk stratification, operational health surveillance, and the longitudinal monitoring of neurobiological change in high-risk occupations. Full article
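A statement like "elevated in breachers relative to controls (97% posterior probability)" is a posterior probability of a group difference. The sketch below shows the general idea with a deliberately simple stand-in model (flat prior, known noise scale, Monte Carlo posterior draws); the data, prior, and normal model are hypothetical and are not the study's latent-variable model:

```python
# Hypothetical posterior-probability comparison of two group means.
import random
import statistics

def posterior_mean_samples(data, sigma, n_draws, rng):
    """Posterior draws of the mean under a flat prior and known sigma:
    mu | data ~ Normal(mean(data), sigma / sqrt(n))."""
    xbar = statistics.fmean(data)
    se = sigma / len(data) ** 0.5
    return [rng.gauss(xbar, se) for _ in range(n_draws)]

rng = random.Random(7)
breachers = [1.2, 0.9, 1.5, 1.1, 1.3, 0.8]   # made-up latent-factor scores
controls = [0.4, 0.6, 0.2, 0.7, 0.5, 0.3]
mu_b = posterior_mean_samples(breachers, sigma=0.5, n_draws=10_000, rng=rng)
mu_c = posterior_mean_samples(controls, sigma=0.5, n_draws=10_000, rng=rng)
# Posterior probability that the breacher mean exceeds the control mean.
p_elevated = sum(b > c for b, c in zip(mu_b, mu_c)) / len(mu_b)
print(f"P(breachers > controls | data) = {p_elevated:.3f}")
```

The reported 97% figure plays the same role: the fraction of posterior mass on "the latent factor is higher in the exposed group", rather than a frequentist p-value.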
(This article belongs to the Section Molecular Neurobiology)

43 pages, 4289 KB  
Article
A Stochastic Model Approach for Modeling SAG Mill Production and Power Through Bayesian Networks: A Case Study of the Chilean Copper Mining Industry
by Manuel Saldana, Edelmira Gálvez, Mauricio Sales-Cruz, Eleazar Salinas-Rodríguez, Jonathan Castillo, Alessandro Navarra, Norman Toro, Dayana Arias and Luis A. Cisternas
Minerals 2026, 16(1), 60; https://doi.org/10.3390/min16010060 - 6 Jan 2026
Abstract
Semi-autogenous (SAG) milling represents one of the most energy-intensive and variable stages of copper mineral processing. Traditional deterministic models often fail to capture the nonlinear dependencies and uncertainty inherent in industrial operating variables such as feed granulometry, solids percentage in the feed, or ore hardness. This work develops and validates a stochastic model based on discrete Bayesian networks (BNs) to represent the causal relationships governing SAG Production and SAG Power under uncertainty or partial knowledge of explanatory variables. Discretization is adopted for methodological reasons as well as for operational relevance, since SAG plant decisions are typically made using threshold-based categories. Using operational data from a Chilean mining operation, the fitted model integrates expert-guided structure learning (Hill-Climbing with BDeu/BIC scores) and Bayesian parameter estimation with Dirichlet priors. Although validation indicators show high predictive performance (R2 ≈ 0.85–0.90, RMSE < 0.5 bin, and micro-AUC ≈ 0.98), the primary purpose of the BN is not exact regression but explainable causal inference and probabilistic scenario evaluation. Sensitivity analysis identified water feed and solids percentage as key drivers of throughput (SAG Production), while rotational speed and pressure governed SAG Power behavior. The BN framework effectively balances accuracy and interpretability, offering an explainable probabilistic representation of SAG dynamics. These results demonstrate the potential of stochastic modeling to enhance process control and support uncertainty-aware decision making. Full article
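The core query a discrete BN answers is "given observed (or partially observed) parents, what is the probability of each production bin?". A toy version with two binary parents and exact inference by enumeration illustrates this; the structure and all conditional-probability-table values below are invented for illustration, not taken from the fitted plant model:

```python
# Toy discrete Bayesian network: water feed and solids percentage drive a
# binary "SAG Production" node. All probabilities are made up.
P_water = {"high": 0.5, "low": 0.5}
P_solids = {"high": 0.4, "low": 0.6}
# CPT: P(Production = "high" | water, solids)
P_prod_high = {
    ("high", "low"): 0.85,
    ("high", "high"): 0.60,
    ("low", "low"): 0.40,
    ("low", "high"): 0.15,
}

def p_production_high(evidence=None):
    """P(Production=high | evidence), summing out any unobserved parent."""
    evidence = evidence or {}
    num = den = 0.0
    for w, pw in P_water.items():
        if evidence.get("water", w) != w:
            continue
        for s, ps in P_solids.items():
            if evidence.get("solids", s) != s:
                continue
            joint = pw * ps
            num += joint * P_prod_high[(w, s)]
            den += joint
    return num / den

print(round(p_production_high({"water": "high"}), 3))  # solids unobserved
print(round(p_production_high(), 3))                   # fully marginal
```

The same enumeration pattern (restrict to evidence, sum out the rest, renormalize) is what learned-structure BN libraries perform at scale, which is why such models remain interpretable: every prediction decomposes into explicit conditional probabilities.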
