Search Results (16,956)

Search Parameters:
Keywords = uncertainty modelling

27 pages, 4928 KB  
Article
An Enhanced MADDPG–A2C Framework for Optimized Resource Allocation in High-Speed Vehicular Networks
by Linna Hu, Weixian Zha, Penghao Xue, Shuhao Xie, Bin Guo and Wei Wang
Electronics 2026, 15(6), 1214; https://doi.org/10.3390/electronics15061214 - 13 Mar 2026
Abstract
To address the degradation in communication performance caused by the high mobility and dynamic uncertainty in vehicular network channels, this paper proposes a hybrid resource allocation framework that integrates the advantage actor–critic (A2C) algorithm with the multi-agent deep deterministic policy gradient (MADDPG) algorithm. By modeling the high-speed vehicular network environment, the resource allocation task is formulated as a multi-agent deep reinforcement learning (MADRL) problem within a continuous action space. The proposed framework leverages the advantage function to refine gradient estimation, thereby improving training stability and convergence behavior. Additionally, regularization penalty terms and constraint mechanisms are incorporated into the learning process to balance multiple communication objectives. Specifically, the method aims to maximize the throughput of vehicle-to-infrastructure (V2I) links while ensuring the transmission reliability of vehicle-to-vehicle (V2V) links. In simulation experiments, the proposed method performs better in terms of convergence. Compared with the conventional MADDPG algorithm, the average access success probability is improved by 1.6%, and the average V2I throughput increases by 3.5%, indicating a significant enhancement in overall vehicular communication efficiency and transmission performance. Full article
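A minimal sketch of the advantage-based actor–critic update that the hybrid framework builds on: the advantage (TD target minus the critic's baseline) replaces the raw return in the policy gradient, which is what the abstract credits with more stable training. Network sizes, the toy transition, the reward value, and all hyperparameters below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

state_dim, action_dim, gamma = 8, 4, 0.99
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))

state = torch.randn(1, state_dim)        # current channel/resource observation (toy)
next_state = torch.randn(1, state_dim)   # observation after the allocation step (toy)
reward = torch.tensor([[0.7]])           # e.g. weighted V2I throughput / V2V reliability (toy)

logits = actor(state)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()

# Advantage = TD target - critic baseline; using it instead of the raw return
# lowers the variance of the policy-gradient estimate.
with torch.no_grad():
    td_target = reward + gamma * critic(next_state)
advantage = td_target - critic(state)

actor_loss = -(dist.log_prob(action) * advantage.detach()).mean()
critic_loss = advantage.pow(2).mean()
(actor_loss + critic_loss).backward()
```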
19 pages, 2440 KB  
Article
Stochastic Air Quality Modelling of Ship Emissions in Port Areas for Maritime Decarbonization Pathways
by Ramazan Şener and Yordan Garbatov
J. Mar. Sci. Eng. 2026, 14(6), 542; https://doi.org/10.3390/jmse14060542 - 13 Mar 2026
Abstract
Decarbonizing the maritime sector requires not only adopting alternative fuels and propulsion technologies but also quantitatively assessing their impacts on coastal and urban air quality. This study develops a stochastic, time-resolved air-quality modelling framework to evaluate ship-related pollutant dispersion in port environments. The approach integrates Automatic Identification System (AIS) trajectories, vessel-specific emission factors, and meteorological inputs within a moving-source Gaussian dispersion model to simulate the spatio-temporal evolution of pollutant concentrations. A 24 h case study for the Ports of Los Angeles and Long Beach demonstrates highly intermittent emission behaviour, with peak aggregated emission rates reaching approximately 1.2 kg/s for CO2 and 3.8 g/s for SO2. Temporally integrated concentration fields reveal maximum cumulative dosages of 0.145 g·s/m3 for NOx, 0.023 g·s/m3 for SO2, 0.014 g·s/m3 for total PM, and 7.5 g·s/m3 for CO2 in near-port traffic corridors. Sensitivity analysis indicates that effective emission height variations alter cumulative exposure by up to 17%, whereas temporal resolution changes produce deviations below 7%, confirming numerical stability. Monte Carlo uncertainty propagation demonstrates bounded but non-negligible variability in exposure estimates under realistic emission and wind uncertainties. Results show that cumulative exposure patterns differ substantially from short-term concentration peaks, highlighting the importance of time-integrated and receptor-based metrics for port air quality assessment. The proposed AIS-driven stochastic framework provides a reproducible and computationally efficient tool for evaluating operational mitigation strategies and supporting evidence-based maritime decarbonization pathways. Full article
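A minimal sketch of the single-source Gaussian plume term that a moving-source dispersion model of this kind sums over ship positions and time steps. The dispersion-coefficient fit, wind speed, stack height, and receptor location are illustrative assumptions; only the 3.8 g/s SO2 peak rate is taken from the abstract.

```python
import numpy as np

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration (g/m^3) at receptor (x, y, z)."""
    sigma_y = a * x * (1 + 0.0001 * x) ** -0.5   # simple stability-class fit (assumed)
    sigma_z = b * x * (1 + 0.0015 * x) ** -0.5
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # image term for ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# One ship at one time step: 3.8 g/s of SO2, 5 m/s wind, receptor 500 m downwind (assumed geometry).
c = plume_concentration(Q=3.8, u=5.0, x=500.0, y=50.0, z=2.0, H=30.0)
print(f"instantaneous SO2 concentration: {c:.2e} g/m^3")
```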
20 pages, 2053 KB  
Article
The Supply–Demand Dynamics of Lithium Resources and Sustainable Pathways for Vehicle Electrification in China
by Li Song, Weijing Wang, Hui Hua, Songyan Jiang and Xuewei Liu
Sustainability 2026, 18(6), 2854; https://doi.org/10.3390/su18062854 - 13 Mar 2026
Abstract
Lithium is a critical mineral for traction batteries and a cornerstone of the sustainable transition toward low-carbon transportation. Understanding the supply–demand dynamics and resource-saving potential of lithium is essential for advancing circular economy goals and ensuring the long-term stability of the electric vehicle (EV) industry. This study develops an integrated lithium forecast framework by coupling a System Dynamics (SD) model with dynamic Material Flow Analysis (MFA) and multi-scenario pathways. To ensure robust conclusions, the model is validated against historical data, and a multi-level sensitivity analysis is conducted to address the inherent uncertainties of evolving socio-technical assumptions over a ten-year horizon. The simulation results reveal that under the baseline scenario, China’s EV stocks and annual lithium demand will grow by 8.3 and 4.7 times from 2024 to 2035, respectively. This rapid expansion poses a significant sustainability challenge, as cumulative demand will deplete 50–71% of China’s domestic lithium reserves by 2035. Despite a projected supply–demand gap of 110–120 kt/yr, the study identifies critical pathways for resource decoupling and circularity. Technology-driven interventions, such as enhancing energy density and extending battery lifespan, can reduce primary lithium demand by up to 18.9%. Furthermore, optimizing the closed-loop recycling system can contract the supply–demand gap by 31–39%, demonstrating the pivotal role of secondary resource recovery in building a resilient supply chain. Despite this reduction, a persistent reliance on international markets remains inevitable. These findings provide a quantified scientific foundation for policymakers, emphasizing that lithium security requires a synergistic transition from volume-based subsidies to resource efficiency mandates and standardized, formal closed-loop recycling systems. Full article
(This article belongs to the Section Resources and Sustainable Utilization)
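A rough stock-flow sketch of the coupled system-dynamics / dynamic-MFA loop described above: an EV stock driven by sales and retirements, with primary lithium demand reduced by recycled (secondary) supply from retired batteries. Every parameter value below is an illustrative assumption, not one of the study's calibrated inputs.

```python
years = range(2024, 2036)
ev_stock = 20.0              # million vehicles (assumed starting stock)
sales = 9.0                  # million vehicles/yr of new EV sales (assumed)
sales_growth = 0.15          # assumed annual growth of new EV sales
lifetime = 10                # years; fixed battery lifetime (simplification)
li_per_ev = 8.0              # kg lithium per vehicle (assumed)
recycling_rate = 0.3         # share of retired lithium recovered (assumed)

retirement_queue = [0.0] * lifetime   # vehicles retiring in each future year
for year in years:
    retirements = retirement_queue.pop(0)
    retirement_queue.append(sales)    # this year's sales retire after `lifetime` years
    ev_stock += sales - retirements
    # million vehicles * kg/vehicle = million kg = kt of lithium
    primary_demand_kt = (sales - retirements * recycling_rate) * li_per_ev
    print(f"{year}: stock={ev_stock:6.1f} M EVs, primary Li demand={primary_demand_kt:6.1f} kt")
    sales *= 1 + sales_growth
```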
38 pages, 1285 KB  
Review
From Static Welfare Optimization to Dynamic Efficiency in Energy Policy: A Governance Framework for Complex and Uncertain Energy Systems
by Martin García-Vaquero, Antonio Sánchez-Bayón and Frank Daumann
Energies 2026, 19(6), 1460; https://doi.org/10.3390/en19061460 - 13 Mar 2026
Abstract
The energy transition represents a complex, multi-level system subject to profound uncertainty and recurrent shocks. Current policy design approaches predominantly rely on static optimization frameworks (centralized, calculative models that presume stable conditions and predictable technological trajectories). Yet evidence from the 2021–2023 energy crisis in Europe, coupled with structural challenges in market liberalization and renewable integration, demonstrates persistent challenges in policy implementation. Price interventions affect competitive dynamics; subsidies influence technology selection; capacity mechanisms create coordination tensions; and rigid tariff structures create misalignments with evolving grid needs. This paper argues that these recurrent policy tensions stem not from implementation gaps, but from an inadequate theoretical foundation: the treatment of energy systems as optimizable rather than as complex, adaptive systems operating under Knight–Mises uncertainty and Huerta de Soto dynamic efficiency. This work explores an alternative framework grounded in dynamic efficiency, complex–uncertain systems, decentralized incentives, and adaptive governance (international–domestic, public–private, etc.). This review uses the theoretical and methodological framework of the Heterodox Synthesis, an alternative to the Neoclassical Synthesis. There is a reinterpretation of some insights from Knight and Mises (uncertainty), Hayek (distributed knowledge), Huerta de Soto (dynamic efficiency) and contemporary complexity economics into operational criteria applicable to energy policy design: (1) robustness to deep uncertainty; (2) preservation of price signals and risk-bearing mechanisms; (3) alignment of incentives across distributed actors; (4) institutional adaptability; and (5) minimization of ex post policy corrections. Through illustrative application to four critical policy instruments (price caps, renewable subsidies, capacity mechanisms, and network tariff design), it is shown how this framework identifies systematic tensions and consequences that conventional analysis overlooks. The contribution is exploratory in a bootstrap way: theoretical, by integrating classical and contemporary economics into energy governance; methodological, by operationalizing dynamic efficiency into evaluable criteria distinct from existing adaptive governance frameworks; and sectorial, by providing policymakers and regulators with diagnostic tools for assessing design robustness in conditions of deep uncertainty and rapid transition. According to this review, improved energy policy design under uncertainty is not achieved through more sophisticated optimization (in a calculative way), but through institutional architectures that preserve creative and adaptive learning, maintain distributed decision-making capacity, and remain functional when assumptions prove incorrect or not well-known. Full article
32 pages, 24332 KB  
Article
Reciprocal Neural State–Disturbance Observer for Model-Free Trajectory Tracking of Robotic Manipulators
by Binluan Wang, Yuchen Peng, Hongzhe Jin and Jie Zhao
Mathematics 2026, 14(6), 983; https://doi.org/10.3390/math14060983 - 13 Mar 2026
Abstract
High-precision trajectory tracking of robotic manipulators is fundamentally challenged by strong nonlinear dynamics, unmodeled uncertainties, and external disturbances. This paper proposes a Reciprocal Neural State–Disturbance Observer (RNSDO) featuring a neural activation mechanism for adaptive gain modulation and a reciprocally coupled state–disturbance estimation architecture. By reshaping the observer error dynamics through mutual feedback between state and disturbance estimation, the proposed structure alleviates the conflict between fast transient disturbance reconstruction and steady-state noise suppression, while requiring only position measurements. A decentralized position controller is designed based on RNSDO. The global asymptotic stability of the resulting closed-loop system is rigorously established via Lyapunov analysis. Extensive simulations on a PUMA 560 and experiments on a 7-DOF Franka FR3 robotic manipulator demonstrate highly consistent performance trends. The proposed method achieves improved state and disturbance estimation accuracy and enhanced robustness against unmodeled dynamics and payload variations compared with a linear Improved Extended State Observer (IESO), a classical Nonlinear Extended State Observer (NLESO), and a model-based Nonlinear Disturbance Observer-based Adaptive Robust Controller (NDO-ARC). Furthermore, the algorithm exhibits excellent real-time feasibility with a minimal computational footprint. Full article
(This article belongs to the Special Issue Mathematical Methods for Intelligent Robotic Control and Design)
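For context on the baselines the RNSDO is compared against, a minimal sketch of a conventional linear extended state observer estimating position, velocity, and a lumped disturbance for one joint from position measurements only. This is not the proposed RNSDO; the toy plant, observer bandwidth, and disturbance are assumptions for illustration.

```python
import numpy as np

dt, omega_o = 0.001, 50.0                               # step size and observer bandwidth (assumed)
l1, l2, l3 = 3 * omega_o, 3 * omega_o**2, omega_o**3    # bandwidth-parameterized observer gains

z = np.zeros(3)                 # estimates: [position, velocity, lumped disturbance]
b0 = 1.0                        # nominal input gain (assumed)
x = np.array([0.0, 0.0])        # true joint state of a toy double integrator

for k in range(5000):
    t = k * dt
    u = np.sin(t)               # arbitrary test input
    d = 0.5 * np.sin(3 * t)     # unknown lumped disturbance (toy)
    x = x + dt * np.array([x[1], b0 * u + d])   # true plant: q_ddot = b0*u + d
    y = x[0]                                    # only position is measured
    e = y - z[0]
    z = z + dt * np.array([z[1] + l1 * e,
                           z[2] + b0 * u + l2 * e,
                           l3 * e])
print(f"disturbance estimate {z[2]:+.3f} vs true {d:+.3f}")
```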
22 pages, 799 KB  
Article
Adaptive Robust Control-Based Ride Comfort Enhancement for Nonlinear Suspension–Seat–Driver Systems
by Omur Can Ozguney
Electronics 2026, 15(6), 1213; https://doi.org/10.3390/electronics15061213 - 13 Mar 2026
Abstract
Ride comfort is a critical issue in vehicle dynamics, as excessive vibrations adversely affect passenger comfort and human health. This paper presents a comparative performance analysis of a passive suspension system, fuzzy logic control (FLC), and a newly designed adaptive robust control (ARC) strategy applied to a nonlinear quarter-car suspension–seat–driver model. The primary objective is to improve ride comfort while maintaining vibration levels within accepted health criteria. First, the nonlinear dynamic model of the suspension–seat–driver system is established. The FLC structure and rule base are determined based on heuristic knowledge. Passive and FLC-based systems, while effective to some extent, suffer from limited adaptability to external disturbances and modeling uncertainties, slower convergence, and suboptimal vibration attenuation. The main contribution of this study is the design and implementation of a novel adaptive robust controller that effectively handles modeling uncertainties, external disturbances, and parameter variations. Different controller placement approaches within the system are also investigated. Numerical simulations are conducted under identical operating conditions for the uncontrolled system and all control strategies. The results demonstrate that although the FLC improves ride comfort compared to the passive system, the proposed ARC achieves the best overall performance, providing superior vibration attenuation, faster convergence, and enhanced robustness for nonlinear vehicle suspension systems. Quantitatively, the ARC reduces head acceleration RMS from 0.1693 m/s2 (passive) and 0.1422 m/s2 (FLC) to 0.0705 m/s2, and upper torso RMS from 0.1689 m/s2 (passive) and 0.1417 m/s2 (FLC) to 0.0703 m/s2, corresponding to approximately 58% reduction relative to passive and 50% improvement over FLC. Full article
(This article belongs to the Section Systems & Control Engineering)
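A quick arithmetic check of the quoted head-acceleration improvements, using the RMS values reported in the abstract.

```python
# RMS head acceleration in m/s^2, taken from the abstract above.
rms = {"passive": 0.1693, "flc": 0.1422, "arc": 0.0705}
vs_passive = 1 - rms["arc"] / rms["passive"]
vs_flc = 1 - rms["arc"] / rms["flc"]
print(f"ARC vs passive: {vs_passive:.1%}, ARC vs FLC: {vs_flc:.1%}")
# -> roughly 58% and 50%, matching the stated improvements.
```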
33 pages, 4985 KB  
Article
Inference for Upper Record Ranked Set Sampling from Kies Model with k-Cycle Effect
by Zirui Chu, Min Wu, Liang Wang and Yuhlong Lio
Mathematics 2026, 14(6), 979; https://doi.org/10.3390/math14060979 - 13 Mar 2026
Abstract
This study investigates statistical inference for upper record ranked set sampling (URRSS) data from the Kies distribution. In multiple-cycle URRSS settings where the heterogeneity across cycles is non-ignorable, both classical and Bayesian approaches are adopted to estimate the unknown model parameters and associated reliability metrics. Likelihood-based point and interval estimates are derived for these parameters and reliability indices, and the existence and uniqueness of the maximum likelihood estimators for the Kies distribution parameters are rigorously established. Moreover, a hierarchical Bayesian framework is developed to accommodate cycle-specific variability, with a Metropolis–Hastings algorithm embedded within a Gibbs sampler proposed to facilitate posterior computation in complex scenarios. The performance of the suggested methods is assessed through extensive simulation studies, supplemented by two real-world data applications that demonstrate their practical utility. Numerical results show that the proposed estimators perform well overall, with the hierarchical Bayesian approach showing a particular advantage when uncertainty about the cycle effect is present. Full article
(This article belongs to the Section D1: Probability and Statistics)
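A skeleton of a Metropolis–Hastings step embedded in a Gibbs sweep, the general pattern the hierarchical Bayesian approach above uses for parameters whose full conditionals lack closed form. The log-conditional below is a placeholder stand-in, not the Kies/URRSS posterior from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_conditional(theta, other_params, data):
    # Placeholder log full-conditional density (assumed for illustration only).
    return -0.5 * (theta - other_params["mu"])**2 - 0.1 * np.sum((data - theta)**2)

def mh_step(theta, other_params, data, step=0.2):
    proposal = theta + step * rng.standard_normal()          # random-walk proposal
    log_ratio = (log_conditional(proposal, other_params, data)
                 - log_conditional(theta, other_params, data))
    return proposal if np.log(rng.uniform()) < log_ratio else theta

theta, other = 0.5, {"mu": 1.0}
data = rng.normal(1.0, 0.3, size=20)
for sweep in range(1000):   # a full Gibbs sweep would also update the remaining blocks
    theta = mh_step(theta, other, data)
print(f"last draw of theta: {theta:.3f}")
```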
28 pages, 4028 KB  
Article
Reliability-Aware Neural Decoding with Adaptive Multi-Source Information Fusion
by Pengxi Fu, Zhen Wang, Jianxin Guo, Yushuai Zhang, Feng Wang, Rui Zhu and Zhentao Huang
Entropy 2026, 28(3), 323; https://doi.org/10.3390/e28030323 - 13 Mar 2026
Abstract
Modern communication systems increasingly leverage multiple information streams—including channel observations, statistical models, and contextual knowledge—to enhance decoding reliability. However, the varying and often unpredictable quality of these sources poses a critical challenge: rigid combination rules fail when source reliability fluctuates, while manual tuning cannot adapt to dynamic operating conditions. This paper presents a neural decoder architecture that automatically learns to assess and fuse heterogeneous information sources based on their instantaneous reliability. Central to our design is a learnable gating module that dynamically weights information streams, demonstrating emergent Bayesian-like behavior—increasing reliance on statistical models under high uncertainty while transitioning to observation-dominated processing as signal confidence improves. To combat the progressive dilution of auxiliary information in deep architectures, we propose a continuous injection strategy that refreshes auxiliary features at each processing layer through dedicated encoding pathways. The underlying message-passing network adopts a heterogeneous bipartite structure with direction-dependent edge parameterization, respecting the asymmetric computational roles inherent in iterative decoding algorithms. Comprehensive experiments validate that the proposed approach not only improves nominal performance but critically maintains robustness when auxiliary information quality degrades or becomes mismatched with actual conditions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
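A minimal sketch of a learnable gating module that weights two information streams (e.g. channel observations versus a statistical prior) per sample, the basic mechanism the abstract describes. Feature dimensions and the gate architecture are illustrative assumptions, not the paper's decoder.

```python
import torch
import torch.nn as nn

class ReliabilityGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, observation, prior):
        # alpha -> 1: trust the observation; alpha -> 0: fall back on the prior.
        alpha = self.gate(torch.cat([observation, prior], dim=-1))
        return alpha * observation + (1 - alpha) * prior

fuse = ReliabilityGate(dim=16)
obs = torch.randn(4, 16)      # per-bit channel features (toy)
prior = torch.randn(4, 16)    # statistical-model / contextual features (toy)
fused = fuse(obs, prior)
print(fused.shape)            # torch.Size([4, 16])
```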
24 pages, 2590 KB  
Article
From Earthbound to Stars: Analyzing Humanity’s Path to a Type II Civilization
by Jonathan H. Jiang and Prithwis Das
Galaxies 2026, 14(2), 23; https://doi.org/10.3390/galaxies14020023 - 13 Mar 2026
Abstract
This study presents a quantitative, scenario-based framework for analyzing humanity’s potential progression along the Kardashev scale, with emphasis on the transition to Type I (planetary-scale) and Type II (stellar-scale) civilization status. Using humanity as an empirical reference case, we integrate four coupled dimensions of civilizational development: energy utilization, information processing capacity, large-scale construction mass, and population dynamics, modeled through historical data, empirical trends, and physically motivated growth constraints. Energy availability is characterized using global energy production records and insolation statistics for potentially habitable exoplanets, explicitly acknowledging observational biases toward cooler host stars. Information processing growth is constrained by thermodynamic limits and observed trends in global data generation, while construction mass and population evolution are described using exponential and logistic growth models, respectively. These components are combined into a composite Civilization Development Index (CDI), a weighted logarithmic metric designed to track multi-scale civilizational advancement and tested through sensitivity analyses. Under optimistic assumptions of uninterrupted technological growth and absence of civilization-scale catastrophes, the framework suggests that humanity could reach Type I civilization status on the order of the 23rd century, while Type II status represents a substantially longer-term outcome extending into the third millennium or beyond. These timescales should be interpreted as lower bounds, as catastrophic events, sociopolitical constraints, or resource bottlenecks could significantly delay or prevent such transitions. By explicitly delineating assumptions, uncertainties, and physical constraints, this work provides a structured baseline for studies of long-term civilizational trajectories and the factors governing the emergence or absence of advanced technological civilizations. Full article
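A back-of-the-envelope sketch of the energy dimension feeding such a composite index, using Sagan's interpolation K = (log10 P − 6) / 10 and the time to reach Type I (10^16 W) under constant exponential growth. The current power level and growth rate are rough assumptions, not the paper's calibrated inputs, so the resulting date only loosely echoes the abstract's multi-century estimate.

```python
import math

def kardashev(power_watts):
    # Sagan's interpolation of the Kardashev scale.
    return (math.log10(power_watts) - 6) / 10

p_now = 2.0e13       # ~20 TW global primary power (rough assumption)
growth = 0.025       # assumed annual growth of energy use
p_type1 = 1.0e16     # Type I threshold in Sagan's interpolation

years_to_type1 = math.log(p_type1 / p_now) / math.log(1 + growth)
print(f"current K ≈ {kardashev(p_now):.2f}")
print(f"years to Type I at {growth:.1%}/yr ≈ {years_to_type1:.0f}")
```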
10 pages, 890 KB  
Proceeding Paper
Extreme Rainfall Analysis and Return Period Estimation Based on Extreme Value Theory
by Jieling Wu
Eng. Proc. 2026, 128(1), 31; https://doi.org/10.3390/engproc2026128031 - 13 Mar 2026
Abstract
Climate change has resulted in frequent extreme weather events such as heavy rainfall and heat waves in Japan, making accurate forecasting and countermeasures an urgent issue. Therefore, it is urgently required to analyze the statistical characteristics of extreme rainfall events using the extreme value theory (EVT). The generalized extreme value (GEV) distribution, a core model for EVT, was applied in this study to rainfall data collected in Kakunodate, Akita Prefecture, Japan, spanning May 1976 to December 2023. The analysis results confirm the presence of extreme rainfall events. Through model fitting, the GEV parameters representing location, scale, and shape were accurately estimated. The model demonstrated a good fit, particularly for moderate-intensity rainfall. However, notable uncertainties emerged in the prediction of the most extreme events. Return period analysis results indicated that extreme rainfall events occur at intervals ranging from 2 to 100 years, suggesting the necessity of incorporating safety margins into long-term forecasting frameworks. Considering the increasing frequency of such events, cross-validation with alternative statistical methods and the potential adoption of non-smooth GEV models are recommended to enhance predictive reliability. Overall, the results of this study highlight the need for adaptive and flexible revisions to infrastructure design criteria in response to evolving patterns of extreme weather. Full article
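A minimal sketch of a GEV fit and return-level estimate with SciPy, the kind of analysis described above. The synthetic 48-value annual-maximum series only stands in for the Kakunodate record (which spans roughly 48 years); the fitted parameters and return levels are illustrative, not the study's results.

```python
from scipy.stats import genextreme

# Synthetic annual daily-rainfall maxima (mm); shape/location/scale are assumed.
annual_max_mm = genextreme.rvs(c=-0.1, loc=80, scale=20, size=48, random_state=1)

# Note: SciPy's shape parameter c equals minus the conventional GEV shape xi.
c_hat, loc_hat, scale_hat = genextreme.fit(annual_max_mm)

for T in (10, 50, 100):
    # T-year return level = quantile exceeded on average once every T years.
    level = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)
    print(f"{T:>3}-year return level ≈ {level:.1f} mm/day")
```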
31 pages, 2057 KB  
Review
Clinical AI in Radiology: Foundations, Trends, Applications, and Emerging Directions
by Iryna Hartsock, Nikolas Koutsoubis, Sabeen Ahmed, Nathan Parker, Matthew B. Schabath, Cyrillo Araujo, Aliya Qayyum, Cesar Lam, Robert A. Gatenby and Ghulam Rasool
Cancers 2026, 18(6), 942; https://doi.org/10.3390/cancers18060942 - 13 Mar 2026
Abstract
Artificial intelligence (AI) is at the vanguard of transforming radiology in several ways, including augmenting diagnoses, improving workflows, and increasing operational efficiency. Several integration challenges, including concerns over privacy, clinical usability, and workflow compatibility, still remain. This review discusses the foundations and current trends of clinical AI in radiology to provide essential context for ongoing developments. To illustrate translational potential, we describe representative applications, including: (1) local deployment of large language models (LLMs) for restructuring and streamlining radiology reports, improving clarity and consistency without relying on external resources; (2) multimodal AI frameworks combining CT images, clinical data, laboratory biomarkers, and LLM-extracted features from clinical notes for early detection of cachexia in pancreatic cancer; (3) privacy-preserving federated learning (FL) infrastructure enabling collaborative AI model development across institutions without sharing raw patient data; and (4) an uncertainty-aware de-identification pipeline for removing Protected Health Information (PHI) from radiology images and clinical reports to support secure data analysis and sharing. We further discuss emerging opportunities for tumor board decision support, clinical trial matching, radiology report quality assurance, and the development of an imaging complexity index. Collectively, these applications highlight the importance of local deployment, multimodal reasoning, privacy preservation, and human-in-the-loop oversight in translating AI models from research to oncology radiology practice. Full article
(This article belongs to the Special Issue Advances in Medical Imaging for Cancer Detection and Diagnosis)
7 pages, 201 KB  
Data Descriptor
Dataset for a Monte Carlo-Based Techno-Economic Assessment of the Methanol-to-Jet Fuel Production Pathway
by Enzo Komatz, Severin Sendlhofer and Christoph Markowitsch
Data 2026, 11(3), 56; https://doi.org/10.3390/data11030056 - 13 Mar 2026
Abstract
This article presents a dataset generated for a techno-economic assessment (TEA) of the methanol-to-jet (MtJ) fuel production pathway. The dataset was produced using a large-scale Monte Carlo (MC) sampling approach applied to a steady-state process model implemented in Aspen Plus V14. The techno-economic evaluation was conducted using an external cost model, with subsequent data processing performed in Python (Version 3.11). In total, three million individual data points were generated by varying key technical and economic input parameters within predefined ranges and are under public access. For each MC sample, the net production cost on a mass basis (NPC_m, in EUR per kg of jet fuel) of synthetic jet fuel was calculated as the primary economic performance indicator. The dataset comprises both the sampled input parameters and the corresponding techno-economic output variables and is intended to support transparency, reproducibility, and further uncertainty analysis of MtJ fuel production pathways. Full article
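A minimal sketch of the Monte Carlo sampling loop behind such a dataset: draw technical and economic inputs from assumed ranges, evaluate a cost model, and record input/output rows. The cost expression, parameter ranges, and sample count below are placeholders, not the Aspen Plus model or the published ranges (the actual dataset holds three million points).

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

methanol_price = rng.uniform(300, 700, n_samples)     # EUR/t (assumed range)
electricity_price = rng.uniform(40, 120, n_samples)   # EUR/MWh (assumed range)
plant_capex = rng.uniform(0.8, 1.4, n_samples)        # relative CAPEX factor (assumed)

# Placeholder cost model: net production cost per kg of jet fuel (NPC_m).
npc_m = (2.1 * methanol_price / 1000          # feedstock term
         + 0.35 * electricity_price / 1000    # utilities term
         + 0.6 * plant_capex)                 # annualized capital term

rows = np.column_stack([methanol_price, electricity_price, plant_capex, npc_m])
print(f"mean NPC_m ≈ {npc_m.mean():.2f} EUR/kg, P10–P90 = "
      f"{np.percentile(npc_m, 10):.2f}–{np.percentile(npc_m, 90):.2f} EUR/kg")
```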
19 pages, 882 KB  
Review
Artificial Intelligence and the Transformation of Cell and Gene Therapy Development
by Jared R. Auclair, Jeewon Joung, Maya A. Singh, Gaël Debauve and Rominder Singh
Pharmaceutics 2026, 18(3), 356; https://doi.org/10.3390/pharmaceutics18030356 - 13 Mar 2026
Abstract
Cell and Gene Therapy (CGT) represents a paradigm shift in medicine, offering curative potential for previously intractable diseases. However, the complexity, high cost, and manufacturing challenges inherent in developing, producing, and administering these therapies hinder their widespread accessibility. This review examines the critical and increasingly synergistic role of Artificial Intelligence (AI) and Machine Learning (ML) in overcoming these barriers across the entire CGT lifecycle, from discovery and construct design to smart manufacturing, clinical translation, and regulatory applications. We analyze how AI-driven approaches fundamentally differ from conventional methods, facilitating rapid construct optimization, generating highly predictive translational models, enabling the vision of autonomous, digital-twin-driven manufacturing, and establishing new paradigms for pharmacovigilance and regulatory oversight. The integration of AI is not merely an incremental improvement but a foundational transformation, positioning CGT to move from niche, bespoke treatments to scalable, accessible, and highly personalized medical modalities. We conclude by discussing current gaps, particularly data scarcity and regulatory uncertainty, and outlining a roadmap to realize the full potential of AI-enabled CGT. Full article
(This article belongs to the Section Gene and Cell Therapy)
36 pages, 1570 KB  
Review
Environmental Assessment Strategies for Biodegradable Polymer Composites: A Review of Life Cycle Perspectives on Agro-Waste Reinforced Materials
by Kastytis Pamakštys, Anastasiia Sholokhova, Inga Gurauskienė and Visvaldas Varžinskas
Polymers 2026, 18(6), 700; https://doi.org/10.3390/polym18060700 - 13 Mar 2026
Abstract
The growing interest in bio-based and biodegradable polymer composites reinforced with agricultural waste reflects global efforts to reduce dependence on fossil resources and improve the sustainability of materials. However, biocomposites are not necessarily more sustainable, and their environmental performance requires careful life cycle assessment (LCA). This review critically analyses recent LCA studies of biodegradable biocomposites reinforced with agricultural waste, focusing on methodological choices, data quality, results and limitations. A systematic literature review was conducted using the Scopus database, focusing on studies from the last five years. Selected studies were examined using a structure consistent with ISO 14040, with defined data extraction categories and key questions. The analysis shows that although biocomposites often demonstrate advantages in terms of climate change and fossil resource depletion compared to traditional materials, the results vary significantly depending on the definition of the functional unit, geographical context, processing pathways, and data assumptions. Limitations include reliance on laboratory data, uncertainties, incomplete system boundaries, inconsistent allocation methods, and limited end-of-life (EoL) modelling. Overall, the review highlights the need for improved data quality, performance-based functional units, geographically representative inventories, and more standardised LCA practices to ensure meaningful comparisons and support the sustainable development of biocomposites. Full article
(This article belongs to the Section Circular and Green Sustainable Polymer Science)
25 pages, 8120 KB  
Article
Cost-Aware Active Learning Framework for Efficient Small-Object Detection in Agricultural Images
by Mirjana Bonković, Ozana Uvodić, Josip Musić and Vladan Papić
Electronics 2026, 15(6), 1196; https://doi.org/10.3390/electronics15061196 - 13 Mar 2026
Abstract
Although active learning can reduce the effort required to annotate object detection data, many current methods rely on a single selection criterion or combine criteria without accounting for annotation costs or their interactions. This paper presents a multi-criterion, cost-aware active learning framework for detecting small objects in agricultural images. The framework jointly considers prediction uncertainty, object size, scene density, and annotation cost. We evaluate both scalarized and Pareto-based selection strategies across five cost models and conduct an ablation study to examine the role and interactions of each criterion. Experimental results demonstrate that explicit annotation cost modeling improves active learning efficiency by reducing the amount of annotation required to achieve a given level of detection performance. Across multiple cost formulations and selection strategies, cost-aware acquisition reaches comparable accuracy and reduces the estimated annotation effort required to reach comparable detection performance by up to 50% compared to random sampling, where annotation effort is approximated using prediction-derived cost proxies. Full article
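A minimal sketch of a scalarized, cost-aware acquisition score of the kind compared in the framework above: combine per-image uncertainty, small-object share, and scene density, divide by an annotation-cost proxy, and select the highest-scoring images. Weights, the cost proxy, and the toy candidate pool are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_candidates, budget = 200, 20

uncertainty = rng.uniform(0, 1, n_candidates)       # mean predictive uncertainty per image
small_obj_ratio = rng.uniform(0, 1, n_candidates)   # share of predicted boxes that are small
density = rng.integers(1, 40, n_candidates)         # predicted object count per image
cost = 5.0 + 2.0 * density                          # seconds-to-annotate proxy (assumed)

value = 0.5 * uncertainty + 0.3 * small_obj_ratio + 0.2 * (density / density.max())
score = value / cost                                # informativeness per unit annotation cost
selected = np.argsort(score)[::-1][:budget]
print(f"selected {budget} images, estimated annotation effort {cost[selected].sum():.0f} s")
```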