
Search Results (9,162)

Search Parameters:
Keywords = critical decisions

28 pages, 2105 KB  
Article
A Comparative Life Cycle Assessment of Conventional and Reusable Packaging Systems Under Alternative Logistic Configurations
by Giovanni Marmora, Carmen Ferrara, Vittorio Roselli and Giovanni De Feo
Recycling 2026, 11(1), 13; https://doi.org/10.3390/recycling11010013 - 10 Jan 2026
Abstract
Packaging plays a crucial role in product preservation and distribution but also constitutes a major source of environmental burden. In the beverage sector, where unit value is low, secondary and tertiary packaging significantly influence the environmental profile of the final product. This study quantifies the environmental trade-offs between conventional single-use and reusable packaging systems for aluminum cans, identifying the operational thresholds that justify a transition to circular models. A standardized Life Cycle Assessment (LCA) approach is applied to five packaging configurations: three current market scenarios and two alternative solutions based on reusable plastic crates (RPCs). System boundaries include production, distribution, end-of-life, and, where applicable, reverse logistics. A functional unit of one fully packaged 0.33 L aluminum can is adopted. Results reveal that while single-use cardboard solutions achieve favorable performance under certain impact categories, reusable systems outperform them when a sufficient number of reuse cycles is achieved and reverse logistics are efficiently managed. Sensitivity analyses highlight the critical influence of transport distances and reuse frequency on overall impacts, with performance deteriorating for reusable systems beyond 200 km or below 50 reuse cycles. These findings offer concrete, evidence-based guidance for supply-chain and logistics decision-makers to optimize packaging choices and distribution network design. The study also provides robust quantitative insights for policymakers and industry stakeholders by defining the precise operational conditions under which reusable systems deliver real environmental benefits. By presenting a comprehensive, system-level comparison of complete packaging systems, this research closes a critical gap in LCA studies and sets out a practical pathway for implementing circular, low-impact packaging strategies consistent with emerging EU regulations.
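
The reuse-threshold reasoning above (production impacts amortized over reuse cycles versus per-cycle reverse-logistics burdens) can be illustrated numerically. This is a minimal sketch with invented impact values, not the study's LCA data:

```python
# Minimal break-even sketch for reusable vs. single-use packaging impacts.
# All numbers are hypothetical placeholders, not values from the study.

def impact_per_functional_unit(production, n_cycles, transport_per_cycle,
                               washing_per_cycle=0.0):
    """Amortized production impact plus per-cycle transport/washing burdens."""
    return production / n_cycles + transport_per_cycle + washing_per_cycle

single_use = impact_per_functional_unit(production=1.0, n_cycles=1,
                                        transport_per_cycle=0.05)
reusable = impact_per_functional_unit(production=8.0, n_cycles=50,
                                      transport_per_cycle=0.12,
                                      washing_per_cycle=0.02)
print(f"single-use: {single_use:.3f} kg CO2-eq per packaged can")
print(f"reusable:   {reusable:.3f} kg CO2-eq per packaged can")
```
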
25 pages, 1514 KB  
Article
Policy Transmission Mechanisms and Effectiveness Evaluation of Territorial Spatial Planning in China
by Luge Wen, Yucheng Sun, Tianjiao Zhang and Tiyan Shen
Land 2026, 15(1), 145; https://doi.org/10.3390/land15010145 - 10 Jan 2026
Abstract
This study is situated at the critical stage of comprehensive implementation of China’s territorial spatial planning system, addressing the strategic need for planning evaluation and optimization. We innovatively construct a Computable General Equilibrium Model for China’s Territorial Spatial Planning (CTSPM-CHN) that integrates dual factors of construction land costs and energy consumption costs. Through designing two policy scenarios of rigid constraints and structural optimization, we systematically simulate and evaluate the dynamic impacts of different territorial spatial governance strategies on macroeconomic indicators, residents’ welfare, and carbon emissions, revealing the multidimensional effects and operational mechanisms of territorial spatial planning policies. The findings demonstrate the following: First, strict implementation of land use scale control from the National Territorial Planning Outline (2016–2030) could reduce the carbon emission growth rate by 12.3% but would decrease the annual GDP growth rate by 0.8%, reflecting the trade-off between environmental benefits and economic growth. Second, industrial land structure optimization generates significant synergistic effects, with simulation results showing that by 2035, total GDP under this scenario would increase by 4.8% compared to the rigid constraint scenario, while carbon emission intensity per unit GDP would decrease by 18.6%, confirming the crucial role of structural optimization in promoting high-quality development. Third, manufacturing land adjustment exhibits policy thresholds: moderate reduction could lower the carbon emission peak by 9.5% without affecting economic stability, but excessive cuts would lead to a 2.3 percentage point decline in industrial added value. Based on systematic multi-scenario analysis, this study proposes optimized pathways for territorial spatial governance: the planning system should transition from scale control to a structural optimization paradigm, establishing a flexible governance mechanism incorporating anticipatory constraint indicators; simultaneously advance efficiency improvement in key sector land allocation and energy structure decarbonization, constructing a coordinated “space–energy” governance framework. These findings provide quantitative decision-making support for improving territorial spatial governance systems and advancing ecological civilization construction.
31 pages, 2375 KB  
Article
From Technical Feasibility to Governance Integration: Developing an Evaluation Matrix for Greywater Reuse in Urban Residential Areas
by Kohlhepp Gloria Maria, Lück Andrea, Müller Gerald and Beier Silvio
Water 2026, 18(2), 190; https://doi.org/10.3390/w18020190 - 10 Jan 2026
Abstract
Greywater reuse presents a promising strategy for reducing potable water demand and supporting the irrigation of urban green infrastructure, yet its implementation in early planning phases remains limited by fragmented regulations, data gaps, and the absence of practical decision support tools. This study develops a comprehensive evaluation matrix based on Multi-Criteria Decision Analysis (MCDA) to assess the feasibility of greywater reuse in residential district development. The framework integrates eight domains, including legal, technical, infrastructural, ecological, economic, and social factors, and is complemented by automated supporting worksheets for water balance, ecological indicators, and economic parameters. Application of the matrix to two contrasting residential case studies demonstrated its diagnostic value: the new-build district in Dortmund showed a high reuse potential, strongly influenced by favourable infrastructure conditions and ecological indicators, whereas the existing building in Weimar yielded a moderate potential due to infrastructural constraints and lower greywater availability. Sensitivity analyses further revealed that local water tariffs, intended-use scenarios, and stakeholder weightings substantially affect outcomes. Overall, the results show that the matrix supports transparent early-stage decision-making, identifies critical bottlenecks, and strengthens governance-oriented integration of greywater reuse in sustainable urban development.
(This article belongs to the Section Water Resources Management, Policy and Governance)
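
As a rough illustration of the MCDA scoring step, the sketch below computes a weighted-sum feasibility score over the factor domains named in the abstract; all scores and stakeholder weights are invented, and the paper's actual matrix and supporting worksheets are more elaborate:

```python
# Weighted-sum MCDA sketch over the named factor domains; scores (1 = poor,
# 5 = favourable) and stakeholder weights are invented for illustration.

domains = ["legal", "technical", "infrastructural", "ecological",
           "economic", "social"]
weights = [0.15, 0.20, 0.20, 0.20, 0.15, 0.10]   # stakeholder weights, sum 1.0
scores = {
    "Dortmund new-build": [4, 4, 5, 5, 3, 4],
    "Weimar existing":    [4, 3, 2, 3, 3, 4],
}

for site, vals in scores.items():
    total = sum(w * v for w, v in zip(weights, vals))
    print(f"{site}: weighted feasibility {total:.2f} / 5")
```
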
17 pages, 1585 KB  
Review
Second-Opinion Systems for Rare Diseases: A Scoping Review of Digital Workflows and Networks
by Vinícius Lima, Mariana Mozini and Domingos Alves
Informatics 2026, 13(1), 6; https://doi.org/10.3390/informatics13010006 - 10 Jan 2026
Abstract
Introduction: Rare diseases disperse expertise across institutions and borders, making structured second-opinion systems a pragmatic way to concentrate subspecialty knowledge and reduce diagnostic delays. This scoping review mapped the design, governance, adoption, and impacts of such services across implementation scales. Objectives: To describe how second-opinion services for rare diseases are organized and governed, to characterize technological and workflow models, to summarize benefits and barriers, and to identify priority evidence gaps for implementation. Methods: Using a population–concept–context approach, we included peer-reviewed studies describing implemented second-opinion systems for rare diseases and excluded isolated case reports, purely conceptual proposals, and work outside this focus. Searches in August 2025 covered PubMed/MEDLINE, Scopus, Web of Science Core Collection, Cochrane Library, IEEE Xplore, ACM Digital Library, and LILACS without date limits and were restricted to English, Portuguese, or Spanish. Two reviewers screened independently, and the data were charted with a standardized, piloted form. No formal critical appraisal was undertaken, and the synthesis was descriptive. Results: Initiatives were clustered by scale (European networks, national programs, regional systems, international collaborations) and favored hybrid models over asynchronous and synchronous ones. Across settings, services shared reproducible workflows and provided faster access to expertise, quicker decision-making, and more frequent clarification of care plans. These improvements were enabled by transparent governance and dedicated support but were constrained by platform complexity, the effort required to assemble panels, uneven incentives, interoperability gaps, and medico-legal uncertainty. Conclusions: Systematized second-opinion services for rare diseases are feasible and clinically relevant. Progress hinges on usability, aligned incentives, and pragmatic interoperability, advancing from registries toward bidirectional electronic health record connections, alongside prospective evaluations of outcomes, equity, experience, effectiveness, and costs.
(This article belongs to the Section Health Informatics)
31 pages, 16955 KB  
Article
Uncertainty Assessment of the Impacts of Climate Change on Streamflow in the Iznik Lake Watershed, Türkiye
by Anıl Çalışkan Tezel, Adem Akpınar, Aslı Bor and Şebnem Elçi
Water 2026, 18(2), 187; https://doi.org/10.3390/w18020187 - 10 Jan 2026
Abstract
Study region: This study focused on the Iznik Lake Watershed in northwestern Türkiye. Study focus: Climate change is increasingly affecting water resources worldwide, raising concerns about future hydrological sustainability. This study investigates the impacts of climate change on river streamflow in the Iznik Lake Watershed, a critical freshwater resource in northwestern Türkiye. To capture possible future conditions, downscaled climate projections were integrated with the SWAT+ hydrological model. Recognizing the inherent uncertainties in climate models and model parameterization, the analysis examined the relative influence of climate realizations, emission scenarios, and hydrological parameters on streamflow outputs. By quantifying both the magnitude of climate-induced changes and the contribution of different sources of uncertainty, the study provides insights that can guide decision-makers in future management planning and be useful for forthcoming modeling efforts. New hydrological insights for the region: Projections indicate wetter winters and springs but drier summers, with an overall warming trend in the study area. Based on simulations driven by four representative grid points, the results at the Karadere station, which represents the main inflow of the watershed, indicate changes in mean annual streamflow ranging from −7% to +56% in the near future and from +19% to +54% in the far future. Maximum flows (Qmax) exhibit notable increases, ranging from +0.9% to +47% in the near future and from +21% to +63% in the far future, indicating a tendency toward higher peak discharges under future climate conditions. Low-flow conditions, especially in summer, exhibit the greatest relative variability due to near-zero baseline discharges. Relative change analysis revealed considerable differences between the Karadere and Findicak sub-catchments, reflecting heterogeneous hydrological responses even within the same basin. Uncertainty analysis, conducted using both an ANOVA-based approach and Bayesian Model Averaging (BMA), highlighted the dominant influence of climate projections and potential evapotranspiration calculation methods, while land use change contributed negligibly to overall uncertainty.
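
The ANOVA-based attribution of uncertainty mentioned above can be sketched as a main-effect variance decomposition over a factorial set of simulations. Everything below (the factorial design, effect sizes, and noise) is invented for illustration, not the study's data:

```python
# ANOVA-style main-effect variance decomposition for uncertainty attribution.
import itertools
import numpy as np

rng = np.random.default_rng(0)
climates, scenarios, pet_methods = range(4), range(2), range(3)

# Simulated change in mean annual streamflow (%) per factor combination.
change = {(c, s, p): 20 + 8 * c - 5 * s + 3 * p + rng.normal(0, 2)
          for c, s, p in itertools.product(climates, scenarios, pet_methods)}
vals = np.array(list(change.values()))
grand = vals.mean()
total_ss = ((vals - grand) ** 2).sum()

def main_effect_ss(levels, idx):
    """Sum of squares explained by one factor's group means."""
    ss = 0.0
    for lv in levels:
        group = [v for k, v in change.items() if k[idx] == lv]
        ss += len(group) * (np.mean(group) - grand) ** 2
    return ss

for name, levels, idx in [("climate realization", climates, 0),
                          ("emission scenario", scenarios, 1),
                          ("PET method", pet_methods, 2)]:
    share = 100 * main_effect_ss(levels, idx) / total_ss
    print(f"{name}: {share:.1f}% of total variance")
```
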
22 pages, 2421 KB  
Article
Application of Large Language Models in the Protection of Industrial IoT Systems for Critical Infrastructure
by Anna Manowska and Jakub Syta
Appl. Sci. 2026, 16(2), 730; https://doi.org/10.3390/app16020730 - 10 Jan 2026
Abstract
The advancing digitization of critical infrastructure and the growing use of Industrial Internet of Things (IIoT) systems are significantly increasing the exposure of operational systems to cyber threats. The integration of information (IT) and operational (OT) layers, characteristic of today’s industrial environments, increases the complexity of system architectures and the number of security events that require ongoing analysis. Under such conditions, classic approaches to monitoring and responding to incidents prove insufficient, especially for systems with high reliability and business continuity requirements. The aim of this article is to analyze the possibilities of using Large Language Models (LLMs) in the protection of industrial IoT systems operating in critical infrastructure. The paper analyzes the architecture of industrial automation systems and identifies classes of cyber threat scenarios characteristic of IIoT environments, including availability disruptions, degradation of system operation, manipulation of process data, and supply-chain-based attacks. On this basis, the potential roles of large language models in security monitoring processes are examined, particularly with respect to incident interpretation, correlation of heterogeneous data sources, and contextual analysis under operational constraints. The experimental evaluation demonstrates that, when compared to a rule-based baseline, the LLM-based approach provides consistently improved classification of incident impact and attack vectors across IT, DMZ, and OT segments, while maintaining a low rate of unsupported responses. These results indicate that large language models can complement existing industrial IoT security mechanisms by enhancing context-aware analysis and decision support rather than replacing established detection and monitoring systems.
(This article belongs to the Special Issue Applications of Artificial Intelligence in the IoT)
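
To make the envisioned monitoring role concrete, here is a hypothetical prompt-construction stub for LLM-assisted incident triage; `llm` is a placeholder for any chat-completion client, and neither the prompt wording, the segment/impact taxonomy phrasing, nor the output schema comes from the paper:

```python
# Hypothetical prompt template for LLM-assisted IIoT incident triage.
# `llm` is a placeholder callable; nothing here reflects the paper's
# actual prompts, taxonomy, or model configuration.

INCIDENT_PROMPT = """You assist a SOC analyst at an industrial plant.
Classify the incident below.

segment: one of [IT, DMZ, OT], inferred from the affected asset.
impact: one of [availability disruption, degraded operation,
                process data manipulation, supply-chain attack].
Answer as JSON with keys: segment, impact, rationale.

Incident log:
{log}
"""

def classify_incident(llm, log: str) -> str:
    """Return the model's JSON classification for one raw incident log."""
    return llm(INCIDENT_PROMPT.format(log=log))
```
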
19 pages, 2840 KB  
Article
Evolution of Computerized Provider Order Entry Documentation at a Leading Tertiary Care Referral Center in Riyadh
by Hanan Sabet Alanazi and Yazed Alruthia
Healthcare 2026, 14(2), 179; https://doi.org/10.3390/healthcare14020179 - 10 Jan 2026
Abstract
Background: Computerized Provider Order Entry (CPOE) systems are critical for medication safety, but their effectiveness relies heavily on the completeness of entered data. Incomplete clinical and anthropometric information can disable Clinical Decision Support Systems (CDSSs), compromising patient safety. Objective: This study aimed to assess the longitudinal evolution of CPOE data completeness, specifically focusing on “Breadth Completeness” (the presence of essential clinical variables), and to identify factors predicting data integrity in a tertiary care setting. Methods: A retrospective cross-sectional study was conducted at a 500-bed tertiary referral center in Riyadh. Data were extracted from the Cerner Millennium CPOE system for three “steady-state” years (2015, 2017, and 2019); years involving major system overhauls (2016 and 2018) were excluded to avoid structural bias. A total of 600 unique patient encounters (200 per year) were selected using systematic random sampling from a chronologically ordered sampling frame to minimize temporal bias. The primary outcome was “Breadth Completeness,” defined as the presence of eight key variables: age, gender, marital status, weight, height, diagnosis, vital signs, and allergies. Secondary outcomes included documentation consistency (daily notes). Multivariable logistic regression, adjusted for potential confounders, was used to determine predictors of completeness. Results: The rate of primary data completeness (Breadth) improved significantly over the study period, rising from 5.5% in 2015 to 26% in 2017 and 49.5% in 2019. In the multivariable analysis, the year of documentation (OR = 17.47 for 2019 vs. 2015, p < 0.0001) and length of hospitalization (OR = 1.04, p = 0.045) were significant predictors of completeness. Pharmacist-led medication reconciliation was associated with a 2.5-fold increase in data completeness in bivariate analysis (p < 0.0001). Conclusions: While system maturity has driven substantial improvements in CPOE documentation, critical gaps persist, particularly in anthropometric data required for safety alerts. The study underscores the necessity of mandating “hard stops” for core variables and formalizing pharmacist involvement in data reconciliation to ensure patient safety.
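
The primary outcome lends itself to a simple check: an encounter counts as breadth-complete only when all eight key variables are documented. A minimal sketch, with assumed field names rather than the actual Cerner schema:

```python
# Sketch of the "Breadth Completeness" outcome; field names are assumptions.

REQUIRED = ["age", "gender", "marital_status", "weight",
            "height", "diagnosis", "vital_signs", "allergies"]

def is_breadth_complete(encounter: dict) -> bool:
    """True only if every one of the eight key variables is documented."""
    return all(encounter.get(field) not in (None, "") for field in REQUIRED)

encounters = [
    {"age": 54, "gender": "F", "marital_status": "married", "weight": 70,
     "height": 165, "diagnosis": "pneumonia", "vital_signs": "recorded",
     "allergies": "none"},
    {"age": 61, "gender": "M", "diagnosis": "UTI"},   # anthropometrics missing
]
rate = sum(map(is_breadth_complete, encounters)) / len(encounters)
print(f"breadth completeness: {rate:.0%}")
```
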
17 pages, 1588 KB  
Article
Integrating Contextual Causal Deep Networks and LLM-Guided Policies for Sequential Decision-Making
by Jong-Min Kim
Mathematics 2026, 14(2), 269; https://doi.org/10.3390/math14020269 - 10 Jan 2026
Abstract
Sequential decision-making is critical for applications ranging from personalized recommendations to resource allocation. This study evaluates three decision policies—Greedy, Thompson Sampling (via Monte Carlo Dropout), and a zero-shot Large Language Model (LLM)-guided policy (Gemini-1.5-Pro)—within a contextual bandit framework. To address covariate shift and assess subpopulation performance, we utilize a Collective Conditional Diffusion Network (CCDN) where covariates are partitioned into B = 10 homogeneous blocks. Evaluating these policies across a high-dimensional treatment space (K = 5, resulting in 2⁵ = 32 actions), we tested performance in a simulated environment and three benchmark datasets: Boston Housing, Wine Quality, and Adult Income. Our results demonstrate that the Greedy strategy achieves the highest Model-Relative Optimal (MRO) coverage, reaching 1.00 in the Wine Quality and Adult Income datasets, though performance drops significantly to 0.05 in the Boston Housing environment. Thompson Sampling maintains competitive regret and, in the Boston Housing dataset, marginally outperforms Greedy in action selection precision. Conversely, the zero-shot LLM-guided policy consistently underperforms in numerical tabular settings, exhibiting the highest median regret and near-zero MRO coverage across most tasks. Furthermore, Wilcoxon tests reveal that differences in empirical outcomes between policies are often not statistically significant (ns), suggesting an optimization ceiling in zero-shot tabular settings. These findings indicate that while traditional model-driven policies are robust, LLM-guided approaches currently lack the numerical precision required for high-dimensional sequential decision-making without further calibration or hybrid integration.
(This article belongs to the Special Issue Computational Methods and Machine Learning for Causal Inference)
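
A toy version of the composite action space and the Thompson Sampling policy can be sketched as follows; Gaussian running posteriors stand in for the paper's Monte Carlo Dropout uncertainty, and the reward model is invented:

```python
# Toy Thompson Sampling over 2^5 = 32 composite actions (subsets of K = 5
# binary treatments). Illustrative only; not the paper's CCDN setup.
import numpy as np

rng = np.random.default_rng(1)
K = 5
actions = np.array([[(a >> k) & 1 for k in range(K)] for a in range(2 ** K)])
true_effect = rng.normal(0, 1, K)          # hidden per-treatment effects

mean = np.zeros(2 ** K)                    # running posterior mean per action
count = np.ones(2 ** K)
for _ in range(3000):
    sampled = rng.normal(mean, 1 / np.sqrt(count))   # draw from posterior
    a = int(np.argmax(sampled))                      # act on the sample
    r = actions[a] @ true_effect + rng.normal(0, 0.5)
    count[a] += 1
    mean[a] += (r - mean[a]) / count[a]              # incremental mean update

print("learned best action:", actions[int(np.argmax(mean))])
print("true best action:   ", (true_effect > 0).astype(int))
```
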
19 pages, 782 KB  
Article
For Autonomous Driving: The LGAT Model—A Method for Long-Term Time Series Forecasting
by Guoyu Qi, Jiaqi Kang, Yufeng Sun and Guangle Song
Electronics 2026, 15(2), 305; https://doi.org/10.3390/electronics15020305 - 9 Jan 2026
Abstract
Time series forecasting plays a critical role in a wide range of applications, including energy load forecasting, traffic flow management, weather prediction, and vision-based state prediction for autonomous driving. In the context of autonomous vehicles, accurate forecasting of sequential visual information—such as traffic participant trajectories, road condition variations, and obstacle motion trends perceived by onboard sensors—is a fundamental prerequisite for safe and reliable decision-making. To overcome the limitations of existing long-term time series forecasting models, particularly their insufficient capability in temporal feature extraction, this paper proposes a Local–Global Adaptive Transformer (LGAT) for long-term time series forecasting. The proposed model incorporates three key innovations: (1) a period-aware positional encoding mechanism that embeds intrinsic periodic patterns of time series into positional representations and adaptively adjusts encoding parameters according to data-specific periodicity; (2) a temporal feature enhancement module based on gated convolution, which effectively suppresses noise in raw inputs while emphasizing discriminative temporal characteristics; and (3) a local–global adaptive attention layer that combines sliding window–based local attention with importance-aware global attention to simultaneously capture short-term local variations and long-term global dependencies. Experimental results on five public benchmark datasets demonstrate that LGAT consistently outperforms most baseline models, indicating its strong potential for time series forecasting applications in autonomous driving scenarios.
(This article belongs to the Special Issue Deep Perception in Autonomous Driving, 2nd Edition)
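
The sliding-window half of the local–global attention layer reduces to a banded attention mask. A minimal sketch, with an assumed window size:

```python
# Sliding-window local attention mask: position i may attend to position j
# only when |i - j| <= window. Window size is an assumption for illustration.
import numpy as np

def local_attention_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean (seq_len x seq_len) mask; True marks allowed attention pairs."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = local_attention_mask(seq_len=8, window=2)
print(mask.astype(int))   # banded matrix: each step sees +-2 neighbours
```
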
25 pages, 882 KB  
Article
A BERT and NSGA-II Based Model for Workforce Resource Allocation Optimization in the Operational Stage of Commercial Buildings
by Xiangjun Li and Junhao Ma
Buildings 2026, 16(2), 289; https://doi.org/10.3390/buildings16020289 - 9 Jan 2026
Abstract
Existing experience-based methods cannot effectively assist commercial building operators in allocating workforce resources according to contracts and balancing multiple workforce management objectives under resource constraints, leading to misaligned allocation strategies. To address this issue, this study develops a workforce resource allocation optimization model based on BERT and NSGA-II. First, a natural language processing (NLP) model is trained to extract operational tasks from contracts and match required workforce types, thereby establishing the framework for workforce allocation schemes. Second, a mathematical optimization model for workforce allocation strategies is constructed with the objectives of minimizing workforce wage costs (B1), maximizing average service levels (B2), and maximizing average digital technology acceptance (B3). An algorithm based on NSGA-II is then designed to solve the model and obtain the optimal Pareto solution set of allocation schemes. Third, the CRITIC–VIKOR method evaluates the Pareto set and determines the final recommended schemes. A case study was conducted on a university campus in Shandong, China, to validate the model’s effectiveness. The results show that the NLP model successfully identified 14 operational tasks and 13 required workforce types from the contract. Compared with the operator’s expected values (B1 = 460,000 CNY, B2 = 65 points, B3 = 50 points), the optimal allocation scheme calculated using NSGA-II and the CRITIC–VIKOR method reduces B1 by 10.79%, increases B2 by 18.02%, and improves B3 by 16.79%. This study formulates the workforce allocation problem in the operation stage as a mathematical optimization model and, for the first time, incorporates the workforce’s digital technology acceptance as an optimization objective, thereby filling a theoretical gap in workforce management for commercial building operations. The proposed model provides operators with a semi-automated decision-support tool to enhance workforce management, thereby promoting the sustainable operation of commercial buildings.
(This article belongs to the Section Construction Management, and Computers & Digitization)
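
At the core of NSGA-II's Pareto solution set is non-dominated filtering, sketched below on invented candidate schemes (cost B1 is minimized; B2 and B3 are negated so that all three objectives are minimized):

```python
# Minimal non-dominated (Pareto) filter, the core idea behind NSGA-II's
# Pareto set. Candidate tuples are hypothetical: (B1 cost, -B2, -B3).

def dominates(a, b):
    """a dominates b if it is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(410_000, -77, -58), (455_000, -80, -60),
              (420_000, -70, -52), (430_000, -77, -58)]
print(pareto_front(candidates))   # dominated schemes are filtered out
```
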
20 pages, 288 KB  
Article
Dialogical AI for Cognitive Bias Mitigation in Medical Diagnosis
by Leonardo Guiducci, Claudia Saulle, Giovanna Maria Dimitri, Benedetta Valli, Simona Alpini, Cristiana Tenti and Antonio Rizzo
Appl. Sci. 2026, 16(2), 710; https://doi.org/10.3390/app16020710 - 9 Jan 2026
Abstract
Large Language Models (LLMs) promise to enhance clinical decision-making, yet empirical studies reveal a paradox: physician performance with LLM assistance shows minimal improvement or even deterioration. This failure stems from an “acquiescence problem”: current LLMs passively confirm rather than challenge clinicians’ hypotheses, reinforcing cognitive biases such as anchoring and premature closure. To address these limitations, we propose a Dialogic Reasoning Framework that operationalizes Dialogical AI principles through a prototype implementation named “Diagnostic Dialogue” (DiDi). This framework operationalizes LLMs into three user-controlled roles: the Framework Coach (guiding structured reasoning), the Socratic Guide (asking probing questions), and the Red Team Partner (presenting evidence-based alternatives). Built upon Retrieval-Augmented Generation (RAG) architecture for factual grounding and traceability, this framework transforms LLMs from passive information providers into active reasoning partners that systematically mitigate cognitive bias. We evaluate the feasibility and qualitative impact of this framework through a pilot study (DiDi) deployed at Centro Chirurgico Toscano (CCT). Through purposive sampling of complex clinical scenarios, we present comparative case studies illustrating how the dialogic approach generates necessary cognitive friction to overcome acquiescence observed in standard LLM interactions. While rigorous clinical validation through randomized controlled trials remains necessary, this work establishes a methodological foundation for designing LLM-based clinical decision support systems that genuinely augment human clinical reasoning.
(This article belongs to the Section Computing and Artificial Intelligence)
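
A hypothetical sketch of the three user-controlled roles expressed as system prompts; only the role names come from the abstract, and all wording below is invented:

```python
# Hypothetical role-to-system-prompt mapping; role names from the abstract,
# prompt wording invented. Not the DiDi prototype's actual configuration.

ROLES = {
    "framework_coach": "Guide the clinician through a structured diagnostic "
                       "framework step by step; do not propose diagnoses.",
    "socratic_guide": "Ask one probing question at a time that tests the "
                      "clinician's current hypothesis.",
    "red_team_partner": "Argue for the strongest evidence-based alternative "
                        "to the working diagnosis, citing retrieved sources.",
}

def system_prompt(role: str) -> str:
    """Return the system prompt for the user-selected reasoning role."""
    return ROLES[role]
```
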
38 pages, 1376 KB  
Review
Risk Assessment of Chemical Mixtures in Foods: A Comprehensive Methodological and Regulatory Review
by Rosana González Combarros, Mariano González-García, Gerardo David Blanco-Díaz, Kharla Segovia Bravo, José Luis Reino Moya and José Ignacio López-Sánchez
Foods 2026, 15(2), 244; https://doi.org/10.3390/foods15020244 - 9 Jan 2026
Abstract
Over the last 15 years, mixture risk assessment for food xenobiotics has evolved from conceptual discussions and simple screening tools, such as the Hazard Index (HI), towards operational, component-based and probabilistic frameworks embedded in major food-safety institutions. This review synthesizes methodological and regulatory advances in cumulative risk assessment for dietary “cocktails” of pesticides, contaminants and other xenobiotics, with a specific focus on food-relevant exposure scenarios. At the toxicological level, the field is now anchored in concentration/dose addition as the default model for similarly acting chemicals, supported by extensive experimental evidence that most environmental mixtures behave approximately dose-additively at low effect levels. Building on this paradigm, a portfolio of quantitative metrics has been developed to operationalize component-based mixture assessment: HI as a conservative screening anchor; Relative Potency Factors (RPF) and Toxic Equivalents (TEQ) to express doses within cumulative assessment groups; the Maximum Cumulative Ratio (MCR) to diagnose whether risk is dominated by one or several components; and the combined Margin of Exposure (MOET) as a point-of-departure-based integrator that avoids compounding uncertainty factors. Regulatory frameworks developed by EFSA, the U.S. EPA and FAO/WHO converge on tiered assessment schemes, biologically informed grouping of chemicals and dose addition as the default model for similarly acting substances, while differing in scope, data infrastructure and legal embedding. Implementation in food safety critically depends on robust exposure data streams. Total Diet Studies provide population-level, “as eaten” exposure estimates through harmonized food-list construction, home-style preparation and composite sampling, and are increasingly combined with conventional monitoring. In parallel, human biomonitoring quantifies internal exposure to diet-related xenobiotics such as PFAS, phthalates, bisphenols and mycotoxins, embedding mixture assessment within a dietary-exposome perspective. Across these developments, structured uncertainty analysis and decision-oriented communication have become indispensable. By integrating advances in toxicology, exposure science and regulatory practice, this review outlines a coherent, tiered and uncertainty-aware framework for assessing real-world dietary mixtures of xenobiotics, and identifies priorities for future work, including mechanistically and data-driven grouping strategies, expanded use of physiologically based pharmacokinetic modelling and refined mixture-sensitive indicators to support public-health decision-making.
(This article belongs to the Special Issue Research on Food Chemical Safety)
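
The screening metrics named above follow directly from their definitions: HI is the sum of hazard quotients, MCR is HI divided by the largest single quotient, and MOET is the reciprocal of the summed reciprocal margins of exposure. A minimal sketch with invented exposure data:

```python
# HI, MCR and MOET for a hypothetical three-compound mixture; all exposure,
# reference-dose and point-of-departure values below are invented.

exposures = {"compound_A": 0.02, "compound_B": 0.05, "compound_C": 0.01}  # mg/kg bw/day
ref_doses = {"compound_A": 0.10, "compound_B": 0.30, "compound_C": 0.05}  # e.g. ADI
pods      = {"compound_A": 10.0, "compound_B": 25.0, "compound_C": 5.0}   # points of departure

hq = {c: exposures[c] / ref_doses[c] for c in exposures}   # hazard quotients
hi = sum(hq.values())                                      # Hazard Index
mcr = hi / max(hq.values())                                # Maximum Cumulative Ratio
moet = 1.0 / sum(exposures[c] / pods[c] for c in exposures)  # combined MOE

print(f"HI = {hi:.2f} (screening concern if > 1), MCR = {mcr:.2f}, MOET = {moet:.0f}")
```
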
31 pages, 4648 KB  
Article
GF-NGB: A Graph-Fusion Natural Gradient Boosting Framework for Pavement Roughness Prediction Using Multi-Source Data
by Yuanjiao Hu, Mengyuan Niu, Liumei Zhang, Lili Pei, Zhenzhen Fan and Yang Yang
Symmetry 2026, 18(1), 134; https://doi.org/10.3390/sym18010134 - 9 Jan 2026
Abstract
Pavement roughness is a critical indicator for road maintenance decisions and driving safety assessment. Existing methods primarily rely on multi-source explicit features, which have limited capability in capturing implicit information such as spatial topology between road segments. Furthermore, their accuracy and stability remain insufficient in cross-regional and small-sample prediction scenarios. To address these limitations, we propose a Graph-Fused Natural Gradient Boosting framework (GF-NGB), which combines the spatial topology modeling capability of graph neural networks with the small-sample robustness of natural gradient boosting for high-precision cross-regional roughness prediction. The method first extracts an 18-dimensional set of multi-source features from the U.S. Long-Term Pavement Performance (LTPP) database and derives an 8-dimensional set of implicit spatial features using a graph neural network. These features are then concatenated and fed into a natural gradient boosting model, which is optimized by Optuna, to predict the dual objectives of left and right wheel-track roughness. To evaluate the generalization capability of the proposed method, we employ a spatially partitioned data split: the training set includes 1648 segments from Arizona, California, Florida, Ontario, and Missouri, while the test set comprises 330 segments from Manitoba and Nevada with distinct geographic and climatic conditions. Experimental results show that GF-NGB achieves the best performance on cross-regional tests, with average prediction accuracy improved by 1.7% and 3.6% compared to Natural Gradient Boosting (NGBoost) and a Graph Neural Network–Multilayer Perceptron hybrid model (GNN-MLP), respectively. This study reveals the synergistic effect of multi-source texture features and spatial topology information, providing a generalizable framework and technical pathway for cross-regional, small-sample intelligent pavement monitoring and smart maintenance.
(This article belongs to the Special Issue Symmetry/Asymmetry in Intelligent Transportation)
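
The fusion step can be sketched as feature concatenation followed by probabilistic boosting; the snippet below assumes the open-source ngboost package and random stand-in data, not the LTPP features or the paper's Optuna-tuned model:

```python
# Sketch of the fusion step: concatenate 18 explicit features with 8
# GNN-derived spatial embeddings, then fit a probabilistic boosting model.
# Assumes the open-source `ngboost` package; all data below is synthetic.
import numpy as np
from ngboost import NGBRegressor

rng = np.random.default_rng(0)
explicit = rng.normal(size=(200, 18))   # 18 multi-source explicit features
spatial = rng.normal(size=(200, 8))     # 8 implicit GNN spatial features
X = np.concatenate([explicit, spatial], axis=1)
y = X @ rng.normal(size=26) + rng.normal(0, 0.1, 200)  # synthetic roughness

model = NGBRegressor(n_estimators=300).fit(X, y)
pred_dist = model.pred_dist(X[:5])       # distribution, not just a point
print(pred_dist.loc, pred_dist.scale)    # predicted mean and spread
```
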
20 pages, 6621 KB  
Article
Sensor Fusion-Based Machine Learning Algorithms for Meteorological Conditions Nowcasting in Port Scenarios
by Marwan Haruna, Francesco Kotopulos De Angelis, Kaleb Gebremicheal Gebremeskel, Alexandr Tardo and Paolo Pagano
Sensors 2026, 26(2), 448; https://doi.org/10.3390/s26020448 - 9 Jan 2026
Abstract
Modern port operations face increasing challenges from rapidly changing weather and environmental conditions, requiring accurate short-term forecasting to support safe and efficient maritime activities. This study presents a sensor fusion-based machine learning framework for real-time multi-target nowcasting of wind gust speed, sustained wind speed, and wind direction using heterogeneous data collected at the Port of Livorno from February to November 2025. Using an IoT architecture compliant with the oneM2M standard and deployed at the Port of Livorno, CNIT integrated heterogeneous data from environmental sensors (meteorological stations, anemometers) and vessel-mounted LiDAR systems through feature-level fusion to enhance situational awareness, with gust speed treated as the primary safety-critical variable due to its substantial impact on berthing and crane operations. In addition, a comparative performance analysis of Random Forest, XGBoost, LSTM, Temporal Convolutional Network, Ensemble Neural Network, Transformer models, and a Kalman filter was performed. The results show that XGBoost consistently achieved the highest accuracy across all targets, with near-perfect performance in both single-split testing (R² ≈ 0.999) and five-fold cross-validation (mean R² = 0.9976). Ensemble models exhibited greater robustness than deep learning approaches. The proposed multi-target fusion framework demonstrates strong potential for real-time deployment in Maritime Autonomous Surface Ship (MASS) systems and port decision-support platforms, enabling safer manoeuvring and operational continuity under rapidly varying environmental conditions.
(This article belongs to the Special Issue Signal Processing and Machine Learning for Sensor Systems)
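
A minimal sketch of the evaluation pattern described above (multi-target regression with XGBoost scored by five-fold cross-validated R²), on synthetic stand-in data rather than the Livorno sensor streams:

```python
# Multi-target XGBoost nowcasting skeleton with five-fold cross-validated R^2.
# Features and targets are synthetic placeholders, not the fused port data.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                     # fused sensor features
Y = np.column_stack([X @ rng.normal(size=12) + rng.normal(0, 0.1, 500)
                     for _ in range(3)])           # gust, wind speed, direction

model = MultiOutputRegressor(XGBRegressor(n_estimators=200, max_depth=4))
scores = cross_val_score(model, X, Y, scoring="r2",
                         cv=KFold(5, shuffle=True, random_state=0))
print("five-fold R^2:", scores.round(4), "mean:", scores.mean().round(4))
```
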
29 pages, 1793 KB  
Review
Digital Twins for Cows and Chickens: From Hype Cycles to Hard Evidence in Precision Livestock Farming
by Suresh Neethirajan
Agriculture 2026, 16(2), 166; https://doi.org/10.3390/agriculture16020166 - 9 Jan 2026
Abstract
Digital twin technology is widely promoted as a transformative step for precision livestock farming, yet no fully realized, engineering-grade digital twins are deployed in commercial dairy or poultry systems today. This work establishes the current state of knowledge on dairy and poultry digital twins by synthesizing evidence through systematic database searches, thematic evidence mapping and critical analysis of validation gaps, carbon accounting and adoption barriers. Existing platforms are better described as near-digital-twin systems with partial sensing and modelling, digital-twin-inspired prototypes, simulation frameworks or decision-support tools that are often labelled as twins despite lacking continuous synchronization and closed-loop control. This distinction matters because the empirical foundation supporting many claims remains limited. Three critical gaps emerge: life-cycle carbon impacts of digital infrastructures are rarely quantified even as sustainability benefits are frequently asserted; field-validated improvements in feed efficiency, particularly in poultry feed conversion ratios, are scarce and inconsistent; and systematic reporting of failure rates, downtime and technology abandonment is almost absent, leaving uncertainties about long-term reliability. Adoption barriers persist across technical, economic and social dimensions, including rural connectivity limitations, sensor durability challenges, capital and operating costs, and farmer concerns regarding data rights, transparency and trust. Progress for cows and chickens will require rigorous validation in commercial environments, integration of mechanistic and statistical modelling, open and modular architectures and governance structures that support biological, economic and environmental accountability whilst ensuring that system intelligence is worth its material and energy cost.
(This article belongs to the Section Farm Animal Production)