Search Results (3,917)

Search Parameters:
Keywords = AI-based approach

28 pages, 9410 KB  
Article
Integrated AI Framework for Sustainable Environmental Management: Multivariate Air Pollution Interpretation and Prediction Using Ensemble and Deep Learning Models
by Youness El Mghouchi and Mihaela Tinca Udristioiu
Sustainability 2026, 18(3), 1457; https://doi.org/10.3390/su18031457 (registering DOI) - 1 Feb 2026
Abstract
Accurate prediction, forecasting and interpretability of air pollutant concentrations are important for sustainable environmental management and protecting public health. An integrated artificial intelligence (AI) framework is proposed to predict, forecast and analyse six major air pollutants, namely particulate matter concentrations (PM2.5 and PM10), ground-level ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulphur dioxide (SO2), using a combination of ensemble and deep learning models. Five years of hourly air quality and meteorological data are analysed through correlation and Granger causality tests to uncover pollutant interdependencies and driving factors. The results of the Pearson correlation analysis reveal strong positive associations among primary pollutants (PM2.5–PM10, CO–nitrogen oxides NOx and VOCs) and inverse correlations between O3 and NOx (NO and NO2), confirming typical photochemical behaviour. Granger causality analysis further identified NO2 and NO as key causal drivers influencing other pollutants, particularly O3 formation. Among the 23 tested AI models for prediction, XGBoost, Random Forest, and Convolutional Neural Networks (CNNs) achieve the best performance for different pollutants. NO2 prediction using CNNs displays the highest accuracy in testing (R2 = 0.999, RMSE = 0.66 µg/m3), followed by PM2.5 and PM10 with XGBoost (R2 = 0.90 and 0.79 during testing, respectively). The Air Quality Index (AQI) analysis shows that SO2 and PM10 are the dominant contributors to poor air quality episodes, while ozone peaks occur during warm, high-radiation periods. The interpretability analysis based on Shapley Additive exPlanations (SHAP) highlights the key influence of relative humidity, temperature, solar brightness, and NOx species on pollutant concentrations, confirming their meteorological and chemical relevance. Finally, a deep-NARMAX model was applied to forecast the next horizons for the six air pollutants studied.
Six formulas were elaborated using input data at times (t, t − 1, t − 2, …, t − n) to forecast a horizon of (t + 1) hours for single-step forecasting. For multi-step forecasting, the forecast is extended iteratively to (t + 2) hours and beyond. A recursive strategy is adopted for this purpose, whereby the forecast at (t + 1) is fed back as an input to generate the forecasts at (t + 2), and so forth. Overall, this integrated framework combines predictive accuracy with physical interpretability, offering a powerful data-driven tool for air quality assessment and policy support. This approach can be extended to real-time applications for sustainable environmental monitoring and decision-making systems. Full article
(This article belongs to the Section Air, Climate Change and Sustainability)
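The recursive multi-step strategy described in this abstract — feeding the forecast at (t + 1) back as an input to produce (t + 2) and beyond — can be sketched as follows. This is a minimal illustration with a generic one-step `model` callable; it is not the authors' deep-NARMAX implementation, and all names are placeholders.

```python
import numpy as np

def recursive_forecast(model, history, lags, steps):
    """Roll a one-step-ahead model forward over a multi-step horizon.

    model   -- callable mapping a lag vector (x(t-n+1), ..., x(t)) to x(t+1)
    history -- 1-D array of observed values, most recent last
    lags    -- number of past values the model consumes
    steps   -- forecast horizon (t+1 ... t+steps)
    """
    window = list(history[-lags:])
    forecasts = []
    for _ in range(steps):
        x_next = model(np.array(window))
        forecasts.append(x_next)
        # recursive strategy: the newest forecast becomes the newest input
        window = window[1:] + [x_next]
    return forecasts
```

With a toy one-step model such as `lambda w: w[-1] + 1.0`, the function traces out the iterated (t + 1), (t + 2), … path exactly as the abstract describes.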

32 pages, 2836 KB  
Article
Towards Trustworthy AI Agents in Geriatric Medicine: A Secure and Assistive Architectural Blueprint
by Elena-Anca Paraschiv, Adrian Victor Vevera, Carmen Elena Cîrnu, Lidia Băjenaru, Andreea Dinu and Gabriel Ioan Prada
Future Internet 2026, 18(2), 75; https://doi.org/10.3390/fi18020075 (registering DOI) - 1 Feb 2026
Abstract
As artificial intelligence (AI) continues to expand across clinical environments, healthcare is transitioning from static decision-support tools to dynamic, autonomous agents capable of reasoning, coordination, and continuous interaction. In the context of geriatric medicine, a field characterized by multimorbidity, cognitive decline, and the need for long-term personalized care, this evolution opens new frontiers for delivering adaptive, assistive, and trustworthy digital support. However, the autonomy and interconnectivity of these systems introduce heightened cybersecurity and ethical challenges. This paper presents a Secure Agentic AI Architecture (SAAA) tailored to the unique demands of geriatric healthcare. The architecture is designed around seven layers, grouped into five functional domains (cognitive, coordination, security, oversight, governance) to ensure modularity, interoperability, explainability, and robust protection of sensitive health data. A review of current AI agent implementations highlights limitations in security, transparency, and regulatory alignment, especially in multi-agent clinical settings. The proposed framework is illustrated through a practical use case involving home-based care for elderly patients with chronic conditions, where AI agents manage medication adherence, monitor vital signs, and support clinician communication. The architecture’s flexibility is further demonstrated through its application in perioperative care coordination, underscoring its potential across diverse clinical domains. By embedding trust, accountability, and security into the design of agentic systems, this approach aims to advance the safe and ethical integration of AI into aging-focused healthcare environments. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

39 pages, 3699 KB  
Article
Enhancing Decision Intelligence Using Hybrid Machine Learning Framework with Linear Programming for Enterprise Project Selection and Portfolio Optimization
by Abdullah, Nida Hafeez, Carlos Guzmán Sánchez-Mejorada, Miguel Jesús Torres Ruiz, Rolando Quintero Téllez, Eponon Anvi Alex, Grigori Sidorov and Alexander Gelbukh
AI 2026, 7(2), 52; https://doi.org/10.3390/ai7020052 (registering DOI) - 1 Feb 2026
Abstract
This study presents a hybrid analytical framework that enhances project selection by achieving reasonable predictive accuracy through the integration of expert judgment and modern artificial intelligence (AI) techniques. Using an enterprise-level dataset of 10,000 completed software projects with verified real-world statistical characteristics, we develop a three-step architecture for intelligent decision support. First, we introduce an extended Analytic Hierarchy Process (AHP) that incorporates organizational learning patterns to compute expert-validated criteria weights with a consistent level of reliability (CR=0.04), and Linear Programming is used for portfolio optimization. Second, we propose a machine learning architecture that integrates expert knowledge derived from AHP into models such as Transformers, TabNet, and Neural Oblivious Decision Ensembles through mechanisms including attention modulation, split criterion weighting, and differentiable tree regularization. Third, the hybrid AHP-Stacking classifier generates a meta-ensemble that adaptively balances expert-derived information with data-driven patterns. The analysis shows that the model achieves 97.5% accuracy, a 96.9% F1-score, and a 0.989 AUC-ROC, representing a 25% improvement compared to baseline methods. The framework also indicates a projected 68.2% improvement in portfolio value (estimated incremental value of USD 83.5 M) based on post factum financial results from the enterprise’s ventures. This study is evaluated retrospectively using data from a single enterprise, and while the results demonstrate strong robustness, generalizability to other organizational contexts requires further validation. This research contributes a structured approach to hybrid intelligent systems and demonstrates that combining expert knowledge with machine learning can provide reliable, transparent, and high-performing decision-support capabilities for project portfolio management. Full article
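As a point of reference for the consistency figure quoted above (CR = 0.04), the standard eigenvector AHP check can be sketched as follows. This is the textbook method only — it does not reproduce the authors' extended AHP with organizational learning patterns — and the function names are illustrative.

```python
import numpy as np

# Saaty's random consistency index RI, indexed by matrix size n
SAATY_RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights_and_cr(pairwise):
    """Priority weights (principal eigenvector) and consistency ratio CR = CI / RI."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))            # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                    # normalise to sum to 1
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index
    return weights, ci / SAATY_RI[n]            # consistency ratio
```

A perfectly consistent pairwise matrix yields CR = 0; CR below 0.10 is conventionally accepted, which puts the reported 0.04 well inside the threshold.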

14 pages, 1464 KB  
Article
Data-Driven Contract Management at Scale: A Zero-Shot LLM Architecture for Big Data and Legal Intelligence
by Syed Omar Ali, Syed Abid Ali and Rabia Jafri
Technologies 2026, 14(2), 88; https://doi.org/10.3390/technologies14020088 (registering DOI) - 1 Feb 2026
Abstract
The exponential growth and complexity of legal agreements pose significant Big Data challenges and strategic risks for modern organizations, often overwhelming traditional, manual contract management workflows. While AI has enhanced legal research, most current applications require extensive domain-specific fine-tuning or substantial annotated data, and Large Language Models (LLMs) remain susceptible to hallucination risk. This paper presents an AI-based Agreement Management System that addresses this methodological gap and scale. The system integrates a Python 3.1.2/MySQL 9.4.0-backed centralized repository for multi-format document ingestion, a role-based Collaboration and Access Control module, and a core AI Functions module. The core contribution lies in the AI module, which leverages zero-shot learning with OpenAI’s GPT-4o and structured prompt chaining to perform advanced contractual analysis without domain-specific fine-tuning. Key functions include automated metadata extraction, executive summarization, red-flag clause detection, and a novel feature for natural-language contract modification. This approach overcomes the cost and complexity of training proprietary models, democratizing legal insight and significantly reducing operational overhead. The system was validated through real-world testing at a leading industry partner, demonstrating its effectiveness as a scalable and secure foundation for managing the high volume of legal data. This work establishes a robust proof-of-concept for future enterprise-grade enhancements, including workflow automation and predictive analytics. Full article
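The structured prompt chaining described here — each analysis step threading its answer into the next prompt — can be sketched with a placeholder `llm` callable standing in for the GPT-4o chat call. All names and templates below are illustrative assumptions, not the system's actual prompts.

```python
def chain_prompts(llm, document, templates):
    """Run a sequence of analysis prompts, feeding each answer into the next.

    llm       -- callable(prompt) -> str; placeholder for any chat-completion API
    document  -- the contract text under analysis
    templates -- prompt templates with {document} and {previous} slots
    """
    previous = ""
    results = []
    for template in templates:
        prompt = template.format(document=document, previous=previous)
        previous = llm(prompt)   # this answer becomes context for the next step
        results.append(previous)
    return results
```

In a pipeline like the one described, metadata extraction, executive summarization, and red-flag detection would each be one template in the chain.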

24 pages, 4127 KB  
Article
Harnessing AI, Virtual Landscapes, and Anthropomorphic Imaginaries to Enhance Environmental Science Education at Jökulsárlón Proglacial Lagoon, Iceland
by Jacquelyn Kelly, Dianna Gielstra, Tomáš J. Oberding, Jim Bruno and Stephanie Cosentino
Glacies 2026, 3(1), 3; https://doi.org/10.3390/glacies3010003 (registering DOI) - 1 Feb 2026
Abstract
Introductory environmental science courses offer non-STEM students an entry point to address global challenges such as climate change and cryosphere preservation. Aligned with the International Year of Glacier Preservation and the Decade of Action for Cryospheric Sciences, this mixed-method, IRB-exempt study applied the Curriculum Redesign and Artificial Intelligence-Facilitated Transformation (CRAFT) model for course redesign. The project leveraged a human-centered AI approach to create anthropomorphized, place-based narratives for online learning. Generative AI is used to amend immersive virtual learning environments (VLEs) that animate glacial forces (water, rock, and elemental cycles) through narrative-driven virtual reality (VR) experiences. Students explored Iceland’s Jökulsárlón Glacier Lagoon via self-guided field simulations led by an imaginary water droplet, designed to foster environmental awareness and a sense of place. Data collection included a five-point Likert-scale survey and thematic coding of student comments. Findings revealed strong positive sentiment: 87.1% enjoyment of the imaginaries, 82.5% agreement on supporting connection to places, and 82.0% endorsement of their role in reinforcing spatial and systems thinking. Thematic analysis confirmed that anthropomorphic imaginaries enhanced emotional engagement and conceptual understanding of glacial processes, situating glacier preservation within geographic and global contexts. This AI-enhanced, multimodal approach demonstrates how narrative-based VR can make complex cryospheric concepts accessible for non-STEM learners, promoting early engagement with climate science and environmental stewardship. Full article

41 pages, 1026 KB  
Article
A DEMATEL–ANP-Based Evaluation of AI-Assisted Learning in Higher Education
by Galina Ilieva, Tania Yankova, Margarita Ruseva and Stanislava Klisarova-Belcheva
Computers 2026, 15(2), 79; https://doi.org/10.3390/computers15020079 (registering DOI) - 1 Feb 2026
Abstract
This study proposes an indicator system for evaluating AI-assisted learning in higher education, combining evidence-based indicator development with expert-validated weighting. First, we review recent studies to extract candidate indicators and organize them into coherent dimensions. Next, a Delphi session with domain experts refines the second-order indicators and produces a measurable, non-redundant, implementation-ready index system. To capture interdependencies among indicators, we apply a hybrid Decision-Making Trial and Evaluation Laboratory–Analytic Network Process (DEMATEL–ANP, DANP) approach to derive global indicator weights. The framework is empirically illustrated through a course-level application to examine its decision usefulness, interpretability, and face validity based on expert evaluations and structured feedback from academic staff. The results indicate that pedagogical content quality, adaptivity (especially difficulty adjustment), formative feedback quality, and learner engagement act as key drivers in the evaluation network, while ethics-related indicators operate primarily as enabling constraints. The proposed framework provides a transparent and scalable tool for quality assurance in AI-assisted higher education, supporting instructional design, accreditation reporting, and continuous improvement. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
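A minimal sketch of the DEMATEL stage that underlies the DANP weighting — the standard total-relation computation from a direct-influence matrix — is shown below. The ANP supermatrix step that produces the final global weights is omitted, and all names are illustrative.

```python
import numpy as np

def dematel_total_relation(direct):
    """Total-relation matrix T = N (I - N)^-1 from a direct-influence matrix.

    Returns T, prominence (r + c), and net cause/effect (r - c) per indicator,
    where r and c are the row and column sums of T.
    """
    D = np.asarray(direct, dtype=float)
    s = max(D.sum(axis=1).max(), D.sum(axis=0).max())  # common normalisation factor
    N = D / s
    T = N @ np.linalg.inv(np.eye(D.shape[0]) - N)
    r, c = T.sum(axis=1), T.sum(axis=0)
    return T, r + c, r - c
```

Indicators with large r − c act as causes in the network and large r + c marks central indicators — the role the study attributes to pedagogical content quality, adaptivity, and feedback quality.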

20 pages, 6349 KB  
Article
Ship Detectability of Satellite-Based Radio Frequency Data in a Congested Area
by Chan-Su Yang and Sree Juwel Kumar Chowdhury
Remote Sens. 2026, 18(3), 451; https://doi.org/10.3390/rs18030451 (registering DOI) - 1 Feb 2026
Abstract
This study examined the association between radio frequency (RF) data and ships in a congested area, with a focus on the Busan Port region in South Korea. RF datasets consisting of one L-band, four S-band, and two X-band frequencies were used in conjunction with Automatic Identification System (AIS) and Small Fishing Vessel Tracking System (V-Pass) data collected during the corresponding RF data periods. A distance-based association approach was applied, and the RF–ship association ratio (RSAR) was estimated as the ratio between the number of AIS-reported vessels associated with RF data and the total number of AIS-reported vessels present within the time period. The results indicate low overall RSAR in the congested region, with 6.5% for L-band, 1.7–24.6% for S-band, and 7.7–17.2% for X-band. Under stable high-pressure conditions (101.4–102.2 kPa) and light breeze conditions (0.9–3.6 m/s), atmospheric impacts on the RSAR can be considered minimal. Moreover, to indicate the relationship between the acquired RF signal and vessel congestion, a congestion index (CI) was derived from AIS and V-Pass data using a spatial grid-based method. The CI density maps within the congested region indicate that the acquired RF signals exist dominantly in low-congested areas. Full article
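The distance-based association behind the RSAR can be illustrated with a simplified planar sketch: an AIS-reported vessel counts as associated if any RF geolocation falls within a threshold distance. The authors' actual geodetic criteria and thresholds may differ; all names here are assumptions.

```python
import math

def rsar(ais_positions, rf_positions, max_dist_m):
    """RF-ship association ratio: the share of AIS-reported vessels that have
    at least one RF detection within max_dist_m (planar x/y coordinates, metres)."""
    if not ais_positions:
        return 0.0
    associated = sum(
        1 for ax, ay in ais_positions
        if any(math.hypot(ax - rx, ay - ry) <= max_dist_m for rx, ry in rf_positions)
    )
    return associated / len(ais_positions)
```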

22 pages, 561 KB  
Review
A Systematic Review of Anomaly and Fault Detection Using Machine Learning for Industrial Machinery
by Syed Haseeb Haider Zaidi, Alex Shenfield, Hongwei Zhang and Augustine Ikpehai
Algorithms 2026, 19(2), 108; https://doi.org/10.3390/a19020108 (registering DOI) - 1 Feb 2026
Abstract
Unplanned downtime in industrial machinery remains a major challenge, causing substantial economic losses and safety risks across sectors such as manufacturing, food processing, oil and gas, and transportation. This systematic review investigates the application of machine learning (ML) techniques for anomaly and fault detection within the broader context of predictive maintenance. Following a hybrid review methodology, relevant studies published between 2010 and 2025 were collected from major databases including IEEE Xplore, ScienceDirect, SpringerLink, Scopus, Web of Science, and arXiv. The review categorizes approaches into supervised, unsupervised, and hybrid paradigms, analyzing their pipelines from data collection and preprocessing to model deployment. Findings highlight the effectiveness of deep learning architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), autoencoders, and hybrid frameworks in detecting faults from time series and multimodal sensor data. At the same time, key limitations persist, including data scarcity, class imbalance, limited generalizability across equipment types, and a lack of interpretability in deep models. This review concludes that while ML-based predictive maintenance systems are enabling a transition from reactive to proactive strategies, future progress requires improved hybrid architectures, Explainable AI, and scalable real-time deployment. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))

19 pages, 2658 KB  
Article
Unveiling the Gaps: Machine Learning Models for Unmeasured Ions
by Furkan Tontu and Zafer Çukurova
Diagnostics 2026, 16(3), 427; https://doi.org/10.3390/diagnostics16030427 (registering DOI) - 1 Feb 2026
Abstract
Background: Unmeasured ions (UIs) contribute significantly to acid–base disturbances in critically ill patients, yet the optimal parameter for their estimation remains uncertain. The most widely used indicators are the albumin-corrected anion gap (AGc), the strong ion gap (SIG), and the base excess gap (BEGap). Methods: In this retrospective cohort study, a total of 2274 ICU patients (2018–2022) were included in the development cohort, and an independent external validation cohort of 1202 patients (2023–2025) was used to assess temporal generalizability. Three approaches to blood gas analysis—traditional (PaCO2, HCO3, AGc), Stewart (PaCO2, SIDa, ATOT, SIG), and partitioned base excess (PaCO2, BECl, BEAlb, BELac, BEGap)—were evaluated. Multivariable linear regression (MLR) and machine learning (ML, random forest [RF], extreme gradient boosting [XGBoost], and support vector regression [SVR]) were applied to evaluate the explanatory performance of analytical approaches with respect to arterial pH. Model performance was assessed using adjusted R2, RMSE, and MAE. Variable importance was quantified with tree-based methods, SHAP values, and permutation importance. All modeling and reporting steps followed the PROBAST-AI guideline. Results: In multiple linear regression (MLR), the partitioned base excess (BE) approach achieved the highest explanatory performance (adjusted R2 = 0.949), followed by the traditional (0.929) and Stewart approaches (0.926). In ML analyses, model fit was high across all approaches. For the traditional approach, R2 values were 0.979 with RF, 0.974 with XGBoost, and 0.934 with SVR. The Stewart approach showed lower overall explanatory performance, with R2 values of 0.876 (RF), 0.967 (XGBoost), and 0.996 (SVR). The partitioned BE approach again demonstrated the strongest explanatory performance, achieving R2 values of 0.975 with XGBoost and 0.989 with SVR.
Across all analytical models, BEGap consistently emerged as a strong and independent determinant of arterial pH, outperforming SIG and AGc. SIG showed an intermediate contribution, whereas AGc provided minimal independent explanatory value. Among ML models, XGBoost showed the most stable and accurate explanatory performance across approaches. Conclusions: This study demonstrates that BEGap is a practical, physiologically informative, and bedside-applicable parameter for assessing unmeasured ions, outperforming both AGc and SIG across linear and non-linear analytical models. Full article
(This article belongs to the Special Issue From Data to Decisions: Deep Learning in Clinical Diagnostics)

25 pages, 2993 KB  
Article
Joint Forecasting of Energy Consumption and Generation in P2P Networks Using LSTM–CNN and Transformers
by Kandel L. Yandar, Oscar Revelo Sánchez and Manuel Bolaños Gonzales
Energies 2026, 19(3), 760; https://doi.org/10.3390/en19030760 (registering DOI) - 1 Feb 2026
Abstract
Electric energy is an essential resource in modern society; however, most current distribution systems are centralized and dependent on fossil fuels, posing risks of shortages and a potential energy crisis. The transition to renewable sources represents a sustainable alternative, though it introduces challenges associated with intermittency and generation variability. In this context, peer-to-peer (P2P) networks and artificial intelligence (AI) emerge as strategies to promote decentralization, self-management, and efficiency in energy operation. This research proposes an AI-based knowledge discovery model to predict electricity generation and consumption in a P2P network. The study was developed in four phases: exploration of AI techniques for energy prediction; analysis of the most widely used techniques in the Knowledge Discovery in Databases (KDD) process; construction of the predictive model; and validation using real energy generation and consumption data from renewable energy sources. The LSTM–CNN and Transformer models achieved an R2 greater than 80% and mean absolute errors (MAE) of less than 0.02 kWh, demonstrating high prediction accuracy. The results confirm that integrating the KDD approach with deep LSTM–CNN and Transformer architectures significantly improves energy management in P2P networks, providing a solid foundation for the development of innovative and sustainable electrical systems. Full article
(This article belongs to the Special Issue The Impact of Artificial Intelligence on Modern Energy Systems)

19 pages, 4140 KB  
Article
Bamboo Forest Area Extraction and Clump Identification Using Semantic Segmentation and Instance Segmentation Models
by Keng-Hao Liu, Shih-Ji Lin, Che-Wei Hu and Chinsu Lin
Forests 2026, 17(2), 191; https://doi.org/10.3390/f17020191 (registering DOI) - 1 Feb 2026
Abstract
This study addresses the need for effective bamboo monitoring in smart forestry as UAV imagery and AI-based methods continue to advance. Bambusa stenostachya (thorny bamboo), commonly found in the badland regions of southern Taiwan, spreads rapidly due to its strong reproductive capacity and extensive rhizome system, often causing forestland degradation and challenges to sustainable management. An automated detection approach is therefore required to capture bamboo dynamics and support forest resource assessment. We use a dual-component framework for detecting bamboo forests and individual bamboo clumps from high-resolution UAV orthomosaic imagery. The first component performs semantic segmentation using U-Net or SegFormer to extract bamboo forest areas and generate a corresponding forest mask. The second component independently applies instance segmentation using YOLOv8-Seg and Mask R-CNN to delineate and localize individual bamboo clumps. The dataset was collected from Compartment 43 of the Qishan Working Circle in Kaohsiung, Taiwan. Experimental results show strong model performance: bamboo forest segmentation achieved an F1-score of 0.9569, while bamboo clump instance segmentation reached a precision of 0.8232. These findings demonstrate the promising potential of deep learning-based segmentation techniques for improving bamboo detection and supporting operational forest monitoring. Full article
(This article belongs to the Special Issue Application of Machine-Learning Methods in Forestry)

28 pages, 12486 KB  
Article
Sustainability-Focused Evaluation of Self-Compacting Concrete: Integrating Explainable Machine Learning and Mix Design Optimization
by Abdulaziz Aldawish and Sivakumar Kulasegaram
Appl. Sci. 2026, 16(3), 1460; https://doi.org/10.3390/app16031460 (registering DOI) - 31 Jan 2026
Abstract
Self-compacting concrete (SCC) offers significant advantages in construction due to its superior workability; however, optimizing SCC mixture design remains challenging because of complex nonlinear material interactions and increasing sustainability requirements. This study proposes an integrated, sustainability-oriented computational framework that combines machine learning (ML), SHapley Additive exPlanations (SHAP), and multi-objective optimization to improve SCC mixture design. A large and heterogeneous publicly available global SCC dataset, originally compiled from 156 independent peer-reviewed studies and further enhanced through a structured three-stage data augmentation strategy, was used to develop robust predictive models for key fresh-state properties. An optimized XGBoost model demonstrated strong predictive accuracy and generalization capability, achieving coefficients of determination of R2=0.835 for slump flow and R2=0.828 for T50 time, with reliable performance on independent industrial SCC datasets. SHAP-based interpretability analysis identified the water-to-binder ratio and superplasticizer dosage as the dominant factors governing fresh-state behavior, providing physically meaningful insights into mixture performance. A cradle-to-gate life cycle assessment was integrated within a multi-objective genetic algorithm to simultaneously minimize embodied CO2 emissions and material costs while satisfying workability constraints. The resulting Pareto-optimal mixtures achieved up to 3.9% reduction in embodied CO2 emissions compared to conventional SCC designs without compromising performance. External validation using independent industrial data confirms the practical reliability and transferability of the proposed framework. Overall, this study presents an interpretable and scalable AI-driven approach for the sustainable optimization of SCC mixture design. Full article
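The Pareto-optimal mixture selection can be illustrated with a minimal non-dominated filter over (embodied CO2, cost) pairs, both to be minimised. This sketches only the dominance criterion, not the genetic algorithm or the workability constraints the study applies; names are illustrative.

```python
def pareto_front(points):
    """Indices of non-dominated points when minimising every objective.

    points -- list of (co2, cost) tuples; q dominates p if q <= p in every
    objective and q != p.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qk <= pk for qk, pk in zip(q, p)) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

Each surviving index is a candidate mixture that no other design beats on both CO2 and cost simultaneously, which is the set the optimizer presents to the designer.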
24 pages, 2163 KB  
Article
KFF-Transformer: A Human–AI Collaborative Framework for Fine-Grained Argument Element Identification
by Xuxun Cai, Jincai Yang, Meng Zheng and Jianping Zhu
Appl. Sci. 2026, 16(3), 1451; https://doi.org/10.3390/app16031451 (registering DOI) - 31 Jan 2026
Abstract
With the rapid development of intelligent computing and artificial intelligence, there is an increasing demand for efficient, interpretable, and interactive frameworks for fine-grained text analysis. In the field of argument mining, existing approaches are often constrained by sentence-level processing, limited exploitation of key linguistic markers, and a lack of human–AI collaborative mechanisms, which restrict both recognition accuracy and computational efficiency. To address these challenges, this paper proposes KFF-Transformer, a computing-oriented human–AI collaborative framework for fine-grained argument element identification based on Toulmin’s model. The framework first employs an automatic key-marker mining algorithm to expand a seed set of expert-labeled linguistic cues, significantly enhancing coverage and diversity. It then applies a lightweight deep learning architecture that combines BERT for contextual token encoding with an attention-enhanced BiLSTM network to perform word-level classification of the six Toulmin elements. This approach leverages the enriched key markers as critical features, improving both accuracy and interpretability. Note that although the framework leverages BERT (a Transformer-based encoder) for contextual representation, the core sequence labeling module is based on BiLSTM and does not implement a standard Transformer block. Furthermore, a human-in-the-loop interaction mechanism is embedded to support real-time user correction and adaptive system refinement, improving robustness and practical usability. Experiments on a dataset of 180 English argumentative essays show that KFF-Transformer identifies key markers in 1145 sentences and achieves an accuracy of 72.2% and an F1-score of 66.7%, outperforming a strong baseline by 3.7% and 2.8%, respectively. Moreover, the framework reduces processing time by 18.9% on CPU and achieves near-real-time performance of approximately 3.3 s on GPU. 
These results validate that KFF-Transformer effectively integrates linguistically grounded reasoning, efficient deep learning, and interactive design, providing a scalable and trustworthy solution for intelligent argument analysis in real-world educational applications. Full article
(This article belongs to the Special Issue Application of Smart Learning in Education)
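The seed-expansion idea described in the abstract (growing an expert-labeled set of linguistic cues automatically) can be sketched with a simple frequency heuristic: promote words that occur disproportionately in sentences that already contain a seed marker. This is a hypothetical illustration, not the paper's actual mining algorithm; the seeds, corpus, and `ratio`/`min_count` thresholds are invented for the example.

```python
# Hypothetical sketch of seed-based key-marker expansion (the paper's
# actual algorithm is not reproduced here): words that appear much more
# often in seed-bearing sentences than in the corpus overall are
# promoted into the marker set.

from collections import Counter

def expand_markers(sentences, seeds, ratio=1.5, min_count=2):
    seed_set = set(seeds)
    in_seed, overall = Counter(), Counter()
    n_seed_sents = 0
    for sent in sentences:
        words = sent.lower().split()
        hit = any(w in seed_set for w in words)
        n_seed_sents += hit
        for w in set(words):          # count each word once per sentence
            overall[w] += 1
            if hit:
                in_seed[w] += 1
    n = len(sentences)
    expanded = set(seeds)
    for w, c in in_seed.items():
        if w in seed_set or c < min_count:
            continue
        # relative frequency among seed sentences vs. the whole corpus
        if (c / n_seed_sents) >= ratio * (overall[w] / n):
            expanded.add(w)
    return expanded

sentences = [
    "we act because evidence supports it",
    "therefore evidence matters",
    "results are cited here",
    "because evidence is strong",
    "the weather is nice",
]
expanded = expand_markers(sentences, seeds=["because", "therefore"])
print(sorted(expanded))  # "evidence" co-occurs with the seed cues
```

Real marker-mining methods typically add significance testing and part-of-speech filtering on top of such co-occurrence statistics; the sketch shows only the core scoring step.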
54 pages, 2046 KB  
Review
Data-Driven Tools and Methods for Low-Carbon Industrial Parks: A Scoping Review of Industrial Symbiosis and Carbon Capture with Practitioner Insights
by Zheng Grace Ma, Joy Dalmacio Billanes and Bo Nørregaard Jørgensen
Energies 2026, 19(3), 755; https://doi.org/10.3390/en19030755 - 30 Jan 2026
Abstract
Industrial symbiosis and carbon capture are increasingly recognized as critical strategies for reducing emissions and resource consumption in industrial parks. However, existing research remains fragmented across tools, methods, and case-specific applications, providing limited guidance for effective real-world deployment of data-driven approaches. This study addresses this gap through a PRISMA-guided scoping review of 116 publications, complemented by a targeted practitioner survey conducted within the IEA IETS Task 21 initiative to assess practical relevance and adoption challenges. The review identifies a broad landscape of data-driven tools, ranging from high-technology-readiness simulation and optimization platforms to emerging visualization and matchmaking solutions. While the literature demonstrates substantial methodological maturity, the combined evidence reveals a persistent gap between tool availability and effective implementation. Key barriers include fragmented and non-standardized data infrastructures, confidentiality constraints, limited stakeholder coordination, and weak policy and market incentives. Based on the integrated analysis of literature and practitioner insights, the paper proposes a conceptual framework that links tools and methods with data infrastructure, stakeholder governance, policy, and market enablers, and implementation contexts. The findings highlight that improving data governance, interoperability, and collaborative implementation pathways is as critical as advancing analytical capabilities. The study concludes by outlining focused directions for future research, including AI-enabled optimization, standardized data-sharing frameworks, and coordinated pilot projects to support scalable low-carbon industrial transformation. Full article
24 pages, 1236 KB  
Review
Blood Pressure Variability (BPV) as a Novel Digital Biomarker of Multisystem Risk and Diagnostic Insight: Measurement, Mechanisms, and Emerging Artificial Intelligence Methods
by Lakshmi Sree Pugalenthi, Sidhartha Gautam Senapati, Jay Gohri, Hema Latha Anam, Hritik Madan, Adi Arora, Avni Arora, Jieun Lee, Gayathri Yerrapragada, Poonguzhali Elangovan, Mohammed Naveed Shariff, Thangeswaran Natarajan, Jayarajasekaran Janarthanan, Shreshta Agarwal, Shiva Sankari Karuppiah, Divyanshi Sood, Swetha Rapolu, Vivek N. Iyer, Scott A. Helgeson and Shivaram P. Arunachalam
Biomedicines 2026, 14(2), 317; https://doi.org/10.3390/biomedicines14020317 - 30 Jan 2026
Abstract
Hypertension has traditionally been characterized by mean blood pressure; however, emerging evidence shows that blood pressure variability (BPV), including short-term, day-to-day, and visit-to-visit fluctuations, has implications across multiple body systems. Elevated BPV reflects repetitive hemodynamic stress that disturbs physiologic homeostasis, contributing to vascular injury and end-organ damage. This narrative review compiles recent evidence on the prognostic value of BPV, covering its pathophysiology, the devices and approaches used to measure it, and, essentially, its clinical implications, including devices that utilize artificial intelligence. A comprehensive literature search across PubMed, Cochrane Library, Scopus, and Web of Science was conducted, focusing on observational studies, cohorts, randomized trials, and meta-analyses. Higher BPV has been associated with an increased risk of cardiovascular mortality, stroke, coronary events, heart failure, progression of chronic kidney disease, cognitive decline, and preeclampsia, among other forms of end-organ damage, independent of mean blood pressure. The pathophysiologic mechanisms include autonomic dysregulation, arterial stiffness, endothelial dysfunction, circadian rhythm alteration, and systemic inflammation, which result in vascular remodeling and multisystem damage. Antihypertensive medications such as calcium channel blockers and renin–angiotensin–aldosterone system inhibitors appear to reduce BPV, although randomized trials have not specifically investigated their BPV-reducing effects. This review highlights BPV as a dynamic marker of multisystem risk and examines how AI-based devices can aid continuous BPV monitoring and patient-specific risk stratification. Full article
(This article belongs to the Special Issue Recent Advanced Research in Hypertension)
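The BPV indices the review discusses can be made concrete with the standard summary metrics used in this literature: the standard deviation (SD), the coefficient of variation (CV = SD/mean, as a percentage), and average real variability (ARV, the mean absolute difference between consecutive readings). The formulas are standard; the systolic readings below are hypothetical.

```python
# Common BPV summary metrics computed from a series of systolic BP
# readings (mmHg): SD, CV (percent), and ARV (mean absolute difference
# between consecutive readings). The readings are illustrative only.

from statistics import mean, pstdev

def bpv_metrics(readings):
    sd = pstdev(readings)                     # population standard deviation
    cv = 100.0 * sd / mean(readings)          # coefficient of variation, %
    arv = mean(abs(b - a) for a, b in zip(readings, readings[1:]))
    return {"sd": sd, "cv": cv, "arv": arv}

sbp = [128, 142, 131, 150, 136, 145]          # hypothetical visit-to-visit SBP
m = bpv_metrics(sbp)
print({k: round(v, 2) for k, v in m.items()})
```

ARV is often preferred over SD for visit-to-visit data because it is sensitive to the order of readings, whereas SD and CV ignore sequence entirely.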