Search Results (266)

Search Parameters:
Keywords = graph regression model

22 pages, 6454 KB  
Article
Probabilistic Photovoltaic Power Forecasting with Reliable Uncertainty Quantification via Multi-Scale Temporal–Spatial Attention and Conformalized Quantile Regression
by Guanghu Wang, Yan Zhou, Yan Yan, Zhihan Zhou, Zikang Yang, Litao Dai and Junpeng Huang
Sustainability 2026, 18(2), 739; https://doi.org/10.3390/su18020739 - 11 Jan 2026
Abstract
Accurate probabilistic forecasting of photovoltaic (PV) power generation is crucial for grid scheduling and renewable energy integration. However, existing approaches often produce prediction intervals with limited calibration accuracy, and the interdependence among meteorological variables is frequently overlooked. This study proposes a probabilistic forecasting framework based on a Multi-scale Temporal–Spatial Attention Quantile Regression Network (MTSA-QRN) and an adaptive calibration mechanism to enhance uncertainty quantification and ensure statistically reliable prediction intervals. The framework employs a dual-pathway architecture: a temporal pathway combining Temporal Convolutional Networks (TCN) and multi-head self-attention to capture hierarchical temporal dependencies, and a spatial pathway based on Graph Attention Networks (GAT) to model nonlinear meteorological correlations. A learnable gated fusion mechanism adaptively integrates temporal–spatial representations, and weather-adaptive modules enhance robustness under diverse atmospheric conditions. Multi-quantile prediction intervals are calibrated using conformalized quantile regression to ensure reliable uncertainty coverage. Experiments on a real-world PV dataset (15 min resolution) demonstrate that the proposed method offers more accurate and sharper uncertainty estimates than competitive benchmarks, supporting risk-aware operational decision-making in power systems. Quantitative evaluation on a real-world 40 MW photovoltaic plant demonstrates that the proposed MTSA-QRN achieves a CRPS of 0.0400 before calibration, representing an improvement of over 55% compared with representative deep learning baselines such as Quantile-GRU, Quantile-LSTM, and Quantile-Transformer. After adaptive calibration, the proposed method attains a reliable empirical coverage close to the nominal level (PICP90 = 0.9053), indicating effective uncertainty calibration. 
Although the calibrated prediction intervals become wider, the model maintains a competitive CRPS value (0.0453), striking a favorable balance between reliability and probabilistic accuracy. These results demonstrate the effectiveness of the proposed framework for reliable probabilistic photovoltaic power forecasting. Full article
(This article belongs to the Topic Sustainable Energy Systems)
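The conformalized quantile regression step this abstract describes has a standard split-conformal form: score how far calibration targets fall outside the predicted quantile interval, then widen (or shrink) new intervals by the empirical quantile of those scores. A minimal NumPy sketch under that assumption — function and variable names are ours, not the paper's implementation:

```python
import numpy as np

def cqr_adjustment(y_cal, lo_cal, hi_cal, alpha=0.1):
    """Split-conformal margin for quantile-regression intervals.

    y_cal: true values on a held-out calibration set
    lo_cal, hi_cal: predicted lower/upper quantiles on that set
    Returns the margin q added on both sides of new intervals.
    """
    # Conformity score: positive when y falls outside [lo, hi]
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
    n = len(y_cal)
    # Finite-sample-corrected (1 - alpha) empirical quantile
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")
```

A calibrated interval for a new prediction is then `[lo - q, hi + q]`; note q can be negative, tightening intervals that over-cover.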

29 pages, 2855 KB  
Review
Advancing Drug–Drug Interaction Prediction with Biomimetic Improvements: Leveraging the Latest Artificial Intelligence Techniques to Guide Researchers in the Field
by Ridwan Boya Marqas, Zsuzsa Simó, Abdulazeez Mousa, Fatih Özyurt and Laszlo Barna Iantovics
Biomimetics 2026, 11(1), 39; https://doi.org/10.3390/biomimetics11010039 - 5 Jan 2026
Abstract
Drug–drug interactions (DDIs) can cause adverse reactions or reduce the efficacy of a drug. Using computers to predict DDIs is now critical in pharmacology, as this reduces risks, improves drug outcomes and lowers healthcare costs. Clinical trials are slow, expensive, and require a lot of effort. The use of artificial intelligence (AI), primarily in the form of machine learning (ML) and its subfield deep learning (DL), has made DDI prediction more accurate and efficient when handling large datasets from biological, chemical, and clinical domains. Many ML and DL approaches are bio-inspired, taking inspiration from natural systems, and are considered part of the broader class of biomimetic methods. This review provides a comprehensive overview of AI-based methods currently used for DDI prediction. These include classical ML algorithms, such as logistic regression (LR) and support vector machines (SVMs); advanced DL models, such as deep neural networks (DNNs) and long short-term memory networks (LSTMs); graph-based models, such as graph convolutional networks (GCNs) and graph attention networks (GATs); and ensemble techniques. The use of knowledge graphs and transformers to capture relations and meaningful data about drugs is also investigated. Additionally, emerging biomimetic approaches offer promising directions for the future in designing AI models that can emulate the complexity of pharmacological interactions. These upgrades include using genetic algorithms with LR and SVM, neuroevolution (brain-inspired model optimization) to improve DNN and LSTM architectures, ant-colony-inspired path exploration with GCN and GAT, and immune-inspired attention mechanisms in transformer models. This manuscript reviews the typical types of data employed in potential DDI (pDDI) prediction studies and the evaluation methods used, discussing the pros and cons of each. 
There are useful approaches outlined that reveal important points that require further research and suggest ways to improve the accuracy, usability, and understanding of DDI prediction models. Full article

19 pages, 3887 KB  
Article
RELoc: An Enhanced 3D WiFi Fingerprinting Indoor Localization Algorithm with RFECV Feature Selection
by Shehu Lukman Ayinla, Azrina Abd Aziz, Micheal Drieberg, Misfa Susanto and Anis Laouiti
Sensors 2026, 26(1), 326; https://doi.org/10.3390/s26010326 - 4 Jan 2026
Abstract
The use of Artificial Intelligence (AI) algorithms has enhanced WiFi fingerprinting-based indoor localization. However, most existing approaches are limited to 2D coordinate estimation, which leads to significant performance declines in multi-floor environments due to vertical ambiguity and inadequate spatial modeling. This limitation reduces reliability in real-world applications where accurate indoor localization is essential. This study proposes RELoc, a new 3D indoor localization framework that integrates Recursive Feature Elimination with Cross-Validation (RFECV) for optimal Access Point (AP) selection and Extremely Randomized Trees (ERT) for precise 2D and 3D coordinate regression. The ERT hyperparameters are optimized using Bayesian optimization with Optuna’s Tree-structured Parzen Estimator (TPE) to ensure robust, stable, and accurate localization. Extensive evaluation on the SODIndoorLoc and UTSIndoorLoc datasets demonstrates that RELoc delivers superior performance in both 2D and 3D indoor localization. Specifically, RELoc achieves Mean Absolute Errors (MAEs) of 1.84 m and 4.39 m for 2D coordinate prediction on SODIndoorLoc and UTSIndoorLoc, respectively. When floor information is incorporated, RELoc improves by 33.15% and 26.88% over the 2D version on these datasets. Furthermore, RELoc outperforms state-of-the-art methods by 7.52% over Graph Neural Network (GNN) and 12.77% over Deep Neural Network (DNN) on SODIndoorLoc and 40.22% over Extra Tree (ET) on UTSIndoorLoc, showing consistent improvements across various indoor environments. This enhancement emphasizes the critical role of 3D modeling in achieving robust and spatially discriminative indoor localization. Full article
(This article belongs to the Special Issue Indoor Localization Techniques Based on Wireless Communication)
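The AP-selection stage above pairs Recursive Feature Elimination with Cross-Validation and an Extremely Randomized Trees regressor; both are available in scikit-learn, so the combination can be sketched as below. The synthetic data stands in for an RSSI fingerprint matrix, and all hyperparameters here are illustrative, not the paper's tuned values:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.feature_selection import RFECV

# Synthetic stand-in for a fingerprint matrix: rows = scans, columns = APs
X, y = make_regression(n_samples=300, n_features=40, n_informative=10,
                       random_state=0)

# Recursively drop the least important "APs", cross-validating at each step
selector = RFECV(
    estimator=ExtraTreesRegressor(n_estimators=50, random_state=0),
    step=5,
    cv=3,
)
selector.fit(X, y)
X_reduced = selector.transform(X)  # keep only the selected AP columns
```

The reduced matrix would then feed the coordinate regressor; the paper additionally tunes the ERT with Optuna's TPE sampler, which is omitted here.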

29 pages, 2471 KB  
Article
MISA-GMC: An Enhanced Multimodal Sentiment Analysis Framework with Gated Fusion and Momentum Contrastive Modality Relationship Modeling
by Zheng Du, Yapeng Wang, Xu Yang, Sio-Kei Im and Zhiwen Wang
Mathematics 2026, 14(1), 115; https://doi.org/10.3390/math14010115 - 28 Dec 2025
Abstract
Multimodal sentiment analysis jointly exploits textual, acoustic, and visual signals to recognize human emotions more accurately than unimodal models. However, real-world data often contain noisy or partially missing modalities, and naive fusion may allow unreliable signals to degrade overall performance. To address this, we propose an enhanced framework named MISA-GMC, a lightweight extension of the widely used MISA backbone that explicitly accounts for modality reliability. The core idea is to adaptively reweight modalities at the sample level while regularizing cross-modal representations during training. Specifically, a reliability-aware gated fusion module down-weights unreliable modalities, and two auxiliary training-time regularizers (momentum contrastive learning and a lightweight correlation graph) help stabilize and refine multimodal representations without adding inference-time overhead. Experiments on three benchmark datasets—CMU-MOSI, CMU-MOSEI, and CH-SIMS—demonstrate the effectiveness of MISA-GMC. For instance, on CMU-MOSI, the proposed model improves 7-class accuracy from 43.29 to 45.92, reduces the mean absolute error (MAE) from 0.785 to 0.712, and increases the Pearson correlation coefficient (Corr) from 0.764 to 0.795. This indicates more accurate fine-grained sentiment prediction and better sentiment-intensity estimation. On CMU-MOSEI and CH-SIMS, MISA-GMC also achieves consistent gains over MISA and strong baselines such as LMF, ALMT, and MMIM across both classification and regression metrics. Ablation studies and missing-modality experiments further verify the contribution of each component and the robustness of MISA-GMC under partial-modality settings. Full article
(This article belongs to the Special Issue Applications of Machine Learning and Pattern Recognition)
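The reliability-aware gated fusion idea — down-weighting modalities judged unreliable before combining them — can be illustrated with a tiny NumPy sketch. The softmax gating form and all names here are our assumptions, not the paper's exact module:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def gated_fuse(modality_embeddings, reliability_logits):
    """Reliability-weighted sum of per-modality embeddings.

    modality_embeddings: equally sized vectors (e.g. text, audio, vision)
    reliability_logits: one scalar per modality; lower = less reliable
    """
    weights = softmax(np.asarray(reliability_logits, dtype=float))
    return sum(w * e for w, e in zip(weights, modality_embeddings))
```

A noisy or missing modality would receive a low logit and contribute almost nothing to the fused vector, which is the behavior the abstract attributes to the gated fusion module.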

22 pages, 2574 KB  
Article
FedTULGAC: A Federated Learning Method for Trajectory User Linking Based on Graph Attention and Clustering
by Haitao Zhang, Yang Xu, Huixiang Jiang, Yuanjian Liu, Weigang Wang, Yi Li, Yuhao Luo and Yuxuan Ge
Electronics 2025, 14(24), 4975; https://doi.org/10.3390/electronics14244975 - 18 Dec 2025
Abstract
Trajectory User Linking (TUL) is a pivotal technology for identifying and associating the trajectory information from the same user across various data sources. To address the privacy leakage challenges inherent in traditional TUL methods, this study introduces a novel federated learning-based TUL method: FedTULGAC. This approach utilizes a federated learning framework to aggregate model parameters, thereby avoiding the sharing of local data. Within this framework, a Graph Attention-based Trajectory User Linking and Embedding Regression (GATULER) model and an FL-DBSCAN clustering algorithm are designed and integrated to capture short-term temporal dependencies in user movement trajectories and to handle the non-independent and identically distributed (Non-IID) characteristics of client-side data. Experimental results on the synthesized datasets demonstrate that the proposed method achieves the highest prediction accuracy compared to the baseline models and maintains stable performance with minimal sensitivity to variations in client selection ratios, which reveals its effectiveness in bandwidth-constrained real-world applications. Full article
(This article belongs to the Special Issue Advances in Deep Learning for Graph Neural Networks)
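The parameter-aggregation step of a federated framework like this one is typically a data-size-weighted average of client parameters (FedAvg). A minimal sketch of plain FedAvg, which may differ from FedTULGAC's exact aggregation rule:

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Average per-layer parameters, weighting each client by its data size.

    client_params: list of clients, each a list of parameter arrays
    client_sizes: number of local samples per client
    """
    total = float(sum(client_sizes))
    agg = [np.zeros_like(p) for p in client_params[0]]
    for params, n in zip(client_params, client_sizes):
        for i, p in enumerate(params):
            agg[i] += (n / total) * p   # weight by share of total data
    return agg
```

Only these aggregated parameters cross the network; raw trajectories stay on the clients, which is the privacy property the abstract emphasizes.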

13 pages, 557 KB  
Article
Synolitic Graph Neural Networks of High-Dimensional Proteomic Data Enhance Early Detection of Ovarian Cancer
by Alexey Zaikin, Ivan Sviridov, Janna G. Oganezova, Usha Menon, Aleksandra Gentry-Maharaj, John F. Timms and Oleg Blyuss
Cancers 2025, 17(24), 3972; https://doi.org/10.3390/cancers17243972 - 12 Dec 2025
Abstract
Background: Ovarian cancer is characterized by high mortality rates, primarily due to diagnosis at late stages. Current biomarkers, such as CA125, have demonstrated limited efficacy for early detection. While high-dimensional proteomics offers a more comprehensive view of systemic biology, the analysis of such data, where the number of features far exceeds the number of samples, presents a significant computational challenge. Methods: This study utilized a nested case–control cohort of longitudinal pre-diagnostic serum samples from the UK Collaborative Trial of Ovarian Cancer Screening (UKCTOCS) profiled for eight candidate ovarian cancer biomarkers (CA125, HE4, PEBP4, CHI3L1, FSTL1, AGR2, SLPI, DNAH17) and 92 additional cancer-associated proteins from the Olink Oncology II panel. We employed a Synolitic Graph Neural Network framework that transforms high-dimensional multi-protein data into sample-specific, interconnected graphs using a synolitic network approach. These graphs, which encode the relational patterns between proteins, were then used to train Graph Neural Network (GNN) models for classification. Performance of the network approach was evaluated together with conventional machine learning approaches via 5-fold cross-validation on samples collected within one year of diagnosis and a separate holdout set of samples collected one to two years prior to diagnosis. Results: In samples collected within one year of ovarian cancer diagnosis, conventional machine learning models—including XGBoost, random forests, and logistic regression—achieved the highest discriminative performance, with XGBoost reaching an ROC-AUC of 92%. Graph Convolutional Networks (GCNs) achieved moderate performance in this interval (ROC-AUC ~71%), with balanced sensitivity and specificity comparable to mid-performing conventional models. 
In the 1–2 year early-detection window, conventional model performance declined sharply (XGBoost ROC-AUC 46%), whereas the GCN maintained robust discriminative ability (ROC-AUC ~74%) with relatively balanced sensitivity and specificity. These findings indicate that while conventional approaches excel at detecting late pre-diagnostic signals, GNNs are more stable and effective at capturing subtle early molecular changes. Conclusions: The synolitic GNN framework demonstrates robust performance in early pre-diagnostic detection of ovarian cancer, maintaining accuracy where conventional methods decline. These results highlight the potential of network-informed machine learning to identify subtle proteomic patterns and pathway-level dysregulation prior to clinical diagnosis. This proof-of-concept study supports further development of GNN approaches for early ovarian cancer detection and warrants validation in larger, independent cohorts. Full article
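One way to picture the per-sample graph construction: for each serum sample, build a graph whose nodes are the measured proteins and whose edges carry pairwise features derived from the two protein values. This is a generic sketch of that transformation only — the synolitic construction in the paper is more elaborate, and the edge feature here (a simple difference) is our placeholder:

```python
import numpy as np

def sample_graph(protein_values):
    """Dense graph for one sample: node features are protein levels,
    edge features are a simple pairwise combination (here, differences)."""
    x = np.asarray(protein_values, dtype=float)
    n = len(x)
    nodes = x.reshape(n, 1)                 # one feature per protein node
    edges, edge_feats = [], []
    for i in range(n):
        for j in range(i + 1, n):
            edges.append((i, j))
            edge_feats.append(x[i] - x[j])  # placeholder relational feature
    return nodes, np.array(edges), np.array(edge_feats)
```

Each sample thus becomes its own small graph, and a GNN classifier is trained over the resulting collection.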

23 pages, 3559 KB  
Article
From Static Prediction to Mindful Machines: A Paradigm Shift in Distributed AI Systems
by Rao Mikkilineni and W. Patrick Kelly
Computers 2025, 14(12), 541; https://doi.org/10.3390/computers14120541 - 10 Dec 2025
Abstract
A special class of complex adaptive systems—biological and social—thrive not by passively accumulating patterns, but by engineering coherence, i.e., the deliberate alignment of prior knowledge, real-time updates, and teleonomic purposes. By contrast, today’s AI stacks—Large Language Models (LLMs) wrapped in agentic toolchains—remain rooted in a Turing-paradigm architecture: statistical world models (opaque weights) bolted onto brittle, imperative workflows. They excel at pattern completion, but they externalize governance, memory, and purpose, thereby accumulating coherence debt—a structural fragility manifested as hallucinations, shallow and siloed memory, ad hoc guardrails, and costly human oversight. The shortcoming of current AI relative to human-like intelligence is therefore less about raw performance or scaling, and more about an architectural limitation: knowledge is treated as an after-the-fact annotation on computation, rather than as an organizing substrate that shapes computation. This paper introduces Mindful Machines, a computational paradigm that operationalizes coherence as an architectural property rather than an emergent afterthought. A Mindful Machine is specified by a Digital Genome (encoding purposes, constraints, and knowledge structures) and orchestrated by an Autopoietic and Meta-Cognitive Operating System (AMOS) that runs a continuous Discover–Reflect–Apply–Share (D-R-A-S) loop. Instead of a static model embedded in a one-shot ML pipeline or deep learning neural network, the architecture separates (1) a structural knowledge layer (Digital Genome and knowledge graphs), (2) an autopoietic control plane (health checks, rollback, and self-repair), and (3) meta-cognitive governance (critique-then-commit gates, audit trails, and policy enforcement). 
We validate this approach on the classic Credit Default Prediction problem by comparing a traditional, static Logistic Regression pipeline (monolithic training, fixed features, external scripting for deployment) with a distributed Mindful Machine implementation whose components can reconfigure logic, update rules, and migrate workloads at runtime. The Mindful Machine not only matches the predictive task, but also achieves autopoiesis (self-healing services and live schema evolution), explainability (causal, event-driven audit trails), and dynamic adaptation (real-time logic and threshold switching driven by knowledge constraints), thereby reducing the coherence debt that characterizes contemporary ML- and LLM-centric AI architectures. The case study demonstrates “a hybrid, runtime-switchable combination of machine learning and rule-based simulation, orchestrated by AMOS under knowledge and policy constraints”. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)

25 pages, 14530 KB  
Article
Highway as Barriers to Park Visitation: A Fixed Effects Analysis Using Mobility Data
by Hyewon Yoon, Zipeng Guo, Yang Song, Hongmei Lu and Yunpei Zhang
Urban Sci. 2025, 9(12), 512; https://doi.org/10.3390/urbansci9120512 - 2 Dec 2025
Abstract
Urban parks provide critical benefits for public health, mental well-being, and social connection. However, inequities in park access and use persist, particularly among socially and economically vulnerable populations. While previous studies have established that segregation and social vulnerability each contribute to uneven park access, little is known about how these two forces interact to shape real visitation patterns. This study addresses this research gap and answers the research question: How does highway segregation relate to differences in the different aspects of social vulnerability in influencing park access across Austin’s east–west divide? SafeGraph mobility data from 2019 and the Social Vulnerability Index (SVI), which included four themes (i.e., socioeconomic status, household composition, minority status and language, and housing and transportation characteristics), were analyzed through fixed-effects regression models for Austin, Texas. Results show that household composition and minority vulnerabilities have negative associations with park visitation, indicating that areas with more elderly, single-parent, or minority residents visit parks less frequently. Interaction terms reveal that highway segregation functions as a structural barrier that conditions the influence of social vulnerability on park use. Those associated with socioeconomic resources diminish, while the disadvantages linked to household composition and minority status intensify on the east side of I-35, reflecting the cumulative effects of segregation and infrastructural division. These findings confirm that inequities in park access are more pronounced on the east side of the I-35, consistent with the highway’s role in reinforcing segregation. Efforts to strengthen connectivity represent key strategies for advancing equitable park visitation across Austin. Full article
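A fixed-effects regression of the kind used here can be reduced to OLS after demeaning within each fixed-effect group (the within transformation, which absorbs the group effects). A minimal NumPy sketch with made-up data — the tract count, slope, and noise level are arbitrary, not the study's:

```python
import numpy as np

def within_transform(values, groups):
    """Subtract each group's mean, absorbing group fixed effects."""
    out = values.astype(float).copy()
    for g in np.unique(groups):
        mask = groups == g
        out[mask] -= out[mask].mean(axis=0)
    return out

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(10), 20)        # e.g. 10 census tracts
group_fe = rng.normal(size=10)[groups]       # unobserved tract effects
x = rng.normal(size=200)                     # e.g. a vulnerability score
y = 2.0 * x + group_fe + 0.1 * rng.normal(size=200)  # true slope = 2.0

x_w = within_transform(x.reshape(-1, 1), groups).ravel()
y_w = within_transform(y.reshape(-1, 1), groups).ravel()
beta = (x_w @ y_w) / (x_w @ x_w)             # OLS slope on demeaned data
```

Because the tract effects are constant within groups, demeaning removes them exactly, and the slope estimate recovers the within-group association the study's models rely on.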

12 pages, 597 KB  
Article
AgentMol: Multi-Model AI System for Automatic Drug-Target Identification and Molecule Development
by Piotr Karabowicz, Radosław Charkiewicz, Alicja Charkiewicz, Anetta Sulewska and Jacek Nikliński
Methods Protoc. 2025, 8(6), 143; https://doi.org/10.3390/mps8060143 - 1 Dec 2025
Abstract
Drug discovery remains a time-consuming and costly process, necessitating innovative computational approaches to accelerate early stage target identification and compound development. We introduce AgentMol, a modular multimodel AI system that integrates large language models, chemical language modeling, and deep learning–based affinity prediction to automate the discovery pipeline. AgentMol begins with disease-related queries processed through a Retrieval-Augmented Generation system using the Large Language Model to identify protein targets. Protein sequences are then used to condition a GPT-2–based chemical language model, which generates corresponding small-molecule candidates in SMILES format. Finally, a regression convolutional neural network (RCNN) predicts the drug-target interaction by estimating binding affinities (pKi). Models were trained and validated on 470,560 ligand–protein pairs from the BindingDB database. The chemical language model achieved high validity (1.00), uniqueness (0.96), and diversity (0.89), whereas the RCNN model demonstrated robust predictive performance with R2 > 0.6 and Pearson’s R > 0.8. By leveraging LangGraph for orchestration, AgentMol delivers a scalable, interpretable pipeline, effectively enabling the end-to-end generation and evaluation of drug candidates conditioned on protein targets. This system represents a significant step toward practical AI-driven molecular discovery with accessible computational demands. Full article
(This article belongs to the Special Issue Advanced Methods and Technologies in Drug Discovery)

43 pages, 4725 KB  
Article
Graph-FEM/ML Framework for Inverse Load Identification in Thick-Walled Hyperelastic Pressure Vessels
by Nasser Firouzi, Ramy M. Hafez, Kareem N. Salloomi, Mohamed A. Abdelkawy and Raja Rizwan Hussain
Symmetry 2025, 17(12), 2021; https://doi.org/10.3390/sym17122021 - 23 Nov 2025
Abstract
The accurate identification of internal and external pressures in thick-walled hyperelastic vessels is a challenging inverse problem with significant implications for structural health monitoring, biomedical devices, and soft robotics. Conventional analytical and numerical approaches address the forward problem effectively but offer limited means for recovering unknown load conditions from observable deformations. In this study, we introduce a Graph-FEM/ML framework that couples high-fidelity finite element simulations with machine learning models to infer normalized internal and external pressures from measurable boundary deformations. A dataset of 1386 valid samples was generated through Latin Hypercube Sampling of geometric and loading parameters and simulated using finite element analysis with a Neo-Hookean constitutive model. Two complementary neural architectures were explored: graph neural networks (GNNs), which operate directly on resampled and feature-enriched boundary data, and convolutional neural networks (CNNs), which process image-based representations of undeformed and deformed cross-sections. The GNN models consistently achieved low root-mean-square errors (≈0.021) and stable correlations across training, validation, and test sets, particularly when augmented with displacement and directional features. In contrast, CNN models exhibited limited predictive accuracy: quarter-section inputs regressed toward mean values, while full-ring and filled-section inputs improved after Bayesian optimization but remained inferior to GNNs, with higher RMSEs (0.023–0.030) and modest correlations (R2). To the best of our knowledge, this is the first work to combine boundary deformation observations with graph-based learning for inverse load identification in hyperelastic vessels. The results highlight the advantages of boundary-informed GNNs over CNNs and establish a reproducible dataset and methodology for future investigations. 
This framework represents an initial step toward a new direction in mechanics-informed machine learning, with the expectation that future research will refine and extend the approach to improve accuracy, robustness, and applicability in broader engineering and biomedical contexts. Full article
(This article belongs to the Special Issue Symmetries in Machine Learning and Artificial Intelligence)
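The dataset-generation step draws geometric and loading parameters by Latin Hypercube Sampling, which SciPy provides directly. A sketch of that draw — the dimension count and bounds below are placeholders, not the paper's actual parameter ranges:

```python
import numpy as np
from scipy.stats import qmc

# Three hypothetical normalized parameters, e.g. inner radius,
# wall-thickness ratio, internal pressure
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=100)          # stratified points in [0, 1)^3

lower = np.array([0.5, 1.1, 0.0])             # placeholder lower bounds
upper = np.array([2.0, 3.0, 1.0])             # placeholder upper bounds
samples = qmc.scale(unit_samples, lower, upper)  # rescale to parameter ranges
```

Each row is one simulation configuration; LHS guarantees exactly one sample per stratum in every dimension, which is why it covers the parameter space more evenly than independent uniform draws.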

24 pages, 2622 KB  
Article
Hybrid Supply Chain Model for Wheat Market
by Yulia Otmakhova, Dmitry Devyatkin and He Zhou
Systems 2025, 13(11), 1026; https://doi.org/10.3390/systems13111026 - 17 Nov 2025
Abstract
Accurate modeling of wheat supply chains is of great importance. The methods for forecasting them can be utilized as strategic planning tools to manage sustainable and balanced supply chains, ensuring a high level of food security, economic growth, and social development. In this paper, we focus on wheat international trade indicators, and a regression model is a crucial component for the chain modeling. Trade indicators in the wheat market are inherently complex and exhibit significant stochasticity and non-stationarity due to the intricate interplay of various trade flows and factors, which pose challenges for accurate market forecasting. We proposed a novel hybrid recurrent and graph-transformer-based model to tackle these challenges. We collected and combined data from international providers such as UN FAOSTAT and UN Comtrade for all the world’s wheat exporters. The experiments show that the proposed model can accurately predict wheat export levels. We have also analyzed how the proposed model can be utilized to predict exports in the case of some pre-defined trade limitations. In the future, the proposed model could be naturally extended to various derivative products of wheat, supporting real-world grain chain models. Our forecasting methods could be used to create an analytical tool to support strategic decision-making in cognitive situation centers, taking into account the national interests and priorities of actors in the international wheat market. Full article
(This article belongs to the Section Supply Chain Management)

17 pages, 3261 KB  
Article
Scalable Generation of Synthetic IoT Network Datasets: A Case Study with Cooja
by Hrant Khachatrian, Aram Dovlatyan, Greta Grigoryan and Theofanis P. Raptis
Future Internet 2025, 17(11), 518; https://doi.org/10.3390/fi17110518 - 13 Nov 2025
Abstract
Predicting the behavior of Internet of Things (IoT) networks under irregular topologies and heterogeneous battery conditions remains a significant challenge. Simulation tools can capture these effects but can require high manual effort and computational capacity, motivating the use of machine learning surrogates. This work introduces an automated pipeline for generating large-scale IoT network datasets by bringing together the Contiki-NG firmware, parameterized topology generation, and Slurm-based orchestration of Cooja simulations. The system supports a variety of network structures, scalable node counts, randomized battery allocations, and routing protocols to reproduce diverse failure modes. As a case study, we conduct over 10,000 Cooja simulations with 15–75 battery-powered motes arranged in sparse grid topologies and operating the RPL routing protocol, consuming 1300 CPU-hours in total. The simulations capture realistic failure modes, including unjoined nodes despite physical connectivity and cascading disconnects caused by battery depletion. The resulting graph-structured datasets are used for two prediction tasks: (1) estimating the last successful message delivery time for each node and (2) predicting network-wide spatial coverage. Graph neural network models trained on these datasets outperform baseline regression models and topology-aware heuristics while evaluating substantially faster than full simulations. The proposed framework provides a reproducible foundation for data-driven analysis of energy-limited IoT networks. Full article
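The parameterized topology generation can be pictured as: place motes on a jittered sparse grid and assign each a randomized battery budget. A generic sketch under those assumptions — Cooja/Contiki-NG specifics such as radio ranges, RPL configuration, and Slurm orchestration are omitted, and all parameter values are ours:

```python
import numpy as np

def make_grid_topology(rows, cols, spacing=50.0, jitter=10.0, seed=0):
    """Mote positions on a jittered grid plus random battery levels."""
    rng = np.random.default_rng(seed)
    gx, gy = np.meshgrid(np.arange(cols), np.arange(rows))
    pos = np.stack([gx.ravel(), gy.ravel()], axis=1) * spacing
    pos = pos + rng.uniform(-jitter, jitter, size=pos.shape)  # break symmetry
    battery = rng.uniform(0.2, 1.0, size=len(pos))  # normalized capacity
    return pos, battery
```

Sweeping `rows`, `cols`, and the seed would yield the kind of varied, battery-heterogeneous topologies the pipeline feeds into Cooja at scale.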

35 pages, 2963 KB  
Article
Explainable Artificial Intelligence Framework for Predicting Treatment Outcomes in Age-Related Macular Degeneration
by Mini Han Wang
Sensors 2025, 25(22), 6879; https://doi.org/10.3390/s25226879 - 11 Nov 2025
Abstract
Age-related macular degeneration (AMD) is a leading cause of irreversible blindness, yet current tools for forecasting treatment outcomes remain limited by either the opacity of deep learning or the rigidity of rule-based systems. To address this gap, we propose a hybrid neuro-symbolic and large language model (LLM) framework that combines mechanistic disease knowledge with multimodal ophthalmic data for explainable AMD treatment prognosis. In a pilot cohort of ten surgically managed AMD patients (six men, four women; mean age 67.8 ± 6.3 years), we collected 30 structured clinical documents and 100 paired imaging series (optical coherence tomography, fundus fluorescein angiography, scanning laser ophthalmoscopy, and ocular/superficial B-scan ultrasonography). Texts were semantically annotated and mapped to standardized ontologies, while images underwent rigorous DICOM-based quality control, lesion segmentation, and quantitative biomarker extraction. A domain-specific ophthalmic knowledge graph encoded causal disease and treatment relationships, enabling neuro-symbolic reasoning to constrain and guide neural feature learning. An LLM fine-tuned on ophthalmology literature and electronic health records ingested structured biomarkers and longitudinal clinical narratives through multimodal clinical-profile prompts, producing natural-language risk explanations with explicit evidence citations. On an independent test set, the hybrid model achieved AUROC 0.94 ± 0.03, AUPRC 0.92 ± 0.04, and a Brier score of 0.07, significantly outperforming purely neural and classical Cox regression baselines (p ≤ 0.01). Explainability metrics showed that >85% of predictions were supported by high-confidence knowledge-graph rules, and >90% of generated narratives accurately cited key biomarkers. 
A detailed case study demonstrated real-time, individualized risk stratification—for example, predicting a >70% probability of requiring three or more anti-VEGF injections within 12 months and a ~45% risk of chronic macular edema if therapy lapsed—with predictions matching the observed clinical course. These results highlight the framework’s ability to integrate multimodal evidence, provide transparent causal reasoning, and support personalized treatment planning. While limited by single-center scope and short-term follow-up, this work establishes a scalable, privacy-aware, and regulator-ready template for explainable, next-generation decision support in AMD management, with potential for expansion to larger, device-diverse cohorts and other complex retinal diseases.
(This article belongs to the Special Issue Sensing Functional Imaging Biomarkers and Artificial Intelligence)

33 pages, 3008 KB  
Article
Interpretable Adaptive Graph Fusion Network for Mortality and Complication Prediction in ICUs
by Mehmet Akif Cifci, Batuhan Öney, Fazli Yildirim, Hülya Yilmaz Başer and Metin Zontul
Diagnostics 2025, 15(22), 2825; https://doi.org/10.3390/diagnostics15222825 - 7 Nov 2025
Abstract
Background: This study introduces the Adaptive Graph Fusion Network, an interpretable graph-based learning framework developed for large-scale prediction of intensive care outcomes. The proposed model dynamically constructs patient similarity networks through a density-aware kernel that adjusts neighborhood size based on local data distribution, thereby representing both frequent and rare clinical patterns. Methods: To characterize physiological evolution over time, the framework integrates a short-horizon convolutional encoder that captures acute variations in vital signs and laboratory results with a long-horizon recurrent memory unit that models gradual temporal trends. The approach was trained and internally validated on the publicly available eICU Collaborative Research Database, which includes more than 200,000 admissions from 208 hospitals across the United States. Results: The model achieved a mean area under the receiver operating characteristic curve of 0.91 across six critical outcomes, with in-hospital mortality reaching 0.96, outperforming logistic regression, temporal long short-term memory networks, and calibrated Transformer-based architectures. Feature attribution analysis using SHAP and temporal contribution mapping identified lactate trajectories, creatinine fluctuations, and vasopressor administration as dominant determinants of risk, consistent with established clinical understanding while revealing additional temporal dependencies overlooked by existing scoring systems. Conclusions: These findings demonstrate that adaptive graph construction combined with multi-horizon temporal reasoning improves predictive reliability and interpretability in heterogeneous intensive care populations, offering a transparent and reproducible foundation for future research in clinical machine learning.
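The density-aware kernel idea can be sketched in a few lines: set each node's bandwidth to its distance to the k-th nearest neighbor, so similarity neighborhoods shrink in dense regions of the patient space and widen around rare cases. This pure-Python sketch uses synthetic 2-D points and a symmetrized adaptive Gaussian kernel; the paper's actual kernel, distance metric, and neighborhood rule may differ.

```python
import math

def adaptive_similarity(points, k=2):
    """Adaptive Gaussian similarity: per-node bandwidth = distance to k-th NN."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    # sorted(dist[i])[0] is the self-distance 0, so index k is the k-th neighbor
    sigma = [sorted(dist[i])[k] for i in range(n)]
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                # product of the two bandwidths keeps the kernel symmetric
                sim[i][j] = math.exp(-dist[i][j] ** 2 / (sigma[i] * sigma[j] + 1e-12))
    return sim

# Three "typical" patients clustered together plus one rare-case outlier
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
S = adaptive_similarity(pts, k=2)
```

With a fixed global bandwidth the outlier would be nearly disconnected; the adaptive bandwidth inflates its neighborhood, which is the mechanism the abstract credits with representing rare clinical patterns.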
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

18 pages, 1147 KB  
Article
Detour Eccentric Sum Index for QSPR Modeling in Molecular Structures
by Supriya Rajendran, Radha Rajamani Iyer, Ahmad Asiri and Kanagasabapathi Somasundaram
Symmetry 2025, 17(11), 1897; https://doi.org/10.3390/sym17111897 - 6 Nov 2025
Abstract
In this paper, we study the detour eccentric sum index (DESI) to obtain the Quantitative Structure–Property Relationship (QSPR) for different molecular structures. We establish theoretical bounds for this index and compute its values across fundamental graph families. Through correlation analyses with the physicochemical properties of molecular structures representing anti-malarial and breast cancer drugs, we show the high predictive value of two topological parameters, the detour diameter (DD) and the detour radius (DR). Specifically, DR shows strong positive correlations with boiling point, enthalpy, and flash point (up to 0.94), while DD is highly correlated with properties such as molar volume, molar refraction, and polarizability (up to 0.97). The DESI was then selected for detailed curvilinear regression modeling and comparison against the established eccentric distance sum index. For anti-malarial drugs, the second-order model yields the best fit, and the DESI provides optimal prediction for boiling point, enthalpy, and flash point. For breast cancer drugs, the second-order model is again favored for all properties except melting point, which is best described by a third-order model. The results highlight how well the index captures subtle structural characteristics.
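The second-order models referred to here are quadratic least-squares fits of a property against an index value, property ≈ a + b·x + c·x². A minimal pure-Python sketch, solving the 3×3 normal equations by Gaussian elimination on synthetic data (the actual DESI values and drug properties are not reproduced):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2; returns (a, b, c)."""
    # moments of x up to degree 4 and cross-moments with y
    S = [sum(x ** p for x in xs) for p in range(5)]
    T = [sum(y * x ** p for x, y in zip(xs, ys)) for p in range(3)]
    # augmented normal-equations matrix [A | rhs]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    # back substitution
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (A[r][3] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return tuple(coef)

# Synthetic, exactly quadratic data: y = 2 + 0.5*x + 0.1*x^2
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2 + 0.5 * x + 0.1 * x * x for x in xs]
a, b, c = fit_quadratic(xs, ys)
```

A third-order model, as used for melting point, extends the same scheme to a 4×4 system with moments up to degree 6.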
(This article belongs to the Section Mathematics)
