Search Results (5,237)

Search Parameters:
Keywords = graphs and networks

40 pages, 1861 KiB  
Article
A Logifold Structure for Measure Space
by Inkee Jung and Siu-Cheong Lau
Axioms 2025, 14(8), 599; https://doi.org/10.3390/axioms14080599 (registering DOI) - 1 Aug 2025
Abstract
In this paper, we develop a geometric formulation of datasets. The key novel idea is to formulate a dataset to be a fuzzy topological measure space as a global object and equip the space with an atlas of local charts using graphs of fuzzy linear logical functions. We call such a space a logifold. In applications, the charts are constructed by machine learning with neural network models. We implement the logifold formulation to find fuzzy domains of a dataset and to improve accuracy in data classification problems. Full article
(This article belongs to the Special Issue Recent Advances in Function Spaces and Their Applications)
20 pages, 2774 KiB  
Article
Complex Network Analytics for Structural–Functional Decoding of Neural Networks
by Jiarui Zhang, Dongxiao Zhang, Hu Lou, Yueer Li, Taijiao Du and Yinjun Gao
Appl. Sci. 2025, 15(15), 8576; https://doi.org/10.3390/app15158576 (registering DOI) - 1 Aug 2025
Abstract
Neural networks (NNs) achieve breakthroughs in computer vision and natural language processing, yet their “black box” nature persists. Traditional methods prioritise parameter optimisation and loss design, overlooking NNs’ fundamental structure as topologically organised nonlinear computational systems. This work proposes a complex network theory framework decoding structure–function coupling by mapping convolutional layers, fully connected layers, and Dropout modules into graph representations. To overcome limitations of heuristic compression techniques, we develop a topology-sensitive adaptive pruning algorithm that evaluates critical paths via node strength centrality, preserving structural–functional integrity. On CIFAR-10, our method achieves 55.5% parameter reduction with only 7.8% accuracy degradation—significantly outperforming traditional approaches. Crucially, retrained pruned networks exceed original model accuracy by up to 2.63%, demonstrating that topology optimisation unlocks latent model potential. This research establishes a paradigm shift from empirical to topologically rationalised neural architecture design, providing theoretical foundations for deep learning optimisation dynamics. Full article
(This article belongs to the Special Issue Artificial Intelligence in Complex Networks (2nd Edition))
17 pages, 3062 KiB  
Article
Spatiotemporal Risk-Aware Patrol Planning Using Value-Based Policy Optimization and Sensor-Integrated Graph Navigation in Urban Environments
by Swarnamouli Majumdar, Anjali Awasthi and Lorant Andras Szolga
Appl. Sci. 2025, 15(15), 8565; https://doi.org/10.3390/app15158565 (registering DOI) - 1 Aug 2025
Abstract
This study proposes an intelligent patrol planning framework that leverages reinforcement learning, spatiotemporal crime forecasting, and simulated sensor telemetry to optimize autonomous vehicle (AV) navigation in urban environments. Crime incidents from Washington DC (2024–2025) and Seattle (2008–2024) are modeled as a dynamic spatiotemporal graph, capturing the evolving intensity and distribution of criminal activity across neighborhoods and time windows. The agent’s state space incorporates synthetic AV sensor inputs—including fuel level, visual anomaly detection, and threat signals—to reflect real-world operational constraints. We evaluate and compare three learning strategies: Deep Q-Network (DQN), Double Deep Q-Network (DDQN), and Proximal Policy Optimization (PPO). Experimental results show that DDQN outperforms DQN in convergence speed and reward accumulation, while PPO demonstrates greater adaptability in sensor-rich, high-noise conditions. Real-map simulations and hourly risk heatmaps validate the effectiveness of our approach, highlighting its potential to inform scalable, data-driven patrol strategies in next-generation smart cities. Full article
(This article belongs to the Special Issue AI-Aided Intelligent Vehicle Positioning in Urban Areas)
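For readers comparing the learning strategies named in the abstract above, the key difference between DQN and Double DQN is the target computation: DDQN selects the next action with the online network but evaluates it with the target network, reducing overestimation. A minimal sketch with made-up Q-values:

```python
# Sketch of the Double DQN target y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
# The Q-value lists stand in for network outputs; all numbers are hypothetical.

def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    if done:
        return reward
    # Online network selects the action; target network evaluates it.
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]

y = ddqn_target(reward=1.0, gamma=0.9,
                q_online_next=[0.2, 0.7, 0.5],   # online net picks action 1
                q_target_next=[0.3, 0.4, 0.9],   # target net evaluates action 1
                done=False)
# y = 1.0 + 0.9 * 0.4 = 1.36 (plain DQN would instead use max(q_target_next) = 0.9)
```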

20 pages, 4782 KiB  
Article
Enhanced Spatiotemporal Landslide Displacement Prediction Using Dynamic Graph-Optimized GNSS Monitoring
by Jiangfeng Li, Jiahao Qin, Kaimin Kang, Mingzhi Liang, Kunpeng Liu and Xiaohua Ding
Sensors 2025, 25(15), 4754; https://doi.org/10.3390/s25154754 (registering DOI) - 1 Aug 2025
Abstract
Landslide displacement prediction is crucial for disaster mitigation, yet traditional methods often fail to capture the complex, non-stationary spatiotemporal dynamics of slope evolution. This study introduces an enhanced prediction framework that integrates multi-scale signal processing with dynamic, geology-aware graph modeling. The proposed methodology first employs the Maximum Overlap Discrete Wavelet Transform (MODWT) to denoise raw Global Navigation Satellite System (GNSS)-monitored displacement time series data, enhancing the underlying deformation features. Subsequently, a geology-aware graph is constructed, using the temporal correlation of displacement series as a practical proxy for physical relatedness between monitoring nodes. The framework’s core innovation lies in a dynamic graph optimization model with low-rank constraints, which adaptively refines the graph topology to reflect time-varying inter-sensor dependencies driven by factors like mining activities. Experiments conducted on a real-world dataset from an active open-pit mine demonstrate the framework’s superior performance. The DCRNN-proposed model achieved the highest accuracy among eight competing models, recording a Root Mean Square Error (RMSE) of 2.773 mm in the Vertical direction, a 39.1% reduction compared to its baseline. This study validates that the proposed dynamic graph optimization approach provides a robust and significantly more accurate solution for landslide prediction in complex, real-world engineering environments. Full article
(This article belongs to the Section Navigation and Positioning)
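The abstract above builds its graph from the temporal correlation of displacement series as a proxy for physical relatedness between monitoring nodes. A minimal sketch of that construction, with toy series rather than real GNSS data:

```python
# Sketch: connect two monitoring stations when their displacement time series
# correlate strongly (Pearson correlation above a threshold). Series and the
# threshold value are hypothetical.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def correlation_adjacency(series, threshold=0.8):
    """Binary adjacency: edge between nodes whose series correlate >= threshold."""
    n = len(series)
    return [[1 if i != j and pearson(series[i], series[j]) >= threshold else 0
             for j in range(n)] for i in range(n)]

series = [
    [0.0, 1.0, 2.0, 3.0],      # node A: steady creep
    [0.1, 1.1, 2.2, 3.1],      # node B: moves with A
    [3.0, 1.0, 2.5, 0.5],      # node C: unrelated motion
]
adj = correlation_adjacency(series)  # A and B are connected; C is isolated
```

The paper then refines this static proxy with a dynamic low-rank optimization; the sketch covers only the initial graph.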

26 pages, 8736 KiB  
Article
Uncertainty-Aware Fault Diagnosis of Rotating Compressors Using Dual-Graph Attention Networks
by Seungjoo Lee, YoungSeok Kim, Hyun-Jun Choi and Bongjun Ji
Machines 2025, 13(8), 673; https://doi.org/10.3390/machines13080673 (registering DOI) - 1 Aug 2025
Abstract
Rotating compressors are foundational in various industrial processes, particularly in the oil-and-gas sector, where reliable fault detection is crucial for maintaining operational continuity. While Graph Attention Network (GAT) frameworks are widely available, this study advances the state of the art by introducing a Bayesian GAT method specifically tailored for vibration-based compressor fault diagnosis. The approach integrates domain-specific digital-twin simulations built with Rotordynamic software (1.3.0), and constructs dual adjacency matrices to encode both physically informed and data-driven sensor relationships. Additionally, a hybrid forecasting-and-reconstruction objective enables the model to capture short-term deviations as well as long-term waveform fidelity. Monte Carlo dropout further decomposes prediction uncertainty into aleatoric and epistemic components, providing a more robust and interpretable model. Comparative evaluations against conventional Long Short-Term Memory (LSTM)-based autoencoder and forecasting methods demonstrate that the proposed framework achieves superior fault-detection performance across multiple fault types, including misalignment, bearing failure, and unbalance. Moreover, uncertainty analyses confirm that fault severity correlates with increasing levels of both aleatoric and epistemic uncertainty, reflecting heightened noise and reduced model confidence under more severe conditions. By enhancing GAT fundamentals with a domain-tailored dual-graph strategy, specialized Bayesian inference, and digital-twin data generation, this research delivers a comprehensive and interpretable solution for compressor fault diagnosis, paving the way for more reliable and risk-aware predictive maintenance in complex rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)

38 pages, 2981 KiB  
Article
Research on the Characteristics and Influencing Factors of Virtual Water Trade Networks in Chinese Provinces
by Guangyao Deng, Siqian Hou and Keyu Di
Sustainability 2025, 17(15), 6972; https://doi.org/10.3390/su17156972 (registering DOI) - 31 Jul 2025
Abstract
Promoting the sustainable development of virtual water trade is of great significance to safeguarding China’s water resource security and balanced regional economic growth. This study analyzes the virtual water trade network among 31 Chinese provinces based on multi-regional input–output tables from 2012, 2015, and 2017, using total trade decomposition, social network analysis, and exponential random graph models. The key findings are as follows: (1) The total virtual water trade volume remains stable, with Xinjiang, Jiangsu, and Guangdong as the core regions, while remote areas such as Shaanxi and Gansu have lower trade volumes. The primary industry dominates, and it is driven by simple value chains. (2) Provinces such as Xinjiang, Heilongjiang, and Jiangsu form the network’s core. Network density and symmetry increased from 2012 to 2015 but declined slightly in 2017, with efficiency peaking and then dropping, and the clustering coefficient decreased annually. Four economic sectors exhibit distinct interactions: frequent two-way flows in Sector 1, significant inflows in Sector 2, prominent net spillovers in Sector 3, and key brokers in Sector 4. (3) The network evolved from a core-periphery structure with weak ties to a stable, heterogeneous, and resilient system. (4) Influencing factors, such as per capita water resources, economic development, and population, significantly impact trade. Similarities in economic levels, population, and water endowments promote trade, while spatial distance has a limited effect, with geographic proximity showing a significant negative impact on long-distance trade. Full article

33 pages, 2962 KiB  
Review
Evolution of Data-Driven Flood Forecasting: Trends, Technologies, and Gaps—A Systematic Mapping Study
by Banujan Kuhaneswaran, Golam Sorwar, Ali Reza Alaei and Feifei Tong
Water 2025, 17(15), 2281; https://doi.org/10.3390/w17152281 - 31 Jul 2025
Abstract
This paper presents a Systematic Mapping Study (SMS) on data-driven approaches in flood forecasting from 2019 to 2024, a period marked by transformative developments in Deep Learning (DL) technologies. Analysing 363 selected studies, this paper provides an overview of the technological evolution in this field, methodological approaches, evaluation practices and geographical distribution of studies. The study revealed that meteorological and hydrological factors constitute approximately 76% of input variables, with rainfall/precipitation and water level measurements forming the core predictive basis. Long Short-Term Memory (LSTM) networks emerged as the dominant algorithm (21% of implementations), whilst hybrid and ensemble approaches showed the most dramatic growth (from 2% in 2019 to 10% in 2024). The study also revealed a threefold increase in publications during this period, with significant geographical concentration in East and Southeast Asia (56% of studies), particularly China (36%). Several research gaps were identified, including limited exploration of graph-based approaches for modelling spatial relationships, underutilisation of transfer learning for data-scarce regions, and insufficient uncertainty quantification. This SMS provides researchers and practitioners with actionable insights into current trends, methodological practices, and future directions in data-driven flood forecasting, thereby advancing this critical field for disaster management. Full article

24 pages, 5286 KiB  
Article
Graph Neural Network-Enhanced Multi-Agent Reinforcement Learning for Intelligent UAV Confrontation
by Kunhao Hu, Hao Pan, Chunlei Han, Jianjun Sun, Dou An and Shuanglin Li
Aerospace 2025, 12(8), 687; https://doi.org/10.3390/aerospace12080687 (registering DOI) - 31 Jul 2025
Abstract
Unmanned aerial vehicles (UAVs) are widely used in surveillance and combat for their efficiency and autonomy, whilst complex, dynamic environments challenge the modeling of inter-agent relations and information transmission. This research proposes a novel UAV tactical choice-making algorithm utilizing graph neural networks to tackle these challenges. The proposed algorithm employs a graph neural network to process the observed state information, the convolved output of which is then fed into a reconstructed critic network incorporating a Laplacian convolution kernel. This research first enhances the accuracy of obtaining unstable state information in hostile environments. The proposed algorithm uses this information to train a more precise critic network. In turn, this improved critic network guides the actor network to make decisions that better meet the needs of the battlefield. Coupled with a policy transfer mechanism, this architecture significantly enhances the decision-making efficiency and environmental adaptability within the multi-agent system. Results from the experiments show that the average effectiveness of the proposed algorithm across the six planned scenarios is 97.4%, surpassing the baseline by 23.4%. In addition, the integration of transfer learning makes the network convergence speed three times faster than that of the baseline algorithm. This algorithm effectively improves the information transmission efficiency between the environment and the UAV and provides strong support for UAV formation combat. Full article
(This article belongs to the Special Issue New Perspective on Flight Guidance, Control and Dynamics)
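The abstract above feeds graph-convolved state information through a critic network built on a Laplacian convolution kernel. As a minimal sketch of the underlying operation (my own toy example, not the paper's trained network), the graph Laplacian L = D − A measures how each node's signal differs from its neighbours:

```python
# Sketch: one application of the graph Laplacian L = D - A to a node signal.
# A constant signal lies in L's null space, so the output is all zeros.

def laplacian(adj):
    n = len(adj)
    deg = [sum(row) for row in adj]
    return [[(deg[i] if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]                   # toy 3-node star graph
lap = laplacian(adj)                # [[2, -1, -1], [-1, 1, 0], [-1, 0, 1]]
out = matvec(lap, [1.0, 1.0, 1.0])  # constant signal -> [0.0, 0.0, 0.0]
```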

13 pages, 1879 KiB  
Article
Dynamic Graph Convolutional Network with Dilated Convolution for Epilepsy Seizure Detection
by Xiaoxiao Zhang, Chenyun Dai and Yao Guo
Bioengineering 2025, 12(8), 832; https://doi.org/10.3390/bioengineering12080832 (registering DOI) - 31 Jul 2025
Abstract
The electroencephalogram (EEG), widely used for measuring the brain’s electrophysiological activity, has been extensively applied in the automatic detection of epileptic seizures. However, several challenges remain unaddressed in prior studies on automated seizure detection: (1) Methods based on CNN and LSTM assume that EEG signals follow a Euclidean structure; (2) Algorithms leveraging graph convolutional networks rely on adjacency matrices constructed with fixed edge weights or predefined connection rules. To address these limitations, we propose a novel algorithm: Dynamic Graph Convolutional Network with Dilated Convolution (DGDCN). By leveraging a spatiotemporal attention mechanism, the proposed model dynamically constructs a task-specific adjacency matrix, which guides the graph convolutional network (GCN) in capturing localized spatial and temporal dependencies among adjacent nodes. Furthermore, a dilated convolutional module is incorporated to expand the receptive field, thereby enabling the model to capture long-range temporal dependencies more effectively. The proposed seizure detection system is evaluated on the TUSZ dataset, achieving AUC values of 88.7% and 90.4% on 12-s and 60-s segments, respectively, demonstrating competitive performance compared to current state-of-the-art methods. Full article
(This article belongs to the Section Biosignal Processing)
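The dilated convolutional module in the abstract above widens the receptive field by inserting gaps between kernel taps, without adding parameters. A minimal 1-D sketch with a toy signal and kernel:

```python
# Sketch of 1-D dilated convolution (valid padding). With a 3-tap kernel,
# dilation 1 covers 3 consecutive samples; dilation 2 covers a span of 5.
# Signal and kernel values are hypothetical.

def dilated_conv1d(x, kernel, dilation):
    k = len(kernel)
    span = (k - 1) * dilation  # receptive field minus one
    return [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span)]

x = [1, 2, 3, 4, 5, 6, 7]
plain   = dilated_conv1d(x, [1, 1, 1], dilation=1)  # sums of 3 consecutive samples
dilated = dilated_conv1d(x, [1, 1, 1], dilation=2)  # sums of every-other sample
```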

21 pages, 2909 KiB  
Article
Novel Federated Graph Contrastive Learning for IoMT Security: Protecting Data Poisoning and Inference Attacks
by Amarudin Daulay, Kalamullah Ramli, Ruki Harwahyu, Taufik Hidayat and Bernardi Pranggono
Mathematics 2025, 13(15), 2471; https://doi.org/10.3390/math13152471 - 31 Jul 2025
Abstract
Malware evolution presents growing security threats for resource-constrained Internet of Medical Things (IoMT) devices. Conventional federated learning (FL) often suffers from slow convergence, high communication overhead, and fairness issues in dynamic IoMT environments. In this paper, we propose FedGCL, a secure and efficient FL framework integrating contrastive graph representation learning for enhanced feature discrimination, a Jain-index-based fairness-aware aggregation mechanism, an adaptive synchronization scheduler to optimize communication rounds, and secure aggregation via homomorphic encryption within a Trusted Execution Environment. We evaluate FedGCL on four benchmark malware datasets (Drebin, Malgenome, Kronodroid, and TUANDROMD) using 5 to 15 graph neural network clients over 20 communication rounds. Our experiments demonstrate that FedGCL achieves 96.3% global accuracy within three rounds and converges to 98.9% by round twenty—reducing required training rounds by 45% compared to FedAvg—while incurring only approximately 10% additional computational overhead. By preserving patient data privacy at the edge, FedGCL enhances system resilience without sacrificing model performance. These results indicate FedGCL’s promise as a secure, efficient, and fair federated malware detection solution for IoMT ecosystems. Full article
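The fairness-aware aggregation in the abstract above is built on the Jain index, J(x) = (Σx)² / (n·Σx²), which equals 1.0 when all clients contribute equally and 1/n when one client dominates. A minimal sketch (how FedGCL uses the index inside aggregation is not specified here, so the usage below is hypothetical):

```python
# Sketch of the Jain fairness index over per-client contributions.
# Values are illustrative, not measured from any federated run.

def jain_index(xs):
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

equal  = jain_index([1.0, 1.0, 1.0, 1.0])  # perfectly fair -> 1.0
skewed = jain_index([4.0, 0.0, 0.0, 0.0])  # one dominant client -> 1/n = 0.25
```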

24 pages, 4618 KiB  
Article
A Sensor Data Prediction and Early-Warning Method for Coal Mining Faces Based on the MTGNN-Bayesian-IF-DBSCAN Algorithm
by Mingyang Liu, Xiaodong Wang, Wei Qiao, Hongbo Shang, Zhenguo Yan and Zhixin Qin
Sensors 2025, 25(15), 4717; https://doi.org/10.3390/s25154717 (registering DOI) - 31 Jul 2025
Abstract
In the context of intelligent coal mine safety monitoring, an integrated prediction and early-warning method named MTGNN-Bayesian-IF-DBSCAN (Multi-Task Graph Neural Network–Bayesian Optimization–Isolation Forest–Density-Based Spatial Clustering of Applications with Noise) is proposed to address the challenges of gas concentration prediction and anomaly detection in coal mining faces. The MTGNN (Multi-Task Graph Neural Network) is first employed to model the spatiotemporal coupling characteristics of gas concentration and wind speed data. By constructing a graph structure based on sensor spatial dependencies and utilizing temporal convolutional layers to capture long short-term time-series features, the high-precision dynamic prediction of gas concentrations is achieved via the MTGNN. Experimental results indicate that the MTGNN outperforms comparative algorithms, such as CrossGNN and FourierGNN, in prediction accuracy, with the mean absolute error (MAE) being as low as 0.00237 and the root mean square error (RMSE) maintained below 0.0203 across different sensor locations (T0, T1, T2). For anomaly detection, a Bayesian optimization framework is introduced to adaptively optimize the fusion weights of IF (Isolation Forest) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise). Through defining the objective function as the F1 score and employing Gaussian process surrogate models, the optimal weight combination (w_if = 0.43, w_dbscan = 0.52) is determined, achieving an F1 score of 1.0. By integrating original concentration data and residual features, gas anomalies are effectively identified by the proposed method, with the detection rate reaching a range of 93–96% and the false alarm rate controlled below 5%. 
Multidimensional analysis diagrams (e.g., residual distribution, 45° diagonal error plot, and boxplots) further validate the model’s robustness in different spatial locations, particularly in capturing abrupt changes and low-concentration anomalies. This study provides a new technical pathway for intelligent gas warning in coal mines, integrating spatiotemporal modeling, multi-algorithm fusion, and statistical optimization. The proposed framework not only enhances the accuracy and reliability of gas prediction and anomaly detection but also demonstrates potential for generalization to other industrial sensor networks. Full article
(This article belongs to the Section Industrial Sensors)
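The abstract above reports Bayesian-optimized fusion weights (w_if = 0.43, w_dbscan = 0.52) for combining the two anomaly detectors. A minimal sketch of that weighted-score fusion step, with made-up detector scores and a hypothetical threshold (the paper's decision rule is not spelled out here):

```python
# Sketch: fuse two anomaly detectors' scores with the weights reported in the
# abstract. Scores and the 0.5 threshold are illustrative assumptions.

W_IF, W_DBSCAN, THRESHOLD = 0.43, 0.52, 0.5

def fused_anomaly(if_score, dbscan_score, threshold=THRESHOLD):
    """Flag a sample when the weighted sum of detector scores crosses the threshold."""
    return W_IF * if_score + W_DBSCAN * dbscan_score >= threshold

normal  = fused_anomaly(0.1, 0.2)  # 0.043 + 0.104 -> below threshold
anomaly = fused_anomaly(0.9, 0.8)  # 0.387 + 0.416 -> above threshold
```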

31 pages, 2007 KiB  
Review
Artificial Intelligence-Driven Strategies for Targeted Delivery and Enhanced Stability of RNA-Based Lipid Nanoparticle Cancer Vaccines
by Ripesh Bhujel, Viktoria Enkmann, Hannes Burgstaller and Ravi Maharjan
Pharmaceutics 2025, 17(8), 992; https://doi.org/10.3390/pharmaceutics17080992 - 30 Jul 2025
Abstract
The convergence of artificial intelligence (AI) and nanomedicine has transformed cancer vaccine development, particularly in optimizing RNA-loaded lipid nanoparticles (LNPs). Stability and targeted delivery are major obstacles to the clinical translation of promising RNA-LNP vaccines for cancer immunotherapy. This systematic review analyzes AI’s impact on LNP engineering through machine learning-driven predictive models, generative adversarial networks (GANs) for novel lipid design, and neural network-enhanced biodistribution prediction. AI reduces the therapeutic development timeline through accelerated virtual screening of millions of lipid combinations, compared to conventional high-throughput screening. Furthermore, AI-optimized LNPs demonstrate improved tumor targeting. GAN-generated lipids show structural novelty while maintaining higher encapsulation efficiency; graph neural networks predict RNA-LNP binding affinity with high accuracy vs. experimental data; digital twins reduce lyophilization optimization from years to months; and federated learning models enable multi-institutional data sharing. We propose a framework to address key technical challenges: training data quality (min. 15,000 lipid structures), model interpretability (SHAP > 0.65), and regulatory compliance (21CFR Part 11). AI integration reduces manufacturing costs and makes personalized cancer vaccines affordable. Future directions should prioritize quantum machine learning for stability prediction and edge computing for real-time formulation modifications. Full article

21 pages, 651 KiB  
Article
PAD-MPFN: Dynamic Fusion with Popularity Decay for News Recommendation
by Biyang Ma, Yiwei Deng and Huifan Gao
Electronics 2025, 14(15), 3057; https://doi.org/10.3390/electronics14153057 - 30 Jul 2025
Abstract
News recommendation systems must simultaneously address multiple challenges, including dynamic user interest modeling, nonlinear popularity patterns, and diversity recommendation in cold-start scenarios. We present a Popularity-Aware Dynamic Multi-Perspective Fusion Network (PAD-MPFN) that innovatively integrates three key components: adaptive subspace projection for multi-source interest fusion, logarithmic time-decay factors for popularity bias mitigation, and dynamic gating mechanisms for personalized recommendation weighting. The framework uniquely combines sequential behavior analysis, social graph propagation, and temporal popularity modeling through a unified architecture. Experimental results on the MIND dataset, an open-source version of MSN News, demonstrate that PAD-MPFN outperforms existing methods in terms of recommendation performance and cold-start scenarios while effectively alleviating information overload. This study offers a new solution for dynamic interest modeling and diverse recommendation. Full article
(This article belongs to the Special Issue Data-Driven Intelligence in Autonomous Systems)
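The logarithmic time-decay factor in the abstract above damps the popularity of older clicks so fresh items are not drowned out by long-popular ones. The specific formula below is my own assumption in the spirit of the abstract, not the paper's exact equation:

```python
# Sketch of a logarithmic popularity decay: each click contributes
# 1 / log(e + age / tau), so brand-new clicks count fully and old clicks fade.
# The decay shape and tau = 24 hours are hypothetical choices.
import math

def decayed_popularity(click_ages_hours, tau=24.0):
    """Sum click contributions weighted by a logarithmic decay in click age."""
    return sum(1.0 / math.log(math.e + age / tau) for age in click_ages_hours)

fresh = decayed_popularity([0.0, 0.0, 0.0])        # three brand-new clicks -> 3.0
stale = decayed_popularity([240.0, 240.0, 240.0])  # three 10-day-old clicks, much less
```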

18 pages, 8520 KiB  
Article
Cross-Layer Controller Tasking Scheme Using Deep Graph Learning for Edge-Controlled Industrial Internet of Things (IIoT)
by Abdullah Mohammed Alharthi, Fahad S. Altuwaijri, Mohammed Alsaadi, Mourad Elloumi and Ali A. M. Al-Kubati
Future Internet 2025, 17(8), 344; https://doi.org/10.3390/fi17080344 - 30 Jul 2025
Abstract
Edge computing (EC) plays a critical role in advancing the next-generation Industrial Internet of Things (IIoT) by enhancing production, maintenance, and operational outcomes across heterogeneous network boundaries. This study builds upon EC intelligence and integrates graph-based learning to propose a Cross-Layer Controller Tasking Scheme (CLCTS). The scheme operates through two primary phases: task grouping assignment and cross-layer control. In the first phase, controller nodes executing similar tasks are grouped based on task timing to achieve monotonic and synchronized completions. The second phase governs controller re-tasking both within and across these groups. Graph structures connect the groups to facilitate concurrent tasking and completion. A learning model is trained on inverse outcomes from the first phase to mitigate task acceptance errors (TAEs), while the second phase focuses on task migration learning to reduce task prolongation. Edge nodes interlink the groups and synchronize tasking, migration, and re-tasking operations across IIoT layers within unified completion periods. Departing from simulation-based approaches, this study presents a fully implemented framework that combines learning-driven scheduling with coordinated cross-layer control. The proposed CLCTS achieves an 8.67% reduction in overhead, a 7.36% decrease in task processing time, and a 17.41% reduction in TAEs while enhancing the completion ratio by 13.19% under maximum edge node deployment. Full article

20 pages, 1536 KiB  
Article
Graph Convolution-Based Decoupling and Consistency-Driven Fusion for Multimodal Emotion Recognition
by Yingmin Deng, Chenyu Li, Yu Gu, He Zhang, Linsong Liu, Haixiang Lin, Shuang Wang and Hanlin Mo
Electronics 2025, 14(15), 3047; https://doi.org/10.3390/electronics14153047 - 30 Jul 2025
Abstract
Multimodal emotion recognition (MER) is essential for understanding human emotions from diverse sources such as speech, text, and video. However, modality heterogeneity and inconsistent expression pose challenges for effective feature fusion. To address this, we propose a novel MER framework combining a Dynamic Weighted Graph Convolutional Network (DW-GCN) for feature disentanglement and a Cross-Attention Consistency-Gated Fusion (CACG-Fusion) module for robust integration. DW-GCN models complex inter-modal relationships, enabling the extraction of both common and private features. The CACG-Fusion module subsequently enhances classification performance through dynamic alignment of cross-modal cues, employing attention-based coordination and consistency-preserving gating mechanisms to optimize feature integration. Experiments on the CMU-MOSI and CMU-MOSEI datasets demonstrate that our method achieves state-of-the-art performance, significantly improving the ACC7, ACC2, and F1 scores. Full article
(This article belongs to the Section Computer Science & Engineering)