Search Results (70)

Search Parameters:
Keywords = enhanced message passing

26 pages, 3627 KB  
Article
Multi-Radio Access Fusion with Contrastive Graph Message Passing Neural Networks for Intelligent Maritime Routing
by Xuan Zhou, Jin Chen and Haitao Lin
Electronics 2026, 15(6), 1268; https://doi.org/10.3390/electronics15061268 - 18 Mar 2026
Viewed by 237
Abstract
Maritime heterogeneous wireless networks are characterized by dynamic topology and significant heterogeneity in bandwidth, latency, and coverage across communication paradigms, rendering traditional terrestrial routing protocols inadequate. To address these challenges, this paper proposes a unified multi-radio access fusion infrastructure featuring a gateway that enables protocol conversion and collaborative resource management across heterogeneous systems. Building upon this infrastructure, we introduce CMPGNN-DQN, an intelligent routing algorithm that integrates Contrastive Message Passing Graph Neural Networks with Deep Reinforcement Learning. Specifically, the algorithm employs k-hop neighbor aggregation to expand the receptive field for routing decisions, and utilizes a dual-view contrastive learning mechanism—encompassing both homogeneous and heterogeneous perspectives—to enhance representation robustness against dynamic topology perturbations. By deeply fusing network topology features with real-time state information, including bandwidth, delay, and queue length, the agent makes hop-by-hop routing decisions via an ε-greedy policy within the DQN framework. Extensive simulations conducted across various scales of dynamic maritime communication scenarios demonstrate that CMPGNN-DQN outperforms state-of-the-art benchmark algorithms, including AODV, DQN, and GCN, across key metrics such as packet delivery ratio, transmission latency, and bandwidth utilization. Quantitatively, compared to the best-performing alternative (MPNN-DQN), our algorithm achieves throughput improvements of 2.06–5.04% under standard traffic loads and 6.6–27.1% under partial link failure conditions, while converging within merely 25 training episodes. Notably, under heavy network loads (40% load rate) or partial link failures, the algorithm maintains stable communication performance, demonstrating strong adaptability to complex dynamic environments. Full article
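The hop-by-hop routing choice described above reduces, at decision time, to an ε-greedy rule over the Q-values of a node's neighbors. A minimal sketch (hypothetical interface, not the authors' code):

```python
import random

def epsilon_greedy_next_hop(q_values, neighbors, epsilon, rng=random):
    """Pick the next hop for a packet: with probability epsilon explore a
    random neighbor, otherwise exploit the neighbor with the highest
    estimated Q-value. Hypothetical interface illustrating the DQN
    decision rule the abstract mentions, not the paper's implementation."""
    if rng.random() < epsilon:
        return rng.choice(neighbors)
    return max(neighbors, key=lambda n: q_values[n])
```

With epsilon = 0 the rule is purely greedy; during training, epsilon is typically annealed so that early episodes explore more.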

28 pages, 4028 KB  
Article
Reliability-Aware Neural Decoding with Adaptive Multi-Source Information Fusion
by Pengxi Fu, Zhen Wang, Jianxin Guo, Yushuai Zhang, Feng Wang, Rui Zhu and Zhentao Huang
Entropy 2026, 28(3), 323; https://doi.org/10.3390/e28030323 - 13 Mar 2026
Viewed by 298
Abstract
Modern communication systems increasingly leverage multiple information streams—including channel observations, statistical models, and contextual knowledge—to enhance decoding reliability. However, the varying and often unpredictable quality of these sources poses a critical challenge: rigid combination rules fail when source reliability fluctuates, while manual tuning cannot adapt to dynamic operating conditions. This paper presents a neural decoder architecture that automatically learns to assess and fuse heterogeneous information sources based on their instantaneous reliability. Central to our design is a learnable gating module that dynamically weights information streams, demonstrating emergent Bayesian-like behavior—increasing reliance on statistical models under high uncertainty while transitioning to observation-dominated processing as signal confidence improves. To combat the progressive dilution of auxiliary information in deep architectures, we propose a continuous injection strategy that refreshes auxiliary features at each processing layer through dedicated encoding pathways. The underlying message-passing network adopts a heterogeneous bipartite structure with direction-dependent edge parameterization, respecting the asymmetric computational roles inherent in iterative decoding algorithms. Comprehensive experiments validate that the proposed approach not only improves nominal performance but critically maintains robustness when auxiliary information quality degrades or becomes mismatched with actual conditions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
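The Bayesian-like gating behavior the abstract describes (leaning on the statistical prior when confidence is low and on the channel observation when it is high) can be sketched with a single logistic gate; the fixed parameters below stand in for weights a training loop would learn:

```python
import math

def gated_llr_fusion(channel_llr, prior_llr, confidence, w_gate=2.0, b_gate=-1.0):
    """Fuse two log-likelihood-ratio streams with a scalar gate.
    The gate is a logistic function of a signal-confidence feature:
    high confidence shifts weight toward the channel observation,
    low confidence toward the statistical prior. The gate parameters
    (w_gate, b_gate) are placeholders for learned weights."""
    g = 1.0 / (1.0 + math.exp(-(w_gate * confidence + b_gate)))  # in (0, 1)
    return g * channel_llr + (1.0 - g) * prior_llr
```

A trained gating module would compute such weights per stream and per symbol rather than from a single scalar feature.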

26 pages, 1118 KB  
Article
Representation-Centric Approach for Android Malware Classification: Interpretability-Driven Feature Engineering on Function Call Graphs
by Gyumin Kim, Dongmin Yoon, NaeJoung Kwak and ByoungYup Lee
Appl. Sci. 2026, 16(6), 2670; https://doi.org/10.3390/app16062670 - 11 Mar 2026
Viewed by 348
Abstract
The existing research on Android malware detection using graph neural networks (GNNs) has largely focused on architectural improvements, while input node feature representations have received less systematic attention. This study adopts a representation-centric approach to enhance function call graph (FCG)-based malware classification through interpretability-driven feature engineering. We propose a dual-level structural feature framework integrating local topological patterns with global graph-level properties. The initial feature set comprises 13 dimensions: five local degree profile (LDP) features and eight global structural features capturing community structure, execution flow, and connectivity patterns. To mitigate the curse of dimensionality, we apply an interpretability-driven selection using integrated gradients (IG), gradient-weighted class activation mapping (GradCAM), and Shapley additive explanations (SHAP), yielding an optimized seven-dimensional subset. Experiments on the MalNet-Tiny benchmark demonstrate that the proposed approach achieves 94.47 ± 0.25% accuracy with jumping knowledge GraphSAGE (JK-GraphSAGE), improving the LDP-only baseline by 0.32 percentage points while reducing feature dimensionality by 46%. The selected features exhibit consistent importance across four GNN architectures and multiple message-passing layers, demonstrating model-agnostic effectiveness. The results reveal that aggregation mechanisms critically influence feature utility, highlighting the necessity of interpretability-guided design for robust malware detection. This work provides a systematic methodology for feature engineering in graph-based security applications. Full article
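The five local degree profile (LDP) features the 13-dimensional set starts from are simple degree statistics; a sketch for an unweighted, undirected adjacency-list graph:

```python
from statistics import mean, pstdev

def local_degree_profile(adj, v):
    """Five LDP features for node v: its own degree plus the min, max,
    mean, and standard deviation of its neighbors' degrees. Sketch of
    the standard LDP definition; the paper's full feature set adds
    eight global structural features on top of these."""
    neigh_degs = [len(adj[u]) for u in adj[v]]
    return [len(adj[v]), min(neigh_degs), max(neigh_degs),
            mean(neigh_degs), pstdev(neigh_degs)]
```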

19 pages, 1356 KB  
Article
Signal Detection Method for OTFS System Based on Adaptive Wavelet Convolutional Neural Network
by You Wu and Mengyao Zhou
Sensors 2026, 26(4), 1397; https://doi.org/10.3390/s26041397 - 23 Feb 2026
Viewed by 435
Abstract
In Orthogonal Time–Frequency Space (OTFS) systems, signal detection algorithms based on convolutional neural networks (CNNs) suffer from insufficient feature extraction and are limited by local mixing. Additionally, fixed convolution kernels struggle to match the sparsity and non-stationary characteristics of OTFS signals in the delay-Doppler domain, resulting in slow convergence and high training costs. We do not stop at simply integrating more features outside the existing CNN framework. Instead, we go deeper into the network and replace the fixed convolution kernels with wavelet convolution layers that have time–frequency-adaptive capabilities. This fundamental change allows the network to more intrinsically match the physical characteristics of OTFS signals in the delay-Doppler domain, thereby achieving excellent detection performance while also gaining faster convergence efficiency. Therefore, this paper proposes a signal detection method using an adaptive wavelet convolutional neural network (AWCNN). The approach replaces the first convolutional layer of a standard CNN with an adaptive wavelet layer, which leverages the time–frequency localization properties of Sym4 wavelet kernels along with learnable scaling and translation factors. This enhances the network’s ability to extract sparse features from OTFS signals. Additionally, the model incorporates both the original received signal and preliminary estimates from the message-passing (MP) algorithm as input features, enriching the dataset and further improving detection performance. Experimental results demonstrate that the AWCNN model achieves superior convergence efficiency compared to the CNN model and attains a bit error rate (BER) comparable to that of the CNN algorithm at a low signal-to-noise ratio of 2 dB, while operating without the need for pilot-assisted channel state information acquisition. Full article
(This article belongs to the Section Communications)
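A wavelet layer with learnable scale and translation amounts to sampling a mother wavelet psi((t - b) / a) / sqrt(a) into a convolution kernel. The sketch below uses a Ricker (Mexican-hat) wavelet as a closed-form stand-in for the paper's Sym4 kernels, which have no closed form; a and b are the factors a training loop would update:

```python
import math

def scaled_wavelet_kernel(a, b, size=9):
    """Sample a 1-D convolution kernel from a mother wavelet with
    scale a and translation b: psi((t - b) / a) / sqrt(a).
    Ricker wavelet used purely for illustration (assumption, not the
    paper's Sym4); the kernel is centered on size // 2 taps."""
    def ricker(t):
        return (1.0 - t * t) * math.exp(-t * t / 2.0)
    half = size // 2
    return [ricker((t - b) / a) / math.sqrt(a) for t in range(-half, half + 1)]
```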

16 pages, 1641 KB  
Article
Edge-Based GNN for Network Delay Prediction Enhanced by Flight Connectivity
by Zhixing Tang, Zhaolun Niu, Xuanting Chen, Shan Huang and Xinping Zhu
Aerospace 2026, 13(2), 161; https://doi.org/10.3390/aerospace13020161 - 10 Feb 2026
Viewed by 434
Abstract
Accurate prediction of network-wide delay is crucial for air traffic management and passenger service. However, the inherent complexity of large-scale air traffic networks, with their dense interconnectivity and multi-dimensional operational dynamics such as flight connectivity, makes this task highly challenging. While Graph Neural Networks (GNNs) offer a promising framework, prevailing models are constrained by a “node → edge → node” representation paradigm, which fails to preserve the high-fidelity, edge-centric operational data that encodes delay propagation paths. To overcome this limitation, we propose a novel edge-based GNN. Our approach begins with a flight-connectivity-informed delay characterization, introducing delay width and delay strength as core metrics. The model implements an “edge → node” message-passing mechanism that explicitly encodes inbound and outbound flights, enabling direct learning of delay diffusion dynamics along air routes. Extensive experiments on real-world datasets demonstrate that our method outperforms state-of-the-art benchmarks, achieving the lowest RMSE, MAE, and MSE. A layered performance analysis reveals a key strength: the model delivers superior accuracy at major hub airports—which are critical to network performance—while maintaining robust precision at small-to-medium-sized airports. This balanced capability underscores the model’s practical utility and its enhanced capacity to capture the essential spatial–temporal dependencies governing delay propagation across diverse airport tiers. Full article
(This article belongs to the Special Issue AI, Machine Learning and Automation for Air Traffic Control (ATC))
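The "edge → node" message-passing step can be illustrated by aggregating inbound and outbound flight-edge features into separate slots of each airport node's state (a minimal sketch under simplified assumptions, not the authors' model):

```python
def edge_to_node_update(edges, num_nodes):
    """One "edge -> node" message-passing step: each node's state is the
    sum of the scalar features of its inbound and its outbound edges,
    kept in separate slots so that direction is preserved. Edges are
    (src, dst, feature) triples; real models would use feature vectors
    and learned aggregation instead of plain sums."""
    inbound = [0.0] * num_nodes
    outbound = [0.0] * num_nodes
    for src, dst, feat in edges:
        outbound[src] += feat
        inbound[dst] += feat
    return [[i, o] for i, o in zip(inbound, outbound)]
```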

24 pages, 23360 KB  
Article
Model-Data Hybrid-Driven Wideband Channel Estimation for Beamspace Massive MIMO Systems
by Yang Nie, Zhenghuan Ma and Lili Jing
Entropy 2026, 28(2), 154; https://doi.org/10.3390/e28020154 - 30 Jan 2026
Cited by 1 | Viewed by 344
Abstract
Accurate channel estimation is critical for enabling effective directional beamforming and spectrally efficient transmission in beamspace massive multiple-input multiple-output (MIMO) systems. However, conventional model-driven algorithms are derived from idealized mathematical models and typically suffer severe performance degradation under model mismatches caused by complex and nonideal propagation environments. Although data-driven deep learning (DL) approaches can learn channel characteristics from data, they typically require large-scale training datasets and demonstrate limited generalization capability. To overcome these limitations, we propose a model-data hybrid-driven network (MD-HDN) scheme to address the wideband beamspace channel estimation problem. In the MD-HDN scheme, we unfold the vector approximate message passing (VAMP) algorithm into a trainable network, where a novel shrinkage function is introduced to enhance the estimation accuracy. Extensive numerical results confirm that the proposed MD-HDN scheme significantly outperforms existing schemes under various signal-to-noise ratios (SNRs) and achieves substantial improvements in both estimation accuracy and robustness. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
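Unfolded (V)AMP networks are built around a per-iteration shrinkage (denoising) step. The classic soft-threshold shrinkage, whose threshold becomes a trainable per-layer parameter when the algorithm is unfolded, looks like this (the paper's novel shrinkage function is a variant of this idea; its exact form is not given in the abstract):

```python
def soft_shrinkage(x, theta):
    """Soft-threshold shrinkage eta(x) = sign(x) * max(|x| - theta, 0),
    the standard denoiser inside (V)AMP iterations. In an unfolded
    network, theta would be a learnable parameter of each layer."""
    if x > theta:
        return x - theta
    if x < -theta:
        return x + theta
    return 0.0
```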

20 pages, 3869 KB  
Article
Dynamical Graph Neural Networks for Modern Power Grid Analysis
by Shu Huang, Jining Li, Ruijiang Zeng, Zhiyong Li and Jin Xu
Electronics 2026, 15(3), 493; https://doi.org/10.3390/electronics15030493 - 23 Jan 2026
Viewed by 663
Abstract
Modern power grids are crucial infrastructures underpinning societal stability, yet their complexity and dynamic nature pose significant challenges for traditional analytical methods. Graph Neural Networks (GNNs) have recently emerged as powerful tools for modeling complex relationships in graph-structured data, making them especially suitable for analyzing power systems. However, existing GNN methods typically focus on static or simplified network models, failing to adequately address dynamic topological changes and suffering from the over-smoothing issue. To overcome these limitations, we propose a novel GNN framework incorporating dynamic message-passing mechanisms, comprising Dynamic Topological Learning (DTL) and Adaptive Message-Passing (AMP) modules. Specifically, DTL captures dynamic changes in the power grid topology conditioned on the current state of the system, while AMP dynamically adjusts the message-passing process to effectively preserve local node information according to the updated topology. This framework is model-agnostic, allowing it to be integrated with various GNN architectures. Extensive experiments on multiple benchmark power grid datasets demonstrate that our proposed framework significantly enhances existing GNN methods in power flow and optimal power flow analysis, consistently achieving lower mean absolute error and higher R-squared scores. Full article
(This article belongs to the Special Issue AI Applications for Smart Grid)

30 pages, 10476 KB  
Article
Large-Scale Multi-UAV Task Allocation via a Centrality-Driven Load-Aware Adaptive Consensus Bundle Algorithm for Biomimetic Swarm Coordination
by Weifei Gan, Hongxuan Xu, Yunwei Bai, Xin Zhou, Wangyu Wu and Xiaofei Du
Biomimetics 2026, 11(1), 69; https://doi.org/10.3390/biomimetics11010069 - 14 Jan 2026
Viewed by 726
Abstract
Large multi-UAV mission systems operate over time-varying communication graphs with heterogeneous platforms, where classical distributed task assignment may incur excessive message passing and suboptimal task–resource matching. To address these challenges, this paper proposes CLAC-CBBA (Centrality-Driven and Load-Aware Adaptive Clustering CBBA), an enhanced variant of the Consensus-Based Bundle Algorithm (CBBA) for large heterogeneous swarms. The proposed method is biomimetic in the sense that it integrates swarm-inspired self-organization and load-aware self-regulation to improve scalability and robustness, resembling decentralized role emergence and negative-feedback workload balancing in natural swarms. Specifically, CLAC-CBBA first identifies key nodes via a centrality-based adaptive cluster-reconfiguration mechanism (CenCluster) and partitions the network into cooperation domains to reduce redundant communication. It then applies a load-aware cluster self-regulation mechanism (LCSR), which combines resource attributes and spatial information, uses K-medoids clustering, and triggers split/merge reconfiguration based on real-time load imbalance. CBBA bidding is executed locally within clusters, while anchors and cluster representatives synchronize winners/bids to ensure globally consistent, conflict-free assignments. Simulations across diverse network densities and swarm sizes show that CLAC-CBBA reduces communication overhead and runtime while improving total task score compared with CBBA and several advanced variants, with statistically significant gains. These results demonstrate that CLAC-CBBA is scalable and robust for large-scale heterogeneous UAV task allocation. Full article
(This article belongs to the Section Biological Optimisation and Management)
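The CenCluster step of identifying key nodes by centrality can be illustrated with plain degree centrality (the paper's exact centrality measure may be richer):

```python
def top_centrality_nodes(adj, k):
    """Rank nodes of an adjacency-list graph by degree centrality and
    return the k highest as candidate cluster heads. A simplified
    stand-in for the centrality-based key-node identification the
    abstract describes."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
```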

33 pages, 493 KB  
Article
Heterogeneous Graph Neural Network with Local and Global Message Passing for AC-Optimal Power Flow Solutions
by Aihui Wen, Bao Wen, Jining Li and Jin Xu
Appl. Syst. Innov. 2026, 9(1), 18; https://doi.org/10.3390/asi9010018 - 5 Jan 2026
Viewed by 886
Abstract
The AC Optimal Power Flow (AC-OPF) problem remains a major computational bottleneck for real-time power system operation. Conventional solvers are accurate but time-consuming, while Graph Neural Networks (GNNs) offer faster approximations yet struggle to capture long-range dependencies and handle topological variations. To address these limitations, we propose a Heterogeneous Graph Transformer with bus-centric Local–Global Message Passing (LG-HGNN). The model performs type-specific local message passing over heterogeneous power graphs and applies a global Transformer only on bus nodes to capture system-wide correlations efficiently. Effective-resistance positional encodings and resistance-biased attention enhance electrical awareness, whereas bounded decoders and physics-informed regularization preserve operational feasibility. Experiments on IEEE 14-, 30-, and 118-bus systems show that LG-HGNN achieves near-optimal results within a few percent of the AC-OPF optimum and generalizes to thousands of unseen N-1 contingency topologies without retraining. Compared with interior-point solvers, it attains up to 190× speedup before power-flow correction and over 10× afterward on GOC 2000-bus systems, providing a scalable and physically consistent surrogate for real-time AC-OPF. Full article
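The effective-resistance positional encodings rest on the Laplacian pseudoinverse: r(u, v) = L+[u,u] + L+[v,v] - 2 L+[u,v]. A dense-matrix sketch of that quantity (not the paper's implementation, which would need a scalable approximation for large grids):

```python
import numpy as np

def effective_resistance(adj, u, v):
    """Effective resistance between nodes u and v via the pseudoinverse
    of the graph Laplacian L = D - A. For a single edge the value equals
    its resistance; for parallel paths it drops, which is why it encodes
    electrical proximity better than hop distance."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    lp = np.linalg.pinv(lap)
    return lp[u, u] + lp[v, v] - 2.0 * lp[u, v]
```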

23 pages, 3769 KB  
Article
Partial Discharge Pattern Recognition of GIS with Time–Frequency Energy Grayscale Maps and an Improved Variational Bayesian Autoencoder
by Yuhang He, Yuan Fang, Zongxi Zhang, Dianbo Zhou, Shaoqing Chen and Shi Jing
Energies 2026, 19(1), 127; https://doi.org/10.3390/en19010127 - 25 Dec 2025
Cited by 1 | Viewed by 562
Abstract
Partial discharge pattern recognition is a crucial task for assessing the insulation condition of Gas-Insulated Switchgear (GIS). However, the on-site environment presents challenges such as strong electromagnetic interference, leading to acquired signals with a low signal-to-noise ratio (SNR). Furthermore, traditional pattern recognition methods based on statistical parameters suffer from redundant and inefficient features that compromise classification accuracy, while existing artificial-intelligence-based classification methods lack the ability to quantify the uncertainty in defect classification. To address these issues, this paper proposes a novel GIS partial discharge pattern recognition method based on time–frequency energy grayscale maps and an improved variational Bayesian autoencoder. Firstly, a denoising-based approximate message passing algorithm is employed to sample and denoise the discharge signals, which enhances the SNR while simultaneously reducing the number of sampling points. Subsequently, a two-dimensional time–instantaneous frequency energy grayscale map of the discharge signal is constructed based on the Hilbert–Huang Transform and energy grayscale mapping, effectively extracting key time–frequency features. Finally, an improved variational Bayesian autoencoder is utilized for the unsupervised learning of the image features, establishing a GIS defect classification method with an associated confidence level by integrating probabilistic features. Validation based on measured data demonstrates the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Operation, Control, and Planning of New Power Systems)

33 pages, 2145 KB  
Article
Deep Learning Fractal Superconductivity: A Comparative Study of Physics-Informed and Graph Neural Networks Applied to the Fractal TDGL Equation
by Călin Gheorghe Buzea, Florin Nedeff, Diana Mirilă, Maricel Agop and Decebal Vasincu
Fractal Fract. 2025, 9(12), 810; https://doi.org/10.3390/fractalfract9120810 - 11 Dec 2025
Viewed by 651
Abstract
The fractal extension of the time-dependent Ginzburg–Landau (TDGL) equation, formulated within the framework of Scale Relativity, generalizes superconducting dynamics to non-differentiable space–time. Although analytically well established, its numerical solution remains difficult because of the strong coupling between amplitude and phase curvature. Here we develop two complementary deep learning solvers for the fractal TDGL (FTDGL) system. The Fractal Physics-Informed Neural Network (F-PINN) embeds the Scale-Relativity covariant derivative through automatic differentiation on continuous fields, whereas the Fractal Graph Neural Network (F-GNN) represents the same dynamics on a sparse spatial graph and learns local gauge-covariant interactions via message passing. Both models are trained against finite-difference reference data, and a parametric study over the dimensionless fractality parameter D quantifies its influence on the coherence length, penetration depth, and peak magnetic field. Across multivortex benchmarks, the F-GNN reduces the relative L2 error on |ψ|² from 0.190 to 0.046 and on Bz from approximately 0.62 to 0.36 (averaged over three seeds). This ≈4× improvement in condensate-density accuracy corresponds to a substantial enhancement in vortex-core localization—from tens of pixels of uncertainty to sub-pixel precision—and yields a cleaner reconstruction of the 2π phase winding around each vortex, improving the extraction of experimentally relevant observables such as ξeff, λeff, and local Bz peaks. The model also preserves flux quantization and remains robust under 2–5% Gaussian noise, demonstrating stable learning under experimentally realistic perturbations. The D-scan reveals broader vortex cores, a non-monotonic variation in the penetration depth, and moderate modulation of the peak magnetic field, while preserving topological structure.
These results show that graph-based learning provides a superior inductive bias for modeling non-differentiable, gauge-coupled systems. The proposed F-PINN and F-GNN architectures therefore offer accurate, data-efficient solvers for fractal superconductivity and open pathways toward data-driven inference of fractal parameters from magneto-optical or Hall-probe imaging experiments. Full article
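The reported relative L2 errors follow the generic definition ||pred - ref|| / ||ref|| in the Euclidean norm:

```python
import math

def relative_l2_error(pred, ref):
    """Relative L2 error between a predicted field and a reference
    field, both given as flat sequences of samples: the Euclidean norm
    of the difference divided by the Euclidean norm of the reference.
    Generic metric definition, matching how such errors are usually
    reported for learned PDE solvers."""
    num = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)))
    den = math.sqrt(sum(r ** 2 for r in ref))
    return num / den
```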

38 pages, 4109 KB  
Article
End-to-End DAE–LDPC–OFDM Transceiver with Learned Belief Propagation Decoder for Robust and Power-Efficient Wireless Communication
by Mohaimen Mohammed and Mesut Çevik
Sensors 2025, 25(21), 6776; https://doi.org/10.3390/s25216776 - 5 Nov 2025
Viewed by 1225
Abstract
This paper presents a Deep Autoencoder–LDPC–OFDM (DAE–LDPC–OFDM) transceiver architecture that integrates a learned belief propagation (BP) decoder to achieve robust, energy-efficient, and adaptive wireless communication. Unlike conventional modular systems that treat encoding, modulation, and decoding as independent stages, the proposed framework performs end-to-end joint optimization of all components, enabling dynamic adaptation to varying channel and noise conditions. The learned BP decoder introduces trainable parameters into the iterative message-passing process, allowing adaptive refinement of log-likelihood ratio (LLR) statistics and enhancing decoding accuracy across diverse SNR regimes. Extensive experimental results across multiple datasets and channel scenarios demonstrate the effectiveness of the proposed design. At 10 dB SNR, the DAE–LDPC–OFDM achieves a BER of 1.72% and BLER of 2.95%, outperforming state-of-the-art models such as Transformer–OFDM, CNN–OFDM, and GRU–OFDM by 25–30%, and surpassing traditional LDPC–OFDM systems by 38–42% across all tested datasets. The system also achieves a PAPR reduction of 26.6%, improving transmitter power amplifier efficiency, and maintains a low inference latency of 3.9 ms per frame, validating its suitability for real-time applications. Moreover, it maintains reliable performance under time-varying, interference-rich, and multipath fading channels, confirming its robustness in realistic wireless environments. The results establish the DAE–LDPC–OFDM as a high-performance, power-efficient, and scalable architecture capable of supporting the demands of 6G and beyond, delivering superior reliability, low-latency performance, and energy-efficient communication in next-generation intelligent networks. Full article
(This article belongs to the Special Issue AI-Driven Security and Privacy for IIoT Applications)
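A learned BP decoder inserts trainable parameters into the iterative message-passing updates. A normalized min-sum check-node update with a single scalar weight (standing in for the per-edge or per-iteration parameters such decoders train) gives a minimal illustration:

```python
def weighted_min_sum_check(incoming_llrs, weight=0.8):
    """One check-node update of a normalized min-sum BP decoder: the
    outgoing message on each edge is the product of the signs and the
    minimum magnitude of the OTHER incoming LLRs, scaled by a
    normalization weight. In a learned BP decoder this scalar would be
    a trainable parameter; 0.8 is just an illustrative default."""
    out = []
    for i in range(len(incoming_llrs)):
        others = incoming_llrs[:i] + incoming_llrs[i + 1:]
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        out.append(weight * sign * min(abs(m) for m in others))
    return out
```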

24 pages, 7694 KB  
Article
LA-GATs: A Multi-Feature Constrained and Spatially Adaptive Graph Attention Network for Building Clustering
by Xincheng Yang, Xukang Xie and Dingming Liu
ISPRS Int. J. Geo-Inf. 2025, 14(11), 415; https://doi.org/10.3390/ijgi14110415 - 23 Oct 2025
Viewed by 886
Abstract
Building clustering is a key challenge in cartographic generalization, where the goal is to group spatially related buildings into semantically coherent clusters while preserving the true distribution patterns of urban structures. Existing methods often rely on either spatial distance or building feature similarity alone, leading to clusters that sacrifice either accuracy or spatial continuity. Moreover, most deep learning-based approaches, including graph attention networks (GATs), fail to explicitly incorporate spatial distance constraints and typically restrict message passing to first-order neighborhoods, limiting their ability to capture long-range structural dependencies. To address these issues, this paper proposes LA-GATs, a multi-feature constrained and spatially adaptive building clustering network. First, a Delaunay triangulation is constructed based on nearest-neighbor distances to represent spatial topology, and a heterogeneous feature matrix is built by integrating architectural spatial features, including compactness, orientation, color, and height. Then, a spatial distance-constrained attention mechanism is designed, where attention weights are adjusted using a distance decay function to enhance local spatial correlation. A second-order neighborhood aggregation strategy is further introduced to extend message propagation and mitigate the impact of triangulation errors. Finally, spectral clustering is performed on the learned similarity matrix. Comprehensive experimental validation on real-world datasets from Xi’an and Beijing shows that LA-GATs outperforms existing clustering methods in compactness, silhouette coefficient, and adjusted Rand index, with up to about 21% improvement in residential clustering accuracy. Full article
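The distance-constrained attention adjustment can be sketched as a softmax over scores penalized by an exponential decay in distance; beta is a hypothetical decay rate, and the paper's exact decay function may differ:

```python
import math

def distance_decayed_attention(scores, distances, beta=0.5):
    """Softmax attention weights where each raw score is first damped
    by exp(-beta * distance), so nearer buildings keep more weight.
    Illustrative form of a distance-decay constraint on attention;
    not the paper's exact mechanism."""
    decayed = [s * math.exp(-beta * d) for s, d in zip(scores, distances)]
    z = [math.exp(x) for x in decayed]
    total = sum(z)
    return [x / total for x in z]
```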

20 pages, 2320 KB  
Article
Signal Detection Method for OTFS System Based on Feature Fusion and CNN
by You Wu, Mengyao Zhou, Yuanjin Lin and Zixing Liao
Electronics 2025, 14(20), 4041; https://doi.org/10.3390/electronics14204041 - 14 Oct 2025
Cited by 1 | Viewed by 945
Abstract
For orthogonal time–frequency space (OTFS) systems in high-mobility scenarios, traditional signal detection algorithms face challenges due to their reliance on channel state information (CSI), which requires excessive pilot overhead. Meanwhile, detection methods based on convolutional neural networks (CNNs) suffer from insufficient signal feature extraction, and the message passing (MP) algorithm exhibits low efficiency in iterative signal updates. This paper proposes a signal detection method for OTFS systems based on feature fusion and a CNN (MP-WCNN), which employs wavelet decomposition to extract multi-scale signal features, combines MP-based enhancement for feature fusion, and constructs high-dimensional feature tensors through channel-wise concatenation as CNN input to achieve signal detection. Experimental results demonstrate that the proposed MP-WCNN method achieves approximately 9 dB signal-to-noise ratio (SNR) gain compared to the MP algorithm at the same bit error rate (BER). Furthermore, the proposed method operates without requiring pilot assistance for CSI acquisition. Full article
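The channel-wise concatenation of multi-scale wavelet features can be illustrated with a minimal sketch. This is not the authors' MP-WCNN: a hand-rolled Haar transform stands in for the paper's wavelet decomposition, `levels` is a hypothetical parameter, and the MP enhancement and the CNN itself are omitted; the point is only how per-scale bands become stacked input channels.

```python
import numpy as np

def haar_step(x):
    """One level of the 1-D Haar wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = x[: len(x) // 2 * 2]                  # drop an odd trailing sample
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def multiscale_features(signal, levels=2):
    """Stack Haar approximation/detail bands from several scales as
    'channels', upsampled back to the input length, mimicking the
    channel-wise concatenation fed to a CNN."""
    n = len(signal)
    channels = [signal]
    approx = signal
    for _ in range(levels):
        approx, detail = haar_step(approx)
        # Nearest-neighbour upsample so every channel shares the input length.
        channels.append(np.repeat(approx, n // len(approx))[:n])
        channels.append(np.repeat(detail, n // len(detail))[:n])
    return np.stack(channels)                 # shape: (1 + 2*levels, n)

sig = np.array([1., 1., 1., 1., 5., 5., 5., 5.])
feats = multiscale_features(sig, levels=2)    # 5 channels of length 8
```

For this piecewise-constant input the detail channels are zero (the signal is flat within every pair), while the coarse approximation channel isolates the step between the two halves.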

23 pages, 3132 KB  
Article
Symmetry-Aware Superpixel-Enhanced Few-Shot Semantic Segmentation
by Lan Guo, Xuyang Li, Jinqiang Wang, Yuqi Tong, Jie Xiao, Rui Zhou, Ling-Huey Li, Qingguo Zhou and Kuan-Ching Li
Symmetry 2025, 17(10), 1726; https://doi.org/10.3390/sym17101726 - 14 Oct 2025
Cited by 1 | Viewed by 974
Abstract
Few-Shot Semantic Segmentation (FSS) faces significant challenges in modeling complex backgrounds and maintaining prediction consistency due to limited training samples. Existing methods oversimplify backgrounds as single negative classes and rely solely on pixel-level alignments. To address these issues, we propose a symmetry-aware superpixel-enhanced FSS framework with a symmetric dual-branch architecture that explicitly models the superpixel region-graph in both the support and query branches. First, top–down cross-layer fusion injects low-level edge and texture cues into high-level semantics to build a more complete representation of complex backgrounds, improving foreground–background separability and boundary quality. Second, images are partitioned into superpixels and aggregated into “superpixel tokens” to construct a Region Adjacency Graph (RAG). Support-set prototypes are used to initialize query-pixel predictions, which are then projected into the superpixel space for cross-image prototype alignment with support superpixels. We further perform message passing/energy minimization on the RAG to enhance intra-region consistency and boundary adherence, and finally back-project the predictions to the pixel space. Lastly, by aggregating homogeneous semantic information, we construct robust foreground and background prototype representations, enhancing the model’s ability to perceive both seen and novel targets. Extensive experiments on the PASCAL-5i and COCO-20i benchmarks demonstrate that our proposed model achieves superior segmentation performance over the baseline and remains competitive with existing FSS methods. Full article
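The superpixel-token workflow — project pixel predictions into regions, pass messages over a Region Adjacency Graph, then back-project — can be sketched as below. This is a toy stand-in under stated assumptions, not the paper's method: regular grid blocks replace SLIC superpixels, and one step of plain neighbour averaging (weight `alpha`, hypothetical) replaces the learned message passing/energy minimization.

```python
import numpy as np

def grid_superpixels(h, w, block=4):
    """Label map of regular block 'superpixels' (a stand-in for SLIC)."""
    rows = np.arange(h) // block
    cols = np.arange(w) // block
    return rows[:, None] * ((w + block - 1) // block) + cols[None, :]

def rag_smooth(pixel_pred, labels, alpha=0.5):
    """Project pixel scores to superpixel tokens, average each token with
    its RAG neighbours (one message-passing step), and back-project."""
    n = labels.max() + 1
    # Token score = mean pixel score inside each superpixel.
    tok = np.array([pixel_pred[labels == k].mean() for k in range(n)])
    # Region adjacency: two superpixels are neighbours if they share
    # a horizontal or vertical pixel boundary.
    adj = np.zeros((n, n), bool)
    right = labels[:, :-1] != labels[:, 1:]
    down = labels[:-1, :] != labels[1:, :]
    for a, b in zip(labels[:, :-1][right], labels[:, 1:][right]):
        adj[a, b] = adj[b, a] = True
    for a, b in zip(labels[:-1, :][down], labels[1:, :][down]):
        adj[a, b] = adj[b, a] = True
    deg = adj.sum(1).clip(min=1)
    tok = (1 - alpha) * tok + alpha * (adj @ tok) / deg   # message passing
    return tok[labels]                                    # back-project

labels = grid_superpixels(8, 8, block=4)   # four 4x4 superpixels
pred = np.zeros((8, 8))
pred[:4, :4] = 1.0                         # initial foreground in one region
out = rag_smooth(pred, labels, alpha=0.5)
```

After one round, the foreground region's score leaks only to its RAG neighbours, which is the intra-region consistency effect the abstract describes: pixels inside one superpixel always receive the same smoothed score.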
(This article belongs to the Special Issue Symmetry in Process Optimization)
