Search Results (57)

Search Parameters:
Keywords = message passing enhancement

20 pages, 2320 KB  
Article
Signal Detection Method for OTFS System Based on Feature Fusion and CNN
by You Wu, Mengyao Zhou, Yuanjin Lin and Zixing Liao
Electronics 2025, 14(20), 4041; https://doi.org/10.3390/electronics14204041 - 14 Oct 2025
Abstract
For orthogonal time–frequency space (OTFS) systems in high-mobility scenarios, traditional signal detection algorithms face challenges due to their reliance on channel state information (CSI), which requires excessive pilot overhead. Meanwhile, detection methods based on convolutional neural networks (CNNs) suffer from insufficient signal feature extraction, and the message passing (MP) algorithm exhibits low efficiency in iterative signal updates. This paper proposes a signal detection method for OTFS systems based on feature fusion and a CNN (MP-WCNN), which employs wavelet decomposition to extract multi-scale signal features, fuses them with MP-enhanced features, and constructs high-dimensional feature tensors through channel-wise concatenation as the CNN input for signal detection. Experimental results demonstrate that the proposed MP-WCNN method achieves approximately 9 dB signal-to-noise ratio (SNR) gain compared to the MP algorithm at the same bit error rate (BER). Furthermore, the proposed method operates without requiring pilot assistance for CSI acquisition. Full article
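As a rough illustration of the feature-construction step described above, the following sketch (hypothetical, not the authors' code) uses PyWavelets' stationary wavelet transform so that every sub-band keeps the input length and can be stacked channel-wise into a CNN input tensor; the wavelet family, decomposition level, and the use of the received symbols' real/imaginary parts are assumptions.

```python
# Hypothetical sketch: multi-scale wavelet features stacked channel-wise as CNN input.
import numpy as np
import pywt

def wavelet_feature_tensor(rx_symbols, wavelet="db2", level=2):
    """Stack real/imag parts and their stationary-wavelet coefficients as channels."""
    real, imag = rx_symbols.real, rx_symbols.imag
    channels = [real, imag]
    for part in (real, imag):
        # swt keeps every sub-band at the input length, so bands stack cleanly
        for approx, detail in pywt.swt(part, wavelet, level=level):
            channels.extend([approx, detail])
    return np.stack(channels, axis=0)           # shape: (num_channels, N)

rx = (np.random.randn(256) + 1j * np.random.randn(256)) / np.sqrt(2)
features = wavelet_feature_tensor(rx)
print(features.shape)                           # (10, 256) for level=2
```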

23 pages, 3132 KB  
Article
Symmetry-Aware Superpixel-Enhanced Few-Shot Semantic Segmentation
by Lan Guo, Xuyang Li, Jinqiang Wang, Yuqi Tong, Jie Xiao, Rui Zhou, Ling-Huey Li, Qingguo Zhou and Kuan-Ching Li
Symmetry 2025, 17(10), 1726; https://doi.org/10.3390/sym17101726 - 14 Oct 2025
Viewed by 67
Abstract
Few-Shot Semantic Segmentation (FSS) faces significant challenges in modeling complex backgrounds and maintaining prediction consistency due to limited training samples. Existing methods oversimplify backgrounds as single negative classes and rely solely on pixel-level alignments. To address these issues, we propose a symmetry-aware superpixel-enhanced FSS framework with a symmetric dual-branch architecture that explicitly models the superpixel region-graph in both the support and query branches. First, top–down cross-layer fusion injects low-level edge and texture cues into high-level semantics to build a more complete representation of complex backgrounds, improving foreground–background separability and boundary quality. Second, images are partitioned into superpixels and aggregated into “superpixel tokens” to construct a Region Adjacency Graph (RAG). Support-set prototypes are used to initialize query-pixel predictions, which are then projected into the superpixel space for cross-image prototype alignment with support superpixels. We further perform message passing/energy minimization on the RAG to enhance intra-region consistency and boundary adherence, and finally back-project the predictions to the pixel space. Lastly, by aggregating homogeneous semantic information, we construct robust foreground and background prototype representations, enhancing the model’s ability to perceive both seen and novel targets. Extensive experiments on the PASCAL-5^i and COCO-20^i benchmarks demonstrate that our proposed model achieves superior segmentation performance over the baseline and remains competitive with existing FSS methods. Full article
(This article belongs to the Special Issue Symmetry in Process Optimization)
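A minimal sketch of the superpixel-graph step described in the abstract: per-pixel class scores are pooled into superpixel tokens, a Region Adjacency Graph is built from 4-connected superpixel boundaries, a few averaging message-passing iterations are run, and the result is back-projected to pixels. This is a plain NumPy stand-in (random labels and scores, a fixed blending factor), not the authors' implementation.

```python
# Hypothetical sketch: message passing over a superpixel Region Adjacency Graph (RAG)
# to smooth query-pixel class scores; the labels/scores stand in for SLIC output and
# the prototype-initialized predictions described in the abstract.
import numpy as np

def rag_message_passing(sp_labels, pixel_scores, steps=3, alpha=0.5):
    n_sp = sp_labels.max() + 1
    n_cls = pixel_scores.shape[-1]

    # aggregate pixel scores into superpixel "tokens"
    tokens = np.zeros((n_sp, n_cls))
    counts = np.bincount(sp_labels.ravel(), minlength=n_sp)
    for c in range(n_cls):
        tokens[:, c] = np.bincount(sp_labels.ravel(),
                                   weights=pixel_scores[..., c].ravel(),
                                   minlength=n_sp) / np.maximum(counts, 1)

    # build the RAG from 4-connected superpixel boundaries
    adj = np.zeros((n_sp, n_sp))
    for a, b in [(sp_labels[:, :-1], sp_labels[:, 1:]),
                 (sp_labels[:-1, :], sp_labels[1:, :])]:
        mask = a != b
        adj[a[mask], b[mask]] = adj[b[mask], a[mask]] = 1.0

    # message passing: blend each token with the mean of its neighbours
    deg = np.maximum(adj.sum(1, keepdims=True), 1)
    for _ in range(steps):
        tokens = (1 - alpha) * tokens + alpha * (adj @ tokens) / deg

    return tokens[sp_labels]          # back-project to pixel space

labels = np.random.randint(0, 50, size=(64, 64))
scores = np.random.rand(64, 64, 2)
print(rag_message_passing(labels, scores).shape)   # (64, 64, 2)
```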

26 pages, 1076 KB  
Article
NL-COMM: Enabling High-Performing Next-Generation Networks via Advanced Non-Linear Processing
by Chathura Jayawardena, George Ntavazlis Katsaros and Konstantinos Nikitopoulos
Future Internet 2025, 17(10), 447; https://doi.org/10.3390/fi17100447 - 30 Sep 2025
Viewed by 244
Abstract
Future wireless networks are expected to deliver enhanced spectral efficiency while being energy efficient. MIMO and other non-orthogonal transmission schemes, such as non-orthogonal multiple access (NOMA), offer substantial theoretical spectral efficiency gains. However, these gains have yet to translate into practical deployments, largely due to limitations in current signal processing methods. Linear transceiver processing, though widely adopted, fails to fully exploit non-orthogonal transmissions, forcing massive MIMO systems to use a disproportionately large number of RF chains for relatively few streams, increasing power consumption. Non-linear processing can unlock the full potential of non-orthogonal schemes but is hindered by high computational complexity and integration challenges. Moreover, existing message-passing receivers for NOMA depend on specially designed sparse signals, limiting resource allocation flexibility and efficiency. This work presents NL-COMM, an efficient non-linear processing framework that translates the theoretical gains of non-orthogonal transmissions into practical benefits for both the uplink and downlink. NL-COMM delivers over 200% spectral efficiency gains, enables 50% reductions in antennas and RF chains (and thus base station power consumption), and increases concurrently supported users by 450%. In distributed MIMO deployments, the antenna reduction halves fronthaul bandwidth requirements, mitigating a key system bottleneck. Furthermore, NL-COMM offers the flexibility to unlock new NOMA schemes. Finally, we present both hardware and software architectures for NL-COMM that support massively parallel execution, demonstrating how advanced non-linear processing can be realized in practice to meet the demands of next-generation networks. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)

17 pages, 394 KB  
Article
Boosting Clean-Label Backdoor Attacks on Graph Classification
by Yadong Wang, Zhiwei Zhang, Ye Yuan and Guoren Wang
Electronics 2025, 14(18), 3632; https://doi.org/10.3390/electronics14183632 - 13 Sep 2025
Viewed by 419
Abstract
Graph Neural Networks (GNNs) have become a cornerstone for graph classification, yet their vulnerability to backdoor attacks remains a significant security concern. While clean-label attacks provide a stealthier approach by preserving original labels, they tend to be less effective in graph settings compared to traditional dirty-label methods. This performance gap arises from the inherent dominance of rich, benign structural patterns in target-class graphs, which overshadow the injected backdoor trigger during the GNNs’ learning process. We demonstrate that prior strategies, such as adversarial perturbations used in other domains to suppress benign features, fail in graph settings due to the amplification effects of the GNNs’ message-passing mechanism. To address this issue, we propose two strategies aimed at enabling the model to better learn backdoor features. First, we introduce a long-distance trigger injection method, placing trigger nodes at topologically distant locations. This enhances the global propagation of the backdoor signal while interfering with the aggregation of native substructures. Second, we propose a vulnerability-aware sample selection method, which identifies graphs that contribute more to the success of the backdoor attack based on low model confidence or frequent forgetting events. We conduct extensive experiments on benchmark datasets such as NCI1, NCI109, Mutagenicity, and ENZYMES, demonstrating that our approach significantly improves attack success rates (ASRs) while maintaining a low clean accuracy drop (CAD) compared to existing methods. This work offers valuable insights into manipulating the competition between benign and backdoor features in graph-structured data. Full article
(This article belongs to the Special Issue Security and Privacy for AI)
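The "long-distance trigger injection" idea can be pictured with a small NetworkX sketch that greedily picks trigger anchor nodes that are pairwise far apart in hop distance (a farthest-point heuristic). The graph generator, the number of anchors, and the distance criterion are illustrative assumptions, and the vulnerability-aware sample selection step is not shown.

```python
# Hypothetical sketch: greedy selection of topologically distant anchor nodes for
# trigger injection, using shortest-path (hop) distance as the notion of "long distance".
import networkx as nx

def distant_trigger_anchors(graph, k=3):
    """Greedily pick k nodes that are pairwise far apart in hop distance."""
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    # start from a node of maximum eccentricity (an endpoint of a long shortest path)
    ecc = nx.eccentricity(graph)
    anchors = [max(ecc, key=ecc.get)]
    while len(anchors) < k:
        # farthest-point heuristic: maximize distance to the closest chosen anchor
        candidate = max(
            (n for n in graph.nodes if n not in anchors),
            key=lambda n: min(dist[n][a] for a in anchors),
        )
        anchors.append(candidate)
    return anchors

g = nx.barabasi_albert_graph(30, 2, seed=0)   # stand-in for a target-class graph
print(distant_trigger_anchors(g, k=3))
```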

19 pages, 1407 KB  
Article
Eigenvector Distance-Modulated Graph Neural Network: Spectral Weighting for Enhanced Node Classification
by Ahmed Begga, Francisco Escolano and Miguel Ángel Lozano
Mathematics 2025, 13(17), 2895; https://doi.org/10.3390/math13172895 - 8 Sep 2025
Viewed by 472
Abstract
Graph Neural Networks (GNNs) face significant challenges in node classification across diverse graph structures. Traditional message passing mechanisms often fail to adaptively weight node relationships, thereby limiting performance in both homophilic and heterophilic graph settings. We propose the Eigenvector Distance-Modulated Graph Neural Network (EDM-GNN), which enhances message passing by incorporating spectral information from the graph’s eigenvectors. Our method introduces a novel weighting scheme that modulates information flow based on a combined similarity measure. This measure balances feature-based similarity with structural similarity derived from eigenvector distances. This approach creates a more discriminative aggregation process that adapts to the underlying graph topology. It does not require prior knowledge of homophily characteristics. We implement a hierarchical neighborhood aggregation framework that utilizes these spectral weights across multiple powers of the adjacency matrix. Experimental results on benchmark datasets demonstrate that EDM-GNN achieves competitive performance with state-of-the-art methods across both homophilic and heterophilic settings. Our approach provides a unified solution for node classification problems with strong theoretical foundations in spectral graph theory and significant empirical improvements in classification accuracy. Full article
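A hypothetical NumPy sketch of the weighting idea: edge weights blend feature cosine similarity with a similarity derived from distances between the graph's low-frequency Laplacian eigenvectors, and one aggregation step uses the modulated, row-normalized weights. The number of eigenvectors and the blending factor are assumed, and the hierarchical aggregation over multiple adjacency powers is omitted.

```python
# Hypothetical sketch: edge weights that blend feature similarity with a spectral
# (Laplacian-eigenvector) distance, followed by one weighted aggregation step.
import numpy as np

def edm_weights(adj, feats, k=4, beta=0.5):
    deg = adj.sum(1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt      # normalized Laplacian
    _, eigvecs = np.linalg.eigh(lap)
    spec = eigvecs[:, :k]                                        # k smoothest eigenvectors

    # feature cosine similarity and spectral proximity, blended per edge
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    feat_sim = f @ f.T
    spec_dist = np.linalg.norm(spec[:, None, :] - spec[None, :, :], axis=-1)
    w = adj * (beta * feat_sim + (1 - beta) * np.exp(-spec_dist))

    # row-normalize and aggregate neighbour features with the modulated weights
    w = w / np.maximum(w.sum(1, keepdims=True), 1e-12)
    return w @ feats

adj = (np.random.rand(20, 20) < 0.2).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T
feats = np.random.randn(20, 8)
print(edm_weights(adj, feats).shape)                             # (20, 8)
```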

20 pages, 7914 KB  
Article
Channel Estimation for Intelligent Reflecting Surface Empowered Coal Mine Wireless Communication Systems
by Yang Liu, Kaikai Guo, Xiaoyue Li, Bin Wang and Yanhong Xu
Entropy 2025, 27(9), 932; https://doi.org/10.3390/e27090932 - 4 Sep 2025
Viewed by 599
Abstract
The confined space of coal mines, characterized by curved tunnels with rough surfaces and a variety of deployed production equipment, induces severe signal attenuation and interruption, which significantly degrades the accuracy of conventional channel estimation algorithms applied in coal mine wireless communication systems. To address these challenges, we propose a modified Bilinear Generalized Approximate Message Passing (mBiGAMP) algorithm enhanced by intelligent reflecting surface (IRS) technology to improve channel estimation accuracy in coal mine scenarios. Due to the presence of abundant coal-carrying belt conveyors, we establish a hybrid channel model integrating both fast-varying and quasi-static components to accurately model the unique propagation environment in coal mines. Specifically, the fast-varying channel captures the varying signal paths affected by moving conveyors, while the quasi-static channel represents stable direct links. Since this hybrid structure necessitates an augmented factor graph, we introduce two additional factor nodes and variable nodes to characterize the distinct message-passing behaviors and then rigorously derive the mBiGAMP algorithm. Simulation results demonstrate that the proposed mBiGAMP algorithm achieves superior channel estimation accuracy in dynamic conveyor-affected coal mine scenarios compared with other state-of-the-art methods, showing significant improvements in both separated and cascaded channel estimation. Specifically, at an NMSE of 10⁻³, the SNR of mBiGAMP is improved by approximately 5 dB, 6 dB, and 14 dB compared with the Dual-Structure Orthogonal Matching Pursuit (DS-OMP), Parallel Factor (PARAFAC), and Least Squares (LS) algorithms, respectively. We also verify the convergence behavior of the proposed mBiGAMP algorithm across the operational signal-to-noise ratio range. Furthermore, we investigate the impact of the number of pilots on the channel estimation performance, which reveals that the proposed mBiGAMP algorithm requires fewer pilots than other methods to accurately recover channel state information while preserving estimation fidelity. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)

15 pages, 3863 KB  
Proceeding Paper
Fast Parallel Gaussian Filter Based on Partial Sums
by Atanaska Bosakova-Ardenska, Hristina Andreeva and Ivan Halvadzhiev
Eng. Proc. 2025, 104(1), 1; https://doi.org/10.3390/engproc2025104001 - 21 Aug 2025
Viewed by 408
Abstract
As a convolution operation in the spatial domain, Gaussian filtering involves a large number of computations, a number that grows with both image size and kernel size. Finding methods to accelerate such computations therefore matters for overall time complexity, and the current paper proposes the use of partial sums to achieve this acceleration. The MPI (Message Passing Interface) library and the C programming language are used for the parallel implementation of Gaussian filtering based on 1D and 2D kernels, with and without partial sums, followed by a theoretical and practical evaluation of the effectiveness of the proposed implementations. The experimental results indicate a significant acceleration of the computational process when partial sums are used in both sequential and parallel processing. The PSNR (Peak Signal-to-Noise Ratio) metric is used to assess the filtering quality of the proposed algorithms in comparison with the MATLAB implementation of Gaussian filtering, and the time performance of the proposed algorithms is also evaluated. Full article
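A simplified mpi4py sketch of the parallel decomposition only (not the paper's partial-sum optimization, and in Python rather than C): the image is split into row bands, each rank convolves its rows with a 1D Gaussian kernel, and rank 0 reassembles the result; the vertical pass and halo exchange are left out.

```python
# Hypothetical sketch: row-parallel 1D Gaussian filtering with mpi4py.
# Run with e.g.:  mpiexec -n 4 python gaussian_mpi.py
import numpy as np
from mpi4py import MPI

def gaussian_kernel_1d(sigma, radius=None):
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

image = np.random.rand(512, 512) if rank == 0 else None
blocks = np.array_split(image, size, axis=0) if rank == 0 else None
block = comm.scatter(blocks, root=0)               # each rank gets a band of rows

kernel = gaussian_kernel_1d(sigma=2.0)
filtered = np.array([np.convolve(row, kernel, mode="same") for row in block])

result = comm.gather(filtered, root=0)
if rank == 0:
    out = np.vstack(result)                        # horizontal pass only; a vertical
    print(out.shape)                               # pass would need halo rows exchanged
```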

26 pages, 2688 KB  
Article
Improved Parallel Differential Evolution Algorithm with Small Population for Multi-Period Optimal Dispatch Problem of Microgrids
by Tianle Li, Yifei Li, Fang Wang, Cheng Gong, Jingrui Zhang and Hao Ma
Energies 2025, 18(14), 3852; https://doi.org/10.3390/en18143852 - 19 Jul 2025
Cited by 3 | Viewed by 446
Abstract
Microgrids have drawn attention due to their usefulness in the development of renewable energy. An optimal power dispatch scheme must be devised for each micro-source in a microgrid in order to make the best use of fluctuating and unpredictable renewable energy. However, the computational time for solving the optimal dispatch problem grows greatly as the grid structure becomes more complex. An improved parallel differential evolution (PDE) approach based on the message-passing interface (MPI) is proposed for the optimal dispatch problem of a microgrid (MG), reducing the consumed time effectively without degrading the quality of the obtained solution. In the new approach, the main population of the parallel algorithm is divided into several small populations, each of which performs the original operators of the differential evolution algorithm, i.e., mutation, crossover, and selection, in different processes concurrently. Gather and scatter operations are employed after several iterations to enhance population diversity. Improvements to mutation and adaptive parameters and the introduction of a migration operation are also proposed. Two test systems are employed to verify and evaluate the proposed approach, and comparisons with traditional differential evolution are also reported. The results show that the proposed PDE algorithm can reduce the consumed time while obtaining solutions that are no worse. Full article
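A hypothetical mpi4py sketch of the scheme described above: each rank evolves a small subpopulation with DE/rand/1/bin, and every few generations the subpopulations are gathered, shuffled, and scattered back to restore diversity. The objective, population sizes, and control parameters are toy values, and the adaptive-parameter and migration improvements are not modeled.

```python
# Hypothetical sketch: MPI-parallel differential evolution with small subpopulations
# and periodic gather/scatter mixing. Run with e.g.:  mpiexec -n 4 python pde.py
import numpy as np
from mpi4py import MPI

def objective(x):                        # stand-in for the microgrid dispatch cost
    return np.sum(x**2)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
rng = np.random.default_rng(rank)

DIM, NP, F, CR, GENS, MIX_EVERY = 10, 8, 0.6, 0.9, 200, 25
pop = rng.uniform(-5, 5, size=(NP, DIM))
fit = np.array([objective(x) for x in pop])

for g in range(GENS):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                             # mutation
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True
        trial = np.where(cross, mutant, pop[i])              # crossover
        f_trial = objective(trial)
        if f_trial <= fit[i]:                                # selection
            pop[i], fit[i] = trial, f_trial

    if (g + 1) % MIX_EVERY == 0:                             # gather/scatter mixing
        all_pop = comm.gather(pop, root=0)
        if rank == 0:
            merged = np.vstack(all_pop)
            rng.shuffle(merged)
            all_pop = np.array_split(merged, size)
        pop = comm.scatter(all_pop, root=0)
        fit = np.array([objective(x) for x in pop])

best = comm.gather((fit.min(), pop[fit.argmin()]), root=0)
if rank == 0:
    print("best cost:", min(b[0] for b in best))
```

The lowercase gather/scatter calls pickle whole arrays, which is fine for small populations like these; larger problems would use the buffer-based Gatherv/Scatterv variants.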

12 pages, 1145 KB  
Article
Non-Iterative Reconstruction and Selection Network-Assisted Channel Estimation for mmWave MIMO Communications
by Jing Yang, Yabo Guo, Xinying Guo and Pengpeng Wang
Sensors 2025, 25(13), 4172; https://doi.org/10.3390/s25134172 - 4 Jul 2025
Viewed by 459
Abstract
Millimeter-wave (mmWave) MIMO systems have emerged as a key enabling technology for next-generation wireless networks, addressing the growing demand for ultra-high data rates through the utilization of wide bandwidths and large-scale antenna configurations. Beyond communication capabilities, these systems offer inherent advantages for integrated sensing applications, particularly in scenarios requiring precise object detection and localization. The sparse mmWave channel in the beamspace domain allows fewer radio-frequency (RF) chains by selecting dominant beams, boosting both communication efficiency and sensing resolution. However, existing channel estimation methods, such as learned approximate message passing (LAMP) networks, rely on computationally intensive iterations. This becomes particularly problematic in large-scale system deployments, where estimation inaccuracies can severely degrade sensing performance. To address these limitations, we propose a low-complexity channel estimator using a non-iterative reconstruction network (NIRNet) with a learning-based selection matrix (LSM). NIRNet employs a convolutional layer for efficient, non-iterative beamspace channel reconstruction, significantly reducing computational overhead compared to LAMP-based methods, which is vital for real-time sensing. The LSM generates a signal-aware Gaussian measurement matrix, outperforming traditional Bernoulli matrices, while a denoising network enhances accuracy under low SNR conditions, improving sensing resolution. Simulations show the NIRNet-based algorithm achieves a superior normalized mean squared error (NMSE) and an achievable sum rate (ASR) with lower complexity and reduced training overhead. Full article

17 pages, 2347 KB  
Article
Adaptive Damping Log-Domain Message-Passing Algorithm for FTN-OTFS in V2X Communications
by Hui Xu, Chaorong Zhang, Qingying Wu, Benjamin K. Ng and Chan-Tong Lam
Sensors 2025, 25(12), 3692; https://doi.org/10.3390/s25123692 - 12 Jun 2025
Cited by 1 | Viewed by 658
Abstract
To enable highly reliable and spectrum-efficient vehicle-to-everything (V2X) communications under conditions with severe Doppler effects and rapidly time-varying channels, we propose a novel faster-than-Nyquist orthogonal time frequency space (FTN-OTFS) modulation scheme. In this scheme, FTN signaling is integrated with spectrally efficient frequency division multiplexing (SEFDM) within the OTFS framework, enabling a higher symbol-transmission density within a fixed time–frequency resource block and thus enhancing spectral efficiency without increasing the occupied bandwidth. An analytical input–output model is derived in both the delay–Doppler and time–frequency domains. To further enhance numerical stability, an improved detection algorithm called adaptive damping log-domain message-passing (ADL-MP) is developed for the proposed scheme. Simulation results demonstrate that the proposed scheme achieves robust and reliable performance in high-mobility scenarios and that the proposed algorithm consistently outperforms conventional methods in terms of bit error rate (BER) under both the extended vehicular A (EVA) model and the high-speed train (HST) scenario, confirming its effectiveness and superiority for V2X communications. Full article
(This article belongs to the Section Communications)
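A minimal sketch of one damped log-domain message update, assuming geometric damping of log-messages and a simple oscillation test that halves the damping factor; the actual adaptive schedule used by ADL-MP may differ.

```python
# Hypothetical sketch: one damped log-domain message update with a crude adaptive rule.
import numpy as np
from scipy.special import logsumexp

def damped_log_update(m_old, m_new_raw, delta_prev, d=0.7):
    # normalize log-messages so they represent proper log-probabilities
    m_new_raw = m_new_raw - logsumexp(m_new_raw)
    m_old = m_old - logsumexp(m_old)

    delta = np.linalg.norm(m_new_raw - m_old)
    if delta > delta_prev:                  # oscillation detected -> damp more heavily
        d *= 0.5
    m = (1 - d) * m_old + d * m_new_raw     # geometric (log-domain) damping
    return m - logsumexp(m), delta, d

m_old = np.log(np.array([0.25, 0.25, 0.25, 0.25]))
m_raw = np.array([-0.2, -1.5, -2.0, -3.0])
m, delta, d = damped_log_update(m_old, m_raw, delta_prev=np.inf)
print(np.exp(m))                            # damped, normalized symbol probabilities
```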

22 pages, 2191 KB  
Review
Towards Efficient HPC: Exploring Overlap Strategies Using MPI Non-Blocking Communication
by Yuntian Zheng and Jianping Wu
Mathematics 2025, 13(11), 1848; https://doi.org/10.3390/math13111848 - 2 Jun 2025
Viewed by 1382
Abstract
As high-performance computing (HPC) platforms continue to scale up, communication costs have become a critical bottleneck affecting overall application performance. An effective strategy to overcome this limitation is to overlap communication with computation. The Message Passing Interface (MPI), as the de facto standard for communication in HPC, provides non-blocking communication primitives that make such overlapping feasible. By enabling asynchronous communication, non-blocking operations reduce the idle time of cores caused by data transfer delays, thereby improving resource utilization. Overlapping communication with computation is particularly important for enhancing the performance of large-scale scientific applications, such as numerical simulations, climate modeling, and other data-intensive tasks. However, achieving efficient overlapping is non-trivial and depends not only on advances in hardware technologies such as Remote Direct Memory Access (RDMA), but also on well-designed and optimized MPI implementations. This paper presents a comprehensive survey of the principles of MPI non-blocking communication, the core techniques for achieving computation–communication overlap, and some representative applications in scientific computing. Alongside the survey, we include a preliminary experimental study evaluating the effectiveness of the asynchronous progress mechanism on modern HPC platforms, to support HPC researchers and practitioners in developing parallel programs. Full article
(This article belongs to the Special Issue Numerical Analysis and Algorithms for High-Performance Computing)
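A small mpi4py example of the overlap pattern the survey discusses: boundary values are exchanged with non-blocking Isend/Irecv while the interior computation proceeds, and Waitall completes the communication afterwards. The ring topology and array sizes are illustrative; whether real overlap occurs still depends on the MPI library's asynchronous progress, which is exactly the point the survey evaluates.

```python
# Hypothetical sketch: overlapping a halo exchange with interior computation using
# MPI non-blocking primitives. Run with e.g.:  mpiexec -n 4 python overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

local = np.full(1_000_000, float(rank))
halo_from_left = np.empty(1)
halo_from_right = np.empty(1)

# post non-blocking sends/receives for the boundary values
reqs = [
    comm.Isend(local[:1],  dest=left,  tag=0),
    comm.Isend(local[-1:], dest=right, tag=1),
    comm.Irecv(halo_from_right, source=right, tag=0),
    comm.Irecv(halo_from_left,  source=left,  tag=1),
]

interior = local[1:-1].sum()          # computation that does not need the halos
MPI.Request.Waitall(reqs)             # communication finishes "behind" the compute

total = interior + local[0] + local[-1] + halo_from_left[0] + halo_from_right[0]
print(f"rank {rank}: local+halo sum = {total:.0f}")
```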

16 pages, 3393 KB  
Article
Enhanced Channel Estimation for RIS-Assisted OTFS Systems by Introducing ELM Network
by Mintao Zhang, Zhiying Liu, Li Wang, Wenquan Hu and Chaojin Qing
Sensors 2025, 25(11), 3292; https://doi.org/10.3390/s25113292 - 23 May 2025
Viewed by 1167
Abstract
In high-mobility communication scenarios, leveraging reconfigurable intelligent surfaces (RISs) to assist orthogonal time frequency space (OTFS) systems proves advantageous. Nevertheless, the integration of RIS into OTFS systems increases the complexity of channel estimation (CE). Utilizing the benefits of machine learning (ML) to address such intricate issues holds the potential to reduce CE complexity. Despite this potential, ML-based CE in RIS-assisted OTFS systems has received little investigation, leaving significant gaps and posing challenges for intelligent applications. Moreover, ML-based CE methods encounter numerous difficulties, including intricate parameter tuning and long training times. Motivated by the inherent advantages of the single-hidden-layer feed-forward network structure, we introduce the extreme learning machine (ELM) into RIS-assisted OTFS systems to improve CE accuracy. In this method, we incorporate a threshold-based approach to extract initial features, aiming to remedy the inherent limitations of the ELM network, such as its much smaller number of network parameters compared with deep learning networks. This initial feature extraction contributes to an enhanced ELM learning ability, leading to improved CE accuracy. Applying the classic message passing algorithm for data symbol detection, simulation results demonstrate the effectiveness of the proposed method in improving the symbol detection (SD) performance of RIS-assisted OTFS systems. Furthermore, the SD performance exhibits robustness against variations in modulation order, maximum velocity, and the number of sub-surfaces. Full article
(This article belongs to the Section Communications)
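Since the method builds on the standard ELM structure, here is a generic NumPy sketch of that structure (random hidden layer, closed-form output weights via a pseudo-inverse) applied to a toy regression problem; the threshold-based initial feature extraction and the OTFS-specific inputs are not modeled.

```python
# Hypothetical sketch: a generic extreme learning machine (ELM) regressor.
import numpy as np

class ELM:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))   # fixed random hidden layer
        self.b = rng.standard_normal(n_hidden)
        self.beta = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ Y                # closed-form output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy regression: recover a noisy linear map (a stand-in for coarse-to-refined CE features)
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 16))
Y = X @ rng.standard_normal((16, 8)) + 0.01 * rng.standard_normal((500, 8))
model = ELM(16, 128).fit(X[:400], Y[:400])
print(f"test MSE: {np.mean((model.predict(X[400:]) - Y[400:])**2):.4f}")
```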

26 pages, 5240 KB  
Article
Extending LoRaWAN: Mesh Architecture and Performance Analysis for Long-Range IoT Connectivity in Maritime Environments
by Nuno Cruz, Carlos Mendes, Nuno Cota, Gonçalo Esteves, João Pinelo, João Casaleiro, Rafael Teixeira and Leonor Lobo
Systems 2025, 13(5), 381; https://doi.org/10.3390/systems13050381 - 15 May 2025
Cited by 1 | Viewed by 1603
Abstract
A LoRaWAN application architecture comprises three functional components: (i) nodes, which convert and wirelessly transmit data as LoRaWAN messages; (ii) gateways, which receive and forward these transmissions; and (iii) network servers, which process the received data for application delivery. The nodes convert data into LoRaWAN messages and transmit them wirelessly with the hope that one or more LoRaWAN gateways will receive the messages successfully. Then, the gateways pass on the received messages to a distant network server, where various processing steps occur before the messages are forwarded to the end application. If none of the gateways can receive the messages, then they will be lost. Although this default behaviour is suitable for some applications, there are others where ensuring messages are successfully delivered at a higher rate would be helpful. One such scenario is the application in this paper: monitoring maritime vessels and fishing equipment in offshore environments characterised by intermittent or absent shore connectivity. To address this challenge, the Custodian project was initiated to develop a maritime monitoring solution with enhanced connectivity capabilities. Two additional features are especially welcome in this scenario. The most important feature is the transmission of messages created in offshore areas to end users who are offshore, regardless of the unavailability of the ground network server. An example would be fishermen who are offshore and wish to position their fishing equipment, also offshore, based on location data transmitted from nodes via LoRaWAN, even when both entities are far away from the mainland. The second aspect concerns the potential use of gateway-to-gateway communications, through gateways on various ships, to transmit messages to the coast. This setup enables fishing gear and fishing vessels to be monitored from the coast, even in the absence of a direct connection. The functional constraints of conventional commercial gateways necessitated the conceptualisation and implementation of C-Mesh, a novel relay architecture that extends LoRaWAN functionality beyond standard protocol implementations. The C-Mesh integrates with the Custodian ecosystem, alongside C-Beacon and C-Point devices, while maintaining transparent compatibility with standard LoRaWAN infrastructure components through protocol-compliant gateway emulation. Thus, compatibility with both commercially available nodes and gateways and those already in deployment is guaranteed. We provide a comprehensive description of C-Mesh, describing its hardware architecture (communications, power, and self-monitoring abilities) and data processing ability (filtering duplicate messages, security, and encryption). Sea trials carried out on board a commercial fishing vessel in Sesimbra, Portugal, proved C-Mesh to be effective. Location messages derived from fishing gear left at sea were received by an end user aboard the fishing vessel, independently of the network server on land. Additionally, field tests demonstrated that a single C-Mesh deployment functioning as a signal repeater on a vessel with an antenna elevation of 15 m above sea level achieved a quantifiable coverage extension of 13 km (representing a 20% increase in effective transmission range), demonstrating the capacity of C-Mesh to increase LoRaWAN’s coverage. Full article
(This article belongs to the Special Issue Integration of Cybersecurity, AI, and IoT Technologies)
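One of the listed data-processing duties, duplicate filtering, can be sketched as a bounded cache keyed on the LoRaWAN device address and uplink frame counter; the field names follow LoRaWAN conventions, but the cache size, eviction policy, and everything else here are assumptions rather than C-Mesh's actual logic.

```python
# Hypothetical sketch: relay-side duplicate filtering keyed on (DevAddr, FCnt).
from collections import OrderedDict

class DedupFilter:
    def __init__(self, capacity=10_000):
        self.seen = OrderedDict()
        self.capacity = capacity

    def should_forward(self, dev_addr: str, fcnt: int) -> bool:
        key = (dev_addr, fcnt)
        if key in self.seen:
            return False                      # already relayed this uplink
        self.seen[key] = True
        if len(self.seen) > self.capacity:    # bounded memory: evict the oldest entry
            self.seen.popitem(last=False)
        return True

f = DedupFilter()
print(f.should_forward("26011F2A", 42))       # True  -> forward
print(f.should_forward("26011F2A", 42))       # False -> duplicate, drop
```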

35 pages, 11134 KB  
Article
Error Classification and Static Detection Methods in Tri-Programming Models: MPI, OpenMP, and CUDA
by Saeed Musaad Altalhi, Fathy Elbouraey Eassa, Sanaa Abdullah Sharaf, Ahmed Mohammed Alghamdi, Khalid Ali Almarhabi and Rana Ahmad Bilal Khalid
Computers 2025, 14(5), 164; https://doi.org/10.3390/computers14050164 - 28 Apr 2025
Viewed by 870
Abstract
The growing adoption of supercomputers across various scientific disciplines, particularly by researchers without a background in computer science, has intensified the demand for parallel applications. These applications are typically developed using a combination of programming models within languages such as C, C++, and Fortran. However, modern multi-core processors and accelerators necessitate fine-grained control to achieve effective parallelism, complicating the development process. To address this, developers commonly utilize high-level programming models such as Open Multi-Processing (OpenMP), Open Accelerators (OpenACCs), Message Passing Interface (MPI), and Compute Unified Device Architecture (CUDA). These models may be used independently or combined into dual- or tri-model applications to leverage their complementary strengths. However, integrating multiple models introduces subtle and difficult-to-detect runtime errors such as data races, deadlocks, and livelocks that often elude conventional compilers. This complexity is exacerbated in applications that simultaneously incorporate MPI, OpenMP, and CUDA, where the origin of runtime errors, whether from individual models, user logic, or their interactions, becomes ambiguous. Moreover, existing tools are inadequate for detecting such errors in tri-model applications, leaving a critical gap in development support. To address this gap, the present study introduces a static analysis tool designed specifically for tri-model applications combining MPI, OpenMP, and CUDA in C++-based environments. The tool analyzes source code to identify both actual and potential runtime errors prior to execution. Central to this approach is the introduction of error dependency graphs, a novel mechanism for systematically representing and analyzing error correlations in hybrid applications. By offering both error classification and comprehensive static detection, the proposed tool enhances error visibility and reduces manual testing effort. This contributes significantly to the development of more robust parallel applications for high-performance computing (HPC) and future exascale systems. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

15 pages, 4228 KB  
Article
Combining the Viterbi Algorithm and Graph Neural Networks for Efficient MIMO Detection
by Thien An Nguyen, Xuan-Toan Dang, Oh-Soon Shin and Jaejin Lee
Electronics 2025, 14(9), 1698; https://doi.org/10.3390/electronics14091698 - 22 Apr 2025
Viewed by 932
Abstract
In the advancement of wireless communication, multiple-input, multiple-output (MIMO) detection has emerged as a promising technique to meet the high throughput requirements of 6G networks. Traditionally, MIMO detection relies on conventional algorithms, such as zero forcing and minimum mean square error, to mitigate interference and enhance the desired signal. Mathematically, these algorithms operate as linear transformations or functions of received signals. To further enhance MIMO detection performance, researchers have explored the use of nonlinear transformations and functions by leveraging deep learning structures and models. In this paper, we propose a novel model that integrates the Viterbi algorithm (VA) with a graph neural network (GNN) to improve signal detection in MIMO systems. Our approach begins by detecting the received signal using the VA, whose output serves as the initial input for the GNN model. Within the GNN framework, the initial signal and the received signal are represented as nodes, while the MIMO channel structure defines the edges. Through an iterative message-passing mechanism, the GNN progressively refines the initial signal, enhancing its accuracy to better approximate the originally transmitted signal. Experimental results demonstrate that the proposed model outperforms conventional and existing approaches, leading to superior detection performance. Full article
(This article belongs to the Special Issue New Trends in Next-Generation Wireless Transmissions)
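As a stand-in for the learned refinement stage (the Viterbi front end and trained GNN are omitted), the sketch below iteratively corrects an initial estimate using the channel structure, which conveys the "refine an initial signal toward the transmitted one" idea in plain NumPy; it is not the proposed model.

```python
# Hypothetical sketch: iterative refinement of an initial MIMO symbol estimate using
# the channel matrix (a residual-correction loop standing in for learned message passing).
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr = 4, 8
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
x_true = rng.choice([-1, 1], Nt) + 1j * rng.choice([-1, 1], Nt)      # QPSK symbols
y = H @ x_true + 0.1 * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))

x = np.zeros(Nt, dtype=complex)             # stand-in for the detector's initial output
step = 1.0 / np.linalg.norm(H, 2) ** 2      # safe step size for the correction loop
for _ in range(50):
    x = x + step * H.conj().T @ (y - H @ x)     # refine toward the transmitted signal

detected = np.sign(x.real) + 1j * np.sign(x.imag)
print("symbol errors:", np.count_nonzero(detected != x_true))
```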