Search Results (177)

Search Parameters:
Keywords = LDPC codes

24 pages, 1950 KB  
Article
Joint Optimization of Four-Edge Type LDPC Codes with Symmetric Decoding Structure Based on EXIT Functions
by Ying You, Guodong Su and Weiwei Lin
Symmetry 2026, 18(5), 794; https://doi.org/10.3390/sym18050794 - 6 May 2026
Viewed by 141
Abstract
Four-edge type low-density parity-check (FET-LDPC) codes, as an important subclass of multi-edge type LDPC codes, offer greater design flexibility and performance potential due to their heterogeneous edge type structure. However, their multi-dimensional degree distribution significantly increases the complexity of optimization. This paper proposes a joint optimization framework for FET-LDPC codes leveraging the symmetric decoding structure inherent in their dual-branch architecture. The main contributions are as follows. First, an improved decoding model is established to analyze the mutual information transmission among the four edge types during iterative decoding, where the symmetry between the accumulator (ACC) and single parity-check (SPC) branches facilitates balanced information exchange. Second, a full-dimensional extrinsic information transfer (FEXIT) chart suitable for FET-LDPC codes is constructed, capturing the mutual information flow across branches. Third, a collaborative optimization model is designed by integrating the FEXIT chart with multi-constraint linear programming (LP) to perform asymmetric optimization for different edge types. The simulation results show that the proposed method achieves significant performance improvement over unoptimized codes in additive white Gaussian noise (AWGN) channels, particularly at a bit error rate (BER) of 10⁻⁶.
(This article belongs to the Section Computer)
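The EXIT-chart machinery this abstract builds on can be illustrated with the conventional two-edge-type variable-node EXIT function; this is a generic sketch (not the paper's four-edge FEXIT construction), using the standard closed-form approximation of the J-function from the EXIT-chart literature. Function names are ours.

```python
import math

# Constants of the standard closed-form approximation of the J-function,
# which maps the std dev of a consistent-Gaussian LLR to mutual information.
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    """Mutual information carried by a consistent-Gaussian LLR with std dev sigma."""
    if sigma <= 0:
        return 0.0
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def J_inv(mi):
    """Inverse of J, computed by bisection (J is monotone increasing)."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if J(mid) < mi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def vnd_exit(I_A, dv, sigma_ch):
    """EXIT function of a degree-dv variable node: extrinsic mutual information
    given a-priori information I_A and channel-LLR std dev sigma_ch (AWGN)."""
    return J(math.sqrt((dv - 1) * J_inv(I_A) ** 2 + sigma_ch ** 2))
```

Plotting `vnd_exit` against the corresponding check-node EXIT curve and checking that a decoding tunnel stays open is the basic threshold test that multi-edge extensions such as the FEXIT chart generalize.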
20 pages, 1837 KB  
Article
The Equitable Coloring of Circulant Graphs
by Xiaoyu Jin, Guiying Yan and Weihua Yang
Symmetry 2026, 18(5), 774; https://doi.org/10.3390/sym18050774 - 30 Apr 2026
Viewed by 205
Abstract
A proper vertex coloring is equitable if the sizes of any two color classes differ by at most one. Any graph G with maximum degree Δ(G) admits an equitable (Δ(G)+1)-coloring, computable in O(Δ(G)n²) time for n vertices. A circulant graph G(n;D) is the graph with vertex set Zₙ in which two vertices x, y are adjacent if (x − y) mod n ∈ ±D. The partitioning problem in parallel decoding of multi-edge QC-LDPC codes can be interpreted as an equitable coloring problem. We prove upper bounds for the equitable chromatic number χ₌(G(n;D)) and develop equitable coloring algorithms, including pattern-based periodic coloring and step-based coloring. The proposed methods typically use fewer than Δ(G)+1 colors and have computational complexity lower than O(Δ(G)n²) for circulant graphs G(n;D) with small |D|.
(This article belongs to the Special Issue Mathematics: Feature Papers 2026)
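The idea of a pattern-based periodic coloring can be sketched for small circulant graphs as follows. This is a minimal illustration under our own naming, with a brute-force search over periods k rather than the paper's algorithm; coloring vertex i with i mod k yields color classes of size ⌊n/k⌋ or ⌈n/k⌉, so any proper coloring of this form is automatically equitable.

```python
def circulant_edges(n, D):
    """Edges of the circulant graph G(n; D) on vertex set Z_n:
    vertex i is adjacent to (i + d) mod n for every d in D."""
    return [(i, (i + d) % n) for i in range(n) for d in D]

def periodic_equitable_coloring(n, D):
    """Color vertex i with i mod k, for the smallest period k that is proper.

    Because the color classes are residue classes mod k, their sizes differ
    by at most one, so the coloring is equitable by construction.
    Returns (coloring, k).  k = n always works, so the loop terminates.
    """
    for k in range(2, n + 1):
        color = [i % k for i in range(n)]
        if all(color[u] != color[v] for u, v in circulant_edges(n, D)):
            return color, k
    raise ValueError("no proper periodic coloring found")
```

For example, G(12; {1, 2}) has Δ = 4, yet the periodic pattern with k = 3 already gives a proper equitable coloring, using fewer than Δ + 1 = 5 colors.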
32 pages, 4697 KB  
Article
Parameter-Coupled Offset Min-Sum Decoding with Edge-Type Differentiation for MET-LDPC Codes
by Ying You, Guodong Su and Weiwei Lin
Mathematics 2026, 14(8), 1352; https://doi.org/10.3390/math14081352 - 17 Apr 2026
Viewed by 240
Abstract
To improve the decoding performance of multi-edge type low-density parity-check (MET-LDPC) codes, this paper proposes an edge-type differentiated parameter coupling offset min-sum (EDPC-OMS) decoding algorithm. The contributions are threefold. First, we replace the traditional uniform compensation with edge-type differentiated compensation, resolving the mismatch between the decoding model and code structure. Second, we introduce a parameter coupling mechanism that enables joint optimization of multiple edge types while maintaining differentiated configurations. Third, a practically feasible design combining precomputation and look-up tables enables dynamic parameter adjustment with moderate additional overhead, achieving a favorable performance–complexity trade-off. Simulation results over additive white Gaussian noise (AWGN) channels and Rayleigh fading channels demonstrate that the proposed algorithm adaptively selects offset factors according to channel conditions and edge types without introducing significant computational complexity, effectively lowering the bit error rate and enhancing decoding capability.
(This article belongs to the Section E: Applied Mathematics)
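The baseline that EDPC-OMS refines is the standard offset min-sum check-node update with a single uniform offset β; a minimal sketch of that uniform-offset rule (not the paper's edge-type differentiated variant, whose per-edge-type offsets replace the single β below):

```python
def offset_min_sum_check(msgs, beta):
    """Offset min-sum check-to-variable update for one check node.

    msgs: incoming variable-to-check LLRs on the node's edges.
    beta: offset compensation factor (uniform across all edges here).
    Returns one outgoing LLR per edge, each computed from the *other*
    incoming messages: sign = product of their signs, magnitude =
    max(min |LLR| - beta, 0).
    """
    out = []
    for j in range(len(msgs)):
        others = msgs[:j] + msgs[j + 1:]
        sign = 1.0
        for m in others:
            sign = -sign if m < 0 else sign
        mag = max(min(abs(m) for m in others) - beta, 0.0)
        out.append(sign * mag)
    return out
```

An edge-type differentiated scheme would pass a different β per edge type instead of one scalar, which is the mismatch the abstract refers to.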
Show Figures

Figure 1

31 pages, 1954 KB  
Article
HASCom: A Heterogeneous Affective-Semantic Communication Framework for Speech Transmission
by Zhenjia Yu, Taojie Zhu, Md Arman Hossain, Zineb Zbarna and Lei Wang
Sensors 2026, 26(7), 2158; https://doi.org/10.3390/s26072158 - 31 Mar 2026
Viewed by 652
Abstract
Driven by the development of next-generation wireless networks and the widespread adoption of sensing, communication is shifting from traditional bit-level transmission to intelligent, rich interactions within our digital social system. However, existing speech semantic communication frameworks predominantly focus on textual accuracy, neglecting the critical affective information (e.g., tone and emotion) that is essential for natural human-centric interactions in the real world. To address this limitation, we propose the Heterogeneous Affective Speech Semantic Communication (HASCom) framework, designed for the robust transmission of highly expressive speech over complex wireless channels. Specifically, we design a heterogeneous dual-stream transmission architecture that decouples discrete phoneme-level linguistic content from continuous emotional embeddings. For discrete semantic information, we use reliable digital coding protected by Low-Density Parity-Check (LDPC) to guarantee strict recoverability. Conversely, for emotional features, we employ Deep Joint Source-Channel Coding (JSCC) analog transmission to prevent irreversible quantization errors and the cliff effect. Additionally, we develop a prior-guided diffusion reconstruction module at the receiving end. This module leverages a structural prior network to align the decoded semantics, which then steers the reverse diffusion process conditioned on the recovered affective features. Extensive experiments under both AWGN and Rayleigh fading channels demonstrate that HASCom significantly outperforms state-of-the-art baselines. Specifically, it achieves superior objective semantic similarity and subjective Mean Opinion Score (MOS) at low Signal-to-Noise Ratios (SNRs), while the JSCC transmission modules maintain an ultra-low inference latency of less than 0.1 ms, validating its high efficiency and robustness for practical deployments.
11 pages, 581 KB  
Article
Experimental Study of Alien Crosstalk Limits in Densely Bundled Commodity 10GBASE-T Ethernet Cables
by Aleksei Demin, Viktoriia Vasileva and Dmitrii Chaikovskii
Network 2026, 6(1), 14; https://doi.org/10.3390/network6010014 - 9 Mar 2026
Viewed by 495
Abstract
In the realm of high-speed Ethernet networks, alien crosstalk (AXT) significantly undermines the integrity and efficiency of data transmission. While existing works mostly focus on modeling and physical-layer mitigation techniques such as PAM16/DSQ128 modulation and LDPC coding, there is a lack of experimental evidence on how severe AXT affects commodity 10GBASE-T equipment in realistic, densely cabled installations. In this study, we assemble and evaluate an experimental testbed that emulates a highly adverse AXT environment by tightly bundling up to seven 60 m twisted-pair Ethernet cables and using only off-the-shelf 10GBASE-T network cards. We quantitatively characterize how increasing cable density leads to automatic speed downgrades, connection failures, and non-linear saturation of the aggregate throughput, and relate these effects to the observed link quality on individual ports. Our results demonstrate that, even in the presence of standard crosstalk mitigation and error-correction mechanisms, severe AXT can force commodity 10GBASE-T links to fall back from 10 Gbit/s to 1 Gbit/s or below. Based on these findings, we derive practical guidelines for dense-cabling deployments and identify key requirements for experimental testbeds that can more reliably quantify AXT severity and its impact on commodity 10GBASE-T link stability (rate fallback and link loss) under realistic conditions.
25 pages, 3084 KB  
Article
A Regional Message Scaling Min-Sum Decoding Algorithm for MET-LDPC Codes
by Ying You, Guodong Su and Weiwei Lin
Symmetry 2026, 18(3), 444; https://doi.org/10.3390/sym18030444 - 4 Mar 2026
Cited by 1 | Viewed by 328
Abstract
To offer multi-edge type low-density parity-check (MET-LDPC) codes with better performance, this paper proposes a regional message scaling min-sum (RMS) decoding algorithm which improves the performance of the traditional min-sum (MS) decoding algorithm and its modified versions. The contributions of this study are as follows. First, based on the edge-type topology of MET-LDPC codes, we fully exploit their inherent structural information to develop a cross-region decoding architecture by dynamically partitioning the edges of the Tanner graph into three functional regions. Second, we introduce cross-region message scaling (CMS) factors to establish an asymmetric information flow control mechanism, which adaptively regulates the intensity of information exchange across regions. Third, by integrating the multi-edge structure, the cross-region decoding architecture, and the asymmetric information flow control mechanism into a unified framework, we propose the RMS decoding algorithm tailored for MET-LDPC codes. For various code lengths, simulation results demonstrate that the proposed algorithm achieves a significantly lower error floor compared to the traditional MS decoding algorithm and its modified versions over the additive white Gaussian noise (AWGN) channel.
(This article belongs to the Section Computer)
17 pages, 2699 KB  
Article
Multiantenna NOMA with Finite Blocklength: A Pragmatic Paradigm for Ultra-Dense Networking
by Haoming Wang, Zhenzhen Zhang, Xinhao Wu and Bing Li
Entropy 2026, 28(3), 281; https://doi.org/10.3390/e28030281 - 1 Mar 2026
Viewed by 440
Abstract
This paper addresses the design and performance analysis of nonorthogonal multiple access (NOMA) for ultra-dense networking of the Internet of Things (IoT) based on low-power sensors. The proposed NOMA schemes consist of an Nr-antenna access point and K single-antenna sensors with K ≥ Nr. A power allocation technique and forward error correction (FEC) are combined to enable concurrent uplink transmission and the successful separation of all K sensors at the access point. In scenarios where K and Nr are large, large-dimensional analysis is employed to derive a deterministic expression for the received signal-to-interference-plus-noise ratio (SINR) within the finite-blocklength regime. Three distinct FEC codes are assessed: convolutional codes (CCs), polar codes, and low-density parity-check (LDPC) codes. These evaluations indicate that all three codes achieve near-capacity performance while supporting massive connectivity in the finite-blocklength context. Notably, convolutional codes demonstrate comparable performance with reduced complexity, a desirable attribute for prolonging the life cycle of wireless sensor network-based IoT applications.
(This article belongs to the Special Issue Next-Generation Multiple Access for Future Wireless Communications)
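The finite-blocklength benchmark that such code evaluations are typically measured against is the normal approximation of Polyanskiy, Poor, and Verdú. A sketch for the real AWGN channel follows; function names are ours, and the inverse Q-function is computed by bisection since the Python standard library lacks an inverse error function.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_inv(eps):
    """Inverse of Q via bisection (Q is monotone decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def normal_approx_rate(snr, n, eps):
    """Normal approximation of the maximal rate (bits/channel use) at
    blocklength n and block error probability eps, real AWGN channel:
    R ≈ C - sqrt(V/n) * Q^{-1}(eps) + log2(n) / (2n)."""
    C = 0.5 * math.log2(1.0 + snr)                                  # capacity
    V = (snr * (snr + 2.0)) / (2.0 * (snr + 1.0) ** 2) \
        * math.log2(math.e) ** 2                                    # dispersion
    return C - math.sqrt(V / n) * Q_inv(eps) + math.log2(n) / (2.0 * n)
```

At 0 dB SNR and n = 1000, for example, the achievable rate sits noticeably below the capacity of 0.5 bit/use, and the gap shrinks as n grows, which is the backdrop against which CC, polar, and LDPC codes are compared.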
17 pages, 2129 KB  
Article
A-SNNMS: An Attentive Shared Neural Normalized Min-Sum Decoder for LDPC Codes
by Fengquan Zheng, Liqian Wang, Kunfeng Liu and Zhiguo Zhang
Electronics 2026, 15(5), 1023; https://doi.org/10.3390/electronics15051023 - 28 Feb 2026
Viewed by 458
Abstract
To address the limitations of static message aggregation and training instability in the existing Shared Neural Normalized Min-Sum (SNNMS) algorithm, this paper proposes A-SNNMS, an attentive deep LDPC decoding network with adaptive training. First, an attention mechanism is introduced into the variable node update phase to dynamically weight incoming messages based on their reliability, effectively suppressing noise interference. Second, a collaborative training scheme incorporating an exponential decay adaptive learning rate and L2 regularization is designed to mitigate convergence oscillation and overfitting in long-code training. Simulation results for IEEE 802.16e standard codes demonstrate that A-SNNMS achieves a net coding gain of approximately 0.4 dB over the baseline SNNMS at a Bit Error Rate (BER) of 10⁻³. Furthermore, it achieves comparable performance with only 50% of the iterations required by the baseline. In conclusion, the A-SNNMS decoder significantly improves both decoding efficiency and system robustness, offering a promising solution for high-reliability communications.
20 pages, 1721 KB  
Article
RL-Based Parallel LDPC Decoding with Clustered Scheduling
by Yusuf Ozkan, Yauhen Yakimenka and Jörg Kliewer
Entropy 2026, 28(2), 215; https://doi.org/10.3390/e28020215 - 12 Feb 2026
Viewed by 499
Abstract
We propose a reinforcement learning (RL)-based decoding framework for high-throughput parallel decoding of low-density parity-check (LDPC) codes using clustered scheduling. Parallel LDPC decoders must balance error-correction performance and decoding latency while avoiding memory conflicts. To address this trade-off, we construct clusters of check nodes that satisfy a two-edge independence property, which enables conflict-free row-parallel belief propagation. An RL agent is trained offline to assign Q-values to clusters and to prioritize their update order during decoding. To overcome the exponential storage requirements of existing RL-based scheduling methods, we introduce the Q-Sum method, which approximates cluster-level Q-values as the sum of Q-values of individual check nodes, reducing storage complexity from exponential to linear in the number of check nodes. We further propose an On-the-Fly clustering strategy that enforces two-edge independence dynamically during decoding and provides additional flexibility when static clustering is not feasible. Simulation results for array-based LDPC codes over additive white Gaussian noise (AWGN) channels show that the proposed methods improve the latency-versus-performance trade-off of parallel LDPC decoders, achieving lower decoding latency and higher throughput while maintaining error rates comparable to state-of-the-art decoding methods.
(This article belongs to the Special Issue Coding Theory and Its Applications)
25 pages, 45647 KB  
Article
A Novel FEC Implementation for VSAT Terminals Using High-Level Synthesis
by Najmeh Khosroshahi, Ron Mankarious and Mohammad Reza Soleymani
Aerospace 2026, 13(2), 155; https://doi.org/10.3390/aerospace13020155 - 6 Feb 2026
Viewed by 456
Abstract
This paper presents a hardware-efficient field-programmable gate array (FPGA) implementation of a layered two-dimensional corrected normalized min-sum (2D-CNMS) decoder for quasi-cyclic low-density parity-check (QC-LDPC) codes in very small aperture terminal (VSAT) satellite communication systems. The decoder is described in C++ and synthesized using the Xilinx Vitis high-level synthesis (HLS) 2025 (AMD Xilinx, San Jose, CA, USA) tool, and then packaged and integrated as an intellectual property (IP) core within the Vivado Design Suite 2024 (AMD Xilinx, San Jose, CA, USA), enabling rapid prototyping and portability across FPGA platforms. Unlike conventional normalized min-sum (NMS) and two-dimensional normalized min-sum (2D-NMS) architectures, the proposed 2D-CNMS scheme employs dyadic, multiplier-free normalization combined with two-level magnitude correction, achieving near sum-product performance with reduced complexity and latency. The design is implemented on a Zynq UltraScale+ multiprocessor system-on-chip (MPSoC) (AMD Xilinx, San Jose, CA, USA) and supports real-time operation with a throughput of 29–41 Mbps at 100 MHz, while using only 9.6–22.4 k look-up tables (LUTs), 2.1–5.9 k flip-flops (FFs), and no digital signal processing (DSP) slices or block random-access memories (BRAMs). Bit-error-rate (BER) simulations over an additive white Gaussian noise (AWGN) channel show no error floor down to a BER of 10⁻⁸. These results demonstrate that the proposed HLS-based 2D-CNMS IP core provides a resource-efficient, high-performance LDPC decoding solution as compared with existing LDPC implementation approaches. This LDPC solution targets performance enhancement in wireless communication systems and has been deployed on a multi-frequency time-division multiple-access (MF-TDMA) satellite link to assess its overall behavior, demonstrating improved performance with reduced resource usage.
(This article belongs to the Special Issue Advanced Satellite Communications for Engineers and Scientists)
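"Dyadic, multiplier-free normalization" refers to scaling by a factor whose binary expansion is short (e.g. 3/4), so the product can be formed with shifts and an add in fixed-point hardware. A minimal sketch of that idea for the normalized min-sum magnitude (the paper's two-level magnitude correction is not reproduced here, and the factor 3/4 is our illustrative choice):

```python
def dyadic_nms_magnitude(mags):
    """Normalized min-sum output magnitude with the dyadic factor 3/4.

    mags: integer fixed-point magnitudes of the incoming LLRs.
    The scaling m * 0.75 is computed as (m >> 1) + (m >> 2), i.e. with
    two shifts and one add, so no hardware multiplier (DSP slice) is needed.
    """
    m = min(mags)
    return (m >> 1) + (m >> 2)
```

Any factor of the form k/2^p admits the same shift-and-add decomposition, which is why such decoders can report zero DSP-slice usage.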
21 pages, 1073 KB  
Article
Near-Optimal Decoding Algorithm for Color Codes Using Population Annealing
by Fernando Martínez-García, Francisco Revson F. Pereira and Pedro Parrado-Rodríguez
Entropy 2026, 28(1), 91; https://doi.org/10.3390/e28010091 - 12 Jan 2026
Viewed by 619
Abstract
The development and use of large-scale quantum computers rely on integrating quantum error-correcting (QEC) schemes into the quantum computing pipeline. A fundamental part of the QEC protocol is the decoding of the syndrome to identify a recovery operation with a high success rate. In this work, we implement a decoder that finds the recovery operation with the highest success probability by mapping the decoding problem to a spin system and using Population Annealing to estimate the free energy of the different error classes. We study the decoder performance on a 4.8.8 color code lattice under different noise models, including code capacity with bit-flip and depolarizing noise, and phenomenological noise, which considers noisy measurements, with performance reaching near-optimal thresholds for bit-flip and depolarizing noise, and the highest reported threshold for phenomenological noise. This decoding algorithm can be applied to a wide variety of stabilizer codes, including surface codes and quantum Low-Density Parity Check (qLDPC) codes.
(This article belongs to the Special Issue Coding Theory and Its Applications)
12 pages, 440 KB  
Article
Symmetrized Extrinsic Information Transfer Chart Analysis for Joint Decoding Between LDPC Codes and CCDMs
by Gang Yang, Fei Yang and Yanan Luo
Electronics 2026, 15(1), 228; https://doi.org/10.3390/electronics15010228 - 4 Jan 2026
Viewed by 347
Abstract
This paper proposes a symmetrized extrinsic information transfer (S-EXIT) chart analysis for probabilistically shaped (PS) systems to optimize the joint decoding of low-density parity-check (LDPC) codes and constant composition distribution matchers (CCDMs). A major challenge in analyzing PS systems is the non-uniform channel input caused by shaping, which invalidates the all-zero assumption of traditional EXIT charts, coupled with the three-node structure of the joint decoder (variable nodes, check nodes, and shaping nodes) that exceeds the two-decoder framework of conventional EXIT analysis. To resolve these issues, we first prove the symmetry of the joint decoder and introduce a "symmetrized density" transformation to render the channel output symmetric, thereby enabling the extension of EXIT chart analysis to PS systems. We then approximate the EXIT function of the shaping node decoder via polynomial fitting and integrate it with the variable node decoder into a unified model (VSND) for threshold analysis. On one hand, the proposed S-EXIT chart provides a theoretical threshold for the joint decoder, which is crucial for guiding system design. On the other hand, it enables the joint optimization of LDPC code rates and CCDM rates, unlocking additional performance gains. Simulations over additive white Gaussian noise (AWGN) channels demonstrate that short-blocklength CCDMs (e.g., blocklength 20) achieve up to 1.2 dB gain over uniform systems via S-EXIT-based rate optimization. This work addresses the performance limitations of short-blocklength CCDMs in high-speed optical transmissions, offering a practical and efficient analytical tool for PS system design.
(This article belongs to the Special Issue Advances in Optical Communications and Optical Networks)
12 pages, 845 KB  
Article
Rate-Adaptive Information Reconciliation for CV-QKD Systems at Low Signal-to-Noise Ratios
by Huiting Fu, Jisheng Dai, Yan Feng, Han Hai, Huayong Ge, Peng Huang and Xue-Qin Jiang
Entropy 2026, 28(1), 10; https://doi.org/10.3390/e28010010 - 20 Dec 2025
Viewed by 705
Abstract
In continuous-variable quantum key distribution (CV-QKD) systems, information reconciliation (IR) is a crucial step that significantly affects the secret key rate (SKR). The fixed-rate error-correcting codes used in IR are highly sensitive to changes in the signal-to-noise ratio (SNR) and cannot maintain a high reconciliation efficiency in practical CV-QKD systems. To address this issue, we first propose a rate-adaptive IR protocol, namely Threshold-based IR (TIR), which changes the code rate of low-density parity-check (LDPC) codes by selectively revealing bits with lower reliability and adjusting their log-likelihood ratios (LLRs). Then, we propose a rate-adaptive IR protocol, namely Sorting-based IR (SIR), which not only adjusts the code rate according to variations in SNR, but also enables the CV-QKD systems to achieve high reconciliation efficiency over a wide range of SNRs. Furthermore, we perform an analysis of the protocols in terms of code rate, reconciliation efficiency, and complexity. The simulation results demonstrate that the proposed protocols outperform other rate-adaptive IR protocols, achieving a reconciliation efficiency higher than 98.5% in the SNR range below −20 dB and maintaining a certain SKR in long-distance transmission.
(This article belongs to the Special Issue Recent Advances in Continuous-Variable Quantum Key Distribution)
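The rate-adjustment step described for such protocols (revealing low-reliability bits and adjusting their LLRs so the decoder treats them as known) can be sketched as follows. Names and the saturation constant are ours; in a real protocol the revealed bit values come from the other party over the public channel, which this sketch does not model.

```python
def reveal_bits(llrs, num_reveal):
    """Rate adaptation by revealing the least reliable bits.

    Selects the num_reveal positions with the smallest |LLR|, assumes their
    values are disclosed publicly, and clamps their LLRs to a large
    magnitude so the LDPC decoder treats them as certain.  Effectively
    raises the code rate seen by the decoder.  Sign convention: LLR > 0
    means bit 0 is more likely.  Returns (adjusted LLRs, revealed positions).
    """
    BIG = 1e6  # effectively infinite reliability
    order = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))
    revealed = order[:num_reveal]
    out = list(llrs)
    for i in revealed:
        # Keep the current hard decision and saturate its reliability;
        # a real protocol would substitute the disclosed true value here.
        out[i] = BIG if llrs[i] >= 0 else -BIG
    return out, revealed
```

Varying `num_reveal` with the measured SNR is the basic mechanism by which a single fixed LDPC code can serve a range of effective rates.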
14 pages, 1586 KB  
Article
Efficient Error Correction Coding for Physically Unclonable Functions
by Sreehari K. Narayanan, Ramesh Bhakthavatchalu and Remya Ajai Ajayan Sarala
J. Low Power Electron. Appl. 2025, 15(4), 70; https://doi.org/10.3390/jlpea15040070 - 12 Dec 2025
Viewed by 1241
Abstract
Physically unclonable functions (PUFs) generate keys for cryptographic applications, eliminating the need for conventional key storage mechanisms. Since PUF responses are inherently noise-sensitive, their reliability can decrease under varying conditions. Integrating channel coding can enhance response stability and consistency. This work presents an efficient scheme that integrates a delay-based PUF with a Low-Density Parity-Check (LDPC) code. Specifically, a feed-forward PUF is combined with LDPC coding to reliably regenerate the cryptographic key. Our design reproduces the key with minimal error using channel coding. The scheme achieves 96% key-generation reliability, representing a notable improvement over PUF-based key generation without error-correction coding. LDPC decoding with the min-sum algorithm provides better error correction than the bit-flipping algorithm, but it is more computationally intensive. The proposed scheme was designed with minimal hardware resource utilization using Xilinx Vivado 2018.2 and Cadence Genus tools.
17 pages, 1117 KB  
Article
High-Efficiency Lossy Source Coding Based on Multi-Layer Perceptron Neural Network
by Yuhang Wang, Weihua Chen, Linjing Song, Zhiping Xu, Dan Song and Lin Wang
Entropy 2025, 27(10), 1065; https://doi.org/10.3390/e27101065 - 14 Oct 2025
Cited by 1 | Viewed by 837
Abstract
With the rapid growth of data volume in sensor networks, lossy source coding systems are needed to achieve high-efficiency data compression with low distortion under limited transmission bandwidth. However, conventional compression algorithms rely on a two-stage framework with high computational complexity and frequently struggle to balance compression performance with generalization ability. To address these issues, an end-to-end lossy compression method is proposed in this paper. The approach integrates an enhanced belief propagation algorithm with a multi-layer perceptron neural network, aiming to introduce a novel joint optimization architecture described as "encoding-structured encoding-decoding". In addition, a quantization module incorporating random perturbation and the straight-through estimator is designed to address the non-differentiability in the quantization process. Simulation results demonstrate that the proposed system significantly improves compression performance while offering superior generalization and reconstruction quality. Furthermore, the designed neural architecture is both simple and efficient, reducing system complexity and enhancing feasibility for practical deployment.
(This article belongs to the Special Issue Next-Generation Channel Coding: Theory and Applications)