
Search Results (506)

Search Parameters:
Keywords = look-up table

17 pages, 5449 KB  
Article
A Device-Centric Research of Power Side-Channel in FPGAs
by Kaishun Zhang, Changhao Wang and Tao Su
Electronics 2026, 15(8), 1546; https://doi.org/10.3390/electronics15081546 - 8 Apr 2026
Viewed by 251
Abstract
As a widely used computing substrate, the side-channel security of FPGAs has attracted considerable attention, yet a systematic understanding of how FPGA device types contribute to exploitable leakage remains limited. This work presents a device-centric evaluation that maps an S-box-like function onto common FPGA primitives, including look-up table (LUT), flip-flop (FF), block RAM (BRAM), and distributed RAM (LUTRAM), and assesses Correlation Power Analysis (CPA) outcomes under the Hamming Weight (HW) and Hamming Distance (HD) power models. The results show pronounced leakage differences across device types: FF- and BRAM-based implementations exhibit substantially stronger leakage than LUT- and LUTRAM-based ones, and they frequently achieve GE=0 in our configurations, while the HD model is generally more effective than the HW model in the performed CPA evaluations. Notably, FF-, BRAM-, and LUTRAM-based implementations can already be broken starting from a single instance under the HD model in our device-level tests, indicating that exploitable leakage may manifest in real FPGA applications. These device-level observations are further validated on a practical cipher by analyzing two SM4 encryption modules that differ only in the S-box implementation style; the BRAM-based design shows significantly stronger leakage than the LUT-based design, achieving GE=2.58 versus GE=78.3 at 10,000 traces. This work highlights the critical role of device selection and implementation style in FPGA side-channel security, and it provides practical insights for designing secure FPGA applications against power side-channel analysis. Full article
(This article belongs to the Special Issue Secure and Privacy-Enhanced Data Sharing)
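The CPA flow this abstract evaluates can be sketched end-to-end on synthetic data. The 4-bit PRESENT S-box, the noise level, and the assumption that leakage follows the register's Hamming distance are illustrative choices, not the paper's measurement setup.

```python
import numpy as np

# 4-bit PRESENT S-box as a stand-in for the paper's "S-box-like function".
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])
HW = np.array([bin(v).count("1") for v in range(16)])  # Hamming weight table

rng = np.random.default_rng(1)
key = 0x7
pt = rng.integers(0, 16, 5000)
x = pt ^ key
# Synthetic traces: leakage = HD between the register's old value (S-box
# input) and its new value (S-box output), plus Gaussian noise.
traces = HW[x ^ SBOX[x]] + rng.normal(0.0, 1.0, pt.size)

def cpa(model):
    """Return the key guess maximizing |Pearson correlation|."""
    scores = []
    for g in range(16):
        y = pt ^ g
        hyp = HW[y ^ SBOX[y]] if model == "HD" else HW[SBOX[y]]
        scores.append(abs(np.corrcoef(hyp, traces)[0, 1]))
    return int(np.argmax(scores))

print(cpa("HD"))
```

On this toy model the HD hypothesis recovers the key; the paper's point is that how strongly such an attack works in practice depends on whether the attacked function sits in FFs, BRAM, LUTs, or LUTRAM.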

35 pages, 5635 KB  
Article
Urban and Peri-Urban Ecosystem Functions Under Climate Change: From Empirical Analysis to Adaptation and Mitigation Planning
by Marcela Prokopová, Renata Včeláková, Vilém Pechanec, Lenka Štěrbová, Luca Salvati, Ondřej Cudlín, Ahmed Alhuseen, Jan Purkyt and Pavel Cudlín
Land 2026, 15(4), 569; https://doi.org/10.3390/land15040569 - 30 Mar 2026
Viewed by 403
Abstract
Urban expansion in Europe is accelerating, increasing impermeable surfaces and intensifying climate-related pressures, while reducing the capacity of natural and semi-natural habitats to regulate climate. Despite growing interest in ecosystem services (ES), the assessment of resilience, and thus of the stability of ES providers, as well as their integration into spatial planning tools, remain limited. This study develops and tests a comprehensive assessment framework that (i) evaluates the current performance of selected ecosystem functions underpinning key regulating ES important for climate adaptation using a look-up table method; (ii) assesses ecosystem resilience by quantifying its preconditions; and (iii) applies spatial prioritization to identify and prioritize climate adaptation measures that enhance ecosystem functions and strengthen resilience. The framework was applied to the cadastral area of Liberec (Czech Republic). Results indicate that the areas with the highest urgency for intervention were identified consistently across urban and peri-urban zones. However, the proposed measures were more diverse and spatially differentiated in peri-urban and rural areas, whereas a single dominant measure prevailed in urban areas, suggesting higher practical applicability outside densely built environments. The approach supports evidence-based spatial planning and contributes to the implementation of the EU Adaptation Strategy by promoting resilient green infrastructure in urban and peri-urban landscapes. Full article

31 pages, 6311 KB  
Article
Synthesis of FPGA-Based Moore FSMs with Two Cores of Partial Functions
by Alexander Barkalov, Larysa Titarenko and Kazimierz Krzywicki
Electronics 2026, 15(6), 1279; https://doi.org/10.3390/electronics15061279 - 18 Mar 2026
Viewed by 401
Abstract
A new architecture for FPGA-based Moore finite state machines (FSMs) is proposed, together with the corresponding synthesis method. The proposed FSM circuit architecture includes two cores of partial Boolean functions: the first core is based on functional decomposition, the second on structural decomposition. Under certain conditions, the proposed method improves both the spatial and temporal characteristics of FSM circuits. The FSM states have two codes. The first is a maximum binary code (MBC) with the minimum possible number of bits. The second is a partial state code representing a state as an element of some compatibility class. The method can be applied if Moore FSM circuits are implemented using look-up table (LUT) elements of field-programmable gate arrays. To improve the characteristics of the resulting FSM circuits, classes of pseudoequivalent states are used. This reduces the number of literals in the sum-of-products expressions representing the partial input memory functions. The first core is multi-level. For the second core, all partial functions are generated by single-LUT circuits. These cores form the first level of the FSM circuit. The LUTs of the second level generate the bits of the MBCs. These codes are used by the third circuit level to generate both the FSM outputs and the partial state codes. An example of synthesis is shown. The experiments are conducted using a known library of benchmark Moore FSMs. They show that the proposed approach can be used for complex FSMs in which the total number of FSM inputs and state variables is at least twice the number of inputs of the base LUT. The results also show that the proposed method improves both the spatial and temporal characteristics of complex FSMs compared with counterparts based on other known design methods. Full article
(This article belongs to the Topic VLSI-Based Sequential Devices in Cyber-Physical Systems)

29 pages, 962 KB  
Review
Looking into the i of the Storm: An Overview of Mid-1880s Contingency Table Indices for Studying Tornado Data
by Eric J. Beh
Mathematics 2026, 14(6), 1019; https://doi.org/10.3390/math14061019 - 17 Mar 2026
Viewed by 202
Abstract
One of the first serious attempts to study the indices that assess the association between the variables of a 2 × 2 contingency table was undertaken in the mid-1880s. Central to this study is the 1884 tornado observation/prediction data collected by Sergeant John Park Finley (1854–1943), while working for the US Army Signal Service, and the controversial index he proposed to evaluate the success of his tornado predictions, which he denoted i. Subsequent improvements to Finley’s index were proposed, all of which pre-date the development of association measures made by pioneers such as Sir Francis Galton and Karl Pearson. This paper discusses Finley’s data, his index i, and the improvements made to this index. We also give historical context to Finley and his successors and their place in the early development of contingency table analysis. Full article
(This article belongs to the Section D1: Probability and Statistics)
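For context, the commonly cited version of Finley's 1884 verification counts makes the controversy easy to reproduce. The counts below come from the secondary forecast-verification literature, not from this paper, and the skill-score names are modern labels.

```python
# Hit / false-alarm / miss / correct-rejection counts for Finley's 1884
# tornado forecasts, as commonly tabulated in the verification literature.
a, b, c, d = 28, 72, 23, 2680
n = a + b + c + d

prop_correct = (a + d) / n          # Finley-style accuracy rewards easy negatives
never_forecast = (b + d) / n        # score of always predicting "no tornado"
peirce = a / (a + c) - b / (b + d)  # hit rate minus false-alarm rate

print(round(prop_correct, 3), round(never_forecast, 3), round(peirce, 3))
```

Proportion correct scores about 0.966, yet never forecasting a tornado scores higher, about 0.982; this is precisely the criticism that drove the improved indices the review surveys, while a Peirce-style skill score (about 0.523) does not reward the trivial forecast.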

24 pages, 10576 KB  
Article
Accurate Road User Position Estimation for V2I Using Point Clouds from Mobile Mapping Systems
by Ju Hee Yoo, Ho Gi Jung and Jae Kyu Suhr
Electronics 2026, 15(6), 1238; https://doi.org/10.3390/electronics15061238 - 16 Mar 2026
Viewed by 226
Abstract
Accurate detection and positioning of road users are essential for vehicle-to-infrastructure (V2I)-assisted autonomous driving. For this purpose, the road user’s ground contact point is usually detected in a monocular camera image. Then, a homography-based method is used to convert this detected point into its corresponding map position. However, the homography-based method assumes that the ground is planar, which leads to significant positioning errors in real-world environments. This limitation degrades the reliability of V2I-assisted autonomous driving, particularly in environments with complex road geometries. This study presents a method for accurately estimating the positions of road users using 3D point clouds generated by a Mobile Mapping System (MMS) for map construction without incurring additional costs. Moreover, since surveillance cameras are typically installed in urban areas, point clouds for these regions are often already available. The proposed method uses a pre-generated Look-Up Table (LUT), which is created by projecting MMS-based 3D point clouds onto the image coordinate system, so that each pixel in the image stores its corresponding 3D map position. Once the ground contact points of road users are detected in the image, the corresponding 3D positions on the map can be directly obtained by referencing the LUT. In the experiments, the proposed method was evaluated using surveillance camera images and MMS-based point clouds collected from various real-world environments. The results show that the proposed method reduces positioning errors of road users by an average of 61.4% compared to the conventional homography-based method. The improvement is particularly significant in environments with ground slope variations. In addition, the proposed method demonstrates real-time feasibility on an embedded camera, achieving low latency and power-efficient performance suitable for V2I edge deployment. Full article
(This article belongs to the Special Issue Autonomous Vehicles: Sensing, Mapping, and Positioning)
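The per-pixel LUT idea is straightforward to sketch. The pinhole intrinsics, the synthetic point cloud, and the identification of camera and map frames below are simplifying assumptions, not the paper's calibration.

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])          # hypothetical camera intrinsics
W, H = 640, 480
# Toy "MMS point cloud": a flat grid of ground points 10 m ahead.
points = np.array([[x, y, 10.0] for x in range(-3, 4) for y in range(-2, 3)], float)

# Offline step: project every point and store its 3D position per pixel.
lut = np.full((H, W, 3), np.nan)
uvw = (K @ points.T).T
uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
for (u, v), p in zip(uv, points):
    if 0 <= u < W and 0 <= v < H:
        lut[v, u] = p                    # a real LUT must also resolve depth/occlusion

# Online step: a detected ground contact pixel indexes the LUT directly.
u, v = uv[0]
print(lut[v, u])
```

Because the 3D positions come from the point cloud rather than a planar-ground homography, sloped or uneven ground is handled for free, which matches the paper's reported gains in such scenes.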

32 pages, 866 KB  
Review
Review of Floating-Point Arithmetic Algorithms Using Taylor Series Expansion and Mantissa Region Division Techniques
by Jianglin Wei and Haruo Kobayashi
Electronics 2026, 15(5), 1106; https://doi.org/10.3390/electronics15051106 - 6 Mar 2026
Viewed by 363
Abstract
This paper presents a comprehensive review of digital floating-point arithmetic algorithms that utilize Taylor series expansion in combination with mantissa-region division techniques, and it further demonstrates their generalization and applicability based on the findings of our research. While the discussion is broad in scope, this paper consolidates and systematizes the authors’ method within a broader contextual discussion, rather than presenting a fully systematic review of the entire state of the art in floating-point arithmetic algorithms. In many scientific computing applications, compact and low-power hardware implementations are essential. To address these requirements, this review presents algorithms specifically designed to operate under such constraints. The focus is placed on efficient floating-point operations—including division, inverse square root, square root, exponentiation, and logarithmic functions—all realized through Taylor series expansion with mantissa region division techniques. Furthermore, the trade-offs are examined in detail, covering factors such as the required numbers of additions, subtractions, and multiplications, along with the look-up table (LUT) size. The study further identifies the environments and application domains where the Taylor series expansion method combined with mantissa-region division is most effective, based on comparisons with various other floating-point computation algorithms and their corresponding hardware implementations. Overall, the review underscores the value of this unified framework in enabling efficient and adaptable floating-point computation across a wide range of hardware-constrained environments. Full article
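The mantissa-region-division idea can be illustrated for the reciprocal: split the mantissa range into regions, store one value per region, and refine with a first-order Taylor term. The region count and the target precision here are illustrative choices, not figures from the review.

```python
K_BITS = 6
REGIONS = 1 << K_BITS                         # 64 mantissa regions
LUT = [1.0 / (1.0 + (i + 0.5) / REGIONS) for i in range(REGIONS)]  # 1/m0 per region

def recip(m):
    """Approximate 1/m for m in [1, 2) with one table read and one multiply-add."""
    i = int((m - 1.0) * REGIONS)              # region index from top mantissa bits
    m0 = 1.0 + (i + 0.5) / REGIONS            # region midpoint
    r0 = LUT[i]
    return r0 - (m - m0) * r0 * r0            # first-order Taylor correction

worst = max(abs(recip(1 + j / 4096) - 1 / (1 + j / 4096)) for j in range(4096))
print(worst)
```

The residual error is second order in the region half-width, so each extra bit of region index buys roughly two bits of result precision, which is the LUT-size-versus-operations trade-off the review examines.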

24 pages, 2003 KB  
Article
Multi-Memory Approach for Random Number Generators in FPGA
by Thiago Campos Acácio Paschoalin, Tiago Motta Quirino and Luciano Manhães de Andrade Filho
Appl. Sci. 2026, 16(5), 2537; https://doi.org/10.3390/app16052537 - 6 Mar 2026
Viewed by 301
Abstract
Random number generation is essential in many application domains, including high-energy physics simulations. Implementing Monte Carlo methods that generate samples following a desired probability distribution is particularly challenging on hardware platforms such as FPGAs. Direct implementations of analytical distribution functions are often resource-intensive, making them impractical for real-time systems. An efficient alternative is the use of the inverse cumulative distribution function (CDF), which can be implemented using look-up tables (LUTs). In this approach, a uniformly distributed random number—generated by Linear Feedback Shift Registers (LFSRs)—is used as an address to access LUTs containing discretized x-axis values of the CDF, thereby yielding the target random variable. However, this method presents limited accuracy in low-probability regions of the distribution. To address this issue, this paper proposes a segmented CDF implementation based on multiple LUTs, improving resolution in poorly sampled regions. A cascade of decision logic selects the appropriate memory output, increasing resolution only where necessary while optimizing memory usage. The proposed method was validated through Monte Carlo simulations in particle physics applications, achieving close agreement with theoretical distributions while requiring limited FPGA resources and no DSP blocks. Full article
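A software sketch of the single-LUT inverse-CDF method the paper starts from, with an exponential distribution standing in for a physics distribution and Python's RNG standing in for the LFSRs; the paper's contribution is then to segment this table so low-probability regions get finer resolution.

```python
import math
import random

ADDR_BITS = 12
N = 1 << ADDR_BITS
# LUT[u] = F^-1((u + 0.5) / N) for an Exp(1) target distribution.
LUT = [-math.log(1.0 - (u + 0.5) / N) for u in range(N)]

random.seed(0)
# In hardware the address comes from an LFSR; here a software RNG stands in.
samples = [LUT[random.getrandbits(ADDR_BITS)] for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)   # close to the Exp(1) mean of 1; the tail is clipped at LUT[-1]
```

The accuracy limitation the paper targets is visible here: the entire upper tail beyond LUT[-1] is unreachable, and each tail value is quantized to one of only a few table entries.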

20 pages, 3159 KB  
Article
ROM-Less Co(Sine) Synthesizer
by Florentina-Giulia Stoica, Alex Calinescu and Marius Enachescu
Electronics 2026, 15(5), 1093; https://doi.org/10.3390/electronics15051093 - 5 Mar 2026
Viewed by 830
Abstract
Sine and cosine wave synthesis is utilized for generating sinusoidal-like values in the digital domain. While this task is commonly handled through software, dedicated hardware like Direct Digital Synthesis (DDS) is also available. However, both methods rely on memory resources, such as look-up tables and Read-Only Memories (ROMs), which incur additional memory access latency as well as extra silicon area. With the advent of real-time arithmetic for sine wave approximation, this paper presents a digital module that employs iterative multiply-accumulate (MAC) operations for sine and cosine synthesis. To support the integration of this module into Systems-on-Chip (SoCs), Field-Programmable Gate Arrays (FPGAs), and standalone Application-Specific Integrated Circuits (ASICs), a comprehensive figure of merit (FoM) comparison against various ROM-less methods is provided. When implemented on a Xilinx (AMD) XC7A100T-3CSG324 FPGA, the proposed architecture, compared to other ROM-less solutions such as the Taylor approximation, achieves 80.80% lower resource utilization, 80.89% reduced propagation delay, and 36.66% higher accuracy in sine and cosine wave approximation, both operating as 32-bit systems with one sample per clock cycle. Furthermore, the proposed sine accelerator, together with accompanying control and communication IPs and custom firmware, was deployed on an FPGA-based function generator platform and experimentally validated. Full article
(This article belongs to the Section Circuit and Signal Processing)
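One standard ROM-less scheme in the spirit of the abstract generates sine and cosine by iterated rotation, a few multiply-accumulate operations per output pair and no table. Whether this matches the paper's exact recurrence is an assumption; it illustrates the class of method.

```python
import math

def sincos_stream(w, n):
    """Yield (sin(k*w), cos(k*w)) for k = 0..n-1 via iterated rotation."""
    s, c = 0.0, 1.0                        # sin(0), cos(0)
    cw, sw = math.cos(w), math.sin(w)      # two constants, computed once
    out = []
    for _ in range(n):
        out.append((s, c))
        s, c = s * cw + c * sw, c * cw - s * sw   # one rotation step, MACs only
    return out

vals = sincos_stream(2 * math.pi / 64, 64)
err = max(abs(s - math.sin(2 * math.pi * k / 64)) for k, (s, c) in enumerate(vals))
print(err)   # rounding-level error over one full period
```

In fixed-point hardware the error of such a recurrence accumulates with the iteration count, so practical designs periodically re-normalize or restart the recurrence; that engineering trade-off is where architectures like the paper's differentiate themselves.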

27 pages, 3000 KB  
Article
Response-Driven Optimal Emergency Control of Power Systems via Deep Learning-Based Sensitivity Embedded Optimization
by Lin Cheng, Han Wang, Yiwei Su and Gengfeng Li
Energies 2026, 19(5), 1284; https://doi.org/10.3390/en19051284 - 4 Mar 2026
Viewed by 289
Abstract
The transition towards high-renewable power systems introduces high-dimensional nonlinearity and uncertainty, rendering traditional offline look-up table schemes prone to control mismatch against “unseen” contingencies. Meanwhile, existing response-driven approaches face a dilemma between the computational latency of physics-based optimization and the safety risks of end-to-end AI. To bridge this gap, this paper proposes a Response-Driven Optimal Emergency Control Framework that ensures both millisecond-level speed and rigorous physical constraints. First, a deep learning-based predictor is employed to extract spatiotemporal features from real-time PMU data, enabling high-fidelity prediction of stability margins. Crucially, instead of direct black-box control, the data-driven model is utilized to derive linear control sensitivities via a batch-processing perturbation mechanism. This transforms the intractable Transient Stability Constrained Optimal Power Flow (TSC-OPF) problem into a real-time solvable Linear Programming model. Case studies on a regional AC/DC hybrid grid demonstrate that the proposed framework achieves high prediction accuracy and effectively restores stability in mismatch scenarios where traditional schemes fail. Furthermore, the decision speed of the proposed method is significantly improved compared to traditional time-domain simulations, thus strictly satisfying the real-time requirements of the second line of defense. Full article
(This article belongs to the Section F1: Electrical Power System)
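The paper's key move, treating a learned stability-margin predictor as a black box and extracting linear control sensitivities by perturbation, can be sketched with a toy margin function. With a single linearized constraint and box bounds, the resulting LP reduces to a greedy fill; the predictor, costs, and limits below are all made up for illustration.

```python
def margin(u):
    """Hypothetical black-box stability-margin predictor (>= 0 means stable)."""
    return -0.8 + 0.05 * u[0] + 0.12 * u[1] + 0.02 * u[2]

u0 = [0.0, 0.0, 0.0]              # current (unstable) operating point
cost = [1.0, 3.0, 2.0]            # illustrative per-unit control costs
umax = [10.0, 10.0, 10.0]
eps = 1e-3

# Batch perturbation: finite-difference sensitivity of the margin to each control.
sens = []
for i in range(3):
    up = list(u0)
    up[i] += eps
    sens.append((margin(up) - margin(u0)) / eps)

# LP: minimize cost . du  s.t.  margin(u0) + sens . du >= 0, 0 <= du <= umax.
# With one constraint this is solved greedily by cost per unit of margin gained.
need = -margin(u0)
du = [0.0, 0.0, 0.0]
for i in sorted(range(3), key=lambda i: cost[i] / sens[i]):
    take = min(umax[i], need / sens[i])
    du[i] = take
    need -= sens[i] * take
    if need <= 1e-9:
        break

print(du)
```

In the paper the margin comes from a deep spatiotemporal model over PMU data and the optimization is a full TSC-OPF-derived linear program, but the sensitivity-by-perturbation linearization is the same idea.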

19 pages, 4899 KB  
Article
Leakage Current Elimination for Safer Direct Torque-Controlled Induction Motor Drives with Transformerless Multilevel Photovoltaic Inverters
by Zouhaira Ben Mahmoud and Adel Khedher
Electricity 2026, 7(1), 19; https://doi.org/10.3390/electricity7010019 - 1 Mar 2026
Viewed by 360
Abstract
The use of photovoltaic (PV) water pumping technology offers a viable and sustainable alternative to conventional diesel-driven pumping systems. In PV-based pumping installations, the elimination of bulky transformers significantly reduces the overall system size and weight, which is particularly advantageous for rural and remote irrigation applications. However, removing the transformer can result in high common-mode voltage (CMV) when the induction motor is controlled using a direct torque control (DTC) scheme. This elevated CMV induces leakage currents that may damage the motor, compromise system reliability, and pose potential safety hazards. To ensure a more compact and safer PV pumping system, this paper introduces an improved DTC-based control strategy for induction motors driven by transformerless multilevel PV inverters. The proposed approach effectively suppresses leakage current by mitigating its main source, CMV, while maintaining the simple structure and dynamic performance inherent to conventional DTC. Two new look-up tables (LUTs) are developed to control the stator flux and electromagnetic torque while simultaneously eliminating leakage current. The first method, termed zero-medium vector DTC (ZMV-DTC), employs both zero and medium voltage vectors from the space vector diagram. The second, referred to as medium vector DTC (MV-DTC), utilizes only medium vectors. Numerical simulation results validate the feasibility and superior performance of the proposed algorithms in terms of leakage current suppression. Compared with a conventional DTC (C-DTC) scheme that is designed to limit the CMV, the proposed DTC algorithms achieve a much stronger reduction in the CMV, confining its amplitude to only a few volts, instead of the levels ±Vdc/6 typically produced by the C-DTC. As a result, the leakage current is effectively eliminated, ensuring safer and more reliable operation of the system. Full article

27 pages, 1058 KB  
Article
Ordered Eigenvalue Decomposition Implementation on Systolic Arrays via Virtual Rewiring
by Chengqian Tang, Yaxuan Lu, Bowen Liang, Yunhe Cao and Mengmeng Han
Electronics 2026, 15(5), 941; https://doi.org/10.3390/electronics15050941 - 25 Feb 2026
Viewed by 370
Abstract
Eigenvalue Decomposition (EVD) is a fundamental operation in real-time signal processing, yet obtaining sorted outputs from systolic arrays remains a persistent engineering challenge. The conventional Brent–Luk architecture relies on external sorting networks to reorder eigenvalues. Attempts to achieve in-place sorting via angle adjustment fail due to “topological mismatch,” a conflict between implicit data permutation from large-angle Givens rotations and fixed hardware routing. To address this, we propose a virtual rewiring mechanism. By exploiting the inherent half-cycle reversal pattern of Round-Robin scheduling, we derive a correction algorithm requiring only sign-bit operations. This achieves automatic descending-order arrangement without modifying physical interconnects. Field-Programmable Gate Array (FPGA) experiments demonstrate that the proposed scheme requires negligible additional resources (0.8% Look-Up Tables (LUTs)) while reducing sorting-related logic by 91%. Furthermore, sorting is achieved entirely within the existing computational pipeline, resulting in zero additional hardware latency per sweep. Full article
(This article belongs to the Special Issue New Advances of FPGAs in Signal Processing)

14 pages, 3762 KB  
Article
An IF-MPWM Algorithm to Extend the Clean Bandwidth for All-Digital Transmitters
by Yutong Liu, Qiang Zhou, Jie Yang, Lei Zhu and Haoyang Fu
Electronics 2026, 15(4), 800; https://doi.org/10.3390/electronics15040800 - 13 Feb 2026
Viewed by 252
Abstract
In all-digital transmitters (ADTx), the in-band quantization noise generated by pulse coding provides only limited clean bandwidth (CBW), significantly increasing the difficulty of analog filter design. To address the constrained CBW of RF pulse sequences in ADTx, this paper proposes an optimization strategy for suppressing noise across a broader frequency domain. Distinguished from traditional schemes with limited noise suppression range, the expansion of CBW is innovatively achieved by setting multiple groups of frequency observation points near the carrier frequency, enabling more comprehensive constraints of in-band noise. Meanwhile, aiming at the problems of large look-up table scale and slow query speed, a partitioned look-up strategy is proposed. During a look-up, traversal is confined only to the partition containing the input point, eliminating the need to scan all elements. This strategy substantially reduces the number of error calculations and comparisons, significantly improving the real-time performance of mapping look-up and lowering the computational demands on digital processing devices. Through the collaborative optimization of noise suppression and query efficiency, this study highlights its breakthrough contributions and provides technical support for the optimization of RF pulse sequences in ADTx. Full article
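The partitioned look-up strategy can be sketched on a one-dimensional nearest-entry search: bucket the table by a coarse key offline, then scan only the input's bucket (plus neighbors to guard the edges). The scalar "codebook" here is illustrative; the paper applies the idea to its RF pulse-mapping tables.

```python
import random

random.seed(0)
table = sorted(random.random() for _ in range(4096))  # precomputed mapping points
P = 64                                                # number of partitions

# Offline: bucket every table entry by a coarse key (its leading value bits).
parts = [[] for _ in range(P)]
for v in table:
    parts[min(int(v * P), P - 1)].append(v)

def nearest_partitioned(x):
    """Scan only the input's partition plus its neighbors."""
    i = min(int(x * P), P - 1)
    cand = parts[max(0, i - 1)] + parts[i] + parts[min(P - 1, i + 1)]
    return min(cand, key=lambda v: abs(v - x))

for x in (0.01, 0.37, 0.99):
    assert nearest_partitioned(x) == min(table, key=lambda v: abs(v - x))
print("partitioned result matches full scan; ~%d of %d entries scanned"
      % (3 * len(table) // P, len(table)))
```

Scanning three buckets instead of the whole table cuts the number of error calculations and comparisons by roughly the partition count, which is the real-time gain the abstract claims.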

18 pages, 890 KB  
Article
Physical Unclonable Function Based Privacy-Preserving Authentication Scheme for Autonomous Vehicles Using Hardware Acceleration
by Rabeea Fatima, Ujunwa Madububambachu, Ahmed Sherif, Muhammad Hataba, Nick Rahimi and Kasem Khalil
Sensors 2026, 26(4), 1088; https://doi.org/10.3390/s26041088 - 7 Feb 2026
Viewed by 366
Abstract
With the rise of smart cities, technology has enabled more efficient urban management. A key part of this is the Internet of Vehicles (IoV), which connects vehicles to smart city systems to improve transportation safety and efficiency. This integrated system enables wireless connection between vehicles, allowing for the sharing of essential traffic information. However, with all this connectivity, there are growing concerns about IoV security and privacy. This paper presents a new privacy-preserving authentication scheme for Autonomous Vehicles (AVs) in the IoV field using physical unclonable functions (PUFs). This scheme employs a bilinear pairing-based encryption technique that supports search over encrypted data. The primary aim of this scheme is to authenticate AVs inside the IoV architecture. A novel PUF design generates random keys for our authentication technique, hence boosting security. This dual-layer security strategy safeguards against a range of cyber threats, including identity fraud, man-in-the-middle attacks, and unauthorized access to personal user data. The PUF design guarantees the true randomness of the AV users’ secret keys. To handle the large amount of data involved, we use hardware acceleration with different Field-Programmable Gate Arrays (FPGAs). Our examination of privacy and security demonstrates the achievement of the defined design goals. The proposed authentication framework was fully implemented and validated on FPGA platforms to demonstrate its hardware feasibility and efficiency. The integrated heterogeneous PUF achieves an average reliability exceeding 98.5% across a wide temperature range, while maintaining near-ideal randomness with an average Hamming weight of 49.7% over multiple challenge sets. Furthermore, the uniqueness metric approaches 49.9%, confirming strong inter-device distinguishability among different PUF instances. The complete authentication architecture was synthesized on Nexys-100T, Zynq-104, and Kintex-116 devices, where the design utilizes less than 80% of slice Look-Up Tables (LUTs), under 27% of on-chip memory resources, and below 16% of DSP blocks, demonstrating low hardware overhead. Full article
(This article belongs to the Special Issue Privacy and Security in Sensor Networks)
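The PUF quality metrics quoted in the abstract (Hamming weight near 50%, uniqueness near 50%) are standard and easy to compute from response bit-vectors. The random "devices" below stand in for real PUF responses; they are not the paper's data.

```python
import itertools
import random

random.seed(0)
N_BITS, N_DEV = 128, 8
# Random bit-vectors stand in for per-device PUF responses.
resp = [[random.getrandbits(1) for _ in range(N_BITS)] for _ in range(N_DEV)]

# Randomness: average Hamming weight of all response bits (ideal 0.5).
hw = sum(sum(r) for r in resp) / (N_DEV * N_BITS)

# Uniqueness: mean pairwise inter-device fractional Hamming distance (ideal 0.5).
pairs = list(itertools.combinations(resp, 2))
uniq = sum(sum(a != b for a, b in zip(r1, r2)) / N_BITS
           for r1, r2 in pairs) / len(pairs)

print(round(hw, 3), round(uniq, 3))
```

Reliability, the third metric the abstract reports, is computed the same way but as the intra-device Hamming distance between responses of one device under varying conditions, where the ideal is 0 rather than 0.5.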

14 pages, 2997 KB  
Article
Impact of Non-Linear CT Resampling on Enhancing Synthetic-CT Generation in Total Marrow and Lymphoid Irradiation
by Monica Bianchi, Nicola Lambri, Daniele Loiacono, Stefano Tomatis, Marta Scorsetti, Cristina Lenardi and Pietro Mancosu
Appl. Sci. 2026, 16(3), 1660; https://doi.org/10.3390/app16031660 - 6 Feb 2026
Viewed by 328
Abstract
Computed tomography (CT) images are stored at a 12-bit depth. However, many deep learning libraries and pre-trained models are designed for 8-bit images, requiring an intermediate compression step before restoring the original 12-bit physical range. This process causes information loss and can compromise image reliability. This study investigated the impact of two CT resampling methods (8-bit compression; 12-bit decompression) on dose calculation and image quality. Ten total marrow and lymphoid irradiation patients were selected. CT scans were resampled using linear and non-linear look-up tables (l_LUT/nl_LUT). Original and resampled CTs were evaluated considering: (i) Hounsfield unit (HU) root mean squared error (RMSE); (ii) dose-volume histogram (DVH) statistics for target volume and several organs; (iii) 3D gamma passing rate (GPR) with a 1%/1.25 mm criterion; (iv) lymph nodes contouring and diagnostic quality (scale 1–5). The RMSE for l_LUT vs. nl_LUT was 7 ± 1 vs. 10 ± 1 HU. Maximum differences in DVH statistics were 0.4%, with a 3D-GPR = 100% for all cases. CTs resampled with l_LUT exhibited evident brain pixelation (score = 1), whereas nl_LUT matched the original CT quality (score = 4). Both LUTs were acceptable for lymph nodes delineation. The nl_LUT optimized the CT resampling process, providing a more efficient method for possible deep learning applications in synthetic CT generation. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
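The compression/decompression round trip the study compares can be sketched with simple LUT curves. The gamma curve below is a stand-in non-linearity chosen for illustration; the paper's nl_LUT is its own design, and the HU value tested is just one soft-tissue example.

```python
LO, HI = -1024, 3071        # 12-bit Hounsfield unit range
GAMMA = 0.5                 # illustrative non-linearity (a simple gamma curve)

def compress(hu, nonlinear):
    """Map a 12-bit HU value to an 8-bit code via a (non-)linear LUT."""
    t = (hu - LO) / (HI - LO)
    if nonlinear:
        t = t ** GAMMA      # spend more 8-bit codes on the lower HU range
    return round(t * 255)

def decompress(code, nonlinear):
    """Restore the 12-bit physical range from the 8-bit code."""
    t = code / 255
    if nonlinear:
        t = t ** (1 / GAMMA)
    return round(LO + t * (HI - LO))

def roundtrip_err(hu, nonlinear):
    return abs(decompress(compress(hu, nonlinear), nonlinear) - hu)

# Soft-tissue HU: the non-linear LUT loses less than the linear one here.
print(roundtrip_err(40, False), roundtrip_err(40, True))
```

Allocating more of the 256 codes to the narrow soft-tissue band is what reduces the visible pixelation the study observed in the brain with the linear LUT.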

25 pages, 45647 KB  
Article
A Novel FEC Implementation for VSAT Terminals Using High-Level Synthesis
by Najmeh Khosroshahi, Ron Mankarious and Mohammad Reza Soleymani
Aerospace 2026, 13(2), 155; https://doi.org/10.3390/aerospace13020155 - 6 Feb 2026
Viewed by 371
Abstract
This paper presents a hardware-efficient field-programmable gate array (FPGA) implementation of a layered two-dimensional corrected normalized min-sum (2D-CNMS) decoder for quasi-cyclic low-density parity-check (QC-LDPC) codes in very small aperture terminal (VSAT) satellite communication systems. The decoder is described in C++ and synthesized using the Xilinx Vitis high-level synthesis (HLS) 2025 (AMD Xilinx, San Jose, CA, USA) tool, and then packaged and integrated as an intellectual property (IP) core within the Vivado Design Suite 2024 (AMD Xilinx, San Jose, CA, USA), enabling rapid prototyping and portability across FPGA platforms. Unlike conventional normalized min-sum (NMS) and two-dimensional normalized min-sum (2D-NMS) architectures, the proposed 2D-CNMS scheme employs dyadic, multiplier-free normalization combined with two-level magnitude correction, achieving near sum-product performance with reduced complexity and latency. The design is implemented on a Zynq UltraScale+ multiprocessor system-on-chip (MPSoC) (AMD Xilinx, San Jose, CA, USA) and supports real-time operation with a throughput of 29–41 Mbps at 100 MHz, while using only 9.6–22.4 k look-up tables (LUTs), 2.1–5.9 k flip-flops (FFs), and no digital signal processing (DSP) slices or block random-access memories (BRAMs). Bit-error-rate (BER) simulations over an additive white Gaussian noise (AWGN) channel show no error floor down to a BER of 10⁻⁸. These results demonstrate that the proposed HLS-based 2D-CNMS IP core provides a resource-efficient, high-performance LDPC decoding solution as compared with existing LDPC implementation approaches. This LDPC solution targets performance enhancement in wireless communication systems and has been deployed on a multi-frequency time-division multiple-access (MF-TDMA) satellite link to assess its overall behavior, demonstrating improved performance with reduced resource usage. Full article
(This article belongs to the Special Issue Advanced Satellite Communications for Engineers and Scientists)
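The core arithmetic idea, a normalized min-sum check-node update with a dyadic scale factor, can be sketched as follows; the paper's two-level magnitude correction and layered scheduling are omitted here.

```python
def check_node(msgs, alpha=0.75):
    """Normalized min-sum check-node update with a dyadic scale factor.

    alpha = 0.75 = 1 - 1/4 is a shift-and-subtract in hardware, so the
    normalization needs no multiplier.
    """
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]          # exclude the target edge
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign                      # product of signs
        out.append(sign * alpha * min(abs(m) for m in others))
    return out

print(check_node([2.0, -3.5, 1.0, 4.0]))   # [-0.75, 0.75, -1.5, -0.75]
```

Scaling the minimum magnitude down compensates min-sum's systematic overestimate relative to the sum-product update, which is why a well-chosen dyadic alpha recovers most of the lost coding gain at negligible hardware cost.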
