Search Results (264)

Search Parameters:
Keywords = digital signal processing in power systems

41 pages, 2729 KiB  
Review
Memristor Emulator Circuits: Recent Advances in Design Methodologies, Healthcare Applications, and Future Prospects
by Amel Neifar, Imen Barraj, Hassen Mestiri and Mohamed Masmoudi
Micromachines 2025, 16(7), 818; https://doi.org/10.3390/mi16070818 - 17 Jul 2025
Abstract
Memristors, as the fourth fundamental circuit element, have attracted significant interest for their potential in analog signal processing, computing, and memory storage technologies. However, physical memristor implementations still face challenges in reproducibility, scalability, and integration with standard CMOS processes. Memristor emulator circuits, implemented using analog, digital, and mixed components, have emerged as practical alternatives, offering tunability, cost effectiveness, and compatibility with existing fabrication technologies for research and prototyping. This review paper provides a comprehensive analysis of recent advancements in memristor emulator design methodologies, including active and passive analog circuits, digital implementations, and hybrid approaches. A critical evaluation of these emulation techniques is conducted based on several performance metrics, including maximum operational frequency range, power consumption, and circuit topology. Additional parameters are also taken into account to ensure a comprehensive assessment. Furthermore, the paper examines promising healthcare applications of memristors and memristor emulators, focusing on their integration into biomedical systems. Finally, key challenges and promising directions for future research in memristor emulator development are outlined. Overall, the research presented highlights the promising future of memristor emulator technology in bridging the gap between theoretical memristor models and practical circuit implementations.
(This article belongs to the Section E: Engineering and Technology)
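For orientation, the pinched-hysteresis behavior that emulator circuits aim to reproduce can be illustrated with a few lines of Python. This is a generic HP-style linear dopant-drift model with made-up parameter values, not a circuit from the review:

```python
import numpy as np

# Minimal sketch (not from the paper): an ideal HP-style memristor model,
# M(q) = R_off - (R_off - R_on) * q / q_max, driven by a sinusoidal voltage.
# All parameter values below are illustrative assumptions.
R_on, R_off, q_max = 100.0, 16e3, 1e-4    # ohms, ohms, coulombs
dt, f = 1e-5, 50.0                        # time step (s), drive frequency (Hz)
t = np.arange(0, 2 / f, dt)
v = 1.0 * np.sin(2 * np.pi * f * t)       # applied voltage (V)

q, i_hist = 0.0, []
for vk in v:
    M = R_off - (R_off - R_on) * min(max(q / q_max, 0.0), 1.0)  # memristance
    i = vk / M                            # instantaneous current
    q += i * dt                           # integrate charge (state variable)
    i_hist.append(i)

# Plotting i_hist against v would show the pinched hysteresis loop that
# emulator circuits aim to reproduce.
```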

22 pages, 3438 KiB  
Article
Revolutionizing Detection of Minimal Residual Disease in Breast Cancer Using Patient-Derived Gene Signature
by Chen Yeh, Hung-Chih Lai, Nathan Grabbe, Xavier Willett and Shu-Ti Lin
Onco 2025, 5(3), 35; https://doi.org/10.3390/onco5030035 - 12 Jul 2025
Viewed by 128
Abstract
Background: Many patients harbor minimal residual disease (MRD)—small clusters of residual tumor cells that survive therapy and evade conventional detection but drive recurrence. Although advances in molecular and computational methods have improved circulating tumor DNA (ctDNA)-based MRD detection, these approaches face challenges: ctDNA shedding fluctuates widely across tumor types, disease stages, and histological features. Additionally, low levels of driver mutations originating from healthy tissues can create background noise, complicating the accurate identification of bona fide tumor-specific signals. These limitations underscore the need for refined technologies to further enhance MRD detection beyond DNA sequences in solid malignancies. Methods: Profiling circulating cell-free mRNA (cfmRNA), which is hyperactive in tumor and non-tumor microenvironments, could address these limitations to inform postoperative surveillance and treatment strategies. This study reported the development of OncoMRD BREAST, a customized, gene signature-informed cfmRNA assay for residual disease monitoring in breast cancer. OncoMRD BREAST introduces several advanced technologies that distinguish it from existing ctDNA-MRD tests. It builds on the patient-derived gene signature for capturing tumor activities while introducing significant upgrades to its liquid biopsy transcriptomic profiling, digital scoring systems, and tracking capabilities. Results: The OncoMRD BREAST test processes inputs from multiple cutting-edge biomarkers—tumor and non-tumor microenvironment—to provide enhanced awareness of tumor activities in real time. By fusing data from these diverse intra- and inter-cellular networks, OncoMRD BREAST significantly improves the sensitivity and reliability of MRD detection and prognosis analysis, even under challenging and complex conditions. In a proof-of-concept real-world pilot trial, OncoMRD BREAST’s rapid quantification of potential tumor activity helped reduce the risk of incorrect treatment strategies, while advanced predictive analytics contributed to the overall benefits and improved outcomes of patients. Conclusions: By tailoring the assay to individual tumor profiles, we aimed to enhance early identification of residual disease and optimize therapeutic decision-making. OncoMRD BREAST is the world’s first and only gene signature-powered test for monitoring residual disease in solid tumors.

14 pages, 1981 KiB  
Article
A Sparse Bayesian Technique to Learn the Frequency-Domain Active Regressors in OFDM Wireless Systems
by Carlos Crespo-Cadenas, María José Madero-Ayora, Juan A. Becerra, Elías Marqués-Valderrama and Sergio Cruces
Sensors 2025, 25(14), 4266; https://doi.org/10.3390/s25144266 - 9 Jul 2025
Viewed by 196
Abstract
Digital predistortion and nonlinear behavioral modeling of power amplifiers (PAs) have been the subject of intensive research in the time domain (TD), in contrast with the limited number of works conducted in the frequency domain (FD). However, the adoption of orthogonal frequency division multiplexing (OFDM) as a prevalent modulation scheme in current wireless communication standards provides a promising avenue for employing an FD approach. In this work, a procedure to model nonlinear distortion in wireless OFDM systems in the frequency domain is demonstrated for general model structures, based on a sparse Bayesian learning (SBL) algorithm that identifies a reduced set of regressors capable of efficient and accurate prediction. The FD-SBL algorithm first identifies the active FD regressors and estimates the coefficients of the PA model using a given symbol; the coefficients are then employed to predict the distortion of successive OFDM symbols. The performance of the proposed FD-SBL, with a validation NMSE of −47 dB for a signal of 30 MHz bandwidth, is comparable to the −46.6 dB of the previously proposed TD-SBL implementation. In terms of execution time, the TD-SBL fails due to excessive processing time and numerical problems for a 100 MHz bandwidth signal, whereas the FD-SBL yields an adequate validation NMSE of −38.6 dB.
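The regressor-pruning idea behind SBL can be sketched with scikit-learn's ARDRegression (an SBL-type prior) on real-valued toy data; the paper's FD-SBL operates on complex OFDM subcarrier data with its own regressor dictionary, which is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import ARDRegression  # ARD is a standard SBL-type prior

# Illustrative sketch only: selecting "active" regressors with a sparse Bayesian
# prior, in the spirit of SBL model pruning. Real-valued toy data is used here.
rng = np.random.default_rng(0)
n_samples, n_regressors = 400, 30
X = rng.standard_normal((n_samples, n_regressors))      # candidate regressors
true_coef = np.zeros(n_regressors)
true_coef[[2, 7, 11]] = [1.5, -0.8, 0.3]                # only a few are active
y = X @ true_coef + 0.01 * rng.standard_normal(n_samples)

sbl = ARDRegression()            # hierarchical Gaussian prior prunes regressors
sbl.fit(X, y)
active = np.flatnonzero(np.abs(sbl.coef_) > 1e-3)
print("estimated active regressors:", active)            # expect ~[2, 7, 11]
```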

31 pages, 11216 KiB  
Article
An Optimal Integral Fast Terminal Synergetic Control Scheme for a Grid-to-Vehicle and Vehicle-to-Grid Battery Electric Vehicle Charger Based on the Black-Winged Kite Algorithm
by Ishak Aris, Yanis Sadou and Abdelbaset Laib
Energies 2025, 18(13), 3397; https://doi.org/10.3390/en18133397 - 27 Jun 2025
Viewed by 365
Abstract
The utilization of electric vehicles (EVs) has grown significantly and continuously in recent years, creating new implementation opportunities. The battery electric vehicle (BEV) charging system can be used effectively during peak load periods, for voltage regulation, and for improving power system stability within the smart grid. It provides an efficient bidirectional interface for charging the battery from the grid and discharging the battery into the grid; these two operation modes are referred to as grid-to-vehicle (G2V) and vehicle-to-grid (V2G), respectively. Managing power flow in both directions is highly complex and sensitive, which requires a robust control scheme. In this paper, an Integral Fast Terminal Synergetic Control Scheme (IFTSC) is designed to control the BEV charger system by accurately tracking the required current and voltage in both G2V and V2G modes. Moreover, the Black-Winged Kite Algorithm is introduced to select the optimal gains of the proposed IFTS control scheme. System stability is verified using the Lyapunov stability method. Comprehensive MATLAB/Simulink simulations are conducted to assess the safety and efficacy of the suggested optimal IFTSC in comparison with the IFTSC, an optimal integral synergetic controller, and a conventional PID controller. Furthermore, processor-in-the-loop (PIL) co-simulation is carried out for the studied system using the C2000 launchxl-f28379d digital signal processing (DSP) board to confirm the practicability and effectiveness of the proposed optimal IFTSC. The quantitative comparison shows that the proposed optimal IFTSC provides higher control performance under several critical testing scenarios.
(This article belongs to the Section D: Energy Storage and Application)
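A rough sense of the synergetic design step (choose a macro-variable, force it to decay, solve for the control) is given by the toy first-order sketch below; the plant, gains, and fractional-power term are illustrative assumptions, and the paper's Black-Winged Kite gain tuning is not reproduced:

```python
import numpy as np

# Conceptual sketch (not the paper's controller): a synergetic-style control law
# on a toy first-order plant x_dot = a*x + b*u tracking a constant reference r.
# Macro-variable psi mixes the tracking error, its integral, and a fractional
# "fast terminal" term; the control enforces T*psi_dot + psi = 0.
a, b = -2.0, 5.0
k1, k2, p, q, T = 4.0, 2.0, 3, 5, 0.02   # illustrative gains
dt, steps = 1e-4, 20000
x, e_int, ft_int, r = 0.0, 0.0, 0.0, 1.0

for _ in range(steps):
    e = x - r
    psi = e + k1 * e_int + k2 * ft_int                 # macro-variable
    # psi_dot = e_dot + k1*e + k2*|e|^(p/q)*sign(e) = -(1/T)*psi  ->  desired e_dot
    e_dot_des = -psi / T - k1 * e - k2 * np.abs(e) ** (p / q) * np.sign(e)
    u = (e_dot_des - a * x) / b                        # e_dot = x_dot for constant r
    x += (a * x + b * u) * dt                          # integrate plant
    e_int += e * dt
    ft_int += np.abs(e) ** (p / q) * np.sign(e) * dt

print(f"steady-state tracking error ≈ {x - r:.2e}")
```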

35 pages, 2010 KiB  
Article
Intelligent Transmission Control Scheme for 5G mmWave Networks Employing Hybrid Beamforming
by Hazem (Moh’d Said) Hatamleh, As’ad Mahmoud As’ad Alnaser, Roba Mahmoud Ali Aloglah, Tomader Jamil Bani Ata, Awad Mohamed Ramadan and Omar Radhi Aqeel Alzoubi
Future Internet 2025, 17(7), 277; https://doi.org/10.3390/fi17070277 - 24 Jun 2025
Viewed by 272
Abstract
Hybrid beamforming plays a critical role in advancing wireless communication technology, particularly for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) communication, and several hybrid beamforming architectures have been investigated for this setting. The deployment of massive grant-free transmission in the mmWave band is required due to the growing demand for spectrum resources in upcoming massive machine-type communication applications. The development of 5G mmWave networks promises ultra-high data rates, reduced latency, and improved connectivity. Yet, severe path loss and directional communication requirements pose substantial obstacles to transmission reliability and energy efficiency. To address these limitations, this research presents an intelligent transmission control scheme tailored to 5G mmWave networks. Transmission Control Protocol (TCP) performance over mmWave links is enhanced at the protocol level by the mmWave scalable (mmS)-TCP. To ensure that users are associated with the strongest average received power, we propose a novel method, the row compression two-stage learning-based accurate multi-path processing network with a received signal strength indicator-based association strategy (RCTS-AMP-RSSI-AS), for estimating both the direct and indirect channels. To adapt to changing user scenarios and constantly maintain effective communication, we utilize a multi-user scenario-based MATD3 (Mu-MATD3) method. To improve performance further, we introduce digital and analog beam training with long short-term memory (DAH-BT-LSTM). Finally, since optimizing network performance requires bottleneck-aware congestion reduction, low-latency congestion control schemes (LLCCS) are proposed. Together, these components improve the performance of 5G mmWave networks.
(This article belongs to the Special Issue Advances in Wireless and Mobile Networking—2nd Edition)
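The hybrid (analog phase-shifter plus low-dimensional digital) precoding idea underlying the abstract can be sketched as follows; array sizes, the channel model, and the steering strategy are assumptions for illustration only:

```python
import numpy as np

# Illustrative hybrid beamforming sketch: a unit-modulus analog precoder F_rf
# feeding a small digital precoder F_bb, compared against a full-digital bound.
rng = np.random.default_rng(1)
n_tx, n_rf = 64, 4                         # antennas, RF chains (assumed sizes)
h = (rng.standard_normal((1, n_tx)) + 1j * rng.standard_normal((1, n_tx))) / np.sqrt(2)

# Analog stage: phase shifters steered toward random candidate angles.
angles = rng.uniform(0, np.pi, n_rf)
F_rf = np.exp(1j * np.pi * np.outer(np.arange(n_tx), np.cos(angles))) / np.sqrt(n_tx)

# Digital stage: matched filter on the reduced (1 x n_rf) effective channel.
h_eff = h @ F_rf
F_bb = h_eff.conj().T / np.linalg.norm(h_eff)          # n_rf x 1 digital precoder

gain_hybrid = np.abs(h @ F_rf @ F_bb) ** 2
gain_full = np.abs(h @ (h.conj().T / np.linalg.norm(h))) ** 2   # full-digital bound
print(f"hybrid array gain: {gain_hybrid.item():.2f}  (full-digital: {gain_full.item():.2f})")
```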

22 pages, 4727 KiB  
Article
Intelligent Robust Control Design with Closed-Loop Voltage Sensing for UPS Inverters in IoT Devices
by En-Chih Chang, Yuan-Wei Tseng and Chun-An Cheng
Sensors 2025, 25(13), 3849; https://doi.org/10.3390/s25133849 - 20 Jun 2025
Viewed by 325
Abstract
High-performance UPS inverters protect IoT devices from power outages, thus safeguarding critical data. This paper suggests an intelligent, robust control technique with closed-loop voltage sensing for UPS (uninterruptible power supply) inverters in IoT (Internet of Things) devices. The suggested technique combines a modified gray fast variable structure sliding mode control (MGFVSSMC) with a neural network (NN). The MGFVSSMC drives the system states to converge rapidly to the equilibrium while eliminating chattering and steady-state errors. However, the MGFVSSMC may experience state prediction errors when the UPS inverter is subjected to highly nonlinear external loads or drastic changes in internal parameters, resulting in high harmonic distortion and an inferior dynamic response at the inverter output and compromising the protection of the IoT devices. An NN with a learning mechanism is therefore employed to compensate for the prediction error of the MGFVSSMC, achieving a high-performance UPS inverter. The suggested control technique operates with a single voltage sensor, yielding fast transient response and low inverter output-voltage distortion. Both simulation and digital signal processing (DSP) implementation results demonstrate the effectiveness of the suggested control technique under a variety of load conditions.
(This article belongs to the Special Issue Mobile Sensing and Computing in Internet of Things)

20 pages, 999 KiB  
Article
Efficient Real-Time Isotope Identification on SoC FPGA
by Katherine Guerrero-Morejón, José María Hinojo-Montero, Jorge Jiménez-Sánchez, Cristian Rocha-Jácome, Ramón González-Carvajal and Fernando Muñoz-Chavero
Sensors 2025, 25(12), 3758; https://doi.org/10.3390/s25123758 - 16 Jun 2025
Viewed by 270
Abstract
Efficient real-time isotope identification is a critical challenge in nuclear spectroscopy, with important applications such as radiation monitoring, nuclear waste management, and medical imaging. This work presents a novel approach for isotope classification using a System-on-Chip (SoC) FPGA, integrating hardware-accelerated principal component analysis (PCA) for feature extraction and a software-based random forest classifier. The system leverages the FPGA’s parallel processing capabilities to implement PCA, reducing the dimensionality of digitized nuclear signals and optimizing computational efficiency. A key feature of the design is its ability to perform real-time classification without storing ADC samples, directly processing nuclear pulse data as it is acquired. The extracted features are classified by a random forest model running on the embedded microprocessor. PCA quantization is applied to minimize power consumption and resource usage without compromising accuracy. The experimental validation was conducted using datasets from high-resolution pulse-shape digitization, including closely matched isotope pairs (¹²C/¹³C, ³⁶Ar/⁴⁰Ar, and ⁸⁰Kr/⁸⁴Kr). The results demonstrate that the proposed SoC FPGA system significantly outperforms conventional software-only implementations, reducing latency while maintaining classification accuracy above 98%. This study provides a scalable, precise, and energy-efficient solution for real-time isotope identification.
(This article belongs to the Section Internet of Things)
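A software-only sketch of the described pipeline — PCA feature extraction followed by a random forest classifier — is shown below with synthetic pulses; the paper runs PCA in FPGA fabric on real digitized pulse shapes, which are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "pulses" stand in for digitized detector signals; two classes with
# slightly different decay constants mimic closely matched isotope pairs.
rng = np.random.default_rng(42)
n_pulses, n_samples_per_pulse = 2000, 256
t = np.linspace(0, 1, n_samples_per_pulse)
labels = rng.integers(0, 2, n_pulses)                     # two isotope classes
decay = np.where(labels == 0, 8.0, 9.0)[:, None]          # slightly different shapes
pulses = np.exp(-decay * t) * (1 - np.exp(-40 * t)) + 0.02 * rng.standard_normal((n_pulses, n_samples_per_pulse))

X_train, X_test, y_train, y_test = train_test_split(pulses, labels, random_state=0)
pca = PCA(n_components=8).fit(X_train)                    # feature extraction
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(pca.transform(X_train), y_train)
print("test accuracy:", clf.score(pca.transform(X_test), y_test))
```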

19 pages, 1706 KiB  
Article
Demonstration of 50 Gbps Long-Haul D-Band Radio-over-Fiber System with 2D-Convolutional Neural Network Equalizer for Joint Phase Noise and Nonlinearity Mitigation
by Yachen Jiang, Sicong Xu, Qihang Wang, Jie Zhang, Jingtao Ge, Jingwen Lin, Yuan Ma, Siqi Wang, Zhihang Ou and Wen Zhou
Sensors 2025, 25(12), 3661; https://doi.org/10.3390/s25123661 - 11 Jun 2025
Viewed by 384
Abstract
High demand for 6G wireless has made photonics-aided D-band (110–170 GHz) communication a research priority. Photonics-aided technology integrates optical and wireless communications to boost spectral efficiency and transmission distance. This study presents a Radio-over-Fiber (RoF) communication system utilizing photonics-aided technology for 4600 m long-distance D-band transmission. We successfully demonstrate the transmission of a 50 Gbps (25 Gbaud) QPSK signal on a 128.75 GHz carrier frequency. Notwithstanding these encouraging outcomes, RoF systems encounter considerable obstacles, including pronounced nonlinear distortions and phase noise related to laser linewidth. Numerous factors can induce nonlinear impairments, including high-power amplifiers (PAs) in wireless channels, the operational mechanisms of optoelectronic devices (such as electrical amplifiers, modulators, and photodiodes), and elevated optical power levels during fiber transmission, while phase noise (PN) is generated by the laser linewidth. Although classical Volterra series and deep neural network (DNN) methods offer notable advantages in alleviating nonlinear distortion, they display considerable performance limitations in compensating for phase noise. To address these problems, we propose a novel post-processing approach utilizing a two-dimensional convolutional neural network (2D-CNN). This methodology extracts intricate features from data preprocessed with traditional digital signal processing (DSP) techniques, enabling concurrent compensation for phase noise and nonlinear distortions. The 4600 m long-distance D-band transmission experiment demonstrated that the proposed 2D-CNN post-processing method achieved a bit error rate (BER) of 5.3 × 10⁻³ at 8 dBm optical power, satisfying the soft-decision forward error correction (SD-FEC) criterion of 1.56 × 10⁻² with a 15% overhead. The 2D-CNN outperformed Volterra series and deep neural network approaches in long-haul D-band RoF systems by compensating for phase noise and nonlinear distortions via spatiotemporal feature integration, hierarchical feature extraction, and nonlinear modelling.
(This article belongs to the Special Issue Recent Advances in Optical Wireless Communications)
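A minimal 2D-CNN post-equalizer of the kind the abstract describes might look like the PyTorch sketch below; the layer sizes, I/Q-as-channels arrangement, and block shape are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Received symbols are arranged as a 2-channel (I/Q) image-like tensor and
# mapped back to corrected I/Q values. Training data and the DSP front end
# from the paper are not reproduced here.
class CnnEqualizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # I/Q in, 16 feature maps
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 2, kernel_size=3, padding=1),  # back to corrected I/Q
        )

    def forward(self, x):            # x: (batch, 2, H, W) blocks of symbols
        return self.net(x)

# Example forward pass on a dummy 32x32 block of symbols.
eq = CnnEqualizer()
dummy = torch.randn(8, 2, 32, 32)
print(eq(dummy).shape)               # torch.Size([8, 2, 32, 32])
```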

12 pages, 1326 KiB  
Article
A Wideband Digital Pre-Distortion Algorithm Based on Edge Signal Correction
by Yan Lu, Hongwei Zhang and Zheng Gong
Electronics 2025, 14(11), 2170; https://doi.org/10.3390/electronics14112170 - 27 May 2025
Viewed by 290
Abstract
With the continuous expansion of communication bandwidth, accurately modeling the non-linear characteristics of power amplifiers has become increasingly challenging, directly affecting the performance of digital pre-distortion (DPD) technology. The high peak-to-average power ratio and complex modulation schemes of wideband signals further exacerbate the difficulty of DPD implementation, necessitating more efficient algorithms. To address these challenges, this paper proposes a wideband DPD algorithm based on edge signal correction. By acquiring signals near the center frequency and comparing them with equally band-limited feedback signals, the algorithm effectively reduces the required processing bandwidth. The incorporation of cross-terms for model calibration enhances the model fitting accuracy, leading to a significant improvement in pre-distortion performance. Simulation results demonstrate that, compared with traditional DPD algorithms, the proposed method reduces the error vector magnitude (EVM) from 1.112% to 0.512%. Experimental validation shows an average improvement of 11.75 dB in adjacent channel power at a 2 MHz frequency offset compared to a conventional memory polynomial DPD. These improvements provide a novel solution for power amplifier linearization in wideband communication systems.
(This article belongs to the Section Circuit and Signal Processing)
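For reference, the conventional memory-polynomial baseline that the paper compares against can be identified by least squares as sketched below (indirect learning on toy data); the proposed edge-signal-correction algorithm itself is not reproduced:

```python
import numpy as np

# Standard memory-polynomial DPD baseline, identified by least squares.
# The PA model and signal below are toy assumptions for illustration.
def mp_basis(x, K=5, M=3):
    """Memory-polynomial regressors |x[n-m]|^(k-1) * x[n-m] for odd orders k."""
    n = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[: n - m]])
        for k in range(1, K + 1, 2):                 # odd nonlinearity orders
            cols.append(np.abs(xm) ** (k - 1) * xm)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
y = x + 0.05 * np.abs(x) ** 2 * x                    # toy PA with mild 3rd-order AM/AM

# Indirect learning: fit a postdistorter from PA output back to PA input,
# then reuse its coefficients as the predistorter.
Phi = mp_basis(y / np.mean(np.abs(y)))
coeffs, *_ = np.linalg.lstsq(Phi, x, rcond=None)
print("identified coefficients:", np.round(coeffs[:3], 3))
```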

19 pages, 6845 KiB  
Article
A Combined Detection Method for AC Fault Arcs Based on RLMD Decomposition and Pulse Density
by Li Yang and Dujuan Hu
Electronics 2025, 14(11), 2144; https://doi.org/10.3390/electronics14112144 - 24 May 2025
Viewed by 274
Abstract
Despite the proliferation of methods for detecting AC arc faults, current solutions often fall short in balancing cost efficiency, real-time performance, and implementation flexibility. This paper proposes a novel joint detection method based on robust local mean decomposition (RLMD) and pulse density analysis. The method leverages a dual-path analog signal processing framework: the low-frequency component of the current transformer (CT) signal is digitized and decomposed using RLMD to extract statistical feature indicators, while the high-frequency component is fed into a high-speed comparator to generate TTL pulses, from which the pulse density is calculated within a sliding time window. By combining the characteristic quantities derived from RLMD with the temporal pulse density, the proposed scheme achieves accurate and efficient detection of arc faults. Experimental results validate the approach, demonstrating high detection probability, adjustable sensitivity, low power consumption, and cost-effectiveness. These attributes underscore the method’s practical relevance and engineering significance in intelligent fault monitoring systems.
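The pulse-density feature can be sketched in a few lines: count comparator rising edges in a sliding window and flag windows above a threshold. The sample rate, window length, threshold, and injected pulse burst below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Dummy 1 s TTL stream from the high-speed comparator, with a dense pulse
# burst injected around t = 0.5 s to imitate arcing activity.
fs = 1_000_000                        # 1 MHz sampling of the comparator output
window, threshold = 0.01, 50          # 10 ms window, 50 pulses per window

rng = np.random.default_rng(3)
ttl = (rng.random(fs) < 0.0002).astype(int)    # sparse background pulses
ttl[500_000:505_000:50] = 1                    # injected burst (100 pulses in 5 ms)
edges = np.flatnonzero(np.diff(ttl) == 1)      # rising-edge sample indices

win_samples = int(window * fs)
counts = np.histogram(edges, bins=np.arange(0, fs + win_samples, win_samples))[0]
density = counts / window                      # pulses per second per window
suspect = np.flatnonzero(counts > threshold)
print("windows above threshold:", suspect, "peak density:", density.max(), "pulses/s")
```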

23 pages, 2620 KiB  
Article
A Novel Overload Control Algorithm for Distributed Control Systems to Enhance Reliability in Industrial Automation
by Taikyeong Jeong
Appl. Sci. 2025, 15(10), 5766; https://doi.org/10.3390/app15105766 - 21 May 2025
Viewed by 453
Abstract
This paper presents a novel real-time overload detection algorithm for distributed control systems (DCSs), particularly applied to thermoelectric power plant environments. The proposed method is integrated with a modular multi-functional processor (MFP) architecture, designed to enhance system reliability, optimize resource utilization, and improve fault resilience under dynamic operational conditions. As legacy DCS platforms, such as those installed at the Tae-An Thermoelectric Power Plant, face limitations in applying advanced logic mechanisms, a simulation-based test bench was developed to validate the algorithm in anticipation of future DCS upgrades. The algorithm operates by partitioning function code executions into segment groups, enabling fine-grained, real-time CPU and memory utilization monitoring. Simulation studies, including a modeled denitrification process, demonstrated the system’s effectiveness in maintaining load balance, reducing power consumption to 17 mW under a 2 Gbps data throughput, and mitigating overload levels by approximately 31.7%, thereby outperforming conventional control mechanisms. The segmentation strategy, combined with summation logic, further supports scalable deployment across both legacy and next-generation DCS infrastructures. By enabling proactive overload mitigation and intelligent energy utilization, the proposed solution contributes to the advancement of self-regulating power control systems. Its applicability extends to energy management, production scheduling, and digital signal processing—domains where real-time optimization and operational reliability are essential.
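As a rough illustration of segment-group monitoring, the sketch below sums per-segment CPU and memory utilization and raises an overload flag against fixed limits; the segment layout, thresholds, and rebalancing hint are assumptions, not the paper's algorithm:

```python
from dataclasses import dataclass

# Function-code executions are grouped into segments; per-segment utilization
# is summed and compared against limits (values here are illustrative only).
@dataclass
class Segment:
    name: str
    cpu_pct: float      # measured CPU utilization of this segment group
    mem_pct: float      # measured memory utilization of this segment group

def detect_overload(segments, cpu_limit=85.0, mem_limit=80.0):
    total_cpu = sum(s.cpu_pct for s in segments)
    total_mem = sum(s.mem_pct for s in segments)
    overloaded = total_cpu > cpu_limit or total_mem > mem_limit
    # Report the heaviest segment so a supervisor could rebalance its load.
    heaviest = max(segments, key=lambda s: s.cpu_pct)
    return overloaded, heaviest.name, total_cpu, total_mem

segments = [Segment("denitrification", 42.0, 30.0),
            Segment("boiler_control", 28.0, 25.0),
            Segment("logging", 21.0, 18.0)]
print(detect_overload(segments))   # (True, 'denitrification', 91.0, 73.0)
```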

18 pages, 3160 KiB  
Article
Ultrasonic Beamforming-Based Visual Localisation of Minor and Multiple Gas Leaks Using a Microelectromechanical System (MEMS) Microphone Array
by Tao Wang, Jiawen Ji, Jianglong Lan and Bo Wang
Sensors 2025, 25(10), 3190; https://doi.org/10.3390/s25103190 - 19 May 2025
Viewed by 638
Abstract
The development of a universal method for real-time gas leak localisation imaging is crucial for preventing substantial financial losses and hazardous incidents. To achieve this objective, this study integrates array signal processing and electronic techniques to construct an ultrasonic sensor array for gas leak detection and localisation. A digital microelectromechanical system microphone array is used to capture spatial ultrasonic information. By processing the array signals using beamforming algorithms, an acoustic spatial power spectrum is obtained, which facilitates the estimation of the locations of potential gas leak sources. In the pre-processing of beamforming, the Hilbert transform is employed instead of the fast Fourier transform to save computational resources. Subsequently, the spatial power spectrum is fused with visible-light images to generate acoustic localisation images, which enables the visualisation of gas leak sources. Experimental validation demonstrates that the system detects minor and multiple gas leaks in real time, meeting the sensitivity and accuracy requirements of embedded industrial applications. These findings contribute to the development of practical, cost-effective, and scalable gas leak detection systems for industrial and environmental safety applications.
(This article belongs to the Section Physical Sensors)
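The core localisation step — delay-and-sum beamforming on analytic signals obtained with the Hilbert transform — can be sketched as follows for a small linear array; the geometry, source angle, and tone frequency are assumptions, not the authors' setup:

```python
import numpy as np
from scipy.signal import hilbert

# Simulate an ultrasonic tone arriving at an 8-element linear array from 25°,
# form analytic signals with the Hilbert transform, then scan steering angles.
fs, f0, c = 192_000, 40_000, 343.0          # sample rate, tone (Hz), speed of sound (m/s)
n_mics, d = 8, 0.004                        # linear array, 4 mm pitch
theta_src = np.deg2rad(25)                  # true source direction

t = np.arange(0, 0.005, 1 / fs)
delays = np.arange(n_mics) * d * np.sin(theta_src) / c
x = np.stack([np.sin(2 * np.pi * f0 * (t - dly)) for dly in delays])
x += 0.1 * np.random.default_rng(0).standard_normal(x.shape)

xa = hilbert(x, axis=1)                     # analytic signal per microphone
scan = np.deg2rad(np.arange(-60, 61))
power = []
for th in scan:                             # steer by phase-shifting each channel
    steer = np.exp(1j * 2 * np.pi * f0 * np.arange(n_mics) * d * np.sin(th) / c)
    power.append(np.mean(np.abs((steer[:, None] * xa).sum(axis=0)) ** 2))
print("estimated angle:", np.rad2deg(scan[int(np.argmax(power))]), "deg")
```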

13 pages, 4280 KiB  
Article
Performance Characteristics of the Battery-Operated Silicon PIN Diode Detector with an Integrated Preamplifier and Data Acquisition Module for Fusion Particle Detection
by Allan Xi Chen, Benjamin F. Sigal, John Martinis, Alfred YiuFai Wong, Alexander Gunn, Matthew Salazar, Nawar Abdalla and Kai-Jian Xiao
J. Nucl. Eng. 2025, 6(2), 15; https://doi.org/10.3390/jne6020015 - 15 May 2025
Viewed by 617
Abstract
We present the performance and application of a commercial off-the-shelf Si PIN diode (Hamamatsu S14605) as a charged particle detector in a compact ion beam system (IBS) capable of generating D–D and p–B fusion charged particles. This detector is inexpensive, widely available, and operates in photoconductive mode under a reverse bias voltage of 12 V, supplied by an A23 battery. A charge-sensitive preamplifier (CSP) is mounted on the backside of the detector’s four-layer PCB and powered by two ±3 V lithium batteries (A123). Both the detector and CSP are housed together on the vacuum side of the IBS, facing the fusion target. The system employs a CF-2.75-flanged DB-9 connector feedthrough to supply the signal, bias voltage, and rail voltages. To mitigate the detector’s high sensitivity to optical light, a thin aluminum foil assembly is used to block optical emissions from the ion beam and target. Charged particles generate step responses at the preamplifier output, with pulse rise times on the order of 0.2 to 0.3 µs. These signals are recorded using a custom-built data acquisition unit, which features an optical fiber data link to ensure the electrical isolation of the detector electronics. Subsequent digital signal processing shapes the pulses with a CR-RCⁿ filter to produce Gaussian-shaped signals, enabling accurate extraction of energy information. Performance results indicate that the detector’s baseline RMS ripple noise can be as low as 0.24 mV. Under actual laboratory conditions, the estimated signal-to-noise ratios (S/N) for charged particles from D–D fusion—protons, tritons, and helions—are approximately 225, 75, and 41, respectively.
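The CR-RCⁿ shaping step can be illustrated with cascaded single-pole digital filters acting on an idealized preamplifier step; the shaping time constant, filter order, and sample rate below are assumptions, not the authors' settings:

```python
import numpy as np
from scipy.signal import lfilter

# A differentiator (CR) followed by n integrator (RC) stages turns the
# charge-sensitive preamplifier step into a quasi-Gaussian pulse whose peak
# amplitude tracks the deposited energy.
fs, tau, n_stages = 10e6, 2e-6, 4                # 10 MS/s, 2 µs shaping time (assumed)
a = np.exp(-1 / (fs * tau))                      # single-pole coefficient

t = np.arange(0, 50e-6, 1 / fs)
step = np.where(t > 5e-6, 1.0, 0.0)              # idealized CSP step response
step += 0.0005 * np.random.default_rng(0).standard_normal(step.size)

shaped = lfilter([1, -1], [1, -a], step)         # CR (high-pass) stage
for _ in range(n_stages):                        # RC (low-pass) stages
    shaped = lfilter([1 - a], [1, -a], shaped)

print(f"peak amplitude {shaped.max():.3f} at t = {t[shaped.argmax()]*1e6:.1f} µs")
```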

38 pages, 4395 KiB  
Article
Exploring Bio-Impedance Sensing for Intelligent Wearable Devices
by Nafise Arabsalmani, Arman Ghouchani, Shahin Jafarabadi Ashtiani and Milad Zamani
Bioengineering 2025, 12(5), 521; https://doi.org/10.3390/bioengineering12050521 - 14 May 2025
Viewed by 1001
Abstract
The rapid growth of wearable technology has opened new possibilities for smart health-monitoring systems. Among various sensing methods, bio-impedance sensing has stood out as a powerful, non-invasive, and energy-efficient way to track physiological changes and gather important health information. This review looks at the basic principles behind bio-impedance sensing, how it is being built into wearable devices, and its use in healthcare and everyday wellness tracking. We examine recent progress in sensor design, signal processing, and machine learning, and show how these developments are making real-time health monitoring more effective. While bio-impedance systems offer many advantages, they also face challenges, particularly when it comes to making devices smaller, reducing power use, and improving the accuracy of collected data. One key issue is that analyzing bio-impedance signals often relies on complex digital signal processing, which can be both computationally heavy and energy-hungry. To address this, researchers are exploring the use of neuromorphic processors—hardware inspired by the way the human brain works. These processors use spiking neural networks (SNNs) and event-driven designs to process signals more efficiently, allowing bio-impedance sensors to pick up subtle physiological changes while using far less power. This not only extends battery life but also brings us closer to practical, long-lasting health-monitoring solutions. In this paper, we aim to connect recent engineering advances with real-world applications, highlighting how bio-impedance sensing could shape the next generation of intelligent wearable devices.
(This article belongs to the Section Biosignal Processing)

18 pages, 7054 KiB  
Article
A 13.44-Bit Low-Power SAR ADC for Brain–Computer Interface Applications
by Hongyuan Yang, Jiahao Cheong and Cheng Liu
Appl. Sci. 2025, 15(10), 5494; https://doi.org/10.3390/app15105494 - 14 May 2025
Viewed by 461
Abstract
This paper presents a successive approximation register analog-to-digital converter (SAR ADC) specifically optimized for brain–computer interface (BCI) applications. Designed and post-layout-simulated using 180 nm CMOS technology, the proposed SAR ADC achieves a 13.44-bit effective number of bits (ENOB) and 27.9 μW of power consumption at a supply voltage of 1.8 V, enabled by a piecewise monotonic switching scheme and dynamic logic architecture. The ADC supports a high input range of ±500 mV, making it suitable for neural signal acquisition. Through an optimized capacitive digital-to-analog converter (CDAC) array and a high-speed dynamic comparator, the ADC demonstrates a signal-to-noise-and-distortion ratio (SINAD) of 81.94 dB and a spurious-free dynamic range (SFDR) of 91.69 dBc at a sampling rate of 320 kS/s. Experimental results validate the design’s superior performance in terms of low-power operation, high resolution, and moderate sampling rate, positioning it as a competitive solution for high-density integration and precision neural signal processing in next-generation BCI systems.
(This article belongs to the Special Issue Low-Power Integrated Circuit Design and Application)
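As a quick reference, the textbook SINAD-to-ENOB relation is sketched below; the paper's quoted 13.44-bit ENOB comes from its own post-layout characterization and need not match this simple plug-in exactly:

```python
# Standard relation between SINAD and effective number of bits for an ADC.
def enob(sinad_db: float) -> float:
    return (sinad_db - 1.76) / 6.02

print(f"ENOB at SINAD = 81.94 dB: {enob(81.94):.2f} bits")   # ≈ 13.3 bits
```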
